The AMD Phenom family is a 64-bit microprocessor family from Advanced Micro Devices (AMD), based on the K10 microarchitecture. It includes the AMD Phenom II X6 hex-core series, Phenom X4 and Phenom II X4 quad-core series, Phenom X3 and Phenom II X3 tri-core series, and Phenom II X2 dual-core series. Other related processors based on the K10 microarchitecture include the Athlon X2 Kuma processors, Athlon II processors, and various Opteron, Sempron, and Turion series. The first Phenoms were released in November 2007. An improved second generation was released in December 2008, named Phenom II. Processors with an "e" following the model number (e.g., 910e) are low-power models, usually 45 W for Athlons and 65 W for Phenoms. Processors with a "u" following the model number (e.g., 270u) are ultra-low-power models, usually 20 W for single-core chips or 25 W for dual-core chips.
AMD released a limited edition Deneb-based processor to extreme overclockers and partners. Fewer than 100 were made.
The "42" officially represents four cores running at 2 GHz, but is also a reference tothe answer to life, the universe, and everythingfromThe Hitchhiker's Guide to the Galaxy.[19]
https://en.wikipedia.org/wiki/List_of_AMD_Phenom_processors
AMD FX is a series of AMD microprocessors for personal computers. The following is a list of AMD FX brand microprocessors. Some APUs also carry an FX model name, but the term "FX" normally refers only to CPUs which are not just APUs with the iGPU disabled.
These processors were the first AMD CPUs to use the "FX" designation, which identified the chip as a higher-performance part. The frequency multiplier was unlocked on these chips.
FX-51 (2.2 GHz) and FX-53 (2.4 GHz)
FX-53 (2.4 GHz) and FX-55 (2.6 GHz)
FX-55 (2.6 GHz) and FX-57 (2.8 GHz)
FX-60 (2.6 GHz)[1]
FX-62 (2.8 GHz)
FX-70 (2.6 GHz), FX-72 (2.8 GHz), FX-74 (3.0 GHz)
https://en.wikipedia.org/wiki/List_of_AMD_FX_processors
This is a list of microprocessors designed by AMD containing a 3D integrated graphics processing unit (iGPU), including those under the AMD APU (Accelerated Processing Unit) product series.
The following table shows features of AMD's processors with 3D graphics, including APUs.
The following table shows the graphics and compute APIs supported across ATI/AMD GPU microarchitectures. Note that a branding series might include older-generation chips.
Common features of Zen-based Raven Ridge desktop APUs:
Common features of Zen+-based desktop APUs:
Common features of Ryzen 4000 desktop APUs:
Common features of Ryzen 5000 desktop APUs:
Common features of Ryzen 7000 desktop CPUs:
Common features of Ryzen 8000G desktop APUs:
[Specification tables omitted: cores/threads, clock speeds (GHz/MHz), cache (MB), TDP (W), memory support, and release dates.]
Common features of Ryzen 3000 notebook APUs:
Common features of Ryzen 5000 notebook APUs:
Common features of Ryzen 6000 notebook APUs:
[Specification tables omitted: cores/threads, clock speeds (GHz/MHz), cache (MB), TDP (W), operating temperature (°C), and memory support.]
As of May 1, 2013, AMD opened the doors of its "semi-custom" business unit.[213] Since these chips are custom-made for specific customer needs, they vary widely both from consumer-grade APUs and from one another. Notable examples of semi-custom chips from this unit include those in the PlayStation 4 and Xbox One.[214] So far, the integrated GPUs in these semi-custom APUs far exceed the size of those in the consumer-grade APUs.
https://en.wikipedia.org/wiki/List_of_AMD_accelerated_processing_units
The Ryzen family is an x86-64 microprocessor family from AMD, based on the Zen microarchitecture. The Ryzen lineup includes Ryzen 3, Ryzen 5, Ryzen 7, Ryzen 9, and Ryzen Threadripper, with up to 96 cores. All consumer desktop Ryzens (except PRO models) and all mobile processors with the HX suffix have an unlocked multiplier. In addition, all support simultaneous multithreading (SMT) except the earlier Zen/Zen+ based desktop and mobile Ryzen 3 parts and some models of Zen 2 based mobile Ryzen.
Common features of Ryzen 1000 desktop CPUs:
Common features of Ryzen 1000 HEDT CPUs:
Common features of Ryzen 2000 desktop APUs:
Common features of Ryzen 2000 desktop CPUs:
Common features of Ryzen 2000 HEDT CPUs:
Common features of Ryzen 3000 desktop APUs:
Common features of Ryzen 3000 desktop CPUs:
Common features of Ryzen 3000 HEDT/workstation CPUs:
Based on the Ryzen 4000G series APUs with the integrated GPU disabled.
Common features of Ryzen 4000 desktop CPUs:
The AMD 4700S and 4800S desktop processors are part of a "desktop kit" that comes bundled with a motherboard and GDDR6 RAM. The CPU is soldered, and provides 4 PCIe 2.0 lanes. These are reportedly cut-down variants of the APUs found in the PlayStation 5 and Xbox Series X and S, repurposed from defective chip stock.[26][27][28]
Common features of Ryzen 4000 desktop APUs:
Common features of Ryzen 5000 desktop CPUs:
Cezanne-based CPUs that have the integrated GPU disabled.
Common features of Ryzen 5000 (Cezanne) desktop CPUs:
Common features of Ryzen 5000 desktop APUs:
Common features of Ryzen 5000 workstation CPUs:
Common features of Ryzen 7000 desktop CPUs:
Common features of Ryzen 7000 HEDT/workstation CPUs:
Common features of Ryzen 8000 desktop CPUs:
Common features of Ryzen 8000G desktop APUs:
Common features of Ryzen 9000 desktop CPUs:
Common features of Ryzen 2000 notebook APUs:
Common features of Ryzen 3000 notebook APUs:
Common features of Ryzen 4000 notebook APUs:
Common features of Ryzen 5000 notebook APUs:
Cezanne (2021 models), Barceló (2022 models).
Common features of Ryzen 5000 notebook APUs:
Common features of Ryzen 6000 notebook APUs:
Common features of Ryzen 7020 notebook APUs:
Common features of Ryzen 7030 notebook APUs:
Common features of Ryzen 7035 notebook APUs:
Common features of Ryzen 7040 notebook APUs:
Common features of Ryzen 7045 notebook CPUs:
Key features of Ryzen 8040 notebook APUs:
Common features of Ryzen AI 300 notebook APUs:
Common features of Ryzen 9000 Fire Range series:
Common features of Ryzen Z1 handheld APUs:
Common features of Ryzen Embedded 7000 series CPUs:
https://en.wikipedia.org/wiki/List_of_AMD_Ryzen_processors
The following is a list of AMD CPU microarchitectures.
Historically, AMD's CPU families were given a "K-number" (which originally stood for Kryptonite,[1] an allusion to the Superman comic book character's fatal weakness) starting with their first internal x86 CPU design, the K5, to represent generational changes. AMD has not used K-nomenclature codenames in official AMD documents and press releases since the beginning of 2005, when K8 described the Athlon 64 processor family. AMD now refers to the codename K8 processors as the Family 0Fh processors. 10h and 0Fh refer to the main result of the CPUID x86 processor instruction. In hexadecimal numbering, 0F(h) (where the h represents hexadecimal numbering) equals the decimal number 15, and 10(h) equals the decimal number 16. (The "K10h" form that sometimes pops up is an improper hybrid of the "K" code and the Family XXh identifier number.)
The Family hexadecimal identifier number can be determined for a particular processor using the freeware system-profiling application CPU-Z, which shows the Family number in the Ext. Family field of the application, as can be seen on various screenshots on the CPU-Z Validator World Records website.
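These family identifiers can also be read programmatically. A minimal C sketch (GCC/Clang on x86), assuming the standard CPUID leaf-1 encoding in which the extended-family field is added to the base family only when the base family reads 0Fh:

```c
#include <stdio.h>
#include <cpuid.h>  /* GCC/Clang wrapper for the CPUID instruction */

int main(void) {
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;                               /* CPUID leaf 1 unsupported */
    unsigned base_family = (eax >> 8)  & 0xF;   /* EAX bits 11:8  */
    unsigned ext_family  = (eax >> 20) & 0xFF;  /* EAX bits 27:20 */
    /* The extended field extends the base field only when the latter
       is saturated at 0Fh. */
    unsigned family = (base_family == 0xF) ? base_family + ext_family
                                           : base_family;
    printf("Family %02Xh (decimal %u)\n", family, family);
    /* A K8 prints "Family 0Fh (decimal 15)"; a K10, "Family 10h (decimal 16)". */
    return 0;
}
```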
Below is a list of microarchitectures, many of which have associated codenames:[2]
https://en.wikipedia.org/wiki/List_of_AMD_CPU_microarchitectures
Apple M1 is a series of ARM-based system-on-a-chip (SoC) designs by Apple Inc., launched 2020 to 2022. It is part of the Apple silicon series, serving as central processing unit (CPU) and graphics processing unit (GPU) for its Mac desktops and notebooks, and the iPad Pro and iPad Air tablets.[4] The M1 chip initiated Apple's third change to the instruction set architecture used by Macintosh computers, switching from Intel to Apple silicon fourteen years after they were switched from PowerPC to Intel, and twenty-six years after the transition from the original Motorola 68000 series to PowerPC. At the time of its introduction in 2020, Apple said that the M1 had "the world's fastest CPU core in low power silicon" and the world's best CPU performance per watt.[4][5] Its successor, Apple M2, was announced on June 6, 2022, at the Worldwide Developers Conference (WWDC).
The original M1 chip was introduced in November 2020, and was followed by the professional-focused M1 Pro and M1 Max chips in October 2021. The M1 Max is a higher-powered version of the M1 Pro, with more GPU cores and memory bandwidth, a larger die size, and a large die-to-die interconnect (unused until the M1 Ultra). Apple introduced the M1 Ultra in 2022, a desktop workstation chip containing two interconnected M1 Max units. These chips differ largely in size and the number of functional units: for example, while the original M1 has about 16 billion transistors, the M1 Ultra has 114 billion.
Apple's macOS and iPadOS operating systems both run on the M1. Initial support for the M1 SoC in the Linux kernel was released in version 5.13 on June 27, 2021.[6]
The initial versions of the M1 chips contain an architectural defect that permits sandboxed applications to exchange data, violating the security model; the issue has been described as "mostly harmless".[7]
The M1 has four high-performance "Firestorm" and four energy-efficient "Icestorm" cores, first seen on the A14 Bionic. It has a hybrid configuration similar to ARM big.LITTLE and Intel's Lakefield processors.[8] This combination allows power-use optimizations not possible with previous Apple–Intel architecture devices. Apple claims the energy-efficient cores use one-tenth the power of the high-performance ones.[9] The high-performance cores have an unusually large[10] 192 KB of L1 instruction cache and 128 KB of L1 data cache and share a 12 MB L2 cache; the energy-efficient cores have a 128 KB L1 instruction cache, 64 KB L1 data cache, and a shared 4 MB L2 cache. The SoC also has an 8 MB system level cache shared by the GPU.
The M1 Pro and M1 Max use the same ARM big.LITTLE design as the M1, with eight high-performance "Firestorm" cores (six in the lower-binned variants of the M1 Pro) and two energy-efficient "Icestorm" cores, providing a total of ten cores (eight in the lower-binned variants of the M1 Pro).[11] The high-performance cores are clocked at 3228 MHz, and the high-efficiency cores at 2064 MHz. The eight high-performance cores are split into two clusters, each sharing 12 MB of L2 cache. The two high-efficiency cores share 4 MB of L2 cache. The M1 Pro and M1 Max have 24 MB and 48 MB respectively of system level cache (SLC).[12]
The M1 Ultra consists of two M1 Max units connected with the UltraFusion interconnect, for a total of 20 CPU cores and 96 MB of system level cache (SLC).
The M1 integrates an Apple-designed[13] eight-core (seven in some base models) graphics processing unit (GPU). Each GPU core is split into 16 execution units (EUs), each of which contains 8 arithmetic logic units (ALUs). In total, the M1 GPU contains up to 128 EUs and 1024 ALUs,[14] which Apple says can execute up to 24,576 threads simultaneously and which have a maximum floating-point (FP32) performance of 2.6 TFLOPS.[8][15]
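As a consistency check on those figures (a sketch assuming one fused multiply-add, i.e. two FLOPs, per ALU per cycle, and the roughly 1.278 GHz GPU clock reported by third-party measurements rather than by Apple):

```latex
8\ \text{cores} \times 16\ \text{EUs} \times 8\ \text{ALUs} = 1024\ \text{ALUs}
1024\ \text{ALUs} \times 2\,\tfrac{\text{FLOP}}{\text{cycle}} \times 1.278\ \text{GHz} \approx 2.6\ \text{TFLOPS}
```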
The M1 Pro integrates a 16-core (14 in some base models) graphics processing unit (GPU), while the M1 Max integrates a 32-core (24 in some base models) GPU. In total, the M1 Max GPU contains up to 512 execution units or 4096 ALUs, which have a maximum floating-point (FP32) performance of 10.4 TFLOPS.
The M1 Ultra features a 48- or 64-core GPU with up to 8192 ALUs and 21 TFLOPS of FP32 performance.
The M1 uses 128-bit LPDDR4X SDRAM[16] in a unified memory configuration shared by all the components of the processor, also known as memory on package (MOP). The SoC and DRAM chips are mounted together in a system-in-a-package design. 8 GB and 16 GB configurations are available.
The M1 Pro has 256-bit LPDDR5 SDRAM, and the M1 Max has 512-bit LPDDR5 SDRAM. While the M1 SoC has 70 GB/s of memory bandwidth, the M1 Pro has 200 GB/s and the M1 Max 400 GB/s.[8] The M1 Pro comes in memory configurations of 16 GB and 32 GB, and the M1 Max in configurations of 32 GB and 64 GB.[17]
The M1 Ultra doubles the specifications of the M1 Max, for a 1024-bit memory bus with 800 GB/s of bandwidth in a 64 GB or 128 GB configuration.
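These bandwidth figures follow directly from bus width times per-pin transfer rate; a check assuming the commonly reported LPDDR4X-4266 and LPDDR5-6400 speed grades:

```latex
\text{M1:}\quad    \tfrac{128\ \text{bit}}{8} \times 4.266\ \text{GT/s} \approx 68\ \text{GB/s}\ (\text{quoted as }70)
\text{M1 Max:}\quad  \tfrac{512\ \text{bit}}{8} \times 6.4\ \text{GT/s} \approx 410\ \text{GB/s}\ (\text{quoted as }400)
\text{M1 Ultra:}\quad \tfrac{1024\ \text{bit}}{8} \times 6.4\ \text{GT/s} \approx 819\ \text{GB/s}\ (\text{quoted as }800)
```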
The M1 is the successor to and integrates all functionality of the Apple T2 chip that was present in Intel-based Macs. It keeps bridgeOS and sepOS active even if the main computer is in a halted low power mode to handle and store encryption keys, including keys for Touch ID, FileVault, macOS Keychain, and UEFI firmware passwords. It also stores the machine's unique ID (UID) and group ID (GID).
The M1 contains dedicated neural network hardware in a 16-core Neural Engine, capable of executing 11 trillion operations per second.[8] Other components include an image signal processor, an NVM Express storage controller, a USB4 controller that includes Thunderbolt 3 support, and a Secure Enclave. The M1 Pro, Max, and Ultra support Thunderbolt 4.
The M1 has video codec encoding support for HEVC and H.264, and decoding support for HEVC, H.264, and ProRes.[18] The M1 Pro, M1 Max, and M1 Ultra have a media engine with hardware-accelerated H.264, HEVC, ProRes, and ProRes RAW. This media engine includes a video decode engine (the M1 Ultra has two), a video encode engine (the M1 Max has two and the M1 Ultra has four), and a ProRes encode and decode engine (again, the M1 Max has two and the M1 Ultra has four).[19][20]
The M1 Max supports High Power Mode on the 16-inch MacBook Pro for intensive tasks.[21] The M1 Pro supports two 6K displays at 60 Hz over Thunderbolt, while the M1 Max supports a third 6K display over Thunderbolt and a 4K monitor over HDMI 2.0.[17] All parameters of the M1 Max are doubled in the M1 Ultra, which is essentially two M1 Max processors operating in parallel; they sit in a single package (larger than Socket AM4 AMD Ryzen processors)[22] and appear as one processor in macOS.
The M1 recorded competitive performance against contemporary Intel and AMD processors in popular benchmarks (such as Geekbench and Cinebench R23).[23]
The 2020 M1-equipped Mac Mini draws 7 watts when idle and 39 watts at maximum load,[24] compared to 20 watts at idle and 122 watts at maximum load for the 2018 6-core Core i7 Mac Mini.[25] The energy efficiency of the M1 increases the battery life of M1-based MacBooks by 50% compared to previous Intel-based MacBooks.[26]
At release, the MacBook Air (M1, 2020) and MacBook Pro (M1, 2020) were praised by critics for their CPU performance and battery life, particularly compared to previous MacBooks.[27][28]
After its release, some users who charged M1 devices through USB-C hubs reported bricking their devices.[34] The devices reported to cause this issue were third-party USB-C hubs and non-Thunderbolt docks (excluding Apple's own dongle).[34] Apple handled this issue by replacing the logic board and by telling its customers not to charge through those hubs.[34] macOS Big Sur 11.2.2 includes a fix to prevent 2019 or later MacBook Pro models and 2020 or later MacBook Air models from being damaged by certain third-party USB-C hubs and docks.[35][36]
A flaw in M1 processors, given the name "M1racles", was announced in May 2021. Two sandboxed applications can exchange data without the system's knowledge by using an unintentionally writable processor register as a covert channel, violating the security model and constituting a minor vulnerability. It was discovered by Hector Martin, founder of the Asahi Linux project for Linux on Apple silicon.[37]
In May 2022, a flaw termed "Augury" was announced, involving the data-memory-dependent prefetcher (DMP) in M1 chips, discovered by researchers at Tel Aviv University, the University of Illinois Urbana-Champaign, and the University of Washington. It was not considered a substantial security risk at the time.[38]
In June 2022, MIT researchers announced they had found a speculative execution vulnerability in M1 chips, which they called "Pacman" after pointer authentication codes (PACs).[39] Apple said they did not believe this posed a serious threat to users.[40]
An exploit named GoFetch[41] is able to extract cryptographic keys from M-series chip devices without administrative privileges.[42]
The table below shows the various SoCs based on the "Firestorm" and "Icestorm" microarchitectures.[43][44]
https://en.wikipedia.org/wiki/Apple_M1
XScale is a microarchitecture for central processing units initially designed by Intel, implementing the ARM architecture (version 5) instruction set. XScale comprises several distinct families: IXP, IXC, IOP, PXA and CE (see more below), with some later models designed as system-on-a-chip (SoC). Intel sold the PXA family to Marvell Technology Group in June 2006.[1] Marvell then extended the brand to include processors with other microarchitectures, like Arm's Cortex.
The XScale architecture is based on the ARMv5TE ISA without the floating-point instructions. XScale uses a seven-stage integer and an eight-stage memory super-pipelined microarchitecture. It is the successor to the Intel StrongARM line of microprocessors and microcontrollers, which Intel acquired from DEC's Digital Semiconductor division as part of a settlement of a lawsuit between the two companies. Intel used the StrongARM to replace its ailing line of outdated RISC processors, the i860 and i960.
All generations of XScale are 32-bit ARMv5TE processors manufactured on a 0.18 μm or 0.13 μm (as in IXP43x parts) process and have a 32 KB data cache and a 32 KB instruction cache. First- and second-generation XScale multi-core processors also have a 2 KB mini data cache (which Intel claims "avoids 'thrashing' of the D-Cache for frequently changing data streams"[2]). Products based on the third-generation XScale have up to 512 KB of unified L2 cache.[3]
The XScale core is used in a number of microcontroller families manufactured by Intel and Marvell:
There are also standalone processors: the 80200 and 80219 (targeted primarily at PCI applications).
PXA System on a Chip (SoC) products were designed in Austin, Texas. The code names for this product line are small towns in Texas, primarily near deer-hunting leases frequented by the Intel XScale core and mobile phone SoC marketing team. PXA System on a Chip products were popular in smartphones and PDAs (with Windows Mobile, Symbian OS, Palm OS) during 2000 to 2006.[4]
The PXA210 was Intel's entry-level XScale targeted at mobile phone applications. It was released with the PXA250 in February 2002 and comes clocked at 133 MHz and 200 MHz.
The PXA25x family (code-named Cotulla) consists of the PXA250 and PXA255. The PXA250 was Intel's first generation of XScale processors. There was a choice of three clock speeds: 200 MHz, 300 MHz and 400 MHz. It came out in February 2002. In March 2003, revision C0 of the PXA250 was renamed the PXA255. The main differences were a doubled internal bus speed (100 MHz to 200 MHz) for faster data transfer, lower core voltage (only 1.3 V at 400 MHz) for lower power consumption, and write-back functionality for the data cache, the lack of which had severely impaired performance on the PXA250.
Intel XScale core features:
The PXA26x family (code-named Dalhart) consists of the PXA260 and PXA261–PXA263. The PXA260 is a stand-alone processor clocked at the same frequency as the PXA25x, but features a TPBGA package which is about 53% smaller than the PXA25x's PBGA package. The PXA261–PXA263 are the same as the PXA260 but have Intel StrataFlash memory stacked on top of the processor in the same package: 16 MB of 16-bit memory in the PXA261, 32 MB of 16-bit memory in the PXA262 and 32 MB of 32-bit memory in the PXA263. The PXA26x family was released in March 2003.
The PXA27x family (code-named Bulverde) consists of the PXA270 and PXA271–PXA272 processors. This revision is a huge update to the XScale family of processors. The PXA270 is clocked at four different speeds: 312 MHz, 416 MHz, 520 MHz and 624 MHz, and is a stand-alone processor with no packaged memory. The PXA271 can be clocked to 13, 104, 208 or 416 MHz and has 32 MB of 16-bit stacked StrataFlash memory and 32 MB of 16-bit SDRAM in the same package. The PXA272 can be clocked to 312 MHz, 416 MHz or 520 MHz and has 64 MB of 32-bit stacked StrataFlash memory.
Intel also added many new technologies to the PXA27x family such as:
The PXA27x family was released in April 2004. Along with the PXA27x family, Intel released the 2700G embedded graphics co-processor (code-named Marathon).
In August 2005, Intel announced the successor to Bulverde, codenamed Monahans.
They demonstrated it by showing its capability to play back high-definition encoded video on a PDA screen.
The new processor was shown clocked at 1.25 GHz, but Intel said it offered only a 25% increase in performance (800 MIPS for the 624 MHz PXA270 vs. 1000 MIPS for the 1.25 GHz Monahans). An announced successor to the 2700G graphics processor, code-named Stanwood, has since been canceled. Some features of Stanwood were integrated into Monahans. For extra graphics capabilities, Intel recommends third-party chips like the Nvidia GoForce chip family.
In November 2006, Marvell Semiconductor officially introduced the Monahans family as the Marvell PXA320, PXA300, and PXA310.[9] The PXA320 is currently shipping in high volume, and is scalable up to 806 MHz. The PXA300 and PXA310 deliver performance "scalable to 624 MHz" and are software-compatible with the PXA320.
Codenamed Manitoba, the Intel PXA800F was a SoC introduced by Intel in 2003 for use in GSM- and GPRS-enabled mobile phones. The chip was built around an XScale processor core, the likes of which had been used in PDAs, clocked at 312 MHz and manufactured on a 0.13 μm process, with 4 MB of integrated flash memory and a digital signal processor.[10]
A prototype board with the chip was demoed during the Intel Developer Forum.[11] Intel noted it was in talks with leading mobile phone manufacturers, such as Nokia, Motorola, Samsung, Siemens and Sony Ericsson, about incorporating Manitoba into their phones.[12]
O2XM, released in 2005, was the only mobile phone with a documented use of the Manitoba chip.[13]An Intel executive stated that the chip version used in the phone was reworked to be less expensive than the initial one.[14]
The PXA90x, codenamed Hermon, was a successor to Manitoba with 3G support. The PXA90x is built using a 130 nm process.[15] The SoC continued to be marketed by Marvell after it acquired Intel's XScale business.[16][17]
PXA16x is a processor designed by Marvell, combining the earlier Intel-designed PXA SoC components with a new ARMv5TE CPU core named Mohawk or PJ1 from Marvell's Sheeva family, instead of using an XScale or ARM design. The CPU core is derived from the Feroceon core used in Marvell's embedded Kirkwood product line, but extended for instruction-level compatibility with the XScale iWMMXt extensions.
The PXA16x delivers strong performance at a mass-market price point for cost-sensitive consumer and embedded markets such as digital picture frames, e-readers, multifunction printer user interface (UI) displays, interactive VoIP phones, IP surveillance cameras, and home control gadgets.[18]
The PXA930 and PXA935 processor series were again built using the Sheeva microarchitecture developed by Marvell, but upgraded to ARMv7 instruction set compatibility.[19] This core is a so-called Tri-core architecture[20] codenamed Tavor; Tri-core means it supports the ARMv5TE, ARMv6 and ARMv7 instruction sets.[20][21] This new architecture was a significant leap from the old XScale architecture. The PXA930 uses 65 nm technology[22] while the PXA935 is built using the 45 nm process.[21]
The PXA930 is used in the BlackBerry Bold 9700.
Little is known about the PXA940, although it is known to be ARM Cortex-A8 compliant.[23] It is utilized in the BlackBerry Torch 9800[24][25] and is built using 45 nm technology.
After XScale and Sheeva, the PXA98x uses a third CPU core design, this time licensed directly from ARM, in the form of dual-core Cortex-A9 application processors[26] utilized by devices like the Samsung Galaxy Tab 3 7.0.[27]
It is a quad-core Cortex-A7 application processor with a Vivante GPU.[28]
The IXC1100 processor features clock speeds of 266, 400, and 533 MHz, a 133 MHz bus, 32 KB of instruction cache, 32 KB of data cache, and 2 KB of mini-data cache. It is also designed for low power consumption, using 2.4 W at 533 MHz. The chip comes in a 35 mm PBGA package.
The IOP line of processors is designed to allow computers and storage devices to transfer data and increase performance by offloading I/O functionality from the main CPU of the device. The IOP3XX processors are based on the XScale architecture and designed to replace the older 80219 and i960 family of chips. There are ten different IOP processors currently available: IOP303, IOP310, IOP315, IOP321, IOP331, IOP332, IOP333, IOP341, IOP342 and IOP348. Clock speeds range from 100 MHz to 1.2 GHz. The processors also differ in PCI bus type, PCI bus speed, memory type, maximum memory allowable, and the number of processor cores.
The XScale core is utilized in the second generation of Intel's IXP network processor line, while the first generation used StrongARM cores. The IXP network processor family ranges from solutions aimed at small/medium office network applications, the IXP4XX, to high-performance network processors such as the IXP2850, capable of sustaining up to OC-192 line rates. In IXP4XX devices the XScale core is used as both a control and data plane processor, providing both system control and data processing. The task of the XScale in the IXP2XXX devices is typically to provide control plane functionality only, with data processing performed by the microengines; examples of such control plane tasks include routing table updates, microengine control, and memory management.
In April 2007, Intel announced an XScale-based processor targeting consumer electronics markets, the Intel CE 2110 (codenamed Olo River).[29]
XScale microprocessors were used in RIM's BlackBerry handhelds, the Dell Axim family of Pocket PCs, most of the Zire, Treo and Tungsten handheld lines by Palm, later versions of the Sharp Zaurus, the Motorola A780, the Acer n50, the Compaq iPaq 3900 series, and other PDAs. It was the CPU in the Iyonix PC desktop computer running RISC OS, and the NSLU2 (Slug) running Linux. XScale is also used in devices such as portable video players (PVPs) and portable media centres (PMCs), including the Creative Zen portable media player and the Amazon Kindle e-book reader, and in industrial embedded systems.
At the other end of the market, the XScale IOP33x storage I/O processors are used in some Intel Xeon-based server platforms.
On June 27, 2006, the sale of Intel's XScale PXA mobile processor assets was announced. Intel agreed to sell the XScale PXA business to Marvell Technology Group for an estimated $600 million in cash and the assumption of unspecified liabilities. The move was intended to permit Intel to focus its resources on its core x86 and server businesses. Marvell holds a full architecture license for ARM, allowing it to design chips to implement the ARM instruction set, not just license a processor core.[30]
The acquisition was completed on November 9, 2006. Intel was expected to continue manufacturing XScale processors until Marvell secured other manufacturing facilities, and to continue manufacturing and selling the IXP and IOP processors, as they were not part of the deal.[31]
The XScale effort at Intel was initiated by the purchase of the StrongARM division from Digital Equipment Corporation in 1998.[32] Intel still holds an ARM license even after the sale of XScale;[32] this license is at the architectural level.[33]
https://en.wikipedia.org/wiki/XScale
Transient execution CPU vulnerabilities are vulnerabilities in which instructions, most often optimized using speculative execution, are executed temporarily by a microprocessor, without committing their results due to a misprediction or error, resulting in leaking secret data to an unauthorized party. The archetype is Spectre, and transient execution attacks like Spectre belong to the cache-attack category, one of several categories of side-channel attacks. Since January 2018 many different cache-attack vulnerabilities have been identified.
Modern computers are highly parallel devices, composed of components with very different performance characteristics. If an operation (such as a branch) cannot yet be performed because some earlier slow operation (such as a memory read) has not yet completed, a microprocessor may attempt to predict the result of the earlier operation and execute the later operation speculatively, acting as if the prediction were correct. The prediction may be based on recent behavior of the system. When the earlier, slower operation completes, the microprocessor determines whether the prediction was correct or incorrect. If it was correct then execution proceeds uninterrupted; if it was incorrect then the microprocessor rolls back the speculatively executed operations and repeats the original instruction with the real result of the slow operation. Specifically, a transient instruction[1] refers to an instruction processed in error by the processor (implicating the branch predictor in the case of Spectre) which can affect the micro-architectural state of the processor while leaving the architectural state without any trace of its execution.
In terms of the directly visible behavior of the computer, it is as if the speculatively executed code "never happened". However, this speculative execution may affect the state of certain components of the microprocessor, such as the cache, and this effect may be discovered by careful monitoring of the timing of subsequent operations.
If an attacker can arrange for the speculatively executed code (which may be directly written by the attacker, or may be a suitable gadget that they have found in the targeted system) to operate on secret data that they are unauthorized to access, and to have a different effect on the cache for different values of the secret data, they may be able to discover the value of the secret data.
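The canonical illustration is the bounds-check-bypass gadget from the original Spectre paper; a condensed C sketch (the array sizes and 4096-byte probe stride are the illustrative values used there):

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1_size = 16;
uint8_t array1[16];           /* secret data lies in memory beyond this array */
uint8_t array2[256 * 4096];   /* probe array: one page per possible byte value */

void victim(size_t x) {
    if (x < array1_size) {    /* bounds check, trained to predict "taken" */
        /* Executed speculatively even for out-of-bounds x: the secret byte
           at array1[x] selects which page of array2 gets cached. */
        volatile uint8_t tmp = array2[array1[x] * 4096];
        (void)tmp;
    }
}
/* Afterwards the attacker times reads of array2[i * 4096]; the one index
   that hits in the cache reveals the secret byte's value. */
```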
In early January 2018, it was reported that all Intel processors made since 1995[2][3] (besides Intel Itanium and pre-2013 Intel Atom) had been subject to two security flaws dubbed Meltdown and Spectre.[4][5]
The impact on performance resulting from software patches is "workload-dependent". Several procedures to help protect home computers and related devices from the Spectre and Meltdown security vulnerabilities have been published.[6][7][8][9] Spectre patches have been reported to significantly slow down performance, especially on older computers; on the newer 8th-generation Core platforms, benchmark performance drops of 2–14% have been measured.[10] Meltdown patches may also produce performance loss.[11][12][13] It is believed that "hundreds of millions" of systems could be affected by these flaws.[3][14] More security flaws were disclosed on May 3, 2018,[15] on August 14, 2018, on January 18, 2019, and on March 5, 2020.[16][17][18][19]
At the time, Intel was not commenting on this issue.[20][21]
On March 15, 2018, Intel reported that it would redesign its CPUs (performance losses to be determined) to protect against the Spectre security vulnerability, and expected to release the newly redesigned processors later in 2018.[22][23]
On May 3, 2018, eight additional Spectre-class flaws were reported. Intel reported that it was preparing new patches to mitigate these flaws.[24]
On August 14, 2018, Intel disclosed three additional chip flaws referred to as L1 Terminal Fault (L1TF). They reported that previously released microcode updates, along with new, pre-release microcode updates can be used to mitigate these flaws.[25][26]
On January 18, 2019, Intel disclosed three new vulnerabilities affecting all Intel CPUs, named "Fallout", "RIDL", and "ZombieLoad", allowing a program to read information recently written, read data in the line-fill buffers and load ports, and leak information from other processes and virtual machines.[27][28][29] Coffee Lake-series CPUs are even more vulnerable, due to hardware mitigations for Spectre.[citation needed][30]
On March 5, 2020, computer security experts reported another Intel chip security flaw, besides the Meltdown and Spectre flaws, with the systematic name CVE-2019-0090 (or "Intel CSME Bug").[16] This newly found flaw is not fixable with a firmware update, and affects nearly "all Intel chips released in the past five years".[17][18][19]
In March 2021, AMD security researchers discovered that the Predictive Store Forwarding algorithm in Zen 3 CPUs could be used by malicious applications to access data they shouldn't be accessing.[31] According to Phoronix, there is little performance impact in disabling the feature.[32]
In June 2021, two new vulnerabilities, Speculative Code Store Bypass (SCSB, CVE-2021-0086) and Floating Point Value Injection (FPVI, CVE-2021-0089), affecting all modern x86-64 CPUs from both Intel and AMD, were discovered.[33] In order to mitigate them, software has to be rewritten and recompiled. ARM CPUs are not affected by SCSB, but certain ARM architectures are affected by FPVI.[34]
In June 2022, MIT researchers revealed the PACMAN attack on pointer authentication codes (PAC) in ARMv8.3A.[35][36][37]
In August 2021, a vulnerability called "Transient Execution of Non-canonical Accesses", affecting certain AMD CPUs, was disclosed.[38][39][40] It requires the same mitigations as the MDS vulnerability affecting certain Intel CPUs.[41] It was assigned CVE-2020-12965. Since most x86 software is already patched against MDS and this vulnerability has exactly the same mitigations, software vendors do not have to address it separately.
In October 2021, for the first time, a vulnerability similar to Meltdown was disclosed[42][43] to affect all AMD CPUs; however, the company does not think any new mitigations have to be applied and considers the existing ones sufficient.[44]
In March 2022, a new variant of the Spectre vulnerability called Branch History Injection was disclosed.[45][46] It affects certain ARM64 CPUs[47] and the following Intel CPU families: Cascade Lake, Ice Lake, Tiger Lake and Alder Lake. According to Linux kernel developers, AMD CPUs are also affected.[48]
In March 2022, a vulnerability affecting a wide range of AMD CPUs was disclosed under CVE-2021-26341.[49][50]
In June 2022, multiple MMIO-related Intel CPU vulnerabilities affecting execution in virtual environments were announced.[51] The following CVEs were designated: CVE-2022-21123, CVE-2022-21125, CVE-2022-21166.
In July 2022, the Retbleed vulnerability was disclosed, affecting Intel Core 6th to 8th generation CPUs and AMD Zen 1, Zen 1+ and Zen 2 generation CPUs. Newer Intel microarchitectures, as well as AMD starting with Zen 3, are not affected. The mitigations for the vulnerability decrease the performance of the affected Intel CPUs by up to 39%, while AMD CPUs lose up to 14%.
In August 2022, the SQUIP vulnerability was disclosed, affecting Ryzen 2000–5000 series CPUs.[52] According to AMD, the existing mitigations are enough to protect from it.[53]
According to a Phoronix review released in October 2022, Zen 4/Ryzen 7000 CPUs are not slowed down by mitigations; in fact, disabling them leads to a performance loss.[54][55]
In February 2023, a vulnerability affecting a wide range of AMD CPU architectures, called "Cross-Thread Return Address Predictions", was disclosed.[56][57][58]
In July 2023, a critical vulnerability in the Zen 2 AMD microarchitecture called Zenbleed was made public.[59][1] AMD released a microcode update to fix it.[60]
In August 2023, a vulnerability in AMD's Zen 1, Zen 2, Zen 3, and Zen 4 microarchitectures called Inception[61][62] was revealed and assigned CVE-2023-20569. According to AMD, it is not practical to exploit, but the company will release a microcode update for the affected products.
Also in August 2023, a new vulnerability called Downfall or Gather Data Sampling was disclosed,[63][64][65] affecting the Intel Skylake, Cascade Lake, Cooper Lake, Ice Lake, Tiger Lake, Amber Lake, Kaby Lake, Coffee Lake, Whiskey Lake, Comet Lake and Rocket Lake CPU families. Intel will release a microcode update for affected products.
The SLAM[66][67][68][69] vulnerability (Spectre based on Linear Address Masking), reported in 2023, has neither received a corresponding CVE nor been confirmed or mitigated against.
In March 2024, a variant of the Spectre-V1 attack called GhostRace was published.[70] It was claimed to affect all major microarchitectures and vendors, including Intel, AMD and ARM. It was assigned CVE-2024-2193. AMD dismissed the vulnerability (calling it "Speculative Race Conditions (SRCs)"), claiming that existing mitigations were enough.[71] Linux kernel developers chose not to add mitigations, citing performance concerns.[72] The Xen hypervisor project released patches to mitigate the vulnerability, but they are not enabled by default.[73]
Also in March 2024, a vulnerability in Intel Atom processors called Register File Data Sampling (RFDS) was revealed.[74] It was assigned CVE-2023-28746. Its mitigations incur a slight performance degradation.[75]
In April 2024, it was revealed that the BHI vulnerability in certain Intel CPU families could still be exploited in Linux entirely in user space, without using any kernel features or root access, despite existing mitigations.[76][77][78] Intel recommended "additional software hardening".[79] The attack was assigned CVE-2024-2201.
In June 2024, Samsung Research and Seoul National University researchers revealed the TikTag attack against the Memory Tagging Extension in ARMv8.5A CPUs. The researchers created PoCs for Google Chrome and the Linux kernel.[80][81][82][83] Researchers from VUSec had previously revealed that ARM's Memory Tagging Extension is vulnerable to speculative probing.[84][85]
In July 2024, UC San Diego researchers revealed the Indirector attack against Intel Alder Lake and Raptor Lake CPUs, leveraging high-precision Branch Target Injection (BTI).[86][87][88] Intel downplayed the severity of the vulnerability and claimed the existing mitigations are enough to tackle the issue.[89] No CVE was assigned.
In January 2025, Georgia Institute of Technology researchers published two whitepapers on Data Speculation Attacks via Load Address Prediction on Apple Silicon (SLAP) and Breaking the Apple M3 CPU via False Load Output Predictions (FLOP).[90][91][92]
Also in January 2025, Arm disclosed a vulnerability (CVE-2024-7881) in which an unprivileged context can trigger a data-memory-dependent prefetch engine to fetch data from a privileged location, potentially leading to unauthorized access. To mitigate the issue, Arm recommends disabling the affected prefetcher by setting CPUACTLR6_EL1[41].[93][94]
In May 2025, VUSec released three vulnerabilities extending Spectre-v2 in various Intel and ARM architectures, under the moniker Training Solo.[95][96][97] Mitigations require a microcode update for Intel CPUs and changes to the Linux kernel.
Also in May 2025, the ETH Zurich Computer Security Group (COMSEC) disclosed the Branch Privilege Injection vulnerability, affecting all Intel x86 architectures starting from the 9th generation (Coffee Lake Refresh), under CVE-2024-45332.[98][99][100] A microcode update is required to mitigate it, and it comes with a performance cost of up to 8%.
Spectre-class vulnerabilities will remain unfixed because otherwise CPU designers would have to disable speculative execution, which would entail a massive performance loss.[citation needed] Despite this, AMD has managed to design Zen 4 in such a way that its performance is not affected by mitigations.[54][55]
*Various CPU microarchitectures not included above are also affected, among them ARM, IBM Power, MIPS and others.[149][150][151][152]
**The 8th generation Coffee Lake architecture in this table also applies to a wide range of previously released Intel CPUs, not limited to the architectures based on Intel Core, Pentium 4 and Intel Atom starting with Silvermont.[153][154]
https://en.wikipedia.org/wiki/Transient_execution_CPU_vulnerability
The C3 microprocessor from VIA Technologies is a fifth-generation CPU targeted at the desktop and mobile markets.
https://en.wikipedia.org/wiki/List_of_VIA_C3_microprocessors
The C7 microprocessor from VIA Technologies is a seventh-generation CPU targeted at the consumer and embedded markets.
https://en.wikipedia.org/wiki/List_of_VIA_C7_microprocessors
The Eden microprocessors from VIA Technologies are fifth- and sixth-generation CPUs targeted at the embedded market.
On 13 August 2015, VIA announced an embedded PC using a 1.2 GHz VIA Eden X4 5000-series CPU.
https://en.wikipedia.org/wiki/List_of_VIA_Eden_microprocessors
The Nano[1] microprocessor from VIA Technologies is an eighth-generation CPU targeted at the consumer and embedded markets.
https://en.wikipedia.org/wiki/List_of_VIA_Nano_microprocessors
As the 32-bit Intel architecture became the dominant computing platform during the 1980s and 1990s, multiple companies tried to build microprocessors compatible with the Intel instruction set architecture. Most of these companies were not successful in the mainstream computing market. So far, only AMD has had any market presence in the computing market for more than a couple of product generations. Cyrix was successful during the 386 and 486 generations of products but did not do well after the Pentium was introduced.
List of former IA-32 compatible microprocessor vendors:
https://en.wikipedia.org/wiki/List_of_former_IA-32_compatible_processor_manufacturers
In computing, Intel's Advanced Programmable Interrupt Controller (APIC) is a family of programmable interrupt controllers. As its name suggests, the APIC is more advanced than Intel's 8259 Programmable Interrupt Controller (PIC), particularly enabling the construction of multiprocessor systems. It is one of several architectural designs intended to solve interrupt routing efficiency issues in multiprocessor computer systems.
The APIC is a split architecture design, with a local component (LAPIC) usually integrated into the processor itself, and an optional I/O APIC on a system bus. The first APIC was the 82489DX – it was a discrete chip that functioned as both local and I/O APIC. The 82489DX enabled construction of symmetric multiprocessor (SMP) systems with the Intel 486 and early Pentium processors; for example, the reference two-way 486 SMP system used three 82489DX chips, two as local APICs and one as I/O APIC. Starting with the P54C processor, the local APIC functionality was integrated into the Intel processors' silicon. The first dedicated I/O APIC was the Intel 82093AA, which was intended for PIIX3-based systems.
There are two components in the Intel APIC system, the local APIC (LAPIC) and the I/O APIC. There is one LAPIC in each CPU in the system. In the very first implementation (82489DX), the LAPIC was a discrete circuit, as opposed to its later implementation in Intel processors' silicon. There is typically one I/O APIC for each peripheral bus in the system. In original system designs, LAPICs and I/O APICs were connected by a dedicated APIC bus. Newer systems use the system bus for communication between all APIC components.
Each APIC, whether a discrete chip or integrated in a CPU, has a version register containing a four-bit version number for its specific APIC implementation. For example, the 82489DX has an APIC version number of 0, while version 1 was assigned to the first generation of local APICs integrated in the Pentium 90 and 100 processors.[1]
In systems containing an 8259 PIC, the 8259 may be connected to the LAPIC in the system's bootstrap processor (BSP), to one of the system's I/O APICs, or to both. Logically, however, the 8259 is only connected once at any given time.
The first-generation Intel APIC chip, the 82489DX, which was meant to be used with Intel 80486 and early Pentium processors, is actually an external local and I/O APIC in one circuit. The Intel MP 1.4 specification refers to it as a "discrete APIC", in contrast with the "integrated APIC" found in most Pentium processors.[2] The 82489DX had 16 interrupt lines;[3] it also had a quirk that it could lose some ISA interrupts.[4]
In a multiprocessor 486 system, each CPU had to be paired with its own 82489DX; additionally, a supplementary 82489DX had to be used as the I/O APIC. The 82489DX could not emulate the 8259A (XT-PIC), so these also had to be included as physical chips for backwards compatibility.[5] The 82489DX was packaged as a 132-pin PQFP.[3]
Local APICs (LAPICs) manage all external interrupts for the specific processor they belong to in an SMP system. In addition, they are able to accept and generate inter-processor interrupts (IPIs) between LAPICs. A single LAPIC may support up to 224 usable interrupt vectors from an I/O APIC. Vector numbers 0 to 31, out of 0 to 255, are reserved for exception handling by x86 processors.
All Intel processors starting with the P5 microarchitecture (P54C) have a built-in local APIC.[6][7]However, if the local APIC is disabled in a P5 processor, it cannot be re-enabled by software; this limitation no longer exists in theP6 processorsand later ones.[7]
With the introduction of the Pentium 4 HT and Pentium D, each CPU core and each CPU thread has an integrated LAPIC.
The Message Signaled Interrupts (MSI) feature of the PCI 2.2 and later specifications cannot be used without the local APIC being enabled.[8] Use of MSI obviates the need for an I/O APIC. Additionally, up to 224 interrupts are supported in MSI mode, and IRQ sharing is not allowed.[9]
Another advantage of the local APIC is that it also provides a high-resolution (on the order of one microsecond or better) timer that can be used in both interval and one-off mode.[7]
The APIC timer had its initial acceptance woes. A Microsoft document from 2002 (which advocated the adoption of the High Precision Event Timer instead) criticized the LAPIC timer for having "poor resolution" and stated that "the clock's silicon is sometimes very buggy".[10] Nevertheless, the APIC timer is used, for example, by Windows 7 when profiling is enabled, and by Windows 8 in all circumstances. (Before Windows 8 claimed exclusive rights to this timer, it was also used by some programs like CPU-Z.) Under Microsoft Windows the APIC timer is not a shareable resource.[11]
The aperiodic interrupts offered by the APIC timer are used by the Linux kernel's tickless kernel feature. This optional but default feature is new with 2.6.18. When enabled on a computer with an APIC timer, the kernel does not use the 8253 programmable interval timer for timekeeping.[12] A VMware document notes that "software does not have a reliable way to determine its frequency. Generally, the only way to determine the local APIC timer's frequency is to measure it using the PIT or CMOS timer, which yields only an approximate result."[13]
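A minimal sketch of driving the LAPIC timer, assuming the xAPIC MMIO register offsets from the Intel SDM; `lapic` is a hypothetical mapping of the local-APIC page, and the tick count must first be calibrated against the PIT or CMOS timer exactly as the VMware note above describes:

```c
#include <stdint.h>

extern volatile uint32_t *lapic;   /* hypothetical mapping of the LAPIC page */

#define LVT_TIMER   (0x320 / 4)    /* LVT timer register            */
#define INIT_COUNT  (0x380 / 4)    /* initial-count register        */
#define CURR_COUNT  (0x390 / 4)    /* current-count register        */
#define DIVIDE_CFG  (0x3E0 / 4)    /* divide-configuration register */

#define TIMER_PERIODIC (1u << 17)  /* timer mode bit: periodic */

void lapic_timer_start(uint8_t vector, uint32_t ticks) {
    lapic[DIVIDE_CFG] = 0x3;                      /* divide bus clock by 16   */
    lapic[LVT_TIMER]  = TIMER_PERIODIC | vector;  /* interval (periodic) mode */
    lapic[INIT_COUNT] = ticks;                    /* writing this starts it   */
}
/* Calibration: start a count, wait a known PIT interval, read CURR_COUNT,
   and derive the APIC timer frequency from the delta. */
```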
I/O APICs contain a redirection table, which is used to route the interrupts they receive from peripheral buses to one or more local APICs. Early I/O APICs (like the 82489DX, SIO.A and PCEB/ESC) only had support for 16 interrupt lines, but later ones like the 82093AA (a separate chip for the PIIX3/PIIX4) supported 24 interrupt lines.[9] It was packaged as a 64-pin PQFP.[14] The 82093AA normally connected to the PIIX3/PIIX4 and used its integrated legacy 8259 PICs.[14] The ICH1 integrated the I/O APIC. An integrated I/O APIC of modern chipsets may provide more than 24 interrupt lines.[15]
According to a 2009 Intel benchmark using Linux, the I/O APIC reduced interrupt latency by a factor of almost three relative to the 8259 emulation (XT-PIC), while using MSI reduced the latency even more, by a factor of nearly seven relative to the XT-PIC baseline.[16]
The xAPIC was introduced with the Pentium 4, while the x2APIC is the most recent generation of Intel's programmable interrupt controller, introduced with the Nehalem microarchitecture in November 2008.[17] The major improvements of the x2APIC address the number of supported CPUs and the performance of the interface.
The x2APIC now uses 32 bits to address CPUs, allowing up to 2^32 − 1 CPUs to be addressed using the physical destination mode. The logical destination mode now works differently and introduces clusters; using this mode, one can address up to 2^20 − 16 processors.
The improved interface reduces the number of APIC register accesses needed for sending inter-processor interrupts (IPIs). Because of this advantage, KVM can and does emulate the x2APIC for older processors that do not physically support it, and this support is exposed from QEMU going back to Conroe and even for AMD Opteron G-series processors (neither of which natively supports the x2APIC).[18][19]
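A sketch of why the x2APIC interface is cheaper, assuming the register layout in the Intel SDM (the 64-bit ICR is MSR 830h in x2APIC mode, versus two 32-bit MMIO writes at offsets 310h and 300h in xAPIC mode); wrmsr() and the mapped xapic page are hypothetical kernel-mode helpers:

```c
#include <stdint.h>

#define IA32_X2APIC_ICR 0x830              /* ICR as a single 64-bit MSR  */

void wrmsr(uint32_t msr, uint64_t value);  /* hypothetical WRMSR wrapper  */
extern volatile uint32_t *xapic;           /* hypothetical xAPIC MMIO page */

void send_ipi_x2apic(uint32_t apic_id, uint8_t vector) {
    /* One atomic write: destination APIC ID in bits 63:32. */
    wrmsr(IA32_X2APIC_ICR, ((uint64_t)apic_id << 32) | vector);
}

void send_ipi_xapic(uint8_t apic_id, uint8_t vector) {
    /* Two non-atomic writes: ICR-high must be set up first; the
       write to ICR-low is what actually sends the IPI. */
    xapic[0x310 / 4] = (uint32_t)apic_id << 24;
    xapic[0x300 / 4] = vector;
}
```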
APICv is Intel's brand name for hardware virtualization support aimed at reducing interrupt overhead in guests. APICv was introduced in the Ivy Bridge-EP processor series, which is sold as Xeon E5-26xx v2 (launched in late 2013) and as Xeon E5-46xx v2 (launched in early 2014).[20][21] AMD announced a similar technology called AVIC;[22][23] it is available in family 15h models 6Xh (Carrizo) processors and newer.[24]
There are a number of known bugs in implementations of APIC systems, especially with regard to how the 8254 is connected. Defective BIOSes may not set up interrupt routing properly, or may provide incorrect ACPI tables and Intel MultiProcessor Specification (MPS) tables.
The APIC can also be a cause of system failure when the operating system does not support it properly. On older operating systems, the I/O and local APICs often had to be disabled. While disabling them is no longer possible, due to the prevalence of symmetric multiprocessor and multi-core systems, bugs in the firmware and operating systems are now a rare occurrence.
AMD and Cyrix once proposed a somewhat similar-in-purpose OpenPIC architecture supporting up to 32 processors;[25] it had at least declarative support from IBM and Compaq around 1995.[26] No x86 motherboard was released with OpenPIC, however.[27] After OpenPIC's failure in the x86 market, AMD licensed Intel's APIC for its AMD Athlon and later processors.
IBM, however, developed their MultiProcessor Interrupt Controller (MPIC) based on the OpenPIC register specifications.[28] MPIC was used in PowerPC-based designs, including those of IBM, for instance in some RS/6000 systems,[29] but also by Apple, as late as their Power Mac G5s.[30][31]
https://en.wikipedia.org/wiki/Advanced_Programmable_Interrupt_Controller
In computing, a programmable interrupt controller (PIC) is an integrated circuit that helps a microprocessor (or CPU) handle interrupt requests (IRQs) coming from multiple different sources (such as external I/O devices) which may occur simultaneously.[1] It helps prioritize IRQs so that the CPU switches execution to the most appropriate interrupt handler (ISR) after the PIC assesses the IRQs' relative priorities. Common modes of interrupt priority include hard priorities, rotating priorities, and cascading priorities.[citation needed] PICs often allow mapping of inputs to outputs in a configurable way. On the PC architecture, the PIC is typically embedded into a southbridge chip whose internal architecture is defined by the chipset vendor's standards.
PICs typically have a common set of registers: the interrupt request register (IRR), the in-service register (ISR), and the interrupt mask register (IMR). The IRR specifies which interrupts are pending acknowledgement, and is typically a symbolic register which cannot be directly accessed. The ISR specifies which interrupts have been acknowledged but are still waiting for an end of interrupt (EOI). The IMR specifies which interrupts are to be ignored and not acknowledged. A simple register schema such as this allows up to two distinct interrupt requests to be outstanding at one time: one waiting for acknowledgement and one waiting for EOI.
There are a number of common priority schemas in PICs including hard priorities, specific priorities, and rotating priorities.
Interrupts may be eitheredge triggeredorlevel triggered.
There are a number of common ways of acknowledging an interrupt has completed when an EOI is issued. These include specifying which interrupt completed, using an implied interrupt which has completed (usually the highest priority pending in the ISR), and treating interrupt acknowledgement as the EOI.
One of the best-known PICs, the 8259A, was included in the x86 PC. In modern times, this is not included as a separate chip in an x86 PC, but rather as part of the motherboard's southbridge chipset.[2] In other cases, it has been replaced by the newer Advanced Programmable Interrupt Controllers, which support more interrupt outputs and more flexible priority schemas.
https://en.wikipedia.org/wiki/Programmable_Interrupt_Controller
The Intel 8259 is a programmable interrupt controller (PIC) designed for the Intel 8085 and 8086 microprocessors. The initial part was the 8259; the later A-suffix version was upward compatible and usable with the 8086 or 8088 processor. The 8259 combines multiple interrupt input sources into a single interrupt output to the host microprocessor, extending the interrupt levels available in a system beyond the one or two levels found on the processor chip. The 8259A was the interrupt controller for the ISA bus in the original IBM PC and IBM PC AT.
The 8259 was introduced as part of Intel's MCS 85 family in 1976. The 8259A was included in the original PC introduced in 1981 and retained by the PC/XT when introduced in 1983. A second 8259A was added with the introduction of the PC/AT. The 8259 has coexisted with the Intel APIC architecture since the latter's introduction in symmetric multiprocessor PCs. Modern PCs have begun to phase out the 8259A in favor of the Intel APIC architecture. However, while no longer a separate chip, the 8259A interface is still provided by the Platform Controller Hub or southbridge on modern x86 motherboards.[1]
The main signal pins on an 8259 are as follows: eight interrupt request input lines named IRQ0 through IRQ7, an interrupt request output line named INTR, an interrupt acknowledgment line named INTA, and D0 through D7 for communicating the interrupt level or vector offset. Other connections include CAS0 through CAS2 for cascading between 8259s.
Up to eight slave 8259s may be cascaded to a master 8259 to provide up to 64 IRQs. 8259s are cascaded by connecting the INT line of one slave 8259 to an IRQ line of the master 8259.
End of interrupt (EOI) operations support specific EOI, non-specific EOI, and auto-EOI. A specific EOI specifies the IRQ level it is acknowledging in the ISR. A non-specific EOI resets the IRQ level in the ISR. Auto-EOI resets the IRQ level in the ISR immediately after the interrupt is acknowledged.
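On the 8259 these EOI variants are issued through operation command word 2 (OCW2). A minimal C sketch, with outb() as a hypothetical port-I/O helper and 0x20 as the master PIC's command port on the PC:

```c
#include <stdint.h>

void outb(uint16_t port, uint8_t value);  /* hypothetical port-I/O helper */

#define PIC1_CMD 0x20   /* master 8259 command port on the PC */

void eoi_nonspecific(void) {
    outb(PIC1_CMD, 0x20);                    /* OCW2: non-specific EOI */
}

void eoi_specific(uint8_t irq_level) {
    outb(PIC1_CMD, 0x60 | (irq_level & 7));  /* OCW2: specific EOI for level */
}
/* In auto-EOI mode neither call is needed: the ISR bit is cleared as soon
   as the interrupt is acknowledged. */
```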
Edge and level interrupt trigger modes are supported by the 8259A. Fixed priority and rotating priority modes are supported.
The 8259 may be configured to work with an 8080/8085 or an 8086/8088. On the 8086/8088, the interrupt controller will provide an interrupt number on the data bus when an interrupt occurs. The interrupt cycle of the 8080/8085 will issue three bytes on the data bus (corresponding to a CALL instruction in the 8080/8085 instruction set).
The 8259A provides additional functionality compared to the 8259 (in particular buffered mode and level-triggered mode) and is upward compatible with it.
Programming an 8259 in conjunction with DOS and Microsoft Windows has introduced a number of confusing issues for the sake of backwards compatibility, which extends as far back as the original PC introduced in 1981.
The first issue is more or less the root of the second issue. DOS device drivers are expected to send a non-specific EOI to the 8259s when they finish servicing their device. This prevents the use of any of the 8259's other EOI modes in DOS, and excludes the differentiation between device interrupts rerouted from the master 8259 to the slave 8259.
The second issue deals with the use of IRQ2 and IRQ9 from the introduction of a slave 8259 in the PC/AT. The slave 8259's INT output is connected to the master's IR2. The IRQ2 line of the ISA bus, originally connected to this IR2, was rerouted to IR1 of the slave. Thus the old IRQ2 line now generates IRQ9 in the CPU. To allow backwards compatibility with DOS device drivers that still set up for IRQ2, a handler is installed by the BIOS for IRQ9 that redirects interrupts to the original IRQ2 handler.
In the PC/clone family of platforms, the BIOS (and thus also DOS) traditionally maps the master 8259 interrupt requests (IRQ0–IRQ7) to interrupt vector offset 8 (corresponding to INT 08–INT 0Fh), and when present, the PC/AT’s slave 8259 is mapped to interrupt vector offset 112 (INT 70–INT 77h). This was done despite the first 32 (INT 00-INT 1F) interrupt vectors being reserved by the processor for internal exceptions.
This meant that, on later chips, handlers for lower-numbered vectors needed to differentiate between three causes: a processor-generated exception, a hardware interrupt request delivered through the 8259, or a software interrupt (INT instruction) using the same vector number.
Because of this, most operating systems that don’t make use of the BIOS will configure the interrupt controller(s) to avoid the reserved vector range entirely. In protected mode, the OS can restrict use of INT instructions to specific vectors only (e.g., Linux exposes INT 80h for system calls), and any attempt to use a disallowed vector will raise a protection fault.
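For example, the conventional remapping sequence moves the master 8259 to vector base 0x20 and the slave to 0x28, clear of the reserved range. The following is a minimal sketch under the standard PC port assignments, not any particular OS's actual code:

```c
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val) {
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

#define PIC1_CMD  0x20  /* master 8259 */
#define PIC1_DATA 0x21
#define PIC2_CMD  0xA0  /* slave 8259 */
#define PIC2_DATA 0xA1

/* Reinitialize both 8259s so IRQ0-7 use vectors 0x20-0x27 and
   IRQ8-15 use vectors 0x28-0x2F, clear of the CPU-reserved 0x00-0x1F. */
static void pic_remap(void) {
    outb(PIC1_CMD,  0x11);  /* ICW1: edge-triggered, cascade mode, ICW4 needed */
    outb(PIC2_CMD,  0x11);
    outb(PIC1_DATA, 0x20);  /* ICW2: master vector base */
    outb(PIC2_DATA, 0x28);  /* ICW2: slave vector base  */
    outb(PIC1_DATA, 0x04);  /* ICW3: slave attached to master IR2 */
    outb(PIC2_DATA, 0x02);  /* ICW3: slave cascade identity 2     */
    outb(PIC1_DATA, 0x01);  /* ICW4: 8086/8088 mode */
    outb(PIC2_DATA, 0x01);
    outb(PIC1_DATA, 0xFF);  /* mask all lines until drivers unmask them */
    outb(PIC2_DATA, 0xFF);
}
```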
This avoids some of the need for cause determination in interrupt vector handlers, although spurious interrupts and IRQ sharing can still complicate matters. Fortunately, most peripheral devices can be queried with regard to outstanding IRQs, and if no source can be found an interrupt can be treated as spurious or ignored altogether.
Since most other operating systems allow for changes in device driver expectations, other 8259 modes of operation, such as Auto-EOI, may be used. This is especially important for modernx86hardware in which a significant amount of time may be spent on I/O address space delay when communicating with the 8259s. This also allows a number of other optimizations in synchronization, such as critical sections, in a multiprocessor x86 system with 8259s.
Since the ISA bus does not supportlevel triggeredinterrupts, level triggered mode may not be used for interrupts connected to ISA devices. This means that on PC/XT, PC/AT, and compatible systems the 8259 must be programmed foredge triggeredmode. On MCA systems, devices use level triggered interrupts and the interrupt controller is hardwired to always work in level triggered mode. On newer EISA, PCI, and later systems the Edge/Level Control Registers (ELCRs) control the mode per IRQ line, effectively making the mode of the 8259 irrelevant for such systems with ISA buses. The ELCR is programmed by the BIOS at system startup for correct operation.
The ELCRs are located at 0x4d0 and 0x4d1 in the x86 I/O address space. They are 8 bits wide, each bit corresponding to an IRQ from the 8259s. When a bit is set, the IRQ is in level-triggered mode; otherwise, the IRQ is in edge-triggered mode.
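A minimal read-side sketch follows, assuming the standard port locations above and a GCC-style port-read helper; whether a given chipset actually implements the ELCRs there should be confirmed against its datasheet.

```c
#include <stdbool.h>
#include <stdint.h>

/* Port-read helper (GCC inline assembly, x86). */
static inline uint8_t inb(uint16_t port) {
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

/* Query the trigger mode of IRQ 0-15 from the ELCRs at 0x4d0/0x4d1.
   A set bit means level-triggered; clear means edge-triggered. */
static bool irq_is_level_triggered(unsigned irq) {
    uint16_t port = (irq < 8) ? 0x4D0 : 0x4D1;
    return (inb(port) >> (irq & 7)) & 1;
}
```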
The 8259 generates spurious interrupts in response to a number of conditions.
The first is an IRQ line being deasserted before it is acknowledged. This may occur due to noise on the IRQ lines. In edge-triggered mode, the noise must maintain the line in the low state for 100 ns. When the noise diminishes, a pull-up resistor returns the IRQ line to high, thus generating a false interrupt. In level-triggered mode, the noise may cause a high signal level on the system's INTR line. If the system sends an acknowledgment request, the 8259 has nothing to resolve and thus sends an IRQ7 in response. This first case generates spurious IRQ7s.
A similar case can occur when the unmasking of an IRQ at the 8259 and the deassertion of the IRQ input are not properly synchronized. In many systems, the IRQ input is deasserted by an I/O write, and the processor doesn't wait until the write reaches the I/O device. If the processor continues and unmasks the 8259 IRQ before the IRQ input is deasserted, the 8259 will assert INTR again. By the time the processor recognizes this INTR and issues an acknowledgment to read the IRQ from the 8259, the IRQ input may be deasserted, and the 8259 returns a spurious IRQ7.
The second is when the master 8259's IR2 input is active but the slave 8259's IRQ lines are all inactive on the falling edge of an interrupt acknowledgment. This second case generates spurious IRQ15s, but is rare.
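Handlers commonly guard against both cases by reading the in-service register before acknowledging. A minimal sketch, again assuming standard PC ports and GCC-style port-I/O helpers:

```c
#include <stdbool.h>
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val) {
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}
static inline uint8_t inb(uint16_t port) {
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

/* OCW3 value 0x0B selects the in-service register for the next read. */
static uint8_t pic_read_isr(uint16_t cmd_port) {
    outb(cmd_port, 0x0B);
    return inb(cmd_port);
}

/* In the IRQ7 handler: if ISR bit 7 is clear, the interrupt was
   spurious and no EOI should be sent.  For a spurious IRQ15, the
   slave (port 0xA0) gets no EOI, but the master still must, since
   it saw a genuine assertion on its IR2 input. */
static bool irq7_was_spurious(void) {
    return (pic_read_isr(0x20) & 0x80) == 0;
}
```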
The PC/XTISAsystem had one 8259 controller, while PC/AT and later systems had two 8259 controllers, master and slave. IRQ0 through IRQ7 are the master 8259's interrupt lines, while IRQ8 through IRQ15 are the slave 8259's interrupt lines. The labels on the pins on an 8259 are IR0 through IR7. IRQ0 through IRQ15 are the names of the ISA bus's lines to which the 8259s are attached.
|
https://en.wikipedia.org/wiki/Intel_8259
|
Incomputing, aplug and play(PnP) device orcomputer busis one with a specification that facilitates the recognition of a hardware component in a system without the need for physical device configuration or user intervention in resolving resource conflicts.[1][2]The term "plug and play" has since been expanded to a wide variety of applications to which the same lack of user setup applies.[3][4]
Expansion devices are controlled and exchange data with the host system through defined memory orI/Ospace port addresses,direct memory accesschannels,interrupt requestlines and other mechanisms, which must be uniquely associated with a particular device to operate. Some computers provided unique combinations of these resources to each slot of amotherboardorbackplane. Other designs provided all resources to all slots, and each peripheral device had its own address decoding for the registers or memory blocks it needed to communicate with the host system. Since fixed assignments made expansion of a system difficult, devices used several manual methods for assigning addresses and other resources, such as hard-wired jumpers, pins that could be connected with wire or removable straps, or switches that could be set for particular addresses.[5]As microprocessors made mass-market computers affordable, software configuration of I/O devices was advantageous to allow installation by non-specialist users. Early systems for software configuration of devices included theMSXstandard,NuBus,AmigaAutoconfig, and IBM Microchannel. Initially allexpansion cardsfor theIBM PCrequired physical selection of I/O configuration on the board with jumper straps orDIP switches, but increasinglyISA busdevices were arranged for software configuration.[6]By 1995,Microsoft Windowsincluded a comprehensive method of enumerating hardware at boot time and allocatingresources, which was called the "Plug and Play" standard.[7]
Plug and play devices can have resources allocated at boot-time only, or may behotplugsystems such asUSBandIEEE 1394(FireWire).[8]
Some early microcomputer peripheral devices required the end user physically to cut some wires and solder together others in order to make configuration changes;[9]such changes were intended to be largely permanent for the life of the hardware.
As computers became more accessible to the general public, the need developed for more frequent changes to be made by computer users unskilled with using soldering irons. Rather than cutting and soldering connections, configuration was accomplished byjumpersorDIP switches.
Later still, this configuration process was automated, giving rise to Plug and Play.[6]
The MSX system, released in 1983,[10] was designed to be plug and play from the ground up, and achieved this by a system of slots and subslots, where each had its own virtual address space, thus eliminating device addressing conflicts at their very source. No jumpers or any manual configuration was required, and the independent address space for each slot allowed very cheap and commonplace chips to be used, alongside cheap glue logic.
On the software side, the drivers and extensions were supplied in the card's own ROM, thus requiring no disks or any kind of user intervention to configure the software. The ROM extensionsabstracted any hardware differencesand offered standard APIs as specified byASCII Corporation.
In 1984, the NuBus architecture was developed by the Massachusetts Institute of Technology (MIT)[11] as a platform-agnostic peripheral interface that fully automated device configuration. The specification was sufficiently intelligent that it could work with both big-endian and little-endian computer platforms that had previously been mutually incompatible. However, this agnostic approach increased interfacing complexity and required support chips on every device, which was expensive in the 1980s, and apart from its use in Apple Macintoshes and NeXT machines, the technology was not widely adopted.
In 1984, Commodore developed theAutoconfigprotocol and the Zorro expansion bus for itsAmigaline of expandable computers. The first public appearance was in the CES computer show at Las Vegas in 1985, with the so-called "Lorraine" prototype. Like NuBus, Zorro devices had absolutely no jumpers or DIP switches. Configuration information was stored on a read-only device on each peripheral, and at boot time the host system allocated the requested resources to the installed card. The Zorro architecture did not spread to general computing use outside of the Amiga product line, but was eventually upgraded asZorro IIandZorro IIIfor the later iteration of Amiga computers.
In 1987, IBM released an update to theIBM PCknown as thePersonal System/2line of computers using theMicro Channel Architecture.[12]The PS/2 was capable of totally automatic self-configuration. Every piece of expansion hardware was issued with a floppy disk containing a special file used toauto-configurethe hardware to work with the computer. The user would install the device, turn on the computer, load the configuration information from the disk, and the hardware automatically assigned interrupts, DMA, and other needed settings.
However, the disks posed a problem if they were damaged or lost, as the only options at the time to obtain replacements were via postal mail or IBM's dial-upBBSservice. Without the disks, any new hardware would be completely useless and the computer would occasionally not boot at all until the unconfigured device was removed.
Micro Channel did not gain widespread support,[13]because IBM wanted to exclude clone manufacturers from this next-generation computing platform. Anyone developing for MCA had to sign non-disclosure agreements and pay royalties to IBM for each device sold, putting a price premium on MCA devices. End-users and clone manufacturers revolted against IBM and developed their own open standards bus, known as EISA. Consequently, MCA usage languished except in IBM's mainframes.
In time, manyIndustry Standard Architecture(ISA) cards incorporated, through proprietary and varied techniques, hardware to self-configure or to provide for software configuration; often, the card came with a configuration program on disk that could automatically set the software-configurable (but not itself self-configuring) hardware. Some cards had both jumpers and software-configuration, with some settings controlled by each; this compromise reduced the number of jumpers that had to be set, while avoiding great expense for certain settings, e.g. nonvolatile registers for a base address setting. The problems of required jumpers continued on, but slowly diminished as more and more devices, both ISA and other types, included extra self-configuration hardware. However, these efforts still did not solve the problem of making sure the end-user has the appropriate software driver for the hardware.
ISA PnP or (legacy) Plug & Play ISA was a plug-and-play system that used a combination of modifications to hardware, the system BIOS, and operating system software to automatically manage resource allocations. It was superseded by thePCIbus during the mid-1990s.
PCI plug and play (autoconfiguration) was based on the PCI BIOS Specification of the 1990s; that specification was superseded by ACPI in the 2000s.
In 1995, Microsoft releasedWindows 95, which tried to automate device detection and configuration as much as possible, but could still fall back to manual settings if necessary. During the initial install process of Windows 95, it would attempt to automatically detect all devices installed in the system. Since full auto-detection of everything was a new process without full industry support, the detection process constantly wrote to a progress tracking log file during the detection process. In the event that device probing would fail and the system would freeze, the end-user could reboot the computer, restart the detection process, and the installer would use the tracking log to skip past the point that caused the previous freeze.[14]
At the time, there could be a mix of devices in a system, some capable of automatic configuration, and some still using fully manual settings via jumpers and DIP switches. The old world of DOS still lurked underneath Windows 95, and systems could be configured to load devices in three different ways:
Microsoft could not assert full control over all device settings, so configuration files could include a mix of driver entries inserted by the Windows 95 automatic configuration process, and could also include driver entries inserted or modified manually by the computer users themselves. The Windows 95 Device Manager also could offer users a choice of several semi-automatic configurations to try to free up resources for devices that still needed manual configuration.
Also, although some later ISA devices were capable of automatic configuration, it was common for PC ISA expansion cards to limit themselves to a very small number of choices for interrupt request lines. For example, a network interface might limit itself to only interrupts 3, 7, and 10, while a sound card might limit itself to interrupts 5, 7, and 12. This results in few configuration choices if some of those interrupts are already used by some other device.
The hardware of PC computers additionally limited device expansion options because interrupts could not be shared, and some multifunction expansion cards would use multiple interrupts for different card functions, such as a dual-port serial card requiring a separate interrupt for each serial port.
Because of this complex operating environment, the autodetection process sometimes produced incorrect results, especially in systems with large numbers of expansion devices. This led to device conflicts within Windows 95, resulting in devices which were supposed to be fully self-configuring failing to work. The unreliability of the device installation process led to Plug and Play being sometimes referred to asPlug and Pray.[15]
Until approximately 2000, PC computers could still be purchased with a mix of ISA and PCI slots, so it was still possible that manual ISA device configuration might be necessary. But with successive releases of new operating systems like Windows 2000 and Windows XP, Microsoft had sufficient clout to say that drivers would no longer be provided for older devices that did not support auto-detection. In some cases, the user was forced to purchase new expansion devices or a whole new system to support the next operating system release.
Several completely automated computer interfaces are currently used, each of which requires no device configuration or other action on the part of the computer user, apart from software installation, for the self-configuring devices. These interfaces include USB and IEEE 1394 (FireWire).
For most of these interfaces, very little technical information is available to the end user about the performance of the interface. Although both FireWire and USB have bandwidth that must be shared by all devices, most modern operating systems are unable to monitor and report the amount of bandwidth being used or available, or to identify which devices are currently using the interface.[citation needed]
|
https://en.wikipedia.org/wiki/Plug_and_play
|
Polling, orinterrogation, refers to actively sampling the status of anexternal deviceby aclient programas a synchronous activity. Polling is most often used in terms ofinput/output(I/O), and is also referred to aspolledI/Oorsoftware-drivenI/O. A good example of hardware implementation is awatchdog timer.
Polling is the process by which the computer or controlling device repeatedly checks the status of an external device for its readiness or state, often with low-level hardware. For example, when a printer is connected via a parallel port, the computer waits until the printer has received the next character. These processes can be as minute as only reading one bit. This is sometimes used synonymously with 'busy-wait' polling. In this situation, when an I/O operation is required, the computer does nothing other than check the status of the I/O device until it is ready, at which point the device is accessed. In other words, the computer waits until the device is ready. Polling also refers to the situation where a device is repeatedly checked for readiness, and if it is not, the computer returns to a different task. Although not as wasteful of CPU cycles as busy waiting, this is generally not as efficient as the alternative to polling, interrupt-driven I/O.
In a simple single-purpose system, even busy-wait is perfectly appropriate if no action is possible until theI/Oaccess, but more often than not this was traditionally a consequence of simple hardware or non-multitaskingoperating systems.
Polling is often intimately involved with verylow-level hardware. For example, polling a parallel printer port to check whether it is ready for another character involves examining as little as onebitof abyte. That bit represents, at the time of reading, whether a single wire in the printer cable is at low or high voltage. TheI/Oinstruction that reads this byte directly transfers the voltage state of eight real world wires to the eight circuits (flip flops) that make up one byte of a CPU register.
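To make the parallel-port example concrete, here is a minimal busy-wait sketch, assuming the traditional LPT1 base port 0x378 (status register at 0x379, with bit 7 reflecting the inverted BUSY wire) and a GCC-style port-read helper; port numbers and bit polarity should be checked against the actual hardware:

```c
#include <stdint.h>

static inline uint8_t inb(uint16_t port) {
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

#define LPT1_STATUS 0x379   /* status register = LPT1 base (0x378) + 1 */
#define LPT_NBUSY   0x80    /* bit 7: inverted BUSY wire; 1 = ready    */

/* Busy-wait until the printer reports ready: the single status bit
   read here is the voltage on one wire of the printer cable. */
static void lpt_wait_ready(void) {
    while ((inb(LPT1_STATUS) & LPT_NBUSY) == 0)
        ;  /* spin: this is the busy-wait polling described above */
}
```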
Polling has the disadvantage that if there are too many devices to check, the time required to poll them can exceed the time available to service the I/O device.
Polling can be described in the following steps:

Host actions: (1) repeatedly read the busy bit of the controller's status register until it is clear; (2) write the command into the command register and, for an output operation, place a byte in the data-out register; (3) set the command-ready bit to signal the controller.

Controller actions: (1) upon noticing the command-ready bit, set the busy bit; (2) read the command register and carry out the requested I/O, reading the data-out register if the command is a write; (3) clear the command-ready bit, any error bit, and finally the busy bit to signal completion.
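The host side of this handshake can be sketched as follows; the register addresses, layout, and bit positions are hypothetical placeholders rather than any real controller's interface:

```c
#include <stdint.h>

/* Hypothetical memory-mapped register layout for the controller. */
#define STATUS_BUSY 0x01
#define CMD_READY   0x80

static volatile uint8_t *const reg_status  = (volatile uint8_t *)0xFEC00000u;
static volatile uint8_t *const reg_command = (volatile uint8_t *)0xFEC00001u;
static volatile uint8_t *const reg_dataout = (volatile uint8_t *)0xFEC00002u;

/* Host side of the polled handshake for writing one byte. */
static void polled_write_byte(uint8_t command, uint8_t data) {
    while (*reg_status & STATUS_BUSY)   /* 1. spin until controller idle */
        ;
    *reg_dataout = data;                /* 2. stage the outgoing byte    */
    *reg_command = command | CMD_READY; /* 3. set command-ready; the     */
                                        /*    controller takes over here */
    while (*reg_status & STATUS_BUSY)   /* optionally wait for completion */
        ;
}
```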
Apolling cycleis the time in which each element is monitored once. The optimal polling cycle will vary according to several factors, including the desired speed of response and the overhead (e.g.,processor timeandbandwidth) of the polling.
Inroll call polling, the polling device or process queries each element on a list in a fixed sequence. Because it waits for a response from each element, a timing mechanism is necessary to prevent lock-ups caused by non-responding elements. Roll call polling can be inefficient if the overhead for the polling messages is high, there are numerous elements to be polled in each polling cycle and only a few elements are active.
Inhub polling, also referred to as token polling, each element polls the next element in some fixed sequence. This continues until the first element is reached, at which time the polling cycle starts all over again.
Polling can be employed in various computing contexts in order to control the execution or transmission sequence of the elements involved. For example, in multitasking operating systems, polling can be used to allocate processor time and other resources to the various competing processes.
In networks, polling is used to determine which nodes want to access the network. It is also used by routing protocols to retrieve routing information, as is the case with EGP (exterior gateway protocol).
An alternative to polling is the use ofinterrupts, which aresignalsgenerated by devices or processes to indicate that they need attention, want to communicate, etc. Although polling can be very simple, in many situations (e.g., multitasking operating systems) it is more efficient to use interrupts because it can reduce processor usage and/or bandwidth consumption.
Apoll messageis a control-acknowledgment message.
In a multidrop line arrangement (a central computer and different terminals in which the terminals share a single communication line to and from the computer), the system uses a master/slave polling arrangement whereby the central computer sends a message (called a polling message) to a specific terminal on the outgoing line. All terminals listen to the outgoing line, but only the terminal that is polled replies by sending any information that it has ready for transmission on the incoming line.[1]
Instar networks, which, in its simplest form, consists of one centralswitch,hub, or computer that acts as a conduit to transmit messages, polling is not required to avoid chaos on the lines, but it is often used to allow the master to acquire input in an orderly fashion. These poll messages differ from those of the multidrop lines case because there are no site addresses needed, and each terminal only receives those polls that are directed to it.[1]
|
https://en.wikipedia.org/wiki/Polling_(computer_science)
|
Indigital computers, aninterrupt(sometimes referred to as atrap)[1]is a request for theprocessortointerruptcurrently executing code (when permitted), so that the event can be processed in a timely manner. If the request is accepted, the processor will suspend its current activities, save itsstate, and execute afunctioncalled aninterrupt handler(or aninterrupt service routine, ISR) to deal with the event. This interruption is often temporary, allowing the software to resume[a]normal activities after the interrupt handler finishes, although the interrupt could instead indicate a fatal error.[2]
Interrupts are commonly used by hardware devices to indicate electronic or physical state changes that require time-sensitive attention. Interrupts are also commonly used to implementcomputer multitaskingandsystem calls, especially inreal-time computing. Systems that use interrupts in these ways are said to be interrupt-driven.[3]
Hardware interrupts were introduced as an optimization, eliminating unproductive waiting time inpolling loops, waiting for external events. The first system to use this approach was theDYSEAC, completed in 1954, although earlier systems provided error trap functions.[4]
TheUNIVAC 1103Acomputer is generally credited with the earliest use of interrupts in 1953.[5][6]Earlier, on theUNIVAC I(1951) "Arithmetic overflow either triggered the execution of a two-instruction fix-up routine at address 0, or, at the programmer's option, caused the computer to stop." TheIBM 650(1954) incorporated the first occurrence of interrupt masking. TheNational Bureau of StandardsDYSEAC(1954) was the first to use interrupts for I/O. TheIBM 704was the first to use interrupts fordebugging, with a "transfer trap", which could invoke a special routine when a branch instruction was encountered. The MITLincoln LaboratoryTX-2system (1957) was the first to provide multiple levels of priority interrupts.[6]
Interrupt signals may be issued in response tohardwareorsoftwareevents. These are classified ashardware interruptsorsoftware interrupts, respectively. For any particular processor, the number of interrupt types is limited by the architecture.
A hardware interrupt is a condition related to the state of the hardware that may be signaled by an external hardware device, e.g., aninterrupt request(IRQ) line on a PC, or detected by devices embedded in processor logic (e.g., the CPU timer in IBM System/370), to communicate that the device needs attention from theoperating system(OS)[7]or, if there is no OS, from thebare metalprogram running on the CPU. Such external devices may be part of the computer (e.g.,disk controller) or they may be externalperipherals. For example, pressing akeyboardkey or moving amouseplugged into aPS/2 porttriggers hardware interrupts that cause the processor to read the keystroke or mouse position.
Hardware interrupts can arriveasynchronouslywith respect to the processor clock, and at any time during instruction execution. Consequently, all incoming hardware interrupt signals are conditioned by synchronizing them to the processor clock, and acted upon only at instruction execution boundaries.
In many systems, each device is associated with a particular IRQ signal. This makes it possible to quickly determine which hardware device is requesting service, and to expedite servicing of that device.
On some older systems, such as the 1964CDC 3600,[8]all interrupts went to the same location, and the OS used a specialized instruction to determine the highest-priority outstanding unmasked interrupt. On contemporary systems, there is generally a distinct interrupt routine for each type of interrupt (or for each interrupt source), often implemented as one or moreinterrupt vector tables.
Tomaskan interrupt is to disable it, so it is deferred[b]or ignored[c]by the processor, while tounmaskan interrupt is to enable it.[9]
Processors typically have an internalinterrupt maskregister,[d]which allows selective enabling[2](and disabling) of hardware interrupts. Each interrupt signal is associated with a bit in the mask register. On some systems, the interrupt is enabled when the bit is set, and disabled when the bit is clear. On others, the reverse is true, and a set bit disables the interrupt. When the interrupt is disabled, the associated interrupt signal may be ignored by the processor, or it may remain pending. Signals which are affected by the mask are calledmaskable interrupts.
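On the 8259 family, for instance, a set bit in the interrupt mask register disables the corresponding line, illustrating the second convention described above. A minimal sketch under the standard PC data-port assignments (0x21 master, 0xA1 slave), with GCC-style port-I/O helpers:

```c
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val) {
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}
static inline uint8_t inb(uint16_t port) {
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

/* Disable (mask) IRQ 0-15: on the 8259, a set IMR bit disables the line. */
static void irq_mask(unsigned irq) {
    uint16_t port = (irq < 8) ? 0x21 : 0xA1;  /* master/slave IMR */
    outb(port, inb(port) | (uint8_t)(1 << (irq & 7)));
}

/* Enable (unmask) IRQ 0-15 by clearing its IMR bit. */
static void irq_unmask(unsigned irq) {
    uint16_t port = (irq < 8) ? 0x21 : 0xA1;
    outb(port, inb(port) & (uint8_t)~(1 << (irq & 7)));
}
```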
Some interrupt signals are not affected by the interrupt mask and therefore cannot be disabled; these are callednon-maskable interrupts(NMIs). These indicate high-priority events which cannot be ignored under any circumstances, such as the timeout signal from awatchdog timer. With regard toSPARC, the Non-Maskable Interrupt (NMI), despite having the highest priority among interrupts, can be prevented from occurring through the use of an interrupt mask.[10]
One failure mode is when the hardware does not generate the expected interrupt for a change in state, causing the operating system to wait indefinitely. Depending on the details, the failure might affect only a single process or might have global impact. Some operating systems have code specifically to deal with this.
As an example, IBMOperating System/360(OS/360) relies on a not-ready to ready device-end interrupt when a tape has been mounted on a tape drive, and will not read the tape label until that interrupt occurs or is simulated. IBM added code in OS/360 so that the VARY ONLINE command will simulate a device end interrupt on the target device.
Aspurious interruptis a hardware interrupt for which no source can be found. The term "phantom interrupt" or "ghost interrupt" may also be used to describe this phenomenon. Spurious interrupts tend to be a problem with awired-ORinterrupt circuit attached to a level-sensitive processor input. Such interrupts may be difficult to identify when a system misbehaves.
In a wired-OR circuit, parasitic capacitance charging/discharging through the interrupt line's bias resistor will cause a small delay before the processor recognizes that the interrupt source has been cleared. If the interrupting device is cleared too late in the interrupt service routine (ISR), there will not be enough time for the interrupt circuit to return to the quiescent state before the current instance of the ISR terminates. The result is that the processor will think another interrupt is pending, since the voltage at its interrupt request input will be neither high nor low enough to establish an unambiguous internal logic 1 or logic 0. The apparent interrupt will have no identifiable source, hence the "spurious" moniker.
A spurious interrupt may also be the result of electricalanomaliesdue to faulty circuit design, highnoiselevels,crosstalk, timing issues, or more rarely,device errata.[11]
A spurious interrupt may result in system deadlock or other undefined operation if the ISR does not account for the possibility of such an interrupt occurring. As spurious interrupts are mostly a problem with wired-OR interrupt circuits, good programming practice in such systems is for the ISR to check all interrupt sources for activity and take no action (other than possibly logging the event) if none of the sources is interrupting.
A software interrupt is requested by the processor itself upon executing particular instructions or when certain conditions are met. Every software interrupt signal is associated with a particular interrupt handler.
A software interrupt may be intentionally caused by executing a specialinstructionwhich, by design, invokes an interrupt when executed.[e]Such instructions function similarly tosubroutine callsand are used for a variety of purposes, such as requesting operating system services and interacting withdevice drivers(e.g., to read or write storage media). Software interrupts may also be triggered by program execution errors or by thevirtual memorysystem.
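As a concrete example of the mechanism, the following sketch issues the 32-bit Linux INT 80h system-call gate directly (here the write call, number 4 on i386); this is i386-specific inline assembly, shown purely to illustrate how a software interrupt transfers control to a kernel handler:

```c
/* 32-bit x86 Linux only: invoke write(2) via the INT 80h gate.
   EAX carries the system-call number (4 = __NR_write on i386);
   EBX, ECX, EDX carry the arguments. */
static long sys_write_int80(int fd, const void *buf, unsigned long len) {
    long ret;
    __asm__ volatile ("int $0x80"
                      : "=a"(ret)
                      : "a"(4), "b"(fd), "c"(buf), "d"(len)
                      : "memory");
    return ret;  /* byte count, or a negative errno value */
}
```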
Typically, the operating systemkernelwill catch and handle such interrupts. Some interrupts are handled transparently to the program - for example, the normal resolution of apage faultis to make the required page accessible in physical memory. But in other cases such as asegmentation faultthe operating system executes a process callback. OnUnix-likeoperating systemsthis involves sending asignalsuch asSIGSEGV,SIGBUS,SIGILLorSIGFPE, which may either call a signal handler or execute a default action (terminating the program). On Windows the callback is made usingStructured Exception Handlingwith an exception code such as STATUS_ACCESS_VIOLATION or STATUS_INTEGER_DIVIDE_BY_ZERO.[12]
In a kernelprocess, it is often the case that some types of software interrupts are not supposed to happen. If they occur nonetheless, anoperating system crashmay result.
The termsinterrupt,trap,exception,fault, andabortare used to distinguish types of interrupts, although "there is no clear consensus as to the exact meaning of these terms".[13]The termtrapmay refer to any interrupt, to any software interrupt, to any synchronous software interrupt, or only to interrupts caused by instructions withtrapin their names. In some usages, the termtraprefers specifically to abreakpointintended to initiate acontext switchto amonitor programordebugger.[1]It may also refer to a synchronous interrupt caused by an exceptional condition (e.g.,division by zero,invalid memory access,illegal opcode),[13]although the termexceptionis more common for this.
x86divides interrupts into (hardware)interruptsand softwareexceptions, and identifies three types of exceptions: faults, traps, and aborts.[14][15](Hardware) interrupts are interrupts triggered asynchronously by an I/O device, and allow the program to be restarted with no loss of continuity.[14]A fault is restartable as well but is tied to the synchronous execution of an instruction - the return address points to the faulting instruction. A trap is similar to a fault except that the return address points to the instruction to be executed after the trapping instruction;[16]one prominent use is to implementsystem calls.[15]An abort is used for severe errors, such as hardware errors and illegal values in system tables, and often[f]does not allow a restart of the program.[16]
Armuses the termexceptionto refer to all types of interrupts,[17]and divides exceptions into (hardware)interrupts,aborts,reset, and exception-generating instructions. Aborts correspond to x86 exceptions and may be prefetch aborts (failed instruction fetches) or data aborts (failed data accesses), and may be synchronous or asynchronous. Asynchronous aborts may be precise or imprecise. MMU aborts (page faults) are synchronous.[18]
RISC-Vuses interrupt as the overall term as well as for the external subset; internal interrupts are called exceptions.
Each interrupt signal input is designed to be triggered by either a logic signal level or a particular signal edge (level transition). Level-sensitive inputs continuously request processor service so long as a particular (high or low) logic level is applied to the input. Edge-sensitive inputs react to signal edges: a particular (rising or falling) edge will cause a service request to be latched; the processor resets the latch when the interrupt handler executes.
Alevel-triggered interruptis requested by holding the interrupt signal at its particular (high or low) activelogic level. A device invokes a level-triggered interrupt by driving the signal to and holding it at the active level. It negates the signal when the processor commands it to do so, typically after the device has been serviced.
The processor samples the interrupt input signal during each instruction cycle. The processor will recognize the interrupt request if the signal is asserted when sampling occurs.
Level-triggered inputs allow multiple devices to share a common interrupt signal via wired-OR connections. The processor polls to determine which devices are requesting service. After servicing a device, the processor may again poll and, if necessary, service other devices before exiting the ISR. As previously described, a processor whose level-sensitive interrupt input is connected to a wired-OR circuit is susceptible to spurious interrupts, which should they occur, may causedeadlockor some other potentially-fatal system fault.
Anedge-triggered interruptis an interrupt signaled by alevel transitionon the interrupt line, either a falling edge (high to low) or a rising edge (low to high). A device wishing to signal an interrupt drives a pulse onto the line and then releases the line to its inactive state.
The important part of edge triggering is that the signal must transition to trigger the interrupt; for example, if the transition was high-low, there would only be one falling edge interrupt triggered, and the continued low level would not trigger a further interrupt. The signal must return to the high level and fall again in order to trigger a further interrupt. This contrasts with a level trigger where the low level would continue to create interrupts (if they are enabled) until the signal returns to its high level.
Computers with edge-triggered interrupts may include aninterrupt registerthat retains the status of pending interrupts. Systems with interrupt registers generally have interrupt mask registers as well.
The processor samples the interrupt trigger signals or interrupt register during each instruction cycle, and will process the highest priority enabled interrupt found.
Regardless of the triggering method, the processor will begin interrupt processing at the next instruction boundary following a detected trigger, thus ensuring that the saved program state refers to a known place, that all instructions before the interrupted point have executed completely, and that no instruction beyond that point has begun executing.
There are several different architectures for handling interrupts. In some, there is a single interrupt handler[19]that must scan for the highest priority enabled interrupt. In others, there are separate interrupt handlers for separate interrupt types,[20]separate I/O channels or devices, or both.[21][22]Several interrupt causes may have the same interrupt type and thus the same interrupt handler, requiring the interrupt handler to determine the cause.[20]
Interrupts may be fully handled in hardware by the CPU, or may be handled by both the CPU and another component such as aprogrammable interrupt controlleror asouthbridge.
If an additional component is used, that component would be connected between the interrupting device and the processor's interrupt pin tomultiplexseveral sources of interrupt onto the one or two CPU lines typically available. If implemented as part of thememory controller, interrupts are mapped into the system's memoryaddress space.[citation needed]
Insystems on a chip(SoC) implementations, interrupts come from different blocks of the chip and are usually aggregated in an interrupt controller attached to one or several processors (in a multi-core system).[23]
Multiple devices may share an edge-triggered interrupt line if they are designed to. The interrupt line must have a pull-down or pull-up resistor so that when not actively driven it settles to its inactive state, which is its default state. Devices signal an interrupt by briefly driving the line to its non-default state, letting the line float (not actively driving it) when not signaling an interrupt. This type of connection is also referred to as open collector. The line then carries all the pulses generated by all the devices. (This is analogous to the pull cord on some buses and trolleys that any passenger can pull to signal the driver that they are requesting a stop.) However, interrupt pulses from different devices may merge if they occur close in time. To avoid losing interrupts the CPU must trigger on the trailing edge of the pulse (e.g. the rising edge if the line is pulled up and driven low). After detecting an interrupt the CPU must check all the devices for service requirements.
Edge-triggered interrupts do not suffer the problems that level-triggered interrupts have with sharing. Service of a low-priority device can be postponed arbitrarily, while interrupts from high-priority devices continue to be received and get serviced. If there is a device that the CPU does not know how to service, which may raise spurious interrupts, it will not interfere with interrupt signaling of other devices. However, it is easy for an edge-triggered interrupt to be missed - for example, when interrupts are masked for a period - and unless there is some type of hardware latch that records the event it is impossible to recover. This problem caused many "lockups" in early computer hardware because the processor did not know it was expected to do something. More modern hardware often has one or more interrupt status registers that latch interrupt requests; well-written edge-driven interrupt handling code can check these registers to ensure no events are missed.
TheIndustry Standard Architecture(ISA) bus uses edge-triggered interrupts, without mandating that devices be able to share IRQ lines, but all mainstream ISA motherboards include pull-up resistors on their IRQ lines, so well-behaved ISA devices sharing IRQ lines should just work fine. Theparallel portalso uses edge-triggered interrupts. Many older devices assume that they have exclusive use of IRQ lines, making it electrically unsafe to share them.
There are three ways in which multiple devices can be made to "share the same line". The first is exclusive conduction (switching) or exclusive connection (to pins). The next is by bus (all connected to the same line, listening): cards on a bus must know when they are to talk and not talk (i.e., the ISA bus). Talking can be triggered in two ways: by accumulation latch or by logic gates. Logic gates expect a continual data flow that is monitored for key signals. Accumulators only trigger when the remote side excites the gate beyond a threshold, thus no negotiated speed is required. Each has its speed versus distance advantages. A trigger, generally, is the method by which excitation is detected: rising edge, falling edge, threshold (an oscilloscope can trigger on a wide variety of shapes and conditions).
Triggering for software interrupts must be built into the software (both in the OS and the application). A C application might have a trigger table (a table of functions) in its header, which both the application and OS know of and use appropriately; this is not related to hardware. However, do not confuse this with hardware interrupts, which signal the CPU (the CPU then executes software from a table of functions, similarly to software interrupts).
Multiple devices sharing an interrupt line (of any triggering style) all act as spurious interrupt sources with respect to each other. With many devices on one line, the workload in servicing interrupts grows in proportion to the number of devices. It is therefore preferred to spread devices evenly across the available interrupt lines. Shortage of interrupt lines is a problem in older system designs where the interrupt lines are distinct physical conductors. Message-signaled interrupts, where the interrupt line is virtual, are favored in new system architectures (such asPCI Express) and relieve this problem to a considerable extent.
Some devices with a poorly designed programming interface provide no way to determine whether they have requested service. They may lock up or otherwise misbehave if serviced when they do not want it. Such devices cannot tolerate spurious interrupts, and so also cannot tolerate sharing an interrupt line.ISAcards, due to often cheap design and construction, are notorious for this problem. Such devices are becoming much rarer, ashardware logicbecomes cheaper and new system architectures mandate shareable interrupts.
Some systems use a hybrid of level-triggered and edge-triggered signaling. The hardware not only looks for an edge, but it also verifies that the interrupt signal stays active for a certain period of time.
A common use of a hybrid interrupt is for the NMI (non-maskable interrupt) input. Because NMIs generally signal major – or even catastrophic – system events, a good implementation of this signal tries to ensure that the interrupt is valid by verifying that it remains active for a period of time. This 2-step approach helps to eliminate false interrupts from affecting the system.
Amessage-signaled interruptdoes not use a physical interrupt line. Instead, a device signals its request for service by sending a short message over some communications medium, typically acomputer bus. The message might be of a type reserved for interrupts, or it might be of some pre-existing type such as a memory write.
Message-signalled interrupts behave very much like edge-triggered interrupts, in that the interrupt is a momentary signal rather than a continuous condition. Interrupt-handling software treats the two in much the same manner. Typically, multiple pending message-signaled interrupts with the same message (the same virtual interrupt line) are allowed to merge, just as closely spaced edge-triggered interrupts can merge.
Message-signalledinterrupt vectorscan be shared, to the extent that the underlying communication medium can be shared. No additional effort is required.
Because the identity of the interrupt is indicated by a pattern of data bits, not requiring a separate physical conductor, many more distinct interrupts can be efficiently handled. This reduces the need for sharing. Interrupt messages can also be passed over a serial bus, not requiring any additional lines.
PCI Express, a serial computer bus, usesmessage-signaled interruptsexclusively.
In apush buttonanalogy applied tocomputer systems, the termdoorbellordoorbell interruptis often used to describe a mechanism whereby asoftwaresystem can signal or notify acomputer hardwaredevice that there is some work to be done. Typically, the software system will place data in some well-known and mutually agreed upon memory locations, and "ring the doorbell" by writing to a different memory location. This different memory location is often called the doorbell region, and there may even be multiple doorbells serving different purposes in this region. It is this act of writing to the doorbell region of memory that "rings the bell" and notifies the hardware device that the data are ready and waiting. The hardware device would now know that the data are valid and can be acted upon. It would typically write the data to ahard disk drive, or send them over anetwork, orencryptthem, etc.
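A sketch of such a doorbell handoff is shown below; the memory-mapped addresses, descriptor layout, and register names are hypothetical placeholders standing in for whatever a real device's datasheet would specify:

```c
#include <stdint.h>

/* Hypothetical descriptor format agreed upon by software and device. */
struct work_descriptor {
    uint64_t buffer_addr;
    uint32_t length;
    uint32_t valid;       /* 1 = descriptor ready for the device */
};

static volatile struct work_descriptor *const desc_ring =
    (volatile struct work_descriptor *)0xD0000000u;  /* shared region   */
static volatile uint32_t *const doorbell =
    (volatile uint32_t *)0xD0001000u;                /* doorbell region */

/* Publish a work item, then "ring the doorbell" with a single write. */
static void submit_work(unsigned slot, uint64_t buf, uint32_t len) {
    desc_ring[slot].buffer_addr = buf;
    desc_ring[slot].length = len;
    desc_ring[slot].valid = 1;
    /* Real drivers may also need a hardware write barrier here. */
    *doorbell = slot;   /* this write is what notifies the device */
}
```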
The termdoorbell interruptis usually amisnomer. It is similar to an interrupt, because it causes some work to be done by the device; however, the doorbell region is sometimes implemented as apolledregion, sometimes the doorbell region writes through to physical deviceregisters, and sometimes the doorbell region is hardwired directly to physical device registers. When either writing through or directly to physical device registers, this may cause a real interrupt to occur at the device's central processor unit (CPU), if it has one.
Doorbell interrupts can be compared toMessage Signaled Interrupts, as they have some similarities.
Inmultiprocessorsystems, a processor may send an interrupt request to another processor viainter-processor interrupts[h](IPI).
Interrupts provide low overhead and goodlatencyat low load, but degrade significantly at high interrupt rate unless care is taken to prevent several pathologies. The phenomenon where the overall system performance is severely hindered by excessive amounts of processing time spent handling interrupts is called aninterrupt storm.
There are various forms oflivelocks, when the system spends all of its time processing interrupts to the exclusion of other required tasks.
Under extreme conditions, a large number of interrupts (like very high network traffic) may completely stall the system. To avoid such problems, anoperating systemmust schedule network interrupt handling as carefully as it schedules process execution.[24]
With multi-core processors, additional performance improvements in interrupt handling can be achieved throughreceive-side scaling(RSS) whenmultiqueue NICsare used. Such NICs provide multiple receivequeuesassociated to separate interrupts; by routing each of those interrupts to different cores, processing of the interrupt requests triggered by the network traffic received by a single NIC can be distributed among multiple cores. Distribution of the interrupts among cores can be performed automatically by the operating system, or the routing of interrupts (usually referred to asIRQ affinity) can be manually configured.[25][26]
A purely software-based implementation of the receiving traffic distribution, known asreceive packet steering(RPS), distributes received traffic among cores later in the data path, as part of theinterrupt handlerfunctionality. Advantages of RPS over RSS include no requirements for specific hardware, more advanced traffic distribution filters, and reduced rate of interrupts produced by a NIC. As a downside, RPS increases the rate ofinter-processor interrupts(IPIs).Receive flow steering(RFS) takes the software-based approach further by accounting forapplication locality; further performance improvements are achieved by processing interrupt requests by the same cores on which particular network packets will be consumed by the targeted application.[25][27][28]
Interrupts are commonly used to service hardware timers, transfer data to and from storage (e.g., disk I/O) and communication interfaces (e.g.,UART,Ethernet), handle keyboard and mouse events, and to respond to any other time-sensitive events as required by the application system. Non-maskable interrupts are typically used to respond to high-priority requests such as watchdog timer timeouts, power-down signals andtraps.
Hardware timers are often used to generate periodic interrupts. In some applications, such interrupts are counted by the interrupt handler to keep track of absolute or elapsed time, or used by the OS taskschedulerto manage execution of runningprocesses, or both. Periodic interrupts are also commonly used to invoke sampling from input devices such asanalog-to-digital converters,incremental encoder interfaces, andGPIOinputs, and to program output devices such asdigital-to-analog converters,motor controllers, and GPIO outputs.
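For instance, the classic PC technique programs channel 0 of the 8253/8254 programmable interval timer to raise a periodic IRQ0. A minimal sketch, assuming the canonical 1,193,182 Hz input clock, the standard ports 0x40/0x43, and the usual GCC-style port-write helper:

```c
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val) {
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

#define PIT_INPUT_HZ 1193182u   /* canonical 8253/8254 input clock */

/* Program PIT channel 0 as a rate generator (mode 2) raising IRQ0
   roughly hz times per second; hz must be about 19..1193182 so the
   16-bit divisor neither overflows nor reaches zero. */
static void pit_set_frequency(uint32_t hz) {
    uint16_t divisor = (uint16_t)(PIT_INPUT_HZ / hz);
    outb(0x43, 0x34);                   /* channel 0, lo/hi byte, mode 2 */
    outb(0x40, divisor & 0xFF);         /* low byte of divisor  */
    outb(0x40, (divisor >> 8) & 0xFF);  /* high byte of divisor */
}
```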
A disk interrupt signals the completion of a data transfer from or to the disk peripheral; this may cause a process to run which is waiting to read or write. A power-off interrupt predicts imminent loss of power, allowing the computer to perform an orderly shut-down while there still remains enough power to do so. Keyboard interrupts typically causekeystrokesto be buffered so as to implementtypeahead.
Interrupts are sometimes used to emulate instructions which are unimplemented on some computers in a product family.[29][30]For examplefloating pointinstructions may be implemented in hardware on some systems and emulated on lower-cost systems. In the latter case, execution of an unimplemented floating point instruction will cause an "illegal instruction" exception interrupt. The interrupt handler will implement the floating point function in software and then return to the interrupted program as if the hardware-implemented instruction had been executed.[31]This provides application software portability across the entire line.
Interrupts are similar tosignals, the difference being that signals are used forinter-process communication(IPC), mediated by the kernel (possibly via system calls) and handled by processes, while interrupts are mediated by the processor and handled by thekernel. The kernel may pass an interrupt as a signal to the process that caused it (typical examples areSIGSEGV,SIGBUS,SIGILLandSIGFPE).
|
https://en.wikipedia.org/wiki/Interrupt
|
TheA20, oraddress line 20, is one of theelectricallines that make up thesystem busof anx86-based computer system. The A20 line in particular is used to transmit the 21st bit on theaddress bus.
A microprocessor typically has a number of address lines equal to the base-two logarithm of the number of words in its physical address space. For example, a processor with 4 GB of byte-addressable physical space requires 32 lines (log2(4 GB) = log2(2^32 B) = 32), which are named A0 through A31. The lines are named after the zero-based number of the bit in the address that they are transmitting. The least significant bit is first and is therefore numbered bit 0 and signaled on line A0. A20 transmits bit 20 (the 21st bit) and becomes active once addresses reach 1 MB, or 2^20.
The Intel 8086, Intel 8088, and Intel 80186 processors had 20 address lines, numbered A0 to A19; with these, the processor can access 2^20 bytes, or 1 MB. Internal address registers of such processors only had 16 bits. To access a 20-bit address space, an external memory reference was made up of a 16-bit offset address added to a 16-bit segment number, shifted 4 bits to the left so as to produce a 20-bit physical address. The resulting address is equal to segment × 16 + offset.[1] There are many combinations of segment and offset that produce the same 20-bit physical address. Therefore, there were various ways to address the same byte in memory.[2] For example, here are four of the 4096 different segment:offset combinations, all referencing the byte whose physical address is 0x000FFFFF (the last byte in the 1 MB memory space): F000:FFFF, F555:AAAF, FFFF:000F, and F800:7FFF.
Referenced the last way, an increase of one in the offset yieldsF800:8000, which is a proper address for the processor, but since it translates to the physical address0x00100000(the first byte over 1 MB), the processor would need another address line for actual access to that byte. Since there is no such line on the 8086 line of processors, the 21st bit above, while set, gets dropped, causing the addressF800:8000to "wrap around"[1]and to actually point to the physical address0x00000000.
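The address arithmetic can be checked with a few lines of C; phys_8086 below models the 20-bit truncation that produces the wrap-around:

```c
#include <stdint.h>
#include <stdio.h>

/* 8086 real-mode address formation: the sum is truncated to 20 bits,
   so a set bit 20 is simply dropped. */
static uint32_t phys_8086(uint16_t seg, uint16_t off) {
    return (((uint32_t)seg << 4) + off) & 0xFFFFF;
}

int main(void) {
    printf("%05X\n", (unsigned)phys_8086(0xF800, 0x7FFF)); /* FFFFF        */
    printf("%05X\n", (unsigned)phys_8086(0xFFFF, 0x000F)); /* FFFFF again  */
    printf("%05X\n", (unsigned)phys_8086(0xF800, 0x8000)); /* 00000: wrap! */
    return 0;
}
```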
WhenIBMdesigned theIBM PC AT(1984) machine, it decided to use the new higher-performanceIntel 80286microprocessor. The 80286 could address up to 16 MB of system memory inprotected mode. However, the CPU was supposed to emulate an 8086's behavior inreal mode, its startup mode, so that it could run operating systems and programs that were not written for protected mode. The 80286 did not force the A20 line to zero in real mode, however. Therefore, the combinationF800:8000would no longer point to the physical address0x00000000, but to the address0x00100000. As a result, programs relying on the address wrap around would no longer work. To remain compatible with such programs, IBM decided to correct the problem on themotherboard.
That was accomplished by inserting alogic gateon the A20 line between the processor and system bus, which got namedGate-A20. Gate-A20 can be enabled or disabled by software to allow or prevent the address bus from receiving a signal from A20. It is set to non-passing for the execution of older programs that rely on the wrap-around. At boot time, theBIOSfirst enables Gate-A20 when it counts and tests all of the system memory, and then disables it before transferring control to the operating system.
Originally, the logic gate was a gate connected to theIntel 8042keyboard controller.[1]Controlling it was a relatively slow process. Other methods have since been added to allow more efficientmultitaskingof programs that require this wrap-around with programs that access all of the system memory. There are multiple methods to control the A20 line.[3]
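One widely implemented method is the "fast A20" gate: bit 1 of System Control Port A at I/O port 0x92. A minimal sketch follows, with the usual GCC-style port-I/O helpers; note that bit 0 of this port triggers a fast reset, so it must be left clear, and on machines that do not implement the port, the keyboard-controller method remains the fallback:

```c
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val) {
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}
static inline uint8_t inb(uint16_t port) {
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

/* Open Gate-A20 via System Control Port A.  Bit 1 enables A20;
   bit 0 triggers a fast CPU reset and must be kept clear. */
static void enable_a20_fast(void) {
    uint8_t v = inb(0x92);
    if (!(v & 0x02))
        outb(0x92, (uint8_t)((v | 0x02) & ~0x01));
}
```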
Disconnecting A20 would not wrapallmemory accesses above 1 MB, just those in the 1–2 MB, 3–4 MB, 5–6 MB, etc. ranges.Real-modesoftware cared only about the area slightly above 1 MB, so the Gate-A20 line was enough.
Enabling the Gate-A20 line is one of the first steps that aprotected-modex86operating systemdoes in the bootup process, often before control has been passed to thekernelfrom thebootstrap(in the case ofLinux, for example).
Virtual 8086 mode, introduced with theIntel 80386, allows the A20 wrap-around to be simulated by using thevirtual memoryfacilities of the processor; physical memory may be mapped to multiple virtual addresses. Thus, the memory mapped at the first megabyte of virtual memory may be mapped again in the second megabyte of virtual memory. The operating system may intercept changes to Gate A20 and make corresponding changes to the virtual-memory address space, which also makes irrelevant the efficiency of Gate-A20 line toggling.
Controlling the A20 line was an important feature at one stage in the growth of the IBM PC architecture, as it added access to an additional 65,520 bytes (64 KB − 16 bytes) of memory inreal mode, without significant software changes.
In what was arguably a "hack", the A20 gate was originally part of the keyboard controller on the motherboard, which could open or close it depending on what behavior was desired.[4]
In order to keep full compatibility with theIntel 8086, the A20 gate was still present in Intel CPUs until 2008.[5]As the gate was initially closed right after boot,protected-modeoperating systems typically opened the A20 gate early during the boot process to never close it again. Such operating systems had no compatibility reasons for keeping it closed, and they gained access to the full range of physical addresses available by opening it.
TheIntel 80486andPentiumadded a special pin namedA20M#, which when asserted low forces bit 20 of the physical address to be zero for all on-chipcache- or external-memory accesses. It was necessary, since the 80486 introduced an on-chip cache and so masking this bit in external logic was no longer possible. Software still needs to manipulate the gate and must still deal with external peripherals (thechipset) for that.[6]
The PC System Design GuidePC 2001removes compatibility for the A20 line: "If A20M# generation logic is still present in the system, this logic must be terminated such that software writes to I/O port 92, bit 1, do not result in A20M# being asserted to the processor."[7]
Support for the A20 gate was changed in theNehalem microarchitecture(some sources incorrectly claim that A20 support was removed). Rather than the CPU having a dedicated A20M# pin that receives the signal whether or not to mask the A20 bit, it has been virtualized so that the information is sent from the peripheral hardware to the CPU using special bus cycles.[citation needed]From a software point of view, the mechanism works exactly as before, and an operating system must still program external hardware (which in-turn sends the aforementioned bus cycles to the CPU) to disable the A20 masking.[citation needed]
Intel no longer supports the A20 gate, starting withHaswell. Page 271 of the Intel System Programmers Manual Vol. 3A from June 2013 states: "The functionality of A20M# is used primarily by older operating systems and not used by modern operating systems. On newerIntel 64processors, A20M# may be absent."[8]
TheA20 handlerisIBM PCmemory managersoftware that controls access to thehigh memory area(HMA).Extended-memorymanagers usually provide this functionality. A20 handlers are named after the 21st address line of the microprocessor, the A20 line.
InDOS, HMA managers such asHIMEM.SYShave the "extra task" of managing A20. HIMEM.SYS provided anAPIfor opening/closing A20. DOS itself could use the area for some of its storage needs, thereby freeing up more conventional memory for programs. That functionality was enabled by theDOS=HIGHorHIDOS=ONdirectives in theCONFIG.SYSconfiguration file.
Since 1980, the address wrap was internally used by 86-DOS and MS-DOS to implement the DOS CALL 5 entry point at offset +5 to +9 (which emulates the CP/M-80-style CALL 5 BDOS API entry point at offset +5 to +7) in the Program Segment Prefix (PSP) (which partially resembles CP/M-80's zero page).[9][10] This was, in particular, utilized by programs machine-translated from CP/M-80 through assembly language translators[9] like Seattle Computer Products' TRANS86.[11] The CALL 5 handler this entry point refers to resides at the machine's physical address 0x000000C0 (thereby overlapping the four bytes of the interrupt service routine entry point reserved for INT 30h and the first byte of INT 31h in the x86 real mode interrupt vector table).[12][13][14] However, by the design of CP/M-80, which loaded the operating system immediately above the memory available for the application program to run in, the 8080/Z80 16-bit target address stored at offset +6 to +7 in the zero page could deliberately also be interpreted as the size of the first memory segment.[9] In order to emulate this in DOS with its 8086 segment:offset addressing scheme, the far call entry point's 16-bit offset had to match this segment size (i.e. 0xFEF0), which is stored at offset +6 to +7 in the PSP, overlapping parts of the CALL 5.[13][14] The only way to reconcile these requirements was to choose a segment value that, when added to 0xFEF0, results in an address of 0x001000C0, which, on an 8086, wraps around to 0x000000C0.[15][12][14]
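The constraint pins down the segment value: 0x001000C0 − 0xFEF0 = 0xF01D0, so the stored segment must be 0xF01D. A few lines of C verify the wrap; the value 0xF01D here is derived from the arithmetic above rather than quoted from DOS sources:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Segment implied by the constraint: seg * 16 + 0xFEF0 == 0x1000C0. */
    uint32_t seg = 0xF01D;
    uint32_t linear = (seg << 4) + 0xFEF0;   /* 0x1000C0 */
    printf("linear %06X wraps to %05X\n",
           (unsigned)linear, (unsigned)(linear & 0xFFFFF)); /* 000C0 */
    return 0;
}
```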
A20 had to be disabled for the wraparound to occur and DOS programs using this interface to work. Newer DOS versions, which can relocate parts of themselves into the HMA, typically craft a copy of the entry point at FFFF:00D0 in the HMA (which again resolves to physical 0x001000C0), so that the interface can work without regard to the state of A20.[14][16]
One program known to use the CALL 5 interface is the DOS version of the Small-C compiler.[17] Also, the SPELL utility in Microsoft's Word 3.0 (1987) is one of the programs depending on the CALL 5 interface to be set up correspondingly.[18] Sun Microsystems' PC-NFS (1993) requires the CALL 5 fix-up as well.[16]
Also, to save program space,[1] a trick was used by some BIOS and DOS programmers, for example, to have one segment that has access to program data (such as from F800:0000 to F800:7FFF, pointing to the physical addresses 0x000F8000–0x000FFFFF), as well as the I/O data (such as the keyboard buffer) that was located in the first memory segment (with addresses F800:8000 to F800:FFFF pointing to the physical addresses 0x00000000 to 0x00007FFF).
This trick works as long as the code isn't executed in low memory, the first 64 KB of RAM, a condition that was always true in older DOS versions without load-high capabilities.
With the DOS kernel relocated into higher memory areas, low memory increasingly became available for programs, causing those depending on the wraparound to fail.[19] The executable loaders in newer versions of DOS attempt to detect some common types of affected programs and either patch them on the fly to function in low memory as well[20] or load them above the first 64 KB before passing execution on to them.[20] For programs which are not detected automatically, LOADFIX[21] or MEMMAX -L[21] can be used to force programs to be loaded above the first 64 KB.
The trick was utilized by IBM/Microsoft Pascal itself as well as by programs compiled with it,[22][23][10][17] including Microsoft's MASM.[17] Other commonly used development utilities using this were executable compressors like Realia's Spacemaker[20] (written by Robert B. K. Dewar in 1982 and used to compress early versions of the Norton Utilities[24][25][26][27]) and Microsoft's EXEPACK[19][20][1][28][17] (written by Reuben Borman in 1985), as well as the equivalent /E[XEPACK] option in Microsoft's LINK 3.02 and higher.[19][1][28][26] Programs processed with EXEPACK would display a "Packed file is corrupt" error message when loaded into low memory.[1][20][28]
Various third-party utilities exist to modify compressed executables, either replacing the problematic uncompression routine(s) through restubbing, or attempting to expand and restore the original file.
Modern legacy BIOS boot loaders (such as GNU GRUB) use the A20 line.[3] UEFI boot loaders use 32-bit protected mode or 64-bit long mode.
|
https://en.wikipedia.org/wiki/A20_line
|
In a computer, an interrupt request (or IRQ) is a hardware signal sent to the processor that temporarily stops a running program and allows a special program, an interrupt handler, to run instead. Hardware interrupts are used to handle events such as receiving data from a modem or network card, key presses, or mouse movements.
Interrupt lines are often identified by an index with the format of IRQ followed by a number. For example, on the Intel 8259 family of programmable interrupt controllers (PICs) there are eight interrupt inputs commonly referred to as IRQ0 through IRQ7. In x86-based computer systems that use two of these PICs, the combined set of lines is referred to as IRQ0 through IRQ15. Technically these lines are named IR0 through IR7, and the lines on the ISA bus to which they were historically attached are named IRQ0 through IRQ15. (Historically, as the number of hardware devices increased, the total possible number of interrupts was increased by cascading requests, making one of the IRQ numbers cascade to another set or sets of numbered IRQs, handled by one or more subsequent controllers.)
Newer x86 systems integrate an Advanced Programmable Interrupt Controller (APIC) that conforms to the Intel APIC Architecture. Each local APIC typically supports up to 255 IRQ lines, and each I/O APIC typically supports up to 24 IRQ lines.[1]
During the early years of personal computing, IRQ management was often a user concern. With the introduction of plug and play devices this has been alleviated through automatic configuration.[2]
When installing or removing personal computer hardware, the system relies on interrupt requests. There are default settings configured in the system BIOS and recognized by the operating system; these defaults can be altered by advanced users. Modern plug and play technology has not only reduced the need for concern about these settings, but has also virtually eliminated manual configuration.
Early PCs using the Intel 8086/8088 processors had only a single PIC and were therefore limited to eight interrupts. This was expanded to two PICs with the introduction of 286-based PCs.
Typically, on systems using the Intel 8259 PIC, 16 IRQs are used. IRQs 0 to 7 are managed by one Intel 8259 PIC, and IRQs 8 to 15 by a second Intel 8259 PIC. The first PIC, the master, is the only one that directly signals the CPU. The second PIC, the slave, instead signals to the master on its IRQ 2 line, and the master passes the signal on to the CPU. There are therefore only 15 interrupt request lines available for hardware.
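Operating-system code typically enables or disables individual lines by rewriting each PIC's interrupt mask register (IMR), at I/O ports 0x21 (master) and 0xA1 (slave). A minimal sketch in C, reusing the inb/outb helpers from the A20 example above:

```c
#include <stdint.h>

/* Mask (disable) or unmask (enable) a single IRQ line on the cascaded
 * 8259 pair. IRQs 8-15 live on the slave PIC and reach the CPU through
 * the master's IRQ 2 input, so line 2 itself is normally left unmasked. */
void irq_set_mask(unsigned irq, int masked) {
    uint16_t port = (irq < 8) ? 0x21 : 0xA1;   /* master or slave IMR */
    uint8_t  bit  = (uint8_t)(1u << (irq & 7));
    uint8_t  imr  = inb(port);
    outb(port, masked ? (uint8_t)(imr | bit) : (uint8_t)(imr & ~bit));
}
```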
On systems with an APIC and I/O APIC, typically there are 24 IRQs available, and the extra 8 IRQs are used to route PCI interrupts, avoiding conflict between dynamically configured PCI interrupts and statically configured ISA interrupts. On early APIC systems with only 16 IRQs, or with only Intel 8259 interrupt controllers, PCI interrupt lines were routed to the 16 IRQs using a PIR (PCI interrupt routing) table integrated into the BIOS. Operating systems such as Windows 95 OSR2 may use the PIR table to process PCI IRQ steering;[3][4] the PIR table has since been superseded by the ACPI _PRT (PCI routing table) mechanism. On systems with APIC and MSI, typically there are 224 interrupts available.[5]
The easiest way of viewing this information on Windows is to use Device Manager or System Information (msinfo32.exe). On Linux, IRQ mappings can be viewed by executing cat /proc/interrupts or using the procinfo utility.
In early IBM-compatible personal computers, an IRQ conflict was a once-common hardware error that occurred when two devices tried to use the same interrupt request (or IRQ) to signal an interrupt to the Programmable Interrupt Controller (PIC). The PIC expects interrupt requests from only one device per line, so more than one device sending IRQ signals along the same line will generally cause an IRQ conflict that can freeze a computer.
For example, if a modem expansion card is added into a system and assigned to IRQ 4, which is traditionally assigned to serial port 1, it will likely cause an IRQ conflict. Initially, IRQ 7 was a common choice for the use of a sound card, but later IRQ 5 was used when it was found that IRQ 7 would interfere with the printer port (LPT1). The serial ports are frequently disabled to free an IRQ line for another device. IRQ 2/9 is the traditional interrupt line for an MPU-401 MIDI port, but this conflicts with the ACPI system control interrupt (SCI is hardwired to IRQ 9 on Intel chipsets);[6] this means ISA MPU-401 cards with a hardwired IRQ 2/9, and MPU-401 device drivers with a hardcoded IRQ 2/9, cannot be used in interrupt-driven mode on a system with ACPI enabled.
In some conditions, two ISA devices could share the same IRQ as long as they were not used simultaneously. To solve this problem, the later PCI bus allows for IRQ sharing. PCI Express does not have physical interrupt lines, and instead uses Message Signaled Interrupts (MSI) where the operating system supports them.
|
https://en.wikipedia.org/wiki/INTR
|
In computing, channel I/O is a high-performance input/output (I/O) architecture that is implemented in various forms on a number of computer architectures, especially on mainframe computers. In the past, channels were generally[a] implemented with custom devices, variously named channel, I/O processor, I/O controller, I/O synchronizer, or DMA controller.
Many I/O tasks can be complex and require logic to be applied to the data to convert formats and other similar duties. In these situations, the simplest solution is to ask the CPU to handle the logic, but because I/O devices are relatively slow, a CPU could waste time waiting for the data from the device. This situation is called 'I/O bound'.
Channel architecture avoids this problem by processing some or all of the I/O task without the aid of the CPU by offloading the work to dedicated logic. Channels are logically[a] self-contained, with sufficient logic and working storage to handle I/O tasks. Some are powerful or flexible enough to be used as a computer on their own and can be construed as a form of coprocessor, for example, the 7909 Data Channel on an IBM 7090 or IBM 7094; however, most are not. On some systems the channels use memory or registers addressable by the central processor as their working storage, while on other systems it is present in the channel hardware. Typically, there are standard interfaces[b] between channels and external peripheral devices, and multiple channels can operate concurrently.
A CPU typically designates a block of storage as, or sends, a relatively small channel program to the channel in order to handle I/O tasks, which the channel and controller can, in many cases, complete without further intervention from the CPU (exception: those channel programs which utilize 'program controlled interrupts', PCIs, to facilitate program loading, demand paging and other essential system tasks).
When I/O transfer is complete or an error is detected, the controller typically communicates with the CPU through the channel using an interrupt. Since the channel normally has direct access to the main memory, it is also often referred to as a direct memory access (DMA) controller.
In the most recent implementations, the channel program is initiated and the channel processor performs all required processing until either an ending condition or a program controlled interrupt (PCI). This eliminates much of the CPU–channel interaction and greatly improves overall system performance. The channel may report several different types of ending conditions, which may be unambiguously normal, may unambiguously indicate an error, or whose meaning may depend on the context and the results of a subsequent sense operation. In some systems an I/O controller can request an automatic retry of some operations without CPU intervention. In earlier implementations, any error, no matter how small, required CPU intervention, and the overhead was, consequently, much higher. A program-controlled interruption (PCI) is still used by certain legacy operations, but the trend is to move away from such PCIs, except where unavoidable.
The first use of channel I/O was with the IBM 709[2] vacuum tube mainframe in 1957, whose Model 766 Data Synchronizer was the first channel controller. The 709's transistorized successor, the IBM 7090,[3] had two to eight 6-bit channels (the 7607) and a channel multiplexor (the 7606) which could control up to eight channels. The 7090 and 7094 could also have up to eight 8-bit channels with the 7909.
While IBM used data channel commands on some of its computers, and allowed command chaining on, e.g., the 7090, most other vendors used channels that dealt with single records. However, some systems, e.g., the GE-600 series, had more sophisticated I/O architectures.
Later, the IBM System/360 and System/370 families of computers offered channel I/O on all models. For the lower-end System/360 Models 50 and below and System/370 Model 158 and below, channels were implemented in microcode on the CPU, and the CPU itself operated in one of two modes, either "CPU Mode" or "Channel Mode", with the channel mode 'stealing' cycles from the CPU mode. For larger IBM System/360 and System/370 computers the channels were still bulky and expensive separate components, such as the IBM 2860 Selector channel (one to three selector channels in a single box), the IBM 2870 Byte multiplexor channel (one multiplexer channel, and, optionally, one selector subchannel in a single box), and the IBM 2880 Block multiplexor channel (one or two block multiplexor channels in a single box). On the 303x processor complexes, the channels were implemented in independent channel directors in the same cabinet as the CPU, with each channel director implementing a group of channels.[4]
Much later, the channels were implemented as an on-board processor residing in the same box as the CPU, generally referred to as a "channel processor", which was usually a RISC processor, but which could be a System/390 microprocessor with special microcode, as in IBM's CMOS mainframes.
Amdahl Corporation's hardware implementation of System/370-compatible channels was quite different. A single internal unit, called the "C-Unit", supported up to sixteen channels using the very same hardware for all supported channels. Two internal "C-Units" were possible, supporting up to 32 channels in total. Each "C-Unit" independently performed a process generally called a "shifting channel state processor" (a type of barrel processor), which implemented a specialized finite-state machine (FSM). Each CPU cycle, every 32 nanoseconds in the 470V/6 and /5 and every 26 nanoseconds in the 470V/7 and /8, the "C-Unit" read the complete status of the next channel in priority sequence and its I/O channel in-tags. The necessary actions defined by that channel's last state and its in-tags were performed: data was read from or written to main storage, the operating system program was interrupted if such interruption was specified by the channel program's Program Control Interrupt flag, and the "C-Unit" finally stored that channel's next state, set its I/O channel out-tags, and then went on to the next lower priority channel. Preemption was possible in some instances. Sufficient FIFO storage was provided within the "C-Unit" for all channels emulated by this FSM. Channels could easily be reconfigured to the customer's choice of selector, byte multiplexor, or block multiplexor channel, without any significant restrictions, by using maintenance console commands. "Two-byte interface" was also supported, as were "Data-In/Data-Out" and other high-performance IBM channel options. Built-in channel-to-channel adapters were also offered, called CCAs in Amdahl-speak but CTCs or CTCAs in IBM-speak. This was a real game-changer, and it forced IBM to redesign its mainframes to provide similar channel capability and flexibility. IBM's initial response was to include stripped-down Model 158s, operating in "Channel Mode" only, as the Model 303x channel units. In the Amdahl "C-Unit" any channel could be any type, selector, byte multiplexor, or block multiplexor, without reserving channels 0 and 4 for the byte multiplexors, as on some IBM models.
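A rough software analogue of the idea, as a sketch in C; the state set and interface here are invented for illustration and are not Amdahl's actual design. One engine time-slices over all channels, loading each channel's saved state, advancing it one step, and storing it back:

```c
#include <stdint.h>
#include <stddef.h>

struct channel {
    enum { CH_IDLE, CH_XFER, CH_ENDING } state;
    uint8_t *buf;           /* working storage for this channel */
    size_t   pos, len;
    int      device_ready;  /* stand-in for the interface in-tags */
};

/* read_byte is a placeholder for the device interface of channel i. */
void c_unit_cycle(struct channel ch[], size_t n, uint8_t (*read_byte)(size_t)) {
    for (size_t i = 0; i < n; i++) {          /* fixed priority sequence */
        struct channel *c = &ch[i];
        if (c->state == CH_XFER && c->device_ready) {
            c->buf[c->pos++] = read_byte(i);  /* one step of work */
            if (c->pos == c->len)
                c->state = CH_ENDING;         /* status/interrupt would follow */
        }
        /* the updated state is simply left in ch[i] for the next cycle */
    }
}
```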
Some of the earliest commercial non-IBM channel systems were on the UNIVAC 490, CDC 1604, Burroughs B5000, UNIVAC 1107 and GE 635. Since then, channel controllers have been a standard part of most mainframe designs and a primary advantage mainframes have over smaller, faster personal computers and network computing.
The 1965 CDC 6600 supercomputer utilized 10 logically independent computers called peripheral processors (PPs) and 12 simple I/O channels for this role. PPs were a modified version of CDC's first personal computers, the 12-bit CDC 160 and 160A. The operating system initially resided and executed in PP0. The channels had no direct access to memory and could not cause interrupts; software on a PP used synchronous instructions[c] to transfer data between the channel and either the A register or PP memory.
SCSI, introduced in 1981 as a low-cost channel equivalent to the IBM Block Multiplexer Channel,[5] is now ubiquitous in the form of the Fibre Channel Protocol and Serial Attached SCSI.
Modern computers may have channels in the form of bus mastering peripheral devices, such as PCI direct memory access (DMA) devices. The rationale for these devices is the same as for the original channel controllers, namely off-loading transfers, interrupts, and context switching from the main CPU.
Channel controllers have been made as small as single-chip designs with multiple channels on them, used in the NeXT computers for instance.
The reference implementation of channel I/O is that of the IBM System/360 family of mainframes and its successors, but similar implementations have been adopted by IBM on other lines, e.g., the 1410 and 7010, the 7030, and by other mainframe vendors, such as Control Data, Bull (General Electric/Honeywell) and Unisys.
Computer systems that use channel I/O have special hardware components that handle all input/output operations in their entirety independently of the systems' CPU(s). The CPU of a system that uses channel I/O typically has only one machine instruction in its repertoire for input and output; this instruction is used to pass input/output commands to the specialized I/O hardware in the form of channel programs. I/O thereafter proceeds without intervention from the CPU until an event requiring notification of the operating system occurs, at which point the I/O hardware signals an interrupt to the CPU.
A channel is an independent hardware component that coordinates all I/O to a set of controllers or devices. It is not merely a medium of communication, despite the name; it is a programmable device that handles all details of I/O after being given a list of I/O operations to carry out (the channel program).
Each channel may support one or more controllers and/or devices, but each channel program may only be directed at one of those connected devices. A channel program contains lists of commands to the channel itself and to the controller and device to which it is directed. Once the operating system has prepared a complete list of channel commands, it executes a single I/O machine instruction to initiate the channel program; the channel thereafter assumes control of the I/O operations until they are completed.
It is possible to develop very complex channel programs, including testing of data and conditional branching within that channel program. This flexibility frees the CPU from the overhead of starting, monitoring, and managing individual I/O operations. The specialized channel hardware, in turn, is dedicated to I/O and can carry it out more efficiently than the CPU (and entirely in parallel with the CPU). Channel I/O is not unlike the direct memory access (DMA) of microcomputers, only more complex and advanced.
On large mainframe computer systems, CPUs are only one of several powerful hardware components that work in parallel. Special input/output controllers (the exact names of which vary from one manufacturer to another) handle I/O exclusively, and these, in turn, are connected to hardware channels that also are dedicated to input and output. There may be several CPUs and several I/O processors. The overall architecture optimizes input/output performance without degrading pure CPU performance. Since most real-world applications of mainframe systems are heavily I/O-intensive business applications, this architecture helps provide the very high levels of throughput that distinguish mainframes from other types of computers.
In IBM ESA/390 terminology, a channel is a parallel data connection inside the tree-like or hierarchically organized I/O subsystem. In System/390 I/O cages, channels either directly connect to devices which are installed inside the cage (communication adapters such as ESCON, FICON, Open Systems Adapter) or they run outside of the cage, below the raised floor, as cables the thickness of a thumb that directly connect to channel interfaces on bigger devices like tape subsystems, direct access storage devices (DASDs), terminal concentrators and other ESA/390 systems.
Channels differ in the number and type of concurrent I/O operations they support. In IBM terminology, a multiplexer channel supports a number of concurrent interleaved slow-speed operations, each transferring one byte from a device at a time. A selector channel supports one high-speed operation, transferring a block of data at a time. A block multiplexer supports a number of logically concurrent channel programs, but only one high-speed data transfer at a time.
Channels may also differ in how they associate peripheral devices with storage buffers. In UNIVAC terminology, a channel may either be internally specified index (ISI), with a single buffer and device active at a time, or externally specified index (ESI), with the device selecting which buffer to use.
In the IBM System/360 and subsequent architectures, a channel program is a sequence of channel command words (CCWs) that are executed by the I/O channel subsystem. A channel program consists of one or more channel command words. The operating system signals the I/O channel subsystem to begin executing the channel program with an SSCH (start sub-channel) instruction. The central processor is then free to proceed with non-I/O instructions until interrupted. When the channel operations are complete, the channel interrupts the central processor with an I/O interruption. In earlier models of the IBM mainframe line, the channel unit was an identifiable component, one for each channel. In modern mainframes, the channels are implemented using an independent RISC processor, the channel processor, one for all channels. IBM System/370 Extended Architecture[6] and its successors replaced the earlier SIO (start I/O) and SIOF (start I/O fast release) machine instructions (System/360 and early System/370) with the SSCH (start sub-channel) instruction (ESA/370 and successors).
Channel I/O provides considerable economies in input/output. For example, on IBM's Linux on IBM Z, the formatting of an entire track of a DASD requires only one channel program (and thus only one I/O instruction), but multiple channel command words (one per block). The program is executed by the dedicated I/O processor, while the application processor (the CPU) is free for other work.
A channel command word (CCW) is an instruction to a specialized I/O channel processor which is, in fact, a finite-state machine. It is used to initiate an I/O operation, such as "read", "write" or "sense", on a channel-attached device. On system architectures that implement channel I/O, typically all devices are connected by channels, and so all I/O requires the use of CCWs.
CCWs are organized into channel programs by the operating system, an I/O subroutine, a utility program, or by standalone software (such as test and diagnostic programs). A limited "branching" capability, and hence a dynamically programmable capability, is available within such channel programs, by use of the "status modifier" channel flag and the "transfer-in-channel" CCW.
IBM CCWs are chained to form the channel program. Bits in the CCW indicate that the following location in storage contains a CCW that is part of the same channel program. The channel program normally executes sequential CCWs until an exception occurs, a transfer-in-channel (TIC) CCW is executed, or a CCW is executed without chaining indicated. Command chaining tells the channel that the next CCW contains a new command. Data chaining indicates that the next CCW contains the address of additional data for the same command, allowing, for example, portions of one record to be written from or read to multiple data areas in storage (gather-writing and scatter-reading).[7]
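For illustration, the 8-byte format-0 CCW layout can be modeled as a C structure; this is a sketch with the chaining flags spelled out (on a real channel all fields are big-endian and the data address is a packed 24-bit field, simplified here):

```c
#include <stdint.h>

#define CCW_CD   0x80  /* chain data: next CCW holds more data for this command */
#define CCW_CC   0x40  /* chain command: next CCW holds a new command */
#define CCW_SLI  0x20  /* suppress incorrect-length indication */
#define CCW_SKIP 0x10  /* suppress data transfer to storage */
#define CCW_PCI  0x08  /* program-controlled interruption */

struct ccw0 {
    uint8_t  cmd;      /* command code: read, write, sense, TIC, ... */
    uint8_t  addr[3];  /* 24-bit data address */
    uint8_t  flags;    /* chaining and control flags above */
    uint8_t  zero;     /* must be zero */
    uint16_t count;    /* byte count for the operation */
};
```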
Channel programs can modify their own operation during execution based on data read. For example, self-modification is used extensively in OS/360 ISAM.[8]
The following example[9] reads a disk record identified by a recorded key. The track containing the record and the desired value of the key are known. The device control unit will search the track to find the requested record. In this example, <> indicates that the channel program contains the storage address of the specified field.
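(The program listing itself did not survive in this copy of the article; the following is a plausible reconstruction, matching the commands described in the next paragraph.)

```
SEARCH KEY EQUAL, <key value>    (command chained to the next CCW)
TIC    *-8                       (loop back to the SEARCH)
READ   DATA, <buffer>, <length>
```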
The TIC (transfer in channel) will cause the channel program to branch to the SEARCH command until a record with a matching key (or the end of the track) is encountered. When a record with a matching key is found, the DASD controller will include "status modifier" in the channel status, causing the channel to skip the TIC CCW; thus the channel program will not branch, and the channel will execute the READ command.
The above example is correct for unblocked records (one record per block). For blocked records (more than one record per block), the recorded key must be the same as the highest key within that block (and the records must be in key sequence), and the following channel program would be utilized:
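(As above, the original listing is missing from this copy; the following is a plausible reconstruction on the same assumptions.)

```
SEARCH KEY HIGH OR EQUAL, <key value>   (command chained to the next CCW)
TIC    *-8                              (loop back to the SEARCH)
READ   DATA, <buffer>, <block length>
```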
If the dataset is allocated in tracks, and the end of the track is reached without the requested record being found, the channel program terminates and returns a "no record found" status indication. Similarly, if the dataset is allocated in cylinders, and the end of the cylinder is reached without the requested record being found, the channel program terminates and returns a "no record found" status indication. In some cases, the system software has the option of updating the track or cylinder number and redriving the I/O operation without interrupting the application program.
On most systems channels operate using real (or physical) addresses, while the channel programs are built using virtual addresses.[10] The operating system is responsible for translating these channel programs before executing them, and for this particular purpose the Input/Output Supervisor (IOS) has a special fast fix function which was designed into the OS supervisor just for those "fixes" which are of relatively short duration (i.e., significantly shorter than "wall-clock time"). Pages containing data to be used by the I/O operation are locked into real memory, or page fixed. The channel program is copied and all virtual addresses are replaced by real addresses before the I/O operation is started. After the operation completes, the pages are unfixed.
As page fixing and unfixing is a CPU-expensive process, long-term page fixing is sometimes used to reduce the CPU cost. Here the virtual memory is page-fixed for the life of the application, rather than fixed and freed around each I/O operation. An example of a program that can use long-term page fixing is Db2.
An alternative to long-term page fixing is moving the entire application, including all its data buffers, to a preferred area of main storage. This is accomplished by a special SYSEVENT in MVS/370 through z/OS operating systems, wherein the application is, first, swapped out from wherever it may be, presumably from a non-preferred area, to swap and page external storage, and is, second, swapped into a preferred area (SYSEVENT TRANSWAP). Thereafter, the application may be marked non-swappable by another special SYSEVENT (SYSEVENT DONTSWAP). Whenever such an application terminates, whether normally or abnormally, the operating system implicitly issues yet another special SYSEVENT on the application's behalf if it has not already done so (SYSEVENT OKSWAP).
Even bootstrapping of the system, or Initial Program Load (IPL) in IBM nomenclature, is carried out by channels, although the process is partially simulated by the CPU through an implied Start I/O (SIO) instruction, an implied Channel Address Word (CAW) at location 0 and an implied channel command word (CCW) with an opcode of Read IPL, also at location 0. Command chaining is assumed, so the implied CCW at location 0 falls through to the continuation of the channel program at locations 8 and 16, and possibly elsewhere should one of those CCWs be a transfer-in-channel (TIC).[11]
To load a system, the implied Read IPL CCW reads the first block of the selected IPL device into the 24-byte data area at location 0, the channel continues with the second and third double words, which are CCWs, and this channel program loads the first portion of the system loading software elsewhere in main storage. The first double word contains a PSW which, when fetched at the conclusion of the IPL, causes the CPU to execute the IPL Text (bootstrap loader) read in by the CCW at location 8. The IPL Text then locates, loads and transfers control to the operating system's Nucleus. The Nucleus performs or initiates any necessary initialization and then commences normal OS operations.
This IPL concept is device-independent. It is capable of IPL-ing from a card deck, from a magnetic tape, or from a direct access storage device (DASD), e.g., disk or drum. The Read IPL (X'02') command, which is simulated by the CPU, is a Read EBCDIC Select Stacker 1 read command on the card reader and a Read command on tape media (which are inherently sequential access in nature), but a special Read-IPL command on DASD.
DASD controllers accept the X'02' command, seek to cylinder X'0000' head X'0000', skip to the index point (i.e., just past the track descriptor record (R0)) and then treat the Read IPL command as if it were a Read Data (X'06') command. Without this special DASD controller behavior, device-independent IPL would not be possible. On a DASD, the IPL Text is contained on cylinder X'0000', track X'0000', and record X'01' (24 bytes), and cylinder X'0000', track X'0000', and record X'02' (fairly large, certainly somewhat more than 3,000 bytes). The volume label is always contained on cylinder X'0000', track X'0000', and block X'03' (80 bytes). The volume label always points to the VTOC, with a pointer of the form HHHH (that is, the VTOC must reside within the first 65,536 tracks). The VTOC's Format 4 DSCB defines the extent (size) of the VTOC, so the volume label only needs a pointer to the first track in the VTOC's extent, and as the Format 4 DSCB, which describes the VTOC, is always the very first DSCB in the VTOC, HHHH also points to the Format 4 DSCB.
If an attempt is made to IPL from a device that was not initialized with IPL Text, the system simply enters a wait state. The DASD (direct access storage device) initialization program, IBCDASDI, or the DASD initialization application, ICKDSF, places a wait state PSW and a dummy CCW string in the 24 bytes, should the device be designated for data only, not for IPL, after which these programs format the VTOC and perform other hard drive initialization functions.
|
https://en.wikipedia.org/wiki/Channel_I/O
|
A peripheral DMA controller (PDC) is a feature found in modern microcontrollers. It is typically a FIFO with automated control features for driving the on-chip peripheral modules of a microcontroller, such as UARTs.
This takes a large burden from the operating system and reduces the number of interrupts required to service and control these types of functions.
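As an illustration, here is a hedged sketch in C of queuing a UART transmit buffer on such a controller. The register names and addresses below are invented stand-ins in the general style of real PDC hardware, not any specific part's API:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical memory-mapped PDC registers for one UART (addresses assumed) */
#define PDC_TPR   (*(volatile uint32_t *)0x40098108u) /* transmit pointer  */
#define PDC_TCR   (*(volatile uint32_t *)0x4009810Cu) /* transmit counter  */
#define PDC_PTCR  (*(volatile uint32_t *)0x40098120u) /* transfer control  */
#define PDC_TXTEN (1u << 8)                           /* enable TX channel */

/* Hand the whole buffer to the PDC; the CPU takes one interrupt at the
 * end of the transfer instead of one per byte. */
void uart_send_dma(const uint8_t *buf, size_t len) {
    PDC_TPR  = (uint32_t)(uintptr_t)buf;
    PDC_TCR  = (uint32_t)len;
    PDC_PTCR = PDC_TXTEN;
}
```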
|
https://en.wikipedia.org/wiki/Peripheral_DMA_controller
|
In computer architecture, clock gating is a popular power management technique used in many synchronous circuits for reducing dynamic power dissipation, by removing the clock signal when the circuit, or a subpart of it, is not in use or ignores the clock signal. Clock gating saves power by pruning the clock tree, at the cost of adding more logic to a circuit. Pruning the clock disables portions of the circuitry so that the flip-flops in them do not switch state, as switching the state consumes power. When not being switched, the switching power consumption goes to zero, and only leakage currents are incurred.[1] This technique is particularly effective in systems with significant idle time or predictable periods of inactivity within specific modules.[1]
Although asynchronous circuits by definition do not have a global "clock", the term perfect clock gating is used to illustrate how various clock gating techniques are simply approximations of the data-dependent behavior exhibited by asynchronous circuitry. As the granularity on which one gates the clock of a synchronous circuit approaches zero, the power consumption of that circuit approaches that of an asynchronous circuit: the circuit only generates logic transitions when it is actively computing.[2]
An alternative solution to clock gating is to use clock enable (CE) logic on a synchronous data path employing an input multiplexer, such as for D-type flip-flops: using C / Verilog notation, Dff = CE ? D : Q; where Dff is the D-input of a D-type flip-flop, D is the module information input (without CE input), and Q is the D-type flip-flop output. This type of clock gating is race-condition-free and is preferred for FPGA designs. For FPGAs, every D-type flip-flop has an additional CE input signal.
Clock gating works by taking the enable conditions attached to registers and using them to gate the clocks. A design must contain these enable conditions in order to use and benefit from clock gating. This clock gating process can also save significant die area as well as power, since it removes large numbers of muxes and replaces them with clock-gating logic. This clock-gating logic is generally in the form of "integrated clock gating" (ICG) cells. However, the clock-gating logic will change the clock-tree structure, since the clock-gating logic will sit in the clock tree.
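A behavioral model may make the ICG cell concrete; the following is a sketch in C of the common latch-plus-AND structure, in which a level-sensitive latch samples the enable while the clock is low, keeping the gated clock free of glitches:

```c
typedef struct {
    int en_latched;  /* state of the level-sensitive latch */
} icg_cell;

/* Evaluate one time step: returns the gated clock for the given raw
 * clock level and enable input. */
int icg_eval(icg_cell *c, int clk, int enable) {
    if (!clk)
        c->en_latched = enable;  /* latch is transparent while clk is low */
    return clk && c->en_latched; /* AND of clock and latched enable */
}
```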
Clock-gating logic can be added into a design in a variety of ways, from manual insertion at the RTL by the designer to automatic insertion of ICG cells by synthesis tools.
In general, clock gating applied at a coarser granularity leads to reduced resource overhead and greater power savings.[3]
Any RTL modifications to improve clock gating will result in functional changes to the design (since the registers will now hold different values), which need to be verified.
Sequential clock gating is the process of extracting/propagating the enable conditions to the upstream/downstream sequential elements, so that additional registers can be clock gated.
Chips intended to run on batteries or with very low power, such as those used in mobile phones, wearable devices, and embedded systems, often implement several forms of clock gating together. At one end is the manual gating of clocks by software, where a driver enables or disables the various clocks used by a given idle controller. On the other end is automatic clock gating, where the hardware can be told to detect whether there is any work to do, and turn off a given clock if it is not needed. These forms interact with each other and may be part of the same enable tree. For example, an internal bridge or bus might use automatic gating so that it is gated off until the CPU or a DMA engine needs to use it, while several of the peripherals on that bus might be permanently gated off if they are unused on that board.
|
https://en.wikipedia.org/wiki/Clock_gating
|
Power gating is a technique used in integrated circuit design to reduce power consumption by shutting off the current to blocks of the circuit that are not in use. In addition to reducing stand-by or leakage power, power gating has the benefit of enabling Iddq testing.
Power gating affects design architecture more than clock gating. It increases time delays, as power-gated modes have to be safely entered and exited. Architectural trade-offs exist between designing for the amount of leakage power saved in low power modes and the energy dissipated to enter and exit the low power modes. Shutting down the blocks can be accomplished either by software or hardware: driver software can schedule the power-down operations, hardware timers can be utilized, or a dedicated power management controller can be used.
An externally switched power supply is a very basic form of power gating to achieve long-term leakage power reduction. To shut off the block for small intervals of time, internal power gating is more suitable. CMOS switches that provide power to the circuitry are controlled by power gating controllers. Outputs of the power-gated block discharge slowly, and hence output voltage levels spend more time near the threshold voltage level. This can lead to larger short-circuit current.
Power gating uses low-leakage PMOS transistors as header switches to shut off power supplies to parts of a design in standby or sleep mode. NMOS footer switches can also be used as sleep transistors. Inserting the sleep transistors splits the chip's power network into a permanent power network connected to the power supply and a virtual power network that drives the cells and can be turned off.
Typically, high threshold voltage (Vth) sleep transistors are used for power gating, in a technique sometimes known as multi-threshold CMOS (MTCMOS). The sleep transistor sizing is an important design parameter.
The quality of this complex power network is critical to the success of a power-gating design. Two of the most critical parameters are the IR-drop and the penalties in silicon area and routing resources. Power gating can be implemented using cell- or cluster-based (or fine grain) approaches or a distributed coarse-grained approach.
Power gating implementation has additional considerations for timing closure. A number of parameters need to be considered and their values carefully chosen for a successful implementation of this methodology.[1][2]
Adding a sleep transistor to every cell that is to be turned off imposes a large area penalty, and individually gating the power of every cluster of cells creates timing issues introduced by inter-cluster voltage variation that are difficult to resolve. Fine-grain power gating encapsulates the switching transistor as a part of the standard cell logic. Switching transistors are designed by either the library IP vendor or standard cell designer. Usually these cell designs conform to the normal standard cell rules and can easily be handled by EDA tools for implementation.
The size of the gate control is designed considering the worst-case scenario that would require the circuit to switch during every clock cycle, resulting in a huge area impact. Some recent designs implement fine-grain power gating selectively, but only for the low-Vth cells. If the technology allows multiple Vth libraries, the use of low-Vth devices in the design is kept to a minimum (around 20%), so that the area impact can be reduced. When using power gates on the low-Vth cells, the output must be isolated if the next stage is a high-Vth cell. Otherwise it can cause the neighboring high-Vth cell to have leakage when the output goes to an unknown state due to power gating.
The gate control slew rate constraint is achieved by having a buffer distribution tree for the control signals. The buffers must be chosen from a set of always-on buffers (buffers without the gate control signal) designed with high-Vth cells. The inherent difference between when one cell switches off with respect to another minimizes the rush current during switch-on and switch-off.
Usually the gating transistor is designed as a high-Vth device. Coarse-grain power gating offers further flexibility by optimizing the power gating cells where there is low switching activity. Leakage optimization has to be done at the coarse-grain level, swapping the low-leakage cell for the high-leakage one. Fine-grain power gating is an elegant methodology resulting in up to 10 times leakage reduction. This type of power reduction makes it an appealing technique if the power reduction requirement is not satisfied by multiple-Vth optimization alone.
The coarse-grained approach implements the grid style sleep transistors which drives cells locally through shared virtual power networks. This approach is less sensitive to PVT variation, introduces less IR-drop variation, and imposes a smaller area overhead than the cell- or cluster-based implementations. In coarse-grain power gating, the power-gating transistor is a part of the power distribution network rather than the standard cell.
There are two ways of implementing a coarse-grain structure: ring-based, in which the power gates are placed around the perimeter of the module being switched, and column-based, in which the gates are distributed in columns within the module.
Gate sizing depends on the overall switching current of the module at any given time. Since only a fraction of circuits switch at any point of time, power gate sizes are smaller as compared to the fine-grain switches. Dynamic power simulation using worst-case vectors can determine the worst-case switching for the module and hence the size. The IR drop can also be factored into the analysis. Simultaneous switching capacitance is a major consideration in coarse-grain power gating implementation. In order to limit simultaneous switching, gate control buffers can be daisy chained, and special counters can be used to selectively turn on blocks of switches.
Isolation cells are used to prevent short circuit current. As the name suggests, these cells isolate the power gated block from the normally-on block. Isolation cells are specially designed for low short circuit current when input is at threshold voltage level. Isolation control signals are provided by the power gating controller. Isolation of the signals of a switchable module is essential to preserve design integrity. Usually a simple OR or AND logic can function as an output isolation device. Multiple state retention schemes are available in practice to preserve the state before a module shuts down. The simplest technique is to scan out the register values into a memory before shutting down a module. When the module wakes up, the values are scanned back from the memory.
When power gating is used, the system needs some form of state retention, such as scanning out data to a RAM, then scanning it back in when the system is reawakened. For critical applications, the memory states must be maintained within the cell, a condition that requires a retention flop to store bits in a table. That makes it possible to restore the bits very quickly during wakeup. Retention registers are special low leakage flip-flops used to hold the data of the main registers of the power gated block. Thus the internal state of the block during power down mode can be retained and loaded back to it when the block is reactivated. Retention registers are always powered up. The retention strategy is design dependent. A power gating controller controls the retention mechanism such as when to save the current contents of the power gating block and when to restore it back.
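The ordering constraints above can be summarized in code. A hedged sketch in C, where each function stands in for a design-specific control signal (none of these names come from a real library):

```c
/* Placeholders for design-specific control-signal writes. */
void assert_isolation(void);       /* clamp block outputs to safe values  */
void deassert_isolation(void);
void save_retention_state(void);   /* copy flops into retention registers */
void restore_retention_state(void);
void open_power_switch(void);      /* turn sleep transistors off          */
void close_power_switch(void);     /* restore supply; wait until settled  */

void block_power_down(void) {
    assert_isolation();      /* isolate first, so neighbors see clean values */
    save_retention_state();  /* then capture state while power is still on   */
    open_power_switch();     /* only now cut the supply                      */
}

void block_power_up(void) {
    close_power_switch();    /* reverse order on wake-up */
    restore_retention_state();
    deassert_isolation();
}
```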
|
https://en.wikipedia.org/wiki/Power_gating
|
Processor power dissipation or processing unit power dissipation is the process in which computer processors consume electrical energy, and dissipate this energy in the form of heat due to the resistance in the electronic circuits.
Designing CPUs that perform tasks efficiently without overheating is a major consideration of nearly all CPU manufacturers to date. Historically, early CPUs implemented with vacuum tubes consumed power on the order of many kilowatts. Current CPUs in general-purpose personal computers, such as desktops and laptops, consume power on the order of tens to hundreds of watts. Some other CPU implementations use very little power; for example, the CPUs in mobile phones often use just a few watts of electricity,[1] while some microcontrollers used in embedded systems may consume only a few milliwatts or even as little as a few microwatts.
There are a number of engineering reasons for this pattern.
Processor manufacturers usually release two power consumption numbers for a CPU: a typical thermal power, measured under normal load, and a maximum thermal power, measured under a worst-case load.
For example, the Pentium 4 2.8 GHz has a 68.4 W typical thermal power and 85 W maximum thermal power. When the CPU is idle, it will draw far less than the typical thermal power. Datasheets normally contain the thermal design power (TDP), which is the maximum amount of heat generated by the CPU which the cooling system in a computer is required to dissipate. Both Intel and Advanced Micro Devices (AMD) have defined TDP as the maximum heat generation for thermally significant periods while running worst-case non-synthetic workloads; thus, TDP does not reflect the actual maximum power of the processor. This ensures the computer will be able to handle essentially all applications without exceeding its thermal envelope, and without requiring a cooling system sized for the maximum theoretical power, which would cost more while providing only extra headroom for processing power.[3][4]
In many applications, the CPU and other components are idle much of the time, so idle power contributes significantly to overall system power usage. When the CPU uses power management features to reduce energy use, other components, such as the motherboard and chipset, take up a larger proportion of the computer's energy. In applications where the computer is often heavily loaded, such as scientific computing, performance per watt (how much computing the CPU does per unit of energy) becomes more significant.
CPUs typically use a significant portion of the power consumed by the computer. Other major uses include fast video cards, which contain graphics processing units, and power supplies. In laptops, the LCD's backlight also uses a significant portion of overall power. While energy-saving features have been instituted in personal computers for when they are idle, the overall consumption of today's high-performance CPUs is considerable. This is in strong contrast with the much lower energy consumption of CPUs designed for low-power devices.
There are several factors contributing to CPU power consumption: dynamic power consumption, short-circuit power consumption, and power loss due to transistor leakage currents:
P_{cpu} = P_{dyn} + P_{sc} + P_{leak}
The dynamic power consumption originates from the activity of logic gates inside a CPU. When the logic gates toggle, energy flows as the capacitors inside them are charged and discharged. The dynamic power consumed by a CPU is approximately proportional to the CPU frequency and to the square of the CPU voltage:[5]
P_{dyn} = C V^2 f
where C is the switched load capacitance, f is the frequency and V is the voltage.[6]
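A worked instance of the formula, with illustrative (not measured) numbers; note that halving the supply voltage alone would cut this figure to a quarter:

```c
#include <stdio.h>

int main(void) {
    double C = 1e-9;   /* 1 nF of switched capacitance (assumed) */
    double V = 1.2;    /* supply voltage, volts */
    double f = 2.0e9;  /* clock frequency, 2 GHz */
    /* P_dyn = C * V^2 * f = 1e-9 * 1.44 * 2e9 = 2.88 W */
    printf("P_dyn = %.2f W\n", C * V * V * f);
    return 0;
}
```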
When logic gates toggle, some transistors inside may change states. As this takes a finite amount of time, it may happen that for a very brief amount of time some transistors are conducting simultaneously. A direct path between the source and ground then results in some short-circuit power loss (P_{sc}). The magnitude of this power is dependent on the logic gate, and is rather complex to model on a macro level.
Power consumption due to leakage power (P_{leak}) emanates at a micro-level in transistors. Small amounts of current are always flowing between the differently doped parts of the transistor. The magnitude of these currents depends on the state of the transistor, its dimensions, physical properties and sometimes temperature. The total amount of leakage current tends to inflate with increasing temperature and decreasing transistor sizes.
Both dynamic and short-circuit power consumption are dependent on the clock frequency, while the leakage current is dependent on the CPU supply voltage. It has been shown that the energy consumption of a program shows convex energy behavior, meaning that there exists an optimal CPU frequency at which energy consumption is minimal for the work done.[7]
Power consumption can be reduced in several ways,[citation needed] including clock gating, power gating, supply voltage reduction, and dynamic voltage and frequency scaling.
Historically, processor manufacturers consistently delivered increases in clock rates and instruction-level parallelism, so that single-threaded code executed faster on newer processors with no modification.[12] More recently, in order to manage CPU power dissipation, processor makers favor multi-core chip designs, and software needs to be written in a multi-threaded or multi-process manner to take full advantage of such hardware. Many multi-threaded development paradigms introduce overhead and will not see a linear increase in speed relative to the number of processors. This is particularly true while accessing shared or dependent resources, due to lock contention. This effect becomes more noticeable as the number of processors increases.
Recently, IBM has been exploring ways to distribute computing power more efficiently by mimicking the distributional properties of the human brain.[13]
Processors can be damaged by overheating, but vendors protect them with operational safeguards such as throttling and automatic shutdown. When a core exceeds the set throttle temperature, the processor can reduce power to maintain a safe temperature level, and if the processor is unable to maintain a safe operating temperature through throttling, it will automatically shut down to prevent permanent damage.[14]
|
https://en.wikipedia.org/wiki/CPU_power_dissipation
|
Low-power electronics are electronics designed to consume less electrical power than usual, often at some expense. For example, notebook processors usually consume less power than their desktop counterparts, at the expense of computer performance.[1]
The earliest attempts to reduce the amount of power required by an electronic device were related to the development of the wristwatch. Electronic watches require electricity as a power source, and some mechanical movements and hybrid electromechanical movements also require electricity. Usually, the electricity is provided by a replaceable battery. The first use of electrical power in watches was as a substitute for the mainspring, to remove the need for winding. The first electrically powered watch, the Hamilton Electric 500, was released in 1957 by the Hamilton Watch Company of Lancaster, Pennsylvania.
The first quartz wristwatches were manufactured in 1967, using analog hands to display the time.[2]
Watch batteries (strictly speaking cells, as a battery is composed of multiple cells) are specially designed for their purpose. They are very small and provide tiny amounts of power continuously for very long periods (several years or more). In some cases, replacing the battery requires a trip to a watch repair shop or watch dealer. Rechargeable batteries are used in some solar-powered watches.
The first digital electronic watch was a Pulsar LED prototype produced in 1970.[3] Digital LED watches were very expensive and out of reach of the common consumer until 1975, when Texas Instruments started to mass-produce LED watches inside a plastic case.
Most watches with LED displays required that the user press a button to see the time displayed for a few seconds, because LEDs used so much power that they could not be kept operating continuously. Watches with LED displays were popular for a few years, but soon the LED displays were superseded by liquid crystal displays (LCDs), which used less battery power and were much more convenient in use, with the display always visible and no need to push a button before seeing the time. Only in darkness did the user have to press a button to light the display, at first with a tiny light bulb and later with illuminating LEDs.[4]
Most electronic watches today use 32.768 kHz quartz oscillators.[2]
As of 2013, processors specifically designed for wristwatches are among the lowest-power processors manufactured, often 4-bit designs clocked at 32.768 kHz.
When personal computers were first developed, power consumption was not an issue. With the development of portable computers, however, the requirement to run a computer off a battery pack necessitated a compromise between computing power and power consumption. Originally most processors ran both the core and I/O circuits at 5 volts, as in the Intel 8088 used by the first Compaq Portable. This was later reduced to 3.5, 3.3, and 2.5 volts to lower power consumption. For example, the Pentium P5 core voltage decreased from 5 V in 1993 to 2.5 V in 1997.
With lower voltage comes lower overall power consumption, making a system less expensive to run on any existing battery technology and able to function for longer. This is crucially important for portable or mobile systems. The emphasis on battery operation has driven many of the advances in lowering processor voltage, because this has a significant effect on battery life. The second major benefit is that with less voltage, and therefore less power consumption, there will be less heat produced. Processors that run cooler can be packed into systems more tightly and will last longer. The third major benefit is that a processor running cooler on less power can be made to run faster. Lowering the voltage has been one of the key factors in allowing the clock rate of processors to go higher and higher.[5]
The density and speed of integrated-circuit computing elements have increased exponentially for several decades, following a trend described by Moore's law. While it is generally accepted that this exponential improvement trend will end, it is unclear exactly how dense and fast integrated circuits will get by the time this point is reached. Working devices have been demonstrated which were fabricated with a MOSFET transistor channel length of 6.3 nanometres using conventional semiconductor materials, and devices have been built that use carbon nanotubes as MOSFET gates, giving a channel length of approximately one nanometre. The density and computing power of integrated circuits are limited primarily by power-dissipation concerns.
The overall power consumption of a new personal computer has been increasing at about 22% per year.[6] This increase in consumption comes even though the energy consumed by a single CMOS logic gate in order to change its state has fallen exponentially in accordance with Moore's law, by virtue of shrinkage.[6]
An integrated-circuit chip contains many capacitive loads, formed both intentionally (as with gate-to-channel capacitance) and unintentionally (between conductors which are near each other but not electrically connected). Changing the state of the circuit causes a change in the voltage across these parasitic capacitances, which involves a change in the amount of stored energy. As the capacitive loads are charged and discharged through resistive devices, an amount of energy comparable to that stored in the capacitor is dissipated as heat:

E ≈ ½CV²

where C is the switched capacitance and V is the voltage swing. A chip that switches a total capacitance C at clock frequency f, with a fraction α of that capacitance toggling each cycle, therefore dissipates a dynamic power of roughly P = αCV²f.
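To make the scaling concrete, the following is a minimal back-of-the-envelope sketch in Python of the dynamic-power relation above; the capacitance, voltage, frequency, and activity-factor figures are illustrative assumptions, not measurements of any real chip.

```python
# Back-of-the-envelope dynamic power estimate, P = alpha * C * V^2 * f.
# All figures below are illustrative assumptions, not data from a real chip.

def dynamic_power(c_farads: float, v_volts: float, f_hz: float, alpha: float = 0.1) -> float:
    """Dynamic switching power of a CMOS circuit.

    alpha is the activity factor: the fraction of the total switched
    capacitance that actually toggles each clock cycle.
    """
    return alpha * c_farads * v_volts**2 * f_hz

# Example: 1 nF of total switched capacitance at 1 GHz.
p_high_v = dynamic_power(c_farads=1e-9, v_volts=1.0, f_hz=1e9)
p_low_v = dynamic_power(c_farads=1e-9, v_volts=0.8, f_hz=1e9)
print(f"1.0 V: {p_high_v:.3f} W, 0.8 V: {p_low_v:.3f} W")
# Lowering V from 1.0 V to 0.8 V cuts dynamic power by 36% (0.8^2 = 0.64),
# which is why supply-voltage reduction dominates low-power design.
```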
The effect of heat dissipation on state change is to limit the amount of computation that may be performed within a given power budget. While device shrinkage can reduce some parasitic capacitances, the number of devices on an integrated circuit chip has increased more than enough to compensate for reduced capacitance in each individual device. Some circuits –dynamic logic, for example – require a minimum clock rate in order to function properly, wasting "dynamic power" even when they do not perform useful computations. Other circuits – most prominently, theRCA 1802, but also several later chips such as theWDC 65C02, theIntel 80C85, theFreescale 68HC11and some otherCMOSchips – use "fully static logic" that has no minimum clock rate, but can "stop the clock" and hold their state indefinitely. When the clock is stopped, such circuits use no dynamic power but they still have a small, static power consumption caused by leakage current.
As circuit dimensions shrink,subthreshold leakagecurrent becomes more prominent. This leakage current results in power consumption, even when no switching is taking place (static power consumption). In modern chips, this current generally accounts for half the power consumed by the IC.
Loss fromsubthreshold leakagecan be reduced by raising thethreshold voltageand lowering the supply voltage. Both these changes slow down the circuit significantly. To address this issue, some modern low-power circuits use dual supply voltages to improve speed on critical paths of the circuit and lower power consumption on non-critical paths. Some circuits even use different transistors (with different threshold voltages) in different parts of the circuit, in an attempt to further reduce power consumption without significant performance loss.
Another method that is used to reduce power consumption ispower gating:[7]the use of sleep transistors to disable entire blocks when not in use. Systems that are dormant for long periods of time and "wake up" to perform a periodic activity are often in an isolated location monitoring an activity. These systems are generally battery- or solar-powered and hence, reducing power consumption is a key design issue for these systems. By shutting down a functional but leaky block until it is used, leakage current can be reduced significantly. For some embedded systems that only function for short periods at a time, this can dramatically reduce power consumption.
Two other approaches also exist to lower the power overhead of state changes. One is to reduce the operating voltage of the circuit, as in a dual-voltage CPU, or to reduce the voltage change involved in a state change, so that a state change only moves a node voltage by a fraction of the supply voltage (low-voltage differential signaling, for example). This approach is limited by thermal noise within the circuit. There is a characteristic voltage (proportional to the device temperature and to the Boltzmann constant) which the state switching voltage must exceed in order for the circuit to be resistant to noise. This is typically on the order of 50–100 mV for devices rated to 100 degrees Celsius external temperature (about 4kT/q, where T is the device's internal temperature in kelvins, k is the Boltzmann constant, and q is the elementary charge).
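As a quick sanity check of the noise-floor figure above, this Python sketch computes the thermal voltage kT/q; the 100 °C operating point is taken from the text, and the comparison is only order-of-magnitude.

```python
# Illustrative check of the thermal-noise floor mentioned above.
# Dividing the thermal energy k*T by the elementary charge q converts
# it to a voltage (the "thermal voltage" kT/q).

k = 1.380649e-23     # J/K, Boltzmann constant
q = 1.602176634e-19  # C, elementary charge

def thermal_voltage(t_kelvin: float) -> float:
    return k * t_kelvin / q

t = 373.15  # 100 degrees Celsius
vt = thermal_voltage(t)
print(f"kT/q at {t} K = {vt * 1e3:.1f} mV, 4kT/q = {4 * vt * 1e3:.0f} mV")
# kT/q at 373 K is about 32 mV, so 4kT/q is roughly 130 mV -- the same
# order of magnitude as the 50-100 mV figure quoted in the text.
```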
The second approach is to attempt to provide charge to the capacitive loads through paths that are not primarily resistive. This is the principle behindadiabatic circuits. The charge is supplied either from a variable-voltageinductivepower supply or by other elements in areversible-logiccircuit. In both cases, the charge transfer must be primarily regulated by the non-resistive load. As a practical rule of thumb, this means the change rate of a signal must be slower than that dictated by theRC time constantof the circuit being driven. In other words, the price of reduced power consumption per unit computation is a reduced absolute speed of computation. In practice, although adiabatic circuits have been built, it has been difficult for them to reduce computation power substantially in practical circuits.
Finally, there are several techniques for reducing the number of state changes associated with a given computation. For clocked-logic circuits, theclock gatingtechnique is used, to avoid changing the state of functional blocks that are not required for a given operation. As a more extreme alternative, theasynchronous logicapproach implements circuits in such a way that a specific externally supplied clock is not required. While both of these techniques are used to different extents in integrated circuit design, the limit of practical applicability for each appears to have been reached.[citation needed]
There are a variety of techniques for reducing the amount of battery power required for a desired wireless communicationgoodput.[8]Somewireless mesh networksuse"smart" low power broadcastingtechniques that reduce the battery power required to transmit. This can be achieved by usingpower aware protocolsand joint power control systems.
In 2007, about 10% of the average IT budget was spent on energy, and energy costs for IT were expected to rise to 50% by 2010.[9]
The weight and cost of power supply and cooling systems generally depends on the maximum possible power that could be used at any one time.
There are two ways to prevent a system from being permanently damaged by excessive heat.
Most desktop computers design power and cooling systems around the worst-caseCPU power dissipationat the maximum frequency, maximum workload, and worst-case environment.
To reduce weight and cost, many laptop computers choose to use a much lighter, lower-cost cooling system designed around a much lowerThermal Design Power, that is somewhat above expected maximum frequency, typical workload, and typical environment.
Typically such systems reduce (throttle) the clock rate when the CPU die temperature gets too hot, reducing the power dissipated to a level that the cooling system can handle.
|
https://en.wikipedia.org/wiki/Low-power_electronics
|
Event-driven architecture (EDA) is a software architecture paradigm concerning the production and detection of events. Event-driven architectures are evolutionary in nature and provide a high degree of fault tolerance, performance, and scalability. However, they are complex and inherently challenging to test. EDAs are well suited to complex and dynamic workloads.[1]
An event can be defined as "a significant change in state".[2] For example, when a consumer purchases a car, the car's state changes from "for sale" to "sold". A car dealer's system architecture may treat this state change as an event whose occurrence can be made known to other applications within the architecture. From a formal perspective, what is produced, published, propagated, detected or consumed is a (typically asynchronous) message called the event notification, and not the event itself, which is the state change that triggered the message emission. Events do not travel, they just occur. However, the term event is often used metonymically to denote the notification message itself, which may lead to some confusion. This is because event-driven architectures are often designed atop message-driven architectures, where such a communication pattern requires one of the inputs to be text-only (the message) in order to differentiate how each communication should be handled.
This architectural pattern may be applied by the design and implementation of applications and systems that transmit events among loosely coupled software components and services. An event-driven system typically consists of event emitters (or agents), event consumers (or sinks), and event channels. Emitters have the responsibility to detect, gather, and transfer events. An event emitter does not know the consumers of the event; it does not even know whether a consumer exists, and if one does, it does not know how the event is used or further processed. Sinks have the responsibility of applying a reaction as soon as an event is presented. The reaction might or might not be completely provided by the sink itself. For instance, the sink might just have the responsibility to filter, transform and forward the event to another component, or it might provide a self-contained reaction to the event. Event channels are conduits in which events are transmitted from event emitters to event consumers. The knowledge of the correct distribution of events is exclusively present within the event channel.[citation needed] The physical implementation of event channels can be based on traditional components such as message-oriented middleware or point-to-point communication, which might require a more appropriate transactional executive framework[clarify].
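The division of labor among emitters, channels, and sinks can be illustrated with a minimal Python sketch; the class and event names are invented for illustration and do not come from any particular framework. Note how the emitter publishes through the channel without knowing whether any consumer exists.

```python
from collections import defaultdict
from typing import Callable

class EventChannel:
    """Conduit between emitters and sinks; emitters never see consumers."""
    def __init__(self) -> None:
        self._sinks: dict[str, list] = defaultdict(list)

    def subscribe(self, event_name: str, sink: Callable[[dict], None]) -> None:
        self._sinks[event_name].append(sink)

    def publish(self, event_name: str, payload: dict) -> None:
        # Only the channel knows how events are distributed.
        for sink in self._sinks[event_name]:
            sink(payload)

channel = EventChannel()
channel.subscribe("car_sold", lambda e: print("billing:", e))
channel.subscribe("car_sold", lambda e: print("inventory:", e))

# The emitter publishes without knowing who (if anyone) is listening.
channel.publish("car_sold", {"vin": "ABC123", "price": 18_500})
```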
Building systems around an event-driven architecture simplifies horizontal scalability in distributed computing models and makes them more resilient to failure. This is because application state can be copied across multiple parallel snapshots for high availability.[3] New events can be initiated anywhere, but more importantly propagate across the network of data stores, updating each as they arrive. Adding extra nodes becomes trivial as well: you can simply take a copy of the application state, feed it a stream of events and run with it.[4]
Event-driven architecture can complementservice-oriented architecture(SOA) because services can be activated by triggers fired on incoming events.[5][6]This paradigm is particularly useful whenever the sink does not provide anyself-contained executive[clarify].
SOA 2.0 evolves the implications that SOA and EDA provide to a richer, more robust level by leveraging previously unknown causal relationships to form a new event pattern. This new business intelligence pattern triggers further autonomous human or automated processing that adds value to the enterprise by injecting value-added information into the recognized pattern, something that could not have been achieved previously.
Event-driven architecture has two primary topologies. In the broker topology, components broadcast events to the entire system without any central orchestrator; this provides the highest performance and scalability. In the mediator topology, a central orchestrator controls the workflow of events; this provides better control and error-handling capabilities. A hybrid model can also combine these two topologies.[1]
There are different types ofeventsin EDA, and opinions on their classification may vary. According to Yan Cui, there are two key categories of events:[7]
Domain events signify important occurrences within a specific business domain. These events are restricted to abounded contextand are vital for preserving business logic. Typically, domain events have lighterpayloads, containing only the necessary information for processing. This is because event listeners are generally within the same service, where their requirements are more clearly understood.[7]
On the other hand, integration events serve to communicate changes across differentbounded contexts. They are crucial for ensuringdata consistencythroughout the entire system. Integration events tend to have more complex payloads with additionalattributes, as the needs of potential listeners can differ significantly. This often leads to a more thorough approach to communication, resulting in overcommunication to ensure that all relevant information is effectively shared.[7]
An event can consist of two parts: the event header and the event body, also known as the event payload. The event header might include information such as the event name, a time stamp for the event, and the type of event. The event payload provides the details of the state change detected. An event body should not be confused with the pattern or the logic that may be applied in reaction to the occurrence of the event itself.
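A minimal sketch of this header/payload split, using a Python dataclass; the field names are illustrative assumptions rather than any standard schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Event:
    # Header: identifies and timestamps the event.
    name: str
    event_type: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    # Body / payload: details of the state change that was detected.
    payload: dict = field(default_factory=dict)

e = Event(name="CarSold", event_type="domain",
          payload={"vin": "ABC123", "old_state": "for sale", "new_state": "sold"})
print(e.name, e.timestamp.isoformat(), e.payload)
```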
There are two primary methods for structuring event payloads in event-driven architectures:[1]
These methods represent two ends of a spectrum rather than binary choices.Architectsmust carefully size the event payloads to meet the specific needs of event consumers.[1]
In event-driven architectures, event evolution poses challenges, such as managing inconsistent event schemas across services and ensuring compatibility during gradual system updates. Event evolution strategies can ensure that systems handle changes to events without disruption. These strategies include versioning events (for example, semantic versioning or schema evolution) to maintain backward and forward compatibility, and adapters that translate events between old and new formats to ensure consistent processing across components. Such techniques enable systems to evolve while remaining compatible and reliable in complex, distributed environments.[9]
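As a sketch of one such strategy, the following hypothetical Python adapter upgrades old event notifications to the current schema so that consumers only ever see the latest format; the schema change shown (splitting a 'name' field) is an invented example, not from any real system.

```python
# Hypothetical adapter chain that upgrades old events to the newest
# schema, letting old and new producers coexist during a rollout.

def upgrade_v1_to_v2(event: dict) -> dict:
    """Assumed change: v1 used a single 'name' field; v2 splits it."""
    first, _, last = event["name"].partition(" ")
    return {"schema_version": 2, "first_name": first, "last_name": last,
            **{k: v for k, v in event.items() if k != "name"}}

UPGRADERS = {1: upgrade_v1_to_v2}  # version -> adapter to the next version

def to_latest(event: dict) -> dict:
    version = event.get("schema_version", 1)  # v1 events lacked the field
    while version in UPGRADERS:
        event = UPGRADERS[version](event)
        version = event["schema_version"]
    return event

old = {"name": "Jane Doe", "customer_id": 42}
print(to_latest(old))
# {'schema_version': 2, 'first_name': 'Jane', 'last_name': 'Doe', 'customer_id': 42}
```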
An event driven architecture may be built on four logical layers, starting with the sensing of an event (i.e., a significant temporal state or fact), proceeding to the creation of its technical representation in the form of an event structure and ending with a non-empty set of reactions to that event.[10]
The first logical layer is the event producer, which senses a fact and represents that fact as an event message. As an example, an event producer could be an email client, an E-commerce system, a monitoring agent or some type of physical sensor.
Converting the data collected from such a diverse set of data sources to a single standardized form of data for evaluation is a significant task in the design and implementation of this first logical layer.[10]However, considering that an event is a strongly declarative frame, any informational operations can be easily applied, thus eliminating the need for a high level of standardization.[citation needed]
This is the second logical layer. An event channel is a mechanism of propagating the information collected from an event generator to the event engine[10]or sink.
This could be a TCP/IP connection or any type of input file (flat, XML format, e-mail, etc.). Several event channels can be opened at the same time. Usually, because the event-processing engine has to process them in near real time, the event channels are read asynchronously. The events are stored in a queue, waiting to be processed later by the event-processing engine.
The event processing engine is the logical layer responsible for identifying an event, and then selecting and executing the appropriate reaction. It can also trigger a number of assertions. For example, if the event that comes into the event processing engine is a product ID low in stock, this may trigger reactions such as “Order product ID” and “Notify personnel”.[10]
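A toy event-processing engine along these lines can be sketched as a dispatch table mapping event types to reactions; all names below are illustrative, and a real engine would add queuing, filtering, and error handling.

```python
# Minimal sketch of an event-processing engine: identify the event,
# then select and execute the appropriate reactions.

def order_product(event: dict) -> None:
    print(f"Ordering more of product {event['product_id']}")

def notify_personnel(event: dict) -> None:
    print(f"Notifying staff: product {event['product_id']} low in stock")

REACTIONS = {
    "low_stock": [order_product, notify_personnel],
}

def process(event: dict) -> None:
    for reaction in REACTIONS.get(event["type"], []):
        reaction(event)

process({"type": "low_stock", "product_id": "SKU-1042"})
```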
This is the logical layer where the consequences of the event are shown. This can be done in many different ways and forms; e.g., an email is sent to someone and an application may display some kind of warning on the screen.[10]Depending on the level of automation provided by the sink (event processing engine) the downstream activity might not be required.
There are three general styles of event processing: simple, stream, and complex. The three styles are often used together in a mature event-driven architecture.[10]
Simple event processing concerns events that are directly related to specific, measurable changes of condition. In simple event processing, a notable event happens which initiates downstream action(s). Simple event processing is commonly used to drive the real-time flow of work, thereby reducing lag time and cost.[10]
For example, simple events can be created by a sensor detecting changes in tire pressures or ambient temperature. Incorrect tire pressure will generate a simple event from the sensor that triggers a yellow light advising the driver about the state of the tire.
Inevent stream processing(ESP), both ordinary and notable events happen. Ordinary events (orders, RFID transmissions) are screened for notability and streamed to information subscribers. Event stream processing is commonly used to drive the real-time flow of information in and around the enterprise, which enables in-time decision making.[10]
Complex event processing(CEP) allows patterns of simple and ordinary events to be considered to infer that a complex event has occurred. Complex event processing evaluates a confluence of events and then takes action. The events (notable or ordinary) may cross event types and occur over a long period of time. The event correlation may be causal, temporal, or spatial. CEP requires the employment of sophisticated event interpreters, event pattern definition and matching, and correlation techniques. CEP is commonly used to detect and respond to business anomalies, threats, and opportunities.[10]
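The following toy Python sketch shows the flavor of a CEP rule that correlates events over time: it infers a complex event when several simple events for the same key fall within a sliding time window. The event shape, threshold, and window are invented for illustration; real CEP engines use declarative pattern languages rather than hand-written loops.

```python
# Toy complex-event-processing rule: infer a "possible attack" complex
# event when three or more failed logins for the same account occur
# within 60 seconds. Thresholds and event shapes are assumptions.

from collections import defaultdict, deque

WINDOW_S, THRESHOLD = 60.0, 3
recent: dict[str, deque] = defaultdict(deque)  # account -> failure timestamps

def on_simple_event(account: str, timestamp: float) -> None:
    q = recent[account]
    q.append(timestamp)
    while q and timestamp - q[0] > WINDOW_S:  # drop events outside the window
        q.popleft()
    if len(q) >= THRESHOLD:
        print(f"complex event: possible attack on {account!r}")

for t in (0.0, 10.0, 25.0):
    on_simple_event("alice", t)   # the third failure fires the complex event
```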
Online event processing(OLEP) uses asynchronous distributed event logs to process complex events and manage persistent data.[11]OLEP allows reliably composing related events of a complex scenario across heterogeneous systems. It thereby enables very flexible distribution patterns with high scalability and offers strong consistency. However, it cannot guarantee upper bounds on processing time.
An event-driven architecture is extremely loosely coupled and well distributed. The great distribution of this architecture exists because an event can be almost anything and exist almost anywhere. The architecture is extremely loosely coupled because the event itself doesn't know about the consequences of its cause. e.g. If we have an alarm system that records information when the front door opens, the door itself doesn't know that the alarm system will add information when the door opens, just that the door has been opened.[10]
Event-driven architectures have loose coupling within space, time and synchronization, providing a scalable infrastructure for information exchange and distributed workflows. However, event-architectures are tightly coupled, via event subscriptions and patterns, to the semantics of the underlying event schema and values. The high degree of semantic heterogeneity of events in large and open deployments such as smart cities and the sensor web makes it difficult to develop and maintain event-based systems. In order to address semantic coupling within event-based systems the use of approximatesemantic matchingof events is an active area of research.[12]
Synchronous transactions in EDA can be achieved through usingrequest-responseparadigm and it can be implemented in two ways:[1]
Event driven architecture is susceptible to thefallacies of distributed computing, a series of misconceptions that can lead to significant issues in software development and deployment.[1]
Finding the right balance in the number of events can be quite difficult. Generating too many detailed events can overwhelm the system, making it hard to analyze the overall event flow effectively. This challenge becomes even greater when rollbacks are required. Conversely, if events are overly consolidated, it can lead to unnecessary processing and responses from event consumers. To achieve an optimal balance, Mark Richards recommends considering the impact of each event and whether consumers need to review the event payloads to determine their actions. For instance, in a compliance check scenario, it may be adequate to publish just two types of events: compliant and non-compliant. This method ensures that each event is only processed by the relevant consumers, reducing unnecessary workload.[1]
One of the challenges of using event-driven architecture is error handling. One way to address this issue is to use a separate error-handler processor: when an event consumer experiences an error, it immediately and asynchronously sends the erroneous event to the error-handler processor and moves on. The error-handler processor tries to fix the error and sends the event back to the original channel. If the error-handler processor itself fails, it can send the erroneous event to an administrator for further inspection. Note that if you use an error-handler processor, erroneous events will be processed out of sequence when they are resubmitted.[1]
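A minimal sketch of this error-handler workflow, using in-process Python queues in place of a real message broker; the event shapes and the "fix" logic are invented for illustration. The example also demonstrates the out-of-sequence resubmission mentioned above.

```python
import queue
from typing import Optional

events: "queue.Queue[dict]" = queue.Queue()
errors: "queue.Queue[dict]" = queue.Queue()

def handle(event: dict) -> None:
    if "amount" not in event:
        raise ValueError("malformed event")
    print("processed", event)

def try_to_fix(event: dict) -> Optional[dict]:
    # Invented repair logic: supply a default for the missing field.
    return {**event, "amount": 0} if "amount" not in event else None

def consumer() -> None:
    while not events.empty():
        event = events.get()
        try:
            handle(event)
        except Exception:
            errors.put(event)        # hand off asynchronously and move on

def error_handler() -> None:
    while not errors.empty():
        event = errors.get()
        fixed = try_to_fix(event)
        if fixed is not None:
            events.put(fixed)        # resubmit to the original channel
        else:
            print("escalating to administrator:", event)

for e in ({"id": 1, "amount": 10}, {"id": 2}, {"id": 3, "amount": 5}):
    events.put(e)                    # event 2 is malformed on purpose
consumer(); error_handler(); consumer()
# Event 2 is processed after event 3: resubmitted events run out of sequence.
```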
Another challenge of using event-driven architecture is data loss. If any of the components crashes before successfully processing and handing over the event to its next component, the event is dropped and never makes it to its final destination. To minimize the chance of data loss, in-transit events can be persisted and dequeued only when the next component has acknowledged receipt of the event. These features are usually known as "client acknowledge mode" and "last participant support".[1]
|
https://en.wikipedia.org/wiki/Event-driven_architecture
|
Incomputer programming,event-driven programmingis aprogramming paradigmin which theflow of the programis determined by externalevents.UIevents frommice,keyboards,touchpadsandtouchscreens, and externalsensorinputs are common cases. Events may also be programmatically generated, such as frommessages from other programs, notifications from otherthreads, or othernetworkevents.
Event-driven programming is the dominant paradigm used in graphical user interface applications and network servers.
In an event-driven application, there is generally anevent loopthat listens for events and then triggers acallback functionwhen one of those events is detected.
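In outline, such a loop can be as small as the following Python sketch; the event names are invented, and the non-blocking loop is a simplification (a real event loop blocks waiting for the next event rather than draining a queue and exiting).

```python
# A bare-bones event loop with callback registration; illustrative only.

import queue
from typing import Any, Callable

handlers: dict[str, Callable[[Any], None]] = {}
event_queue: "queue.Queue[tuple]" = queue.Queue()

def on(event_name: str, callback: Callable[[Any], None]) -> None:
    """Register a callback to be triggered when the event is detected."""
    handlers[event_name] = callback

def emit(event_name: str, data: Any = None) -> None:
    event_queue.put((event_name, data))

def run() -> None:
    # The event loop: listen for events, trigger the matching callback.
    while not event_queue.empty():
        name, data = event_queue.get()
        if name in handlers:
            handlers[name](data)

on("click", lambda pos: print("button clicked at", pos))
emit("click", (12, 34))
run()
```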
Event-driven programs can be written in anyprogramming language, although the task is easier in languages that providehigh-level abstractions.
Although they do not exactly fit the event-driven model,interrupt handlingandexception handlinghave many similarities.
It is important to differentiate between event-driven and message-driven (aka queue-driven) paradigms: event-driven services (e.g. AWS SNS) are decoupled from their consumers, whereas queue/message-driven services (e.g. AWS SQS) are coupled with their consumers.[1]
Because the event loop for retrieving and dispatching events is common among applications, many programming frameworks take care of its implementation and expect the user to provide only the code for the event handlers.
RPG, an early programming language fromIBM, whose 1960s design concept was similar to event-driven programming discussed above, provided a built-in mainI/Oloop (known as the "program cycle") where the calculations responded in accordance to 'indicators' (flags) that were set earlier in the cycle.
The actual logic is contained in event-handler routines. These routines handle the events to which the main program will respond. For example, a single left-button mouse-click on a command button in aGUIprogram may trigger a routine that will open another window, save data to adatabaseor exit the application. ManyIDEsprovide the programmer with GUI event templates, allowing the programmer to focus on writing the event code.
Keeping track of history is normally trivial in a sequential program. Because event handlers execute in response to external events, correctly structuring the handlers to work when called in any order can require special attention and planning in an event-driven program.
In addition to being written, event handlers need to be bound to events so that the correct function is called when the event takes place. For UI events, many IDEs combine the two steps: double-click on a button, and the editor creates an (empty) event handler associated with the user clicking the button and opens a text window so you can edit the event handler.
Most existing GUI architectures use event-driven programming.[2]Windows has anevent loop. The Java AWT framework processes all UI changes on a single thread, called theEvent dispatching thread. Similarly, all UI updates in the Java frameworkJavaFXoccur on the JavaFX Application Thread.[3]
Most network servers and frameworks such as Node.js are also event-driven.[4]
|
https://en.wikipedia.org/wiki/Event-driven_programming
|
Windows RTis amobile operating systemdeveloped byMicrosoftand released alongsideWindows 8on October 26, 2012. It is a version of Windows 8 orWindows 8.1built for the32-bit ARM architecture(ARMv7),[6]designed to take advantage of the architecture's power efficiency to allow for longer battery life, to usesystem-on-chip(SoC) designs to allow for thinner devices and to provide a "reliable" experience over time. Unlike Windows 8, Windows RT was only available as preloaded software on devices specifically designed for the operating system byoriginal equipment manufacturers(OEMs); Microsoft launched its own hardware running it, theSurfacetablet, which was followed bySurface 2, although only five models running Windows RT were released by third-party OEMs throughout its lifetime.
In comparison to other mobile operating systems, Windows RT also supported a relatively large number of existing USB peripherals and accessories and included a version of Microsoft Office 2013 optimized for ARM devices as pre-loaded software. Compared to Windows 8, however, it had several limitations: it could only execute software digitally signed by Microsoft, lacked certain developer-oriented features, and could not run applications designed for x86 processors, which were the main platform for Windows at the time. Windows RT 8.1 was released in 2013 as a free upgrade, featuring a number of improvements.
It received mixed reviews at launch, while critics and analysts deemed it to be commercially unsuccessful. It was criticized for its poor software ecosystem, citing the early stage ofWindows Storeand its incompatibility with existing Windows software. Some felt Windows RT devices had advantages over other mobile platforms (such asAndroid,iOS, and Microsoft'sWindows Phone) because of its bundled software, and the ability to use a wider variety of USB peripherals and accessories.
Improvements to Intel's mobile processors, along with a decision by Microsoft to remove OEM license fees for Windows on devices with screens smaller than 9 inches, spurred a market for low-end Wintel tablets running the full Windows 8 platform, giving battery life and functionality that met or exceeded that of Windows RT devices; these effectively cannibalized Windows RT sales and were a reason why Microsoft suffered a US$900 million loss in July 2013. With the release of Surface 3 in 2015, the Surface line switched to Intel processors. In 2018, Microsoft would partner with Qualcomm on launching an ARM version of Windows 10; unlike Windows RT, the OS would support running x86 software via emulation.
At the 2011 Consumer Electronics Show, it was officially announced that the next version of Windows would provide support for system-on-chip (SoC) implementations based on the ARM architecture. Steven Sinofsky, then Windows division president, demonstrated an early version of a Windows port for the architecture, codenamed Windows on ARM (WoA), running on prototypes with Qualcomm Snapdragon, Texas Instruments OMAP, and Nvidia Tegra 2 chips. The prototypes featured working versions of Internet Explorer 9 (with DirectX support via the Tegra 2's GPU), PowerPoint and Word, along with the use of class drivers to allow printing to an Epson printer. Sinofsky felt that the shift towards SoC designs was "a natural evolution of hardware that's applicable to a wide range of form factors, not just to slates", while Microsoft CEO Steve Ballmer emphasized the importance of supporting SoCs on Windows by proclaiming that the operating system would "be everywhere on every kind of device without compromise."[7]
Initial development on WoA took place by porting code fromWindows 7;Windows Mobilesmartphoneswere used to test early builds of WoA because of lack of readily available ARM-based tablets. Later testing was performed using a custom-designed array ofrack-mountedARM-based systems.[8]Changes to the Windows codebase were made to optimize the OS for the internal hardware of ARM devices, but a number of technical standards traditionally used by x86 systems are also used. WoA devices would useUEFIfirmware and have a software-basedTrusted Platform Moduleto support device encryption andUEFI Secure Boot.[9]ACPIis also used to detect and controlplug and playdevices and provide power management outside the SoC. To enable wider hardware support, peripherals such ashuman interface devices, storage and other components that useUSBandI²Cconnections use class drivers and standardized protocols.Windows Updateserves as the mechanism for updating all system drivers, software, andfirmware.[8]
Microsoft showcased other aspects of the new operating system, to be known as Windows 8, during subsequent presentations. Among these changes (which also included an overhauled interface optimized for use on touch-based devices built around the Metro design language) was the introduction of Windows Runtime (WinRT). Software developed using this new architecture could be processor-independent (allowing compatibility with both x86- and ARM-based systems),[10] would emphasize the use of touch input, would run within a sandboxed environment to provide additional security, and would be distributed through Windows Store—a store similar to services such as the App Store and Google Play. WinRT was also optimized to provide a more "reliable" experience on ARM-based devices; as such, backward compatibility for Win32 software otherwise compatible with older versions of Windows was intentionally excluded from Windows on ARM. Windows developers indicated that existing Windows applications were not specifically optimized for reliability and energy efficiency on the ARM architecture and that WinRT was sufficient for providing "full expressive power" for applications, "while avoiding the traps and pitfalls that can potentially reduce the overall experience for consumers." Consequently, this lack of backward compatibility would also prevent existing malware from running on the operating system.[8][11]
On April 16, 2012, Microsoft announced that Windows on ARM would be officially branded as Windows RT.[12]Microsoft did not explicitly indicate what the "RT" in the operating system's name referred to, but it was believed to refer to the WinRT architecture.[13]Steven Sinofsky stated that Microsoft would ensure the differences between Windows RT and 8 were adequately addressed in advertising. However, reports found that promotional web pages for theMicrosoft Surfacetablet had contained confusing wording alluding to the compatibility differences and thatMicrosoft Storerepresentatives were providing inconsistent and sometimes incorrect information about Windows RT. In response, Microsoft stated that Microsoft Store staff members would be given an average of 15 hours of training prior to the launch of Windows 8 and Windows RT to ensure that consumers were able to make the correct choice for their needs.[14]The first Windows RT devices were officially released alongside Windows 8 on October 26, 2012.[15]
Windows 8.1, an upgrade for Windows 8 and RT, was released in Windows Store on October 17, 2013, containing a number of improvements to the operating system's interface and functionality. For Windows RT devices, the update also adds Outlook to the included Office RT suite.[16][17][18][19][20] The update was temporarily recalled by Microsoft shortly after its release, following reports that some Surface users had encountered a rare bug which corrupted their device's Boot Configuration Data during installation, resulting in an error on startup.[21][22] On October 21, 2013, Microsoft released recovery media and instructions which could be used to repair the device, and restored access to the Windows 8.1 update the next day.[23][24]
While Windows RT functions similarly to Windows 8, there are still some notable differences, primarily involving software and hardware compatibility.[25]Julie Larson-Green, then executive vice president of the Devices and Studios group at Microsoft, explained that Windows RT was ultimately designed to provide a "closed,turnkey" user experience, "where it doesn't have all the flexibility of Windows, but it has the power ofOfficeand then all the new style applications. So you could give it to your kid and he's not going to load it up with a bunch oftoolbarsaccidentally out ofInternet Explorerand then come to you later and say, 'why am I getting all thesepop-ups?' It just isn't capable of doing that by design."[26][27]
Windows RT does not includeWindows Media Player, in favor of other multimedia apps found on Windows Store; devices are pre-loaded with the in-houseXbox MusicandXbox Videoapps.[25]
All Windows RT devices includeOffice 2013 Home & Student RT—a version ofMicrosoft Officethat is optimized for ARM systems.[28]As the version of Office RT included on Windows RT devices is based on the Home & Student version, it cannot be used for "commercial, nonprofit, or revenue-generating activities" unless the organization has a volume license for Office 2013, or the user has anOffice 365subscription with commercial use rights.[20][29]For compatibility and security reasons, certain advanced features, such asVisual Basic macros, are not available in Office RT.[28]
Windows RT also includes aBitLocker-baseddevice encryptionsystem, which passively encrypts a user's data once they sign in with aMicrosoft account.[30]
Due to the differentarchitectureof ARM-based devices compared to x86 devices, Windows RT has software compatibility limitations. Although the operating system still provides the traditional Windows desktop environment alongside Windows 8's touch-orienteduser interface, the only desktop applications officially supported by Windows RT are those that come with the operating system itself; such asFile Explorer,Internet Explorer, and Office RT. OnlyWindows Store appscan be installed by users on Windows RT devices; they must be obtained fromWindows Storeor sideloaded in enterprise environments. Developers cannotportdesktop applications to run on Windows RT since Microsoft developers felt that they would not be properly optimized for the platform.[10]As a consequence, Windows RT also does not support "new-experience enabled"web browsers: a special class of app used on Windows 8 that allows web browsers to bundle variants that can run in the Windows RT "modern-style user interface" and integrate with other apps but still useWin32code like desktop programs.[31][32]
In a presentation at Windows 8's launch event in New York City, Steven Sinofsky claimed that Windows RT would support 420 million existing hardware devices and peripherals. However, in comparison to Windows 8, full functionality would not be available for all devices, and some devices would not be supported at all.[33] Microsoft provides a "Compatibility Center" portal where users can search for compatibility information on devices with Windows RT; on launch, the site listed just over 30,000 devices that were compatible with the operating system.[34]
While Windows RT devices can join aHomeGroupand access files stored within shared folders and libraries on other devices within the group, files cannot be shared from the Windows RT device itself.[35]Windows RT does not support connecting to adomainfor network logins, nor does it support usingGroup Policyfor device management. However,Exchange ActiveSync, theWindows Intuneservice, orSystem Center Configuration Manager2012 SP1 can be used to provide some control over Windows RT devices in enterprise environments, such as the ability to apply security policies and provide a portal which can be used to sideload apps from outside Windows Store.[36]
After installation of the KB3033055 update for Windows RT 8.1, a desktopStart menubecomes available as an alternative to the Start screen. It is divided into two columns, with one devoted to recent and pinned applications, and one devoted to live tiles.[37][38]It is similar to, but not identical to,Windows 10's version.[38]
Windows RT follows the lifecycle policy of Windows 8 and Windows 8.1. The original Surface tablet fell under Microsoft's support policies for consumer hardware and received mainstream support until April 11, 2017.[39]
Mainstream support for Windows RT (8.0) ended on January 12, 2016. Users must have updated to Windows RT 8.1 which continued receiving support until the dates mentioned below.
Mainstream support for Windows RT 8.1 ended on January 9, 2018, and extended support for Windows RT 8.1 ended on January 10, 2023.[3][4]
Microsoft imposed tight control on the development and production of Windows RT devices: they were designed in cooperation with the company, and built to strict design and hardware specifications, including requirements to only use "approved" models of certain components. To ensure hardware quality and control the number of devices released upon launch, the three participating ARM chip makers were only allowed to partner with up to two PC manufacturers to develop the first "wave" of Windows RT devices in Microsoft's development program.Qualcommpartnered withSamsungandHP,NvidiawithAsusandLenovo, andTexas InstrumentswithToshiba. Additionally, Microsoft partnered with Nvidia to produceSurface(retroactively renamed "Surface RT") – the first Windows-based computing device to be manufactured and marketed directly by Microsoft.[40][41][42]Windows RT was designed to support chips meeting the ARMv7 architecture, a32-bitprocessor platform.[6]Shortly after the original release of Windows RT,ARM Holdingsdisclosed that it was working with Microsoft and other software partners on supporting64-bitAArch64.[43]
Multiple hardware partners pulled out of the program during the development of Windows RT, the first being Toshiba and Texas Instruments. TI later announced that it was pulling out of the consumer market for ARM system-on-chips to focus onembedded systems.[44]HP also pulled out of the program, believing that Intel-based tablets were more appropriate for business use than ARM. HP was replaced byDellas an alternate Qualcomm partner.[45]Aceralso intended to release a Windows RT device alongside its Windows 8-based products, but initially decided to delay it until the second quarter of 2013 in response to the mixed reaction to Surface.[46]The unveiling of the Microsoft-developed tablet caught Acer by surprise, leading to concerns that Surface could leave "a huge negative impact for the [Windows] ecosystem and other brands."[40]
The first wave of Windows RT devices included:
After having planned to produce a Windows RT device close to its launch, Acer's president Jim Wong later indicated that there was "no value" in the current version of the operating system, and would reconsider its plans for future Windows RT products when the Windows 8.1 update was released.[57]On August 9, 2013, Asus announced that it would no longer produce any Windows RT products; chairman Johnny Shih expressed displeasure at the market performance of Windows RT, considering it to be "not very promising".[58][59]During the introduction of its Android and Windows 8-basedVenuetablets in October 2013, Dell's vice president Neil Hand stated that the company had no plans to produce an updated version of the XPS 10.[60]
In September 2013, Nvidia CEOJen-Hsun Huangstated that the company was "working really hard" with Microsoft on developing a second revision of Surface.[61]TheMicrosoft Surface 2tablet, which is powered by Nvidia's quad-coreTegra 4platform and features the same full HD display as theSurface Pro 2, was officially unveiled on September 23, 2013, and released on October 22, 2013, following Windows 8.1 general availability the previous week.[62]On the same day as the Surface 2's release,Nokia(the acquisition of theirmobile businessby Microsoft had just been announced, but not yet been completed) unveiled theLumia 2520, a Windows RT tablet with a Qualcomm Snapdragon 800 processor,4G LTE, and a design similar to itslineofWindows Phoneproducts.[63]An LTE-capable version of the Surface 2 was made available the following year.[64]
In January 2015, after its stock sold out onMicrosoft Storeonline, Microsoft confirmed that it had discontinued further production of the Surface 2 to focus on Surface Pro products.[65]Microsoft ended production of the Lumia 2520 the following month, ending active production of Windows RT devices after just over two years of general availability.[66]With the end of production for both Surface 2 and Lumia 2520, Microsoft and its subsidiaries no longer manufacture any Windows RT devices.[65][66]
Microsoft originally developed a "mini" version of its Surface tablet later known asSurface Miniand had planned to unveil it alongside theSurface Pro 3in May 2014; it was reportedly cancelled at the last minute.[67]Images of the product were leaked in June 2017, revealing specifications such as a Qualcomm Snapdragon 800, an 8-inch display, and support for theSurface Peninstead of akeyboard attachment.[68]
In July 2016, an image depicting a number of cancelled Nokia-branded Lumia devices was released, depicting a prototype for a second Nokia tablet known as the Lumia 2020.[69]Details revealed in September 2017 showed the product to have an 8.3-inch display and the same Snapdragon 800 chip as that of the Surface "mini" tablet.[70]
Windows RT's launch devices received mixed reviews upon their release. In a review of the Asus VivoTab RT byPC Advisor, Windows RT was praised for being a mobile operating system that still offered some PC amenities such as a full-featuredfile manager, but noted its lack of compatibility with existing Windows software, and that it had no proper media player aside from a "shameless, in-your-face conduit toXbox Music."[71]AnandTechbelieved Windows RT was the first "legitimately useful" mobile operating system, owing in part to its multitasking system, bundled Office programs, smooth interface performance, and "decent" support for a wider variety ofUSBdevices in comparison to other operating systems on the ARM architecture. However, the OS was panned for its slow application launch times in comparison to arecent iPad, and spotty driver support for printers. The small number of "quality" apps available on launch was also noted—but considered to be a non-issue, assuming that the app ecosystem would "expand significantly unless somehow everyone stops buying Windows-based systems on October 26th."[25][72]
Reception of the preview release of RT 8.1 was mixed; bothExtremeTechandTechRadarpraised the improvements to the operating system's tablet-oriented interface, along with the addition of Outlook;TechRadar's Dan Grabham believed that the inclusion of Outlook was important because "nobody in their right mind would try and handle work email inside the standard Mail app—it's just not up to the task." However, both experienced performance issues running the beta on theTegra 3-based Surface;ExtremeTechconcluded that "as it stands, we’re still not sure why you would ever opt to buy a Windows RT tablet when there are similarly pricedAtom-powered x86 devices that run the full version of Windows 8."[19][73]
The need to market an ARM-compatible version of Windows was questioned by analysts because of recent developments in the PC industry; both Intel and AMD introduced x86-based system-on-chip designs for Windows 8,Atom "Clover Trail"and"Temash"respectively, in response to the growing competition from ARM licensees. In particular, Intel claimed that Clover Trail-based tablets could provide battery life rivaling that of ARM devices; in a test byPC World, Samsung's Clover Trail-based Ativ Smart PC was shown to have battery life exceeding that of the ARM-based Surface. Peter Bright ofArs Technicaargued that Windows RT had no clear purpose, since the power advantage of ARM-based devices was "nowhere near as clear-cut as it was two years ago", and that users would be better off purchasing Office 2013 themselves because of the removed features and licensing restrictions of Office RT.[72][74][75]
Windows RT was also met with lukewarm reaction from manufacturers; in June 2012,Hewlett-Packardcanceled its plans to release a Windows RT tablet, stating that its customers felt Intel-based tablets were more appropriate for use in business environments. In January 2013, Samsung cancelled the American release of its Windows RT tablet, theAtiv Tab, citing the unclear positioning of the operating system, "modest" demand for Windows RT devices, plus the effort and investment required to educate consumers on the differences between Windows 8 and RT as reasons for the move. Mike Abary, senior vice president of Samsung's U.S. PC and tablet businesses, also stated that the company was unable to build the Ativ Tab to meet its target price point—considering that lower cost was intended to be a selling point for Windows RT devices.[54]Nvidia CEOJen-Hsun Huangexpressed disappointment over the market performance of Windows RT, but called on Microsoft to continue increasing its concentration on the ARM platform. Huang also commented on the exclusion of Outlook from the Office 2013 suite included on the device and suggested that Microsoft port the software for RT as well (in response to public demand, Microsoft announced the inclusion of Outlook with future versions of Windows RT in June 2013).[20][76]In May 2013, reports surfaced thatHTChad scrapped plans to produce a 12-inch Windows RT tablet as it would cost too much to produce, and that there would be greater demand for smaller devices.[77]
The poor demand resulted in price cuts for various Windows RT products; in April 2013 the price of Dell's XPS 10 fell from US$450 to US$300, and Microsoft began offering free covers for its Surface tablet in some territories as a limited-time promotion—itself a US$130 value for the Type Cover alone.[78][79] Microsoft also reportedly reduced the cost of Windows RT licenses for devices with smaller screens, hoping that this could spur interest in the platform.[80] In July 2013, Microsoft cut the price of the first-generation Surface worldwide by 30%, with its U.S. price falling to $350. Concurrently, Microsoft reported a loss of US$900 million due to the lackluster sales of the device.[81][82][83][84][85] In August 2013, Dell silently pulled the option to purchase the XPS 10 from its online store without a keyboard dock (raising its price back up to US$479), and pulled the device entirely in September 2013.[51][86] Microsoft's "fire sale" of the Surface RT did result in a slight increase of market share; in late August 2013, usage data from the advertising network AdDuplex (which provides advertising services within Windows Store apps) revealed that Surface usage had increased from 6.2 to 9.8%.[87]
In contrast to Windows 8 (where the feature had to be enabled by default on OEM devices, but remain user-configurable), Microsoft requires all Windows RT devices to haveUEFI Secure Bootpermanently enabled, preventing the ability to run alternative operating systems on them. Tom Warren ofThe Vergestated that he would have preferred Microsoft to "keep a consistent approach across ARM and x86, though, not least because of the number of users who'd love to runAndroidalongside Windows 8 on their future tablets", but noted that the decision to impose such restrictions was in line with similar measures imposed by other mobile operating systems, including recent Android devices and Microsoft's ownWindows Phonemobile platform.[9][88][89][90]
The requirement to obtain most software on Windows RT through Windows Store was considered to be similar in nature to the application stores on other "closed" mobile platforms; where only software certified under guidelines issued by the vendor (i.e. Microsoft) can be distributed in the store.[91]Microsoft was also criticized by the developers of theFirefoxweb browser for effectively preventing the development of third-party web browsers for Windows RT (and thus forcing use of its own Internet Explorer browser) by restricting the development of desktop applications and by not providing the same APIs and exceptions available on Windows 8 to code web browsers that can run as apps.[10][32]However, theEuropean Union, in response to a complaint about the restrictions in relation to anantitrust caseinvolving Microsoft, ruled that "so far, there are no grounds to pursue further investigation on this particular issue." As mandated by the EU, theBrowserChoice.euservice is still included in Windows 8.[92]
In January 2013, aprivilege escalationexploit was discovered in the Windows kernel that can allow unsigned code to run under Windows RT; the exploit involved the use of aremote debuggingtool (provided by Microsoft to debugWinRTapps on Windows RT devices) to execute code which changes thesigning levelstored inRAMto allow unsigned code to execute (by default, it is set to a level that only allows code signed by Microsoft to execute).[93]Alongside his explanation of the exploit, the developer also included a personal appeal to Microsoft urging them to remove the restrictions on Windows RT devices, contending that their decision was not for technical reasons, and that the devices would be more valuable if this functionality were available.[94]In a statement, a Microsoft spokesperson applauded the effort, indicating that the exploit does not pose a security threat because it requires administrative access to the device, advanced techniques, and would still require programs to be re-compiled for ARM. However, Microsoft has still indicated that the exploit would be patched in a future update.[95]
Abatch file-based tool soon surfaced onXDA Developersto assist users in the process of performing the exploit, and a variety of ported desktop applications began to emerge, such as theemulatorBochs,PuTTYandTightVNC.[93][96][97][98]Afterwards, an emulator known as "Win86emu" surfaced, allowing users to run x86 software on a jailbroken Windows RT device. However, it does not support all Windows APIs, and runs programs slower than they would on a native system.[99]
In November 2013, speaking about Windows RT at the UBS Global Technology Conference,Julie Larson-Greenmade comments discussing the future of Microsoft's mobile strategy surrounding the Windows platform. Larson-Green stated that in the future (accounting for Windows, Windows RT, andWindows Phone), Microsoft was "[not] going to have three [mobile operating systems]." The fate of Windows RT was left unclear by her remarks; industry analysts interpreted them as signs that Microsoft was preparing to discontinue Windows RT due to its poor adoption, while others suggested that Microsoft was planning to unify Windows with Windows Phone.[26][27]Microsoft ultimately announced its "Universal Windows Apps" platform at Build 2014, which would allow developers to create WinRT apps for Windows, Windows Phone, andXbox Onethat share common codebases.[100][101][102][103]These initiatives were compounded by a goal forWindows 10to unify the core Windows operating system across all devices.[104]
Critics interpreted Microsoft's move to cancel the launch of a smaller Surface model in May 2014 as a further sign that Microsoft, under new CEO Satya Nadella and new device head Stephen Elop (who joined Microsoft upon the purchase of Nokia's mobile phone business in September 2013,[105] only to depart the company the following year[106]), was planning to further downplay Windows RT, given that the company had shifted its attention towards a higher-end, productivity-oriented market with the Pro 3—one which would be inappropriate for Windows RT given its positioning and limitations. Analysts believed that Microsoft was planning to leverage its acquisition of Nokia's device business for future Windows RT devices, possibly under the Lumia brand.[107][108][109]
On January 21, 2015, Microsoft unveiledWindows 10 Mobile, an edition of Windows 10 for smartphones and sub-8-inch tablets running on ARM architecture; unlike RT, which was based upon the user experience of the PC version, Windows 10 on these devices is a continuation of the Windows Phone user experience that emphasizes the ability for developers to create "universal" Windows apps that canrun across PCs, tablets, and phones, and only supports the modern-style interface and Windows apps (although on compatible devices, a limited desktop experience will be available when connected to an external display).[110][111][112][113]Following the event, a Microsoft spokesperson stated that the company was working on a Windows RT update that would provide "some of the functionality of Windows 10",[114][115]and the company ended production of both the Surface 2 and Lumia 2520.[66]
Microsoft's purchase of Nokia ultimately turned out to be a failure,[116]and Microsoft would eventually leave the consumer mobile phone market,[117]selling its assets toFoxconnandHMD Globalin May 2016.[118]
Newer Intel processors for mobile devices were more competitive in comparison to ARM equivalents in regards to performance and battery life; this factor and other changes made by Microsoft, such as the removal of Windows OEM license fees on devices with screens less than 9 inches in size,[119]spurred the creation of a market for lower-end tablets running the full Windows 8 operating system on Intel-compatible platforms, leaving further uncertainty over Microsoft's support of ARM outside of smartphones—where they remain ubiquitous.[104][120]Such a device came in March 2015, when Microsoft unveiled a new low-end Surface model, theIntel Atom-basedSurface 3; unlike previous low-end Surface models, Surface 3 did not use ARM and Windows RT.[121]It was succeeded in 2018 by thePentium GoldSurface Go.[122]
Windows 8.1 RT Update 3 (KB3033055)[37][123][38]was released on September 16, 2015;[38][124][125]it adds a version of the updated Start menu seen in early preview versions of Windows 10 (which combines an application list with a sidebar of tiles),[38]but otherwise does not contain any other significant changes to the operating system or its functionality, nor any support for Windows 10's application ecosystem.[38]The Vergecharacterized this update as being similar toWindows Phone 7.8—which similarlybackporteduser interface changes fromWindows Phone 8(which switched from aWindows Mobile-derived platform to one derived from the NT kernel), without making any other significant upgrades to the platform.[126][127]
On December 7, 2016, Microsoft announced that as part of a partnership with Qualcomm, it planned to launch an ARM version of Windows 10 for Snapdragon-based devices, initially focusing on laptops. Unlike Windows RT, the ARM version of Windows 10 supports using an emulation layer to run software compiled for 32-bit x86 architectures.[128]The following year, Microsoft announced theAlways Connected PCbrand, covering Windows 10 devices with cellular connectivity; the launch featured two Snapdragon 835-powered 2-in-1 laptops from Asus and HP, and an integration of Qualcomm'sSnapdragon X16gigabit LTE modem with AMD'sRyzen Mobileplatform.[129][130]Windows 11would additionally add support for64-bitx86 emulation.[131]
On May 2, 2017, Microsoft unveiledWindows 10 S, an edition of Windows 10 designed primarily for low-end mobile devices targeting the education market (competing primarily withGoogle's Linux-basedChromeOS). Similarly to Windows RT, it restricted software installation to applications obtained via Windows Store.[132][133][134][135]Windows 10 S was replaced by S Mode, a mode in which manufacturers can ship Windows 10 computers with the same restrictions, but they can be turned off by the user.[136]
|
https://en.wikipedia.org/wiki/Always_On,_Always_Connected
|
Incomputer networking,Energy-Efficient Ethernet(EEE) is a set of enhancements totwisted-pair,twinaxial,backplane, andoptical fiberEthernet physical-layer variantsthat reduce power consumption during periods of low data activity.[1]The intention is to reduce power consumption by at least half, while retaining full compatibility with existing equipment.[2]
TheInstitute of Electrical and Electronics Engineers(IEEE), through theIEEE 802.3aztask force, developed the standard. The first study group had its call for interest in November 2006, and the official standards task force was authorized in May 2007.[3]The IEEE ratified the final standard in September 2010.[4]Some companies introduced technology to reduce the power required for Ethernet before the standard was ratified, using the nameGreen Ethernet.
Some energy-efficient switchintegrated circuitswere developed before the IEEE 802.3az Energy-Efficient Ethernet standard was finalized.[5][6]
In 2005, all thenetwork interface controllersin the United States (in computers, switches, and routers) used an estimated 5.3 terawatt-hours of electricity.[7]According to a researcher at theLawrence Berkeley Laboratory, Energy-Efficient Ethernet can potentially save an estimatedUS$450million a year in energy costs in the US. Most of the savings would come from homes (US$200million) and offices (US$170million), and the remainingUS$80million from data centers.[8]
The power reduction is accomplished in a few ways. In Fast Ethernet and faster links, constant and significant energy is used by the physical layer, as transmitters are active regardless of whether data is being sent. If they could be put into sleep mode when no data is being sent, that energy could be saved.[8] When the controlling software or firmware decides that no data needs to be sent, it can issue a low-power idle (LPI) request to the Ethernet controller's physical layer (PHY). The PHY then sends LPI symbols onto the link for a specified time and disables its transmitter. Refresh signals are sent periodically to maintain link signaling integrity. When there is data to transmit, a normal IDLE signal is sent for a predetermined period of time. The data link is considered to be always operational, as the receive signal circuit remains active even when the transmit path is in sleep mode.[9]
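As a rough illustration of where those savings come from, the sketch below models a PHY that draws full power only while transmitting and a small residual power while in LPI (covering the periodic refresh signals). The power figures and the simple duty-cycle model are illustrative assumptions, not values from the 802.3az standard.

```python
def avg_phy_power(utilization, p_active=1.0, p_lpi=0.1):
    """Mean PHY power on an EEE link.

    utilization: fraction of time the transmitter is active (0..1).
    p_active:    power while transmitting (illustrative value, watts).
    p_lpi:       power in low-power idle, including the periodic
                 refresh signals (illustrative value, watts).
    """
    return utilization * p_active + (1.0 - utilization) * p_lpi

# A lightly loaded link spends most of its time in LPI:
print(round(avg_phy_power(0.05), 3))  # 0.145 W, versus 1.0 W with no sleep mode
```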
Green Ethernet technology was a superset of the 802.3az standard. In addition to the link load power savings of Energy-Efficient Ethernet, Green Ethernet works in one of two ways. First, it detects link status, allowing each port on the switch to power down into a standby mode when a connected device, such as a computer, is not active. Second, it detects cable length and adjusts the power used for transmission accordingly. Standard switches provide enough power to send a signal up to 100 meters (330 ft).[10]However, this is often unnecessary in the SOHO environment, where 5 to 10 meters (16 to 33 ft) of cabling are typical between rooms. Moreover, small data centers can also benefit from this approach since the majority of cabling is confined to a single room with a few meters of cabling among servers and switches. In addition to the pure power-saving benefits of Green Ethernet, backing off the transmit power on shorter cable runs reduces alien crosstalk and improves the overall performance of the cabling system.
Green Ethernet also encompasses the use of more efficient circuitry in Ethernet chips, and the use ofoffload engineson Ethernet interface cards intended for network servers.[6]In April 2008, the term was used for switches, and, in July 2008, used with wireless routers that featured user-selectable off periods forWi-Fito further reduce energy consumption.[11]
Power savings of up to 80 percent were projected for Green Ethernet switches,[12] translating into a longer product life due to reduced heat.[13]
|
https://en.wikipedia.org/wiki/Energy-Efficient_Ethernet
|
TCP offload engine(TOE) is a technology used in somenetwork interface cards(NIC) tooffloadprocessing of the entireTCP/IPstack to the network controller. It is primarily used with high-speed network interfaces, such asgigabit Ethernetand10 Gigabit Ethernet, where processing overhead of the network stack becomes significant.
TOEs are often used[1]as a way to reduce the overhead associated withInternet Protocol(IP) storage protocols such asiSCSIandNetwork File System(NFS).
Originally TCP was designed for unreliable low-speed networks (such as early dial-up modems), but with the growth of the Internet in terms of backbone transmission speeds (using Optical Carrier, Gigabit Ethernet and 10 Gigabit Ethernet links) and faster and more reliable access mechanisms (such as DSL and cable modems), it is frequently used in data centers and desktop PC environments at speeds of over 1 gigabit per second. At these speeds the TCP software implementations on host systems require significant computing power. In the early 2000s, full-duplex gigabit TCP communication could consume more than 80% of a 2.4 GHz Pentium 4 processor,[2] leaving few or no processing resources for the applications to run on the system.
TCP is aconnection-oriented protocolwhich adds complexity and processing overhead. These aspects include:
Moving some or all of these functions to dedicated hardware, a TCP offload engine, frees the system's mainCPUfor other tasks.
A generally accepted rule of thumb is that 1 Hertz of CPU processing is required to send or receive1bit/sof TCP/IP.[2]For example, 5 Gbit/s (625 MB/s) of network traffic requires 5 GHz of CPU processing. This implies that 2 entire cores of a 2.5 GHzmulti-core processorwill be required to handle the TCP/IP processing associated with 5 Gbit/s of TCP/IP traffic. Since Ethernet (10GE in this example) is bidirectional, it is possible to send and receive 10 Gbit/s (for an aggregate throughput of 20 Gbit/s). Using the 1 Hz/(bit/s) rule this equates to eight 2.5 GHz cores.
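The rule of thumb turns directly into a core-count estimate; a minimal sketch of the arithmetic used above:

```python
import math

def cores_needed(throughput_bps, core_hz, hz_per_bps=1.0):
    """Apply the ~1 Hz per bit/s rule of thumb for TCP/IP processing."""
    return math.ceil(throughput_bps * hz_per_bps / core_hz)

# 10 Gbit/s in each direction (20 Gbit/s aggregate) on 2.5 GHz cores:
print(cores_needed(20e9, 2.5e9))  # 8
```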
Many of the CPU cycles used for TCP/IP processing are freed up by TCP/IP offload and may be used by the CPU (usually a server CPU) to perform other tasks such as file system processing (in a file server) or indexing (in a backup media server). In other words, a server with TCP/IP offload NICs can do more server work than one without.
In addition to the protocol overhead that TOE can address, it can also address some architectural issues that affect a large percentage of host based (server and PC) endpoints.
Many older endpoint hosts are PCI bus based; PCI provides a standard interface for the addition of certain peripherals, such as network interfaces, to servers and PCs.
PCI is inefficient for transferring small bursts of data from main memory, across the PCI bus, to the network interface ICs, but its efficiency improves as the data burst size increases. Within the TCP protocol, a large number of small packets are created (e.g. acknowledgements), and as these are typically generated on the host CPU and transmitted across the PCI bus and out the network physical interface, this impacts the host computer's I/O throughput.
A TOE, which resides on the network interface on the far side of the PCI bus from the host CPU, addresses this I/O efficiency issue: data to be sent across the TCP connection can be handed to the TOE across the PCI bus in large bursts, and none of the smaller TCP packets need to traverse the bus.
One of the first patents in this technology, for UDP offload, was issued toAuspex Systemsin early 1990.[3]Auspex founder Larry Boucher and a number of Auspex engineers went on to found Alacritech in 1997 with the idea of extending the concept of network stack offload to TCP and implementing it in custom silicon. They introduced the first parallel-stack full offload network card in early 1999; the company's SLIC (Session Layer Interface Card) was the predecessor to its current TOE offerings. Alacritech holds a number of patents in the area of TCP/IP offload.[4]
By 2002, as the emergence of TCP-based storage such asiSCSIspurred interest, it was said that "At least a dozen newcomers, most founded toward the end of the dot-com bubble, are chasing the opportunity for merchant semiconductor accelerators for storage protocols and applications, vying with half a dozen entrenched vendors and in-house ASIC designs."[5]
In 2005Microsoftlicensed Alacritech's patent base and along with Alacritech created the partial TCP offload architecture that has become known as TCP chimney offload. TCP chimney offload centers on the Alacritech "Communication Block Passing Patent". At the same time, Broadcom also obtained a license to build TCP chimney offload chips.
Instead of replacing the TCP stack with a TOE entirely, there are alternative techniques to offload some operations in co-operation with the operating system's TCP stack.TCP checksum offloadandlarge segment offloadare supported by the majority of today's Ethernet NICs. Newer techniques likelarge receive offloadand TCP acknowledgment offload are already implemented in some high-end Ethernet hardware, but are effective even when implemented purely in software.[6][7]
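For context, the work that TCP checksum offload moves into the NIC is the standard Internet one's-complement checksum (RFC 1071). A minimal software version, of the kind a host stack would otherwise run per packet, might look like this:

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement checksum over 16-bit words, per RFC 1071."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

print(hex(internet_checksum(b"\x45\x00\x00\x28")))  # 0xbad7
```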
Parallel-stack full offload gets its name from the concept of two parallel TCP/IP Stacks. The first is the main host stack which is included with the host OS. The second or "parallel stack" is connected between theApplication Layerand theTransport Layer (TCP)using a "vampire tap". The vampire tap intercepts TCP connection requests by applications and is responsible for TCP connection management as well as TCP data transfer. Many of the criticisms in the following section relate to this type of TCP offload.
HBA (Host Bus Adapter) full offload is found in iSCSIhost adapterswhich present themselves as disk controllers to the host system while connecting (via TCP/IP) to aniSCSIstorage device. This type of TCP offload not only offloads TCP/IP processing but it also offloads the iSCSI initiator function. Because the HBA appears to the host as a disk controller, it can only be used with iSCSI devices and is not appropriate for general TCP/IP offload.
TCP chimney offload addresses the major security criticism of parallel-stack full offload. In partial offload, the main system stack controls all connections to the host. After a connection has been established between the local host (usually a server) and a foreign host (usually a client) the connection and its state are passed to the TCP offload engine. The heavy lifting of data transmit and receive is handled by the offload device. Almost all TCP offload engines use some type of TCP/IP hardware implementation to perform the data transfer without host CPU intervention. When the connection is closed, the connection state is returned from the offload engine to the main system stack. Maintaining control of TCP connections allows the main system stack to implement and control connection security.
Large receive offload(LRO) is a technique for increasing inboundthroughputof high-bandwidthnetwork connections by reducingcentral processing unit(CPU) overhead. It works by aggregating multiple incomingpacketsfrom a singlestreaminto a larger buffer before they are passed higher up the networking stack, thus reducing the number of packets that have to be processed.Linuximplementations generally use LRO in conjunction with theNew API(NAPI) to also reduce the number ofinterrupts.
According to benchmarks, even implementing this technique entirely in software can increase network performance significantly.[6][7][8] As of April 2007, the Linux kernel supports LRO for TCP in software only. FreeBSD 8 supports LRO in hardware on adapters that support it.[9][10][11][12]
LRO should not operate on machines acting as routers, as it breaks theend-to-end principleand can significantly impact performance.[13][14]
Generic receive offload (GRO) implements a generalised LRO in software that is not restricted to TCP/IPv4 and does not have the issues created by LRO.[15][16]
Incomputer networking,large send offload(LSO) is a technique for increasing egressthroughputof high-bandwidthnetwork connections by reducingCPUoverhead. It works by passing a multipacket buffer to thenetwork interface card(NIC). The NIC then splits this buffer into separate packets. The technique is also calledTCP segmentation offload(TSO) orgeneric segmentation offload(GSO) when applied toTCP. LSO and LRO are independent and use of one does not require the use of the other.
When a system needs to send large chunks of data out over a computer network, the chunks first need to be broken down into smaller segments that can pass through all the network elements, such as routers and switches, between the source and destination computers. This process is referred to as segmentation. Often the TCP protocol in the host computer performs this segmentation. Offloading this work to the NIC is called TCP segmentation offload (TSO).
For example, a unit of 64 KiB (65,536 bytes) of data is usually segmented into 45 segments of at most 1460 bytes each before it is sent through the NIC and over the network. With some intelligence in the NIC, the host CPU can hand over the 64 KiB of data to the NIC in a single transmit-request, and the NIC can break that data down into smaller segments of 1460 bytes, add the TCP, IP, and data link layer protocol headers, according to a template provided by the host's TCP/IP stack, to each segment, and send the resulting frames over the network. This significantly reduces the work done by the CPU. As of 2014, many new NICs on the market support TSO.
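The splitting step itself is simple MSS-sized slicing; a sketch of what a TSO-capable NIC does with the example buffer (headers omitted, and the function name is ours, not a real driver API):

```python
def tso_segment(payload: bytes, mss: int = 1460):
    """Split a large send buffer into MSS-sized pieces, as a
    TSO-capable NIC would before adding per-segment headers."""
    return [payload[i:i + mss] for i in range(0, len(payload), mss)]

segments = tso_segment(bytes(65536))
print(len(segments), len(segments[0]), len(segments[-1]))  # 45 1460 1296
```

Note that only the first 44 segments are full-sized; the 45th carries the remaining 1296 bytes.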
Some network cards implement TSO generically enough that it can be used for offloading fragmentation of othertransport layerprotocols, or for doingIP fragmentationfor protocols that don't support fragmentation by themselves, such asUDP.
Unlike other operating systems, such as FreeBSD, the Linux kernel does not include support for TOE (not to be confused with other types of network offload).[17]While there are patches from the hardware manufacturers such asChelsioorQlogicthat add TOE support, the Linux kernel developers are opposed to this technology for several reasons:[18]
Much of the current work on TOE technology is by manufacturers of 10 Gigabit Ethernet interface cards, such as Broadcom, Chelsio Communications, Emulex, Mellanox Technologies, and QLogic.
|
https://en.wikipedia.org/wiki/TCP_offload_engine
|
A building automation system (BAS), also known as a building management system (BMS) or building energy management system (BEMS), is the automatic centralized control of a building's HVAC (heating, ventilation and air conditioning), electrical, lighting, shading, access control, security systems, and other interrelated systems. Some objectives of building automation are improved occupant comfort, efficient operation of building systems, reduction in energy consumption, reduced operating and maintenance costs, and increased security.
BAS functionality may keep a building's climate within a specified range, provide light to rooms based on occupancy, monitor performance and device failures, and provide malfunction alarms to building maintenance staff. A BAS works to reduce building energy and maintenance costs compared to a non-controlled building. Most commercial, institutional, and industrial buildings built after 2000 include a BAS, whilst older buildings may be retrofitted with a new BAS.
A building controlled by a BAS is often referred to as an "intelligent building",[1]a "smart building", or (if a residence) asmart home. Commercial and industrial buildings have historically relied on robust proven protocols (likeBACnet) while proprietary protocols (likeX-10) were used in homes.
With the advent ofwireless sensor networksand the Internet of Things, an increasing number of smart buildings are resorting to using low-power wireless communication technologies such as Zigbee, Bluetooth Low Energy and LoRa to interconnect the local sensors, actuators and processing devices.[2]
Almost all multi-story green buildings are designed to accommodate a BAS for their energy, air and water conservation characteristics. Electrical device demand response is a typical function of a BAS, as is the more sophisticated ventilation and humidity monitoring required of "tight" insulated buildings. Most green buildings also use as many low-power DC devices as possible. Even a passivhaus design intended to consume no net energy whatsoever will typically require a BAS to manage heat capture, shading and venting, and to schedule device use.
Building management systems are most commonly implemented in large projects with extensive mechanical, HVAC, and electrical systems. Systems linked to a BMS typically represent 40% of a building's energy usage; if lighting is included, this number approaches 70%. BMS systems are a critical component of managing energy demand. Improperly configured BMS systems are believed to account for 20% of building energy usage, or approximately 8% of total energy usage in the United States.[3][4]
In addition to controlling the building's internal environment, BMS systems are sometimes linked to access control (turnstiles and access doors controlling who is allowed access and egress to the building) or other security systems such as closed-circuit television (CCTV) and motion detectors. Fire alarm systems and elevators are also sometimes linked to a BMS for monitoring. If a fire is detected, the fire alarm panel can close dampers in the ventilation system to stop smoke spreading, shut down air handlers, start smoke evacuation fans, and send all the elevators to the ground floor and park them to prevent people from using them.
Building management systems have also included disaster-response mechanisms (such as base isolation) to save structures from earthquakes. In more recent times, companies and governments have been working to find similar solutions for flood zones and coastal areas at risk from rising sea levels. Self-adjusting floating environments draw from existing technologies used to float concrete bridges and runways, such as Washington's SR 520 and Japan's Mega-Float.[5]
Analog inputs are used to read a variable measurement. Examples aretemperature,humidityandpressure sensorswhich could bethermistor,4–20 mA, 0–10voltor platinumresistance thermometer(resistance temperature detector), or wirelesssensors.
A digital input indicates a device is on or off. Some examples of digital inputs would be a door contact switch, a current switch, an air flowswitch, or a voltage-freerelaycontact (dry contact). Digital inputs could also be pulse inputs counting the pulses over a period of time. An example is a turbine flow meter transmitting flow data as a frequency of pulses to an input.
Nonintrusive load monitoring[6] is software relying on digital sensors and algorithms to discover appliances or other loads from the electrical or magnetic characteristics of the circuit; it nevertheless detects events by analog means. Such systems are extremely cost-effective in operation and useful not only for identification but also for detecting start-up transients, line or equipment faults, etc.[7][8]
Analog outputs control the speed or position of a device, such as a variable frequency drive, an I-P (current to pneumatics) transducer, or a valve or damper actuator. An example is a hot water valve opening 25% to maintain a setpoint. Another example is a variable frequency drive ramping up a motor slowly to avoid a hard start.
Digital outputs are used to open and close relays and switches as well as drive a load upon command. An example would be to turn on the parking lot lights when a photocell indicates it is dark outside. Another example would be to open a valve by allowing 24 V DC/AC to pass through the output, powering the valve. Digital outputs could also be pulse-type outputs emitting a frequency of pulses over a given period of time. An example is an energy meter calculating kWh and emitting a frequency of pulses accordingly.
Controllers are essentially small, purpose-built computers with input and output capabilities. These controllers come in a range of sizes and capabilities to control devices commonly found in buildings, and to control sub-networks of controllers.
Inputs allow a controller to read temperature, humidity, pressure, current flow, air flow, and other essential factors. The outputs allow the controller to send command and control signals to slave devices, and to other parts of the system. Inputs and outputs can be either digital or analog. Digital outputs are also sometimes called discrete, depending on the manufacturer.
Controllers used for building automation can be grouped into three categories: programmable logic controllers (PLCs), system/network controllers, and terminal unit controllers. However, an additional device may also be used to integrate third-party systems (e.g. a stand-alone AC system) into a central building automation system.
Terminal unit controllers usually are suited for control of lighting and/or simpler devices such as a package rooftop unit, heat pump, VAV box, fan coil, etc. The installer typically selects one of the available pre-programmed personalities best suited to the device to be controlled, and does not have to create new control logic.
Occupancy is one of two or more operating modes for a building automation system; unoccupied, morning warmup, and night-time setback are other common modes.
Occupancy is usually based on time-of-day schedules. In occupancy mode, the BAS aims to provide a comfortable climate and adequate lighting, often with zone-based control so that users on one side of a building have a different thermostat (or a different system, or sub-system) than users on the opposite side.
A temperature sensor in the zone provides feedback to the controller, so it can deliver heating or cooling as needed.
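A minimal sketch of such a zone loop, here as simple on/off control with a deadband so equipment does not cycle on every small fluctuation (the setpoint and deadband values are illustrative assumptions):

```python
def zone_command(zone_temp, setpoint=21.0, deadband=0.5):
    """On/off command for one zone: heat below the band, cool above it.

    The deadband keeps equipment from short-cycling around the
    setpoint (all values in degrees Celsius).
    """
    if zone_temp < setpoint - deadband:
        return "heat"
    if zone_temp > setpoint + deadband:
        return "cool"
    return "off"

print(zone_command(19.8))  # heat
print(zone_command(21.2))  # off
```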
If enabled, morning warmup (MWU) mode occurs prior to occupancy. During morning warmup the BAS tries to bring the building tosetpointjust in time for occupancy. The BAS often factors in outdoor conditions and historical experience to optimize MWU. This is also referred to asoptimized start.
Some buildings rely onoccupancy sensorsto activate lighting or climate conditioning. Given the potential for long lead times before a space becomes sufficiently cool or warm, climate conditioning is not often initiated directly by an occupancy sensor.
Lighting can be turned on, off, or dimmed with a building automation or lighting control system based on time of day, or on occupancy sensors, photosensors and timers.[9] One typical example is to keep the lights in a space on for half an hour after the last motion is sensed. A photocell placed outside a building can sense darkness and, together with the time of day, modulate lights in outer offices and the parking lot.
Lighting is also a good candidate fordemand response, with many control systems providing the ability to dim (or turn off) lights to take advantage of DR incentives and savings.
In newer buildings, the lighting control can be based on the field bus Digital Addressable Lighting Interface (DALI). Lamps with DALI ballasts are fully dimmable. DALI can also detect lamp and ballast failures on DALI luminaires and signal these failures.
Shading and glazing are essential components in the building system; they affect occupants' visual, acoustical, and thermal comfort and provide the occupant with a view of the outdoors.[10] Automated shading and glazing systems are solutions for controlling solar heat gains and glare.[11] The term refers to the use of technology to control external or internal shading devices (such as blinds and shades) or the glazing itself. The system responds actively and rapidly to changing outdoor data (such as solar and wind conditions) and to the changing interior environment (such as temperature, illuminance, and occupant demands). Building shading and glazing systems can contribute to thermal and lighting improvement from both an energy conservation and a comfort point of view.
Dynamic shading devices allow the control of daylight and solar energy to enter into built environment in relation to outdoor conditions, daylighting demands and solar positions.[12]The common products includevenetian blinds,roller shades,louvers, and shutters.[13]They are mostly installed on the interior side of the glazing system because of the low maintenance cost, but also can be used on the exterior or a combination of both.[14]
Mostair handlersmix return and outside air so less temperature/humidity conditioning is needed. This can save money by using less chilled or heated water (not all AHUs use chilled or hot water circuits). Some external air is needed to keep the building's air healthy. To optimizeenergy efficiencywhile maintaining healthyindoor air quality(IAQ),demand control (or controlled) ventilation (DCV)adjusts the amount of outside air based on measured levels of occupancy.
Analog or digital temperature sensors may be placed in the space or room, the return and supplyair ducts, and sometimes the external air. Actuators are placed on the hot and chilled water valves, the outside air and return air dampers. The supply fan (and return if applicable) is started and stopped based on either time of day, temperatures, building pressures or a combination.
All modern building automation systems have alarm capabilities. It does little good to detect a potentially hazardous[15]or costly situation if no one who can solve the problem is notified. Notification can be through a computer (email or text message),pager, cellular phone voice call, audible alarm, or all of these. For insurance and liability purposes all systems keep logs of who was notified, when and how.
Alarms may immediately notify someone or only notify when alarms build to some threshold of seriousness or urgency. At sites with several buildings, momentary power failures can cause hundreds or thousands of alarms from equipment that has shut down – these should be suppressed and recognized as symptoms of a larger failure. Some sites are programmed so that critical alarms are automatically re-sent at varying intervals. For example, a repeating critical alarm (of anuninterruptible power supplyin 'bypass') might resound at 10 minutes, 30 minutes, and every 2 to 4 hours thereafter until the alarms are resolved.
Security systems can be interlocked to a building automation system.[15] If occupancy sensors are present, they can also be used as burglar alarms. Because security systems are often deliberately sabotaged, at least some detectors or cameras should have battery backup and wireless connectivity, and the ability to trigger alarms when disconnected. Modern systems typically use power-over-Ethernet (which can operate a pan-tilt-zoom camera and other devices drawing up to 30–90 watts), which is capable of charging such batteries and keeps wireless networks free for genuinely wireless applications, such as backup communication during an outage.
Fire alarm panelsand their related smoke alarm systems are usually hard-wired to override building automation. For example: if the smoke alarm is activated, all the outside air dampers close to prevent air coming into the building, and an exhaust system can isolate the blaze. Similarly,electrical fault detectionsystems can turn entire circuits off, regardless of the number of alarms this triggers or persons this distresses.Fossil fuelcombustion devices also tend to have their own over-rides, such asnatural gasfeed lines that turn off when slow pressure drops are detected (indicating a leak), or when excessmethaneis detected in the building's air supply.
Most building automation networks consist of aprimaryandsecondarybuswhich connect high-level controllers (generally specialized for building automation, but may be genericprogrammable logic controllers) with lower-level controllers,input/outputdevices and auser interface(also known as a human interface device).ASHRAE's open protocolBACnetor the open protocolLonTalkspecify how most such devices interoperate. Modern systems useSNMPto track events, building on decades of history with SNMP-based protocols in the computer networking world.
Physical connectivity between devices was historically provided by dedicated optical fiber, Ethernet, ARCNET, RS-232, RS-485 or a low-bandwidth special-purpose wireless network. Modern systems rely on standards-based multi-protocol heterogeneous networking such as that specified in the IEEE 1905.1 standard and verified by the nVoy auditing mark. These typically accommodate only IP-based networking but can make use of any existing wiring, and also integrate powerline networking over AC circuits, power over Ethernet low-power DC circuits, high-bandwidth wireless networks such as LTE, IEEE 802.11n and IEEE 802.11ac, and often integrate these using the building-specific wireless mesh open standard Zigbee.
Proprietary hardwaredominates the controller market. Each company has controllers for specific applications. Some are designed with limited controls and no interoperability, such as simple packaged roof top units for HVAC. Software will typically not integrate well with packages from other vendors. Cooperation is at the Zigbee/BACnet/LonTalk level only.
Current systems provide interoperability at the application level, allowing users to mix-and-match devices from different manufacturers, and to provide integration with other compatible buildingcontrol systems. These typically rely onSNMP, long used for this same purpose to integrate diverse computer networking devices into one coherent network.
With the growing spectrum of capabilities and connections to the Internet of Things, building automation systems have repeatedly been reported to be vulnerable, allowing hackers and cybercriminals to attack their components.[16][17] Buildings can be exploited by hackers to measure or change their environment:[18] sensors allow surveillance (e.g. monitoring movements of employees or habits of inhabitants), while actuators allow actions to be performed in buildings (e.g. opening doors or windows for intruders). Several vendors and committees have started to improve the security features in their products and standards, including KNX, Zigbee and BACnet (see recent standards or standard drafts). However, researchers report several open problems in building automation security.[19][20]
On November 11, 2019, a 132-page security research paper was released titled "I Own Your Building (Management System)" by Gjoko Krstic and Sipke Mellema that addressed more than 100 vulnerabilities affecting various BMS and access control solutions by various vendors.[21]
Room automationis a subset of building automation and with a similar purpose; it is the consolidation of one or more systems under centralized control, though in this case in one room.
The most common examples of room automation are corporate boardrooms, presentation suites, and lecture halls, where the operation of the large number of devices that define the room function (such as videoconferencing equipment, video projectors, lighting control systems, public address systems, etc.) would make manual operation of the room very complex. It is common for room automation systems to employ a touchscreen as the primary way of controlling each operation.
|
https://en.wikipedia.org/wiki/Building_automation
|
Incontrol theory, thecoefficient diagram method(CDM) is analgebraicapproach applied to apolynomialloop in theparameter space. A special diagram called a "coefficient diagram" is used as the vehicle to carry the necessary information and as the criterion of good design.[1]The performance of the closed-loop system is monitored by the coefficient diagram.
The most considerable advantages of CDM can be listed as follows:[2]
It is usually required that the controller for a given plant should be designed under some practical limitations.
The controller is desired to be of minimum degree, minimum phase (if possible) and stable. It must also work within bandwidth and power rating limitations. If the controller is designed without considering these limitations, the robustness property will be very poor, even though the stability and time response requirements are met. A CDM controller designed with all these limitations in mind is of the lowest degree, has a convenient bandwidth, and yields a unit step time response without overshoot. These properties guarantee robustness, sufficient damping of disturbance effects, and low cost.[7]
Although the main principles of CDM have been known since the 1950s,[8][9][10]the first systematic method was proposed byShunji Manabe.[11]He developed a new method that easily builds a target characteristic polynomial to meet the desired time response. CDM is an algebraic approach combining classical and modern control theories and uses polynomial representation in the mathematical expression. The advantages of the classical and modern control techniques are integrated with the basic principles of this method, which is derived by making use of the previous experience and knowledge of the controller design. Thus, an efficient and fertile control method has appeared as a tool with which control systems can be designed without needing much experience and without confronting many problems.
Many control systems have been designed successfully using CDM.[12][13]It is very easy to design a controller under the conditions of stability, time domain performance and robustness. The close relations between these conditions and coefficients of the characteristic polynomial can be simply determined. This means that CDM is effective not only for control system design but also for controller parameters tuning.
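To make the construction concrete, the sketch below builds a target characteristic polynomial from an equivalent time constant τ = a₁/a₀ and stability indices γᵢ = aᵢ²/(aᵢ₊₁aᵢ₋₁), using the standard-form values γ₁ = 2.5, γᵢ = 2 (i ≥ 2) commonly attributed to Manabe. Treat this as an assumed illustration of the method, not a normative recipe from this article's references.

```python
def cdm_target_polynomial(tau, n, a0=1.0, gammas=None):
    """Coefficients a_0..a_n (ascending powers of s) of a CDM
    target characteristic polynomial.

    Built from the equivalent time constant tau = a_1/a_0 and the
    stability indices gamma_i = a_i**2 / (a_{i+1} * a_{i-1}); the
    defaults assume the standard form gamma_1 = 2.5, gamma_i = 2
    for i >= 2, as in common CDM presentations.
    """
    if gammas is None:
        gammas = [2.5] + [2.0] * (n - 2)
    a = [a0, tau * a0]
    for i in range(1, n):
        # Rearranged stability-index definition: a_{i+1} = a_i^2 / (gamma_i * a_{i-1})
        a.append(a[i] ** 2 / (gammas[i - 1] * a[i - 1]))
    return a

print(cdm_target_polynomial(tau=1.0, n=4))  # [1.0, 1.0, 0.4, 0.08, 0.008]
```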
|
https://en.wikipedia.org/wiki/Coefficient_diagram_method
|
Control theoryis a field ofcontrol engineeringandapplied mathematicsthat deals with thecontrolofdynamical systemsin engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing anydelay,overshoot, orsteady-state errorand ensuring a level of controlstability; often with the aim to achieve a degree ofoptimality.
To do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV) and compares it with the reference or set point (SP). The difference between the actual and desired value of the process variable, called the error signal, or SP-PV error, is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is used in control system engineering to design automation systems that have revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as robotics.
Extensive use is usually made of a diagrammatic style known as theblock diagram. In it thetransfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on thedifferential equationsdescribing the system.
Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell.[1] Control theory was further advanced by Edward Routh in 1874, Charles Sturm, and, in 1895, Adolf Hurwitz, who all contributed to the establishment of control stability criteria, and from 1922 onwards by the development of PID control theory by Nicolas Minorsky.[2] Although a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs; thus control theory also has applications in life sciences, computer engineering, sociology and operations research.[3]
Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of thecentrifugal governor, conducted by the physicistJames Clerk Maxwellin 1868, entitledOn Governors.[4]A centrifugal governor was already used to regulate the velocity of windmills.[5]Maxwell described and analyzed the phenomenon ofself-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate,Edward John Routh, abstracted Maxwell's results for the general class of linear systems.[6]Independently,Adolf Hurwitzanalyzed system stability using differential equations in 1877, resulting in what is now known as theRouth–Hurwitz theorem.[7][8]
A notable application of dynamic control was in the area of crewed flight. TheWright brothersmade their first successful test flights on December 17, 1903, and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds.
ByWorld War II, control theory was becoming an important area of research.Irmgard Flügge-Lotzdeveloped the theory of discontinuous automatic control systems, and applied thebang-bang principleto the development ofautomatic flight control equipmentfor aircraft.[9][10]Other areas of application for discontinuous controls includedfire-control systems,guidance systemsandelectronics.
Sometimes, mechanical methods are used to improve the stability of systems. For example,ship stabilizersare fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship.
TheSpace Racealso depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find aninternal modelthat obeys thegood regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract "useful work" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of aregulatorinteracting with aplant.
Fundamentally, there are two types of control loop:open-loop control(feedforward), andclosed-loop control(feedback).
The definition of a closed loop control system according to theBritish Standards Institutionis "a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero."[12]
Aclosed-loop controlleror feedback controller is acontrol loopwhich incorporatesfeedback, in contrast to anopen-loop controllerornon-feedback controller.
A closed-loop controller uses feedback to controlstatesoroutputsof adynamical system. Its name comes from the information path in the system: process inputs (e.g.,voltageapplied to anelectric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured withsensorsand processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop.[14]
In the case of linearfeedbacksystems, acontrol loopincludingsensors, control algorithms, and actuators is arranged in an attempt to regulate a variable at asetpoint(SP). An everyday example is thecruise controlon a road vehicle; where external influences such as hills would cause speed changes, and the driver has the ability to alter the desired set speed. ThePID algorithmin the controller restores the actual speed to the desired speed in an optimum way, with minimal delay orovershoot, by controlling the power output of the vehicle's engine.
Control systems that include some sensing of the results they are trying to achieve are making use of feedback and can adapt to varying circumstances to some extent.Open-loop control systemsdo not make use of feedback, and run only in pre-arranged ways.
Closed-loop controllers have the following advantages over open-loop controllers:
In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termedfeedforwardand serves to further improve reference tracking performance.
A common closed-loop controller architecture is thePID controller.
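A minimal discrete-time sketch of such a controller follows (the gains and time step are illustrative; a production implementation would add output limits and integrator anti-windup):

```python
class PID:
    """Minimal discrete-time PID: u = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement             # the SP - PV error
        self.integral += error * dt                # accumulate for the I term
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# One step of a cruise-control-like loop: 100 km/h target, 96 km/h measured.
pid = PID(kp=1.2, ki=0.5, kd=0.05)
print(pid.update(setpoint=100.0, measurement=96.0, dt=0.1))
```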
The field of control theory can be divided into two branches:
Mathematical techniques for analyzing and designing control systems fall into two different categories:
In contrast to the frequency-domain analysis of the classical control theory, modern control theory utilizes the time-domainstate spacerepresentation,[citation needed]a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With inputs and outputs, we would otherwise have to write down Laplace transforms to encode all the information about a system. Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables. The state of the system can be represented as a point within that space.[17][18]
Control systems can be divided into different categories depending on the number of inputs and outputs.
The scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance rejection using a second input. The system analysis is carried out in the time domain usingdifferential equations, in the complex-s domain with theLaplace transform, or in the frequency domain by transforming from the complex-s domain. Many systems may be assumed to have a second order and single variable system response in the time domain. A controller designed using classical theory often requires on-site tuning due to incorrect design approximations. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory arePID controllers. A less common implementation may include either or both a Lead or Lag filter. The ultimate end goal is to meet requirements typically provided in the time-domain called the step response, or at times in the frequency domain called the open-loop response. The step response characteristics applied in a specification are typically percent overshoot, settling time, etc. The open-loop response characteristics applied in a specification are typically Gain and Phase margin and bandwidth. These characteristics may be evaluated through simulation including a dynamic model of the system under control coupled with the compensation model.
Modern control theory is carried out in thestate space, and can deal with multiple-input and multiple-output (MIMO) systems. This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first orderdifferential equationsdefined usingstate variables.Nonlinear,multivariable,adaptiveandrobust controltheories come under this division. Being fairly new, modern control theory has many areas yet to be explored. Scholars likeRudolf E. KálmánandAleksandr Lyapunovare well known among the people who have shaped modern control theory.
Thestabilityof a generaldynamical systemwith no input can be described withLyapunov stabilitycriteria.
For simplicity, the following descriptions focus on continuous-time and discrete-timelinear systems.
Mathematically, this means that for a causal linear system to be stable all of the poles of its transfer function must have negative real values, i.e. the real part of each pole must be less than zero. Practically speaking, stability requires that the transfer function's complex poles reside in the open left half of the complex plane (for continuous time systems) or inside the unit circle (for discrete time systems).
The difference between the two cases is simply due to the traditional method of plotting continuous time versus discrete time transfer functions. The continuous Laplace transform is in Cartesian coordinates where the x axis is the real axis, and the discrete Z-transform is in circular coordinates where the ρ axis is the real axis.
When the appropriate conditions above are satisfied a system is said to beasymptotically stable; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous time case) or amodulusequal to one (in the discrete time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it ismarginally stable; in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and complex component is zero in the continuous time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero.
If a system in question has an impulse response of

x[n] = 0.5^n u[n],

then the Z-transform (see this example) is given by

X(z) = z / (z − 0.5),

which has a pole at z = 0.5 (zero imaginary part). This system is BIBO (asymptotically) stable since the pole is inside the unit circle.
However, if the impulse response were

x[n] = 1.5^n u[n],

then the Z-transform would be

X(z) = z / (z − 1.5),

which has a pole at z = 1.5 and is not BIBO stable since the pole has a modulus strictly greater than one.
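The same check can be done numerically by computing the poles of each transfer function and comparing their moduli with the unit circle. A small sketch using NumPy, with the two transfer functions from the example above:

```python
import numpy as np

def bibo_stable_discrete(den):
    """True if every pole (root of the denominator polynomial in z)
    lies strictly inside the unit circle."""
    return bool(np.all(np.abs(np.roots(den)) < 1.0))

# X(z) = z / (z - 0.5): pole at z = 0.5, inside the unit circle.
print(bibo_stable_discrete([1.0, -0.5]))  # True
# X(z) = z / (z - 1.5): pole at z = 1.5, outside the unit circle.
print(bibo_stable_discrete([1.0, -1.5]))  # False
```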
Numerous tools exist for the analysis of the poles of a system. These include graphical systems like theroot locus,Bode plotsor theNyquist plots.
Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships useantiroll finsthat extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll.
Controllabilityandobservabilityare main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termedstabilizable. Observability instead is related to the possibility ofobserving, through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable.
From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of theeigenvaluesof the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why sometimes the latter is preferred in dynamical systems analysis.
Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors.
Several different control strategies have been devised in the past years. These vary from extremely general ones (PID controller), to others devoted to very particular classes of systems (especiallyroboticsor aircraft cruise control).
A control problem can have several specifications. Stability, of course, is always present. The controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system, which must normally be avoided. Sometimes it would be desired to obtain particular dynamics in the closed loop: i.e. that the poles have Re[λ] < −λ̄, where λ̄ is a fixed value strictly greater than zero, instead of simply asking that Re[λ] < 0.
Another typical specification is the rejection of a step disturbance; including anintegratorin the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included.
Other "classical" control theory specifications regard the time-response of the closed-loop system. These include therise time(the time needed by the control system to reach the desired value after a perturbation), peakovershoot(the highest value reached by the response before reaching the desired value) and others (settling time, quarter-decay). Frequency domain specifications are usually related torobustness(see after).
Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI).
A control system must always have some robustness property. Arobust controlleris such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations, otherwise, the truesystem dynamicscan be so complicated that a complete model is impossible.
The process of determining the equations that govern the model's dynamics is called system identification. This can be done off-line: for example, executing a series of measures from which to calculate an approximated mathematical model, typically its transfer function or matrix. Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations: for example, in the case of a mass-spring-damper system we know that m ẍ(t) = −K x(t) − B ẋ(t). Even assuming that a "complete" model is used in designing the controller, all the parameters included in these equations (called "nominal parameters") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system with true parameter values away from nominal.
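Under the added assumption (not stated above) of an external force input F(t) acting on the mass, the same physical equation yields the transfer function that an identification procedure would try to fit:

```latex
m\ddot{x}(t) = F(t) - Kx(t) - B\dot{x}(t)
\quad\xrightarrow{\;\mathcal{L},\ x(0)=\dot{x}(0)=0\;}\quad
G(s) = \frac{X(s)}{F(s)} = \frac{1}{ms^{2} + Bs + K}
```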
Some advanced control techniques include an "on-line" identification process (see later). The parameters of the model are calculated ("identified") while the controller itself is running. In this way, if a drastic variation of the parameters ensues, for example, if the robot's arm releases a weight, the controller will adjust itself consequently in order to ensure the correct performance.
Analysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and using Nyquist and Bode diagrams. Topics include gain and phase margin and amplitude margin. For MIMO (multi-input multi-output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section): that is, if particular robustness qualities are needed, the engineer must shift attention to a control technique that includes these qualities in its properties.
A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem:model predictive control(see later), andanti-wind up systems. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold.
For MIMO systems, pole placement can be performed mathematically using astate space representationof the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, all system states are not in general measured and so observers must be included and incorporated in pole placement design.
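SciPy exposes this computation directly; a small sketch placing the poles of an illustrative two-state, single-input system (the matrices and target poles are assumptions chosen for the example, not taken from the text):

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative open-loop system x' = Ax + Bu with an unstable
# open-loop pole (the eigenvalues of A are 1 and -2).
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])

# Feedback matrix K for u = -Kx, placing the closed-loop poles at -2 and -3.
K = place_poles(A, B, [-2.0, -3.0]).gain_matrix
print(K)                              # [[8., 4.]]
print(np.linalg.eigvals(A - B @ K))   # approximately [-2, -3]
```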
Processes in industries likeroboticsand theaerospace industrytypically have strong nonlinear dynamics. In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems. These, e.g.,feedback linearization,backstepping,sliding mode control, trajectory linearization control normally take advantage of results based onLyapunov's theory.Differential geometryhas been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states.[19]
When the system is controlled by multiple controllers, the problem is one of decentralized control. Decentralization is helpful in many ways, for instance, it helps control systems to operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions.
A stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks.
Every control system must guarantee first the stability of the closed-loop behavior. Forlinear systems, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based onAleksandr Lyapunov's Theory) to ensure stability without regard to the inner dynamics of the system. The possibility to fulfill different specifications varies from the model considered and the control strategy chosen.
Many active and historical figures have made significant contributions to control theory.
|
https://en.wikipedia.org/wiki/Control_theory
|
A distributed control system (DCS) is a computerized control system for a process or plant, usually with many control loops, in which autonomous controllers are distributed throughout the system but there is also central operator supervisory control. This is in contrast to systems that use centralized controllers: either discrete controllers located at a central control room or within a central computer. The DCS concept increases reliability and reduces installation costs by localizing control functions near the process plant, with remote monitoring and supervision.
Distributed control systems first emerged in large, high-value, safety-critical process industries, and were attractive because the DCS manufacturer would supply both the local control level and central supervisory equipment as an integrated package, thus reducing design integration risk. Today the functionality of supervisory control and data acquisition (SCADA) and DCS systems is very similar, but DCS tends to be used on large continuous process plants where high reliability and security are important, and the control room is not necessarily geographically remote. Many machine control systems exhibit similar properties to plant and process control systems.[1]
The key attribute of a DCS is its reliability due to the distribution of the control processing around nodes in the system. This mitigates a single processor failure. If a processor fails, it will only affect one section of the plant process, as opposed to a failure of a central computer which would affect the whole process. This distribution of computing power local to the field Input/Output (I/O) connection racks also ensures fast controller processing times by removing possible network and central processing delays.
The accompanying diagram is a general model which shows functional manufacturing levels using computerised control.
Referring to the diagram:
Levels 1 and 2 are the functional levels of a traditional DCS, in which all equipment is part of an integrated system from a single manufacturer.
Levels 3 and 4 are not strictly process control in the traditional sense, but are where production control and scheduling take place.
The processor nodes and operator graphical displays are connected over proprietary or industry-standard networks, and network reliability is increased by dual redundancy cabling over diverse routes. This distributed topology also reduces the amount of field cabling by siting the I/O modules and their associated processors close to the process plant.
The processors receive information from input modules, process the information and decide control actions to be signalled by the output modules. The field inputs and outputs can be analog signals, e.g. 4–20 mA DC current loops, or two-state signals that switch either "on" or "off", such as relay contacts or a semiconductor switch.
DCSs are connected to sensors and actuators and use setpoint control to control the flow of material through the plant. A typical application is a PID controller fed by a flow meter and using a control valve as the final control element. The DCS sends the setpoint required by the process to the controller, which instructs a valve to operate so that the process reaches and stays at the desired setpoint (see the 4–20 mA schematic for an example).
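As a rough illustration of such a loop, the sketch below simulates a flow setpoint tracked by a discrete PID controller driving a valve; the first-order process model, gains, and units are illustrative assumptions, not taken from any particular DCS.

# Minimal discrete PID loop of the kind described above: a flow setpoint,
# a simulated flow measurement, and a valve position as the final control
# element. Process model and gains are illustrative.
def pid_step(sp, pv, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    err = sp - pv
    state["i"] += err * dt
    d = (err - state["e"]) / dt   # derivative uses the previous error
    state["e"] = err
    out = kp * err + ki * state["i"] + kd * d
    return min(max(out, 0.0), 100.0)  # clamp valve position to 0-100 %

flow, valve = 0.0, 0.0
state = {"i": 0.0, "e": 0.0}
for _ in range(200):
    valve = pid_step(sp=50.0, pv=flow, state=state)
    flow += 0.1 * (valve - flow) * 0.5   # first-order process: flow lags valve
print(round(flow, 1))  # settles near the 50.0 setpoint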
Large oil refineries and chemical plants have several thousand I/O points and employ very large DCSs. Processes are not limited to fluidic flow through pipes, however, and can also include things like paper machines and their associated quality controls, variable speed drives and motor control centers, cement kilns, mining operations, ore processing facilities, and many others.
DCSs in very high reliability applications can have dual redundant processors with "hot" switch over on fault, to enhance the reliability of the control system.
Although 4–20 mA has been the main field signalling standard, modern DCSs can also support fieldbus digital protocols such as Foundation Fieldbus, PROFIBUS, HART, Modbus, PC Link, etc.
Modern DCSs also support neural network and fuzzy logic applications. Recent research focuses on the synthesis of optimal distributed controllers, which optimize a certain H-infinity or H2 control criterion.[2][3]
Distributed control systems (DCS) are dedicated systems used in manufacturing processes that are continuous or batch-oriented.
Processes where a DCS might be used include:
Process control of large industrial plants has evolved through many stages. Initially, control would be from panels local to the process plant. However, this required a large amount of human oversight to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently staffed central control room. Effectively this was the centralisation of all the localised panels, with the advantages of lower manning levels and an easier overview of the process. Often the controllers were behind the control room panels, and all automatic and manual control outputs were transmitted back to the plant. However, whilst providing a central control focus, this arrangement was inflexible, as each control loop had its own controller hardware, and continual operator movement within the control room was required to view different parts of the process.
With the coming of electronic processors and graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around the plant and communicate with the graphic displays in the control room or rooms. The distributed control system was born.
The introduction of DCSs allowed easy interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high level overviews of plant status and production levels.
Early minicomputers were used in the control of industrial processes since the beginning of the 1960s. The IBM 1800, for example, was an early computer that had input/output hardware to gather process signals in a plant for conversion from field contact levels (for digital points) and analog signals to the digital domain.
The first industrial control computer system was built in 1959 at the Texaco Port Arthur, Texas, refinery with an RW-300 of the Ramo-Wooldridge Company.[4]
In 1975, both Yamatake-Honeywell[5] and the Japanese electrical engineering firm Yokogawa introduced their own independently produced DCSs: the TDC 2000 and CENTUM systems, respectively. US-based Bristol also introduced their UCS 3000 universal controller in 1975. In 1978, Valmet introduced their own DCS called Damatic (whose latest, web-based generation is Valmet DNAe[6]). In 1980, Bailey (now part of ABB[7]) introduced the NETWORK 90 system, Fisher Controls (now part of Emerson Electric) introduced the PROVoX system, and Fischer & Porter Company (now also part of ABB[8]) introduced DCI-4000 (DCI stands for Distributed Control Instrumentation).
The DCS largely came about due to the increased availability of microcomputers and the proliferation of microprocessors in the world of process control. Computers had already been applied to process automation for some time in the form of both direct digital control (DDC) and setpoint control. In the early 1970s Taylor Instrument Company (now part of ABB) developed the 1010 system, Foxboro the FOX1 system, Fisher Controls the DC2 system and Bailey Controls the 1055 systems. All of these were DDC applications implemented within minicomputers (DEC PDP-11, Varian Data Machines, MODCOMP, etc.) and connected to proprietary input/output hardware. Sophisticated (for the time) continuous as well as batch control was implemented in this way. A more conservative approach was setpoint control, where process computers supervised clusters of analog process controllers. A workstation provided visibility into the process using text and crude character graphics. Availability of a fully functional graphical user interface was still some way off.
Central to the DCS model was the inclusion of control function blocks. Function blocks evolved from early, more primitive DDC concepts of "table-driven" software. One of the first embodiments of object-oriented software, function blocks were self-contained "blocks" of code that emulated analog hardware control components and performed tasks that were essential to process control, such as execution of PID algorithms. Function blocks continue to endure as the predominant method of control for DCS suppliers, and are supported by key technologies such as Foundation Fieldbus[9] today.
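The function-block idea can be sketched as self-contained blocks executed on a scan cycle; the block set, names, and wiring below are illustrative assumptions, not any vendor's actual block library.

# Sketch of the function-block idea: self-contained blocks with inputs and
# outputs, executed once per control cycle. Block types and wiring are
# illustrative.
class Block:
    def scan(self):  # called once per control cycle
        raise NotImplementedError

class AnalogInput(Block):
    def __init__(self, read_fn):
        self.read_fn, self.out = read_fn, 0.0
    def scan(self):
        self.out = self.read_fn()

class HighAlarm(Block):
    def __init__(self, source, limit):
        self.source, self.limit, self.out = source, limit, False
    def scan(self):
        self.out = self.source.out > self.limit

temp = AnalogInput(lambda: 87.5)       # stand-in for a field transmitter
alarm = HighAlarm(temp, limit=85.0)
for block in (temp, alarm):            # one scan of the block chain
    block.scan()
print(alarm.out)                       # True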
Midac Systems, of Sydney, Australia, developed an object-oriented distributed direct digital control system in 1982. The central system ran 11 microprocessors sharing tasks and common memory and connected to a serial communication network of distributed controllers, each running two Z80s. The system was installed at the University of Melbourne.[citation needed]
Digital communication between distributed controllers, workstations and other computing elements (peer to peer access) was one of the primary advantages of the DCS. Attention was duly focused on the networks, which provided the all-important lines of communication that, for process applications, had to incorporate specific functions such as determinism and redundancy. As a result, many suppliers embraced the IEEE 802.4 networking standard. This decision set the stage for the wave of migrations necessary when information technology moved into process automation and IEEE 802.3 rather than IEEE 802.4 prevailed as the control LAN.
In the 1980s, users began to look at DCSs as more than just basic process control. A very early example of a direct digital control DCS was completed by the Australian business Midac in 1981–82 using Australian-designed R-Tec hardware. The system installed at the University of Melbourne used a serial communications network connecting campus buildings back to a control room "front end". Each remote unit ran two Z80 microprocessors, while the front end ran eleven Z80s in a parallel processing configuration with paged common memory to share tasks, and could run up to 20,000 concurrent control objects.
It was believed that if openness could be achieved and greater amounts of data could be shared throughout the enterprise, then even greater things could be achieved. The first attempts to increase the openness of DCSs resulted in the adoption of the predominant operating system of the day: UNIX. UNIX and its companion networking technology TCP/IP were developed by the US Department of Defense for openness, which was precisely the issue the process industries were looking to resolve.
As a result, suppliers also began to adopt Ethernet-based networks with their own proprietary protocol layers. The full TCP/IP standard was not implemented, but the use of Ethernet made it possible to implement the first instances of object management and global data access technology. The 1980s also witnessed the first PLCs integrated into the DCS infrastructure. Plant-wide historians also emerged to capitalize on the extended reach of automation systems. The first DCS supplier to adopt UNIX and Ethernet networking technologies was Foxboro, who introduced the I/A Series[10] system in 1987.
The drive toward openness in the 1980s gained momentum through the 1990s with the increased adoption of commercial off-the-shelf (COTS) components and IT standards. Probably the biggest transition undertaken during this time was the move from the UNIX operating system to the Windows environment. While the realm of the real-time operating system (RTOS) for control applications remains dominated by real-time commercial variants of UNIX or proprietary operating systems, everything above real-time control has made the transition to Windows.
The introduction of Microsoft at the desktop and server layers resulted in the development of technologies such as OLE for process control (OPC), which is now a de facto industry connectivity standard. Internet technology also began to make its mark in automation and the world, with most DCS HMIs supporting Internet connectivity. The 1990s were also known for the "Fieldbus Wars", where rival organizations competed to define what would become the IEC fieldbus standard for digital communication with field instrumentation, instead of 4–20 mA analog communications. The first fieldbus installations occurred in the 1990s. Towards the end of the decade, the technology began to develop significant momentum, with the market consolidating around EtherNet/IP, Foundation Fieldbus and Profibus PA for process automation applications. Some suppliers built new systems from the ground up to maximize functionality with fieldbus, such as Rockwell with the PlantPAx System, Honeywell with Experion and Plantscape SCADA systems, ABB with System 800xA,[11] Emerson Process Management[12] with the DeltaV control system, Siemens with the SPPA-T3000[13] or Simatic PCS 7,[14] Forbes Marshall[15] with the Microcon+ control system and Azbil Corporation[16] with the Harmonas-DEO system. Fieldbus techniques have been used to integrate machine, drive, quality and condition monitoring applications into one DCS with the Valmet DNA system.[6]
The impact of COTS, however, was most pronounced at the hardware layer. For years, the primary business of DCS suppliers had been the supply of large amounts of hardware, particularly I/O and controllers. The initial proliferation of DCSs required the installation of prodigious amounts of this hardware, most of it manufactured from the bottom up by DCS suppliers. Standard computer components from manufacturers such as Intel and Motorola, however, made it cost prohibitive for DCS suppliers to continue making their own components, workstations, and networking hardware.
As the suppliers made the transition to COTS components, they also discovered that the hardware market was shrinking fast. COTS not only resulted in lower manufacturing costs for the supplier, but also in steadily decreasing prices for the end users, who were becoming increasingly vocal over what they perceived to be unduly high hardware costs. Some suppliers that were previously stronger in the PLC business, such as Rockwell Automation and Siemens, were able to leverage their expertise in manufacturing control hardware to enter the DCS marketplace with cost-effective offerings, while the stability, scalability, reliability and functionality of these emerging systems are still improving. The traditional DCS suppliers introduced new-generation DCSs based on the latest communication and IEC standards, resulting in a trend of combining the traditional concepts and functionalities of PLCs and DCSs into a single solution, named the "process automation system" (PAS). The gaps among the various systems remain in areas such as database integrity, pre-engineering functionality, system maturity, communication transparency and reliability. While the cost ratio is expected to stay roughly the same (the more powerful the systems are, the more expensive they will be), in practice the automation business often operates strategically case by case. The current next evolution step is called collaborative process automation systems.
To compound the issue, suppliers were also realizing that the hardware market was becoming saturated. The life cycle of hardware components such as I/O and wiring is also typically in the range of 15 to over 20 years, making for a challenging replacement market. Many of the older systems that were installed in the 1970s and 1980s are still in use today, and there is a considerable installed base of systems in the market that are approaching the end of their useful life. Developed industrial economies in North America, Europe, and Japan already had many thousands of DCSs installed, and with few if any new plants being built, the market for new hardware was shifting rapidly to smaller, albeit faster growing regions such as China, Latin America, and Eastern Europe.
Because of the shrinking hardware business, suppliers began to make the challenging transition from a hardware-based business model to one based on software and value-added services. It is a transition that is still being made today. The applications portfolio offered by suppliers expanded considerably in the '90s to include areas such as production management, model-based control, real-time optimization, plant asset management (PAM), real-time performance management (RPM) tools, alarm management, and many others. To obtain the true value from these applications, however, often requires a considerable service content, which the suppliers also provide.
The latest developments in DCS include the following new technologies:
Increasingly, and ironically, DCSs are becoming centralised at plant level, with the ability to log into remote equipment. This enables operators to control both at the enterprise level (macro) and at the equipment level (micro), both within and outside the plant, because the importance of physical location drops thanks to interconnectivity, primarily via wireless and remote access.
The more wireless protocols are developed and refined, the more they are included in DCSs. DCS controllers are now often equipped with embedded servers and provide on-the-go web access. Whether DCSs will lead the Industrial Internet of Things (IIoT) or borrow key elements from it remains to be seen.
Many vendors provide the option of a mobile HMI, ready for both Android and iOS. With these interfaces, the threat of security breaches and possible damage to plant and process is now very real.
|
https://en.wikipedia.org/wiki/Distributed_control_system
|
Droop speed control is a control mode used for AC electrical power generators, whereby the power output of a generator reduces as the line frequency increases. It is commonly used as the speed control mode of the governor of a prime mover driving a synchronous generator connected to an electrical grid. It works by controlling the rate of power produced by the prime mover according to the grid frequency. With droop speed control, when the grid is operating at its maximum operating frequency, the prime mover's power is reduced to zero; when the grid is at its minimum operating frequency, the power is set to 100%; and intermediate values apply at other operating frequencies.
This mode allows synchronous generators to run in parallel, so that loads are shared among generators with the same droop curve in proportion to their power rating.
In practice, the droop curves that are used by generators on large electrical grids are not necessarily linear or the same, and may be adjusted by operators. This permits the ratio of power used to vary depending on load, so that, for example, base load generators will generate a larger proportion at low demand. Stability requires that over the operating frequency range the power output is a monotonically decreasing function of frequency.
Droop speed control can also be used by grid storage systems. With droop speed control, those systems will remove energy from the grid at higher-than-average frequencies, and supply it at lower frequencies.
The frequency of a synchronous generator is given by

F = (P × N) / 120

where F is the frequency in Hz, P is the number of poles, and N is the speed of the generator in rpm.
The frequency (F) of a synchronous generator is directly proportional to its speed (N). When multiple synchronous generators are connected in parallel to the electrical grid, the frequency is fixed by the grid, since individual power output of each generator will be small compared to the load on a large grid. Synchronous generators connected to the grid all run at the same frequency but they can run at various speeds because they can differ in the number of poles (P).
A speed reference as a percentage of actual speed is set in this mode. As the generator is loaded from no load to full load, the actual speed of the prime mover tends to decrease. In order to increase the power output in this mode, the prime mover speed reference is increased. Because the actual prime mover speed is fixed by the grid, this difference between the speed reference and the actual speed of the prime mover is used to increase the flow of working fluid (fuel, steam, etc.) to the prime mover, and hence power output is increased. The reverse is true for decreasing power output. The prime mover speed reference is always greater than the actual speed of the prime mover. The actual speed of the prime mover is allowed to "droop" or decrease with respect to the reference, hence the name.
For example, if the turbine is rated at 3000 rpm, and the machine speed reduces from 3000 rpm to 2880 rpm when it is loaded from no load to base load, then the droop % is given by

droop % = (no-load speed − full-load speed) / no-load speed × 100 = (3000 − 2880) / 3000 × 100 = 4%
In this case, speed reference will be 104% and actual speed will be 100%. For every 1% change in the turbine speed reference, the power output of the turbine will change by 25% of rated for a unit with a 4% droop setting. Droop is therefore expressed as the percentage change in (design) speed required for 100% governor action.
As frequency is fixed on the grid, and so actual turbine speed is also fixed, the increase in turbine speed reference will increase the error between reference and actual speed. As the difference increases, fuel flow is increased to increase power output, and vice versa. This type of control is referred to as "straight proportional" control. If the entire grid tends to be overloaded, the grid frequency and hence actual speed of generator will decrease. All units will see an increase in the speed error, and so increase fuel flow to their prime movers and power output. In this way droop speed control mode also helps to hold a stable grid frequency. The amount of power produced is strictly proportional to the error between the actual turbine speed and speed reference.
It can be mathematically shown that if all machines synchronized to a system have the same droop speed control, they will share load proportionate to the machine ratings.[1]
For example, the way fuel flow is increased or decreased in a GE-design heavy-duty gas turbine can be given by the formula,
FSRN = (FSKRN2 * (TNR-TNH)) + FSKRN1
Where,
FSRN = Fuel Stroke Reference (Fuel supplied to Gas Turbine) for droop mode
TNR = Turbine Speed Reference
TNH = Actual Turbine Speed
FSKRN2 = Constant
FSKRN1 = Constant
The above formula is simply the equation of a straight line (y = mx + b).
Multiple synchronous generators with equal % droop settings connected to a grid will share the change in grid load in proportion to their base load.
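A small numerical sketch of this load-sharing property follows; the ratings, droop and load values are illustrative assumptions.

# Sketch of droop-based load sharing: with the same per-unit droop and
# speed reference, parallel generators pick up load in proportion to
# their ratings. Numbers are illustrative.
droop = 0.04               # 4 % droop
ref = 1.04                 # per-unit speed reference (104 %)
ratings = [100.0, 300.0]   # MW ratings of two generators
load = 200.0               # MW total grid load

# Each unit: P_i = rating_i * (ref - f) / droop. Solve sum(P_i) = load for f.
f = ref - droop * load / sum(ratings)
powers = [r * (ref - f) / droop for r in ratings]
print(round(f, 3), powers)  # f = 1.02 pu; the units carry 50 MW and 150 MW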
For stable operation of the electrical grid of North America, power plants typically operate with a four or five percent speed droop.[2][citation needed] By definition, with 5% droop the full-load speed is 100% and the no-load speed is 105%.
Normally the changes in speed are minor due to the inertia of the total rotating mass of all generators and motors running on the grid.[3] Adjustments in power output for a particular prime mover and generator combination are made by slowly raising the droop curve by increasing the spring pressure on a centrifugal governor or by an engine control unit adjustment, or the analogous operation for an electronic speed governor. All units to be connected to a grid should have the same droop setting, so that all plants respond in the same way to instantaneous changes in frequency without depending on outside communication.[4]
Next to the inertia provided by the parallel operation of synchronous generators,[5] frequency droop is the primary instantaneous parameter in control of an individual power plant's power output (kW).[6]
|
https://en.wikipedia.org/wiki/Droop_speed_control
|
Both electrical and electronics engineers typically possess an academic degree with a major in electrical/electronics engineering. The length of study for such a degree is usually three or four years, and the completed degree may be designated as a Bachelor of Engineering, Bachelor of Science or Bachelor of Applied Science depending upon the university.
The degree generally includes units covering physics, mathematics, project management and specific topics in electrical and electronics engineering. Initially such topics cover most, if not all, of the subfields of electrical engineering. Students then choose to specialize in one or more subfields towards the end of the degree. In most countries, a bachelor's degree in engineering represents the first step towards certification, and the degree program itself is certified by a professional body. After completing a certified degree program the engineer must satisfy a range of requirements (including work experience requirements) before being certified. Once certified the engineer is designated the title of Professional Engineer (in the United States and Canada), Chartered Engineer (in the United Kingdom, Ireland, India, Pakistan, South Africa and Zimbabwe), Chartered Professional Engineer (in Australia) or European Engineer (in much of the European Union).
Electrical engineers can also choose to pursue a postgraduate degree such as a master of engineering, a doctor of philosophy in engineering or an engineer's degree. The master and engineer's degrees may consist of either research, coursework or a mixture of the two. The doctor of philosophy consists of a significant research component and is often viewed as the entry point to academia. In the United Kingdom and various other European countries, the master of engineering is often considered an undergraduate degree of slightly longer duration than the bachelor of engineering.
Apart from electromagnetics and network theory, the other items in the syllabus are particular to the electronics engineering course. Electrical engineering courses have other specializations such as machines, power generation and distribution. Note that the following list does not include the large quantity of mathematics (maybe apart from the final year) included in each year's study.
Elements of vector calculus: divergence and curl; Gauss's and Stokes' theorems; Maxwell's equations: differential and integral forms. Wave equation, Poynting vector. Plane waves: propagation through various media; reflection and refraction; phase and group velocity; skin depth. Transmission lines: characteristic impedance; impedance transformation; Smith chart; impedance matching; pulse excitation. Waveguides: modes in rectangular waveguides; boundary conditions; cut-off frequencies; dispersion relations. Antennas: dipole antennas; antenna arrays; radiation pattern; reciprocity theorem; antenna gain. Additional basic electrical fundamentals are also studied.
Network graphs: matrices associated with graphs; incidence, fundamental cut set and fundamental circuit matrices. Solution methods: nodal and mesh analysis. Network theorems: superposition, Thevenin and Norton's, maximum power transfer, wye-delta transformation. Steady-state sinusoidal analysis using phasors. Linear constant-coefficient differential equations; time-domain analysis of simple RLC circuits; solution of network equations using the Laplace transform; frequency-domain analysis of RLC circuits. Two-port network parameters: driving point and transfer functions. State equations.
Electronic devices: Energy bands in silicon, intrinsic and extrinsic silicon. Carrier transport in silicon: diffusion current, drift current, mobility, resistivity. Generation and recombination of carriers. p-n junction diode, Zener diode, tunnel diode, BJT, JFET, MOS capacitor, MOSFET, LED, p-i-n and avalanche photodiode, lasers. Device technology: integrated circuit fabrication process, oxidation, diffusion, ion implantation, photolithography, n-tub, p-tub and twin-tub CMOS process.
Analog circuits: Equivalent circuits (large- and small-signal) of diodes, BJTs, JFETs, and MOSFETs. Simple diode circuits: clipping, clamping, rectifier. Biasing and bias stability of transistor and FET amplifiers. Amplifiers: single- and multi-stage, differential, operational, feedback and power. Analysis of amplifiers; frequency response of amplifiers. Simple op-amp circuits. Filters. Sinusoidal oscillators; criterion for oscillation; single-transistor and op-amp configurations. Function generators and wave-shaping circuits. Power supplies.
Digital circuits: Boolean algebra, minimization of Boolean functions; logic gates; digital IC families (DTL, TTL, ECL, MOS, CMOS). Combinational circuits: arithmetic circuits, code converters, multiplexers and decoders. Sequential circuits: latches and flip-flops, counters and shift registers. Sample-and-hold circuits, ADCs, DACs. Semiconductor memories. Microprocessor (8085): architecture, programming, memory and I/O interfacing.
Definitions and properties of the Laplace transform; continuous-time and discrete-time Fourier series; continuous-time and discrete-time Fourier transform; z-transform. Sampling theorems. Linear time-invariant (LTI) systems: definitions and properties; causality, stability, impulse response, convolution, poles and zeros, frequency response, group delay, phase delay. Signal transmission through LTI systems. Random signals and noise: probability, random variables, probability density function, autocorrelation, power spectral density.
Control system components; block diagrammatic description, reduction of block diagrams. Open-loop and closed-loop (feedback) systems and stability analysis of these systems. Signal flow graphs and their use in determining transfer functions of systems; transient and steady-state analysis of LTI control systems and frequency response. Tools and techniques for LTI control system analysis: root loci, Routh-Hurwitz criterion, Bode and Nyquist plots. Control system compensators: elements of lead and lag compensation, elements of proportional-integral-derivative (PID) control. State variable representation and solution of the state equation of LTI control systems.
Communication systems: amplitude and angle modulation and demodulation systems, spectral analysis of these operations, superheterodyne receivers; elements of hardware, realizations of analog communication systems; signal-to-noise ratio calculations for amplitude modulation (AM) and frequency modulation (FM) under low-noise conditions. Digital communication systems: pulse code modulation, differential pulse-code modulation, delta modulation; digital modulation schemes: amplitude, phase and frequency shift keying; matched filter receivers; bandwidth considerations and probability of error calculations for these schemes.
The advantages of certification vary depending upon location. For example, in the United States and Canada "only a licensed engineer may...seal engineering work for public and private clients". This requirement is enforced by state and provincial legislation, such as Quebec's Engineers Act. In other countries, such as Australia, no such legislation exists. Practically all certifying bodies maintain a code of ethics that they expect all members to abide by or risk expulsion. In this way these organizations play an important role in maintaining ethical standards for the profession. Even in jurisdictions where certification has little or no legal bearing on work, engineers are subject to contract law. In cases where an engineer's work fails, he or she may be subject to the tort of negligence and, in extreme cases, the charge of criminal negligence. An engineer's work must also comply with numerous other rules and regulations, such as building codes and legislation pertaining to environmental law.
Significant professional bodies for electrical engineers include the Institute of Electrical and Electronics Engineers and the Institution of Engineering and Technology. The former claims to produce 30 percent of the world's literature on electrical engineering, has over 360,000 members worldwide and holds over 300 conferences annually. The latter publishes 14 journals, has a worldwide membership of 120,000, certifies Chartered Engineers in the United Kingdom and claims to be the largest professional engineering society in Europe.
|
https://en.wikipedia.org/wiki/Education_and_training_of_electrical_and_electronics_engineers
|
The Experimental Physics and Industrial Control System (EPICS) is a set of software tools and applications used to develop and implement distributed control systems to operate devices such as particle accelerators, telescopes and other large scientific facilities. The tools are designed to help develop systems which often feature large numbers of networked computers delivering control and feedback. They also provide SCADA capabilities.[1]
EPICS was initially developed as the Ground Test Accelerator Controls System (GTACS) at Los Alamos National Laboratory (LANL) in 1988 by Bob Dalesio, Jeff Hill, et al.[2] In 1989, Marty Kraimer from Argonne National Laboratory (ANL) came to work alongside the GTA controls team for 6 months, bringing his experience from his work on the Advanced Photon Source (APS) control system to the project. The resulting software was renamed EPICS and was presented at the International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS) in 1991.[1]
EPICS was originally available under a commercial license, with enhanced versions sold by Tate & Kinetic Systems. Licenses for collaborators were free, but required a legal agreement with LANL and APS. An EPICS community was established and development grew as more facilities joined the collaboration. In February 2004, EPICS became freely distributable after its release under the EPICS Open License.[3]
It is now used and developed by over 50 large science institutions worldwide, as well as by several commercial companies.
EPICS uses client–server and publish–subscribe techniques to communicate between computers. Servers, the "input/output controllers" (IOCs), collect experiment and control data in real time, using the measurement instruments attached to them. This information is then provided to clients, using the high-bandwidth Channel Access (CA)[4] or the recently added pvAccess[5][6] networking protocols, which are designed to suit real-time applications such as scientific experiments.
IOCs hold and interact with a database of "records", which represent either devices or aspects of the devices to be controlled. IOCs can be hosted by stock-standard servers or PCs, or by VME, MicroTCA, and other standard embedded system processors. For "hard real-time" applications the RTEMS or VxWorks operating systems are normally used, whereas "soft real-time" applications typically run on Linux or Microsoft Windows.
Data held in the records are represented by unique identifiers known as Process Variables (PVs). These PVs are accessible over the network channels provided by the CA/pvAccess protocol.
Many record types are available for various types of input and output (e.g., analog or binary) and to provide functional behaviour such as calculations. It is also possible to create custom record types. Each record consists of a set of fields, which hold the record's static and dynamic data and specify behaviour when various functions are requested locally or remotely. Most record types are listed in the EPICS record reference manual.
Graphical user interface packages are available, allowing users to view and interact with PV data through typical display widgets such as dials and text boxes. Examples include EDM (Extensible Display Manager), MEDM (Motif/EDM), and CSS.
Any software that implements the CA/pvAccess protocol can read and write PV values. Extension packages are available to provide support for MATLAB, LabVIEW, Perl, Python, Tcl, ActiveX, etc. These can be used to write scripts to interact with EPICS-controlled equipment.
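For instance, the pyepics Python binding provides caget/caput calls and monitor callbacks over Channel Access. In the sketch below, the PV names are hypothetical placeholders and would be replaced by ones actually served by an IOC on your network.

# Sketch of Channel Access from a script using pyepics (pip install pyepics).
# PV names are hypothetical.
import epics

value = epics.caget("DEMO:TEMPERATURE")        # read a process variable
print("current value:", value)

epics.caput("DEMO:SETPOINT", 42.0, wait=True)  # write and wait for completion

# Subscribe to updates (the publish-subscribe side of CA).
def on_change(pvname=None, value=None, **kws):
    print(pvname, "->", value)

pv = epics.PV("DEMO:TEMPERATURE", callback=on_change)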
|
https://en.wikipedia.org/wiki/EPICS
|
The good regulator theorem is a theorem conceived by Roger C. Conant and W. Ross Ashby that is central to cybernetics. It was originally stated as "every good regulator of a system must be a model of that system".[1] That is, any regulator that is maximally simple among optimal regulators must behave as an image of that system under a homomorphism.
More accurately, every good regulator must contain or have access to a model of the system it regulates. And while the authors sometimes say the regulator and regulated are 'isomorphic', the mapping they construct is only a homomorphism, meaning the model can lose information about the entity that is modeled. So, while the system that is regulated is a pattern of behavior in the world, it is not necessarily the only pattern of behavior observable in a regulated entity.
This theorem is obtained by considering the entropy of the variation of the output of the controlled system, and shows that, under very general conditions, the entropy is minimized when there is a (deterministic) mapping h: S → R from the states of the system to the states of the regulator. The authors view this map h as making the regulator a 'model' of the system.
With regard to the brain, insofar as it is successful and efficient as a regulator for survival, it must proceed, in learning, by the formation of a model (or models) of its environment.
The theorem is general enough to apply to all regulating and self-regulating or homeostatic systems.
Five variables are defined by the authors as involved in the process of system regulation: D as the primary disturbers, R as a set of events in the regulator, S as a set of events in the rest of the system outside of the regulator, Z as the total set of events (or outcomes) that may occur, and G as the subset of Z events (or outcomes) that are desirable to the system.[1]
The principal point that the authors present with this figure is that regulation requires the regulator to conceive of all variables in the set S of events concerning the system to be regulated, in order to produce satisfactory outcomes G of this regulation. If the regulator is instead not able to conceive of all variables in the set S of events concerning the system that exist outside of the regulator, then the set R of events in the regulator may fail to account for the total variable disturbances D, which in turn may cause errors that lead to outcomes that are not satisfactory to the system (as illustrated by the events in the set Z that are not elements of the set G).
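A toy enumeration can illustrate the entropy argument, using the sets just defined (D, R, Z, and G); the outcome table and the uniform disturbance distribution below are invented purely for illustration.

# Toy illustration of the theorem's entropy argument, with assumed tables:
# outcomes depend on a disturbance d and the regulator's move r, and the
# best (lowest-entropy) regulator is a deterministic map from D to R.
from collections import Counter
from itertools import product
from math import log2

D = ["d1", "d2"]                       # disturbances, assumed equally likely
R = ["r1", "r2"]                       # regulator moves
outcome = {("d1", "r1"): "good", ("d1", "r2"): "bad",
           ("d2", "r1"): "bad",  ("d2", "r2"): "good"}

def entropy(counts):
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values())

# Enumerate all policies (maps D -> R) and score the outcome entropy.
for policy in product(R, repeat=len(D)):
    zs = Counter(outcome[(d, r)] for d, r in zip(D, policy))
    print(dict(zip(D, policy)), "entropy:", entropy(zs))
# The deterministic maps {d1: r1, d2: r2} (all "good") and {d1: r2, d2: r1}
# (all "bad") both achieve entropy 0; only the first also keeps every
# outcome inside the desirable subset G.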
The theorem does not explain what it takes for the system to become a good regulator. Moreover, although highly cited, some concerns have been raised that the formal proof does not actually fully support the statement in the paper title.[2]
In cybernetics, the problem of creating ethical regulators is addressed by the ethical regulator theorem.[3] The construction of ethical regulators is a general problem for any system (e.g., an automated information system) that regulates some domain of application.
When restricted to the ordinary differential equation (ODE) subset of control theory, it is referred to as the internal model principle, which was first articulated in 1976 by B. A. Francis and W. M. Wonham.[4] In this form, it stands in contrast to classical control, in that the classical feedback loop fails to explicitly model the controlled system (although the classical controller may contain an implicit model).[5]
|
https://en.wikipedia.org/wiki/Good_regulator
|
Guidance, navigation and control (abbreviated GNC, GN&C, or G&C) is a branch of engineering dealing with the design of systems to control the movement of vehicles, especially automobiles, ships, aircraft, and spacecraft. In many cases these functions can be performed by trained humans. However, because of the speed of, for example, a rocket's dynamics, human reaction time is too slow to control this movement. Therefore, systems—now almost exclusively digital electronic—are used for such control. Even in cases where humans can perform these functions, it is often the case that GNC systems provide benefits such as alleviating operator workload, smoothing turbulence, fuel savings, etc. In addition, sophisticated applications of GNC enable automatic or remote control.
Guidance, navigation, and control systems consist of three essential parts: navigation, which tracks the vehicle's current location; guidance, which uses navigation data and target information to direct the flight control ("where to go"); and control, which accepts guidance commands to effect changes in aerodynamic and/or engine controls.
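A minimal sketch of this three-part split follows, for a one-dimensional vehicle with an assumed point-mass model, perfect navigation, and illustrative gains; none of these choices come from the article.

# Sketch separating the three functions for a 1-D vehicle: navigation
# estimates state, guidance turns target + estimate into a commanded
# velocity, and control turns that command into an actuator output.
target, pos, vel = 100.0, 0.0, 0.0
dt = 0.1
for _ in range(300):
    est_pos = pos                       # navigation (perfect sensor assumed)
    cmd_vel = 0.5 * (target - est_pos)  # guidance: proportional to range
    thrust = 2.0 * (cmd_vel - vel)      # control: track commanded velocity
    vel += thrust * dt                  # simple point-mass dynamics
    pos += vel * dt
print(round(pos, 1))  # approaches the 100.0 m target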
GNC systems are found in essentially all autonomous or semi-autonomous systems. These include:
Related examples are:
|
https://en.wikipedia.org/wiki/Guidance,_navigation,_and_control
|
A hierarchical control system (HCS) is a form of control system in which a set of devices and governing software is arranged in a hierarchical tree. When the links in the tree are implemented by a computer network, the hierarchical control system is also a form of networked control system.
A human-built system with complex behavior is often organized as a hierarchy. For example, a command hierarchy has among its notable features the organizational chart of superiors, subordinates, and lines of organizational communication. Hierarchical control systems are organized similarly to divide decision-making responsibility.
Each element of the hierarchy is a linked node in the tree. Commands, tasks and goals to be achieved flow down the tree from superior nodes to subordinate nodes, whereas sensations and command results flow up the tree from subordinate to superior nodes. Nodes may also exchange messages with their siblings. The two distinguishing features of a hierarchical control system are related to its layers.[1]
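The command-down, results-up pattern can be sketched as a simple tree; the node names and the task string below are illustrative inventions.

# Sketch of the command-down / results-up pattern in a hierarchical
# control tree.
class Node:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

    def command(self, task):
        # A superior decomposes the task and sends it down the tree ...
        print(f"{self.name}: executing '{task}'")
        results = [child.command(f"{task}/{child.name}")
                   for child in self.children]
        # ... and results flow back up to the superior node.
        return {self.name: results or "done"}

plant = Node("plant", [Node("line1", [Node("robot_a"), Node("robot_b")]),
                       Node("line2")])
print(plant.command("make_batch"))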
Besides artificial systems, an animal's control systems are proposed to be organized as a hierarchy. In perceptual control theory, which postulates that an organism's behavior is a means of controlling its perceptions, the organism's control systems are suggested to be organized in a hierarchical pattern, as their perceptions are constructed hierarchically.
The accompanying diagram is a general hierarchical model which shows functional manufacturing levels using computerised control of an industrial control system.
Referring to the diagram:
Among the robotic paradigms is the hierarchical paradigm, in which a robot operates in a top-down fashion, heavy on planning, especially motion planning. Computer-aided production engineering has been a research focus at NIST since the 1980s. Its Automated Manufacturing Research Facility was used to develop a five-layer production control model. In the early 1990s, DARPA sponsored research to develop distributed (i.e. networked) intelligent control systems for applications such as military command and control systems. NIST built on earlier research to develop its Real-Time Control System (RCS) and Real-time Control System Software, which is a generic hierarchical control system that has been used to operate a manufacturing cell, a robot crane, and an automated vehicle.
In November 2007, DARPA held the Urban Challenge. The winning entry, Tartan Racing,[2] employed a hierarchical control system, with layered mission planning, motion planning, behavior generation, perception, world modelling, and mechatronics.[3]
Subsumption architecture is a methodology for developing artificial intelligence that is heavily associated with behavior-based robotics. This architecture is a way of decomposing complicated intelligent behavior into many "simple" behavior modules, which are in turn organized into layers. Each layer implements a particular goal of the software agent (i.e. the system as a whole), and higher layers are increasingly abstract. Each layer's goal subsumes that of the underlying layers; e.g., the decision to move forward by the eat-food layer takes into account the decision of the lowest obstacle-avoidance layer. Behavior need not be planned by a superior layer; rather, behaviors may be triggered by sensory inputs and so are only active under circumstances where they might be appropriate.[4]
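A much-simplified sketch of this layering follows, using a plain priority ordering in place of the full suppression/inhibition machinery of subsumption architecture; the sensor fields and behaviors are illustrative inventions.

# Sketch of the subsumption idea: simple behavior layers, each triggered
# directly by sensor input, arbitrated so that the safety layer's decision
# is taken into account before goal-directed layers act.
def avoid_obstacle(sensors):          # lowest layer: safety first
    if sensors["obstacle_ahead"]:
        return "turn_left"
    return None                       # no opinion -> defer to other layers

def seek_food(sensors):               # goal-directed layer
    if sensors["food_direction"] is not None:
        return f"move_{sensors['food_direction']}"
    return None

def wander(sensors):                  # default behavior
    return "move_forward"

LAYERS = [avoid_obstacle, seek_food, wander]  # arbitration order

def act(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:        # first active layer wins
            return action

print(act({"obstacle_ahead": True,  "food_direction": "north"}))  # turn_left
print(act({"obstacle_ahead": False, "food_direction": "north"}))  # move_north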
Reinforcement learning has been used to acquire behavior in a hierarchical control system in which each node can learn to improve its behavior with experience.[5]
James Albus, while at NIST, developed a theory for intelligent system design named the Reference Model Architecture (RMA),[6] which is a hierarchical control system inspired by RCS. Albus defines each node to contain these components.
At its lowest levels, the RMA can be implemented as a subsumption architecture, in which the world model is mapped directly to the controlled process or real world, avoiding the need for a mathematical abstraction, and in which time-constrained reactive planning can be implemented as a finite-state machine. Higher levels of the RMA, however, may have sophisticated mathematical world models and behavior implemented by automated planning and scheduling. Planning is required when certain behaviors cannot be triggered by current sensations, but rather by predicted or anticipated sensations, especially those that come about as a result of the node's actions.[7]
|
https://en.wikipedia.org/wiki/Hierarchical_control_system
|
HVAC (heating, ventilation and air conditioning) equipment needs a control system to regulate the operation of a heating and/or air conditioning system.[1] Usually a sensing device is used to compare the actual state (e.g. temperature) with a target state. The control system then decides what action to take (e.g. start the blower).
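A minimal sketch of this compare-and-act loop follows, assuming an on/off heater with a small deadband (hysteresis) so the equipment does not rapidly cycle around the target; the temperatures and deadband value are illustrative.

# On/off heating control with hysteresis around the target temperature.
def thermostat(actual, target, heating_on, deadband=0.5):
    if actual < target - deadband:
        return True     # too cold: start the blower/heater
    if actual > target + deadband:
        return False    # warm enough: stop it
    return heating_on   # inside the deadband: keep the current state

state = False
for temp in (19.0, 19.4, 19.8, 20.3, 20.6, 20.2):
    state = thermostat(temp, target=20.0, heating_on=state)
    print(temp, "->", "heat ON" if state else "heat OFF")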
Central controllers and most terminal unit controllers are programmable, meaning the direct digital control program code may be customized for the intended use. The program features include time schedules, setpoints, controllers, logic, timers, trend logs, and alarms. The unit controllers typically have analog and digital inputs that allow measurement of the variable (temperature, humidity, or pressure) and analog and digital outputs for control of the transport medium (hot/cold water and/or steam). Digital inputs are typically (dry) contacts from a control device, and analog inputs are typically a voltage or current measurement from a variable (temperature, humidity, velocity, or pressure) sensing device. Digital outputs are typically relay contacts used to start and stop equipment, and analog outputs are typically voltage or current signals to control the movement of the medium (air/water/steam) control devices such as valves, dampers, and motors.
Groups of DDC controllers, networked or not, form a layer of systems themselves. This "subsystem" is vital to the performance and basic operation of the overall HVAC system. The DDC system is the "brain" of the HVAC system: it dictates the position of every damper and valve in a system, and it determines which fans, pumps, and chillers run and at what speed or capacity. This configurable intelligence leads to the concept of building automation.[2]
More complex HVAC systems can interface to a building automation system (BAS) to allow the building owners to have more control over the heating or cooling units.[3] The building owner can monitor the system and respond to alarms generated by the system from local or remote locations. The system can be scheduled for occupancy, or the configuration can be changed from the BAS. Sometimes the BAS directly controls the HVAC components.
Depending on the BAS different interfaces can be used.[4]
Today, there are also dedicated gateways that connect advanced VRV / VRF and Split HVAC Systems with Home Automation and BMS (Building Management Systems) controllers for centralized control and monitoring, obviating the need to purchase more complex and expensive HVAC systems. In addition, such gateway solutions are capable of providing remote control operation of all HVAC indoor units over the internet incorporating a simple and friendly user interface.[5]
Many homes lack a heating, ventilation, and air conditioning (HVAC) system because of its perceived cost; however, HVAC is often less expensive than assumed.[6] Although some buyers distrust the yellow energy guide sticker as a sales aid, newer HVAC systems carrying it have saved customers hundreds to thousands of dollars, depending on how much the system is used.[citation needed]
The yellow energy guide sticker on many newer systems displays the average cost of running that machine. If an HVAC system is used only during specific times of year, it is recommended that it be turned on monthly and left running for ten to fifteen minutes.[citation needed]
For systems that run frequently, maintenance is important. Maintenance on an HVAC system includes changing the air filter, inspecting the areas where air intake takes place, and checking for leaks.[citation needed]
Performing these three steps every couple of months, or whenever a problem is suspected, is essential to keeping an HVAC system running for a long time.[citation needed]
One sign of a potential problem is an HVAC system that does not provide sufficiently cool air,[citation needed] which can be due to a leak of cooling fluid. Another sign that the HVAC system is not running properly is a bad smell in the air it supplies, which often means the air filters need to be replaced. Changing the air filters on an HVAC system is important because, depending on where the system is located, the filters are exposed to a lot of dust and can accumulate it simply from sitting in the home.
The following moisture-control goals apply to HVAC system installation:[7]
Goal 1: Keep HVAC equipment and materials dry during construction and provide temperature and humidity control as required during the close-in phase of construction.
Goal 2: Install HVAC systems to effectively implement moisture control as specified in the design drawings and specifications.
Goal 3: Prepare operation and maintenance materials for continued performance of HVAC system moisture control.
Most HVAC systems are used for the same purpose but are designed differently.[citation needed] All HVAC systems have an intake, an air filter, and air conditioning liquid. However, engineers often design an HVAC system for a specific setting and/or purpose, trying to make it compact while still performing at the highest level, and experimenting with different ways to make HVAC systems as efficient as possible.
The first HVAC controllers utilized pneumatic controls, since engineers understood fluid control. Thus, the properties of steam and air were used to control the flow of heated or cooled air via mechanically controlled logic.
After the control of air flow and temperature was standardized, the use of electromechanical relays in ladder logic to switch dampers became standardized. Eventually, the relays became electronic switches, as transistors eventually could handle greater current loads. By 1985, pneumatic controls could no longer compete with this new technology, although pneumatic control systems (sometimes decades old) are still common in many older buildings.[8]
By the year 2000, computerized controllers were common. Today, some of these controllers can even be accessed by web browsers, which need no longer be in the same building as the HVAC equipment. This allows some economies of scale, as a single operations center can easily monitor multiple buildings.
|
https://en.wikipedia.org/wiki/HVAC_control_system
|
An industrial control system (ICS) is an electronic control system and associated instrumentation used for industrial process control. Control systems can range in size from a few modular panel-mounted controllers to large interconnected and interactive distributed control systems (DCSs) with many thousands of field connections. Control systems receive data from remote sensors measuring process variables (PVs), compare the collected data with desired setpoints (SPs), and derive command functions that are used to control a process through the final control elements (FCEs), such as control valves.
Larger systems are usually implemented by supervisory control and data acquisition (SCADA) systems, or DCSs, and programmable logic controllers (PLCs), though SCADA and PLC systems are scalable down to small systems with few control loops.[1] Such systems are extensively used in industries such as chemical processing, pulp and paper manufacture, power generation, oil and gas processing, and telecommunications.
The simplest control systems are based around small discrete controllers with a single control loop each. These are usually panel mounted, which allows direct viewing of the front panel and provides means of manual intervention by the operator, either to manually control the process or to change control setpoints. Originally these would have been pneumatic controllers, a few of which are still in use, but nearly all are now electronic.
Quite complex systems can be created with networks of these controllers communicating using industry-standard protocols. Networking allows the use of local or remote SCADA operator interfaces, and enables the cascading and interlocking of controllers. However, as the number of control loops increases for a system design, there is a point where the use of a programmable logic controller (PLC) or distributed control system (DCS) is more manageable or cost-effective.
A distributed control system (DCS) is a digital process control system (PCS) for a process or plant, wherein controller functions and field connection modules are distributed throughout the system. As the number of control loops grows, a DCS becomes more cost-effective than discrete controllers. Additionally, a DCS provides supervisory viewing and management over large industrial processes. In a DCS, a hierarchy of controllers is connected by communication networks, allowing centralized control rooms and local on-plant monitoring and control.[2]
A DCS enables easy configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other computer systems such as production control.[3] It also enables more sophisticated alarm handling, introduces automatic event logging, removes the need for physical records such as chart recorders, and allows the control equipment to be networked and thereby located locally to the equipment being controlled, reducing cabling.
A DCS typically uses custom-designed processors as controllers and uses either proprietary interconnections or standard protocols for communication. Input and output modules form the peripheral components of the system.
The processors receive information from input modules, process the information and decide control actions to be performed by the output modules. The input modules receive information from sensing instruments in the process (or field) and the output modules transmit instructions to the final control elements, such as control valves.
The field inputs and outputs can be either continuously changing analog signals, e.g. current loops, or two-state signals that switch either on or off, such as relay contacts or a semiconductor switch.
Distributed control systems can normally also support Foundation Fieldbus, PROFIBUS, HART, Modbus and other digital communication buses that carry not only input and output signals but also advanced messages such as error diagnostics and status signals.
Supervisory control and data acquisition (SCADA) is a control system architecture that uses computers, networked data communications and graphical user interfaces for high-level process supervisory management. The operator interfaces which enable monitoring and the issuing of process commands, such as controller setpoint changes, are handled through the SCADA supervisory computer system. However, the real-time control logic or controller calculations are performed by networked modules which connect to other peripheral devices such as programmable logic controllers and discrete PID controllers which interface to the process plant or machinery.[4]
The SCADA concept was developed as a universal means of remote access to a variety of local control modules, which could be from different manufacturers, allowing access through standard automation protocols. In practice, large SCADA systems have grown to become very similar to distributed control systems in function, but using multiple means of interfacing with the plant. They can control large-scale processes that can include multiple sites, and work over large distances.[5] This is a commonly used architecture in industrial control systems; however, there are concerns about SCADA systems being vulnerable to cyberwarfare or cyberterrorism attacks.[6]
The SCADA software operates on a supervisory level, as control actions are performed automatically by RTUs or PLCs. SCADA control functions are usually restricted to basic overriding or supervisory-level intervention. A feedback control loop is directly controlled by the RTU or PLC, but the SCADA software monitors the overall performance of the loop. For example, a PLC may control the flow of cooling water through part of an industrial process to a setpoint level, but the SCADA system software will allow operators to change the setpoints for the flow. The SCADA also enables alarm conditions, such as loss of flow or high temperature, to be displayed and recorded.
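This division of labour can be sketched as follows, with the regulatory loop in a stand-in "PLC" and the supervisory layer only changing the setpoint and checking an alarm condition; the process model, gains, and limits are illustrative assumptions.

# The regulatory PI loop runs in the "PLC"; the "SCADA" layer changes the
# setpoint and watches for alarms. Everything here is a stand-in model.
class PlcLoop:
    def __init__(self, setpoint):
        self.setpoint, self.pv, self.integral = setpoint, 0.0, 0.0
    def scan(self):
        err = self.setpoint - self.pv
        self.integral += 0.1 * err
        out = max(0.0, min(100.0, 2.0 * err + self.integral))
        self.pv += 0.05 * (out - self.pv)   # simple cooling-flow process
        return self.pv

def scada_monitor(pv, high_limit=80.0):
    return f"ALARM: high flow {pv:.1f}" if pv > high_limit else "normal"

loop = PlcLoop(setpoint=40.0)
for cycle in range(100):
    loop.scan()
    if cycle == 50:
        loop.setpoint = 90.0   # operator setpoint change via SCADA
print(scada_monitor(loop.pv))  # reports the alarm once flow exceeds 80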
PLCs can range from small modular devices with tens of inputs and outputs (I/O) in a housing integral with the processor, to large rack-mounted modular devices with a count of thousands of I/O, which are often networked to other PLC and SCADA systems. They can be designed for multiple arrangements of digital and analog inputs and outputs, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory.[7]
Process control of large industrial plants has evolved through many stages. Initially, control was from panels local to the process plant. However, this required personnel to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently staffed central control room. Often the controllers were behind the control room panels, and all automatic and manual control outputs were individually transmitted back to the plant in the form of pneumatic or electrical signals. Effectively this was the centralisation of all the localised panels, with the advantages of reduced manpower requirements and a consolidated overview of the process.
However, whilst providing a central control focus, this arrangement was inflexible, as each control loop had its own controller hardware, so system changes required reconfiguration of signals by re-piping or re-wiring. It also required continual operator movement within a large control room in order to monitor the whole process. With the coming of electronic processors, high-speed electronic signalling networks and electronic graphic displays, it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around the plant and would communicate with the graphic displays in the control room. The concept of distributed control was realised.
The introduction of distributed control allowed flexible interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to the plant to reduce cabling runs, and provided high-level overviews of plant status and production levels. For large control systems, the general commercial name distributed control system (DCS) was coined to refer to proprietary modular systems from many manufacturers which integrated high-speed networking and a full suite of displays and control racks.
While the DCS was tailored to meet the needs of large continuous industrial processes, in industries where combinatorial and sequential logic was the primary requirement, the PLC evolved out of a need to replace racks of relays and timers used for event-driven control. The old controls were difficult to re-configure and debug, and PLC control enabled networking of signals to a central control area with electronic displays. PLCs were first developed for the automotive industry on vehicle production lines, where sequential logic was becoming very complex.[8] The PLC was soon adopted in a large number of other event-driven applications as varied as printing presses and water treatment plants.
SCADA's history is rooted in distribution applications, such as power, natural gas, and water pipelines, where there is a need to gather remote data through potentially unreliable or intermittent low-bandwidth and high-latency links. SCADA systems use open-loop control with sites that are widely separated geographically. A SCADA system uses remote terminal units (RTUs) to send supervisory data back to a control centre. Most RTU systems have always had some capacity to handle local control while the master station is unavailable, and over the years RTU systems have grown more and more capable of handling local control.
The boundaries between DCS and SCADA/PLC systems are blurring as time goes on.[9] The technical limits that drove the designs of these various systems are no longer as much of an issue. Many PLC platforms can now perform quite well as a small DCS, using remote I/O, and are sufficiently reliable that some SCADA systems actually manage closed-loop control over long distances. With the increasing speed of today's processors, many DCS products have a full line of PLC-like subsystems that were not offered when they were initially developed.
In 1993, with the release of IEC 1131, later to become IEC 61131-3, the industry moved towards increased code standardization with reusable, hardware-independent control software. For the first time, object-oriented programming (OOP) became possible within industrial control systems. This led to the development of both programmable automation controllers (PAC) and industrial PCs (IPC). These are platforms programmed in the five standardized IEC languages: ladder logic, structured text, function block, instruction list and sequential function chart. They can also be programmed in modern high-level languages such as C or C++. Additionally, they accept models developed in analytical tools such as MATLAB and Simulink. Unlike traditional PLCs, which use proprietary operating systems, IPCs utilize Windows IoT. IPCs have the advantage of powerful multi-core processors with much lower hardware costs than traditional PLCs, and fit well into multiple form factors such as DIN rail mount, combined with a touch-screen as a panel PC, or as an embedded PC. New hardware platforms and technology have contributed significantly to the evolution of DCS and SCADA systems, further blurring the boundaries and changing definitions.
SCADA and PLCs are vulnerable to cyber attack. The U.S. Government Joint Capability Technology Demonstration (JCTD) known as MOSAICS (More Situational Awareness for Industrial Control Systems) is the initial demonstration of cybersecurity defensive capability for critical infrastructure control systems.[10] MOSAICS addresses the Department of Defense (DOD) operational need for cyber defense capabilities to defend critical infrastructure control systems from cyber attack, such as power, water and wastewater, and safety controls, which affect the physical environment.[11] The MOSAICS JCTD prototype will be shared with commercial industry through Industry Days for further research and development, an approach intended to lead to innovative, game-changing capabilities for cybersecurity for critical infrastructure control systems.[12]
This article incorporates public domain material from the National Institute of Standards and Technology.
|
https://en.wikipedia.org/wiki/Industrial_control_system
|
Motion control is a sub-field of automation, encompassing the systems or sub-systems involved in moving parts of machines in a controlled manner. Motion control systems are extensively used in a variety of fields for automation purposes, including precision engineering, micromanufacturing, biotechnology, and nanotechnology.[1] The main components involved typically include a motion controller, an energy amplifier, and one or more prime movers or actuators. Motion control may be open loop or closed loop. In open loop systems, the controller sends a command through the amplifier to the prime mover or actuator, and does not know if the desired motion was actually achieved. Typical systems include stepper motor or fan control. For tighter control with more precision, a measuring device may be added to the system (usually near the end motion). When the measurement is converted to a signal that is sent back to the controller, and the controller compensates for any error, it becomes a closed-loop system.
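The contrast between the two modes can be sketched in a few lines of code. In this toy example (the lossy actuator and the gain are hypothetical, chosen only for illustration), the open-loop controller issues a command and assumes success, while the closed-loop controller measures the result and keeps correcting the error:

```python
# Open- vs closed-loop motion control, under an assumed plant where 2% of
# commanded motion is lost (e.g. load slip). Not any specific controller's API.

def open_loop_move(target, actuate):
    # Send the full command once; nothing checks the result.
    return actuate(0.0, target)

def closed_loop_move(target, sense, actuate, iterations=100, gain=0.5):
    position = 0.0
    for _ in range(iterations):
        error = target - sense(position)         # feedback from a measuring device
        position = actuate(position, gain * error)
    return position

sense = lambda pos: pos                          # ideal sensor
actuate = lambda pos, delta: pos + 0.98 * delta  # 2% of each commanded move is lost

print(open_loop_move(10.0, actuate))                     # 9.8: the loss goes unnoticed
print(round(closed_loop_move(10.0, sense, actuate), 3))  # ~10.0: error corrected away
```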
Typically the position or velocity of machines is controlled using some type of device such as a hydraulic pump, linear actuator, or electric motor, generally a servo. Motion control is an important part of robotics and CNC machine tools; however, in these instances it is more complex than when used with specialized machines, where the kinematics are usually simpler. The latter is often called General Motion Control (GMC). Motion control is widely used in the packaging, printing, textile, semiconductor production, and assembly industries.
Motion control encompasses every technology related to the movement of objects. It covers every motion system, from micro-sized systems such as silicon-type micro induction actuators to large systems such as a space platform. Today, however, the focus of motion control is the control technology of motion systems with electric actuators such as DC/AC servo motors. Control of robotic manipulators is also included in the field of motion control because most robotic manipulators are driven by electric servo motors and the key objective is the control of motion.[2]
The basic architecture of a motion control system contains a motion controller, which generates the setpoints (the desired motion profile) and closes a position or velocity feedback loop; a drive or amplifier, which transforms the control signal from the motion controller into energy delivered to the actuator; a prime mover or actuator, such as a hydraulic pump, linear actuator, or electric motor, which produces the motion; and, in closed-loop systems, one or more feedback sensors, such as encoders or resolvers, which return the position or velocity of the actuator to the motion controller.
The interface between the motion controller and the drives it controls is critical when coordinated motion is required, as it must provide tight synchronization. Historically the only open interface was an analog signal, until open interfaces were developed that satisfied the requirements of coordinated motion control, the first being SERCOS in 1991, which has since been enhanced to SERCOS III. Later interfaces capable of motion control include Ethernet/IP, Profinet IRT, Ethernet Powerlink, and EtherCAT.
Common control functions include velocity control, position (point-to-point) control, electronic gearing or cam profiling, and pressure or force control.
|
https://en.wikipedia.org/wiki/Motion_control
|
A networked control system (NCS) is a control system wherein the control loops are closed through a communication network. The defining feature of an NCS is that control and feedback signals are exchanged among the system's components in the form of information packets through a network.
The functionality of a typical NCS is established by the use of four basic elements: sensors, to acquire information; controllers, to provide decisions and commands; actuators, to perform the control commands; and a communication network, to enable the exchange of information.
The most important feature of an NCS is that it connects cyberspace to physical space, enabling the execution of several tasks from a distance. In addition, NCSs eliminate unnecessary wiring, reducing the complexity and the overall cost of designing and implementing control systems. They can also be easily modified or upgraded by adding sensors, actuators, and controllers with relatively low cost and no major change in their structure. Furthermore, by featuring efficient sharing of data between their controllers, NCSs are able to fuse global information to make intelligent decisions over large physical spaces.
Their potential applications are numerous and cover a wide range of industries, such as space and terrestrial exploration, access in hazardous environments, factory automation, remote diagnostics and troubleshooting, experimental facilities, domestic robots, aircraft, automobiles, manufacturing plant monitoring, nursing homes and tele-operations. While the potential applications of NCSs are numerous, the proven applications are few, and the real opportunity in the area of NCSs is in developing real-world applications that realize the area's potential.
The advent and development of the Internet, combined with the advantages provided by NCSs, attracted the interest of researchers around the globe. Along with the advantages, several challenges also emerged, giving rise to many important research topics. New control strategies, kinematics of the actuators in the systems, reliability and security of communications, bandwidth allocation, development of data communication protocols, corresponding fault detection and fault-tolerant control strategies, real-time information collection and efficient processing of sensor data are some of the related topics studied in depth.
The insertion of the communication network in the feedback control loop makes the analysis and design of an NCS complex, since it imposes additional time delays in the control loops and the possibility of packet loss. Depending on the application, these time delays can severely degrade the system's performance.
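A toy simulation (assumed integrating plant and hypothetical gain, not drawn from any cited study) illustrates the delay effect: the same proportional loop that settles cleanly with fresh measurements oscillates and diverges once the network feeds the controller data that is several samples old.

```python
# Effect of network latency on a simple feedback loop. The plant is a pure
# integrator; the network is modelled as a FIFO queue of delayed measurements.
from collections import deque

def simulate(delay_samples, gain=0.8, steps=40):
    x, setpoint = 0.0, 1.0
    buffer = deque([0.0] * delay_samples)   # measurements "in flight" on the network
    trace = []
    for _ in range(steps):
        buffer.append(x)                    # current measurement enters the network
        delayed = buffer.popleft()          # controller receives a stale sample
        x += gain * (setpoint - delayed)    # plant integrates the control action
        trace.append(x)
    return trace

print([round(v, 2) for v in simulate(0)[-3:]])   # fresh data: settles near 1.0
print([round(v, 2) for v in simulate(4)[-3:]])   # 4-sample delay: loop goes unstable
```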
To alleviate the time-delay effect, Y. Tipsuwan and M-Y. Chow, at the ADAC Lab at North Carolina State University, proposed the gain scheduler middleware (GSM) methodology and applied it in iSpace. S. Munir and W.J. Book (Georgia Institute of Technology) used a Smith predictor, a Kalman filter and an energy regulator to perform teleoperation through the Internet.[1][2]
K.C. Lee, S. Lee and H.H. Lee used a genetic algorithm to design a controller used in an NCS. Many other researchers provided solutions using concepts from several control areas such as robust control, optimal stochastic control, model predictive control, and fuzzy logic.
A critical and important issue in the design of distributed NCSs, given their steadily increasing complexity, is meeting the requirements on system reliability and dependability while guaranteeing high system performance over a wide operating range. For this reason, network-based fault detection and diagnosis techniques, which are essential to monitor system performance, are receiving more and more attention.
|
https://en.wikipedia.org/wiki/Networked_control_system
|
Computer numerical control (CNC) is the automated control of machine tools by a computer. It is an evolution of numerical control (NC), where machine tools are directly managed by data storage media such as punched cards or punched tape. Because CNC allows for easier programming, modification, and real-time adjustments, it has gradually replaced NC as computing costs declined.[1][2][3]
A CNC machine is a motorized maneuverable tool and often a motorized maneuverable platform, which are both controlled by a computer, according to specific input instructions. Instructions are delivered to a CNC machine in the form of a sequential program of machine control instructions such as G-code and M-code, and then executed. The program can be written by a person or, far more often, generated by graphical computer-aided design (CAD) or computer-aided manufacturing (CAM) software. In the case of 3D printers, the part to be printed is "sliced" before the instructions (or the program) are generated. 3D printers also use G-code.[4]
CNC offers greatly increased productivity over non-computerized machining for repetitive production, where the machine must otherwise be manually controlled (e.g. using devices such as hand wheels or levers) or mechanically controlled by pre-fabricated pattern guides (see pantograph mill). However, these advantages come at significant cost in terms of both capital expenditure and job setup time. For some prototyping and small batch jobs, a good machine operator can have parts finished to a high standard whilst a CNC workflow is still in setup.
In modern CNC systems, the design of a mechanical part and its manufacturing program are highly automated. The part's mechanical dimensions are defined using CAD software and then translated into manufacturing directives by CAM software. The resulting directives are transformed (by "post processor" software) into the specific commands necessary for a particular machine to produce the component and then are loaded into the CNC machine.
Since any particular component might require the use of several different tools – drills, saws, touch probes etc. – modern machines often combine multiple tools into a single "cell". In other installations, several different machines are used with an external controller and human or robotic operators that move the component from machine to machine. In either case, the series of steps needed to produce any part is highly automated and produces a part that meets every specification in the original CAD drawing, where each specification includes a tolerance.
Motion is controlled along multiple axes, normally at least two (X and Y),[5] plus a tool spindle that moves in the Z (depth). The position of the tool is driven by direct-drive stepper motors or servo motors to provide highly accurate movements, or in older designs, motors through a series of step-down gears. Open-loop control works as long as the forces are kept small enough and speeds are not too great. On commercial metalworking machines, closed-loop controls are standard and required to provide the accuracy, speed, and repeatability demanded.
As the controller hardware evolved, the mills themselves also evolved. One change has been to enclose the entire mechanism in a large box as a safety measure (with safety glass in the doors to permit the operator to monitor the machine's function), often with additional safety interlocks to ensure the operator is far enough from the working piece for safe operation. Most new CNC systems built today are 100% electronically controlled.
CNC-like systems are used for any process that can be described as movements and operations. These include laser cutting, welding, friction stir welding, ultrasonic welding, flame and plasma cutting, bending, spinning, hole-punching, pinning, gluing, fabric cutting, sewing, tape and fiber placement, routing, picking and placing, and sawing.
The first NC machines were built in the 1940s and 1950s, based on existing tools that were modified with motors that moved the tool or part to follow points fed into the system on punched tape.[4] These early servomechanisms were rapidly augmented with analog and digital computers, creating the modern CNC machine tools that have revolutionized machining processes.
CNC is now very widespread in manufacturing: beyond traditional milling and turning, many other machines and pieces of equipment are fitted with CNC, which has greatly improved quality and efficiency across the industry. A recent trend in CNC[6] is to combine traditional subtractive manufacturing with additive manufacturing (3D printing) to create a new manufacturing method[7] known as hybrid additive subtractive manufacturing (HASM).[8] Another trend is the combination of AI, using a large number of sensors, with the goal of achieving flexible manufacturing.[9]
Electrical discharge machining (EDM) can be broadly divided into "sinker" type processes, where the electrode is the positive shape of the resulting feature in the part, and the electric discharge erodes this feature into the part, resulting in the negative shape, and "wire" type processes. Sinker processes are rather slow compared to conventional machining, averaging on the order of 100 mm³/min,[10] as compared to 8×10⁶ mm³/min for conventional machining, but they can generate features that conventional machining cannot. Wire EDM operates by using a thin conductive wire, typically brass, as the electrode, and discharging as it runs past the part being machined. This is useful for complex profiles with inside 90-degree corners that would be challenging to machine with conventional methods.
Many other tools have CNC variants, including lathes, routers, plasma cutters, water jet cutters, and laser cutters.
In CNC, a "crash" occurs when the machine moves in such a way that is harmful to the machine, tools, or parts being machined, sometimes resulting in bending or breakage of cutting tools, accessory clamps, vises, and fixtures, or causing damage to the machine itself by bending guide rails, breaking drive screws, or causing structural components to crack or deform under strain. A mild crash may not damage the machine or tools but may damage the part being machined so that it must be scrapped. Many CNC tools have no inherent sense of the absolute position of the table or tools when turned on. They must be manually "homed" or "zeroed" to have any reference to work from, and these limits are just for figuring out the location of the part to work with it and are no hard motion limit on the mechanism. It is often possible to drive the machine outside the physical bounds of its drive mechanism, resulting in a collision with itself or damage to the drive mechanism. Many machines implement control parameters limiting axis motion past a certain limit in addition to physicallimit switches. However, these parameters can often be changed by the operator.
Many CNC tools also do not know anything about their working environment. Machines may have load sensing systems on spindle and axis drives, but some do not. They blindly follow the machining code provided, and it is up to an operator to detect if a crash is either occurring or about to occur, and to manually abort the active process. Machines equipped with load sensors can stop axis or spindle movement in response to an overload condition, but this does not prevent a crash from occurring; it may only limit the damage resulting from the crash. Some crashes may never overload any axis or spindle drives.
If the drive system is weaker than the machine's structural integrity, then the drive system simply pushes against the obstruction and the drive motors "slip in place". The machine tool may not detect the collision or the slipping, so for example the tool should now be at 210 mm on the X axis but is, in fact, at 32 mm, where it hit the obstruction and kept slipping. All of the next tool motions will be off by −178 mm on the X axis, and all future motions are now invalid, which may result in further collisions with clamps, vises, or the machine itself. This is common in open-loop stepper systems but is not possible in closed-loop systems unless mechanical slippage between the motor and drive mechanism has occurred. Instead, in a closed-loop system, the machine will continue to attempt to move against the load until either the drive motor goes into an overload condition or a servo motor fails to get to the desired position.
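The arithmetic of this failure mode is worth making explicit. Using the figures from the example above, once the axis stalls, the offset between commanded and actual position silently contaminates every later move:

```python
# Open-loop step counting after a stall, using the numbers from the paragraph.
commanded = 210.0            # where the controller believes the X axis is
actual = 32.0                # where the axis really stopped against the obstruction
offset = actual - commanded
print(offset)                # -178.0 mm: the controller has no way to see this

next_target = 100.0          # controller later commands a move "to 100 mm"
print(next_target + offset)  # axis actually ends near -78 mm (or crashes trying)
```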
Collision detection and avoidance are possible, through the use of absolute position sensors (optical encoder strips or disks) to verify that motion occurred, or torque sensors or power-draw sensors on the drive system to detect abnormal strain when the machine should just be moving and not cutting, but these are not a common component of most hobby CNC tools. Instead, most hobby CNC tools simply rely on the assumed accuracy of stepper motors that rotate a specific number of degrees in response to magnetic field changes. It is often assumed the stepper is perfectly accurate and never missteps, so tool position monitoring simply involves counting the number of pulses sent to the stepper over time. An alternate means of stepper position monitoring is usually not available, so crash or slip detection is not possible.
Commercial CNC metalworking machines use closed-loop feedback controls for axis movement. In a closed-loop system, the controller monitors the actual position of each axis with an absolute or incremental encoder. Proper control programming will reduce the possibility of a crash, but it is still up to the operator and programmer to ensure that the machine is operated safely. However, during the 2000s and 2010s, the software for machining simulation matured rapidly, and it is no longer uncommon for the entire machine tool envelope (including all axes, spindles, chucks, turrets, tool holders, tailstocks, fixtures, clamps, and stock) to be modeled accurately with 3D solid models, which allows the simulation software to predict fairly accurately whether a cycle will involve a crash. Although such simulation is not new, its accuracy and market penetration are changing considerably because of computing advancements.[12]
Within the numerical systems of CNC programming, the code generator can assume that the controlled mechanism is always perfectly accurate, or that precision tolerances are identical for all cutting or movement directions. While the common use of ball screws on most modern NC machines eliminates the vast majority of backlash, it still must be taken into account. CNC tools with a large amount of mechanical backlash can still be highly precise if the drive or cutting mechanism is only driven to apply cutting force from one direction, and all driving systems are pressed tightly together in that one cutting direction. However, a CNC device with high backlash and a dull cutting tool can lead to cutter chatter and possible workpiece gouging. The backlash also affects the precision of some operations involving axis movement reversals during cutting, such as the milling of a circle, where axis motion is sinusoidal. However, this can be compensated for if the amount of backlash is precisely known by linear encoders or manual measurement.
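A minimal sketch of such compensation (the lash value below is hypothetical, standing in for a measured figure) adds the known backlash to a commanded move only when the axis reverses direction, so the drive train is pressed tight before cutting force is applied:

```python
# Simple backlash compensation on direction reversal, as described above.
BACKLASH_MM = 0.05   # would be measured with a dial indicator or linear encoder

def compensate(moves_mm):
    out, last_dir = [], 0
    for move in moves_mm:
        direction = (move > 0) - (move < 0)          # +1, -1, or 0 for no motion
        if direction and last_dir and direction != last_dir:
            move += direction * BACKLASH_MM          # take up the lash on reversal
        if direction:
            last_dir = direction
        out.append(move)
    return out

print(compensate([10.0, 5.0, -3.0, -1.0, 2.0]))
# -> [10.0, 5.0, -3.05, -1.0, 2.05]: extra travel only where direction changes
```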
The high-backlash mechanism itself is not necessarily relied on to be repeatedly precise for the cutting process; instead some other reference object or precision surface may be used to zero the mechanism, by tightly applying pressure against the reference and setting that as the zero reference for all following CNC-encoded motions. This is similar to the manual machine tool method of clamping a micrometer onto a reference beam and adjusting the vernier dial to zero using that object as the reference.[citation needed]
In numerical control systems, the position of the tool is defined by a set of instructions called the part program. Positioning control is handled using either an open-loop or a closed-loop system. In an open-loop system, communication takes place in one direction only: from the controller to the motor. In a closed-loop system, feedback is provided to the controller so that it can correct for errors in position, velocity, and acceleration, which can arise due to variations in load or temperature. Open-loop systems are generally cheaper but less accurate. Stepper motors can be used in both types of systems, while servo motors can only be used in closed systems.
The G & M code positions are all based on a three-dimensionalCartesian coordinate system. This system is a typical plane often seen in mathematics when graphing. This system is required to map out the machine tool paths and any other kind of actions that need to happen in a specific coordinate. Absolute coordinates are what are generally used more commonly for machines and represent the (0,0,0) point on the plane. This point is set on the stock material to give a starting point or "home position" before starting the actual machining.
G-codes are used to command specific movements of the machine, such as machine moves or drilling functions. The majority of G-code programs start with a percent (%) symbol on the first line, followed by an "O" with a numerical name for the program (e.g. "O0001") on the second line, and another percent (%) symbol on the last line of the program. The format for a G-code is the letter G followed by two to three digits; for example, G01. G-codes differ slightly between mill and lathe applications.
M-codes are miscellaneous machine commands that do not command axis motion.[citation needed] The format for an M-code is the letter M followed by two to three digits; for example, M30 (end of program).
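To tie the two formats together, the snippet below assembles a minimal, illustrative part program as strings, matching the structure just described (% delimiters, an O-number, then G and M words). The specific codes used are common conventions, e.g. G00 for a rapid move, G01 for a feed move, M03 for spindle start and M30 for program end, but dialects vary by controller, so treat this as a sketch rather than a program for any particular machine:

```python
# A minimal illustrative G-code program, built and printed from Python.
program = [
    "%",              # start-of-program delimiter
    "O0001",          # program name/number
    "G90 G21",        # absolute positioning, millimetre units
    "M03 S1200",      # spindle on, clockwise, 1200 rpm
    "G00 X0 Y0 Z5",   # rapid move to a safe position above the part
    "G01 Z-1 F100",   # feed down 1 mm into the stock at 100 mm/min
    "G01 X50 Y0",     # cut a straight line along X
    "G00 Z5",         # retract
    "M05",            # spindle stop
    "M30",            # end of program
    "%",              # end-of-program delimiter
]
print("\n".join(program))
```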
Having the correct speeds and feeds in the program provides for a more efficient and smoother product run. Incorrect speeds and feeds will cause damage to the tool, machine spindle, and even the product. The quickest and simplest way to find these numbers is to use an online calculator. A formula can also be used to calculate the proper speeds and feeds for a material. These values can be found online or in Machinery's Handbook.
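As a sketch of that calculation, the helper below uses the standard shop formulas for inch units: spindle RPM = 12 × SFM / (π × D), and feed rate = RPM × chip load × number of flutes. The cutting values plugged in are hypothetical; real numbers come from tooling catalogues or Machinery's Handbook.

```python
# Speeds-and-feeds helper using the standard inch-unit shop formulas.
import math

def spindle_rpm(surface_speed_sfm, tool_diameter_in):
    # RPM = 12 * SFM / (pi * D): converts surface speed at the cutter's
    # circumference into shaft revolutions per minute.
    return 12.0 * surface_speed_sfm / (math.pi * tool_diameter_in)

def feed_rate_ipm(rpm, chip_load_in_per_tooth, flutes):
    # Linear feed = revolutions/min * chip per tooth * teeth per revolution.
    return rpm * chip_load_in_per_tooth * flutes

rpm = spindle_rpm(surface_speed_sfm=100.0, tool_diameter_in=0.5)  # illustrative values
print(round(rpm))                              # ~764 rpm
print(round(feed_rate_ipm(rpm, 0.002, 4), 1))  # ~6.1 in/min for a 4-flute end mill
```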
|
https://en.wikipedia.org/wiki/Numerical_control
|
Perceptual control theory (PCT) is a model of behavior based on the properties of negative feedback control loops. A control loop maintains a sensed variable at or near a reference value by means of the effects of its outputs upon that variable, as mediated by physical properties of the environment. In engineering control theory, reference values are set by a user outside the system; an example is a thermostat. In a living organism, reference values for controlled perceptual variables are endogenously maintained. Biological homeostasis and reflexes are simple, low-level examples. The discovery of mathematical principles of control introduced a way to model a negative feedback loop closed through the environment (circular causation), which spawned perceptual control theory. It differs fundamentally from some models in behavioral and cognitive psychology that model stimuli as causes of behavior (linear causation). PCT research is published in experimental psychology, neuroscience, ethology, anthropology, linguistics, sociology, robotics, developmental psychology, organizational psychology and management, and a number of other fields. PCT has been applied to the design and administration of educational systems, and has led to a psychotherapy called the method of levels.
Perceptual control theory is deeply rooted in biological cybernetics, systems biology and control theory, and in the related concept of feedback loops. Unlike some models in behavioral and cognitive psychology, it sets out from the concept of circular causality. It therefore shares its theoretical foundation with the concept of plant control, but is distinct from it in emphasizing the control of the internal representation of the physical world.[1]
The plant control theory focuses on neuro-computational processes of movement generation, once a decision for generating the movement has been taken. PCT spotlights the embeddedness of agents in their environment. Therefore, from the perspective of perceptual control, the central problem of motor control consists in finding a sensory input to the system that matches a desired perception.[1]
PCT has roots in the physiological insights of Claude Bernard and, in the 20th century, in the research of Walter B. Cannon and in the fields of control systems engineering and cybernetics. Classical negative feedback control was worked out by engineers in the 1930s and 1940s,[2][3] and further developed by Wiener,[4] Ashby,[5] and others in the early development of the field of cybernetics. Beginning in the 1950s, William T. Powers applied the concepts and methods of engineered control systems to biological control systems, and developed the experimental methodology of PCT.[6][7]
A key insight of PCT is that the controlled variable is not the output of the system (the behavioral actions), but its input, that is, a sensed and transformed function of some state of the environment that the control system's output can affect. Because these sensed and transformed inputs may appear as consciously perceived aspects of the environment, Powers labelled the controlled variable "perception". The theory came to be known as "perceptual control theory" or PCT rather than "control theory applied to psychology" because control theorists often assert or assume that it is the system's output that is controlled.[8] In PCT it is the internal representation of the state of some variable in the environment—a "perception" in everyday language—that is controlled.[9] The basic principles of PCT were first published by Powers, Clark, and MacFarland as a "general feedback theory of behavior" in 1960,[10] with credits to the cybernetic authors Wiener and Ashby. It has been systematically developed since then in the research community that has gathered around it.[11] Initially, it was overshadowed by the cognitive revolution (later supplanted by cognitive science), but has now become better known.[12][13][14][15]
Powers and other researchers in the field point to problems of purpose, causation, and teleology at the foundations of psychology which control theory resolves.[16] From Aristotle through William James and John Dewey it has been recognized that behavior is purposeful and not merely reactive, but how to account for this has been problematic because the only evidence for intentions was subjective. As Powers pointed out, behaviorists following Wundt, Thorndike, Watson, and others rejected introspective reports as data for an objective science of psychology; only observable behavior could be admitted as data.[17] Such behaviorists modeled environmental events (stimuli) as causing behavioral actions (responses). This causal assumption persists in some models in cognitive psychology that interpose cognitive maps and other postulated information processing between stimulus and response, but otherwise retain the assumption of linear causation from environment to behavior, which Richard Marken called an "open-loop causal model of behavioral organization" in contrast to PCT's closed-loop model.[12]
Another, more specific reason that Powers observed for psychologists' rejecting notions of purpose or intention was that they could not see how a goal (a state that did not yet exist) could cause the behavior that led to it. PCT resolves these philosophical arguments about teleology because it provides a model of the functioning of organisms in which purpose has objective status without recourse to introspection, and in which causation is circular around feedback loops.[18]
A simple negative feedback control system is a cruise control system for a car. A cruise control system has a sensor which "perceives" speed as the rate of spin of the drive shaft directly connected to the wheels. It also has a driver-adjustable "goal" specifying a particular speed. The sensed speed is continuously compared against the specified speed by a device (called a "comparator") which subtracts the currently sensed input value from the stored goal value. The difference (the error signal) determines the throttle setting (the accelerator depression), so that the engine output is continuously varied to prevent the speed of the car from increasing or decreasing from the desired speed as environmental conditions change.
If the speed of the car starts to drop below the goal-speed, for example when climbing a hill, the small increase in the error signal, amplified, causes engine output to increase, which keeps the error very nearly at zero. If the speed begins to exceed the goal, e.g. when going down a hill, the engine is throttled back so as to act as a brake, so again the speed is kept from departing more than a barely detectable amount from the goal speed (brakes being needed only if the hill is too steep). The result is that the cruise control system maintains a speed close to the goal as the car goes up and down hills, and as other disturbances such as wind affect the car's speed. This is all done without any planning of specific actions, and without any blind reactions to stimuli. Indeed, the cruise control system does not sense disturbances such as wind pressure at all, it only senses the controlled variable, speed. Nor does it control the power generated by the engine, it uses the 'behavior' of engine power as its means to control the sensed speed.
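This loop can be captured in a few lines of simulation. In the hedged sketch below (all vehicle constants are hypothetical), the controller never senses the hill at all; it senses only speed, and the amplified error holds the speed near the goal when a grade appears:

```python
# Toy cruise-control loop: comparator + amplified error, no sensing of the hill.
def cruise(steps=300, goal=27.0, gain=2000.0, dt=0.1):
    speed, mass, drag = 20.0, 1200.0, 8.0
    for t in range(steps):
        hill = -900.0 if t >= 150 else 0.0   # disturbance: a grade appears mid-run
        error = goal - speed                 # comparator: goal minus sensed speed
        force = max(0.0, gain * error)       # amplified error sets the throttle
        speed += (force - drag * speed + hill) / mass * dt
    return speed

print(round(cruise(steps=150), 2))   # flat road: ~26.9 m/s, close to the 27 m/s goal
print(round(cruise(steps=300), 2))   # climbing: still ~26.4 m/s, the hill unsensed
```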
The same principles of negative feedback control (including the ability to nullify the effects of unpredictable external or internal disturbances) apply to living control systems.[4] Implications of these principles are studied intensively, for example, in biological and medical cybernetics and systems biology.
The thesis of PCT is that animals and people do not control their behavior; rather, they vary their behavior as their means of controlling their perceptions, with or without external disturbances. This stands in contrast to the historical and still widespread assumption that behavior is the final result of stimulus inputs and cognitive plans.[12][19]
The principal datum in PCT methodology is the controlled variable. The fundamental step of PCT research, the test for controlled variables, begins with the slow and gentle application of disturbing influences to the state of a variable in the environment which the researcher surmises is already under control by the observed organism. It is essential not to overwhelm the organism's ability to control, since that is what is being investigated. If the organism changes its actions just so as to prevent the disturbing influence from having the expected effect on that variable, that is strong evidence that the experimental action disturbed a controlled variable. It is crucially important to distinguish the perceptions and point of view of the observer from those of the observed organism. It may take a number of variations of the test to isolate just which aspect of the environmental situation is under control, as perceived by the observed organism.[20][21]
PCT employs a black box methodology. The controlled variable as measured by the observer corresponds quantitatively to a reference value for a perception that the organism is controlling. The controlled variable is thus an objective index of the purpose or intention of those particular behavioral actions by the organism—the goal which those actions consistently work to attain despite disturbances. With few exceptions, in the current state of neuroscience this internally maintained reference value is seldom directly observed as such (e.g. as a rate of firing in a neuron), since few researchers trace the relevant electrical and chemical variables by their specific pathways while a living organism is engaging in what we externally observe as behavior.[22] However, when a working negative feedback system simulated on a digital computer performs essentially identically to observed organisms, then the well understood negative feedback structure of the simulation or model (the white box) is understood to demonstrate the unseen negative feedback structure within the organism (the black box).[6]
Data for individuals are not aggregated for statistical analysis;[23] instead, a generative model is built which replicates the data observed for individuals with very high fidelity (0.95 or better)[clarification needed]. To build such a model of a given behavioral situation requires careful measurements of three observed variables: the input quantity qi (the state of the controlled variable in the environment), the output quantity qo (the organism's actions affecting that variable), and the disturbance d (other environmental influences on the controlled variable).
A fourth value, the internally maintained reference r (a variable 'setpoint'), is deduced from the value at which the organism is observed to maintain qi, as determined by the test for controlled variables (described at the beginning of this section).
With two variables specified, the controlled input qi and the reference r, a properly designed control system, simulated on a digital computer, produces outputs qo that almost precisely oppose unpredictable disturbances d to the controlled input. Further, the variance from perfect control accords well with that observed for living organisms.[24] Perfect control would result in zero effect of the disturbance, but living organisms are not perfect controllers, and the aim of PCT is to model living organisms. When a computer simulation performs with >95% conformity to experimentally measured values, opposing the effect of unpredictable changes in d by generating (nearly) equal and opposite values of qo, it is understood to model the behavior and the internal control-loop structure of the organism.[18][10][25]
By extension, the elaboration of the theory constitutes a general model of cognitive process and behavior. With every specific model or simulation of behavior that is constructed and tested against observed data, the general model that is presented in the theory is exposed to potential challenge that could call for revision or could lead to refutation.
To illustrate the mathematical calculations employed in a PCT simulation, consider a pursuit tracking task in which the participant keeps a mouse cursor aligned with a moving target on a computer monitor.
The model assumes that a perceptual signal within the participant represents the magnitude of the input quantity qi. (This has been demonstrated to be a rate of firing in a neuron, at least at the lowest levels.)[25][26] In the tracking task, the input quantity is the vertical distance between the target position T and the cursor position C, and the random variation of the target position acts as the disturbance d of that input quantity. This suggests that the perceptual signal p quantitatively represents the cursor position C minus the target position T, as expressed in the equation p = C – T.
Between the perception of target and cursor and the construction of the signal representing the distance between them there is a delay of τ milliseconds, so that the working perceptual signal at time t represents the target-to-cursor distance at a prior time, t – τ. Consequently, the equation used in the model is
1. p(t) = C(t – τ) – T(t – τ)
The negative feedback control system receives a reference signal r which specifies the magnitude of the given perceptual signal which is currently intended or desired. (For the origin of r within the organism, see under "A hierarchy of control", below.) Both r and p are input to a simple neural structure with r excitatory and p inhibitory. This structure is called a "comparator".[25] The effect is to subtract p from r, yielding an error signal e that indicates the magnitude and sign of the difference between the desired magnitude r and the currently input magnitude p of the given perception. The equation representing this in the model is:
2. e = r – p
The error signal e must be transformed to the output quantity qo (representing the participant's muscular efforts affecting the mouse position). Experiments have shown that in the best model for the output function, the mouse velocity Vcursor is proportional to the error signal e by a gain factor G (that is, Vcursor = G*e). Thus, when the perceptual signal p is smaller than the reference signal r, the error signal e has a positive sign, and from it the model computes an upward velocity of the cursor that is proportional to the error.
The next position of the cursor Cnew is the current position Cold plus the velocity Vcursor times the duration dt of one iteration of the program. By simple algebra, we substitute G*e (as given above) for Vcursor, yielding a third equation:
3. Cnew = Cold + G*e*dt
These three simple equations or program steps constitute the simplest form of the model for the tracking task. When these three simultaneous equations are evaluated over and over with similarly distributed random disturbances d of the target position that the human participant experienced, the output positions and velocities of the cursor duplicate the participant's actions in the tracking task above within 4.0% of their peak-to-peak range, in great detail.
This simple model can be refined with a damping factor d which reduces the discrepancy between the model and the human participant to 3.6% when the disturbance is set to maximum difficulty.
3'. Cnew = Cold + [(G*e) – (d*Cold)]*dt
Detailed discussion of this model in (Powers 2008)[24] includes both source and executable code, with which the reader can verify how well this simple program simulates real behavior. No consideration is needed of possible nonlinearities such as the Weber–Fechner law, potential noise in the system, continuously varying angles at the joints, and many other factors that could afflict performance if this were a simple linear model. No inverse kinematics or predictive calculations are required. The model simply reduces the discrepancy between input p and reference r continuously as it arises in real time, and that is all that is required—as predicted by the theory.[18][25]
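For readers who want to experiment, the three equations (with the damped output law 3') translate directly into a short program. The sketch below is not Powers's published code; it is a minimal re-implementation under assumed constants, with a sinusoidal target standing in for the random disturbance:

```python
# Minimal implementation of equations 1, 2 and 3' for the tracking task.
# Gain, delay and target motion are illustrative assumptions.
import math

def pct_tracking(steps=2000, dt=1/60, tau_steps=8, G=5.0, r=0.0, damping=0.05):
    C, history = 0.0, []                    # cursor position and (C, T) history
    for t in range(steps):
        T = 10.0 * math.sin(2 * math.pi * 0.1 * t * dt)  # moving target (disturbance)
        history.append((C, T))
        Cd, Td = history[max(0, t - tau_steps)]          # perceptions delayed by tau
        p = Cd - Td                                      # eq. 1: perceptual signal
        e = r - p                                        # eq. 2: comparator
        C += (G * e - damping * C) * dt                  # eq. 3': damped output
    return C, T

C, T = pct_tracking()
print(round(C, 2), round(T, 2))  # cursor tracks the target with a small lag,
                                 # keeping the controlled perception p near r = 0
```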
In the artificial systems that are specified by engineering control theory, the reference signal is considered to be an external input to the 'plant'.[8] In engineering control theory, the reference signal or set point is public; in PCT, it is not, but rather must be deduced from the results of the test for controlled variables, as described above in the methodology section. This is because in living systems a reference signal is not an externally accessible input, but instead originates within the system. In the hierarchical model, error output of higher-level control loops, as described in the next section below, evokes the reference signal r from synapse-local memory, and the strength of r is proportional to the (weighted) strength of the error signal or signals from one or more higher-level systems.[27]
In engineering control systems, in the case where there are several such reference inputs, a 'controller' is designed to manipulate those inputs so as to obtain the effect on the output of the system that is desired by the system's designer, and the task of a control theory (so conceived) is to calculate those manipulations so as to avoid instability and oscillation. The designer of a PCT model or simulation specifies no particular desired effect on the output of the system, except that it must be whatever is required to bring the input from the environment (the perceptual signal) into conformity with the reference. In perceptual control theory, the input function for the reference signal is a weighted sum of internally generated signals (in the canonical case, higher-level error signals), and loop stability is determined locally for each loop in the manner sketched in the preceding section on the mathematics of PCT (and elaborated more fully in the referenced literature). The weighted sum is understood to result from reorganization.
Engineering control theory is computationally demanding, but as the preceding section shows, PCT is not. For example, contrast the implementation of a model of an inverted pendulum in engineering control theory[28] with the PCT implementation as a hierarchy of five simple control systems.[29]
Perceptions, in PCT, are constructed and controlled in a hierarchy of levels. For example, visual perception of an object is constructed from differences in light intensity or differences in sensations such as color at its edges. Controlling the shape or location of the object requires altering the perceptions of sensations or intensities (which are controlled by lower-level systems). This organizing principle is applied at all levels, up to the most abstract philosophical and theoretical constructs.
The Russian physiologist Nicolas Bernstein[30] independently came to the same conclusion that behavior has to be multiordinal—organized hierarchically, in layers. A simple problem led to this conclusion at about the same time both in PCT and in Bernstein's work. The spinal reflexes act to stabilize limbs against disturbances. Why do they not prevent centers higher in the brain from using those limbs to carry out behavior? Since the brain obviously does use the spinal systems in producing behavior, there must be a principle that allows the higher systems to operate by incorporating the reflexes, not just by overcoming them or turning them off. The answer is that the reference value (setpoint) for a spinal reflex is not static; rather, it is varied by higher-level systems as their means of moving the limbs (servomechanism). This principle applies to higher feedback loops, as each loop presents the same problem to subsystems above it.
Whereas an engineered control system has a reference value or setpoint adjusted by some external agency, the reference value for a biological control system cannot be set in this way. The setpoint must come from some internal process. If there is a way for behavior to affect it, any perception may be brought to the state momentarily specified by higher levels and then be maintained in that state against unpredictable disturbances. In a hierarchy of control systems, higher levels adjust the goals of lower levels as their means of approaching their own goals set by still-higher systems. This has important consequences for any proposed external control of an autonomous living control system (organism). At the highest level, reference values (goals) are set by heredity or adaptive processes.
If an organism controls inappropriate perceptions, or if it controls some perceptions to inappropriate values, then it is less likely to bring progeny to maturity, and may die. Consequently, by natural selection successive generations of organisms evolve so that they control those perceptions that, when controlled with appropriate setpoints, tend to maintain critical internal variables at optimal levels, or at least within non-lethal limits. Powers called these critical internal variables "intrinsic variables" (Ashby's "essential variables").
The mechanism that influences the development of structures of perceptions to be controlled is termed "reorganization", a process within the individual organism that is subject to natural selection just as is the evolved structure of individuals within a species.[31]
This "reorganization system" is proposed to be part of the inherited structure of the organism. It changes the underlying parameters and connectivity of the control hierarchy in a random-walk manner. There is a basic continuous rate of change in intrinsic variables which proceeds at a speed set by the total error (and stops at zero error), punctuated by random changes in direction in a hyperspace with as many dimensions as there are critical variables. This is a more or less direct adaptation of Ashby's "homeostat", first adopted into PCT in the 1960 paper[10]and then changed to use E. coli's method of navigating up gradients of nutrients, as described by Koshland (1980).[32]
Reorganization may occur at any level when loss of control at that level causes intrinsic (essential) variables to deviate from genetically determined set points. This is the basic mechanism that is involved in trial-and-error learning, which leads to the acquisition of more systematic kinds of learning processes.[33]
The reorganization concept has led to a method of psychotherapy called the method of levels (MOL). Using MOL, the therapist aims to help the patient shift his or her awareness to higher levels of perception in order to resolve conflicts and allow reorganization to take place.[34]
Currently, no one theory has been agreed upon to explain the synaptic, neuronal or systemic basis of learning. Prominent since 1973, however, is the idea that long-term potentiation (LTP) of populations of synapses induces learning through both pre- and postsynaptic mechanisms.[35][36] LTP is a form of Hebbian learning, which proposed that high-frequency, tonic activation of a circuit of neurones increases the efficacy with which they are activated and the size of their response to a given stimulus as compared to the standard neurone (Hebb, 1949).[37] These mechanisms are the principles behind Hebb's famously simple explanation: "Those that fire together, wire together".[37]
LTP has received much support since it was first observed by Terje Lømo in 1966 and is still the subject of many modern studies and clinical research. However, there are possible alternative mechanisms underlying LTP, as presented by Enoki, Hu, Hamilton and Fine in 2009,[38] published in the journal Neuron. They concede that LTP is the basis of learning. However, they firstly propose that LTP occurs in individual synapses, and this plasticity is graded (as opposed to binary) and bidirectional.[38] Secondly, the group suggest that the synaptic changes are expressed solely presynaptically, via changes in the probability of transmitter release.[38] Finally, the team predict that the occurrence of LTP could be age-dependent, as the plasticity of a neonatal brain would be higher than that of a mature one. Therefore, the theories differ, as one proposes an on/off occurrence of LTP by pre- and postsynaptic mechanisms and the other proposes only presynaptic changes, graded ability, and age-dependence.
These theories do agree on one element of LTP, namely, that it must occur through physical changes to the synaptic membrane(s), i.e. synaptic plasticity. Perceptual control theory encompasses both of these views. It proposes the mechanism of 'reorganisation' as the basis of learning. Reorganisation occurs within the inherent control system of a human or animal by restructuring the inter- and intraconnections of its hierarchical organisation, akin to the neuroscientific phenomenon of neural plasticity. This reorganisation initially allows the trial-and-error form of learning, which is seen in babies, and then progresses to more structured learning through association, apparent in infants, and finally to systematic learning, covering the adult ability to learn from both internally and externally generated stimuli and events. In this way, PCT provides a valid model for learning that combines the biological mechanisms of LTP with an explanation of the progression and change of mechanisms associated with developmental ability.[39][40][41][42][43]
Powers in 2008 produced a simulation of arm co-ordination.[24] He suggested that in order to move your arm, fourteen control systems that control fourteen joint angles are involved, and they reorganise simultaneously and independently. It was found that for optimum performance, the output functions must be organised so that each control system's output affects only the one environmental variable it perceives. In this simulation, the reorganising process works just as Powers suggests it works in humans, reducing outputs that cause error and increasing those that reduce error. Initially, the disturbances have large effects on the angles of the joints, but over time the joint angles match the reference signals more closely as the system is reorganised. Powers suggests that in order to achieve coordination of joint angles to produce desired movements, instead of calculating how multiple joint angles must change to produce this movement, the brain uses negative feedback systems to generate the joint angles that are required. A single reference signal that is varied in a higher-order system can generate a movement that requires several joint angles to change at the same time.[24]
Botvinick in 2008[44] proposed that one of the founding insights of the cognitive revolution was the recognition of hierarchical structure in human behavior. Despite decades of research, however, the computational mechanisms underlying hierarchically organized behavior are still not fully understood. Bedre, Hoffman, Cooney & D'Esposito in 2009[45] proposed that the fundamental goal in cognitive neuroscience is to characterize the functional organization of the frontal cortex that supports the control of action.
Recent neuroimaging data has supported the hypothesis that the frontal lobes are organized hierarchically, such that control is supported in progressively caudal regions as control moves to more concrete specification of action. However, it is still not clear whether lower-order control processors are differentially affected by impairments in higher-order control when between-level interactions are required to complete a task, or whether there are feedback influences of lower-level on higher-level control.[45]
Botvinick in 2008[44] found that all existing models of hierarchically structured behavior share at least one general assumption – that the hierarchical, part–whole organization of human action is mirrored in the internal or neural representations underlying it. Specifically, the assumption is that there exist representations not only of low-level motor behaviors, but also separable representations of higher-level behavioral units. The latest crop of models provides new insights, but also poses new or refined questions for empirical research, including how abstract action representations emerge through learning, how they interact with different modes of action control, and how they sort out within the prefrontal cortex (PFC).
Perceptual control theory (PCT) can provide an explanatory model of neural organisation that deals with the current issues. PCT describes the hierarchical character of behavior as being determined by control of hierarchically organized perception. Control systems in the body and in the internal environment of billions of interconnected neurons within the brain are responsible for keeping perceptual signals within survivable limits in the unpredictably variable environment from which those perceptions are derived. PCT does not propose that there is an internal model within which the brain simulates behavior before issuing commands to execute that behavior. Instead, one of its characteristic features is the principled lack of cerebral organisation of behavior. Rather, behavior is the organism's variable means to reduce the discrepancy between perceptions and reference values which are based on various external and internal inputs.[46] Behavior must constantly adapt and change for an organism to maintain its perceptual goals. In this way, PCT can provide an explanation of abstract learning through spontaneous reorganisation of the hierarchy. PCT proposes that conflict occurs between disparate reference values for a given perception rather than between different responses,[13] and that learning is implemented as trial-and-error changes of the properties of control systems,[27] rather than any specific response being reinforced. In this way, behavior remains adaptive to the environment as it unfolds, rather than relying on learned action patterns that may not fit.
Hierarchies of perceptual control have been simulated in computer models and have been shown to provide a close match to behavioral data. For example, Marken[47] conducted an experiment comparing the behavior of a perceptual control hierarchy computer model with that of six healthy volunteers in three experiments. The participants were required to keep the distance between a left line and a centre line equal to that between the centre line and a right line. They were also instructed to keep both distances equal to 2 cm. They had two paddles in their hands, one controlling the left line and one controlling the middle line. To do this, they had to resist random disturbances applied to the positions of the lines. As the participants achieved control, they managed to nullify the expected effect of the disturbances by moving their paddles. The correlation between the behavior of subjects and the model in all the experiments approached 0.99. It is proposed that the organization of models of hierarchical control systems such as this informs us about the organization of the human subjects whose behavior it so closely reproduces.
PCT has significant implications for robotics and artificial intelligence. W.T. Powers introduced the application of PCT to robotics in 1978, early in the availability of home computers.[48][49][50][51] The comparatively simple architecture,[52] a hierarchy of perceptual controllers, has no need for complex models of the external world, inverse kinematics, or computation from input–output mappings. Traditional approaches to robotics generally depend upon the computation of actions in a constrained environment. Robots designed this way are inflexible and clumsy, unable to cope with the dynamic nature of the real world. PCT robots inherently resist and counter the chaotic, unpredictable disturbances to their controlled inputs which occur in an unconstrained environment. The PCT robotics architecture has recently been applied to a number of real-world robotic systems, including robotic rovers,[53] a balancing robot[54] and robot arms.[55] Some commercially available robots which demonstrate good control in a naturalistic environment use a control-theoretic architecture which requires much more intensive computation; for example, Boston Dynamics has said[56] that its robots have historically leveraged model predictive control.
The preceding explanation of PCT principles indicates how this theory can provide a valid explanation of neural organisation and how it can address some of the current issues of conceptual models.
Perceptual control theory currently proposes a hierarchy of 11 levels of perceptions controlled by systems in the human mind and neural architecture. These are: intensity, sensation, configuration, transition, event, relationship, category, sequence, program, principle, and system concept. Diverse perceptual signals at a lower level (e.g. visual perceptions of intensities) are combined in an input function to construct a single perception at the higher level (e.g. visual perception of a color sensation). The perceptions that are constructed and controlled at the lower levels are passed along as the perceptual inputs at the higher levels. The higher levels in turn control by adjusting the reference levels (goals) of the lower levels, in effect telling the lower levels what to perceive.[25][33]
While many computer demonstrations of principles have been developed, the proposed higher levels are difficult to model because too little is known about how the brain works at these levels. Isolated higher-level control processes can be investigated, but models of an extensive hierarchy of control are still only conceptual, or at best rudimentary.
Perceptual control theory has not been widely accepted in mainstream psychology, but has been effectively used in a considerable range of domains[57][58] in human factors,[59] clinical psychology, and psychotherapy (the "Method of Levels"). It is the basis for a considerable body of research in sociology,[60] and it has formed the conceptual foundation for the reference model used by a succession of NATO research study groups.[61]
Recent approaches use principles of perceptual control theory to provide new algorithmic foundations forartificial intelligenceandmachine learning.[62]
|
https://en.wikipedia.org/wiki/Perceptual_control_theory
|
A proportional–integral–derivative controller (PID controller or three-term controller) is a feedback-based control loop mechanism commonly used to manage machines and processes that require continuous control and automatic adjustment. It is typically used in industrial control systems and various other applications where constant control through modulation is necessary without human intervention. The PID controller automatically compares the desired target value (setpoint or SP) with the actual value of the system (process variable or PV). The difference between these two values is called the error value, denoted as e(t).
It then applies corrective actions automatically to bring the PV to the same value as the SP using three methods: The proportional (P) component responds to the current error value by producing anoutputthat is directly proportional to the magnitude of the error. This provides immediate correction based on how far the system is from the desired setpoint. The integral (I) component, in turn, considers the cumulative sum of past errors to address anyresidualsteady-stateerrors that persist over time, eliminating lingering discrepancies. Lastly, the derivative (D) component predicts future error by assessing the rate of change of the error, which helps to mitigateovershootand enhance system stability, particularly when the system undergoes rapid changes. The PID output signal can directly control actuators through voltage, current, or other modulation methods, depending on the application. The PID controller reduces the likelihood ofhuman errorand improvesautomation.
A common example is a vehicle’scruise control system. For instance, when a vehicle encounters a hill, its speed will decrease if the engine power output is kept constant. The PID controller adjusts the engine's power output to restore the vehicle to its desired speed, doing so efficiently with minimal delay and overshoot.
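A rough numeric illustration of this example is sketched below. The first-order vehicle model, the gain values, and the size of the hill disturbance are all assumptions chosen for demonstration, not values from any real cruise controller; they only show the three terms working together.

```python
# Toy cruise control: m*dv/dt = u - c*v - grade_force
m, c = 1200.0, 25.0              # vehicle mass (kg) and drag coefficient (assumed)
Kp, Ki, Kd = 800.0, 120.0, 50.0  # illustrative gains, not tuned values
dt, sp = 0.05, 25.0              # time step (s) and speed setpoint (m/s)

v, integral, prev_err = 25.0, 0.0, 0.0
for k in range(4000):
    grade = 2000.0 if k > 1000 else 0.0    # hill appears at t = 50 s
    err = sp - v                            # SP - PV
    integral += err * dt                    # I: accumulated past error
    deriv = (err - prev_err) / dt           # D: rate of change of error
    u = Kp * err + Ki * integral + Kd * deriv   # PID output = engine force
    prev_err = err
    v += (u - c * v - grade) * dt / m       # plant update

# The integral term ends up holding the extra force the hill demands,
# so the speed returns to the setpoint with little residual error.
print(f"speed after hill: {v:.2f} m/s (setpoint {sp})")
```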
The theoretical foundation of PID controllers dates back to the early 1920s with the development of automatic steering systems for ships. This concept was later adopted for automatic process control in manufacturing, first appearing inpneumatic actuatorsand evolving into electronic controllers. PID controllers are widely used in numerous applications requiring accurate, stable, and optimizedautomatic control, such astemperature regulation, motor speed control, and industrial process management.
The distinguishing feature of the PID controller is the ability to use the three control terms of proportional, integral and derivative influence on the controller output to apply accurate and optimal control. A PID controller continuously calculates an error value e(t) as the difference between a desired setpoint SP = r(t) and a measured process variable PV = y(t), that is, {\displaystyle e(t)=r(t)-y(t)}, and applies a correction based on proportional, integral, and derivative terms. The controller attempts to minimize the error over time by adjustment of a control variable u(t), such as the opening of a control valve, to a new value determined by a weighted sum of the control terms.
The PID controller directly generates a continuous control signal based on error, without discrete modulation.
In this model:
- Term P is proportional to the current value of the SP − PV error e(t). If the error is large, the control output will be proportionately large through the gain factor Kp. Using proportional control alone generally leaves a residual error, because the controller requires an error to generate a proportional response.
- Term I accounts for past values of the SP − PV error and integrates them over time. If a residual error remains after the application of proportional control, the integral term seeks to eliminate it by adding a control effect due to the historic cumulative value of the error; once the error is eliminated, the integral term stops growing.
- Term D is a best estimate of the future trend of the SP − PV error, based on its current rate of change. It provides a damping, anticipatory effect that reduces overshoot produced by the other two terms.
Tuning– The balance of these effects is achieved byloop tuningto produce the optimal control function. The tuning constants are shown below as "K" and must be derived for each control application, as they depend on the response characteristics of the physical system, external to the controller. These are dependent on the behavior of the measuring sensor, the final control element (such as a control valve), any control signal delays, and the process itself. Approximate values of constants can usually be initially entered knowing the type of application, but they are normally refined, or tuned, by introducing a setpoint change and observing the system response.[2]
Control action – The mathematical model and practical loop above both use a direct control action for all the terms, which means an increasing positive error results in an increasing positive control output correction. This is because the "error" term is not the deviation from the setpoint (actual − desired) but is in fact the correction needed (desired − actual). The system is called reverse acting if it is necessary to apply negative corrective action. For instance, if the valve in the flow loop was 100–0% valve opening for 0–100% control output, the controller action has to be reversed. Some process control schemes and final control elements require this reverse action. An example would be a valve for cooling water, where the fail-safe mode, in the case of signal loss, would be 100% opening of the valve; therefore 0% controller output needs to cause 100% valve opening.
The overall control function is

{\displaystyle u(t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +K_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}},}
where K_p, K_i, and K_d, all non-negative, denote the coefficients for the proportional, integral, and derivative terms respectively (sometimes denoted P, I, and D).
In the standard form of the equation (see later in article), K_i and K_d are respectively replaced by K_p/T_i and K_p T_d; the advantage of this being that T_i and T_d have some understandable physical meaning, as they represent an integration time and a derivative time respectively. K_p T_d is the time constant with which the controller will attempt to approach the set point. K_p/T_i determines how long the controller will tolerate the output being consistently above or below the set point.
Although a PID controller has three control terms, some applications need only one or two terms to provide appropriate control. This is achieved by setting the unused parameters to zero and is called a PI, PD, P, or I controller in the absence of the other control actions. PI controllers are fairly common in applications where derivative action would be sensitive to measurement noise, but the integral term is often needed for the system to reach its target value.[citation needed]
The use of the PID algorithm does not guaranteeoptimal controlof the system or itscontrol stability(see§ Limitations, below). Situations may occur where there are excessive delays: the measurement of the process value is delayed, or the control action does not apply quickly enough. In these caseslead–lag compensationis required to be effective. The response of the controller can be described in terms of its responsiveness to an error, the degree to which the systemovershootsa setpoint, and the degree of any systemoscillation. But the PID controller is broadly applicable since it relies only on the response of the measured process variable, not on knowledge or a model of the underlying process.
Continuous control, before PID controllers were fully understood and implemented, has one of its origins in thecentrifugal governor, which uses rotating weights to control a process. This was invented byChristiaan Huygensin the 17th century to regulate the gap betweenmillstonesinwindmillsdepending on the speed of rotation, and thereby compensate for the variable speed of grain feed.[3][4]
With the invention of the low-pressure stationary steam engine there was a need for automatic speed control, andJames Watt's self-designed "conical pendulum" governor, a set of revolving steel balls attached to a vertical spindle by link arms, came to be an industry standard. This was based on the millstone-gap control concept.[5]
Rotating-governor speed control, however, was still variable under conditions of varying load, where the shortcoming of what is now known as proportional control alone was evident. The error between the desired speed and the actual speed would increase with increasing load. In the 19th century, the theoretical basis for the operation of governors was first described byJames Clerk Maxwellin 1868 in his now-famous paperOn Governors. He explored the mathematical basis for control stability, and progressed a good way towards a solution, but made an appeal for mathematicians to examine the problem.[6][5]The problem was examined further in 1874 byEdward Routh,Charles Sturm, and in 1895,Adolf Hurwitz, all of whom contributed to the establishment of control stability criteria.[5]In subsequent applications, speed governors were further refined, notably by American scientistWillard Gibbs, who in 1872 theoretically analyzed Watt's conical pendulum governor.
About this time, the invention of theWhitehead torpedoposed a control problem that required accurate control of the running depth. Use of a depth pressure sensor alone proved inadequate, and a pendulum that measured the fore and aft pitch of the torpedo was combined with depth measurement to become thependulum-and-hydrostat control. Pressure control provided only a proportional control that, if the control gain was too high, would become unstable and go into overshoot with considerableinstabilityof depth-holding. The pendulum added what is now known as derivative control, which damped the oscillations by detecting the torpedo dive/climb angle and thereby the rate-of-change of depth.[7]This development (named by Whitehead as "The Secret" to give no clue to its action) was around 1868.[8]
Another early example of a PID-type controller was developed byElmer Sperryin 1911 for ship steering, though his work was intuitive rather than mathematically-based.[9]
It was not until 1922, however, that a formal control law for what we now call PID or three-term control was first developed using theoretical analysis, byRussian AmericanengineerNicolas Minorsky.[10]Minorsky was researching and designing automatic ship steering for the US Navy and based his analysis on observations of ahelmsman. He noted the helmsman steered the ship based not only on the current course error but also on past error, as well as the current rate of change;[11]this was then given a mathematical treatment by Minorsky.[5]His goal was stability, not general control, which simplified the problem significantly. While proportional control provided stability against small disturbances, it was insufficient for dealing with a steady disturbance, notably a stiff gale (due tosteady-state error), which required adding the integral term. Finally, the derivative term was added to improve stability and control.
Trials were carried out on theUSSNew Mexico, with the controllers controlling theangular velocity(not the angle) of the rudder. PI control yielded sustained yaw (angular error) of ±2°. Adding the D element yielded a yaw error of ±1/6°, better than most helmsmen could achieve.[12]
The Navy ultimately did not adopt the system due to resistance by personnel. Similar work was carried out and published by several others[who?]in the 1930s.[citation needed]
The wide use of feedback controllers did not become feasible until the development of wideband high-gain amplifiers to use the concept ofnegative feedback. This had been developed in telephone engineering electronics byHarold Blackin the late 1920s, but not published until 1934.[5]Independently, Clesson E Mason of the Foxboro Company in 1930 invented a wide-band pneumatic controller by combining thenozzle and flapperhigh-gain pneumatic amplifier, which had been invented in 1914, with negative feedback from the controller output. This dramatically increased the linear range of operation of the nozzle and flapper amplifier, and integral control could also be added by the use of a precision bleed valve and a bellows generating the integral term. The result was the "Stabilog" controller which gave both proportional and integral functions using feedback bellows.[5]The integral term was calledReset.[13]Later the derivative term was added by a further bellows and adjustable orifice.
From about 1932 onwards, the use of wideband pneumatic controllers increased rapidly in a variety of control applications. Air pressure was used for generating the controller output, and also for powering process modulating devices such as diaphragm-operated control valves. They were simple low maintenance devices that operated well in harsh industrial environments and did not present explosion risks inhazardous locations. They were the industry standard for many decades until the advent of discrete electronic controllers anddistributed control systems(DCSs).
With these controllers, a pneumatic industry signaling standard of 3–15 psi (0.2–1.0 bar) was established, which had an elevated zero to ensure devices were working within their linear characteristic and represented the control range of 0-100%.
In the 1950s, when high gain electronic amplifiers became cheap and reliable, electronic PID controllers became popular, and the pneumatic standard was emulated by 10-50 mA and 4–20 mAcurrent loopsignals (the latter became the industry standard). Pneumatic field actuators are still widely used because of the advantages of pneumatic energy for control valves in process plant environments.
Most modern PID controls in industry are implemented ascomputer softwarein DCSs,programmable logic controllers(PLCs), or discretecompact controllers.
Electronic analog PID control loops were often found within more complex electronic systems, for example, the head positioning of adisk drive, the power conditioning of apower supply, or even the movement-detection circuit of a modernseismometer. Discrete electronic analog controllers have been largely replaced by digital controllers usingmicrocontrollersorFPGAsto implement PID algorithms. However, discrete analog PID controllers are still used in niche applications requiring high-bandwidth and low-noise performance, such as laser-diode controllers.[14]
Consider arobotic arm[15]that can be moved and positioned by a control loop. Anelectric motormay lift or lower the arm, depending on forward or reverse power applied, but power cannot be a simple function of position because of theinertial massof the arm, forces due to gravity, external forces on the arm such as a load to lift or work to be done on an external object.
The PID controller continuously adjusts the input current to achieve smooth motion.
By measuring the position (PV), and subtracting it from the setpoint (SP), the error (e) is found, and from it the controller calculates how much electric current to supply to the motor (MV).
The obvious method isproportionalcontrol: the motor current is set in proportion to the existing error. However, this method fails if, for instance, the arm has to lift different weights: a greater weight needs a greater force applied for the same error on the down side, but a smaller force if the error is low on the upside. That's where the integral and derivative terms play their part.
Anintegralterm increases action in relation not only to the error but also the time for which it has persisted. So, if the applied force is not enough to bring the error to zero, this force will be increased as time passes. A pure "I" controller could bring the error to zero, but it would be both weakly reacting at the start (because the action would be small at the beginning, depending on time to become significant) and more aggressive at the end (the action increases as long as the error is positive, even if the error is near zero).
Applying too much integral when the error is small and decreasing will lead to overshoot. After overshooting, if the controller were to apply a large correction in the opposite direction and repeatedly overshoot the desired position, the output wouldoscillatearound the setpoint in either a constant, growing, or decayingsinusoid. If the amplitude of the oscillations increases with time, the system is unstable. If it decreases, the system is stable. If the oscillations remain at a constant magnitude, the system ismarginally stable.
Aderivativeterm does not consider the magnitude of the error (meaning it cannot bring it to zero: a pure D controller cannot bring the system to its setpoint), but rather the rate of change of error, trying to bring this rate to zero. It aims at flattening the error trajectory into a horizontal line, damping the force applied, and so reduces overshoot (error on the other side because of too great applied force).
In the interest of achieving a controlled arrival at the desired position (SP) in a timely and accurate way, the controlled system needs to becritically damped. A well-tuned position control system will also apply the necessary currents to the controlled motor so that the arm pushes and pulls as necessary to resist external forces trying to move it away from the required position. The setpoint itself may be generated by an external system, such as aPLCor other computer system, so that it continuously varies depending on the work that the robotic arm is expected to do. A well-tuned PID control system will enable the arm to meet these changing requirements to the best of its capabilities.
If a controller starts from a stable state with zero error (PV = SP), then further changes by the controller will be in response to changes in other measured or unmeasured inputs to the process that affect the process, and hence the PV. Variables that affect the process other than the MV are known as disturbances. Generally, controllers are used to reject disturbances and to implement setpoint changes. A change in load on the arm constitutes a disturbance to the robot arm control process.
In theory, a controller can be used to control any process that has a measurable output (PV), a known ideal value for that output (SP), and an input to the process (MV) that will affect the relevant PV. Controllers are used in industry to regulatetemperature,pressure,force,feed rate,[16]flow rate, chemical composition (componentconcentrations),weight,position,speed, and practically every other variable for which a measurement exists.
The PID control scheme is named after its three correcting terms, whose sum constitutes the manipulated variable (MV). The proportional, integral, and derivative terms are summed to calculate the output of the PID controller. Defining u(t) as the controller output, the final form of the PID algorithm is

{\displaystyle u(t)=\mathrm {MV} (t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +K_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}},}

where
- K_p is the proportional gain, a tuning parameter,
- K_i is the integral gain, a tuning parameter,
- K_d is the derivative gain, a tuning parameter,
- e(t) = SP − PV(t) is the error (SP is the setpoint, and PV(t) is the process variable),
- t is the time (the present),
- τ is the variable of integration (takes on values from time 0 to the present t).
Equivalently, the transfer function in the Laplace domain of the PID controller is

{\displaystyle L(s)=K_{\text{p}}+{\frac {K_{\text{i}}}{s}}+K_{\text{d}}s,}

where s is the complex angular frequency.
The proportional term produces an output value that is proportional to the current error value. The proportional response can be adjusted by multiplying the error by a constantKp, called the proportional gain constant.
The proportional term is given by

{\displaystyle P_{\text{out}}=K_{\text{p}}e(t).}
A high proportional gain results in a large change in the output for a given change in the error. If the proportional gain is too high, the system can become unstable (seethe section on loop tuning). In contrast, a small gain results in a small output response to a large input error, and a less responsive or less sensitive controller. If the proportional gain is too low, the control action may be too small when responding to system disturbances. Tuning theory and industrial practice indicate that the proportional term should contribute the bulk of the output change.[citation needed]
Thesteady-state erroris the difference between the desired final output and the actual one.[17]Because a non-zero error is required to drive it, a proportional controller generally operates with a steady-state error.[a]Steady-state error (SSE) is proportional to the process gain and inversely proportional to proportional gain. SSE may be mitigated by adding a compensatingbias termto the setpoint AND output or corrected dynamically by adding an integral term.
The contribution from the integral term is proportional to both the magnitude of the error and the duration of the error. Theintegralin a PID controller is the sum of the instantaneous error over time and gives the accumulated offset that should have been corrected previously. The accumulated error is then multiplied by the integral gain (Ki) and added to the controller output.
The integral term is given by

{\displaystyle I_{\text{out}}=K_{\text{i}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau .}
The integral term accelerates the movement of the process towards setpoint and eliminates the residual steady-state error that occurs with a pure proportional controller. However, since the integral term responds to accumulated errors from the past, it can cause the present value toovershootthe setpoint value (seethe section on loop tuning).
The derivative of the process error is calculated by determining the slope of the error over time and multiplying this rate of change by the derivative gainKd. The magnitude of the contribution of the derivative term to the overall control action is termed the derivative gain,Kd.
The derivative term is given by

{\displaystyle D_{\text{out}}=K_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}}.}
Derivative action predicts system behavior and thus improves settling time and stability of the system.[18][19]An ideal derivative is notcausal, so that implementations of PID controllers include an additional low-pass filtering for the derivative term to limit the high-frequency gain and noise. Derivative action is seldom used in practice though – by one estimate in only 25% of deployed controllers[citation needed]– because of its variable impact on system stability in real-world applications.
Tuninga control loop is the adjustment of its control parameters (proportional band/gain, integral gain/reset, derivative gain/rate) to the optimum values for the desired control response. Stability (no unbounded oscillation) is a basic requirement, but beyond that, different systems have different behavior, different applications have different requirements, and requirements may conflict with one another.
Even though there are only three parameters and it is simple to describe in principle, PID tuning is a difficult problem because it must satisfy complex criteria within thelimitations of PID control. Accordingly, there are various methods for loop tuning, and more sophisticated techniques are the subject of patents; this section describes some traditional, manual methods for loop tuning.
Designing and tuning a PID controller appears to be conceptually intuitive, but can be hard in practice, if multiple (and often conflicting) objectives, such as short transient and high stability, are to be achieved. PID controllers often provide acceptable control using default tunings, but performance can generally be improved by careful tuning, and performance may be unacceptable with poor tuning. Usually, initial designs need to be adjusted repeatedly through computer simulations until the closed-loop system performs or compromises as desired.
Some processes have a degree ofnonlinearity, so parameters that work well at full-load conditions do not work when the process is starting up from no load. This can be corrected bygain scheduling(using different parameters in different operating regions).
If the PID controller parameters (the gains of the proportional, integral and derivative terms) are chosen incorrectly, the controlled process input can be unstable; i.e., its outputdiverges, with or withoutoscillation, and is limited only by saturation or mechanical breakage. Instability is caused byexcessgain, particularly in the presence of significant lag.
Generally, stabilization of response is required and the process must not oscillate for any combination of process conditions and setpoints, though sometimesmarginal stability(bounded oscillation) is acceptable or desired.[citation needed]
Mathematically, the origins of instability can be seen in theLaplace domain.[20]
The closed-loop transfer function is

{\displaystyle H(s)={\frac {K(s)G(s)}{1+K(s)G(s)}},}
where K(s) is the PID transfer function, and G(s) is the plant transfer function. A system is unstable where the closed-loop transfer function diverges for some s.[20] This happens in situations where K(s)G(s) = −1. In other words, this happens when |K(s)G(s)| = 1 with a 180° phase shift. Stability is guaranteed when K(s)G(s) < 1 for frequencies that suffer high phase shifts. A more general formalism of this effect is known as the Nyquist stability criterion.
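As a numerical illustration of this criterion, the sketch below evaluates the loop gain K(s)G(s) along s = jω for a PID controller and a hypothetical third-order plant G(s) = 1/(s + 1)^3; both the gains and the plant are arbitrary choices for demonstration.

```python
import numpy as np

Kp, Ki, Kd = 2.0, 1.0, 0.5
w = np.logspace(-2, 2, 10000)           # frequency grid, rad/s
s = 1j * w
L = (Kp + Ki / s + Kd * s) / (s + 1.0) ** 3   # loop gain K(s)G(s)

phase = np.unwrap(np.angle(L))
idx = np.argmin(np.abs(phase + np.pi))  # frequency where phase crosses -180 deg
print(f"gain at -180 deg crossing: {abs(L[idx]):.3f}")
# |L| < 1 at the -180 deg crossing indicates a stable closed loop
# (positive gain margin), in line with the criterion described above.
```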
The optimal behavior on a process change or setpoint change varies depending on the application.
Two basic requirements areregulation(disturbance rejection – staying at a given setpoint) andcommand tracking(implementing setpoint changes). These terms refer to how well the controlled variable tracks the desired value. Specific criteria for command tracking includerise timeandsettling time. Some processes must not allow an overshoot of the process variable beyond the setpoint if, for example, this would be unsafe. Other processes must minimize the energy expended in reaching a new setpoint.
There are several methods for tuning a PID loop. The most effective methods generally involve developing some form of process model and then choosing P, I, and D based on the dynamic model parameters. Manual tuning methods can be relatively time-consuming, particularly for systems with long loop times.
The choice of method depends largely on whether the loop can be taken offline for tuning, and on the response time of the system. If the system can be taken offline, the best tuning method often involves subjecting the system to a step change in input, measuring the output as a function of time, and using this response to determine the control parameters.[citation needed]
If the system must remain online, one tuning method is to first set K_i and K_d values to zero. Increase the K_p until the output of the loop oscillates; then set K_p to approximately half that value for a "quarter amplitude decay"-type response. Then increase K_i until any offset is corrected in sufficient time for the process, but not until too great a value causes instability. Finally, increase K_d, if required, until the loop is acceptably quick to reach its reference after a load disturbance. Too much K_p causes excessive response and overshoot. A fast PID loop tuning usually overshoots slightly to reach the setpoint more quickly; however, some systems cannot accept overshoot, in which case an overdamped closed-loop system is required, which in turn requires a K_p setting significantly less than half that of the K_p setting that was causing oscillation.[citation needed]
Another heuristic tuning method is known as the Ziegler–Nichols method, introduced by John G. Ziegler and Nathaniel B. Nichols in the 1940s. As in the method above, the K_i and K_d gains are first set to zero. The proportional gain is increased until it reaches the ultimate gain K_u, at which the output of the loop starts to oscillate constantly. K_u and the oscillation period T_u are used to set the gains: for the classic PID rule, K_p = 0.6 K_u, T_i = T_u/2 and T_d = T_u/8, which in parallel form correspond to K_i = 1.2 K_u/T_u and K_d = 0.075 K_u T_u.
The oscillation frequency is often measured instead of the period, and using the reciprocals in each multiplication yields the same result.
These gains apply to the ideal, parallel form of the PID controller. When applied to the standard PID form, only the integral and derivative time parameters T_i and T_d are dependent on the oscillation period T_u.
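In code, turning a measured ultimate gain and period into PID gains is a one-liner per term. The sketch below applies the classic Ziegler–Nichols coefficients quoted above; the example Ku and Tu values are invented.

```python
def ziegler_nichols_pid(Ku: float, Tu: float):
    """Classic Ziegler-Nichols PID tuning from ultimate gain Ku and period Tu."""
    Kp = 0.6 * Ku
    Ti = Tu / 2.0                 # integral time
    Td = Tu / 8.0                 # derivative time
    return Kp, Kp / Ti, Kp * Td   # parallel-form (Kp, Ki, Kd)

Kp, Ki, Kd = ziegler_nichols_pid(Ku=4.0, Tu=10.0)   # example measurements
print(Kp, Ki, Kd)   # 2.4 0.48 3.0
```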
This method, the Cohen–Coon method, was developed in 1953 and is based on a first-order-plus-time-delay model. Similar to the Ziegler–Nichols method, a set of tuning parameters was developed to yield a closed-loop response with a decay ratio of 1/4. Arguably the biggest problem with these parameters is that a small change in the process parameters could potentially cause a closed-loop system to become unstable.
Published in 1984 byKarl Johan Åströmand Tore Hägglund,[25]the relay method temporarily operates the process usingbang-bang controland measures the resultant oscillations. The output is switched (as if by arelay, hence the name) between two values of the control variable. The values must be chosen so the process will cross the setpoint, but they need not be 0% and 100%; by choosing suitable values, dangerous oscillations can be avoided.
As long as the process variable is below the setpoint, the control output is set to the higher value. As soon as it rises above the setpoint, the control output is set to the lower value. Ideally, the output waveform is nearly square, spending equal time above and below the setpoint. The period and amplitude of the resultant oscillations are measured, and used to compute the ultimate gain and period, which are then fed into the Ziegler–Nichols method.
Specifically, the ultimate period T_u is assumed to be equal to the observed period, and the ultimate gain is computed as {\displaystyle K_{u}=4b/(\pi a),} where a is the amplitude of the process variable oscillation, and b is the amplitude of the control output change which caused it.
There are numerous variants on the relay method.[26]
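The arithmetic of the basic relay experiment is small. A sketch, assuming a symmetric relay of amplitude b and a measured process-variable oscillation of amplitude a (the example values are invented):

```python
import math

def relay_ultimate_gain(a: float, b: float) -> float:
    """Describing-function estimate of the ultimate gain from a relay test.

    a -- amplitude of the resulting process-variable oscillation
    b -- amplitude of the relay output switching
    """
    return 4.0 * b / (math.pi * a)

Ku = relay_ultimate_gain(a=0.8, b=5.0)   # example measurements
# The observed oscillation period is taken as Tu; feed Ku and Tu into
# the Ziegler-Nichols rules above to obtain PID gains.
```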
The transfer function for a first-order process with dead time is

{\displaystyle y(s)={\frac {k_{\text{p}}e^{-\theta s}}{\tau _{\text{p}}s+1}}\,u(s),}

where k_p is the process gain, τ_p is the time constant, θ is the dead time, and u(s) is a step change input. Converting this transfer function to the time domain results in

{\displaystyle y(t)=k_{\text{p}}\,\Delta u\left(1-e^{-(t-\theta )/\tau _{\text{p}}}\right)\quad {\text{for }}t\geq \theta ,}
using the same parameters found above.
It is important when using this method to apply a large enough step-change input that the output can be measured; however, too large a step change can affect the process stability. Additionally, a larger step change helps ensure that the observed output change is due to the step rather than to a disturbance (for best results, try to minimize disturbances when performing the step test).
One way to determine the parameters for the first-order process is using the 63.2% method. In this method, the process gain (kp) is equal to the change in output divided by the change in input. The dead timeθis the amount of time between when the step change occurred and when the output first changed. The time constant (τp) is the amount of time it takes for the output to reach 63.2% of the new steady-state value after the step change. One downside to using this method is that it can take a while to reach a new steady-state value if the process has large time constants.[27]
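A sketch of the 63.2% identification applied to logged step-test data follows. The 2% movement threshold used to detect the dead time is an arbitrary choice, and the function assumes a monotonic response that has settled by the end of the record.

```python
import numpy as np

def fopdt_from_step(t, y, u_step, t_step):
    """Estimate first-order-plus-dead-time parameters from a step test.

    t, y   -- time stamps and measured output samples (numpy arrays)
    u_step -- size of the input step
    t_step -- time at which the step was applied
    """
    y0, yf = y[0], y[-1]                       # initial and final steady states
    kp = (yf - y0) / u_step                    # process gain
    # Dead time: first sample where the output has clearly started to move.
    moved = np.abs(y - y0) > 0.02 * abs(yf - y0)
    theta = t[np.argmax(moved)] - t_step
    # Time constant: time to reach 63.2% of the total change, minus dead time.
    y63 = y0 + 0.632 * (yf - y0)
    idx63 = np.argmax(y >= y63) if yf > y0 else np.argmax(y <= y63)
    tau = t[idx63] - t_step - theta
    return kp, tau, theta
```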
Most modern industrial facilities no longer tune loops using the manual calculation methods shown above. Instead, PID tuning and loop optimization software are used to ensure consistent results. These software packages gather data, develop process models, and suggest optimal tuning. Some software packages can even develop tuning by gathering data from reference changes.
Mathematical PID loop tuning induces an impulse in the system and then uses the controlled system's frequency response to design the PID loop values. In loops with response times of several minutes, mathematical loop tuning is recommended, because trial and error can take days just to find a stable set of loop values. Optimal values are harder to find. Some digital loop controllers offer a self-tuning feature in which very small setpoint changes are sent to the process, allowing the controller itself to calculate optimal tuning values.
Another approach calculates initial values via the Ziegler–Nichols method, and uses a numerical optimization technique to find better PID coefficients.[28]
Other formulas are available to tune the loop according to different performance criteria. Many patented formulas are now embedded within PID tuning software and hardware modules.[29]
Advances in automated PID loop tuning software also deliver algorithms for tuning PID loops in a dynamic or non-steady state (NSS) scenario. The software models the dynamics of a process through a disturbance and calculates PID control parameters in response.[30]
While PID controllers are applicable to many control problems, and often perform satisfactorily without any improvements or only coarse tuning, they can perform poorly in some applications and do not in general provideoptimalcontrol. The fundamental difficulty with PID control is that it is a feedback control system, withconstantparameters, and no direct knowledge of the process, and thus overall performance is reactive and a compromise. While PID control is the best controller for anobserverwithout a model of the process, better performance can be obtained by overtly modeling the actor of the process without resorting to an observer.
PID controllers, when used alone, can give poor performance when the PID loop gains must be reduced so that the control system does not overshoot, oscillate orhuntabout the control setpoint value. They also have difficulties in the presence of non-linearities, may trade-off regulation versus response time, do not react to changing process behavior (say, the process changes after it has warmed up), and have lag in responding to large disturbances.
The most significant improvement is to incorporatefeed-forward controlwith knowledge about the system, and using the PID only to control error. Alternatively, PIDs can be modified in more minor ways, such as by changing the parameters (either gain scheduling in different use cases or adaptively modifying them based on performance), improving measurement (higher sampling rate, precision, and accuracy, and low-pass filtering if necessary), or cascading multiple PID controllers.
PID controllers work best when the loop to be controlled is linear and symmetric. Thus, their performance in non-linear and asymmetric systems is degraded.
A non-linear valve, for instance, in a flow control application, will result in variable loop sensitivity, requiring dampened action to prevent instability. One solution is the use of the valve's non-linear characteristic in the control algorithm to compensate for this.
An asymmetric application, for example, is temperature control inHVAC systemsusing only active heating (via a heating element), where there is only passive cooling available. When it is desired to lower the controlled temperature the heating output is off, but there is no active cooling due to control output. Any overshoot of rising temperature can therefore only be corrected slowly; it cannot be forced downward by the control output. In this case the PID controller could be tuned to be over-damped, to prevent or reduce overshoot, but this reduces performance by increasing the settling time of a rising temperature to the set point. The inherent degradation of control quality in this application could be solved by application of active cooling.
A problem with the derivative term is that it amplifies higher frequency measurement or processnoisethat can cause large amounts of change in the output. It is often helpful to filter the measurements with alow-pass filterin order to remove higher-frequency noise components. As low-pass filtering and derivative control can cancel each other out, the amount of filtering is limited. Therefore, low noise instrumentation can be important. A nonlinearmedian filtermay be used, which improves the filtering efficiency and practical performance.[31]In some cases, the differential band can be turned off with little loss of control. This is equivalent to using the PID controller as aPI controller.
The basic PID algorithm presents some challenges in control applications that have been addressed by minor modifications to the PID form.
One common problem resulting from the ideal PID implementations is integral windup. Following a large change in setpoint the integral term can accumulate an error larger than the maximal value for the regulation variable (windup), thus the system overshoots and continues to increase until this accumulated error is unwound. This problem can be addressed by:
- disabling the integration until the PV has entered the controllable region,
- preventing the integral term from accumulating above or below pre-determined bounds, or
- back-calculating the integral term to constrain the regulator output within feasible bounds.
A sketch of the clamping approach follows the furnace example below.
For example, a PID loop is used to control the temperature of an electric resistance furnace where the system has stabilized. Now when the door is opened and something cold is put into the furnace the temperature drops below the setpoint. The integral function of the controller tends to compensate for error by introducing another error in the positive direction. This overshoot can be avoided by freezing of the integral function after the opening of the door for the time the control loop typically needs to reheat the furnace.
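One common implementation of these remedies is conditional integration: the integrator is frozen whenever the output is saturated and the error would push it further into saturation. A minimal sketch (the limits, gains, and state-passing convention are illustrative):

```python
def pid_step_antiwindup(err, state, Kp, Ki, Kd, dt, out_min=0.0, out_max=100.0):
    """One PID update with integral clamping (conditional integration)."""
    integral, prev_err = state
    deriv = (err - prev_err) / dt
    u_unsat = Kp * err + Ki * integral + Kd * deriv
    u = min(max(u_unsat, out_min), out_max)    # actuator saturation
    # Only integrate when not saturated, or when the error would drive
    # the output back inside its limits (prevents windup).
    if u == u_unsat or err * (u_unsat - u) < 0:
        integral += err * dt
    return u, (integral, err)
```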
A PI controller (proportional–integral controller) is a special case of the PID controller in which the derivative (D) of the error is not used.
The controller output is given by

{\displaystyle K_{\text{p}}\Delta +K_{\text{i}}\int \Delta \,\mathrm {d} t,}

where Δ is the error, or deviation of the actual measured value (PV) from the setpoint (SP): Δ = SP − PV.

A PI controller can be modelled easily in software such as Simulink or Xcos using a "flow chart" box involving Laplace operators:

{\displaystyle C={\frac {G(1+\tau s)}{\tau s}},}

where
- G = K_p is the proportional gain, and
- G/τ = K_i is the integral gain.
Setting a value forG{\displaystyle G}is often a trade off between decreasing overshoot and increasing settling time.
The lack of derivative action may make the system more steady in the steady state in the case of noisy data. This is because derivative action is more sensitive to higher-frequency terms in the inputs.
Without derivative action, a PI-controlled system is less responsive to real (non-noise) and relatively fast alterations in state and so the system will be slower to reach setpoint and slower to respond to perturbations than a well-tuned PID system may be.
Many PID loops control a mechanical device (for example, a valve). Mechanical maintenance can be a major cost and wear leads to control degradation in the form of eitherstictionorbacklashin the mechanical response to an input signal. The rate of mechanical wear is mainly a function of how often a device is activated to make a change. Where wear is a significant concern, the PID loop may have an outputdeadbandto reduce the frequency of activation of the output (valve). This is accomplished by modifying the controller to hold its output steady if the change would be small (within the defined deadband range). The calculated output must leave the deadband before the actual output will change.
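A deadband of this kind reduces to a small wrapper between the PID calculation and the actuator write; the threshold below is an arbitrary example value.

```python
def apply_deadband(new_output, held_output, deadband=0.5):
    """Hold the actuator command unless the change exceeds the deadband."""
    if abs(new_output - held_output) < deadband:
        return held_output   # change too small: don't exercise the valve
    return new_output        # large enough: pass the new command through
```

The held value is fed back in on the next cycle, so the calculated output must leave the deadband before the actual output changes, as described above.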
The proportional and derivative terms can produce excessive movement in the output when a system is subjected to an instantaneous step increase in the error, such as a large setpoint change. In the case of the derivative term, this is due to taking the derivative of the error, which is very large in the case of an instantaneous step change. As a result, some PID algorithms incorporate some of the following modifications:
- Setpoint ramping: the setpoint is moved gradually from its old value to the new one, avoiding the discontinuity of a simple step change.
- Derivative of the process variable: the controller takes the derivative of the measured process variable rather than of the error, so that a setpoint change produces no derivative spike (see the sketch in the derivative-on-measurement discussion below).
- Setpoint weighting: different multipliers are applied to the error in the proportional and derivative terms, while the integral term must be based on the true error to avoid steady-state control errors.
The control system performance can be improved by combining thefeedback(or closed-loop) control of a PID controller withfeed-forward(or open-loop) control. Knowledge about the system (such as the desired acceleration and inertia) can be fed forward and combined with the PID output to improve the overall system performance. The feed-forward value alone can often provide the major portion of the controller output. The PID controller primarily has to compensate for whatever difference orerrorremains between the setpoint (SP) and the system response to the open-loop control. Since the feed-forward output is not affected by the process feedback, it can never cause the control system to oscillate, thus improving the system response without affecting stability. Feed forward can be based on the setpoint and on extra measured disturbances. Setpoint weighting is a simple form of feed forward.
For example, in most motion control systems, in order to accelerate a mechanical load under control, more force is required from the actuator. If a velocity loop PID controller is being used to control the speed of the load and command the force being applied by the actuator, then it is beneficial to take the desired instantaneous acceleration, scale that value appropriately and add it to the output of the PID velocity loop controller. This means that whenever the load is being accelerated or decelerated, a proportional amount of force is commanded from the actuator regardless of the feedback value. The PID loop in this situation uses the feedback information to change the combined output to reduce the remaining difference between the process setpoint and the feedback value. Working together, the combined open-loop feed-forward controller and closed-loop PID controller can provide a more responsive control system.
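A sketch of that velocity-loop example: the feedback PI term corrects residual error while a feed-forward term, scaled by an assumed force-per-acceleration constant kff (e.g. the moving mass), supplies the bulk of the effort.

```python
def velocity_loop_output(v_sp, v_meas, a_desired, integral, Kp, Ki, dt, kff):
    """Velocity PI controller plus acceleration feed-forward.

    a_desired -- instantaneous acceleration demanded by the motion profile
    kff       -- force per unit acceleration (assumed plant parameter)
    """
    err = v_sp - v_meas
    integral += err * dt
    u_fb = Kp * err + Ki * integral   # feedback: corrects remaining error
    u_ff = kff * a_desired            # feed-forward: bulk of the effort
    return u_fb + u_ff, integral
```

Because u_ff depends only on the commanded profile and not on the feedback, it cannot by itself destabilize the loop, which is the point made above.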
PID controllers are often implemented with a "bumpless" initialization feature that recalculates the integral accumulator term to maintain a consistent process output through parameter changes.[33]A partial implementation is to store the integral gain times the error rather than storing the error and postmultiplying by the integral gain, which prevents discontinuous output when the I gain is changed, but not the P or D gains.
In addition to feed-forward, PID controllers are often enhanced through methods such as PIDgain scheduling(changing parameters in different operating conditions),fuzzy logic, or computational verb logic.[34][35]Further practical application issues can arise from instrumentation connected to the controller. A high enough sampling rate, measurement precision, and measurement accuracy are required to achieve adequate control performance. Another new method for improvement of PID controller is to increase the degree of freedom by usingfractional order. The order of the integrator and differentiator add increased flexibility to the controller.[36]
One distinctive advantage of PID controllers is that two PID controllers can be used together to yield better dynamic performance. This is called cascaded PID control. Two controllers are in cascade when they are arranged so that one regulates the set point of the other. A PID controller acts as the outer loop controller, which controls the primary physical parameter, such as fluid level or velocity. The other controller acts as the inner loop controller, which reads the output of the outer loop controller as its setpoint, usually controlling a more rapidly changing parameter such as flow rate or acceleration. It can be mathematically proven[citation needed] that the working frequency of the controller is increased and the time constant of the object is reduced by using cascaded PID controllers.[vague]
For example, a temperature-controlled circulating bath has two PID controllers in cascade, each with its ownthermocoupletemperature sensor. The outer controller controls the temperature of the water using a thermocouple located far from the heater, where it accurately reads the temperature of the bulk of the water. The error term of this PID controller is the difference between the desired bath temperature and measured temperature. Instead of controlling the heater directly, the outer PID controller sets a heater temperature goal for the inner PID controller. The inner PID controller controls the temperature of the heater using a thermocouple attached to the heater. The inner controller's error term is the difference between this heater temperature setpoint and the measured temperature of the heater. Its output controls the actual heater to stay near this setpoint.
The proportional, integral, and differential terms of the two controllers will be very different. The outer PID controller has a long time constant – all the water in the tank needs to heat up or cool down. The inner loop responds much more quickly. Each controller can be tuned to match the physics of the systemitcontrols – heat transfer and thermal mass of the whole tank or of just the heater – giving better total response.[37][38]
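A toy version of this cascade is sketched below. The two PI controllers, the clamping ranges, and the two-time-constant thermal model are all invented for illustration; the point is only that the outer controller's output becomes the inner controller's setpoint.

```python
def pi_step(sp, pv, state, Kp, Ki, dt):
    integral, = state
    err = sp - pv
    integral += err * dt
    return Kp * err + Ki * integral, (integral,)

bath_sp = 37.0
outer_state, inner_state = (0.0,), (0.0,)
bath_T, heater_T = 20.0, 20.0
dt = 1.0
for _ in range(3600):
    # Outer (slow) loop: bath temperature -> heater temperature setpoint,
    # expressed as an offset above ambient and clamped to a sane range.
    out, outer_state = pi_step(bath_sp, bath_T, outer_state, Kp=2.0, Ki=0.01, dt=dt)
    heater_sp = min(max(20.0 + out, 20.0), 90.0)
    # Inner (fast) loop: heater temperature -> heater power, clamped.
    power, inner_state = pi_step(heater_sp, heater_T, inner_state, Kp=50.0, Ki=1.0, dt=dt)
    power = min(max(power, 0.0), 2000.0)
    # Crude thermal plant: fast heater, slow bath (time constants assumed).
    heater_T += (power / 500.0 - (heater_T - bath_T) / 10.0) * dt
    bath_T += ((heater_T - bath_T) / 120.0 - (bath_T - 20.0) / 600.0) * dt

print(f"bath temperature after 1 h: {bath_T:.1f} (setpoint {bath_sp})")
```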
The form of the PID controller most often encountered in industry, and the one most relevant to tuning algorithms, is the standard form. In this form the K_p gain is applied to the I_out and D_out terms as well, yielding:

{\displaystyle u(t)=K_{\text{p}}\left(e(t)+{\frac {1}{T_{\text{i}}}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +T_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}}\right),}

where
- T_i is the integral time, and
- T_d is the derivative time.
In this standard form, the parameters have a clear physical meaning. In particular, the inner summation produces a new single error value which is compensated for future and past errors. The proportional error term is the current error. The derivative components term attempts to predict the error value atTd{\displaystyle T_{d}}seconds (or samples) in the future, assuming that the loop control remains unchanged. The integral component adjusts the error value to compensate for the sum of all past errors, with the intention of completely eliminating them inTi{\displaystyle T_{i}}seconds (or samples). The resulting compensated single error value is then scaled by the single gainKp{\displaystyle K_{p}}to compute the control variable.
In the parallel form, shown in the controller theory section,

{\displaystyle u(t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +K_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}},}

the gain parameters are related to the parameters of the standard form through K_i = K_p/T_i and K_d = K_p T_d. This parallel form, where the parameters are treated as simple gains, is the most general and flexible form. However, it is also the form where the parameters have the weakest relationship to physical behaviors, and it is generally reserved for theoretical treatment of the PID controller. The standard form, despite being slightly more complex mathematically, is more common in industry.
In many cases, the manipulated variable output by the PID controller is a dimensionless fraction between 0 and 100% of some maximum possible value, and the translation into real units (such as pumping rate or watts of heater power) is outside the PID controller. The process variable, however, is in dimensioned units such as temperature. It is common in this case to express the gain K_p not as "output per degree", but rather in the reciprocal form of a proportional band 100/K_p, which is "degrees per full output": the range over which the output changes from 0 to 1 (0% to 100%). Beyond this range, the output is saturated, full-off or full-on. The narrower this band, the higher the proportional gain.
In most commercial control systems, derivative action is based on process variable rather than error. That is, a change in the setpoint does not affect the derivative action. This is because the digitized version of the algorithm produces a large unwanted spike when the setpoint is changed. If the setpoint is constant then changes in the PV will be the same as changes in error. Therefore, this modification makes no difference to the way the controller responds to process disturbances.
Most commercial control systems offer theoptionof also basing the proportional action solely on the process variable. This means that only the integral action responds to changes in the setpoint. The modification to the algorithm does not affect the way the controller responds to process disturbances.
Basing proportional action on PV eliminates the instant and possibly very large change in output caused by a sudden change to the setpoint. Depending on the process and tuning this may be beneficial to the response to a setpoint step.
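A sketch of the derivative-on-measurement modification; note the sign, since with a constant setpoint de/dt = −d(PV)/dt (the state-passing convention is illustrative):

```python
def pid_step_d_on_pv(sp, pv, state, Kp, Ki, Kd, dt):
    """PID update with the derivative taken on the process variable.

    A setpoint step produces no derivative spike ("kick"),
    because d(PV)/dt is unaffected by changes in SP.
    """
    integral, prev_pv = state
    err = sp - pv
    integral += err * dt
    deriv = -(pv - prev_pv) / dt      # equals de/dt when SP is constant
    u = Kp * err + Ki * integral + Kd * deriv
    return u, (integral, pv)
```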
King[39]describes an effective chart-based method.
Sometimes it is useful to write the PID regulator in Laplace transform form:

{\displaystyle G(s)=K_{\text{p}}+{\frac {K_{\text{i}}}{s}}+K_{\text{d}}s={\frac {K_{\text{d}}s^{2}+K_{\text{p}}s+K_{\text{i}}}{s}}.}
Having the PID controller written in Laplace form and having the transfer function of the controlled system makes it easy to determine the closed-loop transfer function of the system.
Another representation of the PID controller is the series, or interacting form

{\displaystyle G(s)=K_{\text{c}}\,{\frac {\tau _{\text{i}}s+1}{\tau _{\text{i}}s}}\left(\tau _{\text{d}}s+1\right),}

where the parameters are related to the parameters of the standard form through

{\displaystyle K_{\text{p}}=K_{\text{c}}\alpha ,\quad T_{\text{i}}=\tau _{\text{i}}\alpha ,\quad T_{\text{d}}={\frac {\tau _{\text{d}}}{\alpha }},}

with

{\displaystyle \alpha =1+{\frac {\tau _{\text{d}}}{\tau _{\text{i}}}}.}
This form essentially consists of a PD and PI controller in series. As the integral is required to calculate the controller's bias, this form provides the ability to track an external bias value, which is required for proper implementation of multi-controller advanced control schemes.
The analysis for designing a digital implementation of a PID controller in a microcontroller (MCU) or FPGA device requires the standard form of the PID controller to be discretized.[40] Approximations for first-order derivatives are made by backward finite differences. u(t) and e(t) are discretized with a sampling period Δt, with k as the sample index.
Differentiating both sides of PID equation usingNewton's notationgives:
{\displaystyle {\dot {u}}(t)=K_{\text{p}}{\dot {e}}(t)+K_{\text{i}}e(t)+K_{\text{d}}{\ddot {e}}(t)}
Derivative terms are approximated as,

{\displaystyle {\dot {u}}(t_{k})\approx {\frac {u(t_{k})-u(t_{k-1})}{\Delta t}},\qquad {\dot {e}}(t_{k})\approx {\frac {e(t_{k})-e(t_{k-1})}{\Delta t}}.}

So,

{\displaystyle {\frac {u(t_{k})-u(t_{k-1})}{\Delta t}}=K_{\text{p}}{\frac {e(t_{k})-e(t_{k-1})}{\Delta t}}+K_{\text{i}}e(t_{k})+K_{\text{d}}{\ddot {e}}(t_{k}).}

Applying backward difference again gives,

{\displaystyle {\ddot {e}}(t_{k})\approx {\frac {{\dot {e}}(t_{k})-{\dot {e}}(t_{k-1})}{\Delta t}}={\frac {e(t_{k})-2e(t_{k-1})+e(t_{k-2})}{\Delta t^{2}}}.}

By simplifying and regrouping terms of the above equation, an algorithm for an implementation of the discretized PID controller in a MCU is finally obtained:

{\displaystyle u(t_{k})=u(t_{k-1})+K_{\text{p}}\left[e(t_{k})-e(t_{k-1})\right]+K_{\text{i}}\,\Delta t\,e(t_{k})+{\frac {K_{\text{d}}}{\Delta t}}\left[e(t_{k})-2e(t_{k-1})+e(t_{k-2})\right]}

or:

{\displaystyle u(t_{k})=u(t_{k-1})+K_{\text{p}}\left[\left(1+{\frac {\Delta t}{T_{\text{i}}}}+{\frac {T_{\text{d}}}{\Delta t}}\right)e(t_{k})-\left(1+{\frac {2T_{\text{d}}}{\Delta t}}\right)e(t_{k-1})+{\frac {T_{\text{d}}}{\Delta t}}e(t_{k-2})\right]}
s.t. {\displaystyle T_{\text{i}}=K_{\text{p}}/K_{\text{i}},\quad T_{\text{d}}=K_{\text{d}}/K_{\text{p}}.}
Note: This method in fact solves {\displaystyle u(t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +K_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}}+u_{0}}, where u_0 is a constant independent of t. This constant is useful when a start and stop control is wanted on the regulation loop. For instance, setting K_p, K_i and K_d to 0 will keep u(t) constant. Likewise, when a regulation is started on a system where the error is already close to 0 with u(t) non-null, it prevents the output from being sent to 0.
Here is a very simple and explicit group of pseudocode that can be easily understood by the layman:[citation needed]
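(A minimal Python rendering of such a loop is sketched below; read_pv, write_mv, and wait are hypothetical placeholders for the target system's sensor input, actuator output, and timing facilities, and the gains are illustrative.)

```python
import time

def wait(seconds):   # simple pacing; see the caveat about wait(dt) below
    time.sleep(seconds)

def read_pv():       # placeholder for the real sensor (A-to-D) input
    return 0.0

def write_mv(u):     # placeholder for the real actuator (D-to-A) output
    pass

Kp, Ki, Kd = 1.0, 0.1, 0.05   # illustrative gains, not tuned values
dt = 0.1                       # loop period, seconds
setpoint = 1.0

previous_error = 0.0
integral = 0.0
while True:
    error = setpoint - read_pv()                  # SP - PV
    integral += error * dt                        # I: accumulated past error
    derivative = (error - previous_error) / dt    # D: rate of change of error
    write_mv(Kp * error + Ki * integral + Kd * derivative)
    previous_error = error
    wait(dt)
```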
Below, a sketch illustrates how to implement a PID by considering it as an IIR filter:
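(Again a hedged sketch rather than a canonical listing; the A0, A1, A2 coefficients follow from Kp, Ki, Kd and the sampling period exactly as in the Z-transform derivation that follows.)

```python
def make_iir_pid(Kp, Ki, Kd, dt, u0=0.0):
    """Incremental (velocity-form) PID: an IIR filter on the error sequence."""
    A0 = Kp + Ki * dt + Kd / dt
    A1 = -Kp - 2.0 * Kd / dt
    A2 = Kd / dt
    state = {"u": u0, "e1": 0.0, "e2": 0.0}

    def step(error):
        # u[k] = u[k-1] + A0*e[k] + A1*e[k-1] + A2*e[k-2]
        state["u"] += A0 * error + A1 * state["e1"] + A2 * state["e2"]
        state["e2"], state["e1"] = state["e1"], error
        return state["u"]

    return step

pid = make_iir_pid(Kp=1.0, Ki=0.1, Kd=0.05, dt=0.1)
u = pid(1.0)   # one call per sample with the current error
```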
The Z-transform of a PID can be written as (Δt is the sampling time):

{\displaystyle C(z)=K_{\text{p}}+{\frac {K_{\text{i}}\,\Delta t}{1-z^{-1}}}+K_{\text{d}}{\frac {1-z^{-1}}{\Delta t}}}

and expressed in an IIR form (in agreement with the discrete implementation shown above):

{\displaystyle C(z)={\frac {A_{0}+A_{1}z^{-1}+A_{2}z^{-2}}{1-z^{-1}}},\qquad A_{0}=K_{\text{p}}+K_{\text{i}}\Delta t+{\frac {K_{\text{d}}}{\Delta t}},\quad A_{1}=-K_{\text{p}}-{\frac {2K_{\text{d}}}{\Delta t}},\quad A_{2}={\frac {K_{\text{d}}}{\Delta t}}.}

We can then deduce the recursive iteration often found in FPGA implementation[41]

{\displaystyle u(t_{k})=u(t_{k-1})+A_{0}e(t_{k})+A_{1}e(t_{k-1})+A_{2}e(t_{k-2}).}
Here, K_p is a dimensionless number, K_i is expressed in s^−1 and K_d is expressed in s. When doing a regulation where the actuator and the measured value are not in the same unit (e.g. temperature regulation using a motor controlling a valve), K_p, K_i and K_d may be corrected by a unit conversion factor. It may also be interesting to use K_i in its reciprocal form (integration time). The above implementation also allows an I-only controller to be realized, which may be useful in some cases.
In the real world, the output is D-to-A converted and passed into the process under control as the manipulated variable (MV). The current error is stored elsewhere for re-use in the next differentiation; the program then waits until dt seconds have passed since start, and the loop begins again, reading in new values for the PV and the setpoint and calculating a new value for the error.[42]
Note that for real code, the use of "wait(dt)" might be inappropriate because it doesn't account for time taken by the algorithm itself during the loop, or more importantly, any pre-emption delaying the algorithm.
A common issue when using K_d is the response of the derivative term to a rising or falling edge of the setpoint: an instantaneous step in the setpoint produces a very large momentary spike in the derivative of the error (a "derivative kick").
A typical workaround is to filter the derivative action using a low-pass filter of time constant τ_d/N, where 3 ≤ N ≤ 10:

{\displaystyle D(s)={\frac {K_{\text{d}}\,s}{1+s\,\tau _{\text{d}}/N}}.}
A variant of the above algorithm uses an infinite impulse response (IIR) filter for the derivative:
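(One possible rendering, assuming τ_d = K_d/K_p per the standard-form relation and taking N = 8, inside the 3–10 range given above.)

```python
def make_pid_filtered_d(Kp, Ki, Kd, dt, N=8.0):
    """PID with a first-order IIR low-pass filter on the derivative term."""
    tau_f = (Kd / Kp) / N if Kp else 0.0   # filter time constant tau_d / N
    alpha = dt / (tau_f + dt)              # IIR smoothing factor in (0, 1]
    state = {"integral": 0.0, "prev_err": 0.0, "d_filt": 0.0}

    def step(error):
        state["integral"] += error * dt
        d_raw = (error - state["prev_err"]) / dt
        # Exponential smoothing: attenuates noise and setpoint-edge spikes.
        state["d_filt"] += alpha * (d_raw - state["d_filt"])
        state["prev_err"] = error
        return Kp * error + Ki * state["integral"] + Kd * state["d_filt"]

    return step
```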
|
https://en.wikipedia.org/wiki/PID_controller
|
Industrial process control(IPC) or simplyprocess controlis a system used in modernmanufacturingwhich uses the principles ofcontrol theoryand physicalindustrial control systemsto monitor, control and optimize continuousindustrial production processesusing control algorithms. This ensures that the industrialmachinesrun smoothly and safely infactoriesand efficiently useenergyto transformraw materialsinto high-qualityfinished productswith reliableconsistencywhile reducingenergy wasteand economiccosts, something which could not be achieved purely by human manual control.[1]
In IPC, control theory provides the theoretical framework to understand system dynamics, predict outcomes and design control strategies to ensure predetermined objectives, utilizing concepts like feedback loops, stability analysis and controller design. On the other hand, the physical apparatus of IPC, based on automation technologies, consists of several components. Firstly, a network of sensors continuously measures various process variables (such as temperature, pressure, etc.) and product quality variables. A programmable logic controller (PLC, for smaller, less complex processes) or a distributed control system (DCS, for large-scale or geographically dispersed processes) analyzes this sensor data, compares it to predefined setpoints using a set of instructions or a mathematical model called the control algorithm and then, in case of any deviation from these setpoints (e.g., temperature exceeding setpoint), makes quick corrective adjustments through actuators such as valves (e.g. a cooling valve for temperature control), motors or heaters to guide the process back to the desired operational range. This creates a continuous closed-loop cycle of measurement, comparison, control action, and re-evaluation which guarantees that the process remains within established parameters. The HMI (Human-Machine Interface) acts as the "control panel" for the IPC system, where a small number of human operators can monitor the process and make informed decisions regarding adjustments.[1] IPCs can range from controlling the temperature and level of a single process vessel (a controlled-environment tank for mixing, separating, reacting, or storing materials in industrial processes) to a complete chemical processing plant with several thousand control feedback loops.
IPC provides several critical benefits to manufacturing companies. By maintaining a tight control over key process variables, it helps reduce energy use, minimize waste and shorten downtime for peak efficiency and reduced costs. It ensures consistent and improved product quality with little variability, which satisfies the customers and strengthens the company's reputation. It improves safety by detecting and alerting human operators about potential issues early, thus preventing accidents, equipment failures, process disruptions and costly downtime. Analyzing trends and behaviors in the vast amounts of data collected real-time helps engineers identify areas of improvement, refine control strategies and continuously enhance production efficiency using a data-driven approach.[1]
IPC is used across a wide range of industries where precise control is important.[2] The applications can range from controlling the temperature and level of a single process vessel, to a complete chemical processing plant with several thousand control loops. In automotive manufacturing, IPC ensures consistent quality by meticulously controlling processes like welding and painting. Mining operations are optimized with IPC monitoring ore crushing and adjusting conveyor belt speeds for maximum output. Dredging benefits from precise control of suction pressure, dredging depth and sediment discharge rate by IPC, ensuring efficient and sustainable practices. Pulp and paper production leverages IPC to regulate chemical processes (e.g., pH and bleach concentration) and automate paper machine operations to control paper sheet moisture content and drying temperature for consistent quality. In chemical plants, it ensures the safe and efficient production of chemicals by controlling temperature, pressure and reaction rates. Oil refineries use it to smoothly convert crude oil into gasoline and other petroleum products. In power plants, it helps maintain stable operating conditions necessary for a continuous electricity supply. In food and beverage production, it helps ensure consistent texture, safety and quality. Pharmaceutical companies rely on it to produce life-saving drugs safely and effectively. The development of large industrial process control systems has been instrumental in enabling the design of large high volume and complex processes, which could not be otherwise economically or safely operated.[3]
Historical milestones in the development of industrial process control began in ancient civilizations, where water level control devices were used to regulate water flow for irrigation and water clocks. During the Industrial Revolution in the 18th century, there was a growing need for precise control over boiler pressure in steam engines. In the 1930s, pneumatic and electronic controllers, such as PID (Proportional-Integral-Derivative) controllers, were breakthrough innovations that laid the groundwork for modern control theory. The late 20th century saw the rise of programmable logic controllers (PLCs) and distributed control systems (DCS), while the advent of microprocessors further revolutionized IPC by enabling more complex control algorithms.
Early process control breakthroughs came most frequently in the form of water control devices. Ktesibios of Alexandria is credited with inventing float valves to regulate the water level of water clocks in the 3rd century BC. In the 1st century AD, Heron of Alexandria invented a water valve similar to the fill valve used in modern toilets.[4]

Later process control inventions built on basic physical principles. In 1620, Cornelis Drebbel invented a bimetallic thermostat for controlling the temperature in a furnace. In 1681, Denis Papin discovered that the pressure inside a vessel could be regulated by placing weights on top of the vessel lid.[4] In 1745, Edmund Lee created the fantail to improve windmill efficiency; a fantail was a smaller windmill placed at 90° to the larger sails to keep the face of the windmill pointed directly into the oncoming wind.

With the dawn of the Industrial Revolution in the 1760s, process control inventions aimed to replace human operators with mechanized processes. In 1784, Oliver Evans created a water-powered flour mill which operated using buckets and screw conveyors. Henry Ford applied the same principle in 1910 when he created the assembly line to decrease human intervention in the automobile production process.[4]

For continuously variable process control, it was not until 1922 that a formal control law for what we now call PID control or three-term control was first developed using theoretical analysis, by Russian American engineer Nicolas Minorsky.[5] Minorsky was researching and designing automatic ship steering for the US Navy and based his analysis on observations of a helmsman. He noted that the helmsman steered the ship based not only on the current course error, but also on past error, as well as the current rate of change;[6] this was then given a mathematical treatment by Minorsky.[7] His goal was stability, not general control, which simplified the problem significantly. While proportional control provided stability against small disturbances, it was insufficient for dealing with a steady disturbance, notably a stiff gale (due to steady-state error), which required adding the integral term. Finally, the derivative term was added to improve stability and control.
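The three terms Minorsky identified map directly onto a discrete PID implementation. The sketch below is a generic textbook form, not Minorsky's own formulation; the gains and the crude first-order "ship" model are invented for illustration:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # integral term removes steady-state error
        derivative = (error - self.prev_error) / self.dt  # derivative term damps oscillation
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# P-only control leaves a steady-state error under a constant disturbance
# (Minorsky's "stiff gale"); the integral term accumulates until it cancels it.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.1)
heading, disturbance = 0.0, -0.5
for _ in range(200):
    u = pid.update(setpoint=10.0, measurement=heading)
    heading += (u + disturbance) * 0.1   # crude first-order ship response
print(round(heading, 2))                 # approaches 10.0 despite the disturbance
```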
Process control of large industrial plants has evolved through many stages. Initially, control would be from panels local to the process plant. However this required a large manpower resource to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently-staffed central control room. Effectively this was the centralization of all the localized panels, with the advantages of lower manning levels and easier overview of the process. Often the controllers were behind the control room panels, and all automatic and manual control outputs were transmitted back to plant. However, whilst providing a central control focus, this arrangement was inflexible as each control loop had its own controller hardware, and continual operator movement within the control room was required to view different parts of the process.
With the coming of electronic processors and graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors.[8]These could be distributed around the plant, and communicate with the graphic display in the control room or rooms. The distributed control system (DCS) was born.
The introduction of DCSs allowed easy interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high level overviews of plant status and production levels.
The accompanying diagram is a general model which shows functional manufacturing levels in a large process using processor and computer-based control.
Referring to the diagram: Level 0 contains the field devices such as flow and temperature sensors (process value readings – PV) and final control elements (FCE), such as control valves; Level 1 contains the industrialized input/output (I/O) modules and their associated distributed electronic processors; Level 2 contains the supervisory computers, which collate information from processor nodes on the system and provide the operator control screens; Level 3 is the production control level, which does not directly control the process, but is concerned with monitoring production and monitoring targets; Level 4 is the production scheduling level.
To determine the fundamental model for any process, the inputs and outputs of the system are defined differently than for other chemical processes.[9] The balance equations are defined by the control inputs and outputs rather than the material inputs. The control model is a set of equations used to predict the behavior of a system and can help determine what the response to a change will be. The state variable (x) is a measurable variable that is a good indicator of the state of the system, such as temperature (energy balance), volume (mass balance) or concentration (component balance). The input variable (u) is a specified variable that commonly includes flow rates.

The entering and exiting flows are both considered control inputs. A control input can be classified as a manipulated, disturbance, or unmonitored variable. Parameters (p) are usually physical limitations, fixed for the system, such as the vessel volume or the viscosity of the material. The output (y) is the metric used to determine the behavior of the system. A control output can be classified as measured, unmeasured, or unmonitored.
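As an illustration of this (x, u, p, y) classification, the following sketch models a liquid tank: the state x is the volume from a mass balance, the inputs u are the inlet flow (manipulated) and outlet flow (disturbance), the parameter p is the fixed cross-sectional area, and the output y is the measured level. All names and numbers are illustrative assumptions:

```python
def tank_model(V, F_in, F_out, area, dt):
    """One Euler step of the mass balance dV/dt = F_in - F_out."""
    V_next = V + (F_in - F_out) * dt   # state x: volume from the balance equation
    level = V_next / area              # output y: measured level; p: fixed area
    return V_next, level

V, level = 2.0, None
for _ in range(10):
    V, level = tank_model(V, F_in=0.3, F_out=0.2, area=1.5, dt=1.0)
print(round(level, 3))                 # level rises while inflow exceeds outflow
```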
Processes can be characterized as batch, continuous, or hybrid.[10] Batch applications require that specific quantities of raw materials be combined in specific ways for a particular duration to produce an intermediate or end result. One example is the production of adhesives and glues, which normally requires the mixing of raw materials in a heated vessel for a period of time to form a quantity of end product. Other important examples are the production of food, beverages and medicine. Batch processes are generally used to produce a relatively low to intermediate quantity of product per year (a few pounds to millions of pounds).
A continuous physical system is represented through variables that are smooth and uninterrupted in time. The control of the water temperature in a heating jacket is one example of continuous process control. Some important continuous processes are the production of fuels, chemicals and plastics. Continuous processes in manufacturing are used to produce very large quantities of product per year (millions to billions of pounds). Such controls use feedback, as in the PID controller; a PID controller includes proportional, integral, and derivative controller functions.
Applications having elements of batch and continuous process control are often called hybrid applications.
The fundamental building block of any industrial control system is the control loop, which controls just one process variable. An example is shown in the accompanying diagram, where the flow rate in a pipe is controlled by a PID controller, assisted by what is effectively a cascaded loop in the form of a valve servo-controller to ensure correct valve positioning.

Some large systems may have several hundred or thousands of control loops. In complex processes the loops are interactive, so that the operation of one loop may affect the operation of another. The system diagram for representing control loops is a piping and instrumentation diagram.

Commonly used control systems include the programmable logic controller (PLC), distributed control system (DCS) and SCADA.
A further example is shown. If a control valve were used to hold level in a tank, the level controller would compare the equivalent reading of a level sensor to the level setpoint and determine whether more or less valve opening was necessary to keep the level constant. A cascaded flow controller could then calculate the change in the valve position.
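A rough sketch of that cascaded arrangement, using simple proportional terms only for brevity (the gains and limits are invented):

```python
# Outer loop: deviation in level becomes a flow-rate setpoint for the inner loop.
def level_controller(level_sp, level_pv, kp=0.8):
    return max(0.0, kp * (level_sp - level_pv))   # desired inlet flow

# Inner loop: adjust the valve position to track the flow setpoint.
def flow_controller(flow_sp, flow_pv, valve_pos, kp=0.5):
    return min(1.0, max(0.0, valve_pos + kp * (flow_sp - flow_pv)))

flow_sp = level_controller(level_sp=5.0, level_pv=4.2)
valve = flow_controller(flow_sp, flow_pv=0.3, valve_pos=0.4)
print(flow_sp, valve)   # flow setpoint from the level error, then valve move
```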
The economic nature of many products manufactured in batch and continuous processes requires highly efficient operation due to thin margins. The competing factor in process control is that products must meet certain specifications in order to be satisfactory. These specifications can come in two forms: a minimum and maximum for a property of the material or product, or a range within which the property must lie.[11] All loops are susceptible to disturbances, and therefore a buffer must be used on process setpoints to ensure disturbances do not push the material or product out of specification. This buffer comes at an economic cost (i.e., additional processing, maintaining elevated or depressed process conditions, etc.).
Process efficiency can be enhanced by reducing the margins necessary to ensure product specifications are met.[11] This can be done by improving the control of the process to minimize the effect of disturbances on it. Efficiency is improved in a two-step method of narrowing the variance and shifting the target.[11] Margins can be narrowed through various process upgrades (i.e., equipment upgrades, enhanced control methods, etc.). Once margins are narrowed, an economic analysis can be done on the process to determine how the setpoint target should be shifted. Less conservative process setpoints lead to increased economic efficiency.[11] Effective process control strategies increase the competitive advantage of the manufacturers who employ them.
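A toy calculation of the two-step idea, with invented numbers: if the setpoint must sit three standard deviations inside an upper specification limit, then halving the process variability allows the target to be shifted closer to that limit:

```python
upper_spec = 100.0
for sigma in (4.0, 2.0):                 # before and after control upgrades
    target = upper_spec - 3 * sigma      # keep roughly 99.9% of product in spec
    print(f"sigma={sigma}: setpoint target = {target}")
# sigma=4.0 -> 88.0; sigma=2.0 -> 94.0: a less conservative, more
# economically efficient operating point once the variance is narrowed.
```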
|
https://en.wikipedia.org/wiki/Process_control
|
Process optimization is the discipline of adjusting a process so as to make the best or most effective use of some specified set of parameters without violating some constraint. Common goals are minimizing cost and maximizing throughput and/or efficiency. Process optimization is one of the major quantitative tools in industrial decision making.

When optimizing a process, the goal is to maximize one or more of the process specifications while keeping all others within their constraints. This can be done by using a process mining tool, discovering the critical activities and bottlenecks, and acting only on them.
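As a hedged illustration of maximizing one specification subject to constraints, the following sketch uses scipy with a made-up throughput model; the objective, bounds, and constraint are assumptions, not from the text:

```python
# Toy constrained optimization: maximize throughput (minimize its negative)
# over two process parameters subject to a capacity constraint.
from scipy.optimize import minimize

def neg_throughput(x):
    temp, rate = x
    return -(10 * rate - 0.1 * (temp - 350) ** 2)   # invented throughput model

constraints = [{"type": "ineq", "fun": lambda x: 400 - x[1]}]  # rate <= 400
bounds = [(300, 400), (0, 500)]                                # operating limits

result = minimize(neg_throughput, x0=[340, 100], bounds=bounds,
                  constraints=constraints)
print(result.x)   # optimal temperature and feed rate under the constraints
```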
Fundamentally, there are three parameters that can be adjusted to affect optimal performance: equipment, operating procedures, and control loops.
The first step is to verify that the existing equipment is being used to its fullest advantage by examining operating data to identify equipment bottlenecks.
Operating procedures may vary widely from person-to-person or from shift-to-shift. Automation of the plant can help significantly. But automation will be of no help if the operators take control and run the plant manually.
In a typical processing plant, such as a chemical plant or oil refinery, there are hundreds or even thousands of control loops. Each control loop is responsible for controlling one part of the process, such as maintaining a temperature, level, or flow.

If the control loop is not properly designed and tuned, the process runs below its optimum. The process will be more expensive to operate, and equipment will wear out prematurely. For each control loop to run optimally, identification of sensor, valve, and tuning problems is important. It has been well documented that over 35% of control loops typically have problems.
The process of continuously monitoring and optimizing the entire plant is sometimes called performance supervision.
|
https://en.wikipedia.org/wiki/Process_optimization
|
In systems science, a sampled-data system is a control system in which a continuous-time plant is controlled with a digital device. Under periodic sampling, the sampled-data system is time-varying but also periodic; thus, it may be modeled by a simplified discrete-time system obtained by discretizing the plant. However, this discrete model does not capture the inter-sample behavior of the real system, which may be critical in a number of applications.
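A small sketch of what discretization keeps and loses: an assumed first-order plant dx/dt = −x + u is discretized exactly under zero-order hold with period T, giving x[k+1] = e^(−T)·x[k] + (1 − e^(−T))·u[k]. The discrete model is exact at the sampling instants but says nothing about the trajectory between them:

```python
import math

T = 0.5                       # sampling period
a = math.exp(-T)              # exact ZOH discretization of dx/dt = -x + u
b = 1 - math.exp(-T)

x = 1.0
samples = []
for k in range(5):
    samples.append(x)
    x = a * x + b * 0.0       # zero input; the state decays sample to sample
print(samples)                # values only at t = 0, T, 2T, ...; the
                              # continuous inter-sample trajectory is unseen
```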
The analysis of sampled-data systems incorporating full-time information leads to challenging control problems with a rich mathematical structure. Many of these problems have only been solved recently.
|
https://en.wikipedia.org/wiki/Sampled_data_system
|
VisSim is a visual block diagram program for the simulation of dynamical systems and model-based design of embedded systems, with its own visual language. It is developed by Visual Solutions of Westford, Massachusetts. Visual Solutions was acquired by Altair in August 2014 and its products have been rebranded as Altair Embed as a part of Altair's Model Based Development Suite. With Embed, virtual prototypes of dynamic systems can be developed. Models are built by sliding blocks into the work area and wiring them together with the mouse. Embed automatically converts the control diagrams into C-code ready to be downloaded to the target hardware.

VisSim (now Altair Embed) uses a graphical data flow paradigm to implement dynamic systems based on differential equations. Version 8 adds interactive UML OMG-2 compliant state chart graphs that are placed in VisSim diagrams, which allows the modelling of state-based systems such as the startup sequencing of process plants or serial protocol decoding.

VisSim/Altair Embed is used in control system design and digital signal processing for multi-domain simulation and design.[1] It includes blocks for arithmetic, Boolean, and transcendental functions, as well as digital filters, transfer functions, numerical integration and interactive plotting.[2] The most commonly modelled systems are aeronautical, biological/medical, digital power, electric motor, electrical, hydraulic, mechanical, process, thermal/HVAC and econometric.[1]
A read-only version of the software, VisSim Viewer, is available free of charge and provides a way for people who do not own a VisSim license to run VisSim models.[3] This program is intended to allow models to be shared more widely while preserving each model in its published form.[3] The viewer can execute any VisSim model; only changes to block and simulation parameters, to illustrate different design scenarios, are allowed. Sliders and buttons may be activated if included in the model.
The "VisSim/C-Code" add-on generatesANSI Ccode for the model, and generates target specific code for on-chip devices like PWM, ADC, encoder, GPIO, I2C etc. This is useful for development ofembedded systems. After the behaviour of the controller has been simulated, C-code can be generated, compiled and run on the target. For debugging, VisSim supports an interactive JTAG linkage, called "Hotlink", that allows interactive gain change and plotting of on-target variables. The VisSim generated code has been called efficient and readable, making it well suited for development of embedded systems.[4]VisSim's author served on the X3J11 ANSI C committee and wrote several C compilers, in addition to co-authoring a book on C.[5]This deep understanding of ANSI C, and the nature of the resultingmachine codewhen compiled, is the key to the code generator's efficiency. VisSim can target small16-bitfixed pointsystems like theTexas InstrumentsMSP430, using only 740 bytes flash and 64 bytes of RAM for a small closed-loopPulse-width modulation(PWM) actuated system, as well as allowing very high control sample rates over 500 kHz on larger32-bitfloating-point processorslike theTexas Instruments150 MHz F28335.
The technique of simulating system performance off-line, and then generating code from the simulation is known as "model-based development". Model-based development forembedded systemsis becoming widely adopted for production systems because it shortens development cycles for hardware development in the same way thatModel-driven architectureshortens production cycles for software development.[6]
Model building is a visual way of describing a situation. In an engineering context, instead of writing and solving a system of equations, model building involves using visual "blocks" to solve the problem. The advantage of using models is that in some cases problems which appear difficult when expressed mathematically may be easier to understand when represented pictorially.

VisSim uses hierarchical composition to create nested block diagrams. A typical model would consist of "virtual plants" composed of various VisSim "layers", combined if necessary with custom blocks written in C or FORTRAN. A virtual controller can be added and tuned to give the desired overall system response. Graphical control elements such as sliders and buttons allow what-if analysis for operator training or controller tuning.
Although VisSim was originally designed for use by control engineers, it can be used for any type of mathematical model.

Screenshots show the simulation of a sine function in VisSim. Noise is added to the model, then filtered out using a Butterworth filter. The signal traces of the sine function with noise and the filtered noise are first shown together, and then shown in separate windows in the plot block.
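A rough Python equivalent of the model those screenshots describe, with scipy standing in for the VisSim blocks (the frequencies, noise level, and filter order are assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                    # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t)             # 5 Hz sine block
noisy = signal + 0.5 * np.random.randn(t.size) # noise block added to the sine

b, a = butter(N=4, Wn=20, btype="low", fs=fs)  # 4th-order Butterworth, 20 Hz cutoff
filtered = filtfilt(b, a, noisy)               # zero-phase filtering

print(np.max(np.abs(filtered - signal)))       # residual error after filtering
```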
|
https://en.wikipedia.org/wiki/VisSim
|
This is a comprehensive list of volunteer computing projects, a type of distributed computing where volunteers donate computing time to specific causes. The donated computing power comes from idle CPUs and GPUs in personal computers, video game consoles,[1] and Android devices.

Each project seeks to utilize the computing power of many internet-connected devices to solve problems and perform tedious, repetitive research in a very cost-effective manner.
|
https://en.wikipedia.org/wiki/List_of_distributed_computing_projects
|
The Scalable Weakly Consistent Infection-style Process Group Membership (SWIM) Protocol is a group membership protocol based on "outsourced heartbeats"[1] used in distributed systems, first introduced by Abhinandan Das, Indranil Gupta and Ashish Motivala in 2002.[2][3] It is a hybrid algorithm which combines failure detection with group membership dissemination.

The protocol has two components, the Failure Detector Component and the Dissemination Component.
The Failure Detector Component functions as follows: in each protocol period, every member picks another member at random and sends it a ping; if no acknowledgement arrives within a timeout, it asks k other randomly chosen members to probe the target indirectly on its behalf; if no acknowledgement is received either directly or indirectly by the end of the period, the target is declared failed.

The Dissemination Component functions as follows: membership updates (joins, voluntary leaves and detected failures) are piggybacked on the ping and acknowledgement messages of the failure detector, so that they spread through the group epidemically, in infection style, without dedicated broadcast messages.

The protocol provides the following guarantees: every faulty member is eventually detected by every non-faulty member (strong completeness), while the expected detection time and the per-member message load remain roughly constant as the group grows.

The original SWIM paper lists the following extensions to make the protocol more robust:[2] a suspicion mechanism, which marks unresponsive members as suspected before declaring them failed in order to reduce false positives, and round-robin selection of probe targets, which bounds the worst-case time to detect a failure.
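A minimal sketch of the probe cycle described above, with the network calls stubbed out as plain functions; ping, ping_req, and k are assumptions standing in for the protocol's messages and parameter:

```python
import random

def probe(target, members, ping, ping_req, k=3):
    """Return True if `target` appears alive during this protocol period."""
    if ping(target):                       # direct probe
        return True
    candidates = [m for m in members if m != target]
    helpers = random.sample(candidates, min(k, len(candidates)))
    # ask k other members to probe the target indirectly on our behalf
    return any(ping_req(helper, target) for helper in helpers)

# alive = probe("node-7", members, ping, ping_req)
# if not alive: piggyback a failure (or suspicion) update on later pings/acks.
```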
|
https://en.wikipedia.org/wiki/SWIM_Protocol
|
In computing, a crash, or system crash, occurs when a computer program such as a software application or an operating system stops functioning properly and exits. On some operating systems or individual applications, a crash reporting service will report the crash and any details relating to it (or give the user the option to do so), usually to the developer(s) of the application. If the program is a critical part of the operating system, the entire system may crash or hang, often resulting in a kernel panic or fatal system error.

Most crashes are the result of a software bug. Typical causes include accessing invalid memory addresses,[a] incorrect address values in the program counter, buffer overflow, overwriting a portion of the affected program code due to an earlier bug, executing invalid machine instructions (an illegal or unauthorized opcode), or triggering an unhandled exception. The original software bug that started this chain of events is typically considered to be the cause of the crash, which is discovered through the process of debugging. The original bug can be far removed from the code that actually triggered the crash.

In early personal computers, attempting to write data to hardware addresses outside the system's main memory could cause hardware damage. Some crashes are exploitable and let a malicious program or hacker execute arbitrary code, allowing the replication of viruses or the acquisition of data which would normally be inaccessible.

An application typically crashes when it performs an operation that is not allowed by the operating system. The operating system then triggers an exception or signal in the application. Unix applications traditionally responded to the signal by dumping core. Most Windows and Unix GUI applications respond by displaying a dialogue box (such as the one shown in the accompanying image on the right) with the option to attach a debugger if one is installed. Some applications attempt to recover from the error and continue running instead of exiting.

An application can also contain code to crash[b] after detecting a severe error.
Typical errors that result in application crashes include attempting to read or write memory that is not allocated to the application (a segmentation fault), attempting to execute privileged or invalid instructions, passing invalid arguments to system calls, runaway recursion exhausting the stack, and arithmetic errors such as division by zero.
A "crash to desktop" (CTD) is said to occur when aprogram(commonly avideo game) unexpectedly quits, abruptly taking the user back to thedesktop. Usually, the term is applied only to crashes where no error is displayed, hence all the user sees as a result of the crash is the desktop. Many times there is no apparent action that causes a crash to desktop. During normal function, the program mayfreezefor a shorter period of time, and then close by itself. Also during normal function, the program may become ablack screenand repeatedly play the last few seconds ofsound(depending on the size of the audiobuffer) that was being played before it crashes to desktop. Other times it may appear to betriggeredby a certain action, such as loading an area.
CTD bugs are considered particularly problematic for users. Since they frequently display no error message, it can be very difficult to track down the source of the problem, especially if the times they occur and the actions taking place right before the crash do not appear to have any pattern or common ground. One way to track down the source of the problem for games is to run them in windowed-mode. Certain operating system versions may feature one or more tools to help track down causes of CTD problems.
Some computer programs such asStepManiaand BBC'sBamzookialso crash to desktop if in full-screen, but display the error in a separate window when the user has returned to the desktop.
The software running the web server behind a website may crash, rendering it inaccessible entirely or providing only an error message instead of normal content.

For example, if a site is using an SQL database (such as MySQL) for a script (such as PHP) and that SQL database server crashes, then PHP will display a connection error.
An operating system crash commonly occurs when a hardware exception occurs that cannot be handled. Operating system crashes can also occur when internal sanity-checking logic within the operating system detects that the operating system has lost its internal self-consistency.

Modern multi-tasking operating systems, such as Linux and macOS, usually remain unharmed when an application program crashes.

Some operating systems, e.g., z/OS, have facilities for reliability, availability and serviceability (RAS), and the OS can recover from the crash of a critical component, whether due to hardware failure, e.g., an uncorrectable ECC error, or to software failure, e.g., a reference to an unassigned page.
An abnormal end or ABEND is an abnormal termination of software, or a program crash. Errors or crashes on the Novell NetWare network operating system are usually called ABENDs. Communities of NetWare administrators sprang up around the Internet, such as abend.org.

This usage derives from the ABEND macro on IBM OS/360, ..., z/OS operating systems. Usually capitalized, but may appear as "abend". Some common ABEND codes are System ABEND 0C7 (data exception) and System ABEND 0CB (division by zero).[1][2][3] Abends can be "soft" (allowing automatic recovery) or "hard" (terminating the activity).[4] The term is jocularly claimed to be derived from the German word "Abend" meaning "evening".[5]
Depending on the application, the crash may contain the user's sensitive and private information.[6] Moreover, many software bugs which cause crashes are also exploitable for arbitrary code execution and other types of privilege escalation.[7][8] For example, a stack buffer overflow can overwrite the return address of a subroutine with an invalid value, which will cause, e.g., a segmentation fault when the subroutine returns. However, if an exploit overwrites the return address with a valid value, the code at that address will be executed.
When crashes are collected in the field using a crash reporter, the next step for developers is to be able to reproduce them locally. For this, several techniques exist: STAR uses symbolic execution,[9] and EvoCrash performs evolutionary search.[10]
|
https://en.wikipedia.org/wiki/Crash_(computing)
|
A fundamental problem in distributed computing and multi-agent systems is to achieve overall system reliability in the presence of a number of faulty processes. This often requires coordinating processes to reach consensus, or agree on some data value that is needed during computation. Example applications of consensus include agreeing on what transactions to commit to a database in which order, state machine replication, and atomic broadcasts. Real-world applications often requiring consensus include cloud computing, clock synchronization, PageRank, opinion formation, smart power grids, state estimation, control of UAVs (and multiple robots/agents in general), load balancing, blockchain, and others.
The consensus problem requires agreement among a number of processes (or agents) on a single data value. Some of the processes (agents) may fail or be unreliable in other ways, so consensus protocols must befault-tolerantor resilient. The processes must put forth their candidate values, communicate with one another, and agree on a single consensus value.
The consensus problem is a fundamental problem in controlling multi-agent systems. One approach to generating consensus is for all processes (agents) to agree on a majority value. In this context, a majority requires at least one more than half of the available votes (where each process is given a vote). However, one or more faulty processes may skew the resultant outcome such that consensus may not be reached or may be reached incorrectly.
Protocols that solve consensus problems are designed to deal with a limited number of faulty processes. These protocols must satisfy several requirements to be useful. For instance, a trivial protocol could have all processes output binary value 1. This is not useful; thus, the requirement is modified such that the output must depend on the input. That is, the output value of a consensus protocol must be the input value of some process. Another requirement is that a process may decide upon an output value only once, and this decision is irrevocable. A process is correct in an execution if it does not experience a failure. A consensus protocol tolerating halting failures must satisfy the following properties:[1] termination (eventually, every correct process decides some value), integrity (if all correct processes proposed the same value v, then any correct process must decide v), and agreement (every correct process must agree on the same value).
Variations on the definition of integrity may be appropriate, according to the application. For example, a weaker type of integrity would be for the decision value to equal a value that some correct process proposed – not necessarily all of them.[1] There is also a condition known as validity in the literature, which refers to the property that a message sent by a process must be delivered.[1]
A protocol that can correctly guarantee consensus amongst n processes of which at most t fail is said to bet-resilient.
In evaluating the performance of consensus protocols, two factors of interest are running time and message complexity. Running time is given in Big O notation in the number of rounds of message exchange as a function of some input parameters (typically the number of processes and/or the size of the input domain). Message complexity refers to the amount of message traffic that is generated by the protocol. Other factors may include memory usage and the size of messages.
Varying models of computation may define a "consensus problem". Some models may deal with fully connected graphs, while others may deal with rings and trees. In some models message authentication is allowed, whereas in others processes are completely anonymous. Shared memory models in which processes communicate by accessing objects in shared memory are also an important area of research.
In most models of communication protocol participants communicate through authenticated channels. This means that messages are not anonymous, and receivers know the source of every message they receive.

Some models assume a stronger, transferable form of authentication, where each message is signed by the sender, so that a receiver knows not just the immediate source of every message, but the participant that initially created the message.

This stronger type of authentication is achieved by digital signatures, and when this stronger form of authentication is available, protocols can tolerate a larger number of faults.[2]

The two different authentication models are often called oral communication and written communication models. In an oral communication model, only the immediate source of information is known, whereas in stronger, written communication models, at every step along the way the receiver learns not just the immediate source of the message, but the communication history of the message.[3]
In the most traditional single-value consensus protocols such as Paxos, cooperating nodes agree on a single value such as an integer, which may be of variable size so as to encode useful metadata such as a transaction committed to a database.

A special case of the single-value consensus problem, called binary consensus, restricts the input, and hence the output domain, to a single binary digit {0,1}. While not highly useful by themselves, binary consensus protocols are often useful as building blocks in more general consensus protocols, especially for asynchronous consensus.

In multi-valued consensus protocols such as Multi-Paxos and Raft, the goal is to agree on not just a single value but a series of values over time, forming a progressively-growing history. While multi-valued consensus may be achieved naively by running multiple iterations of a single-valued consensus protocol in succession, many optimizations and other considerations such as reconfiguration support can make multi-valued consensus protocols more efficient in practice.

There are two types of failures a process may undergo: a crash failure or a Byzantine failure. A crash failure occurs when a process abruptly stops and does not resume. Byzantine failures are failures in which absolutely no conditions are imposed. For example, they may occur as a result of the malicious actions of an adversary. A process that experiences a Byzantine failure may send contradictory or conflicting data to other processes, or it may sleep and then resume activity after a lengthy delay. Of the two types of failures, Byzantine failures are far more disruptive.
Thus, a consensus protocol tolerating Byzantine failures must be resilient to every possible error that can occur.
A stronger version of consensus tolerating Byzantine failures is given by strengthening the integrity constraint: if a correct process decides a value v, then v must have been proposed by some correct process.
The consensus problem may be considered in the case of asynchronous or synchronous systems. While real world communications are often inherently asynchronous, it is more practical and often easier to model synchronous systems,[4]given that asynchronous systems naturally involve more issues than synchronous ones.
In synchronous systems, it is assumed that all communications proceed in rounds. In one round, a process may send all the messages it requires, while receiving all messages from other processes. In this manner, no message from one round may influence any messages sent within the same round.

In a fully asynchronous message-passing distributed system, in which at least one process may have a crash failure, it was proven in the famous 1985 FLP impossibility result by Fischer, Lynch and Paterson that a deterministic algorithm for achieving consensus is impossible.[5] This impossibility result derives from worst-case scheduling scenarios, which are unlikely to occur in practice except in adversarial situations such as an intelligent denial-of-service attacker in the network. In most normal situations, process scheduling has a degree of natural randomness.[4]
In an asynchronous model, some forms of failures can be handled by a synchronous consensus protocol. For instance, the loss of a communication link may be modeled as a process which has suffered a Byzantine failure.
Randomized consensus algorithms can circumvent the FLP impossibility result by achieving both safety and liveness with overwhelming probability, even under worst-case scheduling scenarios such as an intelligent denial-of-service attacker in the network.[6]

Consensus algorithms traditionally assume that the set of participating nodes is fixed and given at the outset: that is, that some prior (manual or automatic) configuration process has permissioned a particular known group of participants who can authenticate each other as members of the group. In the absence of such a well-defined, closed group with authenticated members, a Sybil attack against an open consensus group can defeat even a Byzantine consensus algorithm, simply by creating enough virtual participants to overwhelm the fault tolerance threshold.

A permissionless consensus protocol, in contrast, allows anyone in the network to join dynamically and participate without prior permission, but instead imposes a different form of artificial cost or barrier to entry to mitigate the Sybil attack threat. Bitcoin introduced the first permissionless consensus protocol using proof of work and a difficulty adjustment function, in which participants compete to solve cryptographic hash puzzles, and probabilistically earn the right to commit blocks and earn associated rewards in proportion to their invested computational effort. Motivated in part by the high energy cost of this approach, subsequent permissionless consensus protocols have proposed or adopted other alternative participation rules for Sybil attack protection, such as proof of stake, proof of space, and proof of authority.
Three agreement problems of interest are as follows.
A collection of n processes, numbered from 0 to n − 1, communicate by sending messages to one another. Process 0 must transmit a value v to all processes such that: if process 0 is correct, then every correct process agrees on the value v it sent, and in any case, all correct processes agree on the same value.

It is also known as the General's Problem.
Formal requirements for a consensus protocol may include agreement (all correct processes decide the same value), validity (any value decided was proposed by some process), termination (all correct processes eventually decide), and integrity (no process decides more than once).
For n processes in a partially synchronous system (the system alternates between good and bad periods of synchrony), each process chooses a private value. The processes communicate with each other in rounds to determine a public value and generate a consensus vector with the following requirements:[7] if a correct process's private value is v, then all correct processes obtain v as the corresponding component of their vector, and all correct processes decide on the same vector.
It can be shown that variations of these problems are equivalent in that the solution for a problem in one type of model may be the solution for another problem in another type of model. For example, a solution to the Weak Byzantine General problem in a synchronous authenticated message passing model leads to a solution for Weak Interactive Consistency.[8]An interactive consistency algorithm can solve the consensus problem by having each process choose the majority value in its consensus vector as its consensus value.[9]
There is a t-resilient anonymous synchronous protocol which solves the Byzantine Generals problem[10][11] if t/n < 1/3, and the Weak Byzantine Generals case,[8] where t is the number of failures and n is the number of processes.

For systems with n processors, of which f are Byzantine, it has been shown that there exists no algorithm that solves the consensus problem for n ≤ 3f in the oral-messages model.[12] The proof is constructed by first showing the impossibility for the three-node case n = 3 and using this result to argue about partitions of processors. In the written-messages model there are protocols that can tolerate n = f + 1.[2]
In a fully asynchronous system there is no consensus solution that can tolerate one or more crash failures, even when only requiring the non-triviality property.[5] This result is sometimes called the FLP impossibility proof, named after the authors Michael J. Fischer, Nancy Lynch, and Mike Paterson, who were awarded a Dijkstra Prize for this significant work. The FLP result has been mechanically verified to hold even under fairness assumptions.[13] However, FLP does not state that consensus can never be reached: merely that under the model's assumptions, no algorithm can always reach consensus in bounded time. In practice it is highly unlikely to occur.

The Paxos consensus algorithm by Leslie Lamport, and variants of it such as Raft, are used pervasively in widely deployed distributed and cloud computing systems. These algorithms are typically synchronous, dependent on an elected leader to make progress, and tolerate only crashes, not Byzantine failures.

An example of a polynomial-time binary consensus protocol that tolerates Byzantine failures is the Phase King algorithm by Garay and Berman.[14] The algorithm solves consensus in a synchronous message-passing model with n processes and up to f failures, provided n > 4f.
In the phase king algorithm, there are f + 1 phases, with 2 rounds per phase.

Each process keeps track of its preferred output (initially equal to the process's own input value). In the first round of each phase each process broadcasts its own preferred value to all other processes. It then receives the values from all processes and determines which value is the majority value and its count. In the second round of the phase, the process whose id matches the current phase number is designated the king of the phase. The king broadcasts the majority value it observed in the first round and serves as a tie breaker. Each process then updates its preferred value as follows. If the count of the majority value the process observed in the first round is greater than n/2 + f, the process changes its preference to that majority value; otherwise it uses the phase king's value. At the end of f + 1 phases the processes output their preferred values.
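A sketch of that description with the message exchange simulated in a single process (every process sees every broadcast, and no node actually misbehaves), so it shows the update rule rather than a fault-tolerant implementation:

```python
from collections import Counter

def phase_king(inputs, f):
    """inputs: each process's initial bit; requires n > 4f for correctness."""
    n = len(inputs)
    prefs = list(inputs)
    for phase in range(f + 1):
        # Round 1: everyone broadcasts its preference; tally the majority.
        counts = [Counter(prefs) for _ in range(n)]
        majority = [c.most_common(1)[0] for c in counts]   # (value, count) pairs
        # Round 2: the phase king (process id == phase) broadcasts its
        # observed majority value as the tie-breaker.
        king_value = majority[phase][0]
        for i in range(n):
            value, count = majority[i]
            prefs[i] = value if count > n // 2 + f else king_value
    return prefs

print(phase_king([1, 0, 1, 1, 1], f=1))   # all processes converge to 1
```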
Google has implemented a distributed lock service library called Chubby.[15] Chubby maintains lock information in small files which are stored in a replicated database to achieve high availability in the face of failures. The database is implemented on top of a fault-tolerant log layer which is based on the Paxos consensus algorithm. In this scheme, Chubby clients communicate with the Paxos master in order to access/update the replicated log; i.e., to read from and write to the files.[16]

Many peer-to-peer online real-time strategy games use a modified lockstep protocol as a consensus protocol in order to manage game state between players in a game. Each game action results in a game state delta broadcast to all other players in the game along with a hash of the total game state. Each player validates the change by applying the delta to their own game state and comparing the game state hashes. If the hashes do not agree then a vote is cast, and those players whose game state is in the minority are disconnected and removed from the game (known as a desync).
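A sketch of the desync check described above, with a deterministic hash over a toy game state; the state layout and helper names are invented:

```python
import hashlib, json

def state_hash(state):
    """Deterministic hash of the full game state."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def apply_action(state, delta, claimed_hash):
    state.update(delta)                    # apply the broadcast game-state delta
    if state_hash(state) != claimed_hash:  # mismatching hashes trigger a desync vote
        raise RuntimeError("desync: local state diverges from the group")
    return state

state = {"tick": 41, "units": 12}
delta = {"tick": 42, "units": 11}
expected = state_hash({"tick": 42, "units": 11})   # hash agreed by the group
apply_action(state, delta, expected)
```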
Another well-known approach is the family of MSR-type algorithms, which have been used widely in fields from computer science to control theory.[17][18][19]
Bitcoin uses proof of work, a difficulty adjustment function and a reorganization function to achieve permissionless consensus in its open peer-to-peer network. To extend bitcoin's blockchain or distributed ledger, miners attempt to solve a cryptographic puzzle, where the probability of finding a solution is proportional to the computational effort expended in hashes per second. The node that first solves such a puzzle has its proposed version of the next block of transactions added to the ledger and eventually accepted by all other nodes. As any node in the network can attempt to solve the proof-of-work problem, a Sybil attack is infeasible in principle unless the attacker has over 50% of the computational resources of the network.

Other cryptocurrencies (e.g., Ethereum, NEO, STRATIS, ...) use proof of stake, in which nodes compete to append blocks and earn associated rewards in proportion to stake, or existing cryptocurrency allocated and locked or staked for some time period. One advantage of a proof-of-stake system over a proof-of-work system is that it avoids the high energy consumption demanded by the latter. As an example, bitcoin mining (2018) was estimated to consume non-renewable energy sources at an amount similar to the entire nations of the Czech Republic or Jordan, while the total energy consumption of Ethereum, the largest proof-of-stake network, is just under that of 205 average US households.[29][30][31]
Some cryptocurrencies, such as Ripple, use a system of validating nodes to validate the ledger.
This system used by Ripple, called the Ripple Protocol Consensus Algorithm (RPCA), works in rounds: each server takes the valid transactions it has seen as candidates, amalgamates the candidate sets proposed by the validators on its unique node list, and votes on each transaction over successive rounds with a rising agreement threshold; transactions that ultimately reach 80% agreement are applied to the ledger.
Other participation rules used in permissionless consensus protocols to impose barriers to entry and resist Sybil attacks include proof of authority, proof of space, proof of burn, and proof of elapsed time.

Contrasting with the above permissionless participation rules, all of which reward participants in proportion to the amount of investment in some action or resource, proof of personhood protocols aim to give each real human participant exactly one unit of voting power in permissionless consensus, regardless of economic investment.[33][34] Proposed approaches to achieving one-per-person distribution of consensus power for proof of personhood include physical pseudonym parties,[35] social networks,[36] pseudonymized government-issued identities,[37] and biometrics.[38]
To solve the consensus problem in a shared-memory system, concurrent objects must be introduced. A concurrent object, or shared object, is a data structure which helps concurrent processes communicate to reach an agreement. Traditional implementations using critical sections face the risk of crashing if some process dies inside the critical section or sleeps for an intolerably long time. Researchers defined wait-freedom as the guarantee that the algorithm completes in a finite number of steps.

The consensus number of a concurrent object is defined to be the maximum number of processes in the system which can reach consensus by the given object in a wait-free implementation.[39] Objects with a consensus number of n can implement any object with a consensus number of n or lower, but cannot implement any objects with a higher consensus number. The consensus numbers form what is called Herlihy's hierarchy of synchronization objects.[40]

According to the hierarchy, read/write registers cannot solve consensus even in a 2-process system. Data structures like stacks and queues can only solve consensus between two processes. However, some concurrent objects are universal (denoted by a consensus number of ∞), which means they can solve consensus among any number of processes and can simulate any other objects through an operation sequence.[39]
|
https://en.wikipedia.org/wiki/Consensus_(computer_science)
|
In fault-tolerant distributed computing, an atomic broadcast or total order broadcast is a broadcast where all correct processes in a system of multiple processes receive the same set of messages in the same order; that is, the same sequence of messages.[1][2] The broadcast is termed "atomic" because it either eventually completes correctly at all participants, or all participants abort without side effects. Atomic broadcasts are an important distributed computing primitive.
The following properties are usually required from an atomic broadcast protocol: validity (if a correct participant broadcasts a message, then all correct participants will eventually receive it), uniform agreement (if one correct participant receives a message, then all correct participants will eventually receive it), uniform integrity (a message is received by each participant at most once, and only if it was previously broadcast), and uniform total order (if correct participants p and q both receive messages m and m′, then p receives m before m′ if and only if q receives m before m′).
Rodrigues and Raynal[3] and Schiper et al.[4] define the integrity and validity properties of atomic broadcast slightly differently.
Note that total order is not equivalent to FIFO order, which requires that if a process sent message 1 before it sent message 2, then all participants must receive message 1 before receiving message 2. It is also not equivalent to "causal order", where if message 2 "depends on" or "occurs after" message 1, then all participants must receive message 2 after receiving message 1. While a strong and useful condition, total order requires only that all participants receive the messages in the same order, and does not place other constraints on that order, such as the order in which the messages were sent.[5]
Designing an algorithm for atomic broadcasts is relatively easy if it can be assumed that computers will not fail. For example, if there are no failures, atomic broadcast can be achieved simply by having all participants communicate with one "leader" which determines the order of the messages, with the other participants following the leader.
However, real computers are faulty; they fail and recover from failure at unpredictable, possibly inopportune, times. For example, in the follow-the-leader algorithm, what if the leader fails at the wrong time? In such an environment achieving atomic broadcasts is difficult.[1] A number of protocols have been proposed for performing atomic broadcast, under various assumptions about the network, failure models, availability of hardware support for multicast, and so forth.[2]

In order for the conditions for atomic broadcast to be satisfied, the participants must effectively "agree" on the order of receipt of the messages. Participants recovering from failure, after the other participants have "agreed" on an order and started to receive the messages, must be able to learn and comply with the agreed order. Such considerations indicate that in systems with crash failures, atomic broadcast and consensus are equivalent problems.[6]
A value can be proposed by a process for consensus by atomically broadcasting it, and a process can decide a value by selecting the value of the first message which it atomically receives. Thus, consensus can be reduced to atomic broadcast.
Conversely, a group of participants can atomically broadcast messages by achieving consensus regarding the first message to be received, followed by achieving consensus on the next message, and so forth until all the messages have been received. Thus, atomic broadcast reduces to consensus. This was demonstrated more formally and in greater detail by Xavier Défago, et al.[2]
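The reduction in this direction can be sketched as follows: each delivery slot runs one consensus instance, and the agreed value is delivered next. The consensus function here is an assumed black box (a stand-in for Paxos, Raft, or similar) that returns the same value at every correct process for a given instance:

```python
def atomic_broadcast_deliver(pending, consensus, delivered):
    """Agree on the next message, then deliver it in the agreed order."""
    slot = len(delivered)
    msg = consensus(instance=slot, proposal=sorted(pending)[0])
    delivered.append(msg)        # same slot -> same msg at every correct process
    pending.discard(msg)
    return msg

# Dummy consensus that simply accepts the proposal, for local illustration:
delivered, pending = [], {"b", "a"}
pick = lambda instance, proposal: proposal
while pending:
    atomic_broadcast_deliver(pending, pick, delivered)
print(delivered)                 # ['a', 'b'] -- one agreed total order
```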
A fundamental result in distributed computing is that achieving consensus in asynchronous systems in which even one crash failure can occur is impossible in the most general case. This was shown in 1985 by Michael J. Fischer, Nancy Lynch, and Mike Paterson, and is sometimes called the FLP result.[7] Since consensus and atomic broadcast are equivalent, FLP applies also to atomic broadcast.[5] The FLP result does not prohibit the implementation of atomic broadcast in practice, but it does require making less stringent assumptions than FLP in some respect, such as about processor and communication timings.
The Chandra–Toueg algorithm[6] is a consensus-based solution to atomic broadcast. Another solution has been put forward by Rodrigues and Raynal.[3]

The Zookeeper Atomic Broadcast (ZAB) protocol is the basic building block for Apache ZooKeeper, a fault-tolerant distributed coordination service which underpins Hadoop and many other important distributed systems.[8][9]

Ken Birman has proposed the virtual synchrony execution model for distributed systems, the idea of which is that all processes observe the same events in the same order. A total ordering of the messages being received, as in atomic broadcast, is one (though not the only) method for attaining virtually synchronous message receipt.
|
https://en.wikipedia.org/wiki/Atomic_broadcast
|
Methods of production fall into three main categories: job (one-off production), batch (multiple items, one step at a time for all items), and flow.

Job production is used when a product is produced with the labor of one or a few workers and is rarely used for bulk and large-scale production. It is mainly used for one-off products or prototypes (hence also known as prototype production), as it is inefficient; however, quality is greatly enhanced with job production compared to other methods. Individual wedding cakes and made-to-measure suits are examples of job production. New small firms often use job production before they get a chance or have the means to expand. Job production is highly motivating for workers because it gives them an opportunity to produce the whole product and take pride in it.

Batch production is the method used to produce or process products in groups or batches, where the products in a batch go through the whole production process together. An example would be a bakery producing each different type of bread separately, so that each product (in this case, bread) is not produced continuously. Batch production is used in many different ways and is best suited to cases that need a quality/quantity balance. This technique is probably the most commonly used method for organizing manufacture and promotes specialist labor, as very often batch production involves a small number of persons. Batch production occurs when many similar items are produced together. Each batch goes through one stage of the production process before moving on to the next stage.
Flow production (mass production) is also a very common method of production. Flow production is when the product is built up through many segregated stages; the product is built upon at each stage and then passed directly to the next stage where it is built upon again. The production method is financially the most efficient and effective because there is less of a need for skilled workers.
In contrast to job production, the method of boutique manufacturing (lean) is suitable for the production of very small to small batches, i.e. orders of a few units up to several dozens of similar or identical goods. The workflow organization of a boutique manufacturing entity can be a mixture of both jobbing and batch production but involves higher standardization than job production. Boutique manufacturing is often organized with single workplaces or production cells carrying out a number of subsequent production steps until completion of certain components or even the whole product; large assembly lines are generally not used. The flexibility and the variety of products that can be produced in the entity are therefore much higher than with the more standardized method of batch production.
|
https://en.wikipedia.org/wiki/Methods_of_production
|
Datafication is a technological trend turning many aspects of our life into data,[1][2] which is subsequently transferred into information realised as a new form of value.[3] Kenneth Cukier and Viktor Mayer-Schönberger introduced the term datafication to the broader lexicon in 2013.[4] Up until this time, datafication had been associated with the analysis of representations of our lives captured through data, but not on the present scale. This change was primarily due to the impact of big data and the computational opportunities afforded to predictive analytics.
Datafication is not the same as digitization, which takes analog content—books, films, photographs—and converts it into digital information, a sequence of ones and zeros that computers can read. Datafication is a far broader activity: taking all aspects of life and turning them into data [...] Once we datafy things, we can transform their purpose and turn the information into new forms of value[2]
Datafication has an ideological aspect, called dataism:
"The drive towards datafication is rooted in a belief in the capacity of data to represent social life, sometimes better or more objectively than pre-digital (human) interpretations."[5]
Datafication is often applied to social and communication media. Some examples include how Twitter datafies stray thoughts, as well as the datafication of HR by LinkedIn and others.

Other examples include aspects of the built environment, and design via engineering and other tools that tie data to formal, functional, or other physical media outcomes. Data collection and processing for optimal control (e.g., shape optimization) is another example.
|
https://en.wikipedia.org/wiki/Datafication
|
A geographic information system (GIS) consists of integrated computer hardware and software that store, manage, analyze, edit, output, and visualize geographic data.[1][2] Much of this often happens within a spatial database; however, this is not essential to meet the definition of a GIS.[1] In a broader sense, one may consider such a system also to include human users and support staff, procedures and workflows, the body of knowledge of relevant concepts and methods, and institutional organizations.

The uncounted plural, geographic information systems, also abbreviated GIS, is the most common term for the industry and profession concerned with these systems. The academic discipline that studies these systems and their underlying geographic principles may also be abbreviated as GIS, but the unambiguous GIScience is more common.[3] GIScience is often considered a subdiscipline of geography within the branch of technical geography.

Geographic information systems are utilized in multiple technologies, processes, techniques and methods. They are attached to various operations and numerous applications that relate to engineering, planning, management, transport/logistics, insurance, telecommunications, and business,[4] as well as the natural sciences such as forestry, ecology, and Earth science. For this reason, GIS and location intelligence applications are at the foundation of location-enabled services, which rely on geographic analysis and visualization.

GIS provides the ability to relate previously unrelated information, through the use of location as the "key index variable". Locations and extents that are found in the Earth's spacetime can be recorded through the date and time of occurrence, along with x, y, and z coordinates, representing longitude (x), latitude (y), and elevation (z). All Earth-based spatial–temporal location and extent references should be relatable to one another, and ultimately, to a "real" physical location or extent. This key characteristic of GIS has begun to open new avenues of scientific inquiry and studies.
While digital GIS dates to the mid-1960s, whenRoger Tomlinsonfirst coined the phrase "geographic information system",[5]many of the geographic concepts and methods that GIS automates date back decades earlier.
One of the first known instances in which spatial analysis was used came from the field ofepidemiologyin theRapport sur la marche et les effets du choléra dans Paris et le département de laSeine(1832).[6]Frenchcartographerand geographerCharles Picquetcreated a map outlining theforty-eight districts in Paris, usinghalftonecolor gradients, to provide a visual representation for the number of reported deaths due tocholeraper every 1,000 inhabitants.
In 1854,John Snow, an epidemiologist and physician, was able to determine the source of acholera outbreak in Londonthrough the use of spatial analysis. Snow achieved this through plotting the residence of each casualty on a map of the area, as well as the nearby water sources. Once these points were marked, he was able to identify the water source within the cluster that was responsible for the outbreak. This was one of the earliest successful uses of a geographic methodology in pinpointing the source of an outbreak in epidemiology. While the basic elements oftopographyand theme existed previously incartography, Snow's map was unique due to his use of cartographic methods, not only to depict, but also to analyze clusters of geographically dependent phenomena.
The early 20th century saw the development of photozincography, which allowed maps to be split into layers, for example one layer for vegetation and another for water. This was particularly used for printing contours – drawing these was a labour-intensive task, but having them on a separate layer meant they could be worked on without the other layers to confuse the draughtsman. This work was initially drawn on glass plates, but later plastic film was introduced, with the advantages of being lighter, using less storage space and being less brittle, among others. When all the layers were finished, they were combined into one image using a large process camera. Once color printing came in, the layers idea was also used for creating separate printing plates for each color. While the use of layers much later became one of the typical features of a contemporary GIS, the photographic process just described is not considered a GIS in itself – as the maps were just images with no database to link them to.
Two additional developments are notable in the early days of GIS: Ian McHarg's publication Design with Nature[7] and its map overlay method, and the introduction of a street network into the U.S. Census Bureau's DIME (Dual Independent Map Encoding) system.[8]
The first publication detailing the use of computers to facilitate cartography was written by Waldo Tobler in 1959.[9] Further computer hardware development spurred by nuclear weapon research led to more widespread general-purpose computer "mapping" applications by the early 1960s.[10]
In 1963, the world's first true operational GIS was developed in Ottawa, Ontario, Canada, by the federal Department of Forestry and Rural Development. Developed by Roger Tomlinson, it was called the Canada Geographic Information System (CGIS) and was used to store, analyze, and manipulate data collected for the Canada Land Inventory, an effort to determine the land capability for rural Canada by mapping information about soils, agriculture, recreation, wildlife, waterfowl, forestry, and land use at a scale of 1:50,000. A rating classification factor was also added to permit analysis.[11][12]
CGIS was an improvement over "computer mapping" applications as it provided capabilities for data storage, overlay, measurement, and digitizing/scanning. It supported a national coordinate system that spanned the continent, coded lines as arcs having a true embedded topology, and stored the attribute and locational information in separate files. As a result of this, Tomlinson has become known as the "father of GIS", particularly for his use of overlays in promoting the spatial analysis of convergent geographic data.[13] CGIS lasted into the 1990s and built a large digital land resource database in Canada. It was developed as a mainframe-based system in support of federal and provincial resource planning and management. Its strength was continent-wide analysis of complex datasets. The CGIS was never available commercially.
In 1964, Howard T. Fisher formed the Laboratory for Computer Graphics and Spatial Analysis at the Harvard Graduate School of Design (LCGSA 1965–1991), where a number of important theoretical concepts in spatial data handling were developed, and which by the 1970s had distributed seminal software code and systems, such as SYMAP, GRID, and ODYSSEY, to universities, research centers, and corporations worldwide.[14] These programs were the first examples of general-purpose GIS software that was not developed for a particular installation, and they were very influential on future commercial software, such as Esri ARC/INFO, released in 1983.
By the late 1970s, two public domain GIS systems (MOSS and GRASS GIS) were in development, and by the early 1980s, M&S Computing (later Intergraph), along with Bentley Systems Incorporated for the CAD platform, Environmental Systems Research Institute (ESRI), CARIS (Computer Aided Resource Information System), and ERDAS (Earth Resource Data Analysis System) emerged as commercial vendors of GIS software, successfully incorporating many of the CGIS features, combining the first-generation approach to separation of spatial and attribute information with a second-generation approach to organizing attribute data into database structures.[15]
In 1986, Mapping Display and Analysis System (MIDAS), the first desktop GIS product,[16] was released for the DOS operating system. This was renamed MapInfo for Windows in 1990 when it was ported to the Microsoft Windows platform. This began the process of moving GIS from the research department into the business environment.
By the end of the 20th century, the rapid growth in various systems had been consolidated and standardized on relatively few platforms, and users were beginning to explore viewing GIS data over the Internet, requiring data format and transfer standards. More recently, a growing number of free, open-source GIS packages run on a range of operating systems and can be customized to perform specific tasks. The major trend of the 21st century has been the integration of GIS capabilities with other information technology and Internet infrastructure, such as relational databases, cloud computing, software as a service (SaaS), and mobile computing.[17]
The distinction must be made between a singular geographic information system, which is a single installation of software and data for a particular use, along with associated hardware, staff, and institutions (e.g., the GIS for a particular city government); and GIS software, a general-purpose application program that is intended to be used in many individual geographic information systems in a variety of application domains.[18]: 16 Starting in the late 1970s, many software packages have been created specifically for GIS applications. Esri's ArcGIS, which includes ArcGIS Pro and the legacy software ArcMap, currently dominates the GIS market. Other examples include Autodesk and MapInfo Professional, and open-source programs such as QGIS, GRASS GIS, MapGuide, and Hadoop-GIS.[19] These and other desktop GIS applications include a full suite of capabilities for entering, managing, analyzing, and visualizing geographic data, and are designed to be used on their own.
Starting in the late 1990s with the emergence of the Internet, as computer network technology progressed, GIS infrastructure and data began to move to servers, providing another mechanism for delivering GIS capabilities.[20]: 216 This was facilitated by standalone software installed on a server, similar to other server software such as HTTP servers and relational database management systems, enabling clients to access GIS data and processing tools without having to install specialized desktop software. These networks are known as distributed GIS.[21][22] This strategy has been extended through the Internet and the development of cloud-based GIS platforms such as ArcGIS Online and GIS-specialized software as a service (SaaS). The use of the Internet to facilitate distributed GIS is known as Internet GIS.[21][22]
An alternative approach is the integration of some or all of these capabilities into other software or information technology architectures. One example is a spatial extension to object-relational database software, which defines a geometry datatype so that spatial data can be stored in relational tables, and extensions to SQL for spatial analysis operations such as overlay. Another example is the proliferation of geospatial libraries and application programming interfaces (e.g., GDAL, Leaflet, D3.js) that extend programming languages to enable the incorporation of GIS data and processing into custom software, including web mapping sites and location-based services in smartphones.
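As a rough illustration of the geometry-datatype idea, the following Python sketch (using the shapely geometry library; the parcel table and coordinates are invented for illustration) mimics a spatial "WHERE contains" query over a small attribute table:

```python
# A minimal sketch of the "spatial extension" idea: geometries stored
# alongside ordinary attributes and queried with a spatial predicate.
from shapely.geometry import Point, Polygon

# Attribute table rows, each carrying a geometry column.
parcels = [
    {"id": 1, "zoning": "residential",
     "geom": Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])},
    {"id": 2, "zoning": "commercial",
     "geom": Polygon([(10, 0), (20, 0), (20, 10), (10, 10)])},
]

# A query analogous to SQL "WHERE ST_Contains(geom, point)".
site = Point(5, 5)
hits = [p["id"] for p in parcels if p["geom"].contains(site)]
print(hits)  # -> [1]
```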
The core of any GIS is a database that contains representations of geographic phenomena, modeling their geometry (location and shape) and their properties or attributes. A GIS database may be stored in a variety of forms, such as a collection of separate data files or a single spatially-enabled relational database. Collecting and managing these data usually constitutes the bulk of the time and financial resources of a project, far more than other aspects such as analysis and mapping.[20]: 175
GIS uses spatio-temporal (space-time) location as the key index variable for all other information. Just as a relational database containing text or numbers can relate many different tables using common key index variables, GIS can relate otherwise unrelated information by using location as the key index variable. The key is the location and/or extent in space-time.
Any variable that can be located spatially, and increasingly also temporally, can be referenced using a GIS. Locations or extents in Earth space–time may be recorded as dates/times of occurrence, and x, y, and z coordinates representing longitude, latitude, and elevation, respectively. These GIS coordinates may represent other quantified systems of temporo-spatial reference (for example, film frame number, stream gage station, highway mile-marker, surveyor benchmark, building address, street intersection, entrance gate, water depth sounding, POS or CAD drawing origin/units). Units applied to recorded temporal-spatial data can vary widely (even when using exactly the same data, see map projections), but all Earth-based spatial–temporal location and extent references should, ideally, be relatable to one another and ultimately to a "real" physical location or extent in space–time.
Related by accurate spatial information, an incredible variety of real-world and projected past or future data can be analyzed, interpreted, and represented.[23] This key characteristic of GIS has begun to open new avenues of scientific inquiry into behaviors and patterns of real-world information that previously had not been systematically correlated.
GIS data represents phenomena that exist in the real world, such as roads, land use, elevation, trees, waterways, and states. The most common types of phenomena that are represented in data can be divided into two conceptualizations: discrete objects (e.g., a house, a road) and continuous fields (e.g., rainfall amount or population density).[20]: 62–65 Other types of geographic phenomena, such as events (e.g., location of World War II battles), processes (e.g., extent of suburbanization), and masses (e.g., types of soil in an area) are represented less commonly or indirectly, or are modeled in analysis procedures rather than data.
Traditionally, there are two broad methods used to store data in a GIS for both kinds of abstraction: raster images and vector data. Points, lines, and polygons represent vector data of mapped locations and their attributes.
A newer hybrid method of storing data is the point cloud, which combines three-dimensional points with RGB information at each point, returning a 3D color image. GIS thematic maps are thus becoming more realistic and visually descriptive of what they set out to show or determine.
GIS data acquisition includes several methods for gathering spatial data into a GIS database, which can be grouped into three categories: primary data capture, the direct measurement of phenomena in the field (e.g., remote sensing, the global positioning system); secondary data capture, the extraction of information from existing sources that are not in a GIS form, such as paper maps, through digitization; and data transfer, the copying of existing GIS data from external sources such as government agencies and private companies. All of these methods can consume significant time, finances, and other resources.[20]: 173
Survey data can be directly entered into a GIS from digital data collection systems on survey instruments using a technique called coordinate geometry (COGO). Positions from a global navigation satellite system (GNSS) such as the Global Positioning System can also be collected and then imported into a GIS. A current trend in data collection gives users the ability to utilize field computers to edit live data using wireless connections or disconnected editing sessions.[24] Another trend is the use of applications available on smartphones and PDAs in the form of mobile GIS.[25] This has been enhanced by the availability of low-cost mapping-grade GPS units with decimeter accuracy in real time, which eliminates the need to post-process, import, and update the data in the office after fieldwork has been collected. This includes the ability to incorporate positions collected using a laser rangefinder. New technologies also allow users to create maps as well as analysis directly in the field, making projects more efficient and mapping more accurate.
Remotely sensed data also plays an important role in data collection, and consists of sensors attached to a platform. Sensors include cameras, digital scanners, and lidar, while platforms usually consist of aircraft and satellites. In England in the mid-1990s, hybrid kite/balloons called helikites pioneered the use of compact airborne digital cameras as airborne geo-information systems. Aircraft measurement software, accurate to 0.4 mm, was used to link the photographs and measure the ground. Helikites are inexpensive and gather more accurate data than aircraft. Helikites can be used over roads, railways, and towns where unmanned aerial vehicles (UAVs) are banned.
Recently, aerial data collection has become more accessible with miniature UAVs and drones. For example, the Aeryon Scout was used to map a 50-acre area with a ground sample distance of 1 inch (2.54 cm) in only 12 minutes.[26]
The majority of digital data currently comes from photo interpretation of aerial photographs. Soft-copy workstations are used to digitize features directly from stereo pairs of digital photographs. These systems allow data to be captured in two and three dimensions, with elevations measured directly from a stereo pair using principles of photogrammetry. Analog aerial photos must be scanned before being entered into a soft-copy system; for images from high-quality digital cameras this step is skipped.
Satellite remote sensing provides another important source of spatial data. Here, satellites use different sensor packages to passively measure reflectance from parts of the electromagnetic spectrum, or radio waves that were sent out from an active sensor such as radar. Remote sensing collects raster data that can be further processed using different bands to identify objects and classes of interest, such as land cover.
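To illustrate band-based processing, the following Python sketch computes the normalized difference vegetation index (NDVI), one common band combination used to highlight vegetated land cover; the 2x2 reflectance values are invented:

```python
# A small sketch of band arithmetic on remotely sensed raster data:
# NDVI computed cell by cell from red and near-infrared bands.
import numpy as np

red = np.array([[0.10, 0.30], [0.08, 0.25]])  # red-band reflectance
nir = np.array([[0.60, 0.35], [0.55, 0.28]])  # near-infrared reflectance

ndvi = (nir - red) / (nir + red)  # high values suggest vegetation
print(ndvi.round(2))
```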
The most common method of data creation is digitization, where a hard copy map or survey plan is transferred into a digital medium through the use of a CAD program and geo-referencing capabilities. With the wide availability of ortho-rectified imagery (from satellites, aircraft, helikites, and UAVs), heads-up digitizing is becoming the main avenue through which geographic data is extracted. Heads-up digitizing involves tracing geographic data directly on top of the aerial imagery instead of by the traditional method of tracing the geographic form on a separate digitizing tablet (heads-down digitizing). Heads-down digitizing, or manual digitizing, uses a special magnetic pen, or stylus, that feeds information into a computer to create an identical, digital map. Some tablets use a mouse-like tool, called a puck, instead of a stylus.[27][28] The puck has a small window with cross-hairs which allows for greater precision and pinpointing of map features. Though heads-up digitizing is more commonly used, heads-down digitizing is still useful for digitizing maps of poor quality.[28]
Existing data printed on paper or PET film maps can be digitized or scanned to produce digital data. A digitizer produces vector data as an operator traces points, lines, and polygon boundaries from a map. Scanning a map results in raster data that can be further processed to produce vector data.
When data is captured, the user should consider whether it should be captured with relative accuracy or absolute accuracy, since this can influence not only how the information will be interpreted but also the cost of data capture.
After entering data into a GIS, the data usually requires editing to remove errors, or further processing. Vector data must be made "topologically correct" before it can be used for some advanced analysis. For example, in a road network, lines must connect with nodes at an intersection. Errors such as undershoots and overshoots must also be removed. For scanned maps, blemishes on the source map may need to be removed from the resulting raster. For example, a fleck of dirt might connect two lines that should not be connected.
The earth can be represented by various models, each of which may provide a different set of coordinates (e.g., latitude, longitude, elevation) for any given point on the Earth's surface. The simplest model is to assume the earth is a perfect sphere. As more measurements of the earth have accumulated, the models of the earth have become more sophisticated and more accurate. In fact, there are models called datums that apply to different areas of the earth to provide increased accuracy, like the North American Datum of 1983 for U.S. measurements, and the World Geodetic System for worldwide measurements.
The latitude and longitude on a map made against a local datum may not be the same as one obtained from a GPS receiver. Converting coordinates from one datum to another requires a datum transformation such as a Helmert transformation, although in certain situations a simple translation may be sufficient.[29]
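A minimal sketch of such a transformation, using the pyproj library to convert a NAD27 coordinate (EPSG:4267) to WGS84 (EPSG:4326); the exact shift obtained depends on the transformation grids available to the local pyproj installation:

```python
# A hedged sketch of a datum transformation with pyproj.
from pyproj import Transformer

# EPSG:4267 = NAD27, EPSG:4326 = WGS84; always_xy gives (lon, lat) order.
nad27_to_wgs84 = Transformer.from_crs("EPSG:4267", "EPSG:4326",
                                      always_xy=True)

lon, lat = nad27_to_wgs84.transform(-100.0, 40.0)
print(lon, lat)  # slightly shifted from the NAD27 input coordinate
```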
In popular GIS software, data projected in latitude/longitude is often represented as a geographic coordinate system. For example, latitude/longitude data referenced to the North American Datum of 1983 is denoted 'GCS North American 1983'.
While no digital model can be a perfect representation of the real world, it is important that GIS data be of high quality. In keeping with the principle of homomorphism, the data must be close enough to reality that the results of GIS procedures correctly correspond to the results of real-world processes. This means that there is no single standard for data quality, because the necessary degree of quality depends on the scale and purpose of the tasks for which it is to be used. Several elements of data quality are important to GIS data:
The quality of a dataset is very dependent upon its sources and the methods used to create it. Land surveyors have been able to provide a high level of positional accuracy utilizing high-end GPS equipment, but GPS locations on the average smartphone are much less accurate.[31] Common datasets such as digital terrain and aerial imagery[32] are available in a wide variety of levels of quality, especially spatial precision. Paper maps, which have been digitized for many years as a data source, can also be of widely varying quality.
A quantitative analysis of maps brings accuracy issues into focus. The electronic and other equipment used to make measurements for GIS is far more precise than the machines of conventional map analysis. All geographical data are inherently inaccurate, and these inaccuracies will propagate through GIS operations in ways that are difficult to predict.[33]
Data restructuring can be performed by a GIS to convert data into different formats. For example, a GIS may be used to convert a satellite image map to a vector structure by generating lines around all cells with the same classification, while determining the cell spatial relationships, such as adjacency or inclusion.
More advanced data processing can occur with image processing, a technique developed in the late 1960s by NASA and the private sector to provide contrast enhancement, false color rendering, and a variety of other techniques including the use of two-dimensional Fourier transforms. Since digital data is collected and stored in various ways, two data sources may not be entirely compatible, so a GIS must be able to convert geographic data from one structure to another. In so doing, the implicit assumptions behind different ontologies and classifications require analysis.[34] Object ontologies have gained increasing prominence as a consequence of object-oriented programming and sustained work by Barry Smith and co-workers.
Spatial ETL tools provide the data processing functionality of traditional extract, transform, load (ETL) software, but with a primary focus on the ability to manage spatial data. They provide GIS users with the ability to translate data between different standards and proprietary formats, whilst geometrically transforming the data en route. These tools can come in the form of add-ins to existing wider-purpose software such as spreadsheets.
GIS spatial analysis is a rapidly changing field, and GIS packages increasingly include analytical tools as standard built-in facilities, as optional toolsets, or as add-ins or 'analysts'. In many instances these are provided by the original software suppliers (commercial vendors or collaborative non-commercial development teams), while in other cases facilities have been developed and are provided by third parties. Furthermore, many products offer software development kits (SDKs), programming languages and language support, scripting facilities, and/or special interfaces for developing one's own analytical tools or variants. The increased availability has created a new dimension to business intelligence termed "spatial intelligence" which, when openly delivered via intranet, democratizes access to geographic and social network data. Geospatial intelligence, based on GIS spatial analysis, has also become a key element for security. Much of this analysis operates on digitised representations of geographic phenomena, whether vector or raster.
Geoprocessing is a GIS operation used to manipulate spatial data. A typical geoprocessing operation takes an input dataset, performs an operation on that dataset, and returns the result of the operation as an output dataset. Common geoprocessing operations include geographic feature overlay, feature selection and analysis, topology processing, raster processing, and data conversion. Geoprocessing allows for the definition, management, and analysis of information used to form decisions.[35]
Many geographic tasks involve the terrain, the shape of the surface of the earth, such as hydrology, earthworks, and biogeography. Thus, terrain data is often a core dataset in a GIS, usually in the form of a raster digital elevation model (DEM) or a triangulated irregular network (TIN). A variety of tools are available in most GIS software for analyzing terrain, often by creating derivative datasets that represent a specific aspect of the surface. Some of the most common include:
Most of these are generated using algorithms that are discrete simplifications of vector calculus. Slope, aspect, and surface curvature in terrain analysis are all derived from neighborhood operations using the elevation values of a cell's adjacent neighbours.[39] Each of these is strongly affected by the level of detail in the terrain data, such as the resolution of a DEM, which should be chosen carefully.[40]
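A minimal sketch of such a neighborhood operation, computing slope and aspect from a toy DEM with NumPy central differences; the elevation grid, 10 m cell size, and aspect convention are assumptions for illustration:

```python
# Finite-difference slope and aspect from a raster DEM.
import numpy as np

dem = np.array([[100., 101., 102., 103.],
                [100., 102., 104., 106.],
                [100., 103., 106., 109.],
                [100., 104., 108., 112.]])
cell = 10.0  # cell size in metres (invented)

# Central differences use each cell's adjacent neighbours.
dz_dy, dz_dx = np.gradient(dem, cell)
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
# One common aspect convention: degrees clockwise from north.
aspect_deg = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360

print(slope_deg.round(1))
```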
Distance is a key part of solving many geographic tasks, usually due to the friction of distance. Thus, a wide variety of analysis tools analyze distance in some form, such as buffers, Voronoi or Thiessen polygons, cost distance analysis, and network analysis.
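For example, a buffer and a distance query might look like the following Python sketch (shapely; the well, road, and 50 m zone are invented, and coordinates are assumed to be in a metre-based projected system):

```python
# Distance-based tools: buffering a point and testing proximity.
from shapely.geometry import Point, LineString

well = Point(100, 100)
protection_zone = well.buffer(50)          # 50 m buffer polygon

road = LineString([(0, 0), (200, 160)])
print(road.distance(well))                 # nearest distance to the road
print(road.intersects(protection_zone))    # does the road enter the zone?
```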
It is difficult to relate wetlands maps to rainfall amounts recorded at different points such as airports, television stations, and schools. A GIS, however, can be used to depict two- and three-dimensional characteristics of the Earth's surface, subsurface, and atmosphere from information points. For example, a GIS can quickly generate a map with isopleth or contour lines that indicate differing amounts of rainfall. Such a map can be thought of as a rainfall contour map. Many sophisticated methods can estimate the characteristics of surfaces from a limited number of point measurements. A two-dimensional contour map created from the surface modeling of rainfall point measurements may be overlaid and analyzed with any other map in a GIS covering the same area. This GIS-derived map can then provide additional information, such as the viability of water power potential as a renewable energy source. Similarly, GIS can be used to compare other renewable energy resources to find the best geographic potential for a region.[41]
Additionally, from a series of three-dimensional points, or a digital elevation model, isopleth lines representing elevation contours can be generated, along with slope analysis, shaded relief, and other elevation products. Watersheds can be easily defined for any given reach by computing all of the areas contiguous and uphill from any given point of interest. Similarly, an expected thalweg, the path surface water would travel in intermittent and permanent streams, can be computed from elevation data in the GIS.
A GIS can recognize and analyze the spatial relationships that exist within digitally stored spatial data. These topological relationships allow complex spatial modelling and analysis to be performed. Topological relationships between geometric entities traditionally include adjacency (what adjoins what), containment (what encloses what), and proximity (how close something is to something else).
Geometric networks are linear networks of objects that can be used to represent interconnected features and to perform special spatial analysis on them. A geometric network is composed of edges, which are connected at junction points, similar to graphs in mathematics and computer science. Just like graphs, networks can have weight and flow assigned to their edges, which can be used to represent various interconnected features more accurately. Geometric networks are often used to model road networks and public utility networks, such as electric, gas, and water networks. Network modeling is also commonly employed in transportation planning, hydrology modeling, and infrastructure modeling.
Dana Tomlin coined the term cartographic modeling in his PhD dissertation (1983); he later used it in the title of his book, Geographic Information Systems and Cartographic Modeling (1990).[42] Cartographic modeling refers to a process where several thematic layers of the same area are produced, processed, and analyzed. Tomlin used raster layers, but the overlay method (see below) can be used more generally. Operations on map layers can be combined into algorithms, and eventually into simulation or optimization models.
The combination of several spatial datasets (points, lines, or polygons) creates a new output vector dataset, visually similar to stacking several maps of the same region. These overlays are similar to mathematical Venn diagram overlays. A union overlay combines the geographic features and attribute tables of both inputs into a single new output. An intersect overlay defines the area where both inputs overlap and retains a set of attribute fields for each. A symmetric difference overlay defines an output area that includes the total area of both inputs except for the overlapping area.
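The three overlay operations can be sketched in a few lines of Python with shapely; the two overlapping squares are invented for illustration:

```python
# Union, intersect, and symmetric difference on two overlapping squares.
from shapely.geometry import box

a = box(0, 0, 10, 10)    # square from (0,0) to (10,10), area 100
b = box(5, 5, 15, 15)    # overlapping square, area 100

print(a.union(b).area)                 # union: 175.0
print(a.intersection(b).area)          # intersect: 25.0
print(a.symmetric_difference(b).area)  # symmetric difference: 150.0
```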
Data extraction is a GIS process similar to vector overlay, though it can be used in either vector or raster data analysis. Rather than combining the properties and features of both datasets, data extraction involves using a "clip" or "mask" to extract the features of one data set that fall within the spatial extent of another dataset.
In raster data analysis, the overlay of datasets is accomplished through a process known as "local operation on multiple rasters" or "map algebra", through a function that combines the values of each raster's matrix. This function may weigh some inputs more than others through use of an "index model" that reflects the influence of various factors upon a geographic phenomenon.
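A toy map-algebra sketch in Python with NumPy, combining three reclassified rasters under an invented index model (the layers, weights, and suitability scores are assumptions):

```python
# Local operation on multiple rasters: same-position cells combined,
# with some inputs weighted more heavily than others.
import numpy as np

slope_score = np.array([[3, 2], [1, 3]])   # 3 = most suitable
soil_score  = np.array([[2, 2], [3, 1]])
road_score  = np.array([[1, 3], [2, 2]])

suitability = 0.5 * slope_score + 0.3 * soil_score + 0.2 * road_score
print(suitability)
```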
Geostatistics is a branch of statistics that deals with field data, spatial data with a continuous index. It provides methods to model spatial correlation and predict values at arbitrary locations (interpolation).
When phenomena are measured, the observation methods dictate the accuracy of any subsequent analysis. Due to the nature of the data (e.g., traffic patterns in an urban environment; weather patterns over the Pacific Ocean), a constant or dynamic degree of precision is always lost in the measurement. This loss of precision is determined by the scale and distribution of the data collection.
To determine the statistical relevance of the analysis, an average is determined so that points (gradients) outside of any immediate measurement can be included to determine their predicted behavior. This is due to the limitations of the applied statistics and data collection methods, and interpolation is required to predict the behavior of particles, points, and locations that are not directly measurable.
Interpolation is the process by which a surface is created, usually a raster dataset, through the input of data collected at a number of sample points. There are several forms of interpolation, each of which treats the data differently depending on the properties of the data set. In comparing interpolation methods, the first consideration should be whether or not the source data will change (exact or approximate). Next is whether the method is subjective, a human interpretation, or objective. Then there is the nature of transitions between points: are they abrupt or gradual? Finally, there is whether a method is global (it uses the entire data set to form the model) or local (an algorithm is repeated for a small section of terrain).
Interpolation is a justified measurement because of the spatial autocorrelation principle, which recognizes that data collected at any position will have great similarity to, or influence on, those locations within its immediate vicinity.
Digital elevation models, triangulated irregular networks, edge-finding algorithms, Thiessen polygons, Fourier analysis, (weighted) moving averages, inverse distance weighting, kriging, spline, and trend surface analysis are all mathematical methods to produce interpolative data.
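As one concrete example of these methods, the following Python sketch performs inverse distance weighting at a single unsampled location; the sample points and power parameter are invented:

```python
# Inverse distance weighting: nearer samples weigh more heavily.
import numpy as np

samples = np.array([(0.0, 0.0, 12.0),    # (x, y, measured value)
                    (10.0, 0.0, 18.0),
                    (0.0, 10.0, 14.0)])
target = np.array([3.0, 4.0])
power = 2.0

d = np.hypot(samples[:, 0] - target[0], samples[:, 1] - target[1])
w = 1.0 / d**power
print(np.sum(w * samples[:, 2]) / np.sum(w))  # weighted estimate
```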
Geocoding is interpolating spatial locations (X,Y coordinates) from street addresses or any other spatially referenced data such as ZIP Codes, parcel lots, and address locations. A reference theme is required to geocode individual addresses, such as a road centerline file with address ranges. The individual address locations have historically been interpolated, or estimated, by examining address ranges along a road segment. These are usually provided in the form of a table or database. The software will then place a dot approximately where that address belongs along the segment of centerline. For example, an address point of 500 will be at the midpoint of a line segment that starts with address 1 and ends with address 1,000. Geocoding can also be applied against actual parcel data, typically from municipal tax maps. In this case, the result of the geocoding will be an actually positioned parcel rather than an interpolated point. This approach is being increasingly used to provide more precise location information.
Reverse geocoding is the process of returning an estimated street address number as it relates to a given coordinate. For example, a user can click on a road centerline theme (thus providing a coordinate) and have information returned that reflects the estimated house number. This house number is interpolated from a range assigned to that road segment. If the user clicks at the midpoint of a segment that starts with address 1 and ends with 100, the returned value will be somewhere near 50. Note that reverse geocoding does not return actual addresses, only estimates of what should be there based on the predetermined range.
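Both directions of this address-range interpolation can be sketched in a few lines of Python, following the 1-to-1,000 example above; the helper functions are hypothetical illustrations, not part of any GIS package:

```python
# Linear interpolation along a road segment's address range.
def geocode(address, lo=1, hi=1000):
    """Fraction along the centerline segment for a house number."""
    return (address - lo) / (hi - lo)

def reverse_geocode(fraction, lo=1, hi=1000):
    """Estimated house number at a fraction along the segment."""
    return round(lo + fraction * (hi - lo))

print(geocode(500))          # ~0.5, i.e. near the segment midpoint
print(reverse_geocode(0.5))  # ~500
```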
Coupled with GIS, multi-criteria decision analysis (MCDA) methods support decision-makers in analysing a set of alternative spatial solutions, such as the most likely ecological habitat for restoration, against multiple criteria, such as vegetation cover or roads. MCDA uses decision rules to aggregate the criteria, which allows the alternative solutions to be ranked or prioritised.[43] GIS MCDA may reduce costs and time involved in identifying potential restoration sites.
GIS or spatial data mining is the application of data mining methods to spatial data. Data mining, the partially automated search for hidden patterns in large databases, offers great potential benefits for applied GIS-based decision making. Typical applications include environmental monitoring. A characteristic of such applications is that spatial correlation between data measurements requires the use of specialized algorithms for more efficient data analysis.[44]
Cartography is the design and production of maps, or visual representations of spatial data. The vast majority of modern cartography is done with the help of computers, usually using GIS, but production of quality cartography is also achieved by importing layers into a design program to refine them. Most GIS software gives the user substantial control over the appearance of the data.
Cartographic work serves two major functions:
First, it produces graphics on the screen or on paper that convey the results of analysis to the people who make decisions about resources. Wall maps and other graphics can be generated, allowing the viewer to visualize and thereby understand the results of analyses or simulations of potential events. Web map servers facilitate distribution of generated maps through web browsers using various implementations of web-based application programming interfaces (AJAX, Java, Flash, etc.).
Second, other database information can be generated for further analysis or use. An example would be a list of all addresses within one mile (1.6 km) of a toxic spill.
An archeochrome is a new way of displaying spatial data. It is a thematic display on a 3D map that is applied to a specific building or a part of a building. It is suited to the visual display of heat-loss data.
Traditional maps are abstractions of the real world, a sampling of important elements portrayed on a sheet of paper with symbols to represent physical objects. People who use maps must interpret these symbols. Topographic maps show the shape of the land surface with contour lines or with shaded relief.
Today, graphic display techniques such as shading based on altitude in a GIS can make relationships among map elements visible, heightening one's ability to extract and analyze information. For example, two types of data were combined in a GIS to produce a perspective view of a portion of San Mateo County, California.
A GIS was used to register and combine the two images to render the three-dimensional perspective view looking down the San Andreas Fault, using the Thematic Mapper image pixels, but shaded using the elevation of the landforms. The GIS display depends on the viewing point of the observer and the time of day of the display, to properly render the shadows created by the sun's rays at that latitude, longitude, and time of day.
In recent years there has been a proliferation of free-to-use and easily accessible mapping software such as the proprietary web applications Google Maps and Bing Maps, as well as the free and open-source alternative OpenStreetMap. These services give the public access to huge amounts of geographic data, perceived by many users to be as trustworthy and usable as professional information.[45] For example, during the COVID-19 pandemic, web maps hosted on dashboards were used to rapidly disseminate case data to the general public.[46]
Some of them, like Google Maps and OpenLayers, expose an application programming interface (API) that enables users to create custom applications. These toolkits commonly offer street maps, aerial/satellite imagery, geocoding, searches, and routing functionality. Web mapping has also uncovered the potential of crowdsourcing geodata in projects like OpenStreetMap, which is a collaborative project to create a free editable map of the world. These mashup projects have proven to provide a high level of value and benefit to end users beyond that possible through traditional geographic information.[47][48]
Web mapping is not without its drawbacks. Web mapping allows for the creation and distribution of maps by people without proper cartographic training.[49]This has led to maps that ignore cartographic conventions and are potentially misleading, with one study finding that more than half of United States state government COVID-19 dashboards did not follow these conventions.[50][51]
Since its origin in the 1960s, GIS has been used in an ever-increasing range of applications, corroborating the widespread importance of location and aided by the continuing reduction in the barriers to adopting geospatial technology. The perhaps hundreds of different uses of GIS can be classified in several ways:
The implementation of a GIS is often driven by jurisdictional (such as a city), purpose, or application requirements. Generally, a GIS implementation may be custom-designed for an organization. Hence, a GIS deployment developed for one application, jurisdiction, enterprise, or purpose may not be necessarily interoperable or compatible with a GIS developed for some other application, jurisdiction, enterprise, or purpose.[62]
GIS is also diverging into location-based services, which allow GPS-enabled mobile devices to display their location in relation to fixed objects (nearest restaurant, gas station, fire hydrant) or mobile objects (friends, children, police car), or to relay their position back to a central server for display or other processing.
GIS is also used in digital marketing and SEO for audience segmentation based on location.[63][64]
Geospatial disaster response uses geospatial data and tools to help emergency responders, land managers, and scientists respond to disasters. Geospatial data can help save lives, reduce damage, and improve communication. Federal authorities like FEMA can use geospatial data to create maps that show the extent of a disaster, the location of people in need, and the location of debris; to create models that estimate the number of people at risk and the amount of damage; to improve communication between emergency responders, land managers, and scientists; to help determine where to allocate resources, such as emergency medical resources or search and rescue teams; and to plan evacuation routes and identify which areas are most at risk.
In the United States, FEMA's Response Geospatial Office (RGO) is responsible for the agency's capture, analysis, and development of GIS products to enhance situational awareness and enable expeditious and effective decision making. The RGO's mission is to support decision makers in understanding the size, scope, and extent of disaster impacts so they can deliver resources to the communities most in need.[67]
The use of digital maps generated by GIS has also influenced the development of an academic field known as spatial humanities.[75]
The Open Geospatial Consortium (OGC) is an international industry consortium of 384 companies, government agencies, universities, and individuals participating in a consensus process to develop publicly available geoprocessing specifications. Open interfaces and protocols defined by OpenGIS Specifications support interoperable solutions that "geo-enable" the Web, wireless and location-based services, and mainstream IT, and empower technology developers to make complex spatial information and services accessible and useful with all kinds of applications. Open Geospatial Consortium protocols include Web Map Service and Web Feature Service.[79]
GIS products are broken down by the OGC into two categories, based on how completely and accurately the software follows the OGC specifications.
Compliant products are software products that comply with OGC's OpenGIS Specifications. When a product has been tested and certified as compliant through the OGC Testing Program, it is registered as "compliant" on the OGC website.
Implementing products are software products that implement OpenGIS Specifications but have not yet passed a compliance test. Compliance tests are not available for all specifications. Developers can register their products as implementing draft or approved specifications, though the OGC reserves the right to review and verify each entry.
The condition of the Earth's surface, atmosphere, and subsurface can be examined by feeding satellite data into a GIS. GIS technology gives researchers the ability to examine the variations in Earth processes over days, months, and years through the use of cartographic visualizations.[80]As an example, the changes in vegetation vigor through a growing season can be animated to determine when drought was most extensive in a particular region. The resulting graphic represents a rough measure of plant health. Working with two variables over time would then allow researchers to detect regional differences in the lag between a decline in rainfall and its effect on vegetation.
GIS technology and the availability of digital data on regional and global scales enable such analyses. The satellite sensor output used to generate a vegetation graphic is produced, for example, by the advanced very-high-resolution radiometer (AVHRR). This sensor system detects the amounts of energy reflected from the Earth's surface across various bands of the spectrum for surface areas of about 1 km2 (0.39 sq mi). The satellite sensor produces images of a particular location on the Earth twice a day. AVHRR and, more recently, the moderate-resolution imaging spectroradiometer (MODIS) are only two of many sensor systems used for Earth surface analysis.
In addition to the integration of time in environmental studies, GIS is also being explored for its ability to track and model the progress of humans throughout their daily routines. A concrete example of progress in this area is the recent release of time-specific population data by the U.S. Census. In this data set, the populations of cities are shown for daytime and evening hours, highlighting the pattern of concentration and dispersion generated by North American commuting patterns. The manipulation and generation of this data would not have been possible without GIS.
Using models to project the data held by a GIS forward in time has enabled planners to test policy decisions using spatial decision support systems.
Tools and technologies emerging from the World Wide Web Consortium's Semantic Web are proving useful for data integration problems in information systems. Correspondingly, such technologies have been proposed as a means to facilitate interoperability and data reuse among GIS applications and also to enable new analysis mechanisms.[81][82][83][84]
Ontologies are a key component of this semantic approach, as they allow a formal, machine-readable specification of the concepts and relationships in a given domain. This in turn allows a GIS to focus on the intended meaning of data rather than its syntax or structure. For example, reasoning that a land cover type classified as deciduous needleleaf trees in one dataset is a specialization or subset of land cover type forest in another, more roughly classified dataset can help a GIS automatically merge the two datasets under the more general land cover classification. Tentative ontologies have been developed in areas related to GIS applications, for example the hydrology ontology[85] developed by the Ordnance Survey in the United Kingdom and the SWEET ontologies[86] developed by NASA's Jet Propulsion Laboratory. Also, simpler ontologies and semantic metadata standards are being proposed by the W3C Geo Incubator Group[87] to represent geospatial data on the web. GeoSPARQL is a standard developed by the Ordnance Survey, United States Geological Survey, Natural Resources Canada, Australia's Commonwealth Scientific and Industrial Research Organisation, and others to support ontology creation and reasoning using well-understood OGC literals (GML, WKT), topological relationships (Simple Features, RCC8, DE-9IM), RDF, and the SPARQL database query protocols.
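The subsumption reasoning described above can be caricatured in a few lines of Python; the tiny is-a hierarchy here is invented, standing in for a formal ontology:

```python
# Walking an is-a hierarchy to decide that one land cover class
# generalizes to another, so two datasets can be merged.
parent = {
    "deciduous needleleaf trees": "forest",
    "forest": "land cover",
}

def is_a(cls, ancestor):
    """True if cls equals ancestor or is a descendant of it."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = parent.get(cls)
    return False

print(is_a("deciduous needleleaf trees", "forest"))  # -> True
```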
Recent research results in this area can be seen in the International Conference on Geospatial Semantics[88] and the Terra Cognita – Directions to the Geospatial Semantic Web[89] workshop at the International Semantic Web Conference.
With the popularization of GIS in decision making, scholars have begun to scrutinize the social and political implications of GIS.[90][91][45] GIS can also be misused to distort reality for individual and political gain.[92][93] It has been argued that the production, distribution, utilization, and representation of geographic information are largely related to the social context, and have the potential to increase citizen trust in government.[94] Other related topics include discussion of copyright, privacy, and censorship. A more optimistic social approach to GIS adoption is to use it as a tool for public participation.
At the end of the 20th century, GIS began to be recognized as a tool that could be used in the classroom.[95][96][97] The benefits of GIS in education seem focused on developing spatial cognition, but there is not enough literature or statistical data to show the concrete scope of the use of GIS in education around the world, although the expansion has been faster in countries whose curricula mention them.[98]: 36
GIS seems to provide many advantages in teaching geography because it allows for analysis based on real geographic data and also helps raise research questions from teachers and students in the classroom. It also contributes to improvement in learning by developing spatial and geographical thinking and, in many cases, student motivation.[98]: 38
Courses in GIS are also offered by educational institutions.[99][100]
GIS has proven to be an organization-wide, enterprise-level, and enduring technology that continues to change how local government operates.[101] Government agencies have adopted GIS technology as a method to better manage the following areas of government organization:
The open data initiative is pushing local government to take advantage of technology such as GIS, as it encompasses the requirements to fit the open data/open government model of transparency.[101] With open data, local government organizations can implement citizen engagement applications and online portals, allowing citizens to see land information, report potholes and signage issues, view and sort parks by assets, view real-time crime rates and utility repairs, and much more.[103][104] The push for open data within government organizations is driving the growth in local government GIS technology spending and database management.
|
https://en.wikipedia.org/wiki/Geographic_information_system
|
A management information system (MIS) is an information system[1] used for decision-making, and for the coordination, control, analysis, and visualization of information in an organization. The study of management information systems involves people, processes, and technology in an organizational context. In other words, it supports the controlling, planning, and decision-making functions at the management level.[2][3]
In a corporate setting, the ultimate goal of using a management information system is to increase the value and profits of the business.[4][5]
While it can be contested that the history of management information systems dates as far back as companies using ledgers to keep track of accounting, the modern history of MIS can be divided into five eras originally identified by Kenneth C. Laudon and Jane Laudon in their seminal textbook Management Information Systems.[6][7]
The first era (mainframe and minicomputer computing) was ruled by IBM and its mainframe computers, for which it supplied both the hardware and software. These computers would often take up whole rooms and require teams to run them. As technology advanced, these computers were able to handle greater capacities and therefore reduce their cost. Smaller, more affordable minicomputers allowed larger businesses to run their own computing centers in-house / on-site / on-premises.
The second era (personal computers) began in 1965 as microprocessors started to compete with mainframes and minicomputers and accelerated the process of decentralizing computing power from large data centers to smaller offices. In the late 1970s, minicomputer technology gave way to personal computers, and relatively low-cost computers were becoming mass-market commodities, allowing businesses to provide their employees access to computing power that ten years earlier would have cost tens of thousands of dollars. This proliferation of computers created a ready market for interconnecting networks and the popularization of the Internet. (The first microprocessor, a four-bit device intended for a programmable calculator, was introduced in 1971, and microprocessor-based systems were not readily available for several years. The MITS Altair 8800 was the first commonly known microprocessor-based system, followed closely by the Apple I and II. It is arguable that the microprocessor-based system did not make significant inroads into minicomputer use until 1979, when VisiCalc prompted record sales of the Apple II on which it ran. The IBM PC introduced in 1981 was more broadly palatable to business, but its limitations gated its ability to challenge minicomputer systems until perhaps the late 1980s to early 1990s.)
The third era (client/server networks) arose as technological complexity increased, costs decreased, and the end-user (now the ordinary employee) required a system to share information with other employees within an enterprise. Computers on a common network shared information on a server. This let thousands and even millions of people access data simultaneously on networks referred to as intranets.
The fourth era (enterprise computing), enabled by high-speed networks, consolidated the original department-specific software applications into integrated software platforms referred to as enterprise software. This new platform tied all aspects of the business enterprise together, offering rich information access encompassing the complete managerial structure.
The fifth era (cloud computing) is the latest, employing networking technology to deliver applications as well as data storage independent of the configuration, location, or nature of the hardware.
The terms management information system (MIS), information management system (IMS), information system (IS), enterprise resource planning (ERP), computer science, electrical computer engineering, and information technology management (IT) are often confused. MIS is a hierarchical subset of information systems. MIS is more organization-focused, narrowing in on leveraging information technology to increase business value. Computer science is more software-focused, dealing with the applications that may be used in MIS. Electrical computer engineering is product-focused, mainly dealing with the hardware architecture behind computer systems. ERP software is a subset of MIS, and IT management refers to the technical management of an IT department, which may include MIS.
A career in MIS focuses on understanding and projecting the practical use of management information systems. It studies the interaction, organization and processes among technology, people and information to solve problems.[8]
While management information systems can be used by any level of management, the decision of which systems to implement generally falls upon the chief information officer (CIO) and chief technology officer (CTO). These officers are generally responsible for the overall technology strategy of an organization, including evaluating how new technology can help their organization. They act as decision-makers in the implementation process of new MIS.
Once decisions have been made, IT directors, including MIS directors, are in charge of the technical implementation of the system. They are also in charge of implementing the policies affecting the MIS (either new specific policies passed down by the CIOs or CTOs or policies that align the new systems with the organization's overall IT policy). It is also their role to ensure the availability of data and network services as well as the security of the data involved by coordinating IT activities.
Upon implementation, the assigned users will have appropriate access to relevant information. It is important to note that not everyone inputting data into an MIS needs to be at the management level. It is common practice to have data entered into an MIS by non-managerial employees, though they rarely have access to the reports and decision support platforms offered by these systems.
The following are types of information systems used to create reports, extract data, and assist in the decision-making processes of middle and operational level managers.
The following are some of the benefits that can be attained using MIS:[12]
Some of the disadvantages of MIS systems:
|
https://en.wikipedia.org/wiki/Management_information_system
|
A Personal Information Agent (PIA) is an individual, business, or organization expressly authorized by another identifiable individual to act in dealings with third persons, businesses, or organizations concerning personally identifiable information (PII).[1] PIA status allows access to information pertaining to an identifiable individual and the records and associated files of that identifiable individual. This normally includes, but is not limited to, financial files, correspondence, memoranda, machine-readable records, and any other documentary material, regardless of physical form or characteristics. Access to these records extends to any copy of any of those things pertaining to that identifiable individual, and includes the right to audit and monitor activities that involve the process for notification and reporting of unauthorized disclosure or PII breaches.
|
https://en.wikipedia.org/wiki/Personal_Information_Agent
|
Real-time business intelligence (RTBI) is a concept describing the process of delivering business intelligence (BI), or information about business operations, as it occurs. Real time means near-zero latency and access to information whenever it is required.[1]
The speed of today's processing systems has allowed typical data warehousing to work in real time. The result is real-time business intelligence. Business transactions as they occur are fed to a real-time BI system that maintains the current state of the enterprise. The RTBI system not only supports the classic strategic functions of data warehousing for deriving information and knowledge from past enterprise activity, but it also provides real-time tactical support to drive enterprise actions that react immediately to events as they occur. As such, it replaces both the classic data warehouse and the enterprise application integration (EAI) functions. Such event-driven processing is a basic tenet of real-time business intelligence.
In this context, "real-time" means a range frommillisecondsto a few seconds (5s) after the business event has occurred. While traditional BI presents historical data for manual analysis, RTBI compares current business events with historical patterns to detect problems or opportunities automatically. This automated analysis capability enables corrective actions to be initiated and/or business rules to be adjusted to optimizebusiness processes.
RTBI is an approach in which up-to-the-minute data is analyzed, either directly from operational sources or by feeding business transactions into a real-time data warehouse and business intelligence system.
All real-time business intelligence systems have some latency, but the goal is to minimize the time from the business event happening to a corrective action or notification being initiated. Analyst Richard Hackathorn describes three types of latency:[2]
Real-time business intelligence technologies are designed to reduce all three latencies to as close to zero as possible, whereas traditional business intelligence only seeks to reduce data latency and does not address analysis latency or action latency since both are governed by manual processes.
Some commentators have introduced the concept of right-time business intelligence, which proposes that information should be delivered just before it is required, and not necessarily in real time.
Real-time business intelligence systems are event-driven, and may use complex event processing, event stream processing, and mashup (web application hybrid) techniques to enable events to be analysed without first being transformed and stored in a database. These in-memory database techniques have the advantage that high rates of events can be monitored, and since data does not have to be written into databases, data latency can be reduced to milliseconds.
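A toy Python sketch of this event-driven style: each incoming event updates an in-memory sliding window, and a pattern match triggers an immediate action without any database write (the event fields, window size, and threshold are invented):

```python
# In-memory event stream processing with a sliding window.
from collections import deque

WINDOW, THRESHOLD = 5, 3
recent_failures = deque(maxlen=WINDOW)   # in-memory state only

def on_event(event):
    recent_failures.append(event["status"] == "FAILED")
    if sum(recent_failures) >= THRESHOLD:
        print("alert: failure spike at event", event["id"])

for i, status in enumerate(["OK", "FAILED", "FAILED", "OK", "FAILED"]):
    on_event({"id": i, "status": status})
```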
An alternative approach to event-driven architectures is to increase the refresh cycle of an existing data warehouse to update the data more frequently. These real-time data warehouse systems can achieve near-real-time update of data, where the data latency typically is in the range from minutes to hours. The analysis of the data is still usually manual, so the total latency is significantly different from that of event-driven architectural approaches.
The latest alternative innovation to "real-time" event-driven and/or "real-time" data warehouse architectures is MSSO technology (Multiple Source Simple Output), which removes the need for the data warehouse and intermediary servers altogether, since it is able to access live data directly from the source (even from multiple, disparate sources). Because live data is accessed directly, without servers in between, it offers the potential for zero-latency, real-time data in the truest sense.
This is sometimes considered a subset ofoperational intelligenceand is also identified withBusiness Activity Monitoring. It allows entire processes (transactions, steps) to be monitored, metrics (latency, completion/failed ratios, etc.) to be viewed, compared with warehoused historic data, and trended in real-time. Advanced implementations allow threshold detection, alerting and providing feedback to the process execution systems themselves, thereby 'closing the loop'.
Technologies that can support real-time business intelligence include data visualization, data federation, enterprise information integration, enterprise application integration, and service-oriented architecture. Complex event processing tools can be used to analyze data streams in real time and either trigger automated actions or alert workers to patterns and trends.
|
https://en.wikipedia.org/wiki/Real-time_business_intelligence
|
Social information processing is "an activity through which collective human actions organize knowledge."[1] It is the creation and processing of information by a group of people. As an academic field, social information processing studies the information processing power of networked social systems.
Typically, computer tools that support networking and collaboration are used.
Although computers are often used to facilitate networking and collaboration, they are not required. For example, the Trictionary in 1982 was entirely paper-and-pen based, relying on neighborhood social networks and libraries. The creation of the Oxford English Dictionary in the 19th century was done largely with the help of anonymous volunteers, organized by help-wanted ads in newspapers and slips of paper sent through the postal mail.
The website for the AAAI 2008 Spring Symposium on Social Information Processing suggested a number of topics and questions for the field.[2]
Social overload corresponds to being exposed to a high volume of information and interaction on the social web. Social overload creates challenges both for social media websites and for their users.[3] Users need to deal with a high volume of information and to make decisions among different social network applications, whereas social network sites try to keep their existing users and make their sites interesting to them. To overcome social overload, social recommender systems have been utilized to engage users in social media websites in such a way that users receive more personalized content via recommendation techniques.[3] Social recommender systems are specific types of recommendation systems designed for social media, utilizing the new sorts of data it brings, such as likes, comments, and tags, to improve the effectiveness of recommendations. Recommendation in social media has several aspects, such as the recommendation of social media content, people, groups, and tags.
Social media lets users provide feedback on content produced by other users of social media websites, by commenting on or liking the content shared by others and by annotating their own content with tags. This newly introduced metadata helps to obtain more effective recommendations for social media content.[3] Social media also makes it possible to extract explicit relationships between users, such as friendship and follower/followee links. This further improves collaborative filtering systems, because users can now judge recommendations based on the people they have relationships with.[3] Studies have shown the effectiveness of recommendation systems that utilize relationships among users on social media, compared with traditional collaborative filtering systems, specifically for movie and book recommendation.[4][5] Another improvement brought by social media to recommender systems is alleviating the cold-start problem for new users.[3]
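As a minimal sketch of this idea (with invented data structures, not the method of the cited studies), a rating for an item can be predicted purely from the ratings of the people a user has explicit social ties to:

```cpp
#include <unordered_map>
#include <vector>

// Illustrative relationship-based filtering: predict a user's rating of an
// item by averaging the ratings given by the user's explicit social ties,
// rather than by globally similar strangers. Equal trust weights assumed.
using Ratings = std::unordered_map<int, double>;            // item id -> rating

double predictFromFriends(int item,
                          const std::vector<int>& friends,  // ids of social ties
                          const std::unordered_map<int, Ratings>& ratingsByUser)
{
    double sum = 0.0, count = 0.0;
    for (int f : friends) {
        auto user = ratingsByUser.find(f);
        if (user == ratingsByUser.end()) continue;
        auto r = user->second.find(item);
        if (r == user->second.end()) continue;              // tie hasn't rated it
        sum += r->second;
        count += 1.0;
    }
    return count > 0.0 ? sum / count : 0.0;                 // 0.0: no social signal
}
```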
Some key application areas of social media content recommendation are blog and blog-post recommendation; multimedia content recommendation, such as YouTube videos; question-and-answer recommendation to askers and answerers on social question-and-answer websites; job recommendation (LinkedIn); news recommendation on social news aggregator sites (such as Digg, Google Reader, and Reddit); and short-message recommendation on microblogs (such as Twitter).[3]
Also known as social matching (a term proposed by Terveen and McDonald), people recommender systems deal with recommending people to people on social media. The aspects that make people recommender systems distinct from traditional recommender systems, and that require special attention, are privacy, trust among users, and reputation.[6] Several factors affect the choice of recommendation techniques for people recommendation on social networking sites (SNS). Those factors relate to the types of relationships among people on social networking sites, such as symmetric vs. asymmetric, ad hoc vs. long-term, and confirmed vs. unconfirmed relationships.[3]
The scope of people recommender systems falls into three categories:[3] recommending familiar people to connect with, recommending people to follow, and recommending strangers. Recommending strangers is seen as being as valuable as recommending familiar people, because it can lead to opportunities such as exchanging ideas, obtaining new opportunities, and increasing one's reputation.
Handling social streams is one of the challenges social recommender systems face.[3] A social stream can be described as the user activity data pooled in the newsfeed of a social media website. Social stream data has unique characteristics, such as rapid flow, variety of data (text-only vs. heterogeneous content), and the need for freshness. These properties, compared with traditional social media data, impose challenges on social recommender systems.
Another challenge in social recommendation is performing cross-domain recommendation, as in traditional recommender systems.[3] The reason is that social media websites in different domains include different information about users, and merging information from different contexts may not lead to useful recommendations. For example, users' favorite recipes on one social media site may not be a reliable source of information for effective job recommendations for them.
Participation of people in online communities generally differs from their participatory behavior in real-world collective contexts. Humans in daily life are used to making use of "social cues" to guide their decisions and actions: for example, if a group of people is looking for a good restaurant for lunch, it is very likely they will choose to enter one that has some customers inside instead of one that is empty (the more crowded restaurant may reflect its popularity and, in consequence, its quality of service). However, in online social environments it is not straightforward to access these sources of information, which are normally logged by the systems but not disclosed to the users.
There are some theories that explain how this social awareness can affect the behavior of people in real-life scenarios. The American philosopher George Herbert Mead states that humans are social creatures, in the sense that people's actions cannot be isolated from the behavior of the whole collective they are part of, because every individual's actions are influenced by larger social practices that act as a general framework for behavior.[7] In his performance framework, the Canadian sociologist Erving Goffman postulates that in everyday social interactions individuals perform their actions by first collecting information from others, in order to know in advance what they may expect from them, and in this way to plan how to behave more effectively.[8]
Just as in the real world, providing social cues in virtual communities can help people better understand the situations they face in these environments, ease their decision-making processes by enabling access to more informed choices, persuade them to participate in the activities that take place there, and help them structure their own schedule of individual and group activities more efficiently.[9]
In this frame of reference, an approach called "social context displays" has been proposed for showing social information (from either real or virtual environments) in digital scenarios. It is based on the use of graphical representations to visualize the presence and activity traces of a group of people, thus providing users with a third-party view of what is happening within the community, i.e., who is actively participating, who is not contributing to the group's efforts, and so on. This social-context-revealing approach has been studied in different scenarios (e.g., IBM video-conference software, and a large community display showing social activity traces in a shared space called NOMATIC*VIZ), and it has been demonstrated that its application can provide users with several benefits, such as giving them more information with which to make better decisions and motivating them to take an active attitude towards managing their self- and group representations within the display through their real-life actions.[9]
Making users' activity traces publicly available for others to access naturally raises concerns about users' rights over the data they generate, about who the final users with access to their information are, and about how they can know and control the applicable privacy policies.[9] There are several perspectives that try to contextualize this privacy issue. One perspective is to see privacy as a trade-off between the degree of invasion of personal space and the benefits the user perceives from the social system by disclosing their online activity traces.[10] Another perspective examines the trade-off between the visibility of people within the social system and their level of privacy, which can be managed at an individual or group level by establishing specific permissions allowing others to access their information. Other authors state that instead of forcing users to set and control privacy settings, social systems might focus on raising users' awareness of who their audiences are, so they can manage their online behavior according to the reactions they expect from those different user groups.[9]
|
https://en.wikipedia.org/wiki/Social_information_processing
|
In the field ofinformation security,user activity monitoring(UAM) oruser activity analysis(UAA) is the monitoring and recording of user actions. UAM captures user actions, including the use of applications, windows opened, system commands executed, checkboxes clicked, text entered/edited, URLs visited and nearly every other on-screen event toprotect databy ensuring that employees and contractors are staying within their assigned tasks, and posing no risk to the organization.
User activity monitoring software can deliver video-like playback of user activity and process the videos into user activity logs that keep step-by-step records of user actions that can be searched and analyzed to investigate any out-of-scope activities.[1]
The need for UAM rose due to the increase in security incidents that directly or indirectly involve user credentials, exposing company information or sensitive files. In 2014, there were 761data breachesin the United States, resulting in over 83 million exposed customer and employee records.[2]With 76% of these breaches resulting from weak or exploited user credentials, UAM has become a significant component ofIT infrastructure.[3]The main populations of users that UAM aims to mitigate risks with are:
Contractors are used in organizations to complete information technology operational tasks. Remote vendors that have access to company data pose risks. Even with no malicious intent, an external user such as a contractor is a major security liability.
70% of regular business users admitted to having access to more data than necessary. Generalized accounts give regular business users access to classified company data.[4]This makesinsider threatsa reality for any business that uses generalized accounts.
Administrator accounts are heavily monitored due to the high-profile nature of their access. However, current log tools can generate “log fatigue” on these admin accounts. Log fatigue is the overwhelming sensation of trying to handle a vast amount of logs on an account as a result of too many user actions. Harmful user actions can easily be overlooked with thousands of user actions being compiled every day.
According to the Verizon Data Breach Incident Report, “The first step in protecting your data is in knowing where it is and who has access to it.”[2]In today's IT environment, “there is a lack of oversight and control over how and who among employees has access to confidential, sensitive information.”[5]This apparent gap is one of many factors that have resulted in a major number of security issues for companies.
Most companies that use UAM usually separate the necessary aspects of UAM into three major components.
Visual forensics involves creating a visual summary of potentially hazardous user activity. Each user action is logged and recorded. Once a user session is completed, UAM has created both a written record and a visual record, whether it be screen captures or video, of exactly what a user has done. This written record differs from that of a SIEM or logging tool because it captures data at a user level, not a system level, providing plain-English logs rather than syslogs (originally created for debugging purposes). These textual logs are paired with the corresponding screen captures or video summaries. Using these corresponding logs and images, the visual forensics component of UAM allows organizations to search for exact user actions in case of a security incident.
In the case of a security threat, i.e. a data breach, visual forensics is used to show exactly what a user did and everything leading up to the incident. Visual forensics can also be used to provide evidence to any law enforcement agency that investigates the intrusion.
User activity alerting serves the purpose of notifying whoever operates the UAM solution of a mishap or misstep concerning company information. Real-time alerting enables the console administrator to be notified the moment an error or intrusion occurs. Alerts are aggregated for each user to provide a user risk profile and threat ranking. Alerting is customizable based on combinations of users, actions, time, location, and access method. Alerts can be triggered by simple actions, such as opening an application or entering a certain keyword or web address. Alerts can also be customized based on user actions within an application, such as deleting or creating a user and executing specific commands.
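A rough sketch of what such customizable alert rules could look like in code; the fields and the example rule are hypothetical and not drawn from any particular UAM product:

```cpp
#include <functional>
#include <string>
#include <vector>

// Illustrative activity alerting: each rule combines conditions on user,
// action, and context; an alert fires when a captured action matches.
struct Action { std::string user, verb, target; int hourOfDay; };

struct Rule {
    std::string name;
    std::function<bool(const Action&)> matches;
};

// Returns the names of every rule the captured action triggers.
std::vector<std::string> checkAction(const Action& a,
                                     const std::vector<Rule>& rules)
{
    std::vector<std::string> fired;
    for (const Rule& r : rules)
        if (r.matches(a)) fired.push_back(r.name);
    return fired;
}

// Example rule: flag any user-deletion command executed outside office hours.
// Rule offHours{"off-hours user deletion", [](const Action& a) {
//     return a.verb == "delete-user" && (a.hourOfDay < 8 || a.hourOfDay > 18);
// }};
```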
User behavior analyticsadd an additional layer of protection that will help security professionals keep an eye on the weakest link in the chain. By monitoring user behavior, with the help of dedicated software that analyzes exactly what the user does during their session, security professionals can attach a risk factor to the specific users and/or groups, and immediately be alerted with a red flag warning when a high-risk user does something that can be interpreted as a high-risk action such as exporting confidential customer information, performing largedatabasequeries that are out of the scope of their role, accessing resources that they shouldn't be accessing and so forth.
UAM collects user data by recording activity by every user on applications,web pagesand internal systems and databases. UAM spans all access levels and access strategies (RDP,SSH,Telnet,ICA, direct console login, etc.). Some UAM solutions pair withCitrixandVMwareenvironments.
UAM solutions transcribe all documented activities into user activity logs. UAM logs match up with video-playbacks of concurrent actions. Some examples of items logged are names of applications run, titles of pages opened, URLs, text (typed, edited, copied/pasted), commands, and scripts.
UAM uses screen-recording technology that captures individual user actions. Each video-like playback is saved and accompanied by a user activity log. Playbacks differ from traditional video playback in that they are produced by screen scraping, the compiling of sequential screen shots into a video-like replay. The user activity logs combined with the video-like playback provide a searchable summary of all user actions. This enables companies not only to read, but also to view, exactly what a particular user did on company systems.
Whether user activity monitoring jeopardizes one's privacy depends on how privacy is defined under different theories. While in "control theory" privacy is defined as the level of control an individual has over his or her personal information, the "unrestricted access theory" defines privacy as the accessibility of one's personal data to others. Under the control theory, some argue that a monitoring system decreases people's control over information and will therefore, regardless of whether the system is actually put into use, lead to a loss of privacy.[6]
Many regulations require a certain level of UAM while others only require logs of activity for audit purposes. UAM meets a variety ofregulatory compliancerequirements (HIPAA,ISO 27001,SOX,PCI, and others). UAM is typically implemented for the purpose of audits and compliance, to serve as a way for companies to make their audits easier and more efficient. An audit information request for information on user activity can be met with UAM. Unlike normal log orSIEMtools, UAM can help speed up an audit process by building the controls necessary to navigate an increasingly complex regulatory environment. The ability to replay user actions provides support for determining the impact on regulated information during security incident response.
UAM has two deployment models: appliance-based monitoring approaches, which use dedicated hardware to conduct monitoring by inspecting network traffic, and software-based monitoring approaches, which use software agents installed on the nodes accessed by users.
More commonly, software-based monitoring requires the installation of an agent on the systems (servers, desktops, VDI servers, terminal servers) used by the users to be monitored. These agents capture user activity and report the information back to a central console for storage and analysis. Such solutions can be deployed quickly, in a phased manner, by targeting high-risk users and systems with sensitive information first, allowing the organization to get up and running rapidly and to expand to new user populations as the business requires.
|
https://en.wikipedia.org/wiki/User_activity_monitoring
|
Abounding interval hierarchy(BIH) is a partitioningdata structuresimilar to that ofbounding volume hierarchiesorkd-trees. Bounding interval hierarchies can be used in high performance (or real-time)ray tracingand may be especially useful for dynamic scenes.
The BIH was first presented under the name SKD-Trees,[1] by Ooi et al., and as BoxTrees,[2] independently invented by Zachmann.
Bounding interval hierarchies (BIH) exhibit many of the properties of bothbounding volume hierarchies(BVH) andkd-trees. Whereas the construction and storage of BIH is comparable to that of BVH, the traversal of BIH resembles that ofkd-trees. Furthermore, BIH are alsobinary treesjust like kd-trees (and their superset,BSP trees). Finally, BIH is axis-aligned as are its ancestors.
Although a more general non-axis-aligned implementation of the BIH should be possible (similar to the BSP tree, which uses unaligned planes), it would almost certainly be less desirable due to decreased numerical stability and an increase in the complexity of ray traversal.
The key feature of the BIH is the storage of 2 planes per node (as opposed to 1 for the kd tree and 6 for an axis alignedbounding boxhierarchy), which allows for overlapping children (just like a BVH), but at the same time featuring an order on the children along one dimension/axis (as it is the case for kd trees).
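The two-planes-per-node idea can be made concrete with a small C++ sketch; the layout below is an illustrative assumption, not the packing used in the original papers:

```cpp
#include <cstdint>

// Hypothetical BIH node layout. An inner node stores two clip planes on one
// axis: the maximum extent of its left child and the minimum extent of its
// right child, so children may overlap (like a BVH) while staying ordered
// along the axis (like a kd-tree).
struct BihNode {
    float    clip[2];   // clip[0]: max of left child, clip[1]: min of right child
    uint32_t axis;      // 0, 1 or 2 (x, y, z); the value 3 marks a leaf here
    uint32_t children;  // index of the first of two adjacent child nodes,
                        // or the start of the object range for a leaf
};
```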
It is also possible to just use the BIH data structure for the construction phase but traverse the tree in a way a traditional axis-aligned bounding box hierarchy does. This enables some simple speed-up optimizations for large ray bundles[3]while keepingmemory/cacheusage low.
General attributes of bounding interval hierarchies (and of related techniques) described in[4] include fast construction, a low memory footprint, and simple, numerically robust traversal.
To construct anyspace partitioningstructure some form ofheuristicis commonly used. For this thesurface area heuristic, commonly used with many partitioning schemes, is a possible candidate. Another, more simplistic heuristic is the "global" heuristic[4]which only requires anaxis-aligned bounding box, rather than the full set of primitives, making it much more suitable for a fast construction.
The general construction scheme for a BIH: calculate the scene bounding box; choose a split plane candidate using a heuristic; sort each object into the left or right child depending on which side of the candidate plane the center of its bounding box lies; store the maximum extent of the left objects and the minimum extent of the right objects as the node's two planes; and recurse into both children until a termination criterion, such as a maximum number of objects per leaf, is met. A code sketch of this scheme is given after the note on heuristics below.
Potential heuristics for the split plane candidate search include the surface area heuristic and the simpler "global" heuristic mentioned above, which derives candidates by recursively subdividing the scene's axis-aligned bounding box and is therefore independent of the actual object distribution.
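Under those assumptions, the construction scheme might be sketched as follows, reusing the hypothetical BihNode layout from above; the spatial-median split and the leaf encoding are simplifications:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Aabb { float min[3], max[3]; };

// Recursive BIH construction sketch. Split candidates come from halving the
// current node's box along its longest axis (a "global"-style heuristic),
// so only min/max comparisons on object boxes are needed -- no clipping.
void build(std::vector<BihNode>& nodes, std::vector<Aabb>& objs,
           int first, int last, Aabb box, int self)
{
    if (last - first <= 2) {            // small leaf
        nodes[self].axis = 3;           // sentinel marking a leaf
        nodes[self].children = first;   // start of object range (count elided)
        return;
    }
    int axis = 0;                       // longest axis of the node's box
    for (int a = 1; a < 3; ++a)
        if (box.max[a] - box.min[a] > box.max[axis] - box.min[axis]) axis = a;
    float split = 0.5f * (box.min[axis] + box.max[axis]);

    // Partition objects by bounding-box center relative to the candidate.
    auto mid = std::partition(objs.begin() + first, objs.begin() + last,
        [&](const Aabb& o) { return o.min[axis] + o.max[axis] < 2.0f * split; });
    int m = int(mid - objs.begin());
    if (m == first || m == last) m = (first + last) / 2;  // avoid an empty child

    // The node's two planes: max extent of the left set, min of the right set.
    float leftMax = -1e30f, rightMin = 1e30f;
    for (int i = first; i < m; ++i) leftMax  = std::max(leftMax,  objs[i].max[axis]);
    for (int i = m; i < last; ++i)  rightMin = std::min(rightMin, objs[i].min[axis]);

    int child = int(nodes.size());
    nodes.push_back({});                // two adjacent child slots
    nodes.push_back({});
    nodes[self] = {{leftMax, rightMin}, uint32_t(axis), uint32_t(child)};

    Aabb lbox = box, rbox = box;        // children inherit a halved box
    lbox.max[axis] = split;
    rbox.min[axis] = split;
    build(nodes, objs, first, m, lbox, child);
    build(nodes, objs, m, last, rbox, child + 1);
}
// Usage: nodes.push_back({}); build(nodes, objs, 0, int(objs.size()), sceneBox, 0);
```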
The traversal phase closely resembles a kd-tree traversal: one has to distinguish four simple cases, where the ray intersects only the left child, intersects only the right child, intersects both children, or intersects neither of them.
For the third case, depending on the ray direction (negative or positive) of the component (x, y or z) equalling the split axis of the current node, the traversal continues first with the left (positive direction) or the right (negative direction) child and the other one is pushed onto astackfor deferred potential traversal.
Traversal continues until a leaf node is found. After intersecting the objects in the leaf, the next traversal element is popped from the stack. If the stack is empty, the nearest intersection of all pierced leaves is returned. If the popped element is entirely beyond the current nearest intersection, its traversal is skipped.
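The basic traversal can be sketched as follows, again using the hypothetical node layout from above; leaf intersection and the handling of rays parallel to the split axis are elided:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

struct Ray { float org[3], dir[3], invDir[3]; };

// BIH traversal sketch covering the four basic cases. tNear/tFar bound the
// active ray segment, as in kd-tree traversal.
void traverse(const std::vector<BihNode>& nodes, const Ray& ray,
              float tNear, float tFar)
{
    struct StackItem { int node; float tNear, tFar; };
    std::vector<StackItem> stack;
    int node = 0;                                  // root

    for (;;) {
        const BihNode& n = nodes[node];
        if (n.axis == 3) {
            // Leaf: intersect the contained objects, shrinking tFar on a hit.
        } else {
            int a = int(n.axis);
            // Parametric distances to the node's two clip planes.
            float dLeft  = (n.clip[0] - ray.org[a]) * ray.invDir[a];
            float dRight = (n.clip[1] - ray.org[a]) * ray.invDir[a];
            int   nearChild = int(n.children), farChild = nearChild + 1;
            float dNear = dLeft, dFar = dRight;
            if (ray.dir[a] < 0.0f) {               // negative direction: right first
                std::swap(nearChild, farChild);
                std::swap(dNear, dFar);
            }
            bool hitNear = tNear <= dNear;         // segment reaches the near child
            bool hitFar  = dFar  <= tFar;          // segment reaches the far child
            if (hitNear && hitFar) {               // case: both children
                stack.push_back({farChild, std::max(tNear, dFar), tFar});
                node = nearChild; tFar = std::min(tFar, dNear);
                continue;
            }
            if (hitNear) { node = nearChild; tFar  = std::min(tFar,  dNear); continue; }
            if (hitFar)  { node = farChild;  tNear = std::max(tNear, dFar);  continue; }
            // case: neither child is hit -- fall through to the stack.
        }
        if (stack.empty()) return;
        node  = stack.back().node;
        tNear = stack.back().tNear;
        tFar  = stack.back().tFar;
        stack.pop_back();
    }
}
```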
It is also possible to add a fifth traversal case, which however requires a slightly more complicated construction phase: by swapping the meanings of the left and right plane of a node, it is possible to cut off empty space on both sides of a node.
This requires an additional bit that must be stored in the node to detect the special case during traversal. Handling it during the traversal phase is simple, as the ray segment merely has to be clipped against both planes of the node before traversal continues.
All operations during the hierarchy construction/sorting of the triangles are min/max operations and comparisons. Thus no triangle clipping has to be done, as is the case with kd-trees, where clipping can become a problem for triangles that only slightly intersect a node. Even if a kd-tree implementation is carefully written, numerical errors can result in a non-detected intersection and thus in rendering errors (holes in the geometry) due to the missed ray-object intersection.
Instead of using two planes per node to separate geometry, it is also possible to use any number of planes to create an n-ary BIH, or to use multiple planes in a standard binary BIH (one and four planes per node were already proposed in[4] and then properly evaluated in[5]) to achieve better object separation.
|
https://en.wikipedia.org/wiki/Bounding_interval_hierarchy
|
Thedemoscene(/ˈdɛmoʊˌsiːn/) is an internationalcomputer artsubculturefocused on producingdemos: self-contained, sometimes extremely small, computer programs that produceaudiovisualpresentations. The purpose of a demo is to show offprogramming, visual art, and musical skills. Demos and other demoscene productions (graphics, music, videos, games) are shared, voted on and released online at festivals known asdemoparties.
The scene started with thehome computerrevolution of the early 1980s, and the subsequent advent ofsoftware cracking.[1]Crackers altered the code ofcomputer gamesto remove copy protection, claiming credit by adding introduction screens of their own ("cracktros"). They soon started competing for the best visual presentation of these additions.[2]Through the making of intros and stand-alone demos, a new community eventually evolved, independent of the gaming[3]: 29–30andsoftware sharingscenes.
Demos are informally classified into several categories, mainly size-restricted intros. The most typical competition categories for intros are the 64K intro and the 4K intro, where the size of the executable file is restricted to 65536 and 4096 bytes, respectively. In other competitions the choice of platform is restricted, for example to 8-bit computers like the Atari 800 or Commodore 64, or to the 16-bit Amiga or Atari ST. Such restrictions provide a challenge for coders, musicians, and graphics artists, to make a device do more than was intended in its original design.
The earliest computer programs that have some resemblance to demos anddemo effectscan be found among the so-calleddisplay hacks. Display hacks predate the demoscene by several decades, with theearliest examplesdating back to the early 1950s.[5]
Demos in the demoscene sense began assoftware crackers' "signatures", that is, crack screens andcrack introsattached to software whosecopy protectionwas removed. The first crack screens appeared on theApple IIin the early 1980s, and they were often nothing but plain text screens crediting the cracker or their group. Gradually, these static screens evolved into increasingly impressive-looking introductions containing animated effects and music. Eventually, many cracker groups started to release intro-like programs separately, without being attached to unlicensed software.[6]These programs were initially known by various names, such aslettersormessages, but they later came to be known asdemos.[citation needed]
In 1980,Atari, Inc.began using a looping demo with visual effects and music to show the features of theAtari 400/800 computersin stores.[7]At the 1985Consumer Electronics Show, Atari showed a demoscene-style demo for its latest 8-bit computers that alternated between a 3D walking robot and a flying spaceship, each with its own music, and animating larger objects than typically seen on those systems; the two sections were separated by the Atari logo.[8]The program was released to the public. Also in 1985, a large, spinning, checkered ball—casting a translucent shadow—was the signature demo of what the hardware was capable of when Commodore'sAmigawas announced.
Simple demo-like music collections were put together on the C64 in 1985 byCharles Deenen, inspired by crack intros, using music taken from games and adding some homemade color graphics.[citation needed]In the following year, the movement now known as the demoscene was born. The Dutch groups 1001 Crew andThe Judges, both Commodore 64-based, are often mentioned[by whom?]among the earliest demo groups. While competing with each other in 1986, they both produced pure demos with original graphics and music involving more than just casual work, and used extensive hardware trickery. At the same time demos from others, such asAntony Crowther, had started circulating onCompunetin the United Kingdom.
The demoscene is mainly a European phenomenon.[9]It is a competition-oriented subculture, with groups and individual artists competing against each other in technical and artistic excellence. Those who achieve excellence are dubbed "elite", while those who do not follow the demoscene's implicit rules are called "lamers"; such rules emphasize creativity over "ripping" (or else using with permission) the works of others, having good contacts within the scene, and showing effort rather than asking for help.[9]Both this competitiveness and the sense of cooperation among demosceners have led to comparisons with the earlierhacker culturein academic computing.[9][10]: 159The demoscene is a closed subculture, which seeks and receives little mainstream public interest.[3]: 4As of 2010[update], the size of the scene was estimated at some 10,000.[11]
In the early days, competition came in the form of setting records, like the number of "bobs" (blitter objects) on the screen per frame, or the number ofDYCP(Different Y Character Position) scrollers on a C64.[citation needed]These days, there are organized competitions, or compos, held atdemoparties, although there have been some online competitions. It has also been common fordiskmagsto have voting-based charts which provide ranking lists for the best coders, graphicians, musicians, demos and other things.
In 2020, Finland added its demoscene to its nationalUNESCOlist ofintangible cultural heritage.[12]It is the first digital subculture to be put on an intangible cultural heritage list.
In 2021, Germany and Poland also added their demoscenes to their national UNESCO lists of intangible cultural heritage,[13][14] followed by the Netherlands in 2023,[15] and by Sweden and France in 2025.[16][17]
Demosceners typically organize in small groups, centered around a coder (programmer), a musician, a graphician (graphics designer) and a swapper (who spreads their own and others' creations by mail).
Groups always have names, and similarly the individual members pick a handle by which they will be addressed in the large community. While the practice of using handles rather than real names is a borrowing from the cracker/warez culture, where it serves to hide the identity of the cracker from law enforcement, in the demoscene (oriented toward legal activities) it mostly serves as a manner of self-expression. Group members tend to self-identify with the group, often extending their handle with their group's name, following the patterns "HandleofGroup" or "Handle/Group".[3]: 31–32
A demoparty is an event where demosceners[18]and other computer enthusiasts gather to take part in competitions, nicknamedcompos,[19]where they present demos (shortaudio-visualpresentations ofcomputer art) and other works such as digital art and music. A typical demoparty is a non-stop event spanning a weekend, providing the visitors a lot of time to socialize. The competing works, at least those in the most important competitions, are usually shown at night, using avideo projectorandloudspeakers.[20]
The most important competition is usually the demo compo.[21] The Assembly is the biggest demoscene party.[22] The Gathering, the world's largest computer party, became more of a gamers' party.[23]
The visitors of a demoparty often bring their own computers to compete and show their works. To this end, most parties provide a large hall with tables, electricity and usually alocal area networkconnected to the Internet. In this respect, many demoparties resembleLAN parties, and many of the largest events also gather gamers and other computer enthusiasts in addition to demosceners. A major difference between a real demoparty and a LAN party is that demosceners typically spend more time socializing (often outside the actual party hall) than in front of their computers.[24]
A64K introis ademowith an executable file size limit of 64kibibytes, or 65,536bytes. This is a traditional limit inherited from the maximum size of aCOM file. Demos traditionally were limited by RAM size, or later by storage size. By the early 1990s, demo sizes grew, so categories were created for limited sizes that forced developers to not simply stream data from storage.
To reduce the file size, 64K intros often useexecutable compressionandprocedural generation, such assound synthesis,mesh generation,procedural textures, andprocedural animation.[42][43]
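As a toy illustration of the procedural approach (not taken from any particular intro), the following C++ program computes a plasma-like texture from a few sine terms at startup, so no image data needs to be stored in the executable:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// Toy procedural texture: a 256x256 plasma pattern built from sine terms.
int main() {
    const int N = 256;
    std::vector<uint8_t> tex(N * N);
    for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x) {
            float v = std::sin(x * 0.098f) + std::sin(y * 0.073f)
                    + std::sin((x + y) * 0.061f)
                    + std::sin(std::sqrt(float(x * x + y * y)) * 0.05f);
            tex[y * N + x] = uint8_t(31.875f * (v + 4.0f));  // [-4,4] -> [0,255]
        }
    std::FILE* f = std::fopen("plasma.pgm", "wb");           // inspectable output
    if (!f) return 1;
    std::fprintf(f, "P5\n%d %d\n255\n", N, N);
    std::fwrite(tex.data(), 1, tex.size(), f);
    return std::fclose(f);
}
```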
fr-08, a 64k PC demo by Farbrausch released at The Party 2000 in Aars, has since been claimed[44] to mark a watershed moment in the popularity of the category. Others include Chaos Theory by Conspiracy (2006), Gaia Machina by Approximate (2012),[45] F — Felix's Workshop by Ctrl-Alt-Test (2012),[46] Fermi paradox by Mercury (2016),[47][48] and Darkness Lay Your Eyes Upon Me by Conspiracy (2016).[48]
Every year, awards in the demoscene celebrate the creativity, technical prowess, and artistic vision of demoscene groups and individuals.
Although demos are a rather obscure form of art, even in traditionally active demoscene countries, the scene has influenced areas such ascomputer games industryandnew media art.[49][50][51]
Many European game programmers, artists, and musicians have come from the demoscene, often cultivating the learned techniques, practices and philosophies in their work. For example, the Finnish companyRemedy Entertainment, known for theMax Payneseries of games, was founded by the PC groupFuture Crew, and most of its employees are former or active Finnish demosceners.[52][53]Sometimes demos even provide direct influence even to game developers that have no demoscene affiliation: for instance,Will Wrightnames demoscene as a major influence on theMaxisgameSpore, which is largely based onprocedural content generation.[54]Similarly, atQuakeConin 2011,John Carmacknoted that he "thinks highly" of people who do 64k intros, as an example of artificial limitations encouraging creative programming.[55]Jerry HolkinsfromPenny Arcadeclaimed to have an "abiding love" for the demoscene, and noted that it is "stuff worth knowing".[56]
Certain forms of computer art have a strong affiliation with the demoscene.Tracker music, for example, originated in the Amiga game industry but was soon heavily dominated by demoscene musicians; producerAdam Fielding[57]claims to have tracker/demoscene roots. Currently, there is a major tracking scene separate from the actual demoscene. A form of static computer graphics where demosceners have traditionally excelled ispixel art; seeartscenefor more information on the related subculture.[citation needed]Origins ofcreative codingtools likeShadertoyandThree.jscan be directly traced back to the scene.[58]
Over the years, desktop computer hardware capabilities have improved by orders of magnitude, and so for most programmers, tight hardware restrictions are no longer a common issue. Nevertheless, demosceners continue to study and experiment with creating impressive effects on limited hardware. Sincehandheld consolesand cellular phones have comparable processing power or capabilities to the desktop platforms of old (such as low resolution screens which require pixel art, or very limited storage and memory for music replay), many demosceners have been able to apply their niche skills to develop games for these platforms, and earn a living doing so.[citation needed]One particular example isAngry Birds, whose lead designer Jaakko Iisalo was an active and well-known demoscener in the 1990s.[59]Unity Technologiesis another notable example; its technical leads on iPhone, Android and Nintendo Switch platforms Renaldas Zioma and Erik Hemming[60][61]are authors ofSuicide Barbie[62]demo for the Playstation Portable console, which was released in 2007.
Some attempts have been made to increase the familiarity of demos as an art form. For example, there have been demo shows, demo galleries and demoscene-related books, sometimes even TV programs introducing the subculture and its works.[63][original research?]
The museum IT-ceum in Linköping, Sweden, has an exhibition about the demoscene.[64]
4players.de reported that "numerous" demo and intro programmers, artists, and musicians were employed in the games industry by 2007. Video game companies with demoscene members on staff includedDigital Illusions,Starbreeze,Ascaron,[65]49Games,Remedy,Techland,Lionhead Studios,[66]Bugbear,Digital Reality,Guerrilla Games, andAkella.[67]
Thetracker musicwhich is part of demoscene culture could be found in many video games of the late 1980s to early 2000s, such asLemmings,Jazz Jackrabbit,One Must Fall: 2097,Crusader: No Remorse, theUnrealseries,Deus Ex,Bejeweled, andUplink.[68]
|
https://en.wikipedia.org/wiki/Demoscene
|
Inreal-time computer graphics,geometry instancingis the practice ofrenderingmultiple copies of the samemeshin a scene at once. This technique is primarily used for objects such as trees, grass, or buildings which can be represented as repeated geometry without appearing unduly repetitive, but may also be used for characters. Although vertex data is duplicated across all instanced meshes, each instance may have other differentiating parameters (such as color, orskeletal animationpose) changed in order to reduce the appearance of repetition.
Starting inDirect3Dversion 9,Microsoftincluded support for geometry instancing. This method improves the potential runtime performance of rendering instanced geometry by explicitly allowing multiple copies of a mesh to be rendered sequentially by specifying the differentiating parameters for each in a separate stream. The same functionality is available inVulkancore, and theOpenGLcore in versions 3.1 and up but may be accessed in some earlier implementations using theEXT_draw_instancedextension.
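As a sketch of how this looks with the OpenGL core API (context, shader, and buffer setup are assumed to exist elsewhere; GLEW is used here purely as a convenient function loader):

```cpp
#include <GL/glew.h>

// Instanced drawing: one mesh, many instances, with a per-instance attribute
// (here a world-space offset) advanced once per instance. glDrawArraysInstanced
// is core since OpenGL 3.1; glVertexAttribDivisor is core since 3.3.
void drawForest(GLuint vao, GLuint offsetBuffer, GLsizei vertexCount,
                GLsizei instanceCount)
{
    glBindVertexArray(vao);

    // Attribute 1 holds a vec3 offset per *instance*, not per vertex.
    glBindBuffer(GL_ARRAY_BUFFER, offsetBuffer);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glVertexAttribDivisor(1, 1);   // advance attribute 1 once per instance

    // One draw call renders every instance; the vertex shader reads the
    // per-instance offset (or gl_InstanceID) to place each copy.
    glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, instanceCount);
}
```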
Geometry instancing in Houdini, Maya or other 3D packages usually involves mapping a static or pre-animated object or geometry to particles or arbitrary points in space, which can then be rendered by almost any offline renderer. Geometry instancing in offline rendering is useful for creating things like swarms of insects, in which each one can be detailed but still behaves in a realistic way that does not have to be determined by the animator. Most packages allow variation of the material or material parameters on a per-instance basis, which helps ensure that instances do not appear to be exact copies of each other. In Houdini, many object-level attributes (such as scale) can also be varied on a per-instance basis. Because instanced geometry in most 3D packages only references the original object, file sizes are kept very small, and changing the original changes all of the instances.
In many offline renderers, such as Pixar'sPhotoRealistic RenderMan, instancing is achieved by using delayed load render procedurals to only load geometry when the bucket containing the instance is actually being rendered. This means that the geometry for all the instances does not have to be in memory at once.
|
https://en.wikipedia.org/wiki/Geometry_instancing
|
Optical feedback(OF) refers to the phenomenon in alaser, where a part of thecoherent emissionlight returns to the laser cavity. The effect is typical for any laser, although the cause of the feedback light varies: reflection from optical components, fiber edges,spectroscopy cellwindows. Operation of thesemiconductor laseris very sensitive to OF due to its very highintrinsic gainandchirping effect, as well as therelaxation oscillation.[1]
Upon return, the feedback light will be delayed with respect to the light in the cavity, and will have different phase, thus either amplifying or suppressing the laser output. While the recombination of the feedback light and light already in the cavity is usually linear, the delay, high gain, chirping, and relaxation oscillation create complex dynamic effects at the output.[1]
In the case of an optical resonator, feedback is accomplished by parallel mirrors, which cause light to be reflected within the lasing cavity, allowing a single photon to be amplified several times by the lasing medium; in the case of the random laser, it is a result of internal scattering within the lasing medium.[2][failed verification]
Optical feedback may result in significant changes in output power for semiconductor lasers, and if unchecked can cause serious damage. It has also been exploited to produce chaotic output power in semiconductor lasers.[3]
|
https://en.wikipedia.org/wiki/Optical_feedback
|
Quartz Composer is a node graph system provided as part of the Xcode development environment in macOS for processing and rendering graphical data. It is capable of producing sophisticated animations for presentations and of creating animated screen savers.[1]
Quartz Composer usesOpenGL(includingGLSL),OpenCL(only in Mac OS X Snow Leopard and later),OpenAL,Core Image,Core Video,JavaScriptand other technologies to create anAPIand a developer tool around a simple visual programming paradigm. Apple has embedded Quartz technologies deeply into theoperating system. Compositions created in Quartz Composer can be played standalone in anyQuickTime-aware application[2](although only on Mac OS X Tiger and later), as a systemScreen Saver,[3]as an iTunes Visualizer, from inside the Quartz Composer application, or can be embedded into aCocoaorCarbonapplication via supplieduser interfacewidgets. While Quartz Composer is included with the iPhone SDK, as of December 2015[update]there is no way of running Quartz Compositions oniOSdevices. Starting in macOS Catalina, the Quartz Composer framework has been deprecated, although it is still present for compatibility.[4]
Quartz programming through Quartz Composer works by implementing and connectingpatches.[5]Similar to routines in traditional programming languages, patches are base processing units. They execute and produce a result. For better performance, patch execution follows alazy evaluationapproach, meaning that patches are only executed when their output is needed. There are three types of patches: Consumers, Processors and External Input patches that can receive and output mouse clicks, scrolls and movements;MIDIand audio; keyboard; or other movements. A collection of patches can be melded into one, called a macro. Macros can be nested and their subroutines also edited.
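The lazy, pull-driven evaluation described above can be modeled roughly as follows; this is an illustrative C++ model with invented names, not Apple's implementation:

```cpp
#include <functional>
#include <vector>

// A patch recomputes its output only when a downstream consumer asks for
// it, so patches that feed no renderer are never executed.
struct Patch {
    std::vector<Patch*> inputs;
    std::function<double(const std::vector<double>&)> compute;
    bool dirty = true;
    double cached = 0.0;

    double value() {                       // pull: evaluate on demand
        if (dirty) {
            std::vector<double> in;
            for (Patch* p : inputs) in.push_back(p->value());
            cached = compute(in);
            dirty = false;
        }
        return cached;
    }
};
```

In this model, a consumer (renderer) patch pulls value() once per frame; any patch whose output is never requested simply never runs, matching the lazy behavior described above.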
To control the order of rendering, each renderer is assigned a layer, indicated in its upper-right corner. Layers are rendered sequentially, lowest to highest. Renderers can be enabled or disabled, essentially turning on or off that particular layer. Turning off unused layers often results in better performance, since fewer upstream patches need to be evaluated.
Some patches can have subpatches, which allows for global parameter changes to just the included subpatches. This is useful for lighting, 3D transformation andGLSLshaders, among other things. Subpatch support is indicated by square corners on a patch, rather than the typical rounded corners.
With Version 3.0, it became possible to turn compositions into Virtual Patches. These allow the user to reuse functionality without having to store duplicate copies in each composition. The Quartz Composer Editor allows the user to save a "flattened" copy (with the virtual patches fully expanded inside), for easy distribution. Version 4.0 extended this functionality even more and automatically includes "flattened" copies of virtual patches for use as a fallback if the desired virtual patch isn't installed on the host system. This greatly simplifies composition distribution.
Network functionality was greatly improved with the release of Leopard. It became possible to transmit data and synchronize over a network interface and it also added support forOpen Sound Controltransmission and reception.
Also new in Version 3.0 was the possibility to write custom patch plugins using anXcodetemplate, and the notion of a "safe mode" where plugins and other unsafe patches fail to load. This prevents malicious compositions from performing dangerous or insecure operations. Custom patches using Apple's Xcode template are always considered unsafe.
It was possible to develop custom patch plugins for Version 2.0, but the API was undocumented and private and was never supported by Apple. Eventually, templates were released to simplify this procedure.[6]
In the Quartz Composer editor, holding the option key while selecting "Preferences..." from the menu adds 3 additional tabs of options for the user to configure. These options include System settings, Editor settings and QuickTime integration settings. Notable options include expanded tooltips, software rendering, and uncapped-framerate rendering. Multisample antialiasing (MSAA) was added as a hidden option in version 4.0, allowing for antialiasing inside the QC editor, though it only works on GPUs that support MSAA.
Data inside QC can be one of several types, including the Boolean, Index, Number and Image types referred to below.
Two additional types were introduced in version 4.0.
Data can usually be converted to other types transparently. In Quartz Composer 3.0, the connections between patches change color to indicate conversions that are taking place. Yellow connections mean no conversion is taking place, Orange indicates a possible loss of data from conversion (Number to Index) and Red indicates a severe conversion; Image to Boolean, for example.
Quartz Composer documents are calledCompositions. Compositions are BinaryProperty Lists(thoughXMLversions are also supported) with afilename extension.qtzand acom.apple.quartz-composer-compositionUTI.[7]Patches, their connections and their input port states are saved in the composition file. Images can be stored inside a composition as well, making for self-contained compositions with embedded graphics. By dragging a movie file into the Quartz Composer editor, a reference to the movie file is created, providing a changing image that can be connected to a renderer.
Compositions also storemetadatasuch as composition author,copyrightand description. The user can also add arbitrary metadata items, if desired.
Many image formats are supported, includingJPEG,JPEG2000,GIF,PNG,TIFF,TGA,OpenEXR,BMP,ICO,PDF,PICT,ICNSand some raw digital camera types.[8]Images are maintained in their native form for as long as possible before rasterizing for display. This means that Quartz Composer will keep vector images as vectors when cropping, scaling, rotating, or translating which allows it to work with very large logical image dimensions without consuming large amounts of memory or processing time. Such functionality is most apparent when working with text-based images, or PDFs.
Version 3.0 added the ability to add annotations to areas of the composition, callednotes. These notes parallelcommentsin other programming languages. Notes can be yellow, red, green, blue, or gray, and can overlap other notes.
In Version 3.0, the concept of Composition Protocols was introduced. Protocols provide a template of required and optional inputs and outputs to qualify conforming compositions for various purposes, and several protocols are available by default. There is an additional protocol that Apple uses in its private API, and one new protocol was added in version 4.0.
There is no officially supported way to add additional protocols to Quartz Composer. However, there are some undocumented methods that may make this possible in the future.[9]
In addition to protocols, compositions can also conform to different runtimes where Quartz Composer is available. In Leopard, there are runtimes for Tiger (32-bit), as well as 32-bit and 64-bit versions of the Leopard Quartz Composer runtime. The editor can also indicate used patches that are unsafe, or unavailable in Tiger to aid in making compatible compositions.
A System-wide Composition Repository is available as of Version 3.0.[10]This allows applications to share and make use of common compositions for effects and processing. It is also possible for applications to query the repository for compositions that match certain criteria, such as protocol conformance.
The Repository is spread across three file system locations: /System/Library/Compositions, /Library/Compositions, and ~/Library/Compositions.
Adding compositions to the repository is as simple as adding the composition file to one of these locations.
It became possible to compare compositions in Quartz Composer 3.0. This feature allows the user to compare inputs, rendered output and graph appearance of any two compositions.
A developer tool called Quartz Composer Visualizer was released with Quartz Composer 3.0 that allows compositions to be rendered across multiple screens on a single machine, or even spanned across several machines and displays.
Support for some Automator actions was added with the release of Leopard.
Pierre-Olivier Latouroriginally developed the predecessor to Quartz Composer under the namePixelShox Studio.[11]
|
https://en.wikipedia.org/wiki/Quartz_Composer
|
Real timewithin the media is a method in which events are portrayed at the same rate at which they occur in the plot. For example, if a film told in real time is two hours long, then the plot of that movie covers two hours of fictional time. If a daily real time comic strip runs for six years, then the characters will be six years older at the end of the strip than they were at the beginning. This technique can be enforced with varying levels of precision. In some stories, every minute ofscreen timeis a minute of fictional time. In other stories, such as the daily comic stripFor Better or For Worse, each day's strip does not necessarily correspond to a new day of fictional time, but each year of the strip does correspond to one year of fictional time.
Real time fiction dates back to the climactic structure of classicalGreek drama.[1]
Often, use ofsplit screensorpicture-in-picturesare used to show events occurring at the same time, or the context in which varioussubplotsare affecting each other. Examples include the television series24and filmsTimecodeandPhone Booth. On-screen clocks are often used to remind the audience of the real time presentation.
In a real time computer game or simulation, events in the game occur at the same rate as the events which are being depicted. For instance, in a real time combat game, in one hour of play the game depicts one hour of combat.
Incomic books, the use of real time is made more complicated by the fact that most serial comics are released on a monthly basis and are traditionally 20 to 30 pages long, making it difficult to tell a story set in real time without overlooking important events from one month to the next. Another explanation is the prevalence of thesuperherogenre in American comics, and theiconicstatus attached to such characters; it is often considered that such mythological, sometimes godlike heroes cannot age in real time without losing the characteristics that make them special.[citation needed]This has led to the common use offloating timelinesin the universes ofMarvel ComicsandDC Comics.
In theInspector Rebusseries of detective novels by Scottish writerIan Rankin, characters age in step with the publication date. Rebus is stated to have been born in 1947; in the 2007 novelExit Musiche reached age 60 and retired.
|
https://en.wikipedia.org/wiki/Real_time_(media)
|
In3D computer graphics,ray tracingis a technique for modelinglight transportfor use in a wide variety ofrenderingalgorithms for generatingdigital images.
On a spectrum ofcomputational costand visual fidelity, ray tracing-based rendering techniques, such asray casting,recursive ray tracing,distribution ray tracing,photon mappingandpath tracing, are generally slower and higher fidelity thanscanline renderingmethods.[1]Thus, ray tracing was first deployed in applications where taking a relatively long time to render could be tolerated, such as stillCGIimages, and film and televisionvisual effects(VFX), but was less suited toreal-timeapplications such asvideo games, wherespeed is criticalin rendering eachframe.[2]
Since 2018, however,hardware acceleration for real-time ray tracinghas become standard on new commercial graphics cards, and graphics APIs have followed suit, allowing developers to use hybrid ray tracing andrasterization-based rendering in games and other real-time applications with a lesser hit to frame render times.
Ray tracing is capable of simulating a variety ofopticaleffects,[3]such asreflection,refraction,soft shadows,scattering,depth of field,motion blur,caustics,ambient occlusionanddispersionphenomena (such aschromatic aberration). It can also be used to trace the path ofsound wavesin a similar fashion to light waves, making it a viable option for more immersive sound design in video games by rendering realisticreverberationandechoes.[4]In fact, any physicalwaveorparticlephenomenon with approximately linear motion can be simulated withray tracing.
Ray tracing-based rendering techniques that involve sampling light over a domain generateimage noiseartifacts that can be addressed by tracing a very large number of rays or usingdenoisingtechniques.
The idea of ray tracing comes from as early as the 16th century, when it was described by Albrecht Dürer, who is credited with its invention.[5] Dürer described multiple techniques for projecting 3-D scenes onto an image plane. Some of these project chosen geometry onto the image plane, as is done with rasterization today. Others determine what geometry is visible along a given ray, as is done with ray tracing.[6][7]
Using a computer for ray tracing to generate shaded pictures was first accomplished byArthur Appelin 1968.[8]Appel used ray tracing for primary visibility (determining the closest surface to the camera at each image point) by tracing a ray through each point to be shaded into the scene to identify the visible surface. The closest surface intersected by the ray was the visible one. This non-recursive ray tracing-based rendering algorithm is today called "ray casting". His algorithm then traced secondary rays to the light source from each point being shaded to determine whether the point was in shadow or not.
Later, in 1971, Goldstein and Nagel ofMAGI (Mathematical Applications Group, Inc.)[9]published "3-D Visual Simulation", wherein ray tracing was used to make shaded pictures of solids. At the ray-surface intersection point found, they computed the surface normal and, knowing the position of the light source, computed the brightness of the pixel on the screen. Their publication describes a short (30 second) film “made using the University of Maryland’s display hardware outfitted with a 16mm camera. The film showed the helicopter and a simple ground level gun emplacement. The helicopter was programmed to undergo a series of maneuvers including turns, take-offs, and landings, etc., until it eventually is shot down and crashed.” ACDC 6600computer was used. MAGI produced an animation video calledMAGI/SynthaVision Samplerin 1974.[10]
Another early instance of ray casting came in 1976, when Scott Roth created a flip book animation inBob Sproull's computer graphics course atCaltech. The scanned pages are shown as a video in the accompanying image. Roth's computer program noted an edge point at a pixel location if the ray intersected a bounded plane different from that of its neighbors. Of course, a ray could intersect multiple planes in space, but only the surface point closest to the camera was noted as visible. The platform was a DECPDP-10, aTektronixstorage-tube display, and a printer which would create an image of the display on rolling thermal paper. Roth extended the framework, introduced the termray castingin the context ofcomputer graphicsandsolid modeling, and in 1982 published his work while at GM Research Labs.[11]
Turner Whittedwas the first to show recursive ray tracing for mirror reflection and for refraction through translucent objects, with an angle determined by the solid's index of refraction, and to use ray tracing foranti-aliasing.[12]Whitted also showed ray traced shadows. He produced a recursive ray-traced film calledThe Compleat Angler[13]in 1979 while an engineer at Bell Labs. Whitted's deeply recursive ray tracing algorithm reframed rendering from being primarily a matter of surface visibility determination to being a matter of light transport. His paper inspired a series of subsequent work by others that includeddistribution ray tracingand finallyunbiasedpath tracing, which provides therendering equationframework that has allowed computer generated imagery to be faithful to reality.
For decades,global illuminationin major films usingcomputer-generated imagerywas approximated with additional lights. Ray tracing-based rendering eventually changed that by enabling physically-based light transport. Early feature films rendered entirely using path tracing includeMonster House(2006),Cloudy with a Chance of Meatballs(2009),[14]andMonsters University(2013).[15]
Optical ray tracing describes a method for producing visual images constructed in3-D computer graphicsenvironments, with more photorealism than eitherray castingorscanline renderingtechniques. It works by tracing a path from an imaginary eye through eachpixelin a virtual screen, and calculating the color of the object visible through it.
Scenes in ray tracing are described mathematically by a programmer or by a visual artist (normally using intermediary tools). Scenes may also incorporate data from images and models captured by means such as digital photography.
Typically, each ray must be tested forintersectionwith some subset of all the objects in the scene. Once the nearest object has been identified, the algorithm will estimate the incominglightat the point of intersection, examine the material properties of the object, and combine this information to calculate the final color of the pixel. Certain illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene.
It may at first seem counterintuitive or "backward" to send raysawayfrom the camera, rather thanintoit (as actual light does in reality), but doing so is many orders of magnitude more efficient. Since the overwhelming majority of light rays from a given light source do not make it directly into the viewer's eye, a "forward" simulation could potentially waste a tremendous amount of computation on light paths that are never recorded.
Therefore, the shortcut taken in ray tracing is to presuppose that a given ray intersects the view frame. After either a maximum number of reflections or a ray traveling a certain distance without intersection, the ray ceases to travel and the pixel's value is updated.
On input we have (in the calculations we use vector normalization and the cross product): the eye position $E$, the target point $T$ that the camera looks at, the field of view angle $\theta$, the numbers $k$ and $m$ of square pixels in the viewport's horizontal and vertical directions, the indices $i$ and $j$ of the current pixel, and an "up" vector $\vec{w}$ fixing the camera roll.
The idea is to find the position of each viewport pixel center $P_{ij}$, which allows us to find the line going from the eye $E$ through that pixel and finally get the ray described by the point $E$ and the vector $\vec{R}_{ij} = P_{ij} - E$ (or its normalisation $\vec{r}_{ij}$). First we need to find the coordinates of the bottom-left viewport pixel $P_{1m}$; every other pixel is then found by shifting along the directions parallel to the viewport (the vectors $\vec{b}_n$ and $\vec{v}_n$) multiplied by the size of the pixel. The formulas below include the distance $d$ between the eye and the viewport; however, this value cancels during ray normalisation to $\vec{r}_{ij}$, so you might as well set $d = 1$ and remove it from the calculations.

Pre-calculations: first find and normalise the viewing vector $\vec{t}$ and the vectors $\vec{b}, \vec{v}$ which are parallel to the viewport (all depicted in the figure above):

$$\vec{t} = T - E, \qquad \vec{b} = \vec{w} \times \vec{t}, \qquad \vec{v} = \vec{t} \times \vec{b},$$
$$\vec{t}_n = \frac{\vec{t}}{\|\vec{t}\|}, \qquad \vec{b}_n = \frac{\vec{b}}{\|\vec{b}\|}, \qquad \vec{v}_n = \frac{\vec{v}}{\|\vec{v}\|}.$$

Note that the viewport center is $C = E + \vec{t}_n d$. Next we calculate the viewport half-sizes $g_x = h_x/2$ and $g_y = h_y/2$, the latter using the inverse aspect ratio $\frac{m-1}{k-1}$:

$$g_x = d \tan\frac{\theta}{2}, \qquad g_y = g_x \frac{m-1}{k-1}.$$

Then we calculate the next-pixel shifting vectors $\vec{q}_x, \vec{q}_y$ along the directions parallel to the viewport ($\vec{b}_n, \vec{v}_n$), and the bottom-left pixel center $\vec{p}_{1m}$:

$$\vec{q}_x = \frac{2 g_x}{k-1}\,\vec{b}_n, \qquad \vec{q}_y = \frac{2 g_y}{m-1}\,\vec{v}_n, \qquad \vec{p}_{1m} = \vec{t}_n d - g_x \vec{b}_n - g_y \vec{v}_n.$$

Calculations: note that $P_{ij} = E + \vec{p}_{ij}$ and the ray $\vec{R}_{ij} = P_{ij} - E = \vec{p}_{ij}$, so

$$\vec{p}_{ij} = \vec{p}_{1m} + \vec{q}_x (i-1) + \vec{q}_y (j-1), \qquad \vec{r}_{ij} = \frac{\vec{p}_{ij}}{\|\vec{p}_{ij}\|}.$$
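Concretely, these formulas translate into only a few lines of code. The following is a minimal NumPy sketch of the construction above; the names (E, T, theta, k, m, the up vector) follow the text, and the loop uses 0-based pixel indices counted from the bottom-left corner rather than the 1-based indices of the formulas:

```python
import numpy as np

def viewport_rays(E, T, theta, k, m, up=np.array([0.0, 1.0, 0.0]), d=1.0):
    """Return an (m, k, 3) array of normalised per-pixel ray directions."""
    t = T - E                          # viewing direction, not yet normalised
    b = np.cross(up, t)                # horizontal viewport direction
    tn = t / np.linalg.norm(t)
    bn = b / np.linalg.norm(b)
    vn = np.cross(tn, bn)              # vertical viewport direction (unit)

    gx = d * np.tan(theta / 2)         # half-width of the viewport
    gy = gx * (m - 1) / (k - 1)        # half-height (inverse aspect ratio)

    qx = (2 * gx / (k - 1)) * bn       # shift to the next pixel column
    qy = (2 * gy / (m - 1)) * vn       # shift to the next pixel row
    p1m = tn * d - gx * bn - gy * vn   # centre of the bottom-left pixel

    rays = np.empty((m, k, 3))
    for j in range(m):
        for i in range(k):
            p = p1m + qx * i + qy * j  # 0-based i, j from the bottom-left
            rays[j, i] = p / np.linalg.norm(p)
    return rays
```

As the text notes, the distance d drops out after normalisation, so d=1.0 is a harmless default.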
In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress. One can think of this "ray" as a stream ofphotonstraveling along the same path. In a perfect vacuum this ray will be a straight line (ignoringrelativistic effects). Any combination of four things might happen with this light ray:absorption,reflection,refractionandfluorescence. A surface may absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. It might also reflect all or part of the light ray, in one or more directions. If the surface has anytransparentortranslucentproperties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of thespectrum(and possibly altering the color). Less commonly, a surface may absorb some portion of the light and fluorescently re-emit the light at a longer wavelength color in a random direction, though this is rare enough that it can be discounted from most rendering applications. Between absorption, reflection, refraction and fluorescence, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray, and refract 50%, since the two would add up to be 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive, reflective and fluorescent properties again affect the progress of the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image.
The idea behind ray casting, the predecessor to recursive ray tracing, is to trace rays from the eye, one per pixel, and find the closest object blocking the path of that ray. Think of an image as a screen-door, with each square in the screen being a pixel. This is then the object the eye sees through that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine theshadingof this object. The simplifying assumption is made that if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3-D computer graphics shading models. One important advantage ray casting offered over olderscanline algorithmswas its ability to easily deal with non-planar surfaces and solids, such asconesandspheres. If a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created by usingsolid modelingtechniques and easily rendered.
In the method of volume ray casting, each ray is traced so that color and/or density can be sampled along the ray and then be combined into a final pixel color.
This is often used when objects cannot be easily represented by explicit surfaces (such as triangles), for example when rendering clouds or 3D medical scans.
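As an illustration, the sampling-and-compositing loop can be sketched in a few lines. The helper below composites front-to-back along one ray; `sample_fn` stands in for whatever returns a color and density at a point (e.g. trilinear interpolation into a 3D scan) and is an assumption of this sketch, not a fixed API. Points are NumPy arrays:

```python
import numpy as np

def volume_ray_cast(origin, direction, sample_fn, t_max, step=0.01):
    """Front-to-back compositing along one ray: a sketch of volume ray
    casting. sample_fn(p) is assumed to return (rgb, density) at point p."""
    color = np.zeros(3)
    transmittance = 1.0
    t = 0.0
    while t < t_max and transmittance > 1e-3:   # stop once nearly opaque
        rgb, density = sample_fn(origin + t * direction)
        alpha = 1.0 - np.exp(-density * step)   # opacity of this segment
        color += transmittance * alpha * np.asarray(rgb)
        transmittance *= 1.0 - alpha
        t += step
    return color
```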
In SDF ray marching, or sphere tracing,[16]each ray is traced in multiple steps to approximate an intersection point between the ray and a surface defined by asigned distance function(SDF). The SDF is evaluated for each iteration in order to be able take as large steps as possible without missing any part of the surface. A threshold is used to cancel further iteration when a point is reached that is close enough to the surface. This method is often used for 3-D fractal rendering.[17]
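A minimal sketch of the marching loop, assuming only that `sdf` is a callable signed distance function over NumPy points (for a sphere of radius r at c, for instance, `lambda p: np.linalg.norm(p - c) - r`):

```python
def sphere_trace(origin, direction, sdf, t_max=100.0, eps=1e-4, max_steps=256):
    """Sphere tracing: advance along the ray by the SDF value, which is
    the largest step guaranteed not to skip over the surface. Returns
    the hit distance, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:          # within the threshold: call it a hit
            return t
        t += d               # safe step: no surface within distance d
        if t > t_max:
            break
    return None
```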
Earlier algorithms traced rays from the eye into the scene until they hit an object, but determined the ray color without recursively tracing more rays. Recursive ray tracing continues the process. When a ray hits a surface, it can generate up to three new kinds of rays: a reflection ray, traced in the mirror-reflection direction; a refraction ray, traveling into or out of transparent material and bent according to its index of refraction; and shadow rays, cast toward each light source to test whether the intersection point is occluded.[18]
These recursive rays add more realism to ray traced images.
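The control flow of such a tracer can be summarized in a short skeleton. In the hypothetical sketch below, `nearest_hit` is a caller-supplied function returning the closest intersection, with its locally shaded color (lights plus shadow rays), an optional secondary reflection ray, and a reflection coefficient; all of these names are illustrative, not a standard API, and a refraction branch would look the same as the reflection branch:

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

Color = Tuple[float, float, float]

@dataclass
class Ray:
    origin: Tuple[float, float, float]
    direction: Tuple[float, float, float]

@dataclass
class Hit:
    color: Color                     # local shading (lights + shadow rays)
    reflected: Optional[Ray] = None  # secondary ray, if the surface reflects
    kr: float = 0.0                  # reflection coefficient

def trace(ray: Ray, nearest_hit: Callable[[Ray], Optional[Hit]],
          background: Color = (0.0, 0.0, 0.0), depth: int = 0,
          max_depth: int = 5) -> Color:
    if depth > max_depth:            # hard recursion limit
        return background
    hit = nearest_hit(ray)
    if hit is None:                  # ray escaped the scene
        return background
    color = hit.color
    if hit.reflected is not None and hit.kr > 0.0:
        # Recurse on the mirror ray and blend in its contribution.
        sub = trace(hit.reflected, nearest_hit, background, depth + 1, max_depth)
        color = tuple(c + hit.kr * s for c, s in zip(color, sub))
    return color
```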
Ray tracing-based rendering's popularity stems from its basis in a realistic simulation oflight transport, as compared to other rendering methods, such asrasterization, which focuses more on the realistic simulation of geometry. Effects such as reflections andshadows, which are difficult to simulate using other algorithms, are a natural result of the ray tracing algorithm. The computational independence of each ray makes ray tracing amenable to a basic level ofparallelization,[20]but the divergence of ray paths makes high utilization under parallelism quite difficult to achieve in practice.[21]
A serious disadvantage of ray tracing is performance (though it can in theory be faster than traditional scanline rendering depending on scene complexity vs. number of pixels on-screen). Until the late 2010s, ray tracing in real time was usually considered impossible on consumer hardware for nontrivial tasks. Scanline algorithms and other algorithms use data coherence to share computations between pixels, while ray tracing normally starts the process anew, treating each eye ray separately. However, this separation offers other advantages, such as the ability to shoot more rays as needed to performspatial anti-aliasingand improve image quality where needed.
Whitted-style recursive ray tracing handles interreflection and optical effects such as refraction, but is not generally photorealistic. Improved realism occurs when the rendering equation is fully evaluated, as the equation conceptually includes every physical effect of light flow. However, fully evaluating it is infeasible given the computing resources required and the limitations on geometric and material modeling fidelity. Path tracing is an algorithm for evaluating the rendering equation and thus gives a higher-fidelity simulation of real-world lighting.
The process of shooting rays from the eye to the light source to render an image is sometimes calledbackwards ray tracing, since it is the opposite direction photons actually travel. However, there is confusion with this terminology. Early ray tracing was always done from the eye, and early researchers such as James Arvo used the termbackwards ray tracingto mean shooting rays from the lights and gathering the results. Therefore, it is clearer to distinguisheye-basedversuslight-basedray tracing.
While the direct illumination is generally best sampled using eye-based ray tracing, certain indirect effects can benefit from rays generated from the lights.Causticsare bright patterns caused by the focusing of light off a wide reflective region onto a narrow area of (near-)diffuse surface. An algorithm that casts rays directly from lights onto reflective objects, tracing their paths to the eye, will better sample this phenomenon. This integration of eye-based and light-based rays is often expressed as bidirectional path tracing, in which paths are traced from both the eye and lights, and the paths subsequently joined by a connecting ray after some length.[22][23]
Photon mappingis another method that uses both light-based and eye-based ray tracing; in an initial pass, energetic photons are traced along rays from the light source so as to compute an estimate of radiant flux as a function of 3-dimensional space (the eponymous photon map itself). In a subsequent pass, rays are traced from the eye into the scene to determine the visible surfaces, and the photon map is used to estimate the illumination at the visible surface points.[24][25]The advantage of photon mapping versus bidirectional path tracing is the ability to achieve significant reuse of photons, reducing computation, at the cost of statistical bias.
An additional problem occurs when light must pass through a very narrow aperture to illuminate the scene (consider a darkened room, with a door slightly ajar leading to a brightly lit room), or a scene in which most points do not have direct line-of-sight to any light source (such as with ceiling-directed light fixtures ortorchieres). In such cases, only a very small subset of paths will transport energy;Metropolis light transportis a method which begins with a random search of the path space, and when energetic paths are found, reuses this information by exploring the nearby space of rays.[26]
To the right is an image showing a simple example of a path of rays recursively generated from the camera (or eye) to the light source using the above algorithm. A diffuse surface reflects light in all directions.
First, a ray is created at an eyepoint and traced through a pixel and into the scene, where it hits a diffuse surface. From that surface the algorithm recursively generates a reflection ray, which is traced through the scene, where it hits another diffuse surface. Finally, another reflection ray is generated and traced through the scene, where it hits the light source and is absorbed. The color of the pixel now depends on the colors of the first and second diffuse surface and the color of the light emitted from the light source. For example, if the light source emitted white light and the two diffuse surfaces were blue, then the resulting color of the pixel is blue.
As a demonstration of the principles involved in ray tracing, consider how one would find the intersection between a ray and a sphere. This is merely the math behind theline–sphere intersectionand the subsequent determination of the colour of the pixel being calculated. There is, of course, far more to the general process of ray tracing, but this demonstrates an example of the algorithms used.
In vector notation, the equation of a sphere with center $\mathbf{c}$ and radius $r$ is

$$\|\mathbf{x} - \mathbf{c}\|^2 = r^2.$$

Any point on a ray starting from point $\mathbf{s}$ with direction $\mathbf{d}$ (here $\mathbf{d}$ is a unit vector) can be written as

$$\mathbf{x} = \mathbf{s} + t\mathbf{d},$$

where $t$ is the distance between $\mathbf{x}$ and $\mathbf{s}$. In our problem, we know $\mathbf{c}$, $r$, $\mathbf{s}$ (e.g. the position of a light source) and $\mathbf{d}$, and we need to find $t$. Therefore, we substitute for $\mathbf{x}$:

$$\|\mathbf{s} + t\mathbf{d} - \mathbf{c}\|^2 = r^2.$$

Let $\mathbf{v} \,{\stackrel{\mathrm{def}}{=}}\, \mathbf{s} - \mathbf{c}$ for simplicity; then

$$t^2 (\mathbf{d} \cdot \mathbf{d}) + 2t \, (\mathbf{v} \cdot \mathbf{d}) + (\mathbf{v} \cdot \mathbf{v}) - r^2 = 0.$$

Knowing that $\mathbf{d}$ is a unit vector allows us this minor simplification:

$$t^2 + 2t \, (\mathbf{v} \cdot \mathbf{d}) + (\mathbf{v} \cdot \mathbf{v}) - r^2 = 0.$$

This quadratic equation has solutions

$$t = -(\mathbf{v} \cdot \mathbf{d}) \pm \sqrt{(\mathbf{v} \cdot \mathbf{d})^2 - (\mathbf{v} \cdot \mathbf{v} - r^2)}.$$

The two values of $t$ found by solving this equation are the two such that $\mathbf{s} + t\mathbf{d}$ are the points where the ray intersects the sphere.

Any value which is negative does not lie on the ray, but rather in the opposite half-line (i.e. the one starting from $\mathbf{s}$ with opposite direction).

If the quantity under the square root (the discriminant) is negative, then the ray does not intersect the sphere.
Let us suppose now that there is at least one positive solution, and let $t$ be the minimal one. In addition, let us suppose that the sphere is the nearest object in our scene intersecting our ray, and that it is made of a reflective material. We need to find in which direction the light ray is reflected. The laws of reflection state that the angle of reflection is equal and opposite to the angle of incidence between the incident ray and the normal to the sphere.

The normal to the sphere is simply

$$\mathbf{n} = \frac{\mathbf{y} - \mathbf{c}}{\|\mathbf{y} - \mathbf{c}\|},$$

where $\mathbf{y} = \mathbf{s} + t\mathbf{d}$ is the intersection point found before. The reflection direction can be found by a reflection of $\mathbf{d}$ with respect to $\mathbf{n}$, that is

$$\mathbf{r} = \mathbf{d} - 2 (\mathbf{n} \cdot \mathbf{d}) \, \mathbf{n}.$$

Thus the reflected ray has equation

$$\mathbf{x} = \mathbf{y} + u \, \mathbf{r}, \qquad u > 0.$$
Now we only need to compute the intersection of the latter ray with ourfield of view, to get the pixel which our reflected light ray will hit. Lastly, this pixel is set to an appropriate color, taking into account how the color of the original light source and the one of the sphere are combined by the reflection.
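The whole worked example fits in a few lines of code. A minimal NumPy sketch following the quadratic and the reflection formula above (function names are illustrative):

```python
import numpy as np

def sphere_hit(s, d, c, r):
    """Nearest intersection of the ray x = s + t*d (d unit-length) with
    the sphere |x - c|^2 = r^2. Returns the distance t, or None."""
    v = s - c
    b = np.dot(v, d)                        # this is v . d
    disc = b * b - (np.dot(v, v) - r * r)   # the discriminant
    if disc < 0.0:
        return None                         # ray misses the sphere
    t = -b - np.sqrt(disc)                  # smaller root first
    if t < 0.0:
        t = -b + np.sqrt(disc)              # ray may start inside the sphere
    return t if t >= 0.0 else None

def reflect(d, n):
    """Mirror-reflect direction d about the unit surface normal n."""
    return d - 2.0 * np.dot(n, d) * n

# Usage: a unit ray from the origin toward +z against a sphere at (0,0,5).
t = sphere_hit(np.zeros(3), np.array([0.0, 0.0, 1.0]),
               np.array([0.0, 0.0, 5.0]), 1.0)
print(t)  # 4.0: the ray enters the sphere one radius before its center
```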
Adaptive depth control means that the renderer stops generating reflected/transmitted rays when the computed intensity becomes less than a certain threshold. There must always be a set maximum depth or else the program would generate an infinite number of rays. But it is not always necessary to go to the maximum depth if the surfaces are not highly reflective. To test for this the ray tracer must compute and keep the product of the global and reflection coefficients as the rays are traced.
Example: let Kr = 0.5 for a set of surfaces. Then from the first surface the maximum contribution is 0.5, for the reflection from the second: 0.5 × 0.5 = 0.25, the third: 0.25 × 0.5 = 0.125, the fourth: 0.125 × 0.5 = 0.0625, the fifth: 0.0625 × 0.5 = 0.03125, etc. In addition we might implement a distance attenuation factor such as $1/D^2$, which would also decrease the intensity contribution.
For a transmitted ray we could do something similar but in that case the distance traveled through the object would cause even faster intensity decrease. As an example of this, Hall & Greenberg found that even for a very reflective scene, using this with a maximum depth of 15 resulted in an average ray tree depth of 1.7.[27]
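In code, this test is just a running product compared against a threshold. A tiny sketch reproducing the numbers from the example above:

```python
def max_contribution(kr_values):
    """Running product of reflection coefficients along one ray-tree path.
    With kr = 0.5 at every bounce this yields 0.5, 0.25, 0.125, ...,
    the numbers in the example above."""
    product = 1.0
    out = []
    for kr in kr_values:
        product *= kr
        out.append(product)
    return out

# Cut off recursion once the product falls below a threshold: with
# kr = 0.5 and a threshold of 0.03, only five bounces are worth tracing.
assert max_contribution([0.5] * 5) == [0.5, 0.25, 0.125, 0.0625, 0.03125]
```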
Enclosing groups of objects in sets of bounding volume hierarchies (BVH) decreases the amount of computation required for ray tracing. A cast ray is first tested for an intersection with the bounding volume, and then if there is an intersection, the volume is recursively divided until the ray hits the object. The best type of bounding volume will be determined by the shape of the underlying object or objects. For example, if the objects are long and thin, then a sphere will enclose mainly empty space compared to a box. It is also easier to generate hierarchical bounding volumes from boxes.
Note that using a hierarchical system like this (assuming it is done carefully) changes the intersection computational time from a linear dependence on the number of objects to something between linear and a logarithmic dependence. This is because, for a perfect case, each intersection test would divide the possibilities by two, and result in a binary tree type structure. Spatial subdivision methods, discussed below, try to achieve this. Furthermore, this acceleration structure makes the ray-tracing computationoutput-sensitive. I.e. the complexity of the ray intersection calculations depends on the number of objects that actually intersect the rays and not (only) on the number of objects in the scene.
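A sketch of such a traversal, assuming axis-aligned bounding boxes: the slab test rejects whole subtrees the ray cannot touch, and the caller supplies the ray-primitive test. The node layout (a dict with 'lo'/'hi' bounds and either 'objects' or 'children') is illustrative, not a standard structure:

```python
import numpy as np

def hits_box(origin, inv_dir, lo, hi):
    """Slab test: does the ray (origin, with precomputed 1/direction)
    enter the axis-aligned box [lo, hi]?"""
    t1 = (lo - origin) * inv_dir
    t2 = (hi - origin) * inv_dir
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    return t_far >= max(t_near, 0.0)

def closest_hit(node, origin, inv_dir, intersect, best=None):
    """Recursive BVH traversal: skip subtrees whose bounds the ray misses,
    so work scales with the objects actually near the ray. `intersect`
    is the caller's ray-primitive test returning a distance or None."""
    if not hits_box(origin, inv_dir, node['lo'], node['hi']):
        return best
    if 'objects' in node:                       # leaf node
        for obj in node['objects']:
            t = intersect(obj)
            if t is not None and (best is None or t < best):
                best = t
        return best
    for child in node['children']:
        best = closest_hit(child, origin, inv_dir, intersect, best)
    return best
```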
Kay & Kajiya give a list of desired properties for hierarchical bounding volumes:
The first implementation of an interactive ray tracer was the LINKS-1 Computer Graphics System built in 1982 at Osaka University's School of Engineering by professors Ohmura Kouichi, Shirakawa Isao and Kawata Toru with 50 students.[citation needed] It was a massively parallel processing computer system with 514 microprocessors (257 Zilog Z8001s and 257 iAPX 86s), used for 3-D computer graphics with high-speed ray tracing. According to the Information Processing Society of Japan: "The core of 3-D image rendering is calculating the luminance of each pixel making up a rendered surface from the given viewpoint, light source, and object position. The LINKS-1 system was developed to realize an image rendering methodology in which each pixel could be parallel processed independently using ray tracing. By developing a new software methodology specifically for high-speed image rendering, LINKS-1 was able to rapidly render highly realistic images." It was used to create an early 3-D planetarium-like video of the heavens made completely with computer graphics. The video was presented at the Fujitsu pavilion at the 1985 International Exposition in Tsukuba.[28] It was the second system to do so after the Evans & Sutherland Digistar in 1982. The LINKS-1 was claimed by its designers to be the world's most powerful computer in 1984.[29]
The next interactive ray tracer, and the first known to have been labeled "real-time" was credited at the 2005SIGGRAPHcomputer graphics conference as being the REMRT/RT tools developed in 1986 byMike Muussfor theBRL-CADsolid modeling system. Initially published in 1987 atUSENIX, the BRL-CAD ray tracer was an early implementation of a parallel network distributed ray tracing system that achieved several frames per second in rendering performance.[30]This performance was attained by means of the highly optimized yet platform independent LIBRT ray tracing engine in BRL-CAD and by using solid implicitCSGgeometry on several shared memory parallel machines over a commodity network. BRL-CAD's ray tracer, including the REMRT/RT tools, continue to be available and developed today asopen sourcesoftware.[31]
Since then, there have been considerable efforts and research towards implementing ray tracing at real-time speeds for a variety of purposes on stand-alone desktop configurations. These purposes include interactive 3-D graphics applications such asdemoscene productions,computer and video games, and image rendering. Some real-time software 3-D engines based on ray tracing have been developed by hobbyistdemo programmerssince the late 1990s.[32]
In 1999 a team from theUniversity of Utah, led by Steven Parker, demonstrated interactive ray tracing live at the 1999 Symposium on Interactive 3D Graphics. They rendered a 35 million sphere model at 512 by 512 pixel resolution, running at approximately 15 frames per second on 60 CPUs.[33]
The Open RT project included a highly optimized software core for ray tracing along with anOpenGL-like API in order to offer an alternative to the currentrasterizationbased approach for interactive 3-D graphics.Ray tracing hardware, such as the experimentalRay Processing Unitdeveloped by Sven Woop at theSaarland University, was designed to accelerate some of the computationally intensive operations of ray tracing.
The idea that video games could ray trace their graphics in real time received media attention in the late 2000s. During that time, a researcher named Daniel Pohl, under the guidance of graphics professor Philipp Slusallek and in cooperation with theErlangen UniversityandSaarland Universityin Germany, equippedQuake IIIandQuake IVwith anenginehe programmed himself, which Saarland University then demonstrated atCeBIT2007.[34]Intel, a patron of Saarland, became impressed enough that it hired Pohl and embarked on a research program dedicated to ray traced graphics, which it saw as justifying increasing the number of its processors' cores.[35]: 99–100[36]On June 12, 2008, Intel demonstrated a special version ofEnemy Territory: Quake Wars, titledQuake Wars: Ray Traced, using ray tracing for rendering, running in basic HD (720p) resolution.ETQWoperated at 14–29 frames per second on a 16-core (4 socket, 4 core) Xeon Tigerton system running at 2.93 GHz.[37]
At SIGGRAPH 2009, Nvidia announcedOptiX, a free API for real-time ray tracing on Nvidia GPUs. The API exposes seven programmable entry points within the ray tracing pipeline, allowing for custom cameras, ray-primitive intersections, shaders, shadowing, etc. This flexibility enables bidirectional path tracing, Metropolis light transport, and many other rendering algorithms that cannot be implemented with tail recursion.[38]OptiX-based renderers are used inAutodeskArnold,AdobeAfterEffects, Bunkspeed Shot,Autodesk Maya,3ds max, and many other renderers.
In 2014, a demo of thePlayStation 4video gameThe Tomorrow Children, developed byQ-GamesandJapan Studio, demonstrated newlightingtechniques developed by Q-Games, notably cascadedvoxelconeray tracing, which simulates lighting in real-time and uses more realisticreflectionsrather thanscreen spacereflections.[39]
Nvidia introduced their GeForce RTX and Quadro RTX GPUs in September 2018, based on the Turing architecture that allows for hardware-accelerated ray tracing. The Nvidia hardware uses a separate functional block, publicly called an "RT core". This unit is somewhat comparable to a texture unit in size, latency, and interface to the processor core. The unit features BVH traversal, compressed BVH node decompression, ray-AABB intersection testing, and ray-triangle intersection testing.[40] The GeForce RTX, in the form of models 2080 and 2080 Ti, became the first consumer-oriented brand of graphics card that can perform ray tracing in real time,[41] and, in November 2018, Electronic Arts' Battlefield V became the first game to take advantage of its ray tracing capabilities, which it achieves via Microsoft's new API, DirectX Raytracing.[42] AMD, which already offered interactive ray tracing on top of OpenCL through its Radeon ProRender,[43][44] unveiled in October 2020 the Radeon RX 6000 series, its second-generation Navi GPUs with support for hardware-accelerated ray tracing, at an online event.[45][46][47][48][49] Subsequent games that render their graphics by such means have appeared since, which has been credited to the improvements in hardware and to efforts to make more APIs and game engines compatible with the technology.[50] Current home gaming consoles implement dedicated ray tracing hardware components in their GPUs for real-time ray tracing effects, beginning with the ninth-generation consoles PlayStation 5, Xbox Series X and Series S.[51][52][53][54][55]
On November 4, 2021, Imagination Technologies announced their IMG CXT GPU with hardware-accelerated ray tracing.[56][57] On January 18, 2022, Samsung announced their Exynos 2200 AP SoC with hardware-accelerated ray tracing.[58] On June 28, 2022, Arm announced their Immortalis-G715 with hardware-accelerated ray tracing.[59] On November 16, 2022, Qualcomm announced their Snapdragon 8 Gen 2 with hardware-accelerated ray tracing.[60][61]
On September 12, 2023, Apple introduced hardware-accelerated ray tracing in its chip designs, beginning with the A17 Pro chip for iPhone 15 Pro models.[62][63] Later the same year, Apple released the M3 family of processors with hardware-enabled ray tracing support.[64] Currently, this technology is accessible across iPhones, iPads, and Mac computers via the Metal API. Apple reports up to a 4x performance increase over previous software-based ray tracing on the phone[63] and up to 2.5x faster ray tracing on M3 chips compared with M1.[64] The hardware implementation includes acceleration structure traversal and dedicated ray-box intersections, and the API supports RayQuery (inline ray tracing) as well as RayPipeline features.[65]
Various complexity results have been proven for certain formulations of the ray tracing problem. In particular, if the decision version of the ray tracing problem is defined as follows[66] (given a light ray's initial position and direction and some fixed point, does the ray eventually reach that point?), then the referenced paper proves the following results:
|
https://en.wikipedia.org/wiki/Real-time_raytracing
|
Incomputer graphics,tessellationis the dividing of datasets ofpolygons(sometimes calledvertex sets) presenting objects in a scene into suitable structures forrendering. Especially forreal-time rendering, data istessellated into triangles, for example inOpenGL 4.0andDirect3D 11.[1][2]
A key advantage of tessellation forrealtime graphicsis that it allows detail to be dynamically added and subtracted from a3D polygon meshand its silhouette edges based on control parameters (often camera distance). In previously leading realtime techniques such asparallax mappingandbump mapping, surface details could be simulated at the pixel level, but silhouette edge detail was fundamentally limited by the quality of the original dataset.[3]
InDirect3D 11pipeline (a part of DirectX 11), thegraphics primitiveis thepatch.[4]Thetessellatorgenerates a triangle-basedtessellationof the patch according to tessellation parameters such as theTessFactor, which controls the degree of fineness of themesh. The tessellation, along withshaderssuch as aPhong shader, allows for producing smoother surfaces than would be generated by the original mesh.[4]By offloading the tessellation process onto theGPUhardware, smoothing can be performed in real time. Tessellation can also be used for implementingsubdivision surfaces,level of detailscaling and finedisplacement mapping.[5]OpenGL 4.0uses a similar pipeline, where tessellation into triangles is controlled by theTessellation Control Shaderand a set of four tessellation parameters.[6]
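The effect of a tessellation factor can be illustrated with uniform midpoint subdivision, where each pass splits one triangle into four. This is only a CPU-side sketch of the idea; real Direct3D 11/OpenGL 4.0 tessellators run on the GPU and also support fractional factors:

```python
def subdivide(tri, levels):
    """Uniform midpoint subdivision: one pass splits a triangle into four,
    so `levels` plays the role of a (integer) tessellation factor."""
    if levels == 0:
        return [tri]
    a, b, c = tri
    mid = lambda p, q: tuple((pi + qi) / 2 for pi, qi in zip(p, q))
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    out = []
    for t in [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]:
        out.extend(subdivide(t, levels - 1))
    return out

# 4**n triangles after n levels of refinement:
assert len(subdivide(((0, 0, 0), (1, 0, 0), (0, 1, 0)), 3)) == 64
```

In a real pipeline the new vertices would then be displaced (e.g. by a displacement map or a Phong/subdivision-surface rule) rather than left on the original flat triangle.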
Incomputer-aided designthe constructed design is represented by aboundary representationtopological model, where analytical 3D surfaces and curves, limited to faces, edges, and vertices, constitute a continuous boundary of a 3D body.
Arbitrary 3D bodies are often too complicated to analyze directly. So they are approximated (tessellated) with ameshof small, easy-to-analyze pieces of 3D volume—usually either irregulartetrahedra, or irregularhexahedra. The mesh is used forfinite element analysis.[citation needed]
The mesh of a surface is usually generated per individual faces and edges (approximated to polylines) so that the original limit vertices are included in the mesh. To ensure that the approximation of the original surface suits the needs of further processing, three basic parameters are usually defined for the surface mesh generator: the maximum allowed distance between the approximating element and the surface (the chordal deviation, or "sag"), the maximum allowed size of an element, and the maximum allowed angle between adjacent elements.
An algorithm generating a mesh is typically controlled by the above three and other parameters. Some types of computer analysis of a constructed design require anadaptive mesh refinement, which is a mesh made finer (using stronger parameters) in regions where the analysis needs more detail.[1][2]
|
https://en.wikipedia.org/wiki/Tessellation_(computer_graphics)
|
Video artis anartform which relies on usingvideotechnology as a visual and audio medium. Video art emerged during the late 1960s as new consumer video technology such asvideo tape recordersbecame available outside corporatebroadcasting. Video art can take many forms: recordings that arebroadcast;installationsviewed in galleries or museums; works either streamed online, or distributed asvideo tapes, or onDVDs; andperformanceswhich may incorporate one or moretelevision sets,video monitors, and projections, displaying live or recorded images and sounds.[1]
Video art is named for the original analogvideo tape, which was the most commonly used recording technology in much of the form's history into the 1990s. With the advent ofdigital recordingequipment, many artists began to explore digital technology as a new way of expression. Video art does not necessarily rely on the conventions that define theatrical cinema. It may not useactors, may contain nodialogue, and may have no discerniblenarrativeorplot. Video art also differs from cinema subcategories such asavant gardecinema,short films, andexperimental film.
Nam June Paik, a Korean-American artist who studied in Germany, is widely regarded as a pioneer in video art.[2][3] In March 1963 Paik showed the Exposition of Music – Electronic Television at the Galerie Parnass in Wuppertal.[4][5] In May 1963 Wolf Vostell showed the installation 6 TV Dé-coll/age at the Smolin Gallery in New York and created the video Sun in your head in Cologne. Originally Sun in your head was made on 16mm film and was transferred to videotape in 1967.[6][7][8]
Video art is often said to have begun when Paik used his new Sony Portapak to shoot footage of Pope Paul VI's procession through New York City in the autumn of 1965.[9] Later that same day, across town in a Greenwich Village cafe, Paik played the tapes and video art was born.
Prior to the introduction of consumer video equipment, moving image production was only available non-commercially via8mm filmand16mm film. After the Portapak's introduction and its subsequent update every few years, many artists began exploring the new technology.
Many of the early prominent video artists were those involved with concurrent movements in conceptual art, performance, and experimental film. These include AmericansVito Acconci,Valie Export,John Baldessari,Peter Campus,Doris Totten Chase,Maureen Connor,Norman Cowie,Dimitri Devyatkin,Frank Gillette,Dan Graham,Gary Hill,Joan Jonas,Bruce Nauman,Nam June Paik,Bill Viola,Shigeko Kubota,Martha Rosler,William Wegman, and many others. There were also those such asSteina and Woody Vasulkawho were interested in the formal qualities of video and employed video synthesizers to create abstract works.Kate Craig,[10]Vera Frenkel[11]andMichael Snow[12]were important to the development of video art in Canada.
Much video art in the medium's heyday experimented formally with the limitations of the video format. For example, American artistPeter Campus'Double Visioncombined the video signals from two SonyPortapaksthrough an electronic mixer, resulting in a distorted and radically dissonant image. Another representative piece,Joan Jonas'Vertical Roll, involved recording previously-recorded material of Jonas dancing while playing the videos back on a television, resulting in a layered and complex representation of mediation.
Much video art in the United States was produced in New York City, withThe Kitchen, founded in 1972 bySteina and Woody Vasulka(and assisted by video directorDimitri DevyatkinandShridhar Bapat), serving as a nexus for many young artists. An early multi-channel video artwork (using several monitors or screens) wasWipe CyclebyIra SchneiderandFrank Gillette.Wipe Cyclewas first exhibited at the Howard Wise Gallery in New York in 1969 as part of an exhibition titled "TV as a Creative Medium". An installation of nine television screens,Wipe Cyclecombined live images of gallery visitors, found footage from commercial television, and shots from pre-recorded tapes. The material was alternated from one monitor to the next in an elaborate choreography.
On the West Coast, at the San Jose State television studios in 1970, Willoughby Sharp began the "Videoviews" series of videotaped dialogues with artists. The "Videoviews" series consists of Sharp's dialogues with Bruce Nauman (1970), Joseph Beuys (1972), Vito Acconci (1973), Chris Burden (1973), Lowell Darling (1974), and Dennis Oppenheim (1974). Also in 1970, Sharp curated "Body Works", an exhibition of video works by Vito Acconci, Terry Fox, Richard Serra, Keith Sonnier, Dennis Oppenheim and William Wegman which was presented at Tom Marioni's Museum of Conceptual Art, San Francisco, California.[citation needed][13]
In Europe, Valie Export's groundbreaking video piece "Facing a Family" (1971) was one of the first instances of television intervention and broadcasting video art. The video, originally broadcast on the Austrian television program "Kontakte" on February 2, 1971,[11] shows a bourgeois Austrian family watching TV while eating dinner, creating a mirroring effect for many members of the audience who were doing the same thing. Export believed the television could complicate the relationship between subject, spectator, and television.[14][15] In the United Kingdom, David Hall's "TV Interruptions" (1971) were transmitted intentionally unannounced and uncredited on Scottish TV, the first artist interventions on British television.
As the prices of editing software decreased, access to these technologies for the general public increased. Video editing software became so readily available that it changed the way artists worked with the medium. Simultaneously, with the arrival of independent television channels in Europe and the emergence of video clips, artists also used the potential of special effects, high-quality images and sophisticated editing (Gary Hill, Bill Viola). Festivals dedicated to video art such as the World Wide Video Festival in The Hague, the Biennale de l'Image in Geneva and Ars Electronica in Linz developed and underlined the importance of creation in this field.
From the beginning of the 1990s, contemporary art exhibitions integrated artists' videos among other works and installations. This was the case at the Venice Biennale (Aperto 93) and at NowHere at the Louisiana Museum, but also in art galleries showing a new generation of artists for whom the arrival of lighter equipment such as Handycams favored a more direct expression. Artists such as Pipilotti Rist, Tony Oursler, Carsten Höller, Cheryl Donegan and Nelson Sullivan were able, like others in the 1960s, to leave their studios easily and film by hand without sophistication, sometimes mixing found images with their own (Douglas Gordon, Pierre Bismuth, Sylvie Fleury, Johan Grimonprez, Claude Closky) and using present but simple post-production.
The presentation of the works was also simplified with the arrival of monitors in exhibition rooms and distribution on VHS. The arrival of this younger generation foreshadowed the feminist and gender issues to come, but also the increasingly hybrid use of different media (transferred Super 8 films, 16mm film, digital editing, TV show excerpts, sounds from different sources, etc.).
At the same time, museums and institutions more specialized in video art were integrating digital technology, such as theZKMin Karlsruhe, directed byPeter Weibel, with numerous thematic exhibitions, or theCentre pour l'Image Contemporainewith its biennial Version (1994-2004) directed bySimon Lamunière.
With the arrival of digital technology and the Internet, some museums have federated their databases such as New Media Art produced by theCentre Georges Pompidouin Paris, theMuseum Ludwigin Cologne and theCentre pour l'Image Contemporaine(center for contemporary images) in Geneva.
By the end of the century, institutions and artists worked on the expanding spectrum of the media: 3-D imagery, interactivity, CD-ROMs, the Internet, digital post-production, etc. Different themes emerged, such as interactivity and nonlinearity. Some artists combined physical and digital techniques, such as Jeffrey Shaw's "Legible City" (1988–91). Others used low-tech interactivity, such as Claude Closky's online "+1" or "Do you want Love or Lust" in 1996, coproduced by the Dia Art Foundation. But these steps began to move away from so-called video art towards new media art and Internet art.
As the available amount of footage and the editing techniques evolved, some artists have also produced complex narrative videos without using any of their own footage: Marco Brambilla's Civilization (2008) is a collage, or a "video mural",[16] that portrays heaven and hell.[17] Johan Grimonprez's Dial H-I-S-T-O-R-Y is a 68-minute interpretation of the Cold War and the role of terrorists, made almost exclusively from original television and film excerpts on hijacking.
More generally, during the first decade of the 2000s, one of the most significant steps in the video art domain was its strong presence in contemporary art exhibitions at the international level. During this period, it was common to see artist videos in group shows, on monitors or as projections. More than a third of the works presented at Art Unlimited (the section of Art Basel dedicated to large-scale works) between 2000 and 2015 were video installations. The same is true for most biennials. A new generation of artists such as Pipilotti Rist, Francis Alys, Kim Sooja, Apichatpong Weerasethakul, Omer Fast, David Claerbout, Sarah Morris and Matthew Barney were presented alongside the previous generations (Roman Signer, Bruce Nauman, Bill Viola, Joan Jonas, John Baldessari).
Some artists have also widened their audience by making movies (Apichatpong Weerasethakul, who won the Palme d'Or at the 2010 Cannes Film Festival) or by curating large public events (Pipilotti Rist's Swiss National Expo02).
In 2003, Kalup Linzy created Conversations Wit De Churen II: All My Churen, a soap opera satire that has been credited with creating the video and performance sub-genre.[18] Although Linzy's work is genre-defying, it has been a major contribution to the medium. Ryan Trecartin, an experimental young video artist, uses color, editing techniques and bizarre acting to portray what The New Yorker calls "a cultural watershed".[19][20]
Video art as a medium can also be combined with other forms of artistic expression such as performance art. This combination can also be referred to as "media and performance art"[21] when artists "break the mold of video and film and broaden the boundaries of art".[21] With the increased ability for artists to obtain video cameras, performance art started being documented and shared with large audiences.[22] Artists such as Marina Abramovic and Ulay experimented with videotaping their performances in the 1970s and the 1980s. In a piece titled "Rest Energy" (1980), Ulay and Marina suspended their weight against each other to hold a drawn bow with an arrow aimed at her heart; Ulay held the arrow, and Marina the bow. The 4-minute-10-second piece was described by Marina as "a performance about complete and total trust".[23]
Other artists who combined video art with performance art used the camera as the audience. Kate Gilmore experimented with the positioning of the camera. In her video "Anything" (2006) she films her performance piece as she constantly tries to reach the camera, which stares down at her. As the 13-minute video goes on, she continues to tie together pieces of furniture while attempting to reach the camera. Gilmore added an element of struggle to her art which is sometimes self-imposed:[24] in her video "My love is an anchor" (2004)[25] she lets her foot dry in cement before attempting to break free on camera.[26] Gilmore is said to have mimicked expression styles from the 1960s and 1970s, with inspirations like Marina Abramovic, as she adds extremism and struggle to her work.[27]
Some artists experimented with space when combining video art and performance art. Ragnar Kjartansson, an Icelandic artist, filmed an entire music video with nine different artists, including himself, each filmed in a different room. All the artists could hear each other through a pair of headphones so that they could play the song together; the piece was titled "The Visitors" (2012).[28]
Some artists, such asJaki IrvineandVictoria Fuhave experimented with combining16 mm film,8 mm filmand video to make use of the potential discontinuity between moving image, musical score and narrator to undermine any sense of linear narrative.[29]
Since 2000, video arts programs have begun to emerge among colleges and universities as a standalone discipline, typically situated in relation to film and older broadcast curricula. Current models found in universities like Northeastern and Syracuse show video arts offering baseline competencies in lighting, editing and camera operation. While these fundamentals can feed into and support existing film or TV production areas, the recent growth of entertainment media through CGI and other special effects situates skills like animation, motion graphics and computer-aided design as upper-level courses in this emerging area.
As the industry continues to evolve, video arts programs are also incorporating elements of interactive media, virtual production, and immersive technologies such as augmented and virtual reality. Many institutions are expanding their curricula to include courses on real-time rendering, AI-assisted content creation, and multi-platform storytelling, reflecting the growing demand for versatile digital artists. Additionally, collaborations with game design, digital marketing, and media studies departments are fostering interdisciplinary approaches that prepare students for diverse career opportunities beyond traditional film and television.
|
https://en.wikipedia.org/wiki/Video_art
|
Avideo display controller(VDC), also called adisplay engineordisplay interface, is anintegrated circuitwhich is the main component in avideo-signal generator, a device responsible for the production of aTVvideo signalin a computing or game system. Some VDCs also generate anaudio signal, but that is not their main function.
VDCs were used in thehome computersof the 1980s and also in some earlyvideo picturesystems.
The VDC is the main component of the video signal generator logic, responsible for generating the timing of video signals such as the horizontal and verticalsynchronization signalsand theblanking intervalsignal. Sometimes other supporting chips were necessary to build a complete system, such asRAMto holdpixeldata,ROMto holdcharacter fonts, or somediscrete logicsuch asshift registers.
Most often the VDC chip is completely integrated in the logic of the main computer system, (itsvideo RAMappears in thememory mapof the main CPU), but sometimes it functions as acoprocessorthat can manipulate the video RAM contents independently.
The difference between a display controller, a graphics accelerator, and a video compression/decompression IC is huge, but, since all of this logic is usually found on the chip of agraphics processing unitand is usually not available separately to the end-customer, there is often much confusion about these very different functional blocks.
GPUs with hardware acceleration became popular during the 1990s, including theS3 ViRGE, theMatrox Mystique, and theVoodoo Graphics; though earlier examples such as theNEC μPD7220had already existed for some time. VDCs often had special hardware for the creation of "sprites", a function that in more modern VDP chips is done with the "Bit Blitter" using the "Bit blit" function.
One example of a typical video display processor is the "VDP2 32-bit background and scroll plane video display processor" of theSega Saturn.
Another example is theLisa(AGA) chip that was used for the improved graphics of the later generationAmigacomputers.
That said, it is not completely clear when a "video chip" is a "video display controller" and when it is a "video display processor". For example, the TMS9918 is sometimes called a "video display controller" and sometimes a "video display processor". In general however a "video display processor" has some power to "process" the contents of the video RAM (filling an area of RAM for example), while a "video display controller" only controls the timing of the video synchronization signals and the access to the video RAM.
Thegraphics processing unit(GPU) goes one step further than the VDP and normally also supports 3D functionality. This is the kind of chip that is used in modern personal computers.
Video display controllers can be divided into several different types, listed here from simplest to most complex:
Examples of video display controllers are:
Video shifters
CRT Controllers
Video interface controllers
Video coprocessors
Note that many early home computers did not use a VDP chip, but built the whole video display controller from a lot ofdiscrete logicchips, (examples are theApple II,PET, andTRS-80). Because these methods are very flexible, video display generators could be very capable (or extremely primitive, depending on the quality of the design), but also needed a lot of components.
Many early systems used some form of an earlyprogrammable logic arrayto create a video system; examples include theZX SpectrumandZX81systems and ElektronikaBK-0010, but there were many others. Early implementations were often very primitive, but later implementations sometimes resulted in fairly advanced video systems, like the one in theSAM Coupé. On the lower end, as in the ZX81, the hardware would only perform electrical functions and the timing and level of the video stream was provided by the microprocessor. As the video data rate was high relative to the processor speed, the computer could only perform actual non-display computations during the retrace period between display frames. This limited performance to at most 25% of overall available CPU cycles.
These systems could thus build a very capable system with relatively few components, but the low transistor count of early programmable logic meant that the capabilities of early PLA-based systems were often less impressive than those using the video interface controllers or video coprocessors that were available at the same time. Later PLA solutions, such as those usingCPLDsorFPGAs, could result in much more advanced video systems, surpassing those built using off-the-shelf components.
An often-used hybrid solution was to use a video interface controller (often theMotorola 6845) as a basis and expand its capabilities with programmable logic or anASIC. An example of such a hybrid solution is the originalVGAcard, that used a 6845 in combination with an ASIC. That is why all current VGA based video systems still use thehardware registersthat were provided by the 6845.
With the advancements made insemiconductor device fabrication, more and more functionality is implemented asintegrated circuits, often licensable assemiconductor intellectual property core(SIP core). Display controllerSystem In Package(SiP) blocks can be found on thedieofGPUs,APUsandSoCs.[citation needed]
They support a variety of interfaces: VGA, DVI, HDMI, DisplayPort, VHDCI, DMS-59 and more. The PHY includes LVDS, Embedded DisplayPort, TMDS, Flat Panel Display Link, OpenLDI and CML.[citation needed] A modern computer monitor may have a built-in LCD controller or OLED controller.[4]
For example, a VGA signal created by the GPU is transported over a VGA cable to the monitor's built-in controller. Both ends of the cable terminate in a VGA connector. Laptops and other mobile computers use different interfaces between the display controller and the display. A display controller usually supports multiple computer display standards.
KMS driveris an example of adevice driverfor display controllers andAMD Eyefinityis a special brand of display controller withmulti-monitorsupport.
RandR (resize and rotate) is a method to configure the screen resolution and refresh rate of each individual output separately and at the same time configure the settings of the windowing system accordingly.
An example for this dichotomy is offered byARM Holdings: they offer SIP core for 3D rendering acceleration and for display controller independently. The former has marketing names such as Mali-200 or Mali-T880 while the latter is available as Mali-DP500, Mali-DP550 and Mali-DP650.[5]
In 1982,NECreleased theNEC μPD7220, one of the most widely used video display controllers in 1980spersonal computers. It was used in theNEC PC-9801,APC III,IBM PC compatibles,DEC Rainbow,Tulip System-1, andEpson QX-10.[6]Intellicensed the design and called it the 82720 graphics display controller.[7]
Previously, graphics cards were also called graphics adapters, and the chips used on these ISA/EISA cards consisted solely of a display controller, as this was the only functionality required to connect a computer to a display. Later cards included ICs to perform calculations related to 2D rendering in parallel with the CPU; these cards were referred to as graphics accelerator cards. Similarly, ICs for 3D rendering eventually followed. Such cards were available with VLB, PCI, and AGP interfaces; modern cards typically use the PCI Express bus, as they require much greater bandwidth than the ISA bus can deliver.
|
https://en.wikipedia.org/wiki/Video_display_controller
|
Adaptive partition schedulersare a relatively new type of partition scheduler, which in turn is a kind ofscheduling algorithm, pioneered with the most recent version of theQNXoperating system. Adaptive partitioning, or AP, allows the real-time system designer to request that a percentage of processing resources be reserved for a particular partition (group of threads and/or processes making up asubsystem). The operating system'spriority-driven pre-emptive schedulerwill behave in the same way that a non-AP system would until the system is overloaded (i.e. system-wide there is more computation to perform than the processor is capable of sustaining over the long term). During overload, the AP scheduler enforces hard limits on total run-time for the subsystems within a partition, as dictated by the allocated percentage of processor bandwidth for the particular partition.
If the system is not overloaded, a partition that is allocated (for example) 10% of the processor bandwidth can, in fact, use more than 10%, as it will borrow from the spare budget of other partitions (but will be required to pay it back later). This is very useful for non-real-time subsystems that experience variable load, since these subsystems can make use of spare budget from hard real-time partitions in order to make more forward progress than they would in a fixed partition scheduler such as ARINC-653, but without impacting the hard real-time subsystems' deadlines.
QNX Neutrino 6.3.2 and newer versions have this feature.
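A toy sketch of the accounting idea (the real QNX algorithm uses a sliding averaging window and budget payback, both deliberately omitted here): guaranteed budgets are only enforced when total demand exceeds the available cycles, otherwise idle budget is effectively lent out.

```python
def schedule_tick(partitions, window_cycles):
    """Toy adaptive-partitioning accounting over one averaging window.
    Each partition has a 'budget' (fraction of the window) and a
    'demand' (cycles it would like to run)."""
    total_demand = sum(p['demand'] for p in partitions)
    if total_demand <= window_cycles:
        # No overload: everyone runs as much as they want, regardless
        # of their nominal budget (spare budget is borrowed freely).
        return {p['name']: p['demand'] for p in partitions}
    # Overload: cap each partition at its guaranteed share.
    return {p['name']: min(p['demand'], round(p['budget'] * window_cycles))
            for p in partitions}

parts = [{'name': 'control', 'budget': 0.7, 'demand': 900},
         {'name': 'ui',      'budget': 0.3, 'demand': 600}]
print(schedule_tick(parts, 1000))   # overload -> {'control': 700, 'ui': 300}
```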
|
https://en.wikipedia.org/wiki/Adaptive_partition_scheduler
|
This is a list ofreal-time operating systems(RTOSs). This is anoperating systemin which the time taken to process an input stimulus is less than the time lapsed until the next input stimulus of the same type.
|
https://en.wikipedia.org/wiki/Comparison_of_real-time_operating_systems
|
DO-178B, Software Considerations in Airborne Systems and Equipment Certificationis a guideline dealing with the safety ofsafety-criticalsoftware used in certain airborne systems. It was jointly developed by the safety-critical working group RTCA SC-167 of theRadio Technical Commission for Aeronautics(RTCA) and WG-12 of theEuropean Organisation for Civil Aviation Equipment(EUROCAE). RTCA published the document asRTCA/DO-178B, while EUROCAE published the document asED-12B. Although technically aguideline, it was ade factostandard for developingavionics softwaresystems until it was replaced in 2012 byDO-178C.
TheFederal Aviation Administration(FAA) applies DO-178B as the document it uses for guidance to determine if the software will perform reliably in an airborne environment,[1]when specified by theTechnical Standard Order(TSO) for which certification is sought. In the United States, the introduction of TSOs into the airworthiness certification process, and by extension DO-178B, is explicitly established in Title 14: Aeronautics and Space of theCode of Federal Regulations(CFR), also known as theFederal Aviation Regulations, Part 21, Subpart O.
TheSoftware Level, also termed theDesign Assurance Level(DAL) orItem Development Assurance Level(IDAL) as defined inARP4754(DO-178Conly mentions IDAL as synonymous with Software Level[2]), is determined from thesafety assessment processandhazard analysisby examining the effects of a failure condition in the system. The failure conditions are categorized by their effects on the aircraft, crew, and passengers.
DO-178B alone is not intended to guarantee software safety aspects. Safety attributes in the design, and implemented as functionality, must receive additional mandatory system safety tasks to drive and show objective evidence of meeting explicit safety requirements. Typically, IEEE STD-1228-1994 Software Safety Plans are allocated, and software safety analysis tasks are accomplished in sequential steps (requirements analysis, top-level design analysis, detailed design analysis, code-level analysis, test analysis and change analysis). These software safety tasks and artifacts are integral supporting parts of the process for hazard severity and DAL determination, to be documented in system safety assessments (SSA). The certification authorities require, and DO-178B specifies, that the correct DAL be established using these comprehensive analysis methods when establishing the software level A-E. Any software that commands, controls, or monitors safety-critical functions should receive the highest DAL, Level A. The software safety analyses drive the system safety assessments, which determine the DAL, which in turn drives the appropriate level of rigor in DO-178B. The system safety assessments, combined with methods such as SAE ARP 4754A, determine the post-mitigation DAL and may allow reduction of the DO-178B software level objectives to be satisfied if redundancy, design safety features and other architectural forms of hazard mitigation are in requirements driven by the safety analyses. Therefore, DO-178B's central theme is design assurance and verification after the prerequisite safety requirements have been established.
The number of objectives to be satisfied (eventually with independence) is determined by the software level A-E. The phrase "with independence" refers to a separation of responsibilities where the objectivity of the verification and validation processes is ensured by virtue of their "independence" from the software development team. For objectives that must be satisfied with independence, the person verifying the item (such as a requirement or source code) may not be the person who authored the item and this separation must be clearly documented.[3]In some cases, an automated tool may be equivalent to independence.[4]However, the tool itself must then be qualified if it substitutes for human review.
Processes are intended to support the objectives, according to the software level (A through D—Level E was outside the purview of DO-178B). Processes are described as abstract areas of work in DO-178B, and it is up to the planners of a real project to define and document the specifics of how a process will be carried out. On a real project, the actual activities that will be done in the context of a process must be shown to support the objectives. These activities are defined by the project planners as part of the Planning process.
This objective-based nature of DO-178B allows a great deal of flexibility in regard to following different styles ofsoftware life cycle. Once an activity within a process has been defined, it is generally expected that the project respect that documented activity within its process. Furthermore, processes (and their concrete activities) must have well defined entry and exit criteria, according to DO-178B, and a project must show that it is respecting those criteria as it performs the activities in the process.
The flexible nature of DO-178B's processes and entry/exit criteria make it difficult to implement the first time, because these aspects are abstract and there is no "base set" of activities from which to work. The intention of DO-178B was not to be prescriptive. There are many possible and acceptable ways for a real project to define these aspects. This can be difficult the first time a company attempts to develop a civil avionics system under this standard, and has created a niche market for DO-178B training and consulting.
For a generic DO-178B-based process, a visual summary is available, including the Stages of Involvement (SOIs) defined by the FAA in its "Guidance and Job Aids for Software and Complex Electronic Hardware".
System requirements are typically input to the entire project.
The last three documents (standards) are not required for software level D.
DO-178B is not intended as a software development standard; it is software assurance using a set of tasks to meet objectives and levels of rigor.
The development process output documents:
Traceability from system requirements to all source code or executable object code is typically required (depending on software level).
Typically used software development process:
Document outputs made by this process:
Analysis of all code and traceability from tests and results to all requirements is typically required (depending on software level).
This process typically also involves:
Other names for tests performed in this process can be:
Documents maintained by the configuration management process:
This process handles problem reports, changes and related activities. The configuration management process typically provides archive and revision identification of:
Output documents from the quality assurance process:
This process performs reviews and audits to show compliance with DO-178B. The interface to the certification authority is also handled by the quality assurance process.
Typically a Designated Engineering Representative (DER) reviews technical data as part of the submission to the FAA for approval.
Software can automate, assist, or otherwise help in the DO-178B processes. All tools used for DO-178B development must be part of the certification process. Tools generating embedded code are qualified as development tools, with the same constraints as the embedded code. Tools used to verify the code (simulators, test execution tools, coverage tools, reporting tools, etc.) must be qualified as verification tools, a much lighter process consisting of comprehensive black-box testing of the tool.
A third-party tool can be qualified as a verification tool, but development tools must have been developed following the DO-178 process. Companies providing these kinds of tools as COTS are subject to audits from the certification authorities, to which they give complete access to source code, specifications, and all certification artifacts.
Outside of this scope, the output of any tool used must be manually verified by humans.
Requirements traceability is concerned with documenting the life of a requirement. It should be possible to trace back to the origin of each requirement, and every change made to the requirement should therefore be documented in order to achieve traceability. Even the use of the requirement after the implemented features have been deployed should be traceable.
VDC Research notes that DO-178B has become "somewhat antiquated" in that it is not adapting well to the needs and preferences of today's engineers. In the same report, they also note thatDO-178Cseems well-poised to address this issue.[citation needed]
|
https://en.wikipedia.org/wiki/DO-178B
|
In computing, firmware is software that provides low-level control of computing device hardware.
For a relatively simple device, firmware may perform all control, monitoring and data manipulation functionality.
For a more complex device, firmware may provide relatively low-level control as well as hardware abstraction services to higher-level software such as an operating system.
Firmware is found in a wide range of computing devices including personal computers, smartphones, home appliances, vehicles, computer peripherals and in many of the integrated circuits inside each of these larger systems.
Firmware is stored in non-volatile memory – either read-only memory (ROM) or programmable memory such as EPROM, EEPROM, or flash. Changing a device's firmware stored in ROM requires physically replacing the memory chip – although some chips are not designed to be removed after manufacture. Programmable firmware memory can be reprogrammed via a procedure sometimes called flashing.[2]
Common reasons for changing firmware include fixing bugs and adding features.
Ascher Opler used the term firmware in a 1967 Datamation article, as an intermediary term between hardware and software. Opler projected that fourth-generation computer systems would have a writable control store (a small specialized high-speed memory) into which microcode firmware would be loaded. Many software functions would be moved to microcode, and instruction sets could be customized, with different firmware loaded for different instruction sets.[3]
As computers began to increase in complexity, it became clear that various programs needed to first be initiated and run to provide a consistent environment necessary for running more complex programs at the user's discretion. This required programming the computer to run those programs automatically. Furthermore, as companies, universities, and marketers wanted to sell computers to laypeople with little technical knowledge, greater automation became necessary to allow a lay-user to easily run programs for practical purposes. This gave rise to a kind of software that a user would not consciously run, and it led to software that a lay user wouldn't even know about.[4]
As originally used, firmware contrasted with hardware (the CPU itself) and software (normal instructions executing on a CPU). It was not composed of CPU machine instructions, but of lower-level microcode involved in the implementation of machine instructions. It existed on the boundary between hardware and software; thus the name firmware. Over time, popular usage extended the word firmware to denote any computer program that is tightly linked to hardware, including BIOS on PCs, boot firmware on smartphones, computer peripherals, or the control systems on simple consumer electronic devices such as microwave ovens and remote controls.
In some respects, the various firmware components are as important as the operating system in a working computer. However, unlike most modern operating systems, firmware rarely has a well-evolved automatic mechanism for updating itself to fix functionality issues detected after shipping the unit.
A computer's firmware may be manually updated by a user via a small utility program. In contrast, firmware in mass storage devices (hard-disk drives, optical disc drives, flash memory storage such as solid-state drives) is less frequently updated, even when flash memory (rather than ROM or EEPROM) storage is used for the firmware.
Most computer peripherals are themselves special-purpose computers. Devices such as printers, scanners, webcams, and USB flash drives have internally stored firmware; some devices may also permit field upgrading of their firmware. For modern simpler devices, such as USB keyboards, USB mice and USB sound cards, the trend is to store the firmware in on-chip memory in the device's microcontroller, as opposed to storing it in a separate EEPROM chip.
Examples of computer firmware include:
Consumer appliances like gaming consoles, digital cameras and portable music players support firmware upgrades. Some companies use firmware updates to add new playable file formats (codecs). Other features that may change with firmware updates include the GUI or even the battery life. Smartphones have a firmware over-the-air upgrade capability for adding new features and patching security issues.
Since 1996, most automobiles have employed an on-board computer and various sensors to detect mechanical problems. As of 2010, modern vehicles also employ computer-controlled anti-lock braking systems (ABS) and computer-operated transmission control units (TCUs). The driver can also get in-dash information while driving in this manner, such as real-time fuel economy and tire pressure readings. Local dealers can update most vehicle firmware.
Other firmware applications include:
Flashing[6] is a process that involves overwriting the existing firmware or data, contained in an EEPROM or flash memory module present in an electronic device, with new data.[6] This can be done to upgrade a device[7] or to change the provider of a service associated with the function of the device, such as changing from one mobile phone service provider to another or installing a new operating system. If firmware is upgradable, it is often done via a program from the provider, and will often allow the old firmware to be saved before upgrading so it can be reverted to if the process fails, or if the newer version performs worse. Free software replacements for vendor flashing tools have been developed, such as Flashrom.
Sometimes, third parties develop an unofficial new or modified ("aftermarket") version of firmware to provide new features or to unlock hidden functionality; this is referred to as custom firmware. An example is Rockbox as a firmware replacement for portable media players. There are many homebrew projects for various devices, which often unlock general-purpose computing functionality in previously limited devices (e.g., running Doom on iPods).
Firmware hacks usually take advantage of the firmware update facility on many devices to install or run themselves. Some, however, must resort to exploits to run, because the manufacturer has attempted to lock the hardware to stop it from running unlicensed code.
Most firmware hacks are free software.
The Moscow-based Kaspersky Lab discovered that a group of developers it refers to as the Equation Group has developed hard disk drive firmware modifications for various drive models, containing a trojan horse that allows data to be stored on the drive in locations that will not be erased even if the drive is formatted or wiped.[8] Although the Kaspersky Lab report did not explicitly claim that this group is part of the United States National Security Agency (NSA), evidence obtained from the code of various Equation Group software suggests that they are part of the NSA.[9][10]
Researchers from the Kaspersky Lab categorized the undertakings by Equation Group as the most advanced hacking operation ever uncovered, also documenting around 500 infections caused by the Equation Group in at least 42 countries.
Mark Shuttleworth, the founder of the company Canonical, which created the Ubuntu Linux distribution, has described proprietary firmware as a security risk, saying that "firmware on your device is the NSA's best friend" and calling firmware "a trojan horse of monumental proportions". He has asserted that low-quality, closed source firmware is a major threat to system security:[11] "Your biggest mistake is to assume that the NSA is the only institution abusing this position of trust – in fact, it's reasonable to assume that all firmware is a cesspool of insecurity, courtesy of incompetence of the highest degree from manufacturers, and competence of the highest degree from a very wide range of such agencies". As a potential solution to this problem, he has called for declarative firmware, which would describe "hardware linkage and dependencies" and "should not include executable code".[12] Firmware should be open-source so that the code can be checked and verified.
Custom firmware hacks have also focused on injecting malware into devices such as smartphones or USB devices. One such smartphone injection was demonstrated on the Symbian OS at MalCon,[13][14] a hacker convention. A USB device firmware hack called BadUSB was presented at the Black Hat USA 2014 conference,[15] demonstrating how a USB flash drive microcontroller can be reprogrammed to spoof various other device types to take control of a computer, exfiltrate data, or spy on the user.[16][17] Other security researchers have worked further on how to exploit the principles behind BadUSB,[18] releasing at the same time the source code of hacking tools that can be used to modify the behavior of different USB devices.[19]
|
https://en.wikipedia.org/wiki/Firmware
|
The INtime Real-Time Operating System (RTOS) family is based on a 32-bit RTOS conceived to run time-critical operations with cycle times as low as 50 μs. INtime RTOS runs on single-core, hyper-threaded, and multi-core x86 PC platforms from Intel and AMD. It supports two binary-compatible usage configurations: INtime for Windows, where the INtime RTOS runs alongside Microsoft Windows®, and INtime Distributed RTOS, where INtime runs alone.
Like its iRMX predecessors, INtime is a real-time operating system, and like DOSRMX and iRMX for Windows, it runs concurrently with a general-purpose operating system on a single hardware platform.
INtime 1.0 was originally introduced in 1997[1] in conjunction with the Windows NT operating system. Since then it has been upgraded to include support for all subsequent protected-mode Microsoft Windows platforms, from Windows XP to Windows 10.
INtime can also be used as a stand-alone RTOS. INtime binaries are able to run unchanged on a stand-alone node of the INtime RTOS. Unlike Windows, INtime can run on an Intel 80386 or equivalent processor. Current versions of the Windows operating system generally require at least a Pentium-level processor in order to boot and execute.
After spinning off from Radisys in 2000,[2] development work on INtime continued at TenAsys Corporation. In 2003 TenAsys released version 2.2 of INtime.[3]
Notable features of version 2.2 include:
|
https://en.wikipedia.org/wiki/INtime
|
Time-triggered architecture (abbreviated as TTA), also known as a time-triggered system, is a computer system that executes one or more sets of tasks according to a predetermined task schedule.[1] Implementation of a TT system will typically involve use of a single interrupt that is linked to the periodic overflow of a timer. This interrupt may drive a task scheduler (a restricted form of real-time operating system). The scheduler will, in turn, release the system tasks at predetermined points in time.[1]
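The pattern can be made concrete with a short sketch. The following C++ cyclic-executive example is illustrative only: the single periodic timer interrupt described above is emulated here with sleep_until, and the two tasks and their periods are assumptions, not part of any particular standard or product:

    #include <chrono>
    #include <cstdio>
    #include <thread>

    // Placeholder tasks; in a real system these would do the application's work.
    void task_a() { std::puts("A"); }   // released on every tick
    void task_b() { std::puts("B"); }   // released on every 10th tick

    int main() {
        using namespace std::chrono;
        auto next = steady_clock::now();
        for (unsigned tick = 0; tick < 30; ++tick) {
            next += milliseconds(100);           // fixed 100 ms tick period
            std::this_thread::sleep_until(next); // stands in for the timer interrupt
            task_a();                            // release tasks at their
            if (tick % 10 == 0) task_b();        // predetermined points in time
        }
    }

Because the dispatch decisions depend only on the tick counter, the timing behavior of such a schedule is highly deterministic.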
Because they have highly deterministic timing behavior, TT systems have been used for many years to develop safety-critical aerospace and related systems.[2]
An early text that sets forth the principles of time-triggered architecture, communications, and sparse-time approaches is Real-Time Systems: Design Principles for Distributed Embedded Applications (1997).[3]
Use of TT systems was popularized by the publication of Patterns for Time-Triggered Embedded Systems (PTTES) in 2001[1] and the related introductory book Embedded C in 2002.[4] The PTTES book also introduced the concepts of time-triggered hybrid schedulers (an architecture for time-triggered systems that require task pre-emption) and shared-clock schedulers (an architecture for distributed time-triggered systems involving multiple, synchronized nodes).[1]
Since publication of PTTES, extensive research work on TT systems has been carried out.[5][6][7][8][9][10]
Time-triggered systems are now commonly associated with international safety standards such as IEC 61508 (industrial systems), ISO 26262 (automotive systems), IEC 62304 (medical systems) and IEC 60730 (household goods).
Time-triggered systems can be viewed as a subset of a more general event-triggered (ET) system architecture (see event-driven programming).
Implementation of an ET system will typically involve use of multiple interrupts, each associated with specific periodic events (such as timer overflows) or aperiodic events (such as the arrival of messages over a communication bus at random points in time). ET designs are traditionally associated with the use of what is known as a real-time operating system (or RTOS), though use of such a software platform is not a defining characteristic of an ET architecture.[1]
|
https://en.wikipedia.org/wiki/Time-triggered_system
|
The Linux Foundation (LF) is a non-profit organization established in 2000 to support Linux development and open-source software projects.[2]
The Linux Foundation started as Open Source Development Labs in 2000 to standardize and promote the open-source operating system kernel Linux.[3] It merged with the Free Standards Group in 2007. The foundation has since evolved to promote open-source projects beyond the Linux OS as a "foundation of foundations" that hosts a variety of projects spanning topics such as cloud, networking, blockchain, and hardware.[4] The foundation also hosts annual educational events among the Linux community, including the Linux Kernel Developers Summit and the Open Source Summit.[5][6]
As of September 2015, the total economic value of the development costs of Linux Foundation Collaborative Projects was estimated at $5 billion.[7]
For the Linux kernel community, the Linux Foundation hosts its IT infrastructure and organizes conferences such as the Linux Kernel Summit and the Linux Plumbers Conference. It also hosts a Technical Advisory Board made up of Linux kernel developers. One of these developers has been appointed to sit on the Linux Foundation board.
In January 2016, the Linux Foundation announced a partnership with Goodwill Central Texas to help hundreds of disadvantaged individuals from underserved communities and a variety of backgrounds get the training they need to start careers in Linux IT.[37]
In July 2020, the Linux Foundation announced an initiative allowing open-source communities to create Open Standards using tools and methods inspired by open-source developers.[38]
The Core Infrastructure Initiative (CII), is a project managed by the Linux Foundation that enables technology companies, industry stakeholders, and esteemed developers to collaboratively identify and fund critical open-source projects in need of assistance. In June 2015, the organization announced financial support of nearly $500,000 for three new projects to better support critical security elements of the global information infrastructure.[39]In May 2016, CII launched its Best Practice Badge program to raise awareness of development processes and project governance steps that will help projects have better security outcomes. In May 2017, CII issued its 100th badge to a passing project.[40]
Introduced in October 2017,[41] the Community Data License Agreement (CDLA) is a legal framework for sharing data.[42] There are two initial CDLA licenses:
On March 3, 2009, the Linux Foundation announced that they would take over the management of Linux.com from its previous owners, SourceForge, Inc.[43]
The site was relaunched on May 13, 2009, shifting away from its previous incarnation as a news site to become a central source for Linux tutorials, information, software, documentation, and answers across the server, desktop/netbook, mobile, and embedded areas. It also includes a directory of Linux software and hardware.[44]
Much like Linux itself, Linux.com plans to rely on the community to create and drive content and conversation.
In 2020, amidst the COVID-19 pandemic, the Linux Foundation announced the LFPH,[45] a program dedicated to advancing and supporting the virus contact tracing work led by Google and Apple and their Bluetooth notification systems. The LFPH is focusing its efforts on public health applications, including the effort's first initiative: a notification app intended for governments wanting to launch their privacy-focused exposure notification networks. As of today, LFPH hosts two contact-tracing apps.[46]
In September 2020, the Linux Foundation announced the LF Climate Finance Foundation (LFCF), a new initiative "to encourage investment in AI-enhanced open source analytics to address climate change."[47] LFCF plans to build a platform that will utilize open-source open data to help the financial investment, NGO, and academic sectors better model companies' exposure to climate change.[48] Allianz, Amazon, Microsoft, and S&P Global will be the initiative's founding members.[49]
LF Energy is an initiative launched by the Linux Foundation in 2018 to improve the power grid.[50][51]
The Linux Foundation Training Program features instructors and content from the leaders of the Linux developer and open-source communities.[52]
Participants receive Linux training that is vendor-neutral and created with oversight from leaders of the Linux development community. The Linux Foundation's online and in-person training programs aim to deliver broad, foundational knowledge and networking opportunities.
In March 2014, the Linux Foundation and edX partnered to offer a free massive open online course titled Introduction to Linux.[53] This was the first in a series of ongoing free offerings from both organizations, whose current catalogue of MOOCs includes Intro to DevOps, Intro to Cloud Foundry and Cloud Native Software Architecture, Intro to Apache Hadoop, Intro to Cloud Infrastructure Technologies, and Intro to OpenStack.[54]
In December 2015, the Linux Foundation introduced a self-paced course designed to help prepare administrators for the OpenStack Foundation's Certified OpenStack Administrator exam.[55]
As part of a partnership with Microsoft, it was announced in December 2015 that the Linux on Azure certification would be awarded to individuals who pass both the Microsoft Exam 70-533 (Implementing Microsoft Azure Infrastructure Solutions) and the Linux Foundation Certified System Administrator (LFCS) exam.[56]
In early 2017, at the annual Open Source Leadership Summit, it was announced that the Linux Foundation would begin offering an Inclusive Speaker Orientation course in partnership with the National Center for Women & Information Technology. The course is designed to give participants "practical skills to promote inclusivity in their presentations."[57]
In September 2020, the Linux Foundation released a free serverless computing training course with CNCF. It is taught by Alex Ellis, founder of OpenFaaS.[58]
Among many other organizations with similar offerings, The Linux Foundation has reported a 40% increase in demand for their online courses in 2020 during the coronavirus pandemic and the resulting social-distancing measures.[59]
The patent commons consists of all patented software which has been made available to the open-source community. For software to be considered to be in the commons, the patent owner must guarantee that developers will not be sued for infringement, though there may be some restrictions on the use of the patented code. The concept was first given substance by Red Hat in 2001 when it published its Patent Promise.[60]
The Patent Commons Project was launched on November 15, 2005, by the Open Source Development Labs (OSDL). The core of the project is an online patent commons reference library aggregating and documenting information about patent-related pledges and other legal solutions directed at the open-source software community. As of 2015, the project listed 53 patents.[61]
The Linux Foundation's Open Compliance Program provides an array of programs for open-source software licensing compliance. The focus of this initiative is to educate and assist developers (and their companies) on licensing requirements, to make it easier to create new software. The program consists primarily of self-administered training modules, but it is also meant to include automated tools to help programmatically identify license compliance issues.[62]
Funding for the Linux Foundation comes primarily from its Platinum Members, who pay US$500,000 per year according to Schedule A in LF's bylaws,[63] adding up to US$7.5 million.[64] The Gold Members contribute a combined total of US$1.2 million, and Silver Members contribute between US$5,000 and US$20,000 based on the number of employees, summing up to at least US$6,240,000.[65]
In December 2023, the Open Networking Foundation (ONF), including its LF Broadband, Aether and P4 projects, merged with the Linux Foundation. As part of the merger, ONF handed over $5 million in funding.[66] As of June 2024, the foundation collected annual fees worth at least US$14,940,000.[67]
By early 2018, the Linux Foundation's website stated that it "uses [donations] in part to help fund the infrastructure and fellows (like Linus Torvalds) who help develop the Linux kernel."[68]
The Linux Foundation established Linux Foundation Europe, with its headquarters located in Brussels, on September 14, 2022, with the aim of promoting open source throughout Europe. Linux Foundation Europe will increase open collaborative activities for all European stakeholders, including citizens, the public sector, and the private sector. Among the first members of Linux Foundation Europe are Ericsson, Accenture, Alliander, Avast, Bosch, BTP, esatus AG, NXP Semiconductors, RTE, SAP, SUSE S.A., TomTom, Bank of England, OpenForum Europe, OpenUK, and the Research Institutes of Sweden. Linux Foundation Europe will make it possible for open collaborative projects to be housed on European soil.[69][70] The first initiative, the Open Wallet Foundation (OWF), which aims to create an interoperable engine for digital wallets that supports payment processing, identity verification, and storing verified credentials including employment, education, financial status, and entitlements, was launched on 23 February 2023. The inaugural members are Accenture, Gen Digital, Futurewei Technologies, Visa Inc., American Express, Deutsche Telekom/T-Systems, esatus AG, Fynbos, Hopae, IAMX, IDnow, IndyKite, Intesi Group, Ping Identity, Digital Identification and Authentication Council of Canada (DIACC), Digital Dollar Project, Digital Identity New Zealand (DINZ), Digital Identity and Data Sovereignty Association (DIDAS), DizmeID Foundation (DIZME), Hyperledger Foundation, Information Technologies and Telematics Institute / Centre for Research and Technology Hellas (CERTH/ITI), Johannes Kepler University Linz, ID2020, IDunion SCE, Mifos Initiative, MIT Connection Science, Modular Open Source Identity Platform (MOSIP), OpenID Foundation, Open Identity Exchange (OIX), Secure Identity Alliance (SIA), University of Rovira i Virgili, and the Trust Over IP Foundation (ToIP).[71][72]
Linux Foundation Europe started the RISC-V Software Ecosystem (RISE) initiative on May 31, 2023. The goal of RISE is to increase the availability of software for high-performance and power-efficient RISC-V processors running high-level operating systems for a range of market segments by bringing together a large number of hardware and software vendors. Red Hat, Samsung, Qualcomm, Nvidia, MediaTek, Intel, and Google are among the initial members.[73]
During KubeCon + CloudNativeCon India in New Delhi, the Linux Foundation announced the opening of Linux Foundation India on 11 December 2024, which will work on subjects including blockchain, security, edge/IoT, cloud-native technologies, telecommunications, and domain-specific artificial intelligence.[74][75] In India, the need for open-source technology increased by 42% in 2023 as a result of the Linux Foundation's partnership with the International Startup Foundation (ISF). They are also collaborating with the open-source networking company OpenNets.[76][77] Through the LF Decentralized Trust, the Reserve Bank of India (RBI) and the Ministry of Electronics and Information Technology (MeitY) are utilizing the Linux Foundation's projects to build the National Blockchain Framework and Digital Rupee.[78][77] Linux Foundation India will launch projects that will be introduced directly upstream into the Linux Foundation, further facilitating ongoing technological collaborations between the Federal Government of the United States and the Government of India, in contrast to Linux Foundation Europe and Linux Foundation Japan, which focus on region-specific open-source projects because of governmental constraints. Linux Foundation India will provide open-source contributors to the Linux Foundation's sub-organizations.[79][77]
|
https://en.wikipedia.org/wiki/Real-time_linux
|
The Linux operating system is prevalent in embedded systems. As of 2024, developer surveys and industry reports find that embedded Linux is used in 44%-46% of embedded systems.[1][2][3] Due to its versatility, its large community of developers, and its adaptability to devices with size and power constraints, Linux is a popular choice for devices used in edge computing[4] and autonomous systems.[citation needed]
Prior to becoming the de facto standard for microprocessor-based devices,[6] a Linux distribution was created for the Linux Router Project, with the intent of transforming PCs into routers.
Starting in the late 1990s and the first decade of the 21st century, the introduction of uClinux enabled ports to a large variety of microprocessors.[7] Linux is also used as an alternative to a proprietary operating system and its associated toolchain.[8]
The introduction of BusyBox in 1999 enabled packaging critical tools in an embedded system with a minimal footprint.
As mentioned in the article ARM architecture family, due to their low costs, low power consumption, and low heat generation, ARM processors are prevalent in many embedded devices. The open-source nature, the flexibility, and the stability of Linux contribute to its widespread adoption on ARM devices.[9]
The development of the GNU cross-compiler facilitated the adoption of embedded Linux on many processors.
Android, a Linux-kernel-based operating system acquired and extended by Google, was released as Android 1.0 in 2008 and has become a highly competitive platform for smartphones and tablets. In time, Android would become the most successful embedded Linux distribution.[5]
Not every embedded Linux distribution is required to meet, or meets, real-time requirements.[10][11] This is particularly relevant for safety-critical applications and systems.[12]
The original Linux kernel was not suitable for real-time tasks due to its non-deterministic behavior.[13]
Early attempts to provide real-time support, such as RTAI, were based on running a real-time kernel alongside the standard kernel.
In 2005, the PREEMPT_RT project was initiated to provide a patch to the Linux kernel.[14][15]
In 2024, the PREEMPT_RT patch was fully merged into the Linux kernel for supported architectures.
The open-source nature and security features of Linux have contributed to its prevalence in devices on the edge and IoT systems.[16] Correspondingly, the demand for the real-time capabilities described in the previous subsection is driven by the proliferation of IoT devices.
The emerging technologies of the fourth industrial revolution have driven further enhancements to the Linux kernel, notably the adoption of containerization.[17]
Due to its freely available source code and ease of customization, Linux has been shipped in many consumer devices. Starlink and SpaceX use embedded Linux on their constellations and rockets.[18] The Embeddable Linux Kernel is a lightweight and customizable Linux distribution appropriate for low-resource hardware.[19] Like the synergy with the ARM architecture mentioned above, embedded Linux has evolved with hardware technologies like systems on a chip and single-board computers, networking standards, and memory devices.[20] (Example: Raspberry Pi.)
With the availability of consumer embedded devices, communities of users and developers formed around these devices: replacement or enhancement of the Linux distribution shipped on a device has often been made possible thanks to the availability of the source code and to the communities surrounding the devices.
Alongside the evolution of the Linux kernel, build systems evolved to support the building of an optimized operating system for an embedded device.
Before the emergence of these build systems, developers manually built toolchains and compiled each component of the embedded distribution (kernel, libraries, applications).[21]
Currently, there are several solutions: some are full build systems, others are supporting tools.
|
https://en.wikipedia.org/wiki/Linux_on_embedded_systems
|
In mathematics and computer science, the pinwheel scheduling problem is a problem in real-time scheduling with repeating tasks of unit length and hard constraints on the time between repetitions.
When a pinwheel scheduling problem has a solution, it has one in which the schedule repeats periodically. This repeating pattern resembles the repeating pattern of set and unset pins on the gears of a pinwheel cipher machine, justifying the name.[1] If the fraction of time that is required by each task totals less than 5/6 of the total time, a solution always exists, but some pinwheel scheduling problems whose tasks use a total of slightly more than 5/6 of the total time do not have solutions.
Certain formulations of the pinwheel scheduling problem are NP-hard.
The input to pinwheel scheduling consists of a list of tasks, each of which is assumed to take unit time per instantiation. Each task has an associated positive integer value, its maximum repeat time (the maximum time from the start of one instantiation of the task to the next). Only one task can be performed at any given time.[1]
The desired output is an infinite sequence specifying which task to perform in each unit of time. Each input task should appear infinitely often in the sequence, with the largest gap between two consecutive instantiations of a task at most equal to the repeat time of the task.[1]
For example, the infinitely repeating sequence ABACABACABAC... would be a valid pinwheel schedule for three tasks A, B, and C with repeat times that are at least 2, 4, and 4 respectively.
If the tasks to be scheduled are numbered from $1$ to $n$, let $t_i$ denote the repeat time for task $i$. In any valid schedule, task $i$ must use a $1/t_i$ fraction of the total time, the amount that would be used in a schedule that repeats that task at exactly its specified repeat time. The density of a pinwheel scheduling problem is defined as the sum of these fractions, $\sum 1/t_i$. For a solution to exist, the times devoted to each task cannot sum to more than the total available time, so it is necessary for the density to be at most $1$.[2]
This condition on density is also sufficient for a schedule to exist in the special case that all repeat times are multiples of each other. For instance, this would be true when all repeat times are powers of two. In this case one can solve the problem using a disjoint covering system.[1] Having density at most $1$ is also sufficient when there are exactly two distinct repeat times.[2] However, having density at most $1$ is not sufficient in some other cases. In particular, there is no schedule for three items with repeat times $t_1 = 2$, $t_2 = 3$, and $t_3$, no matter how large $t_3$ may be, even though the density of this system is only $5/6 + 1/t_3$.[3]
In 1993, it was conjectured that, when the density of a pinwheel scheduling problem is at most $5/6$, a solution exists.[3] This was proven in 2024.[4]
When a solution exists, it can be assumed to be periodic, with a period at most equal to the product of the repeat times. However, it is not always possible to find a repeating schedule of sub-exponential length.[2]
With a compact input representation that specifies, for each distinct repeat time, the number of objects that have that repeat time, pinwheel scheduling is NP-hard.[2]
Despite the NP-hardness of the pinwheel scheduling problem for general inputs, some types of inputs can be scheduled efficiently. An example of this occurs for inputs where (when listed in sorted order) each repeat time evenly divides the next one, and the density is at most one. In this case, the problem can be solved by a greedy algorithm that schedules the tasks in sorted order, scheduling each task to repeat at exactly its repeat time. At each step in this algorithm, the time slots that have already been assigned form a repeating sequence, with period equal to the repeat time of the most recently scheduled task. This pattern allows each successive task to be scheduled greedily, maintaining the same invariant.[1]
The same idea can be used for arbitrary instances with density at most 1/2, by rounding down each repeat time to a power of two that is less than or equal to it. This rounding process at most doubles the density, keeping it at most one. After rounding, all repeat times are multiples of each other, allowing the greedy algorithm to work. The resulting schedule repeats each task at its rounded repeat time; because these rounded times do not exceed the input times, the schedule is valid.[1] Instead of rounding to powers of two, a greater density threshold can be achieved by rounding to other sequences of multiples, such as the numbers of the form $x \cdot 2^i$ for a careful choice of the coefficient $x$,[3] or by rounding to two different geometric series and generalizing the idea that tasks with two distinct repeat times can be scheduled up to density one.[3][5]
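A minimal C++ sketch of this rounding-plus-greedy method follows; the repeat times 4, 7, and 15 are an arbitrary example with density below 1/2, which the rounding step turns into the powers of two 4, 4, and 8:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> repeat = {4, 7, 15};   // example input, density < 1/2
        // Round each repeat time down to a power of two (7 -> 4, 15 -> 8);
        // this at most doubles the density, so it stays at most 1.
        std::vector<int> rounded;
        for (int t : repeat) {
            int p = 1;
            while (2 * p <= t) p *= 2;
            rounded.push_back(p);
        }
        // Greedy step: take tasks in sorted order, give each the first free
        // slot s, and repeat it at exactly its rounded time: s, s+p, s+2p, ...
        std::sort(rounded.begin(), rounded.end());
        int period = rounded.back();            // every rounded time divides this
        std::vector<int> slot(period, -1);      // -1 marks an idle slot
        for (std::size_t i = 0; i < rounded.size(); ++i) {
            int s = 0;
            while (slot[s] != -1) ++s;          // first free slot
            for (int j = s; j < period; j += rounded[i]) slot[j] = (int)i;
        }
        for (int s : slot) std::putchar(s < 0 ? '.' : 'A' + s);
        std::putchar('\n');                     // prints one period: ABC.AB..
    }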
The original work on pinwheel scheduling proposed it for an application in which a single base station must communicate with multiple satellites or remote sensors, one at a time, with distinct communications requirements. In this application, each satellite becomes a task in a pinwheel scheduling problem, with a repeat time chosen to give it adequate bandwidth. The resulting schedule is used to assign time slots for each satellite to communicate with the base station.[1]
Other applications of pinwheel scheduling include scheduling maintenance sessions for a collection of objects (such as oil changes for automobiles), the arrangement of repeated symbols on the print chains of line printers,[3] computer processing of multimedia data,[6] and contention resolution in real-time wireless computer networks.[7]
|
https://en.wikipedia.org/wiki/Pinwheel_scheduling
|
Asynchrony, in computer programming, refers to the occurrence of events independent of the main program flow and ways to deal with such events. These may be "outside" events such as the arrival of signals, or actions instigated by a program that take place concurrently with program execution, without the program hanging to wait for results.[1] Asynchronous input/output is an example of the latter case of asynchrony, and lets programs issue commands to storage or network devices that service these requests while the processor continues executing the program. Doing so provides a degree of concurrency.[1]
A common way of dealing with asynchrony in a programming interface is to provide subroutines that return a future or promise representing the ongoing operation, together with a synchronizing operation that blocks until the future or promise is completed. Some programming languages, such as Cilk, have special syntax for expressing an asynchronous procedure call.[2]
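As an illustration of the future pattern, the following small C++ sketch (in which slow_io is a stand-in for a long-running request) starts work with std::async, continues the main program flow, and blocks only at the synchronizing get() call:

    #include <future>
    #include <iostream>

    int slow_io() {              // stands in for a long-running I/O request
        return 42;
    }

    int main() {
        // std::async returns at once with a future for the eventual result.
        std::future<int> f = std::async(std::launch::async, slow_io);
        // ... the main program flow continues here, unblocked ...
        std::cout << f.get() << '\n';  // blocks only if the result is not ready
    }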
Examples of asynchrony include the following:
|
https://en.wikipedia.org/wiki/Asynchronous_programming
|
In computing, dataflow is a broad concept which has various meanings depending on the application and context. In the context of software architecture, data flow relates to stream processing or reactive programming.
Dataflow computing is a software paradigm based on the idea of representing computations as a directed graph, where nodes are computations and data flows along the edges.[1] Dataflow can also be called stream processing or reactive programming.[2]
There have been multiple data-flow/stream processing languages of various forms (see Stream processing). Data-flow hardware (see Dataflow architecture) is an alternative to the classic von Neumann architecture. The most obvious example of data-flow programming is the subset known as reactive programming with spreadsheets. As a user enters new values, they are instantly transmitted to the next logical "actor" or formula for calculation.
Distributed data flows have also been proposed as a programming abstraction that captures the dynamics of distributed multi-protocols. The data-centric perspective characteristic of data-flow programming promotes high-level functional specifications and simplifies formal reasoning about system components.
Hardware architectures for dataflow were a major topic in computer architecture research in the 1970s and early 1980s. Jack Dennis of the Massachusetts Institute of Technology (MIT) pioneered the field of static dataflow architectures. Designs that use conventional memory addresses as data dependency tags are called static dataflow machines. These machines did not allow multiple instances of the same routines to be executed simultaneously because the simple tags could not differentiate between them. Designs that use content-addressable memory are called dynamic dataflow machines by Arvind. They use tags in memory to facilitate parallelism.
Data flows around the computer through its components: it enters via input devices and can leave through output devices (printers, etc.).
A dataflow network is a network of concurrently executing processes or automata that can communicate by sending data over channels (see message passing).
In Kahn process networks, named after Gilles Kahn, the processes are determinate. This implies that each determinate process computes a continuous function from input streams to output streams, and that a network of determinate processes is itself determinate, thus computing a continuous function. This implies that the behavior of such networks can be described by a set of recursive equations, which can be solved using fixed-point theory. The movement and transformation of the data is represented by a series of shapes and lines.
Dataflow can also refer to:
|
https://en.wikipedia.org/wiki/Data_flow
|
In automata theory and sequential logic, a state-transition table is a table showing what state (or states in the case of a nondeterministic finite automaton) a finite-state machine will move to, based on the current state and other inputs. It is essentially a truth table in which the inputs include the current state along with other inputs, and the outputs include the next state along with other outputs.
A state-transition table is one of many ways to specify a finite-state machine. Other ways include a state diagram.
State-transition tables are sometimes one-dimensional tables, also called characteristic tables. They are much more like truth tables than their two-dimensional form. The single dimension indicates inputs, current states, next states and (optionally) outputs associated with the state transitions.
State-transition tables are typically two-dimensional tables. There are two common ways of arranging them.
In the first way, one of the dimensions indicates current states, while the other indicates inputs. The row/column intersections indicate next states and (optionally) outputs associated with the state transitions.
In the second way, one of the dimensions indicates current states, while the other indicates next states. The row/column intersections indicate inputs and (optionally) outputs associated with the state transitions.
Simultaneous transitions in multiple finite-state machines can be shown in what is effectively an n-dimensional state-transition table in which pairs of rows map (sets of) current states to next states.[1] This is an alternative to representing communication between separate, interdependent finite-state machines.
At the other extreme, separate tables have been used for each of the transitions within a single finite-state machine: "AND/OR tables"[2] are similar to incomplete decision tables in which the decision for the rules which are present is implicitly the activation of the associated transition.
An example of a state-transition table together with the corresponding state diagram for a finite-state machine that accepts a string with an even number of 0s is given below:

                Input 0   Input 1
    State S1      S2        S1
    State S2      S1        S2

In the state-transition table, all possible inputs to the finite-state machine are enumerated across the columns of the table, while all possible states are enumerated across the rows. If the machine is in the state S1 (the first row) and receives an input of 1 (second column), the machine will stay in the state S1. Now if the machine is in the state S1 and receives an input of 0 (first column), the machine will transition to the state S2. In the state diagram, the former is denoted by the arrow looping from S1 to S1 labeled with a 1, and the latter is denoted by the arrow from S1 to S2 labeled with a 0. This process can be described statistically using Markov chains.
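A state-transition table translates directly into code. The following C++ sketch (an illustration, not part of the article's sources) hard-codes the two-dimensional table of this example, with states as rows and the inputs 0 and 1 as columns, and runs it over a sample string:

    #include <cstdio>

    enum State { S1, S2 };                 // S1 = even number of 0s seen so far

    // next_state[current][input]: column 0 is input '0', column 1 is input '1'.
    const State next_state[2][2] = {
        /* S1 */ { S2, S1 },
        /* S2 */ { S1, S2 },
    };

    int main() {
        const char *input = "1001";        // contains an even number of 0s
        State s = S1;                      // start state
        for (const char *p = input; *p; ++p)
            s = next_state[s][*p - '0'];   // one table lookup per input symbol
        std::printf("%s\n", s == S1 ? "accepted" : "rejected");
    }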
For a nondeterministic finite-state machine, an input may cause the machine to be in more than one state, hence its non-determinism. This is denoted in a state-transition table by the set of all target states enclosed in a pair of braces {}. An example of a state-transition table together with the corresponding state diagram for a nondeterministic finite-state machine is given below:
If the machine is in the state S2 and receives an input of 0, the machine will be in two states at the same time, the states S1 and S2.
It is possible to draw a state diagram from a state-transition table. A sequence of easy-to-follow steps is given below:
|
https://en.wikipedia.org/wiki/State_transition
|
In computer science, best, worst, and average cases of a given algorithm express what the resource usage is at least, at most and on average, respectively. Usually the resource being considered is running time, i.e. time complexity, but it could also be memory or some other resource.
Best case is the function which performs the minimum number of steps on input data of n elements. Worst case is the function which performs the maximum number of steps on input data of size n. Average case is the function which performs an average number of steps on input data of n elements.
In real-time computing, the worst-case execution time is often of particular concern, since it is important to know how much time might be needed in the worst case to guarantee that the algorithm will always finish on time.
Average performance and worst-case performance are the most used in algorithm analysis. Less widely found is best-case performance, but it does have uses: for example, where the best cases of individual tasks are known, they can be used to improve the accuracy of an overall worst-case analysis. Computer scientists use probabilistic analysis techniques, especially expected value, to determine expected running times.
The terms are used in other contexts; for example the worst- and best-case outcome of an epidemic, worst-case temperature to which an electronic circuit element is exposed, etc. Where components of specified tolerance are used, devices must be designed to work properly with the worst-case combination of tolerances and external conditions.
The term best-case performance is used in computer science to describe an algorithm's behavior under optimal conditions. For example, the best case for a simple linear search on a list occurs when the desired element is the first element of the list.
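The linear-search example can be made concrete in a few lines of C++: one comparison suffices in the best case, while a missing key forces all n comparisons in the worst case:

    #include <cstdio>

    // Returns the index of key in a[0..n-1], or -1 if absent.
    int linear_search(const int *a, int n, int key) {
        for (int i = 0; i < n; ++i)
            if (a[i] == key) return i;  // best case: hit at i == 0
        return -1;                      // worst case: all n elements examined
    }

    int main() {
        int a[] = {7, 3, 9, 1};
        std::printf("%d\n", linear_search(a, 4, 7)); // best case: found at index 0
        std::printf("%d\n", linear_search(a, 4, 5)); // worst case: absent, -1
    }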
Development and choice of algorithms is rarely based on best-case performance: most academic and commercial enterprises are more interested in improving average-case complexity and worst-case performance. Algorithms may also be trivially modified to have good best-case running time by hard-coding solutions to a finite set of inputs, making the measure almost meaningless.[1]
Worst-case performance analysis and average-case performance analysis have some similarities, but in practice usually require different tools and approaches.
Determining what typical input means is difficult, and often that average input has properties which make it difficult to characterise mathematically (consider, for instance, algorithms that are designed to operate on strings of text). Similarly, even when a sensible description of a particular "average case" (which will probably only be applicable for some uses of the algorithm) is possible, they tend to result in more difficult analysis of equations.[2]
Worst-case analysis gives a safe analysis (the worst case is never underestimated), but one which can be overly pessimistic, since there may be no (realistic) input that would take this many steps.
In some situations it may be necessary to use a pessimistic analysis in order to guarantee safety. Often, however, a pessimistic analysis may be too pessimistic, so an analysis that gets closer to the real value but may be optimistic (perhaps with some known low probability of failure) can be a much more practical approach. One modern approach in academic theory to bridge the gap between worst-case and average-case analysis is called smoothed analysis.
When analyzing algorithms which often take a small time to complete, but periodically require a much larger time, amortized analysis can be used to determine the worst-case running time over a (possibly infinite) series of operations. This amortized cost can be much closer to the average cost, while still providing a guaranteed upper limit on the running time. So, e.g., online algorithms are frequently based on amortized analysis.
The worst-case analysis is related to the worst-case complexity.[3]
Many algorithms with bad worst-case performance have good average-case performance. For problems we want to solve, this is a good thing: we can hope that the particular instances we care about are average. For cryptography, this is very bad: we want typical instances of a cryptographic problem to be hard. Here methods like random self-reducibility can be used for some specific problems to show that the worst case is no harder than the average case, or, equivalently, that the average case is no easier than the worst case.
On the other hand, some data structures like hash tables have very poor worst-case behaviors, but a well-written hash table of sufficient size will statistically never give the worst case; the average number of operations performed follows an exponential decay curve, and so the run time of an operation is statistically bounded.
|
https://en.wikipedia.org/wiki/Best_and_worst_cases
|
The cigarette smokers problem is a classic concurrency problem in computer science, introduced by Suhas Patil in 1971. It illustrates synchronization challenges in multi-process systems, where multiple processes (smokers) compete for limited resources (ingredients) provided by a single agent. The problem is notable for its constraints, such as the immutability of the agent's behavior and the prohibition of conditional statements in solutions, which have been subjects of criticism.[1]
Patil's problem includes a "quite arbitrary"[1] "restriction that the process which supplies the ingredients cannot be changed and that no conditional statements may be used."[2]
Assume a cigarette requires three ingredients to make and smoke: tobacco, paper, and matches. There are three smokers around a table, each of whom has an infinite supply of one of the three ingredients — one smoker has an infinite supply of tobacco, another has paper, and the third has matches.
There is also a non-smoking agent who enables the smokers to make their cigarettes by arbitrarily (non-deterministically) selecting two of the supplies to place on the table. The smoker who has the third supply should remove the two items from the table, using them (along with their own supply) to make a cigarette, which they smoke for a while. Once the smoker has finished their cigarette, the agent places two new random items on the table. This process continues forever.
Three semaphores are used to represent the items on the table; the agent increases the appropriate semaphore to signal that an item has been placed on the table, and smokers decrement the semaphore when removing items. Also, each smoker has an associated semaphore that they use to signal to the agent that the particular smoker is done smoking; the agent has a process that waits on each smoker's semaphore to let the agent know that it can place the new items on the table.
A simple pseudocode implementation of the smoker who has the supply of tobacco might look like the following:
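(Patil's original pseudocode is not reproduced here; the following C++20 fragment is one plausible rendering, with illustrative semaphore names.)

    #include <semaphore>

    std::binary_semaphore paper{0}, matches{0}; // raised by the agent
    std::binary_semaphore smoker_done{0};       // tells the agent to continue

    void smoker_with_tobacco() {
        for (;;) {
            paper.acquire();       // take the paper from the table
            matches.acquire();     // take the matches from the table
            // roll a cigarette with our own tobacco and smoke it
            smoker_done.release(); // signal the agent to place two new items
        }
    }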
However, this can lead to deadlock; if the agent places paper and tobacco on the table, the smoker with tobacco may remove the paper and the smoker with matches may take the tobacco, leaving both unable to make their cigarette. The solution is to define additional processes and semaphores that prevent deadlock, without modifying the agent.
Patil placed the following constraints on the cigarette smokers problem:
Patil used a proof in terms of Petri nets to claim that a solution to the cigarette smokers problem using Edsger Dijkstra's semaphore primitives is impossible, and to suggest that a more powerful primitive is necessary.[3][2] However, David Parnas demonstrated that Patil's proof is inadequate if arrays of semaphores are used, offering a solution that uses helper processes that do arithmetic to signal the appropriate smoker to proceed.[1]
According to Allen B. Downey, the first restriction makes sense, because if the agent represents an operating system, it would be unreasonable or impossible to modify it every time a new application came along.[4] However, Parnas argues that the second restriction is unjustified:
The limitations reported by Patil are limitations of his primitives, but they are not limitations on the primitives described by Dijkstra. … It is important, however, that such an investigation [of Dijkstra primitives] not investigate the power of these primitives under artificial restrictions. By artificial we mean restrictions which cannot be justified by practical considerations. In this author's opinion, restrictions prohibiting either conditionals or semaphore arrays are artificial.[1]
|
https://en.wikipedia.org/wiki/Cigarette_smokers_problem
|
In computing, the producer-consumer problem (also known as the bounded-buffer problem) is a family of problems described by Edsger W. Dijkstra since 1965.
Dijkstra found the solution for the producer-consumer problem as he worked as a consultant for the Electrologica X1 and X8 computers: "The first use of producer-consumer was partly software, partly hardware: The component taking care of the information transport between store and peripheral was called 'a channel' ... Synchronization was controlled by two counting semaphores in what we now know as the producer/consumer arrangement: the one semaphore indicating the length of the queue, was incremented (in a V) by the CPU and decremented (in a P) by the channel, the other one, counting the number of unacknowledged completions, was incremented by the channel and decremented by the CPU. [The second semaphore being positive would raise the corresponding interrupt flag.]"[1]
Dijkstra wrote about the unbounded buffer case: "We consider two processes, which are called the 'producer' and the 'consumer' respectively. The producer is a cyclic process and each time it goes through its cycle it produces a certain portion of information, that has to be processed by the consumer. The consumer is also a cyclic process and each time it goes through its cycle, it can process the next portion of information, as has been produced by the producer ... We assume the two processes to be connected for this purpose via a buffer with unbounded capacity."[2]
He wrote about the bounded buffer case: "We have studied a producer and a consumer coupled via a buffer with unbounded capacity ... The relation becomes symmetric, if the two are coupled via a buffer of finite size, say N portions"[3]
And about the multiple producer-consumer case: "We consider a number of producer/consumer pairs, where pair i is coupled via an information stream containing n_i portions. We assume ... the finite buffer that should contain all portions of all streams to have a capacity of 'tot' portions."[4]
Per Brinch Hansen and Niklaus Wirth soon saw the problem with semaphores: "I have come to the same conclusion with regard to semaphores, namely that they are not suitable for higher level languages. Instead, the natural synchronization events are exchanges of message."[5]
The original semaphore bounded-buffer solution was written in ALGOL style. The buffer can store N portions or elements. The "number of queueing portions" semaphore counts the filled locations in the buffer, the "number of empty positions" semaphore counts the empty locations in the buffer, and the semaphore "buffer manipulation" works as a mutex for the buffer put and get operations. If the buffer is full, that is, the number of empty positions is zero, the producer thread will wait in the P(number of empty positions) operation. If the buffer is empty, that is, the number of queueing portions is zero, the consumer thread will wait in the P(number of queueing portions) operation. The V() operations release the semaphores. As a side effect, a thread can move from the wait queue to the ready queue. The P() operation decreases the semaphore value down to zero. The V() operation increases the semaphore value.[6]
As of C++20, semaphores are part of the language, and Dijkstra's solution can easily be written in modern C++. The variable buffer_manipulation is a mutex. The semaphore feature of acquiring in one thread and releasing in another thread is not needed. The lock_guard() statement instead of a lock() and unlock() pair is C++ RAII. The lock_guard destructor ensures lock release in case of an exception. This solution can handle multiple consumer threads and/or multiple producer threads.
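A sketch of such a C++20 solution follows; the identifier names mirror the semaphores described above, and the buffer size N is an arbitrary assumption:

    #include <mutex>
    #include <queue>
    #include <semaphore>

    constexpr int N = 10;                                      // buffer capacity
    std::queue<int> buffer;
    std::counting_semaphore<N> number_of_queueing_portions{0}; // filled slots
    std::counting_semaphore<N> number_of_empty_positions{N};   // empty slots
    std::mutex buffer_manipulation;                            // guards the buffer

    void produce(int item) {
        number_of_empty_positions.acquire();    // P: wait for an empty slot
        {
            std::lock_guard<std::mutex> guard(buffer_manipulation);
            buffer.push(item);
        }
        number_of_queueing_portions.release();  // V: one more filled slot
    }

    int consume() {
        number_of_queueing_portions.acquire();  // P: wait for a filled slot
        int item;
        {
            std::lock_guard<std::mutex> guard(buffer_manipulation);
            item = buffer.front();
            buffer.pop();
        }
        number_of_empty_positions.release();    // V: one more empty slot
        return item;
    }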
Per Brinch Hansen defined the monitor: "I will use the term monitor to denote a shared variable and the set of meaningful operations on it. The purpose of a monitor is to control the scheduling of resources among individual processes according to a certain policy."[7] Tony Hoare laid a theoretical foundation for the monitor.[8]
The monitor is an object that contains the variables buffer, head, tail and count to realize a circular buffer, the condition variables nonempty and nonfull for synchronization, and the methods append and remove to access the bounded buffer. The monitor operation wait corresponds to the semaphore operation P or acquire, and signal corresponds to V or release. The circled operations (+) are taken modulo N. In the classic Pascal-style pseudocode presentation this is a Hoare monitor; a Mesa monitor uses while count instead of if count. A programming language C++ version is:
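The following is a hedged sketch of such a C++ version, not a canonical implementation: it keeps the names buffer, head, tail, count, nonempty, nonfull, append and remove from the description above, and its predicate-based waits re-check the condition in a loop, Mesa style.

#include <cassert>
#include <condition_variable>
#include <mutex>

template <typename T, int N>
class BoundedBuffer {
    T buffer[N];
    int head = 0, tail = 0, count = 0;
    std::mutex monitor;                        // the additional mutex C++ needs
    std::condition_variable nonempty, nonfull;
public:
    void append(const T& portion) {
        std::unique_lock<std::mutex> lock(monitor);
        nonfull.wait(lock, [this] { return count < N; });  // wait(nonfull)
        assert(count < N);                     // precondition for adding
        buffer[tail] = portion;
        tail = (tail + 1) % N;                 // circular buffer: (+) modulo N
        ++count;
        nonempty.notify_one();                 // signal(nonempty)
    }
    T remove() {
        std::unique_lock<std::mutex> lock(monitor);
        nonempty.wait(lock, [this] { return count > 0; }); // wait(nonempty)
        assert(count > 0);                     // precondition for removing
        T portion = buffer[head];
        head = (head + 1) % N;
        --count;
        nonfull.notify_one();                  // signal(nonfull)
        return portion;
    }
};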
The C++ version needs an additional mutex for technical reasons. It uses assert to enforce the preconditions for the buffer add and remove operations.
The very first producer-consumer solution in the Electrologica computers used 'channels'. Hoare defined channels: "An alternative to explicit naming of source and destination would be to name a port through which communication is to take place. The port names would be local to the processes, and the manner in which pairs of ports are to be connected by channels could be declared in the head of a parallel command."[9] Brinch Hansen implemented channels in the programming languages Joyce and Super Pascal. The Plan 9 operating system programming language Alef and the Inferno operating system programming language Limbo have channels. The following C source code compiles on Plan 9 from User Space:
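The original listing is not reproduced here; the following is a reconstruction consistent with the calls described in the next paragraph, assuming the plan9port thread library. The producer loop bound and the thread stack size are illustrative.

#include <u.h>
#include <libc.h>
#include <thread.h>

Channel *ch;

void
producer(void *arg)
{
	ulong i;

	USED(arg);
	for(i = 1; i <= 10; i++)
		sendul(ch, i);	/* send a value into the channel */
}

void
threadmain(int argc, char *argv[])
{
	ulong p;
	int n;

	USED(argc);
	USED(argv);
	ch = chancreate(sizeof(ulong), 1);	/* channel of ulong values */
	threadcreate(producer, nil, 8192);	/* the producer thread */
	for(n = 0; n < 10; n++){
		p = recvul(ch);	/* receive a value from the channel */
		print("%lud\n", p);
	}
	threadexitsall(nil);
}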
The program entry point is at function threadmain. The function call ch = chancreate(sizeof(ulong), 1) creates the channel, the function call sendul(ch, i) sends a value into the channel, and the function call p = recvul(ch) receives a value from the channel. The programming language Go has channels, too. A Go example:
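A sketch of such a Go program, matching the statements described in the next paragraph; produceMessage and the number of messages are illustrative.

package main

import "fmt"

var next int

// produceMessage is a hypothetical producer function.
func produceMessage() int {
	next++
	return next
}

func main() {
	ch := make(chan int, 3) // channel that can queue up to three int values
	go func() {             // unnamed Go routine: the producer
		for i := 0; i < 10; i++ {
			ch <- produceMessage() // send a value into the channel
		}
		close(ch) // closing lets the consumer's range loop terminate
	}()
	for recvMsg := range ch { // main Go routine: the consumer
		fmt.Println(recvMsg)
	}
}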
The Go producer-consumer solution uses the main Go routine as the consumer and creates a new, unnamed Go routine as the producer. The two Go routines are connected with the channel ch. This channel can queue up to three int values. The statement ch := make(chan int, 3) creates the channel, the statement ch <- produceMessage() sends a value into the channel, and the statement recvMsg := range ch receives a value from the channel.[10] The allocation of memory resources, the allocation of processing resources, and the synchronization of resources are done by the programming language automatically.
Leslie Lamport documented a bounded buffer producer-consumer solution for one producer and one consumer: "We assume that the buffer can hold at most b messages, b >= 1. In our solution, we let k be a constant greater than b, and let s and r be integer variables assuming values between 0 and k-1. We assume that initially s=r and the buffer is empty."
"By choosing k to be a multiple of b, the buffer can be implemented as an array B[0: b - 1]. The producer simply puts each new message into B[s mod b], and the consumer takes each message from B[r mod b]."[11] The algorithm is shown below, generalized for infinite k.
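A reconstruction of the algorithm in the Algol-like style of the paper; with infinite k, the counters s and r only grow, and s - r is the number of buffered messages. The skip loops are the busy waiting discussed next.

producer:
    loop:
        while s - r = b do skip;        -- busy wait: buffer full
        B[s mod b] := next produced message;
        s := s + 1;

consumer:
    loop:
        while s - r = 0 do skip;        -- busy wait: buffer empty
        consume message B[r mod b];
        r := r + 1;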
The Lamport solution uses busy waiting in the thread instead of waiting in the scheduler. It neglects the impact of a scheduler thread switch at an inconvenient time: if the first thread has read a variable value from memory, the scheduler switches to a second thread that changes the value, and the scheduler later switches back, then the first thread continues to use the stale value, not the current one. Atomic read-modify-write operations solve this problem. Modern C++ offers atomic variables and operations for multi-thread programming. The following busy waiting C++11 solution for one producer and one consumer uses the atomic read-modify-write operations fetch_add and fetch_sub on the atomic variable count.
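A sketch of such a C++11 program; the buffer capacity and the number of messages are illustrative. Only count is shared between the threads, so only it needs to be atomic.

#include <atomic>
#include <iostream>
#include <thread>

constexpr int N = 8;            // buffer capacity
int buffer[N];                  // the bounded buffer
std::atomic<int> count{0};      // number of filled slots, shared by both threads

void producer() {
    int tail = 0;               // thread-local circular buffer index
    for (int i = 0; i < 100; ++i) {
        while (count.load() == N) { /* busy wait: buffer full */ }
        buffer[tail] = i;
        tail = (tail + 1) % N;
        count.fetch_add(1);     // atomic read-modify-write
    }
}

void consumer() {
    int head = 0;               // thread-local circular buffer index
    for (int i = 0; i < 100; ++i) {
        while (count.load() == 0) { /* busy wait: buffer empty */ }
        std::cout << buffer[head] << '\n';
        head = (head + 1) % N;
        count.fetch_sub(1);     // atomic read-modify-write
    }
}

int main() {
    std::thread p(producer), c(consumer);
    p.join();
    c.join();
}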
The circular buffer index variables head and tail are thread-local and therefore not relevant for memory consistency. The variable count controls the busy waiting of the producer and consumer thread.
|
https://en.wikipedia.org/wiki/Producers-consumers_problem
|
In computer science, the readers–writers problems are examples of a common computing problem in concurrency.[1] There are at least three variations of the problems, which deal with situations in which many concurrent threads of execution try to access the same shared resource at one time.
Some threads may read and some may write, with the constraint that no thread may access the shared resource for either reading or writing while another thread is in the act of writing to it. (In particular, we want to prevent more than one thread modifying the shared resource simultaneously and allow for two or more readers to access the shared resource at the same time.) A readers–writer lock is a data structure that solves one or more of the readers–writers problems.
The basic reader–writers problem was first formulated and solved by Courtois et al.[2][3]
Suppose we have a shared memory area (critical section) with the basic constraints detailed above. It is possible to protect the shared data behind a mutual exclusion mutex, in which case no two threads can access the data at the same time. However, this solution is sub-optimal: it is possible that a reader R1 might have the lock, and then another reader R2 requests access. It would be foolish for R2 to wait until R1 was done before starting its own read operation; instead, R2 should be allowed to read the resource alongside R1, because reads don't modify data, so concurrent reads are safe. This is the motivation for the first readers–writers problem, in which the constraint is added that no reader shall be kept waiting if the share is currently opened for reading. This is also called readers-preference, with its solution:
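A reconstruction of the classic semaphore pseudocode for the readers-preference solution; P and V are the semaphore operations referred to below, and <ENTRY Section> and <EXIT Section> are the guarded regions around the critical section.

semaphore resource = 1;     // controls access to the shared resource
semaphore mutex = 1;        // protects readcount in the entry/exit sections
int readcount = 0;

writer:
    resource.P();           // lock the shared resource
    <write to the resource>
    resource.V();           // release the resource

reader:
    mutex.P();              // <ENTRY Section>: one reader at a time here
    readcount := readcount + 1;
    if readcount = 1 then resource.P();   // first reader locks writers out
    mutex.V();

    <read the resource>

    mutex.P();              // <EXIT Section>
    readcount := readcount - 1;
    if readcount = 0 then resource.V();   // last reader admits writers
    mutex.V();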
In this solution of the readers/writers problem, the first reader must lock the resource (shared file) if it is available. Once the file is locked from writers, it may be used by many subsequent readers without requiring them to lock it again.
Before entering the critical section, every new reader must go through the entry section. However, there may only be a single reader in the entry section at a time. This is done to avoid race conditions on the readers (in this context, a race condition is a condition in which two or more threads wake up simultaneously and try to enter the critical section; without further constraint, the behavior is nondeterministic, e.g. two readers increment the readcount at the same time, and both try to lock the resource, causing one reader to block). To accomplish this, every reader which enters the <ENTRY Section> will lock the <ENTRY Section> for themselves until they are done with it. At this point the readers are not locking the resource. They are only locking the entry section so no other reader can enter it while they are in it. Once the reader is done executing the entry section, it will unlock it by signaling the mutex. Signaling it is equivalent to mutex.V() in the above code. The same is valid for the <EXIT Section>: there can be no more than a single reader in the exit section at a time, therefore, every reader must claim and lock the exit section for themselves before using it.
Once the first reader is in the entry section, it will lock the resource. Doing this will prevent any writers from accessing it. Subsequent readers can just utilize the locked (from writers) resource. The reader to finish last (indicated by the readcount variable) must unlock the resource, thus making it available to writers.
In this solution, every writer must claim the resource individually. This means that a stream of readers can subsequently lock all potential writers out and starve them. This is because after the first reader locks the resource, no writer can lock it before it is released, and it will only be released by the last reader. Hence, this solution does not satisfy fairness.
The first solution is suboptimal, because it is possible that a reader R1 might have the lock, a writer W be waiting for the lock, and then a reader R2 requests access. It would be unfair for R2 to jump in immediately, ahead of W; if that happened often enough, W would starve. Instead, W should start as soon as possible. This is the motivation for the second readers–writers problem, in which the constraint is added that no writer, once added to the queue, shall be kept waiting longer than absolutely necessary. This is also called writers-preference.
A solution to the writers-preference scenario is:[2]
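A reconstruction of the Courtois et al. semaphore pseudocode for writers-preference; the semaphore and counter names match the description below.

int readcount = 0, writecount = 0;
semaphore rmutex = 1, wmutex = 1, readtry = 1, resource = 1;

writer:
    wmutex.P();                 // protect writecount
    writecount := writecount + 1;
    if writecount = 1 then readtry.P();   // first writer locks readers out
    wmutex.V();

    resource.P();               // exclusive access to the resource
    <write to the resource>
    resource.V();

    wmutex.P();
    writecount := writecount - 1;
    if writecount = 0 then readtry.V();   // last writer lets readers try
    wmutex.V();

reader:
    readtry.P();                // readers wait here while writers are active
    rmutex.P();                 // protect readcount
    readcount := readcount + 1;
    if readcount = 1 then resource.P();   // first reader locks the resource
    rmutex.V();
    readtry.V();

    <read the resource>

    rmutex.P();
    readcount := readcount - 1;
    if readcount = 0 then resource.V();   // last reader releases the resource
    rmutex.V();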
In this solution, preference is given to the writers. This is accomplished by forcing every reader to lock and release the readtry semaphore individually. The writers, on the other hand, don't need to lock it individually. Only the first writer will lock the readtry, and then all subsequent writers can simply use the resource as it gets freed by the previous writer. The very last writer must release the readtry semaphore, thus opening the gate for readers to try reading.
No reader can engage in the entry section if the readtry semaphore has been set by a writer previously. The reader must wait for the last writer to unlock the resource and readtry semaphores. On the other hand, if a particular reader has locked the readtry semaphore, this will indicate to any potential concurrent writer that there is a reader in the entry section. So the writer will wait for the reader to release readtry and then the writer will immediately lock it for itself and all subsequent writers. However, the writer will not be able to access the resource until the current reader has released the resource, which only occurs after the reader is finished with the resource in the critical section.
The resource semaphore can be locked by both the writer and the reader in their entry section. They are only able to do so after first locking the readtry semaphore, which can only be done by one of them at a time.
The writer will then take control of the resource as soon as the current reader is done reading and lock all future readers out. All subsequent readers will wait at the readtry semaphore for the writers to be finished with the resource and to open the gate by releasing readtry.
The rmutex and wmutex are used in exactly the same way as in the first solution. Their sole purpose is to avoid race conditions on the readers and writers while they are in their entry or exit sections.
In fact, the solutions implied by both problem statements can result in starvation: the first one may starve writers in the queue, and the second one may starve readers. Therefore, the third readers–writers problem is sometimes proposed, which adds the constraint that no thread shall be allowed to starve; that is, the operation of obtaining a lock on the shared data will always terminate in a bounded amount of time.
A solution with fairness for both readers and writers might be as follows:
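One common formulation uses a serviceQueue semaphore so that readers and writers are admitted in arrival order; the names are illustrative, and the reader's exit section is unchanged from the first solution.

int readcount = 0;
semaphore resource = 1, rmutex = 1, serviceQueue = 1;

writer:
    serviceQueue.P();           // wait in line to be serviced
    resource.P();               // request exclusive access
    serviceQueue.V();           // let the next thread in line be serviced
    <write to the resource>
    resource.V();

reader:
    serviceQueue.P();           // wait in line to be serviced
    rmutex.P();                 // protect readcount
    readcount := readcount + 1;
    if readcount = 1 then resource.P();   // first reader locks the resource
    serviceQueue.V();           // let the next thread in line be serviced
    rmutex.V();

    <read the resource>

    rmutex.P();
    readcount := readcount - 1;
    if readcount = 0 then resource.V();   // last reader releases the resource
    rmutex.V();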
This solution satisfies the condition that "no thread shall be allowed to starve" only if semaphores preserve first-in first-out ordering when blocking and releasing threads. Otherwise, a blocked writer, for example, may remain blocked indefinitely while a cycle of other writers decrement the semaphore before it can.
The simplest reader-writer solution uses only two semaphores and doesn't need an array of readers to read the data in the buffer.
Please notice that this solution gets simpler than the general case because it is made equivalent to the bounded buffer problem, and therefore only N readers are allowed to enter in parallel, N being the size of the buffer. The initial values of the read and write semaphores are 0 and N, respectively.
In the writer, the value of the write semaphore is given to the read semaphore, and in the reader, the value of read is given to write on completion of the loop.
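A hedged reconstruction consistent with this description; the transfer loops are one plausible reading of "the value of write semaphore is given to read semaphore".

semaphore read = 0, write = N;

writer:
    for i := 1 to N do write.P();   // wait until no reader holds a slot
    <write the data>
    for i := 1 to N do read.V();    // hand all N slots to the readers

reader:
    read.P();                       // take one of the N reader slots
    <read the data>
    write.V();                      // return the slot to the writer side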
|
https://en.wikipedia.org/wiki/Readers-writers_problem
|
In computer science, the sleeping barber problem is a classic inter-process communication and synchronization problem that illustrates the complexities that arise when there are multiple operating system processes.[1]
The problem was originally proposed in 1965 by computer science pioneer Edsger Dijkstra,[2] who used it to make the point that general semaphores are often superfluous.[3]
Imagine a hypothetical barbershop with one barber, one barber chair, and a waiting room with n chairs (n may be 0) for waiting customers. The following rules apply:[4] if there are no customers, the barber sleeps in his chair; if a customer arrives and the barber is asleep, the customer wakes him up; if a customer arrives and the barber is cutting hair, the customer sits in a free waiting-room chair; and if no waiting-room chair is free, the customer leaves.
There are two main complications. First, there is a risk of a race condition, in which the barber sleeps while a customer waits to be called for a haircut; it arises because all of the actions (checking the waiting room, entering the shop, taking a waiting room chair) take a certain amount of time. Specifically, a customer may arrive to find the barber cutting hair, so they return to the waiting room to take a seat; but while they walk back, the barber finishes the haircut, goes to the waiting room, finds it empty (because the customer walks slowly or went to the restroom), and thus goes to sleep in the barber chair. Second, another problem may occur when two customers arrive at the same time when there is only one empty seat in the waiting room and both try to sit in the single chair; only the first person to get to the chair will be able to sit.
A multiple sleeping barbers problem has the additional complexity of coordinating several barbers among the waiting customers.[6]
There are several possible solutions, but all solutions require a mutex, which ensures that only one of the participants can change state at once. The barber must acquire the room status mutex before checking for customers and release it when they begin either to sleep or cut hair; a customer must acquire it before entering the shop and release it once they are sitting in a waiting room or barber chair, and also when they leave the shop because no seats were available. This takes care of both of the problems mentioned above. Several semaphores are also required to indicate the state of the system. For example, one might store the number of people in the waiting room.
The following pseudocode guarantees synchronization between barber and customer and is deadlock-free, but may lead to starvation of a customer. The problem of starvation can be solved with a first-in first-out (FIFO) queue. The semaphore would provide two functions: wait() and signal(), which in terms of C code would correspond to P() and V(), respectively.[citation needed]
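A reconstruction of that pseudocode as it is commonly presented; N is the number of waiting-room chairs, and the names are illustrative.

semaphore barberReady = 0      // the barber is ready to cut hair
semaphore accessWRSeats = 1    // mutex for the free waiting-room seat count
semaphore custReady = 0        // a customer is ready in the waiting room
int numberOfFreeWRSeats = N

Barber:
    while true:
        custReady.wait()           // sleep until a customer is ready
        accessWRSeats.wait()       // lock the seat count
        numberOfFreeWRSeats += 1   // a waiting-room chair frees up
        barberReady.signal()       // call the customer to the barber chair
        accessWRSeats.signal()     // unlock the seat count
        // cut hair here

Customer:
    accessWRSeats.wait()           // lock the seat count
    if numberOfFreeWRSeats > 0:
        numberOfFreeWRSeats -= 1   // take a waiting-room chair
        custReady.signal()         // wake the barber if he is asleep
        accessWRSeats.signal()     // unlock the seat count
        barberReady.wait()         // wait until the barber is ready
        // have hair cut here
    else:
        accessWRSeats.signal()     // unlock the seat count
        // leave without a haircut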
|
https://en.wikipedia.org/wiki/Sleeping_barber_problem
|
stat() is a Unix system call that queries the file system for metadata about a file (including special files such as directories). The metadata contains many fields including type, size, ownership, permissions and timestamps.
For example, the ls command uses this system call to retrieve timestamps:
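A hypothetical invocation and output; the file, size, and timestamp are illustrative. The date and time column comes from the modification timestamp reported by stat().

$ ls -l /etc/hosts
-rw-r--r-- 1 root root 221 Mar  4 09:15 /etc/hosts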
stat() appeared in Version 1 Unix. It is among the few original Unix system calls to change, with Version 4's addition of group permissions and larger file size.[1]
Since at least 2004, the same-named shell command stat has been available for Linux to expose features of the system call via a command-line interface.[2]
The C POSIX library header sys/stat.h, found on POSIX and other Unix-like operating systems, declares stat() and related functions.
Each function accepts a pointer to a struct stat buffer which the function loads with information about the specified file. As is typical for system calls, each function returns 0 on success, or, on failure, sets errno to indicate the failure condition and returns −1.
The stat() and lstat() functions accept a path argument that specifies a file. If the path identifies a symbolic link, stat() returns attributes of the link target, whereas lstat() returns attributes of the link itself. The fstat() function accepts a file descriptor argument instead of a path, and returns attributes of the file that it identifies.
The functions were extended to support large files. The functions stat64(), lstat64() and fstat64() load information into a struct stat64 buffer, which supports 64-bit sizes, allowing them to work with files 2 GiB and larger (up to 8 EiB). When the _FILE_OFFSET_BITS macro is defined to 64, the 64-bit functions are available under the original names.
The metadata structure is defined in the sys/stat.h header. The following shows the base fields, but an implementation is free to define additional fields:[3]
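A representative declaration with the base fields; exact member types and ordering vary by implementation.

struct stat {
    dev_t           st_dev;     /* ID of device containing the file */
    ino_t           st_ino;     /* inode (serial) number */
    mode_t          st_mode;    /* file type and permission bits */
    nlink_t         st_nlink;   /* number of hard links */
    uid_t           st_uid;     /* user ID of the file's owner */
    gid_t           st_gid;     /* group ID of the file's group */
    dev_t           st_rdev;    /* device ID, for special files */
    off_t           st_size;    /* file size in bytes */
    struct timespec st_atim;    /* time of last access */
    struct timespec st_mtim;    /* time of last data modification */
    struct timespec st_ctim;    /* time of last status change */
    blksize_t       st_blksize; /* preferred I/O block size */
    blkcnt_t        st_blocks;  /* number of blocks allocated */
};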
POSIX.1 does not require the st_rdev, st_blocks and st_blksize members; these fields are defined as part of the XSI option in the Single Unix Specification.
In older versions of the POSIX.1 standard, the time-related fields were defined as st_atime, st_mtime and st_ctime, and were of type time_t. Since the 2008 version of the standard, these fields were renamed to st_atim, st_mtim and st_ctim, respectively, of type struct timespec, since this structure provides a higher resolution time unit. For the sake of compatibility, implementations can define the old names in terms of the tv_sec member of struct timespec. For example, st_atime can be defined as st_atim.tv_sec.[3]
Fields include the device and inode numbers, the file type and permission bits, the hard link count, the owner's user and group IDs, the file size, and the three timestamps.
An example C application that logs information about each path passed via the command line. It uses stat() to query the system for the information.
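A minimal sketch of such a program; the selection of fields printed is illustrative.

#include <stdint.h>
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

int main(int argc, char *argv[])
{
    for (int i = 1; i < argc; i++) {
        struct stat sb;
        if (stat(argv[i], &sb) == -1) {   /* returns -1 and sets errno on failure */
            perror(argv[i]);
            continue;
        }
        printf("%s: inode %ju, %jd bytes, mode %o, modified %s",
               argv[i],
               (uintmax_t)sb.st_ino,
               (intmax_t)sb.st_size,
               (unsigned)(sb.st_mode & 07777),
               ctime(&sb.st_mtime));       /* ctime() output ends with a newline */
    }
    return 0;
}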
|
https://en.wikipedia.org/wiki/Stat_(Unix)
|
In computing, netstat is a command-line network utility that displays open network sockets, routing tables, and a number of network interface (network interface controller or software-defined network interface) and network protocol statistics. It is available on Unix, Plan 9, Inferno, and Unix-like operating systems including macOS, Linux, Solaris and BSD. It is also available on IBM OS/2 and on Microsoft Windows NT-based operating systems including Windows XP, Windows Vista, Windows 7, Windows 8 and Windows 10.
It is used to find problems in the network and to determine the amount of traffic on the network as a performance measurement.[1] On Linux this program is mostly obsolete, although still included in many distributions.
On Linux, netstat (part of "net-tools") is superseded by ss (part of iproute2). The replacement for netstat -r is ip route, the replacement for netstat -i is ip -s link, and the replacement for netstat -g is ip maddr, all of which are recommended instead.[2][3][4][5]
Netstat provides statistics for each connection: the protocol, the local address and port, the foreign address and port, and the connection state.
Parameters used with this command must be prefixed with a hyphen (-) rather than a slash (/). Some parameters are not supported on all platforms.
On macOS, BSD systems, Linux distributions, and Microsoft Windows:
To display the statistics for only the TCP or UDP protocols, type one of the following commands:
netstat -sp tcp
netstat -sp udp
On Unix-like systems:
To display all ports open by a process with id pid:
netstat -aop | grep "pid"
To continuously display open TCP and UDP connections numerically and also which program is using them on Linux:
netstat -nutpacw
On Microsoft Windows:
To display active TCP connections and the process IDs every 5 seconds, type the following command (works on NT-based systems only, or Windows 2000 with hotfix):
netstat -o 5
To display active TCP connections and the process IDs using numerical form, type the following command (works on NT-based systems only, or Windows 2000 with hotfix):
netstat -no
Netstat uses an asterisk (*) as a wildcard meaning "any". Example output:
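A representative excerpt; the addresses and ports shown are illustrative.

Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
tcp        0      0  *:smtp                 *:*                    LISTEN
tcp        0      0  10.0.0.5:ssh           10.0.0.7:52815         ESTABLISHED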
Under "Local Address" *, in*:smtp, means the process is listening on all of the network interfaces the machine has for the port mapped as smtp (see /etc/services for service resolution). This can also be shown as 0.0.0.0.
The first *, in *:*, means connections can come from any IP address, and the second *, in *:*, means the connection can originate from any port on the remote machine.
Some versions of netstat lack explicit field delimiters in their printf-generated output, leading to numeric fields running together and thus corrupting the output data.
Under Linux, raw data can often be obtained from /proc/net/dev to work around the printf output corruption arising in netstat's network interface statistics summary, netstat -i, until the problem is corrected.[citation needed]
On the Windows platform, netstat information can be retrieved by calling the GetTcpTable and GetUdpTable functions in the IP Helper API, or IPHLPAPI.DLL. Information returned includes local and remote IP addresses, local and remote ports, and (for GetTcpTable) TCP status codes. In addition to the command-line netstat.exe tool that ships with Windows, GUI-based netstat programs are available.
On the Windows platform, this command is available only if the Internet Protocol (TCP/IP) protocol is installed as a component in the properties of a network adapter in Network Connections.
On the Windows platform running Remote Desktop Services (formerly Terminal Services), it will only show connections for the current user, not for the whole computer.
On macOS, the /System/Library/CoreServices/Applications folder (or /Applications/Utilities in OS X Mountain Lion and earlier) contains a network GUI utility called Network Utility, the Netstat tab of which runs the netstat command and displays its output in the tab.
|
https://en.wikipedia.org/wiki/Netstat
|
A name–value pair, also called an attribute–value pair, key–value pair, or field–value pair, is a fundamental data representation in computing systems and applications. Designers often desire an open-ended data structure that allows for future extension without modifying existing code or data. In such situations, all or part of the data model may be expressed as a collection of 2-tuples in the form <attribute name, value> with each element being an attribute–value pair. Depending on the particular application and the implementation chosen by programmers, attribute names may or may not be unique.
Some of the applications where information is represented as name–value pairs are electronic mail message headers, query strings in URLs, and configuration files.
Some computer languages implement name–value pairs, or more frequently collections of attribute–value pairs, as a standard language feature. Most of these implement the general model of an associative array: an unordered list of unique attributes with associated values. As a result, they are not fully general; they cannot be used, for example, to implement electronic mail headers (which are ordered and non-unique).
In some applications, a name–value pair has a value that contains a nested collection of attribute–value pairs. Some data serialization formats such as JSON support arbitrarily deep nesting.[2] Other data representations are restricted to one level of nesting, such as the INI file's section/name/value.
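For example, in the following JSON document (the names and values are illustrative), the value of the "address" name is itself a nested collection of name–value pairs:

{
  "name": "Alice",
  "address": {
    "city": "Zurich",
    "zip": "8000"
  }
}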
|
https://en.wikipedia.org/wiki/Attribute%E2%80%93value_pair
|