An interpreter directive is a computer language construct (on some systems better described as an aspect of the system's executable file format) that controls which interpreter parses and interprets the instructions in a computer program.[1] In Unix, Linux and other Unix-like operating systems, the first two bytes of a file can be the characters "#!", a magic number (hexadecimal 23 and 21, the ASCII values of "#" and "!") commonly referred to as a shebang. They prefix the first line of a script, with the remainder of the line being a command; that line was limited to about 14 characters when the mechanism was introduced and to roughly 80 characters by 2016. If the file system permissions on the script (a file) include an execute permission bit for the user invoking it by its filename (often found through the command search path), the directive tells the operating system which interpreter (usually a program that implements a scripting language) to use to execute the script's contents, which may be batch commands or may be intended for interactive use. An example is #!/bin/bash, meaning: run this script with the bash shell found in the /bin directory.[2][3][4][5][6][7] Other systems or files may use some other magic number as the interpreter directive.
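As an illustration, here is a minimal sketch assuming PowerShell Core is installed and that /usr/bin/env can find it under the name pwsh (any interpreter can be named the same way; the file name hello.ps1 is only an example):

    #!/usr/bin/env pwsh
    # hello.ps1 - the line above is the interpreter directive; pwsh itself treats it as a comment.
    Write-Output "Running under PowerShell $($PSVersionTable.PSVersion)"

After the file is marked executable (chmod +x hello.ps1), invoking ./hello.ps1 causes the loader to read the directive and run the named interpreter with the script's path as an argument.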
https://en.wikipedia.org/wiki/Interpreter_directive
A read–eval–print loop (REPL), also termed an interactive toplevel or language shell, is a simple interactive computer programming environment that takes single user inputs, executes them, and returns the result to the user; a program written in a REPL environment is executed piecewise.[1] The term usually refers to programming interfaces similar to the classic Lisp machine interactive environment. Common examples include command-line shells and similar environments for programming languages, and the technique is very characteristic of scripting languages.[2]

In 1964, the expression READ-EVAL-PRINT cycle was used by L. Peter Deutsch and Edmund Berkeley for an implementation of Lisp on the PDP-1.[3] Just one month later, Project Mac published a report by Joseph Weizenbaum (the creator of ELIZA, the world's first chatbot) describing a REPL-based language, called OPL-1, implemented in his Fortran-SLIP language on the Compatible Time Sharing System (CTSS).[4][5][6] The 1974 Maclisp reference manual by David A. Moon attests "Read-eval-print loop" on page 89, but does not use the acronym REPL.[7] Since at least the 1980s, the abbreviations REP Loop and REPL have been attested in the context of Scheme.[8][9]

In a REPL, the user enters one or more expressions (rather than an entire compilation unit) and the REPL evaluates them and displays the results.[1] The name read–eval–print loop comes from the names of the Lisp primitive functions, read, eval, and print, which implement this functionality. The development environment then returns to the read state, creating a loop, which terminates when the program is closed. REPLs facilitate exploratory programming and debugging because the programmer can inspect the printed result before deciding what expression to provide for the next read. The read–eval–print loop involves the programmer more frequently than the classic edit–compile–run–debug cycle.

Because the print function outputs in the same textual format that the read function uses for input, most results are printed in a form that could be copied and pasted back into the REPL. However, it is sometimes necessary to print representations of elements that cannot sensibly be read back in, such as a socket handle or a complex class instance. In these cases, there must exist a syntax for unreadable objects. In Python, it is the <__module__.class instance> notation, and in Common Lisp, the #<whatever> form. The REPLs of CLIM, SLIME, and the Symbolics Lisp Machine can also read back unreadable objects: they record for each output which object was printed, and when the code is later read back, the object is retrieved from the printed output.

REPLs can be created to support any text-based language. REPL support for compiled languages is usually achieved by implementing an interpreter on top of a virtual machine which provides an interface to the compiler. For example, starting with JDK 9, Java includes JShell as a command-line interface to the language. Various other languages have third-party tools available for download that provide similar shell interaction with the language. As a shell, a REPL environment allows users to access relevant features of an operating system in addition to providing access to programming capabilities. The most common use for REPLs outside of operating system shells is interactive prototyping.[10] Other uses include mathematical calculation, creating documents that integrate scientific analysis (e.g. IPython), interactive software maintenance, benchmarking, and algorithm exploration. A minimal definition is a recursive function that reads an expression, evaluates it, prints the result, and calls itself again, where env represents the initial evaluation environment.
It is also assumed that env can be destructively updated by eval. Typical Lisp REPLs layer further functionality on top of this minimal loop; a sketch of such a loop is given below.
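The following is a minimal read-eval-print loop sketched in PowerShell rather than Lisp (an informal illustration of the loop described above, not the article's definition; Invoke-Expression stands in for eval):

    # Read a line, evaluate it, print the result, and repeat until the user types 'exit'.
    while ($true) {
        Write-Host -NoNewline 'repl> '
        $line = [Console]::ReadLine()              # read
        if ($null -eq $line -or $line -eq 'exit') { break }
        try {
            Invoke-Expression $line | Out-Host     # eval, then print
        } catch {
            Write-Host "Error: $($_.Exception.Message)"
        }
    }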
https://en.wikipedia.org/wiki/Read-eval-print_loop
The Run command on an operating system such as Microsoft Windows or a Unix-like system is used to directly open an application or document whose path is known. The command functions more or less like a single-line command-line interface. In the GNOME interface, the Run command is used to run applications via terminal commands; it can be accessed by pressing Alt+F2. KDE has similar functionality called KRunner, accessible via the same key binding. The Multics shell includes a run command to run a command in an isolated environment.[1] The DEC TOPS-10[2] and TOPS-20[3] Command Processor included a RUN command for running executable programs. In the BASIC programming language, RUN is used to start program execution from direct mode, or to start an overlay program from a loader program. Starting with Windows 95, the Run command is accessible through the Start menu and also through the shortcut key ⊞ Win+R. Although the Run command is still present in Windows Vista and later, it no longer appears directly on the Start menu by default, in favor of the new search box and a shortcut to the Run command in the Windows System sub-menu. The Run command is launched in the GNOME and KDE desktop environments by pressing Alt+F2. Uses include bringing up webpages; for example, if a user were to bring up the Run command and type in http://www.example.com/, the user's default web browser would open that page. This allows the user to launch not only the http protocol but also all URI schemes registered in the OS and the applications associated with them, such as mailto and file. In GNOME and KDE, the Run command acts as a location where applications and commands can be executed.
https://en.wikipedia.org/wiki/Run_command
Agraphical user interface, orGUI[a], is a form ofuser interfacethat allowsuserstointeract with electronic devicesthroughgraphicaliconsand visual indicators such assecondary notation. In many applications, GUIs are used instead oftext-based UIs, which are based on typed command labels or text navigation. GUIs were introduced in reaction to the perceived steeplearning curveofcommand-line interfaces(CLIs),[4][5][6]which require commands to be typed on acomputer keyboard. The actions in a GUI are usually performed throughdirect manipulationof the graphical elements.[7][8][9]Beyond computers, GUIs are used in many handheldmobile devicessuch asMP3players, portable media players, gaming devices,smartphonesand smaller household, office andindustrial controls. The termGUItends not to be applied to other lower-display resolutiontypes of interfaces, such asvideo games(wherehead-up displays(HUDs)[10]are preferred), or not including flat screens likevolumetric displays[11]because the term is restricted to the scope of2Ddisplay screens able to describe generic information, in the tradition of thecomputer scienceresearch at theXerox Palo Alto Research Center. Designing the visual composition and temporal behavior of a GUI is an important part ofsoftware applicationprogramming in the area ofhuman–computer interaction. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a storedprogram, a design discipline namedusability. Methods of user-centered design are used to ensure that the visual language introduced in the design is well-tailored to the tasks. The visible graphical interface features of an application are sometimes referred to aschromeorGUI.[12][13][14]Typically, users interact with information by manipulating visualwidgetsthat allow for interactions appropriate to the kind of data they hold. The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of users. Amodel–view–controllerallows flexible structures in which the interface is independent of and indirectly linked to application functions, so the GUI can be customized easily. This allows users to select or design a differentskinorthemeat will, and eases the designer's work to change the interface as user needs evolve. Good GUI design relates to users more, and to system architecture less. Large widgets, such aswindows, usually provide a frame or container for the main presentation content such as a web page, email message, or drawing. Smaller ones usually act as a user-input tool. A GUI may be designed for the requirements of avertical marketas application-specific GUIs. Examples includeautomated teller machines(ATM),point of sale(POS) touchscreens at restaurants,[15]self-service checkoutsused in a retail store, airline self-ticket and check-in, information kiosks in a public space, like a train station or a museum, and monitors or control screens in an embedded industrial application which employ areal-time operating system(RTOS). Cell phonesand handheld game systems also employ application specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation multimedia center combinations. A GUI uses a combination of technologies and devices to provide a platform that users can interact with, for the tasks of gathering and producing information. A series of elements conforming avisual languagehave evolved to represent information stored in computers. 
This makes it easier for people with few computer skills to work with and use computer software. The most common combination of such elements in GUIs is the windows, icons, text fields, canvases, menus, pointer (WIMP) paradigm, especially in personal computers.[16]

The WIMP style of interaction uses a virtual input device to represent the position of a pointing device's interface, most often a mouse, and presents information organized in windows and represented with icons. Available commands are compiled together in menus, and actions are performed by making gestures with the pointing device. A window manager facilitates the interactions between windows, applications, and the windowing system. The windowing system handles hardware devices such as pointing devices and graphics hardware, and the positioning of the pointer.

In personal computers, all these elements are modeled through a desktop metaphor to produce a simulation called a desktop environment, in which the display represents a desktop, on which documents and folders of documents can be placed. Window managers and other software combine to simulate the desktop environment with varying degrees of realism.

Entries may appear in a list to make space for text and details, or in a grid for compactness and larger icons with little space underneath for text. Variations in between exist, such as a list with multiple columns of items and a grid of items with rows of text extending sideways from the icon.[17] Multi-row and multi-column layouts commonly found on the web are "shelf" and "waterfall". The former is found on image search engines, where images appear with a fixed height but variable length, and is typically implemented with the CSS property and parameter display: inline-block;. A waterfall layout, found on Imgur and TweetDeck, with fixed width but variable height per item, is usually implemented by specifying column-width:.

Smaller mobile devices such as personal digital assistants (PDAs) and smartphones typically use the WIMP elements with different unifying metaphors, due to constraints in space and available input devices. Applications for which WIMP is not well suited may use newer interaction techniques, collectively termed post-WIMP UIs.[18] As of 2011, some touchscreen-based operating systems such as Apple's iOS (iPhone) and Android use the class of GUIs named post-WIMP. These support styles of interaction using more than one finger in contact with a display, which allows actions such as pinching and rotating, which are unsupported by one pointer and mouse.[19]

Human interface devices for efficient interaction with a GUI include a computer keyboard, especially used together with keyboard shortcuts, and pointing devices for cursor (or rather pointer) control: mouse, pointing stick, touchpad, trackball, joystick, virtual keyboards, and head-up displays (translucent information devices at eye level). There are also actions performed by programs that affect the GUI; for example, components like inotify or D-Bus facilitate communication between computer programs.

Ivan Sutherland developed Sketchpad in 1963, widely held as the first graphical computer-aided design program. It used a light pen to create and manipulate objects in engineering drawings in real time with coordinated graphics. In the late 1960s, researchers at the Stanford Research Institute, led by Douglas Engelbart, developed the On-Line System (NLS), which used text-based hyperlinks manipulated with a then-new device: the mouse. (A 1968 demonstration of NLS became known as "The Mother of All Demos".)
In the 1970s, Engelbart's ideas were further refined and extended to graphics by researchers atXerox PARCand specificallyAlan Kay, who went beyond text-based hyperlinks and used a GUI as the main interface for theSmalltalk programming language, which ran on theXerox Altocomputer, released in 1973. Most modern general-purpose GUIs are derived from this system. The Xerox PARC GUI consisted of graphical elements such aswindows,menus,radio buttons, andcheck boxes. The concept oficonswas later introduced byDavid Canfield Smith, who had written a thesis on the subject under the guidance of Kay.[20][21][22]The PARC GUI employs apointing devicealong with a keyboard. These aspects can be emphasized by using the alternative term and acronym forwindows, icons, menus,pointing device(WIMP). This effort culminated in the 1973Xerox Alto, the first computer with a GUI, though the system never reached commercial production. The first commercially available computer with a GUI was the 1979PERQ workstation, manufactured by Three Rivers Computer Corporation. Its design was heavily influenced by the work at Xerox PARC. In 1981, Xerox eventually commercialized the ideas from the Alto in the form of a new and enhanced system – the Xerox 8010 Information System – more commonly known as theXerox Star.[23][24]These early systems spurred many other GUI efforts, includingLisp machinesbySymbolicsand other manufacturers, theApple Lisa(which presented the concept ofmenu barandwindow controls) in 1983, theAppleMacintosh 128Kin 1984, and theAtari STwithDigital Research'sGEM, and CommodoreAmigain 1985.Visi Onwas released in 1983 for theIBM PC compatiblecomputers, but was never popular due to its high hardware demands.[25]Nevertheless, it was a crucial influence on the contemporary development ofMicrosoft Windows.[26] Apple, Digital Research, IBM and Microsoft used many of Xerox's ideas to develop products, and IBM'sCommon User Accessspecifications formed the basis of the GUIs used in Microsoft Windows, IBMOS/2Presentation Manager, and the UnixMotiftoolkit andwindow manager. These ideas evolved to create the interface found in current versions of Microsoft Windows, and in variousdesktop environmentsforUnix-likeoperating systems, such as macOS andLinux. Thus most current GUIs have largely common idioms. GUIs were a hot topic in the early 1980s. TheApple Lisawas released in 1983, and various windowing systems existed forDOSoperating systems (includingPC GEMandPC/GEOS). Individual applications for many platforms presented their own GUI variants.[27]Despite the GUI's advantages, many reviewers questioned the value of the entire concept,[28]citing hardware limits and problems in finding compatible software. In 1984, Applereleased a television commercialwhich introduced the Apple Macintosh during the telecast ofSuper Bowl XVIIIbyCBS,[29]withallusionstoGeorge Orwell's noted novelNineteen Eighty-Four. The goal of the commercial was to make people think about computers, identifying the user-friendly interface as a personal computer which departed from prior business-oriented systems,[30]and becoming a signature representation of Apple products.[31] In 1985,Commodorereleased theAmiga 1000, along withWorkbenchandKickstart 1.0(which containedIntuition). This interface ran as a separate task, meaning it was very responsive and, unlike other GUIs of the time, it didn't freeze up when a program was busy. Additionally, it was the first GUI to introduce something resemblingVirtual Desktops. 
Windows 95, accompanied by an extensive marketing campaign,[32]was a major success in the marketplace at launch and shortly became the most popular desktop operating system.[33] In 2007, with theiPhone[34]and later in 2010 with the introduction of theiPad,[35]Apple popularized the post-WIMP style of interaction formulti-touchscreens, and those devices were considered to be milestones in the development ofmobile devices.[36][37] The GUIs familiar to most people as of the mid-late 2010s areMicrosoft Windows,macOS, and theX Window Systeminterfaces for desktop and laptop computers, andAndroid, Apple'siOS,Symbian,BlackBerry OS,Windows Phone/Windows 10 Mobile,Tizen,WebOS, andFirefox OSfor handheld (smartphone) devices.[38][39] People said it's more of a right-brain machine and all that—I think there is some truth to that. I think there is something to dealing in a graphical interface and a more kinetic interface—you're reallymovinginformation around, you're seeing it move as though it had substance. And you don't see that on a PC. The PC is very much of a conceptual machine; you move information around the way you move formulas, elements on either side of an equation. I think there's a difference. Since the commands available in command line interfaces can be many, complex operations can be performed using a short sequence of words and symbols. Custom functions may be used to facilitate access to frequent actions. Command-line interfaces are morelightweight, as they only recall information necessary for a task; for example, no preview thumbnails or graphical rendering of web pages. This allows greater efficiency and productivity once many commands are learned.[4]But reaching this level takes some time because the command words may not be easily discoverable ormnemonic. Also, using the command line can become slow and error-prone when users must enter long commands comprising many parameters or several different filenames at once. However,windows, icons, menus, pointer(WIMP) interfaces present users with manywidgetsthat represent and can trigger some of the system's available commands. GUIs can be made quite hard when dialogs are buried deep in a system or moved about to different places during redesigns. Also, icons and dialog boxes are usually harder for users to script. WIMPs extensively usemodes, as the meaning of all keys and clicks on specific positions on the screen are redefined all the time. Command-line interfaces use modes only in limited forms, such as for current directory andenvironment variables. Most modernoperating systemsprovide both a GUI and some level of a CLI, although the GUIs usually receive more attention. GUI wrappers find a way around thecommand-line interfaceversions (CLI) of (typically)LinuxandUnix-likesoftware applications and theirtext-based UIsor typed command labels. While command-line or text-based applications allow users to run a program non-interactively, GUI wrappers atop them avoid the steeplearning curveof the command-line, which requires commands to be typed on thekeyboard. By starting a GUI wrapper,userscan intuitivelyinteractwith, start, stop, and change its working parameters, through graphicaliconsand visual indicators of adesktop environment, for example. Applications may also provide both interfaces, and when they do the GUI is usually a WIMP wrapper around the command-line version. This is especially common with applications designed forUnix-likeoperating systems. 
The latter used to be implemented first because it allowed the developers to focus exclusively on their product's functionality without bothering about interface details such as designing icons and placing buttons. Designing programs this way also allows users to run the program in ashell script. Many environments and games use the methods of3D graphicsto project 3D GUI objects onto the screen. The use of 3D graphics has become increasingly common in mainstream operating systems (ex.Windows Aero, andAqua(macOS)) to create attractive interfaces, termed eye candy (which includes, for example, the use ofdrop shadowsunderneath windows and thecursor), or for functional purposes only possible using three dimensions. For example, user switching is represented by rotating a cube with faces representing each user's workspace, and window management is represented via aRolodex-style flipping mechanism inWindows Vista(seeWindows Flip 3D). In both cases, the operating system transforms windows on-the-fly while continuing to update the content of those windows. The GUI is usually WIMP-based, although occasionally other metaphors surface, such as those used inMicrosoft Bob, 3dwm, File System Navigator,File System Visualizer, 3D Mailbox,[41][42]andGopherVR.Zooming(ZUI) is a related technology that promises to deliver the representation benefits of 3D environments without their usability drawbacks of orientation problems and hidden objects. In 2006,Hillcrest Labsintroduced the first ZUI for television.[43]Other innovations include the menus on thePlayStation 2; the menus on theXbox; Sun'sProject Looking Glass;Metisse, which was similar to Project Looking Glass;[44]BumpTop, where users can manipulate documents and windows with realistic movement and physics as if they were physical documents;Croquet OS, which is built for collaboration;[45]andcompositing window managerssuch asEnlightenmentandCompiz.Augmented realityandvirtual realityalso make use of 3D GUI elements.[46] 3D GUIs have appeared inscience fictionliterature andfilms, even before certain technologies were feasible or in common use.[47]
https://en.wikipedia.org/wiki/Graphical_user_interface#Comparison_to_other_interfaces
In the Beginning... Was the Command Lineis an essay byNeal Stephensonwhich was originally published online in 1999 and later made available in book form (November 1999,ISBN978-0380815937). The essay is a commentary on why the proprietaryoperating systemsbusiness is unlikely to remain profitable in the future because of competition fromfree software. It also analyzes the corporate/collective culture of theMicrosoft,Apple Computer, andfree softwarecommunities. Stephenson explores theGraphical user interface(GUI) as a metaphor in terms of the increasing interposition of abstractions between humans and the actual workings of devices (in a similar manner toZen and the Art of Motorcycle Maintenance) and explains the beautyhackersfeel in good-quality tools. He does this with acaranalogy. He compares four operating systems,Mac OSbyApple Computerto a luxury European car,WindowsbyMicrosoftto astation wagon,Linuxto a freetank, andBeOSto abatmobile. Stephenson argues that people continue to buy the station wagon despite free tanks being given away, because people do not want to learn how to operate a tank; they know that the station wagon dealership has a machine shop that they can take their car to when it breaks down. Because of this attitude, Stephenson argues that Microsoft is not really a monopoly, as evidenced by the free availability of other choice OSes, but rather has simply accrued enoughmindshareamong the people to have them coming back. He compares Microsoft toDisney, in that both are selling a vision to their customers, who in turn "want to believe" in that vision. Stephenson relays his experience with theDebian bug tracking system(#6518). He then contrasts it with Microsoft's approach. Debian developers responded from around the world within a day. He was completely frustrated with his initial attempt to achieve the same response from Microsoft, but he concedes that his subsequent experience was satisfactory. The difference he notes is that Debian developers are personally accessible and transparently own up to defects in their OS distribution, while Microsoft pretends errors don't exist. The essay was written before the advent ofMac OS X. A recurring theme is the full power of the command line compared with easier-to-learn graphical user interfaces (GUIs) which are described as broken mixed metaphors for 'power users'. He then mentions GUIs that have traditional terminals in windows. In aSlashdotinterview in 2004, in response to the question: ... have you embraced the new UNIX based MacOS X as the OS you want to use when you "Just want to go to Disneyland"? he replied: I embracedOS Xas soon as it was available and have never looked back. So a lot ofIn the Beginning...was the Command Lineis now obsolete. I keep meaning to update it, but if I'm honest with myself, I have to say this is unlikely.[1] With Neal Stephenson's permission, Garrett Birkel responded toIn the Beginning...was the Command Linein 2004, bringing it up to date and critically discussing Stephenson's argument.[2]Birkel's response is interspersed throughout the original text, which remains untouched.
https://en.wikipedia.org/wiki/In_the_Beginning..._Was_the_Command_Line
In computer programming, glue code is code that allows otherwise incompatible components to interoperate. The adapter pattern describes glue code as a software design pattern. Glue code describes language bindings or foreign function interfaces such as the Java Native Interface (JNI). Glue code may be written to access existing libraries, map objects to a database using object-relational mapping, or integrate commercial off-the-shelf programs. Glue code may be written in the same language as the code it is gluing together, or in a separate glue language. Glue code can be key to rapid prototyping.
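A small, hypothetical sketch in PowerShell: the function below is glue code that adapts the .NET System.IO.Compression library to a simple command, so callers need not know the underlying class names (Compress-Folder and its parameters are invented for this example):

    # Load the .NET assembly containing the ZipFile class.
    Add-Type -AssemblyName System.IO.Compression.FileSystem

    function Compress-Folder {
        param(
            [Parameter(Mandatory)] [string] $Path,         # folder to compress
            [Parameter(Mandatory)] [string] $Destination   # .zip file to create
        )
        # The glue: map a simple command surface onto the library call.
        [System.IO.Compression.ZipFile]::CreateFromDirectory($Path, $Destination)
    }

    Compress-Folder -Path ./logs -Destination ./logs.zip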
https://en.wikipedia.org/wiki/Glue_code
In computing, a shebang is the character sequence #!, consisting of the characters number sign (also known as sharp or hash) and exclamation mark (also known as bang), at the beginning of a script. It is also called sharp-exclamation, sha-bang,[1][2] hashbang,[3][4] pound-bang,[5][6] or hash-pling.[7]

When a text file with a shebang is used as if it were an executable in a Unix-like operating system, the program loader mechanism parses the rest of the file's initial line as an interpreter directive. The loader executes the specified interpreter program, passing to it as an argument the path that was initially used when attempting to run the script, so that the program may use the file as input data.[8] For example, if a script is named with the path path/to/script and it starts with the line #! /bin/sh, then the program loader is instructed to run the program /bin/sh, passing path/to/script as the first argument. The shebang line is usually ignored by the interpreter, because the "#" character is a comment marker in many scripting languages; some language interpreters that do not use the hash mark to begin comments may still ignore the shebang line in recognition of its purpose.[9]

The form of a shebang interpreter directive is #! interpreter [optional-arg], in which interpreter is a path to an executable program.[8] The space between #! and interpreter is optional, and there can be any number of spaces or tabs either before or after interpreter. The optional-arg includes any extra spaces up to the end of the line.

In Linux, the file specified by interpreter can be executed if it has execute rights and is of a supported executable type. On Linux and Minix, an interpreter can also be a script: a chain of shebangs and wrappers yields a directly executable file that gets the encountered scripts as parameters in reverse order. For example, if file /bin/A is an executable file in ELF format, file /bin/B contains the shebang #! /bin/A optparam, and file /bin/C contains the shebang #! /bin/B, then executing file /bin/C resolves to /bin/B /bin/C, which finally resolves to /bin/A optparam /bin/B /bin/C. In Solaris- and Darwin-derived operating systems (such as macOS), the file specified by interpreter must be an executable binary and cannot itself be a script.[10]

Typical shebang lines name interpreters such as /bin/sh, /bin/bash, or /usr/bin/env python3. Shebang lines may include specific options that are passed to the interpreter. However, implementations vary in the parsing behavior of options; for portability, only one option should be specified, without any embedded whitespace.[11] Further portability guidelines are found below.

Interpreter directives allow scripts and data files to be used as commands, hiding the details of their implementation from users and other programs, by removing the need to prefix scripts with their interpreter on the command line. For example, consider a script having the initial line #! /bin/sh -x. It may be invoked simply by giving its file path, such as some/path/to/foo,[12] and some parameters, such as bar and baz: some/path/to/foo bar baz. In this case /bin/sh is invoked in its place, with parameters -x, some/path/to/foo, bar, and baz, as if the original command had been /bin/sh -x some/path/to/foo bar baz. Most interpreters make any additional arguments available to the script. If /bin/sh is a POSIX-compatible shell, then bar and baz are presented to the script as the positional parameter array "$@", and individually as the parameters "$1" and "$2" respectively. Because the initial # is the character used to introduce comments in the POSIX shell language (and in the languages understood by many other interpreters), the whole shebang line is ignored by the interpreter.
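To make the argument passing concrete, here is a hedged sketch using a PowerShell Core script rather than the /bin/sh example above (the script path and the availability of pwsh are assumptions); the loader appends the script path and the user's arguments, and the interpreter exposes the extra arguments to the script, here via $args:

    #!/usr/bin/env pwsh
    # Invoked as:  some/path/to/foo bar baz
    # pwsh ignores the directive line above as a comment, then receives "bar" and "baz".
    Write-Output "first argument:  $($args[0])"    # bar
    Write-Output "second argument: $($args[1])"    # baz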
However, it is up to the interpreter to ignore the shebang line, and not all do so; thus, a two-line script whose interpreter directive names a program that simply copies its input (such as /bin/cat) outputs both of its lines when run.

When compared to the use of global association lists between file extensions and the interpreting applications, the interpreter directive method allows users to use interpreters not known at a global system level and without administrator rights. It also allows specific selection of an interpreter without overloading the filename extension namespace (where one file extension refers to more than one file type), and allows the implementation language of a script to be changed without changing its invocation syntax by other programs. Invokers of the script need not know what the implementation language is, as the script itself is responsible for specifying the interpreter to use.

Shebangs must specify absolute paths (or paths relative to the current working directory) to system executables; this can cause problems on systems that have a non-standard file system layout. Even when systems have fairly standard paths, it is quite possible for variants of the same operating system to place the desired interpreter in different locations. Python, for example, might be in /usr/bin/python3, /usr/local/bin/python3, or even something like /home/username/bin/python3 if installed by an ordinary user. A similar problem exists for the POSIX shell, since POSIX only required its name to be sh but did not mandate a path. A common value is /bin/sh, but some systems such as Solaris have the POSIX-compatible shell at /usr/xpg4/bin/sh.[13] In many Linux systems, /bin/sh is a hard or symbolic link to /bin/bash, the Bourne Again shell (bash), and using bash-specific syntax while maintaining a shebang pointing to sh is not portable.[14]

Because of this, it is sometimes necessary to edit the shebang line after copying a script from one computer to another, because the path coded into the script may not apply on the new machine, depending on how consistently interpreters have been placed by past convention. For this reason, and because POSIX does not standardize path names, POSIX does not standardize the feature.[15] The GNU Autoconf tool can test for system support with the macro AC_SYS_INTERPRETER.[16]

Often, the program /usr/bin/env can be used to circumvent this limitation by introducing a level of indirection: #! is followed by /usr/bin/env, followed by the desired command without a full path, as in #!/usr/bin/env sh. This mostly works because the path /usr/bin/env is commonly used for the env utility, which invokes the first sh found in the user's $PATH, typically /bin/sh. This particular example (using sh) is of limited utility: neither /bin/sh nor /usr/bin/env is universal, with similar numbers of devices lacking each. More broadly, using #!/usr/bin/env for any script still has some portability issues with OpenServer 5.0.6 and Unicos 9.0.2, which have only /bin/env and no /usr/bin/env. Using #!/usr/bin/env results in run-time indirection, which has the potential to degrade system security; for this reason some commentators recommend against its use[17] in packaged software, reserving it for "educational examples".

Command arguments are split in different ways across platforms. Some systems do not split up the arguments; for example, when a script's first line names /usr/bin/env followed by python3 -c, all text after the first space is treated as a single argument, that is, python3 -c is passed as one argument to /usr/bin/env rather than as two arguments. Such systems include Linux[18][19] and Cygwin.
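A minimal sketch of the env indirection described above, again assuming a PowerShell Core script (pwsh need only be somewhere on the user's $PATH, not at a fixed location):

    #!/usr/bin/env pwsh
    # env searches $PATH for "pwsh", so this script runs whether pwsh is installed in
    # /usr/bin, /usr/local/bin, or a per-user directory.
    Write-Output "Interpreter resolved to: $((Get-Command pwsh).Source)"

Note that, per the splitting behavior just described, placing interpreter options after the command name on this line is not portable.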
Another approach is the use of a wrapper. FreeBSD 6.0 (2005) introduced a -S option to its env when it changed the shebang-reading behavior to non-splitting; this option tells env to split the string itself.[20] The GNU env utility since coreutils 8.30 (2018) also includes this feature.[21] Although using this option mitigates the portability issue of splitting on the kernel end, it adds the requirement that env support this particular extension.

Another problem is scripts containing a carriage return character immediately after the shebang line, perhaps as a result of being edited on a system that uses DOS line breaks, such as Microsoft Windows. Some systems interpret the carriage return character as part of the interpreter command, resulting in an error message.[22]

The shebang is actually a human-readable instance of a magic number in the executable file, the magic byte string being 0x23 0x21, the two-character encoding in ASCII of #!. This magic number is detected by the "exec" family of functions, which determine whether a file is a script or an executable binary. The presence of the shebang results in the execution of the specified executable, usually an interpreter for the script's language. It has been claimed[23] that some old versions of Unix expect the normal shebang to be followed by a space and a slash (#! /), but this appears to be untrue;[11] rather, blanks after the shebang have traditionally been allowed, and are sometimes documented with a space, as described in a 1980 historical email.

The shebang characters are represented by the same two bytes in extended ASCII encodings, including UTF-8, which is commonly used for scripts and other text files on current Unix-like systems. However, UTF-8 files may begin with the optional byte order mark (BOM); if the "exec" function specifically detects the bytes 0x23 and 0x21, then the presence of the BOM (0xEF 0xBB 0xBF) before the shebang will prevent the script interpreter from being executed. Some authorities recommend against using the byte order mark in POSIX (Unix-like) scripts,[24] for this reason and for wider interoperability and philosophical concerns. Additionally, a byte order mark is not necessary in UTF-8, as that encoding does not have endianness issues; it serves only to identify the encoding as UTF-8.[24]

An executable file starting with an interpreter directive is simply called a script, often prefaced with the name or general classification of the intended interpreter. The name shebang for the distinctive two characters may have come from an inexact contraction of SHArp bang or haSH bang, referring to the two typical Unix names for them. Another theory on the sh in shebang is that it comes from the default shell sh, usually invoked with a shebang.[25] This usage was current by December 1989,[26] and probably earlier.

The shebang was introduced by Dennis Ritchie between Edition 7 and Edition 8 of Unix at Bell Laboratories. It was also added to the BSD releases from Berkeley's Computer Systems Research Group (present in 2.8BSD[27] and activated by default in 4.2BSD). As AT&T Bell Laboratories Edition 8 Unix and later editions were not released to the public, the first widely known appearance of this feature was on BSD.
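Where env supports it, the -S option mentioned above can be used to pass an option to the interpreter despite the kernel's single-argument behavior. A sketch (pwsh and its -NoProfile flag are used purely as an example; this requires FreeBSD 6.0+ env or GNU coreutils 8.30+):

    #!/usr/bin/env -S pwsh -NoProfile
    # With -S, env itself splits "pwsh -NoProfile" into two arguments before executing it;
    # without -S, many kernels would hand env the whole string as a single program name.
    Write-Output 'Started without loading profile scripts.'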
The lack of an interpreter directive, but support for shell scripts, is apparent in the documentation from Version 7 Unix in 1979,[28] which instead describes a facility of the Bourne shell whereby files with execute permission would be handled specially by the shell, which would (sometimes depending on initial characters in the script, such as ":" or "#") spawn a subshell to interpret and run the commands contained in the file. In this model, scripts would only behave as other commands if called from within a Bourne shell. An attempt to directly execute such a file via the operating system's own exec() system call would fail, preventing scripts from behaving uniformly as normal system commands.

In later versions of Unix-like systems, this inconsistency was removed. Dennis Ritchie introduced kernel support for interpreter directives in January 1980, for Version 8 Unix, and documented the change at the time;[27] he did not, however, give the feature a name.[30] Kernel support for interpreter directives spread to other versions of Unix, and one modern implementation can be seen in the Linux kernel source in fs/binfmt_script.c.[31]

This mechanism allows scripts to be used in virtually any context in which normal compiled programs can be, including as full system programs and even as interpreters of other scripts. As a caveat, though, some early versions of kernel support limited the length of the interpreter directive to roughly 32 characters (just 16 in its first implementation), would fail to split the interpreter name from any parameters in the directive, or had other quirks. Additionally, some modern systems allow the entire mechanism to be constrained or disabled for security purposes (for example, set-user-id support has been disabled for scripts on many systems). Note that, even on systems with full kernel support for the #! magic number, some scripts lacking interpreter directives (although usually still requiring execute permission) are still runnable by virtue of the legacy script handling of the Bourne shell, still present in many of its modern descendants. Such scripts are then interpreted by the user's default shell.
https://en.wikipedia.org/wiki/Shebang_(Unix)
PowerShell is a shell program developed by Microsoft for task automation and configuration management. As is typical for a shell, it provides a command-line interpreter for interactive use and a script interpreter for automation via a language defined for it. Originally only for Windows, and known as Windows PowerShell, it was made open-source and cross-platform on August 18, 2016, with the introduction of PowerShell Core.[9] The former is built on the .NET Framework; the latter on .NET (previously .NET Core). PowerShell is bundled with current versions of Windows and can be installed on macOS and Linux.[9] Since Windows 10 build 14971, PowerShell has replaced Command Prompt as the default command shell exposed by File Explorer.[10][11]

In PowerShell, administrative tasks are generally performed via cmdlets (pronounced command-lets), which are specialized .NET classes implementing a particular operation. These work by accessing data in different data stores, like the file system or Windows Registry, which are made available to PowerShell via providers. Third-party developers can add cmdlets and providers to PowerShell.[12][13] Cmdlets may be used by scripts, which may in turn be packaged into modules. Cmdlets work in tandem with the .NET API.

PowerShell's support for .NET Remoting, WS-Management, CIM, and SSH enables administrators to perform administrative tasks on both local and remote Windows systems. PowerShell also provides a hosting API with which the PowerShell runtime can be embedded inside other applications. These applications can then use PowerShell functionality to implement certain operations, including those exposed via the graphical interface. This capability has been used by Microsoft Exchange Server 2007 to expose its management functionality as PowerShell cmdlets and providers, and to implement the graphical management tools as PowerShell hosts which invoke the necessary cmdlets.[12][14] Other Microsoft applications, including Microsoft SQL Server 2008, also expose their management interface via PowerShell cmdlets.[15]

PowerShell includes its own extensive, console-based help (similar to man pages in Unix shells), accessible via the Get-Help cmdlet. Updated local help contents can be retrieved from the Internet via the Update-Help cmdlet. Alternatively, help from the web can be acquired on a case-by-case basis via the -Online switch to Get-Help.

Shell programs, including PowerShell, trace their lineage to shells in older operating systems such as MS-DOS and Xenix, which exposed system functionality to the user almost exclusively via a command-line interface (CLI) – although MS-DOS 5 also came with a complementary graphical DOS Shell. The Windows 9x family came bundled with COMMAND.COM, the command-line environment of MS-DOS. The Windows NT and Windows CE families, however, came with the newer cmd.exe – a significant upgrade from COMMAND.COM. Both environments provide a CLI for internal and external commands and automation via batch files – a relatively primitive scripting language. To address the limitations of these shells – including the inability to directly use a software component exposed via COM – Microsoft introduced the Windows Script Host in 1998 with Windows 98, along with its command-line based host, cscript.exe. It integrates with the Active Script engine and allows scripts to be written in compatible languages, such as JScript and VBScript. These scripts can use COM components directly, but the technology has relatively inaccessible documentation and gained a reputation as a system vulnerability vector after several high-profile computer viruses exploited weaknesses in its security provisions.
Different versions of Windows provided various special-purpose command-line interpreters (such asnetshandWMIC) with their own command sets but they were not interoperable.Windows Server 2003further attempted to improve the command-line experience but scripting support was still unsatisfactory.[16] By the late 1990s,Intelhad come to Microsoft asking for help in making Windows, which ran on Intel CPUs, a more appropriate platform to support the development of future Intel CPUs. At the time, Intel CPU development was accomplished onSun Microsystemscomputers which ranSolaris(aUnixvariant) onRISC-architecture CPUs. The ability to run Intel's manyKornShellautomation scripts on Windows was identified as a key capability. Internally, Microsoft began an effort to create a Windows port of Korn Shell, which was code-named Kermit.[17]Intel ultimately pivoted to aLinux-based development platform that could run on Intel CPUs, rendering the Kermit project redundant. However, with a fully funded team, Microsoft program managerJeffrey Snoverrealized there was an opportunity to create a more general-purpose solution to Microsoft's problem of administrative automation. By 2002, Microsoft had started to develop a new approach to command-line management, including a CLI called Monad (also known asMicrosoft Shellor MSH). The ideas behind it were published in August 2002 in a white paper called the "Monad Manifesto" by its chief architect,Jeffrey Snover.[18]In a 2017 interview, Snover explains the genesis of PowerShell, saying that he had been trying to makeUnixtools available on Windows, which didn't work due to "core architectural difference[s] between Windows and Linux". Specifically, he noted thatLinuxconsiders everything atext file, whereas Windows considers everything an "APIthat returns structured data". They were fundamentally incompatible, which led him to take a different approach.[19] Monad was to be a new extensible CLI with a fresh design capable of automating a range of core administrative tasks. Microsoft first demonstrated Monad publicly at the Professional Development Conference in Los Angeles in October 2003. A few months later, they opened up private beta, which eventually led to a public beta. Microsoft published the first Monad publicbeta releaseon June 17, 2005, and the Beta 2 on September 11, 2005, and Beta 3 on January 10, 2006. On April 25, 2006, not long after the initial Monad announcement, Microsoft announced that Monad had been renamedWindows PowerShell, positioning it as a significant part of its management technology offerings.[20]Release Candidate (RC) 1 of PowerShell was released at the same time. A significant aspect of both the name change and the RC was that this was now a component of Windows, rather than a mere add-on. Release Candidate 2 of PowerShell version 1 was released on September 26, 2006, with finalrelease to the webon November 14, 2006. PowerShell for earlier versions of Windows was released on January 30, 2007.[21]PowerShell v2.0 development began before PowerShell v1.0 shipped. During the development, Microsoft shipped threecommunity technology previews (CTP). Microsoft made these releases available to the public. The last CTP release of Windows PowerShell v2.0 was made available in December 2008. PowerShell v2.0 was completed and released to manufacturing in August 2009, as an integral part of Windows 7 and Windows Server 2008 R2. 
Versions of PowerShell for Windows XP, Windows Server 2003, Windows Vista and Windows Server 2008 were released in October 2009 and are available for download for both 32-bit and 64-bit platforms.[22] In an October 2009 issue of TechNet Magazine, Microsoft called proficiency with PowerShell "the single most important skill a Windows administrator will need in the coming years".[23] Windows 10 shipped with Pester, a script validation suite for PowerShell.[24]

On August 18, 2016, Microsoft announced[25] that it had made PowerShell open-source and cross-platform, with support for Windows, macOS, CentOS and Ubuntu.[9] The source code was published on GitHub.[26] The move to open source created a second incarnation of PowerShell called "PowerShell Core", which runs on .NET Core. It is distinct from "Windows PowerShell", which runs on the full .NET Framework.[27] Starting with version 5.1, PowerShell Core is bundled with Windows Server 2016 Nano Server.[28][29]

A project named Pash, a pun on the widely known "bash" Unix shell, was an open-source and cross-platform reimplementation of PowerShell via the Mono framework.[30] Pash was created by Igor Moochnick, written in C#, and released under the GNU General Public License. Pash development stalled in 2008, was restarted on GitHub in 2012,[31] and finally ceased in 2016 when PowerShell was officially made open-source and cross-platform.[32]

A key design goal for PowerShell was to leverage the large number of APIs that already existed in Windows, Windows Management Instrumentation, the .NET Framework, and other software. PowerShell cmdlets generally wrap and expose existing functionality instead of implementing new functionality. The intent was to provide an administrator-friendly, more consistent interface between administrators and a wide range of underlying functionality. With PowerShell, an administrator does not need to know .NET, WMI, or low-level API coding, and can instead focus on using the cmdlets exposed by PowerShell. In this regard, PowerShell creates little new functionality, instead focusing on making existing functionality more accessible to a particular audience.[33]

PowerShell's developers based the core grammar of the tool on that of the POSIX 1003.2 KornShell.[34] However, PowerShell's language was also influenced by PHP, Perl, and many other existing languages.[35]

PowerShell can execute four kinds of named commands: cmdlets, PowerShell scripts, PowerShell functions, and standalone executable programs.[36] If a command is a standalone executable program, PowerShell launches it in a separate process; if it is a cmdlet, it executes within the PowerShell process. PowerShell provides an interactive command-line interface, where commands can be entered and their output displayed. The user interface offers customizable tab completion. PowerShell enables the creation of aliases for cmdlets, which PowerShell textually translates into invocations of the original commands. PowerShell supports both named and positional parameters for commands. In executing a cmdlet, the job of binding the argument value to the parameter is done by PowerShell itself, but for external executables, arguments are parsed by the external executable independently of PowerShell interpretation.[37]

The PowerShell Extended Type System (ETS) is based on the .NET type system, but with extended semantics (for example, propertySets and third-party extensibility). For example, it enables the creation of different views of objects by exposing only a subset of the data fields, properties, and methods, as well as specifying custom formatting and sorting behavior.
These views are mapped to the original object using XML-based configuration files.[38]

A cmdlet is a .NET class that derives either from Cmdlet or from PSCmdlet, the latter being used when the cmdlet needs to interact with the PowerShell runtime.[39] The base classes specify methods – BeginProcessing(), ProcessRecord() and EndProcessing() – which a cmdlet overrides to provide functionality based on the events that these functions represent. ProcessRecord() is called if the object receives pipeline input.[40] If a collection of objects is piped, the method is invoked for each object in the collection. The cmdlet class must have the attribute CmdletAttribute, which specifies the verb and the noun that make up the name of the cmdlet. A cmdlet name follows a Verb-Noun naming pattern, such as Get-ChildItem, which tends to make it self-documenting.[39] Common verbs are provided as an enum.[41][42]

If a cmdlet receives either pipeline input or command-line parameter input, there must be a corresponding property in the class, with a mutator implementation. PowerShell invokes the mutator with the parameter value or pipeline input, which the mutator implementation saves in class variables. These values are then referred to by the methods which implement the functionality. Properties that map to command-line parameters are marked by ParameterAttribute[43] and are set before the call to BeginProcessing(). Those which map to pipeline input are also flanked by ParameterAttribute, but with the ValueFromPipeline attribute parameter set.[44]

A cmdlet can use any .NET API and may be written in any .NET language. In addition, PowerShell makes certain APIs available, such as WriteObject(), which is used to access PowerShell-specific functionality, such as writing objects to the pipeline. A cmdlet can use a .NET data access API directly or use the PowerShell infrastructure of providers, which make data stores addressable using unique paths. Data stores are exposed using drive letters, and hierarchies within them are addressed as directories. PowerShell ships with providers for the file system, the registry, the certificate store, as well as the namespaces for command aliases, variables, and functions.[45] PowerShell also includes various cmdlets for managing Windows systems, including the file system, or using Windows Management Instrumentation to control Windows components. Other applications can register cmdlets with PowerShell, thus allowing it to manage them, and, if they enclose any data store (such as a database), they can add specific providers as well.

A cmdlet can be added to the shell via modules or, before v2, snap-ins. Users are not limited to the cmdlets included in the base PowerShell installation, and the number of cmdlets included in the base installation has grown with each version.

To enable pipeline semantics, similar to the Unix pipeline, a cmdlet receives input and outputs results as objects. If a cmdlet outputs multiple objects, each object of the collection is passed through the pipeline before the next object is processed.[39] A PowerShell pipeline enables complex logic using the pipe (|) operator to connect stages. However, the PowerShell pipeline differs from Unix pipelines in that stages execute within the PowerShell runtime rather than as a set of processes coordinated by the operating system. Additionally, structured .NET objects, rather than byte streams, are passed from one stage to the next.
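A short sketch of the object pipeline just described (the cmdlets used here appear in the base installation; the file filter is only an example):

    # Each stage receives .NET objects, so later stages filter and sort on typed
    # properties (here, Length) rather than parsing text output.
    Get-ChildItem -File |
        Where-Object Length -gt 1MB |
        Sort-Object Length -Descending |
        Select-Object Name, Length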
Using objects and executing stages within the PowerShell runtime eliminates the need to serialize data structures, or to extract them by explicitly parsing text output.[50] An object can also encapsulate certain functions that work on the contained data, which become available to the recipient command for use.[51][52] For the last cmdlet in a pipeline, PowerShell automatically pipes its output object to the Out-Default cmdlet, which transforms the objects into a stream of format objects and then renders those to the screen.[53][54]

Because a PowerShell object is a .NET object, it has a .ToString() method, which is used to serialize object state. In addition, PowerShell allows formatting definitions to be specified, so the text representation of objects can be customized by choosing which data elements to display, and in what manner. However, in order to maintain backward compatibility, if an external executable is used in a pipeline, it receives a text stream representing the object, instead of directly integrating with the PowerShell type system.[55][56][57]

PowerShell includes a dynamically typed scripting language which can implement complex operations using cmdlets imperatively. The language supports variables, functions, branching (if-then-else), loops (while, do, for, and foreach), structured error/exception handling and closures/lambda expressions,[58] as well as integration with .NET. Variables in PowerShell scripts are prefixed with $. Variables can be assigned any value, including the output of cmdlets. Strings can be enclosed either in single quotes or in double quotes: when using double quotes, variables are expanded even if they are inside the quotation marks. Enclosing the path to a file in braces preceded by a dollar sign (as in ${C:\foo.txt}) creates a reference to the contents of the file. If it is used as an L-value, anything assigned to it will be written to the file; when used as an R-value, the contents of the file will be read. If an object is assigned, it is serialized before being stored.

Object members can be accessed using . notation, as in C# syntax. PowerShell provides special variables, such as $args, which is an array of all the command-line arguments passed to a function from the command line, and $_, which refers to the current object in the pipeline.[59] PowerShell also provides arrays and associative arrays. The PowerShell language also evaluates arithmetic expressions entered on the command line immediately, and it parses common abbreviations, such as GB, MB, and KB.[60][61]

Using the function keyword, PowerShell provides for the creation of functions. Beyond simple functions, PowerShell allows for advanced functions that support named parameters, positional parameters, switch parameters and dynamic parameters.[62] A defined function is invoked by name, with arguments supplied positionally or by parameter name (both forms appear in the sketch below).[62] PowerShell allows any static .NET method to be called by providing its namespace and class enclosed in brackets ([]), followed by a pair of colons (::) and the method name.[63] There are dozens of ways to create objects in PowerShell; once created, the properties and instance methods of an object can be accessed using the . notation.[63]

PowerShell accepts strings, both raw and escaped. A string enclosed between single quotation marks is a raw string, while a string enclosed between double quotation marks is an escaped string.
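A brief sketch of the language features described above (Get-Square and the variable values are invented for this illustration):

    $name = 'world'
    "Hello, $name"         # double quotes: $name is expanded
    'Hello, $name'         # single quotes: taken literally

    function Get-Square { param([int] $n) $n * $n }   # a simple function
    Get-Square -n 12                                   # invocation with a named parameter
    Get-Square 12                                      # invocation with a positional parameter

    [Math]::Sqrt(2)        # calling a static .NET method: [Namespace.Class]::Method()
    100MB / 1KB            # numeric suffixes such as GB, MB, and KB are parsed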
PowerShell treats straight and curly quotation marks as equivalent.[64] A set of special characters, written as backtick escape sequences, is supported by PowerShell.[65]

For error handling, PowerShell provides a .NET-based exception-handling mechanism. In case of errors, objects containing information about the error (Exception objects) are thrown and caught using the try ... catch construct (although a trap construct is supported as well). PowerShell can be configured to silently resume execution without actually throwing the exception; this can be done for a single command, a single session, or perpetually.[66]

Scripts written using PowerShell can be made to persist across sessions in either a .ps1 file or a .psm1 file (the latter is used to implement a module). Later, either the entire script or individual functions in the script can be used. Scripts and functions operate analogously to cmdlets, in that they can be used as commands in pipelines and parameters can be bound to them. Pipeline objects can be passed between functions, scripts, and cmdlets seamlessly. To prevent unintentional running of scripts, script execution is disabled by default and must be enabled explicitly.[67] Enabling of scripts can be performed at the system, user or session level. PowerShell scripts can be signed to verify their integrity, and are subject to Code Access Security.[68]

The PowerShell language supports binary prefix notation similar to the scientific notation supported by many programming languages in the C family.[69]

PowerShell can also be embedded in a management application, which uses the PowerShell runtime to implement its management functionality. For this, PowerShell provides a managed hosting API. Via the APIs, the application can instantiate a runspace (one instantiation of the PowerShell runtime), which runs in the application's process and is exposed as a Runspace object.[12] The state of the runspace is encased in a SessionState object. When the runspace is created, the PowerShell runtime initializes the instantiation, including initializing the providers and enumerating the cmdlets, and updates the SessionState object accordingly. The runspace must then be opened for either synchronous or asynchronous processing, after which it can be used to execute commands.

To execute a command, a pipeline (represented by a Pipeline object) must be created and associated with the runspace. The pipeline object is then populated with the cmdlets that make up the pipeline. For sequential operations (as in a PowerShell script), a Pipeline object is created for each statement and nested inside another Pipeline object.[12] When a pipeline is created, PowerShell invokes the pipeline processor, which resolves the cmdlets into their respective assemblies (the command processor), adds a reference to them to the pipeline, and associates them with InputPipe, OutputPipe and ErrorOutputPipe objects to represent the connection with the pipeline. The types are verified and parameters bound using reflection.[12] Once the pipeline is set up, the host calls the Invoke() method to run the commands, or its asynchronous equivalent, InvokeAsync(). If the pipeline has the Write-Host cmdlet at its end, it writes the result onto the console screen; if not, the results are handed over to the host, which might apply further processing or display the output itself.

Microsoft Exchange Server 2007 uses the hosting APIs to provide its management GUI.
Each operation exposed in the GUI is mapped to a sequence of PowerShell commands (or pipelines). The host creates the pipeline and executes them. In fact, the interactive PowerShell console itself is a PowerShell host, which interprets the scripts entered at the command line and creates the necessary Pipeline objects and invokes them.[citation needed] DSC allows for declaratively specifying how a software environment should be configured.[70] Upon running a configuration, DSC will ensure that the system gets the state described in the configuration. DSC configurations are idempotent. The Local Configuration Manager (LCM) periodically polls the system using the control flow described by resources (imperative pieces of DSC) to make sure that the state of a configuration is maintained. All major releases are still supported, and each major release has featured backwards compatibility with preceding versions.[dubious–discuss] Initially using the code name "Monad", PowerShell was first shown publicly at the Professional Developers Conference in October 2003 in Los Angeles. Named Windows PowerShell, version 1.0 was released in November 2006 for Windows XP SP2, Windows Server 2003 SP1 and Windows Vista[71] and as an optional component of Windows Server 2008. Version 2.0 integrates with Windows 7 and Windows Server 2008 R2[72] and is released for Windows XP with Service Pack 3, Windows Server 2003 with Service Pack 2, and Windows Vista with Service Pack 1.[73][74] The version includes changes to the language and hosting API, in addition to including more than 240 new cmdlets.[75][76] New features include:[77][78][79] Version 3.0 integrates with Windows 8, Windows Server 2012, Windows 7 with Service Pack 1, Windows Server 2008 with Service Pack 1, and Windows Server 2008 R2 with Service Pack 1.[84][85] Version 3.0 is part of a larger package, Windows Management Framework 3.0 (WMF3), which also contains the WinRM service to support remoting.[85] Microsoft made several Community Technology Preview releases of WMF3. An early community technology preview 2 (CTP 2) version of Windows Management Framework 3.0 was released on December 2, 2011.[86] Windows Management Framework 3.0 was released for general availability in December 2012[87] and is included with Windows 8 and Windows Server 2012 by default.[88] New features include:[85][89]: 33–34 Version 4.0 integrates with Windows 8.1, Windows Server 2012 R2, Windows 7 SP1, Windows Server 2008 R2 SP1 and Windows Server 2012.[90] New features include: Version 5.0 was re-released with Windows Management Framework (WMF) 5.0 on February 24, 2016, following an initial release with a severe bug.[94] Key features included: Version 5.1 was released along with the Windows 10 Anniversary Update[97] on August 2, 2016, and in Windows Server 2016.[98] PackageManagement now supports proxies, PSReadLine now has ViMode support, and two new cmdlets were added: Get-TimeZone and Set-TimeZone. The LocalAccounts module allows for adding/removing local user accounts.[99] A preview was released for Windows 7, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2 on July 16, 2016,[100] and the final version was released on January 19, 2017.[101] Version 5.1 is the first to come in two editions, "Desktop" and "Core". The "Desktop" edition is the continuation of the traditional product line that uses the .NET Framework, and the "Core" edition runs on .NET Core and is bundled with Windows Server 2016 Nano Server.
In exchange for smaller footprint, the latter lacks some features such as the cmdlets to manage clipboard or join a computer to a domain, WMI version 1 cmdlets, Event Log cmdlets and profiles.[29]This was the final version exclusively for Windows. Version 5.1 remains pre-installed on Windows 10, Windows 11 and Windows Server 2022, while the .NET version needs to be installed separately and can run side-by-side with the .NET Framework version.[102][103] Renamed to PowerShell Core, version 6.0 was first announced on August 18, 2016, when Microsoft unveiled its decision to make the productcross-platform, independent of Windows, free and open source.[9]It achievedgeneral availabilityon January 10, 2018, for Windows,macOSandLinux.[104]It has its own support lifecycle and adheres to the Microsoft lifecycle policy that is introduced with Windows 10: Only the latest version of PowerShell Core is supported. Microsoft expects to release one minor version for PowerShell Core 6.0 every six months.[105] The most significant change in this version is the expansion to the other platforms. For Windows administrators, this version did not include any major new features. In an interview with the community on January 11, 2018, the development team was asked to list the top 10 most exciting things that would happen for a Windows IT professional who would migrate from version 5.1 to version 6.0. In response, Angel Calvo of Microsoft could only name two: cross-platform and open-source.[106]PowerShell 6 changed toUTF-8as default encoding, with some exceptions.[107](version 7.4 changes more to UTF-8)[108] According to Microsoft, one of the new features of version 6.1 is "Compatibility with 1900+ existing cmdlets in Windows 10 andWindows Server 2019."[109]Still, no details of these cmdlets can be found in the full version of the change log.[110]Microsoft later professes that this number was insufficient as PowerShell Core failed to replace Windows PowerShell 5.1 and gain traction on Windows.[111]It was, however, popular on Linux.[111] Version 6.2 is focused primarily on performance improvements, bug fixes, and smaller cmdlet and language enhancements that improved developer productivity.[112] Renamed to simply PowerShell, version 7 replaces the previous product lines: PowerShell Core and Windows PowerShell.[113][111]The focus in development was to make version 7 a viable replacement for version 5.1, i.e. to have near parity with it in terms of compatibility with modules that ship with Windows.[114] New features include:[115] Version 7.2 is the next long-term support version, after version 7.0. It uses .NET 6.0 and features universal installer packages for Linux. On Windows, updates to version 7.2 and later come via theMicrosoft Updateservice; this feature has been missing from versions 6.0 through 7.1.[116] Version 7.3 includes some general Cmdlet updates and fixes, testing for framework dependent package in release pipeline as well as build and packaging improvements.[117] Version 7.4 is based on .NET 8 and is considered the long term support (LTS) release.[118] Changes include:[119] Version 7.5, is the latest stable release; released January 2025; built on .NET 9.0.1. It includes enhancements for performance, usability, and security.[120]Key updates include improvements to tab completion, such as better type inference and new argument completers, as well as fixes for Invoke-WebRequest and Invoke-RestMethod. 
This release also adds the new ConvertTo-CliXml and ConvertFrom-CliXml cmdlets, and updates core modules like PSReadLine and Microsoft.PowerShell.PSResourceGet. Breaking changes include updates to Test-Path parameter handling, and default settings for New-FileCatalog. Prior to the GA release there were five preview releases and one release-candidate build of PowerShell v7.5.0,[121] with a full release blog post for this version still to come. Version 7.6 is based on .NET 9 and is the latest preview release. The preview release v7.6.0-preview.2[122] was published on January 15, 2025. Its changes have not yet been documented.[123] The following table contains various cmdlets that ship with PowerShell that have notably similar functionality to commands in other shells. Many of these cmdlets are exposed to the user via predefined aliases to make their use familiar to users of the other shells.
https://en.wikipedia.org/wiki/PowerShell
The Microsoft Windows Script Host (WSH) (formerly named Windows Scripting Host) is an automation technology for Microsoft Windows operating systems that provides scripting abilities comparable to batch files, but with a wider range of supported features. The tool first appeared on Windows 95 (after Build 950a) as an optional component on the installation discs, configurable and installable by means of the Control Panel; it became a standard component of Windows 98 (Build 1111) and subsequent releases, and was added to Windows NT 4.0 (Build 1381) by means of Service Pack 4. WSH is also a means of automation for Internet Explorer via the installed WSH engines from IE version 3.0 onwards; at this time, VBScript became a means of automation for Microsoft Outlook 97.[1] WSH is also an optional install provided with a VBScript and JScript engine for Windows CE 3.0 and following; some third-party engines, including Rexx and other forms of BASIC, are also available.[2][3][4] It is language-independent in that it can make use of different Active Scripting language engines. By default, it interprets and runs plain-text JScript (.JS and .JSE files) and VBScript (.VBS and .VBE files). Users can install different scripting engines to enable them to script in other languages, for instance PerlScript. The language-independent filename extension WSF can also be used. The advantage of the Windows Script File (.WSF) is that it allows multiple scripts ("jobs") as well as a combination of scripting languages within a single file. WSH engines include various implementations for the Rexx, ooRexx (up to version 4.0.0), BASIC, Perl, Ruby, Tcl, PHP, JavaScript, Delphi, Python, XSLT, and other languages. Windows Script Host is distributed and installed by default on Windows 98 and later versions of Windows. It is also installed if Internet Explorer 5 (or a later version) is installed. Beginning with Windows 2000, the Windows Script Host became available for use with user login scripts. Windows Script Host may be used for a variety of purposes, including logon scripts, administration and general automation. Microsoft describes it as an administration tool.[5] WSH provides an environment for scripts to run – it invokes the appropriate script engine and provides a set of services and objects for the script to work with.[5] These scripts may be run in GUI mode (WScript.exe) or command line mode (CScript.exe), or from a COM object (wshom.ocx), offering flexibility to the user for interactive or non-interactive scripts.[6] Windows Management Instrumentation is also scriptable by this means. WSH, the engines, and related functionality are also listed as objects which can be accessed and scripted and queried by means of the VBA and Visual Studio object explorers and those for similar tools like the various script debuggers, e.g. Microsoft Script Debugger, and editors. WSH implements an object model which exposes a set of Component Object Model (COM) interfaces.[7] So in addition to ASP, IIS, Internet Explorer, CScript and WScript, WSH can be used to automate and communicate with any Windows application with COM and other exposed objects, such as using PerlScript to query Microsoft Access by various means including various ODBC engines and SQL, ooRexxScript to create what are in effect Rexx macros in Microsoft Excel, Quattro Pro, Microsoft Word, Lotus Notes and any of the like, the XLNT script to get environment variables and print them in a new TextPad document, and so on.
TheVBAfunctionality of Microsoft Office,Open Office(as well asPythonand other installable macro languages) andCorel WordPerfect Officeis separate from WSH engines althoughOutlook 97usesVBScriptrather than VBA as its macro language.[8] Pythonin the form ofActiveStatePythonScriptcan be used to automate and query the data inSecureCRT, as with other languages with installed engines, e.g.PerlScript,ooRexxScript,PHPScript,RubyScript,LuaScript,XLNTand so on. One notable exception isPaint Shop Pro, which can be automated in Python by means of a macro interpreter within the PSP programme itself rather than using the PythonScript WSH engine or an external Python implementation such as Python interpreters supplied withUnixemulation and integration software suites or other standalone Python implementations et al.[9][10]as an intermediate and indeed can be programmed like this even in the absence of any third-party Python installation; the same goes for the Rexx-programmable terminal emulator Passport.[11]TheSecureCRTterminal emulator,SecureFXFTP client, and related client and server programmes from Van Dyke are as of the current versions automated by means of WSH so any language with an installed engine may be used; the software comes with VBScript, JScript, and PerlScript examples. As of the most recent releases and going back a number of versions now, the programmability of4NT / Take Commandin the latest implementations (by means of "@REXX" and similar for Perl, Python, Tcl, Ruby, Lua, VBScript, JScript, and the like) generally uses the WSH engine.[12]TheZOCterminal emulator gets its ability to be programmed in Rexx by means of an external interpreter, one of which is supplied with the programme, and alternate Rexx interpreters can be specified in the configuration of the programme.[13][14]The MKS Toolkit provides PScript, a WSH engine in addition to the standard Perl interpreter perl.exe which comes with the package. VBScript, JScript, and some third-party engines have the ability to create and execute scripts in an encoded format which prevents editing with a text editor; the file extensions for these encoded scripts is .vbe and .jse and others of that type. Unless otherwise specified, any WSH scripting engine can be used with the various Windows server software packages to provide CGI scripting. The current versions of the default WSH engines and all or most of the third-party engines have socket abilities as well; as a CGI script or otherwise, PerlScript is the choice of many programmers for this purpose and the VBScript and various Rexx-based engines are also rated as sufficiently powerful in connectivity and text-processing abilities to also be useful. This also goes for file access and processing—the earliest WSH engines for VBScript and JScript do not since the base language did not,[15]whilst PerlScript, ooRexxScript, and the others have this from the beginning. WinWrap Basic,SaxBasicand others are similar to Visual Basic for Applications, These tools are used to add scripting and macro abilities to software being developed and can be found in earlier versions ofHost Explorerfor example. Many other languages can also be used in this fashion. 
Other languages used for scripting of programmes include Rexx, Tcl, Perl, Python, Ruby, and others which come with methods to control objects in the operating system and the spreadsheet and database programmes.[16]One exception is that theZocterminal emulator is controlled by aRexxinterpreter supplied with the package or another interpreter specified by the user; this is also the case with the Passport emulator. VBScript is the macro language inMicrosoft Outlook97, whilstWordBasicis used for Word up to 6, PowerPoint and other tools. Excel to 5.0 uses Visual Basic 5.0. In Office 2000 forward, true Visual Basic for Applications 6.0 is used for all components. Other components useVisual Basic for Applications.OpenOfficeuses Visual Basic, Python, and several others as macro languages and others can be added.LotusScriptis very closely related to VBA and used forLotus NotesandLotus SmartSuite, which includesLotus Word Pro(the current descendant ofAmi Pro),Lotus Approach,Lotus FastSite,Lotus 1-2-3, &c, and pure VBA, licensed from Microsoft, is used in Corel products such asWordPerfect,Paradox,Quattro Pro&c. Any scripting language installed under Windows can be accessed by external means of PerlScript, PythonScript, VBScript and the other engines available can be used to access databases (Lotus Notes, Microsoft Access,Oracle Database, Paradox) and spreadsheets (Microsoft Excel, Lotus 1-2-3, Quattro Pro) and other tools like word processors, terminal emulators, command shells and so on. This can be accomplished by means of WSH, so any language can be used if there is an installed engine. In recent versions of theTake Commandenhanced command prompt and tools, the "script" command typed at the shell prompt will produce a list of the currently installed engines, one to a line and therefore CR-LF delimited.[17][18][19] The first example is very simple; it shows someVBScriptwhich uses the root WSH COM object "WScript" to display a message with an 'OK' button. Upon launching this script the CScript or WScript engine would be called and the runtime environment provided. Content of a filehello0.vbs WSH programming can also use theJScriptlanguage. Content of a filehello1.js Or, code can be mixed in oneWSFfile, such asVBScriptandJScript, or any other: Content of a filehello2.wsf Windows applications and processes may be automated using a script in Windows Script Host. Viruses and malware could be written to exploit this ability. Thus, some suggest disabling it for security reasons.[20]Alternatively, antivirus programs may offer features to control .vbs and other scripts which run in the WSH environment. Since version 5.6 of WSH, scripts can bedigitally signedprogrammatically using theScripting.Signerobject in a script itself, provided a validcertificateis present on the system. Alternatively, the signcode tool from the Platform SDK, which has been extended to support WSH filetypes, may be used at the command line.[21] By usingSoftware Restriction Policiesintroduced with Windows XP, a system may be configured to execute only those scripts which are stored in trusted locations, have a known MD5 hash, or have been digitally signed by a trusted publisher, thus preventing the execution of untrusted scripts.[22] Note: By definition, all of these scripting engines can be utilised inCGIprogramming under Windows with any number of programmes and set up, meaning that the source code files for a script used on a server for CGI purposes could bear other file extensions such as .cgi and so on. 
The aforementioned ability of the Windows Script Host to run a script combining multiple languages in a single file (the .wsf format), together with extended HTML and XML, adds further possibilities when working with scripts for network use, as do Active Server Pages and so forth. Moreover, Windows shell scripts and scripts written in shells with enhanced capabilities like TCC, 4NT, etc. and Unix shells under interoperability software like the MKS Toolkit can have scripts embedded in them as well. There have been suggestions of creating engines for other languages, such as LotusScript, SaxBasic, BasicScript, KiXtart, awk, bash, csh and other Unix shells, 4NT, cmd.exe (the Windows NT shell), Windows PowerShell, DCL, C, C++, Fortran and others.[24] The XLNT language[25] is based on DCL and provides a very large subset of the language along with additional commands and statements, and the software can be used in three ways: the WSH engine (*.xcs), the console interpreter (*.xlnt) and as a server and client side CGI engine (*.xgi).[26] When a server implements CGI, such as Windows Internet Information Server, ports of Apache and others, all or most of the engines can be used; the most commonly used are VBScript, JScript, PythonScript, PerlScript, ActivePHPScript, and ooRexxScript. The MKS Toolkit PScript program also runs Perl. Command shells like cmd.exe, 4NT, ksh, and scripting languages with string processing and preferably socket functionality are also able to be used for CGI scripting; compiled languages like C++, Visual Basic, and Java can also be used like this. All Perl interpreters, ooRexx, PHP, and more recent versions of VBScript and JScript can use sockets for TCP/IP and usually UDP and other protocols for this. The redistributable version of WSH version 5.6 can be installed on Windows 95/98/Me and Windows NT 4.0/2000. WSH 5.7 is downloadable for Windows 2000, Windows XP and Windows Server 2003. Recently,[when?] redistributable versions for older operating systems (Windows 9x and Windows NT 4.0) are no longer available from the Microsoft Download Center. Since Windows XP Service Pack 3, release 5.7 is the only version available from Microsoft, with newer revisions being included in newer versions of Windows since.
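Because WSH exposes its functionality through COM, the same objects can be driven from any language with COM bindings. As a small illustrative sketch (not part of WSH itself, and assuming the third-party pywin32 package is installed on a Windows machine), the following Python snippet pops up a message box through the WScript.Shell object:

# Requires the pywin32 package (pip install pywin32); Windows only.
import win32com.client

shell = win32com.client.Dispatch("WScript.Shell")
# Popup(text, secondsToWait, title, type); 0 seconds = wait indefinitely, type 0 = OK button only.
shell.Popup("Hello from Python via the Windows Script Host object model", 0, "WSH demo", 0)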
https://en.wikipedia.org/wiki/Windows_Script_Host
Exception(s), The Exception(s), or exceptional may refer to:
https://en.wikipedia.org/wiki/Exception_(disambiguation)
Acceptance is the experience of a situation without an intention to change that situation. Acceptance may also refer to:
https://en.wikipedia.org/wiki/Acceptance_(disambiguation)
Receiver or receive may refer to:
https://en.wikipedia.org/wiki/Receive_(disambiguation)
Rejection, or the verb reject, may refer to:
https://en.wikipedia.org/wiki/Rejection_(disambiguation)
In programming language theory, lazy evaluation, or call-by-need,[1] is an evaluation strategy which delays the evaluation of an expression until its value is needed (non-strict evaluation) and which avoids repeated evaluations (by the use of sharing).[2][3] The benefits of lazy evaluation include: Lazy evaluation is often combined with memoization, as described in Jon Bentley's Writing Efficient Programs.[4] After a function's value is computed for that parameter or set of parameters, the result is stored in a lookup table that is indexed by the values of those parameters; the next time the function is called, the table is consulted to determine whether the result for that combination of parameter values is already available. If so, the stored result is simply returned. If not, the function is evaluated, and another entry is added to the lookup table for reuse. Lazy evaluation is difficult to combine with imperative features such as exception handling and input/output, because the order of operations becomes indeterminate. The opposite of lazy evaluation is eager evaluation, sometimes known as strict evaluation. Eager evaluation is the evaluation strategy employed in most[quantify] programming languages. Lazy evaluation was introduced for lambda calculus by Christopher Wadsworth.[5] For programming languages, it was independently introduced by Peter Henderson and James H. Morris[6] and by Daniel P. Friedman and David S. Wise.[7][8] Delayed evaluation is used particularly in functional programming languages. When using delayed evaluation, an expression is not evaluated as soon as it gets bound to a variable, but when the evaluator is forced to produce the expression's value. That is, a statement such as x = expression; (i.e. the assignment of the result of an expression to a variable) clearly calls for the expression to be evaluated and the result placed in x, but what actually is in x is irrelevant until there is a need for its value via a reference to x in some later expression whose evaluation could itself be deferred, though eventually the rapidly growing tree of dependencies would be pruned to produce some symbol rather than another for the outside world to see.[9] Lazy evaluation allows control structures to be defined normally, and not as primitives or compile-time techniques. For example, one can define if-then-else and short-circuit evaluation operators:[10][11] These have the usual semantics, i.e., ifThenElse a b c evaluates (a), then if and only if (a) evaluates to true does it evaluate (b), otherwise it evaluates (c). That is, exactly one of (b) or (c) will be evaluated. Similarly, for EasilyComputed || LotsOfWork, if the easy part gives True the lots of work expression could be avoided. Finally, when evaluating SafeToTry && Expression, if SafeToTry is false there will be no attempt at evaluating the Expression. Conversely, in an eager language the above definition for ifThenElse a b c would evaluate (a), (b), and (c) regardless of the value of (a). This is not the desired behavior, as (b) or (c) may have side effects, take a long time to compute, or throw errors. It is usually possible to introduce user-defined lazy control structures in eager languages as functions, though they may depart from the language's syntax for eager evaluation: Often the involved code bodies need to be wrapped in a function value, so that they are executed only when called. Delayed evaluation has the advantage of being able to create calculable infinite lists without infinite loops or size matters interfering in computation.
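As a rough sketch of the point just made, the following Python snippet (illustrative only; the names if_then_else and safe_div are invented here, not taken from any library) wraps the two branches in zero-argument functions, i.e. thunks, so that exactly one branch is ever evaluated in an otherwise eager language:

def if_then_else(cond, then_thunk, else_thunk):
    # Only one of the two thunks is ever called, mimicking lazy branches.
    return then_thunk() if cond else else_thunk()

def safe_div(x, y):
    # The division is wrapped in a lambda, so it is never evaluated when y == 0.
    return if_then_else(y != 0, lambda: x / y, lambda: float("inf"))

print(safe_div(10, 2))  # 5.0
print(safe_div(10, 0))  # inf, and no ZeroDivisionError is raised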
The actual values are only computed when needed. For example, one could create a function that creates an infinite list (often called astream) ofFibonacci numbers. The calculation of then-th Fibonacci number would be merely the extraction of that element from the infinite list, forcing the evaluation of only the first n members of the list.[12][13] Take for example this trivial program inHaskell: In the functionnumberFromInfiniteList, the value ofinfinityis an infinite range, but until an actual value (or more specifically, a specific value at a certain index) is needed, the list is not evaluated, and even then, it is only evaluated as needed (that is, until the desired index.) Provided the programmer is careful, the program completes normally. However, certain calculations may result in the program attempting to evaluate an infinite number of elements; for example, requesting the length of the list or trying to sum the elements of the list with afold operationwould result in the program either failing to terminate or runningout of memory. As another example, the list of all Fibonacci numbers can be written in the programming languageHaskellas:[13] In Haskell syntax, ":" prepends an element to a list,tailreturns a list without its first element, andzipWithuses a specified function (in this case addition) to combine corresponding elements of two lists to produce a third.[12] In computerwindowing systems, the painting of information to the screen is driven byexpose eventswhich drive the display code at the last possible moment. By doing this, windowing systems avoid computing unnecessary display content updates.[14] Another example of laziness in modern computer systems iscopy-on-writepage allocation ordemand paging, where memory is allocated only when a value stored in that memory is changed.[14] Laziness can be useful for high performance scenarios. An example is the Unixmmapfunction, which providesdemand drivenloading of pages from disk, so that only those pages actually touched are loaded into memory, and unneeded memory is not allocated. MATLABimplementscopy on edit, where arrays which are copied have their actual memory storage replicated only when their content is changed, possibly leading to anout of memoryerror when updating an element afterwards instead of during the copy operation.[15] The number of beta reductions to reduce a lambda term with call-by-need is no larger than the number needed by call-by-value orcall-by-namereduction.[16][17]And with certain programs the number of steps may be much smaller, for example a specific family of lambda terms usingChurch numeralstake an infinite amount of steps with call-by-value (i.e. never complete), an exponential number of steps with call-by-name, but only a polynomial number with call-by-need. Call-by-need embodies two optimizations - never repeat work (similar to call-by-value), and never perform unnecessary work (similar to call-by-name).[18]Lazy evaluation can also lead to reduction inmemory footprint, since values are created when needed.[19] In practice, lazy evaluation may cause significant performance issues compared to eager evaluation. For example, on modern computer architectures, delaying a computation and performing it later is slower than performing it immediately. 
This can be alleviated through strictness analysis.[18] Lazy evaluation can also introduce memory leaks due to unevaluated expressions.[20][21] Some programming languages delay evaluation of expressions by default, and some others provide functions or special syntax to delay evaluation. In KRC, Miranda and Haskell, evaluation of function arguments is delayed by default. In many other languages, evaluation can be delayed by explicitly suspending the computation using special syntax (as with Scheme's "delay" and "force" and OCaml's "lazy" and "Lazy.force") or, more generally, by wrapping the expression in a thunk. The object representing such an explicitly delayed evaluation is called a lazy future. Raku uses lazy evaluation of lists, so one can assign infinite lists to variables and use them as arguments to functions, but unlike Haskell and Miranda, Raku does not use lazy evaluation of arithmetic operators and functions by default.[9] In lazy programming languages such as Haskell, although the default is to evaluate expressions only when they are demanded, it is possible in some cases to make code more eager—or conversely, to make it more lazy again after it has been made more eager. This can be done by explicitly coding something which forces evaluation (which may make the code more eager) or avoiding such code (which may make the code more lazy). Strict evaluation usually implies eagerness, but they are technically different concepts. However, there is an optimisation implemented in some compilers called strictness analysis, which, in some cases, allows the compiler to infer that a value will always be used. In such cases, this may render the programmer's choice of whether to force that particular value or not, irrelevant, because strictness analysis will force strict evaluation. In Haskell, marking constructor fields strict means that their values will always be demanded immediately. The seq function can also be used to demand a value immediately and then pass it on, which is useful if a constructor field should generally be lazy. However, neither of these techniques implements recursive strictness—for that, a function called deepSeq was invented. Also, pattern matching in Haskell 98 is strict by default, so the ~ qualifier has to be used to make it lazy.[22] In Java, lazy evaluation can be done by using objects that have a method to evaluate them when the value is needed. The body of this method must contain the code required to perform this evaluation. Since the introduction of lambda expressions in Java SE8, Java has supported a compact notation for this. The following example generic interface provides a framework for lazy evaluation:[23][24] The Lazy interface with its eval() method is equivalent to the Supplier interface with its get() method in the java.util.function library.[25][26]: 200 Each class that implements the Lazy interface must provide an eval method, and instances of the class may carry whatever values the method needs to accomplish lazy evaluation. For example, consider the following code to lazily compute and print 2^10: In the above, the variable a initially refers to a lazy integer object created by the lambda expression () -> 1. Evaluating this lambda expression is similar[a] to constructing a new instance of an anonymous class that implements Lazy<Integer> with an eval method returning 1. Each iteration of the loop links a to a new object created by evaluating the lambda expression inside the loop. Each of these objects holds a reference to another lazy object, b, and has an eval method that calls b.eval() twice and returns the sum.
The variablebis needed here to meet Java's requirement that variables referenced from within a lambda expression be effectively final. This is an inefficient program because this implementation of lazy integers does notmemoizethe result of previous calls toeval. It also involves considerableautoboxing and unboxing. What may not be obvious is that, at the end of the loop, the program has constructed alinked listof 11 objects and that all of the actual additions involved in computing the result are done in response to the call toa.eval()on the final line of code. This callrecursivelytraverses the list to perform the necessary additions. We can build a Java class that memoizes a lazy object as follows:[23][24] This allows the previous example to be rewritten to be far more efficient. Where the original ran in time exponential in the number of iterations, the memoized version runs inlinear time: Java's lambda expressions are justsyntactic sugar. Anything that can be written with a lambda expression can be rewritten as a call to construct an instance of an anonymousinner classimplementing the interface,[a]and any use of an anonymous inner class can be rewritten using a named inner class, and any named inner class can be moved to the outermost nesting level. InJavaScript, lazy evaluation can be simulated by using agenerator. For example, thestreamof allFibonacci numberscan be written, usingmemoization, as: InPython2.x therange()function[27]computes a list of integers. The entire list is stored in memory when the first assignment statement is evaluated, so this is an example of eager or immediate evaluation: In Python 3.x therange()function[28]returns ageneratorwhich computes elements of the list on demand. Elements are only generated when they are needed (e.g., whenprint(r[3])is evaluated in the following example), so this is an example of lazy or deferred evaluation: In Python 2.x is possible to use a function calledxrange()which returns an object that generates the numbers in the range on demand. The advantage ofxrangeis that generated object will always take the same amount of memory. From version 2.2 forward, Python manifests lazy evaluation by implementing iterators (lazy sequences) unlike tuple or list sequences. For instance (Python 2): In the.NETframework, it is possible to do lazy evaluation using the classSystem.Lazy<T>.[29]The class can be easily exploited inF#using thelazykeyword, while theforcemethod will force the evaluation. There are also specialized collections likeMicrosoft.FSharp.Collections.Seqthat provide built-in support for lazy evaluation. In C# and VB.NET, the classSystem.Lazy<T>is directly used. Or with a more practical example: Another way is to use theyieldkeyword:
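As a minimal Python sketch of the same two ideas just described, an on-demand stream produced with yield and a memoizing lookup table (the function names here are illustrative, not from any particular library):

from functools import lru_cache
from itertools import islice

def fibonacci_stream():
    """Generator: an 'infinite list' of Fibonacci numbers, produced on demand."""
    a, b = 0, 1
    while True:
        yield a          # nothing beyond the requested elements is ever computed
        a, b = b, a + b

@lru_cache(maxsize=None)
def fib(n):
    """Memoized recursive Fibonacci: each value is computed at most once."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(list(islice(fibonacci_stream(), 10)))  # first ten values only
print(fib(90))                               # fast, thanks to the lookup table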
https://en.wikipedia.org/wiki/Lazy_evaluation
In DOS memory management, conventional memory, also called base memory, is the first 640 kilobytes of the memory on IBM PC or compatible systems. It is the read-write memory directly addressable by the processor for use by the operating system and application programs. As memory prices rapidly declined, this design decision became a limitation in the use of large memory capacities until the introduction of operating systems and processors that made it irrelevant. The 640 KB barrier is an architectural limitation of IBM PC compatible PCs. The Intel 8088 CPU, used in the original IBM PC, was able to address 1 MB (2^20 bytes), since the chip offered 20 address lines. In the design of the PC, the memory below 640 KB was for random-access memory on the motherboard or on expansion boards, and it was called the conventional memory area. The first memory segment (64 KB) of the conventional memory area is named lower memory or low memory area. The remaining 384 KB beyond the conventional memory area, called the upper memory area (UMA), was reserved for system use and optional devices. UMA was used for the ROM BIOS, additional read-only memory, BIOS extensions for fixed disk drives and video adapters, video adapter memory, and other memory-mapped input and output devices. The design of the original IBM PC placed the Color Graphics Adapter (CGA) memory map in UMA. The need for more RAM grew faster than the needs of hardware to utilize the reserved addresses, which resulted in RAM eventually being mapped into these unused upper areas to utilize all available addressable space. This introduced a reserved "hole" (or several holes) into the set of addresses occupied by hardware that could be used for arbitrary data. Avoiding such a hole was difficult and ugly and not supported by DOS or most programs that could run on it. Later, space between the holes would be used as upper memory blocks (UMBs). To maintain compatibility with older operating systems and applications, the 640 KB barrier remained part of the PC design even after the 8086/8088 had been replaced with the Intel 80286 processor, which could address up to 16 MB of memory in protected mode. The 1 MB barrier also remained as long as the 286 was running in real mode, since DOS required real mode which uses the segment and offset registers in an overlapped manner such that addresses with more than 20 bits are not possible. It is still present in IBM PC compatibles today if they are running in real mode such as used by DOS. Even the most modern Intel PCs still have the area between 640 and 1024 KB reserved.[3][4] This however is invisible to programs (or even most of the operating system) on newer operating systems (such as Windows, Linux, or Mac OS X) that use virtual memory, because they have no awareness of physical memory addresses at all. Instead they operate within a virtual address space, which is defined independently of available RAM addresses.[5] Some motherboards feature a "Memory Hole at 15 Megabytes" option required for certain VGA video cards that require exclusive access to one particular megabyte for video memory. Later video cards using the AGP (PCI memory space) bus can have 256 MB memory with 1 GB aperture size. One technique used on early IBM XT computers was to install additional RAM into the video memory address range and push the limit up to the start of the Monochrome Display Adapter (MDA). Sometimes software or a custom address decoder was required for this to work.
This moved the barrier to 704 KB (with MDA/HGC) or 736 KB (with CGA).[6][7] Memory managerson386-basedsystems (such asQEMMor MEMMAX (+V) inDR-DOS) could achieve the same effect, adding conventional memory at 640 KB and moving the barrier to 704 KB (up to segment B000, the start of MDA/HGC) or 736 KB (up to segment B800, the start of the CGA).[7]Only CGA could be used in this situation, becauseEnhanced Graphics Adapter(EGA) video memory was immediately adjacent to the conventional memory area below the 640 KB line; the same memory area could not be used both for theframe bufferof the video card and for transient programs. All Computers' piggy-back add-onmemory management unitsAllCardfor XT-[8][9]andChargecard[10]for 286/386SX-class computers, as well as MicroWay's ECM (Extended Conventional Memory) add-on-board[11]allowed normal memory to be mapped into the A0000–EFFFF (hex) address range, giving up to 952 KB for DOS programs. Programs such asLotus 1-2-3, which accessed video memory directly, needed to bepatchedto handle this memory layout. Therefore, the 640 KB barrier was removed at the cost of hardware compatibility.[10] It was also possible to useconsole redirection[12](either by specifying an alternative console device likeAUX:when initially invokingCOMMAND.COMor by usingCTTYlater on) to direct output to and receive input from adumb terminalor another computer running aterminal emulator. Assuming theSystem BIOSstill permitted the machine to boot (which is often the case at least with BIOSes for embedded PCs), the video card in a so calledheadless computercould then be removed completely, and the system could provide a total of 960 KB of continuous DOS memory for programs to load. Similar usage was possible on many DOS- but not IBM-compatible computers with a non-fragmented memory layout, for exampleSCPS-100 bussystems equipped with their8086CPU card CP-200B and up to sixteen SCP 110A memory cards (with 64 KB RAM on each of them) for a total of up to 1024 KB (without video card, but utilizing console redirection, and after mapping out the boot/BIOS ROM),[13]theVictor 9000/Sirius 1which supported up to 896 KB, or theApricot PCwith more continuous DOS memory to be used under its custom version of MS-DOS. Most standard programs written for DOS did not necessarily need 640 KB or more of memory. Instead, driver software and utilities referred to asterminate-and-stay-resident programs(TSRs) could be used in addition to the standard DOS software. These drivers and utilities typically used some conventional memory permanently, reducing the total available for standard DOS programs. Some very common DOS drivers and TSRs using conventional memory included: As can be seen above, many of these drivers and TSRs could be considered practically essential to the full-featured operation of the system. But in many cases a choice had to be made by the computer user, to decide whether to be able to run certain standard DOS programs or have all their favorite drivers and TSRs loaded. Loading the entire list shown above is likely either impractical or impossible, if the user also wants to run a standard DOS program as well. In some cases drivers or TSRs would have to be unloaded from memory to run certain programs, and then reloaded after running the program. For drivers that could not be unloaded, later versions of DOS included a startup menu capability to allow the computer user to select various groups of drivers and TSRs to load before running certain high-memory-usage standard DOS programs. 
As DOS applications grew larger and more complex in the late 1980s and early 1990s, it became common practice to free up conventional memory by moving the device drivers and TSR programs into upper memory blocks (UMBs) in theupper memory area(UMA) at boot, in order to maximize the conventional memory available for applications. This had the advantage of not requiring hardware changes, and preserved application compatibility. This feature was first provided by third-party products such asQEMM, before being built intoDR DOS 5.0in 1990 thenMS-DOS 5.0in 1991. Most users used the accompanyingEMM386driver provided in MS-DOS 5, but third-party products from companies such asQEMMalso proved popular. At startup, drivers could be loaded high using the "DEVICEHIGH=" directive, while TSRs could be loaded high using the "LOADHIGH", "LH" or "HILOAD" directives. If the operation failed, the driver or TSR would automatically load into the regular conventional memory instead. CONFIG.SYS, loading ANSI.SYS into UMBs, no EMS support enabled: AUTOEXEC.BAT, loading MOUSE, DOSKEY, and SMARTDRV into UMBs if possible: The ability of DOS versions 5.0 and later to move their own system core code into thehigh memory area(HMA) through theDOS=HIGH command gave another boost to free memory. Hardware expansion boards could use any of the upper memory area for ROM addressing, so the upper memory blocks were of variable size and in different locations for each computer, depending on the hardware installed. Some windows of upper memory could be large and others small. Loading drivers and TSRs high would pick a block and try to fit the program into it, until a block was found where it fit, or it would go into conventional memory. An unusual aspect of drivers and TSRs is that they would use different amounts of conventional and/or upper memory, based on the order they were loaded. This could be used to advantage if the programs were repeatedly loaded in different orders, and checking to see how much memory was free after each permutation. For example, if there was a 50 KB UMB and a 10 KB UMB, and programs needing 8 KB and 45 KB were loaded, the 8 KB might go into the 50 KB UMB, preventing the second from loading. Later versions of DOS allowed the use of a specific load address for a driver or TSR, to fit drivers/TSRs more tightly together. In MS-DOS 6.0, Microsoft introducedMEMMAKER, which automated this process of block matching, matching the functionality third-partymemory managersoffered. This automatic optimization often still did not provide the same result as doing it by hand, in the sense of providing the greatest free conventional memory. Also in some cases third-party companies wrote special multi-function drivers that would combine the capabilities of several standard DOS drivers and TSRs into a single very compact program that used just a few kilobytes of memory. For example, the functions of mouse driver, CD-ROM driver, ANSI support, DOSKEY command recall, and disk caching would all be combined together in one program, consuming just 1 – 2 kilobytes of conventional memory for normal driver/interrupt access, and storing the rest of the multi-function program code in EMS or XMS memory. The barrier was only overcome with the arrival ofDOS extenders, which allowed DOS applications to run in 16-bit or 32-bitprotected mode, but these were not very widely used outside ofcomputer gaming. 
With a 32-bit DOS extender, a game could benefit from a 32-bit flat address space and the full 32-bit instruction set without the 66h/67h operand/address override prefixes. 32-bit DOS extenders required compiler support (32-bit compilers) whileXMSandEMSworked with an old compiler targeting 16-bit real-mode DOS applications. The two most common specifications for DOS extenders wereVCPI- and laterDPMI-compatible with Windows 3.x. The most notable DPMI-compliant DOS extender may beDOS/4GW, shipping withWatcom. It was very common in games for DOS. Such a game would consist of either a DOS/4GW 32-bit kernel, or a stub which loaded a DOS/4GW kernel located in the path or in the same directory and a 32-bit "linear executable". Utilities are available which can strip DOS/4GW out of such a program and allow the user to experiment with any of the several, and perhaps improved, DOS/4GW clones. Prior to DOS extenders, if a user installed additional memory and wished to use it under DOS, they would first have to install and configure drivers to support eitherexpanded memoryspecification (EMS) orextended memoryspecification (XMS) and run programs supporting one of these specifications. EMS was a specification available on all PCs, including those based on theIntel 8086andIntel 8088, which allowed add-on hardware to page small chunks of memory in and out (bank switching) of the "real mode" addressing space (0x0400–0xFFFF). This allowed 16-bit real-mode DOS programs to access several megabytes of RAM through a hole in real memory, typically (0xE000–0xEFFF). A program would then have to explicitly request the page to be accessed before using it. These memory locations could then be used arbitrarily until replaced by another page. This is very similar to modern pagedvirtual memory. However, in a virtual memory system, the operating system handles allpagingoperations, while paging was explicit with EMS. XMS provided a basic protocol which allowed a 16-bit DOS programs to load chunks of 80286 or 80386 extended memory in low memory (address 0x0400–0xFFFF). A typical XMS driver had to switch to protected mode in order to load this memory. The problem with this approach is that while in 286 protected mode, direct DOS calls could not be made. The workaround was to implement a callback mechanism, requiring a reset of the 286. On the 286, this was a major problem. TheIntel 80386, which introduced "virtual 8086 mode", allowed the guest kernel to emulate the 8086 and run the host operating system without having to actually force the processor back into "real mode".HIMEM.SYS2.03 and higher usedunreal modeon the 80386 and higher CPUs while HIMEM.SYS 2.06 and higher usedLOADALLto change undocumented internal registers on the 80286, significantly improving interrupt latency by avoiding repeated real mode/protected mode switches.[14] Windows installs its own version of HIMEM.SYS[15]on DOS 3.3 and higher. Windows HIMEM.SYS launches 32-bit protected mode XMS (n).0 services provider for the Windows Virtual Machine Manager, which then provides XMS (n-1).0 services to DOS boxes and the 16-bit Windows machine (e.g. DOS 7 HIMEM.SYS is XMS 3.0 but running 'MEM' command in a Windows 95 DOS window shows XMS 2.0 information).
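The bank-switching scheme described above can be sketched in a few lines of Python. The 16 KB page size below follows the common EMS convention of fixed-size pages mapped into a window in the upper memory area, but the class and its names are purely illustrative, not an actual EMS driver interface:

class ExpandedMemory:
    """Toy model of EMS bank switching: a small window into a larger store."""
    PAGE_SIZE = 16 * 1024                      # EMS pages are 16 KB

    def __init__(self, total_pages):
        self.store = [bytearray(self.PAGE_SIZE) for _ in range(total_pages)]
        self.window = None                     # page currently mapped into the page frame

    def map_page(self, page):
        # Analogue of an EMS "map page" request: make one page visible
        # through the fixed page frame in the real-mode address space.
        self.window = self.store[page]

    def read(self, offset):
        # All accesses go through the currently mapped page.
        return self.window[offset]

ems = ExpandedMemory(total_pages=64)           # 64 pages of 16 KB = 1 MiB of expanded memory
ems.map_page(3)
ems.window[0] = 0x42                           # write via the page frame
ems.map_page(7)                                # switch banks
ems.map_page(3)
print(hex(ems.read(0)))                        # 0x42: the data persists across switches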
https://en.wikipedia.org/wiki/Conventional_memory
InIBM PC compatiblecomputing,DOS memory managementrefers to software and techniques employed to give applications access to more than 640kibibytes(640*1024 bytes) (KiB) of "conventional memory". The 640 KiB limit was specific to the IBM PC and close compatibles; other machines runningMS-DOShad different limits, for example theApricot PCcould have up to 768 KiB and theSiriusVictor 9000, 896 KiB. Memory management on the IBM family was made complex by the need to maintain backward compatibility to the original PC design[1]andreal-modeDOS, while allowing computer users to take advantage of large amounts of low-cost memory and new generations of processors. Since DOS has given way toMicrosoft Windowsand other 32-bit operating systems not restricted by the original arbitrary 640 KiB limit of the IBM PC, managing the memory of a personal computer no longer requires the user to manually manipulate internal settings and parameters of the system. The 640 KiB limit imposed great complexity on hardware and software intended to circumvent it; the physical memory in a machine could be organized as a combination of base or conventional memory (including lower memory), upper memory, high memory (not the same as upper memory), extended memory, and expanded memory, all handled in different ways. TheIntel 8088processor used in the original IBM PC had 20 address lines and so could directly address 1 MiB (220bytes) of memory. Different areas of this address space were allocated to different kinds of memory used for different purposes. Starting at the lowest end of the address space, the PC had read/writerandom access memory(RAM) installed, which was used by DOS and application programs. The first part of this memory was installed on the motherboard of the system (in very early machines, 64 KiB, later revised to 256 KiB). Additional memory could be added with cards plugged into the expansion slots; each card contained straps or switches to control what part of the address space accesses memory and devices on that card. On the IBM PC, all the address space up to 640 KiB was available for RAM. This part of the address space is called "conventional memory" since it is accessible to all versions of DOS automatically on startup. Segment 0, the first 64 KiB of conventional memory, is also calledlow memory area. Normally expansion memory is set to be contiguous in the address space with the memory on the motherboard. If there was an unallocated gap between motherboard memory and the expansion memory, the memory would not be automatically detected as usable by DOS. The upper memory area (UMA) refers to the address space between 640 and 1024 KiB (0xA0000–0xFFFFF). The 128 KiB region between 0xA0000 and 0xBFFFF was reserved forVGAscreen memory and legacy SMM. The 128 KiB region between 0xC0000 and 0xDFFFF was reserved for deviceOption ROMs, includingVideo BIOS. The 64 KiB region between 0xE0000 to 0xEFFFF was reserved forBIOSas lower BIOS area. The 64 KiB region between 0xF0000 and 0xFFFFF was reserved for BIOS as upper BIOS area.[2] For example, themonochrome video adaptermemory area ran from 704 to 736 KiB (0xB0000–0xB7FFF). If only a monochrome display adapter was used, the address space between 0xA0000 and 0xAFFFF could be used for RAM, which would be contiguous with the conventional memory.[3] The system BIOS ROMs must be at the upper end of the address space because the CPU starting address is fixed by the design of the processor. 
The starting address is loaded into theprogram counterof the CPU after a hardware reset and must have a defined value that endures after power is interrupted to the system. On reset or power up, the CPU loads the address from the system ROM and then jumps to a defined ROM location to begin executing the systempower-on self-test, and eventually load an operating system. Since anexpansion cardsuch as a video adapter,hard drive controller, or network adapter could use allocations of memory in many of the upper memory areas, configuration of some combinations of cards required careful reading of documentation, or experimentation, to find card settings andmemory mappingsthat worked. Mapping two devices to use the same physical memory addresses could result in a stalled or unstable system.[3]Not all addresses in the upper memory area were used in a typical system; unused physical addresses would return undefined and system-dependent data if accessed by the processor. As memory prices declined, application programs such asspreadsheetsandcomputer-aided draftingwere changed to take advantage of more and more physical memory in the system.Virtual memoryin the 8088 and8086was not supported by the processor hardware, and disk technology of the time would make it too slow and cumbersome to be practical. Expanded memory was a system that allowed application programs to access more RAM than directly visible to the processor's address space. The process was a form ofbank switching. When extra RAM was needed,driversoftware would temporarily make a piece of expanded memory accessible to the processor; when the data in that piece of memory was updated, another part could be swapped into the processor's address space. For the IBM PC andIBM PC/XT, with only 20 address lines, special-purpose expanded memory cards were made containing perhaps a megabyte, or more, of expanded memory, with logic on the board to make that memory accessible to the processor in defined parts of the 8088 address space. Allocation and use of expanded memory was not transparent to application programs. The application had to keep track of which bank of expanded memory contained a particular piece of data, and when access to that data was required, the application had to request (through a driver program) the expanded memory board to map that part of memory into the processor's address space. Although applications could use expanded memory with relative freedom, many other software components such as drivers andterminate-and-stay-resident programs(TSRs) were still normally constrained to reside within the 640K "conventional memory" area, which soon became a critically scarce resource. When theIBM PC/ATwas introduced, thesegmented memoryarchitecture of the Intel family processors had the byproduct of allowing slightly more than 1 MiB of memory to be addressed in the "real" mode. Since the 80286 had more than 20 address lines, certain combinations of segment and offset could point into memory above the 0x0100000 (220) location. The 80286 could address up to 16 MiB of system memory, thus removing the behavior of memory addresses "wrapping around". Since the required address line now existed, the combination F800:8000 would no longer point to the physical address 0x0000000 but the correct address 0x00100000. As a result, some DOS programs would no longer work. 
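The segment:offset arithmetic behind this can be checked with a short Python sketch; the helper below is illustrative and simply applies the real-mode formula (segment * 16 + offset), with and without the 20-bit wrap-around of the 8088:

def physical_address(segment, offset, a20_enabled=True):
    """Real-mode address calculation: segment * 16 + offset."""
    linear = (segment << 4) + offset
    if not a20_enabled:
        # With only 20 address lines (8088), the result wraps at 1 MiB.
        linear &= 0xFFFFF
    return linear

# F800:8000 reaches just above 1 MiB on a 286 or later...
print(hex(physical_address(0xF800, 0x8000)))                    # 0x100000
# ...but wraps around to address 0 when only 20 address lines exist.
print(hex(physical_address(0xF800, 0x8000, a20_enabled=False))) # 0x0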
To maintain compatibility with the PC and XT behavior, the AT included anA20 linegate (Gate A20) that made memory addresses on the AT wrap around to low memory as they would have on an 8088 processor. This gate could be controlled, initially through thekeyboard controller, to allow running programs which were designed for this to access an additional 65,520 bytes (64 KiB) of memory inreal mode. Atboottime, the BIOS first enables A20 when counting and testing all of the system's memory, and disables it before transferring control to the operating system. Enabling the A20 line is one of the first steps aprotected modex86operating systemdoes in the bootup process, often before control has been passed onto the kernel from the bootstrap (in the case of Linux, for example). Thehigh memory area(HMA) is theRAMarea consisting of the first 64 KiB, minus 16bytes, of theextended memoryon an IBM PC/AT or compatible microcomputer. Originally, the logic gate was a gate connected to theIntel 8042keyboard controller. Controlling it was a relatively slow process. Other methods have since been added to allow for more efficient multitasking of programs which require this wrap-around with programs that access all of the system's memory. There was at first a variety of methods, but eventually the industry settled on the PS/2 method of using a bit inport92h to control the A20 line. Disconnecting A20 would not wrapallmemory accesses above 1 MiB, just those in the 1 MiB, 3 MiB, 5 MiB, etc. ranges.Real modesoftware only cared about the area slightly above 1 MiB, so Gate A20 was enough. Virtual 8086 mode, introduced with theIntel 80386, allows the A20 wrap-around to be simulated by using thevirtual memoryfacilities of the processor: physical memory may be mapped to multiple virtual addresses, thus allowing that the memory mapped at the first megabyte of virtual memory may be mapped again in the second megabyte of virtual memory. The operating system may intercept changes to Gate A20 and make corresponding changes to the virtual memory address space, which also makes irrelevant the efficiency of Gate-A20 toggling. The first user of the HMA among Microsoft products wasWindows 2.0in 1987, which introduced theHIMEM.SYSdevice driver. Starting with versions 5.0 ofDR-DOS(1990) and ofMS-DOS(1991), parts of the operating system could be loaded into HMA as well, freeing up to 46 KiB ofconventional memory. Other components, such as device drivers and TSRs, could be loaded into theupper memory area(UMA). 80286 introduced a MMIO memory hole (15 MiB to 16 MiB) for someISAdevices. TheA20 handleris software controlling access to the high memory area.Extended memorymanagers usually provide this functionality. In DOS, high memory area managers, such asHIMEM.SYShad the extra task of managing A20 and provided anAPIfor opening/closing A20. DOS itself could utilize the area for some of its storage needs, thereby freeing up more conventional memory for programs. This functionality was enabled by the "DOS=HIGH" directive in theCONFIG.SYSconfiguration file. TheIntel 80486andPentiumadded a special pin namedA20M#, which when asserted low forces bit 20 of the physical address to be zero for all on-chip cache or external memory accesses. This was necessary since the 80486 introduced an on-chip cache, and therefore masking this bit in external logic was no longer possible. 
Software still needs to manipulate the gate and must still deal with external peripherals (thechipset) for that.[4] Intel processors from the386onward allowed avirtual 8086 mode, which simplified the hardware required to implement expanded memory for DOS applications. Expanded memory managers such asQuarterdeck'sQEMMproduct and Microsoft'sEMM386supported the expanded memory standard without requirement for special memory boards. On 386 and subsequent processors, memory managers like QEMM might move the bulk of the code for a driver or TSR into extended memory and replace it with a small fingerhold that was capable of accessing the extended-memory-resident code. They might analyze memory usage to detect drivers that required more RAM during startup than they did subsequently, and recover and reuse the memory that was no longer needed after startup. They might even remap areas of memory normally used for memory-mapped I/O. Many of these tricks involved assumptions about the functioning of drivers and other components. In effect, memory managers might reverse-engineer and modify other vendors' code on the fly. As might be expected, such tricks did not always work. Therefore, memory managers also incorporated very elaborate systems of configurable options, and provisions for recovery should a selected option render the PC unbootable (a frequent occurrence). Installing and configuring a memory manager might involve hours of experimentation with options, repeatedly rebooting the machine, and testing the results. But conventional memory was so valuable that PC owners felt that such time was well-spent if the result was to free up 30 or 40 KiB of conventional memory space. In the context of IBM PC-compatible computers,extended memoryrefers to memory in the address space of the 80286 and subsequent processors, beyond the 1 megabyte limit imposed by the 20 address lines of the 8088 and 8086. Such memory is not directly available to DOS applications running in the so-called "real mode" of the 80286 and subsequent processors. This memory is only accessible in the protected or virtual modes of 80286 and higher processors.
https://en.wikipedia.org/wiki/DOS_memory_management
In DOS memory management, extended memory refers to memory above the first megabyte (2^20 bytes) of address space in an IBM PC or compatible with an 80286 or later processor. The term is mainly used under the DOS and Windows operating systems. DOS programs, running in real mode or virtual x86 mode, cannot directly access this memory, but are able to do so through an application programming interface (API) called the Extended Memory Specification (XMS). This API is implemented by a driver (such as HIMEM.SYS) or the operating system kernel, which takes care of memory management and of copying memory between conventional and extended memory, by temporarily switching the processor into protected mode. In this context, the term "extended memory" may refer to either the whole of the extended memory or only the portion available through this API.

Extended memory can also be accessed directly by DOS programs running in protected mode using VCPI or DPMI, two (different and incompatible) methods of using protected mode under DOS. Extended memory should not be confused with expanded memory (EMS), an earlier method for expanding the IBM PC's memory capacity beyond 640 kB (655,360 bytes) using an expansion card with bank-switched memory modules. Because of the available support for expanded memory in popular applications, device drivers were developed that emulated expanded memory using extended memory. Later, two additional methods were developed allowing direct access to small portions of additional memory above 640 KB from real mode. One of these is referred to as the high memory area (HMA), consisting of the first nearly 64 KB of extended memory, and the other is referred to as the upper memory area (UMA; also referred to as upper memory blocks or UMBs), located in the address range between 640 KB and 1 MB, which the IBM PC designates for hardware adapters and ROM.

On x86-based PCs, extended memory is only available with an Intel 80286 processor or higher, such as the IBM PC AT.[1] Only these chips can directly address more than 1 megabyte of RAM. The earlier 8086/8088 processors can make use of more than 1 MB of RAM if one employs special hardware to make selectable parts of it appear at addresses below 1 MB. On a 286 or better PC equipped with more than 640 kB of RAM, the additional memory would generally be re-mapped above the 1 MB boundary, since the IBM PC architecture reserves addresses between 640 kB and 1 MB for system ROM and peripherals.

Extended memory is not accessible in real mode (except for a small portion called the high memory area). Only applications executing in protected mode can use extended memory directly. A supervising protected-mode operating system such as Microsoft Windows manages application programs' access to memory. The processor makes this memory available through the Global Descriptor Table (GDT) and one or more Local Descriptor Tables (LDTs). The memory is "protected" in the sense that memory segments assigned a local descriptor cannot be accessed by another program, because that program uses a different LDT, and memory segments assigned a global descriptor can have their access rights restricted, causing a processor exception (e.g., a general protection fault or GPF) on violation. This prevents programs running in protected mode from interfering with each other's memory.[2]

Extended memory went unused at first because no software ran in the 80286's protected mode.
By contrast, the industry quickly adopted 1985's expanded memory standard, which works with all PCs regardless of processor.[1] A protected-mode operating system such as Microsoft Windows can also run real-mode programs and provide expanded memory to them. The DOS Protected Mode Interface (DPMI) is Microsoft's prescribed method for a DOS program to access extended memory under a multitasking environment.[2]

The Extended Memory Specification (XMS) is the specification describing the use of IBM PC extended memory in real mode for storing data (but not for running executable code in it). Memory is made available by extended memory manager (XMM) software such as HIMEM.SYS. The XMM functions are accessible through direct calls to an entry-point address that can be obtained via software interrupt 2Fh function 4310h.

XMS version 2.0, released in July 1988, allowed for up to 64 MB of memory.[3] With XMS version 3.0 this increased to 4 GB (2^32 bytes).[4] The difference is a direct result of the sizes of the values used to report the amounts of total and unallocated (free) extended memory in 1 KB (1024-byte) units: XMS 2.0 uses 16-bit unsigned integers, capable of representing a maximum of (65535 * 1 KB) = 64 MB, while XMS 3.0 adds new alternate functions that use 32-bit unsigned integers, capable of representing (4 G * 1 KB) = 4 TB (4 terabytes) but limited by the specification to 4 GB.[3][4] (4 GB is the address range of the 80386 and the 80486, the only 32-bit Intel x86 CPUs that existed when XMS 3.0 was published in 1991.) XMS 3.0 retains the original XMS 2.0 API functions with their original 64 MB limit but adds new "super extended memory" functions that support 4 GB of extended memory (minus the first 1 MB) and can be called only with a 32-bit CPU (since these "super" functions use 32-bit CPU registers to pass values).[4] To differentiate between the possibly different amounts of memory that might be available to applications, depending on which version of the specification they were developed to, the latter may be referred to as super extended memory (SXMS).

The extended memory manager is also responsible for managing allocations in the high memory area (HMA) and the upper memory area (UMA; also referred to as upper memory blocks or UMBs). In practice the upper memory area will be provided by the expanded memory manager (EMM), after which DOS will try to allocate them all and manage them itself.[clarification needed][citation needed]
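For illustration, the sketch below shows roughly how the entry point and allocation calls just described might be used from a 16-bit DOS program. It assumes a Borland-style DOS C compiler (dos.h with int86x, segread, MK_FP, and the _AH/_AX/_DX register pseudo-variables); production code would set up the registers with inline assembly, since the compiler is not obliged to preserve pseudo-variable assignments across the far call, and it would also inspect the BL error code on failure.

    #include <dos.h>
    #include <stdio.h>

    typedef void (far *xms_entry_t)(void);
    static xms_entry_t xms;            /* far entry point of the XMS driver */

    int main(void)
    {
        union REGS r;
        struct SREGS s;
        unsigned handle;

        r.x.ax = 0x4300;               /* XMS installation check */
        int86(0x2F, &r, &r);
        if (r.h.al != 0x80) { puts("No XMS driver"); return 1; }

        r.x.ax = 0x4310;               /* get driver entry point in ES:BX */
        segread(&s);
        int86x(0x2F, &r, &r, &s);
        xms = (xms_entry_t) MK_FP(s.es, r.x.bx);

        _AH = 0x09; _DX = 256;         /* allocate a 256 KB extended memory block */
        xms();
        if (_AX != 1) { puts("XMS allocation failed"); return 1; }
        handle = _DX;                  /* handle identifying the block */

        /* ... function 0Bh (move extended memory block) would copy data
           between conventional memory and this block ... */

        _AH = 0x0A; _DX = handle;      /* free the block again */
        xms();
        return 0;
    }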
https://en.wikipedia.org/wiki/Extended_memory
In DOS memory management, the high memory area (HMA) is the RAM area consisting of the first 65520 bytes above the one-megabyte mark in an IBM AT or compatible computer.

In real mode, the segmentation architecture of the Intel 8086 and subsequent processors identifies memory locations with a 16-bit segment and a 16-bit offset, which is resolved into a physical address via (segment) × 16 + (offset). Although intended to address only 1 megabyte (MB) (2^20 bytes) of memory, segment:offset addresses at FFFF:0010 and beyond reference memory beyond 1 MB (FFFF0 + 0010 = 100000). So, on an 80286 and subsequent processors, this mode can actually address the first 65520 bytes of extended memory as part of the 64 KB range starting 16 bytes before the 1 MB mark—FFFF:0000 (0xFFFF0) to FFFF:FFFF (0x10FFEF). The Intel 8086 and 8088 processors, with only 1 MB of memory and only 20 address lines, wrapped around at the 20th bit, so that address FFFF:0010 was equivalent to 0000:0000.[1]

To allow running existing DOS programs which relied on this feature to access low memory on their newer IBM PC AT computers, IBM added special circuitry on the motherboard to simulate the wrapping around. This circuit was a simple logic gate which could disconnect the microprocessor's 21st addressing line, A20, from the rest of the motherboard. This gate could be controlled, initially through the keyboard controller, to allow running programs which wanted to access the entire RAM.[1] So-called A20 handlers could control the addressing mode dynamically,[1] thereby allowing programs to load themselves into the 1024–1088 KB region and run in real mode.[1]

Code suitable to be executed in the HMA must either be coded to be position-independent (using only relative references),[2][1] be compiled to work at the specific addresses in the HMA (typically allowing only one or at most two pieces of code to share the HMA), or it must be designed to be paragraph-boundary or even offset relocatable (with all addresses being fixed up during load).[2][1] Before code (or data) in the HMA can be addressed by the CPU, the corresponding driver must ensure that the HMA is mapped in. This requires that any such requests are tunneled through a stub remaining in memory outside the HMA, which would invoke the A20 handler in order to (temporarily) enable the A20 gate.[2][1] If the driver does not exhibit any public data structures and only uses interrupts or calls already controlled by the underlying operating system, it might be possible to register the driver with the system in a way such that the system will take care of A20 itself, thereby eliminating the need for a separate stub.[1][nb 1]

The first user of the HMA among Microsoft products was Windows/286 2.1 in 1988, which introduced the HIMEM.SYS device driver. Starting in 1990 with Digital Research's DR DOS 5.0[3] (via HIDOS.SYS /BDOS=FFFF[4] and CONFIG.SYS HIDOS=ON) and since 1991 with MS-DOS 5.0[3] (via DOS=HIGH), parts of the operating system's BIOS and kernel could be loaded into the HMA as well,[3][5] freeing up to 46 KB of conventional memory.[1] Other components, such as device drivers and terminate-and-stay-resident programs (TSRs), could at least be loaded into the upper memory area (UMA), but not into the HMA.
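The address arithmetic above is easy to check mechanically. The short program below, given as an illustration, converts real-mode segment:offset pairs to physical addresses and shows how masking the twenty-first address bit (A20) reproduces the 8086/8088 wrap-around for HMA addresses.

    #include <stdio.h>
    #include <stdint.h>

    /* Real-mode address translation: physical = segment * 16 + offset.
     * With Gate A20 closed (as on an 8086/8088), bit 20 is dropped.   */
    static uint32_t phys(uint16_t seg, uint16_t off, int a20_open)
    {
        uint32_t addr = ((uint32_t)seg << 4) + off;     /* up to 21 bits */
        return a20_open ? addr : (addr & 0xFFFFFUL);    /* mask bit 20   */
    }

    int main(void)
    {
        /* FFFF:0010 is the first byte of the HMA when A20 is open...     */
        printf("FFFF:0010, A20 open:   %06lX\n", (unsigned long) phys(0xFFFF, 0x0010, 1));
        /* ...and wraps around to address 0 when A20 is closed.           */
        printf("FFFF:0010, A20 closed: %06lX\n", (unsigned long) phys(0xFFFF, 0x0010, 0));
        /* FFFF:FFFF is the last byte of the HMA, physical 10FFEF.        */
        printf("FFFF:FFFF, A20 open:   %06lX\n", (unsigned long) phys(0xFFFF, 0xFFFF, 1));
        return 0;
    }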
Under DOS 5.0 and higher, withDOS=HIGH, the system additionally attempted to move the disk buffers into the HMA.[5]UnderDR DOS 6.0(1991) and higher, the disk buffers (viaHIBUFFERS, and later alsoBUFFERSHIGH), parts of the command processorCOMMAND.COMas well as several specialself-relocatingdrivers likeKEYB,NLSFUNCandSHAREcould load into the HMA as well (using their/MHoption), thereby freeing up even more conventional memory and upper memory for conventional DOS software to work with.[1]TASKMAXseems to have relocated parts of itself into the HMA as well.[6][7]Novell'sNLCACHEfromNetWare Liteand early versions ofNWCACHEfromPersonal NetWareandNovell DOS 7could utilize the HMA as well.[8][9][7]Under MS-DOS/PC DOS, a ca. 2 KB shared portion of COMMAND.COM can be relocated into the HMA,[10]as well asDISPLAY.SYSbitmaps for preparedcodepages.[10][11]UnderMS-DOS 6.2(1993) and higher, a ca. 5 KB portion ofDBLSPACE.BIN/DRVSPACE.BINcan coexist with DOS in the HMA (unlessDBLSPACE/DRVSPACE/NOHMAis invoked).[5][12]UnderPC DOS 7.0(1995) and2000,DOSKEYloads into the HMA (if available),[13]and SHARE can be loaded into the HMA as well (unless its/NOHMAoption is given).[13]UnderMS-DOS 7.0(1995) to8.0(2000), parts of the HMA are also used as a scratchpad to hold a growing data structure recording various properties of the loaded real-mode drivers.[7][14][15]
https://en.wikipedia.org/wiki/High_memory_area
In a general computing sense,overlayingmeans "the process of transferring ablockof program code or other data intomain memory, replacing what is already stored".[1]Overlaying is aprogrammingmethod that allows programs to be larger than the computer'smain memory.[2]Anembedded systemwould normally use overlays because of the limitation ofphysical memory, which isinternal memoryfor asystem-on-chip, and the lack ofvirtual memoryfacilities. Constructing an overlay program involves manually dividing a program into self-containedobject codeblocks called overlays or links, generally laid out in atree structure.[b]Siblingsegments, those at the same depth level, share the same memory, calledoverlay region[c]ordestination region. An overlay manager, either part of theoperating systemor part of the overlay program, loads the required overlay fromexternal memoryinto its destination region when it is needed; this may be automatic or via explicit code. Oftenlinkersprovide support for overlays.[3] The following example shows the control statements that instruct theOS/360Linkage Editorto link an overlay program containing a single region, indented to show structure (segment names are arbitrary): These statements define a tree consisting of the permanently resident segment, called theroot, and two overlays A and B which will be loaded following the end of MOD2. Overlay A itself consists of two overlay segments, AA, and AB. At execution time overlays A and B will both utilize the same memory locations; AA and AB will both utilize the same locations following the end of MOD3. All the segments between the root and a given overlay segment are called apath. As of 2015[update], most business applications are intended to run on platforms withvirtual memory. A developer on such a platform can design a program as if the memory constraint does not exist unless the program'sworking setexceeds the available physical memory. Most importantly, the architect can focus on the problem being solved without the added design difficulty of forcing the processing into steps constrained by the overlay size. Thus, the designer can use higher-level programming languages that do not allow the programmer much control over size (e.g.Java,C++,Smalltalk). Still, overlays remain useful in embedded systems.[4]Some low-cost processors used inembedded systemsdo not provide amemory management unit(MMU). In addition many embedded systems arereal-timesystems and overlays provide more determinate response-time thanpaging. For example, theSpace ShuttlePrimary Avionics System Software (PASS)uses programmed overlays.[5] Even on platforms with virtual memory,software componentssuch ascodecsmay bedecoupledto the point where they can beloaded in and out as needed. IBM introduced the concept of achain job[6]inFORTRAN II. The program had to explicitly call the CHAIN subroutine to load a new link, and the new link replaced all of the old link's storage except for the Fortran COMMON area. IBM introduced more general overlay handling[7]inIBSYS/IBJOB, including a tree structure and automatic loading of links as part of CALL processing. In OS/360, IBM extended the overlay facility ofIBLDRby allowing an overlay program to have independent overlay regions, each with its own overlay tree. OS/360 also had a simpler overlay system for transientSVCroutines, using 1024-byte SVC transient areas. 
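An overlay manager of the kind described above can be sketched in a few lines of C. The sketch assumes a system without memory protection, where code loaded into a data buffer can be executed directly, and the file names, region size, and fixed entry offset are placeholders chosen for the example.

    #include <stdio.h>
    #include <string.h>

    #define OVERLAY_REGION_SIZE 16384           /* assumed size of the shared region */

    static unsigned char overlay_region[OVERLAY_REGION_SIZE];
    static const char *current_overlay = NULL;  /* which sibling is resident now */

    typedef void (*overlay_entry_t)(void);

    /* Load an overlay file into the shared destination region (if it is not
       already resident) and return its entry point, assumed to be at offset 0. */
    static overlay_entry_t overlay_load(const char *filename)
    {
        if (current_overlay == NULL || strcmp(current_overlay, filename) != 0) {
            size_t n;
            FILE *f = fopen(filename, "rb");
            if (f == NULL)
                return NULL;
            n = fread(overlay_region, 1, sizeof overlay_region, f);  /* replace the sibling */
            fclose(f);
            if (n == 0)
                return NULL;
            current_overlay = filename;
        }
        return (overlay_entry_t)(void *) overlay_region;
    }

    int main(void)
    {
        overlay_entry_t entry;

        if ((entry = overlay_load("OVERLAY_A.BIN")) != NULL)
            entry();                            /* run code from overlay A */
        if ((entry = overlay_load("OVERLAY_B.BIN")) != NULL)
            entry();                            /* B replaces A in the same region */
        return 0;
    }

Because the two overlays share one destination region, only one of them occupies memory at any time, which is the essential space saving the technique provides.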
In thehome computerera overlays were popular because the operating system and many of the computer systems it ran on lacked virtual memory and had very little RAM by current standards: the originalIBM PChad between 16K and 64K, depending on configuration. Overlays were a popular technique inCommodore BASICto load graphics screens.[2] "SeveralDOSlinkers in the 1980s supported [overlays] in a form nearly identical to that used 25 years earlier on mainframe computers."[4][8]Binary filescontaining memory overlays had de facto standard extensions.OVL[8]or.OVR[9](but also used numerical file extensions like.000,.001, etc. for subsequent files[10]). This file type was used among others byWordStar[11](consisting of the main executableWS.COMand the overlay modulesWSMSGS.OVR,WSOVLY1.OVR,MAILMERGE.OVRandSPELSTAR.OVR, where the "fat" overlay files were even binary identical in their ports forCP/M-86and MS-DOS[12]),dBase,[13]and theEnableDOS office automation software package fromEnable Software.Borland'sTurbo Pascal[14][15]and theGFA BASICcompiler were able to produce .OVL files.
https://en.wikipedia.org/wiki/Overlay_(programming)
InDOS memory management, theupper memory area(UMA) is thememorybetween theaddressesof 640KBand 1024 KB (0xA0000–0xFFFFF) in anIBM PCor compatible. IBM reserved the uppermost 384 KB of the8088CPU's 1024 KB address space forBIOSROM,Video BIOS,Option ROMs, video RAM,RAMon peripherals,memory-mapped I/O, and obsoletedROM BASIC.[1] However, even with video RAM, the ROMBIOS, theVideo BIOS, theOption ROMs, and I/O ports for peripherals, much of this 384 KB of address space was unused. As the 640 KB memory restriction became ever more of an obstacle, techniques were found to fill the empty areas with RAM. These areas were referred to asupper memory blocks(UMBs). The next stage in the evolution of DOS was for theoperating systemto use upper memory blocks (UMBs) and thehigh memory area(HMA). This occurred with the release ofDR DOS 5.0in 1990.[2]DR DOS' built-in memory manager,EMM386.EXE, could perform most of the basic functionality ofQEMMand comparable programs. The advantage of DR DOS 5.0 over the combination of an older DOS plus QEMM was that the DR DOS kernel itself and almost all of its data structures could be loaded into high memory. This left virtuallyallthe base memory free, allowing configurations with up to 620 KB out of 640 KB free. Configuration was not automatic - free UMBs had to be identified by hand, manually included in the line that loaded EMM386 fromCONFIG.SYS, and then drivers and so on had to be manually loaded into UMBs from CONFIG.SYS andAUTOEXEC.BAT. This configuration was not a trivial process. As it was largely automated by the installation program of QEMM, this program survived on the market; indeed, it worked well with DR DOS' own HMA and UMB support and went on to be one of the best-selling utilities for the PC. This functionality was copied byMicrosoftwith the release ofMS-DOS 5.0in June 1991.[2]Eventually, even more DOS data structures were moved out of conventional memory, allowing up to 631 KB out of 640 KB to be left free. Starting from version 6.0 of MS-DOS, Microsoft even included a program calledMEMMAKERwhich was used to automatically optimize conventional memory by movingterminate-and-stay-resident(TSR) programs to the upper memory. For a period in the early 1990s, manual optimization of the DOS memory map became a highly prized skill, allowing for the largest applications to run on even the most complex PC configurations. The technique was to first create as many UMBs as possible, including remapping allocated but unused blocks of memory, such as the monochrome display area on colour machines. Then, DOS' many subcomponents had to be loaded into these UMBs in the correct sequence to use the blocks of memory as efficiently as possible. Some TSR programs required additional memory while loading, which was freed up again once loading was complete. Fortunately, there were few dependencies amongst these modules, so it was possible to load them in almost any sequence. Exceptions were that to successfully cache CD-ROMs, most disk caches had to be loaded after any CD-ROM drivers, and that the modules of most network stacks had to be loaded in a certain sequence, essentially working progressively up through the layers of theOSI model. A basic yet effective method used to optimize conventional memory was to load HIMEM.SYS as a device, thereafter loading EMM386.EXE as a device with the "RAM AUTO" option which allows access into the UMA by loading device drivers as devicehigh. 
This method effectively loads the fundamental memory managers into conventional memory, and thereafter everything else into the UMA. Conventional memory glutton programs such asMSCDEXcould also be loaded into the UMA in a similar fashion, hence freeing up a large amount of conventional memory. The increasing popularity ofWindows 3.0made the necessity of the upper memory area less relevant, as Windows applications were not directly affected by DOS' base memory limits, but DOS programs running under Windows (with Windows itself acting as a multitasking manager) were still thus constrained. With the release ofWindows 95, it became less relevant still, as this version of Windows provides much of the functionality of the DOS device drivers to DOS applications running under Windows, such as CD, network and sound support; the memory map of Windows 95 DOS boxes was automatically optimised. However, not all DOS programs could execute in this environment. Specifically, programs that tried to directly switch from real mode to protected mode would not work as this was not allowed in thevirtual 8086 modeit was running in. Also, programs that tried making the switch using theVirtual Control Program Interface(VCPI) API (which was introduced to allow DOS programs that needed protected mode to enter it from the virtual 8086 mode set up by a memory manager, as described above) didn't work in Windows 95. Only theDOS Protected Mode Interface(DPMI) API for switching to protected mode was supported. Upper memory blocks can be created by mappingextended memoryinto the upper memory area when running invirtual 8086 mode. This is similar to howexpanded memorycan be emulated usingextended memoryso this method of providing upper memory blocks is usually provided by the expanded memory manager (for exampleEMM386). Theapplication programming interfacefor managing the upper memory blocks is specified in theeXtended Memory Specification. On many systems including modern ones it is possible to use memory reserved for shadowing expansion card ROM as upper memory. Many chipsets reserve up to 384 KB RAM for this purpose and since this RAM is generally unused it may be used asreal modeupper memory with a customdevice driver, such as UMBPCI.[3] OnIBM XTcomputers, it was possible to add more memory to the motherboard and use a customaddress decoderPROMto make it appear in the upper memory area.[4]As with the 386-based upper memory described above, the extra RAM could be used to load TSR files, or as aRAM disk. TheAllCard, an add-onmemory management unitfor XT-class computers, allowed normal memory to be mapped into the 0xA0000-EFFFF address range, giving up to 952 KB for DOS programs. Programs such asLotus 1-2-3, which accessed video memory directly, needed to bepatchedto handle this memory layout. Therefore, the640 KB barrierwas removed at the cost of software compatibility. This usage of the upper memory area is different from using upper memory blocks, which was used to freeconventional memoryby moving device drivers and TSRs into the upper 384 KB of the 1MBaddress space, but left the amount of addressable memory (640 KB) intact.
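For illustration, a typical MS-DOS 5/6 configuration using the method described above might look roughly as follows; the directory paths and the CD-ROM driver name are placeholders, and the exact EMM386 options depend on the hardware. Loading the disk cache after MSCDEX follows the ordering note above about caching CD-ROMs.

    CONFIG.SYS
        DEVICE=C:\DOS\HIMEM.SYS
        DEVICE=C:\DOS\EMM386.EXE RAM AUTO
        DOS=HIGH,UMB
        DEVICEHIGH=C:\DRIVERS\CDROM.SYS /D:MSCD001

    AUTOEXEC.BAT
        LOADHIGH C:\DOS\MSCDEX.EXE /D:MSCD001
        LOADHIGH C:\DOS\SMARTDRV.EXE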
https://en.wikipedia.org/wiki/Upper_memory_area
EMM386is theexpanded memorymanager ofMicrosoft'sMS-DOS,IBM'sPC DOS,Digital Research'sDR-DOS, andDatalight'sROM-DOS[1]which is used to create expanded memory usingextended memoryonIntel 80386CPUs. There also is an EMM386.EXE available inFreeDOS.[2] EMM386.EXE can map memory into unused blocks in theupper memory area(UMA), allowing device drivers andterminate-and-stay-resident programsto be "loaded high", preservingconventional memory. The technique probably first appeared with the development ofCEMM, included with Compaq's OEMMS-DOSfor theCompaq Deskpro 386in 1986. Microsoft's version first appeared, built-in, withWindows/3862.0 in 1987 and as standalone EMM386.SYS withMS-DOS 4.0in 1988; the more flexible EMM386.EXE version appeared inMS-DOS 5.0in 1991. EMM386 uses the processor'svirtual 8086 mode. This forces memory accesses made by DOS applications to go through the processor'sMMU(introduced in the 386), and the page table entries used by the MMU are configured by EMM386 to map certain regions in upper memory to areas of extended memory (obtained by EMM386 through the extended memory managerHIMEM.SYS). This technique enabled both EMS (expanded memory) as well asUMBs- both of which appear to DOS applications to be memory in the upper area but are in fact mapped to physical memory locations beyond 1MB. It temporarily shuts down during a Windows session in386 Enhancedmode, with Windows'protected modekernel taking over its role. Windows uses the GEMMIS API to take over memory management from EMM386.EXE.Global EMM Import Specification(GEMMIS) is supported via a document available to a select number of memory-manager vendors ("Windows/386 Paging Import Specification").[3][4][5][6] Only a few memory managers implemented the GEMMIS API, some of the ones that include it are: EMM386.EXE, QuarterdeckQEMM, Qualitas386MAX,Helix Netroom[3]andDOSBox builtin DOS. Notably missing are FreeDOS's memory managers. None of the FreeDOS memory managers (HIMEMX.EXE, JEMM386.EXE, JEMMEX.EXE) implement the GEMMIS API and Windows fails to start when running in conjunction with JEMMxxx since Windows fails to take over the memory management role.Windows ME,Windows 98,Windows 95,Windows for Workgroups 3.1x, andWindows 3.xx, all will fail with JEMMxxx displaying: With JEMMxx, it is possible to run Windows 3.x and Windows for Workgroups 3.1x in limited capabilities by forcing Windows to use Standard Mode; i.e. using 80286 Protected Mode, not 80386 Enhanced Mode. Three conditions are required: Note that Windows in standard mode is limited in functionality, it lacks virtual memory, it skips the [386Enh] section in SYSTEM.INI and any device drivers in [386Enh] are not loaded. This DOS software-related article is astub. You can help Wikipedia byexpanding it.
https://en.wikipedia.org/wiki/Global_EMM_Import_Specification
x86 memory segmentationis a term for the kind ofmemory segmentationcharacteristic of the Intelx86computerinstruction set architecture. The x86 architecture has supported memory segmentation since the originalIntel 8086(1978), butx86 memory segmentationis a plainly descriptiveretronym. The introduction of memory segmentation mechanisms in this architecture reflects the legacy of earlier 80xx processors, which initially[1]could only address 16, or later[2]64 KB of memory (16,384 or 65,536bytes), and whose instructions and registers were optimised for the latter. Dealing with larger addresses and more memory was thus comparably slower, as that capability was somewhat grafted-on in the Intel 8086. Memory segmentation could keep programs compatible, relocatable in memory, and by confining significant parts of a program's operation to 64 KB segments, the program could still run faster. In 1982, theIntel 80286added support forvirtual memoryandmemory protection; the original mode was renamedreal mode, and the new version was namedprotected mode. Thex86-64architecture, introduced in 2003, has largely dropped support for segmentation in 64-bit mode. In both real and protected modes, the system uses 16-bitsegment registersto derive the actual memory address.In real mode, the registers CS, DS, SS, and ES point to the currently used programcode segment(CS), the currentdata segment(DS), the currentstack segment(SS), and oneextrasegment determined by the system programmer (ES). TheIntel 80386, introduced in 1985, adds two additional segment registers, FS and GS, with no specific uses defined by the hardware. The way in which the segment registers are used differs between the two modes.[3] The choice of segment is normally defaulted by the processor according to the function being executed. Instructions are always fetched from the code segment. Any data reference to the stack, including any stack push or pop, uses the stack segment; data references indirected through the BP register typically refer to the stack and so they default to the stack segment. The extra segment is the mandatory destination for string operations (for example MOVS or CMPS); for this one purpose only, the automatically selected segment register cannot be overridden. All other references to data use the data segment by default. The data segment is the default source for string operations, but it can be overridden. FS and GS have no hardware-assigned uses. The instruction format allows an optionalsegment prefixbyte which can be used to override the default segment for selected instructions if desired.[4] Inreal modeorV86 mode, the fundamental size of asegmentis 65,536bytes, with individual bytes being addressed using 16-bitoffsets. The 16-bit segment selector in the segment register is interpreted as the most significant 16 bits of a linear 20-bit address, called a segment address, of which the remaining four least significant bits are all zeros. The segment address is always added to a 16-bit offset in the instruction to yield alinearaddress, which is the same asphysical addressin this mode. For instance, the segmented address 06EFh:1234h (here the suffix "h" meanshexadecimal) has a segment selector of 06EFh, representing a segment address of 06EF0h, to which the offset is added, yielding the linear address 06EF0h + 1234h = 08124h. (The leading zeros of the linear address, segmented addresses, and the segment and offset fields are shown here for clarity. They are usually omitted.) 
Because of the way the segment address and offset are added, a single linear address can be mapped to up to 2^12 = 4096 distinct segment:offset pairs. For example, the linear address 08124h can have the segmented addresses 06EFh:1234h, 0812h:0004h, 0000h:8124h, etc. This could be confusing to programmers accustomed to unique addressing schemes, but it can also be used to advantage, for example when addressing multiple nested data structures.

While real mode segments are technically always 64 KB long, the practical effect is only that no segment can be longer than 64 KB, rather than that every segment as actually used in a program must be treated as 64 KB long – dealing with effectively smaller segments is possible: usable sizes range from 16 through 65,536 bytes, in 16-byte steps. Because there is no protection or privilege limitation in real mode, it is still entirely up to the program to coordinate and keep within the bounds of any segments. This is true both when a segment is programmatically treated as smaller than, or the full, 64 KB, but it is also true that any program can always access any memory by just changing segments, since it can arbitrarily set segment selectors to change segment addresses with absolutely no supervision. Therefore, while real mode can be thought of as allowing different segment lengths, and as allowing segments to be overlapping or non-overlapping as desired, none of this is restrictively enforced by the CPU.

The effective 20-bit address space of PC/XT-generation CPUs limits the addressable memory to 2^20 bytes, or 1,048,576 bytes (1 MB). This derived directly from the hardware design of the Intel 8086 (and, subsequently, the closely related 8088), which had exactly 20 address pins. (Both were packaged in 40-pin DIP packages; even with only 20 address lines, the address and data buses were multiplexed to fit all the address and data lines within the limited pin count.)

Each segment begins at a multiple of 16 bytes, called a paragraph, from the beginning of the linear (flat) address space, that is, at 16-byte intervals. Since all segments are technically 64 KB long, this explains how overlap can occur between segments and why any location in the linear memory address space can be accessed with many segment:offset pairs. The actual location of the beginning of a segment in the linear address space can be calculated with segment × 16. Such address translations are carried out by the segmentation unit of the CPU. The last segment, FFFFh (65535), begins at linear address FFFF0h (1048560), 16 bytes before the end of the 20-bit address space, and thus can access, with an offset of up to 65,536 bytes, up to 65,520 (65536−16) bytes past the end of the 20-bit address space of the 8086 or 8088 CPU. A further 4,094 next-highest 64K-segments also still cross that 1 MB threshold, but by less and less.

On the 8086 and 8088 CPUs, these address accesses were wrapped around to the beginning of the address space, such that 65535:16 would access address 0 and, e.g., 65533:1000 would access address 952 of the linear address space. The fact that some programs written for the 8088 and 8086 relied on this quirky wrap-around as a feature led to the Gate A20 compatibility issues in later CPU generations, with the Intel 286 and above, where the linear address space was expanded past 20 bits.
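A small program, given here for illustration, makes the aliasing concrete by printing a few of the segment:offset pairs that resolve to the same 20-bit linear address used in the example above.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t linear = 0x08124;                    /* the example linear address */
        uint16_t segs[] = { 0x06EF, 0x0750, 0x0812, 0x0000 };
        size_t i;

        /* Any segment whose base (segment * 16) lies at or below the linear
         * address, and within 65,535 bytes of it, yields a valid alias.     */
        for (i = 0; i < sizeof segs / sizeof segs[0]; i++) {
            uint16_t off = (uint16_t)(linear - ((uint32_t)segs[i] << 4));
            printf("%04X:%04X -> %05lX\n", segs[i], off,
                   (unsigned long)(((uint32_t)segs[i] << 4) + off));
        }
        return 0;
    }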
In 16-bit real mode, enabling applications to make use of multiple memory segments for a single data structure (in order to access more memory than available in any one 64K segment) is quite complex, but was viewed as a necessary evil for all but the smallest tools (which could do with less memory). The root of the problem is that no appropriate address-arithmetic instructions suitable for flat addressing of the entire memory range are available.[citation needed] Flat addressing is possible by applying multiple instructions, which however leads to slower programs.

The memory model concept derives from the setup of the segment registers. For example, in the tiny model CS=DS=SS, that is, the program's code, data, and stack are all contained within a single 64 KB segment. In the small memory model DS=SS, so both data and stack reside in the same segment; CS points to a different code segment of up to 64 KB.

The 80286's protected mode extends the processor's address space to 2^24 bytes (16 megabytes), but not by adjusting the shift value used to calculate a segment address from the value in a segment register. Instead, each 16-bit segment register now contains an index into a table of segment descriptors containing 24-bit base addresses to which offsets are added. To support old software, the processor starts up in "real mode", a mode in which it uses the segmented addressing model of the 8086. There is a small difference though: the resulting physical address is no longer truncated to 20 bits, so real mode pointers (but not 8086 pointers) can now refer to addresses from 100000h through 10FFEFh. This nearly 64-kilobyte region of memory was known as the High Memory Area (HMA), and later versions of DOS could use it to increase the available "conventional" memory (i.e. within the first MB), by moving parts of DOS from conventional memory into the HMA. With the addition of the HMA, the total address space is approximately 1.06 MB. Though the 80286 does not truncate real-mode addresses to 20 bits, a system containing an 80286 can do so with hardware external to the processor, by gating off the 21st address line, the A20 line. The IBM PC AT provided the hardware to do this (for backward compatibility with software for the original IBM PC and PC/XT models), and so all subsequent "AT-class" PC clones did as well.

286 protected mode was seldom used, as it would have excluded the large body of users with 8086/88 machines. Moreover, it still necessitated dividing memory into 64K segments, as was done in real mode. This limitation can be worked around on 32-bit CPUs, which permit the use of memory pointers greater than 64K in size; however, as the Segment Limit field is only 24 bits long, the maximum segment size that can be created is 16 MB (although paging can be used to allocate more memory, no individual segment may exceed 16 MB). This method was commonly used on Windows 3.x applications to produce a flat memory space, although as the OS itself was still 16-bit, API calls could not be made with 32-bit instructions. Thus, it was still necessary to place all code that performs API calls in 64K segments.

Once 286 protected mode is invoked, it could not normally be exited except by performing a hardware reset. Machines following the rising IBM PC/AT standard could feign a reset to the CPU via the standardised keyboard controller, but this was significantly sluggish.
Windows 3.x worked around both of these problems by intentionally triggering a triple fault in the interrupt-handling mechanisms of the CPU, which would cause the IBM AT-compatible hardware to reset the CPU, nearly instantly, thus causing it to drop back into real mode.[5]

A logical address consists of a 16-bit segment selector (supplying 13+1 address bits) and a 16-bit offset. The segment selector must be located in one of the segment registers. That selector consists of a 2-bit Requested Privilege Level (RPL), a 1-bit Table Indicator (TI), and a 13-bit index. When attempting address translation of a given logical address, the processor reads the 64-bit segment descriptor structure from either the Global Descriptor Table when TI=0 or the Local Descriptor Table when TI=1. It then performs the privilege check max(CPL, RPL) ≤ DPL, where CPL is the current privilege level (found in the lower 2 bits of the CS register), RPL is the requested privilege level from the segment selector, and DPL is the descriptor privilege level of the segment (found in the descriptor). All privilege levels are integers in the range 0–3, where the lowest number corresponds to the highest privilege. If the inequality is false, the processor generates a general protection (GP) fault. Otherwise, address translation continues. The processor then takes the 16-bit offset and compares it against the segment limit specified in the segment descriptor. If it is larger, a GP fault is generated. Otherwise, the processor adds the 24-bit segment base, specified in the descriptor, to the offset, creating a linear physical address. The privilege check is done only when the segment register is loaded, because segment descriptors are cached in hidden parts of the segment registers.[citation needed][3]

In the Intel 80386 and later, protected mode retains the segmentation mechanism of 80286 protected mode, but a paging unit has been added as a second layer of address translation between the segmentation unit and the physical bus. Also, importantly, address offsets are 32-bit (instead of 16-bit), and the segment base in each segment descriptor is also 32-bit (instead of 24-bit). The general operation of the segmentation unit is otherwise unchanged. The paging unit may be enabled or disabled; if disabled, operation is the same as on the 80286. If the paging unit is enabled, addresses in a segment are now virtual addresses, rather than physical addresses as they were on the 80286. That is, the segment starting address, the offset, and the final 32-bit address the segmentation unit derives by adding the two are all virtual (or logical) addresses when the paging unit is enabled. When the segmentation unit generates and validates these 32-bit virtual addresses, the enabled paging unit finally translates these virtual addresses into physical addresses. The physical addresses are 32-bit on the 386, but can be larger on newer processors which support Physical Address Extension.

As mentioned above, the 80386 also introduced two new general-purpose data segment registers, FS and GS, to the original set of four segment registers (CS, DS, ES, and SS). A 386 CPU can be put back into real mode by clearing a bit in the CR0 control register; however, this is a privileged operation in order to enforce security and robustness. By way of comparison, a 286 could only be returned to real mode by forcing a processor reset, e.g. by a triple fault or using external hardware.

The x86-64 architecture does not use segmentation in long mode (64-bit mode).
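The 80286 descriptor check described above can be stated compactly in code. The following sketch is only an illustrative model of the data-segment translation steps (privilege comparison, limit check, then base plus offset); the structure fields and names are assumptions for the example, not an actual hardware interface.

    #include <stdint.h>

    /* Simplified model of 80286 protected-mode translation for a data segment.
     * Returns 0 and stores the linear address on success, -1 on a #GP fault. */
    struct descriptor286 {
        uint32_t base;    /* 24-bit segment base              */
        uint16_t limit;   /* segment limit                    */
        uint8_t  dpl;     /* descriptor privilege level, 0..3 */
    };

    static int translate(const struct descriptor286 *d,
                         uint8_t cpl, uint8_t rpl, uint16_t offset,
                         uint32_t *linear)
    {
        uint8_t effective = (cpl > rpl) ? cpl : rpl;   /* max(CPL, RPL)              */
        if (effective > d->dpl)
            return -1;                                 /* privilege violation -> #GP */
        if (offset > d->limit)
            return -1;                                 /* outside the segment -> #GP */
        *linear = d->base + offset;                    /* 24-bit base + 16-bit offset */
        return 0;
    }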
In 64-bit (long) mode, four of the segment registers, CS, SS, DS, and ES, are forced to base address 0, and the limit to 2^64. The segment registers FS and GS can still have a nonzero base address. This allows operating systems to use these segments for special purposes. Unlike the global descriptor table mechanism used by legacy modes, the base address of these segments is stored in a model-specific register. The x86-64 architecture further provides the special SWAPGS instruction, which allows swapping the kernel mode and user mode base addresses.

For instance, Microsoft Windows on x86-64 uses the GS segment to point to the Thread Environment Block, a small data structure for each thread, which contains information about exception handling, thread-local variables, and other per-thread state. Similarly, the Linux kernel uses the GS segment to store per-CPU data. GS/FS are also used in gcc's thread-local storage and canary-based stack protector.

Logical addresses can be explicitly specified in x86 assembly language by means of a segment override, for example movl %eax, %fs:0x10 in AT&T syntax or mov [fs:0x10], eax in Intel syntax. However, segment registers are usually used implicitly.

Segmentation cannot be turned off on x86-32 processors (this is true for 64-bit mode as well, but beyond the scope of discussion), so many 32-bit operating systems simulate a flat memory model by setting all segments' bases to 0 in order to make segmentation neutral to programs. For instance, the Linux kernel sets up only 4 general-purpose segments: a kernel code segment and a kernel data segment used in kernel mode, and a user code segment and a user data segment used in user mode. Since the base is set to 0 in all cases and the limit to 4 GiB, the segmentation unit does not affect the addresses the program issues before they arrive at the paging unit. (This, of course, refers to 80386 and later processors, as the earlier x86 processors do not have a paging unit.) Current Linux also uses GS to point to thread-local storage.

Segments can be defined to be either code, data, or system segments. Additional permission bits are present to make segments read only, read/write, execute, etc.

In protected mode, code may always modify all segment registers except CS (the code segment selector). This is because the current privilege level (CPL) of the processor is stored in the lower 2 bits of the CS register. The only ways to raise the processor privilege level (and reload CS) are through the lcall (far call) and int (interrupt) instructions. Similarly, the only ways to lower the privilege level (and reload CS) are through the lret (far return) and iret (interrupt return) instructions. In real mode, code may also modify the CS register by making a far jump (or using an undocumented POP CS instruction on the 8086 or 8088).[6] Of course, in real mode, there are no privilege levels; all programs have absolute unchecked access to all of memory and all CPU instructions.

For more information about segmentation, see the IA-32 manuals freely available on the AMD or Intel websites.
https://en.wikipedia.org/wiki/X86_memory_segmentation
Address Windowing Extensions (AWE) is a Microsoft Windows application programming interface that allows a 32-bit software application to access more physical memory than it has virtual address space, even in excess of the 4 GB limit.[1] The process of mapping an application's virtual address space to physical memory under AWE is known as "windowing", and is similar to the overlay concept of other environments. AWE is beneficial to certain data-intensive applications, such as database management systems and scientific and engineering software, that need to manipulate very large data sets while minimizing paging.

The application reserves a region, or "window", of virtual address space, and allocates one or more regions of physical memory. Using the AWE API, the application can map the virtual window to any one of the physical regions. The application can reserve more than one virtual address space and map it to any of the allocated regions of physical memory, as long as the number of bytes reserved in the virtual address space matches that of the physical memory region. An application must have the Lock Pages in Memory privilege to use AWE.

On 32-bit systems, AWE depends on Physical Address Extension support when reserving memory above 4 GB.[2] AWE was first introduced in Windows 2000 as a new API superseding the PSE36 method (from the Windows NT 4.0 Enterprise Edition) of accessing more than 4 GB of memory, which was no longer supported in Windows 2000.[3][4] Among the first applications to make use of AWE were Oracle 8.1.6[4] and Microsoft SQL Server 2000.[3]

If the /3GB boot flag is used to repartition the 32-bit virtual address space (from the 2 GB kernel and 2 GB userland) to 3 GB userland, then AWE is limited to accessing 16 GB of physical memory.[3] This limitation arises because, with only one GB reserved for the kernel, there is not enough memory for the page table entries to map more than 16 GB of memory.[5] Additional restrictions on the maximum amount of memory addressable through AWE are imposed by the Windows licensing scheme. For example, Windows 2000 Advanced Server was limited to 8 GB, while Windows 2000 Datacenter Server supported 64 GB.[6]

An article published in Dr. Dobb's Journal in 2004 noted that memory allocated using Address Windowing Extensions will not be written to the pagefile, and suggested that AWE regions could therefore be used as a way of protecting sensitive application data such as encryption keys.[7]
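A minimal sketch of the AWE calling sequence described above: reserve a virtual window with MEM_PHYSICAL, allocate physical pages, and map them into the window before use. Error handling and enabling the Lock Pages in Memory privilege are omitted, and the 64 MB window size is an arbitrary example value.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SYSTEM_INFO si;
        ULONG_PTR   num_pages, *pfns;
        void       *window;
        SIZE_T      window_bytes = 64 * 1024 * 1024;   /* a 64 MB window */

        GetSystemInfo(&si);
        num_pages = window_bytes / si.dwPageSize;
        pfns = (ULONG_PTR *) HeapAlloc(GetProcessHeap(), 0,
                                       num_pages * sizeof(ULONG_PTR));

        /* Physical pages; requires the Lock Pages in Memory privilege. */
        if (!AllocateUserPhysicalPages(GetCurrentProcess(), &num_pages, pfns)) {
            printf("AllocateUserPhysicalPages failed: %lu\n", GetLastError());
            return 1;
        }

        /* Reserve the virtual "window" that the pages will be mapped into. */
        window = VirtualAlloc(NULL, window_bytes,
                              MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);

        /* Map the physical pages into the window; remapping later lets the
           application swing the window over different physical regions.   */
        MapUserPhysicalPages(window, num_pages, pfns);
        ((char *) window)[0] = 1;                      /* touch the memory */

        MapUserPhysicalPages(window, num_pages, NULL); /* unmap the window */
        FreeUserPhysicalPages(GetCurrentProcess(), &num_pages, pfns);
        return 0;
    }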
https://en.wikipedia.org/wiki/Address_Windowing_Extensions
In computing, Physical Address Extension (PAE), sometimes referred to as Page Address Extension,[1] is a memory management feature for the x86 architecture. PAE was first introduced by Intel in the Pentium Pro, and later by AMD in the Athlon processor.[2] It defines a page table hierarchy of three levels (instead of two), with table entries of 64 bits each instead of 32, allowing these CPUs to directly access a physical address space larger than 4 gigabytes (2^32 bytes).

The page table structure used by x86-64 CPUs when operating in long mode further extends the page table hierarchy to four or more levels, extending the virtual address space, and uses additional physical address bits at all levels of the page table, extending the physical address space. It also uses the topmost bit of the 64-bit page table entry as a no-execute or "NX" bit, indicating that code cannot be executed from the associated page. The NX feature is also available in protected mode when these CPUs are running a 32-bit operating system, provided that the operating system enables PAE.

PAE was first implemented in the Intel Pentium Pro in 1995,[3] although the accompanying chipsets usually lacked support for the required extra address bits.[4] PAE is supported by the Pentium Pro, Pentium II, Pentium III, and Pentium 4 processors. The first Pentium M family processors ("Banias") introduced in 2003 also support PAE; however, they do not show the PAE support flag in their CPUID information.[5] This was remedied in a later revision of the "Dothan" core in 2005. It was also available on AMD processors including the AMD Athlon[6][7] (although the chipsets are limited to 32-bit addressing[8]) and later AMD processor models.

When AMD defined their 64-bit extension of the industry-standard x86 architecture, AMD64 or x86-64, they also enhanced the paging system in "long mode" based on PAE.[9] It supports 64-bit virtual addresses[10]:24 (as of July 2023,[update] 48 bits are implemented on some processors and 57 bits are implemented on others[10]:139,141–143[11]), 52-bit physical addresses,[10]:24 and includes NX bit functionality. When the x86-64 processor is initialized, the PAE feature is required to be enabled before the processor is switched from Legacy Mode to Long Mode.[9]

With PAE, the page table entry of the x86 architecture is enlarged from 32 to 64 bits. This allows more room for the physical page address, or "page frame number" field, in the page table entry. In the initial implementations of PAE the page frame number field was expanded from 20 to 24 bits. The size of the "byte offset" from the address being translated is still 12 bits, so total physical address size increases from 32 bits to 36 bits (i.e. from 20+12 to 24+12). This increased the physical memory that is theoretically addressable by the CPU from 4 GB to 64 GB.

In the first processors that supported PAE, support for larger physical addresses is evident in their package pinout, with address pin designations going up to A35 instead of stopping at A31.[12] Later processor families use interconnects such as Hypertransport or QuickPath Interconnect, which lack dedicated memory address signals, so this relationship is less apparent.

The 32-bit size of the virtual address is not changed, so regular application software continues to use instructions with 32-bit addresses and (in a flat memory model) is limited to 4 gigabytes of virtual address space.
Operating systems supporting this mode usepage tablesto map the regular 4 GB virtual address space into the physical memory, which, depending on the operating system and the rest of the hardware platform, may be as big as 64 GB. The mapping is typically applied separately for eachprocess, so that the additional RAM is useful even though no single process can access it all simultaneously. Later work associated with AMD's development ofx86-64architecture expanded the theoretical possible size of physical addresses to 52 bits.[10]: 24 Inprotected modewith paging enabled (bit 31,PG, of control registerCR0is set), but without PAE,x86processors use a two-level page translation scheme.Control registerCR3holds the page-aligned physical address of a single 4 KB longpage directory. This is divided into 1024 four-byte page directory entries that in turn, if valid, hold the page-aligned physical addresses ofpage tables, each 4 KB in size. These similarly consist of 1024 four-byte page table entries which, if valid, hold the page-aligned physical addresses of 4 KB longpagesof physical memory (RAM). The entries in the page directory have an additional flag in bit 7, namedPS(forpage size). If the system has set this bit to1, the page directory entry does not point to a page table but to a single, large 4 MB page (Page Size Extension). Enabling PAE (by setting bit 5,PAE, of the system registerCR4) causes major changes to this scheme. By default, the size of each page remains as 4 KB. Each entry in the page table and page directory becomes 64 bits long (8 bytes), instead of 32 bits, to allow for additional address bits. However, the size of each tabledoes notchange, so both table and directory now have only 512 entries. Because this allows only one half of the entries of the original scheme, an extra level of hierarchy has been added, so the system registerCR3now points physically to aPage Directory Pointer Table, a short table containing four pointers to page directories. Supporting 64 bit addresses in the page-table is a significant change as this enables two changes to the processor addressing. Firstly, the page table walker, which uses physical addresses to access the page table and directory, can now access physical addresses greater than the 32-bit physical addresses supported in systems without PAE. FromCR3, the page table walker can access page directories and tables that are beyond the 32-bit range. Secondly, the physical address for the data being accessed (stored in the page table) can be represented as a physical address larger than the 32-bit addresses supported in a system without PAE. Again, this allows data accesses to access physical memory regions beyond the 32-bit range.[13] The entries in the page directory have an additional flag in bit 7, namedPS(forpage size). If the system has set this bit to1, the page directory entry does not point to a page table but to a single, large 2 MB page (Page Size Extension). In all page table formats supported byIA-32andx86-64, the 12 least significant bits of the page table entry are either interpreted by the memory management unit or are reserved for operating system use. In processors that implement the "no-execute" or "execution disable" feature, the most significant bit (bit 63) is theNX bit. The next eleven most significant bits (bits 52 through 62) are reserved for operating system use by both Intel and AMD's architecture specifications. 
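The index arithmetic implied by the three-level PAE layout described above can be shown in a few lines; the sketch below splits a 32-bit linear address into a 2-bit pointer-table index, two 9-bit indices, and a 12-bit page offset. The example address is arbitrary.

    #include <stdio.h>
    #include <stdint.h>

    /* Split a 32-bit linear address according to the PAE 4 KB page scheme:
     * bits 31-30 index the 4-entry page directory pointer table,
     * bits 29-21 index a 512-entry page directory,
     * bits 20-12 index a 512-entry page table,
     * bits 11-0  are the offset within the 4 KB page.                      */
    int main(void)
    {
        uint32_t linear = 0xC0ABCDEF;                 /* arbitrary example address */
        unsigned pdpt_index = (linear >> 30) & 0x3;
        unsigned pd_index   = (linear >> 21) & 0x1FF;
        unsigned pt_index   = (linear >> 12) & 0x1FF;
        unsigned offset     =  linear        & 0xFFF;

        printf("PDPT %u, PD %u, PT %u, offset 0x%03X\n",
               pdpt_index, pd_index, pt_index, offset);
        return 0;
    }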
Of the 64 bits in the page table entry, then, 12 low-order and 12 high-order bits have other uses, leaving 40 bits (bits 12 through 51) for the physical page number. Combined with 12 bits of "offset within page" from the linear address, a maximum of 52 bits are available to address physical memory. This allows a maximum RAM configuration of 2^52 bytes, or 4 petabytes (about 4.5×10^15 bytes).

On x86-64 processors in native long mode, the address translation scheme uses PAE but adds a fourth table, the 512-entry page-map level 4 table, and extends the page directory pointer table to 512 entries instead of the original 4 entries it has in protected mode. This means that 48 bits of virtual page number are translated, giving a virtual address space of up to 256 TB. For some processors, a mode can be enabled with a fifth table, the 512-entry page-map level 5 table; this means that 57 bits of virtual page number are translated, giving a virtual address space of up to 128 PB.[10]:141–153 In the page table entries, in the original specification, 40 bits of physical page number are implemented.

Software can identify via the CPUID flag PAE whether a CPU supports PAE mode or not. A free-of-charge program for Microsoft Windows is available which will list many processor capabilities, including PAE support.[14] In Linux, commands such as cat /proc/cpuinfo can list the pae flag when present,[15] as can other tools such as the SYSLINUX Hardware Detection Tool.

To run the processor in PAE mode, operating system support is required. To use PAE to access more than 4 GB of RAM, further support is required in the operating system, in the chipset, and on the motherboard. Some chipsets do not support physical memory addresses above 4 GB (FFFFFFFF in hexadecimal), and some motherboards simply do not have enough RAM sockets to allow the installation of more than 4 GB of RAM. Nevertheless, even if no more than 4 GB of RAM is available and accessible, a PAE-capable CPU may be run in PAE mode, for example to allow use of the No execute feature.

32-bit versions of Microsoft Windows support PAE if booted with the appropriate option. According to Microsoft Technical Fellow Mark Russinovich, some drivers were found to be unstable when encountering physical addresses above 4 GB.[16]

The following table shows the memory limits for 32-bit versions of Microsoft Windows:

The original releases of Windows XP and Windows XP SP1 used PAE mode to allow RAM to extend beyond the 4 GB address limit. However, it led to compatibility problems with 3rd-party drivers, which led Microsoft to remove this capability in Windows XP Service Pack 2. Windows XP SP2 and later, by default, on processors with the no-execute (NX) or execute-disable (XD) feature, runs in PAE mode in order to allow NX.[20] The NX bit resides in bit 63 of the page table entry and, without PAE, page table entries on 32-bit systems have only 32 bits; therefore PAE mode is required in order to exploit the NX feature. However, "client" versions of 32-bit Windows (Windows XP SP2 and later, Windows Vista, Windows 7) limit physical address space to the first 4 GB for driver compatibility[16] even though these versions do run in PAE mode if NX support is enabled.

Windows 8 and later releases will only run on processors which support PAE, in addition to NX and SSE2.[21][22]

Mac OS X Tiger through Mac OS X Snow Leopard support PAE and the NX bit on IA-32 processors; Snow Leopard was the last version to support IA-32 processors.
On x86-64 processors, all versions ofmacOSuse 4-level paging (IA-32e paging rather than PAE) to address memory above 4GB.Mac ProandXservesystems can use up to 64 GB of RAM.[23] TheLinux kernelincludes full PAE-mode support starting with version 2.3.23,[24]in 1999 enabling access of up to 64 GB of memory on 32-bit machines. A PAE-enabled Linux kernel requires that the CPU also support PAE. The Linux kernel supports PAE as a build option and major distributions provide a PAE kernel either as the default or as an option. The NX bit feature requires a kernel built with PAE support.[25] Linux distributionsnow commonly use a PAE-enabled kernel as the default, a trend that began in 2009.[26]As of 2012[update]many, includingUbuntu(and derivatives likeXubuntuandLinux Mint),[27][28]Red Hat Enterprise Linux6.0,[29]andCentOS, have stopped distributing non-PAE kernels, thus making PAE-supporting hardware mandatory. Linux distributions that require PAE may refuse to boot onPentium Mfamily processors because they do not show the PAE support flag in their CPUID information (even though it is supported internally).[5]However, this can be easily bypassed with theforcepaeoption.[30] Distributions that still provide a non-PAE option, includingDebian(and derivatives likeLMDE 2 (Linux Mint Debian Edition)[31]),Slackware, andLXLE, typically do so with "i386", "i486", or "retro" labels.[32][33]The articleLight-weight Linux distributiondoes list some others, allowing to install Linux onto old computers. FreeBSDandNetBSDalso support PAE as a kernel build option.FreeBSDsupports PAE in the 4.x series starting with 4.9, in the 5.x series starting with 5.1, and in all 6.x and later releases. Support requires the kernelPAEconfiguration-option.Loadable kernel modulescan only be loaded into a kernel with PAE enabled if the modules were built with PAE enabled; the binary modules in FreeBSD distributions are not built with PAE enabled, and thus cannot be loaded into PAE kernels. Not all drivers support more than 4 GB of physical memory; those drivers won't work correctly on a system with PAE.[34] OpenBSDhas had support for PAE since 2006 with the standard GENERIC i386 kernel. GeNUA mbH supported the initial implementation.[35]Since release 5.0 PAE has had a series of changes, in particular changes to i386 MMU processing for PMAP, see pmap(9).[36][failed verification] Solarissupports PAE beginning with Solaris version 7. However, third-party drivers used with version 7 which do not specifically include PAE support may operate erratically or fail outright on a system with PAE.[37] Haikuadded initial support for PAE sometime after the R1 Alpha 2 release. With the release of R1 Alpha 3 PAE is now officially supported. ArcaOShas limited support for PAE for the purpose of creating RAM disks above the 4 GB boundary.[38]
https://en.wikipedia.org/wiki/Physical_Address_Extension
Thesidewaysaddress space on theAcornBBC Microcomputer,ElectronandMaster-series microcomputerwas Acorn'sbank switchingimplementation, providing for permanent system expansion in the days beforehard disk drivesor evenfloppy disk driveswere commonplace.Filing systems, application and utility software, and drivers were made available as sidewaysROMs, and extraRAMcould be fitted via the sideways address space. The BBC Micro Advanced User Guide[1]refers to the sideways address space as "paged ROMs" because it predated the use of this address space for RAM expansion. The BBC B+, B+ 128 and BBC Master all featured sideways RAM as standard. The machines used the 8-bit6502and 65C102processorswith a 16-bit address space. The address space was split into 32KBRAM(0x0000 to 0x7FFF), 16 KB sideways address space (0x8000 to 0xBFFF) and 16 KBoperating systemspace (0xC000 to 0xFFFF). The sideways address space is a bank-switched (referred to by Acorn as "paged")address spacethat allows access to one 16 KB bank at a time. Each bank can be ROM or RAM. On both the BBC Micro and the BBC Master, there are ROM sockets on the motherboard (four on the BBC Micro) which take sideways ROMs. The BBC Micro shipped with a single ROM, containingBBC BASIC; further ROMs can be added to the computer to add software that will remain available at all times. The Electron's sideways address space was exposed only by the addition of a Plus 1 add-on or a third-party equivalent; the Plus 1 also introduced cartridge slots that were carried over into the BBC Master design as an alternative way to package ROMs. Sideways ROMs permitted the addition of new filing systems to the OS (such as theDisc Filing System) and application and utility software. Software supplied as ROMs has two main benefits: it loads instantaneously (if delivered as language or service ROMs), and it requires very little RAM to operate (and may use the dedicated paged ROM area of RAM that normal software keeps clear of). This allowed for application software to have more working space than would normally be possible, and for utility software such asdebuggersto operate on software held in RAM. The ROM filing system also allowed software to reside in ROMs as files that would be loaded in a similar way to cassette programs.[2]Such loading was not instantaneous since it involved transferring the files into RAM, but was nevertheless used by Acorn to deliver some cartridge-based software such as games and utilities, ostensibly due to the ability to redeploy cassette-based software in another medium without needing to make significant changes to the software.[3] The first few bytes of sideways ROMs contain details that inform the OS how to handle them. These include language and service entry points, ROM type code, version number and a pointer to the copyright information. On reset the OS validates each sideways bank by checking for a copyright string. During operation the OS talks to valid ROMs by jumping to the two entry points with a specific value of theaccumulatorset. This provides a clean API for expanding the operating system and negotiating bank switching and RAM sharing. ROMs have two entry points: theservice entry pointprovides theAPIaccess to the ROM, and thelanguage entry pointis the starting point for application software contained in the ROM. "Service" ROMs need not have a language entry point, and only exist to extend the OS. 
"Language" ROMs are ROMs that provide application software, and gain their name from the fact that the BBC BASIC language is supplied as the default ROM in bank 15. ROMs often contain both entry points, as all user software must have a service entry point to allow the OS to call into it. Pure service ROMs typically only extend the features of the OS itself, without providing any application software. The BBC Micro and Electron require one language ROM be present atPOSTto provide the computer with a user interface, else the OS will report "Language?" and halt. The version ofAcorn MOSon the BBC Master has a built-in command line and will present this if no default language ROM is configured. In addition to ROM, banks of RAM could be added to the computer via the sideways address space. These could either be used to load and use ROM images from disk or as extra workspace for machine code programs. The BBC Model B is hard-wired to prevent writing to the sideways area, so a write signal needs to be collected from somewhere. The methods vary, but the two most common methods are removing chips from the board and placing them into an expansion board that occupies the chips' original sockets, and fitting a RAM module in a ROM socket with a flying lead connected to a write signal elsewhere on the motherboard. The 64 KB model B+ had 12 KB of "special" sideways RAM. This used the sideways address but was selected by the high bit of the ROM select register and could not be used to load ROM images. The 128 KB model B+ had an expansion board with 64 KB of "regular" sideways RAM in addition to the 12 KB of "special" sideways RAM on the main board. The BBC Master came with 64 KB of regular sideways RAM, and could be configured with motherboard links as to which banks were ROM and which were RAM. In addition it had 4 KB of "special" sideways RAM and 8 KB of RAM paged over the operating system. Unlike on the B+ where the "special" sideways RAM had been available for user applications these memory areas on the Master were used as operating system and filing system workspace. The cartridge port wiring differs between the Electron and Master 128 with regard to certain RAM-related signals. The Master's slots replace the READY signal with a more general R/W signal, preserving the CSRW (chip select, read/write) signal only for certain addresses in pages FC, FD and FE, whereas CSRW corresponds to the CPU read/write line in the Electron Plus 1 cartridge interface. Both systems support RAM cartridges, however.[4] Acorn MOSsupports up to 16 sideways banks. Due to limited motherboard space, extra sideways sockets were made available by third-party expansion boards. Certain boards, such as theWatford ElectronicsSidewise board, also provided the option of permanent, battery backed-up RAM. This allows for developer testing of new sideways ROM software without burning anEPROMfor each attempt. A write-protect switch could be used to prevent the contents of sideways RAM from being modified.
https://en.wikipedia.org/wiki/Sideways_address_space
Incomputer programming, theblock starting symbol(abbreviated to.bssorbss) is the portion of anobject file, executable, orassembly languagecode that containsstatically allocated variablesthat are declared but have not been assigned a value yet. It is often referred to as the "bss section" or "bss segment". Typically only the length of the bss section, but no data, is stored in theobject file. Theprogram loaderallocates memory for the bss section when it loads the program. By placing variables with no value in the .bss section, instead of the.dataor .rodata section which require initial value data, the size of the object file is reduced. On some platforms, some or all of the bss section is initialized to zeroes.Unix-likesystems andWindowsinitialize the bss section to zero, which can thus be used forCandC++statically allocated variables that are initialized to all zero bits. Operating systems may use a technique called zero-fill-on-demand to efficiently implement the bss segment.[1]In embedded software, the bss segment is mapped into memory that is initialized to zero by the Crun-time systembeforemain()is entered. Some C run-time systems may allow part of the bss segment not to be initialized; C variables must explicitly be placed into that portion of the bss segment.[2] On somecomputer architectures, theapplication binary interfacealso supports ansbsssegment for "small data". Typically, these data items can be accessed using shorter instructions that may only be able to access a certain range of addresses. Architectures supportingthread-local storagemight use atbsssection for uninitialized, static data marked as thread-local.[3] Historically,BSS(fromBlock Started by Symbol) is apseudo-operationinUA-SAP(United Aircraft Symbolic Assembly Program), theassemblerdeveloped in the mid-1950s for theIBM 704by Roy Nutt, Walter Ramshaw, and others atUnited Aircraft Corporation.[4][5]The BSS keyword was later incorporated intoFORTRAN Assembly Program[6](FAP) and Macro Assembly Program[7](MAP),IBM's standard assemblers for its709 and 7090/94computers. It defined a label (i.e. symbol) and reserved a block of uninitialized space for a given number ofwords.[8]In this situation BSS served as a shorthand in place of individually reserving a number of separate smaller data locations. Some assemblers support a complementary or alternative directiveBES, forBlock Ended by Symbol, where the specified symbol corresponds to the end of the reserved block.[9] InC, statically allocated objects without an explicit initializer are initialized to zero (for arithmetic types) or a null pointer (for pointer types). Implementations of C typically represent zero values and null pointer values using a bit pattern consisting solely of zero-valued bits (despite filling bss with zero is not required by the C standard, all variables in .bss are required to be individually initialized to some sort of zeroes according to Section 6.7.8 of C ISO Standard 9899:1999 or section 6.7.9 for newer standards). Hence, the BSS segment typically includes all uninitialized objects (both variables andconstants) declared at file scope (i.e., outside any function) as well as uninitializedstatic local variables(local variablesdeclared with thestatickeyword); static localconstantsmust be initialized at declaration, however, as they do not have a separate declaration, and thus are typically not in the BSS section, though they may be implicitly or explicitly initialized to zero. 
An implementation may also assign statically-allocated variables and constants initialized with a value consisting solely of zero-valued bits to the BSS section. Peter van der Linden, a C programmer and author, says, "Some people like to remember it as 'Better Save Space'. Since the BSS segment only holds variables that don't have any value yet, it doesn't actually need to store the image of these variables. The size that BSS will require at runtime is recorded in the object file, but BSS (unlike the data segment) doesn't take up any actual space in the object file."[10] InFortran, common block variables are allocated in this segment.[11]Some compilers may, for64-bitinstruction sets, limit offsets, in instructions that access this segment, to 32 bits, limiting its size to 2 GB or 4 GB.[12][13][14]Also, note that Fortran does not require static data to be initialized to zero. On those systems where the bss segment is initialized to zero, putting common block variables and other static data into that segment guarantees that it will be zero, but for portability, programmers should not depend on that.
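The space saving described above is easy to observe: an initialized array contributes its full contents to the object file, while an uninitialized one only adds to the recorded bss size. A small illustration follows; the variable names are arbitrary and the exact section chosen for each object is ultimately compiler-dependent.

/* bss_demo.c - compare .data and .bss placement.
   Build and inspect with, for example:  cc -c bss_demo.c && size bss_demo.o */
#include <stdio.h>

int initialized[100000] = {1, 2, 3};  /* typically .data: its bytes are stored in the file   */
int uninitialized[100000];            /* typically .bss: only its size is recorded           */
static int zeroed;                    /* static, no initializer: guaranteed to start at zero */

int main(void)
{
    printf("%d %d %d\n", initialized[0], uninitialized[0], zeroed);
    return 0;
}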
https://en.wikipedia.org/wiki/BSS_Segment
Incomputing, adata segment(often denoted.data) is a portion of anobject fileor the correspondingaddress spaceof a program that contains initializedstatic variables, that is,global variablesandstatic local variables. The size of this segment is determined by the size of the values in the program's source code, and does not change atrun time. The data segment is read/write, since the values of variables can be altered at run time. This is in contrast to theread-only data segment(rodatasegmentor.rodata), which contains static constants rather than variables; it also contrasts to thecode segment, also known as the text segment, which is read-only on many architectures. Uninitialized data, both variables and constants, is instead in the.bsssegment. Historically, to be able to support memory address spaces larger than the native size of the internal address register would allow, early CPUs implemented a system of segmentation whereby they would store a small set of indexes to use as offsets to certain areas. TheIntel 8086family of CPUs provided four segments: the code segment, the data segment, the stack segment and the extra segment. Each segment was placed at a specific location in memory by the software being executed and all instructions that operated on the data within those segments were performed relative to the start of that segment. This allowed a 16-bit address register, which would normally be able to access 64 KB of memory space, to access 1 MB of memory space. This segmenting of the memory space into discrete blocks with specific tasks carried over into the programming languages of the day and the concept is still widely in use within modern programming languages. A computer program memory can be largely categorized into two sections: read-only and read/write. This distinction grew from early systems holding their main program inread-only memorysuch asMask ROM,EPROM,PROMorEEPROM. As systems became more complex and programs were loaded from other media into RAM instead of executing from ROM, the idea that some portions of the program's memory should not be modified was retained. These became the.textand.rodatasegments of the program, and the remainder which could be written to divided into a number of other segments for specific tasks. Thecode segment, also known astext segment, containsexecutablecode and is generally read-only and fixed size. Thedata segmentcontains initialized static variables, i.e. global variables and local static variables which have a defined value and can be modified. Examples in C include: TheBSS segmentcontains uninitialized static data, both variables and constants, i.e. global variables and local static variables that are initialized to zero or do not have explicit initialization in source code. Examples in C include: Theheap segmentcontains dynamically allocated memory, commonly begins at the end of the BSS segment and grows to larger addresses from there. It is managed bymalloc, calloc, realloc, and free, which may use thebrkandsbrksystem calls to adjust its size (note that the use of brk/sbrk and a single heap segment is not required to fulfill the contract of malloc/calloc/realloc/free; they may also be implemented usingmmap/munmap to reserve/unreserve potentially non-contiguous regions of virtual memory into the process'virtual address space). The heap segment is shared by all threads, shared libraries, and dynamically loaded modules in a process. Thestack segmentcontains thecall stack, aLIFOstructure, typically located in the higher parts of memory. 
A "stack pointer" register tracks the top of the stack; it is adjusted each time a value is "pushed" onto the stack. The set of values pushed for one function call is termed a "stack frame". A stack frame consists at minimum of a return address.Automatic variablesare also allocated on the stack. The stack segment traditionally adjoined the heap segment and they grew towards each other; when the stack pointer met the heap pointer, free memory was exhausted. With large address spaces and virtual memory techniques they tend to be placed more freely, but they still typically grow in a converging direction. On the standard PCx86 architecturethe stack grows toward address zero, meaning that more recent items, deeper in the call chain, are at numerically lower addresses and closer to the heap. On some other architectures it grows the opposite direction. Some interpreted languages offer a similar facility to the data segment, notablyPerl[1]andRuby.[2]In these languages, including the line__DATA__(Perl) or__END__(Ruby, old Perl) marks the end of the code segment and the start of the data segment. Only the contents prior to this line are executed, and the contents of the source file after this line are available as a file object:PACKAGE::DATAin Perl (e.g.,main::DATA) andDATAin Ruby. This can be considered a form ofhere document(a file literal).
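The C examples referred to above can be sketched in one short program showing where each kind of object is typically placed; the comments reflect the usual placement described in this article, though the actual section assignment is implementation-defined.

/* segments.c - typical section placement for different kinds of objects. */
#include <stdio.h>
#include <stdlib.h>

int counter = 3;                  /* initialized global      -> data segment (.data)     */
const char greeting[] = "Hello";  /* initialized constant    -> read-only data (.rodata) */
static int table[256];            /* uninitialized static    -> BSS segment (.bss)       */

int main(void)                    /* the code itself         -> text segment (.text)     */
{
    int local = 7;                /* automatic variable      -> stack                    */
    int *buf = malloc(1024);      /* dynamically allocated   -> heap                     */

    printf("%d %s %d %d %p\n", counter, greeting, table[0], local, (void *)buf);
    free(buf);
    return 0;
}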
https://en.wikipedia.org/wiki/Data_segment
Flat memory model or linear memory model refers to a memory addressing paradigm in which "memory appears to the program as a single contiguous address space."[1] The CPU can directly (and linearly) address all of the available memory locations without having to resort to any sort of bank switching, memory segmentation or paging schemes. Memory management and address translation can still be implemented on top of a flat memory model in order to facilitate the operating system's functionality, resource protection, multitasking or to increase the memory capacity beyond the limits imposed by the processor's physical address space, but the key feature of a flat memory model is that the entire memory space is linear, sequential and contiguous. In a simple controller, or in a single-tasking embedded application, where memory management is not needed nor desirable, the flat memory model is the most appropriate, because it provides the simplest interface from the programmer's point of view, with direct access to all memory locations and minimum design complexity. In a general purpose computer system, which requires multitasking, resource allocation, and protection, the flat memory system must be augmented by some memory management scheme, which is typically implemented through a combination of dedicated hardware (inside or outside the CPU) and software built into the operating system. The flat memory model (at the physical addressing level) still provides the greatest flexibility for implementing this type of memory management. Most modern memory models fall into one of three categories: the flat memory model, the paged memory model, and the segmented memory model. Within the x86 architectures, when operating in real mode (or emulation), the physical address is computed as:[2] physical address = (segment × 16) + offset (i.e., the 16-bit segment register is shifted left by 4 bits and added to a 16-bit offset, resulting in a 20-bit address.)
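A small sketch of the real-mode calculation given above; the function name and the example segment:offset pair are merely illustrative.

/* real_mode_addr.c - derive a 20-bit real-mode physical address
   from a 16-bit segment and a 16-bit offset. */
#include <stdint.h>
#include <stdio.h>

uint32_t real_mode_address(uint16_t segment, uint16_t offset)
{
    /* The segment is shifted left by 4 bits (multiplied by 16), then the offset is added. */
    return ((uint32_t)segment << 4) + offset;
}

int main(void)
{
    /* 0xF000:0xFFF0 yields 0xFFFF0, the classic x86 reset vector location. */
    printf("0x%05X\n", (unsigned)real_mode_address(0xF000, 0xFFF0));
    return 0;
}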
https://en.wikipedia.org/wiki/Flat_memory_model
A grant table is an interface which grants access to memory pages to virtual machines that do not own the pages.[1] Grant tables are implemented on the Xen hypervisor. Grant tables are generally used in inter-virtual-machine communication, when one of the communicating VMs requires pages owned by the other VM.
https://en.wikipedia.org/wiki/Grant_table
Incomputeroperating systems,memory pagingis amemory managementscheme that allows the physical memory used by a program to be non-contiguous.[1]This also helps avoid the problem of memory fragmentation and requiring compaction to reduce fragmentation. It is often combined with the related technique of allocating and freeingpage framesand storing pages on and retrieving them fromsecondary storage[a]in order to allow the aggregate size of the address spaces to exceed the physical memory of the system.[2]For historical reasons, this technique is sometimes referred to as "swapping". When combined withvirtual memory, it is known aspaged virtual memory. In this scheme, the operating system retrieves data from secondary storage inblocksof the same size. These blocks are calledpages. Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory. Hardware support is necessary for efficient translation of logical addresses tophysical addresses. As such, paged memory functionality is usually hardwired into a CPU through itsMemory Management Unit(MMU) orMemory Protection Unit(MPU), and separately enabled by privileged system code in theoperating system'skernel. In CPUs implementing thex86instruction set architecture(ISA) for instance, the memory paging is enabled via the CR0control register. In the 1960s, swapping was an early virtual memory technique. An entire program or entiresegmentwould be "swapped out" (or "rolled out") from RAM to disk or drum, and another one would beswapped in(orrolled in).[3][4]A swapped-out program would be current but its execution would be suspended while its RAM was in use by another program; a program with a swapped-out segment could continue running until it needed that segment, at which point it would be suspended until the segment was swapped in. A program might include multipleoverlaysthat occupy the same memory at different times. Overlays are not a method of paging RAM to secondary storage[a]but merely of minimizing the program's RAM use. Subsequent architectures usedmemory segmentation, and individual program segments became the units exchanged between secondary storage and RAM. A segment was the program's entire code segment or data segment, or sometimes other large data structures. These segments had to becontiguouswhen resident in RAM, requiring additional computation and movement to remedyfragmentation.[5] Ferranti'sAtlas, and theAtlas Supervisordeveloped at theUniversity of Manchester,[6](1962), was the first system to implement memory paging. Subsequent early machines, and their operating systems, supporting paging include theIBM M44/44Xand its MOS operating system (1964),[7]theSDS 940[8]and theBerkeley Timesharing System(1966), a modifiedIBM System/360 Model 40and theCP-40operating system (1967), theIBM System/360 Model 67and operating systems such asTSS/360andCP/CMS(1967), theRCA 70/46and theTime Sharing Operating System(1967), theGE 645andMultics(1969), and thePDP-10with addedBBN-designed paging hardware and theTENEXoperating system (1969). Those machines, and subsequent machines supporting memory paging, use either a set ofpage address registersor in-memorypage tables[d]to allow the processor to operate on arbitrary pages anywhere in RAM as a seemingly contiguouslogical addressspace. These pages became the units exchanged between secondary storage[a]and RAM. 
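Because pages are fixed-size blocks, a virtual address divides cleanly into a virtual page number (used to index the page table) and an offset within the page. A minimal sketch assuming 4 KB pages; the constants and names are illustrative.

/* page_split.c - split a virtual address into page number and offset (4 KB pages assumed). */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                      /* 4096-byte pages */
#define PAGE_SIZE  (1ULL << PAGE_SHIFT)
#define PAGE_MASK  (PAGE_SIZE - 1)

int main(void)
{
    uint64_t vaddr  = 0x7ffd1234abcdULL;
    uint64_t vpn    = vaddr >> PAGE_SHIFT; /* virtual page number: index into the page table */
    uint64_t offset = vaddr & PAGE_MASK;   /* byte offset within the page                    */

    printf("vpn=0x%llx offset=0x%llx\n",
           (unsigned long long)vpn, (unsigned long long)offset);
    return 0;
}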
When a process tries to reference a page not currently mapped to a page frame in RAM, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system. The operating system must determine the location of the data in secondary storage, obtain an empty page frame in RAM to use as a container for the data, load the requested data into that page frame, update the page table to refer to the new page frame, and return control to the program so that the faulting instruction can be retried. When all page frames are in use, the operating system must select a page frame to reuse for the page the program now needs. If the evicted page frame was dynamically allocated by a program to hold data, or if a program modified it since it was read into RAM (in other words, if it has become "dirty"), it must be written out to secondary storage before being freed. If a program later references the evicted page, another page fault occurs and the page must be read back into RAM. The method the operating system uses to select the page frame to reuse, which is its page replacement algorithm, affects efficiency. The operating system predicts the page frame least likely to be needed soon, often through the least recently used (LRU) algorithm or an algorithm based on the program's working set. To further increase responsiveness, paging systems may predict which pages will be needed soon, preemptively loading them into RAM before a program references them, and may steal page frames from pages that have been unreferenced for a long time, making them available. Some systems clear new pages to avoid data leaks that compromise security; some set them to installation-defined or random values to aid debugging. When pure demand paging is used, pages are loaded only when they are referenced. A program from a memory-mapped file begins execution with none of its pages in RAM. As the program commits page faults, the operating system copies the needed pages from a file (e.g., a memory-mapped file, paging file, or a swap partition containing the page data) into RAM. Some systems use only demand paging—waiting until a page is actually requested before loading it into RAM. Other systems attempt to reduce latency by guessing which pages not in RAM are likely to be needed soon, and pre-loading such pages into RAM, before that page is requested. (This is often in combination with pre-cleaning, which guesses which pages currently in RAM are not likely to be needed soon, and pre-writing them out to storage.) When a page fault occurs, anticipatory paging systems will not only bring in the referenced page, but also other pages that are likely to be referenced soon. A simple anticipatory paging algorithm will bring in the next few consecutive pages even though they are not yet needed (a prediction using locality of reference); this is analogous to a prefetch input queue in a CPU. Swap prefetching will prefetch recently swapped-out pages if there are enough free pages for them.[9] If a program ends, the operating system may delay freeing its pages, in case the user runs the same program again. Some systems allow application hints; the application may request that a page be made available and continue without delay. The free page queue is a list of page frames that are available for assignment. Preventing this queue from being empty minimizes the computing necessary to service a page fault. Some operating systems periodically look for pages that have not been recently referenced and then free the page frame and add it to the free page queue, a process known as "page stealing".
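The page-fault handling and eviction behaviour described above can be condensed into a toy user-space simulation. This is purely illustrative and assumes nothing about any real kernel: it uses a tiny pool of frames, a trivial round-robin victim selection in place of a real replacement algorithm, and merely prints when a dirty page would have to be written back.

/* page_fault_sim.c - toy simulation of demand paging with a small frame pool. */
#include <stdio.h>

#define NPAGES  8   /* virtual pages   */
#define NFRAMES 3   /* physical frames */

struct pte { int present, dirty, frame; };

static struct pte page_table[NPAGES];
static int frame_owner[NFRAMES];   /* which virtual page occupies each frame, -1 if free */
static int next_victim;            /* trivial round-robin victim selection               */

static void touch(int vpn, int is_write)
{
    if (!page_table[vpn].present) {                      /* page fault */
        int f;
        for (f = 0; f < NFRAMES && frame_owner[f] != -1; f++)
            ;                                            /* look for a free frame */
        if (f == NFRAMES) {                              /* none free: evict a victim */
            f = next_victim++ % NFRAMES;
            int victim = frame_owner[f];
            if (page_table[victim].dirty)
                printf("  page %d is dirty, write it back first\n", victim);
            page_table[victim].present = 0;
        }
        printf("page fault on page %d -> load into frame %d\n", vpn, f);
        frame_owner[f] = vpn;
        page_table[vpn] = (struct pte){ .present = 1, .dirty = 0, .frame = f };
    }
    if (is_write)
        page_table[vpn].dirty = 1;                       /* mark modified pages dirty */
}

int main(void)
{
    for (int i = 0; i < NFRAMES; i++)
        frame_owner[i] = -1;

    int refs[][2] = { {0,0}, {1,1}, {2,0}, {0,0}, {3,0}, {1,0}, {4,1} };
    for (unsigned i = 0; i < sizeof refs / sizeof refs[0]; i++)
        touch(refs[i][0], refs[i][1]);
    return 0;
}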
Some operating systems[e]supportpage reclamation; if a program commits a page fault by referencing a page that was stolen, the operating system detects this and restores the page frame without having to read the contents back into RAM. The operating system may periodically pre-clean dirty pages: write modified pages back to secondary storage[a]even though they might be further modified. This minimizes the amount of cleaning needed to obtain new page frames at the moment a new program starts or a new data file is opened, and improves responsiveness. (Unix operating systems periodically usesyncto pre-clean all dirty pages; Windows operating systems use "modified page writer" threads.) Some systems allow application hints; the application may request that a page be cleared or paged out and continue without delay. After completing initialization, most programs operate on a small number of code and data pages compared to the total memory the program requires. The pages most frequently accessed are called theworking set. When the working set is a small percentage of the system's total number of pages, virtual memory systems work most efficiently and an insignificant amount of computing is spent resolving page faults. As the working set grows, resolving page faults remains manageable until the growth reaches a critical point. Then faults go up dramatically and the time spent resolving them overwhelms time spent on the computing the program was written to do. This condition is referred to asthrashing. Thrashing occurs on a program that works with huge data structures, as its large working set causes continual page faults that drastically slow down the system. Satisfying page faults may require freeing pages that will soon have to be re-read from secondary storage.[a]"Thrashing" is also used in contexts other than virtual memory systems; for example, to describecacheissues in computing orsilly window syndromein networking. A worst case might occur onVAXprocessors. A single MOVL crossing a page boundary could have a source operand using a displacement deferred addressing mode, where the longword containing the operand address crosses a page boundary, and a destination operand using a displacement deferred addressing mode, where the longword containing the operand address crosses a page boundary, and the source and destination could both cross page boundaries. This single instruction references ten pages; if not all are in RAM, each will cause a page fault. As each fault occurs the operating system needs to go through the extensive memory management routines perhaps causing multiple I/Os which might include writing other process pages to disk and reading pages of the active process from disk. If the operating system could not allocate ten pages to this program, then remedying the page fault would discard another page the instruction needs, and any restart of the instruction would fault again. To decrease excessive paging and resolve thrashing problems, a user can increase the number of pages available per program, either by running fewer programs concurrently or increasing the amount of RAM in the computer. Inmulti-programmingor in amulti-userenvironment, many users may execute the same program, written so that its code and data are in separate pages. To minimize RAM use, all users share a single copy of the program. 
Each process'spage tableis set up so that the pages that address code point to the single shared copy, while the pages that address data point to different physical pages for each process. Different programs might also use the same libraries. To save space, only one copy of the shared library is loaded into physical memory. Programs which use the same library have virtual addresses that map to the same pages (which contain the library's code and data). When programs want to modify the library's code, they usecopy-on-write, so memory is only allocated when needed. Shared memory is an efficient means of communication between programs. Programs can share pages in memory, and then write and read to exchange data. The first computer to support paging was the supercomputerAtlas,[10][11][12]jointly developed byFerranti, theUniversity of ManchesterandPlesseyin 1963. The machine had an associative (content-addressable) memory with one entry for each 512 word page. The Supervisor[13]handled non-equivalence interruptions[f]and managed the transfer of pages between core and drum in order to provide a one-level store[14]to programs. Paging has been a feature ofMicrosoft WindowssinceWindows 3.0in 1990. Windows 3.x creates ahidden filenamed386SPART.PARorWIN386.SWPfor use as a swap file. It is generally found in theroot directory, but it may appear elsewhere (typically in the WINDOWS directory). Its size depends on how much swap space the system has (a setting selected by the user underControl Panel→ Enhanced under "Virtual Memory"). If the user moves or deletes this file, ablue screenwill appear the next time Windows is started, with theerror message"The permanent swap file is corrupt". The user will be prompted to choose whether or not to delete the file (even if it does not exist). Windows 95,Windows 98andWindows Meuse a similar file, and the settings for it are located under Control Panel → System → Performance tab → Virtual Memory. Windows automatically sets the size of the page file to start at 1.5× the size of physical memory, and expand up to 3× physical memory if necessary. If a user runs memory-intensive applications on a system with low physical memory, it is preferable to manually set these sizes to a value higher than default. The file used for paging in theWindows NTfamily ispagefile.sys. The default location of the page file is in the root directory of the partition where Windows is installed. Windows can be configured to use free space on any available drives for page files. It is required, however, for the boot partition (i.e., the drive containing the Windows directory) to have a page file on it if the system is configured to write either kernel or full memory dumps after aBlue Screen of Death. Windows uses the paging file as temporary storage for the memory dump. When the system is rebooted, Windows copies the memory dump from the page file to a separate file and frees the space that was used in the page file.[15] In the default configuration of Windows, the page file is allowed to expand beyond its initial allocation when necessary. If this happens gradually, it can become heavilyfragmentedwhich can potentially cause performance problems.[16]The common advice given to avoid this is to set a single "locked" page file size so that Windows will not expand it. 
However, the page file only expands when it has been filled, which, in its default configuration, is 150% of the total amount of physical memory.[17]Thus the total demand for page file-backed virtual memory must exceed 250% of the computer's physical memory before the page file will expand. The fragmentation of the page file that occurs when it expands is temporary. As soon as the expanded regions are no longer in use (at the next reboot, if not sooner) the additional disk space allocations are freed and the page file is back to its original state. Locking a page file size can be problematic if a Windows application requests more memory than the total size of physical memory and the page file, leading to failed requests to allocate memory that may cause applications and system processes to fail. Also, the page file is rarely read or written in sequential order, so the performance advantage of having a completely sequential page file is minimal. However, a large page file generally allows the use of memory-heavy applications, with no penalties besides using more disk space. While a fragmented page file may not be an issue by itself, fragmentation of a variable size page file will over time create several fragmented blocks on the drive, causing other files to become fragmented. For this reason, a fixed-size contiguous page file is better, providing that the size allocated is large enough to accommodate the needs of all applications. The required disk space may be easily allocated on systems with more recent specifications (i.e. a system with 3 GB of memory having a 6 GB fixed-size page file on a 750 GB disk drive, or a system with 6 GB of memory and a 16 GB fixed-size page file and 2 TB of disk space). In both examples, the system uses about 0.8% of the disk space with the page file pre-extended to its maximum. Defragmentingthe page file is also occasionally recommended to improve performance when a Windows system is chronically using much more memory than its total physical memory.[18]This view ignores the fact that, aside from the temporary results of expansion, the page file does not become fragmented over time. In general, performance concerns related to page file access are much more effectively dealt with by adding more physical memory. Unixsystems, and otherUnix-likeoperating systems, use the term "swap" to describe the act of substituting disk space for RAM when physical RAM is full.[19]In some of those systems, it is common to dedicate an entire partition of a hard disk to swapping. These partitions are calledswap partitions. Many systems have an entire hard drive dedicated to swapping, separate from the data drive(s), containing only a swap partition. A hard drive dedicated to swapping is called a "swap drive" or a "scratch drive" or a "scratch disk". Some of those systems only support swapping to a swap partition; others also support swapping to files. The Linux kernel supports a virtually unlimited number of swap backends (devices or files), and also supports assignment of backend priorities. When the kernel swaps pages out of physical memory, it uses the highest-priority backend with available free space. 
If multiple swap backends are assigned the same priority, they are used in around-robinfashion (which is somewhat similar toRAID 0storage layouts), providing improved performance as long as the underlying devices can be efficiently accessed in parallel.[20] From the end-user perspective, swap files in versions 2.6.x and later of the Linux kernel are virtually as fast as swap partitions; the limitation is that swap files should be contiguously allocated on their underlying file systems. To increase performance of swap files, the kernel keeps a map of where they are placed on underlying devices and accesses them directly, thus bypassing the cache and avoiding filesystem overhead.[21][22]When residing on HDDs, which are rotational magnetic media devices, one benefit of using swap partitions is the ability to place them on contiguous HDD areas that provide higher data throughput or faster seek time. However, the administrative flexibility of swap files can outweigh certain advantages of swap partitions. For example, a swap file can be placed on any mounted file system, can be set to any desired size, and can be added or changed as needed. Swap partitions are not as flexible; they cannot be enlarged without using partitioning orvolume managementtools, which introduce various complexities and potential downtimes. Swappinessis aLinux kernelparameter that controls the relative weight given toswapping outofruntime memory, as opposed to droppingpagesfrom the systempage cache, whenever a memory allocation request cannot be met from free memory. Swappiness can be set to a value from 0 to 200.[23]A low value causes the kernel to prefer to evict pages from the page cache while a higher value causes the kernel to prefer to swap out "cold" memory pages. Thedefault valueis60; setting it higher can cause high latency if cold pages need to be swapped back in (when interacting with a program that had been idle for example), while setting it lower (even 0) may cause high latency when files that had been evicted from the cache need to be read again, but will make interactive programs more responsive as they will be less likely to need to swap back cold pages. Swapping can also slow downHDDsfurther because it involves a lot of random writes, whileSSDsdo not have this problem. Certainly the default values work well in most workloads, but desktops and interactive systems for any expected task may want to lower the setting while batch processing and less interactive systems may want to increase it.[24] When the system memory is highly insufficient for the current tasks and a large portion of memory activity goes through a slow swap, the system can become practically unable to execute any task, even if the CPU is idle. When every process is waiting on the swap, the system is considered to be inswap death.[25][26] Swap death can happen due to incorrectly configuredmemory overcommitment.[27][28][29] The original description of the "swapping to death" problem relates to theX server. If code or data used by the X server to respond to a keystroke is not in main memory, then if the user enters a keystroke, the server will take one or more page faults, requiring those pages to read from swap before the keystroke can be processed, slowing the response to it. If those pages do not remain in memory, they will have to be faulted in again to handle the next keystroke, making the system practically unresponsive even if it's actually executing other tasks normally.[30] macOSuses multiple swap files. 
The default (and Apple-recommended) installation places them on the root partition, though it is possible to place them instead on a separate partition or device.[31] AmigaOS 4.0 introduced a new system for allocating RAM and defragmenting physical memory. It still uses a flat shared address space that cannot be defragmented. It is based on slab allocation and paging memory that allows swapping. Paging was implemented in AmigaOS 4.1. It can lock up the system if all physical memory is used up.[32] Swap memory could be activated and deactivated, allowing the user to choose to use only physical RAM. The backing store for a virtual memory operating system is typically many orders of magnitude slower than RAM. Additionally, using mechanical storage devices introduces delay, several milliseconds for a hard disk. Therefore, it is desirable to reduce or eliminate swapping, where practical. Some operating systems offer settings to influence the kernel's decisions. Many Unix-like operating systems (for example AIX, Linux, and Solaris) allow using multiple storage devices for swap space in parallel, to increase performance. In some older virtual memory operating systems, space in the swap backing store is reserved when programs allocate memory for runtime data. Operating system vendors typically issue guidelines about how much swap space should be allocated. Paging is one way of allowing the size of the addresses used by a process, which is the process's "virtual address space" or "logical address space", to be different from the amount of main memory actually installed on a particular computer, which is the physical address space. In most systems, the size of a process's virtual address space is much larger than the available main memory.[35] A computer with true n-bit addressing may have 2^n addressable units of RAM installed. An example is a 32-bit x86 processor with 4 GB of RAM and without Physical Address Extension (PAE). In this case, the processor is able to address all the RAM installed and no more. However, even in this case, paging can be used to support more virtual memory than physical memory. For instance, many programs may be running concurrently. Together, they may require more physical memory than can be installed on the system, but not all of it will have to be in RAM at once. A paging system makes efficient decisions on which memory to relegate to secondary storage, leading to the best use of the installed RAM. In addition, the operating system may provide services to programs that envision a larger memory, such as files that can grow beyond the limit of installed RAM. Not all of the file can be concurrently mapped into the address space of a process, but the operating system might allow regions of the file to be mapped into the address space, and unmapped if another region needs to be mapped in; a sketch of this facility appears below. A few computers have a main memory larger than the virtual address space of a process, such as the Magic-1,[35] some PDP-11 machines, and some systems using 32-bit x86 processors with Physical Address Extension. This nullifies a significant advantage of paging, since a single process cannot use more main memory than the amount of its virtual address space. Such systems often use paging techniques to obtain secondary benefits, such as using the "extra" physical memory as a page cache for frequently used files. The size of the cumulative total of virtual address spaces is still limited by the amount of secondary storage available.
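The file-mapping facility mentioned above (mapping regions of a file into a process's address space, with pages brought in on demand) is exposed on POSIX systems through mmap. A minimal sketch; the file name is arbitrary, error handling is abbreviated, and the file is assumed to exist and be non-empty.

/* map_file.c - map a file into the address space; pages are read in on first access. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("example.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file read-only; no page is read until it is first touched. */
    const char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    long sum = 0;
    for (off_t i = 0; i < st.st_size; i += 4096)  /* touching each page can trigger a fault */
        sum += (unsigned char)data[i];

    printf("sum of the first byte of each page: %ld\n", sum);
    munmap((void *)data, st.st_size);
    close(fd);
    return 0;
}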
https://en.wikipedia.org/wiki/Memory_paging
The zero page or base page is the block of memory at the very beginning of a computer's address space; that is, the page whose starting address is zero. The size of a page depends on the context, and the significance of zero page memory versus higher addressed memory is highly dependent on machine architecture. For example, the Motorola 6800 and MOS Technology 6502 processor families treat the first 256 bytes of memory specially,[1] whereas many other processors do not. Unlike more modern hardware, in the 1970s computer RAM speed was similar to that of CPUs.[citation needed] Thus it made sense to have few registers and use the main memory as an extended pool of extra registers. In machines with a relatively wide 16-bit address bus and comparatively narrow 8-bit data bus, calculating an address in memory could take several cycles. The zero page's one-byte address was smaller and therefore faster to read and calculate than other locations, making the zero page useful for high-performance code. Zero page addressing now has mostly historical significance, since the developments in integrated circuit technology have made adding more registers to a CPU less expensive and CPU operations much faster than RAM accesses. The actual size of the zero page in bytes is determined by the microprocessor design and, in older designs, is often equal to the largest value that can be referenced by the processor's indexing registers. For example, the aforementioned 8-bit processors have 8-bit index registers and a page size of 256 bytes. Therefore, their zero page extends from address 0 to address 255. In early computers, such as the PDP-8, the zero page had a special fast addressing mode, which facilitated its use for temporary storage of data and compensated for the paucity of CPU registers. The PDP-8 had only one register, so zero page addressing was essential. In the original PDP-10 KA-10 models, the available registers are simply the first 16 words, 36 bits long, of main memory. Those locations can be accessed as both registers and memory locations. Unlike more modern hardware, 1970s-era computer RAM was as fast as the CPU. Thus, it made sense to have few registers and use the main memory as an extended pool of extra registers. In machines with a 16-bit address bus and 8-bit data bus, accessing zero page locations could be faster than accessing other locations. Since zero page locations could be addressed by a single byte, the instructions accessing them could be shorter and hence faster-loading. For example, the MOS Technology 6502 family has only one general purpose register: the accumulator. To offset this limitation and gain a performance advantage, the 6502 is designed to make special use of the zero page, providing instructions whose operands are eight bits, instead of 16, thus requiring fewer memory fetch cycles. Many instructions are coded differently for zero page and non-zero page addresses; this is called zero-page addressing in 6502 terminology (it is called direct addressing in Motorola 6800 terminology; the Western Design Center 65C816 also refers to zero page addressing as direct page addressing), as in LDA $12 (zero-page addressing) versus LDA $0012 (absolute addressing). In 6502 assembly language, these two instructions both accomplish the same thing: they load the value of memory location $12 into the .A (accumulator) register ($ is Motorola/MOS Technology assembly language notation for a hexadecimal number). However, the first instruction is only two bytes long and requires three clock cycles to complete. The second instruction is three bytes in length and requires four clock cycles to execute.
This difference in execution time could become significant in repetitive code. Some processors, such as theMotorola 6809and the aforementioned WDC 65C816, implement a “direct page register” (DP) that tells the processor the starting address inRAMof what is considered to be zero page.  In this context, zero page addressing is notional; the actual access would not be to the physical zero page ifDPis loaded with some address other than$00(or$0000in the case of the 65C816). Some computer architectures still reserve the beginning of address space for other purposes, though; for instance,Intelx86systems reserve the first 256 double-words of address space for theinterrupt vector table(IVT) if they run inreal mode. A similar technique of using the zero page for hardware related vectors was employed in the ARM architecture. In badly written programs this could lead to "ofla" behaviour, where a program tries to read information from an unintended memory area, and treats executable code as data or vice versa. This is especially problematic if the zero page area is used to store system jump vectors and the firmware is tricked into overwriting them.[2] In 8-bitCP/M, the zero page is used for communication between the running program and the operating system. In some processor architectures, like that of theIntel 40044-bit processor, memory was divided into (256 byte) pages and special precautions had to be taken when thecontrol flowcrossedpage boundaries, as somemachine instructionsexhibited different behaviour if located in the last few instructions of a page, so that only few instructions were recommended to jump between pages.[3] Contrary to the zero page's original preferential use, some modern operating systems such asFreeBSD,Linux,Solaris,macOS, andMicrosoft Windows[4]actually make the zero page inaccessible to trap uses ofnull pointers. Such pointer values may legitimately indicate uninitialized values orsentinel nodes, but they do not point to valid objects.Buggy codemay try to access an object via a null pointer, and this can be trapped at the operating system level as a memoryaccess violation.
https://en.wikipedia.org/wiki/Zero_page
The Zero Page (or Base Page) is a data structure used in CP/M systems for programs to communicate with the operating system. In 8-bit CP/M versions it is located in the first 256 bytes of memory, hence its name. The equivalent structure in DOS is the Program Segment Prefix (PSP), a 256-byte (page-sized) structure, which is by default located exactly before offset 0 of the program's load segment, rather than in segment 0. A segment register is initialised to 0x10 less than the code segment, in order to address it. In 8-bit CP/M, the zero page contains, among other fields, the warm-boot entry vector, the IOBYTE, the current default drive, the BDOS entry vector, the default file control block (FCB), and the default DMA buffer; a sketch of this layout is given below. CP/M-86 uses an analogous base page structure with a different layout.
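A sketch of the conventional 8-bit CP/M zero-page fields referred to above, expressed as a C struct purely as documentation; the offsets are the commonly documented ones, and real CP/M programs access these locations by absolute address rather than through such a type.

/* Model of the conventional 8-bit CP/M zero page (locations 0000h-00FFh).
   Descriptive only; offsets follow the commonly documented layout. */
#include <stdint.h>

struct cpm_zero_page {
    uint8_t warm_boot_jmp[3];        /* 0000h: JMP to the BIOS warm-start entry           */
    uint8_t iobyte;                  /* 0003h: IOBYTE device assignments                  */
    uint8_t drive_user;              /* 0004h: current default drive and user number      */
    uint8_t bdos_jmp[3];             /* 0005h: JMP to the BDOS entry (marks top of TPA)   */
    uint8_t reserved[0x5C - 0x08];   /* 0008h-005Bh: mostly reserved / restart vector area */
    uint8_t default_fcb[0x24];       /* 005Ch: default file control block                 */
    uint8_t dma_buffer[0x80];        /* 0080h: default DMA buffer / command-line tail     */
};

/* The layout should occupy exactly one 256-byte page. */
_Static_assert(sizeof(struct cpm_zero_page) == 0x100, "zero page must be 256 bytes");

int main(void) { return sizeof(struct cpm_zero_page) == 0x100 ? 0 : 1; }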
https://en.wikipedia.org/wiki/Zero_page_(CP/M)
Incomputing, acache(/kæʃ/ⓘKASH)[1]is a hardware or software component that stores data so that future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation or a copy of data stored elsewhere. Acache hitoccurs when the requested data can be found in a cache, while acache missoccurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests that can be served from the cache, the faster the system performs.[2] To be cost-effective, caches must be relatively small. Nevertheless, caches are effective in many areas of computing because typicalcomputer applicationsaccess data with a high degree oflocality of reference. Such access patterns exhibit temporal locality, where data is requested that has been recently requested, and spatial locality, where data is requested that is stored near data that has already been requested. In memory design, there is an inherent trade-off between capacity and speed because larger capacity implies larger size and thus greater physical distances for signals to travel causingpropagation delays. There is also a tradeoff between high-performance technologies such asSRAMand cheaper, easily mass-produced commodities such asDRAM,flash, orhard disks. Thebufferingprovided by a cache benefits one or both oflatencyandthroughput(bandwidth). A larger resource incurs a significant latency for access – e.g. it can take hundreds of clock cycles for a modern 4 GHz processor to reach DRAM. This is mitigated by reading large chunks into the cache, in the hope that subsequent reads will be from nearby locations and can be read from the cache. Prediction or explicitprefetchingcan be used to guess where future reads will come from and make requests ahead of time; if done optimally, the latency is bypassed altogether. The use of a cache also allows for higher throughput from the underlying resource, by assembling multiple fine-grain transfers into larger, more efficient requests. In the case of DRAM circuits, the additional throughput may be gained by using a wider data bus. Hardware implements cache as ablockof memory for temporary storage of data likely to be used again.Central processing units(CPUs),solid-state drives(SSDs) and hard disk drives (HDDs) frequently include hardware-based cache, whileweb browsersandweb serverscommonly rely on software caching. A cache is made up of a pool of entries. Each entry has associateddata, which is a copy of the same data in somebacking store. Each entry also has atag, which specifies the identity of the data in the backing store of which the entry is a copy. When the cache client (a CPU, web browser,operating system) needs to access data presumed to exist in the backing store, it first checks the cache. If an entry can be found with a tag matching that of the desired data, the data in the entry is used instead. This situation is known as acache hit. For example, a web browser program might check its local cache on disk to see if it has a local copy of the contents of a web page at a particularURL. In this example, the URL is the tag, and the content of the web page is the data. The percentage of accesses that result in cache hits is known as thehit rateorhit ratioof the cache. The alternative situation, when the cache is checked and found not to contain any entry with the desired tag, is known as acache miss. 
This requires a more expensive access of data from the backing store. Once the requested data is retrieved, it is typically copied into the cache, ready for the next access. During a cache miss, some other previously existing cache entry is typically removed in order to make room for the newly retrieved data. The heuristic used to select the entry to replace is known as the replacement policy. One popular replacement policy, least recently used (LRU), replaces the oldest entry, the entry that was accessed less recently than any other entry. More sophisticated caching algorithms also take into account the frequency of use of entries. Cache writes must eventually be propagated to the backing store. The timing for this is governed by the write policy. The two primary write policies are:[3] write-through, in which every write to the cache is performed synchronously to the backing store as well, and write-back, in which writes initially go only to the cache and the write to the backing store is postponed until the modified content is about to be replaced. A write-back cache is more complex to implement since it needs to track which of its locations have been written over and mark them as dirty for later writing to the backing store. The data in these locations are written back to the backing store only when they are evicted from the cache, a process referred to as a lazy write. For this reason, a read miss in a write-back cache may require two memory accesses to the backing store: one to write back the dirty data, and one to retrieve the requested data. Other policies may also trigger data write-back. The client may make many changes to data in the cache, and then explicitly notify the cache to write back the data. Write operations do not return data. Consequently, a decision needs to be made for write misses: whether or not to load the data into the cache. This is determined by these write-miss policies: with write allocate (fetch on write), the data at the missed-write location is loaded into the cache, followed by a write-hit operation; with no-write allocate (write around), the data is not loaded into the cache and is written directly to the backing store. While both write policies can implement either write-miss policy, they are typically paired as follows:[4][5] a write-back cache usually uses write allocate, hoping that subsequent writes (or even reads) to the same location will hit the cache, while a write-through cache usually uses no-write allocate, since subsequent writes to that location would still have to go to the backing store. Entities other than the cache may change the data in the backing store, in which case the copy in the cache may become out-of-date or stale. Alternatively, when the client updates the data in the cache, copies of that data in other caches will become stale. Communication protocols between the cache managers that keep the data consistent are associated with cache coherence. On a cache read miss, caches with a demand paging policy read the minimum amount from the backing store. A typical demand-paging virtual memory implementation reads one page of virtual memory (often 4 KB) from disk into the disk cache in RAM. A typical CPU reads a single L2 cache line of 128 bytes from DRAM into the L2 cache, and a single L1 cache line of 64 bytes from the L2 cache into the L1 cache. Caches with a prefetch input queue or more general anticipatory paging policy go further—they not only read the data requested, but guess that the next chunk or two of data will soon be required, and so prefetch that data into the cache ahead of time. Anticipatory paging is especially helpful when the backing store has a long latency to read the first chunk and much shorter times to sequentially read the next few chunks, such as disk storage and DRAM. A few operating systems go further with a loader that always pre-loads the entire executable into RAM. A few caches go even further, not only pre-loading an entire file, but also starting to load other related files that may soon be requested, such as the page cache associated with a prefetcher or the web cache associated with link prefetching.
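The lookup, LRU replacement, and write-back behaviour described above can be seen in a toy, fully associative cache placed in front of a small in-memory "backing store". Everything here (the entry count, the names, and the array standing in for the backing store) is arbitrary and purely illustrative.

/* tiny_cache.c - fully associative 4-entry cache with LRU replacement and
   a write-back, write-allocate policy, caching a small backing-store array. */
#include <stdio.h>

#define NENTRIES 4

struct entry { int valid, dirty, tag, data; long last_used; };

static int backing_store[64];
static struct entry cache[NENTRIES];
static long tick;

static struct entry *lookup(int tag)
{
    for (int i = 0; i < NENTRIES; i++)
        if (cache[i].valid && cache[i].tag == tag)
            return &cache[i];            /* cache hit  */
    return NULL;                         /* cache miss */
}

static struct entry *evict_lru(void)
{
    struct entry *victim = &cache[0];
    for (int i = 1; i < NENTRIES; i++)
        if (!cache[i].valid || cache[i].last_used < victim->last_used)
            victim = &cache[i];
    if (victim->valid && victim->dirty)            /* lazy write on eviction */
        backing_store[victim->tag] = victim->data;
    return victim;
}

static struct entry *cache_access(int tag, int is_write, int value)
{
    struct entry *e = lookup(tag);
    if (!e) {                                      /* miss: allocate on read and on write */
        e = evict_lru();
        *e = (struct entry){ .valid = 1, .tag = tag, .data = backing_store[tag] };
    }
    if (is_write) { e->data = value; e->dirty = 1; }  /* write-back: only mark dirty */
    e->last_used = ++tick;
    return e;
}

int main(void)
{
    cache_access(1, 1, 42);                   /* write miss: allocate, then write      */
    cache_access(2, 0, 0); cache_access(3, 0, 0);
    cache_access(4, 0, 0); cache_access(5, 0, 0);  /* evicts entry 1 and writes 42 back */
    printf("backing_store[1] = %d\n", backing_store[1]);
    return 0;
}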
Small memories on or close to the CPU can operate faster than the much largermain memory.[6]Most CPUs since the 1980s have used one or more caches, sometimesin cascaded levels; modern high-endembedded,desktopand servermicroprocessorsmay have as many as six types of cache (between levels and functions).[7]Some examples of caches with a specific function are theD-cache,I-cacheand thetranslation lookaside bufferfor thememory management unit(MMU). Earliergraphics processing units(GPUs) often had limited read-onlytexture cachesand usedswizzlingto improve 2Dlocality of reference.Cache misseswould drastically affect performance, e.g. ifmipmappingwas not used. Caching was important to leverage 32-bit (and wider) transfers for texture data that was often as little as 4 bits per pixel. As GPUs advanced, supportinggeneral-purpose computing on graphics processing unitsandcompute kernels, they have developed progressively larger and increasingly general caches, includinginstruction cachesforshaders, exhibiting functionality commonly found in CPU caches. These caches have grown to handlesynchronization primitivesbetween threads andatomic operations, and interface with a CPU-style MMU. Digital signal processorshave similarly generalized over the years. Earlier designs usedscratchpad memoryfed bydirect memory access, but modern DSPs such asQualcomm Hexagonoften include a very similar set of caches to a CPU (e.g.Modified Harvard architecturewith shared L2, split L1 I-cache and D-cache).[8] A memory management unit (MMU) that fetches page table entries from main memory has a specialized cache, used for recording the results ofvirtual addresstophysical addresstranslations. This specialized cache is called a translation lookaside buffer (TLB).[9] Information-centric networking(ICN) is an approach to evolve theInternetinfrastructure away from a host-centric paradigm, based on perpetual connectivity and theend-to-end principle, to a network architecture in which the focal point is identified information. Due to the inherent caching capability of the nodes in an ICN, it can be viewed as a loosely connected network of caches, which has unique requirements for caching policies. However, ubiquitous content caching introduces the challenge to content protection against unauthorized access, which requires extra care and solutions.[10] Unlike proxy servers, in ICN the cache is a network-level solution. Therefore, it has rapidly changing cache states and higher request arrival rates; moreover, smaller cache sizes impose different requirements on the content eviction policies. In particular, eviction policies for ICN should be fast and lightweight. Various cache replication and eviction schemes for different ICN architectures and applications have been proposed.[citation needed] The time aware least recently used (TLRU) is a variant of LRU designed for the situation where the stored contents in cache have a valid lifetime. The algorithm is suitable in network cache applications, such as ICN,content delivery networks(CDNs) and distributed networks in general. TLRU introduces a new term: time to use (TTU). TTU is a time stamp on content which stipulates the usability time for the content based on the locality of the content and information from the content publisher. Owing to this locality-based time stamp, TTU provides more control to the local administrator to regulate in-network storage. 
In the TLRU algorithm, when a piece of content arrives, a cache node calculates the local TTU value based on the TTU value assigned by the content publisher. The local TTU value is calculated by using a locally-defined function. Once the local TTU value is calculated the replacement of content is performed on a subset of the total content stored in cache node. The TLRU ensures that less popular and short-lived content should be replaced with incoming content.[11] The least frequent recently used (LFRU) cache replacement scheme combines the benefits of LFU and LRU schemes. LFRU is suitable for network cache applications, such as ICN, CDNs and distributed networks in general. In LFRU, the cache is divided into two partitions called privileged and unprivileged partitions. The privileged partition can be seen as a protected partition. If content is highly popular, it is pushed into the privileged partition. Replacement of the privileged partition is done by first evicting content from the unprivileged partition, then pushing content from the privileged partition to the unprivileged partition, and finally inserting new content into the privileged partition. In the above procedure, the LRU is used for the privileged partition and an approximated LFU (ALFU) scheme is used for the unprivileged partition. The basic idea is to cache the locally popular content with the ALFU scheme and push the popular content to the privileged partition.[12] In 2011, the use of smartphones with weather forecasting options was overly taxingAccuWeatherservers; two requests from the same area would generate separate requests. An optimization by edge-servers to truncate the GPS coordinates to fewer decimal places meant that the cached results from a nearby query would be used. The number of to-the-server lookups per day dropped by half.[13] While CPU caches are generally managed entirely by hardware, a variety of software manages other caches. Thepage cachein main memory is managed by theoperating system kernel. While thedisk buffer, which is an integrated part of the hard disk drive or solid state drive, is sometimes misleadingly referred to asdisk cache, its main functions are write sequencing and read prefetching. High-enddisk controllersoften have their own on-board cache for the hard disk drive's data blocks. Finally, a fast local hard disk drive can also cache information held on even slower data storage devices, such as remote servers (web cache) or localtape drivesoroptical jukeboxes; such a scheme is the main concept ofhierarchical storage management. Also, fast flash-based solid-state drives (SSDs) can be used as caches for slower rotational-media hard disk drives, working together ashybrid drives. Web browsers andweb proxy servers, either locally or at theInternet service provider(ISP), employ web caches to store previous responses from web servers, such asweb pagesandimages. Web caches reduce the amount of information that needs to be transmitted across the network, as information previously stored in the cache can often be re-used. This reduces bandwidth and processing requirements of the web server, and helps to improveresponsivenessfor users of the web.[14] Another form of cache isP2P caching, where the files most sought for bypeer-to-peerapplications are stored in an ISP cache to accelerate P2P transfers. 
Similarly, decentralised equivalents exist, which allow communities to perform the same task for P2P traffic, for example, Corelli.[15] A cache can store data that is computed on demand rather than retrieved from a backing store.Memoizationis anoptimizationtechnique that stores the results of resource-consumingfunction callswithin a lookup table, allowing subsequent calls to reuse the stored results and avoid repeated computation. It is related to thedynamic programmingalgorithm design methodology, which can also be thought of as a means of caching. Acontent delivery network(CDN) is a network of distributed servers that deliver pages and otherweb contentto a user, based on the geographic locations of the user, the origin of the web page and the content delivery server. CDNs were introduced in the late 1990s as a way to speed up the delivery of static content, such as HTML pages, images and videos. By replicating content on multiple servers around the world and delivering it to users based on their location, CDNs can significantly improve the speed and availability of a website or application. When a user requests a piece of content, the CDN will check to see if it has a copy of the content in its cache. If it does, the CDN will deliver the content to the user from the cache.[16] Acloud storage gatewayis ahybrid cloud storagedevice that connects a local network to one or morecloud storage services, typicallyobject storageservices such asAmazon S3. It provides a cache for frequently accessed data, providing high speed local access to frequently accessed data in the cloud storage service. Cloud storage gateways also provide additional benefits such as accessing cloud object storage through traditional file serving protocols as well as continued access to cached data during connectivity outages.[17] The BINDDNSdaemon caches a mapping of domain names toIP addresses, as does a resolver library. Write-through operation is common when operating over unreliable networks (like an Ethernet LAN), because of the enormous complexity of the coherency protocol required between multiple write-back caches when communication is unreliable. For instance, web page caches andclient-sidenetwork file systemcaches (like those inNFSorSMB) are typically read-only or write-through specifically to keep the network protocol simple and reliable. Search enginesalso frequently make web pages they have indexed available from their cache. For example,Googleprovides a "Cached" link next to each search result. This can prove useful when web pages from a web server are temporarily or permanently inaccessible. Database cachingcan substantially improve the throughput ofdatabaseapplications, for example in the processing ofindexes,data dictionaries, and frequently used subsets of data. Adistributed cache[18]uses networked hosts to provide scalability, reliability and performance to the application.[19]The hosts can be co-located or spread over different geographical regions. The semantics of a "buffer" and a "cache" are not totally different; even so, there are fundamental differences in intent between the process of caching and the process of buffering. Fundamentally, caching realizes a performance increase for transfers of data that is being repeatedly transferred. While a caching system may realize a performance increase upon the initial (typically write) transfer of a data item, this performance increase is due to buffering occurring within the caching system. 
With read caches, a data item must have been fetched from its residing location at least once in order for subsequent reads of the data item to realize a performance increase by virtue of being able to be fetched from the cache's (faster) intermediate storage rather than the data's residing location. With write caches, a performance increase of writing a data item may be realized upon the first write of the data item by virtue of the data item immediately being stored in the cache's intermediate storage, deferring the transfer of the data item to its residing storage to a later stage, or else having it occur as a background process. Contrary to strict buffering, a caching process must adhere to a (potentially distributed) cache coherency protocol in order to maintain consistency between the cache's intermediate storage and the location where the data resides. Buffering, on the other hand, imposes no such coherency requirement; a buffer merely stages data while it is in transit between a producer and a consumer. With typical caching implementations, a data item that is read or written for the first time is effectively being buffered; and in the case of a write, mostly realizing a performance increase for the application from where the write originated. Additionally, the portion of a caching protocol where individual writes are deferred to a batch of writes is a form of buffering. The portion of a caching protocol where individual reads are deferred to a batch of reads is also a form of buffering, although this form may negatively impact the performance of at least the initial reads (even though it may positively impact the performance of the sum of the individual reads). In practice, caching almost always involves some form of buffering, while strict buffering does not involve caching. A buffer is a temporary memory location that is traditionally used because CPU instructions cannot directly address data stored in peripheral devices. Thus, addressable memory is used as an intermediate stage. Additionally, such a buffer may be feasible when a large block of data is assembled or disassembled (as required by a storage device), or when data may be delivered in a different order than that in which it is produced. Also, a whole buffer of data is usually transferred sequentially (for example to hard disk), so buffering itself sometimes increases transfer performance or reduces the variation or jitter of the transfer's latency, as opposed to caching, where the intent is to reduce the latency. These benefits are present even if the buffered data are written to the buffer once and read from the buffer once. A cache also increases transfer performance. A part of the increase similarly comes from the possibility that multiple small transfers will combine into one large block. But the main performance gain occurs because there is a good chance that the same data will be read from cache multiple times, or that written data will soon be read. A cache's sole purpose is to reduce accesses to the underlying slower storage. A cache is also usually an abstraction layer that is designed to be invisible from the perspective of neighboring layers.
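The memoization form of caching mentioned earlier in this article can be illustrated with a brief sketch. The Python fragment below keeps a lookup table of previously computed results; the standard library's functools.lru_cache decorator provides the same idea with an LRU-bounded table.

```python
from functools import lru_cache

# Manual memoization: a lookup table keyed by the function's argument.
_fib_memo = {}

def fib(n):
    if n in _fib_memo:                 # cache hit: reuse the stored result
        return _fib_memo[n]
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    _fib_memo[n] = result              # store the computed result for later calls
    return result

# The standard library provides the same idea with an LRU-bounded table.
@lru_cache(maxsize=128)
def fib_cached(n):
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)
```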
https://en.wikipedia.org/wiki/Cache_(computing)
In computing, cache replacement policies (also known as cache replacement algorithms or cache algorithms) are optimizing instructions or algorithms which a computer program or hardware-maintained structure can utilize to manage a cache of information. Caching improves performance by keeping recent or often-used data items in memory locations which are faster, or computationally cheaper to access, than normal memory stores. When the cache is full, the algorithm must choose which items to discard to make room for new data. The average memory reference time is[1] T = m × Tm + Th + E, where m is the miss ratio (1 − hit ratio), Tm is the time needed to access the backing memory on a miss (or, with a multi-level cache, the average reference time of the next-lower cache), Th is the latency of referencing the cache itself, and E accounts for secondary effects, such as queuing in multiprocessor systems. A cache has two primary figures of merit: latency and hit ratio. A number of secondary factors also affect cache performance.[1] The hit ratio of a cache describes how often a searched-for item is found. More efficient replacement policies track more usage information to improve the hit rate for a given cache size. The latency of a cache describes how long after requesting a desired item the cache can return that item when there is a hit. Faster replacement strategies typically track less usage information—or, with a direct-mapped cache, no information—to reduce the time required to update the information. Each replacement strategy is a compromise between hit rate and latency. Hit-rate measurements are typically performed on benchmark applications, and the hit ratio varies by application. Video and audio streaming applications often have a hit ratio near zero, because each bit of data in the stream is read once (a compulsory miss), used, and then never read or written again. Many cache algorithms (particularly LRU) allow streaming data to fill the cache, pushing out information which will soon be used again (cache pollution).[2] Other factors may be size, length of time to obtain, and expiration. Depending on cache size, no further caching algorithm to discard items may be needed. Algorithms also maintain cache coherence when several caches are used for the same data, such as multiple database servers updating a shared data file. The most efficient caching algorithm would be to discard the information which will not be needed for the longest time; this is known as Bélády's optimal algorithm, the optimal replacement policy, or the clairvoyant algorithm. Since it is generally impossible to predict how far in the future information will be needed, this is infeasible in practice. The practical minimum can be calculated after experimentation, and the effectiveness of a chosen cache algorithm can be compared against it. When a page fault occurs, a set of pages is in memory. In the example, the sequence 5, 0, 1 is accessed by Frame 1, Frame 2, and Frame 3 respectively. When 2 is accessed, it replaces value 5 (which is in frame 1), predicting that value 5 will not be accessed in the near future. Because a general-purpose operating system cannot predict when 5 will be accessed, Bélády's algorithm cannot be implemented there. Random replacement selects an item and discards it to make space when necessary. This algorithm does not require keeping any access history. It has been used in ARM processors due to its simplicity,[3] and it allows efficient stochastic simulation.[4] With the first-in-first-out (FIFO) policy, the cache behaves like a FIFO queue; it evicts blocks in the order in which they were added, regardless of how often or how many times they were accessed before. With the last-in-first-out (LIFO) policy, the cache behaves like a stack, the opposite of a FIFO queue: it evicts the block added most recently, regardless of how often or how many times it was accessed before.
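Although Bélády's clairvoyant policy cannot be implemented in a general-purpose system, it can be simulated when the full reference string is known, which is how the practical minimum mentioned above is obtained. The following Python sketch (an illustrative simulation with an arbitrary example reference string) evicts the resident item whose next use lies farthest in the future.

```python
def belady_misses(references, capacity):
    """Simulate Bélády's clairvoyant replacement on a known reference string."""
    cache, misses = set(), 0
    for i, item in enumerate(references):
        if item in cache:
            continue                      # hit: nothing to do
        misses += 1
        if len(cache) >= capacity:
            future = references[i + 1:]
            # Evict the cached item whose next use is farthest away,
            # or that is never used again.
            victim = max(
                cache,
                key=lambda c: future.index(c) if c in future else float("inf"),
            )
            cache.remove(victim)
        cache.add(item)
    return misses

# Example: 3 frames and a short reference string.
print(belady_misses([5, 0, 1, 2, 0, 3, 0, 4], capacity=3))
```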
SIEVEis a simple eviction algorithm designed specifically for web caches, such as key-value caches and Content Delivery Networks. It uses the idea of lazy promotion and quick demotion.[5]Therefore, SIEVE does not update the global data structure at cache hits and delays the update till eviction time; meanwhile, it quickly evicts newly inserted objects because cache workloads tend to show high one-hit-wonder ratios, and most of the new objects are not worthwhile to be kept in the cache. SIEVE uses a single FIFO queue and uses a moving hand to select objects to evict. Objects in the cache have one bit of metadata indicating whether the object has been requested after being admitted into the cache. The eviction hand points to the tail of the queue at the beginning and moves toward the head over time. Compared with the CLOCK eviction algorithm, retained objects in SIEVE stay in the old position. Therefore, new objects are always at the head, and the old objects are always at the tail. As the hand moves toward the head, new objects are quickly evicted (quick demotion), which is the key to the high efficiency in the SIEVE eviction algorithm. SIEVE is simpler than LRU, but achieves lower miss ratios than LRU on par with state-of-the-art eviction algorithms. Moreover, on stationary skewed workloads, SIEVE is better than existing known algorithms including LFU.[6] Discards least recently used items first. This algorithm requires keeping track of what was used and when, which is cumbersome. It requires "age bits" forcache lines, and tracks the least recently used cache line based on these age bits. When a cache line is used, the age of the other cache lines changes. LRU isa family of caching algorithms, that includes 2Q by Theodore Johnson and Dennis Shasha[7]and LRU/K by Pat O'Neil, Betty O'Neil and Gerhard Weikum.[8]The access sequence for the example is A B C D E D F: When A B C D is installed in the blocks with sequence numbers (increment 1 for each new access) and E is accessed, it is amissand must be installed in a block. With the LRU algorithm, E will replace A because A has the lowest rank (A(0)). In the next-to-last step, D is accessed and the sequence number is updated. F is then accessed, replacing B – which had the lowest rank, (B(1)). Time-aware, least-recently-used (TLRU)[9]is a variant of LRU designed for when the contents of a cache have a valid lifetime. The algorithm is suitable for network cache applications such asinformation-centric networking(ICN),content delivery networks(CDNs) and distributed networks in general. TLRU introduces a term: TTU (time to use), a timestamp of content (or a page) which stipulates the usability time for the content based on its locality and the content publisher. TTU provides more control to a local administrator in regulating network storage. When content subject to TLRU arrives, a cachenodecalculates the local TTU based on the TTU assigned by the content publisher. The local TTU value is calculated with a locally-defined function. When the local TTU value is calculated, content replacement is performed on a subset of the total content of the cache node. TLRU ensures that less-popular and short-lived content is replaced with incoming content. Unlike LRU, MRU discards the most-recently-used items first. 
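Before turning to MRU, the LRU bookkeeping described above can be sketched in a few lines of Python; an ordered dictionary stands in for the per-line age bits, with the least recently used entry kept at the front. This is a minimal illustration, not a hardware-accurate implementation.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: ordering of the dict stands in for the age bits."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None                       # miss
        self.entries.move_to_end(key)         # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used entry
        self.entries[key] = value
```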
At the 11thVLDB conference, Chou and DeWitt said: "When a file is being repeatedly scanned in a [looping sequential] reference pattern, MRU is the bestreplacement algorithm."[10]Researchers presenting at the 22nd VLDB conference noted that for random access patterns and repeated scans over largedatasets(also known as cyclic access patterns), MRU cache algorithms have more hits than LRU due to their tendency to retain older data.[11]MRU algorithms are most useful in situations where the older an item is, the more likely it is to be accessed. The access sequence for the example is A B C D E C D B: A B C D are placed in the cache, since there is space available. At the fifth access (E), the block which held D is replaced with E since this block was used most recently. At the next access (to D), C is replaced since it was the block accessed just before D. An SLRU cache is divided into two segments: probationary and protected. Lines in each segment are ordered from most- to least-recently-accessed. Data from misses is added to the cache at the most-recently-accessed end of the probationary segment. Hits are removed from where they reside and added to the most-recently-accessed end of the protected segment; lines in the protected segment have been accessed at least twice. The protected segment is finite; migration of a line from the probationary segment to the protected segment may force the migration of the LRU line in the protected segment to the most-recently-used end of the probationary segment, giving this line another chance to be accessed before being replaced. The size limit of the protected segment is an SLRU parameter which varies according toI/Oworkload patterns. When data must be discarded from the cache, lines are obtained from the LRU end of the probationary segment.[12] LRU may be expensive in caches with higherassociativity. Practical hardware usually employs an approximation to achieve similar performance at a lower hardware cost. ForCPU cacheswith largeassociativity(generally > four ways), the implementation cost of LRU becomes prohibitive. In many CPU caches, an algorithm that almost always discards one of the least recently used items is sufficient; many CPU designers choose a PLRU algorithm, which only needs one bit per cache item to work. PLRU typically has a slightly-worse miss ratio, slightly-betterlatency, uses slightly less power than LRU, and has a loweroverheadthan LRU. Bits work as a binary tree of one-bit pointers which point to a less-recently-used sub-tree. Following the pointer chain to the leaf node identifies the replacement candidate. With an access, all pointers in the chain from the accessed way's leaf node to the root node are set to point to a sub-tree which does not contain the accessed path. The access sequence in the example is A B C D E: When there is access to a value (such as A) and it is not in the cache, it is loaded from memory and placed in the block where the arrows are pointing in the example. After that block is placed, the arrows are flipped to point the opposite way. A, B, C and D are placed; E replaces A as the cache fills because that was where the arrows were pointing, and the arrows which led to A flip to point in the opposite direction (to B, the block which will be replaced on the next cache miss). The LRU algorithm cannot be implemented in the critical path of computer systems, such asoperating systems, due to its high overhead;Clock, an approximation of LRU, is commonly used instead. 
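As a rough illustration of the Clock approximation just mentioned, in its common one-reference-bit "second chance" form (an assumed formulation, since only the name is given above), the following Python sketch sweeps a circular list of slots, clearing reference bits until it finds an unreferenced victim.

```python
class ClockCache:
    """Minimal Clock ("second chance") approximation of LRU."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = []        # list of [key, referenced_bit]
        self.index = {}        # key -> position in slots
        self.hand = 0

    def access(self, key):
        if key in self.index:
            self.slots[self.index[key]][1] = 1    # hit: set the referenced bit
            return "hit"
        if len(self.slots) < self.capacity:
            self.index[key] = len(self.slots)
            self.slots.append([key, 1])
            return "miss (filled empty slot)"
        while self.slots[self.hand][1]:           # referenced entries get a second chance
            self.slots[self.hand][1] = 0
            self.hand = (self.hand + 1) % self.capacity
        victim = self.slots[self.hand][0]         # first unreferenced entry is evicted
        del self.index[victim]
        self.slots[self.hand] = [key, 1]
        self.index[key] = self.hand
        self.hand = (self.hand + 1) % self.capacity
        return "miss (evicted " + str(victim) + ")"
```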
Clock-Pro is an approximation of LIRS for low-cost implementation in systems.[13] Clock-Pro has the basic Clock framework, with three advantages. It has three "clock hands" (unlike Clock's single "hand"), and can approximately measure the reuse distance of data accesses. Like LIRS, it can quickly evict one-time-access or low-locality data items. Clock-Pro is as complex as Clock, and is easy to implement at low cost. The buffer-cache replacement implementation in the 2017 version of Linux combines LRU and Clock-Pro.[14][15] The LFU algorithm counts how often an item is needed; those used less often are discarded first. This is similar to LRU, except that how many times a block was accessed is stored instead of how recently. While running an access sequence, the block which was used the fewest times will be removed from the cache. The least frequent recently used (LFRU)[16] algorithm combines the benefits of LFU and LRU. LFRU is suitable for network cache applications such as ICN, CDNs, and distributed networks in general. In LFRU, the cache is divided into two partitions: privileged and unprivileged. The privileged partition is protected and, if content is popular, it is pushed into the privileged partition. In replacing the privileged partition, LFRU evicts content from the unprivileged partition, pushes content from the privileged to the unprivileged partition, and inserts new content into the privileged partition. LRU is used for the privileged partition and an approximated LFU (ALFU) algorithm for the unprivileged partition. A variant, LFU with dynamic aging (LFUDA), uses dynamic aging to accommodate shifts in the set of popular objects; it adds a cache-age factor to the reference count when a new object is added to the cache or an existing object is re-referenced. LFUDA increments the cache age when evicting blocks by setting it to the evicted object's key value, and the cache age is always less than or equal to the minimum key value in the cache.[17] If an object was frequently accessed in the past and becomes unpopular, it will remain in the cache for a long time (preventing newly- or less-popular objects from replacing it). Dynamic aging reduces the number of such objects, making them eligible for replacement, and LFUDA reduces the cache pollution caused by LFU when a cache is small. RRIP-style policies are the basis for other cache replacement policies, including Hawkeye.[18] RRIP[19] is a flexible policy, proposed by Intel, which attempts to provide good scan resistance while allowing older cache lines that have not been reused to be evicted. All cache lines have a prediction value, the RRPV (re-reference prediction value), that should correlate with when the line is expected to be reused. The RRPV is usually high on insertion; if a line is not reused soon, it will be evicted, to prevent scans (large amounts of data used only once) from filling the cache. When a cache line is reused, the RRPV is set to zero, indicating that the line has been reused once and is likely to be reused again. On a cache miss, the line with an RRPV equal to the maximum possible RRPV is evicted; with 3-bit values, a line with an RRPV of 2³ − 1 = 7 is evicted. If no lines have this value, all RRPVs in the set are increased by 1 until one reaches it. A tie-breaker is needed, and usually it is the first line on the left. The increase is needed to ensure that older lines are aged properly and will be evicted if they are not reused.
SRRIP inserts lines with an RRPV value of maxRRPV; a line which has just been inserted will be the most likely to be evicted on a cache miss. SRRIP performs well normally, but suffers when the working set is much larger than the cache size and causescache thrashing. This is remedied by inserting lines with an RRPV value of maxRRPV most of the time, and inserting lines with an RRPV value of maxRRPV - 1 randomly with a low probability. This causes some lines to "stick" in the cache, and helps prevent thrashing. BRRIP degrades performance, however, on non-thrashing accesses. SRRIP performs best when the working set is smaller than the cache, and BRRIP performs best when the working set is larger than the cache. DRRIP[19]uses set dueling[20]to select whether to use SRRIP or BRRIP. It dedicates a few sets (typically 32) to use SRRIP and another few to use BRRIP, and uses a policy counter which monitors set performance to determine which policy will be used by the rest of the cache. Bélády's algorithm is the optimal cache replacement policy, but it requires knowledge of the future to evict lines that will be reused farthest in the future. A number of replacement policies have been proposed which attempt to predict future reuse distances from past access patterns,[21]allowing them to approximate the optimal replacement policy. Some of the best-performing cache replacement policies attempt to imitate Bélády's algorithm. Hawkeye[18]attempts to emulate Bélády's algorithm by using past accesses by a PC to predict whether the accesses it produces generate cache-friendly (used later) or cache-averse accesses (not used later). It samples a number of non-aligned cache sets, uses a history of length8×the cache size{\displaystyle 8\times {\text{the cache size}}}and emulates Bélády's algorithm on these accesses. This allows the policy to determine which lines should have been cached and which should not, predicting whether an instruction is cache-friendly or cache-averse. This data is then fed into an RRIP; accesses from cache-friendly instructions have a lower RRPV value (likely to be evicted later), and accesses from cache-averse instructions have a higher RRPV value (likely to be evicted sooner). The RRIP backend makes the eviction decisions. The sampled cache andOPTgenerator set the initial RRPV value of the inserted cache lines. Hawkeye won the CRC2 cache championship in 2017,[22]and Harmony[23]is an extension of Hawkeye which improves prefetching performance. Mockingjay[24]tries to improve on Hawkeye in several ways. It drops the binary prediction, allowing it to make more fine-grained decisions about which cache lines to evict, and leaves the decision about which cache line to evict for when more information is available. Mockingjay keeps a sampled cache of unique accesses, the PCs that produced them, and their timestamps. When a line in the sampled cache is accessed again, the time difference will be sent to the reuse distance predictor. The RDP uses temporal difference learning,[25]where the new RDP value will be increased or decreased by a small number to compensate for outliers; the number is calculated asw=min(1,timestamp difference16){\displaystyle w=\min \left(1,{\frac {\text{timestamp difference}}{16}}\right)}. If the value has not been initialized, the observed reuse distance is inserted directly. If the sampled cache is full and a line needs to be discarded, the RDP is instructed that the PC that last accessed it produces streaming accesses. 
On an access or insertion, the estimated time of reuse (ETR) for this line is updated to reflect the predicted reuse distance. On a cache miss, the line with the highest ETR value is evicted. Mockingjay has results which are close to the optimal Bélády's algorithm. A number of policies have attempted to useperceptrons,markov chainsor other types ofmachine learningto predict which line to evict.[26][27]Learning augmented algorithmsalso exist for cache replacement.[28][29] LIRS is a page replacement algorithm with better performance than LRU and other, newer replacement algorithms. Reuse distance is a metric for dynamically ranking accessed pages to make a replacement decision.[30]LIRS addresses the limits of LRU by using recency to evaluate inter-reference recency (IRR) to make a replacement decision. In the diagram, X indicates that a block is accessed at a particular time. If block A1 is accessed at time 1, its recency will be 0; this is the first-accessed block and the IRR will be 1, since it predicts that A1 will be accessed again in time 3. In time 2, since A4 is accessed, the recency will become 0 for A4 and 1 for A1; A4 is the most recently accessed object, and the IRR will become 4. At time 10, the LIRS algorithm will have two sets: an LIR set = {A1, A2} and an HIR set = {A3, A4, A5}. At time 10, if there is access to A4 a miss occurs; LIRS will evict A5 instead of A2 because of its greater recency. Adaptive replacement cache(ARC) constantly balances between LRU and LFU to improve the combined result.[31]It improves SLRU by using information about recently-evicted cache items to adjust the size of the protected and probationary segments to make the best use of available cache space.[32] Clock with adaptive replacement(CAR) combines the advantages of ARC andClock. CAR performs comparably to ARC, and outperforms LRU and Clock. Like ARC, CAR is self-tuning and requires no user-specified parameters. The multi-queue replacement (MQ) algorithm was developed to improve the performance of a second-level buffer cache, such as a server buffer cache, and was introduced in a paper by Zhou, Philbin, and Li.[33]The MQ cache contains anmnumber of LRU queues: Q0, Q1, ..., Qm-1. The value ofmrepresents a hierarchy based on the lifetime of all blocks in that queue.[34] Pannier[35]is a container-based flash caching mechanism which identifies containers whose blocks have variable access patterns. Pannier has a priority-queue-based survival-queue structure to rank containers based on their survival time, which is proportional to live data in the container. Static analysisdetermines which accesses are cache hits or misses to indicate theworst-case execution timeof a program.[36]An approach to analyzing properties of LRU caches is to give each block in the cache an "age" (0 for the most recently used) and compute intervals for possible ages.[37]This analysis can be refined to distinguish cases where the same program point is accessible by paths that result in misses or hits.[38]An efficient analysis may be obtained by abstracting sets of cache states byantichainswhich are represented by compactbinary decision diagrams.[39] LRU static analysis does not extend to pseudo-LRU policies. According tocomputational complexity theory, static-analysis problems posed by pseudo-LRU and FIFO are in highercomplexity classesthan those for LRU.[40][41]
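The RRIP insertion and eviction rules described earlier can be condensed into a short sketch. The following Python fragment models a single cache set under static RRIP, following the description above (insertion at the maximum RRPV, reset to zero on reuse, aging when no eviction candidate exists); the 2-bit RRPV width is an arbitrary choice for brevity, and the code is illustrative rather than hardware-accurate.

```python
MAX_RRPV = 3   # 2-bit re-reference prediction values (assumed width)

class SRRIPSet:
    """One cache set under static RRIP, per the textual description above."""

    def __init__(self, ways):
        self.ways = ways
        self.lines = {}                  # tag -> RRPV

    def access(self, tag):
        if tag in self.lines:
            self.lines[tag] = 0          # reused: predict a near re-reference
            return "hit"
        if len(self.lines) >= self.ways:
            # Age every line until some RRPV reaches the maximum, then evict
            # the first such line (the tie-breaker mentioned above).
            while not any(v == MAX_RRPV for v in self.lines.values()):
                for t in self.lines:
                    self.lines[t] += 1
            victim = next(t for t, v in self.lines.items() if v == MAX_RRPV)
            del self.lines[victim]
        self.lines[tag] = MAX_RRPV       # insert with a distant predicted reuse
        return "miss"
```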
https://en.wikipedia.org/wiki/Cache_replacement_policies
Heterogeneous System Architecture (HSA) is a cross-vendor set of specifications that allow for the integration of central processing units and graphics processors on the same bus, with shared memory and tasks.[1] The HSA is being developed by the HSA Foundation, which includes (among many others) AMD and ARM. The platform's stated aim is to reduce communication latency between CPUs, GPUs and other compute devices, and to make these various devices more compatible from a programmer's perspective,[2]: 3 [3] relieving the programmer of the task of planning the moving of data between devices' disjoint memories (as must currently be done with OpenCL or CUDA).[4] CUDA and OpenCL, as well as most other fairly advanced programming languages, can use HSA to increase their execution performance.[5] Heterogeneous computing is widely used in system-on-chip devices such as tablets, smartphones, other mobile devices, and video game consoles.[6] HSA allows programs to use the graphics processor for floating point calculations without separate memory or scheduling.[7] The rationale behind HSA is to ease the burden on programmers when offloading calculations to the GPU. Originally driven solely by AMD and called the FSA, the idea was extended to encompass processing units other than GPUs, such as other manufacturers' DSPs, as well. Modern GPUs are very well suited to perform single instruction, multiple data (SIMD) and single instruction, multiple threads (SIMT) workloads, while modern CPUs are still being optimized for branching. Originally introduced by embedded systems such as the Cell Broadband Engine, sharing system memory directly between multiple system actors makes heterogeneous computing more mainstream. Heterogeneous computing itself refers to systems that contain multiple processing units – central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), or any type of application-specific integrated circuits (ASICs). The system architecture allows any accelerator, for instance a graphics processor, to operate at the same processing level as the system's CPU. Among its main features, HSA defines a unified virtual address space for compute devices: where GPUs traditionally have their own memory, separate from the main (CPU) memory, HSA requires these devices to share page tables so that devices can exchange data by sharing pointers. This is to be supported by custom memory management units.[2]: 6–7 To render interoperability possible, and also to ease various aspects of programming, HSA is intended to be ISA-agnostic for both CPUs and accelerators, and to support high-level programming languages. So far, the HSA specifications cover: HSAIL (Heterogeneous System Architecture Intermediate Language), a virtual instruction set for parallel programs. Mobile devices are one of the HSA's application areas, in which it yields improved power efficiency.[6] The illustrations below compare CPU-GPU coordination under HSA versus under traditional architectures. Some of the HSA-specific features implemented in the hardware need to be supported by the operating system kernel and specific device drivers.
For example, support for AMDRadeonandAMD FirePrographics cards, andAPUsbased onGraphics Core Next(GCN), was merged into version 3.19 of theLinux kernel mainline, released on 8 February 2015.[10]Programs do not interact directly withamdkfd[further explanation needed], but queue their jobs utilizing the HSA runtime.[11]This very first implementation, known asamdkfd, focuses on"Kaveri"or "Berlin" APUs and works alongside the existing Radeon kernel graphics driver. Additionally,amdkfdsupportsheterogeneous queuing(HQ), which aims to simplify the distribution of computational jobs among multiple CPUs and GPUs from the programmer's perspective. Support forheterogeneous memory management(HMM), suited only for graphics hardware featuring version 2 of the AMD'sIOMMU, was accepted into the Linux kernel mainline version 4.14.[12] Integrated support for HSA platforms has been announced for the "Sumatra" release ofOpenJDK, due in 2015.[13] AMD APP SDKis AMD's proprietary software development kit targetingparallel computing, available for Microsoft Windows and Linux. Bolt is a C++ template library optimized for heterogeneous computing.[14] GPUOpencomprehends a couple of other software tools related to HSA.CodeXLversion 2.0 includes an HSA profiler.[15] As of February 2015[update], only AMD's "Kaveri" A-series APUs (cf."Kaveri" desktop processorsand"Kaveri" mobile processors) and Sony'sPlayStation 4allowed theintegrated GPUto access memory via version 2 of the AMD's IOMMU. Earlier APUs (Trinity and Richland) included the version 2 IOMMU functionality, but only for use by an external GPU connected via PCI Express.[citation needed] Post-2015 Carrizo and Bristol Ridge APUs also include the version 2 IOMMU functionality for the integrated GPU.[citation needed] The following table shows features ofAMD's processors with 3D graphics, includingAPUs(see also:List of AMD processors with 3D graphics). ARM'sBifrostmicroarchitecture, as implemented in the Mali-G71,[30]is fully compliant with the HSA 1.1 hardware specifications. As of June 2016[update], ARM has not announced software support that would use this hardware feature.
https://en.wikipedia.org/wiki/Heterogeneous_System_Architecture
Endpoint securityorendpoint protectionis an approach to the protection ofcomputer networksthat are remotely bridged to client devices. The connection of endpoint devices such aslaptops,tablets,mobile phones, and otherwirelessdevices to corporate networks creates attack paths for security threats.[1]Endpoint security attempts to ensure that such devices followcompliancetostandards.[2] The endpoint security space has evolved since the 2010s away from limitedantivirus softwareand into more advanced, comprehensive defenses. This includes next-generationantivirus, threat detection, investigation, and response,device management,data loss prevention(DLP),patch management, and other considerations to face evolvingthreats. Endpoint security management is a software approach that helps to identify and manage the users' computer and data access over a corporate network.[3]This allows the network administrator to restrict the use of sensitive data as well as certain website access to specific users, to maintain, and comply with the organization's policies and standards. The components involved in aligning the endpoint security management systems include avirtual private network(VPN) client, anoperating systemand an updated endpoint agent.[4]Computer devices that are not in compliance with the organization's policy are provisioned with limited access to avirtual LAN.[5]Encryptingdata on endpoints, and removable storage devices help to protect against data leaks.[6] Endpoint security systems operate on aclient-server model. The main software for threat analysis and decision making is on a centrallymanaged host server. Each endpoint has client programs to collect data and interact with the server.[7][8]There is another model calledsoftware as a service(SaaS), where the security programs and the host server are maintained remotely by the merchant. In thepayment cardindustry, the contribution from both the delivery models is that the server program verifies and authenticates the user login credentials and performs a device scan to check if it complies with designatedcorporate securitystandards prior to permitting network access.[9] In addition to protecting an organization's endpoints from potential threats, endpoint security allows IT admins to monitor operation functions and data backup strategies.[10] Endpoint security is a constantly evolving field, primarily because adversaries never cease innovating their strategies. A foundational step in fortifying defenses is to grasp the myriad pathways adversaries exploit to compromise endpoint devices. Here are a few of the most used methods: The protection of endpoint devices has become more crucial than ever. Understanding the different components that contribute to endpoint protection is essential for developing a robust defense strategy. 
Here are the key elements integral to securing endpoints: An endpoint protection platform (EPP) is a solution deployed on endpoint devices to prevent file-basedmalwareattacks, detect malicious activity, and provide the investigation and remediation capabilities needed to respond to dynamic security incidents and alerts.[14]Several vendors produce systems converging EPP systems withendpoint detection and response(EDR) platforms – systems focused on threat detection, response, and unified monitoring.[15]Tools like Endpoint Detection and Response (EDR) help monitor and respond to potential threats in real-time, providing valuable defense mechanisms against advanced attacks.[16]Additionally,Virtual Private Networks (VPNs)play a critical role in encrypting internet traffic, particularly for users connecting over unsecured networks such as public Wi-Fi hotspots.[17][18]Multi-factor Authentication (MFA)enhances these platforms by adding an extra layer of verification, ensuring that only authorized users can access sensitive systems.[19][20]
https://en.wikipedia.org/wiki/Endpoint_security
Modular programmingis asoftware designtechnique that emphasizes separating the functionality of aprograminto independent, interchangeablemodules, such that each contains everything necessary to execute only one aspect or"concern"of the desired functionality. A moduleinterfaceexpresses the elements that are provided and required by the module. The elements defined in the interface are detectable by other modules. Theimplementationcontains the working code that corresponds to the elements declared in the interface. Modular programming is closely related tostructured programmingandobject-oriented programming, all having the same goal of facilitating construction of large software programs and systems bydecompositioninto smaller pieces, and all originating around the 1960s. While the historical usage of these terms has been inconsistent, "modular programming" now refers to the high-level decomposition of the code of an entire program into pieces: structured programming to the low-level code use of structuredcontrol flow, and object-oriented programming to thedatause ofobjects, a kind ofdata structure. In object-oriented programming, the use of interfaces as an architectural pattern to construct modules is known asinterface-based programming.[citation needed] Modular programming, in the form of subsystems (particularly for I/O) and software libraries, dates to early software systems, where it was used forcode reuse. Modular programming per se, with a goal of modularity, developed in the late 1960s and 1970s, as a larger-scale analog of the concept ofstructured programming(1960s). The term "modular programming" dates at least to the National Symposium on Modular Programming, organized at the Information and Systems Institute in July 1968 byLarry Constantine; other key concepts wereinformation hiding(1972) andseparation of concerns(SoC, 1974). Modules were not included in the original specification forALGOL 68(1968), but were included as extensions in early implementations,ALGOL 68-R(1970) andALGOL 68C(1970), and later formalized.[1]One of the first languages designed from the start for modular programming was the short-livedModula(1975), byNiklaus Wirth. Another early modular language wasMesa(1970s), byXerox PARC, and Wirth drew on Mesa as well as the original Modula in its successor,Modula-2(1978), which influenced later languages, particularly through its successor,Modula-3(1980s). Modula's use of dot-qualified names, likeM.ato refer to objectafrom moduleM, coincides with notation to access a field of a record (and similarly for attributes or methods of objects), and is now widespread, seen inC++,C#,Dart,Go,Java,OCaml, andPython, among others. Modular programming became widespread from the 1980s: the originalPascallanguage (1970) did not include modules, but later versions, notablyUCSD Pascal(1978) andTurbo Pascal(1983) included them in the form of "units", as did the Pascal-influencedAda(1980). The Extended Pascal ISO 10206:1990 standard kept closer to Modula2 in its modular support.Standard ML(1984)[2]has one of the most complete module systems, includingfunctors(parameterized modules) to map between modules. In the 1980s and 1990s, modular programming was overshadowed by and often conflated withobject-oriented programming, particularly due to the popularity of C++ and Java. For example, the C family of languages had support for objects and classes in C++ (originallyC with Classes, 1980) and Objective-C (1983), only supporting modules 30 years or more later. 
Java (1995) supports modules in the form of packages, though the primary unit of code organization is a class. However, Python (1991) prominently used both modules and objects from the start, using modules as the primary unit of code organization and "packages" as a larger-scale unit; and Perl 5 (1994) includes support for both modules and objects, with a vast array of modules being available from CPAN (1993). OCaml (1996) followed ML by supporting modules and functors. Modular programming is now widespread, and found in virtually all major languages developed since the 1990s. The relative importance of modules varies between languages, and in class-based object-oriented languages there is still overlap and confusion with classes as a unit of organization and encapsulation, but both are well established as distinct concepts. The term assembly (as in .NET languages like C#, F# or Visual Basic .NET) or package (as in Dart, Go or Java) is sometimes used instead of module. In other implementations, these are distinct concepts; in Python a package is a collection of modules, while Java 9 introduced a new module concept (a collection of packages with enhanced access control). Furthermore, the term "package" has other uses in software (for example .NET NuGet packages). A component is a similar concept, but typically refers to a higher level; a component is a piece of a whole system, while a module is a piece of an individual program. The scale of the term "module" varies significantly between languages; in Python it is very small-scale and each file is a module, while in Java 9 it is large-scale, with a module being a collection of packages, which are in turn collections of files. Other terms for modules include unit, used in Pascal dialects. Languages that formally support the module concept include Ada, ALGOL, BlitzMax, C++, C#, Clojure, COBOL, Common Lisp, D, Dart, eC, Erlang, Elixir, Elm, F, F#, Fortran, Go, Haskell, IBM/360 Assembler, Control Language (CL), IBM RPG, Java, Julia, MATLAB, ML, Modula, Modula-2, Modula-3, Morpho, NEWP, Oberon, Oberon-2, Objective-C, OCaml, several Pascal derivatives (Component Pascal, Object Pascal, Turbo Pascal, UCSD Pascal), Perl, PHP, PL/I, PureBasic, Python, R, Ruby,[3] Rust, JavaScript,[4] Visual Basic (.NET) and WebDNA. In the Java programming language, the term "package" is used for the analog of modules in the JLS;[5] see Java package. "Modules", a kind of collection of packages, were introduced in Java 9 as part of Project Jigsaw; they were earlier called "superpackages" and were originally planned for Java 7. Conspicuous examples of languages that lack support for modules are C and, historically, C++ and Pascal in their original form. C and C++ do, however, allow separate compilation and declarative interfaces to be specified using header files. Modules were added to Objective-C in iOS 7 (2013) and to C++ with C++20,[6] and Pascal was superseded by Modula and Oberon, which included modules from the start, and by various derivatives that included modules. JavaScript has had native modules since ECMAScript 2015. C++ modules allow backwards compatibility with headers (with "header units"). Dialects of C allow for modules; for example, Clang supports modules for the C language,[7] though the syntax and semantics of Clang C modules differ from C++ modules significantly. Modular programming can be performed even where the programming language lacks explicit syntactic features to support named modules, like, for example, in C.
This is done by using existing language features, together with, for example,coding conventions,programming idiomsand the physical code structure.IBM ialso uses modules when programming in theIntegrated Language Environment(ILE). With modular programming,concerns are separatedsuch that modules perform logically discrete functions, interacting through well-defined interfaces. Often modules form adirected acyclic graph(DAG); in this case a cyclic dependency between modules is seen as indicating that these should be a single module. In the case where modules do form a DAG they can be arranged as a hierarchy, where the lowest-level modules are independent, depending on no other modules, and higher-level modules depend on lower-level ones. A particular program or library is a top-level module of its own hierarchy, but can in turn be seen as a lower-level module of a higher-level program, library, or system. When creating a modular system, instead of creating a monolithic application (where the smallest component is the whole), several smaller modules are written separately so when they are composed together, they construct the executable application program. Typically, these are alsocompiledseparately, viaseparate compilation, and then linked by alinker. Ajust-in-time compilermay perform some of this construction "on-the-fly" atrun time. These independent functions are commonly classified as either program control functions or specific task functions. Program control functions are designed to work for one program. Specific task functions are closely prepared to be applicable for various programs. This makes modular designed systems, if built correctly, far more reusable than a traditional monolithic design, since all (or many) of these modules may then be reused (without change) in other projects. This also facilitates the "breaking down" of projects into several smaller projects. Theoretically, a modularized software project will be more easily assembled by large teams, since no team members are creating the whole system, or even need to know about the system as a whole. They can focus just on the assigned smaller task.
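As a concrete, hypothetical illustration of these ideas in Python, where each file is a module: the fragment below defines a small module whose public interface is declared with __all__, keeping its helper function an implementation detail. All names here are invented for the example.

```python
# geometry.py: a hypothetical module. __all__ declares the module's interface;
# the underscore-prefixed helper is an implementation detail that client code
# is not meant to rely on.
import math

__all__ = ["circle_area"]

def _validate(radius):
    if radius < 0:
        raise ValueError("radius must be non-negative")

def circle_area(radius):
    _validate(radius)
    return math.pi * radius ** 2
```

A separate client module would then contain only "from geometry import circle_area", depending on the module's interface rather than on its implementation.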
https://en.wikipedia.org/wiki/Modular_programming
Insoftware architecture,publish–subscribeorpub/subis amessaging patternwhere publishers categorizemessagesinto classes that are received by subscribers. This is contrasted to the typical messaging pattern model where publishers send messages directly to subscribers. Similarly, subscribers express interest in one or more classes and only receive messages that are of interest, without knowledge of which publishers, if any, there are. Publish–subscribe is a sibling of themessage queueparadigm, and is typically one part of a largermessage-oriented middlewaresystem. Most messaging systems support both the pub/sub and message queue models in theirAPI; e.g.,Java Message Service(JMS). This pattern provides greater networkscalabilityand a more dynamicnetwork topology, with a resulting decreased flexibility to modify the publisher and the structure of the published data. According to Gregor Hohpe, compared with synchronous messaging patterns (such asRPC) andpoint-to-point messagingpatterns, publish–subscribe provides the highest level ofdecouplingamong architectural components, however it can alsocouplethem in some other ways (such as format and semantic coupling) so they become messy over time.[1] In the publish–subscribe model, subscribers typically receive only a subset of the total messages published. The process of selecting messages for reception and processing is calledfiltering. There are two common forms of filtering: topic-based and content-based. In atopic-basedsystem, messages are published to "topics" or named logical channels. Subscribers in a topic-based system will receive all messages published to the topics to which they subscribe. The publisher is responsible for defining the topics to which subscribers can subscribe. In acontent-basedsystem, messages are only delivered to a subscriber if the attributes or content of those messages matches constraints defined by the subscriber. The subscriber is responsible for classifying the messages. Some systems support ahybridof the two; publishers post messages to a topic while subscribers register content-based subscriptions to one or more topics. In many publish–subscribe systems, publishers post messages to an intermediarymessage broker or event bus, and subscribers register subscriptions with that broker, letting the broker perform the filtering. The broker normally performs astore and forwardfunction to route messages from publishers to subscribers. In addition, the broker may prioritize messages in aqueuebefore routing.[citation needed] Subscribers may register for specific messages at build time, initialization time or runtime. In GUI systems, subscribers can be coded to handle user commands (e.g., click of a button), which corresponds to build time registration. Some frameworks and software products useXMLconfiguration files to register subscribers. These configuration files are read at initialization time. The most sophisticated alternative is when subscribers can be added or removed at runtime. This latter approach is used, for example, indatabase triggers,mailing lists, andRSS.[citation needed] TheData Distribution Service(DDS) middleware does not use a broker in the middle. Instead, each publisher and subscriber in the pub/sub system shares meta-data about each other viaIP multicast. The publisher and the subscribers cache this information locally and route messages based on the discovery of each other in the shared cognizance. 
In effect, brokerless architectures require publish/subscribe system to construct an overlay network which allows efficient decentralized routing from publishers to subscribers. It was shown byJon Kleinbergthat efficient decentralised routing requiresNavigable Small-World topologies. Such Small-World topologies are usually implemented by decentralized or federated publish/subscribe systems.[2]Locality-aware publish/subscribe systems[3]construct Small-World topologies that route subscriptions through short-distance and low-cost links thereby reducing subscription delivery times. One of the earliest publicly described pub/sub systems was the "news" subsystem of the Isis Toolkit, described at the 1987Association for Computing Machinery(ACM) Symposium on Operating Systems Principles conference (SOSP '87), in a paper "ExploitingVirtual SynchronyinDistributed Systems. 123–138."[4] Publishers areloosely coupledto subscribers, and need not even know of their existence. With the topic being the focus, publishers and subscribers are allowed to remain ignorant of system topology. Each can continue to operate as per normal independently of the other. In the traditional tightly coupledclient–server paradigm, the client cannot post messages to the server while the server process is not running, nor can the server receive messages unless the client is running. Many pub/sub systems decouple not only the locations of the publishers and subscribers but also decouple them temporally. A common strategy used bymiddleware analystswith such pub/sub systems is to take down a publisher to allow the subscriber to work through the backlog (a form ofbandwidth throttling). Pub/sub provides the opportunity for betterscalabilitythan traditional client-server, through parallel operation, message caching, tree-based or network-based routing, etc. However, in certain types of tightly coupled, high-volume enterprise environments, as systems scale up to become data centers with thousands of servers sharing the pub/sub infrastructure, current vendor systems often lose this benefit; scalability for pub/sub products under high load in these contexts is a research challenge. Outside of the enterprise environment, on the other hand, the pub/sub paradigm has proven its scalability to volumes far beyond those of a single data center, providing Internet-wide distributed messaging through web syndication protocols such asRSSandAtom. These syndication protocols accept higher latency and lack of delivery guarantees in exchange for the ability for even a low-end web server to syndicate messages to (potentially) millions of separate subscriber nodes. The most serious problems with pub/sub systems are a side-effect of their main advantage: the decoupling of publisher from subscriber. A pub/sub system must be designed carefully to be able to provide stronger system properties that a particular application might require, such as assured delivery. The pub/sub pattern scales well for small networks with a small number of publisher and subscriber nodes and low message volume. However, as the number of nodes and messages grows, the likelihood of instabilities increases, limiting the maximum scalability of a pub/sub network. Example throughput instabilities at large scales include: For pub/sub systems that use brokers (servers), the argument for a broker to send messages to a subscriber isin-band, and can be subject to security problems. 
Brokers might be fooled into sending notifications to the wrong client, amplifying denial of service requests against the client. Brokers themselves could be overloaded as they allocate resources to track created subscriptions. Even with systems that do not rely on brokers, a subscriber might be able to receive data that it is not authorized to receive. An unauthorized publisher may be able to introduce incorrect or damaging messages into the pub/sub system. This is especially true with systems thatbroadcastormulticasttheir messages.Encryption(e.g.Transport Layer Security(SSL/TLS)) can prevent unauthorized access, but cannot prevent damaging messages from being introduced by authorized publishers. Architectures other than pub/sub, such as client/server systems, are also vulnerable to authorized message senders that behave maliciously.
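As a minimal illustration of the topic-based filtering and broker-mediated delivery described earlier in this article, the following Python sketch implements a toy in-process broker; the class and method names are invented for the example and do not correspond to any particular messaging product.

```python
from collections import defaultdict, deque

class Broker:
    """Toy topic-based publish-subscribe broker (illustrative only)."""

    def __init__(self):
        self.subscribers = defaultdict(list)    # topic -> list of callbacks
        self.backlog = defaultdict(deque)       # topic -> undelivered messages

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)
        while self.backlog[topic]:              # deliver anything stored earlier
            callback(self.backlog[topic].popleft())

    def publish(self, topic, message):
        if not self.subscribers[topic]:
            self.backlog[topic].append(message) # store-and-forward stand-in
            return
        for callback in self.subscribers[topic]:
            callback(message)                   # publisher never learns who receives it

broker = Broker()
broker.subscribe("weather/paris", lambda m: print("received:", m))
broker.publish("weather/paris", {"temp_c": 21})
```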
https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern
Pull coding or client pull is a style of network communication where the initial request for data originates from the client, and is then responded to by the server. The reverse is known as push technology, where the server pushes data to clients. Pull requests form the foundation of network computing, where many clients request data from centralized servers. Pull is used extensively on the Internet for HTTP page requests from websites. A push can also be simulated using multiple pulls within a short amount of time. For example, when pulling POP3 email messages from a server, a client can make regular pull requests every few minutes. To the user, the email then appears to be pushed, as emails appear to arrive close to real time. A trade-off of this system is that it places a heavier load on both the server and the network to function correctly. Many web feeds, such as RSS, are technically pulled by the client. With RSS, the user's RSS reader polls the server periodically for new content; the server does not send information to the client unrequested. This continual polling is inefficient and has contributed to the shutdown or reduction of several popular RSS feeds that could not handle the bandwidth.[1][2] To solve this problem, the WebSub protocol, an example of push technology, was devised. Podcasting is specifically a pull technology. When a new podcast episode is published to an RSS feed, it sits on the server until it is requested by a feed reader, mobile podcasting app, or directory. Directories such as Apple Podcasts (iTunes), the Blubrry Directory, and many apps' directories request the RSS feed periodically to update the podcast's listing on those platforms. Subscribers to those RSS feeds via an app or reader will get the episodes the next time they request the RSS feed, independent of when the directory listing updates.
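The polling approach described above, in which repeated pulls make content appear to be pushed, can be sketched briefly; the feed URL and interval below are placeholders chosen for illustration.

```python
import time
import urllib.request

FEED_URL = "https://example.com/feed.xml"   # placeholder feed
POLL_INTERVAL_SECONDS = 300                 # "every few minutes"

def poll_forever():
    """Poll a feed at a fixed interval and react only when its content changes."""
    last_seen = None
    while True:
        with urllib.request.urlopen(FEED_URL) as response:
            body = response.read()
        if body != last_seen:               # new content appears to have been "pushed"
            last_seen = body
            print("new content retrieved at", time.strftime("%H:%M:%S"))
        time.sleep(POLL_INTERVAL_SECONDS)
```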
https://en.wikipedia.org/wiki/Pull_technology
Push technology,also known asserver Push,refers to a communication method, where the communication is initiated by aserverrather than a client. This approach is different from the "pull" method where the communication is initiated by a client.[1] In push technology, clients can express their preferences for certain types of information or data, typically through a process known as thepublish–subscribemodel. In this model, a client "subscribes" to specific information channels hosted by a server. When new content becomes available on these channels, the server automatically sends, or "pushes," this information to the subscribed client. Under certain conditions, such as restrictive security policies that block incomingHTTPrequests, push technology is sometimes simulated using a technique calledpolling.In these cases, the client periodically checks with the server to see if new information is available, rather than receiving automatic updates. Synchronous conferencingandinstant messagingare examples of push services. Chat messages and sometimesfilesare pushed to the user as soon as they are received by the messaging service. Both decentralizedpeer-to-peerprograms (such asWASTE) and centralized programs (such asIRCorXMPP) allow pushing files, which means the sender initiates the data transfer rather than the recipient. Emailmay also be a push system:SMTPis a push protocol (seePush e-mail). However, the last step—from mail server to desktop computer—typically uses a pull protocol likePOP3orIMAP. Modern e-mail clients make this step seem instantaneous by repeatedlypollingthe mail server, frequently checking it for new mail. The IMAP protocol includes theIDLEcommand, which allows the server to tell the client when new messages arrive. The originalBlackBerrywas the first popular example of push-email in a wireless context.[citation needed] Another example is thePointCast Network, which was widely covered in the 1990s. It delivered news and stock market data as a screensaver. BothNetscapeandMicrosoftintegrated push technology through theChannel Definition Format(CDF) into their software at the height of thebrowser wars, but it was never very popular. CDF faded away and was removed from the browsers of the time, replaced in the 2000s withRSS(a pull system.) Other uses of push-enabledweb applicationsinclude software updates distribution ("push updates"), market data distribution (stock tickers), online chat/messaging systems (webchat), auctions, online betting and gaming, sport results, monitoring consoles, andsensor networkmonitoring. The Web push proposal of theInternet Engineering Task Forceis a simple protocol usingHTTP version 2to deliver real-time events, such as incoming calls or messages, which can be delivered (or "pushed") in a timely fashion. The protocol consolidates allreal-timeevents into a single session which ensures more efficient use of network and radio resources. A single service consolidates all events, distributing those events to applications as they arrive. This requires just one session, avoiding duplicated overhead costs.[2] Web Notifications are part of theW3Cstandard and define anAPIfor end-user notifications. 
A notification allows alerting the user of an event, such as the delivery of an email, outside the context of a web page.[3]As part of this standard, Push API is fully implemented inChrome,Firefox, andEdge, and partially implemented inSafarias of February 2023[update].[4][5] HTTP server push (also known as HTTP streaming) is a mechanism for sending unsolicited (asynchronous) data from aweb serverto aweb browser. HTTP server push can be achieved through any of several mechanisms. As a part ofHTML5theWeb SocketAPI allows a web server and client to communicate over afull-duplexTCP connection. Generally, the web server does not terminate a connection after response data has been served to a client. The web server leaves the connection open so that if an event occurs (for example, a change in internal data which needs to be reported to one or multiple clients), it can be sent out immediately; otherwise, the event would have to be queued until the client's next request is received. Most web servers offer this functionality viaCGI(e.g., Non-Parsed Headers scripts onApache HTTP Server). The underlying mechanism for this approach ischunked transfer encoding. Another mechanism is related to a specialMIMEtype calledmultipart/x-mixed-replace, which was introduced byNetscapein 1995. Web browsers interpret this as a document that changes whenever the server pushes a new version to the client.[6]It is still supported byFirefox,Opera, andSafaritoday, but it is ignored byInternet Explorer[7]and is only partially supported byChrome.[8]It can be applied toHTMLdocuments, and also for streaming images inwebcamapplications. TheWHATWGWeb Applications 1.0 proposal[9]includes a mechanism to push content to the client. On September 1, 2006, the Opera web browser implemented this new experimental system in a feature called "Server-Sent Events".[10][11]It is now part of theHTML5standard.[12] In this technique, the server takes advantage ofpersistent HTTP connections, leaving the response perpetually "open" (i.e., the server never terminates the response), effectively fooling the browser to remain in "loading" mode after the initial page load could be considered complete. The server then periodically sends snippets ofJavaScriptto update the content of the page, thereby achieving push capability. By using this technique, the client doesn't needJava appletsor other plug-ins in order to keep an open connection to the server; the client is automatically notified about new events, pushed by the server.[13][14]One serious drawback to this method, however, is the lack of control the server has over the browser timing out; a page refresh is always necessary if a timeout occurs on the browser end. Long polling is itself not a true push; long polling is a variation of the traditional polling technique, but it allows emulating a push mechanism under circumstances where a real push is not possible, such as sites with security policies that require rejection of incoming HTTP requests. With long polling, the client requests to get more information from the server exactly as in normal polling, but with the expectation that the server may not respond immediately. If the server has no new information for the client when the poll is received, then instead of sending an empty response, the server holds the request open and waits for response information to become available. Once it does have new information, the server immediately sends an HTTP response to the client, completing the open HTTP request. 
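A minimal sketch of that server-side behaviour, using only the Python standard library (the endpoint semantics and timeout are invented for illustration): each GET request is held until an event becomes available or a hold timer expires, and an empty body tells the client to simply poll again.

```python
import queue
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

events = queue.Queue()  # filled by the rest of the application when something happens

class LongPollHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            # Hold the request open until an event arrives -- the "long" in long polling.
            body = events.get(timeout=30).encode()
        except queue.Empty:
            body = b""  # nothing happened within the hold time; the client will re-poll
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Simulate an event arriving five seconds after startup.
threading.Timer(5.0, lambda: events.put("new message")).start()

ThreadingHTTPServer(("localhost", 8080), LongPollHandler).serve_forever()
```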
Upon receipt of the server response, the client often immediately issues another server request. In this way the usual response latency (the time between when the information first becomes available and the next client request) otherwise associated with polling clients is eliminated.[15] For example,BOSHis a popular, long-lived HTTP technique used as a long-polling alternative to a continuous TCP connection when such a connection is difficult or impossible to employ directly (e.g., in a web browser);[16]it is also an underlying technology in XMPP, which Apple uses for its iCloud push support. Another push technique, used bychatapplications, makes use of theXML Socketobject in a single-pixelAdobe Flashmovie. Under the control ofJavaScript, the client establishes aTCP connectionto aunidirectionalrelay on the server. The relay server does not read anything from thissocket; instead, it immediately sends the client aunique identifier. Next, the client makes anHTTP requestto the web server, including this identifier with it. The web application can then push messages addressed to the client to a local interface of the relay server, which relays them over the Flash socket. The advantage of this approach is that it appreciates the natural read-write asymmetry that is typical of many web applications, including chat, and as a consequence it offers high efficiency. Since it does not accept data on outgoing sockets, the relay server does not need to poll outgoing TCP connectionsat all, making it possible to hold open tens of thousands of concurrent connections. In this model, the limit to scale is the TCP stack of the underlying server operating system. In services such ascloud computing, to increase reliability and availability of data, it is usually pushed (replicated) to several machines. For example, the Hadoop Distributed File System (HDFS) makes two extra copies of any object stored. Reliable group data delivery (RGDD) focuses on efficiently casting an object from one location to many while saving bandwidth by sending a minimal number of copies (only one in the best case) of the object over any link across the network. For example, Datacast[17]is a scheme for delivery to many nodes inside data centers that relies on regular and structured topologies, and DCCast[18]is a similar approach for delivery across data centers. A push notification is a message that is "pushed" from a back-end server or application to a user interface, e.g. mobile applications[19]or desktop applications.Appleintroduced push notifications foriPhonein 2009,[20]and in 2010Googlereleased "Google Cloud to Device Messaging" (superseded byGoogle Cloud Messagingand then byFirebase Cloud Messaging).[21]In November 2015,Microsoftannounced that theWindows Notification Servicewould be expanded to make use of the Universal Windows Platform architecture, allowing for push data to be sent toWindows 10,Windows 10 Mobile,Xbox, and other supported platforms using universal API calls and POST requests.[22] Push notifications are mainly divided into two approaches: local notifications and remote notifications.[23]For local notifications, the application schedules the notification with the local device's OS. The application sets a timer in the application itself, provided it is able to continuously run in the background. When the event's scheduled time is reached, or the event's programmed condition is met, the message is displayed in the application's user interface. Remote notifications are handled by a remote server.
Under this scenario, the client application needs to be registered on the server with a unique key (e.g., aUUID). The server then fires the message against the unique key to deliver it to the client via an agreed client/server protocol such asHTTPorXMPP, and the client displays the message received. When the push notification arrives, it can transmit short notifications and messages, set badges on application icons, blink or continuously light up thenotification LED, or play alert sounds to attract the user's attention.[24]Push notifications are usually used by applications to bring information to users' attention. The content of the messages can be classified into several example categories. Real-time push notifications may raise privacy issues since they can be used to bind virtual identities of social network pseudonyms to the real identities of the smartphone owners.[26]The use of unnecessary push notifications for promotional purposes has been criticized as an example ofattention theft.[27]
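Tying the registration step to the long-polling delivery described earlier, a client-side sketch might look like the following. The base URL, endpoint paths, and JSON fields are hypothetical; production systems would typically use a platform push service (such as APNs, FCM, or WNS) or a WebSocket connection instead of a raw polling loop.

```python
import json
import urllib.request
import uuid

BASE_URL = "https://push.example.com"   # hypothetical notification server
client_key = str(uuid.uuid4())          # unique key identifying this client

# 1. Register the client so the server knows how to address notifications to it.
register = urllib.request.Request(
    f"{BASE_URL}/register",
    data=json.dumps({"client_key": client_key}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
urllib.request.urlopen(register, timeout=10)

# 2. Wait for notifications addressed to this key (long poll; empty body means "try again").
while True:
    with urllib.request.urlopen(f"{BASE_URL}/notifications?key={client_key}", timeout=90) as resp:
        body = resp.read()
    if body:
        message = json.loads(body)
        print("notification received:", message.get("text"))  # display it, set a badge, play a sound, etc.
```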
https://en.wikipedia.org/wiki/Push_technology
In distributed computing, aremote procedure call(RPC) is when a computer program causes a procedure (subroutine) to execute in a different address space (commonly on another computer on a shared computer network), which is written as if it were a normal (local) procedure call, without the programmer explicitly writing the details for the remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program, or remote. This is a form of client–server interaction (caller is client, executor is server), typically implemented via a request–response message passing system. In the object-oriented programming paradigm, RPCs are represented by remote method invocation (RMI). The RPC model implies a level of location transparency, namely that calling procedures are largely the same whether they are local or remote, but usually, they are not identical, so local calls can be distinguished from remote calls. Remote calls are usually orders of magnitude slower and less reliable than local calls, so distinguishing them is important. RPCs are a form of inter-process communication (IPC), in that different processes have different address spaces: if on the same host machine, they have distinct virtual address spaces, even though the physical address space is the same; while if they are on different hosts, the physical address space is also different. Many different (often incompatible) technologies have been used to implement the concept. Request–response protocols date to early distributed computing in the late 1960s, theoretical proposals of remote procedure calls as the model of network operations date to the 1970s, and practical implementations date to the early 1980s.Bruce Jay Nelsonis generally credited with coining the term "remote procedure call" in 1981.[1] Remote procedure calls used in modern operating systems trace their roots back to the RC 4000 multiprogramming system,[2]which used a request-response communication protocol for process synchronization.[3]The idea of treating network operations as remote procedure calls goes back at least to the 1970s in earlyARPANETdocuments.[4]In 1978,Per Brinch Hansenproposed Distributed Processes, a language for distributed computing based on "external requests" consisting of procedure calls between processes.[5] One of the earliest practical implementations was in 1982 byBrian Randelland colleagues for theirNewcastle Connectionbetween UNIX machines.[6]This was soon followed by "Lupine" by Andrew Birrell and Bruce Nelson in theCedarenvironment atXerox PARC.[7][8][9]Lupine automatically generated stubs, providing type-safe bindings, and used an efficient protocol for communication.[8]One of the first business uses of RPC was byXeroxunder the name "Courier" in 1981. The first popular implementation of RPC on Unix was Sun's RPC (now called ONC RPC), used as the basis for Network File System (NFS). In the 1990s, with the popularity ofobject-oriented programming, an alternative model of remote method invocation (RMI) was widely implemented, such as in Common Object Request Broker Architecture (CORBA, 1991) and Java remote method invocation. RMIs, in turn, fell in popularity with the rise of the internet, particularly in the 2000s. RPC is a request–response protocol. An RPC is initiated by theclient, which sends a request message to a known remoteserverto execute a specified procedure with supplied parameters. The remote server sends a response to the client, and the application continues its process.
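As a concrete sketch of this request–response flow, the fragment below uses Python's standard xmlrpc modules purely as one of many possible protocol bindings; the procedure name and port are arbitrary choices for the example.

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# --- server side: expose an ordinary function as a remotely callable procedure ---
def add(a, b):
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False, allow_none=True)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# --- client side: the call is written like a local call, but each invocation is
# marshalled into a request, executed on the server, and answered with a response ---
proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
print(proxy.add(2, 3))   # 5 -- the client blocks here until the response arrives
```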
While the server is processing the call, the client is blocked (it waits until the server has finished processing before resuming execution), unless the client sends an asynchronous request to the server, such as an XMLHttpRequest. There are many variations and subtleties in various implementations, resulting in a variety of different (incompatible) RPC protocols. An important difference between remote procedure calls and local calls is that remote calls can fail because of unpredictable network problems. Also, callers generally must deal with such failures without knowing whether the remote procedure was actually invoked. Idempotent procedures (those that have no additional effects if called more than once) are easily handled, but enough difficulties remain that code to call remote procedures is often confined to carefully written low-level subsystems. To let different clients access servers, a number of standardized RPC systems have been created. Most of these use aninterface description language(IDL) to let various platforms call the RPC. The IDL files can then be used to generate code to interface between the client and server. Notable RPC implementations and analogues include ONC RPC, CORBA, and Java remote method invocation, among many others.
https://en.wikipedia.org/wiki/Remote_procedure_call
TheServer Change Number (SCN)is a counter variable used in client/server architecture systems to determine whether theserverstate can be synchronized with the state of theclient; a difference between the two indicates that there have evidently been communication problems. The number is incremented once the server has successfully integrated changes coming from the client in the case of a server-sideevent. The counter is incremented once more if the changes made by the programmer arecommitted.
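A toy sketch of the idea, with invented class and method names (the description above does not prescribe an interface): the client remembers the last SCN it observed, and any mismatch with the server's current value signals a lost or failed exchange.

```python
class Server:
    def __init__(self):
        self.scn = 0                      # server change number

    def integrate_client_changes(self, changes):
        self.scn += 1                     # incremented when client changes are integrated
        return self.scn

    def commit(self):
        self.scn += 1                     # incremented once more when the changes are committed
        return self.scn

class Client:
    def __init__(self, server):
        self.server = server
        self.known_scn = 0

    def push_changes(self, changes):
        self.server.integrate_client_changes(changes)
        self.known_scn = self.server.commit()

    def in_sync(self):
        # A difference means the client missed at least one server-side change.
        return self.known_scn == self.server.scn

server = Server()
client = Client(server)
client.push_changes({"row": 7, "value": "new"})
print(client.in_sync())   # True  -> states agree
server.commit()           # a change the client never heard about
print(client.in_sync())   # False -> synchronization was lost
```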
https://en.wikipedia.org/wiki/Server_change_number
Systems Network Architecture[1](SNA) isIBM's proprietarynetworkingarchitecture, created in 1974.[2]It is a completeprotocol stackfor interconnectingcomputersand their resources. SNA describes formats and protocols but, in itself, is not a piece of software. The implementation of SNA takes the form of various communications packages, most notablyVirtual Telecommunications Access Method(VTAM), themainframesoftware package for SNA communications. SNA was made public as part of IBM's "Advanced Function for Communications" announcement in September 1974,[3]which included the implementation of the SNA/SDLC (Synchronous Data Link Control) protocols on new communications products, which were supported byIBM 3704/3705communication controllers and theirNetwork Control Program(NCP), and by System/370 systems running VTAM and other software such as CICS and IMS. This announcement was followed by another in July 1975, which introduced theIBM 3760data entry station, theIBM 3790communication system, and the new models of theIBM 3270display system.[4] SNA was designed in the era when the computer industry had not fully adopted the concept of layered communication. Applications, databases, and communication functions were mingled into the same protocol or product, which made it difficult to maintain and manage.[5][6]SNA was mainly designed by the IBM Systems Development Division laboratory inResearch Triangle Park,North Carolina, USA,[7]helped by other laboratories that implemented SNA/SDLC. IBM later made the details public in its System Reference Library manuals andIBM Systems Journal. It is still used extensively in banks and other financial transaction networks, as well as in many government agencies. In 1999 there were an estimated 3,500 companies "with 11,000 SNA mainframes."[8]One of the primary pieces of hardware, the3745/3746 communications controller, has been withdrawn[a]from the market by IBM. IBM continues to provide hardware maintenance service and microcode features to support users. A robust market of smaller companies continues to provide the 3745/3746, features, parts, and service. VTAM is also supported by IBM, as is the NCP required by the 3745/3746 controllers. In 2008 an IBM publication said: with the popularity and growth of TCP/IP, SNA is changing from being a true network architecture to being what could be termed an "application and application access architecture." In other words, there are many applications that still need to communicate in SNA, but the required SNA protocols are carried over the network by IP.[9] IBM in the mid-1970s saw itself mainly as a hardware vendor and hence all its innovations in that period aimed to increase hardware sales. SNA's objective was to reduce the costs of operating large numbers of terminals and thus induce customers to develop or expandinteractiveterminal-based systems as opposed tobatchsystems. An expansion of interactive terminal-based systems would increase sales of terminals and more importantly of mainframe computers and peripherals - partly because of the simple increase in the volume of work done by the systems and partly because interactive processing requires more computing power per transaction than batch processing. Hence SNA aimed to reduce the main non-computer costs and other difficulties in operating large networks using earlier communications protocols.
As a result of these difficulties, running a large number of terminals required a lot more communications lines than the number required today, especially if different types of terminals needed to be supported, or the users wanted to use different types of applications (e.g. under CICS or TSO) from the same location. In purely financial terms SNA's objectives were to increase customers' spending on terminal-based systems and at the same time to increase IBM's share of that spending, mainly at the expense of the telecommunications companies. SNA also aimed to overcome a limitation of the architecture which IBM'sSystem/370mainframes inherited fromSystem/360. Each CPU could connect to at most 16I/O channels[10]and each channel could handle up to 256 peripherals - i.e. there was a maximum of 4096 peripherals per CPU. At the time when SNA was designed, each communications line counted as a peripheral. Thus the number of terminals with which powerful mainframes could otherwise communicate was limited. Improvements in computer component technology made it feasible to build terminals that included more powerful communications cards which could operate a single standardcommunications protocolrather than a very stripped-down protocol which suited only a specific type of terminal. As a result, severalmulti-layer communications protocolswere proposed in the 1970s, of which IBM's SNA andITU-T'sX.25became dominant later. SNA consists of several important elements. SNA removed link control from the application program and placed it in the NCP, which had both advantages and disadvantages. SNA at its core was designed with the ability to wrap different layers of connections with a blanket of security. To communicate within an SNA environment you first have to connect to a node and establish and maintain a link connection into the network, then negotiate a proper session, and then handle the flows within the session itself. At each level there are different security controls that can govern the connections and protect the session information.[20] Network Addressable Unitsin an SNA network are any components that can be assigned an address and can send and receive information. They are distinguished further into several categories.[21] SNA essentially offers transparent communication: equipment specifics do not impose any constraints on LU-LU communication. But eventually it serves a purpose to make a distinction between LU types, as the application must take the functionality of the terminal equipment into account (e.g. screen sizes and layout). Within SNA there are three types of data stream to connect local display terminals and printers: SNA Character String (SCS), used for LU1 terminals and for logging on to an SNA network with Unformatted System Services (USS); the3270data stream, mainly used by mainframes such as theSystem/370and successors, including thezSeriesfamily; and the5250data stream, mainly used by minicomputers/servers such as theSystem/34,System/36,System/38, andAS/400and its successors, including System i andIBM Power SystemsrunningIBM i. SNA defines several kinds of devices, called Logical Unit types.[25] The primary ones in use are LU1, LU2, andLU6.2(an advanced protocol for application to application conversations). The term37xxrefers to IBM's family of SNA communications controllers.
The 3745 supports up to eight high-speedT1circuits, the 3725 is a large-scale node andfront-end processorfor a host, and the3720is a remote node that functions as aconcentratorandrouter. VTAM/NCP PU4 nodes attached to IBMToken Ringnetworks can share the same Local Area Network infrastructure with workstations and servers. NCP encapsulates SNA packets into Token-Ring frames, allowing sessions to flow over a Token-Ring network. The actual encapsulation and decapsulation takes place in the 3745. As mainframe-based entities looked for alternatives to their 37XX-based networks, IBM partnered withCiscoin the mid-1990s and together they developedData Link Switching, or DLSw. DLSw encapsulates SNA packets into IP datagrams, allowing sessions to flow over an IP network. The actual encapsulation and decapsulation takes place in Cisco routers at each end of a DLSw peer connection. At the local, or mainframe site, the router uses Token Ring topology to connect natively to VTAM. At the remote (user) end of the connection, a PU type 2 emulator (such as an SNA gateway server) connects to the peer router via the router's LAN interface. End user terminals are typically PCs with 3270 emulation software that is defined to the SNA gateway. The VTAM/NCP PU type 2 definition becomes a Switched Major Node that can be local to VTAM (without an NCP), and a "Line" connection can be defined using various possible solutions (such as a Token Ring interface on the 3745, a 3172 Lan Channel Station, or a CiscoESCON-compatible Channel Interface Processor). The proprietary networking architecture forHoneywell Bullmainframes isDistributed Systems Architecture(DSA).[27]The Communications package for DSA isVIP. DSA is also no longer supported for client access. Bull mainframes are fitted withMainwayfor translating DSA toTCP/IPand VIP devices are replaced by TNVIPTerminal Emulations(GLink,Winsurf).GCOS 8supportsTNVIP SEover TCP/IP. The networking architecture forUnivacmainframes was the Distributed Computing Architecture (DCA), and the networking architecture forBurroughsmainframes was the Burroughs Network Architecture (BNA); after they merged to formUnisys, both were provided by the merged company. Both were largely obsolete by 2012.International Computers Limited(ICL) provided its Information Processing Architecture (IPA). DECnet[28][29][30]is a suite ofnetwork protocolscreated byDigital Equipment Corporation, originally released in 1975 to connect twoPDP-11minicomputers. It evolved into one of the firstpeer-to-peernetwork architectures, thus transforming DEC into a networking powerhouse in the 1980s. SNA also competed withISO'sOpen Systems Interconnection, which was an attempt to create a vendor-neutral network architecture that failed due to the problems of "design by committee".[citation needed]OSI systems are very complex, and the many parties involved required extensive flexibilities that hurt the interoperability of OSI systems, which was the prime objective to start with.[citation needed] The TCP/IP suite for many years was not considered a serious alternative by IBM, due in part to the lack of control over the intellectual property.[citation needed]The 1988 publication ofRFC1041, authored byYakov Rekhter, which defines an option to runIBM 3270sessions overTelnet, explicitly recognizes the customer demand for interoperability in the data center. 
Subsequently, the IETF expanded on this work with multiple other RFCs.TN3270(Telnet 3270), defined by those RFCs, supports direct client-server connections to the mainframe using a TN3270 server on the mainframe, and a TN3270 emulation package on the computer at the end user site. This protocol allows existing VTAM applications (CICS, TSO) to run with little or no change from traditional SNA by supporting traditional 3270 terminal protocol over the TCP/IP session. This protocol is widely used to replace legacy SNA connectivity more thanData-Link Switching(DLSw) and other SNA replacement technologies. A similarTN5250(Telnet 5250) variant exists for theIBM 5250. Non-IBM SNA software allowed systems other than IBM's to communicate with IBM's mainframes andAS/400midrange computers using the SNA protocols. Some Unix system vendors, such asSun Microsystemswith its SunLink SNA product line, including PU2.1 Server,[31]andHewlett-Packard/Hewlett Packard Enterprise, with their SNAplus2 product,[32]provided SNA software. Microsoftintroduced SNA Server forWindowsin 1993;[33]it is now namedMicrosoft Host Integration Server. Digital Equipment Corporationhad VMS/SNA forVMS.[34]Third-party SNA software packages for VMS, such as the VAX Link products from Systems Strategies, Inc.,[34]were also available. Hewlett-Packard offered SNA Server and SNA Access for itsHP 3000systems.[35] Brixton Systems developed several SNA software packages, sold under the name "Brixton",[36][37][38]such as Brixton BrxPU21, BrxPU5, BrxLU62, and BrxAPPC, for systems such as workstations fromHewlett-Packard,[39]andSun Microsystems.[40] IBM supported using several non-IBM software implementations ofAPPC/PU2.1/LU6.2to communicate withz/OS, including SNAplus2 for systems fromHP,[41]Brixton 4.1 SNA forSun Solaris,[42]and SunLink SNA 9.1 Support for Sun Solaris.[43]
https://en.wikipedia.org/wiki/Systems_Network_Architecture
International Business Machines Corporation(using thetrademarkIBM), nicknamedBig Blue,[6]is an Americanmultinationaltechnology companyheadquartered inArmonk, New Yorkand present in over 175 countries.[7][8]It is apublicly traded companyand one of the 30 companies in theDow Jones Industrial Average.[a][9][10]IBM is the largest industrial research organization in the world, with 19 research facilities across a dozen countries, having held the record for most annualU.S.patentsgenerated by a business for 29 consecutive years from 1993 to 2021. IBM was founded in 1911 as theComputing-Tabulating-Recording Company(CTR), aholding companyof manufacturers of record-keeping and measuring systems. It was renamed "International Business Machines" in 1924 and soon became the leading manufacturer ofpunch-card tabulating systems. During the 1960s and 1970s, theIBM mainframe, exemplified by theSystem/360and its successors, was the world's dominantcomputing platform, with the company producing 80 percent of computers in the U.S. and 70 percent of computers worldwide.[11]Embracing both business and scientific computing, System/360 was the first family of computers designed to cover a complete range of applications from small to large.[12] IBM debuted in themicrocomputermarket in 1981 with theIBM Personal Computer, — itsDOSsoftware provided byMicrosoft, which became the basis for the majority ofpersonal computersto the present day.[13]The company later also found success in theportablespace with theThinkPad. Since the 1990s, IBM has concentrated oncomputer services,software,supercomputers, andscientific research; it sold its microcomputer division toLenovoin 2005. IBM continues to develop mainframes, and its supercomputers haveconsistently rankedamong the most powerful in the world in the 21st century. In 2018, IBM along with 91 additionalFortune500companies had "paid an effective federal tax rate of 0% or less" as a result of Donald Trump´sTax Cuts and Jobs Act of 2017.[14] As one of the world's oldest and largest technology companies, IBM has been responsible for severaltechnological innovations, including theAutomated Teller Machine(ATM),Dynamic Random-Access Memory(DRAM), thefloppy disk, thehard disk drive, themagnetic stripe card, therelational database, theSQL programming language, and theUniversal Product Code(UPC) barcode. The company has made inroads in advancedcomputer chips,quantum computing,artificial intelligence, anddata infrastructure.[15][16][17]IBM employees and alumni have won various recognitions for their scientific research and inventions, including sixNobel Prizesand sixTuring Awards.[18] IBM originated with several technological innovations developed and commercialized in the late 19th century. Julius E. 
Pitrap patented the computing scale in 1885;[19]Alexander Dey invented the dial recorder (1888);[20]Herman Hollerithpatented theElectric Tabulating Machine(1889);[21]andWillard Bundyinvented atime clockto record workers' arrival and departure times on a paper tape (1889).[22]On June 16, 1911, their four companies wereamalgamatedin New York State byCharles Ranlett Flintforming a fifth company, theComputing-Tabulating-Recording Company(CTR) based in Endicott, New York.[1][23]The five companies had 1,300 employees and offices and plants in Endicott andBinghamton, New York;Dayton, Ohio;Detroit, Michigan;Washington, D.C.; andToronto, Canada.[24] Collectively, the companies manufactured a wide array of machinery for sale and lease, ranging from commercial scales and industrial time recorders, meat and cheese slicers, to tabulators and punched cards.Thomas J. Watson, Sr., fired from theNational Cash Register CompanybyJohn Henry Patterson, called on Flint and, in 1914, was offered a position at CTR.[25]Watson joined CTR as general manager and then, 11 months later, was made President whenantitrustcases relating to his time at NCR were resolved.[26]Having learned Patterson's pioneering business practices, Watson proceeded to put the stamp of NCR onto CTR's companies.[25]: 105He implemented sales conventions, "generous sales incentives, a focus on customer service, an insistence on well-groomed, dark-suited salesmen and had an evangelical fervor for instilling company pride and loyalty in every worker".[27][28]His favorite slogan, "THINK", became a mantra for each company's employees.[27]During Watson's first four years, revenues reached $9 million ($163 million today) and the company's operations expanded to Europe, South America, Asia and Australia.[27]Watson never liked the clumsy hyphenated name "Computing-Tabulating-Recording Company" and chose to replace it with the more expansive title "International Business Machines" which had previously been used as the name of CTR's Canadian Division;[29]the name was changed on February 14, 1924.[30]By 1933, most of the subsidiaries had been merged into one company, IBM.[31] TheNazismade extensive use of Hollerith punch card and alphabetical accounting equipment and IBM's majority-owned German subsidiary, Deutsche Hollerith Maschinen GmbH (Dehomag), supplied this equipment from the early 1930s. This equipment was critical to Nazi efforts to categorize citizens of both Germany and other nations that fell under Nazi control through ongoing censuses. These census data were used to facilitate the round-up of Jews and other targeted groups, and to catalog their movements through the machinery of theHolocaust, including internment in the concentration camps.[32]Black contends that IBM's dealings with Nazis through its New York City headquarters persisted during World War II.[33]Nazi concentration camps operated a Hollerith department called Hollerith Abteilung, which had IBM machines, including calculating and sorting machines.[34] IBM as a military contractor produced 6% of theM1 Carbinerifles used in World War II, about 346,500 of them, between August 1943 and May 1944. IBM built theAutomatic Sequence Controlled Calculator, an electromechanical computer, during World War II. It offered its first commercial stored-program computer, the vacuum tube basedIBM 701, in 1952. TheIBM 305 RAMACintroduced the hard disk drive in 1956. The company switched to transistorized designs with the7000and1400series, beginning in 1958. 
IBM considered the1400series the "Model T" of computing, because it was the first IBM computer with over ten thousand unit sales.[citation needed] In 1956, the company demonstrated the first practical example ofartificial intelligencewhenArthur L. Samuelof IBM'sPoughkeepsie, New York, laboratory programmed anIBM 704not merely to play checkers but to "learn" from its own experience. In 1957, theFORTRANscientific programming language was developed.[citation needed] In 1961, IBM developed theSABRE reservation systemforAmerican Airlinesand introduced the highly successfulSelectrictypewriter. Also in 1961 IBM used theIBM 7094to generate the first song sung completely by a computer using synthesizers. The song was "Daisy Bell (Bicycle Built for Two)". In 1963, IBM employees and computers helped NASA track the orbital flights of theMercuryastronauts. A year later, it moved its corporate headquarters from New York City toArmonk, New York. The latter half of the 1960s saw IBM continue its support of space exploration, participating in the 1965Geminiflights, 1966 Saturn flights, and 1969 lunar mission. IBM also developed and manufactured theSaturn V'sInstrument Unit andApollospacecraft guidance computers. On April 7, 1964, IBM launched the first computer system family, theIBM System/360. It spanned the complete range of commercial and scientific applications from large to small, allowing companies for the first time to upgrade to models with greater computing capability without having to rewrite their applications. It was followed by theIBM System/370in 1970. Together the 360 and 370 made theIBM mainframethe dominantmainframe computerand the dominant computing platform in the industry throughout this period and into the early 1980s. They and the operating systems that ran on them such asOS/VS1andMVS, and the middleware built on top of those such as theCICStransaction processing monitor, had a near-monopoly-level market share and became the thing IBM was most known for during this period.[35] In 1969, the United States of America alleged that IBM violated theSherman Antitrust Actby monopolizing or attempting to monopolize the general-purpose electronic digital computer system market, specifically computers designed primarily for business, and subsequently alleged that IBM violated the antitrust laws in IBM's actions directed against leasing companies and plug-compatible peripheral manufacturers. Shortly after, IBM unbundled its software and services in what many observers believed was a direct result of the lawsuit, creating a competitive market for software. In 1982, the Department of Justice dropped the case as "without merit".[36] Also in 1969, IBM engineerForrest Parryinvented themagnetic stripe cardthat would become ubiquitous for credit/debit/ATM cards, driver's licenses, rapid transit cards and a multitude of other identity and access control applications. IBM pioneered the manufacture of these cards, and for most of the 1970s, the data processing systems and software for such applications ran exclusively on IBM computers. In 1974, IBM engineerGeorge J. Laurerdeveloped theUniversal Product Code.[37]IBM and theWorld Bankfirst introducedfinancial swapsto the public in 1981, when they entered into a swap agreement.[38] IBM entered themicrocomputermarket in the 1980s with theIBM Personal Computer(IBM 5150).
The computer, which spawned along line of successors, had a profoundinfluence on the development of the personal computer marketand became one of IBM's best selling products of all time. Because of a lack of foresight by IBM,[39][40]the PC was not well protected byintellectual propertylaws. As a consequence, IBM quickly began losing its market dominance to emerging,compatiblecompetitors in the PC market. In 1985, IBM collaborated withMicrosoftto develop a newoperating system, which was released asOS/2. Following a dispute, Microsoft severed the collaboration and IBM continued development of OS/2 on its own but it failed in the marketplace against Microsoft'sWindowsduring the mid-1990s. In 1991 IBM began spinning off its many divisions into autonomous subsidiaries (so-called "Baby Blues") in an attempt to make the company more manageable and to streamline IBM by having other investors finance those companies.[41][42]These includedAdStar, dedicated to disk drives and other data storage products; IBM Application Business Systems, dedicated to mid-range computers; IBM Enterprise Systems, dedicated to mainframes; Pennant Systems, dedicated to mid-range and large printers;Lexmark, dedicated to small printers; and more.[43]Lexmark was acquired byClayton & Dubilierin aleveraged buyoutshortly after its formation.[44] In September 1992, IBM completed the spin-off of their various non-mainframe and non-midrange, personal computer manufacturing divisions, combining them into an autonomous wholly owned subsidiary known as the IBM Personal Computer Company (IBM PC Co.).[45][46]This corporate restructuring came after IBM reported a sharp drop in profit margins during the second quarter of fiscal year 1992; market analysts attributed the drop to a fierce price war in the personal computer market over the summer of 1992.[47]The corporate restructuring was one of the largest and most expensive in history up to that point.[48]By the summer of 1993, the IBM PC Co. had divided into multiple business units itself, includingAmbra Computer Corporationand the IBM Power Personal Systems Group, the former an attempt to design and market "clone" computers of IBM's own architecture and the latter responsible for IBM'sPowerPC-basedworkstations.[49][50]IBM PC Co. introduced theThinkPadclone computers, which IBM would heavily market and would eventually become one of the best-selling series ofnotebook computers.[51] In 1993, IBM posted an $8 billion loss – at the time the biggest in American corporate history.[52]Lou Gerstnerwas hired as CEO fromRJR Nabiscoto turn the company around.[53]In 1995, IBM purchasedLotus Software, best known for itsLotus 1-2-3spreadsheet software.[54]During the decade, IBM was working on a new operating system, named theWorkplace OSproject. Despite a large amount of money spent on the project, it was cancelled in 1996. In 1998, IBM merged the enterprise-oriented Personal Systems Group of the IBM PC Co. into IBM's own Global Services personal computer consulting and customer service division. The resulting merged business units then became known simply as IBM Personal Systems Group.[55]A year later, IBM stopped selling their computers at retail outlets after their market share in this sector had fallen considerably behind competitorsCompaqandDell.[56]Immediately afterwards, the IBM PC Co. 
was dissolved and merged into IBM Personal Systems Group.[57] In 2002 IBM acquired PwC Consulting, the consulting arm ofPwCwhich was merged into itsIBM Global Services.[58][59]On September 14, 2004,LGand IBM announced that their business alliance in theSouth Koreanmarket would end at the end of that year. Both companies stated that it was unrelated to the charges of bribery earlier that year.[60][61][62][63]Xnotewas originally part of the joint venture and was sold by LG in 2012.[64] Continuing a trend started in the 1990s of downsizing its operations and divesting fromcommodity production, IBMsold all of its personal computer businessto Chinese technology companyLenovo[65]and, in 2009, it acquired software companySPSS Inc.Later in 2009, IBM'sBlue Genesupercomputing program was awarded theNational Medal of Technology and Innovationby U.S. PresidentBarack Obama. In 2011, IBM gained worldwide attention for its artificial intelligence programWatson, which was exhibited onJeopardy!where it won against game-show champions Ken Jennings and Brad Rutter. The company also celebrated its 100th anniversary in the same year on June 16. In 2012, IBM announced it had agreed to buyKenexaand Texas Memory Systems,[66]and a year later it also acquired SoftLayer Technologies, aweb hosting service, in a deal worth around $2 billion.[67]Also that year, the company designed a video surveillance system forDavao City.[68] In 2014 IBM announced it would sell itsx86server division to Lenovo for $2.1 billion.[69]while continuing to offerPower ISA-based servers.[70]Also that year, IBM began announcing several major partnerships with other companies, includingApple Inc.,[71][72]Twitter,[73]Facebook,[74]Tencent,[75]Cisco,[76]UnderArmour,[77]Box,[78]Microsoft,[79]VMware,[80]CSC,[81]Macy's,[82]Sesame Workshop,[83]the parent company ofSesame Street, andSalesforce.com.[84] In 2015, its chip division transitioned to afablessmodel withsemiconductorsdesign, offloading manufacturing toGlobalFoundries.[85] In 2015, IBM announced three major acquisitions: Merge Healthcare for $1 billion,[86]data storage vendorCleversafe, and all digital assets fromThe Weather Company, includingWeather.comandThe Weather Channelmobile app.[87][88]Also that year, IBM employees created the filmA Boy and His Atom, which was the first molecule movie to tell a story. 
In 2016, IBM acquired video conferencing serviceUstreamand formed a new cloud video unit.[89][90]In April 2016, it posted a 14-year low in quarterly sales.[91]The following month,Grouponsued IBM accusing it of patent infringement, two months after IBM accused Groupon of patent infringement in a separate lawsuit.[92] In 2015, IBM bought the digital part ofThe Weather Company,[93]Truven Health Analytics for $2.6 billion in 2016, and in October 2018, IBM announced its intention to acquireRed Hatfor $34 billion,[94][95][96]which was completed on July 9, 2019.[97] In February 2020, IBM'sJohn Kelly IIIjoinedBrad SmithofMicrosoftto sign a pledge with theVaticanto ensure the ethical use and practice ofArtificial Intelligence (AI).[98] IBM announced in October 2020 that it would divest the Managed Infrastructure Services unit of its Global Technology Services division into a new public company.[99]The new company,Kyndryl, will have 90,000 employees, 4,600 clients in 115 countries, with a backlog of $60 billion.[100][101][102]IBM's spin off was greater than any of its previous divestitures, and welcomed by investors.[103][104][105]IBM appointed Martin Schroeter, who had been IBM's CFO from 2014 through the end of 2017, as CEO of Kyndryl.[106][107] In 2021, IBM announced the acquisition of the enterprise software companyTurbonomicfor $1.5 billion.[108]In January 2022, IBM announced it would sellWatson Healthto private equity firmFrancisco Partners.[109] On March 7, 2022, a few days after the start of theRussian invasion of Ukraine, IBM CEO Arvind Krishna published a Ukrainian flag and announced that "we have suspended all business in Russia". All Russian articles were also removed from the IBM website.[110]On June 7, Krishna announced that IBM would carry out an "orderly wind-down" of its operations in Russia.[111] In late 2022, IBM started a collaboration with new Japanese manufacturerRapidus,[112]which led GlobalFoundries to file a lawsuit against IBM the following year.[113] In 2023, IBM acquired Manta Software Inc. to complement its data and A.I. governance capabilities for an undisclosed amount.[114]On November 16, 2023, IBM suspended ads on Twitter after ads were found next to pro-Nazi content.[115][116] In August 2023, IBM agreed to sell The Weather Company to Francisco Partners for an undisclosed sum.[117]The sale was finalized on February 1, 2024,[118]and the cost was disclosed as $1.1 billion, with $750 million in cash, $100 million deferred over seven years, and $250 million in contingent consideration.[119] In December 2023, IBM announced it would acquireSoftware AG's StreamSets andwebMethodsplatforms for €2.13 billion ($2.33 billion).[120] IBM's market capitalization was valued at over $153 billion as of May 2024.[121]Despite its relative decline within the technology sector,[122]IBM remains the seventh largest technology company by revenue, and 67thlargest overall company by revenue in the United States. IBM ranked No. 
38 on the 2020Fortune 500rankings of the largest United States corporations by total revenue.[123]In 2014, IBM was accused of using "financial engineering" to hit its quarterly earnings targets rather than investing for the longer term.[124][125][126] The key trends of IBM are (as at the financial year ending December 31):[127][128] The company's 15-member board of directors are responsible for overall corporate management and includes the current or former CEOs ofAnthem,Dow Chemical,Johnson and Johnson,Royal Dutch Shell,UPS, andVanguardas well as the president ofCornell Universityand a retiredU.S. Navy admiral.[129]Vanguard Group is the largest shareholder of IBM and as of March 31, 2023, held 15.7% of total shares outstanding.[130] In 2011, IBM became the first technology companyWarren Buffett'sholding companyBerkshire Hathawayinvested in.[131]Initially he bought 64 million shares costing $10.5 billion. Over the years, Buffett increased his IBM holdings, but by the end of 2017 had reduced them by 94.5% to 2.05 million shares; by May 2018, he was completely out of IBM.[132] IBM is headquartered inArmonk, New York, a community 37 miles (60 km) north of Midtown Manhattan.[133]A nickname for the company is the "Colossus of Armonk".[134]Its principal building, referred to as CHQ, is a 283,000-square-foot (26,300 m2) glass and stone edifice on a 25-acre (10 ha) parcel amid a 432-acre former apple orchard the company purchased in the mid-1950s.[135]There are two other IBM buildings within walking distance of CHQ: the North Castle office, which previously served as IBM's headquarters; and the Louis V. Gerstner, Jr., Center for Learning[136](formerly known as IBM Learning Center (ILC)), a resort hotel and training center, which has 182 guest rooms, 31 meeting rooms, and various amenities.[137] IBM operates in 174 countries as of 2016[update],[2]with mobility centers in smaller market areas and major campuses in the larger ones. In New York City, IBM has several offices besides CHQ, including theIBM Watsonheadquarters atAstor Placein Manhattan. Outside of New York, major campuses in the United States includeAustin, Texas;Research Triangle Park (Raleigh-Durham), North Carolina;Rochester, Minnesota; andSilicon Valley, California. IBM's real estate holdings are varied and globally diverse. Towers occupied by IBM include1250 René-Lévesque(Montreal, Canada) andOne Atlantic Center(Atlanta, Georgia, US). In Beijing, China, IBM occupiesPangu Plaza,[138]the city's seventh tallest building and overlookingBeijing National Stadium ("Bird's Nest"), home to the2008 Summer Olympics. IBM India Private Limitedis the Indian subsidiary of IBM, which is headquartered atBangalore, Karnataka. It has facilities inCoimbatore,Chennai,Kochi,Ahmedabad,Delhi,Kolkata,Mumbai,Pune,Gurugram,Noida,Bhubaneshwar,Surat,Visakhapatnam,Hyderabad,BangaloreandJamshedpur. Other notable buildings include theIBM Rome Software Lab(Rome, Italy),Hursley House(Winchester, UK),330 North Wabash(Chicago, Illinois, United States), theCambridge Scientific Center(Cambridge, Massachusetts, United States), theIBM Toronto Software Lab(Toronto, Canada), the IBM Building, Johannesburg (Johannesburg, South Africa), theIBM Building (Seattle)(Seattle, Washington, United States), theIBM Hakozaki Facility(Tokyo, Japan), theIBM Yamato Facility(Yamato, Japan), theIBM Canada Head Office Building(Ontario, Canada) and the Watson IoT Headquarters[139](Munich, Germany). 
Defunct IBM campuses include theIBM Somers Office Complex(Somers, New York),Spango Valley(Greenock, Scotland), andTour Descartes(Paris, France). The company's contributions to industrial architecture and design include works byMarcel Breuer,Eero Saarinen,Ludwig Mies van der Rohe,I.M. PeiandRicardo Legorreta. Van der Rohe's building in Chicago was recognized with the 1990Honor Awardfrom theNational Building Museum.[140] IBM has a large and diverse portfolio of products and services. As of 2016[update], these offerings fall into the categories ofcloud computing, artificial intelligence,commerce,dataandanalytics,Internet of things(IoT),[141]IT infrastructure,mobile, digital workplace[142]andcybersecurity.[143] Since 1954, IBM sellsmainframe computers, the latest being theIBM zseries. The most recent model, theIBM z17, was released in 2024. In 1990, IBM released thePower microprocessors, which were designed into many console gaming systems, includingXbox 360,[144]PlayStation 3, andNintendo'sWii U.[145][146]IBMSecure Blueis encryption hardware that can be built into microprocessors,[147]and in 2014, the company revealedTrueNorth, aneuromorphicCMOSintegrated circuitand announced a $3 billion investment over the following five years to design a neural chip that mimics the human brain, with 10 billion neurons and 100 trillion synapses, but that uses just 1 kilowatt of power.[148]In 2016, the company launchedall-flash arraysdesigned for small and midsized companies, which includes software for data compression, provisioning, and snapshots across various systems.[149] In January 2019, IBM introduced its first commercial quantum computer:IBM Q System One.[150]In March 2020, it was announced that IBM will build Europe's first quantum computer inEhningen, Germany. The center, operated by theFraunhofer Society, was opened in 2024.[151][152][153] Since 2009, IBM ownsSPSS, a software package used forstatistical analysisin thesocial sciences.[154]IBM also ownedThe Weather Company, which provides weather forecasting and includesweather.comandWeather Underground,[155]which was sold in 2024. IBM Cloudincludesinfrastructure as a service(IaaS),software as a service(SaaS) andplatform as a service(PaaS) offered through public, private and hybridcloud delivery models. For instance, the IBMBluemixPaaS enables developers to quickly create complex websites on a pay-as-you-go model. IBMSoftLayeris adedicated server,managed hostingandcloud computingprovider, which in 2011 reported hosting more than 81,000 servers for more than 26,000 customers.[156]IBM also provides Cloud Data Encryption Services (ICDES), usingcryptographic splittingto secure customer data.[157] In May 2022, IBM announced the company had signed a multi-year Strategic Collaboration Agreement withAmazon Web Servicesto make a wide variety of IBM software available as a service on AWS Marketplace. Additionally, the deal includes both companies making joint investments that make it easier for companies to consume IBM's offering and integrate them with AWS, including developer training and software development for select markets.[158] IBM Watsonis a technology platform that usesnatural language processingand machine learning to reveal insights from large amounts ofunstructured data.[159]Watson was debuted in 2011 on the American game showJeopardy!, where it competed against championsKen JenningsandBrad Rutterin a three-game tournament and won. Watson has since been applied to business, healthcare, developers, and universities. 
For example, IBM has partnered withMemorial Sloan Kettering Cancer Centerto assist with considering treatment options foroncologypatients and for doingmelanomascreenings.[160]Several companies use Watson for call centers, either replacing or assisting customer service agents.[161] IBM also provides infrastructure for theNew York City Police Departmentthrough theirIBM Cognos Analyticsto perform data visualizations ofCompStatcrime data.[162] In June 2020, IBM announced that it was exiting the facial recognition business. In a letter to congress,[163]IBM's Chief Executive Officer Arvind Krishna told lawmakers, "now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies."[164] In May 2023, IBM revealedWatsonx, aGenerative AItoolkit that is powered by IBM's ownGranitemodels with option to use other publicly availableLLMs. Watsonx has multiple services for training andfine tuningmodels based on confidential data.[165]A year later, IBMopen-sourcedGranite code models and put them onHugging Facefor public use.[166]In October 2024, IBM introduced Granite 3.0, an open-source large language model designed for enterprise AI applications.[167] With 160,000 consultants globally as of 2024, it is one of the ten largest consulting companies in the world with capabilities spanning strategy andmanagement consulting, experience design, technology andsystems integration, and operations.[168]IBM's consulting business was valued at $20 billion, as of 2024.[169] Research has been part of IBM since its founding, and its organized efforts trace their roots back to 1945, when the Watson Scientific Computing Laboratory was founded atColumbia Universityin New York City, converting a renovated fraternity house on Manhattan's West Side into IBM's first laboratory. Now,IBM Researchconstitutes the largest industrial research organization in the world, with 12 labs on 6 continents.[170]IBM Research is headquartered at theThomas J. Watson Research Centerin New York, and facilities include theAlmaden labin California, Austin lab in Texas,Australia labin Melbourne,Brazil labin São Paulo and Rio de Janeiro, China lab in Beijing and Shanghai, Ireland lab in Dublin,Haifa labin Israel, India lab in Delhi andBangalore,Tokyo lab,Zurichlaband Africa lab inNairobi. In terms of investment, IBM'sR&Dexpenditure totals several billion dollars each year. In 2012, that expenditure was approximately $6.9 billion.[171]Recent allocations have included $1 billion to create a business unit forWatsonin 2014, and $3 billion to create a next-gen semiconductor along with $4 billion towards growing the company's "strategic imperatives" (cloud, analytics, mobile, security, social) in 2015.[172] IBM has been a leading proponent of theOpen Source Initiative, and began supportingLinuxin 1998.[173]The company invests billions of dollars in services and software based on Linux through the IBMLinux Technology Center, which includes over 300Linux kerneldevelopers.[174]IBM has also released code under differentopen-source licenses, such as theplatform-independentsoftware frameworkEclipse(worth approximately $40 million at the time of the donation),[175]the three-sentence International Components for Unicode (ICU) license, and theJava-basedrelational database management system(RDBMS)Apache Derby. IBM'sopen sourceinvolvement has not been trouble-free, however (seeSCO v. IBM). 
Famous inventions and developments by IBM include: the automated teller machine (ATM), Dynamic Random Access Memory (DRAM), the electronic keypunch, the financial swap, the floppy disk, the hard disk drive, the magnetic stripe card, the relational database, RISC, the SABRE airline reservation system, SQL, the Universal Product Code (UPC) bar code, and the virtual machine. Additionally, in 1990 company scientists used a scanning tunneling microscope to arrange 35 individual xenon atoms to spell out the company acronym, marking the first structure assembled one atom at a time.[176] A major part of IBM research is the generation of patents. Since its first patent for a traffic signaling device, IBM has been one of the world's most prolific patent sources. As of 2021, the company had held the record for the most annual U.S. patents generated by a business for 29 consecutive years.[177][178][179] In 2001, IBM became the first company to generate more than 3,000 patents in one year, beating this record in 2008 with over 4,000 patents.[11] As of 2022, the company held 150,000 patents.[180] IBM has also been criticized as being a patent troll.[181][182][183] IBM is nicknamed Big Blue partly because of its blue logo and color scheme,[184][185] and also in reference to its former de facto dress code of white shirts with blue suits.[184][186] The company logo has undergone several changes over the years, with its current "8-bar" logo designed in 1972 by graphic designer Paul Rand.[187] It was a general replacement for a 13-bar logo, since period photocopiers did not render narrow (as opposed to tall) stripes well. Aside from the logo, IBM used Helvetica as a corporate typeface for 50 years, until it was replaced in 2017 by the custom-designed IBM Plex. IBM has a valuable brand as a result of over 100 years of operations and marketing campaigns. Since 1996, IBM has been the exclusive technology partner for the Masters Tournament, one of the four major championships in professional golf, with IBM creating the first Masters.org (1996), the first course cam (1998), the first iPhone app with live streaming (2009), and the first-ever live 4K Ultra High Definition feed in the United States for a major sporting event (2016).[188] As a result, IBM CEO Ginni Rometty became the third female member of the tournament's governing body, the Augusta National Golf Club.[189] IBM is also a major sponsor in professional tennis, with engagements at the U.S.
Open,Wimbledon, the Australian Open, and the French Open.[190]The company also sponsored theOlympic Gamesfrom 1960 to 2000,[191]and theNational Football Leaguefrom 2003 to 2012.[192]In Japan, IBM employees also have anAmerican footballteam complete with pro stadium, cheerleaders and televised games, competing in the JapaneseX-Leagueas the "Big Blue".[193] In 2004, concerns were raised related to IBM's contribution in its early days to pollution in its original location inEndicott, New York.[194][195]IBM reported its totalCO2e emissions(direct and indirect) for the twelve months ending December 31, 2020 at 621 kilotons (-324 /-34.3% year-on-year).[196]In February 2021, IBM committed to achieve net zero greenhouse gas emissions by the year 2030.[197] In 2018, IBM along with 91 additionalFortune500companies had "paid an effective federal tax rate of 0% or less" as a result of Donald Trump´sTax Cuts and Jobs Act of 2017.[14] It is among theworld's largest employers, with over 297,900 employees worldwide in 2022,[198]with about 160,000 of those beingtech consultants.[169] IBM's leadership programs includeExtreme Blue, an internship program, and theIBM Fellowaward, offered since 1963 based on technical achievement.[199] Many IBM employees have achieved notability outside of work and after leaving IBM. In business, former IBM employees includeApple Inc.CEOTim Cook,[200]formerEDSCEO and politicianRoss Perot,MicrosoftchairmanJohn W. Thompson,SAPco-founderHasso Plattner,GartnerfounderGideon Gartner,Advanced Micro Devices (AMD)CEOLisa Su,[201]Cadence Design SystemsCEOAnirudh Devgan,[202]formerCitizens Financial GroupCEOEllen Alemany, formerYahoo!chairmanAlfred Amoroso, formerAT&TCEOC. Michael Armstrong, formerXerox CorporationCEOsDavid T. KearnsandG. Richard Thoman,[203]formerFair Isaac CorporationCEOMark N. Greene,[204]Citrix Systemsco-founderEd Iacobucci,ASOS.comchairman Brian McBride, formerLenovoCEOSteve Ward, and formerTeradataCEOKenneth Simonds. In government,Patricia Roberts Harrisserved asUnited States Secretary of Housing and Urban Development, the firstAfrican Americanwomanto serve in theUnited States Cabinet.[205]Samuel K. Skinnerserved asU.S. Secretary of Transportationand as theWhite House Chief of Staff. Alumni also includeU.S. SenatorsMack MattinglyandThom Tillis;WisconsingovernorScott Walker;[206]formerU.S. AmbassadorsVincent Obsitnik(Slovakia),Arthur K. Watson(France), andThomas Watson Jr.(Soviet Union); and formerU.S. RepresentativesTodd Akin,[207]Glenn Andrews,Robert Garcia,Katherine Harris,[208]Amo Houghton,Jim Ross Lightfoot,Thomas J. Manton,Donald W. Riegle Jr., andEd Zschau. Other former IBM employees includeNASAastronautMichael J. Massimino,Canadian astronautand formerGovernor GeneralJulie Payette, noted musicianDave Matthews,[209]Harvey Mudd CollegepresidentMaria Klawe,Western Governors Universitypresident emeritusRobert Mendenhall, formerUniversity of KentuckypresidentLee T. Todd Jr., formerUniversity of IowapresidentBruce Harreld,NFLrefereeBill Carollo,[210]formerRangers F.C.chairmanJohn McClelland, and recipient of theNobel Prize in LiteratureJ. M. Coetzee.Thomas Watson Jr.also served as the11th national presidentof theBoy Scouts of America. Five IBM employees have received the Nobel Prize:Leo Esaki, of the Thomas J. 
Watson Research Center in Yorktown Heights, N.Y., in 1973, for work in semiconductors; Gerd Binnig and Heinrich Rohrer, of the Zurich Research Center, in 1986, for the scanning tunneling microscope;[211] and Georg Bednorz and Alex Müller, also of Zurich, in 1987, for research in superconductivity. Six IBM employees have won the Turing Award, including the first female recipient Frances E. Allen.[212] Ten National Medals of Technology (USA) and five National Medals of Science (USA) have been awarded to IBM employees. Employees are often referred to as "IBMers". IBM's culture has evolved significantly over its century of operations. In its early days, a dark (or gray) suit, white shirt, and a "sincere" tie constituted the public uniform for IBM employees.[213] During IBM's management transformation in the 1990s, CEO Louis V. Gerstner Jr. relaxed these codes, normalizing the dress and behavior of IBM employees.[214] The company's culture has also given rise to different plays on the company acronym (IBM), with some saying it stands for "I've Been Moved," based on frequent relocations,[215] others saying it stands for "I'm By Myself" pursuant to a prevalent work-from-anywhere norm,[216] and others saying it stands for "I'm Being Mentored" in reference to the company's open door policy and encouragement of mentoring at all levels.[217] The company has traditionally resisted labor union organizing,[218] although unions represent some IBM workers outside the United States.[219]
https://en.wikipedia.org/wiki/IBM
Incomputer networking, athin client,sometimes calledslim clientorlean client, is a simple (low-performance)computerthat has beenoptimizedforestablishing a remote connectionwith aserver-based computing environment. They are sometimes known asnetwork computers, or in their simplest form aszero clients. The server does most of the work, which can include launchingsoftwareprograms, performingcalculations, andstoring data. This contrasts with arich clientor a conventionalpersonal computer; the former is also intended for working in aclient–server modelbut has significant local processing power, while the latter aims to perform its function mostly locally.[1] Thin clients occur as components of a broader computing infrastructure, where many clients share their computations with a server orserver farm. The server-side infrastructure usescloud computingsoftware such asapplication virtualization, hosted shared desktop (HSD) ordesktop virtualization(VDI). This combination forms what is known as a cloud-based system, where desktop resources are centralized at one or moredata centers. The benefits of centralization are hardware resource optimization, reducedsoftware maintenance, and improvedsecurity. Thin client hardware generally supports commonperipherals, such as keyboards, mice,monitors,jacksfor sound peripherals, and openportsforUSBdevices (e.g., printer, flash drive, webcam). Some thin clients include (legacy)serialorparallel portsto support older devices, such as receipt printers, scales or time clocks. Thin client software typically consists of agraphical user interface(GUI), cloud access agents (e.g.,RDP,ICA,PCoIP), a localweb browser,terminal emulators(in some cases), and a basic set of localutilities. In using cloud-based architecture, the server takes on the processing load of several client sessions, acting as a host for each endpoint device. The client software is narrowly purposed and lightweight; therefore, only the host server or server farm needs to be secured, rather than securing software installed on every endpoint device (although thin clients may still require basic security and strong authentication to prevent unauthorized access). One of the combined benefits of using cloud architecture with thin client desktops is that critical IT assets are centralized for better utilization of resources. Unused memory, bussing lanes, and processor cores within an individual user session, for example, can be leveraged for other active user sessions. The simplicity of thin client hardware and software results in a very lowtotal cost of ownership, but some of these initial savings can be offset by the need for a more robust cloud infrastructure required on the server side. An alternative to traditional server deployment which spreads out infrastructure costs over time is a cloud-based subscription model known asdesktop as a service, which allows IT organizations to outsource the cloud infrastructure to a third party. Thin client computing is known to simplify the desktop endpoints by reducing the client-side software footprint. With a lightweight, read-onlyoperating system(OS), client-side setup and administration is greatly reduced. Cloud access is the primary role of a thin client which eliminates the need for a large suite of local user applications, data storage, and utilities. This architecture shifts most of the software execution burden from the endpoint to the data center. User assets are centralized for greater visibility. 
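To make the division of labour concrete, the following is a minimal, illustrative Python sketch rather than real thin-client software: the host side performs all of the computation, while the "thin" side only forwards the user's input and displays whatever text comes back. The address, port and line protocol are invented for the example.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050   # example endpoint for this sketch

def server():
    """Stands in for the server-based computing environment: it does all the work."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            expr = conn.recv(1024).decode()
            result = str(sum(int(x) for x in expr.split("+")))  # the actual computation
            conn.sendall(result.encode())

def thin_client(expr: str) -> str:
    """The 'thin' side: forward the user's input, display whatever comes back."""
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(expr.encode())
        return sock.recv(1024).decode()

if __name__ == "__main__":
    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)                  # give the listener a moment to start
    print(thin_client("2+40"))       # prints 42; no arithmetic ran on the client side
```

The same shape holds for real deployments: the protocol between the two halves (RDP, ICA, PCoIP and so on) carries input events one way and display updates the other, so the endpoint can stay small and stateless.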
Data recovery and desktop repurposing tasks are also centralized for faster service and greater scalability. While the server must be robust enough to handle several client sessions at once, thin client hardware requirements are minimal compared to those of a traditional PC laptop or desktop. Most thin clients have low-energy processors, flash storage, memory, and no moving parts. This reduces cost and power consumption (and with it heat, noise and vibration), making them affordable to own and easy to replace or deploy. Numerous thin clients also use Raspberry Pis.[2] Since thin clients consist of fewer hardware components than a traditional desktop PC, they can operate in more hostile environments. And because they typically don't store critical data locally, the risk of theft is minimized because there is little or no user data to be compromised. Modern thin clients have come a long way to meet the demands of today's graphical computing needs. New generations of low-energy chipset and central processing unit (CPU) combinations improve processing power and graphical capabilities. To minimize the latency of high-resolution video sent across the network, some host software stacks leverage multimedia redirection (MMR) techniques to offload video rendering to the desktop device. Video codecs are often embedded on the thin client to support these various multimedia formats. Other host software stacks make use of the User Datagram Protocol (UDP) in order to accelerate the fast-changing pixel updates required by modern video content. Thin clients typically support local software agents capable of accepting and decoding UDP. Some of the more graphically intense use cases remain a challenge for thin clients. These use cases might include applications like photo editors, 3D drawing programs, and animation tools. This can be addressed at the host server using dedicated GPU cards, allocation of vGPUs (virtual GPUs), workstation cards, and hardware acceleration cards. These solutions allow IT administrators to provide power-user performance where it is needed on a relatively generic endpoint device such as a thin client. The price of such simplicity is that thin clients sometimes lag behind desktop PCs in terms of extensibility. For example, if a local software utility or set of device drivers is needed to support a locally attached peripheral device (e.g. printer, scanner, biometric security device), the thin client operating system may lack the resources needed to fully integrate the required dependencies (although dependencies can sometimes be added if they can be identified). Modern thin clients address this limitation via port mapping or USB redirection software. However, these methods cannot address all scenarios. Therefore, it is good practice to perform validation tests of locally attached peripherals in advance to ensure compatibility. Further, in large distributed desktop environments, printers are often networked, negating the need for device drivers on every desktop. While running local productivity applications goes beyond the normal scope of a thin client, it is sometimes needed in rare use cases. License restrictions that apply to thin clients can sometimes prevent them from supporting these applications. Local storage constraints may also limit the space available to install large applications or application suites. It is also important to acknowledge that network bandwidth and performance are more critical in any type of cloud-based computing model.
IT organizations must ensure that their network can accommodate the number of users that they need to serve. If demand for bandwidth exceeds network limits, it could result in a major loss of end user productivity. A similar risk exists inside the data center. Servers must be sized correctly in order to deliver adequate performance to end users. In a cloud-based computing model, the servers can also represent a single point of failure risk. If a server fails, end users lose access to all of the resources supported by that server. This risk can be mitigated by building redundancies, fail-over processes, backups, andload balancingutilities into the system. Redundancy provides reliable host availability but it can add cost to smaller user populations that lack scale. Popular providers of thin clients include Chip PC Technologies,Dell(acquiredWyseTechnology in 2012),HP,ClearCube,IGEL Technology,LG,NComputing, Stratodesk,Samsung Electronics, ThinClient Direct, and ZeeTim. Thin clients have their roots inmulti-user systems, traditionallymainframesaccessed by some sort ofcomputer terminal. As computer graphics matured, these terminals transitioned from providing acommand-line interfaceto a fullgraphical user interface, as is common on modern advanced thin clients. The prototypical multi-user environment along these lines,Unix, began to support fully graphicalX terminals, i.e., devices runningdisplay serversoftware, from about 1984. X terminals remained relatively popular even after the arrival of other thin clients in the mid-late 1990s.[citation needed]Modern Unix derivatives likeBSDandLinuxcontinue the tradition of the multi-user, remote display/input session. Typically, X software is not made available on non-X-based thin clients, although no technical reason for this exclusion would prevent it. Windows NTbecame capable of multi-user operations primarily through the efforts ofCitrix Systems, which repackagedWindows NT 3.51as the multi-user operating systemWinFramein 1995, launched in coordination with Wyse Technology's Winterm thin client.Microsoftlicensed this technology back from Citrix and implemented it intoWindows NT 4.0Terminal Server Edition, under a project codenamed "Hydra". Windows NT then became the basis of Windows 2000 and Windows XP. As of 2011[update], Microsoft Windows systems support graphical terminals via theRemote Desktop Servicescomponent. The Wyse Winterm was the first Windows-display-focused thin client (AKA Windows Terminal) to access this environment. The termthin clientwas coined in 1993[3]by Tim Negris, VP of Server Marketing atOracle Corporation, while working with company founderLarry Ellisonon the launch ofOracle 7. At the time, Oracle wished to differentiate their server-oriented software from Microsoft's desktop-oriented products. Ellison subsequently popularized Negris'buzzwordwith frequent use in his speeches and interviews about Oracle products. Ellison would go on to be a founding board member of thin client maker Network Computer, Inc (NCI), later renamed Liberate.[4] The term stuck for several reasons. The earlier term "graphical terminal" had been chosen to distinguish such terminals from text-based terminals, and thus put the emphasis heavily ongraphics– which became obsolete as a distinguishing characteristic in the 1990s as text-only physical terminals themselves became obsolete, and text-only computer systems (a few of which existed in the 1980s) were no longer manufactured. 
The term "thin client" also conveys better what was then viewed as the fundamental difference: thin clients can be designed with less expensive hardware, because they have reduced computational workloads. By the 2010s, thin clients were not the only desktop devices for general purpose computing that were "thin" – in the sense of having a small form factor and being relatively inexpensive. Thenettopform factor for desktop PCs was introduced, and nettops could run full feature Windows or Linux;tablets,tablet-laptop hybridshad also entered the market. However, while there was now little size difference, thin clients retained some key advantages over these competitors, such as not needing a local drive. However, "thin client" can be amisnomerfor slim form-factor computers usingflash memorysuch ascompactflash,SD card, or permanent flash memory as ahard disksubstitute. In 2013, a Citrix employee experimented with aRaspberry Pias a thin client.[5][6]Since then, several manufacturers have introduced their version of Raspberry Pi thin clients.[2]
https://en.wikipedia.org/wiki/Thin_client
Configurable Network Computing or CNC is JD Edwards's (JDE) proprietary client–server architecture and methodology. JD Edwards is now a division of the Oracle Corporation, and Oracle continues to sponsor the ongoing development of the JD Edwards Enterprise Resource Planning (ERP) system. While highly flexible, the CNC architecture is proprietary and, as such, cannot be exported to any other systems. Although the CNC architecture's chief claim to fame, insulation of applications from the underlying database and operating systems, was largely superseded by modern web-based technology, CNC technology continues to be at the heart of both JD Edwards' OneWorld and EnterpriseOne architectures and is planned to play a significant role in Oracle's developing fusion architecture initiative.[1] While a proprietary architecture, CNC is neither an Oracle nor a JDE product offering. The term CNC also refers to the systems analysts who install, maintain, manage and enhance this architecture. CNC is also one of the three technical areas of expertise in the JD Edwards Enterprise Resource Planning (ERP) system, the others being developer/report writer and functional/business analyst. Oracle is continuing to develop the CNC technology and will incorporate key elements of it into its Oracle Fusion project, which will pull together technologies from JDE, PeopleSoft and Oracle's own application software. In the CNC architecture, a company's JD Edwards (JDE) business software applications run transparently insulated from the database where the business data is stored, from the client computer's underlying operating system, and from all other intervening JDE business application servers. In layman's terms, the business programs do not "care" where the data is or which operating system is being used on any of the end user computers. Neither do the application servers on which the business programs run need to directly "know" what database systems are being called on the back end. The CNC architecture keeps track of this through various database tables that point the business applications to the servers that run them, and through database connection tools called database drivers that tell the system where the database servers are and which specific databases to use for lookups, data inserts and data extracts. Because of the key nature of the underlying architecture, a sound CNC infrastructure is critical to the success of a JD Edwards OneWorld installation or implementation. The supported back-end databases include Oracle Database, Microsoft SQL Server, and IBM DB2. The application server can run on Windows platforms, Unix/Linux, and the IBM System i (formerly known as iSeries and AS/400). The web server can be IBM WebSphere (on Windows, Unix/Linux, or System i) or the Oracle WebLogic Server (on Windows or Unix/Linux). In what have traditionally been known as client–server environments, applications must communicate across a combination of different hardware platforms, operating systems, and databases. The CNC architecture uses a layer of software, called middleware, which resides between the platform operating system and the JDE business applications. To accomplish this, JDE provides two types of middleware: JDENET communication middleware and JDEBASE database middleware. The JDEBASE middleware communicates with the database through ODBC, JDBC, or SQL*Net.
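The insulation described above can be pictured as a small dispatch layer: applications name a logical data source, and a mapping table decides which server and driver actually service the request. The Python sketch below is purely illustrative; the mapping entries, driver names and table name are invented and do not represent JDE's actual configuration tables or APIs.

```python
# Illustrative sketch of middleware-style insulation (not actual JDE code).
# A mapping table, loosely analogous to CNC's configuration tables, routes a
# logical data source to a concrete server and database driver.

DATA_SOURCE_MAP = {  # hypothetical entries for the example
    "Business Data": {"server": "dbsrv01", "driver": "oracle"},
    "Control Tables": {"server": "dbsrv02", "driver": "sqlserver"},
}

def run_query(logical_source: str, sql: str) -> str:
    """Dispatch a query to whichever server/driver the mapping points at."""
    entry = DATA_SOURCE_MAP[logical_source]
    # A real middleware layer would open a driver-specific connection here;
    # this sketch just reports where the request would go.
    return f"[{entry['driver']} @ {entry['server']}] {sql}"

if __name__ == "__main__":
    # The application only names a logical source; it never hard-codes a server.
    print(run_query("Business Data", "SELECT * FROM addresses"))
    print(run_query("Control Tables", "SELECT * FROM codes"))
```

The point of the design is that the application code above the dispatch layer stays identical even when a data source is moved to a different server or a different database vendor; only the mapping changes.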
According to the JD Edwards document, Configurable Network Computing Implementation, the CNC architecture is defined as follows: "CNC is the technical architecture for JD Edwards OneWorld and EnterpriseOne software. CNC enables highly configurable, distributed applications to run on a variety of platforms without users or analysts needing to know which platforms or which databases are involved in any given task. CNC insulates the business solution from the underlying technology. Enterprises can grow and adopt new technologies without rewriting applications."[citation needed] "Configurable Network Computing is an application architecture that enables interactive and batch applications, composed of a single code base, to run across a TCP/IP network of multiple server platforms and SQL databases. The applications consist of reusable business functions and associated data that can be configured across the network dynamically. The overall objective for businesses is to provide a future-proof environment that enables them to change organizational structures, business processes, and technologies independently of each other."[2] Another strength of JD Edwards is its multi-foundation architecture. This means one can create separate instances of JDE on different Tools Releases and isolate these releases from each other. This is achieved by creating a separate set of system folders for the other foundation. In the main configuration file of the applications or enterprise server, JDE.ini, the incoming and outgoing ports are changed to values different from the other foundation's, so if one foundation uses port 6015, the alternate could use 6016. Also, the client-side tools release folder is installed on the deployment server, and the system administrator uses the JDE Planner or installation environment to define another foundation. Subsequent full packages can then be pointed at this different foundation. Until the advent of EnterpriseOne applications release 8.12 running on tools release/service pack 8.96, by far the most vulnerable aspect of the CNC technology was that proprietary object specifications had to be copied from the full client up to the applications server in order for a JDE user's data selection and processing options to be run as requested on the server. If those proprietary specifications became corrupted, the corresponding batch application object on the applications server could, in turn, become corrupted. A rebuild and redeploy of the object was the only fix. Likewise, if some intervening process corrupted object specifications as they came down to the client PC, the related object could become corrupted and no longer function correctly. Since applications release E812 and the corresponding tools release/foundational service pack, the proprietary specifications have been replaced with XML-based object properties, which have proven to be more stable and less prone to corruption. In the fall of 2008, Oracle brought out the E900 applications release, and by the fall of 2010 the tools release was up to 8.98.3.3. E900 Update 1, or E901, was the latest release as of fall 2010. While copying the object specifications between the different environments within the same system is easy, the code, once developed in any given system, is not easily portable to other systems. JD Edwards has developed a built-in process named "Product Packaging" to address this issue, but it is slow, not easy to use, and limited in a number of ways.
Because of this, it is mainly used by Oracle itself to deliver software updates, while independent software vendors mostly use third-party tools like Boomerang. Product Packaging supports the export of specifications, and E812 and later releases allow versions to be exported as ZIP files through the actions column in Object Management Workbench.[3] Object specifications are not easily accessible for retrieving data because they are in a proprietary format, so a variety of useful information is hidden from view. Some of this data can be retrieved, interpreted, and displayed by the standard JDE software, but in many cases this may not be enough, not fast enough, or not in the desired format. Many third-party software solutions have been developed to fill this gap.[4] While powerful, the CNC architecture can be enormously complex, making it difficult for anyone but quite senior CNC analysts to maintain. It is not uncommon to see 50 servers in some of the larger implementations, and all of these have to be maintained. While virtualization has helped in some areas, a lot of time has to be invested in keeping all these servers up and operational. There are a number of third-party applications that add functionality and programmability to the JDE Scheduler. They include Cisco Tidal Enterprise Scheduler, a JDE client-based product, and Appworx, a third-party server-based scheduler whose scripting and workflow products have been customized for JDE support, adding to the vanilla scheduler that comes with JDE. AutoDeploy, a third-party bolt-on, fully automates the package build and deploy process for JD Edwards EnterpriseOne, reducing the complexity of pre-project, project, and post-project codebase maintenance. The advent of the World Wide Web and HTML technologies has also insulated users and applications from underlying technologies. The CNC architecture combines this with its own architecture through a Java Applications Server (JAS) architecture. The web clients communicate with the CNC architecture via these JAS servers. In the fall of 2008, Oracle brought out the E900 applications release, and by the summer of 2011 the tools release was up to 8.98.4.3. In the fall of 2009, E900 Update 1 was released. By the summer of 2011, over 2,000 Electronic Software Update (ESU) patches were required to bring the E901 release up to the latest code-current levels. In the fall of 2010, Update 2 was released. In the fall of 2011, Oracle released Applications Release 9.1 and Tools Release 9.1, significantly changing the look and feel of E1. JDENET and JDEBASE middleware are the two elements in the CNC architecture that allow JDE applications to communicate across heterogeneous distributed computing environments. JDENET handles communications at the presentation layer with other internal JDE applications, while JDEBASE is the JDE middleware that provides platform-independent, multi-vendor SQL database access. JDENET is the message-oriented middleware that connects the generated presentation layer of JDE applications with business function components through a standard JDE applications programming interface, or API, called "jdeCallObject". The JDENET middleware, running within the CNC architecture, supports the configuration of business function components for execution in the heterogeneous distributed computing environment that the CNC architecture supports.
JDEBASE is the database middleware that provides platform-independent application programming interfaces (APIs) for multi-vendor database access. These APIs are used in two ways. The first way is by JDE applications that dynamically generate platform-specific Structured Query Language (SQL), depending on the data source request. The second way is as open APIs for advanced C language business function writing. JDE uses these APIs to dynamically generate platform-specific SQL statements. Thus, this middleware provides workstation-to-server and server-to-server database access. To accomplish this, both the legacy JDE OneWorld middleware and the newer JDE EnterpriseOne middleware incorporate support for a variety of third-party database drivers, including ODBC for connection to Microsoft SQL Server, OCI for connection to Oracle Database, and Client Access/400 drivers for connectivity to IBM DB2. Systems analysts who work in this field are known as JDE CNCs.[5] Depending on the size of the company implementing a JDE system, there may be one or more CNCs. In some small companies there is no resident CNC; some of the day-to-day CNC functions, such as security and business program object builds and deployment, are done by a JDE developer on staff, while a third-party CNC is called in for non-routine, critical, and/or high-risk CNC work such as system upgrades and expansion. CNC is one of the three JDE areas of expertise, the others being the JDE developer, who changes code, and the JDE functional analyst, who is the business subject matter and business process expert. In recent years, there has been much discussion in the CNC community about the title "CNC". On many websites, including Oracle's and LinkedIn, people who have worked in the CNC field for many years have proposed a new title to replace the traditional CNC term. Among the most popular are "JD Edwards Systems Architect", "EnterpriseOne Architect", or even simply "JDE Architect". This seems to be driven by the fact that many senior CNCs become involved in planning and implementing the underlying CNC architecture, and that the term CNC conveys no real meaning as to the actual job description. While the discussions go round and round, recruiters either continue to use the CNC job description or dispense with it and refer to the job as "JD Edwards System Administrator". Unfortunately, this latter term is largely misunderstood by recruiters and IT people unfamiliar with the complexities of a JD Edwards implementation, who have told CNCs that, going by the title JDE System Administrator, their responsibilities must be fairly simple and probably mimic those of an email administrator or operating system administrator adding and deleting users and resetting passwords. "JD Edwards Infrastructure Engineer" is often used instead, as it better conveys the functions that go beyond simple administration. Despite the discussions on the utility of the CNC title, the industry keeps returning to it as the only widely accepted way to term the job description. The CNC function entails a number of responsibilities or functions. Large companies may have an entire staff of CNCs, some working on security, others on software change management, which deploys changes in the JDE ERP system through the various stages of development, testing, and production.
Other CNCs will troubleshoot performance issues, others will work on batch process automation, and finally a senior CNC will manage the entire group and, in that capacity, will often function as the chief JDE systems architect. In order to support this architecture, CNC analysts perform a wide variety of tasks. A frequent criticism of the CNC field is that it is too complicated to be learned in any less than 2–3 years, and a number of overlapping functions are involved; an individual CNC may perform some or all of them.[5] Because of the scope of the CNC functionality, the CNC function requires intensive training.[6] Oracle JD Edwards manages the officially required coursework, but many JDE business partners also offer training. A frequent criticism of CNC training is that far too many trainers, and the syllabuses that they employ, are so complicated as to be almost indecipherable to an incoming novice. The training is couched in techno-speak: terms such as path code, environment, and OCM mappings are bandied about with overlapping and circular explanations that leave novices and introductory CNC students quite confused.[citation needed] As of 2000, there was no official certification program. A worldwide organization, Quest Oracle Community, as well as local, statewide and regional JDE user groups, have CNC sub-groups that support JDE CNCs. Among the useful user websites that support JDE CNCs and other users is JDELIST, which has a website at jdelist.com.
https://en.wikipedia.org/wiki/Configurable_Network_Computing
J.D. Edwards World Solution CompanyorJD Edwards, abbreviatedJDE, was anenterprise resource planning(ERP)softwarecompany, whose namesake ERP system is still sold under ownership byOracle Corporation. JDE's products includedWorldforIBMAS/400minicomputers(the users using acomputer terminalorterminal emulator),OneWorldfor their proprietaryConfigurable Network Computingarchitecture (aclient–serverfat client), and JD EdwardsEnterpriseOne(aweb-basedthin client). The company was founded March 1977 inDenver, by Jack Thompson, C.T.P. "Chuck" Hintze, Dan Gregory, andC. Edward "Ed" McVaney. In June 2003, JD Edwards agreed to sell itself toPeopleSoftfor $1.8 billion. Within days, Oracle launched a hostile takeover bid for PeopleSoft sans JD Edwards.[1][2]PeopleSoft went ahead with the JD Edwards acquisition anyway, and in 2005,Oracle Corporationfinally took ownership of the combined JD Edwards-PeopleSoft organization. As of 2020, Oracle continues to sell and actively support both ERP packages, branded now as JD Edwards EnterpriseOne[3]and JD Edwards World.[4] Ed McVaneyoriginally trained as anengineerat theUniversity of Nebraska, and in 1964 was employed by Western Electric, then byPeat Marwick, and moved toDenver, in 1968, and later became a partner at Alexander Grant where he hired Jack Thompson and Dan Gregory. Around that time he was realizing that, in his words, "The culture of a public accounting firm is the antithesis of developing software. The idea of spending time on something that you’re not getting paid for—software development—they just could not stomach that."[5]McVaney felt that accounting clients did not understand what was required for software development, and decided to start his own firm. "JD Edwards" was founded in 1977 by Jack Thompson, Dan Gregory, and Ed McVaney; the company's name is drawn from the initials "J" for Jack, "D" for Dan, and "Edwards" for Ed. McVaney took a salary cut from $44,000 to $36,000 to ensure initial funding. Start-up clients included McCoy Sales, a wholesale distribution company in Denver, and Cincinnati Milacron, a maker of machine tools. The business received a $75,000 contract to develop wholesale distribution system software and a $50,000 contract with the Colorado Highway Department to develop governmental and construction cost accounting systems. The first international client wasShell Oil Company. Shell Oil implemented JD Edwards inCanadaand then inCameroon. Gregory flew to Shell Oil inDouala, Cameroon to install the company's first international, multi-national, multi-currency client software system. As the majority of JD Edwards's customers weremedium-sized companies, clients did not have large-scale software implementations. There was a basic business need for all accounting to be tightly integrated. As McVaney would explain in 2002, integrated systems were created precisely because "you can’t go into a moderate-sized company and just put in a payroll. You have to put in a payroll and job cost, general ledger, inventory, fixed assets and the whole thing.SAPhad the same advantage that JD Edwards had because we worked on smaller companies, we were forced to see the whole broad picture."[5]This requirement was relevant to both JDE clients in the US and Europe and their European competitor SAP, whose typical clients were much smaller than the AmericanFortune 500firms. McVaney and his company developed what would be calledEnterprise Resource Planning(ERP) software in response to that business requirement. 
The software ultimately sold was namedJD Edwards WorldSoftware, popularly calledWorld. Development began usingSystem/34and/36minicomputers, changing course in the mid-1980s to theSystem/38, later switching to theAS/400platform when it became available. The company's initial focus was on developing theaccountingsoftware needed for their clients. World was server-centric as well as multi-user; the users would access the system using one of severalIBMcomputer terminalsor "green-screens". (Later on, users would runterminal emulatorsoftware on their personal computers). As anERP system, World comprised the three basic areas of expertise:functional/business analyst,programmer/software developer, andCNC/system administration. By late 1996, JD Edwards delivered to its customers the result of a major corporate initiative: the software was now ported to platform-independentclient–serversystems. It was brandedJD Edwards OneWorld, an entirely new product with agraphical user interfaceand adistributed computingmodel replacing the old server-centric model. The architecture JD Edwards had developed for this newer technology, calledConfigurable Network Computingor CNC, transparently shielded business applications from the servers that ran those same applications, the databases in which the data were stored, and the underlying operating system and hardware. By first quarter 1998, JD Edwards had 26 OneWorld customers and was moving its medium-sized customers to the new client–server flavor of ERP. By second quarter 1998, JDE had 48 customers,[6]and by 2001, the company had more than 600 customers using OneWorld, a fourfold increase over 2000.[7] The company became publicly listed on September 24, 1997, with vice-president Doug Massingill being promoted tochief executive officer, at an initial price of $23 per share, trading onNASDAQunder the symbol JDEC. By 1998, JD Edwards' revenue was more than $934 million and McVaney decided to retire. Within a year of the release of OneWorld, customers and industry analysts were discussing serious reliability, unpredictability and other bug-related issues. In user group meetings, these issues were raised with JDE management. So serious were these major quality issues with OneWorld that customers began to raise the possibility of class-action lawsuits, leading to McVaney's return from retirement as CEO. At an internal meeting in 2000, McVaney said he had decided to "wait however long it took to have OneWorld 100% reliable" and had thus delayed the release of a new version of OneWorld because he "wasn't going to let it go out on the street until it was ready for prime time." McVaney also encouraged customer feedback by supporting an independent JD Edwards user group calledQuest International. After delaying the upgrade for one year and refusing all requests by marketing for what he felt was a premature release, in the fall of 2000 JD Edwards released version B7333, now rebranded as OneWorld Xe. Despite press skepticism, Xe proved to be the most stable release to date and went a long way toward restoring customer confidence. McVaney retired again in January 2002, although remaining a director, and Robert Dutkowsky fromTeradynewas appointed as the new president and CEO. After the release of Xe, the product began to go through more broad change and several new versions. A newweb-based client, in which the user accesses the JD Edwards software through their web browser, was introduced in 2001. 
This web-based client was robust enough for customer use and was given application version number 8.10 in 2005. Initial issues with release 8.11 in 2005 led to a quick service pack to version 8.11 SP1, salvaging the reputation of that product. By 2006, version 8.12 was announced. Throughout the application releases, new releases of system/foundation code called Tools Releases were announced, moving from Tools Release versions 8.94 to 8.95. Tools Release 8.96, along with the application's upgrade to version 8.12, saw the replacement of the older, often unstable proprietary object specifications (also called "specs") with a new XML-based system, which proved to be much more reliable. Tools Release 8.97 shipped a new web service layer allowing the JD Edwards software to communicate with third-party systems. In June 2003, the JD Edwards board agreed to an offer in which PeopleSoft, a former competitor of JD Edwards, would acquire JD Edwards.[1] The takeover was completed in July. OneWorld was added to PeopleSoft's software line, along with PeopleSoft's flagship product Enterprise, and was renamed JD Edwards EnterpriseOne.[2] Within days of the PeopleSoft announcement, Oracle Corporation mounted a hostile takeover bid for PeopleSoft. Although the first attempts to purchase the company were rebuffed by the PeopleSoft board of directors, by December 2004 the board decided to accept Oracle's offer. The final purchase went through in January 2005; Oracle now owned both PeopleSoft and JD Edwards. Most JD Edwards customers, employees, and industry analysts predicted Oracle would kill the JD Edwards products. However, Oracle saw a position for JDE in the medium-sized company space that was not filled by either its e-Business Suite or its newly acquired PeopleSoft Enterprise product. Oracle's JD Edwards products are known as JD Edwards EnterpriseOne and JD Edwards World. Oracle announced that JD Edwards support would continue until at least 2033.[8] Support for the older releases such as the Xe product was to expire by 2013, spurring the acceptance of upgrades to newer application releases. By 2015, the latest offering of EnterpriseOne was application version 9.2, released in October 2015.[9] The latest version of World (now with a web-based interface) was version A9.4, released in April 2015.[10] Shortly after Oracle acquired PeopleSoft and JD Edwards in 2005, Oracle announced the development of a new product called Oracle Fusion Applications.[11] Fusion was designed to co-exist with or replace JD Edwards EnterpriseOne and World, as well as the Oracle E-Business Applications Suite and other products acquired by Oracle, and was finally released in September 2010.[12] Despite the release of Fusion apps, JD Edwards EnterpriseOne and World are still sold and supported by Oracle and run numerous businesses worldwide.
https://en.wikipedia.org/wiki/JD_Edwards
In computer science, a heartbeat is a periodic signal generated by hardware or software to indicate normal operation or to synchronize other parts of a computer system.[1][2] The heartbeat mechanism is one of the common techniques used in mission critical systems to provide high availability and fault tolerance of network services: it detects network or system failures of nodes or daemons that belong to a network cluster—administered by a master server—so that the system can automatically adapt and rebalance itself, using the remaining redundant nodes in the cluster to take over the load of failed nodes and keep services available.[3][1] Usually a heartbeat message is sent between machines at a regular interval on the order of seconds.[4] If the endpoint does not receive a heartbeat for a time—usually a few heartbeat intervals—the machine that should have sent the heartbeat is assumed to have failed.[5] Heartbeat messages are typically sent non-stop on a periodic or recurring basis from the originator's start-up until the originator's shutdown. When the destination identifies a lack of heartbeat messages during an anticipated arrival period, the destination may determine that the originator has failed, shut down, or is generally no longer available. A heartbeat protocol is generally used to negotiate and monitor the availability of a resource, such as a floating IP address, and the procedure involves sending network packets to all the nodes in the cluster to verify their reachability.[3] Typically, when a heartbeat starts on a machine, it will perform an election process with other machines on the heartbeat network to determine which machine, if any, owns the resource. On heartbeat networks of more than two machines, it is important to take into account partitioning, where two halves of the network could be functioning but not able to communicate with each other. In a situation such as this, it is important that the resource is only owned by one machine, not by one machine in each partition. As a heartbeat is intended to be used to indicate the health of a machine, it is important that the heartbeat protocol and the transport that it runs on are as reliable as possible. Causing a failover because of a false alarm may, depending on the resource, be highly undesirable. It is also important to react quickly to an actual failure, which further underlines the need for reliable heartbeat messages. For this reason, it is often desirable to have a heartbeat running over more than one transport; for instance, an Ethernet segment using UDP/IP, and a serial link.
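A minimal sketch of this timing rule, assuming an in-process simulation rather than real network traffic: a monitor records each heartbeat it sees and declares the peer failed once a few intervals pass with none. The interval and missed-beat threshold below are arbitrary example values.

```python
import time

HEARTBEAT_INTERVAL = 1.0   # seconds between heartbeats (example value)
MISSED_LIMIT = 3           # intervals to wait before declaring failure

class HeartbeatMonitor:
    """Tracks the last heartbeat time and decides whether the peer is alive."""

    def __init__(self):
        self.last_seen = time.monotonic()

    def record_heartbeat(self):
        self.last_seen = time.monotonic()

    def peer_alive(self) -> bool:
        # The peer is presumed failed once a few intervals pass with no heartbeat.
        return (time.monotonic() - self.last_seen) < MISSED_LIMIT * HEARTBEAT_INTERVAL

if __name__ == "__main__":
    monitor = HeartbeatMonitor()
    for _ in range(2):                 # the peer sends two heartbeats...
        time.sleep(HEARTBEAT_INTERVAL)
        monitor.record_heartbeat()
        print("heartbeat received, alive:", monitor.peer_alive())
    time.sleep(MISSED_LIMIT * HEARTBEAT_INTERVAL + 0.1)   # ...then goes silent
    print("after silence, alive:", monitor.peer_alive())  # expected: False
```

Choosing the threshold is the usual trade-off described above: too few missed intervals and a brief network hiccup triggers a false failover, too many and reaction to a real failure is slow.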
A "cluster membership" of a node is a property of network reachability: if the master can communicate with the node x, it is considered a member of the cluster, and "dead" otherwise.[6] A heartbeat program as a whole consists of various subsystems.[7] Heartbeat messages are sent in a periodic manner through techniques such as broadcast, or multicast in larger clusters.[6] Since the cluster membership managers (CMs) have transactions across the cluster, the most common pattern is to send heartbeat messages to all the nodes and "await" responses in a non-blocking fashion.[8] Since the heartbeat or keepalive messages make up the overwhelming majority of non-application-related cluster control messages—which also go to all the members of the cluster—major critical systems also include non-IP transports such as serial links to deliver heartbeats.[9] Every CM on the master server maintains a finite-state machine with three states for each node it administers: Down, Init, and Alive.[10] Whenever a new node joins, the CM changes the state of the node from Down to Init and broadcasts a "boot-up message", which the node receives before executing a set of start-up procedures. The node then responds with an acknowledgment message; the CM then includes the node as a member of the cluster and transitions its state from Init to Alive. Every node in the Alive state receives a periodic broadcast heartbeat message from the heartbeat subsystem (HS) and is expected to send an acknowledgment message back within a timeout range. If the CM does not receive an acknowledgment heartbeat message back, the node is considered unavailable, and the CM transitions that node's state from Alive to Down.[11] The procedures or scripts to run, and the actions to take between each state transition, are an implementation detail of the system. A heartbeat network is a private network which is shared only by the nodes in the cluster and is not accessible from outside the cluster. It is used by cluster nodes in order to monitor each node's status and to exchange the messages necessary for maintaining the operation of the cluster. The heartbeat method uses the FIFO nature of the signals sent across the network. By making sure that all messages have been received, the system ensures that events can be properly ordered.[12] In this communications protocol, every node sends back a message in a given interval, say delta, in effect confirming that it is alive and has a heartbeat. These messages are viewed as control messages that help determine that the network includes no delayed messages. A receiver node, called a "sync", maintains an ordered list of the received messages. Once a message with a timestamp later than the given marked time is received from every node, the system determines that all messages have been received, since the FIFO property ensures that the messages are ordered.[13] In general, it is difficult to select a delta that is optimal for all applications. If delta is too small, it requires too much overhead, and if it is too large, it results in performance degradation as everything waits for the next heartbeat signal.[14]
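The Down/Init/Alive bookkeeping described above can be sketched as follows, assuming a single in-process dictionary of node states rather than a real distributed implementation; only the transitions named in the text are modelled.

```python
from enum import Enum

class NodeState(Enum):
    DOWN = "Down"
    INIT = "Init"
    ALIVE = "Alive"

class ClusterMembership:
    """Toy membership manager tracking one state per administered node."""

    def __init__(self):
        self.states = {}

    def node_joins(self, node: str):
        # Down -> Init: a boot-up message would be broadcast here.
        self.states[node] = NodeState.INIT

    def ack_received(self, node: str):
        # Init -> Alive once the node acknowledges its start-up procedures.
        if self.states.get(node) == NodeState.INIT:
            self.states[node] = NodeState.ALIVE

    def heartbeat_timeout(self, node: str):
        # Alive -> Down when no acknowledgment arrives within the timeout.
        if self.states.get(node) == NodeState.ALIVE:
            self.states[node] = NodeState.DOWN

if __name__ == "__main__":
    cm = ClusterMembership()
    cm.node_joins("node1")
    cm.ack_received("node1")
    print(cm.states["node1"])      # NodeState.ALIVE
    cm.heartbeat_timeout("node1")
    print(cm.states["node1"])      # NodeState.DOWN
```

The actions attached to each transition (running recovery scripts, rebalancing load, and so on) are, as the text notes, an implementation detail of the particular system.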
https://en.wikipedia.org/wiki/Heartbeat_private_network
High-availability clusters(also known asHA clusters,fail-over clusters) are groups ofcomputersthat supportserverapplicationsthat can be reliably utilized witha minimum amount of down-time. They operate by usinghigh availability softwareto harnessredundantcomputers in groups orclustersthat provide continued service when system components fail. Without clustering, if a server running a particular application crashes, the application will be unavailable until the crashed server is fixed. HA clustering remedies this situation by detecting hardware/software faults, and immediately restarting the application on another system without requiring administrative intervention, a process known asfailover. As part of this process, clustering software may configure the node before starting the application on it. For example, appropriate file systems may need to be imported and mounted, network hardware may have to be configured, and some supporting applications may need to be running as well.[1] HA clusters are often used for criticaldatabases, file sharing on a network, business applications, and customer services such aselectronic commercewebsites. HA cluster implementations attempt to build redundancy into a cluster to eliminate single points of failure, including multiple network connections and data storage which is redundantly connected viastorage area networks. HA clusters usually use aheartbeatprivate network connection which is used to monitor the health and status of each node in the cluster. One subtle but serious condition all clustering software must be able to handle issplit-brain, which occurs when all of the private links go down simultaneously, but the cluster nodes are still running. If that happens, each node in the cluster may mistakenly decide that every other node has gone down and attempt to start services that other nodes are still running. Having duplicate instances of services may cause data corruption on the shared storage. HA clusters often also usequorumwitness storage (local or cloud) to avoid this scenario. A witness device cannot be shared between two halves of a split cluster, so in the event that all cluster members cannot communicate with each other (e.g., failed heartbeat), if a member cannot access the witness, it cannot become active. Not every application can run in a high-availability cluster environment, and the necessary design decisions need to be made early in the software design phase. In order to run in a high-availability cluster environment, an application must satisfy at least the following technical requirements, the last two of which are critical to its reliable function in a cluster, and are the most difficult to satisfy fully: The most common size for an HA cluster is a two-node cluster, since that is the minimum required to provide redundancy, but many clusters consist of many more, sometimes dozens of nodes. The attached diagram is a good overview of a classic HA cluster, with the caveat that it does not make any mention of quorum/witness functionality (see above). Such configurations can sometimes be categorized into one of the following models: The termslogical hostorcluster logical hostis used to describe thenetwork addressthat is used to access services provided by the cluster. This logical host identity is not tied to a single cluster node. It is actually a network address/hostname that is linked with the service(s) provided by the cluster. 
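The quorum-witness rule described above can be reduced to a small decision function, assuming that reachability of peers and of the witness has already been determined; real cluster managers combine this with fencing, retries and configurable policies.

```python
def may_become_active(reachable_nodes: int, total_nodes: int,
                      witness_reachable: bool) -> bool:
    """Decide whether this partition may run services after a heartbeat failure.

    reachable_nodes counts this node plus every peer it can still contact.
    A strict majority wins outright; an exact half (e.g. one side of a split
    two-node cluster) may only activate if it also holds the witness, which
    at most one side of the split can reach.
    """
    if 2 * reachable_nodes > total_nodes:
        return True
    if 2 * reachable_nodes == total_nodes:
        return witness_reachable
    return False

if __name__ == "__main__":
    # Two-node cluster whose heartbeat link has failed: each side sees only itself.
    print(may_become_active(1, 2, witness_reachable=True))   # True: this side activates
    print(may_become_active(1, 2, witness_reachable=False))  # False: this side stays passive
```

Because the witness can be claimed by at most one partition, the two halves of a split cluster can never both return True, which is exactly the split-brain protection the witness is meant to provide.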
If a cluster node with a running database goes down, the database will be restarted on another cluster node. HA clusters usually use all available techniques to make the individual systems and shared infrastructure as reliable as possible. These include: These features help minimize the chances that the clustering failover between systems will be required. In such a failover, the service provided is unavailable for at least a little while, so measures to avoid failover are preferred. Systems that handle failures in distributed computing have different strategies to cure a failure. For instance, theApache CassandraAPIHectordefines three ways to configure a failover:
https://en.wikipedia.org/wiki/High-availability_cluster
Indistributed computing, asingle system image(SSI) cluster is aclusterof machines that appears to be one single system.[1][2][3]The concept is often considered synonymous with that of adistributed operating system,[4][5]but a single image may be presented for more limited purposes, justjob schedulingfor instance, which may be achieved by means of an additional layer of software over conventionaloperating system imagesrunning on eachnode.[6]The interest in SSI clusters is based on the perception that they may be simpler to use and administer than more specialized clusters. Different SSI systems may provide a more or less completeillusionof a single system. Different SSI systems may, depending on their intended usage, provide some subset of these features. Many SSI systems provideprocess migration.[7]Processes may start on onenodeand be moved to another node, possibly forresource balancingor administrative reasons.[note 1]As processes are moved from one node to another, other associated resources (for exampleIPCresources) may be moved with them. Some SSI systems allowcheckpointingof running processes, allowing their current state to be saved and reloaded at a later date.[note 2]Checkpointing can be seen as related to migration, as migrating a process from one node to another can be implemented by first checkpointing the process, then restarting it on another node. Alternatively checkpointing can be considered asmigration to disk. Some SSI systems provide the illusion that all processes are running on the same machine - the process management tools (e.g. "ps", "kill" onUnixlike systems) operate on all processes in the cluster. Most SSI systems provide a single view of the file system. This may be achieved by a simpleNFSserver, shared disk devices or even file replication. The advantage of a single root view is that processes may be run on any available node and access needed files with no special precautions. If the cluster implements process migration a single root view enables direct accesses to the files from the node where the process is currently running. Some SSI systems provide a way of "breaking the illusion", having some node-specific files even in a single root.HPTruClusterprovides a "context dependent symbolic link" (CDSL) which points to different files depending on the node that accesses it.HPVMSclusterprovides a search list logical name with node specific files occluding cluster shared files where necessary. This capability may be necessary to deal withheterogeneousclusters, where not all nodes have the same configuration. In more complex configurations such as multiple nodes of multiple architectures over multiple sites, several local disks may combine to form the logical single root. Some SSI systems allow all nodes to access the I/O devices (e.g. tapes, disks, serial lines and so on) of other nodes. There may be some restrictions on the kinds of accesses allowed (For example,OpenSSIcan't mount disk devices from one node on another node). Some SSI systems allow processes on different nodes to communicate usinginter-process communicationsmechanisms as if they were running on the same machine. On some SSI systems this can even includeshared memory(can be emulated in software withdistributed shared memory). In most cases inter-node IPC will be slower than IPC on the same machine, possibly drastically slower for shared memory. Some SSI clusters include special hardware to reduce this slowdown. 
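The checkpoint/restart idea described earlier can be illustrated with a deliberately simplified sketch: rather than freezing a real operating-system process, it saves and restores an explicit state dictionary, which is the general shape of application-level checkpointing. The file name and state fields are invented for the example.

```python
import json

CHECKPOINT_FILE = "worker.ckpt"   # example path

def run(state, steps):
    """A restartable loop: all progress lives in the explicit state dict."""
    while state["i"] < steps:
        state["total"] += state["i"]
        state["i"] += 1
    return state

def checkpoint(state):
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump(state, f)       # "migration to disk"

def restore():
    with open(CHECKPOINT_FILE) as f:
        return json.load(f)       # could equally be read on a different node

if __name__ == "__main__":
    s = run({"i": 0, "total": 0}, steps=500)      # run part of the work
    checkpoint(s)                                 # save it...
    resumed = restore()                           # ...and resume, here or on another node
    print(run(resumed, steps=1000)["total"])      # same total as an uninterrupted run
```

Migrating a process then amounts to writing the checkpoint on one node and restoring it on another, which is why the text describes checkpointing as migration to disk; SSI systems that checkpoint whole operating-system processes must additionally capture open files, sockets and other kernel state.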
Some SSI systems provide a "cluster IPaddress", a single address visible from outside the cluster that can be used to contact the cluster as if it were one machine. This can be used for load balancing inbound calls to the cluster, directing them to lightly loaded nodes, or for redundancy, moving the cluster address from one machine to another as nodes join or leave the cluster.[note 3] Examples here vary from commercial platforms with scaling capabilities, to packages/frameworks for creating distributed systems, as well as those that actually implement a single system image.
https://en.wikipedia.org/wiki/Single_system_image
Inconcurrent programming, an operation (or set of operations) islinearizableif it consists of an ordered list ofinvocationand responseevents, that may be extended by adding response events such that: Informally, this means that the unmodified list of events is linearizableif and only ifits invocations were serializable, but some of the responses of the serial schedule have yet to return.[1] In a concurrent system, processes can access a sharedobjectat the same time. Because multiple processes are accessing a single object, a situation may arise in which while one process is accessing the object, another process changes its contents. Making a system linearizable is one solution to this problem. In a linearizable system, although operations overlap on a shared object, each operation appears to take place instantaneously. Linearizability is a strong correctness condition, which constrains what outputs are possible when an object is accessed by multiple processes concurrently. It is a safety property which ensures that operations do not complete unexpectedly or unpredictably. If a system is linearizable it allows a programmer to reason about the system.[2] Linearizability was first introduced as aconsistency modelbyHerlihyandWingin 1987. It encompassed more restrictive definitions of atomic, such as "an atomic operation is one which cannot be (or is not) interrupted by concurrent operations", which are usually vague about when an operation is considered to begin and end. An atomic object can be understood immediately and completely from its sequential definition, as a set of operations run in parallel which always appear to occur one after the other; no inconsistencies may emerge. Specifically, linearizability guarantees that theinvariantsof a system areobservedandpreservedby all operations: if all operations individually preserve an invariant, the system as a whole will. A concurrent system consists of a collection of processes communicating through shared data structures or objects. Linearizability is important in these concurrent systems where objects may be accessed by multiple processes at the same time and a programmer needs to be able to reason about the expected results. An execution of a concurrent system results in ahistory, an ordered sequence of completed operations. Ahistoryis a sequence ofinvocationsandresponsesmade of an object by a set ofthreadsor processes. An invocation can be thought of as the start of an operation, and the response being the signaled end of that operation. Each invocation of a function will have a subsequent response. This can be used to model any use of an object. Suppose, for example, that two threads, A and B, both attempt to grab a lock, backing off if it's already taken. This would be modeled as both threads invoking the lock operation, then both threads receiving a response, one successful, one not. Asequentialhistory is one in which all invocations have immediate responses; that is the invocation and response are considered to take place instantaneously. A sequential history should be trivial to reason about, as it has no real concurrency; the previous example was not sequential, and thus is hard to reason about. This is where linearizability comes in. A history islinearizableif there is a linear orderσ{\displaystyle \sigma }of the completed operations such that: In other words: Note that the first two bullet points here matchserializability: the operations appear to happen in some order. 
It is the last point which is unique to linearizability, and is thus the major contribution of Herlihy and Wing.[1] Consider two ways of reordering the locking example above. Reordering B's invocation after A's response yields a sequential history. This is easy to reason about, as all operations now happen in an obvious order. However, it does not match the sequential definition of the object (it doesn't match the semantics of the program): A should have successfully obtained the lock, and B should have subsequently aborted. This is another correct sequential history. It is also a linearization since it matches the sequential definition. Note that the definition of linearizability only precludes responses that precede invocations from being reordered; since the original history had no responses before invocations, they can be reordered. Hence the original history is indeed linearizable. An object (as opposed to a history) is linearizable if all valid histories of its use can be linearized. This is a much harder assertion to prove. Consider the following history, again of two objects interacting with a lock: This history is not valid because there is a point at which both A and B hold the lock; moreover, it cannot be reordered to a valid sequential history without violating the ordering rule. Therefore, it is not linearizable. However, under serializability, B's unlock operation may be moved tobeforeA's original lock, which is a valid history (assuming the object begins the history in a locked state): This reordering is sensible provided there is no alternative means of communicating between A and B. Linearizability is better when considering individual objects separately, as the reordering restrictions ensure that multiple linearizable objects are, considered as a whole, still linearizable. This definition of linearizability is equivalent to the following: This alternative is usually much easier to prove. It is also much easier to reason about as a user, largely due to its intuitiveness. This property of occurring instantaneously, or indivisibly, leads to the use of the termatomicas an alternative to the longer "linearizable".[1] In the examples below, the linearization point of the counter built on compare-and-swap is the linearization point of the first (and only) successful compare-and-swap update. The counter built using locking can be considered to linearize at any moment while the locks are held, since any potentially conflicting operations are excluded from running during that period. Processors haveinstructionsthat can be used to implementlockingandlock-free and wait-free algorithms. The ability to temporarily inhibitinterrupts, ensuring that the currently runningprocesscannot becontext switched, also suffices on auniprocessor. These instructions are used directly by compiler and operating system writers but are also abstracted and exposed as bytecodes and library functions in higher-level languages: Mostprocessorsinclude store operations that are not atomic with respect to memory. These include multiple-word stores and string operations. Should a high priority interrupt occur when a portion of the store is complete, the operation must be completed when the interrupt level is returned. The routine that processes the interrupt must not modify the memory being changed. It is important to take this into account when writing interrupt routines. 
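As an illustration (not part of the original article), the Java sketch below shows the same multiple-word hazard at the language level: the Java memory model permits a plain (non-volatile) long write to be performed as two 32-bit halves, so a concurrent reader may observe a mixed-up value on some JVMs, whereas an AtomicLong read or write is guaranteed to be atomic. Class and field names are illustrative only.

import java.util.concurrent.atomic.AtomicLong;

public class TornReadSketch {
    // Plain (non-volatile) long: the JLS allows a write to be split into two
    // 32-bit halves, so a concurrent reader may observe a "mixed-up" value on
    // some (typically 32-bit) JVMs. This mirrors the multiple-word store hazard
    // described above.
    static long plainCounter = 0L;

    // AtomicLong exposes the processor's atomic primitives as a library class;
    // its reads and writes never tear.
    static final AtomicLong atomicCounter = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        long a = 0x00000000FFFFFFFFL;
        long b = 0xFFFFFFFF00000000L;

        Thread writer = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                plainCounter = (i % 2 == 0) ? a : b;     // may tear
                atomicCounter.set((i % 2 == 0) ? a : b); // never tears
            }
        });
        Thread reader = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                long seen = plainCounter;
                if (seen != a && seen != b && seen != 0L) {
                    System.out.println("Torn read observed: " + Long.toHexString(seen));
                }
            }
        });
        writer.start(); reader.start();
        writer.join(); reader.join();
    }
}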
When there are multiple instructions which must be completed without interruption, a CPU instruction which temporarily disables interrupts is used. This must be kept to only a few instructions and the interrupts must be re-enabled to avoid unacceptable response time to interrupts or even losing interrupts. This mechanism is not sufficient in a multi-processor environment since each CPU can interfere with the process regardless of whether interrupts occur or not. Further, in the presence of aninstruction pipeline, uninterruptible operations present a security risk, as they can potentially be chained in aninfinite loopto create adenial of service attack, as in theCyrix coma bug. TheC standardandSUSv3providesig_atomic_tfor simple atomic reads and writes; incrementing or decrementing is not guaranteed to be atomic.[3]More complex atomic operations are available inC11, which providesstdatomic.h. Compilers use the hardware features or more complex methods to implement the operations; an example is libatomic of GCC. TheARM instruction setprovidesLDREXandSTREXinstructions which can be used to implement atomic memory access by usingexclusive monitorsimplemented in the processor to track memory accesses for a specific address.[4]However, if acontext switchoccurs between calls toLDREXandSTREX, the documentation notes thatSTREXwill fail, indicating the operation should be retried. In the case of 64-bit ARMv8-A architecture, it providesLDXRandSTXRinstructions for byte, half-word, word, and double-word size.[5] The easiest way to achieve linearizability is running groups of primitive operations in acritical section. Strictly, independent operations can then be carefully permitted to overlap their critical sections, provided this does not violate linearizability. Such an approach must balance the cost of large numbers oflocksagainst the benefits of increased parallelism. Another approach, favoured by researchers (but not yet widely used in the software industry), is to design a linearizable object using the native atomic primitives provided by the hardware. This has the potential to maximise available parallelism and minimise synchronisation costs, but requires mathematical proofs which show that the objects behave correctly. A promising hybrid of these two is to provide atransactional memoryabstraction. As with critical sections, the user marks sequential code that must be run in isolation from other threads. The implementation then ensures the code executes atomically. This style of abstraction is common when interacting with databases; for instance, when using theSpring Framework, annotating a method with @Transactional will ensure all enclosed database interactions occur in a singledatabase transaction. Transactional memory goes a step further, ensuring that all memory interactions occur atomically. As with database transactions, issues arise regarding composition of transactions, especially database and in-memory transactions. A common theme when designing linearizable objects is to provide an all-or-nothing interface: either an operation succeeds completely, or it fails and does nothing. (ACIDdatabases refer to this principle asatomicity.) If the operation fails (usually due to concurrent operations), the user must retry, usually performing a different operation. For example: To demonstrate the power and necessity of linearizability we will consider a simple counter which different processes can increment. We would like to implement a counter object which multiple processes can access. 
Many common systems make use of counters to keep track of the number of times an event has occurred. The counter object can be accessed by multiple processes and has two available operations. We will attempt to implement this counter object usingshared registers. Our first attempt which we will see is non-linearizable has the following implementation using one shared register among the processes. The naive, non-atomic implementation: Increment: Read: Read register R This simple implementation is not linearizable, as is demonstrated by the following example. Imagine two processes are running accessing the single counter object initialized to have value 0: The second process is finished running and the first process continues running from where it left off: In the above example, two processes invoked an increment command, however the value of the object only increased from 0 to 1, instead of 2 as it should have. One of the increment operations was lost as a result of the system not being linearizable. The above example shows the need for carefully thinking through implementations of data structures and how linearizability can have an effect on the correctness of the system. To implement a linearizable or atomic counter object we will modify our previous implementation soeach process Piwill use its own register Ri Each process increments and reads according to the following algorithm: Increment: Read: This implementation solves the problem with our original implementation. In this system the increment operations are linearized at the write step. The linearization point of an increment operation is when that operation writes the new value in its register Ri.The read operations are linearized to a point in the system when the value returned by the read is equal to the sum of all the values stored in each register Ri. This is a trivial example. In a real system, the operations can be more complex and the errors introduced extremely subtle. For example, reading a64-bitvalue from memory may actually be implemented as twosequentialreads of two32-bitmemory locations. If a process has only read the first 32 bits, and before it reads the second 32 bits the value in memory gets changed, it will have neither the original value nor the new value but a mixed-up value. Furthermore, the specific order in which the processes run can change the results, making such an error difficult to detect, reproduce anddebug. Most systems provide an atomic compare-and-swap instruction that reads from a memory location, compares the value with an "expected" one provided by the user, and writes out a "new" value if the two match, returning whether the update succeeded. We can use this to fix the non-atomic counter algorithm as follows: Since the compare-and-swap occurs (or appears to occur) instantaneously, if another process updates the location while we are in-progress, the compare-and-swap is guaranteed to fail. Many systems provide an atomic fetch-and-increment instruction that reads from a memory location, unconditionally writes a new value (the old value plus one), and returns the old value. We can use this to fix the non-atomic counter algorithm as follows: Using fetch-and increment is always better (requires fewer memory references) for some algorithms—such as the one shown here—than compare-and-swap,[6]even though Herlihy earlier proved that compare-and-swap is better for certain other algorithms that can't be implemented at all using only fetch-and-increment. 
SoCPU designswith both fetch-and-increment and compare-and-swap (or equivalent instructions) may be a better choice than ones with only one or the other.[6] Another approach is to turn the naive algorithm into acritical section, preventing other threads from disrupting it, using alock. Once again fixing the non-atomic counter algorithm: This strategy works as expected; the lock prevents other threads from updating the value until it is released. However, when compared with direct use of atomic operations, it can suffer from significant overhead due to lock contention. To improve program performance, it may therefore be a good idea to replace simple critical sections with atomic operations fornon-blocking synchronization(as we have just done for the counter with compare-and-swap and fetch-and-increment), instead of the other way around, but unfortunately a significant improvement is not guaranteed and lock-free algorithms can easily become too complicated to be worth the effort.
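The counter variants discussed above can be written compactly against java.util.concurrent.atomic; the following minimal sketch (class and method names are illustrative, not taken from the article) shows the non-atomic read-modify-write, the compare-and-swap retry loop, fetch-and-increment, and the lock-based critical section.

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class Counters {
    // 1. Non-linearizable: the read and the write are separate steps, so a
    //    concurrent increment can be lost, as in the example above.
    static int naive = 0;
    static void naiveIncrement() { naive = naive + 1; }

    static final AtomicInteger value = new AtomicInteger();

    // 2. Compare-and-swap: retry until the swap succeeds; the successful
    //    compareAndSet is the linearization point.
    static void casIncrement() {
        while (true) {
            int old = value.get();
            if (value.compareAndSet(old, old + 1)) return;
        }
    }

    // 3. Fetch-and-increment: a single unconditional atomic operation,
    //    exposed in Java as getAndIncrement().
    static void faiIncrement() { value.getAndIncrement(); }

    // 4. Critical section: the lock excludes conflicting operations, so the
    //    increment may be considered to linearize at any point while the lock
    //    is held.
    static final ReentrantLock lock = new ReentrantLock();
    static int locked = 0;
    static void lockedIncrement() {
        lock.lock();
        try { locked = locked + 1; } finally { lock.unlock(); }
    }
}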
https://en.wikipedia.org/wiki/Linearizability
Incomputer science, asemaphoreis avariableorabstract data typeused to control access to a common resource by multiplethreadsand avoidcritical sectionproblems in aconcurrentsystem such as amultitaskingoperating system. Semaphores are a type ofsynchronization primitive. A trivial semaphore is a plain variable that is changed (for example, incremented or decremented, or toggled) depending on programmer-defined conditions. A useful way to think of a semaphore as used in a real-world system is as a record of how many units of a particular resource are available, coupled with operations to adjust that recordsafely(i.e., to avoidrace conditions) as units are acquired or become free, and, if necessary, wait until a unit of the resource becomes available. Though semaphores are useful for preventing race conditions, they do not guarantee their absence. Semaphores that allow an arbitrary resource count are calledcounting semaphores, while semaphores that are restricted to the values 0 and 1 (or locked/unlocked, unavailable/available) are calledbinary semaphoresand are used to implementlocks. The semaphore concept was invented byDutchcomputer scientistEdsger Dijkstrain 1962 or 1963,[1]when Dijkstra and his team were developing anoperating systemfor theElectrologica X8. That system eventually became known as theTHE multiprogramming system. Suppose a physicallibraryhas ten identical study rooms, to be used by one student at a time. Students must request a room from the front desk. If no rooms are free, students wait at the desk until someone relinquishes a room. When a student has finished using a room, the student must return to the desk and indicate that the room is free. In the simplest implementation, theclerkat thefront deskknows only the number of free rooms available. This requires that all of the students use their room while they have signed up for it and return it when they are done. When a student requests a room, the clerk decreases this number. When a student releases a room, the clerk increases this number. The room can be used for as long as desired, and so it is not possible to book rooms ahead of time. In this scenario, the front desk count-holder represents a counting semaphore, the rooms are the resource, and the students represent processes/threads. The value of the semaphore in this scenario is initially 10, with all rooms empty. When a student requests a room, they are granted access, and the value of the semaphore is changed to 9. After the next student comes, it drops to 8, then 7, and so on. If someone requests a room and the current value of the semaphore is 0,[2]they are forced to wait until a room is freed (when the count is increased from 0). If one of the rooms was released, but there are several students waiting, then any method can be used to select the one who will occupy the room (likeFIFOor randomly picking one). And of course, a student must inform the clerk about releasing their room only after really leaving it. When used to control access to apoolof resources, a semaphore tracks onlyhow manyresources are free. It does not keep track ofwhichof the resources are free. Some other mechanism (possibly involving more semaphores) may be required to select a particular free resource. The paradigm is especially powerful because the semaphore count may serve as a useful trigger for a number of different actions. 
The librarian above may turn the lights off in the study hall when there are no students remaining, or may place a sign that says the rooms are very busy when most of the rooms are occupied. The success of the protocol requires applications to follow it correctly. Fairness and safety are likely to be compromised (which practically means a program may behave slowly, act erratically,hang, orcrash) if even a single process acts incorrectly. This includes: Even if all processes follow these rules,multi-resourcedeadlockmay still occur when there are different resources managed by different semaphores and when processes need to use more than one resource at a time, as illustrated by thedining philosophers problem. Counting semaphores are equipped with two operations, historically denoted as P and V (see§ Operation namesfor alternative names). Operation V increments the semaphoreS, and operation P decrements it. The value of the semaphoreSis the number of units of the resource that are currently available. The P operationwastes timeorsleepsuntil a resource protected by the semaphore becomes available, at which time the resource is immediately claimed. The V operation is the inverse: it makes a resource available again after the process has finished using it. One important property of semaphoreSis that its value cannot be changed except by using the V and P operations. A simple way to understandwait(P) andsignal(V) operations is: Many operating systems provide efficient semaphore primitives that unblock a waiting process when the semaphore is incremented. This means that processes do not waste time checking the semaphore value unnecessarily. The counting semaphore concept can be extended with the ability to claim or return more than one "unit" from the semaphore, a technique implemented inUnix. The modified V and P operations are as follows, using square brackets to indicateatomic operations, i.e., operations that appear indivisible to other processes: However, the rest of this section refers to semaphores with unary V and P operations, unless otherwise specified. To avoidstarvation, a semaphore has an associatedqueueof processes (usually withFIFOsemantics). If a process performs a P operation on a semaphore that has the value zero, the process is added to the semaphore's queue and its execution is suspended. When another process increments the semaphore by performing a V operation, and there are processes on the queue, one of them is removed from the queue and resumes execution. When processes have different priorities the queue may be ordered thereby, such that the highest priority process is taken from the queue first. If the implementation does not ensure atomicity of the increment, decrement, and comparison operations, there is a risk of increments or decrements being forgotten, or of the semaphore value becoming negative. Atomicity may be achieved by using a machine instruction that canread, modify, and writethe semaphore in a single operation. Without such a hardware instruction, an atomic operation may be synthesized by using asoftware mutual exclusion algorithm. Onuniprocessorsystems, atomic operations can be ensured by temporarily suspendingpreemptionor disabling hardwareinterrupts. This approach does not work on multiprocessor systems where it is possible for two programs sharing a semaphore to run on different processors at the same time. To solve this problem in a multiprocessor system, a locking variable can be used to control access to the semaphore. 
The locking variable is manipulated using atest-and-set-lockcommand. Consider a variableAand a boolean variableS.Ais only accessed whenSis marked true. Thus,Sis a semaphore forA. One can imagine a stoplight signal (S) just before a train station (A). In this case, if the signal is green, then one can enter the train station. If it is yellow or red (or any other color), the train station cannot be accessed. Consider a system that can only support ten users (S=10). Whenever a user logs in, P is called, decrementing the semaphoreSby 1. Whenever a user logs out, V is called, incrementingSby 1 representing a login slot that has become available. WhenSis 0, any users wishing to log in must wait untilSincreases. The login request is enqueued onto a FIFO queue until a slot is freed.Mutual exclusionis used to ensure that requests are enqueued in order. WheneverSincreases (login slots available), a login request is dequeued, and the user owning the request is allowed to log in. IfSis already greater than 0, then login requests are immediately dequeued. In theproducer–consumer problem, one process (the producer) generates data items and another process (the consumer) receives and uses them. They communicate using a queue of maximum sizeNand are subject to the following conditions: The semaphore solution to the producer–consumer problem tracks the state of the queue with two semaphores:emptyCount, the number of empty places in the queue, andfullCount, the number of elements in the queue. To maintain integrity,emptyCountmay be lower (but never higher) than the actual number of empty places in the queue, andfullCountmay be lower (but never higher) than the actual number of items in the queue. Empty places and items represent two kinds of resources, empty boxes and full boxes, and the semaphoresemptyCountandfullCountmaintain control over these resources. The binary semaphoreuseQueueensures that the integrity of the state of the queue itself is not compromised, for example, by two producers attempting to add items to an empty queue simultaneously, thereby corrupting its internal state. Alternatively amutexcould be used in place of the binary semaphore. TheemptyCountis initiallyN,fullCountis initially 0, anduseQueueis initially 1. The producer does the following repeatedly: The consumer does the following repeatedly Below is a substantive example: Note thatemptyCountmay be much lower than the actual number of empty places in the queue, for example, where many producers have decremented it but are waiting their turn onuseQueuebefore filling empty places. Note thatemptyCount + fullCount ≤Nalways holds, with equality if and only if no producers or consumers are executing their critical sections. The "Passing the baton" pattern[3][4][5]proposed by Gregory R. Andrews is a generic scheme to solve many complex concurrent programming problems in which multiple processes compete for the same resource with complex access conditions (such as satisfying specific priority criteria or avoiding starvation). Given a shared resource, the pattern requires a private "priv" semaphore (initialized to zero) for each process (or class of processes) involved and a single mutual exclusion "mutex" semaphore (initialized to one). 
In this pattern, each process wraps its use of the resource in resource_acquire and resource_release primitives, and both primitives in turn rely on a "pass_the_baton" method. The pattern is called "passing the baton" because a process that releases the resource, as well as a freshly reactivated process, will activate at most one suspended process, that is, it shall "pass the baton to it". The mutex is released only when a process is going to suspend itself (resource_acquire), or when pass_the_baton is unable to reactivate another suspended process. The canonical names V and P come from the initials of Dutch words. V is generally explained as verhogen ("increase"). Several explanations have been offered for P, including proberen ("to test" or "to try"),[6] passeren ("pass"), and pakken ("grab"). Dijkstra's earliest paper on the subject[1] gives passering ("passing") as the meaning for P, and vrijgave ("release") as the meaning for V. It also mentions that the terminology is taken from that used in railroad signals. Dijkstra subsequently wrote that he intended P to stand for prolaag,[7] short for probeer te verlagen, literally "try to reduce", or to parallel the terms used in the other case, "try to decrease".[8][9][10] In ALGOL 68, the Linux kernel,[11] and in some English textbooks, the V and P operations are called, respectively, up and down. In software engineering practice, they are often called signal and wait,[12] release and acquire[12] (standard Java library),[13] or post and pend. Some texts call them vacate and procure to match the original Dutch initials.[14][15] A mutex is a locking mechanism that sometimes uses the same basic implementation as the binary semaphore. However, the two differ in how they are used. While a binary semaphore may be colloquially referred to as a mutex, a true mutex has a more specific use-case and definition, in that only the task that locked the mutex is supposed to unlock it. This constraint aims to avoid some potential problems that arise when plain semaphores are used for locking, such as a semaphore being released by a task that never acquired it.
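The emptyCount/fullCount/useQueue solution to the producer–consumer problem described above maps directly onto java.util.concurrent.Semaphore, with acquire playing the role of P and release the role of V. The bounded buffer below is an illustrative sketch, not code from the article.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Semaphore;

public class BoundedBuffer<T> {
    private final Deque<T> queue = new ArrayDeque<>();
    private final Semaphore emptyCount;                    // empty places in the queue
    private final Semaphore fullCount = new Semaphore(0);  // items in the queue
    private final Semaphore useQueue = new Semaphore(1);   // binary semaphore guarding the queue

    public BoundedBuffer(int capacity) {
        emptyCount = new Semaphore(capacity);
    }

    // Producer: P(emptyCount), P(useQueue), put item, V(useQueue), V(fullCount)
    public void put(T item) throws InterruptedException {
        emptyCount.acquire();
        useQueue.acquire();
        try {
            queue.addLast(item);
        } finally {
            useQueue.release();
        }
        fullCount.release();
    }

    // Consumer: P(fullCount), P(useQueue), take item, V(useQueue), V(emptyCount)
    public T take() throws InterruptedException {
        fullCount.acquire();
        useQueue.acquire();
        T item;
        try {
            item = queue.removeFirst();
        } finally {
            useQueue.release();
        }
        emptyCount.release();
        return item;
    }
}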
https://en.wikipedia.org/wiki/Semaphore_(programming)
Incomputer science,software transactional memory(STM) is aconcurrency controlmechanism analogous todatabase transactionsfor controlling access toshared memoryinconcurrent computing. It is an alternative tolock-based synchronization. STM is a strategy implemented in software, rather than as a hardware component. A transaction in this context occurs when a piece of code executes a series of reads and writes to shared memory. These reads and writes logically occur at a single instant in time; intermediate states are not visible to other (successful) transactions. The idea of providing hardware support for transactions originated in a 1986 paper byTom Knight.[1]The idea was popularized byMaurice HerlihyandJ. Eliot B. Moss.[2]In 1995,Nir Shavitand Dan Touitou extended this idea to software-only transactional memory (STM).[3]Since 2005, STM has been the focus of intense research[4]and support for practical implementations is growing. Unlike thelockingtechniques used in most modern multithreaded applications, STM is often veryoptimistic: athreadcompletes modifications to shared memory without regard for what other threads might be doing, recording every read and write that it is performing in a log. Instead of placing the onus on the writer to make sure it does not adversely affect other operations in progress, it is placed on the reader, who after completing an entire transaction verifies that other threads have not concurrently made changes to memory that it accessed in the past. This final operation, in which the changes of a transaction are validated and, if validation is successful, made permanent, is called acommit. A transaction may alsoabortat any time, causing all of its prior changes to be rolled back or undone. If a transaction cannot be committed due to conflicting changes, it is typically aborted and re-executed from the beginning until it succeeds. The benefit of this optimistic approach is increased concurrency: no thread needs to wait for access to a resource, and different threads can safely and simultaneously modify disjoint parts of a data structure that would normally be protected under the same lock. However, in practice, STM systems also suffer a performance hit compared to fine-grained lock-based systems on small numbers of processors (1 to 4 depending on the application). This is due primarily to the overhead associated with maintaining the log and the time spent committing transactions. Even in this case performance is typically no worse than twice as slow.[5]Advocates of STM believe this penalty is justified by the conceptual benefits of STM.[citation needed] Theoretically, theworst casespace and time complexity ofnconcurrent transactions isO(n). Actual needs depend on implementation details (one can make transactions fail early enough to avoid overhead), but there will also be cases, albeit rare, where lock-based algorithms have better time complexity than software transactional memory. In addition to their performance benefits,[citation needed]STM greatly simplifies conceptual understanding of multithreaded programs and helps make programs more maintainable by working in harmony with existing high-level abstractions such as objects and modules. Lock-based programming has a number of well-known problems that frequently arise in practice: In contrast, the concept of a memory transaction is much simpler, because each transaction can be viewed in isolation as a single-threaded computation. 
Deadlock and livelock are either prevented entirely or handled by an external transaction manager; the programmer need hardly worry about it. Priority inversion can still be an issue, but high-priority transactions can abort conflicting lower priority transactions that have not already committed. However, the need to retry and abort transactions limits their behavior. Any operation performed within a transaction must beidempotentsince a transaction might be retried. Additionally, if an operation hasside effectsthat must be undone if the transaction is aborted, then a correspondingrollbackoperation must be included. This makes manyinput/output(I/O) operations difficult or impossible to perform within transactions. Such limits are typically overcome in practice by creating buffers that queue up the irreversible operations and perform them after the transaction succeeds. InHaskell, this limit is enforced atcompile timeby thetype system. In 2005, Tim Harris,Simon Marlow,Simon Peyton Jones, andMaurice Herlihydescribed an STM system built onConcurrent Haskellthat enables arbitraryatomic operationsto be composed into larger atomic operations, a useful concept impossible with lock-based programming. To quote the authors: Perhaps the most fundamental objection [...] is thatlock-based programs do not compose: correct fragments may fail when combined. For example, consider a hash table with thread-safe insert and delete operations. Now suppose that we want to delete one item A from table t1, and insert it into table t2; but the intermediate state (in which neither table contains the item) must not be visible to other threads. Unless the implementor of the hash table anticipates this need, there is simply no way to satisfy this requirement. [...] In short, operations that are individually correct (insert, delete) cannot be composed into larger correct operations.—Tim Harris et al., "Composable Memory Transactions", Section 2: Background, pg.2[6] With STM, this problem is simple to solve: simply wrapping two operations in a transaction makes the combined operation atomic. The only sticking point is that it is unclear to the caller, who is unaware of the implementation details of the component methods, when it should attempt to re-execute the transaction if it fails. In response, the authors proposed aretrycommand which uses the transaction log generated by the failed transaction to determine which memory cells it read, and automatically retries the transaction when one of these cells is modified, based on the logic that the transaction will not behave differently until at least one such value is changed. The authors also proposed a mechanism for composition ofalternatives, theorElsefunction. It runs one transaction and, if that transaction does aretry, runs a second one. If both retry, it tries them both again as soon as a relevant change is made.[clarification needed]This facility, comparable to features such as the Portable Operating System Interface (POSIX) networkingselect()call, allows the caller to wait on any one of a number of events simultaneously. It also simplifies programming interfaces, for example by providing a simple mechanism to convert between blocking and nonblocking operations. This scheme has been implemented in theGlasgow Haskell Compiler. The conceptual simplicity of STMs enables them to be exposed to the programmer using relatively simple languagesyntax. 
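To make the composition point concrete, the following Java sketch (illustrative only) moves an entry between two individually thread-safe maps. The atomically helper here is a stand-in built on a single global lock rather than a real software transactional memory, but it shows the shape of the solution: the caller composes the delete and the insert without knowing how either map is synchronized internally, and the intermediate state is never visible to other threads that also go through atomically.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class ComposeSketch {
    private static final Object TX = new Object();

    // Stand-in for an STM "atomic" block: here simply a single global lock.
    // A real STM would log reads and writes optimistically and commit or retry.
    static <T> T atomically(Supplier<T> block) {
        synchronized (TX) { return block.get(); }
    }

    // Each map is individually thread-safe, yet moving an entry is only atomic
    // if every access to the maps goes through atomically().
    static <K, V> V moveEntry(Map<K, V> from, Map<K, V> to, K key) {
        return atomically(() -> {
            V item = from.remove(key);            // delete from t1
            if (item != null) to.put(key, item);  // insert into t2
            return item;                          // intermediate state never visible
        });
    }

    public static void main(String[] args) {
        Map<String, Integer> t1 = new ConcurrentHashMap<>();
        Map<String, Integer> t2 = new ConcurrentHashMap<>();
        t1.put("A", 1);
        moveEntry(t1, t2, "A");
        System.out.println(t1 + " " + t2);   // {} {A=1}
    }
}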
Tim Harris and Keir Fraser's "Language Support for Lightweight Transactions" proposed the idea of using the classicalconditional critical region(CCR) to represent transactions. In its simplest form, this is just an "atomic block", a block of code which logically occurs at a single instant: When the end of the block is reached, the transaction is committed if possible, or else aborted and retried. (This is simply a conceptual example, not correct code. For example, it behaves incorrectly if node is deleted from the list during the transaction.) CCRs also permit aguard condition, which enables a transaction to wait until it has work to do: If the condition is not satisfied, the transaction manager will wait until another transaction has made acommitthat affects the condition before retrying. Thisloose couplingbetween producers and consumers enhances modularity compared to explicit signaling between threads. "Composable Memory Transactions"[6]took this a step farther with itsretrycommand (discussed above), which can, at any time, abort the transaction and wait untilsome valuepreviously read by the transaction is modified before retrying. For example: This ability to retry dynamically late in the transaction simplifies the programming model and opens up new possibilities. One issue is how exceptions behave when they propagate outside of transactions. In "Composable Memory Transactions",[6]the authors decided that this should abort the transaction, since exceptions normally indicate unexpected errors in Concurrent Haskell, but that the exception could retain information allocated by and read during the transaction for diagnostic purposes. They stress that other design decisions may be reasonable in other settings. STM can be implemented as a lock-free algorithm or it can use locking.[7]There are two types of locking schemes: In encounter-time locking (Ennals, Saha, and Harris), memory writes are done by first temporarily acquiring a lock for a given location, writing the value directly, and logging it in the undo log. Commit-time locking locks memory locations only during the commit phase. A commit-time scheme named "Transactional Locking II" implemented by Dice, Shalev, and Shavit uses a global version clock. Every transaction starts by reading the current value of the clock and storing it as the read-version. Then, on every read or write, the version of the particular memory location is compared to the read-version; and, if it is greater, the transaction is aborted. This guarantees that the code is executed on a consistent snapshot of memory. During commit, all write locations are locked, and version numbers of all read and write locations are re-checked. Finally, the global version clock is incremented, new write values from the log are written back to memory and stamped with the new clock version. One problem with implementing software transactional memory with optimistic reading is that it is possible for an incomplete transaction to read inconsistent state (that is, to read a mixture of old and new values written by another transaction). 
Such a transaction is doomed to abort if it ever tries to commit, so this does not violate the consistency condition enforced by the transactional system, but it is possible for this "temporary" inconsistent state to cause a transaction to trigger a fatal exceptional condition such as a segmentation fault or even enter an endless loop, as in the following contrived example from Figure 4 of "Language Support for Lightweight Transactions": Providedx=yinitially, neither transaction above alters this invariant, but it is possible that transaction A will readxafter transaction B updates it but readybefore transaction B updates it, causing it to enter an infinite loop. The usual strategy for dealing with this is to intercept any fatal exceptions and abort any transaction that is not valid. One way to deal with these issues is to detect transactions that execute illegal operations or fail to terminate and abort them cleanly; another approach is thetransactional lockingscheme.
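The following deliberately simplified Java sketch illustrates the global-version-clock validation described above. It supports only single-location writes and consistent read-only snapshots; a full implementation such as Transactional Locking II also buffers writes in a log, validates the read set at commit time, and handles multi-location transactions. All names are illustrative.

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

public class VersionedReadSketch {

    // Global version clock, as in the commit-time locking scheme described above.
    static final AtomicLong globalClock = new AtomicLong(0);

    // A transactional variable: a value stamped with the version of the commit
    // that wrote it, plus a commit-time lock.
    static final class TVar {
        volatile int value = 0;
        volatile long version = 0;
        final ReentrantLock lock = new ReentrantLock();
    }

    // A single-variable writing "transaction": lock the location, obtain a new
    // version from the global clock, publish the value and stamp it.
    static void write(TVar x, int newValue) {
        x.lock.lock();
        try {
            long writeVersion = globalClock.incrementAndGet();
            x.value = newValue;
            x.version = writeVersion;
        } finally {
            x.lock.unlock();
        }
    }

    // A read-only transaction over two variables. It samples the global clock
    // as its read-version and aborts (retries) whenever it sees a location that
    // is locked or stamped with a newer version, so the pair it returns always
    // comes from a consistent snapshot; it can never observe the mixed state
    // that causes the infinite loop in the example above.
    static int[] readPair(TVar x, TVar y) {
        while (true) {
            long readVersion = globalClock.get();
            Integer vx = readConsistent(x, readVersion);
            if (vx == null) continue;        // abort and retry
            Integer vy = readConsistent(y, readVersion);
            if (vy == null) continue;        // abort and retry
            return new int[] { vx, vy };
        }
    }

    private static Integer readConsistent(TVar v, long readVersion) {
        long before = v.version;
        if (v.lock.isLocked() || before > readVersion) return null;
        int value = v.value;
        if (v.lock.isLocked() || v.version != before) return null;
        return value;
    }
}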
https://en.wikipedia.org/wiki/Software_transactional_memory
Transactional Synchronization Extensions(TSX), also calledTransactional Synchronization Extensions New Instructions(TSX-NI), is an extension to thex86instruction set architecture(ISA) that adds hardwaretransactional memorysupport, speeding up execution of multi-threaded software through lock elision. According to different benchmarks, TSX/TSX-NI can provide around 40% faster applications execution in specific workloads, and 4–5 times more databasetransactions per second(TPS).[1][2][3][4] TSX/TSX-NI was documented byIntelin February 2012, and debuted in June 2013 on selected Intelmicroprocessorsbased on theHaswellmicroarchitecture.[5][6][7]Haswell processors below 45xx as well as R-series and K-series (with unlocked multiplier)SKUsdo not support TSX/TSX-NI.[8]In August 2014, Intel announced a bug in the TSX/TSX-NI implementation on current steppings of Haswell, Haswell-E, Haswell-EP and earlyBroadwellCPUs, which resulted in disabling the TSX/TSX-NI feature on affected CPUs via amicrocodeupdate.[9][10] In 2016, aside-channeltiming attackwas found by abusing the way TSX/TSX-NI handles transactional faults (i.e.page faults) in order to breakkernel address space layout randomization(KASLR) on all major operating systems.[11]In 2021, Intel released a microcode update that disabled the TSX/TSX-NI feature on CPU generations fromSkylaketoCoffee Lake, as a mitigation for discovered security issues.[12] While TSX/TSX-NI is not supported anymore in desktop-class processors, it remains supported in theXeonline of processors (at least on specific models, as of the 6th generation).[13] Support for TSX/TSX-NI emulation is provided as part of the Intel Software Development Emulator.[14]There is also experimental support for TSX/TSX-NI emulation in aQEMUfork.[15] TSX/TSX-NI provides two software interfaces for designating code regions for transactional execution.Hardware Lock Elision(HLE) is an instruction prefix-based interface designed to be backward compatible with processors without TSX/TSX-NI support.Restricted Transactional Memory(RTM) is a new instruction set interface that provides greater flexibility for programmers.[16] TSX/TSX-NI enablesoptimistic executionof transactional code regions. The hardware monitors multiple threads for conflicting memory accesses, while aborting and rolling back transactions that cannot be successfully completed. Mechanisms are provided for software to detect and handle failed transactions.[16] In other words, lock elision through transactional execution uses memory transactions as a fast path where possible, while the slow (fallback) path is still a normal lock. Hardware Lock Elision (HLE) adds two new instruction prefixes,XACQUIREandXRELEASE. These two prefixes reuse theopcodesof the existingREPNE/REPEprefixes (F2H/F3H). On processors that do not support HLE,REPNE/REPEprefixes are ignored on instructions for which theXACQUIRE/XRELEASEare valid, thus enabling backward compatibility.[17] TheXACQUIREprefix hint can only be used with the following instructions with an explicitLOCKprefix:ADD,ADC,AND,BTC,BTR,BTS,CMPXCHG,CMPXCHG8B,DEC,INC,NEG,NOT,OR,SBB,SUB,XOR,XADD, andXCHG. TheXCHGinstruction can be used without theLOCKprefix as well. TheXRELEASEprefix hint can be used both with the instructions listed above, and with theMOV mem, regandMOV mem, imminstructions. HLE allows optimistic execution of a critical section by skipping the write to a lock, so that the lock appears to be free to other threads. 
A failed transaction results in execution restarting from the XACQUIRE-prefixed instruction, but treating the instruction as if the XACQUIRE prefix were not present. Restricted Transactional Memory (RTM) is an alternative implementation to HLE which gives the programmer the flexibility to specify a fallback code path that is executed when a transaction cannot be successfully executed. Unlike HLE, RTM is not backward compatible with processors that do not support it. For backward compatibility, programs are required to detect support for RTM in the CPU before using the new instructions. RTM adds three new instructions: XBEGIN, XEND and XABORT. The XBEGIN and XEND instructions mark the start and the end of a transactional code region; the XABORT instruction explicitly aborts a transaction. Transaction failure redirects the processor to the fallback code path specified by the XBEGIN instruction, with the abort status returned in the EAX register. TSX/TSX-NI provides a new XTEST instruction that returns whether the processor is executing a transactional region. This instruction is supported by the processor if it supports HLE or RTM or both. TSX/TSX-NI Suspend Load Address Tracking (TSXLDTRK) is an instruction set extension that allows tracking of loads from memory to be temporarily disabled in a section of code within a transactional region. This feature extends HLE and RTM, and its support in the processor must be detected separately. TSXLDTRK introduces two new instructions, XSUSLDTRK and XRESLDTRK, for suspending and resuming load address tracking, respectively. While the tracking is suspended, any loads from memory will not be added to the transaction read set. This means that, unless these memory locations were added to the transaction read or write sets outside the suspend region, writes at these locations by other threads will not cause the transaction to abort. Suspending load address tracking for a portion of code within a transactional region reduces the amount of memory that needs to be tracked for read-write conflicts and therefore increases the probability that the transaction commits successfully. Intel's TSX/TSX-NI specification describes how the transactional memory is exposed to programmers, but withholds details on the actual transactional memory implementation.[18] Intel specifies in its developer's and optimization manuals that Haswell maintains both read-sets and write-sets at the granularity of a cache line, tracking addresses in the L1 data cache of the processor.[19][20][21][22] Intel also states that data conflicts are detected through the cache coherence protocol.[20] Haswell's L1 data cache has an associativity of eight. This means that in this implementation, a transactional execution that writes to nine distinct locations mapping to the same cache set will abort. However, due to micro-architectural implementations, this does not mean that fewer accesses to the same set are guaranteed never to abort. Additionally, in CPU configurations with Hyper-Threading Technology, the L1 cache is shared between the two threads on the same core, so operations in a sibling logical processor of the same core can cause evictions.[20] Independent research suggests that Haswell's transactional memory is most likely a deferred-update system using the per-core caches for transactional data and register checkpoints.[18] In other words, Haswell is more likely to use the cache-based transactional memory system, as it is a much less risky implementation choice.
On the other hand, Intel'sSkylakeor later may combine this cache-based approach withmemory ordering buffer(MOB) for the same purpose, possibly also providing multi-versioned transactional memory that is more amenable tospeculative multithreading.[23] In August 2014, Intel announced that a bug exists in the TSX/TSX-NI implementation on Haswell, Haswell-E, Haswell-EP and early Broadwell CPUs, which resulted in disabling the TSX/TSX-NI feature on affected CPUs via a microcode update.[9][10][24]The bug was fixed in F-0 steppings of the vPro-enabled Core M-5Y70 Broadwell CPU in November 2014.[25] The bug was found and then reported during a diploma thesis in the School of Electrical and Computer Engineering of theNational Technical University of Athens.[26] In October 2018, Intel disclosed a TSX/TSX-NI memory ordering issue found in someSkylakeprocessors.[27]As a result of a microcode update, HLE support was disabled in the affected CPUs, and RTM was mitigated by sacrificing one performance counter when used outside of IntelSGXmode or System Management Mode (SMM). System software would have to either effectively disable RTM or update performance monitoring tools not to use the affected performance counter. In June 2021, Intel published a microcode update that further disables TSX/TSX-NI on various Xeon and Core processor models fromSkylakethroughCoffee LakeandWhiskey Lakeas a mitigation for TSX Asynchronous Abort (TAA) vulnerability. Earlier mitigation for memory ordering issue was removed.[28]By default, with the updated microcode, the processor would still indicate support for RTM but would always abort the transaction. System software is able to detect this mode of operation and mask support for TSX/TSX-NI from theCPUIDinstruction, preventing detection of TSX/TSX-NI by applications. System software may also enable the "Unsupported Software Development Mode", where RTM is fully active, but in this case RTM usage may be subject to the issues described earlier, and therefore this mode should not be enabled on production systems. On some systems RTM can't be re-enabled when SGX is active. HLE is always disabled. According to Intel 64 and IA-32 Architectures Software Developer's Manual from May 2020, Volume 1, Chapter 2.5 Intel Instruction Set Architecture And Features Removed,[19]HLE has been removed from Intel products released in 2019 and later. RTM is not documented as removed. However, Intel 10th generationComet LakeandIce Lakeclient processors, which were released in 2020, do not support TSX/TSX-NI,[29][30][31][32][33]including both HLE and RTM. Engineering versions of Comet Lake processors were still retaining TSX/TSX-NI support. In Intel Architecture Instruction Set Extensions Programming Reference revision 41 from October 2020,[34]a new TSXLDTRK instruction set extension was documented. It was first included inSapphire Rapidsprocessors released in January 2023.
https://en.wikipedia.org/wiki/Transactional_Synchronization_Extensions
In the fields ofdatabasesandtransaction processing(transaction management), aschedule(orhistory) of a system is an abstract model to describe the order ofexecutionsin a set of transactions running in the system. Often it is alistof operations (actions) ordered by time, performed by a set oftransactionsthat are executed together in the system. If the order in time between certain operations is not determined by the system, then apartial orderis used. Examples of such operations are requesting a read operation, reading, writing, aborting,committing, requesting alock, locking, etc. Often, only a subset of the transaction operation types are included in a schedule. Schedules are fundamental concepts in databaseconcurrency controltheory. In practice, most general purpose database systems employ conflict-serializable and strict recoverable schedules. Grid notation: Operations (a.k.a., actions): Alternatively, a schedule can be represented with adirected acyclic graph(or DAG) in which there is an arc (i.e.,directed edge) between eachordered pairof operations. The following is an example of a schedule: In this example, the columns represent the different transactions in the schedule D. Schedule D consists of three transactions T1, T2, T3. First T1 Reads and Writes to object X, and then Commits. Then T2 Reads and Writes to object Y and Commits, and finally, T3 Reads and Writes to object Z and Commits. The schedule D above can be represented as list in the following way: D = R1(X) W1(X) Com1 R2(Y) W2(Y) Com2 R3(Z) W3(Z) Com3 Usually, for the purpose of reasoning about concurrency control in databases, an operation is modelled asatomic, occurring at a point in time, without duration. Real executed operations always have some duration. Operations of transactions in a schedule can interleave (i.e., transactions can be executedconcurrently), but time orders between operations in each transaction must remain unchanged. The schedule is inpartial orderwhen the operations of transactions in a schedule interleave (i.e., when the schedule is conflict-serializable but not serial). The schedule is intotal orderwhen the operations of transactions in a schedule do not interleave (i.e., when the schedule is serial). Acomplete scheduleis one that contains either an abort (a.k.a.rollback)or commit action for each of its transactions. A transaction's last action is either to commit or abort. To maintainatomicity, a transaction must undo all its actions if it is aborted. A schedule isserialif the executed transactions are non-interleaved (i.e., a serial schedule is one in which no transaction starts until a running transaction has ended). Schedule D is an example of a serial schedule: A schedule isserializableif it is equivalent (in its outcome) to a serial schedule. In schedule E, the order in which the actions of the transactions are executed is not the same as in D, but in the end, E gives the same result as D. Serializability is used to keep the data in the data item in a consistent state. It is the major criterion for the correctness of concurrent transactions' schedule, and thus supported in all general purpose database systems. Schedules that are not serializable are likely to generate erroneous outcomes; which can be extremely harmful (e.g., when dealing with money within banks).[1][2][3] If any specific order between some transactions is requested by an application, then it is enforced independently of the underlying serializability mechanisms. 
These mechanisms are typically indifferent to any specific order, and generate some unpredictablepartial orderthat is typically compatible with multiple serial orders of these transactions. Two actions are said to be in conflict (conflicting pair) if and only if all of the 3 following conditions are satisfied: Equivalently, two actions are considered conflicting if and only if they arenoncommutative. Equivalently, two actions are considered conflicting if and only if they are aread-write,write-read, orwrite-writeconflict. The following set of actions is conflicting: While the following sets of actions are not conflicting: Reducing conflicts, such as through commutativity, enhances performance because conflicts are the fundamental cause of delays and aborts. The conflict ismaterializedif the requested conflicting operation is actually executed: in many cases a requested/issued conflicting operation by a transaction is delayed and even never executed, typically by alockon the operation's object, held by another transaction, or when writing to a transaction's temporary private workspace and materializing, copying to the database itself, upon commit; as long as a requested/issued conflicting operation is not executed upon the database itself, the conflict isnon-materialized; non-materialized conflicts are not represented by an edge in the precedence graph. The schedules S1 and S2 are said to be conflict-equivalent if and only if both of the following two conditions are satisfied: Equivalently, two schedules are said to be conflict equivalent if and only if one can be transformed to another by swapping pairs of non-conflicting operations (whether adjacent or not) while maintaining the order of actions for each transaction.[4] Equivalently, two schedules are said to be conflict equivalent if and only if one can be transformed to another by swapping pairs of non-conflicting adjacent operations with different transactions.[7] A schedule is said to beconflict-serializablewhen the schedule is conflict-equivalent to one or more serial schedules. Equivalently, a schedule is conflict-serializable if and only if itsprecedence graphis acyclic when only committed transactions are considered. Note that if the graph is defined to also include uncommitted transactions, then cycles involving uncommitted transactions may occur without conflict serializability violation. The schedule K is conflict-equivalent to the serial schedule <T1,T2>, but not <T2,T1>. Conflict serializability can be enforced by restarting any transaction within the cycle in the precedence graph, or by implementingtwo-phase locking,timestamp ordering, orserializable snapshot isolation.[8] Two schedules S1 and S2 are said to be view-equivalent when the following conditions are satisfied: Additionally, two view-equivalent schedules must involve the same set of transactions such that each transaction has the same actions in the same order. In the example below, the schedules S1 and S2 are view-equivalent, but neither S1 nor S2 are view-equivalent to the schedule S3. The conditions for S3 to be view-equivalent to S1 and S2 were not satisfied at the corresponding superscripts for the following reasons: To quickly analyze whether two schedules are view-equivalent, write both schedules as a list with each action's subscript representing which view-equivalence condition they match. 
The schedules are view equivalent if and only if all the actions have the same subscript (or lack thereof) in both schedules: A schedule isview-serializableif it is view-equivalent to some serial schedule. Note that by definition, all conflict-serializable schedules are view-serializable. Notice that the above example (which is the same as the example in the discussion of conflict-serializable) is both view-serializable and conflict-serializable at the same time. There are however view-serializable schedules that are not conflict-serializable: those schedules with a transaction performing ablind write: The above example is not conflict-serializable, but it is view-serializable since it has a view-equivalent serial schedule <T1,| T2,| T3>. Since determining whether a schedule is view-serializable isNP-complete, view-serializability has little practical interest.[citation needed] In arecoverable schedule, transactions only commit after all transactions whose changes they read have committed. A schedule becomesunrecoverableif a transactionTi{\displaystyle T_{i}}reads and relies on changes from another transactionTj{\displaystyle T_{j}}, and thenTi{\displaystyle T_{i}}commits andTj{\displaystyle T_{j}}aborts. These schedules are recoverable. The schedule F is recoverable because T1 commits before T2, that makes the value read by T2 correct. Then T2 can commit itself. In the F2 schedule, if T1 aborted, T2 has to abort because the value of A it read is incorrect. In both cases, the database is left in a consistent state. Schedule J is unrecoverable because T2 committed before T1 despite previously reading the value written by T1. Because T1 aborted after T2 committed, the value read by T2 is wrong. Because a transaction cannot be rolled-back after it commits, the schedule is unrecoverable. Cascadeless schedules(a.k.a, "Avoiding Cascading Aborts (ACA) schedules") are schedules which avoid cascading aborts by disallowingdirty reads.Cascading abortsoccur when one transaction's abort causes another transaction to abort because it read and relied on the first transaction's changes to an object. Adirty readoccurs when a transaction reads data from uncommitted write in another transaction.[9] The following examples are the same as the ones in the discussion on recoverable: In this example, although F2 is recoverable, it does not avoid cascading aborts. It can be seen that if T1 aborts, T2 will have to be aborted too in order to maintain the correctness of the schedule as T2 has already read the uncommitted value written by T1. The following is a recoverable schedule which avoids cascading abort. Note, however, that the update of A by T1 is always lost (since T1 is aborted). Note that this Schedule would not be serializable if T1 would be committed. Cascading aborts avoidance is sufficient but not necessary for a schedule to be recoverable. A schedule isstrictif for any two transactions T1, T2, if a write operation of T1 precedes aconflictingoperation of T2 (either read or write), then the commit or abort event of T1 also precedes that conflicting operation of T2. For example, the schedule F3 above is strict. Any strict schedule is cascade-less, but not the converse. Strictness allows efficient recovery of databases from failure. The following expressions illustrate the hierarchical (containment) relationships betweenserializabilityandrecoverabilityclasses: TheVenn diagram(below) illustrates the above clauses graphically.
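As a concrete illustration of the conflict-serializability test described above, the following Java sketch (illustrative, not part of the article) builds the precedence graph of a schedule from its read and write actions and reports whether the graph is acyclic.

import java.util.*;

public class ConflictSerializability {
    // An action: transaction id, READ or WRITE, and the data item it touches.
    record Action(int tx, boolean isWrite, String item) {}

    // Two actions conflict if they belong to different transactions, touch the
    // same item, and at least one of them is a write.
    static boolean conflict(Action a, Action b) {
        return a.tx() != b.tx() && a.item().equals(b.item()) && (a.isWrite() || b.isWrite());
    }

    // Build the precedence graph (edge Ti -> Tj for each conflicting pair where
    // Ti's action precedes Tj's) and test it for cycles; the schedule is
    // conflict-serializable iff the graph is acyclic.
    static boolean isConflictSerializable(List<Action> schedule) {
        Map<Integer, Set<Integer>> edges = new HashMap<>();
        for (int i = 0; i < schedule.size(); i++)
            for (int j = i + 1; j < schedule.size(); j++)
                if (conflict(schedule.get(i), schedule.get(j)))
                    edges.computeIfAbsent(schedule.get(i).tx(), k -> new HashSet<>())
                         .add(schedule.get(j).tx());
        Set<Integer> done = new HashSet<>(), onPath = new HashSet<>();
        for (Integer tx : edges.keySet())
            if (hasCycle(tx, edges, done, onPath)) return false;
        return true;
    }

    // Depth-first search for a cycle in the precedence graph.
    static boolean hasCycle(int tx, Map<Integer, Set<Integer>> edges,
                            Set<Integer> done, Set<Integer> onPath) {
        if (onPath.contains(tx)) return true;
        if (done.contains(tx)) return false;
        onPath.add(tx);
        for (int next : edges.getOrDefault(tx, Set.of()))
            if (hasCycle(next, edges, done, onPath)) return true;
        onPath.remove(tx);
        done.add(tx);
        return false;
    }

    public static void main(String[] args) {
        // R1(X) W1(X) R2(Y) W2(Y) R3(Z) W3(Z)  -- schedule D above (commits omitted), serial
        List<Action> d = List.of(new Action(1, false, "X"), new Action(1, true, "X"),
                new Action(2, false, "Y"), new Action(2, true, "Y"),
                new Action(3, false, "Z"), new Action(3, true, "Z"));
        // R1(X) R2(X) W2(X) W1(X)  -- cycle in the precedence graph, not serializable
        List<Action> k = List.of(new Action(1, false, "X"), new Action(2, false, "X"),
                new Action(2, true, "X"), new Action(1, true, "X"));
        System.out.println(isConflictSerializable(d)); // true
        System.out.println(isConflictSerializable(k)); // false
    }
}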
https://en.wikipedia.org/wiki/Database_transaction_schedule
In computer security, a sandbox is a security mechanism for separating running programs, usually in an effort to keep system failures or software vulnerabilities from spreading. The sandbox metaphor derives from the concept of a child's sandbox, a play area where children can build, destroy, and experiment without causing any real-world damage.[1] It is often used to run untested or untrusted programs or code, possibly from unverified or untrusted third parties, suppliers, users or websites, without risking harm to the host machine or operating system.[2] A sandbox typically provides a tightly controlled set of resources for guest programs to run in, such as storage and memory scratch space. Network access, the ability to inspect the host system, and reading from input devices are usually disallowed or heavily restricted. In the sense of providing a highly controlled environment, sandboxes may be seen as a specific example of virtualization. Sandboxing is frequently used to test unverified programs that may contain a virus or other malicious code without allowing the software to harm the host device.[3] A sandbox is implemented by executing the software in a restricted operating system environment, thus controlling the resources (e.g. file descriptors, memory, file system space, etc.) that a process may use.[4] Examples of sandbox implementations include operating-system-level jails and containers, virtual machines, rule-based execution mechanisms such as seccomp, and the sandboxes built into web browsers and managed runtimes such as the Java virtual machine. Typical use cases include software testing, analysis of suspected malware, and running untrusted code downloaded from the web.
https://en.wikipedia.org/wiki/Isolation_(computer_science)
Distributed concurrency control is the concurrency control of a system distributed over a computer network (Bernstein et al. 1987, Weikum and Vossen 2001). In database systems and transaction processing (transaction management), distributed concurrency control refers primarily to the concurrency control of a distributed database. It also refers to concurrency control in a multidatabase (and other multi-transactional object) environment (e.g., federated database, grid computing, and cloud computing environments). A major goal of distributed concurrency control is distributed serializability (or global serializability for multidatabase systems). Distributed concurrency control poses special challenges beyond those of centralized concurrency control, primarily due to communication and computer latency. It often requires special techniques, such as a distributed lock manager operated over fast computer networks with low latency, like switched fabric (e.g., InfiniBand). The most common distributed concurrency control technique is strong strict two-phase locking (SS2PL, also named rigorousness), which is also a common centralized concurrency control technique. SS2PL provides both serializability and strictness. Strictness, a special case of recoverability, is utilized for effective recovery from failure. For large-scale distribution and complex transactions, the typically heavy performance penalty of distributed locking (due to delays and latency) can be avoided by relying on the atomic commitment protocol, which is needed in a distributed database for (distributed) transactions' atomicity.
https://en.wikipedia.org/wiki/Distributed_concurrency_control
The Java programming language's Java Collections Framework version 1.5 and later defines and implements the original regular single-threaded Maps, and also new thread-safe Maps implementing the java.util.concurrent.ConcurrentMap interface among other concurrent interfaces.[1] In Java 1.6, the java.util.NavigableMap interface was added, extending java.util.SortedMap, and the java.util.concurrent.ConcurrentNavigableMap interface was added as a subinterface combination. The version 1.8 Map interfaces form a hierarchy, outlined below. Sets can be considered sub-cases of corresponding Maps in which the values are always a particular constant which can be ignored, although the Set API uses corresponding but differently named methods. At the bottom is the java.util.concurrent.ConcurrentNavigableMap, which inherits from multiple parent interfaces. For unordered access as defined in the java.util.Map interface, the java.util.concurrent.ConcurrentHashMap implements java.util.concurrent.ConcurrentMap.[2] The mechanism is hash access to a hash table with lists of entries, each entry holding a key, a value, the hash, and a next reference. Prior to Java 8, there were multiple locks, each serializing access to a 'segment' of the table. In Java 8, native synchronization is used on the heads of the lists themselves, and the lists can mutate into small trees when they threaten to grow too large due to unfortunate hash collisions. Also, Java 8 uses the compare-and-set primitive optimistically to place the initial heads in the table, which is very fast. Performance is O(1) on average, but there are occasional delays when rehashing is necessary. After the hash table expands, it never shrinks, possibly leading to a memory 'leak' after entries are removed. For ordered access as defined by the java.util.NavigableMap interface, java.util.concurrent.ConcurrentSkipListMap was added in Java 1.6,[1] and implements java.util.concurrent.ConcurrentMap and also java.util.concurrent.ConcurrentNavigableMap. It is a skip list which uses lock-free techniques. Performance is O(log n). One problem solved by the Java 1.5 java.util.concurrent package is that of concurrent modification. The collection classes it provides may be reliably used by multiple Threads. All Thread-shared non-concurrent Maps and other collections need to use some form of explicit locking, such as native synchronization, in order to prevent concurrent modification, or else there must be a way to prove from the program logic that concurrent modification cannot occur. Concurrent modification of a Map by multiple Threads will sometimes destroy the internal consistency of the data structures inside the Map, leading to bugs which manifest rarely or unpredictably, and which are difficult to detect and fix. Also, concurrent modification by one Thread with read access by another Thread or Threads will sometimes give unpredictable results to the reader, although the Map's internal consistency will not be destroyed. Using external program logic to prevent concurrent modification increases code complexity and creates an unpredictable risk of errors in existing and future code, although it enables non-concurrent Collections to be used. However, neither locks nor program logic can coordinate external threads which may come in contact with the Collection.
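As a usage-level illustration of the contrast just described, thread-safe concurrent maps versus non-concurrent maps that need external coordination, the following sketch (arbitrary word list and pool size) lets several tasks update a ConcurrentHashMap with no external locking, then copies the result into a ConcurrentSkipListMap when ordered access is wanted; a plain HashMap shared the same way would need the locking discussed below:

import java.util.concurrent.*;

public class WordCount {
    public static void main(String[] args) throws InterruptedException {
        // Thread-safe, unordered map: many threads may update it without external locking.
        ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        String[] words = {"alpha", "beta", "alpha", "gamma", "beta", "alpha"};
        for (String w : words) {
            pool.submit(() -> counts.merge(w, 1, Integer::sum)); // atomic per-key update
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(counts); // e.g. {alpha=3, beta=2, gamma=1}, in no particular order

        // Thread-safe, ordered map: same ConcurrentMap API plus NavigableMap-style access.
        ConcurrentNavigableMap<String, Integer> ordered = new ConcurrentSkipListMap<>(counts);
        System.out.println(ordered.firstKey()); // "alpha" -- keys are kept sorted
    }
}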
In order to help with the concurrent modification problem, the non-concurrentMapimplementations and otherCollections use internal modification counters which are consulted before and after a read to watch for changes: the writers increment the modification counters. A concurrent modification is supposed to be detected by this mechanism, throwing ajava.util.ConcurrentModificationException,[3]but it is not guaranteed to occur in all cases and should not be relied on. The counter maintenance is also a performance reducer. For performance reasons, the counters are not volatile, so it is not guaranteed that changes to them will be propagated betweenThreads. One solution to the concurrent modification problem is using a particular wrapper class provided by a factory injava.util.Collections:public static<K,V> Map<K,V> synchronizedMap(Map<K,V> m)which wraps an existing non-thread-safeMapwith methods that synchronize on an internal mutex.[4]There are also wrappers for the other kinds of Collections. This is a partial solution, because it is still possible that the underlyingMapcan be inadvertently accessed byThreads which keep or obtain unwrapped references. Also, all Collections implement thejava.lang.Iterablebut the synchronized-wrapped Maps and other wrappedCollectionsdo not provide synchronized iterators, so the synchronization is left to the client code, which is slow and error prone and not possible to expect to be duplicated by other consumers of the synchronizedMap. The entire duration of the iteration must be protected as well. Furthermore, aMapwhich is wrapped twice in different places will have different internal mutex Objects on which the synchronizations operate, allowing overlap. The delegation is a performance reducer, but modern Just-in-Time compilers often inline heavily, limiting the performance reduction. Here is how the wrapping works inside the wrapper - the mutex is just a finalObjectand m is the final wrappedMap: The synchronization of the iteration is recommended as follows; however, this synchronizes on the wrapper rather than on the internal mutex, allowing overlap:[5] AnyMapcan be used safely in a multi-threaded system by ensuring that all accesses to it are handled by the Java synchronization mechanism: The code using ajava.util.concurrent.ReentrantReadWriteLockis similar to that for native synchronization. However, for safety, the locks should be used in a try/finally block so that early exit such asjava.lang.Exceptionthrowing or break/continue will be sure to pass through the unlock. This technique is better than using synchronization[6]because reads can overlap each other, there is a new issue in deciding how to prioritize the writes with respect to the reads. For simplicity ajava.util.concurrent.ReentrantLockcan be used instead, which makes no read/write distinction. More operations on the locks are possible than with synchronization, such astryLock()andtryLock(long timeout, TimeUnit unit). Mutual exclusion has alock convoyproblem, in which threads may pile up on a lock, causing the JVM to need to maintain expensive queues of waiters and to 'park' the waitingThreads. It is expensive to park and unpark aThreads, and a slow context switch may occur. Context switches require from microseconds to milliseconds, while the Map's own basic operations normally take nanoseconds. Performance can drop to a small fraction of a singleThread's throughput as contention increases. 
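The wrapper mechanics, the recommended iteration idiom, and the fully locked access pattern referred to above can be sketched roughly as follows (an illustrative reconstruction, not the JDK source):

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Sketch of how a Collections.synchronizedMap-style wrapper works: every method
// synchronizes on a single internal mutex and then delegates to the wrapped map.
class SynchronizedMapSketch<K, V> {
    private final Map<K, V> m;                 // the wrapped, non-thread-safe map
    private final Object mutex = new Object();

    SynchronizedMapSketch(Map<K, V> m) { this.m = m; }

    V get(Object key)     { synchronized (mutex) { return m.get(key); } }
    V put(K key, V value) { synchronized (mutex) { return m.put(key, value); } }
    int size()            { synchronized (mutex) { return m.size(); } }
    // ... every other Map method is delegated in the same way ...
}

class SynchronizedMapUsage {
    public static void main(String[] args) {
        Map<String, Integer> wrapped = Collections.synchronizedMap(new HashMap<>());
        wrapped.put("a", 1);
        wrapped.put("b", 2);

        // Iteration is NOT protected by the wrapper itself; the client must hold a lock
        // for the whole traversal (the javadoc recommends synchronizing on the wrapper).
        synchronized (wrapped) {
            for (Map.Entry<String, Integer> e : wrapped.entrySet()) {
                System.out.println(e.getKey() + "=" + e.getValue());
            }
        }

        // Alternatively, any plain Map can be shared safely if every access,
        // reads included, goes through one agreed-upon lock object.
        Map<String, Integer> plain = new HashMap<>();
        Object lock = new Object();
        synchronized (lock) { plain.put("x", 42); }
        synchronized (lock) { System.out.println(plain.get("x")); }
    }
}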
When there is little or no contention for the lock, the performance impact is small, apart from the cost of the lock's contention test itself. Modern JVMs will inline most of the lock code, reducing it to only a few instructions, keeping the no-contention case very fast. Reentrant techniques like native synchronization or java.util.concurrent.ReentrantReadWriteLock, however, carry extra performance-reducing baggage in the maintenance of the reentrancy depth, affecting the no-contention case as well. The convoy problem seems to be easing with modern JVMs, but it can be hidden by slow context switching: in this case, latency will increase, but throughput will continue to be acceptable. With hundreds of Threads, a context switch time of 10 ms produces latencies measured in seconds. Mutual exclusion solutions also fail to take advantage of all of the computing power of a multiple-core system, because only one Thread is allowed inside the Map code at a time. The implementations of the particular concurrent Maps provided by the Java Collections Framework and others sometimes take advantage of multiple cores using lock-free programming techniques. Lock-free techniques use operations like the compareAndSet() intrinsic method available on many of the Java classes, such as AtomicReference, to do conditional updates of some Map-internal structures atomically. The compareAndSet() primitive is augmented in the JCF classes by native code that can do compareAndSet on special internal parts of some Objects for some algorithms (using 'unsafe' access). The techniques are complex, often relying on the rules of inter-thread communication provided by volatile variables, the happens-before relation, and special kinds of lock-free 'retry loops' (which are unlike spin locks in that they always produce progress). The compareAndSet() primitive relies on special processor-specific instructions. Any Java code can use the compareAndSet() method on various concurrent classes for other purposes, to achieve lock-free or even wait-free concurrency, which provides finite latency. Lock-free techniques are simple in many common cases and with some simple collections such as stacks, as sketched below. A benchmark diagram in the original article (put throughput on an 8-core i7 at 2.5 GHz, with -Xms5000m to prevent GC) indicates that wrapping a regular HashMap with Collections.synchronizedMap(java.util.Map) may not scale as well as ConcurrentHashMap; the other maps shown are the ordered ConcurrentNavigableMaps AirConcurrentMap and ConcurrentSkipListMap. The flat spots in such curves may be rehashes producing tables bigger than the nursery, and ConcurrentHashMap takes more space. GC and JVM process expansion change the curves considerably, and some internal lock-free techniques generate garbage on contention. Yet another problem with mutual exclusion approaches is that the assumption of complete atomicity made by some single-threaded code creates sporadic, unacceptably long inter-Thread delays in a concurrent environment. In particular, Iterators and bulk operations like putAll() can take a length of time proportional to the Map size, delaying other Threads that expect predictably low latency for non-bulk operations. For example, a multi-threaded web server cannot allow some responses to be delayed by long-running iterations of other threads executing other requests that are searching for a particular value.
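As an example of such a lock-free retry loop on a simple collection, here is a minimal sketch of a compare-and-set based stack (a Treiber stack; the class and method names are illustrative):

import java.util.concurrent.atomic.AtomicReference;

// A minimal lock-free stack. Push and pop use a retry loop around compareAndSet
// instead of locks, so no thread is ever blocked by another.
public class LockFreeStack<T> {
    private static final class Node<T> {
        final T value;
        final Node<T> next;
        Node(T value, Node<T> next) { this.value = value; this.next = next; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    public void push(T value) {
        Node<T> oldHead;
        Node<T> newHead;
        do {
            oldHead = head.get();
            newHead = new Node<>(value, oldHead);
            // Succeeds only if no other thread changed the head in the meantime;
            // otherwise we simply re-read the head and retry.
        } while (!head.compareAndSet(oldHead, newHead));
    }

    public T pop() {
        Node<T> oldHead;
        Node<T> newHead;
        do {
            oldHead = head.get();
            if (oldHead == null) return null;   // empty stack
            newHead = oldHead.next;
        } while (!head.compareAndSet(oldHead, newHead));
        return oldHead.value;
    }
}

A failed compareAndSet here means some other thread's operation succeeded, which is why such retry loops always make system-wide progress, unlike a spin lock.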
Related to this is the fact thatThreads that lock theMapdo not actually have any requirement ever to relinquish the lock, and an infinite loop in the ownerThreadmay propagate permanent blocking to otherThreads . Slow ownerThreads can sometimes be Interrupted. Hash-based Maps also are subject to spontaneous delays during rehashing. Thejava.util.concurrentpackages' solution to the concurrent modification problem, the convoy problem, the predictable latency problem, and the multi-core problem includes an architectural choice called weak consistency. This choice means that reads likeget(java.lang.Object)will not block even when updates are in progress, and it is allowable even for updates to overlap with themselves and with reads. Weak consistency allows, for example, the contents of aConcurrentMapto change during an iteration of it by a singleThread.[7]The Iterators are designed to be used by oneThreadat a time. So, for example, aMapcontaining two entries that are inter-dependent may be seen in an inconsistent way by a readerThreadduring modification by anotherThread. An update that is supposed to change the key of an Entry (k1,v) to an Entry (k2,v) atomically would need to do a remove(k1) and then a put(k2, v), while an iteration might miss the entry or see it in two places. Retrievals return the value for a given key that reflectsthe latest previous completedupdate for that key. Thus there is a 'happens-before' relation. There is no way forConcurrentMaps to lock the entire table. There is no possibility ofConcurrentModificationExceptionas there is with inadvertent concurrent modification of non-concurrentMaps. Thesize()method may take a long time, as opposed to the corresponding non-concurrentMaps and other collections which usually include a size field for fast access, because they may need to scan the entireMapin some way. When concurrent modifications are occurring, the results reflect the state of theMapat some time, but not necessarily a single consistent state, hencesize(),isEmpty()andcontainsValue(java.lang.Object)may be best used only for monitoring. There are some operations provided byConcurrentMapthat are not inMap- which it extends - to allow atomicity of modifications. The replace(K, v1, v2) will test for the existence ofv1in the Entry identified byKand only if found, then thev1is replaced byv2atomically. The new replace(k,v) will do a put(k,v) only ifkis already in the Map. Also, putIfAbsent(k,v) will do a put(k,v) only ifkis not already in theMap, and remove(k, v) will remove the Entry for v only if v is present. This atomicity can be important for some multi-threaded use cases, but is not related to the weak-consistency constraint. ForConcurrentMaps, the following are atomic. m.putIfAbsent(k, v) is atomic but equivalent to: m.replace(k, v) is atomic but equivalent to: m.replace(k, v1, v2) is atomic but equivalent to: m.remove(k, v) is atomic but equivalent to: BecauseMapandConcurrentMapare interfaces, new methods cannot be added to them without breaking implementations. However, Java 1.8 added the capability for default interface implementations and it added to theMapinterface default implementations of some new methods getOrDefault(Object, V), forEach(BiConsumer), replaceAll(BiFunction), computeIfAbsent(K, Function), computeIfPresent(K, BiFunction), compute(K,BiFunction), and merge(K, V, BiFunction). 
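The non-atomic equivalents referred to above can be written out as follows; each helper behaves like the corresponding ConcurrentMap method, except that without external locking another thread could interleave between the test and the update:

import java.util.Map;
import java.util.Objects;

class ConcurrentMapEquivalents {
    // m.putIfAbsent(k, v) is atomic but equivalent to:
    static <K, V> V putIfAbsent(Map<K, V> m, K k, V v) {
        if (!m.containsKey(k)) return m.put(k, v);
        else return m.get(k);
    }

    // m.replace(k, v) is atomic but equivalent to:
    static <K, V> V replace(Map<K, V> m, K k, V v) {
        if (m.containsKey(k)) return m.put(k, v);
        else return null;
    }

    // m.replace(k, v1, v2) is atomic but equivalent to:
    static <K, V> boolean replace(Map<K, V> m, K k, V v1, V v2) {
        if (m.containsKey(k) && Objects.equals(m.get(k), v1)) {
            m.put(k, v2);
            return true;
        }
        return false;
    }

    // m.remove(k, v) is atomic but equivalent to:
    static <K, V> boolean remove(Map<K, V> m, K k, V v) {
        if (m.containsKey(k) && Objects.equals(m.get(k), v)) {
            m.remove(k);
            return true;
        }
        return false;
    }
}

These equivalences concern the ConcurrentMap methods themselves; the Java 1.8 default methods just listed raise a separate atomicity question, discussed next.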
The default implementations inMapdo not guarantee atomicity, but in theConcurrentMapoverriding defaults these useLock freetechniques to achieve atomicity, and existing ConcurrentMap implementations will automatically be atomic. The lock-free techniques may be slower than overrides in the concrete classes, so concrete classes may choose to implement them atomically or not and document the concurrency properties. It is possible to uselock-freetechniques with ConcurrentMaps because they include methods of asufficiently high consensus number, namely infinity, meaning that any number ofThreads may be coordinated. This example could be implemented with the Java 8 merge() but it shows the overall lock-free pattern, which is more general. This example is not related to the internals of the ConcurrentMap but to the client code's use of the ConcurrentMap. For example, if we want to multiply a value in the Map by a constant C atomically: The putIfAbsent(k, v) is also useful when the entry for the key is allowed to be absent. This example could be implemented with the Java 8 compute() but it shows the overall lock-free pattern, which is more general. The replace(k,v1,v2) does not accept null parameters, so sometimes a combination of them is necessary. In other words, ifv1is null, then putIfAbsent(k, v2) is invoked, otherwise replace(k,v1,v2) is invoked. The Java collections framework was designed and developed primarily byJoshua Bloch, and was introduced inJDK 1.2.[8]The original concurrency classes came fromDoug Lea's[9]collection package.
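The two client-side retry loops described above, multiplying an existing value by a constant C and handling a possibly absent key, can be sketched as follows (illustrative key type and values):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class LockFreeClientUpdates {
    // Multiply the value mapped to k by c atomically. replace(k, v1, v2) succeeds only
    // if the value is still v1, so on interference we re-read and retry.
    static void multiply(ConcurrentMap<String, Double> m, String k, double c) {
        Double oldValue;
        do {
            oldValue = m.get(k);
            if (oldValue == null) return;        // nothing to multiply
        } while (!m.replace(k, oldValue, oldValue * c));
    }

    // Same update, but the key is allowed to be absent: use putIfAbsent for the
    // missing-entry case, because replace(k, v1, v2) does not accept null parameters.
    static void multiplyOrInsert(ConcurrentMap<String, Double> m, String k, double c, double initial) {
        for (;;) {
            Double oldValue = m.get(k);
            if (oldValue == null) {
                if (m.putIfAbsent(k, initial) == null) return;   // we inserted it
            } else if (m.replace(k, oldValue, oldValue * c)) {
                return;                                          // we updated it
            }
            // otherwise another thread interfered; loop and retry
        }
    }

    public static void main(String[] args) {
        ConcurrentMap<String, Double> m = new ConcurrentHashMap<>();
        m.put("x", 2.0);
        multiply(m, "x", 10.0);
        multiplyOrInsert(m, "y", 10.0, 1.0);
        System.out.println(m); // {x=20.0, y=1.0}
    }
}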
https://en.wikipedia.org/wiki/Java_ConcurrentMap
Arace conditionorrace hazardis the condition of anelectronics,software, or othersystemwhere the system's substantive behavior isdependenton the sequence or timing of other uncontrollable events, leading to unexpected or inconsistent results. It becomes abugwhen one or more of the possible behaviors is undesirable. The termrace conditionwas already in use by 1954, for example inDavid A. Huffman's doctoral thesis "The synthesis of sequential switching circuits".[1] Race conditions can occur especially inlogic circuitsormultithreadedordistributedsoftware programs. Usingmutual exclusioncan prevent race conditions in distributed software systems. A typical example of a race condition may occur when alogic gatecombines signals that have traveled along different paths from the same source. The inputs to the gate can change at slightly different times in response to a change in the source signal. The output may, for a brief period, change to an unwanted state before settling back to the designed state. Certain systems can tolerate suchglitchesbut if this output functions as aclock signalfor further systems that contain memory, for example, the system can rapidly depart from its designed behaviour (in effect, the temporary glitch becomes a permanent glitch). Consider, for example, a two-inputAND gatefed with the following logic:output=A∧A¯{\displaystyle {\text{output}}=A\wedge {\overline {A}}}A logic signalA{\displaystyle A}on one input and its negation,¬A{\displaystyle \neg A}(the ¬ is aBoolean negation), on another input in theory never output a true value:A∧A¯≠1{\displaystyle A\wedge {\overline {A}}\neq 1}. If, however, changes in the value ofA{\displaystyle A}take longer to propagate to the second input than the first whenA{\displaystyle A}changes from false to true then a brief period will ensue during which both inputs are true, and so the gate's output will also be true.[2] A practical example of a race condition can occur when logic circuitry is used to detect certain outputs of a counter. If all the bits of the counter do not change exactly simultaneously, there will be intermediate patterns that can trigger false matches. Acritical race conditionoccurs when the order in which internal variables are changed determines the eventual state that thestate machinewill end up in. Anon-critical race conditionoccurs when the order in which internal variables are changed does not determine the eventual state that the state machine will end up in. Astatic race conditionoccurs when a signal and its complement are combined. Adynamic race conditionoccurs when it results in multiple transitions when only one is intended. They are due to interaction between gates. It can be eliminated by using no more than two levels of gating. Anessential race conditionoccurs when an input has two transitions in less than the total feedback propagation time. Sometimes they are cured using inductivedelay lineelements to effectively increase the time duration of an input signal. Design techniques such asKarnaugh mapsencourage designers to recognize and eliminate race conditions before they cause problems. Oftenlogic redundancycan be added to eliminate some kinds of races. As well as these problems, some logic elements can entermetastable states, which create further problems for circuit designers. A race condition can arise in software when a computer program has multiple code paths that are executing at the same time. 
If the multiple code paths take a different amount of time than expected, they can finish in a different order than expected, which can cause software bugs due to unanticipated behavior. A race can also occur between two programs, resulting in security issues. Critical race conditions cause invalid execution andsoftware bugs. Critical race conditions often happen when the processes or threads depend on some shared state. Operations upon shared states are done incritical sectionsthat must bemutually exclusive. Failure to obey this rule can corrupt the shared state. A data race is a type of race condition. Data races are important parts of various formalmemory models. The memory model defined in theC11andC++11standards specify that a C or C++ program containing a data race hasundefined behavior.[3][4] A race condition can be difficult to reproduce and debug because the end result isnondeterministicand depends on the relative timing between interfering threads. Problems of this nature can therefore disappear when running in debug mode, adding extra logging, or attaching a debugger. A bug that disappears like this during debugging attempts is often referred to as a "Heisenbug". It is therefore better to avoid race conditions by careful software design. Assume that two threads each increment the value of a global integer variable by 1. Ideally, the following sequence of operations would take place: In the case shown above, the final value is 2, as expected. However, if the two threads run simultaneously without locking or synchronization (viasemaphores), the outcome of the operation could be wrong. The alternative sequence of operations below demonstrates this scenario: In this case, the final value is 1 instead of the expected result of 2. This occurs because here the increment operations are not mutually exclusive. Mutually exclusive operations are those that cannot be interrupted while accessing some resource such as a memory location. Not everyone regards data races as a subset of race conditions.[5]The precise definition of data race is specific to the formal concurrency model being used, but typically it refers to a situation where a memory operation in one thread could potentially attempt to access a memory location at the same time that a memory operation in another thread is writing to that memory location, in a context where this is dangerous. This implies that a data race is different from a race condition as it is possible to havenondeterminismdue to timing even in a program without data races, for example, in a program in which all memory accesses use onlyatomic operations. This can be dangerous because on many platforms, if two threads write to a memory location at the same time, it may be possible for the memory location to end up holding a value that is some arbitrary and meaningless combination of the bits representing the values that each thread was attempting to write; this could result in memory corruption if the resulting value is one that neither thread attempted to write (sometimes this is called a 'torn write'). Similarly, if one thread reads from a location while another thread is writing to it, it may be possible for the read to return a value that is some arbitrary and meaningless combination of the bits representing the value that the memory location held before the write, and of the bits representing the value being written. 
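The two-thread increment scenario described above can be reproduced with a short sketch: each thread increments a shared counter 100,000 times; the unsynchronized counter usually ends below the expected 200,000 because increments are lost, while the synchronized counter always reaches it:

public class IncrementRace {
    static int unsafeCounter = 0;
    static int safeCounter = 0;
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCounter++;                       // read-modify-write, not atomic
                synchronized (lock) { safeCounter++; } // mutually exclusive increment
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // The unsafe counter is usually less than 200000 because the two threads'
        // read/increment/write steps interleave and overwrite each other's updates.
        System.out.println("unsafe: " + unsafeCounter + ", safe: " + safeCounter);
    }
}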
On many platforms, special memory operations are provided for simultaneous access; in such cases, typically simultaneous access using these special operations is safe, but simultaneous access using other memory operations is dangerous. Sometimes such special operations (which are safe for simultaneous access) are calledatomicorsynchronizationoperations, whereas the ordinary operations (which are unsafe for simultaneous access) are calleddataoperations. This is probably why the term isdatarace; on many platforms, where there is a race condition involving onlysynchronizationoperations, such a race may be nondeterministic but otherwise safe; but adatarace could lead to memory corruption or undefined behavior. The precise definition of data race differs across formal concurrency models. This matters because concurrent behavior is often non-intuitive and so formal reasoning is sometimes applied. TheC++ standard, in draft N4296 (2014-11-19), defines data race as follows in section 1.10.23 (page 14)[6] Two actions arepotentially concurrentif The execution of a program contains adata raceif it contains two potentially concurrent conflicting actions, at least one of which is not atomic, and neither happens before the other, except for the special case for signal handlers described below [omitted]. Any such data race results in undefined behavior. The parts of this definition relating to signal handlers are idiosyncratic to C++ and are not typical of definitions ofdata race. The paperDetecting Data Races on Weak Memory Systems[7]provides a different definition: "two memory operationsconflictif they access the same location and at least one of them is a write operation ... "Two memory operations, x and y, in a sequentially consistent execution form a race 〈x,y〉,iffx and y conflict, and they are not ordered by the hb1 relation of the execution. The race 〈x,y〉, is adata raceiff at least one of x or y is a data operation. Here we have two memory operations accessing the same location, one of which is a write. The hb1 relation is defined elsewhere in the paper, and is an example of a typical "happens-before" relation; intuitively, if we can prove that we are in a situation where one memory operation X is guaranteed to be executed to completion before another memory operation Y begins, then we say that "X happens-before Y". If neither "X happens-before Y" nor "Y happens-before X", then we say that X and Y are "not ordered by the hb1 relation". So, the clause "... and they are not ordered by the hb1 relation of the execution" can be intuitively translated as "... and X and Y are potentially concurrent". The paper considers dangerous only those situations in which at least one of the memory operations is a "data operation"; in other parts of this paper, the paper also defines a class of "synchronization operations" which are safe for potentially simultaneous use, in contrast to "data operations". TheJava Language Specification[8]provides a different definition: Two accesses to (reads of or writes to) the same variable are said to be conflicting if at least one of the accesses is a write ... When a program contains two conflicting accesses (§17.4.1) that are not ordered by a happens-before relationship, it is said to contain a data race ... a data race cannot cause incorrect behavior such as returning the wrong length for an array. 
A critical difference between the C++ approach and the Java approach is that in C++, a data race is undefined behavior, whereas in Java, a data race merely affects "inter-thread actions".[8]This means that in C++, an attempt to execute a program containing a data race could (while still adhering to the spec) crash or could exhibit insecure or bizarre behavior, whereas in Java, an attempt to execute a program containing a data race may produce undesired concurrency behavior but is otherwise (assuming that the implementation adheres to the spec) safe. An important facet of data races is that in some contexts, a program that is free of data races is guaranteed to execute in asequentially consistentmanner, greatly easing reasoning about the concurrent behavior of the program. Formal memory models that provide such a guarantee are said to exhibit an "SC for DRF" (Sequential Consistency for Data Race Freedom) property. This approach has been said to have achieved recent consensus (presumably compared to approaches which guarantee sequential consistency in all cases, or approaches which do not guarantee it at all).[9] For example, in Java, this guarantee is directly specified:[8] A program is correctly synchronized if and only if all sequentially consistent executions are free of data races. If a program is correctly synchronized, then all executions of the program will appear to be sequentially consistent (§17.4.3). This is an extremely strong guarantee for programmers. Programmers do not need to reason about reorderings to determine that their code contains data races. Therefore they do not need to reason about reorderings when determining whether their code is correctly synchronized. Once the determination that the code is correctly synchronized is made, the programmer does not need to worry that reorderings will affect his or her code. A program must be correctly synchronized to avoid the kinds of counterintuitive behaviors that can be observed when code is reordered. The use of correct synchronization does not ensure that the overall behavior of a program is correct. However, its use does allow a programmer to reason about the possible behaviors of a program in a simple way; the behavior of a correctly synchronized program is much less dependent on possible reorderings. Without correct synchronization, very strange, confusing and counterintuitive behaviors are possible. By contrast, a draft C++ specification does not directly require an SC for DRF property, but merely observes that there exists a theorem providing it: [Note:It can be shown that programs that correctly use mutexes and memory_order_seq_cst operations to prevent all data races and use no other synchronization operations behave as if the operations executed by their constituent threads were simply interleaved, with each value computation of an object being taken from the last side effect on that object in that interleaving. This is normally referred to as “sequential consistency”. However, this applies only to data-race-free programs, and data-race-free programs cannot observe most program transformations that do not change single-threaded program semantics. 
In fact, most single-threaded program transformations continue to be allowed, since any program that behaves differently as a result must perform an undefined operation.— end note Note that the C++ draft specification admits the possibility of programs that are valid but use synchronization operations with a memory_order other than memory_order_seq_cst, in which case the result may be a program which is correct but for which no guarantee of sequentially consistency is provided. In other words, in C++, some correct programs are not sequentially consistent. This approach is thought to give C++ programmers the freedom to choose faster program execution at the cost of giving up ease of reasoning about their program.[9] There are various theorems, often provided in the form of memory models, that provide SC for DRF guarantees given various contexts. The premises of these theorems typically place constraints upon both the memory model (and therefore upon the implementation), and also upon the programmer; that is to say, typically it is the case that there are programs which do not meet the premises of the theorem and which could not be guaranteed to execute in a sequentially consistent manner. The DRF1 memory model[10]provides SC for DRF and allows the optimizations of the WO (weak ordering), RCsc (Release Consistencywith sequentially consistent special operations), VAX memory model, and data-race-free-0 memory models. The PLpc memory model[11]provides SC for DRF and allows the optimizations of the TSO (Total Store Order), PSO, PC (Processor Consistency), and RCpc (Release Consistencywith processor consistency special operations) models. DRFrlx[12]provides a sketch of an SC for DRF theorem in the presence of relaxed atomics. Many software race conditions have associatedcomputer securityimplications. A race condition allows an attacker with access to a shared resource to cause other actors that utilize that resource to malfunction, resulting in effects includingdenial of service[13]andprivilege escalation.[14][15] A specific kind of race condition involves checking for a predicate (e.g. forauthentication), then acting on the predicate, while the state can change between thetime-of-checkand thetime-of-use. When this kind ofbugexists in security-sensitive code, asecurity vulnerabilitycalled atime-of-check-to-time-of-use(TOCTTOU) bug is created. Race conditions are also intentionally used to createhardware random number generatorsandphysically unclonable functions.[16][citation needed]PUFs can be created by designing circuit topologies with identical paths to a node and relying on manufacturing variations to randomly determine which paths will complete first. By measuring each manufactured circuit's specific set of race condition outcomes, a profile can be collected for each circuit and kept secret in order to later verify a circuit's identity. Two or more programs may collide in their attempts to modify or access a file system, which can result in data corruption or privilege escalation.[14]File lockingprovides a commonly used solution. A more cumbersome remedy involves organizing the system in such a way that one unique process (running adaemonor the like) has exclusive access to the file, and all other processes that need to access the data in that file do so only via interprocess communication with that one process. This requires synchronization at the process level. 
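As a sketch of the file-locking remedy just mentioned (hypothetical file name; an OS file lock of this kind only excludes other cooperating processes that also take file locks), a process can hold an exclusive lock for the duration of its read-modify-write of the shared file:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class LockedAppend {
    public static void main(String[] args) throws IOException {
        Path path = Path.of("shared-counter.txt");   // hypothetical shared file
        try (FileChannel channel = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE);
             FileLock lock = channel.lock()) {       // blocks until the exclusive lock is granted
            // While the lock is held, no other cooperating process can lock the file,
            // so the append below cannot interleave with theirs.
            channel.position(channel.size());
            channel.write(ByteBuffer.wrap("one more record\n".getBytes(StandardCharsets.UTF_8)));
        } // lock and channel released here, even if an exception is thrown
    }
}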
A different form of race condition exists in file systems where unrelated programs may affect each other by suddenly using up available resources such as disk space, memory space, or processor cycles. Software not carefully designed to anticipate and handle this race situation may then become unpredictable. Such a risk may be overlooked for a long time in a system that seems very reliable. But eventually enough data may accumulate or enough other software may be added to critically destabilize many parts of a system. An example of this occurred withthe near loss of the Mars Rover "Spirit"not long after landing, which occurred due to deleted file entries causing the file system library to consume all available memory space.[17]A solution is for software to request and reserve all the resources it will need before beginning a task; if this request fails then the task is postponed, avoiding the many points where failure could have occurred. Alternatively, each of those points can be equipped with error handling, or the success of the entire task can be verified afterwards, before continuing. A more common approach is to simply verify that enough system resources are available before starting a task; however, this may not be adequate because in complex systems the actions of other running programs can be unpredictable. In networking, consider a distributed chat network likeIRC, where a user who starts a channel automatically acquires channel-operator privileges. If two users on different servers, on different ends of the same network, try to start the same-named channel at the same time, each user's respective server will grant channel-operator privileges to each user, since neither server will yet have received the other server's signal that it has allocated that channel. (This problem has been largelysolvedby various IRC server implementations.) In this case of a race condition, the concept of the "shared resource" covers the state of the network (what channels exist, as well as what users started them and therefore have what privileges), which each server can freely change as long as it signals the other servers on the network about the changes so that they can update their conception of the state of the network. However, thelatencyacross the network makes possible the kind of race condition described. In this case, heading off race conditions by imposing a form of control over access to the shared resource—say, appointing one server to control who holds what privileges—would mean turning the distributed network into a centralized one (at least for that one part of the network operation). Race conditions can also exist when a computer program is written withnon-blocking sockets, in which case the performance of the program can be dependent on the speed of the network link. Software flaws inlife-critical systemscan be disastrous. Race conditions were among the flaws in theTherac-25radiation therapymachine, which led to the death of at least three patients and injuries to several more.[18] Another example is theenergy management systemprovided byGE Energyand used byOhio-basedFirstEnergy Corp(among other power facilities). A race condition existed in the alarm subsystem; when three sagging power lines were tripped simultaneously, the condition prevented alerts from being raised to the monitoring technicians, delaying their awareness of the problem. 
This software flaw eventually led to theNorth American Blackout of 2003.[19]GE Energy later developed a software patch to correct the previously undiscovered error. Many software tools exist to help detect race conditions in software. They can be largely categorized into two groups:static analysistools anddynamic analysistools. Thread Safety Analysis is a static analysis tool for annotation-based intra-procedural static analysis, originally implemented as a branch of gcc, and now reimplemented inClang, supporting PThreads.[20][non-primary source needed] Dynamic analysis tools include: There are several benchmarks designed to evaluate the effectiveness of data race detection tools Race conditions are a common concern in human-computerinteraction designand softwareusability. Intuitively designed human-machine interfaces require that the user receives feedback on their actions that align with their expectations, but system-generated actions can interrupt a user's current action or workflow in unexpected ways, such as inadvertently answering or rejecting an incoming call on a smartphone while performing a different task.[citation needed] InUK railway signalling, a race condition would arise in the carrying out ofRule 55. According to this rule, if a train was stopped on a running line by a signal, the locomotive fireman would walk to the signal box in order to remind the signalman that the train was present. In at least one case, atWinwickin 1934, an accident occurred because the signalman accepted another train before the fireman arrived. Modern signalling practice removes the race condition by making it possible for the driver to instantaneously contact the signal box by radio. Race conditions are not confined to digital systems. Neuroscience is demonstrating that race conditions can occur in mammal brains as well, for example.[25][26]
https://en.wikipedia.org/wiki/Race_condition#Computing
Incomputer science,transaction processingis information processing[1]that is divided into individual, indivisible operations calledtransactions. Each transaction must succeed orfailas a complete unit; it can never be only partially complete. For example, when you purchase a book from an online bookstore, you exchange money (in the form ofcredit) for a book. If your credit is good, a series of related operations ensures that you get the book and the bookstore gets your money. However, if a single operation in the series fails during the exchange, the entire exchange fails. You do not get the book and the bookstore does not get your money. The technology responsible for making the exchange balanced and predictable is calledtransaction processing. Transactions ensure that data-oriented resources are not permanently updated unless all operations within the transactional unit complete successfully. By combining a set of related operations into a unit that either completely succeeds or completely fails, one can simplify error recovery and make one's application more reliable. Transaction processing systems consist of computer hardware and software hosting a transaction-oriented application that performs the routine transactions necessary to conduct business. Examples include systems that manage sales order entry, airline reservations, payroll, employee records, manufacturing, and shipping. Since most, though not necessarily all, transaction processing today is interactive, the term is often treated as synonymous withonline transaction processing. Transaction processing is designed to maintain a system's Integrity (typically adatabaseor some modernfilesystems) in a known, consistent state, by ensuring that interdependent operations on the system are either all completed successfully or all canceled successfully. For example, consider a typical banking transaction that involves moving $700 from a customer's savings account to a customer's checking account. This transaction involves at least two separate operations in computer terms: debiting the savings account by $700, and crediting the checking account by $700. If one operation succeeds but the other does not, the books of the bank will not balance at the end of the day. There must, therefore, be a way to ensure that either both operations succeed or both fail so that there is never any inconsistency in the bank's database as a whole. Transaction processing links multiple individual operations in a single, indivisible transaction, and ensures that either all operations in a transaction are completed without error, or none of them are. If some of the operations are completed but errors occur when the others are attempted, the transaction-processing system "rolls back"allof the operations of the transaction (including the successful ones), thereby erasing all traces of the transaction and restoring the system to the consistent, known state that it was in before processing of the transaction began. If all operations of a transaction are completed successfully, the transaction iscommittedby the system, and all changes to the database are made permanent; the transaction cannot be rolled back once this is done. Transaction processing guards against hardware and software errors that might leave a transaction partially completed. If the computer system crashes in the middle of a transaction, the transaction processing system guarantees that all operations in any uncommitted transactions are cancelled. Generally, transactions are issued concurrently. 
If they overlap (i.e. need to touch the same portion of the database), this can create conflicts. For example, if the customer mentioned in the example above has $150 in his savings account and attempts to transfer $100 to a different person while at the same time moving $100 to the checking account, only one of them can succeed. However, forcing transactions to be processed sequentially is inefficient. Therefore, concurrent implementations of transaction processing is programmed to guarantee that the end result reflects a conflict-free outcome, the same as could be reached if executing the transactions sequentially in any order (a property calledserializability). In our example, this means that no matter which transaction was issued first, either the transfer to a different person or the move to the checking account succeeds, while the other one fails. The basic principles of all transaction-processing systems are the same. However, the terminology may vary from one transaction-processing system to another, and the terms used below are not necessarily universal. Transaction-processing systems ensure database integrity by recording intermediate states of the database as it is modified, then using these records to restore the database to a known state if a transaction cannot be committed. For example, copies of information on the databasepriorto its modification by a transaction are set aside by the system before the transaction can make any modifications (this is sometimes called abefore image). If any part of the transaction fails before it is committed, these copies are used to restore the database to the state it was in before the transaction began. It is also possible to keep a separatejournalof all modifications to a database management system. (sometimes calledafter images). This is not required for rollback of failed transactions but it is useful for updating the database management system in the event of a database failure, so some transaction-processing systems provide it. If the database management system fails entirely, it must be restored from the most recent back-up. The back-up will not reflect transactions committed since the back-up was made. However, once the database management system is restored, the journal of after images can be applied to the database (rollforward) to bring the database management system up to date. Any transactions in progress at the time of the failure can then be rolled back. The result is a database in a consistent, known state that includes the results of all transactions committed up to the moment of failure. In some cases, two transactions may, in the course of their processing, attempt to access the same portion of a database at the same time, in a way that prevents them from proceeding. For example, transaction A may access portion X of the database, and transaction B may access portion Y of the database. If at that point, transaction A then tries to access portion Y of the database while transaction B tries to access portion X, adeadlockoccurs, and neither transaction can move forward. Transaction-processing systems are designed to detect these deadlocks when they occur. Typically both transactions will be cancelled and rolled back, and then they will be started again in a different order, automatically, so that the deadlock does not occur again. Or sometimes, just one of the deadlocked transactions will be cancelled, rolled back, and automatically restarted after a short delay. Deadlocks can also occur among three or more transactions. 
The more transactions involved, the more difficult they are to detect, to the point that transaction processing systems find there is a practical limit to the deadlocks they can detect. In systems where commit and rollback mechanisms are not available or undesirable, acompensating transactionis often used to undo failed transactions and restore the system to a previous state. Jim Graydefined properties of a reliable transaction system in the late 1970s under the acronymACID—atomicity, consistency, isolation, and durability.[1] A transaction's changes to the state are atomic: either all happen or none happen. These changes include database changes, messages, and actions on transducers. Consistency: A transaction is a correct transformation of the state. The actions taken as a group do not violate any of the integrity constraints associated with the state. Even though transactions execute concurrently, it appears to each transaction T, that others executed either before T or after T, but not both. Once a transaction completes successfully (commits), its changes to the database survive failures and retain its changes. Standard transaction-processingsoftware, such asIBM'sInformation Management System, was first developed in the 1960s, and was often closely coupled to particulardatabase management systems.Client–server computingimplemented similar principles in the 1980s with mixed success. However, in more recent years, the distributed client–server model has become considerably more difficult to maintain. As the number of transactions grew in response to various online services (especially theWeb), a single distributed database was not a practical solution. In addition, most online systems consist of a whole suite of programs operating together, as opposed to a strict client–server model where the single server could handle the transaction processing. Today a number of transaction processing systems are available that work at the inter-program level and which scale to large systems, includingmainframes. One effort is theX/Open Distributed Transaction Processing(DTP) (see alsoJava Transaction API(JTA). However, proprietary transaction-processing environments such as IBM'sCICSare still very popular,[citation needed]although CICS has evolved to include open industry standards as well. The term extreme transaction processing (XTP) was used to describe transaction processing systems with uncommonly challenging requirements, particularly throughput requirements (transactions per second). Such systems may be implemented via distributed or cluster style architectures. It was used at least by 2011.[2][3]
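A minimal sketch of the savings-to-checking transfer described earlier, using JDBC; the connection URL, table, and column names are hypothetical, and error handling is reduced to the essential commit-or-rollback decision:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransferExample {
    public static void transfer(String url, long customerId, double amount) throws SQLException {
        try (Connection con = DriverManager.getConnection(url)) {
            con.setAutoCommit(false);   // start an explicit transaction
            try (PreparedStatement debit = con.prepareStatement(
                     "UPDATE accounts SET balance = balance - ? WHERE customer_id = ? AND type = 'SAVINGS'");
                 PreparedStatement credit = con.prepareStatement(
                     "UPDATE accounts SET balance = balance + ? WHERE customer_id = ? AND type = 'CHECKING'")) {
                debit.setDouble(1, amount);
                debit.setLong(2, customerId);
                debit.executeUpdate();
                credit.setDouble(1, amount);
                credit.setLong(2, customerId);
                credit.executeUpdate();
                con.commit();           // both updates become permanent together
            } catch (SQLException e) {
                con.rollback();         // neither update survives if either one failed
                throw e;
            }
        }
    }
}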
https://en.wikipedia.org/wiki/Transaction_processing
The active object design pattern decouples method execution from method invocation for objects that each reside in their own thread of control.[1] The goal is to introduce concurrency, by using asynchronous method invocation and a scheduler for handling requests.[2] The pattern consists of six elements:[3] a proxy, which provides an interface towards clients with publicly accessible methods; an interface which defines the method request on an active object; a list of pending requests from clients; a scheduler, which decides which request to execute next; the implementation of the active object method; and a callback or variable for the client to receive the result. An example of the active object pattern in Java follows.[4] First, consider a standard class that provides two methods that set a double to a certain value. This class does NOT conform to the active object pattern. The class is dangerous in a multithreading scenario because both methods can be called simultaneously, so the value of val (which is not atomic, it is updated in multiple steps) could be undefined, a classic race condition. You can, of course, use synchronization to solve this problem, which in this trivial case is easy; but once the class becomes realistically complex, synchronization can become very difficult.[5] To rewrite this class as an active object, and for a shorter Java 8 variant, see the sketches below.
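The three listings referred to above can be sketched as follows (an illustrative reconstruction rather than the article's exact code; the class names and the val field are assumptions):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Not an active object: both methods may run at the same time on different
// threads, so the non-atomic update of val is a race condition.
class MyClass {
    private double val = 0.0;
    void doSomething()     { val = 1.0; }
    void doSomethingElse() { val = 2.0; }
}

// Active-object rewrite: callers only enqueue requests; one scheduler thread
// executes them sequentially, so val is only ever touched by that thread.
class MyActiveObject {
    private double val = 0.0;
    private final BlockingQueue<Runnable> dispatchQueue = new LinkedBlockingQueue<>();

    MyActiveObject() {
        Thread scheduler = new Thread(() -> {
            try {
                while (true) dispatchQueue.take().run();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        scheduler.setDaemon(true);
        scheduler.start();
    }

    void doSomething()     { dispatchQueue.add(() -> val = 1.0); }
    void doSomethingElse() { dispatchQueue.add(() -> val = 2.0); }
}

// Java 8 variant: a single-threaded executor plays the roles of queue and scheduler.
class MyActiveObjectJava8 {
    private double val = 0.0;
    private final ExecutorService scheduler = Executors.newSingleThreadExecutor();

    void doSomething()     { scheduler.execute(() -> val = 1.0); }
    void doSomethingElse() { scheduler.execute(() -> val = 2.0); }
}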
https://en.wikipedia.org/wiki/Active_object
Design Patterns: Elements of Reusable Object-Oriented Software(1994) is asoftware engineeringbook describingsoftware design patterns. The book was written byErich Gamma,Richard Helm,Ralph Johnson, andJohn Vlissides, with a foreword byGrady Booch. The book is divided into two parts, with the first two chapters exploring the capabilities and pitfalls ofobject-oriented programming, and the remaining chapters describing 23 classicsoftware design patterns. The book includes examples inC++andSmalltalk. It has been influential to the field of software engineering and is regarded as an important source for object-oriented design theory and practice. More than 500,000 copies have been sold in English and in 13 other languages.[1]The authors are often referred to as theGang of Four(GoF).[2][3][4][5] The book started at a birds-of-a-feather session at the 1990OOPSLAmeeting, "Towards an Architecture Handbook", where Erich Gamma and Richard Helm met and discovered their common interest. They were later joined by Ralph Johnson and John Vlissides.[6]The book was originally published on 21 October 1994, with a 1995 copyright, and was made available to the public at the 1994 OOPSLA meeting. Chapter 1 is a discussion ofobject-orienteddesign techniques, based on the authors' experience, which they believe would lead to good object-oriented software design, including: The authors claim the following as advantages ofinterfacesover implementation: Use of an interface also leads todynamic bindingandpolymorphism, which are central features of object-oriented programming. The authors refer toinheritanceaswhite-boxreuse, with white-box referring to visibility, because the internals of parent classes are often visible tosubclasses. In contrast, the authors refer toobject composition(in which objects with well-defined interfaces are used dynamically at runtime by objects obtaining references to other objects) asblack-boxreusebecause no internal details of composed objects need be visible in the code using them. The authors discuss the tension between inheritance and encapsulation at length and state that in their experience, designers overuse inheritance (Gang of Four 1995:20). The danger is stated as follows: They warn that the implementation of a subclass can become so bound up with the implementation of its parent class that any change in the parent's implementation will force the subclass to change. Furthermore, they claim that a way to avoid this is to inherit only from abstract classes—but then, they point out that there is minimal code reuse. Using inheritance is recommended mainly when adding to the functionality of existing components, reusing most of the old code and adding relatively small amounts of new code. To the authors, 'delegation' is an extreme form of object composition that can always be used to replace inheritance. Delegation involves two objects: a 'sender' passes itself to a 'delegate' to let the delegate refer to the sender. Thus the link between two parts of a system are established only at runtime, not at compile-time. TheCallbackarticle has more information about delegation. The authors also discuss so-called parameterized types, which are also known asgenerics(Ada,Eiffel,Java,C#,Visual Basic (.NET), andDelphi) or templates (C++). These allow any type to be defined without specifying all the other types it uses—the unspecified types are supplied as 'parameters' at the point of use. 
The authors admit that delegation and parameterization are very powerful but add a warning: The authors further distinguish between 'Aggregation', where one object 'has' or 'is part of' another object (implying that an aggregate object and its owner have identical lifetimes) and acquaintance, where one object merely 'knows of' another object. Sometimes acquaintance is called 'association' or the 'using' relationship. Acquaintance objects may request operations of each other, but they are not responsible for each other. Acquaintance is a weaker relationship than aggregation and suggests muchlooser couplingbetween objects, which can often be desirable for maximum maintainability in designs. The authors employ the term 'toolkit' where others might today use 'class library', as in C# or Java. In their parlance, toolkits are the object-oriented equivalent of subroutine libraries, whereas a 'framework' is a set of cooperating classes that make up a reusable design for a specific class of software. They state that applications are hard to design, toolkits are harder, and frameworks are the hardest to design. Creational patternsare ones that create objects, rather than having to instantiate objects directly. This gives the program more flexibility in deciding which objects need to be created for a given case. Structural patternsconcern class and object composition. They use inheritance to compose interfaces and define ways to compose objects to obtain new functionality. Mostbehavioral design patternsare specifically concerned with communication between objects. In 2005 the ACMSIGPLANawarded that year's Programming Languages Achievement Award to the authors, in recognition of the impact of their work "on programming practice andprogramming languagedesign".[7] Criticism has been directed at the concept ofsoftware design patternsgenerally, and atDesign Patternsspecifically. A primary criticism ofDesign Patternsis that its patterns are simply workarounds for missing features in C++, replacing elegant abstract features with lengthy concrete patterns, essentially becoming a "human compiler".Paul Grahamwrote:[8] When I see patterns in my programs, I consider it a sign of trouble. The shape of a program should reflect only the problem it needs to solve. Any other regularity in the code is a sign, to me at least, that I'm using abstractions that aren't powerful enough-- often that I'm generating by hand the expansions of some macro that I need to write. Peter Norvigdemonstrates that 16 out of the 23 patterns inDesign Patternsare simplified or eliminated by language features inLisporDylan.[9]Related observations were made by Hannemann andKiczaleswho implemented several of the 23 design patterns using anaspect-oriented programminglanguage (AspectJ) and showed that code-level dependencies were removed from the implementations of 17 of the 23 design patterns and that aspect-oriented programming could simplify the implementations of design patterns.[10] In an interview with InformIT in 2009, Erich Gamma stated that the book authors had a discussion in 2005 on how they would have refactored the book and concluded that they would have recategorized some patterns and added a few additional ones, such as extension object/interface, dependency injection, type object, and null object. Gamma wanted to remove the singleton pattern, but there was no consensus among the authors to do so.[11]
https://en.wikipedia.org/wiki/Design_Patterns
In software engineering, behavioral design patterns are design patterns that identify common communication patterns among objects. By doing so, these patterns increase flexibility in carrying out communication. Examples of this type of design pattern include the Chain of Responsibility, Command, Interpreter, Iterator, Mediator, Memento, Observer, State, Strategy, Template Method, and Visitor patterns, among others; an Observer sketch is given below.
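A minimal, hypothetical Python sketch of one behavioral pattern, Observer, in which a subject notifies registered observers of state changes (the names are illustrative, not from any particular library):

class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer(event)          # communication pattern: push the event to each observer

subject = Subject()
subject.attach(lambda e: print("logger saw:", e))
subject.attach(lambda e: print("ui saw:", e))
subject.notify("state changed")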
https://en.wikipedia.org/wiki/Behavioral_pattern
In software engineering, creational design patterns are design patterns that deal with object creation mechanisms, trying to create objects in a manner suitable to the situation. The basic form of object creation could result in design problems or in added complexity to the design due to inflexibility in the creation procedures. Creational design patterns solve this problem by somehow controlling this object creation. Creational design patterns are composed of two dominant ideas. One is encapsulating knowledge about which concrete classes the system uses. Another is hiding how instances of these concrete classes are created and combined.[1] Creational design patterns are further categorized into object-creational patterns and class-creational patterns, where object-creational patterns deal with object creation and class-creational patterns deal with class instantiation. In greater detail, object-creational patterns defer part of their object creation to another object, while class-creational patterns defer their object creation to subclasses.[2] Five well-known design patterns that are part of the creational patterns are the Abstract Factory, Builder, Factory Method, Prototype, and Singleton patterns. The creational patterns aim to separate a system from how its objects are created, composed, and represented. They increase the system's flexibility in terms of the what, who, how, and when of object creation.[6] As modern software engineering depends more on object composition than class inheritance, emphasis shifts away from hard-coding behaviors toward defining a smaller set of basic behaviors that can be composed into more complex ones.[7] Hard-coded behaviors are inflexible because changing part of the design requires overriding or re-implementing the whole thing. Additionally, hard-coding does not promote reuse and makes it difficult to keep track of errors. For these reasons, creational patterns are more useful than hard-coded behaviors. Creational patterns make a design more flexible. They provide different ways to remove explicit references to concrete classes from the code that needs to instantiate them.[8] In other words, they make objects and classes independent of one another. Consider applying creational patterns when a system should be independent of how its objects are created and composed, or when a family of related objects is designed to be used together. Most creational patterns share a common class-diagram structure, although different creational patterns require additional and different participating classes. Some examples of creational design patterns include the five patterns named above; a Factory Method sketch is given below.
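The following hedged Python sketch illustrates one creational pattern, Factory Method: client code asks a factory for an object and never names the concrete class, so knowledge of which concrete classes the system uses is encapsulated in one place. All names are illustrative.

class JSONParser:
    def parse(self, text):
        return {"format": "json", "raw": text}

class XMLParser:
    def parse(self, text):
        return {"format": "xml", "raw": text}

def parser_factory(kind):
    """Factory method: the only place that knows the concrete classes."""
    registry = {"json": JSONParser, "xml": XMLParser}
    return registry[kind]()          # instantiation is hidden behind the factory

doc = parser_factory("json").parse("{}")   # the client never mentions JSONParser
print(doc)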
https://en.wikipedia.org/wiki/Creational_pattern
In software engineering, structural design patterns are design patterns that ease the design by identifying a simple way to realize relationships among entities. Examples of structural patterns include the Adapter, Bridge, Composite, Decorator, Facade, Flyweight, and Proxy patterns; an Adapter sketch is given below.
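A small, hypothetical Python sketch of one structural pattern, Adapter, which realizes a relationship between an existing class and the interface a client expects (the class names are illustrative):

class LegacyTemperatureSensor:
    def read_fahrenheit(self):
        return 98.6

class CelsiusAdapter:
    """Adapts the legacy Fahrenheit interface to the Celsius interface clients expect."""
    def __init__(self, sensor):
        self.sensor = sensor

    def read_celsius(self):
        return (self.sensor.read_fahrenheit() - 32) * 5 / 9

sensor = CelsiusAdapter(LegacyTemperatureSensor())
print(round(sensor.read_celsius(), 1))   # 37.0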
https://en.wikipedia.org/wiki/Structural_pattern
In computer science, SYNTAX is a system used to generate lexical and syntactic analyzers (parsers), both deterministic and non-deterministic, for all kinds of context-free grammars (CFGs) as well as some classes of contextual grammars.[citation needed] It has been developed at INRIA in France for several decades, mostly by Pierre Boullier, but has only been free software since 2007. SYNTAX is distributed under the CeCILL license.[citation needed] SYNTAX handles most classes of deterministic (unambiguous) grammars (LR, LALR, RLR) as well as general context-free grammars. The deterministic version has been used in operational contexts (e.g., Ada[1]) and is currently used in the domain of compilation.[2] The non-deterministic features include an Earley parser generator used for natural language processing.[3] Parsers generated by SYNTAX include powerful error recovery mechanisms and allow the execution of semantic actions and attribute evaluation on the abstract tree or on the shared parse forest. The current version of SYNTAX (version 6.0 beta) also includes parser generators for other formalisms used for natural language processing as well as bioinformatics. These are context-sensitive formalisms (TAG, RCG) or formalisms that rely on context-free grammars extended through attribute evaluation, in particular for natural language processing (LFG). A notable feature of SYNTAX (compared to Lex/Yacc) is its built-in algorithm[4] for automatically recovering from lexical and syntactic errors, by deleting extra characters or tokens, inserting missing characters or tokens, permuting characters or tokens, etc. This algorithm has a default behaviour that can be modified by providing a custom set of recovery rules adapted to the language for which the lexer and parser are built.
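SYNTAX's own recovery machinery is not shown here, but the general idea of recovering by deleting extra tokens or inserting missing ones can be illustrated with a deliberately simplified Python sketch for a grammar of balanced parentheses; everything below is a hypothetical illustration, not SYNTAX code.

def recover(tokens):
    """Make a token stream parseable by deleting extra ')' and inserting missing ')'."""
    repaired, depth = [], 0
    for tok in tokens:
        if tok == ")" and depth == 0:
            continue                 # recovery by deletion: drop an unmatched closing token
        depth += 1 if tok == "(" else -1 if tok == ")" else 0
        repaired.append(tok)
    repaired.extend(")" * depth)     # recovery by insertion: add the missing closing tokens
    return repaired

print(recover(list("((a)")))   # ['(', '(', 'a', ')', ')']
print(recover(list("a))")))    # ['a']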
https://en.wikipedia.org/wiki/SYNTAX
CADP[1](Construction and Analysis of Distributed Processes) is a toolbox for the design of communication protocols and distributed systems. CADP is developed by the CONVECS team (formerly by the VASY team) atINRIARhone-Alpes and connected to various complementary tools. CADP is maintained, regularly improved, and used in many industrial projects. The purpose of the CADP toolkit is to facilitate the design of reliable systems by use of formal description techniques together with software tools for simulation,rapid application development, verification, and test generation. CADP can be applied to any system that comprises asynchronous concurrency, i.e., any system whose behavior can be modeled as a set of parallel processes governed by interleaving semantics. Therefore, CADP can be used to design hardware architecture, distributed algorithms, telecommunications protocols, etc. The enumerative verification (also known as explicit state verification) techniques implemented in CADP, though less general that theorem proving, enable an automatic, cost-efficient detection of design errors in complex systems. CADP includes tools to support use of two approaches in formal methods, both of which are needed for reliable systems design: Work began on CADP in 1986, when the development of the first two tools, CAESAR and ALDEBARAN, was undertaken. In 1989, the CADP acronym was coined, which stood forCAESAR/ALDEBARAN Distribution Package. Over time, several tools were added, including programming interfaces that enabled tools to be contributed: the CADP acronym then became theCAESAR/ALDEBARAN Development Package. Currently CADP contains over 50 tools. While keeping the same acronym, the name of the toolbox has been changed to better indicate its purpose:Construction and Analysis of Distributed Processes. The releases of CADP have been successively named with alphabetic letters (from "A" to "Z"), then with the names of cities hosting academic research groups actively working on theLOTOSlanguage and, more generally, the names of cities in which major contributions toconcurrency theoryhave been made. Between major releases, minor releases are often available, providing early access to new features and improvements. For more information, see thechange listpage on the CADP website. CADP offers a wide set of functionalities, ranging from step-by-step simulation to massively parallelmodel checking. It includes: CADP is designed in a modular way and puts the emphasis on intermediate formats and programming interfaces (such as the BCG and OPEN/CAESAR software environments), which allow the CADP tools to be combined with other tools and adapted to various specification languages. Verification is comparison of a complex system against a set of properties characterizing the intended functioning of the system (for instance, deadlock freedom, mutual exclusion, fairness, etc.). Most of the verification algorithms in CADP are based on the labeled transition systems (or, simply, automata or graphs) model, which consists of a set of states, an initial state, and a transition relation between states. This model is often generated automatically from high level descriptions of the system under study, then compared against the system properties using various decision procedures. 
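To make the labelled transition system model concrete, here is a small, hypothetical Python sketch (unrelated to CADP's actual data structures) that represents an LTS as an initial state plus a transition relation and checks one simple property, deadlock freedom, by enumerating the reachable states:

# A labelled transition system: an initial state and transitions {state: [(label, next_state), ...]}.
initial = "s0"
transitions = {
    "s0": [("send", "s1")],
    "s1": [("ack", "s0"), ("timeout", "s2")],
    "s2": [],                        # no outgoing transitions: a deadlock state
}

def find_deadlocks(initial, transitions):
    """Enumerative (explicit-state) search for reachable states with no successors."""
    seen, stack, deadlocks = set(), [initial], []
    while stack:
        state = stack.pop()
        if state in seen:
            continue
        seen.add(state)
        successors = transitions.get(state, [])
        if not successors:
            deadlocks.append(state)
        stack.extend(nxt for _, nxt in successors)
    return deadlocks

print(find_deadlocks(initial, transitions))   # ['s2']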
Depending on the formalism used to express the properties, two approaches are possible: Although these techniques are efficient and automated, their main limitation is the state explosion problem, which occurs when models are too large to fit in computer memory. CADP provides software technologies for handling models in two complementary ways: Accurate specification of reliable, complex systems requires a language that is executable (for enumerative verification) and has formal semantics (to avoid any as language ambiguities that could lead to interpretation divergences between designers and implementors). Formal semantics are also required when it is necessary to establish the correctness of an infinite system; this cannot be done using enumerative techniques because they deal only with finite abstractions, so must be done using theorem proving techniques, which only apply to languages with a formal semantics. CADP acts on aLOTOSdescription of the system. LOTOS is an international standard for protocol description (ISO/IEC standard 8807:1989), which combines the concepts of process algebras (in particularCCSandCSPand algebraic abstract data types. Thus, LOTOS can describe both asynchronous concurrent processes and complex data structures. LOTOS was heavily revised in 2001, leading to the publication of E-LOTOS (Enhanced-Lotos, ISO/IEC standard 15437:2001), which tries to provide a greater expressiveness (for instance, by introducing quantitative time to describe systems with real-time constraints) together with a better user friendliness. Several tools exist to convert descriptions in other process calculi or intermediate format into LOTOS, so that the CADP tools can then be used for verification. CADP is distributed free of charge to universities and public research centers. Users in industry can obtain an evaluation license for non-commercial use during a limited period of time, after which a full license is required. To request a copy of CADP, complete the registration form at.[3]After the license agreement has been signed, you will receive details of how to download and install CADP. The toolbox contains several tools: A number of tools have been developed within the OPEN/CAESAR environment: The connection between explicit models (such as BCG graphs) and implicit models (explored on the fly) is ensured by OPEN/CAESAR-compliant compilers including: The CADP toolbox also includes additional tools, such as ALDEBARAN and TGV (Test Generation based on Verification) developed by the Verimag laboratory (Grenoble) and the Vertecs project-team of INRIA Rennes. The CADP tools are well-integrated and can be accessed easily using either the EUCALYPTUS graphical interface or the SVL[10]scripting language. Both EUCALYPTUS and SVL provide users with an easy, uniform access to the CADP tools by performing file format conversions automatically whenever needed and by supplying appropriate command-line options as the tools are invoked.
https://en.wikipedia.org/wiki/CADP
AppScaleis a software company that offers cloud infrastructure software and services to enterprises, government agencies, contractors, and third-party service providers. The company commercially supports one software product, AppScale ATS, a managed hybrid cloud infrastructure software platform that emulates the core AWS APIs. In 2019, the company ended commercial support for its open-source serverless computing platform AppScale GTS, but AppScale GTS source code remains freely available to the open-source community.[1] AppScale began as a research project at theUniversity of California, Santa BarbaraComputer Science Department under the supervision of Professor Chandra Krintz.[2]The project was originally funded by theNSF, with additional funding fromGoogle,IBMandNIH. In 2012, co-founders Dr. Chandra Krintz, Chief Scientist, Dr. Navraj Chohan, Development Lead, and Woody Rollins, CEO founded AppScale Systems to commercialize the private PaaS AppScale technology. Rollins, a pioneer in private cloud infrastructure, was a co-founder and former CEO ofEucalyptus Systems.[3]In 2014, Graziano Obertelli joined AppScale as VP of Operations from Eucalyptus Systems, where he was a co-founder.[4]In 2017, Dimitrii Calzago joined AppScale as CTO from Hewlett Packard Enterprise, where he was Director of Cloud R&D.[5] In April 2014, AppScale Systems was named a 2014 Cool Vendor inPaaSbyGartner, Inc.[6]In September 2014, AppScale Systems won a Bossie Award fromInfoWorldfor bestopen sourcedata center and cloud software.[7]AppScale partnered with Optimal Dynamics on April 11, 2016.[8]AppScale was part of the AliLaunch Program, August 9, 2016.[9]Chandra Krintz, Chief Science Officer of AppScale, was featured on Dev Radio in the episode titled "How to Rescue your apps with the help of AppScale" on December 16, 2016.[10] In late 2017, AppScale Systems started offering commercial support forEucalyptusprivate cloud software afterDXC Technologychose to stop the development and support of Eucalyptus. This prompted AppScale, led by members of the Eucalyptus founding team, to fork the code and continue developing the software, which was renamed AppScale ATS. AppScale ATS (formerlyEucalyptus) is a managed hybrid cloud infrastructure software that emulates the core AWS APIs. AppScale ATS implements AWS-compatible cloud services over dedicated infrastructure, providing a dedicated private AWS region. ATS enables the creation of cost-effective and flexible AWS hybrid cloud environments with a seamless experience for developers and workloads across public and private resources. No special-purpose hardware or unorthodox operating system configurations are required and the entire software stack utilizes open-sourced components. The software is primarily used by enterprises and government agencies to place data and compute in specific geographies (for compliance) or close to data sources (for latency). 
AppScale GTS is anopen-sourceserverless computingplatform that automatically deploys and scales unmodifiedGoogle App Engineapplications over public and private clouds and on-premises clusters.[11]AppScale is modeled on the App Engine APIs and supportsGo,Java,PHP, andPythonapplications.[12] The platform has a rapid API-driven development environment that can run applications on any cloud infrastructure.[13]It decouples app logic from its service ecosystem, allowing better control over app deployment, data storage, resource use, backup, migration, service discovery, load-balancing, fault-tolerance, and auto-scaling.[14] AppScale was developed and maintained by AppScale Systems, Inc., based inSanta Barbara, California, and Google.[15]
https://en.wikipedia.org/wiki/AppScale
8.0.2macOS30 May 2024; 11 months ago(2024-05-30)8.0.2Linux30 May 2024; 11 months ago(2024-05-30)8.0.2Android30 May 2024; 11 months ago(2024-05-30) TheBerkeley Open Infrastructure for Network Computing[2](BOINC, pronounced/bɔɪŋk/– rhymes with "oink"[3]) is anopen-sourcemiddlewaresystem forvolunteer computing(a type ofdistributed computing).[4]Developed originally to supportSETI@home,[5]it became the platform for many other applications in areas as diverse asmedicine,molecular biology,mathematics,linguistics,climatology,environmental science, andastrophysics, among others.[6]The purpose of BOINC is to enable researchers to utilizeprocessing resourcesofpersonal computersand other devices around the world. BOINC development began with a group based at theSpace Sciences Laboratory(SSL) at theUniversity of California, Berkeley, and led byDavid P. Anderson, who also led SETI@home. As a high-performance volunteer computing platform, BOINC brings together 34,236 active participants employing 136,341 active computers (hosts) worldwide, processing daily on average 20.164PetaFLOPSas of 16 November 2021[update][7](it would be the 21st largest processing capability in the world compared with an individualsupercomputer).[8]TheNational Science Foundation(NSF) funds BOINC through awards SCI/0221529,[9]SCI/0438443[10]and SCI/0721124.[11]Guinness World Recordsranks BOINC as the largestcomputing gridin the world.[12] BOINCcoderuns on variousoperating systems, includingMicrosoft Windows,macOS,Android,[13]Linux, andFreeBSD.[14]BOINC isfree softwarereleased under the terms of theGNU Lesser General Public License(LGPL). BOINC was originally developed to manage theSETI@homeproject.David P. Andersonhas said that he chose its name because he wanted something that was not "imposing", but rather "light, catchy, and maybe - like 'Unix' - a littlerisqué", so he "played around with various acronyms and settled on 'BOINC'".[15] The original SETI client was a non-BOINC software exclusively for SETI@home. It was one of the firstvolunteer computingprojects, and not designed with a high level of security. As a result, some participants in the project attempted to cheat the project to gain "credits", while others submitted entirely falsified work. BOINC was designed, in part, to combat these security breaches.[16] The BOINC project started in February 2002, and its first version was released on April 10, 2002. The first BOINC-based project wasPredictor@home, launched on June 9, 2004. In 2009,AQUA@homedeployed multi-threaded CPU applications for the first time,[17]followed by the firstOpenCLapplication in 2010. As of 15 August 2022, there are 33 projects on the official list.[18]There are also, however, BOINC projects not included on the official list. Each year, an international BOINC Workshop is hosted to increase collaboration among project administrators. In 2021, the workshop was hosted virtually.[19] While not affiliated with BOINC officially, there have been several independent projects that reward BOINC users for their participation, includingCharity Engine(sweepstakes based on processing power with prizes funded by private entities who purchase computational time of CE users), Bitcoin Utopia (now defunct), andGridcoin(a blockchain which mints coins based on processing power). BOINC issoftwarethat can exploit the unusedCPUandGPUcycles oncomputer hardwareto perform scientific computing. In 2008, BOINC's website announced thatNvidiahad developed a language calledCUDAthat uses GPUs for scientific computing. 
With NVIDIA's assistance, several BOINC-based projects (e.g.,MilkyWay@home.SETI@home) developed applications that run on NVIDIA GPUs using CUDA. BOINC added support for theATI/AMDfamily of GPUs in October 2009. The GPU applications run from 2 to 10 times faster than the former CPU-only versions. GPU support (viaOpenCL) was added for computers usingmacOSwith AMD Radeon graphic cards, with the current BOINC client supporting OpenCL on Windows, Linux, and macOS. GPU support is also provided forIntelGPUs.[20] BOINC consists of aserversystem andclient softwarethat communicate to process and distribute work units and return results. A BOINC app also exists for Android, allowing every person owning an Android device – smartphone, tablet and/or Kindle – to share their unused computing power. The user is allowed to select the research projects they want to support, if it is in the app's available project list. By default, the application will allow computing only when the device is connected to a WiFi network, is being charged, and the battery has a charge of at least 90%.[21]Some of these settings can be changed to users needs. Not all BOINC projects are available[22]and some of the projects are not compatible with all versions of Android operating system or availability of work is intermittent. Currently available projects[22]are Asteroids@home,Einstein@Home,LHC@home,Moo! Wrapper,Rosetta@home,World Community GridandYoyo@home[ru]. As of September 2021, the most recent version of the mobile application can only be downloaded from the BOINC website or the F-Droid repository as the official Google Play store does not allow downloading and running executables not signed by the app developer and each BOINC project has their own executable files. BOINC can be controlled remotely byremote procedure calls(RPC), from thecommand line, and from a BOINC Manager. BOINC Manager currently has two "views": theAdvanced Viewand theSimplifiedGUI. TheGrid Viewwas removed in the 6.6.x clients as it was redundant. The appearance (skin) of the Simplified GUI is user-customizable, in that users can create their own designs. A BOINC Account Manager is an application that manages multiple BOINC project accounts across multiple computers (CPUs) and operating systems. Account managers were designed for people who are new to BOINC or have several computers participating in several projects. The account manager concept was conceived and developed jointly byGridRepublicand BOINC. Current and past account managers include: BOINC is used by many groups and individuals. Some BOINC projects are based at universities and research labs while others are independent areas of research or interest.[24] 2 Spy Hill Research Mission 2: Develop forum software forInteractions in Understanding the Universe[89] University of Southern California
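BOINC's actual server and client code are far more involved, but the basic work-unit cycle, in which a unit is distributed redundantly to several volunteers and a result is accepted only when independent replicas agree, can be illustrated with a purely hypothetical Python sketch:

from collections import Counter

def volunteer(host_id, work_unit):
    """Stand-in for a volunteer host computing one work unit."""
    result = sum(work_unit)          # the 'science' computation
    if host_id == "cheater":
        result += 1                  # a falsified result
    return result

def validate(work_unit, hosts, quorum=2):
    """Issue the same unit redundantly and accept the majority result."""
    results = Counter(volunteer(h, work_unit) for h in hosts)
    value, votes = results.most_common(1)[0]
    return value if votes >= quorum else None

print(validate([1, 2, 3], ["host-a", "host-b", "cheater"]))   # 6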
https://en.wikipedia.org/wiki/BOINC
In distributed computing, code mobility is the ability for running programs, code or objects to be migrated (or moved) from one machine or application to another.[1] This is the process of moving mobile code across the nodes of a network, as opposed to distributed computation, where the data is moved. It is common practice in distributed systems to require the movement of code or processes between parts of the system, instead of data.[1] Examples of code mobility include scripts downloaded over a network (for example JavaScript, VBScript), Java applets, ActiveX controls, Flash animations, Shockwave movies (and Xtras), and macros embedded within Microsoft Office documents.[2] The purpose of code mobility is to support sophisticated operations. For example, an application can send an object to another machine, and the object can resume executing inside the application on the remote machine with the same state as it had in the originating application. According to a classification proposed by Fuggetta, Picco and Vigna,[1] code mobility can be either strong or weak: strong code mobility involves moving the code, the data and the execution state from one host to another, notably via a process image (this is important in cases where the running application needs to maintain its state as it migrates from host to host), while weak code mobility involves moving the code and the data only. With weak mobility it may therefore be necessary to restart the execution of the program at the destination host. Several paradigms, or architectural styles, exist within code mobility, including remote evaluation, code on demand, and mobile agents.[1] Mobile code can also download and execute in the client workstation via email. Mobile code may download via an email attachment (e.g., a macro in a Word file) or via an HTML email body (e.g., JavaScript). For example, the ILOVEYOU, TRUELOVE, and AnnaK email viruses/worms were all implemented as mobile code (VBScript in a .vbs email attachment that executed in Windows Scripting Host). In almost all situations, the user is not aware that mobile code is downloading and executing in their workstation.[citation needed] Mobile code also refers to code "used for rent", a way of making software packages more affordable by paying per use. This is especially relevant to mobile devices that combine cellular phone, PDA and other functions in one device. Instead of installing software packages, they can be "leased" and paid for on a per-usage basis.[citation needed]
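As a toy illustration of weak code mobility (code and data move, but not execution state), the hypothetical Python sketch below ships a function's source text to a "remote" interpreter, which compiles it and restarts execution from the beginning; strong mobility would additionally have to capture and restore the running process image.

import types

code_to_migrate = """
def task(data):
    return sum(data) * 2
"""

def remote_host(source, data):
    """Receives code plus data and restarts execution from scratch (weak mobility)."""
    module = types.ModuleType("migrated")
    exec(compile(source, "<migrated>", "exec"), module.__dict__)
    return module.task(data)

# The 'sender' transmits code and data only; no execution state travels with them.
print(remote_host(code_to_migrate, [1, 2, 3]))   # 12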
https://en.wikipedia.org/wiki/Code_mobility
Incomputer programming,dataflow programmingis aprogramming paradigmthat models a program as adirected graphof the data flowing between operations, thus implementingdataflowprinciples and architecture.[1]Dataflowprogramming languagesshare some features offunctional languages, and were generally developed in order to bring some functional concepts to a language more suitable for numeric processing. Some authors use the termdatastreaminstead ofdataflowto avoid confusion with dataflow computing ordataflow architecture, based on an indeterministic machine paradigm. Dataflow programming was pioneered byJack Dennisand his graduate students at MIT in the 1960s. Traditionally, a program is modelled as a series of operations happening in a specific order; this may be referred to as sequential,[2]: p.3procedural,[3]control flow[3](indicating that the program chooses a specific path), orimperative programming. The program focuses on commands, in line with thevon Neumann[2]: p.3vision of sequential programming, where data is normally "at rest".[3]: p.7 In contrast, dataflow programming emphasizes the movement of data and models programs as a series of connections. Explicitly defined inputs and outputs connect operations, which function likeblack boxes.[3]: p.2An operation runs as soon as all of its inputs become valid.[4]Thus, dataflow languages are inherently parallel and can work well in large, decentralized systems.[2]: p.3[5][6] One of the key concepts in computer programming is the idea ofstate, essentially a snapshot of various conditions in the system. Most programming languages require a considerable amount of state information, which is generally hidden from the programmer. Often, the computer itself has no idea which piece of information encodes the enduring state. This is a serious problem, as the state information needs to be shared across multiple processors inparallel processingmachines. Most languages force the programmer to add extra code to indicate which data and parts of the code are important to the state. This code tends to be both expensive in terms of performance, as well as difficult to read or debug.Explicit parallelismis one of the main reasons for the poor performance ofEnterprise Java Beanswhen building data-intensive, non-OLTPapplications.[citation needed] Where a sequential program can be imagined as a single worker moving between tasks (operations), a dataflow program is more like a series of workers on anassembly line, each doing a specific task whenever materials are available. Since the operations are only concerned with the availability of data inputs, they have no hidden state to track, and are all "ready" at the same time. Dataflow programs are represented in different ways. A traditional program is usually represented as a series of text instructions, which is reasonable for describing a serial system which pipes data between small, single-purpose tools that receive, process, and return. Dataflow programs start with an input, perhaps thecommand lineparameters, and illustrate how that data is used and modified. The flow of data is explicit, often visually illustrated as a line or pipe. In terms of encoding, a dataflow program might be implemented as ahash table, with uniquely identified inputs as the keys, used to look up pointers to the instructions. When any operation completes, the program scans down the list of operations until it finds the first operation where all inputs are currently valid, and runs it. 
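The scheduling idea described above — keep a table of operations, repeatedly find one whose inputs are all valid, run it, and let its output enable further operations — can be shown with a small, hypothetical Python sketch:

import operator

# Each operation: (list of input names, output name, function).
operations = [
    (["a", "b"], "sum", operator.add),
    (["sum", "c"], "product", operator.mul),
]

def run_dataflow(operations, values):
    """Repeatedly run the first operation whose inputs are all currently valid."""
    pending = list(operations)
    while pending:
        for op in pending:
            inputs, output, fn = op
            if all(name in values for name in inputs):        # are all inputs valid?
                values[output] = fn(*(values[n] for n in inputs))
                pending.remove(op)
                break                                          # rescan from the start
        else:
            raise RuntimeError("no runnable operation: missing inputs")
    return values

print(run_dataflow(operations, {"a": 1, "b": 2, "c": 10}))
# {'a': 1, 'b': 2, 'c': 10, 'sum': 3, 'product': 30}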
When that operation finishes, it will typically output data, thereby making another operation become valid. For parallel operation, only the list needs to be shared; it is the state of the entire program. Thus the task of maintaining state is removed from the programmer and given to the language'sruntime. On machines with a single processor core where an implementation designed for parallel operation would simply introduce overhead, this overhead can be removed completely by using a different runtime. Some recent dataflow libraries such asDifferential/TimelyDataflow have usedincremental computingfor much more efficient data processing.[1][7][8] A pioneer dataflow language was BLOck DIagram (BLODI), published in 1961 byJohn Larry Kelly, Jr., Carol Lochbaum andVictor A. Vyssotskyfor specifyingsampled data systems.[9]A BLODI specification of functional units (amplifiers, adders, delay lines, etc.) and their interconnections was compiled into a single loop that updated the entire system for one clock tick. In a 1966 Ph.D. thesis,The On-line Graphical Specification of Computer Procedures,[10]Bert Sutherlandcreated one of the first graphical dataflow programming frameworks in order to make parallel programming easier. Subsequent dataflow languages were often developed at the largesupercomputerlabs. POGOL, an otherwise conventional data-processing language developed atNSA, compiled large-scale applications composed of multiple file-to-file operations, e.g. merge, select, summarize, or transform, into efficient code that eliminated the creation of or writing to intermediate files to the greatest extent possible.[11]SISAL, a popular dataflow language developed atLawrence Livermore National Laboratory, looks like most statement-driven languages, but variables should beassigned once. This allows thecompilerto easily identify the inputs and outputs. A number of offshoots of SISAL have been developed, includingSAC,Single Assignment C, which tries to remain as close to the popularC programming languageas possible. The United States Navy funded development of signal processing graph notation (SPGN) and ACOS starting in the early 1980s. This is in use on a number of platforms in the field today.[12] A more radical concept isPrograph, in which programs are constructed as graphs onscreen, and variables are replaced entirely with lines linking inputs to outputs. Prograph was originally written on theMacintosh, which remained single-processor until the introduction of theDayStar Genesis MPin 1996.[citation needed] There are many hardware architectures oriented toward the efficient implementation of dataflow programming models.[vague]MIT's tagged token dataflow architecture was designed byGreg Papadopoulos.[undue weight?–discuss] Data flow has been proposed[by whom?]as an abstraction for specifying the global behavior of distributed system components: in thelive distributed objectsprogramming model,distributed data flowsare used to store and communicate state, and as such, they play the role analogous to variables, fields, and parameters in Java-like programming languages[original research?]. Dataflow programming languages include:
https://en.wikipedia.org/wiki/Dataflow_programming
Decentralized computingis the allocation of resources, bothhardwareandsoftware, to each individualworkstation, or office location. In contrast,centralized computingexists when the majority of functions are carried out or obtained from a remote centralized location. Decentralized computing is a trend in modern-day business environments. This is the opposite ofcentralized computing, which was prevalent during the early days of computers. A decentralized computer system has many benefits over a conventional centralizednetwork.[1]Desktop computershave advanced so rapidly, that their potential performance far exceeds the requirements of mostbusiness applications. This results in most desktop computers remainingidle(in relation to their full potential). A decentralized system can use the potential of these systems to maximize efficiency. However, it is debatable whether these networks increase overall effectiveness. All computers have to be updated individually with new software, unlike a centralized computer system. Decentralized systems still enablefile sharingand all computers can shareperipheralssuch asprintersandscannersas well asmodems, allowing all the computers in the network to connect to theinternet. A collection of decentralized computers systems are components of a larger computer network, held together by local stations of equal importance and capability. These systems are capable of running independently of each other. The origins of decentralized computing originate from the work ofDavid Chaum.[citation needed] During 1979 he conceived the first concept of a decentralized computer system known asMix Network. It provided an anonymous email communications network, which decentralized the authentication of the messages in a protocol that would become the precursor toOnion Routing, the protocol of theTOR browser. Through this initial development of an anonymous communications network, David Chaum applied his Mix Network philosophy to design the world's first decentralized payment system and patented it in 1980.[2]Later in 1982, for his PhD dissertation, he wrote about the need for decentralized computing services in the paper Computer Systems Established, Maintained and Trusted by Mutually Suspicious Groups.[3]Chaum proposed an electronic payment system calledEcashin 1982. Chaum's companyDigiCashimplemented this system from 1990 until 1998.[non-primary source needed] Based on a "grid model" a peer-to-peer system, or P2P system, is a collection of applications run on several computers, which connect remotely to each other to complete a function or a task. There is no mainoperating systemto which satellite systems are subordinate. This approach tosoftware development(and distribution) affords developers great savings, as they don't have to create a central control point. An example application isLAN messagingwhich allows users to communicate without a central server. Peer-to-peer networks, where no entity controls an effective or controlling number of the network nodes, runningopen source softwarealso not controlled by any entity, are said to effect adecentralized network protocol. These networks are harder for outside actors to shut down, as they have no central headquarters.[4][better source needed] One of the most notable debates over decentralized computing involvedNapster, a musicfile sharingapplication, which granted users access to an enormous database of files.Record companiesbrought legal action against Napster, blaming the system for lost record sales. 
Napster was found to be in violation of copyright law for facilitating the unlicensed distribution of copyrighted material, and was shut down.[5] After the fall of Napster, there was demand for a file sharing system that would be less vulnerable to litigation. Gnutella, a decentralized system, was developed. This system allowed files to be queried and shared between users without relying upon a central directory, and this decentralization shielded the network from litigation related to the actions of individual users.
https://en.wikipedia.org/wiki/Decentralized_computing
A distributed algorithm is an algorithm designed to run on computer hardware constructed from interconnected processors. Distributed algorithms are used in different application areas of distributed computing, such as telecommunications, scientific computing, distributed information processing, and real-time process control. Standard problems solved by distributed algorithms include leader election, consensus, distributed search, spanning tree generation, mutual exclusion, and resource allocation.[1] Distributed algorithms are a sub-type of parallel algorithm, typically executed concurrently, with separate parts of the algorithm being run simultaneously on independent processors and having limited information about what the other parts of the algorithm are doing. One of the major challenges in developing and implementing distributed algorithms is successfully coordinating the behavior of the independent parts of the algorithm in the face of processor failures and unreliable communications links. The choice of an appropriate distributed algorithm to solve a given problem depends on both the characteristics of the problem and the characteristics of the system the algorithm will run on, such as the type and probability of processor or link failures, the kind of inter-process communication that can be performed, and the level of timing synchronization between separate processes.[1]
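As a concrete example of one standard problem, the sketch below gives a much-simplified, hypothetical Python simulation of ring-based leader election in the style of the Chang–Roberts algorithm: each process forwards the largest identifier it has seen around the ring, and the process that receives its own identifier back declares itself leader. A real distributed implementation would run on separate processors with message passing and failure handling.

def ring_leader_election(ids):
    """Simulate Chang-Roberts-style election on a unidirectional ring of process ids."""
    n = len(ids)
    messages = list(ids)                       # each process starts by sending its own id
    leader = None
    while leader is None:
        next_messages = [None] * n
        for i, msg in enumerate(messages):
            if msg is None:
                continue
            receiver = (i + 1) % n             # the message travels to the next process in the ring
            if msg == ids[receiver]:
                leader = msg                   # its own id came back: this process is the leader
            elif msg > ids[receiver]:
                next_messages[receiver] = msg  # forward larger ids, drop smaller ones
        messages = next_messages
    return leader

print(ring_leader_election([3, 7, 2, 5]))      # 7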
https://en.wikipedia.org/wiki/Distributed_algorithm
Distributed algorithmic mechanism design(DAMD) is an extension ofalgorithmic mechanism design. DAMD differs fromAlgorithmic mechanism designsince thealgorithmis computed in a distributed manner rather than by a central authority. This greatly improvescomputation timesince the burden is shared by allagentswithin anetwork. One major obstacle in DAMD is ensuring thatagentsreveal the true costs orpreferencesrelated to a given scenario. Often theseagentswould rather lie in order to improve their ownutility. DAMD is full of new challenges since one can no longer assume an obedient networking and mechanism infrastructure where rational players control the message paths and mechanism computation. Game theoryanddistributed computingboth deal with a system with many agents, in which the agents may possibly pursue different goals. However they have different focuses. For instance one of the concerns of distributed computing is to prove the correctness of algorithms that tolerate faulty agents and agents performing actions concurrently. On the other hand, in game theory the focus is on devising a strategy which leads us to an equilibrium in the system.[1] Nash equilibriumis the most commonly-used notion of equilibrium in game theory. However, the Nash equilibrium does not deal with faulty or unexpected behavior. A protocol that reaches Nash equilibrium is guaranteed to execute correctly in the face of rational agents, with no agent being able to improve its utility by deviating from the protocol.[2] There is no trusted center as there is inAMD. Thus, mechanisms must be implemented by the agents themselves. The solution preference assumption requires that each agent prefers any outcome to no outcome at all: thus, agents have no incentive to disagree on an outcome or cause the algorithm to fail. In other words, as Afek et al. said, "agents cannot gain if the algorithm fails".[3]As a result, though agents have preferences, they have no incentive to fail the algorithm. A mechanism is considered to be truthful if the agents gain nothing by lying about their or other agents' values. A good example would be aleader electionalgorithm that selects a computation server within a network. The algorithm specifies that agents should send their total computational power to each other, after which the most powerful agent is chosen as the leader to complete the task. In this algorithm agents may lie about their true computation power because they are potentially in danger of being tasked with CPU-intensive jobs which will reduce their power to complete local jobs. This can be overcome with the help of truthful mechanisms which, without any a priori knowledge of the existing data and inputs of each agent, cause each agent to respond truthfully to requests.[4] A well-known truthful mechanism in game theory is theVickrey auction. Leader electionis a fundamental problem in distributed computing and there are numerous protocols to solve this problem. System agents are assumed to be rational, and therefore prefer having a leader to not having one. The agents may also have different preferences regarding who becomes the leader (an agent may prefer that he himself becomes the leader). Standard protocols may choose leaders based on the lowest or highest ID of system agents. 
However, since agents have an incentive to lie about their ID in order to improve their utility, such protocols are rendered useless in the setting of algorithmic mechanism design. A protocol for leader election in the presence of rational agents has been introduced by Ittai et al.; this protocol correctly elects a leader while reaching equilibrium, and it is truthful since no agent can benefit by lying about its input.[5]
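The Vickrey (second-price) auction mentioned above can be sketched in a few lines of hypothetical Python: the highest bidder wins but pays the second-highest bid, which is what removes the incentive to misreport one's true value.

def vickrey_auction(bids):
    """bids: {agent: reported value}. The winner pays the second-highest bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]             # the price is independent of the winner's own bid

bids = {"alice": 10, "bob": 7, "carol": 5}
print(vickrey_auction(bids))                   # ('alice', 7)
# If alice overstates (say 100) she still pays 7; if she understates below 7 she simply loses,
# so reporting her true value is a dominant strategy.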
https://en.wikipedia.org/wiki/Distributed_algorithmic_mechanism_design
In computing, a distributed cache is an extension of the traditional concept of cache used in a single locale. A distributed cache may span multiple servers so that it can grow in size and in transactional capacity. It is mainly used to store application data residing in a database and web session data. The idea of distributed caching[1] has become feasible now because main memory has become very cheap and network cards have become very fast, with 1 Gbit now standard everywhere and 10 Gbit gaining traction.[when?] Also, a distributed cache works well on lower-cost machines usually employed for web servers, as opposed to database servers, which require expensive hardware.[2] An emerging internet architecture known as Information-centric networking (ICN) is one of the best examples of a distributed cache network. ICN is a network-level solution; hence, the existing distributed network cache management schemes are not well suited for ICN.[3] In the supercomputer environment, a distributed cache is typically implemented in the form of a burst buffer. In distributed caching, each cache key is assigned to a specific shard (a.k.a. a partition). There are different sharding strategies,[4] such as modulus sharding, range-based sharding, and consistent hashing; a consistent-hashing sketch is given below.
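Consistent hashing can be illustrated with a brief, hypothetical Python sketch: keys and cache servers are hashed onto the same ring, and each key is stored on the first server clockwise from its position, so adding or removing a server remaps only a small fraction of keys.

import bisect
import hashlib

def ring_hash(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

def build_ring(servers):
    """Place each server on the hash ring."""
    ring = sorted((ring_hash(s), s) for s in servers)
    return [h for h, _ in ring], [s for _, s in ring]

def shard_for(key, hashes, servers):
    """Each key goes to the first server clockwise from its position on the ring."""
    index = bisect.bisect(hashes, ring_hash(key)) % len(servers)
    return servers[index]

hashes, servers = build_ring(["cache-1", "cache-2", "cache-3"])
for key in ["session:42", "user:7", "page:/home"]:
    print(key, "->", shard_for(key, hashes, servers))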
https://en.wikipedia.org/wiki/Distributed_cache
Distributed GISrefers toGI Systemsthat do not have all of the system components in the same physical location.[1]This could be theprocessing, thedatabase, the rendering or theuser interface. It represents a special case ofdistributed computing, with examples of distributed systems includingInternet GIS,Web GIS, andMobile GIS. Distribution of resources provides corporate and enterprise-based models for GIS (involving multiple databases, different computers undertakingspatial analysisand a diverse ecosystem of often spatially-enabled client devices). Distributed GIS permits ashared servicesmodel, including data fusion (ormashups) based onOpen Geospatial Consortium(OGC) web services. Distributed GIS technology enables modern online mapping systems (such asGoogle MapsandBing Maps),Location-based services(LBS), web-based GIS (such as ArcGIS Online) and numerous map-enabled applications. Other applications include transportation,logistics, utilities, farm / agricultural information systems, real-time environmental information systems and the analysis of the movement of people. In terms of data, the concept has been extended to includevolunteered geographical information. Distributed processing allows improvements to the performance of spatial analysis through the use of techniques such as parallel processing. The term Distributed GIS was coined byBruce Gittingsat theUniversity of Edinburgh. He was responsible for one of the firstInternet-based distributed GIS. In 1994, he designed and implemented the World Wide Earthquake Locator, which provided maps of recent earthquake occurrences to a location-independent user, which used theXerox PARC Map Viewer(based inCalifornia, USA), managed by an interface based inEdinburgh (Scotland), which drew data in real-time from theNational Earthquake Information Center(USGS) inColorado, USA.[2]Gittings first taught a course in Distributed GIS in 2005 as part of the Masters Programme in GIS at that institution .[3] Parallel processingis the use of multipleCPU’s to execute different sections of a program together. The terms "concurrent computing," "parallel computing," and "distributed computing" do not have a clear distinction between them.[4]Parallel computing today involves the utilization of a single computer withmulti-core processorsor multiple computers that are connected over a network working on the same task.[5][6]In the case of Distributed GIS, parallel computing using multi-core processors on the same machine would be where the line starts to blur between traditional desktop GIS and distributed. When done in different locations, it is much clearer. As parallel computing has become the dominant paradigm incomputer architecture, mainly in the form ofmulti-core processors, this is important to mention.[5] Today, there are many examples of applying parallel computing to GIS. For example,remote sensingand surveying equipment have been providing vast amounts of spatial information, and how to manage, process or dispose of this data have become major issues in the field ofGeographic Information Science(GIS).[7]To solve these problems there has been much research into the area of parallel processing of GIS information. This involves the utilization of a single computer with multiple processors or multiple computers that are connected over a network working on the same task, or series of tasks. 
Thehadoopframework has been used successfully in GIS processing.[8] Enterprise GISrefers to a geographical information system that integrates geographic data across multiple departments and serves the whole organisation.[9]The basic idea of an enterprise GIS is to deal with departmental needs collectively instead of individually. When organisations started usingGISin the 1960s and 1970s, the focus was on individual projects where individual users created and maintained data sets on their own desktop computers. Due to extensive interaction and work-flow between departments, many organisations have in recent years switched from independent, stand-alone GIS systems to more integrated approaches that share resources and applications.[10] Some of the potential benefits that an enterprise GIS can provide include significantly reduced redundancy of data across the system, improved accuracy and integrity of geographic information, and more efficient use and sharing of data.[11]Since data is one of the most significant investments in any GIS program, any approach that reduces acquisition costs while maintaining data quality is important. The implementation of an enterprise GIS may also reduce the overall GIS maintenance and support costs providing a more effective use of departmental GIS resources. Data can be integrated and used in decision making processes across the whole organisation.[11] A corporateGeographical Information System, is similar toEnterprise GISand satisfies the spatial information needs of an organisation as a whole in an integrated manner.[12]Corporate GIS consists of four technological elements which aredata,standards,information technologyand personnel with expertise. It is a coordinated approach that moves away from fragmented desktop GIS. The design of a corporate GIS includes the construction of a centralised corporatedatabasethat is designed to be the principle resource for an entire organisation. The corporate database is specifically designed to efficiently and effectively suit the requirements of the organisation. Essential to a corporate GIS is the effective management of the corporate database and the establishment of standards such asOGCfor mapping and database technologies. Benefits include that all the users in the organisation have access to shared, complete, accurate, high quality and up-to-date data. All the users in the organisation also have access to shared technology and people with expertise. This improves the efficiency and effectiveness of the organisation as a whole. A successfully managed corporate database reduces redundant collection and storage of information across the organisation. By centralising resources and efforts, it reduces the overall cost. Internet GISis broad set of technologies and applications that employ theInternetto access, analyze, visualize, and distributespatial dataviageographic information systems(GIS).[13][14][15][16][17]Internet GIS is an outgrowth of traditional GIS, and represents a shift from conducting GIS on an individual computer to working with remotely distributed data and functions.[13]Two major issues in GIS are accessing and distributing spatial data and GIS outputs.[18]Internet GIS helps to solve that problem by allowing users to access vast databases impossible to store on a single desktop computer, and by allowing rapid dissemination of both maps and raw data to others.[19][18]These methods include bothfile sharingandemail. 
This has enabled the general public to participate in map creation and make use of GIS technology.[20][21] Cell phones and other wireless communication forms have become common in society.[1][28][29][30]Many of these devices are connected to the internet and can access internet GIS applications like any other computer.[28][29]These devices are networked together, using technology such as themobile web. Unlike traditional computers, however, these devices generate immense amounts of spatial data available to the device user and many governments and private entities.[28][29]The tools, applications, and hardware used to facilitate GIS through the use of wireless technology ismobile GIS. Used by the holder of the device, mobile GIS enables navigation applications likeGoogle Mapsto help the user navigate to a location.[28][29]When used by private firms, the location data collected can help businesses understand foot traffic in an area to optimize business practices.[28][29]Governments can use this data to monitor citizens. Access to locational data by third parties has led to privacy concerns.[28][29] With ~80% of all data deemed to have a spatial component, modern Mobile GIS is a powerful tool.[31]The number of mobile devices in circulation has surpassed the world's population (2013) with a rapid acceleration iniOS,AndroidandWindows 8tablet up-take. Tablets are fast becoming popular for Utility field use. Low-cost MIL-STD-810 certified cases transform consumer tablets into fully ruggedized yet lightweight field-use units at 10% of legacy ruggedized laptop costs. Although not all applications of mobile GIS are limited by the device, many are. These limitations are more applicable to smaller devices such ascell phonesandPDAs. Such devices have small screens with poor resolution, limited memory and processing power, a poor (or no) keyboard, and short battery life. Additional limitations can be found in web client-based tablet applications: poor web GUI and device integration, online reliance, and very limited offline web client cache. Mobile GIS has a significant overlap with internet GIS; however, not all mobile GIS employs the internet, much less the mobile web.[1]Thus, the categories are distinct.[1] CyberGIS, or cyber geographic information science and systems, is a term used to describe the use ofcyberinfrastructure, to perform GIS tasks with storage and processing resources of multiple institutions through, usually through the World Wide Web.[32]CyberGIS focuses on computational and data-intensive geospatial problem-solving within various research and education domains by leveraging the power of distributed computation. CyberGIS has been described as "GIS detached from the desktop and deployed on the web, with the associated issues of hardware, software, data storage, digital networks, people, training and education."[33]The term CyberGIS first entered the literature in 2010, and is predominantly used by theUniversity of Illinois at Urbana-Champaignand collaborators to describe their software and research developed to use big data andhigh-performance computingapproaches to collaborative problem-solving.[32][34] In 2014, the CyberGIS Center for Advanced Digital and Spatial Studies at the University of Illinois at Urbana-Champaign received aNational Science Foundationmajor research instrumentation grant to establish ROGER as the first cyberGISsupercomputer. 
ROGER, hosted by theNational Center for Supercomputing Applications, is optimized to deal with geospatial data and computation and is equipped with: CyberGIS software and tools integrate these system components to support a large number of users who are investigating scientific problems in areas spanning biosciences, engineering, geosciences, and social sciences. Location-based services(LBS) are services that are distributed wirelessly and provide information relevant to the user's current location. These services include such things as ‘find my nearest …’, directions, and various vehicle monitoring systems, such as theGM OnStar systemamongst others. Location-based services are generally run on mobile phones and PDAs, and are intended for use by the general public more than Mobile GIS systems which are geared towards commercial enterprise. Devices can be located by triangulation using the mobile phone network and/or GPS. A web mapping service is a means of displaying and interacting with maps on the Web. The first web mapping service was theXerox PARC Map Viewerbuilt in 1993 and decommissioned in 2000. There have been 3 generations ofweb map service. The first generation was from 1993 onwards and consisted of simpleimage mapswhich had a single click function. The second generation was from 1996 onwards and still usedimage mapsthe one click function. However, they also had zoom and pan capabilities (although slow) and could be customised through the use of theURLAPI. The third generation was from 1998 onwards and were the first to include slippy maps. They utiliseAJAXtechnology which enables seamless panning and zooming. They are customisable using theURLAPIand can have extended functionality programmed in using theDOM. Web map services are based on the concept of theimage mapwhereby this defines the area overlaying an image (e.g. GIF). Animage mapcan be processed client or server side. As functionality is built into the web server, performance is good. Image maps can be dynamic. When image maps are used for geographic purposes, the co-ordinate system must be transformed to the geographical origin to conform to the geographical standard of having the origin at the bottom left corner. Web maps are used forlocation-based services. Local search is a recent approach to internet searching that incorporates geographical information into search queries so that the links that you return are more relevant to where you are. It developed out of an increasing awareness that manysearch engineusers are using it to look for a business or service in the local area. Local search has stimulated the development of web mapping, which is used either as a tool to use in geographically restricting your search (seeLive Search Maps) or as an additional resource to be returned along with search result listings (seeGoogle Maps). It has also led to an increase in the number of small businesses advertising on the web. In distributed GIS, the term mashup refers to a generic web service which combines content and functionality from disparate sources; mashups reflect a separation of information and presentation. Mashups are increasingly being used in commercial and government applications as well as in the public domain.[38]When used in GIS, it reflects the concept of connecting an application with a mapping service. An examples is combining Google maps withChicagocrime statistics to create theChicago crime statistics map. Mashups are fast, provide value for money and remove responsibility for the data from the creator. 
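The origin transformation described above (image coordinates start at the top-left corner, while geographic convention puts the origin at the bottom-left corner) amounts to flipping the y axis, as in this small illustrative Python sketch:

def image_to_geo(x, y, image_height):
    """Convert top-left-origin image coordinates to bottom-left-origin coordinates."""
    return x, image_height - y      # x is unchanged; y is measured from the bottom instead

print(image_to_geo(120, 30, image_height=400))   # (120, 370)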
Second-generation systems provide mashups mainly based on URL parameters, while Third generation systems (e.g. Google Maps) allow customization via script (e.g. JavaScript).[39] The main standards for Distributed GIS are provided by theOpen Geospatial Consortium(OGC). OGC is a non-profit international group that seeks to Web-Enable GIS and, in turn Geo-Enable the web. One of the major issues concerning distributed GIS is the interoperability of the data since it can come in different formats using different projection systems. OGC standards seek to provide interoperability between data and to integrate existing data. Global System for Mobile Communications (GSM) is a global standard for mobile phones around the world. Networks using the GSM system offer transmission of voice, data, and messages in text and multimedia form and provide web, telenet, FTP, email services, etc., over the mobile network. Almost two million people are now using GSM. Five main standards of GSM exist: GSM 400, GSM 850, GSM 900, GSM-1800 (DCS), and GSM1900 (PCS). GSM 850 and GSM 1900 are used in North America, parts of Latin America, and parts of Africa. In Europe, Asia, and Australia GSM 900/1800 standard is used. GSM consists of two components: the mobile radio telephone andSubscriber Identity Module. GSM is acellular network, which is a radio network made up of a number of cells. For each cell, the transmitter (known as a base station) is transmitting and receiving signals. The base station is controlled through the Base Station Controller via the Mobile Switching Centre. For GSM enhancementGeneral Packet Radio Service(GPRS), a packet-oriented data service for data transmission, andUniversal Mobile Telecommunications System(UTMS), the Third Generation (3G) mobile communication system, technology was introduced. Both provide similar services to 2G, but with greaterbandwidthand speed. Wireless Application Protocol(WAP) is a standard for the data transmission of internet content and services. It is a secure specification that allows users to access information instantly via mobile phones, pagers, two-way radios, smartphones, and communicators. WAP supportsHTMLandXML, andWMLlanguage, and is specifically designed for small screens and one-hand navigation without a keyboard. WML is scalable from two-line text displays up to the graphical screens found on smartphones. It is much stricter than HTML and is similar toJavaScript.
https://en.wikipedia.org/wiki/Distributed_GIS
Distributed networking is a distributed computing network system where components of the program and data depend on multiple sources. Distributed networking, used in distributed computing, is the network system over which computer programming, software, and its data are spread out across more than one computer but communicate complex messages through their nodes (computers) and are dependent upon each other. The goal of a distributed network is to share resources, typically to accomplish a single or similar goal.[1][2] Usually, this takes place over a computer network,[1] however, internet-based computing is rising in popularity.[3] Typically, a distributed networking system is composed of processes, threads, agents, and distributed objects.[3] Merely distributing physical components is not enough to constitute a distributed network; typically, distributed networking uses concurrent program execution.[2] Client/server computing is a type of distributed computing where one computer, a client, requests data from the server, a primary computing center, which responds to the client directly with the requested data, sometimes through an agent. Client/server distributed networking is also popular in web-based computing.[3] Client/server is the principle that a client computer can provide certain capabilities for a user and request others from other computers that provide services for the clients. The Web's Hypertext Transfer Protocol is basically all client/server.[1][4][5][6] A distributed network can also be agent-based, where what controls the agent or component is loosely defined, and the components can have either pre-configured or dynamic settings.[3] Decentralization is where each computer on the network can be used for the computing task at hand, which is the opposite of the client/server model. Typically, only idle computers are used, and in this way, it is thought that networks are more efficient.[5] Peer-to-peer (P2P) computation is based on a decentralized, distributed network, including distributed ledger technology such as blockchain.[7][8] Mesh networking is a local network composed of devices (nodes) that was originally designed to communicate through radio waves, allowing for different types of devices. Each node is able to communicate with every other node on the network. Prior to the 1980s, computing was typically centralized on a single computer.[9] Today, computing resources (computers or servers) are typically physically distributed in many places, which distributed networking excels at. Some types of computing don't scale well past a certain level of parallelism and the gains from superior hardware components, and thus are bottlenecked, for example by Very Large Scale Instruction Words. By increasing the number of computers rather than the power of their components, these bottlenecks are overcome. Situations where resource sharing becomes an issue, or where higher fault tolerance is needed, also find aid in distributed networking.[2] Distributed networking is also very supportive of higher levels of anonymity.[10] Enterprises with rapid growth and scaling needs may find it challenging to maintain their own distributed network under the traditional client/server computing model. Cloud computing is the use of distributed computing to provide Internet-based applications, storage, and computing services. A cloud is a cluster of computers or servers that are closely connected to provide scalable, high-capacity computing or related tasks.[2][11]
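As a rough illustration of the client/server model described above, the sketch below runs a toy server and client in one process: the client requests data and the server responds directly. It is a minimal example using Python's standard socket module, not a depiction of any particular system; the address, port, and request format are arbitrary.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 9099      # loopback address used only for this demonstration
ready = threading.Event()

def server():
    """Toy server: accepts one connection and answers the request."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                  # signal that the server is now listening
        conn, _addr = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(f"server response to: {request}".encode())

def client():
    """Toy client: sends a request and prints the reply."""
    ready.wait()                     # do not connect before the server is listening
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"GET resource-42")
        print(cli.recv(1024).decode())

t = threading.Thread(target=server)
t.start()
client()
t.join()
```

HTTP follows the same request/response shape, with the roles of client and server played by browsers and web servers at much larger scale.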
https://en.wikipedia.org/wiki/Distributed_networking
A distributed operating system is system software over a collection of independent, networked, communicating, and physically separate computational nodes. It handles jobs which are serviced by multiple CPUs.[1] Each individual node holds a specific software subset of the global aggregate operating system. Each subset is a composite of two distinct service provisioners.[2] The first is a ubiquitous minimal kernel, or microkernel, that directly controls that node's hardware. The second is a higher-level collection of system management components that coordinate the node's individual and collaborative activities. These components abstract microkernel functions and support user applications.[3] The microkernel and the collection of management components work together. They support the system's goal of integrating multiple resources and processing functionality into an efficient and stable system.[4] This seamless integration of individual nodes into a global system is referred to as transparency, or single system image, describing the illusion provided to users that the global system is a single computational entity. A distributed OS provides the essential services and functionality required of an OS but adds attributes and particular configurations to allow it to support additional requirements such as increased scale and availability. To a user, a distributed OS works in a manner similar to a single-node, monolithic operating system. That is, although it consists of multiple nodes, it appears to users and applications as a single node. Separating minimal system-level functionality from additional user-level modular services provides a "separation of mechanism and policy". Mechanism and policy can be simply interpreted as "how something is done" versus "what should be done", respectively. This separation increases flexibility and scalability. At each locale (typically a node), the kernel provides a minimally complete set of node-level utilities necessary for operating a node's underlying hardware and resources. These mechanisms include allocation, management, and disposition of a node's resources, processes, communication, and input/output management support functions.[5] Within the kernel, the communications sub-system is of foremost importance for a distributed OS.[3] In a distributed OS, the kernel often supports a minimal set of functions, including low-level address space management, thread management, and inter-process communication (IPC). A kernel of this design is referred to as a microkernel.[6][7] Its modular nature enhances reliability and security, essential features for a distributed OS.[8] System management components are software processes that define the node's policies. These components are the part of the OS outside the kernel. They provide higher-level communication, process and resource management, reliability, performance, and security. The components match the functions of a single-entity system, adding the transparency required in a distributed environment.[3] The distributed nature of the OS requires additional services to support a node's responsibilities to the global system. In addition, the system management components accept the "defensive" responsibilities of reliability, availability, and persistence. These responsibilities can conflict with each other. A consistent approach, balanced perspective, and a deep understanding of the overall system can assist in identifying diminishing returns.
Separation of policy and mechanism mitigates such conflicts.[9] The architecture and design of a distributed operating system must realize both individual node and global system goals. Architecture and design must be approached in a manner consistent with separating policy and mechanism. In doing so, a distributed operating system attempts to provide an efficient and reliable distributed computing framework allowing for an absolute minimal user awareness of the underlying command and control efforts.[8] The multi-level collaboration between a kernel and the system management components, and in turn between the distinct nodes in a distributed operating system is the functional challenge of the distributed operating system. This is the point in the system that must maintain a perfect harmony of purpose, and simultaneously maintain a complete disconnect of intent from implementation. This challenge is the distributed operating system's opportunity to produce the foundation and framework for a reliable, efficient, available, robust, extensible, and scalable system. However, this opportunity comes at a very high cost in complexity. In a distributed operating system, the exceptional degree of inherent complexity could easily render the entire system an anathema to any user. As such, the logical price of realizing a distributed operation system must be calculated in terms of overcoming vast amounts of complexity in many areas, and on many levels. This calculation includes the depth, breadth, and range of design investment and architectural planning required in achieving even the most modest implementation.[10] These design and development considerations are critical and unforgiving. For instance, a deep understanding of a distributed operating system's overall architectural and design detail is required at an exceptionally early point.[1]An exhausting array of design considerations are inherent in the development of a distributed operating system. Each of these design considerations can potentially affect many of the others to a significant degree. This leads to a massive effort in balanced approach, in terms of the individual design considerations, and many of their permutations. As an aid in this effort, most rely on documented experience and research in distributed computing power. Research and experimentation efforts began in earnest in the 1970s and continued through the 1990s, with focused interest peaking in the late 1980s. A number of distributed operating systems were introduced during this period; however, very few of these implementations achieved even modest commercial success. Fundamental and pioneering implementations of primitive distributed operating system component concepts date to the early 1950s.[11][12][13]Some of these individual steps were not focused directly on distributed computing, and at the time, many may not have realized their important impact. These pioneering efforts laid important groundwork, and inspired continued research in areas related to distributed computing.[14][15][16][17][18][19] In the mid-1970s, research produced important advances in distributed computing. These breakthroughs provided a solid, stable foundation for efforts that continued through the 1990s. The accelerating proliferation ofmulti-processorandmulti-core processorsystems research led to a resurgence of the distributed OS concept. One of the first efforts was theDYSEAC, a general-purposesynchronouscomputer. 
In one of the earliest publications of theAssociation for Computing Machinery, in April 1954, a researcher at theNational Bureau of Standards– now the NationalInstitute of Standards and Technology(NIST) – presented a detailed specification of the DYSEAC. The introduction focused upon the requirements of the intended applications, including flexible communications, but also mentioned other computers: Finally, the external devices could even include other full-scale computers employing the same digital language as the DYSEAC. For example, the SEAC or other computers similar to it could be harnessed to the DYSEAC and by use of coordinated programs could be made to work together in mutual cooperation on a common task… Consequently[,] the computer can be used to coordinate the diverse activities of all the external devices into an effective ensemble operation. The specification discussed the architecture of multi-computer systems, preferring peer-to-peer rather than master-slave. Each member of such an interconnected group of separate computers is free at any time to initiate and dispatch special control orders to any of its partners in the system. As a consequence, the supervisory control over the common task may initially be loosely distributed throughout the system and then temporarily concentrated in one computer, or even passed rapidly from one machine to the other as the need arises. …the various interruption facilities which have been described are based on mutual cooperation between the computer and the external devices subsidiary to it, and do not reflect merely a simple master-slave relationship. This is one of the earliest examples of a computer with distributed control. TheDept. of the Armyreports[20]certified it reliable and that it passed all acceptance tests in April 1954. It was completed and delivered on time, in May 1954. This was a "portable computer", housed in atractor-trailer, with 2 attendant vehicles and6 tons of refrigerationcapacity. Described as an experimental input-output system, theLincoln TX-2emphasized flexible, simultaneously operational input-output devices, i.e.,multiprogramming. The design of the TX-2 was modular, supporting a high degree of modification and expansion.[12] The system employed The Multiple-Sequence Program Technique. This technique allowed multipleprogram countersto each associate with one of 32 possible sequences of program code. These explicitly prioritized sequences could be interleaved and executed concurrently, affecting not only the computation in process, but also the control flow of sequences and switching of devices as well. Much discussion related to device sequencing. Similar to DYSEAC the TX-2 separately programmed devices can operate simultaneously, increasingthroughput. The full power of the central unit was available to any device. The TX-2 was another example of a system exhibiting distributed control, its central unit not having dedicated control. One early effort at abstracting memory access was Intercommunicating Cells, where a cell was composed of a collection ofmemoryelements. A memory element was basically a binary electronicflip-floporrelay. Within a cell there were two types of elements,symbolandcell. Each cell structure storesdatain astringof symbols, consisting of anameand a set ofparameters. Information is linked through cell associations.[13] The theory contended that addressing is a wasteful and non-valuablelevel of indirection. Information was accessed in two ways, direct and cross-retrieval. 
Direct retrieval accepts a name and returns a parameter set. Cross-retrieval projects through parameter sets and returns a set of names containing the given subset of parameters. This was similar to a modified hash table data structure that allowed multiple values (parameters) for each key (name). This configuration was ideal for distributed systems. The constant-time projection through memory for storage and retrieval was inherently atomic and exclusive. The cellular memory's intrinsic distributed characteristics would be invaluable. The impact on the user, hardware/device, or application programming interfaces was indirect. The authors were considering distributed systems, stating: "We wanted to present here the basic ideas of a distributed logic system with... the macroscopic concept of logical design, away from scanning, from searching, from addressing, and from counting, is equally important. We must, at all cost, free ourselves from the burdens of detailed local problems which only befit a machine low on the evolutionary scale of machines." Research on distributed operating systems draws on a body of foundational work, including algorithms for scalable synchronization on shared-memory multiprocessors,[21] measurements of a distributed file system,[22] and memory coherence in shared virtual memory systems;[23] work on transactions (Sagas[24]) and transactional memory (composable memory transactions,[25] transactional memory as architectural support for lock-free data structures,[26] software transactional memory for dynamic-sized data structures,[27] and software transactional memory[28]); global-scale persistent storage (OceanStore[29]); replication and consensus (weighted voting for replicated data[30] and consensus in the presence of partial synchrony[31]); sanity checks (the Byzantine Generals Problem[32] and fail-stop processors for fault-tolerant computing systems[33]); and recoverability (distributed snapshots for determining global states of distributed systems[34] and optimistic recovery in distributed systems[35]). To illustrate these ideas, examine three system architectures: centralized, decentralized, and distributed. In this examination, consider three structural aspects: organization, connection, and control. Organization describes a system's physical arrangement characteristics. Connection covers the communication pathways among nodes. Control manages the operation of the earlier two considerations. A centralized system has one level of structure, where all constituent elements directly depend upon a single control element. A decentralized system is hierarchical. The bottom level unites subsets of a system's entities. These entity subsets in turn combine at higher levels, ultimately culminating at a central master element. A distributed system is a collection of autonomous elements with no concept of levels. Centralized systems connect constituents directly to a central master entity in a hub-and-spoke fashion. A decentralized system (also known as a network system) incorporates direct and indirect paths between constituent elements and the central entity. Typically this is configured as a hierarchy with only one shortest path between any two elements. Finally, the distributed operating system requires no pattern; direct and indirect connections are possible between any two elements. Consider the 1970s phenomenon of “string art” or a spirograph drawing as a fully connected system, and the spider's web or the Interstate Highway System between U.S. cities as examples of a partially connected system. Centralized and decentralized systems have directed flows of connection to and from the central entity, while distributed systems communicate along arbitrary paths.
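Returning to the Intercommunicating Cells scheme described earlier, the direct and cross-retrieval operations behave like an in-memory mapping that supports look-up by name and projection by parameters. The sketch below is a minimal illustration; the cell names and parameter sets are invented purely as examples.

```python
# Each "cell" stores a name and a set of parameters.
cells = {
    "cell-a": {"red", "round", "small"},
    "cell-b": {"red", "square"},
    "cell-c": {"blue", "round", "small"},
}

def direct_retrieval(name):
    """Direct retrieval: accept a name and return its parameter set."""
    return cells[name]

def cross_retrieval(subset):
    """Cross-retrieval: project through all parameter sets and return the
    names whose parameters contain the given subset."""
    return {name for name, params in cells.items() if subset <= params}

print(direct_retrieval("cell-a"))           # {'red', 'round', 'small'}
print(cross_retrieval({"red"}))             # {'cell-a', 'cell-b'}
print(cross_retrieval({"round", "small"}))  # {'cell-a', 'cell-c'}
```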
Control is the pivotal notion of the third consideration. Control involves allocating tasks and data to system elements, balancing efficiency, responsiveness, and complexity. Centralized and decentralized systems offer more control, potentially easing administration by limiting options. Distributed systems are more difficult to explicitly control, but scale better horizontally and offer fewer points of system-wide failure; their associations conform to the needs imposed by the design rather than to organizational chaos. Transparency, or single-system image, refers to the ability of an application to treat the system on which it operates without regard to whether it is distributed and without regard to hardware or other implementation details. Many areas of a system can benefit from transparency, including access, location, performance, naming, and migration. The consideration of transparency directly affects decision making in every aspect of the design of a distributed operating system. Transparency can impose certain requirements and/or restrictions on other design considerations. Systems can optionally violate transparency to varying degrees to meet specific application requirements. For example, a distributed operating system may present a hard drive on one computer as "C:" and a drive on another computer as "G:". The user does not require any knowledge of device drivers or the drive's location; both devices work the same way, from the application's perspective. A less transparent interface might require the application to know which computer hosts the drive. Inter-process communication (IPC) is the implementation of general communication, process interaction, and dataflow between threads and/or processes, both within a node and between nodes in a distributed OS. The intra-node and inter-node communication requirements drive low-level IPC design, which is the typical approach to implementing communication functions that support transparency. In this sense, inter-process communication is the greatest underlying concept in the low-level design considerations of a distributed operating system. Process management provides policies and mechanisms for effective and efficient sharing of resources between distributed processes. These policies and mechanisms support operations involving the allocation and de-allocation of processes and ports to processors, as well as mechanisms to run, suspend, migrate, halt, or resume process execution. While these resources and operations can be either local or remote with respect to each other, the distributed OS maintains state and synchronization over all processes in the system. As an example, load balancing is a common process management function. Load balancing monitors node performance and is responsible for shifting activity across nodes when the system is out of balance. One load balancing function is picking a process to move. The kernel may employ several selection mechanisms, including priority-based choice. The mechanism chooses a process according to a policy such as 'newest request'; the system implements the policy. System resources such as memory, files, and devices are distributed throughout a system, and at any given moment any of these nodes may have light to idle workloads. Load sharing and load balancing require many policy-oriented decisions, ranging from finding idle CPUs to deciding when to move a process and which process to move.
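The division of load balancing into a selection mechanism and an interchangeable policy, such as the 'newest request' policy mentioned above, can be sketched as follows. The process records and policy functions are hypothetical; a real kernel would of course work with live scheduling state rather than a list of dataclasses.

```python
from dataclasses import dataclass

@dataclass
class Process:
    pid: int
    arrival_time: float  # when the request for this process arrived
    load: float          # an abstract measure of its resource demand

def newest_request(p):
    """Policy: prefer the most recently arrived process."""
    return -p.arrival_time

def lightest_load(p):
    """Policy: prefer the process that is cheapest to move."""
    return p.load

def pick_process_to_migrate(processes, policy):
    """Mechanism: select one process to migrate according to the supplied policy."""
    return min(processes, key=policy)

procs = [Process(1, 10.0, 0.3), Process(2, 12.5, 0.8), Process(3, 11.0, 0.5)]
print(pick_process_to_migrate(procs, newest_request).pid)  # 2 (newest arrival)
print(pick_process_to_migrate(procs, lightest_load).pid)   # 1 (lightest load)
```

The mechanism stays fixed while policies can be swapped, which is the flexibility the separation of mechanism and policy is meant to provide.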
Many algorithms exist to aid in these decisions; however, this calls for a second level of decision-making policy in choosing the algorithm best suited for the scenario and the conditions surrounding it. A distributed OS can provide the necessary resources and services to achieve high levels of reliability, or the ability to prevent and/or recover from errors. Faults are physical or logical defects that can cause errors in the system. For a system to be reliable, it must somehow overcome the adverse effects of faults. The primary methods for dealing with faults include fault avoidance, fault tolerance, and fault detection and recovery. Fault avoidance covers proactive measures taken to minimize the occurrence of faults; these proactive measures can take the form of transactions, replication, and backups. Fault tolerance is the ability of a system to continue operation in the presence of a fault; when a fault occurs, the system should detect it and recover full functionality. In any event, any actions taken should make every effort to preserve the single system image. Availability is the fraction of time during which the system can respond to requests. Many benchmark metrics quantify performance: throughput, response time, job completions per unit time, system utilization, and so on. With respect to a distributed OS, performance most often distills to a balance between process parallelism and IPC.[citation needed] Managing the task granularity of parallelism in a sensible relation to the messages required to support it is extremely effective.[citation needed] Also, identifying when it is more beneficial to migrate a process to its data, rather than copy the data, is effective as well.[citation needed] Cooperating concurrent processes have an inherent need for synchronization, which ensures that changes happen in a correct and predictable fashion. Three basic situations define the scope of this need. Improper synchronization can lead to multiple failure modes, including loss of atomicity, consistency, isolation and durability, deadlock, livelock, and loss of serializability.[citation needed] Flexibility in a distributed operating system is enhanced through the modular characteristics of the distributed OS and by providing a richer set of higher-level services. The completeness and quality of the kernel/microkernel simplifies implementation of such services and potentially allows greater choice of providers for such services.[citation needed] Published design descriptions include the architectural design of the E1 distributed operating system,[38] the Cronus distributed operating system,[39] and the design and development of the MINIX distributed operating system.[40]
https://en.wikipedia.org/wiki/Distributed_operating_system
Eventual consistency is a consistency model used in distributed computing to achieve high availability. Put simply: if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value.[1] Eventual consistency, also called optimistic replication,[2] is widely deployed in distributed systems and has origins in early mobile computing projects.[3] A system that has achieved eventual consistency is often said to have converged, or achieved replica convergence.[4] Eventual consistency is a weak guarantee – most stronger models, like linearizability, are trivially eventually consistent. Eventually-consistent services are often classified as providing BASE semantics (basically available, soft state, eventual consistency), in contrast to traditional ACID (atomicity, consistency, isolation, durability).[5][6] In chemistry, a base is the opposite of an acid, which helps in remembering the acronym.[7] Roughly, basically available means that reads and writes are available as much as possible, but without consistency guarantees; soft state means that, in the absence of such guarantees, the state of the data is only known with some probability after a period of time; and eventually consistent means that if the system is allowed to run long enough after a set of writes, the state of the data becomes known. Eventual consistency faces criticism[8] for adding complexity to distributed software applications. This complexity arises because eventual consistency provides only a liveness guarantee (ensuring reads eventually return the same value) without safety guarantees—allowing any intermediate value before convergence. Application developers find this challenging because it differs from single-threaded programming, where variables reliably return their assigned values immediately. With weak consistency guarantees, developers must carefully consider these limitations, as incorrect assumptions about consistency levels can lead to subtle bugs that only surface during network failures or high concurrency.[9] In order to ensure replica convergence, a system must reconcile differences between multiple copies of distributed data. This consists of two parts: exchanging versions or updates of data between servers, and, when concurrent updates have occurred, choosing an appropriate final state, known as reconciliation. The most appropriate approach to reconciliation depends on the application. A widespread approach is "last writer wins".[1] Another is to invoke a user-specified conflict handler.[4] Timestamps and vector clocks are often used to detect concurrency between updates. Some people use "first writer wins" in situations where "last writer wins" is unacceptable.[11] Reconciliation of concurrent writes must occur sometime before the next read, and can be scheduled at different instants, such as at read time, at write time, or asynchronously in the background.[3][12] Whereas eventual consistency is only a liveness guarantee (updates will be observed eventually), strong eventual consistency (SEC) adds the safety guarantee that any two nodes that have received the same (unordered) set of updates will be in the same state. If, furthermore, the system is monotonic, the application will never suffer rollbacks. A common approach to ensure SEC is conflict-free replicated data types.[13]
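A minimal sketch of the "last writer wins" reconciliation rule mentioned above: each replica tags its value with a (timestamp, node) stamp, and merging keeps whichever write carries the larger stamp. This is illustrative only; real systems typically use logical or hybrid clocks rather than the plain integer timestamps assumed here.

```python
class LWWRegister:
    """Last-writer-wins register: a value tagged with (timestamp, node) for reconciliation."""

    def __init__(self, node):
        self.node = node
        self.value = None
        self.stamp = (0, node)            # (timestamp, node) — node name breaks ties

    def write(self, value, timestamp):
        self.value = value
        self.stamp = (timestamp, self.node)

    def merge(self, other):
        """Reconcile with another replica: the write with the larger stamp wins."""
        if other.stamp > self.stamp:
            self.value, self.stamp = other.value, other.stamp

# Two replicas accept concurrent writes, then reconcile and converge.
a, b = LWWRegister("a"), LWWRegister("b")
a.write("apple", timestamp=1)
b.write("banana", timestamp=2)
a.merge(b)
b.merge(a)
print(a.value, b.value)   # banana banana — both replicas converge
```

Because the tie-break is deterministic, any two replicas that have exchanged the same set of writes end up with the same value, which is what replica convergence requires.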
https://en.wikipedia.org/wiki/Eventual_consistency
TheACM Symposium on Principles of Distributed Computing(PODC) is anacademic conferencein the field ofdistributed computingorganised annually by theAssociation for Computing Machinery(special interest groupsSIGACTandSIGOPS).[1] Work presented at PODC typically studies theoretical aspects of distributed computing, such as the design and analysis ofdistributed algorithms. The scope of PODC is similar to the scope ofInternational Symposium on Distributed Computing(DISC),[2]with the main difference being geographical: DISC is usually organized in European locations,[3]while PODC has been traditionally held in North America.[4]TheEdsger W. Dijkstra Prize in Distributed Computingis presented alternately at PODC and at DISC.[5][6][7] Other closely related conferences include ACMSymposium on Parallelism in Algorithms and Architectures(SPAA), which – as the name suggests – puts more emphasis onparallel algorithmsthan distributed algorithms. PODC and SPAA have been co-located in 1998, 2005, and 2009. PODC is often mentioned to be one of the top conferences in the field of distributed computing.[8][9][10]In the 2007 Australian Ranking of ICT Conferences, PODC was the only conference in the field that received the highest ranking, "A+".[11] During the recent years 2004–2009, the number of regular papers submitted to PODC has fluctuated between 110 and 224 each year. Of these submissions, 27–40 papers have been accepted for presentation at the conference each year; acceptance rates for regular papers have been between 16% and 31%.[12][13] PODC was first organised on 18–20 August 1982, inOttawa, Ontario, Canada.[14]PODC was part of theFederated Computing Research Conferencein 1996, 1999 and 2011. Between 1982 and 2009, PODC was always held in a North American location – usually in the United States or Canada, and once in Mexico.[4]In 2010, PODC was held in Europe for the first time in its history,[4]and in the same year, its European sister conference DISC was organised in the United States for the first time in its history.[3][15]PODC 2010 took place inZürich, Switzerland, and DISC 2010 took place inCambridge, Massachusetts. Since 2000, a review of the PODC conference appears in the year-ending issue of the ACM SIGACT News Distributed Computing Column.[16]The review is usually written by a member of the distributed computing research community.
https://en.wikipedia.org/wiki/Edsger_W._Dijkstra_Prize_in_Distributed_Computing
A federation is a group of computing or network providers agreeing upon standards of operation in a collective fashion. The most widely known example is the Internet, which is federated around the Internet Protocol (IP) stack of protocols. Another, more visible, example is email, where the common use of the Simple Mail Transfer Protocol (SMTP) allows alice@example.com to communicate with bob@example.edu and eve@example.org, although the software implementing each of these systems can be completely different. The term may be used when describing the inter-operation of two distinct, formerly disconnected, telecommunications networks that may have different internal structures.[1] The term "federated cloud" refers to facilitating the interconnection of two or more geographically separate computing clouds.[2] The term may also be used when groups attempt to delegate collective authority of development to prevent fragmentation. In a telecommunication interconnection, the internal modi operandi of the different systems are irrelevant to the existence of a federation. In networking systems, to be federated means users are able to send messages from one network to the other.[citation needed] This is not the same as having a client that can operate with both networks but interacts with each independently. For example, in 2009, Google allowed GMail users to log into their AOL Instant Messenger (AIM) accounts from GMail. One could not send messages from GTalk accounts or XMPP (with which Google/GTalk is federated—XMPP lingo for federation is s2s, which Facebook's and MSN Live's implementations do not support[4]) to AIM screen names, nor vice versa.[5] In May 2011, AIM and Gmail federated, allowing users of each network to add and communicate with each other.
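In a federated system such as email, the part of an address after the "@" determines which independently operated network receives a message. The sketch below illustrates that routing step with a hypothetical domain-to-server registry; it does not depict the behaviour of any specific mail or messaging implementation.

```python
# Hypothetical registry mapping each federated domain to its own server.
DOMAIN_SERVERS = {
    "example.com": "mail.example.com",
    "example.edu": "mx.example.edu",
    "example.org": "smtp.example.org",
}

def route_message(sender, recipient, body):
    """Deliver a message to whichever server is responsible for the recipient's domain.

    Each domain may run completely different software; only the shared protocol
    (here, the address format) has to be agreed upon.
    """
    domain = recipient.split("@", 1)[1]
    server = DOMAIN_SERVERS[domain]
    return f"delivering '{body}' from {sender} to {recipient} via {server}"

print(route_message("alice@example.com", "bob@example.edu", "hello"))
```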
https://en.wikipedia.org/wiki/Federation_(information_technology)
Flat Neighborhood Network (FNN)is atopologyfordistributed computingand other computer networks. Eachnodeconnects to two or moreswitcheswhich, ideally, entirely cover the node collection, so that each node can connect to any other node in two "hops" (jump up to one switch and down to the other node). This contrasts to topologies with fewer cables per node which communicate with remote nodes via intermediate nodes, as in Hypercube (seeThe Connection Machine).
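The defining property described above — every pair of nodes shares at least one switch, so any node can reach any other in two hops — can be checked with a short sketch. The node-to-switch assignment below is a made-up example, not a real cluster layout.

```python
from itertools import combinations

# Hypothetical assignment of nodes to the switches they are cabled into.
node_switches = {
    "n1": {"s1", "s2"},
    "n2": {"s1", "s3"},
    "n3": {"s2", "s3"},
    "n4": {"s1", "s2"},
}

def is_flat_neighborhood(assignment):
    """A flat neighborhood network requires every pair of nodes to share a switch,
    so that any node can reach any other in two hops (up to a switch, down again)."""
    return all(assignment[a] & assignment[b]
               for a, b in combinations(assignment, 2))

print(is_flat_neighborhood(node_switches))  # True for this example
```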
https://en.wikipedia.org/wiki/Flat_neighborhood_network
Fog computing[1][2]orfog networking, also known asfogging,[3][4]is an architecture that usesedge devicesto carry out a substantial amount of computation (edge computing), storage, and communication locally and routed over theInternet backbone. In 2011, the need to extend cloud computing with fog computing emerged, in order to cope with huge number of IoT devices and big data volumes for real-time low-latency applications.[5]Fog computing, also called edge computing, is intended for distributed computing where numerous "peripheral" devices connect to acloud. The word "fog" refers to its cloud-like properties, but closer to the "ground", i.e. IoT devices.[6]Many of these devices will generate voluminous raw data (e.g., from sensors), and rather than forward all this data to cloud-based servers to be processed, the idea behind fog computing is to do as much processing as possible using computing units co-located with the data-generating devices, so that processed rather than raw data is forwarded, and bandwidth requirements are reduced. An additional benefit is that the processed data is most likely to be needed by the same devices that generated the data, so that by processing locally rather than remotely, the latency between input and response is minimized. This idea is not entirely new: in non-cloud-computing scenarios, special-purpose hardware (e.g., signal-processing chips performingfast Fourier transforms) has long been used to reduce latency and reduce the burden on a CPU. Fog networking consists of acontrol planeand adata plane. For example, on the data plane, fog computing enables computing services to reside at the edge of the network as opposed to servers in a data-center. Compared tocloud computing, fog computing emphasizes proximity to end-users and client objectives (e.g. operational costs, security policies,[7]resource exploitation), dense geographical distribution and context-awareness (for what concerns computational and IoT resources), latency reduction and backbone bandwidth savings to achieve betterquality of service(QoS)[8]and edge analytics/stream mining, resulting in superior user-experience[9]and redundancy in case of failure while it is also able to be used inAssisted Livingscenarios.[10][11][12][13][14][15] Fog networking supports theInternet of Things(IoT) concept, in which most of the devices used by humans on a daily basis will be connected to each other. Examples include phones, wearable health monitoring devices,connected vehicleandaugmented realityusing devices such as theGoogle Glass.[16][17][18][19][20]IoT devices are often resource-constrained and have limited computational abilities to perform cryptography computations. A fog node can provide security for IoT devices by performing these cryptographic computations instead.[21] SPAWAR, a division of the US Navy, is prototyping and testing a scalable, secure Disruption Tolerant Mesh Network to protect strategic military assets, both stationary and mobile. Machine-control applications, running on the mesh nodes, "take over", when Internet connectivity is lost. Use cases include Internet of Things e.g. 
smart drone swarms.[22] The University of Melbourne is addressing the challenges of collecting and processing data from cameras, ECG devices, laptops, smartphones, and IoT devices with its project FogBus 2, which uses edge/fog and Oracle Cloud Infrastructure to process data in real time.[23] ISO/IEC 20248 provides a method whereby the data of objects identified by edge computing using Automated Identification Data Carriers (AIDC), a barcode and/or RFID tag, can be read, interpreted, verified, and made available into the "Fog" and on the "Edge," even when the AIDC tag has moved on.[24] The term "fog computing" was first developed by Cisco in 2012.[25] On November 19, 2015, Cisco Systems, ARM Holdings, Dell, Intel, Microsoft, and Princeton University founded the OpenFog Consortium to promote interests and development in fog computing.[26] Cisco Sr. Managing Director Helder Antunes became the consortium's first chairman and Intel's Chief IoT Strategist Jeff Fedders became its first president.[27] Both cloud computing and fog computing provide storage, applications, and data to end-users. However, fog computing is closer to end-users and has wider geographical distribution.[28] 'Cloud computing' is the practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer.[29] Also known as edge computing or fogging, fog computing facilitates the operation of compute, storage, and networking services between end devices and cloud computing data centers. In March 2018 the National Institute of Standards and Technology released a definition of fog computing, adopting much of Cisco's commercial terminology, as NIST Special Publication 500-325, Fog Computing Conceptual Model, which defines fog computing as a horizontal, physical or virtual resource paradigm that resides between smart end-devices and traditional cloud computing or data centers.[6] This paradigm supports vertically-isolated, latency-sensitive applications by providing ubiquitous, scalable, layered, federated, distributed computing, storage, and network connectivity. Thus, fog computing is most distinguished by distance from the edge. In the theoretical model of fog computing, fog computing nodes are physically and functionally operative between edge nodes and the centralized cloud.[30] Much of the terminology is undefined, including key architectural terms like "smart", and the distinction between fog computing and edge computing is not generally agreed. While edge computing typically refers to the location where services are instantiated, fog computing implies distribution of the communication, computation, storage resources, and services on or close to devices and systems in the control of end-users.[31][32] Fog computing provides a medium-weight, intermediate level of computing power.[33] Rather than a substitute, fog computing often serves as a complement to cloud computing.[34] Fog computing is more energy-efficient than cloud computing.[35] IEEE adopted the fog computing standards proposed by the OpenFog Consortium.[36]
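The bandwidth and latency argument behind fog computing — process raw readings close to where they are produced and forward only a compact summary — can be sketched as follows. The sensor, window size, and upstream call are all hypothetical placeholders, not part of any fog platform's API.

```python
import random

def read_sensor():
    """Stand-in for a local IoT sensor reading (e.g., a temperature sample)."""
    return 20.0 + random.random() * 5.0

def send_to_cloud(summary):
    """Stand-in for a call to a cloud service; only the summary crosses the backbone."""
    print(f"uploading summary: {summary}")

def fog_node(window_size=100):
    """Aggregate a window of raw readings locally and forward a compact summary."""
    samples = [read_sensor() for _ in range(window_size)]
    summary = {
        "count": len(samples),
        "mean": sum(samples) / len(samples),
        "max": max(samples),
    }
    send_to_cloud(summary)   # three numbers forwarded instead of 100 raw samples

fog_node()
```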
https://en.wikipedia.org/wiki/Fog_computing
Folding@home(FAHorF@h) is adistributed computingproject aimed to help scientists develop new therapeutics for a variety of diseases by the means of simulatingprotein dynamics. This includes the process ofprotein foldingand the movements ofproteins, and is reliant on simulations run on volunteers'personal computers.[5]Folding@home is currently based at theUniversity of Pennsylvaniaand led byGreg Bowman, a former student ofVijay Pande.[6] The project utilizesgraphics processing units(GPUs),central processing units(CPUs), andARMprocessors like those on theRaspberry Pifor distributed computing and scientific research. The project uses statisticalsimulationmethodology that is aparadigm shiftfrom traditional computing methods.[7]As part of theclient–server modelnetwork architecture, the volunteered machines each receive pieces of a simulation (work units), complete them, and return them to the project'sdatabase servers, where the units are compiled into an overall simulation. Volunteers can track their contributions on the Folding@home website, which makes volunteers' participation competitive and encourages long-term involvement. Folding@home is one of the world's fastest computing systems. With heightened interest in the project as a result of theCOVID-19 pandemic,[8]the system achieved a speed of approximately 1.22exaflopsby late March 2020 and reached 2.43 exaflops by April 12, 2020,[9]making it the world's firstexaflop computing system. This level of performance from its large-scale computing network has allowed researchers to runcomputationally costlyatomic-level simulations of protein folding thousands of times longer than formerly achieved. Since its launch on October 1, 2000, Folding@home has been involved in the production of 226scientific research papers.[10]Results from the project's simulations agree well with experiments.[11][12][13] Proteinsare an essential component to many biological functions and participate in virtually all processes withinbiological cells. They often act asenzymes, performing biochemical reactions includingcell signaling, molecular transportation, andcellular regulation. As structural elements, some proteins act as a type ofskeleton for cells, and asantibodies, while other proteins participate in theimmune system. Before a protein can take on these roles, it must fold into a functionalthree-dimensional structure, a process that often occurs spontaneously and is dependent on interactions within itsamino acidsequence and interactions of the amino acids with their surroundings. Protein folding is driven by the search to find the most energetically favorable conformation of the protein, i.e., itsnative state. Thus, understanding protein folding is critical to understanding what a protein does and how it works, and is considered a holy grail ofcomputational biology.[14][15]Despite folding occurring within acrowded cellular environment, it typically proceeds smoothly. However, due to a protein's chemical properties or other factors, proteins maymisfold, that is, fold down the wrong pathway and end up misshapen. 
Unless cellular mechanisms can destroy or refold misfolded proteins, they can subsequentlyaggregateand cause a variety of debilitating diseases.[16]Laboratory experiments studying these processes can be limited in scope and atomic detail, leading scientists to use physics-based computing models that, when complementing experiments, seek to provide a more complete picture of protein folding, misfolding, and aggregation.[17][18] Due to the complexity of proteins' conformation orconfiguration space(the set of possible shapes a protein can take), and limits in computing power, all-atom molecular dynamics simulations have been severely limited in the timescales that they can study. While most proteins typically fold in the order of milliseconds,[17][19]before 2010, simulations could only reach nanosecond to microsecond timescales.[11]General-purposesupercomputershave been used to simulate protein folding, but such systems are intrinsically costly and typically shared among many research groups. Further, because the computations in kinetic models occur serially, strongscalingof traditional molecular simulations to these architectures is exceptionally difficult.[20][21]Moreover, as protein folding is astochastic process(i.e., random) and can statistically vary over time, it is challenging computationally to use long simulations for comprehensive views of the folding process.[22][23] Protein folding does not occur in one step.[16]Instead, proteins spend most of their folding time, nearly 96% in some cases,[24]waitingin various intermediateconformationalstates, each a localthermodynamic free energyminimum in the protein'senergy landscape. Through a process known asadaptive sampling, these conformations are used by Folding@home as starting points for asetof simulation trajectories. As the simulations discover more conformations, the trajectories are restarted from them, and aMarkov state model(MSM) is gradually created from this cyclic process. MSMs arediscrete-timemaster equationmodels which describe a biomolecule's conformational and energy landscape as a set of distinct structures and the short transitions between them. The adaptive sampling Markov state model method significantly increases the efficiency of simulation as it avoids computation inside the local energy minimum itself, and is amenable to distributed computing (including onGPUGRID) as it allows for the statistical aggregation of short, independent simulation trajectories.[25]The amount of time it takes to construct a Markov state model is inversely proportional to the number of parallel simulations run, i.e., the number of processors available. In other words, it achieves linearparallelization, leading to an approximately fourorders of magnitudereduction in overall serial calculation time. A completed MSM may contain tens of thousands of sample states from the protein'sphase space(all the conformations a protein can take on) and the transitions between them. The model illustrates folding events and pathways (i.e., routes) and researchers can later use kinetic clustering to view a coarse-grained representation of the otherwise highly detailed model. 
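A toy illustration of the adaptive-sampling idea described above: many short, independent trajectories over a few discrete conformational states (standing in for returned work units) are pooled, transitions are counted, and the counts are normalised into a Markov state model's transition matrix. The three states and the trajectories here are invented purely for illustration and have no relation to any real Folding@home data.

```python
from collections import defaultdict

# Short, independent trajectories over discrete states, as if returned piecewise.
trajectories = [
    ["unfolded", "intermediate", "intermediate", "folded"],
    ["unfolded", "unfolded", "intermediate"],
    ["intermediate", "folded", "folded"],
]

# Aggregate transition counts across all trajectories.
counts = defaultdict(lambda: defaultdict(int))
for traj in trajectories:
    for a, b in zip(traj, traj[1:]):
        counts[a][b] += 1

# Row-normalise the counts into transition probabilities (the Markov state model).
transition_matrix = {}
for state, row in counts.items():
    total = sum(row.values())
    transition_matrix[state] = {nxt: n / total for nxt, n in row.items()}

for state, row in transition_matrix.items():
    print(state, row)
```

Because counting transitions is order-independent, the short trajectories can be produced on many machines and aggregated afterwards, which is what makes this style of modelling a natural fit for distributed computing.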
They can use these MSMs to reveal how proteins misfold and to quantitatively compare simulations with experiments.[7][22][26] Between 2000 and 2010, the length of the proteins Folding@home has studied have increased by a factor of four, while its timescales for protein folding simulations have increased by six orders of magnitude.[27]In 2002, Folding@home used Markov state models to complete approximately a millionCPUdays of simulations over the span of several months,[13]and in 2011, MSMs parallelized another simulation that required an aggregate 10 million CPU hours of computing.[28]In January 2010, Folding@home used MSMs to simulate the dynamics of the slow-folding 32-residueNTL9 protein out to 1.52 milliseconds, a timescale consistent with experimental folding rate predictions but a thousand times longer than formerly achieved. The model consisted of many individual trajectories, each two orders of magnitude shorter, and provided an unprecedented level of detail into the protein's energy landscape.[7][11][29]In 2010, Folding@home researcher Gregory Bowman was awarded theThomas Kuhn Paradigm Shift Awardfrom theAmerican Chemical Societyfor the development of theopen-sourceMSMBuilder software and for attaining quantitative agreement between theory and experiment.[30][31]For his work, Pande was awarded the 2012 Michael and Kate Bárány Award for Young Investigators for "developing field-defining and field-changing computational methods to produce leading theoretical models for protein andRNAfolding",[32]and the 2006 Irving Sigal Young Investigator Award for his simulation results which "have stimulated a re-examination of the meaning of both ensemble and single-molecule measurements, making Pande's efforts pioneering contributions to simulation methodology."[33] Protein misfolding can result in avariety of diseasesincluding Alzheimer's disease,cancer,Creutzfeldt–Jakob disease,cystic fibrosis, Huntington's disease,sickle-cell anemia, andtype II diabetes.[16][34][35]Cellular infection by viruses such asHIVandinfluenzaalso involve folding events oncell membranes.[36]Once protein misfolding is better understood, therapies can be developed that augment cells' natural ability to regulate protein folding. Suchtherapiesinclude the use of engineered molecules to alter the production of a given protein, help destroy a misfolded protein, or assist in the folding process.[37]The combination of computational molecular modeling and experimental analysis has the possibility to fundamentally shape the future of molecular medicine and therational design of therapeutics,[18]such as expediting and lowering the costs ofdrug discovery.[38]The goal of the first five years of Folding@home was to make advances in understanding folding, while the current goal is to understand misfolding and related disease, especially Alzheimer's.[39] The simulations run on Folding@home are used in conjunction with laboratory experiments,[22]but researchers can use them to study how foldingin vitrodiffers from folding in native cellular environments. This is advantageous in studying aspects of folding, misfolding, and their relationships to disease that are difficult to observe experimentally. For example, in 2011, Folding@home simulated protein folding inside aribosomalexit tunnel, to help scientists better understand how natural confinement and crowding might influence the folding process.[40][41]Furthermore, scientists typically employ chemicaldenaturantsto unfold proteins from their stable native state. 
It is not generally known how the denaturant affects the protein's refolding, and it is difficult to experimentally determine if these denatured states contain residual structures which may influence folding behavior. In 2010, Folding@home used GPUs to simulate the unfolded states ofProtein L, and predicted its collapse rate in strong agreement with experimental results.[42] The large data sets from the project are freely available for other researchers to use upon request and some can be accessed from the Folding@home website.[43][44]The Pande lab has collaborated with other molecular dynamics systems such as theBlue Genesupercomputer,[45]and they share Folding@home's key software with other researchers, so that the algorithms which benefited Folding@home may aid other scientific areas.[43]In 2011, they released the open-source Copernicus software, which is based on Folding@home's MSM and other parallelizing methods and aims to improve the efficiency and scaling of molecular simulations on largecomputer clustersorsupercomputers.[46][47]Summaries of all scientific findings from Folding@home are posted on the Folding@home website after publication.[48] Alzheimer's diseaseis an incurableneurodegenerativedisease which most often affects the elderly and accounts for more than half of all cases ofdementia. Its exact cause remains unknown, but the disease is identified as aprotein misfolding disease. Alzheimer's is associated with toxicaggregationsof theamyloid beta(Aβ)peptide, caused by Aβ misfolding and clumping together with other Aβ peptides. These Aβ aggregates then grow into significantly largersenile plaques, a pathological marker of Alzheimer's disease.[49][50][51]Due to the heterogeneous nature of these aggregates, experimental methods such asX-ray crystallographyandnuclear magnetic resonance(NMR) have had difficulty characterizing their structures. Moreover, atomic simulations of Aβ aggregation are highly demanding computationally due to their size and complexity.[52][53] Preventing Aβ aggregation is a promising method to developing therapeutic drugs for Alzheimer's disease, according to Naeem and Fazili in aliterature reviewarticle.[54]In 2008, Folding@home simulated the dynamics of Aβ aggregation in atomic detail over timescales of the order of tens of seconds. Prior studies were only able to simulate about 10 microseconds. Folding@home was able to simulate Aβ folding for six orders of magnitude longer than formerly possible. 
Researchers used the results of this study to identify abeta hairpinthat was a major source of molecular interactions within the structure.[55]The study helped prepare the Pande lab for future aggregation studies and for further research to find a small peptide which may stabilize the aggregation process.[52] In December 2008, Folding@home found several small drug candidates which appear to inhibit the toxicity of Aβ aggregates.[56]In 2010, in close cooperation with the Center for Protein Folding Machinery, these drug leads began to be tested onbiological tissue.[35]In 2011, Folding@home completed simulations of severalmutationsof Aβ that appear to stabilize the aggregate formation, which could aid in the development of therapeutic drug therapies for the disease and greatly assist with experimentalnuclear magnetic resonance spectroscopystudies of Aβoligomers.[53][57]Later that year, Folding@home began simulations of various Aβ fragments to determine how various natural enzymes affect the structure and folding of Aβ.[58][59] Huntington's diseaseis a neurodegenerativegenetic disorderthat is associated with protein misfolding and aggregation.Excessive repeatsof theglutamineamino acid at theN-terminusof thehuntingtin proteincause aggregation, and although the behavior of the repeats is not completely understood, it does lead to the cognitive decline associated with the disease.[60]As with other aggregates, there is difficulty in experimentally determining its structure.[61]Scientists are using Folding@home to study the structure of the huntingtin protein aggregate and to predict how it forms, assisting withrational drug designmethods to stop the aggregate formation.[35]The N17 fragment of the huntingtin protein accelerates this aggregation, and while there have been several mechanisms proposed, its exact role in this process remains largely unknown.[62]Folding@home has simulated this and other fragments to clarify their roles in the disease.[63]Since 2008, its drug design methods for Alzheimer's disease have been applied to Huntington's.[35] More than half of all known cancers involvemutationsofp53, atumor suppressorprotein present in every cell which regulates thecell cycleand signals forcell deathin the event of damage toDNA. Specific mutations in p53 can disrupt these functions, allowing an abnormal cell to continue growing unchecked, resulting in the development oftumors. Analysis of these mutations helps explain the root causes of p53-related cancers.[64]In 2004, Folding@home was used to perform the first molecular dynamics study of the refolding of p53'sprotein dimerin anall-atom simulation of water. The simulation's results agreed with experimental observations and gave insights into the refolding of the dimer that were formerly unobtainable.[65]This was the firstpeer reviewedpublication on cancer from a distributed computing project.[66]The following year, Folding@home powered a new method to identify the amino acids crucial for the stability of a given protein, which was then used to study mutations of p53. The method was reasonably successful in identifying cancer-promoting mutations and determined the effects of specific mutations which could not otherwise be measured experimentally.[67] Folding@home is also used to studyprotein chaperones,[35]heat shock proteinswhich play essential roles in cell survival by assisting with the folding of other proteins in thecrowdedand chemically stressful environment within a cell. 
Rapidly growing cancer cells rely on specific chaperones, and some chaperones play key roles inchemotherapyresistance. Inhibitions to these specific chaperones are seen as potential modes of action for efficient chemotherapy drugs or for reducing the spread of cancer.[68]Using Folding@home and working closely with the Center for Protein Folding Machinery, the Pande lab hopes to find a drug which inhibits those chaperones involved in cancerous cells.[69]Researchers are also using Folding@home to study other molecules related to cancer, such as the enzymeSrc kinase, and some forms of theengrailedhomeodomain: a large protein which may be involved in many diseases, including cancer.[70][71]In 2011, Folding@home began simulations of the dynamics of the smallknottinprotein EETI, which can identifycarcinomasinimaging scansby binding tosurface receptorsof cancer cells.[72][73] Interleukin 2(IL-2) is a protein that helpsT cellsof theimmune systemattack pathogens and tumors. However, its use as a cancer treatment is restricted due to serious side effects such aspulmonary edema. IL-2 binds to these pulmonary cells differently than it does to T cells, so IL-2 research involves understanding the differences between these binding mechanisms. In 2012, Folding@home assisted with the discovery of a mutant form of IL-2 which is three hundred times more effective in its immune system role but carries fewer side effects. In experiments, this altered form significantly outperformed natural IL-2 in impeding tumor growth.Pharmaceutical companieshave expressed interest in the mutant molecule, and theNational Institutes of Healthare testing it against a large variety of tumor models to try to accelerate its development as a therapeutic.[74][75] Osteogenesis imperfecta, known as brittle bone disease, is an incurable genetic bone disorder which can be lethal. Those with the disease are unable to make functional connective bone tissue. This is most commonly due to a mutation inType-I collagen,[76]which fulfills a variety of structural roles and is the most abundant protein inmammals.[77]The mutation causes a deformation incollagen's triple helix structure, which if not naturally destroyed, leads to abnormal and weakened bone tissue.[78]In 2005, Folding@home tested a newquantum mechanicalmethod that improved upon prior simulation methods, and which may be useful for future computing studies of collagen.[79]Although researchers have used Folding@home to study collagen folding and misfolding, the interest stands as a pilot project compared toAlzheimer's and Huntington's research.[35] Folding@home is assisting in research towards preventing someviruses, such asinfluenzaandHIV, from recognizing and enteringbiological cells.[35]In 2011, Folding@home began simulations of the dynamics of the enzymeRNase H, a key component of HIV, to try to design drugs to deactivate it.[80]Folding@home has also been used to studymembrane fusion, an essential event forviral infectionand a wide range of biological functions. This fusion involvesconformational changesof viral fusion proteins andprotein docking,[36]but the exact molecular mechanisms behind fusion remain largely unknown.[81]Fusion events may consist of over a half million atoms interacting for hundreds of microseconds. 
This complexity limits typical computer simulations to about ten thousand atoms over tens of nanoseconds: a difference of several orders of magnitude.[55]The development of models to predict the mechanisms of membrane fusion will assist in the scientific understanding of how to target the process with antiviral drugs.[82]In 2006, scientists applied Markov state models and the Folding@home network to discover two pathways for fusion and gain other mechanistic insights.[55] Following detailed simulations from Folding@home of small cells known asvesicles, in 2007, the Pande lab introduced a new computing method to measure thetopologyof its structural changes during fusion.[83]In 2009, researchers used Folding@home to study mutations ofinfluenza hemagglutinin, a protein that attaches a virus to itshostcell and assists with viral entry. Mutations to hemagglutinin affecthow well the protein bindsto a host'scell surface receptormolecules, which determines howinfectivethe virus strain is to the host organism. Knowledge of the effects of hemagglutinin mutations assists in the development ofantiviral drugs.[84][85]As of 2012, Folding@home continues to simulate the folding and interactions of hemagglutinin, complementing experimental studies at theUniversity of Virginia.[35][86] In March 2020, Folding@home launched a program to assist researchers around the world who are working on finding a cure and learning more about thecoronavirus pandemic. The initial wave of projects simulate potentially druggable protein targets from SARS-CoV-2 virus, and the related SARS-CoV virus, about which there is significantly more data available.[87][88][89] Drugsfunction bybindingtospecific locationson target molecules and causing some desired change, such as disabling a target or causing aconformational change. Ideally, a drug should act very specifically, and bind only to its target without interfering with other biological functions. However, it is difficult to precisely determine where andhow tightlytwo molecules will bind. Due to limits in computing power, currentin silicomethods usually must trade speed foraccuracy; e.g., use rapidprotein dockingmethods instead of computationally costlyfree energy calculations. Folding@home's computing performance allows researchers to use both methods, and evaluate their efficiency and reliability.[39][90][91]Computer-assisted drug design has the potential to expedite and lower the costs of drug discovery.[38]In 2010, Folding@home used MSMs and free energy calculations to predict the native state of thevillinprotein to within 1.8angstrom(Å)root mean square deviation(RMSD) from thecrystalline structureexperimentally determined throughX-ray crystallography. This accuracy has implications to futureprotein structure predictionmethods, including forintrinsically unstructured proteins.[55]Scientists have used Folding@home to researchdrug resistanceby studyingvancomycin, an antibioticdrug of last resort, andbeta-lactamase, a protein that can break down antibiotics likepenicillin.[92][93] Chemical activity occurs along a protein'sactive site. Traditional drug design methods involve tightly binding to this site and blocking its activity, under the assumption that the target protein exists in one rigid structure. However, this approach works for approximately only 15% of all proteins. Proteins containallosteric siteswhich, when bound to by small molecules, can alter a protein's conformation and ultimately affect the protein's activity. 
These sites are attractive drug targets, but locating them is verycomputationally costly. In 2012, Folding@home and MSMs were used to identify allosteric sites in three medically relevant proteins: beta-lactamase,interleukin-2, andRNase H.[93][94] Approximately half of all knownantibioticsinterfere with the workings of a bacteria'sribosome, a large and complex biochemical machine that performsprotein biosynthesisbytranslatingmessenger RNAinto proteins.Macrolide antibioticsclog the ribosome's exit tunnel, preventing synthesis of essential bacterial proteins. In 2007, the Pande lab received agrantto study and design new antibiotics.[35]In 2008, they used Folding@home to study the interior of this tunnel and how specific molecules may affect it.[95]The full structure of the ribosome was determined only as of 2011, and Folding@home has also simulatedribosomal proteins, as many of their functions remain largely unknown.[96] Like otherdistributed computingprojects, Folding@home is an onlinecitizen scienceproject. In these projects non-specialists contribute computer processing power or help to analyze data produced by professional scientists. Participants receive little or no obvious reward. Research has been carried out into the motivations of citizen scientists and most of these studies have found that participants are motivated to take part because of altruistic reasons; that is, they want to help scientists and make a contribution to the advancement of their research.[97][98][99][100]Many participants in citizen science have an underlying interest in the topic of the research and gravitate towards projects that are in disciplines of interest to them. Folding@home is no different in that respect.[101]Research carried out recently on over 400 active participants revealed that they wanted to help make a contribution to research and that many had friends or relatives affected by the diseases that the Folding@home scientists investigate. Folding@home attracts participants who are computer hardware enthusiasts. These groups bring considerable expertise to the project and are able to build computers with advanced processing power.[102][need quotation to verify]Other distributed computing projects attract these types of participants and projects are often used to benchmark the performance of modified computers, and this aspect of the hobby is accommodated through the competitive nature of the project. Individuals and teams can compete to see who can process the most computer processing units (CPUs). This latest research on Folding@home involving interview and ethnographic observation of online groups showed that teams of hardware enthusiasts can sometimes work together, sharing best practice with regard to maximizing processing output. Such teams can becomecommunities of practice, with a shared language and online culture. This pattern of participation has been observed in other distributed computing projects.[103][104] Another key observation of Folding@home participants is that many are male.[101]This has also been observed in other distributed projects. Furthermore, many participants work in computer and technology-based jobs and careers.[101][105][106] Not all Folding@home participants are hardware enthusiasts. Many participants run the project software on unmodified machines and do take part competitively. By January 2020, the number of users was down to 30,000.[107]However, it is difficult to ascertain what proportion of participants are hardware enthusiasts. 
According to the project managers, however, the contribution of the enthusiast community is substantially larger in terms of processing power.[108] Supercomputer FLOPS performance is assessed by running the legacy LINPACK benchmark. This short-term testing has difficulty in accurately reflecting sustained performance on real-world tasks because LINPACK maps more efficiently to supercomputer hardware. Computing systems vary in architecture and design, so direct comparison is difficult. Despite this, FLOPS remain the primary speed metric used in supercomputing.[109][need quotation to verify] In contrast, Folding@home determines its FLOPS using wall-clock time by measuring how much time its work units take to complete.[110] On September 16, 2007, due in large part to the participation of PlayStation 3 consoles, the Folding@home project officially attained a sustained performance level higher than one native petaFLOPS, becoming the first computing system of any kind to do so.[111][112] Top500's fastest supercomputer at the time was BlueGene/L, at 0.280 petaFLOPS.[113] The following year, on May 7, 2008, the project attained a sustained performance level higher than two native petaFLOPS,[114] followed by the three and four native petaFLOPS milestones in August 2008[115][116] and September 28, 2008, respectively.[117] On February 18, 2009, Folding@home achieved five native petaFLOPS,[118][119] and was the first computing project to meet these five levels.[120][121] In comparison, November 2008's fastest supercomputer was IBM's Roadrunner at 1.105 petaFLOPS.[122] On November 10, 2011, Folding@home's performance exceeded six native petaFLOPS with the equivalent of nearly eight x86 petaFLOPS.[112][123] In mid-May 2013, Folding@home attained over seven native petaFLOPS, with the equivalent of 14.87 x86 petaFLOPS. It then reached eight native petaFLOPS on June 21, followed by nine on September 9 of that year, with 17.9 x86 petaFLOPS.[124] On May 11, 2016, Folding@home announced that it was moving towards reaching the 100 x86 petaFLOPS mark.[125] Use grew further with increased awareness of and participation in the project during the coronavirus pandemic in 2020. On March 20, 2020, Folding@home announced via Twitter that it was running with over 470 native petaFLOPS,[126] the equivalent of 958 x86 petaFLOPS.[127] By March 25 it reached 768 petaFLOPS, or 1.5 x86 exaFLOPS, making it the first exaFLOP computing system.[128] As of 23 December 2024[update], the computing power of Folding@home stands at 14.3 petaFLOPS, or 27.5 x86 petaFLOPS.[129] As with other distributed computing projects, Folding@home quantitatively assesses user computing contributions to the project through a credit system.[130] All units from a given protein project have uniform base credit, which is determined by benchmarking one or more work units from that project on an official reference machine before the project is released.[130] Each user receives these base points for completing every work unit, though through the use of a passkey they can receive added bonus points for reliably and rapidly completing units which are more demanding computationally or have a greater scientific priority.[131][132] Users may also receive credit for their work by clients on multiple machines.[133] This point system attempts to align awarded credit with the value of the scientific results.[130] Users can register their contributions under a team, which combines the points of all its members. A user can start their own team, or they can join an existing team.
In some cases, a team may have their own community-driven sources of help or recruitment such as anInternet forum.[134]The points can foster friendly competition between individuals and teams to compute the most for the project, which can benefit the folding community and accelerate scientific research.[130][135][136]Individual and team statistics are posted on the Folding@home website.[130] If a user does not form a new team, or does not join an existing team, that user automatically becomes part of a "Default" team. This "Default" team has a team number of "0". Statistics are accumulated for this "Default" team as well as for specially named teams. Folding@home software at the user's end involves three primary components: work units, cores, and a client. A work unit is the protein data that the client is asked to process. Work units are a fraction of the simulation between the states in aMarkov model. After the work unit has been downloaded and completely processed by a volunteer's computer, it is returned to Folding@home servers, which then award the volunteer the credit points. This cycle repeats automatically.[135]All work units have associated deadlines, and if this deadline is exceeded, the user may not get credit and the unit will be automatically reissued to another participant. As protein folding occurs serially, and many work units are generated from their predecessors, this allows the overall simulation process to proceed normally if a work unit is not returned after a reasonable period of time. Due to these deadlines, the minimum system requirement for Folding@home is a Pentium 3 450 MHz CPU withStreaming SIMD Extensions(SSE).[133]However, work units for high-performance clients have a much shorter deadline than those for the uniprocessor client, as a major part of the scientific benefit is dependent on rapidly completing simulations.[137] Before public release, work units go through severalquality assurancesteps to keep problematic ones from becoming fully available. These testing stages include internal, beta, and advanced, before a final full release across Folding@home.[138]Folding@home's work units are normally processed only once, except in the rare event that errors occur during processing. If this occurs for three different users, the unit is automatically pulled from distribution.[139][140]The Folding@home support forum can be used to differentiate between issues arising from problematic hardware and bad work units.[141] Specialized molecular dynamics programs, referred to as "FahCores" and often abbreviated "cores", perform the calculations on the work unit as abackground process. A large majority of Folding@home's cores are based onGROMACS,[135]one of the fastest and most popular molecular dynamics software packages, which largely consists of manually optimizedassembly languagecode and hardware optimizations.[142][143]Although GROMACS isopen-source softwareand there is a cooperative effort between the Pande lab and GROMACS developers, Folding@home uses aclosed-sourcelicense to help ensure data validity.[144]Less active cores include ProtoMol and SHARPEN. 
Folding@home has usedAMBER,CPMD,Desmond, andTINKER, but these have since been retired and are no longer in active service.[4][145][146]Some of these cores performexplicit solvationcalculations in which the surroundingsolvent(usually water) is modeled atom-by-atom; while others performimplicit solvationmethods, where the solvent is treated as a mathematical continuum.[147][148]The core is separate from the client to enable the scientific methods to be updated automatically without requiring a client update. The cores periodically create calculationcheckpointsso that if they are interrupted they can resume work from that point upon startup.[135] A Folding@home participant installs aclientprogramon theirpersonal computer. The user interacts with the client, which manages the other software components in the background. Through the client, the user may pause the folding process, open an event log, check the work progress, or view personal statistics.[149]The computer clients run continuously in thebackgroundat a very low priority, using idle processing power so that normal computer use is unaffected.[133]The maximum CPU use can be adjusted via client settings.[149][150]The client connects to a Folding@homeserverand retrieves a work unit and may also download the appropriate core for the client's settings, operating system, and the underlying hardware architecture. After processing, the work unit is returned to the Folding@home servers. Computer clients are tailored touniprocessorandmulti-core processorsystems, andgraphics processing units. The diversity and power of eachhardware architectureprovides Folding@home with the ability to efficiently complete many types of simulations in a timely manner (in a few weeks or months rather than years), which is of significant scientific value. Together, these clients allow researchers to study biomedical questions formerly considered impractical to tackle computationally.[39][135][137] Professional software developers are responsible for most of Folding@home's code, both for the client and server-side. The development team includes programmers fromNvidia,ATI,Sony, and Cauldron Development.[151]Clients can be downloaded only from the official Folding@home website or its commercial partners, and will only interact with Folding@home computer files. They will upload and download data with Folding@home's data servers (overport8080, with 80 as an alternate), and the communication is verified using 2048-bitdigital signatures.[133][152]While the client'sgraphical user interface(GUI) is open-source,[153]the client isproprietary softwareciting security and scientific integrity as the reasons.[154][155][156] However, this rationale of using proprietary software is disputed since while the license could be enforceable in the legal domain retrospectively, it does not practically prevent the modification (also known aspatching) of the executablebinary files. 
Likewise,binary-onlydistribution does not prevent the malicious modification of executable binary-code, either through aman-in-the-middle attackwhile being downloaded via the internet,[157]or by the redistribution of binaries by a third-party that have been previously modified either in their binary state (i.e.patched),[158]or by decompiling[159]and recompiling them after modification.[160][161]These modifications are possible unless the binary files – and the transport channel – aresignedand the recipient person/system is able to verify the digital signature, in which case unwarranted modifications should be detectable, but not always.[162]Either way, since in the case of Folding@home the input data and output result processed by the client-software are both digitally signed,[133][152]the integrity of work can be verified independently from the integrity of the client software itself. Folding@home uses theCosmsoftware libraries for networking.[135][151]Folding@home was launched on October 1, 2000, and was the firstdistributed computingproject aimed at bio-molecular systems.[163]Its first client was ascreensaver, which would run while the computer was not otherwise in use.[164][165]In 2004, the Pande lab collaborated withDavid P. Andersonto test a supplemental client on the open-sourceBOINCframework. This client was released to closed beta in April 2005;[166]however, the method became unworkable and was shelved in June 2006.[167] The specialized hardware ofgraphics processing units(GPU) is designed to accelerate rendering of 3-D graphics applications such as video games and can significantly outperform CPUs for some types of calculations. GPUs are one of the most powerful and rapidly growing computing platforms, and many scientists and researchers are pursuinggeneral-purpose computing on graphics processing units(GPGPU). However, GPU hardware is difficult to use for non-graphics tasks and usually requires significant algorithm restructuring and an advanced understanding of the underlying architecture.[168]Such customization is challenging, more so to researchers with limited software development resources. Folding@home uses theopen-sourceOpenMMlibrary, which uses abridge design patternwith twoapplication programming interface(API) levels to interface molecular simulation software to an underlying hardware architecture. With the addition of hardware optimizations, OpenMM-based GPU simulations need no significant modification but achieve performance nearly equal to hand-tuned GPU code, and greatly outperform CPU implementations.[147][169] Before 2010, the computing reliability of GPGPU consumer-grade hardware was largely unknown, and circumstantial evidence related to the lack of built-inerror detection and correctionin GPU memory raised reliability concerns. In the first large-scale test of GPU scientific accuracy, a 2010 study of over 20,000 hosts on the Folding@home network detectedsoft errorsin the memory subsystems of two-thirds of the tested GPUs. 
These errors strongly correlated to board architecture, though the study concluded that reliable GPU computing was very feasible as long as attention is paid to the hardware traits, such as software-side error detection.[170] The first generation of Folding@home's GPU client (GPU1) was released to the public on October 2, 2006,[167] delivering a 20–30 times speedup for some calculations over its CPU-based GROMACS counterparts.[171] It was the first time GPUs had been used for either distributed computing or major molecular dynamics calculations.[172][173] GPU1 gave researchers significant knowledge and experience with the development of GPGPU software, but in response to scientific inaccuracies with DirectX, on April 10, 2008, it was succeeded by GPU2, the second generation of the client.[171][174] Following the introduction of GPU2, GPU1 was officially retired on June 6.[171] Compared to GPU1, GPU2 was more scientifically reliable and productive, ran on ATI and CUDA-enabled Nvidia GPUs, and supported more advanced algorithms, larger proteins, and real-time visualization of the protein simulation.[175][176] Following this, the third generation of Folding@home's GPU client (GPU3) was released on May 25, 2010. While backward compatible with GPU2, GPU3 was more stable, efficient, and flexible in its scientific abilities,[177] and used OpenMM on top of an OpenCL framework.[177][178] Although these GPU3 clients did not natively support the operating systems Linux and macOS, Linux users with Nvidia graphics cards were able to run them through the Wine software application.[179][180] GPUs remain Folding@home's most powerful platform in FLOPS. As of November 2012, GPU clients account for 87% of the entire project's x86 FLOPS throughput.[181] Native support for Nvidia and AMD graphics cards under Linux was introduced with FahCore 17, which uses OpenCL rather than CUDA.[182] From March 2007 until November 2012, Folding@home took advantage of the computing power of PlayStation 3s.
At the time of its inception, its mainstreamingCell processordelivered a 20 times speed increase over PCs for some calculations, processing power which could not be found on other systems such as theXbox 360.[39][183]The PS3's high speed and efficiency introduced other opportunities for worthwhile optimizations according toAmdahl's law, and significantly changed the tradeoff between computing efficiency and overall accuracy, allowing the use of more complex molecular models at little added computing cost.[184]This allowed Folding@home to run biomedical calculations that would have been otherwise infeasible computationally.[185] The PS3 client was developed in a collaborative effort betweenSonyand the Pande lab and was first released as a standalone client on March 23, 2007.[39][186]Its release made Folding@home the first distributed computing project to use PS3s.[187]On September 18 of the following year, the PS3 client became a channel ofLife with PlayStationon its launch.[188][189]In the types of calculations it can perform, at the time of its introduction, the client fit in between a CPU's flexibility and a GPU's speed.[135]However, unlike clients running onpersonal computers, users were unable to perform other activities on their PS3 while running Folding@home.[185]The PS3's uniform console environment madetechnical supporteasier and made Folding@home moreuser friendly.[39]The PS3 also had the ability to stream data quickly to its GPU, which was used for real-time atomic-level visualizing of the current protein dynamics.[184] On November 6, 2012, Sony ended support for the Folding@home PS3 client and other services available under Life with PlayStation. Over its lifetime of five years and seven months, more than 15 million users contributed over 100 million hours of computing to Folding@home, greatly assisting the project with disease research. Following discussions with the Pande lab, Sony decided to terminate the application. Pande considered the PlayStation 3 client a "game changer" for the project.[190][191][192] Folding@home can use theparallel computingabilities of modern multi-core processors. The ability to use several CPU cores simultaneously allows completing the full simulation far faster. Working together, these CPU cores complete single work units proportionately faster than the standard uniprocessor client. 
This method is scientifically valuable because it enables much longer simulation trajectories to be performed in the same amount of time, and reduces the traditional difficulties of scaling a large simulation to many separate processors.[193]A 2007 publication in theJournal of Molecular Biologyrelied on multi-core processing to simulate the folding of part of thevillinprotein approximately 10 times longer than was possible with a single-processor client, in agreement with experimental folding rates.[194] In November 2006, first-generationsymmetric multiprocessing(SMP) clients were publicly released for open beta testing, referred to as SMP1.[167]These clients usedMessage Passing Interface(MPI) communication protocols for parallel processing, as at that time the GROMACS cores werenot designedto be used with multiple threads.[137]This was the first time a distributed computing project had used MPI.[195]Although the clients performed well inUnix-based operating systems such as Linux and macOS, they were troublesome underWindows.[193][195]On January 24, 2010, SMP2, the second generation of the SMP clients and the successor to SMP1, was released as an open beta and replaced the complex MPI with a more reliablethread-based implementation.[132][151] SMP2 supports a trial of a special category ofbigadvwork units, designed to simulate proteins that are unusually large and computationally intensive and have a great scientific priority. These units originally required a minimum of eight CPU cores,[196]which was raised to sixteen later, on February 7, 2012.[197]Along with these added hardware requirements over standard SMP2 work units, they require more system resources such asrandom-access memory(RAM) andInternet bandwidth. In return, users who run these are rewarded with a 20% increase over SMP2's bonus point system.[198]The bigadv category allows Folding@home to run especially demanding simulations for long times that had formerly required use of supercomputingclustersand could not be performed anywhere else on Folding@home.[196]Many users with hardware able to run bigadv units have later had their hardware setup deemed ineligible for bigadv work units when CPU core minimums were increased, leaving them only able to run the normal SMP work units. This frustrated many users who invested significant amounts of money into the program only to have their hardware be obsolete for bigadv purposes shortly after. As a result, Pande announced in January 2014 that the bigadv program would end on January 31, 2015.[199] The V7 client is the seventh generation of the Folding@home client software, and is a full rewrite and unification of the prior clients forWindows,macOS, andLinuxoperating systems.[200][201]It was released on March 22, 2012.[202]Like its predecessors, V7 can run Folding@home in the background at a very lowpriority, allowing other applications to use CPU resources as they need. It is designed to make the installation, start-up, and operation more user-friendly for novices, and offer greater scientific flexibility to researchers than prior clients.[203]V7 usesTracformanaging its bug ticketsso that users can see its development process and provide feedback.[201] V7 consists of four integrated elements. The user typically interacts with V7's open-sourceGUI, named FAHControl.[153][204]This has Novice, Advanced, and Expert user interface modes, and has the ability to monitor, configure, and control many remote folding clients from one computer. 
FAHControl directs FAHClient, aback-endapplication that in turn manages each FAHSlot (orslot). Each slot acts as replacement for the formerly distinct Folding@home v6 uniprocessor, SMP, or GPU computer clients, as it can download, process, and upload work units independently. The FAHViewer function, modeled after the PS3's viewer, displays a real-time 3-D rendering, if available, of the protein currently being processed.[200][201] In 2014, a client for theGoogle ChromeandChromiumweb browsers was released, allowing users to run Folding@home in their web browser. The client usedGoogle'sNative Client(NaCl) feature on Chromium-based web browsers to run the Folding@home code at near-native speed in asandboxon the user's machine.[205]Due to the phasing out of NaCL and changes at Folding@home, the web client was permanently shut down in June 2019.[206] In July 2015, a client forAndroidmobile phones was released onGoogle Playfor devices runningAndroid 4.4 KitKator newer.[207][208] On February 16, 2018, the Android client, which was offered in cooperation withSony, was removed from Google Play. Plans were announced to offer an open source alternative in the future.[209] Rosetta@homeis a distributed computing project aimed at protein structure prediction and is one of the most accuratetertiary structurepredictors.[210][211]The conformational states from Rosetta's software can be used to initialize a Markov state model as starting points for Folding@home simulations.[25]Conversely, structure prediction algorithms can be improved from thermodynamic and kinetic models and the sampling aspects of protein folding simulations.[212]As Rosetta only tries to predict the final folded state, and not how folding proceeds, Rosetta@home and Folding@home are complementary and address very different molecular questions.[25][213] Antonis a special-purpose supercomputer built for molecular dynamics simulations. In October 2011, Anton and Folding@home were the two most powerful molecular dynamics systems.[214]Anton is unique in its ability to produce single ultra-long computationally costly molecular trajectories,[215]such as one in 2010 which reached the millisecond range.[216][217]These long trajectories may be especially helpful for some types of biochemical problems.[218][219]However, Anton does not use Markov state models (MSM) for analysis. In 2011, the Pande lab constructed a MSM from two 100-μsAnton simulations and found alternative folding pathways that were not visible through Anton's traditional analysis. They concluded that there was little difference between MSMs constructed from a limited number of long trajectories or one assembled from many shorter trajectories.[215]In June 2011 Folding@home added sampling of an Anton simulation in an effort to better determine how its methods compare to Anton's.[220][221]However, unlike Folding@home's shorter trajectories, which are more amenable to distributed computing and other parallelizing methods, longer trajectories do not require adaptive sampling to sufficiently sample the protein'sphase space. Due to this, it is possible that a combination of Anton's and Folding@home's simulation methods would provide a more thorough sampling of this space.[215]
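To make the Markov state model (MSM) idea discussed above concrete, the sketch below estimates a transition matrix from a handful of short, discretized trajectories. This is a minimal illustration only, not Folding@home's actual analysis pipeline: the state labels, lag time, and trajectory data are invented for the example, and production MSM construction uses dedicated tooling with far more careful clustering, lag-time selection, and validation.

```python
# Illustrative sketch: estimating a simple Markov state model from many
# short, discretized trajectories. All data here are made up; real analyses
# use dedicated MSM software rather than hand-rolled counting.
import numpy as np

def count_transitions(trajectories, n_states, lag=1):
    """Count observed transitions i -> j at the given lag time."""
    counts = np.zeros((n_states, n_states))
    for traj in trajectories:                 # each traj is a list of state indices
        for t in range(len(traj) - lag):
            counts[traj[t], traj[t + lag]] += 1
    return counts

def transition_matrix(counts):
    """Row-normalize transition counts into transition probabilities."""
    totals = counts.sum(axis=1, keepdims=True)
    totals[totals == 0] = 1.0                 # leave unvisited states as zero rows
    return counts / totals

# Three short, hypothetical trajectories over three conformational states
# (0 = unfolded, 1 = intermediate, 2 = folded).
trajs = [[0, 0, 1, 1, 2, 2, 2],
         [0, 1, 1, 1, 2, 2],
         [0, 0, 0, 1, 2, 2, 2, 2]]

T = transition_matrix(count_transitions(trajs, n_states=3))
print(T)    # estimated probability of moving between states per lag step
```

Because the counts from each trajectory are simply summed, many short, independently computed trajectories can contribute to one kinetic model, which is the property that makes this kind of analysis a natural fit for work distributed across volunteer machines.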
https://en.wikipedia.org/wiki/Folding@home
Grid computingis the use of widely distributedcomputerresourcesto reach a common goal. A computing grid can be thought of as adistributed systemwith non-interactive workloads that involve many files. Grid computing is distinguished from conventional high-performance computing systems such asclustercomputing in that grid computers have each node set to perform a different task/application. Grid computers also tend to be moreheterogeneousand geographically dispersed (thus not physically coupled) than cluster computers.[1]Although a single grid can be dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids are often constructed with general-purpose gridmiddlewaresoftware libraries. Grid sizes can be quite large.[2] Grids are a form ofdistributed computingcomposed of many networkedloosely coupledcomputers acting together to perform large tasks. For certain applications, distributed or grid computing can be seen as a special type ofparallel computingthat relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to acomputer network(private or public) by a conventionalnetwork interface, such asEthernet. This is in contrast to the traditional notion of asupercomputer, which has many processors connected by a local high-speedcomputer bus. This technology has been applied to computationally intensive scientific, mathematical, and academic problems throughvolunteer computing, and it is used in commercial enterprises for such diverse applications asdrug discovery,economic forecasting,seismic analysis, andback officedata processing in support fore-commerceandWeb services. Grid computing combines computers from multiple administrative domains to reach a common goal,[3]to solve a single task, and may then disappear just as quickly. The size of a grid may vary from small—confined to a network of computer workstations within a corporation, for example—to large, public collaborations across many companies and networks. "The notion of a confined grid may also be known as an intra-nodes cooperation whereas the notion of a larger, wider grid may thus refer to an inter-nodes cooperation".[4] Coordinating applications on Grids can be a complex task, especially when coordinating the flow of information across distributed computing resources.Grid workflowsystems have been developed as a specialized form of aworkflow management systemdesigned specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in the grid context. “Distributed” or “grid” computing in general is a special type ofparallel computingthat relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to anetwork(private, public or theInternet) by a conventionalnetwork interfaceproducing commodity hardware, compared to the lower efficiency of designing and constructing a small number of custom supercomputers. The primary performance disadvantage is that the various processors and local storage areas do not have high-speed connections. 
This arrangement is thus well-suited to applications in which multiple parallel computations can take place independently, without the need to communicate intermediate results between processors.[5]The high-endscalabilityof geographically dispersed grids is generally favorable, due to the low need for connectivity betweennodesrelative to the capacity of the public Internet.[6] There are also some differences between programming for a supercomputer and programming for a grid computing system. It can be costly and difficult to write programs that can run in the environment of a supercomputer, which may have a custom operating system, or require the program to addressconcurrencyissues. If a problem can be adequately parallelized, a “thin” layer of “grid” infrastructure can allow conventional, standalone programs, given a different part of the same problem, to run on multiple machines. This makes it possible to write and debug on a single conventional machine and eliminates complications due to multiple instances of the same program running in the same sharedmemoryand storage space at the same time. One feature of distributed grids is that they can be formed from computing resources belonging to one or multiple individuals or organizations (known as multipleadministrative domains). This can facilitate commercial transactions, as inutility computing, or make it easier to assemblevolunteer computingnetworks. One disadvantage of this feature is that the computers which are actually performing the calculations might not be entirely trustworthy. The designers of the system must thus introduce measures to prevent malfunctions or malicious participants from producing false, misleading, or erroneous results, and from using the system as an attack vector. This often involves assigning work randomly to different nodes (presumably with different owners) and checking that at least two different nodes report the same answer for a given work unit. Discrepancies would identify malfunctioning and malicious nodes. However, due to the lack of central control over the hardware, there is no way to guarantee thatnodeswill not drop out of the network at random times. Some nodes (like laptops ordial-upInternet customers) may also be available for computation but not network communications for unpredictable periods. These variations can be accommodated by assigning large work units (thus reducing the need for continuous network connectivity) and reassigning work units when a given node fails to report its results in the expected time. Another set of what could be termed social compatibility issues in the early days of grid computing related to the goals of grid developers to carry their innovation beyond the original field of high-performance computing and across disciplinary boundaries into new fields, like that of high-energy physics.[7] The impacts of trust and availability on performance and development difficulty can influence the choice of whether to deploy onto a dedicated cluster, to idle machines internal to the developing organization, or to an open external network of volunteers or contractors. In many cases, the participating nodes must trust the central system not to abuse the access that is being granted, by interfering with the operation of other programs, mangling stored information, transmitting private data, or creating new security holes. Other systems employ measures to reduce the amount of trust “client” nodes must place in the central system such as placing applications in virtual machines. 
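As a rough illustration of the redundancy scheme described above, in which each work unit is assigned to randomly chosen nodes and a result is accepted only when at least two different nodes agree, the following sketch shows the core bookkeeping. The class names, quorum rule, and node identifiers are hypothetical; real grid middleware such as BOINC implements far more elaborate validation, scheduling, and credit logic.

```python
# Minimal sketch of replication-based result verification in a volunteer grid.
# Names and the agreement rule are invented for illustration.
import random
from collections import defaultdict

class WorkUnit:
    def __init__(self, uid, payload):
        self.uid = uid
        self.payload = payload
        self.results = defaultdict(list)       # result value -> reporting node ids

    def report(self, node_id, result):
        self.results[result].append(node_id)

    def accepted_result(self, quorum=2):
        for value, reporters in self.results.items():
            if len(set(reporters)) >= quorum:  # at least two *different* nodes agree
                return value
        return None                            # no quorum yet; keep the unit in circulation

def assign(unit, nodes, replicas=2):
    """Pick distinct nodes at random, so colluding owners are unlikely to be paired."""
    return random.sample(nodes, replicas)

nodes = ["node-a", "node-b", "node-c", "node-d"]
wu = WorkUnit(uid=1, payload="hypothetical compute task")
for node in assign(wu, nodes):
    wu.report(node, result=1234)               # pretend both nodes computed 1234
print(wu.accepted_result())                    # -> 1234 once two nodes agree
```

A deadline check, as described above, would sit alongside this bookkeeping: a unit whose quorum is not reached within the expected time is simply reissued to further nodes.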
Public systems or those crossing administrative domains (including different departments in the same organization) often result in the need to run onheterogeneoussystems, using differentoperating systemsandhardware architectures. With many languages, there is a trade-off between investment in software development and the number of platforms that can be supported (and thus the size of the resulting network).Cross-platformlanguages can reduce the need to make this tradeoff, though potentially at the expense of high performance on any givennode(due to run-time interpretation or lack of optimization for the particular platform). Variousmiddlewareprojects have created generic infrastructure to allow diverse scientific and commercial projects to harness a particular associated grid or for the purpose of setting up new grids.BOINCis a common one for various academic projects seeking public volunteers; more are listed at theend of the article. In fact, the middleware can be seen as a layer between the hardware and the software. On top of the middleware, a number of technical areas have to be considered, and these may or may not be middleware independent. Example areas includeSLAmanagement, Trust, and Security,Virtual organizationmanagement, License Management, Portals and Data Management. These technical areas may be taken care of in a commercial solution, though the cutting edge of each area is often found within specific research projects examining the field. For the segmentation of the grid computing market, two perspectives need to be considered: the provider side and the user side: The overall grid market comprises several specific markets. These are the grid middleware market, the market for grid-enabled applications, theutility computingmarket, and the software-as-a-service (SaaS) market. Gridmiddlewareis a specific software product, which enables the sharing of heterogeneous resources, and Virtual Organizations. It is installed and integrated into the existing infrastructure of the involved company or companies and provides a special layer placed among the heterogeneous infrastructure and the specific user applications. Major grid middlewares are Globus Toolkit,gLite, andUNICORE. Utility computing is referred to as the provision of grid computing and applications as service either as an open grid utility or as a hosting solution for one organization or aVO. Major players in the utility computing market areSun Microsystems,IBM, andHP. Grid-enabled applications are specific software applications that can utilize grid infrastructure. This is made possible by the use of grid middleware, as pointed out above. Software as a service(SaaS) is “software that is owned, delivered and managed remotely by one or more providers.” (Gartner2007) Additionally, SaaS applications are based on a single set of common code and data definitions. They are consumed in a one-to-many model, and SaaS uses a Pay As You Go (PAYG) model or a subscription model that is based on usage. Providers of SaaS do not necessarily own the computing resources themselves, which are required to run their SaaS. Therefore, SaaS providers may draw upon the utility computing market. The utility computing market provides computing resources for SaaS providers. For companies on the demand or user side of the grid computing market, the different segments have significant implications for their IT deployment strategy. 
The IT deployment strategy as well as the type of IT investments made are relevant aspects for potential grid users and play an important role for grid adoption. CPU-scavenging, cycle-scavenging, or shared computing creates a "grid" from the idle resources in a network of participants (whether worldwide or internal to an organization). Typically, this technique exploits the 'spare' instruction cycles resulting from the intermittent inactivity that typically occurs at night, during lunch breaks, or even during the (comparatively minuscule, though numerous) moments of idle waiting that modern desktop CPUs experience throughout the day (when the computer is waiting on IO from the user, network, or storage). In practice, participating computers also donate some supporting amount of disk storage space, RAM, and network bandwidth, in addition to raw CPU power.[citation needed] Many volunteer computing projects, such as BOINC, use the CPU scavenging model. Since nodes are likely to go "offline" from time to time, as their owners use their resources for their primary purpose, this model must be designed to handle such contingencies. Creating an Opportunistic Environment is another implementation of CPU scavenging, in which a special workload management system harvests idle desktop computers for compute-intensive jobs; this is also referred to as an Enterprise Desktop Grid (EDG). For instance, HTCondor[8] (the open-source high-throughput computing software framework for coarse-grained distributed parallelization of computationally intensive tasks) can be configured to use only desktop machines where the keyboard and mouse are idle, effectively harnessing wasted CPU power from otherwise idle desktop workstations. Like other full-featured batch systems, HTCondor provides a job queueing mechanism, scheduling policy, priority scheme, resource monitoring, and resource management. It can be used to manage workload on a dedicated cluster of computers as well, or it can seamlessly integrate both dedicated resources (rack-mounted clusters) and non-dedicated desktop machines (cycle scavenging) into one computing environment. The term grid computing originated in the early 1990s as a metaphor for making computer power as easy to access as an electric power grid. The power grid metaphor for accessible computing quickly became canonical when Ian Foster and Carl Kesselman published their seminal work, "The Grid: Blueprint for a new computing infrastructure" (1999).
This was preceded by decades by the metaphor ofutility computing(1961): computing as a public utility, analogous to the phone system.[9][10] CPU scavenging andvolunteer computingwere popularized beginning in 1997 bydistributed.netand later in 1999 bySETI@hometo harness the power of networked PCs worldwide, in order to solve CPU-intensive research problems.[11][12] The ideas of the grid (including those from distributed computing, object-oriented programming, and Web services) were brought together byIan FosterandSteve Tueckeof theUniversity of Chicago, andCarl Kesselmanof theUniversity of Southern California'sInformation Sciences Institute.[13]The trio, who led the effort to create the Globus Toolkit, is widely regarded as the "fathers of the grid".[14]The toolkit incorporates not just computation management but alsostorage management, security provisioning, data movement, monitoring, and a toolkit for developing additional services based on the same infrastructure, including agreement negotiation, notification mechanisms, trigger services, and information aggregation.[15]While the Globus Toolkit remains the de facto standard for building grid solutions, a number of other tools have been built that answer some subset of services needed to create an enterprise or global grid.[citation needed] In 2007 the termcloud computingcame into popularity, which is conceptually similar to the canonical Foster definition of grid computing (in terms of computing resources being consumed as electricity is from thepower grid) and earlier utility computing. In November 2006,Edward Seidelreceived theSidney Fernbach Awardat the Supercomputing Conference inTampa, Florida.[16]"For outstanding contributions to the development of software for HPC and Grid computing to enable the collaborative numerical investigation of complex problems in physics; in particular, modeling black hole collisions."[17]This award, which is one of the highest honors in computing, was awarded for his achievements in numerical relativity. Also, as of March 2019, theBitcoin Networkhad a measured computing power equivalent to over 80,000exaFLOPS(Floating-point Operations Per Second).[25]This measurement reflects the number of FLOPS required to equal the hash output of the Bitcoin network rather than its capacity for general floating-point arithmetic operations, since the elements of the Bitcoin network (Bitcoin miningASICs) perform only the specific cryptographic hash computation required by theBitcoinprotocol. Grid computing offers a way to solveGrand Challenge problemssuch asprotein folding, financialmodeling,earthquakesimulation, andclimate/weathermodeling, and was integral in enabling the Large Hadron Collider at CERN.[26]Grids offer a way of using information technology resources optimally inside an organization. They also provide a means for offering information technology as autilityfor commercial and noncommercial clients, with those clients paying only for what they use, as with electricity or water. As of October 2016, over 4 million machines running the open-sourceBerkeley Open Infrastructure for Network Computing(BOINC) platform are members of theWorld Community Grid.[19]One of the projects using BOINC isSETI@home, which was using more than 400,000 computers to achieve 0.828TFLOPSas of October 2016. 
As of October 2016, Folding@home, which is not part of BOINC, achieved more than 101 x86-equivalent petaflops on over 110,000 machines.[18] The European Union funded projects through the framework programmes of the European Commission. BEinGRID (Business Experiments in Grid) was a research project funded by the European Commission[27] as an Integrated Project under the Sixth Framework Programme (FP6) sponsorship program. Started on June 1, 2006, the project ran 42 months, until November 2009. The project was coordinated by Atos Origin. According to the project fact sheet, its mission was "to establish effective routes to foster the adoption of grid computing across the EU and to stimulate research into innovative business models using Grid technologies". To extract best practice and common themes from the experimental implementations, two groups of consultants analyzed a series of pilots, one technical, one business. The project is significant not only for its long duration but also for its budget, which, at 24.8 million euros, was the largest of any FP6 integrated project. Of this, 15.7 million was provided by the European Commission and the remainder by its 98 contributing partner companies. Since the end of the project, the results of BEinGRID have been taken up and carried forward by IT-Tude.com. The Enabling Grids for E-sciencE project, which was based in the European Union and included sites in Asia and the United States, was a follow-up project to the European DataGrid (EDG) and evolved into the European Grid Infrastructure. This, along with the Worldwide LHC Computing Grid[28] (WLCG), was developed to support experiments using the CERN Large Hadron Collider. A list of active sites participating within WLCG can be found online,[29] as can real-time monitoring of the EGEE infrastructure.[30] The relevant software and documentation are also publicly accessible.[31] There is speculation that dedicated fiber-optic links, such as those installed by CERN to address the WLCG's data-intensive needs, may one day be available to home users, thereby providing internet services at speeds up to 10,000 times faster than a traditional broadband connection.[32] The European Grid Infrastructure has also been used for other research activities and experiments such as the simulation of oncological clinical trials.[33] The distributed.net project was started in 1997. The NASA Advanced Supercomputing facility (NAS) ran genetic algorithms using the Condor cycle scavenger running on about 350 Sun Microsystems and SGI workstations. In 2001, United Devices operated the United Devices Cancer Research Project based on its Grid MP product, which cycle-scavenges on volunteer PCs connected to the Internet. The project ran on about 3.1 million machines before its close in 2007.[34] Today there are many definitions of grid computing.
List of grid computing projects
https://en.wikipedia.org/wiki/Grid_computing
Infernois adistributed operating systemstarted atBell Labsand now developed and maintained byVita Nuova Holdingsasfree softwareunder theMIT License.[2][3]Inferno was based on the experience gained withPlan 9 from Bell Labs, and the further research of Bell Labs into operating systems, languages, on-the-fly compilers, graphics, security, networking and portability. The name of the operating system, many of its associated programs, and that of the current company, were inspired byDante Alighieri'sDivine Comedy. In Italian,Infernomeans "hell", of which there are nine circles in Dante'sDivine Comedy. Inferno was created in 1995 by members ofBell Labs' Computer Science Research division to bring ideas derived from their previous operating system,Plan 9 from Bell Labs, to a wider range of devices and networks. Inferno is adistributed operating systembased on three basic principles: To handle the diversity of network environments it was intended to be used in, the designers decided avirtual machine(VM) was a necessary component of the system. This is the same conclusion of the Oak project that becameJava, but arrived at independently. TheDis virtual machineis aregister machineintended to closely match the architecture it runs on, in contrast to thestack machineof theJava virtual machine. An advantage of this approach is the relative simplicity of creating ajust-in-time compilerfor new architectures. The virtual machine provides memory management designed to be efficient on devices with as little as 1 MiB of memory and without memory-mapping hardware. Itsgarbage collectoris a hybrid of reference counting and a real-time coloring collector that gathers cyclic data.[11] The Inferno kernel contains the virtual machine, on-the-fly compiler, scheduler, devices, protocol stacks, the name space evaluator for the file name space of each process, and the root of the file system hierarchy. The kernel also includes some built-in modules that provide interfaces of the virtual operating system, such as system calls, graphics, security, and math modules. The Bell Labs Technical Journal paper introducing Inferno listed several dimensions of portability and versatility provided by the OS:[1] These design choices were directed to provide standard interfaces that free content and service providers from concern of the details of diverse hardware, software, and networks over which their content is delivered. Inferno programs are portable across a broad mix of hardware, networks, and environments. It defines avirtual machine, known asDis, that can be implemented on any real machine, providesLimbo, atype-safelanguage that is compiled to portable byte code, and, more significantly, it includes a virtual operating system that supplies the same interfaces whether Inferno runs natively on hardware or runs as a user program on top of another operating system. Acommunications protocolcalledStyxis applied uniformly to access both local and remote resources, which programs use by calling standard file operations, open, read, write, and close. As of the fourth edition of Inferno, Styx is identical toPlan 9's newer version of its hallmark9Pprotocol,9P2000. Most of the Inferno commands are very similar toUnix commandswith the same name.[12] Inferno is a descendant ofPlan 9 from Bell Labs, and shares many design concepts and even source code in the kernel, particularly around devices and the Styx/9P2000 protocol. Inferno shares with Plan 9 the Unix heritage from Bell Labs and theUnix philosophy. 
Many of the command line tools in Inferno were Plan 9 tools that were translated to Limbo. In the mid-1990s, Plan 9 development was set aside in favor of Inferno.[13] The new system's existence was leaked by Dennis Ritchie in early 1996, after less than a year of development on the system, and it was publicly presented later that year as a competitor to Java. At the same time, Bell Labs' parent company AT&T licensed Java technology from Sun Microsystems.[14] In March–April 1997, IEEE Internet Computing included an advertisement for Inferno networking software. It claimed that various devices could communicate over "any network", including the Internet, telecommunications and LANs. The advertisement stated that video games could talk to computers (a PlayStation was pictured), cell phones could access email, and voice mail was available via TV. Lucent used Inferno in at least two internal products: the Lucent VPN Firewall Brick and the Lucent PathStar phone switch. They initially tried to sell source code licenses of Inferno but found few buyers. Lucent did little marketing and missed the importance of the Internet and Inferno's relation to it. During the same time, Sun Microsystems was heavily marketing its own Java programming language, which targeted a similar market with analogous technology that worked in web browsers and also filled the demand for object-oriented languages popular at that time. Lucent licensed Java from Sun, claiming that all Inferno devices would be made to run Java. A Java byte code to Dis byte code translator was written to facilitate that. However, Inferno still did not find customers. The Inferno Business Unit closed after three years, and was sold to Vita Nuova Holdings. Vita Nuova continued development and offered commercial licenses to the complete system, and free downloads and licenses (not GPL compatible) for all of the system except the kernel and VM. They ported the software to new hardware and focused on distributed applications. Eventually, Vita Nuova released the 4th edition under more common free software licenses, and in 2021 they relicensed all editions under mainly the MIT License.[6][2][3] Inferno runs on native hardware directly and also as an application providing a virtual operating system which runs on other platforms. Programs can be developed and run on all Inferno platforms without modifying or recompiling. Native ports include these architectures: x86, MIPS, ARM, PowerPC, SPARC. Hosted or virtual OS ports include: Microsoft Windows, Linux, FreeBSD, Plan 9, Mac OS X, Solaris, IRIX, UnixWare. Inferno can also be hosted by a plugin to Internet Explorer.[15] Vita Nuova said that plugins for other browsers were under development, but they were never released.[16] Inferno has also been ported to Openmoko,[17] Nintendo DS,[18] SheevaPlug,[19] and Android.[20] Inferno 4th edition was released in early 2005 as free software. Specifically, it was dual-licensed under two structures.[6] Users could either obtain it under a set of free software licenses, or they could obtain it under a proprietary license. In the case of the free software license scheme, different parts of the system were covered by different licenses, including the GNU General Public License, the GNU Lesser General Public License, the Lucent Public License, and the MIT License, excluding the fonts, which are sub-licensed from Bigelow and Holmes. In March 2021, all editions were relicensed under mainly the MIT License.[2][3]
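The contrast drawn earlier between the Dis virtual machine's register-machine design and the Java virtual machine's stack-machine design can be illustrated with a toy interpreter for each, evaluating the assignment a = b + c. This is purely illustrative Python: the instruction formats are invented and bear no relation to actual Dis or JVM bytecode.

```python
# Toy stack machine versus toy register machine, both computing a = b + c.
# Instruction formats are invented for illustration only.

def run_stack(program, variables):
    stack = []
    for op, *args in program:
        if op == "push":                 # push a variable's value onto the stack
            stack.append(variables[args[0]])
        elif op == "add":                # pop two operands, push their sum
            right, left = stack.pop(), stack.pop()
            stack.append(left + right)
        elif op == "store":              # pop the result into a variable
            variables[args[0]] = stack.pop()
    return variables

def run_register(program, registers):
    for op, dst, src1, src2 in program:
        if op == "add":                  # one instruction names operands and destination
            registers[dst] = registers[src1] + registers[src2]
    return registers

print(run_stack([("push", "b"), ("push", "c"), ("add",), ("store", "a")],
                {"b": 2, "c": 3}))       # four stack instructions
print(run_register([("add", "a", "b", "c")], {"b": 2, "c": 3}))  # one register instruction
```

The register form expresses the whole operation in one instruction that names its operands and destination, which is part of why a register-oriented virtual machine can be mapped onto real hardware registers relatively directly by a just-in-time compiler, as noted above.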
https://en.wikipedia.org/wiki/Inferno_(operating_system)
Internet GIS is a broad set of technologies and applications that employ the Internet to access, analyze, visualize, and distribute spatial data via geographic information systems (GIS).[1][2][3][4][5] Internet GIS is an outgrowth of traditional GIS, and represents a shift from conducting GIS on an individual computer to working with remotely distributed data and functions.[1] Two major issues in GIS are accessing and distributing spatial data and GIS outputs.[6] Internet GIS helps to address these issues by allowing users to access vast databases impossible to store on a single desktop computer, and by allowing rapid dissemination of both maps and raw data to others.[7][6] These methods include both file sharing and email. This has enabled the general public to participate in map creation and make use of GIS technology.[8][9] Internet GIS is a subset of Distributed GIS, but specifically uses the internet rather than generic computer networks. Internet GIS applications are often, but not exclusively, conducted through the World Wide Web (also known as the Web), giving rise to the sub-branch of Web GIS, often used interchangeably with Internet GIS.[10][11][12][4][5] While Web GIS has become nearly synonymous with Internet GIS to many in the industry, the two are as distinct as the internet is from the World Wide Web.[13][14][15] Likewise, Internet GIS is as distinct from distributed GIS as the Internet is from distributed computer networks in general.[1][4][5] Internet GIS includes services beyond those enabled by the Web. Use of any other internet-enabled services to facilitate GIS functions, even if used in conjunction with the Web, represents the use of Internet GIS.[4][5] One of the most common applications of a distributed GIS system, accessing remotely saved data, can be done through the internet without the need for the Web.[4][5] This is often done in practice when data are sensitive, such as hospital patient data and research facilities' proprietary data, where sending data through the Web may be a security risk. This can be done using a virtual private network (VPN) to access a local network remotely.[16] The use of VPNs for these purposes surged during the COVID-19 pandemic, when employers needed to allow employees using GIS to access sensitive spatial data from home.[17][18][19] The history of Internet geographic information systems is linked to the history of the computer, the internet, and the quantitative revolution in geography. Geography tends to adapt technologies from other disciplines rather than innovating and inventing the technologies employed to conduct geographic studies.[20] The computer and the internet are no exception, and both were rapidly adapted to the needs of geographers.
In 1959, Waldo Tobler published the first paper detailing the use of computers in map creation.[21] This was the beginning of computer cartography, or the use of computers to create maps.[22][23] In 1960, the first true geographic information system capable of storing, analyzing, changing, and creating visualizations with spatial data was created by Roger Tomlinson on behalf of the Canadian Government to manage natural resources.[24][25] These technologies represented a paradigm shift in cartography and geography, with desktop computer cartography facilitated through GIS rapidly replacing traditional ways of making maps.[20] The emergence of GIS and computer technology contributed to the quantitative revolution in geography and the emergence of the branch of technical geography.[26][27] As computer technology advanced, the desktop machine became the default for producing maps, a process known as digital mapping, or computer cartography. These computers were networked together to share data and processing power and create redundant communications for defense applications.[12] This computer network evolved into the internet, and by the late 1980s, the internet was available in some people's homes.[12] Over time, the internet moved from a novelty to a major part of daily life. Using the internet, it was no longer necessary to store all data for a project locally, and communications were vastly improved. Following this trend, GIScientists began developing methods for combining the internet and GIS. This process accelerated in the 1990s, with the creation of the World Wide Web in 1990 and the first major web mapping program, Xerox PARC Map Viewer, capable of distributed map creation, appearing in 1993.[12][28][9] This software was unique in that it facilitated dynamic user map generation, rather than static images.[28] These new Web-based programs helped users to employ GIS without having it locally installed on their machine, ultimately leading to Web GIS being the dominant way users interact with internet GIS.[10][28] In 1995, the US federal government made the TIGER Mapping Service available to the public, facilitating desktop and Web GIS by hosting US boundary data.[10] This data availability, facilitated through the internet, silently revolutionized cartography by providing the world with authoritative boundary files for free. In 1996, MapQuest became available to the public, facilitating navigation and trip planning.[10] Sometime during the 1990s, more maps were transmitted over the internet than physically printed.[12] This milestone was predicted in 1985 and represented a major shift in how we distribute spatial products to the masses.[29] As of 2020, almost 75% of the population has a smartphone.[1][30] These devices allow users to access the internet wherever they have service, and have revolutionized how we interact with the internet. One notable example is the rise of mobile apps, which have impacted both how GIS is done, and how data are collected. Some mobile apps, like the Google Maps mobile app, are web-based and allow users to get navigation instructions in real time.
Others, like Esri's Survey123, allow users to collect data in the field with their smartphone.[31] As time progresses, internet-based applications that do not make use of HTML or Web browsers have begun to grow in popularity.[32] The World Wide Web is an information system that uses the internet to host, share, and distribute documents, images, and other data.[33] Web GIS involves using the World Wide Web to facilitate GIS tasks traditionally done on a desktop computer, as well as enabling the sharing of maps and spatial data.[7] Most, but not all, Internet GIS is Web GIS; however, all Web GIS is Internet GIS.[10][11] This is quite similar to how much of the activity on the internet is hosted on the World Wide Web, but not everything on the internet is the World Wide Web. The tasks Web GIS is used for are numerous but can be generally divided into the categories of geospatial web services: web feature services, web processing services, and web mapping services.[3] By definition, maps can never be perfect and are simplifications of reality.[34] Ethical cartographers try to keep these inaccuracies documented and to a minimum, while encouraging critical perspectives when using a map. Internet GIS has brought map-making tools to the general public, facilitating the rapid dissemination of these maps.[35] While this is potentially positive, it also means that people without cartographic training can easily make and disseminate misleading maps to a wide audience.[20][36][37] This was brought to public attention during the COVID-19 pandemic, when more than half of all United States state government COVID-19 dashboards had cartographic errors.[38] Further, malicious actors can quickly spread intentionally misleading spatial information while hiding the source.[34] As the internet is decentralized, traditional solutions to such problems, such as government regulation, are difficult or impossible to implement.[39] For many users, the World Wide Web is synonymous with the Internet, and the same is largely true for Internet GIS. Most functions done with Internet GIS are conducted through the use of Web GIS. This has caused the borders between the two terms to blur, and "Web GIS" to become genericized into meaning any GIS done over the internet to some users.
https://en.wikipedia.org/wiki/Internet_GIS
In queueing theory, a discipline within the mathematical theory of probability, a layered queueing network (or rendezvous network[1]) is a queueing network model where the service time for each job at each service node is given by the response time of a queueing network (and those service times in turn may also be determined by further nested networks). Resources can be nested and queues form along the nodes of the nesting structure.[2][3] The nesting structure thus defines "layers" within the queueing model.[2] Layered queueing has applications in a wide range of distributed systems which involve different master/slave, replicated services and client–server components, allowing each local node to be represented by a specific queue, then orchestrating the evaluation of these queues.[2] For a large population of jobs, a fluid limit has been shown in PEPA to give a good approximation of performance measures.[4]
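The layering idea can be illustrated with a rough back-of-the-envelope calculation. The sketch below is not a layered-queueing solver: it uses hypothetical arrival and service rates and simple M/M/1 formulas, and treats the upper layer's service time as its own processing time plus the response time of the lower-layer server it calls synchronously.

```python
# A rough sketch, not an LQN solver: the upper layer's effective service time
# is its own CPU demand plus the response time of the lower-layer queue it
# calls (a rendezvous), with each layer approximated as an M/M/1 queue.

def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time of a stable M/M/1 queue."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

arrival_rate = 2.0         # jobs per second (hypothetical)
lower_service_rate = 10.0  # lower-layer server, e.g. a database
upper_own_rate = 8.0       # upper layer's own processing rate

lower_response = mm1_response_time(arrival_rate, lower_service_rate)
upper_service_time = 1.0 / upper_own_rate + lower_response   # the layering step
upper_service_rate = 1.0 / upper_service_time

print(f"lower-layer response time: {lower_response:.3f} s")   # 0.125 s
print(f"end-to-end response time:  "
      f"{mm1_response_time(arrival_rate, upper_service_rate):.3f} s")  # 0.500 s
```

Real layered queueing solvers iterate over all layers simultaneously and handle multiple servers and asynchronous calls, but the nesting of response times into service times shown here is the defining feature of the model.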
https://en.wikipedia.org/wiki/Layered_queueing_network
In software engineering, a Library Oriented Architecture (LOA) is a set of principles and methodologies for designing and developing software in the form of reusable software libraries constrained to a specific ontology domain. LOA provides one of the many alternative methodologies that enable the further exposure of software through a service-oriented architecture. Library orientation dictates the ontological boundaries of a library that exposes business functionality through a set of public APIs. Library Oriented Architecture further promotes practices similar to Modular Programming, and encourages the maintenance of internal libraries and modules with independent internal open-source life-cycles. This approach promotes good software engineering principles and patterns such as separation of concerns and designing to interfaces as opposed to implementations. Three principles govern Library Oriented Architecture frameworks. Library Oriented Architecture may provide different process improvements to existing software engineering practices and the software development life-cycle, and its adoption can yield several tangible benefits.
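As an illustration of the "designing to interfaces" principle mentioned above, the following sketch (plain Python, with hypothetical names that are not taken from any particular LOA framework) shows a domain library whose public API is an abstract interface, so consumers never depend on the concrete implementation.

```python
# A minimal sketch of "design to interfaces" in a library-oriented layout:
# the public API of a hypothetical "billing" domain library is an interface,
# and application code never imports the concrete implementation directly.
from abc import ABC, abstractmethod

class InvoiceRepository(ABC):
    """Public API of the billing library (its ontological boundary)."""
    @abstractmethod
    def save(self, invoice_id: str, amount: float) -> None: ...
    @abstractmethod
    def total(self) -> float: ...

class InMemoryInvoiceRepository(InvoiceRepository):
    """Internal detail, maintained on the library's own life-cycle."""
    def __init__(self) -> None:
        self._invoices: dict[str, float] = {}
    def save(self, invoice_id: str, amount: float) -> None:
        self._invoices[invoice_id] = amount
    def total(self) -> float:
        return sum(self._invoices.values())

def record_sale(repo: InvoiceRepository, invoice_id: str, amount: float) -> None:
    # Depends only on the interface, so the implementation can be swapped
    # (e.g. for a database-backed repository) without touching callers.
    repo.save(invoice_id, amount)

repo = InMemoryInvoiceRepository()
record_sale(repo, "INV-001", 99.5)
print(repo.total())
```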
https://en.wikipedia.org/wiki/Library_Oriented_Architecture
OpenHarmony (OHOS, OH) is a family of open-source distributed operating systems based on HarmonyOS and derived from LiteOS, whose L0-L2 branch source code was donated by Huawei to the OpenAtom Foundation. Similar to HarmonyOS, the open-source distributed operating system is designed with a layered architecture, consisting of four layers from the bottom to the top: the kernel layer, system service layer, framework layer, and application layer. It is also an extensive collection of free software, which can be used as an operating system or in parts with other operating systems via Kernel Abstraction Layer subsystems.[3][4] OpenHarmony supports various devices running a mini system, such as printers, speakers, smartwatches, and other smart devices with memory as small as 128 KB, or running a standard system with memory greater than 128 MB.[5] The system contains the basic and some advanced capabilities of HarmonyOS, such as DSoftBus technology with a distributed device virtualization platform,[6] which is a departure from the traditional virtualised guest OS for connected devices.[7] The operating system is oriented towards the Internet of things (IoT) and embedded devices market with a diverse range of device support, including smartphones, tablets, smart TVs, smart watches, personal computers and other smart devices.[8] The first version of OpenHarmony was launched by the OpenAtom Foundation on September 10, 2020, after receiving a donation of the open-source code from Huawei.[9] In December 2020, the OpenAtom Foundation and Runhe Software officially launched the OpenHarmony open source project with seven units, including Huawei and the Software Institute of the Chinese Academy of Sciences. OpenHarmony 2.0 (Canary version) was launched in June 2021, supporting a variety of smart terminal devices.[9] Building on the earlier version, the OpenAtom Foundation launched OpenHarmony 3.0 on September 30, 2021, which brought substantial improvements to the operating system, including support for file security access (the ability to convert files into URIs and resolve URIs to open files) and support for basic capabilities of relational databases and distributed data management.[10] A release of OpenHarmony supporting devices with up to 4 GB RAM was made available in April 2021.[11] The OpenAtom Foundation added a UniProton kernel, a hardware-based microkernel real-time operating system, into its repo as part of the Kernel subsystem of the OpenHarmony operating system as an add-on on August 10, 2022.[12] The primary IDE, known as DevEco Studio, is used to build OpenHarmony applications with the OpenHarmony SDK, a full development kit that includes a comprehensive set of development tools: a debugger, a tester system via DevEco Testing, a repository with software libraries for software development, an embedded device emulator, a previewer, documentation, sample code, and tutorials. Applications for OpenHarmony are mostly built using components of ArkUI, a declarative user interface framework.
ArkUI elements are adaptable to various custom open-source hardware and industry hardware devices and include new interface rules that update automatically along with HarmonyOS updates.[13] Hardware development is done using DevEco Studio via the DevEco Device tool for building on OpenHarmony. It is also used to create distros, with toolchains provided for operating system development, including verification and certification processes for the platform, as well as for customising the operating system as an open source variant, in contrast to the original closed distro variant HarmonyOS, which primarily focuses on HarmonyOS Connect partners with Huawei.[14] The OpenHarmony Application Binary Interface (ABI) ensures compatibility across various OpenHarmony powered devices with a diverse set of chipset instruction set platforms.[15] HDC (OpenHarmony Device Connector) is a command-line tool tailored for developers working with OpenHarmony devices. The BM command tool, a component of HDC, is used to facilitate debugging by developers; after entering the HDC shell, the BM tool can be utilised.[16][17] Like HarmonyOS, OpenHarmony uses App Pack files suffixed with .app, also known as APP files, on AppGallery and third-party distribution application stores on OpenHarmony-based and non-OpenHarmony operating systems such as the Linux-based Unity Operating System, which is beneficial for interoperability and compatibility. Each App Pack has one or more HarmonyOS Ability Packages (HAP) containing code for their abilities, resources, libraries, and a JSON file with configuration information.[18] While incorporating the OpenHarmony layer for running the APP files developed based on HarmonyOS APIs, the operating system utilizes the mainline Linux kernel for bigger memory devices, the RTOS-based LiteOS kernel for smaller memory-constrained devices, and add-on, custom kernels in distros in the Kernel Abstract Layer (KAL) subsystem, which is neither kernel dependent nor instruction set dependent. For webview applications, it incorporates the ArkWeb software engine at system level as of the API 11 release, a security-enhanced replacement for the Chromium Embedded Framework-based nweb software engine that facilitated Blink-based Chromium in API 5.[19] Unlike the open-source Android operating system, where countless third-party dependency packages are repeatedly built into apps, to the disadvantage of fragmentation, the OpenHarmony central repositories, under Special Interest Group governance at OpenAtom, provide commonly used third-party public repositories for developers in the open-source environment, which brings greater interoperability and compatibility with OpenHarmony-based operating systems. Apps do not require repeatedly built-in third-party dependencies, such as Chromium, Unity and Unreal Engine. This can greatly reduce the system ROM volume.[20] Harmony Distributed File System (HMDFS) is a distributed file system designed for large-scale data storage and processing that is also used in openEuler. It is inspired by the Hadoop Distributed File System (HDFS). The file system is suitable for scenarios where large-scale data storage and processing are essential, such as IoT applications, edge computing, and cloud services.[21] On Orange Pi OS (OHOS), the native file system shows LOCAL and shared_disk via OpenHarmony's Distributed File System (HMDFS). The file path root folder of the file system uses ">" instead of the traditional "/" of Unix, Linux and other Unix-like systems and "\" on Windows.
Access token manager (ATM) is an essential component in OpenHarmony-based distributed operating systems, responsible for unified app permission management based on access tokens. Access tokens serve as identifiers for apps, containing information such as the app ID, user ID, app privilege level (APL), and app permissions. By default, apps can access limited system resources. ATM ensures controlled access to sensitive functionality, combining both RBAC and CBAC models into a hybrid ACL model.[22] The OpenHarmony kernel abstract layer employs the third-party musl libc library and native APIs, providing support for the Portable Operating System Interface (POSIX): on the Linux side through Linux kernel syscalls, and on the LiteOS side through the POSIX API compatibility that is an inherent part of the original LiteOS design, within the multi-kernel Kernel Abstract Layer architecture.[23] Developers and vendors can create components and applications that work on the kernel based on POSIX standards.[24] The OpenHarmony NDK is a toolset that enables developers to incorporate C and C++ code into their applications. Specifically, in the case of OpenHarmony, the NDK serves as a bridge between the native world (C/C++) and the OpenHarmony ecosystem.[25] This NAPI method is of vital importance to the open source community of individual developers, companies and non-profit stakeholder organisations among manufacturers creating third-party libraries for interoperability and compatibility, supporting native open source and commercial application development by third-party developers across southbound and northbound interface development of richer APIs, e.g. third-party Node.js, Simple DirectMedia Layer, the Qt framework, the LLVM compiler, FFmpeg, etc.[26][27] OpenHarmony can be deployed on various hardware devices of ARM, RISC-V and x86 architectures with memory volumes ranging from as small as 128 KB up to more than 1 MB. It supports hardware devices with three types of system.[28] To ensure OpenHarmony-based devices are compatible and interoperable in the ecosystem, the OpenAtom Foundation has set up product compatibility specifications, with a Compatibility Working Group to evaluate and certify the products that are compatible with OpenHarmony.[29][30] Two types of certifications were published for the partners supporting the compatibility work, with the right to use the OpenHarmony Compatibility Logo on their certified products, packaging, and marketing materials.[31] On April 25, 2022, 44 products had obtained the compatibility certificates, and more than 80 software and hardware products were in the process of evaluation for OpenHarmony compatibility.[citation needed] From the open-sourcing of OpenHarmony in September 2020 to December 2021, more than 1,200 developers and 40 organizations participated in the open source project and contributed code. At present, OpenHarmony has developed to version 4.x. Version features include support for rich 3D applications with OpenGL, OpenGL ES and WebGL technologies,[35] connection security, media support for richer encoding, support for more refined broadcast control capabilities, and the ArkWeb software engine featured on HarmonyOS NEXT, which replaces the old nweb software engine that takes advantage of the Chromium web browser and Blink browser engine.
The Core File Kit API enhanced the Access token manager with on-device AI and capability-based features on the OpenHarmony Distributed File System (HMDFS) as well as the local file system, covering application files, user files and system files, taking advantage of TEE kernel hardware-level features interoperable with the commercial HarmonyOS NEXT system for cross-file sharing and access interactions.[38] NFC provides HCE card emulation capabilities. The Public Basic Class Library supports thread pools ("workers") within HSP and HAR modules of HAP apps. ArkGraphics 2D, a 2D draw API, is supported.[39][40][41][42] OpenHarmony is the most active open source project hosted on the Gitee platform. As of September 2023, it has over 30 open-source software distributions compatible with OpenHarmony for various sectors such as education, finance, smart home, transportation, digital government and other industries.[47][48][49] On September 14, 2021, Huawei announced the launch of the commercial proprietary MineHarmony OS, a customized operating system by Huawei for industrial use, based on its in-house HarmonyOS distro, which is in turn based on OpenHarmony. MineHarmony is compatible with about 400 types of underground coal mining equipment, providing the equipment with a single interface to transmit and collect data for analysis. Wang Chenglu, President of Huawei's consumer business AI and smart full-scenario business department, indicated that the launch of MineHarmony OS signified that the HarmonyOS ecology had taken a step further from B2C to B2B.[50][51][52] Midea, a Chinese electrical appliance manufacturer, launched Midea IoT operating system 1.0, an IoT-centric operating system based on OpenHarmony 2.0, officially launched in October 2021. Previously, the company had used the HarmonyOS operating system in partnership with Huawei for its smart devices' compatibility since the June 2, 2021 launch of HarmonyOS 2.0.[53][54][55][56] On January 6, 2022, OpenHarmony in Space (OHIS), by the OHIS Working Group and Dalian University of Technology led by Yu Xiaozhou, was reported to be a vital play for the future from a scientific and engineering point of view, expected to open up opportunities for development in China's satellite systems and to surpass SpaceX's Starlink plan with the idea of micro-nano satellite technology.[57] Based on OpenHarmony, SwanLinkOS was released in June 2022 by Honghu Wanlian (Jiangsu) Technology Development, a subsidiary of iSoftStone, for the transportation industry.
The operating system supports mainstream chipsets, such as the Rockchip RK3399 and RK3568, and can be applied in transportation and shipping equipment for monitoring road conditions, big data analysis, and maritime search and rescue.[58] It was awarded the OpenHarmony Ecological Product Compatibility Certificate by the OpenAtom Foundation.[59] On November 7, 2022, ArcherMind, a company that deals with operating systems, interconnection solutions, smart innovations, and R&D, launched the HongZOS system, which supports OpenHarmony and HiSilicon chips; the solution mainly focuses on AIoT in industrial sectors.[60] On November 28, 2022, Orange Pi launched the Orange Pi OS based on the open-source OpenHarmony version.[61] In October 2023, they released the Orange Pi 3B board with the Orange Pi OHOS version for hobbyists and developers, based on the OpenHarmony 4.0 Beta1 version.[62][63][64] On December 23, 2022, an integrated software and hardware solution, together with the self-developed hardware products of Youbo Terminal, running RobanTrust OS based on OpenHarmony, was launched as version 1.0 with a 3.1.1 compatibility release.[65] On January 14, 2023, the Red Flag smart supercharger was first launched on the OpenHarmony-based KaihongOS with OpenHarmony 3.1 support, which supports the distributed soft bus that allows interconnection with other electronic devices and electrical facilities.[66] On January 17, 2023, an electronic class card with a 21.5-inch screen developed by Chinasoft and New Cape Electronics was released.[67] On November 17, 2023, Kaihong Technology and Leju Robot collaborated to release the world's first humanoid robot powered by the open-source OpenHarmony distro KaihongOS, with Rockchip SoC hardware using RTOS kernel technology for industrial robotic machines with deterministic, predictable response times.[citation needed] On April 15, 2023, Tongxin Software became OpenAtom's OpenHarmony Ecological Partner.[citation needed] An intelligent terminal operating system for enterprises in China by Tongxin Software passed compatibility certification on June 7, 2023. The Tongxin intelligent terminal operating system supports ARM, x86, and other architectures. Tongxin has established cooperative relations with major domestic mobile chip manufacturers and has completed adaptations using the Linux kernel. Together with the desktop operating system and the server operating system, it constitutes the Tongxin operating system family.[citation needed] PolyOS Mobile is an AIoT open-source operating system tailored for RISC-V intelligent terminal devices by the PolyOS Project based on OpenHarmony, which was released on August 30, 2023, and is available for QEMU virtualisation on Windows 10 and 11 desktop machines.[68] LightBeeOS, launched on September 28, 2023, is an OpenHarmony-based distro by Shenzhen Zhengtong Company that supports financial-level security with a distribution bus, used for industrial public banking systems and tested on ATM machines with UnionPay in the Chinese domestic market. The operating system has been launched with OpenHarmony 3.2 support and up.[69] On September 28, 2021, the Eclipse Foundation and the OpenAtom Foundation announced their intention to form a partnership to collaborate on an OpenHarmony European distro, part of the global family of operating systems under OpenHarmony and a member of the OpenHarmony operating system family.
Like OpenHarmony, it follows the one-OS-kit-for-all paradigm and enables a collection of free software, which can be used as an operating system or in parts with other operating systems via Kernel Abstraction Layer subsystems on Oniro OS distros.[71] Oniro OS, or simply Oniro, also known as the Eclipse Oniro Core Platform, is a distributed operating system for AIoT embedded systems launched on October 26, 2021, as Oniro OS 1.0. Implemented to be compatible with HarmonyOS and based on the OpenHarmony L0-L2 branch source code, it was launched by the Eclipse Foundation for the global market with founding members including Huawei, Linaro and Seco, with others joining later on. Oniro is designed on the basis of open source and aims to be a transparent, vendor-neutral, and independent system in the era of IoT, with globalisation and localisation strategies resolving a fragmented IoT and embedded devices market.[72][73] The operating system originally featured a Yocto-based Linux kernel system developed with the OpenEmbedded build system, BitBake and Poky, which is now part of the Oniro blueprints that aim to be platform agnostic; however, it is now aligned with the OpenAtom development of OpenHarmony.[74] The goal is to grow the distro with partners that create their own OpenHarmony-Oniro compatible distros, increasing interoperability and reducing the fragmentation of diverse platforms and hardware, with enhancements from the derived project fed back to the original project as upstream development of the OpenHarmony source code branch to improve compatibility with global industrial standards customised for global markets. It is also used for downstream development, enhancing the OpenHarmony base in global and western markets for compatibility and interoperability with connected IoT systems, as well as custom third-party support for on-device AI features on custom frameworks such as TensorFlow, CUDA and others, alongside native Huawei MindSpore solutions across the entire OpenHarmony ecosystem. The Oniro platform is compatible both with OpenHarmony systems in China and with Huawei's own HarmonyOS platform globally, including western markets, in connectivity and apps.[75][76] Oniro uses Rust in a framework alongside the Data Plane Development Kit (DPDK) IP Pipeline and profiling; React Native and Kanto in the applications development system on top of OpenHarmony; Servo and Linaro tools in system services; Matter, an open-source, royalty-free connectivity standard that aims to unify smart home devices and increase their compatibility with various platforms, and OSGi in the driver subsystem; IoTex in swappable kernel development; and Eclipse Theia as an integrated development environment to build Oniro OS apps that have interoperability with OpenHarmony-based operating systems. Data can be transmitted directly rather than being shared via the cloud, enabling low-latency architectures with more secure methods and privacy functions suitable for AIoT and smart home device integration.[77][78] In September 2023, the Open Mobile Hub (OMH), led by the Linux Foundation, was formed as an open-source platform ecosystem that aims to simplify and enhance the development of mobile applications for various platforms, including iOS, Android, and the OpenHarmony-based global Oniro OS, alongside HarmonyOS (NEXT), with greater cross-platform and open interoperability in mobile through OMH plugins such as Google APIs, Google Drive, OpenStreetMap alongside Bing Maps, Mapbox, Microsoft, Facebook, Dropbox, LinkedIn, X and more.
The Open Mobile Hub platform aims to provide a set of tools and resources to streamline the mobile app development process.[79] The Oniro project is focused on being a horizontal platform for application processors and microcontrollers.[80] It is an embedded OS, using the Yocto build system, with a choice of either the Linux kernel, Zephyr, or FreeRTOS.[80] It includes an IP toolchain, maintenance, OTA, and OpenHarmony. It provides example combinations of components for various use cases, called "Blueprints".[80] Oniro OS 2.0 was released in 2022 and Oniro OS 3.0, based on OpenHarmony 3.2 LTS, in October 2023, alongside the latest 4.0 version as of December 6, 2023 on the main branch.[81][82][83] Huawei officially announced the commercial distro of the proprietary HarmonyOS NEXT, a microkernel-based core distributed operating system for HarmonyOS, at the Huawei Developer Conference 2023 (HDC) on August 4, 2023; it supports only native APP apps via the Ark Compiler, with Huawei Mobile Services (HMS) Core support. A proprietary system built on OpenHarmony, HarmonyOS NEXT has the HarmonyOS microkernel at its core, has no apk compatibility support, and is built exclusively for the Huawei devices ecosystem.[85] With its customized architecture, HarmonyOS NEXT moves beyond OpenHarmony to support a broader range of applications and device ecosystems. It integrates a dual-frame design, optimizing compatibility with the EMUI userland. The system is tailored for various hardware categories, including smartphones, tablets, cars, TVs, wearables, and IoT devices, utilizing either a Linux-based kernel or the lightweight LiteOS kernel for specific applications. On the same day at HDC 2023, the developer preview version of HarmonyOS NEXT was opened for cooperating enterprise developers to build and test native mobile apps. It will be open to all developers in the first quarter of 2024 according to the official announcement.[86][87][88] On 18 January 2024, Huawei announced that the HarmonyOS NEXT Galaxy stable rollout would begin in Q4 2024, based on the OpenHarmony 5.0 (API 12) version, following an OpenHarmony 4.1 (API 11) based Q2 Developer Beta and the release of public developer access to HarmonyOS NEXT Developer Preview 1, which had been in the hands of closed cooperative developer partners since its August 2023 debut. The new HarmonyOS 5 system will replace the previous HarmonyOS 4.2 system for commercial Huawei consumer devices and can only run native HarmonyOS apps built for HarmonyOS and OpenHarmony, with localisation through Oniro OS for downstream development at a global level, customised to global markets and standards, enhancing OpenHarmony development.[89] On June 21, 2024, Huawei announced at the HDC 2024 conference and released the Developer Beta milestone of HarmonyOS NEXT, based on the OpenHarmony 5.0 beta1 version, for registered public developers, with the HMS Core library embedded in the native NEXT-specific API Developer Kit alongside supported compatible OpenHarmony APIs for native OpenHarmony-based HarmonyOS apps.
The company officially confirmed that the operating system is OpenHarmony compatible with the new boot image system.[90] On October 22, 2024, Huawei launched HarmonyOS 5.0.0 at its launch event, upgrading the HarmonyOS NEXT internal and public developer software versions and completing the transition that replaces the dual-framework of previous mainline HarmonyOS versions with a full OpenHarmony base and a custom HarmonyOS kernel on the original L0-L2 codebase branch, officially marking it as an independent commercial operating system and ecosystem free of Android fork dependencies, with 15,000+ native apps launched on the platform. As a result, OpenHarmony-based systems, including Oniro-based systems, are intended to be compatible with HarmonyOS native HAP apps, the NearLink wireless connectivity stack and cross-device operation with the upgraded DSoftBus connectivity.[91][92] In terms of architecture, OpenHarmony, alongside HarmonyOS, has a close relationship with the server-based multi-kernel operating system openEuler, which is a community edition of EulerOS, as they have implemented the sharing of kernel technology, as revealed by Deng Taihua, President of Huawei's Computing Product Line.[93] The sharing is reportedly to be strengthened in the future in the areas of the distributed software bus, app framework, system security, device driver framework and a new programming language on the server side.[94] Harmony Distributed File System (HMDFS) is a distributed file system designed for large-scale data storage and processing that is also used in the openEuler server operating system.
https://en.wikipedia.org/wiki/OpenHarmony
Plan 9 from Bell Labsis adistributed operating systemwhich originated from the Computing Science Research Center (CSRC) atBell Labsin the mid-1980s and built onUNIXconcepts first developed there in the late 1960s. Since 2000, Plan 9 has beenfree and open-source. The final official release was in early 2015. Under Plan 9, UNIX'severything is a filemetaphor is extended via a pervasive network-centricfilesystem, and thecursor-addressed,terminal-basedI/Oat the heart ofUNIX-likeoperating systems is replaced by awindowing systemandgraphical user interfacewithout cursor addressing, althoughrc, the Plan 9shell, is text-based. The namePlan 9 from Bell Labsis a reference to theEd Wood1957cultscience fictionZ-moviePlan 9 from Outer Space.[17]The system continues to be used and developed by operating system researchers and hobbyists.[18][19] Plan 9 from Bell Labs was originally developed, starting in the late 1980s,[19]by members of the Computing Science Research Center at Bell Labs, the same group that originally developedUnixand theC programming language.[20]The Plan 9 team was initially led byRob Pike,Ken Thompson, Dave Presotto and Phil Winterbottom, with support fromDennis Ritchieas head of the Computing Techniques Research Department. Over the years, many notable developers have contributed to the project, includingBrian Kernighan,Tom Duff,Doug McIlroy,Bjarne Stroustrupand Bruce Ellis.[21] Plan 9 replaced Unix as Bell Labs's primary platform for operating systems research.[22]It explored several changes to the original Unix model that facilitate the use and programming of the system, notably in distributedmulti-userenvironments. After several years of development and internal use, Bell Labs shipped the operating system to universities in 1992. Three years later, Plan 9 was made available for commercial parties by AT&T via the book publisherHarcourt Brace. With source licenses costing $350, AT&T targeted the embedded systems market rather than the computer market at large. Ritchie commented that the developers did not expect to do "much displacement" given how established other operating systems had become.[23] By early 1996, the Plan 9 project had been "put on the back burner" by AT&T in favor ofInferno, intended to be a rival toSun Microsystems'Java platform.[24]In the late 1990s, Bell Labs' new ownerLucent Technologiesdropped commercial support for the project and in 2000, a third release was distributed under anopen-source license.[25]A fourth release under a newfree softwarelicense occurred in 2002.[26]In early 2015, the final official release of Plan 9 occurred.[25] A user and development community, including current and formerBell Labspersonnel, produced minor daily releases in the form ofISO images. Bell Labs hosted the development.[27]The development source tree is accessible over the9PandHTTPprotocols and is used to update existing installations.[28]In addition to the official components of the OS included in the ISOs, Bell Labs also hosts a repository of externally developed applications and tools.[29] AsBell Labshas moved on to later projects in recent years, development of the official Plan 9 system had stopped. On March 23, 2021, development resumed following the transfer of copyright fromBell Labsto the Plan 9 Foundation.[10][30][31]Unofficial development for the system also continues on the 9front fork, where active contributors provide monthly builds and new functionality. 
So far, the 9front fork has provided the systemWi-Fidrivers, Audio drivers,USBsupport and built-in game emulator, along with other features.[32][33]Other recent Plan 9-inspired operating systems include Harvey OS[34]and Jehanne OS.[35] Plan 9 from Bell Labs is like theQuakers: distinguished by its stress on the 'Inner Light,' noted for simplicity of life, in particular for plainness of speech. Like the Quakers, Plan 9 does not proselytize. Plan 9 is adistributed operating system, designed to make a network ofheterogeneousand geographically separated computers function as a single system.[38]In a typical Plan 9 installation, users work at terminals running the window systemrio, and they access CPU servers which handle computation-intensive processes. Permanent data storage is provided by additional network hosts acting as file servers and archival storage.[39] Its designers state that, [t]he foundations of the system are built on two ideas: a per-processname spaceand a simple message-oriented file system protocol. The first idea (a per-process name space) means that, unlike on most operating systems,processes(running programs) each have their own view of thenamespace, corresponding to what other operating systems call the file system; a single path name may refer to different resources for different processes. The potential complexity of this setup is controlled by a set of conventional locations for common resources.[41][42] The second idea (a message-oriented filesystem) means that processes can offer their services to other processes by providing virtual files that appear in the other processes' namespace. Theclientprocess's input/output on such a file becomesinter-process communicationbetween the two processes. This way, Plan 9 generalizes the Unix notion of thefilesystemas the central point of access to computing resources. It carries over Unix's idea ofdevice filesto provide access to peripheral devices (mice, removable media, etc.) and the possibility tomountfilesystems residing on physically distinct filesystems into a hierarchical namespace, but adds the possibility to mount a connection to a server program that speaks a standardized protocol and treat its services as part of the namespace. For example, the original window system, called 8½, exploited these possibilities as follows. Plan 9 represents the user interface on a terminal by means of three pseudo-files:mouse, which can be read by a program to get notification of mouse movements and button clicks;cons, which can be used to perform textual input/output; andbitblt, writing to which enacts graphics operations (seebit blit). The window system multiplexes these devices: when creating a new window to run some program in, it first sets up a new namespace in whichmouse,consandbitbltare connected to itself, hiding the actual device files to which it itself has access. The window system thus receives all input and output commands from the program and handles these appropriately, by sending output to the actual screen device and giving the currently focused program the keyboard and mouse input.[39]The program does not need to know if it is communicating directly with the operating system's device drivers, or with the window system; it only has to assume that its namespace is set up so that these special files provide the kind of input and accept the kind of messages that it expects. 
Plan 9's distributed operation relies on the per-process namespaces as well, allowing client and server processes to communicate across machines in the way just outlined. For example, the cpu command starts a remote session on a computation server. The command exports part of its local namespace, including the user's terminal's devices (mouse, cons, bitblt), to the server, so that remote programs can perform input/output using the terminal's mouse, keyboard and display, combining the effects of remote login and a shared network filesystem.[39][40] All programs that wish to provide services-as-files to other programs speak a unified protocol, called 9P. Compared to other systems, this reduces the number of custom programming interfaces. 9P is a generic, medium-agnostic, byte-oriented protocol that provides for messages delivered between a server and a client.[43] The protocol is used to refer to and communicate with processes, programs, and data, including both the user interface and the network.[44] With the release of the 4th edition, it was modified and renamed 9P2000.[26] Unlike most other operating systems, Plan 9 does not provide special application programming interfaces (such as Berkeley sockets, X resources or ioctl system calls) to access devices.[43] Instead, Plan 9 device drivers implement their control interface as a file system, so that the hardware can be accessed by the ordinary file input/output operations read and write. Consequently, sharing the device across the network can be accomplished by mounting the corresponding directory tree to the target machine.[17] Plan 9 allows the user to collect the files (called names) from different directory trees in a single location. The resulting union directory behaves as the concatenation of the underlying directories (the order of concatenation can be controlled); if the constituent directories contain files having the same name, a listing of the union directory (ls or lc) will simply report duplicate names.[45] Resolution of a single path name is performed top-down: if the directories top and bottom are unioned into u with top first, then u/name denotes top/name if it exists, bottom/name only if it exists and top/name does not exist, and no file if neither exists. No recursive unioning of subdirectories is performed, so if top/subdir exists, the files in bottom/subdir are not accessible through the union.[46] A union directory can be created by using a sequence of bind commands, for example:

bind /arm/bin /bin
bind -a /acme/bin/arm /bin
bind -b /usr/alice/bin /bin

In the example above, /arm/bin is mounted at /bin, the contents of /arm/bin replacing the previous contents of /bin. Acme's bin directory is then union mounted after /bin, and Alice's personal bin directory is union mounted before. When a file is requested from /bin, it is first looked for in /usr/alice/bin, then in /arm/bin, and then finally in /acme/bin/arm. The separate process namespaces thus usually replace the notion of a search path in the shell. A path environment variable ($path) still exists in the rc shell (the shell mainly used in Plan 9); however, rc's path environment variable conventionally only contains the /bin and . directories, and modifying the variable is discouraged; instead, additional commands should be added by binding several directories together as a single /bin.[47][39] Unlike in Plan 9, the path environment variable of Unix shells should be set to include the additional directories whose executable files need to be added as commands. Furthermore, the kernel can keep separate mount tables for each process,[37] and can thus provide each process with its own file system namespace.
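The top-down resolution rule described above can be modelled in a few lines. The following sketch is plain Python, not Plan 9 code, and only mimics how a lookup walks the constituent directories of a union in order, stopping at the first match and never recursing into subdirectories.

```python
# A minimal sketch (illustrative only) of union-directory name resolution:
# the first constituent directory that contains the name wins, and
# subdirectories are not merged recursively.
def resolve(union: list[dict], name: str):
    """union is an ordered list of directory listings, first entry = top."""
    for directory in union:
        if name in directory:
            return directory[name]
    return None  # no constituent directory has the name

top = {"ls": "/arm/bin/ls", "cat": "/arm/bin/cat"}
bottom = {"cat": "/acme/bin/arm/cat", "acme": "/acme/bin/arm/acme"}
u = [top, bottom]          # as if top were bound first and bottom appended after

print(resolve(u, "cat"))   # /arm/bin/cat        (top shadows bottom)
print(resolve(u, "acme"))  # /acme/bin/arm/acme  (present only in bottom)
print(resolve(u, "rc"))    # None                (in neither directory)
```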
Processes' namespaces can be constructed independently, and the user may work simultaneously with programs that have heterogeneous namespaces.[40]Namespaces may be used to create an isolated environment similar tochroot, but in a more secure way.[43] Plan 9's union directory architecture inspired4.4BSDandLinuxunion file systemimplementations,[45]although the developers of the BSD union mounting facility found the non-recursive merging of directories in Plan 9 "too restrictive for general purpose use".[46] Instead of having system calls specifically forprocess management, Plan 9 provides the/procfile system. Eachprocessappears as a directory containing information and control files which can be manipulated by the ordinary file IO system calls.[8] The file system approach allows Plan 9 processes to be managed with simple file management tools such aslsandcat; however, the processes cannot be copied and moved as files.[8] Plan 9 does not have specialised system calls orioctlsfor accessing the networking stack or networking hardware. Instead, the/netfile system is used. Network connections are controlled by reading and writing control messages to control files. Sub-directories such as/net/tcpand/net/udpare used as an interface to their respective protocols.[8] To reduce the complexity of managingcharacter encodings, Plan 9 usesUnicodethroughout the system. The initial Unicode implementation wasISO/IEC 10646-1:1993.Ken Thompsoninvented UTF-8, which became thenativeencoding in Plan 9. The entire system was converted to general use in 1992.[49]UTF-8 preserves backwards compatibility with traditionalnull-terminated strings, enabling more reliable information processing and the chaining of multilingual string data withUnix pipesbetween multiple processes. Using a single UTF-8 encoding with characters for all cultures and regions eliminates the need for switching between code sets.[50] Though interesting on their own, the design concepts of Plan 9 were supposed to be most useful when combined. For example, to implement anetwork address translation(NAT) server, a union directory can be created, overlaying therouter's/netdirectory tree with its own/net. Similarly, avirtual private network(VPN) can be implemented by overlaying in a union directory a/nethierarchy from a remotegateway, using secured 9P over the public Internet. A union directory with the/nethierarchy and filters can be used tosandboxan untrusted application or to implement afirewall.[43]In the same manner, a distributed computing network can be composed with a union directory of/prochierarchies from remote hosts, which allows interacting with them as if they are local. When used together, these features allow for assembling a complex distributed computing environment by reusing the existing hierarchical name system.[8] As a benefit from the system's design, most tasks in Plan 9 can be accomplished by usingls,cat,grep,cpandrmutilities in combination with therc shell(the default Plan 9 shell). Factotumis anauthenticationandkey managementserver for Plan 9. It handles authentication on behalf of other programs such that bothsecret keysand implementation details need only be known to Factotum.[51] UnlikeUnix, Plan 9 was designed with graphics in mind.[44]After booting, a Plan 9 terminal will run theriowindowing system, in which the user can create new windows displayingrc.[52]Graphical programs invoked from this shell replace it in its window. Theplumberprovides aninter-process communicationmechanism which allows system-wide hyperlinking. 
Samandacmeare Plan 9's text editors.[53] Plan 9 supports theKfs,Paq,Cwfs,FAT, andFossilfile systems. The last was designed at Bell Labs specifically for Plan 9 and provides snapshot storage capability. It can be used directly with a hard drive or backed withVenti, an archival file system and permanent data storage system. The distribution package for Plan 9 includes special compiler variants and programming languages, and provides a tailored set of libraries along with a windowinguser interfacesystem specific to Plan 9.[54]The bulk of the system is written in a dialect of C (ANSI Cwith some extensions and some other features left out). The compilers for this language were custom built with portability in mind; according to their author, they "compile quickly, load slowly, and produce medium quality object code".[55] Aconcurrent programming languagecalledAlefwas available in the first two editions, but was then dropped for maintenance reasons and replaced by athreadinglibrary for C.[56][57] Though Plan 9 was supposed to be a further development of Unix concepts, compatibility with preexisting Unix software was never the goal for the project. Manycommand-line utilitiesof Plan 9 share the names of Unix counterparts, but work differently.[48] Plan 9 can supportPOSIXapplications and can emulate theBerkeley socket interfacethrough theANSI/POSIX Environment(APE) that implements aninterfaceclose toANSI CandPOSIX, with some common extensions (the native Plan 9 C interfaces conform to neither standard). It also includes a POSIX-compatible shell. APE's authors claim to have used it to port theX Window System(X11) to Plan 9, although they do not ship X11 "because supporting it properly is too big a job".[58]Some Linux binaries can be used with the help of a "linuxemu" (Linux emulator) application; however, it is still a work in progress.[59]Vice versa, theVx32virtual machine allows a slightly modified Plan 9 kernel to run as a user process in Linux, supporting unmodified Plan 9 programs.[60] In 1991, Plan 9's designers compared their system to other early nineties operating systems in terms of size, showing that the source code for a minimal ("working, albeit not very useful") version was less than one-fifth the size of aMachmicrokernelwithout any device drivers (5899 or 4622lines of codefor Plan 9, depending on metric, vs. 25530 lines). The complete kernel comprised 18000 lines of code.[39](According to a 2006 count, the kernel was then some 150,000 lines, but this was compared against more than 4.8 million inLinux.[43]) Within the operating systems research community, as well as the commercial Unix world, other attempts at achieving distributed computing and remote filesystem access were made concurrently with the Plan 9 design effort. These included theNetwork File Systemand the associatedvnodearchitecture developed atSun Microsystems, and more radical departures from the Unix model such as theSpriteOS fromUC Berkeley. 
Sprite developer Brent Welch points out that the SunOS vnode architecture is limited compared to Plan 9's capabilities in that it does not support remote device access and remote inter-process communication cleanly, even though it could have, had the preexistingUNIX domain sockets(which "can essentially be used to name user-level servers") been integrated with the vnode architecture.[41] One critique of the "everything is a file", communication-by-textual-message design of Plan 9 pointed out limitations of this paradigm compared to thetypedinterfaces of Sun'sobject-oriented operating system,Spring: Plan 9 constrains everything to look like a file. In most cases the real interface type comprises the protocol of messages that must be written to, and read from, a file descriptor. This is difficult to specify and document, and prohibits any automatictype checkingat all, except for file errors at run time. (...) [A] path name relative to a process' implicit root context is theonlyway to name a service. Binding a name to an object can only be done by giving an existing name for the object, in the same context as the new name. As such, interface references simplycannotbe passed between processes, much less across networks. Instead, communication has to rely on conventions, which are prone to error and do not scale. A later retrospective comparison of Plan 9, Sprite and a third contemporary distributed research operating system,Amoeba, found that the environments they [Amoeba and Sprite] build are tightly coupled within the OS, making communication with external services difficult. Such systems suffer from the radical departure from the UNIX model, which also discourages portability of already existing software to the platform (...). The lack of developers, the very small range of supported hardware and the small, even compared to Plan 9, user base have also significantly slowed the adoption of those systems (...). In retrospect, Plan 9 was the only research distributed OS from that time which managed to attract developers and be used in commercial projects long enough to warrant its survival to this day. Plan 9 demonstrated that an integral concept of Unix—that every system interface could be represented as a set of files—could be successfully implemented in a modern distributed system.[52]Some features from Plan 9, like the UTF-8 character encoding of Unicode, have been implemented in other operating systems. Unix-like operating systems such as Linux have implemented 9P2000, Plan 9's protocol for accessing remote files, and have adopted features ofrfork, Plan 9's process creation mechanism.[64]Additionally, inPlan 9 from User Space, several of Plan 9's applications and tools, including the sam and acme editors, have been ported to Unix and Linux systems and have achieved some level of popularity. Several projects seek to replace theGNUoperating system programs surrounding the Linux kernel with the Plan 9 operating system programs.[65][66]The 9wmwindow managerwas inspired by8½, the older windowing system of Plan 9;[67]wmiiis also heavily influenced by Plan 9.[63]In computer science research, Plan 9 has been used as agrid computingplatform[68][62]and as a vehicle for research intoubiquitous computingwithoutmiddleware.[69]In commerce, Plan 9 underliesCoraidstorage systems. 
However, Plan 9 has never approached Unix in popularity, and has been primarily a research tool: [I]t looks like Plan 9 failed simply because it fell short of being a compelling enough improvement on Unix to displace its ancestor. Compared to Plan 9, Unix creaks and clanks and has obvious rust spots, but it gets the job done well enough to hold its position. There is a lesson here for ambitious system architects: the most dangerous enemy of a better solution is an existing codebase that is just good enough. Other factors that contributed to low adoption of Plan 9 include the lack of commercial backup, the low number of end-user applications, and the lack ofdevice drivers.[52][53] Plan 9 proponents and developers claim that the problems hindering its adoption have been solved, that its original goals as a distributed system, development environment, and research platform have been met, and that it enjoys moderate but growing popularity.[citation needed]Inferno, through its hosted capabilities, has been a vehicle for bringing Plan 9 technologies to other systems as a hosted part of heterogeneous computing grids.[70][71][72] Several projects work to extend Plan 9, including 9atom and 9front. Theseforksaugment Plan 9 with additionalhardware driversand software, including an improved version of the Upase-mailsystem, theGocompiler,Mercurialversion control systemsupport (and now also a git implementation), and other programs.[19][73]Plan 9 wasportedto theRaspberry Pisingle-board computer.[74][75]The Harvey project attempts to replace the custom Plan 9 C compiler withGCC, to leverage modern development tools such asGitHubandCoverity, and speed up development.[76] SinceWindows 10 version 1903, theWindows Subsystem for Linuximplements thePlan 9 Filesystem Protocolas a server and the hostWindowsoperating system acts as a client.[77] Starting with the release of Fourth edition in April 2002,[26]the full source code of Plan 9 from Bell Labs is freely available underLucent Public License1.02, which is considered to be anopen-source licenseby theOpen Source Initiative(OSI),free softwarelicense by theFree Software Foundation, and it passes theDebian Free Software Guidelines.[43] In February 2014, theUniversity of California, Berkeley, was authorized by the current Plan 9copyright holder–Alcatel-Lucent– to release all Plan 9 software previously governed by the Lucent Public License, Version 1.02 under theGPL-2.0-only.[92] On March 23, 2021, ownership of Plan 9 transferred fromBell Labsto the Plan 9 Foundation,[93]and all previous releases have been relicensed to theMIT License.[10]
https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs
Ashared-nothing architecture(SN) is adistributed computingarchitecturein which each update request is satisfied by a single node (processor/memory/storage unit) in acomputer cluster. The intent is to eliminate contention among nodes. Nodes do not share (independently access) the same memory or storage. One alternative architecture is shared everything, in which requests are satisfied by arbitrary combinations of nodes. This may introduce contention, as multiple nodes may seek to update the same data at the same time. It also contrasts withshared-diskandshared-memoryarchitectures. SN eliminatessingle points of failure, allowing the overall system to continue operating despite failures in individual nodes and allowing individual nodes to upgrade hardware or software without a system-wide shutdown.[1] A SN system can scale simply by adding nodes, since no central resource bottlenecks the system.[2]In databases, a term for the part of a database on a single node is ashard. A SN system typically partitions its data among many nodes. A refinement is to replicate commonly used but infrequently modified data across many nodes, allowing more requests to be resolved on a single node. Michael Stonebrakerat theUniversity of California, Berkeleyused the term in a 1986 database paper.[3]Teradatadelivered the first SN database system in 1983.[4]Tandem ComputersNonStopsystems, a shared-nothing implementation of hardware and software was released to market in 1976.[5][6]Tandem Computers later releasedNonStop SQL, a shared-nothing relational database, in 1984.[7] Shared-nothing is popular forweb development. Shared-nothing architectures are prevalent fordata warehousingapplications, although requests that require data from multiple nodes can dramatically reduce throughput.[8]
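The partitioning idea mentioned above can be sketched in a few lines. In the following example (plain Python, with hypothetical node names), each key is hashed to exactly one node, so every update is satisfied by a single node and no memory or storage is shared between nodes.

```python
# A minimal sketch (illustrative only) of shared-nothing request routing:
# each key is owned by exactly one node (its shard), so an update touches a
# single node and nodes share no memory or storage.
import hashlib

NODES = ["node-0", "node-1", "node-2"]  # hypothetical cluster members

def owner(key: str) -> str:
    """Deterministically map a key to the single node responsible for it."""
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

# Each node keeps its own private store; nothing is shared between them.
stores = {node: {} for node in NODES}

def put(key: str, value: str) -> None:
    stores[owner(key)][key] = value   # the update is satisfied by one node only

def get(key: str) -> str | None:
    return stores[owner(key)].get(key)

put("user:42", "alice")
print(owner("user:42"), get("user:42"))
```

A request that spans keys owned by different nodes must contact several of them, which is the cross-node throughput cost noted above for data-warehousing workloads; replicating commonly read but rarely modified data is one way to keep more requests on a single node.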
https://en.wikipedia.org/wiki/Shared_nothing_architecture
Web GIS, also known asWeb-based GIS, areGeographic Information Systems(GIS) that employ theWorld Wide Web(the Web) to facilitate the storage, visualization, analysis, and distribution ofspatial informationover theInternet.[1][2][3][4][5][6]Web GIS involves using the Web to facilitate GIS tasks traditionally done on a desktop computer, as well as enabling the sharing of maps and spatial data. Web GIS is a subset ofInternet GIS, which is itself a subset ofdistributed GIS.[5][6][7][8][9][10]The most common application of Web GIS isWeb mapping, so much so that the two terms are often used interchangeably in much the same way as betweendigital mappingand GIS. However, Web GIS and web mapping are distinct concepts, with web mapping not necessarily requiring a Web GIS.[5] The use of the Web has dramatically increased the effectiveness of both accessing and distributing spatial data, two of the most significant challenges of desktop GIS.[1][11][12]Many functions, such as interactivity, and dynamic scaling, are made widely available to end users by web services.[13]The scale of the Web can sometimes make finding quality and reliable data a challenge for GIS professionals and end users, with a significant amount of low-quality, poorly organized, or poorly sourced material available for public consumption.[12][13]This can make finding spatial data a time consuming activity for GIS users.[12] The history of Web GIS is very closely tied to the history of geographic information systems,Digital mapping, and theWorld Wide Webor the Web. The Web was first created in 1990, and the first major web mapping program capable of distributed map creation appeared shortly after in 1993.[8][11][14]This software, named PARC Map Viewer, was unique in that it facilitated dynamic user map generation, rather than static images.[14][15]This software also allowed users to employ GIS without having it locally installed on their machine.[1][14]The US federal government made the TIGER Mapping Service available to the public in 1995, which facilitated desktop and Web GIS by hosting US boundary data.[1][16]In 1996,MapQuestbecame available to the public, facilitating navigation and trip planning, which quickly became a major utility on the early Web.[1][13] In 1997,Esribegan to focus on their desktop GIS software, which in 2000 becameArcGIS.[17]This led to Esri dominating the GIS industry for the next several years.[11]In 2000 Esri launched the Geography Network, which offered some web GIS functions. 
In 2014, ArcGIS Online replaced this, offering significant Web GIS functions including hosting, manipulating, and visualizing data in dynamic applications.[1][2][11] Web GIS has numerous applications and functions and manages most distributed spatial information.[18]Diverse industries and disciplines, including mathematics, history, business, and education, can all leverage Web GIS to integrate geographic approaches to data.[18] The United States Census Bureau extensively uses Web GIS to distribute its boundary data, such as TIGER files, and demographics to the public.[1][16]The "2020 Census Demographic Data Map Viewer" runs on an ESRI Web Map Application and provides demographic information, such as population, race, and housing data, at the state, county, and census tract levels.[19][20] Literature has identified educational benefits and applications of Web GIS at the elementary, primary, and university levels of education.[18][21]Using story maps and dashboards allows for new ways of displaying spatial data and facilitates student interaction.[18]As Web GIS tools are often user-friendly, teachers can create their own visualizations for the classroom, or even have students make their own to teach geographic concepts.[21] Web GIS has been used extensively in public health to communicate health data to the public and policymakers.[22]During the COVID-19 pandemic, dashboard Web GIS apps were popularized as a template for displaying health data byJohns Hopkins University, whose dashboard was updated until March 10, 2023.[22][23]In the United States, all 50 state governments, the CDC, and others ultimately made use of these tools.[24]These dashboards displayed a variety of information but generally included a choropleth map showing COVID-19 case data.[24] Web GIS has numerous functions, which can be divided into categories of Geospatial web services, includingweb feature services, web processing services, andweb mapping services.[3]Geospatial web services are distinct software packages available on the World Wide Web that can be employed to perform a function with spatial data.[3] Web feature services allow users to access, edit, and make use of hosted geospatial feature datasets.[3] Web processing services allow users to perform GIS calculations on spatial data.[3]Web processing services standardize inputs and outputs for spatial data within an internet GIS and may provide standardized algorithms forspatial statistics. Web mapping involves using distributed tools to create and host both static and dynamic maps.[8][3][1][2]It differs from desktopdigital mappingin that the data, software, or both might not be stored locally and are often distributed across many computers. Web mapping allows for the rapid distribution of spatial visualizations without the need for printing.[25]It also facilitates rapid updating to reflect new datasets and allows for interactive datasets that would be impossible in print media. Web mapping was employed extensively during theCOVID-19pandemic to visualize the datasets in close to real time.[26][27][28] In terms of interoperability, the use of communication standards in Distributed GIS is particularly important. General standards forGeospatialData have been developed by theOpen Geospatial Consortium(OGC). For the exchange of Geospatial Data over the web, the most important OGC standards areWeb Map Service(WMS) andWeb Feature Service(WFS). Using OGC-compliantgatewaysallows for building very flexible Distributed GI Systems.
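As a concrete illustration of these standards, a WMS client retrieves a rendered map image with a GetMap request. The sketch below shows the general shape of such an HTTP request; the host and layer names are hypothetical placeholders, not references to any real service:

http://example.org/wms?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=roads&STYLES=&CRS=EPSG:4326&BBOX=40.0,-75.0,41.0,-74.0&WIDTH=800&HEIGHT=600&FORMAT=image/png

A Web Feature Service request has a similar structure but uses REQUEST=GetFeature and returns the underlying feature data (typically encoded in GML) rather than a rendered image.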
Unlike monolithic GI Systems, OGC compliant systems are naturallyweb-basedand do not have strict definitions ofserversand clients. For instance, if a user (client) accesses a server, that server itself can act as a client of a number of further servers in order to retrieve data requested by theuser. This concept allows for data retrieval from any number of different sources, provided consistent data standards are used. It also allows data transfer with systems that are not themselves capable of GIS functionality. A key function ofOGC standardsis the integration of existing systems, thus geo-enabling the web.Web servicesproviding different functionality can be used simultaneously to combine data from different sources (mash-ups). Thus, different services on distributed servers can be combined for ‘service-chaining’ in order to add additional value to existing services. Provided that OGC standards are widely used by different web services, sharing distributed data across multiple organizations becomes possible. Some important languages used in OGC-compliant systems are described below.XMLstands for Extensible Markup Language and is widely used for representing and interpreting data exchanged between computers. The development of a web-based GI system therefore requires several XML encodings that can effectively describe two-dimensional graphics such as maps (SVG) and, at the same time, store and transfer simple features (GML). Because GML and SVG are both XML encodings, it is very straightforward to convert between the two using an Extensible Stylesheet Language Transformation (XSLT). This gives an application a means of rendering GML and is, in fact, the primary way that rendering has been accomplished among existing applications.[30]In terms of GIS, XML can also enable innovative web services: it allows geographic information to be easily translated into graphics, and Scalable Vector Graphics (SVG) can produce high-quality dynamic output using data retrieved from spatial databases. Similarly, Google, one of the pioneers of web-based GIS, developed its own language, which also uses an XML structure.Keyhole Markup Language(KML) is a file format used to display geographic data in an earth browser, such as Google Earth, Google Maps, and Google Maps for mobile browsers. The Geospatial Semantic Web is a vision to include geospatial information at the core of theSemantic Webto facilitateinformation retrievalandinformation integration.[31]This vision requires the definition of geospatialontologies, semanticgazetteers, and shared technical vocabularies to describegeographic phenomena.[32]The Semantic Geospatial Web is part ofgeographic information science.[3] All maps are simplifications of reality and, therefore, can never be perfectly accurate.[33]These inaccuracies include distortions introduced during projection, simplifications, and human error.
While traditionally trained ethical cartographers try to minimize these errors and document the known sources of error, including where the data originated, Web GIS facilitates the creation of maps by non-traditionally trained cartographers and, more significantly, facilitates the rapid dissemination of their potentially erroneous maps.[16][13][34]While this democratization of GIS has many potential positives, including empowering traditionally disenfranchised groups of people, it also means that a wide audience can see bad maps.[25][28][33][35]Further, malicious actors can quickly spread intentionally misleading spatial information while hiding the source.[33]This has significant implications and contributes to theinfodemicsurrounding many topics, including the spread of potentially misleading information on theCOVID-19pandemic.[22][24]Even a map made by a skilled cartographer faces significant limitations on the Web compared with traditional distribution methods. Among other issues, computer monitors have a variety of color settings and sizes.[13][36]This renders ratio, representative fraction, and verbal scales useless, leaving only the scale bar. It also means a color choice selected by the cartographer might not be what the end-user experiences.[13][36]These issues are not unique to cartography, but they are difficult to solve. Due to the nature of the Web, using it for storage and computation is less secure than using local networks.[37][38][39]When working with sensitive data, Web GIS may expose an organization to a greater risk of having its data breached than if it used dedicated hardware and avirtual private network(VPN) to access that hardware remotely over the internet.[37][38][39]The convenience and relatively low cost of Web GIS, however, often prevent such measures from being implemented. As Web GIS is built on the web, it is subject to thelink rotphenomenon.[24]This phenomenon can lead to previously available data being lost due to changes to the URL, physical hardware failures, or the content being deleted by the publisher. If the hardware and information accessed within a Web GIS is lost, "a single disk failure could be like the burning of the library at Alexandria."[40]One study found that 23% of COVID-19 dashboards available on government sites in February 2021 were no longer available at their previous URLs by April 2023.[24]
https://en.wikipedia.org/wiki/Web_GIS
Lisp Flavored Erlang(LFE) is afunctional,concurrent,garbage collected, general-purposeprogramming languageandLispdialectbuilt on CoreErlangand the Erlang virtual machine (BEAM). LFE builds on Erlang to provide a Lisp syntax for writing distributed,fault-tolerant, softreal-time, non-stop applications. LFE also extends Erlang to supportmetaprogrammingwith Lisp macros and an improved developer experience with a feature-richread–eval–print loop(REPL).[1]LFE is actively supported on all recent releases of Erlang; the oldest version of Erlang supported is R14. Initial work on LFE began in 2007, when Robert Virding started creating a prototype of Lisp running on Erlang.[2]This work was focused primarily on parsing and exploring what an implementation might look like. No version control system was being used at the time, so tracking exact initial dates is somewhat problematic.[2] Virding announced the first release of LFE on theErlang Questionsmail list in March 2008.[3]This release of LFE was very limited: it did not handle recursiveletrecs,binarys,receive, ortry; it also did not support a Lisp shell.[4] Initial development of LFE was done with version R12B-0 of Erlang[5]on a Dell XPS laptop.[4] Robert Virding has stated that there were several reasons why he started the LFE programming language:[2] Like Lisp, LFE is anexpression-oriented language. Unlike non-homoiconicprogramming languages, Lisps make no or little syntactic distinction betweenexpressionsandstatements: all code and data are written as expressions. LFE brought homoiconicity to the Erlang VM. In LFE, the list data type is written with its elements separated by whitespace, and surrounded by parentheses. For example,(list 1 2 'foo)is a list whose elements are the integers1and2, and the atom foo. These values are implicitly typed: they are respectively two integers and a Lisp-specific data type called asymbolic atom, and need not be declared as such. As seen in the example above, LFE expressions are written as lists, usingprefix notation. The first element in the list is the name of aform, i.e., a function, operator, or macro. The remainder of the list are the arguments. The LFE-Erlang operators are used in the same way. The expression evaluates to 42. Unlike functions in Erlang and LFE, arithmetic operators in Lisp arevariadic(orn-ary), able to take any number of arguments. LFE haslambda, just like Common Lisp. It also, however, haslambda-matchto account for Erlang's pattern-matching abilities in anonymous function calls. This section does not represent a complete comparison between Erlang and LFE, but should give a taste. Erlang: LFE: Erlang: LFE: Or idiomatic functional style: Erlang: LFE: Erlang: LFE: or using a cons literal instead of the constructor form: Erlang: LFE: Erlang: LFE: or: Calls to Erlang functions take the form(<module>:<function> <arg1> ... <argn>): Using recursion to define theAckermann function: Composing functions: Message-passing with Erlang's light-weight "processes": Multiple simultaneous HTTP requests:
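The Erlang and LFE code snippets referenced above are not reproduced in this text. As a minimal illustrative sketch of how the two syntaxes compare (the module name demo and the function double are hypothetical, not taken from the original article), a function that doubles its argument can be written in Erlang as:

-module(demo).
-export([double/1]).

double(X) ->
    2 * X.

and in LFE as:

(defmodule demo
  (export (double 1)))

(defun double (x)
  (* 2 x))

Calling it then follows the (<module>:<function> <arg1> ... <argn>) form described above, for example (demo:double 21), which evaluates to 42.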
https://en.wikipedia.org/wiki/LFE_(programming_language)
Mixis abuild automationtool for working with applications written in theElixirprogramming language.[3][4]Mix was created in 2012 by Anthony Grimes, who took inspiration from Clojure's Leiningen. Soon after, Mix was merged into the Elixir programming language itself and to this day is one of the six applications that are part of the Elixir language. Mix provides functionality for creating, compiling, and testing Elixirsource codeand for managing dependencies and deploying Elixir applications.[5] Mix providestaskstocreate, clean,build,compile,run, andtestElixir applications. For example, Mix may be used to create a new Elixir project, such as a new hello_world application. Runningmix new hello_worldwill generate a skeleton for the new project (a sketch of the generated files appears below). Mix uses the information defined in a Mix Project to compile, build, and assemble the application. By convention, this information is typically managed in an Elixir script file named mix.exs. The file may include version information, dependencies, and other configuration information. As the Elixir build tool, Mix is used on applications that target the Erlang virtual machine (as opposed to theJava virtual machineor the .NETCommon Language Runtime).[6]Mix is used with web applications built on the Phoenix framework.[7] Thissoftware-engineering-related article is astub. You can help Wikipedia byexpanding it.
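As a hedged sketch (the exact files generated vary between Elixir releases), running mix new hello_world typically produces a project layout along these lines:

hello_world/
    .formatter.exs
    .gitignore
    README.md
    mix.exs
    lib/
        hello_world.ex
    test/
        hello_world_test.exs
        test_helper.exs

and a mix.exs project file similar to the following, which is the script Mix reads for version information, dependencies, and other configuration:

defmodule HelloWorld.MixProject do
  use Mix.Project

  # Project configuration read by tasks such as mix compile and mix test.
  def project do
    [
      app: :hello_world,
      version: "0.1.0",
      elixir: "~> 1.14",   # assumed Elixir version requirement
      start_permanent: Mix.env() == :prod,
      deps: deps()
    ]
  end

  # OTP application configuration used when the application is started.
  def application do
    [extra_applications: [:logger]]
  end

  # Dependencies declared here are fetched with mix deps.get.
  defp deps do
    []
  end
end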
https://en.wikipedia.org/wiki/Mix_(build_tool)
Phoenixis aweb development frameworkwritten in thefunctional programminglanguageElixir. Phoenix uses aserver-sidemodel–view–controller(MVC) pattern.[2]Based on the Plug library,[3]and ultimately theErlangHTTP server Cowboy,[4]it was developed to provide highly performant and scalableweb applications. In addition to the request/response functionality provided by the underlying Cowboy server,[5]Phoenix provides soft real-time communication to external clients throughWebSocketsorlong pollingusing its language-agnostic channels feature.[6][7] Two notable features of Phoenix are LiveView and HEEx. LiveView provides real-time user experiences with server-renderedHTMLoverHTTPand WebSocket.[8]HEEx is Phoenix's templating language, which provides HTML-aware compile-time checking.[9]
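As a brief hedged sketch of the LiveView and HEEx features described above (the module, event, and application names are illustrative, not taken from official Phoenix documentation), a minimal live counter might look like the following:

defmodule MyAppWeb.CounterLive do
  use Phoenix.LiveView

  # mount/3 sets the initial state held by this LiveView process on the server.
  def mount(_params, _session, socket) do
    {:ok, assign(socket, count: 0)}
  end

  # handle_event/3 reacts to events pushed from the browser over the WebSocket.
  def handle_event("increment", _params, socket) do
    {:noreply, update(socket, :count, &(&1 + 1))}
  end

  # render/1 uses the HEEx sigil (~H), giving HTML-aware compile-time checking.
  def render(assigns) do
    ~H"""
    <button phx-click="increment">Clicked <%= @count %> times</button>
    """
  end
end

The server keeps the state, re-renders the template when it changes, and pushes the resulting diff to the browser over the WebSocket connection, which is how LiveView provides real-time user experiences with server-rendered HTML.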
https://en.wikipedia.org/wiki/Phoenix_(web_framework)
Gleamis ageneral-purpose,concurrent,functionalhigh-levelprogramming languagethat compiles toErlangorJavaScriptsource code.[2][7][8] Gleam is a statically-typed language,[9]which is different from the most popular languages that run on Erlang’s virtual machineBEAM,ErlangandElixir. Gleam has its own type-safe implementation of OTP, Erlang's actor framework.[10]Packages are provided using the Hex package manager, and an index for finding packages written for Gleam is available.[11] The first numbered version of Gleam was released on April 15, 2019.[12]Compiling to JavaScript was introduced with version v0.16.[13] In 2023 the Erlang Ecosystem Foundation funded the creation of a course for learning Gleam on the learning platformExercism.[14] Version v1.0.0 was released on March 4, 2024.[15] Gleam includes the following features, many common to other functional programming languages:[8] A"Hello, World!"example: Gleam supportstail calloptimization:[16] Gleam's toolchain is implemented in theRust programming language.[17]The toolchain is a single native binary executable which contains the compiler, build tool, package manager, source code formatter, andlanguage server. AWebAssemblybinary containing the Gleam compiler is also available, enabling Gleam code to be compiled within a web browser. Thisprogramming-language-related article is astub. You can help Wikipedia byexpanding it.
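The "Hello, World!" and tail-call examples referenced above are not reproduced in this text. The following is a hedged sketch of what such programs look like in Gleam, based on its publicly documented syntax. A "Hello, World!" program:

import gleam/io

pub fn main() {
  io.println("Hello, World!")
}

A tail-recursive factorial, in which the recursive call is the last expression and can therefore reuse the current stack frame:

pub fn factorial(n: Int) -> Int {
  factorial_loop(n, 1)
}

fn factorial_loop(n: Int, acc: Int) -> Int {
  case n {
    0 -> acc
    _ -> factorial_loop(n - 1, acc * n)
  }
}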
https://en.wikipedia.org/wiki/Gleam_(programming_language)
This is a list ofacademic conferencesincomputer science. Only conferences with separate articles are included; within each field, the conferences are listed alphabetically by their short names. Conferences accepting a broad range of topics fromtheoretical computer science, includingalgorithms,data structures,computability,computational complexity,automata theoryandformal languages: Conferences whose topic isalgorithmsanddata structuresconsidered broadly, but that do not include other areas of theoretical computer science such as computational complexity theory: Conferences oncomputational geometry,graph drawing, and other application areas of geometric computing: Conferences onprogramming languages,programming language theoryandcompilers: Conferences onsoftware engineering: Conferences onformal methodsin software engineering, includingformal specification,formal verification, andstatic code analysis: Conferences onconcurrent,distributed, andparallel computing,fault-tolerant systems, and dependable systems: Conferences onhigh-performance computing,cluster computing, andgrid computing: Conferences onoperating systems,storage systemsandmiddleware: Conferences oncomputer architecture: Conferences oncomputer-aided designandelectronic design automation: Conferences oncomputer networking: Wireless networksandmobile computing, includingubiquitousandpervasive computing,wireless ad hoc networksandwireless sensor networks: Conferences oncomputer securityandprivacy: Cryptographyconferences: Conferences ondatabases,information systems,information retrieval,data miningand theWorld Wide Web: Conferences onartificial intelligenceandmachine learning: Conferences onEvolutionary computation. Conferences onautomated reasoning: Conferences oncomputer vision(including alsoimage analysis) andpattern recognition: Conferences oncomputational linguisticsandnatural language processing: Conferences oncomputer graphics,geometry processing,image processing, andmultimedia: Conferences onhuman–computer interactionanduser interfaces: Conferences onbioinformaticsandcomputational biology:
https://en.wikipedia.org/wiki/List_of_computer_science_conferences
This is a list ofacademic conferencesincomputer science, ordered by their acronyms or abbreviations.
https://en.wikipedia.org/wiki/List_of_computer_science_conference_acronyms
Incomputing,POSIX Threads, commonly known aspthreads, is anexecution modelthat exists independently from aprogramming language, as well as aparallel executionmodel. It allows aprogramto control multiple different flows of work that overlap in time. Each flow of work is referred to as athread, and creation and control over these flows is achieved by making calls to the POSIX ThreadsAPI. POSIX Threads is an API defined by theInstitute of Electrical and Electronics Engineers(IEEE) standardPOSIX.1c, Threads extensions (IEEE Std 1003.1c-1995). Implementations of the API are available on manyUnix-likePOSIX-conformantoperating systemssuch asFreeBSD,NetBSD,OpenBSD,Linux,macOS,Android,[1]Solaris,Redox, andAUTOSARAdaptive, typically bundled as a librarylibpthread.DR-DOSandMicrosoft Windowsimplementations also exist: within theSFU/SUAsubsystem which provides a native implementation of a number of POSIX APIs, and also withinthird-partypackages such aspthreads-w32,[2]which implements pthreads on top of existingWindows API. pthreads defines a set ofCprogramming languagetypes,functionsand constants. It is implemented with apthread.hheader and a threadlibrary. There are around 100 threads procedures, all prefixedpthread_and they can be categorized into five groups: The POSIXsemaphoreAPI works with POSIX threads but is not part of the threads standard, having been defined in thePOSIX.1b, Real-time extensions (IEEE Std 1003.1b-1993)standard. Consequently, the semaphore procedures are prefixed bysem_instead ofpthread_. An example illustrating the use of pthreads in C: This program creates five threads, each executing the functionperform_workthat prints the unique number of this thread to standard output. If a programmer wanted the threads to communicate with each other, this would require defining a variable outside of the scope of any of the functions, making it aglobal variable. This program can be compiled using thegcccompiler with the following command: Here is one of the many possible outputs from running this program. Windows does not support the pthreads standard natively, therefore the Pthreads4w project seeks to provide a portable and open-sourcewrapperimplementation. It can also be used to portUnixsoftware (which uses pthreads) with little or no modification to the Windows platform.[4]Pthreads4w version 3.0.0[5]or later, released under the Apache Public License v2.0, is compatible with 64-bit or 32-bit Windows systems. Version 2.11.0,[6]released under the LGPLv3 license, is also 64-bit or 32-bit compatible. TheMingw-w64project also contains a wrapper implementation of 'pthreads,winpthreads, which tries to use more native system calls than the Pthreads4w project.[7] Interixenvironment subsystem available in theWindows Services for UNIX/Subsystem for UNIX-based Applicationspackage provides a native port of the pthreads API, i.e. not mapped on Win32 API but built directly on the operating systemsyscallinterface.[8]
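The C example, compile command, and sample output referred to above are not reproduced in this text. The following is a hedged sketch consistent with the description (five threads, each running perform_work and printing its index); the source file name used in the compile command is an assumption:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_THREADS 5

/* Each thread prints the index it was handed and then exits. */
static void *perform_work(void *argument)
{
    int index = *((int *) argument);
    printf("Thread %d: working\n", index);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    int thread_args[NUM_THREADS];

    /* Create the threads, passing each one its own index. */
    for (int i = 0; i < NUM_THREADS; i++) {
        thread_args[i] = i;
        if (pthread_create(&threads[i], NULL, perform_work, &thread_args[i]) != 0) {
            perror("pthread_create");
            exit(EXIT_FAILURE);
        }
    }

    /* Wait for every thread to finish before the process exits. */
    for (int i = 0; i < NUM_THREADS; i++) {
        pthread_join(threads[i], NULL);
    }

    printf("All threads completed\n");
    return 0;
}

Assuming the file is saved as pthreads_demo.c, it can be compiled with a command such as gcc pthreads_demo.c -o pthreads_demo -pthread. Because the threads are scheduled independently, the order of the per-thread lines in the output varies from run to run.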
https://en.wikipedia.org/wiki/POSIX_Threads