File attributes are a type of metadata that describe and may modify how files and/or directories in a filesystem behave. Typical file attributes may, for example, indicate or specify whether a file is visible, modifiable, compressed, or encrypted. The availability of most file attributes depends on support by the underlying filesystem (such as FAT, NTFS, ext4), where attribute data must be stored along with other control structures. Each attribute can have one of two states: set and cleared. Attributes are considered distinct from other metadata, such as dates and times, filename extensions or file system permissions. In addition to files, folders, volumes and other file system objects may have attributes.
Traditionally, in DOS and Microsoft Windows, files and folders accepted four attributes: Read-only, Hidden, System, and Archive.[1][2][3]
As new versions of Windows were released, Microsoft added to the inventory of attributes available on the NTFS file system.[7][8]
Windows Explorer displays further attributes in its "Attributes" column.[7]
In DOS, OS/2 and Windows, the attrib command in cmd.exe and command.com can be used to change and display the four traditional file attributes.[3][9] File Explorer in Windows can show the seven mentioned attributes but cannot set or clear the System attribute.[5] Windows PowerShell, which has become a component of Windows 7 and later, features two commands that can read and write attributes: Get-ItemProperty and Set-ItemProperty.[10] To change an attribute on a file on Windows NT, the user must have the appropriate file system permissions known as Write Attributes and Write Extended Attributes.[11]
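As a rough illustration of how these attribute bits are exposed to programs, the sketch below reads and toggles the Read-only bit using the Win32 GetFileAttributesW and SetFileAttributesW functions through Python's ctypes module. The flag values are the documented FILE_ATTRIBUTE_* constants; the path in the usage comment is hypothetical, and error handling is kept minimal.

```python
import ctypes

# Win32 attribute bit flags for the four traditional attributes
FILE_ATTRIBUTE_READONLY = 0x0001
FILE_ATTRIBUTE_HIDDEN   = 0x0002
FILE_ATTRIBUTE_SYSTEM   = 0x0004
FILE_ATTRIBUTE_ARCHIVE  = 0x0020
INVALID_FILE_ATTRIBUTES = 0xFFFFFFFF

kernel32 = ctypes.windll.kernel32            # Windows only
kernel32.GetFileAttributesW.restype = ctypes.c_uint32

def get_attributes(path):
    """Return the attribute bitmask of a file, as attrib or Explorer would report it."""
    attrs = kernel32.GetFileAttributesW(path)
    if attrs == INVALID_FILE_ATTRIBUTES:
        raise ctypes.WinError()
    return attrs

def set_read_only(path, enable=True):
    """Set or clear the Read-only attribute while leaving the other bits untouched."""
    attrs = get_attributes(path)
    if enable:
        attrs |= FILE_ATTRIBUTE_READONLY
    else:
        attrs &= ~FILE_ATTRIBUTE_READONLY
    if not kernel32.SetFileAttributesW(path, attrs):
        raise ctypes.WinError()

# Hypothetical usage:
# set_read_only(r"C:\temp\example.txt")
# print(hex(get_attributes(r"C:\temp\example.txt")))
```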
In Unix and Unix-like systems, including POSIX-conforming systems, each file has a 'mode' containing nine bit flags controlling read, write and execute permission for each of the file's owner, group and all other users (see File-system permissions § Traditional Unix permissions for more details), plus the setuid and setgid bit flags and a 'sticky' bit flag.
The mode also specifies the file type (regular file, directory, or some other special kind).
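As a minimal sketch of how a program can inspect the mode, the example below uses Python's standard os and stat modules to report the file type, the nine permission bits, and the setuid, setgid and sticky flags; the paths in the usage comments are hypothetical.

```python
import os
import stat

def describe_mode(path):
    mode = os.stat(path).st_mode

    # File type encoded in the mode
    if stat.S_ISREG(mode):
        kind = "regular file"
    elif stat.S_ISDIR(mode):
        kind = "directory"
    else:
        kind = "other special kind"

    # Nine permission bits: read/write/execute for owner, group and others
    perms = stat.filemode(mode)          # e.g. '-rwxr-xr--'

    # setuid, setgid and 'sticky' bit flags
    special = {
        "setuid": bool(mode & stat.S_ISUID),
        "setgid": bool(mode & stat.S_ISGID),
        "sticky": bool(mode & stat.S_ISVTX),
    }
    return kind, perms, special

# Hypothetical usage:
# print(describe_mode("/usr/bin/passwd"))   # a setuid binary on many systems
# print(describe_mode("/tmp"))              # a sticky directory on many systems
```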
In 4.4BSD and 4.4BSD-Lite, files and directories (folders) accepted four attributes that could be set by the owner of the file or the superuser (the "User" attributes) and two attributes that could only be set by the superuser (the "System" attributes).[12]
FreeBSD added some additional attributes,[13] also supported by DragonFly BSD.[14]
FreeBSD[13] and DragonFly BSD[14] also each support further attributes of their own.
NetBSD added another attribute,[15] also supported by OpenBSD.[16]
macOS added three attributes.
In these systems, the chflags and ls commands can be used to change and display file attributes. To change a "user" attribute on a file in 4.4BSD-derived operating systems, the user must be the owner of the file or the superuser; to change a "system" attribute, the user must be the superuser.
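A small sketch of the same operation from a program, using Python's os.chflags and the UF_*/SF_* constants from the stat module (these calls exist only on BSD-derived systems such as FreeBSD and macOS; the path is hypothetical):

```python
import os
import stat

path = "/tmp/example"              # hypothetical path

# Current flag bits (st_flags is only populated on BSD-derived systems)
flags = os.stat(path).st_flags

# Set two "user" flags, which the file's owner may change:
# append-only writes and exclusion from dump(8) backups.
os.chflags(path, flags | stat.UF_APPEND | stat.UF_NODUMP)

# Setting or clearing a "system" flag such as SF_IMMUTABLE would additionally
# require superuser privileges (and may be refused at raised securelevels).
# os.chflags(path, flags | stat.SF_IMMUTABLE)
```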
The Linux operating system can support a wide range of file attributes that can be listed by the lsattr command and modified, where possible, by the chattr command.
Programs can examine and alter attributes using ioctl operations.[18]
Many Linux file systems support only a limited set of attributes, and none of them support every attribute that chattr can change. File systems that support at least some attributes include ext4, XFS and btrfs.
For example, the append-only attribute ('a') allows writing to a file only in append mode, while the immutable attribute ('i') prevents any change to the file's contents or metadata: the file or directory cannot be written to, deleted, renamed, or hard-linked.
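As an illustration of the ioctl interface mentioned above, the sketch below reads the inode flag bits the way lsattr does, via the FS_IOC_GETFLAGS request. The request numbers shown are the values for x86-64 Linux (they are architecture-dependent), the flag constants come from <linux/fs.h>, and the example path is hypothetical.

```python
import fcntl
import struct

# ioctl request numbers for the inode-flag interface (x86-64 Linux values;
# other architectures may differ).
FS_IOC_GETFLAGS = 0x80086601
FS_IOC_SETFLAGS = 0x40086602

# A few of the flag bits defined in <linux/fs.h>
FS_IMMUTABLE_FL = 0x00000010   # shown as 'i' by lsattr, set with chattr +i
FS_APPEND_FL    = 0x00000020   # shown as 'a' by lsattr, set with chattr +a

def get_inode_flags(path):
    """Read the per-inode attribute bits, as lsattr would."""
    with open(path, "rb") as f:
        buf = fcntl.ioctl(f, FS_IOC_GETFLAGS, struct.pack("l", 0))
    return struct.unpack("l", buf)[0]

# Hypothetical usage (setting flags via FS_IOC_SETFLAGS generally needs root):
# flags = get_inode_flags("/etc/hostname")
# print("append-only:", bool(flags & FS_APPEND_FL))
# print("immutable:  ", bool(flags & FS_IMMUTABLE_FL))
```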
Support for "system attributes" (in which the operating system defines the meaning, unlike generalextended file attributes) was added to OpenSolaris in 2007 in support of the CIFS server.[19]It has been carried forward from there into both theOracle Solaris11 releases and the open sourceillumosproject.
In this implementation, awide range of attributescan be set via thechmodcommand[20][21]and listed by thelscommand.[22][23]Programs can examine and alter attributes using thegetattratandsetattratfunctions.[24][25]
Currently theZFSfile system supports all defined attributes, and starting in Oracle Solaris 11.2, thetmpfsfile system supports a subset of attributes.[26]
For example, an append-only attribute allows writing to a file only in append mode, and an immutable attribute prevents any change to the file's contents or metadata (except access time): the file or directory cannot be written to, deleted, or renamed.
Source: https://en.wikipedia.org/wiki/File_attribute
Apple File System (APFS) is a proprietary file system developed and deployed by Apple Inc. for macOS Sierra (10.12.4)[6] and later, iOS 10.3, tvOS 10.2,[7] watchOS 3.2,[8] and all versions of iPadOS.[9][10] It aims to fix core problems of HFS+ (also called Mac OS Extended), APFS's predecessor, which had been in use since 1998. APFS is optimized for solid-state drive storage and supports encryption, snapshots, and improved handling of metadata integrity.[11][12]
Apple File System was announced at Apple's developers' conference (WWDC) in June 2016 as a replacement for HFS+, which had been in use since 1998.[11][12] APFS was released for 64-bit iOS devices on March 27, 2017, with the release of iOS 10.3, and for macOS devices on September 25, 2017, with the release of macOS 10.13.[13][8]
Apple released a partial specification for APFS in September 2018 which supported read-only access to Apple File Systems on unencrypted, non-Fusion storage devices. The specification for software encryption was documented later.[14]
The file system can be used on devices with relatively small or large amounts of storage. It uses 64-bit inode numbers,[2] and allows for more secure storage by using a technology called Data Protection. The APFS code, like the HFS+ code, uses the TRIM command for better space management and performance. It may increase read-write speeds on iOS and macOS,[8] as well as space on iOS devices, due to the way APFS calculates available data.[15]
APFS uses the GPT partition scheme. Within the GPT scheme are one or more APFS containers (the partition type GUID is 7C3457EF-0000-11AA-AA11-00306543ECAC). Within each container there are one or more APFS volumes, all of which share the allocated space of the container, and each volume may have APFS volume roles. macOS Catalina (macOS 10.15) introduced APFS volume groups, which are groups of volumes that Finder displays as one volume. APFS firmlinks lie between hard links and soft links and link between volumes.
In macOS Catalina the System volume role (usually named "Macintosh HD") became read-only, and in macOS Big Sur (macOS 11) it became a signed system volume (SSV) and only volume snapshots are mounted. The Data volume role (usually named "Macintosh HD - Data") is used as an overlay or shadow of the System volume, and both the System and Data volumes are part of the same volume group and shown as one in Finder.
Clones allow the operating system to make efficient file copies on the same volume without occupying additional storage space. Changes to a cloned file are saved as delta extents, reducing the storage space required for document revisions and copies.[10] There is, however, no interface to mark two copies of the same file as clones of each other, or for other types of data deduplication.
The feature is automatically available when files are copied using the Finder application, which is macOS's default file manager, but not when using the cp command.[16] To do that on the command line, the cp utility on macOS has a -c parameter that allows it to use the clonefile system call.[17]
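For programs, the underlying call is clonefile(2). The sketch below invokes it directly through ctypes on macOS; the paths are hypothetical and error handling is minimal.

```python
import ctypes
import ctypes.util
import os

# clonefile(2) is the macOS system call behind APFS clones (macOS only).
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def clone(src, dst):
    """Create an APFS clone of src at dst (fails if dst already exists)."""
    if libc.clonefile(src.encode(), dst.encode(), 0) != 0:   # 0 = no flags
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err), src)

# Hypothetical usage (both paths must be on the same APFS volume):
# clone("/Users/me/original.bin", "/Users/me/copy.bin")
```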
APFS volumes support snapshots for creating a point-in-time, read-only instance of the file system.[10]
Apple File System natively supports full disk encryption,[2] and file encryption with several options: no encryption, single-key encryption, and multi-key encryption with per-file keys for file data and a separate key for sensitive metadata.
APFS supports 64-bit inode numbers, supporting over 9 quintillion files (2^63) on a single volume.[2][5]
Apple File System uses checksums to ensure data integrity for metadata but not for the actual user data, relying instead on error-correcting code (ECC) mechanisms in the storage hardware.[18]
Apple File System is designed to avoid metadata corruption caused by system crashes. Instead of overwriting existing metadata records in place, it writes entirely new records, points to the new ones and then releases the old ones, an approach known as redirect-on-write. This avoids corrupted records containing partial old and partial new data caused by a crash that occurs during an update. It also avoids having to write the change twice, as happens with an HFS+ journaled file system, where changes are written first to the journal and then to the catalog file.[18]
APFS supports transparent compression on individual files using Deflate (Zlib), LZVN (libFastCompression), and LZFSE. All three are Lempel-Ziv-type algorithms. This feature is inherited from HFS+, and is implemented with the same AppleFSCompression / decmpfs system using resource forks or extended attributes. As with HFS+, the transparency is broken for tools that do not use decmpfs-wrapped routines.[19]
APFS adds the ability to have multiple logical drives (referred to as volumes) in the same container, where free space is available to all volumes in that container (block device).[20]
While APFS includes numerous improvements relative to its predecessor, HFS+, a number of limitations have been noted.
APFS does not provide checksums for user data.[21] It also does not take advantage of byte-addressable non-volatile random-access memory.[22][23]
Enumerating files, and any inode metadata in general, is much slower on APFS when it is located on a hard disk drive (HDD). This is because instead of storing metadata at a fixed location like HFS+ does, APFS stores it alongside the actual file data. On SSDs, this fragmentation of metadata is inconsequential due to their lack of moving parts, but on HDDs, it leads to substantial performance degradation as the drive's read/write heads must physically seek out scattered data fragments.[24]
Besides that, a key feature of APFS is "copy-on-write", which allows for rapid file duplication by creating references to the original data rather than copying it outright. This feature enables functionalities like snapshots and quick file copies. However, when files are modified after being copied, APFS creates new extents (data blocks) for the changes, leading to more fragmentation over time. This issue is exacerbated with applications like Time Machine, which creates multiple versions of files, further increasing fragmentation and slowing performance.[25] As a result, APFS is generally not recommended for use on HDDs, particularly for workloads involving frequent file modifications, copying, or snapshot usage.[26]
Unlike HFS+, APFS does not support hard links to directories.[3][27] Since the version of the Time Machine backup software included in Mac OS X 10.5 (Leopard) through macOS 10.15 (Catalina) relied on hard links to directories, APFS was initially not a supported option for its backup volumes.[28][27] This limitation was overcome starting in macOS 11 Big Sur, wherein APFS is now the default file system for new Time Machine backups (existing HFS+-formatted backup drives are also still supported).[29] macOS Big Sur's implementation of Time Machine in conjunction with APFS-formatted drives enables "faster, more compact, and more reliable backups" than were possible with HFS+-formatted backup drives.[30][31]
An experimental version of APFS, with some limitations, is provided in macOS Sierra 10.12.4. It is available through the command-line diskutil utility. Among these limitations, it does not perform Unicode normalization while HFS+ does,[37] leading to problems with languages other than English.[38] Drives formatted with Sierra's version of APFS may also not be compatible with later versions of macOS or APFS, and the Sierra version of APFS cannot be used with Time Machine, FileVault volumes, or Fusion Drives.[39]
Since macOS 10.13 High Sierra, all devices with flash storage are automatically converted to APFS.[40] As of macOS 10.14 Mojave, Fusion Drives and hard disk drives are also upgraded on installation.[41] The primary user interface to upgrade does not present an option to opt out of this conversion, and devices formatted with the High Sierra version of APFS will not be readable in previous versions of macOS.[40] Users can disable APFS conversion by using the installer's startosinstall utility on the command line and passing --converttoapfs NO.[42]
FileVault volumes are not converted to APFS as of macOS Big Sur 11.2.1. Instead, macOS formats external FileVault drives as CoreStorage Logical Volumes formatted with Mac OS Extended (Journaled). FileVault drives can be optionally encrypted.[citation needed]
iOS 10.3, tvOS 10.2, and watchOS 3.2 convert the existing HFSX file system to APFS on compatible devices.[13][8][43]
Despite the ubiquity of APFS volumes in today's Macs and the format's 2016 introduction, third-party repair utilities continue to have notable limitations in supporting APFS volumes, due to Apple's delayed release of complete documentation. According to Alsoft, the maker of DiskWarrior, Apple's 2018 release of partial APFS format documentation has delayed the creation of a version of DiskWarrior that can safely rebuild APFS disks.[44] Competing products, including MicroMat's TechTool and Prosoft's Drive Genius, are expected to increase APFS support as well.
Paragon Software Group has published a software development kit under the 4-Clause BSD License that supports read-only access to APFS drives.[45] An independent read-only open source implementation by Joachim Metz, libfsapfs, is released under the GNU Lesser General Public License v3. It has been packaged into the Debian, Fedora Linux, Rocky Linux, Red Hat Enterprise Linux and Ubuntu software repositories.[46][47][48] Both are command-line tools that do not expose a normal filesystem driver interface. There is a Filesystem in Userspace (FUSE) driver for Linux called apfs-fuse with read-only access.[49] An "APFS for Linux" project is working to integrate APFS support into the Linux kernel.[50]
A commercial product, Paragon's APFS for Windows, allows read and write support for APFS volumes in all versions of Windows from Windows 7 through Windows 11 and Windows Server 2008 R2 through Windows Server 2022, but it is unable to format or verify APFS volumes, and it cannot read APFS volumes which are hardware-encrypted against the Apple T2 security chip.[51]
Source: https://en.wikipedia.org/wiki/APFS
Bcachefs is a copy-on-write (COW) file system for Linux-based operating systems. Its primary developer, Kent Overstreet, first announced it in 2015, and it was added to the Linux kernel beginning with 6.7.[1][2] It is intended to compete with the modern features of ZFS or Btrfs, and the speed and performance of ext4 or XFS.
Bcachefs is a copy-on-write (COW) file system for Linux-based operating systems.[3] Features include caching,[4] full file-system encryption using the ChaCha20 and Poly1305 algorithms,[5] native compression[4] via LZ4, gzip[6] and Zstandard,[7] snapshots,[4] and CRC-32C and 64-bit checksumming.[3] It can span block devices, including in RAID configurations.[5]
Earlier versions of Bcachefs provided all the functionality of Bcache, a block-layer cache system for Linux, with which Bcachefs shares about 80% of its code.[8] As of December 2021, the block-layer cache functionality has been removed.[7]
On a data structure level, Bcachefs uses B-trees like many other modern file systems, but with an unusually large node size defaulting to 256 KiB. These nodes are internally log-structured, forming a hybrid data structure and reducing the need for rewriting nodes on update.[9] Snapshots are not implemented by cloning a COW tree, but by adding a version number to filesystem objects.[10] The COW feature and the bucket allocator enable a RAID implementation which is claimed to suffer from neither the write hole nor I/O fragmentation.[7]
Bcachefs describes itself as "working and stable, with a small community of users".[11] When discussing Linux 6.9-rc3 on April 7, 2024, Linus Torvalds touched on the stability of Bcachefs, saying "if you thought bcachefs was stable already, I have a bridge to sell you",[12] and in August 2024 that "nobody sane uses bcachefs and expects it to be stable".[13]
In August 2024, the Debian maintainer of bcachefs-tools, a package providing "userspace tools and docs", orphaned the package, questioning its long-term supportability.[14] The maintainer further commented in a blog post: "I'd advise that if you consider using bcachefs for any kind of production use in the near future, you first consider how supportable it is long-term, and whether there's really anyone at all that is succeeding in providing stable support for it."[15]
Primary development has been by Kent Overstreet, the developer of Bcache, which he describes as a "prototype" for the ideas that became Bcachefs. Overstreet intends Bcachefs to replace Bcache.[8] Overstreet has stated that development of Bcachefs began as Bcache's developers realized that its codebase had "been evolving ... into a full blown, general-purpose POSIX filesystem", and that "there was a really clean and elegant design" within it if they took it in that direction. Some time after Bcache was merged in 2013 into the mainline Linux kernel, Overstreet left his job at Google to work full-time on Bcachefs.[3]
After a few years' unfunded development, Overstreet announced Bcachefs in 2015, at which point he called the code "more or less feature complete" and called for testers and contributors. He intended it to be an advanced file system with modern features[16] like those of ZFS or Btrfs, with the speed and performance of file systems such as ext4 and XFS.[3] As of 2017, Overstreet was receiving financial support for the development of Bcachefs via Patreon.[5]
As of mid-2018, the on-disk format had settled.[8] Patches had been submitted for review to have Bcachefs included in the mainline Linux kernel, but had not yet been accepted.[4]
By mid-2019, the desired features of Bcachefs were completed and the associated patches were submitted to LKML for peer review.[17][18] In October 2023 Bcachefs was merged into the Linux 6.7 kernel,[19] which was released in January 2024.[2]
In November 2024, Kent Overstreet was restricted by Linux's Code of Conduct Committee from sending in contributions during the Linux 6.13 kernel development cycle due to "written abuse of another community member" and taking "insufficient action to restore the community's faith in having otherwise productive technical discussions without the fear of personal attacks".[20][21] Patches were later accepted without issue during the Linux 6.14 kernel development.[22]
Source: https://en.wikipedia.org/wiki/Bcachefs
HAMMER is a high-availability 64-bit file system developed by Matthew Dillon for DragonFly BSD using B+ trees. Its major features include infinite NFS-exportable snapshots, master–multislave operation, configurable history retention, fsck-less mounting, and checksums to deal with data corruption.[5] HAMMER also supports data block deduplication, meaning that identical data blocks will be stored only once on a file system.[6] A successor, HAMMER2, was announced in 2011 and became the default in DragonFly 5.2 (April 2018).[7]
The HAMMER file system provides configurable fine-grained and coarse-grained filesystem histories with online snapshot availability. Up to 65536 master (read–write) and slave (read-only) pseudo file systems (PFSs), with independent individual retention parameters and inode numbering, may be created for each file system; a PFS may be mirrored to multiple slaves, either locally or over a network connection, with near real-time performance. No file system checking is required on remount.[5][8][9][10]
HAMMER supports volumes of up to 1 EiB of storage capacity. The file system supports CRC checksumming of data and metadata, online layout correction and data deduplication, and dynamic inode allocation with an effectively unlimited number of inodes.[8][11][12]
As of May 2020, regular maintenance is required to keep the file system clean and regain space after file deletions. By default, a cron job performs the necessary actions on DragonFly BSD daily. HAMMER does not support multi-master configurations.[8][10]
HAMMER is optimized to reduce the number of physical I/O operations to cover the most likely path,[13] ensuring sequential access for optimal performance.
Several performance-related improvements were introduced in July 2011.[14]
HAMMER was developed specifically for DragonFly BSD to provide a feature-rich yet better designed analogue[according to whom?] of the then increasingly popular ZFS.
HAMMER was declared production-ready with DragonFly 2.2 in 2009;[9] in 2012, design-level work shifted onto HAMMER2, which was declared stable with DragonFly 5.2 in 2018.
As of 2019, HAMMER is now often referred to as HAMMER1 to avoid confusion with HAMMER2, although an official renaming has not happened. Both filesystems are independent of each other due to different on-disk formats,[15][16] and continue to receive separate updates and improvements independently.[17]
Source: https://en.wikipedia.org/wiki/HAMMER_(file_system)
Resilient File System (ReFS),[6] codenamed "Protogon",[7] is a Microsoft proprietary file system introduced with Windows Server 2012 with the intent of becoming the "next generation" file system after NTFS.
ReFS was designed to overcome problems that had become significant over the years since NTFS was conceived, which are related to how data storage requirements have changed. These requirements arose from two major changes in storage systems and usage – the size of storage in use (large or massive arrays of multi-terabyte drives now common), and the need for continual reliability. As a result, the file system needs to be self-repairing (to prevent disk checking from being impractically slow or disruptive), along with abstraction or virtualization between physical disks and logical volumes.
The key design advantages of ReFS include automatic integrity checking and data scrubbing, elimination of the need for running chkdsk, protection against data degradation, built-in handling of hard disk drive failure and redundancy, integration of RAID functionality, a switch to copy/allocate on write for data and metadata updates, handling of very long paths and filenames, and storage virtualization and pooling, including almost arbitrarily sized logical volumes (unrelated to the physical sizes of the used drives).
ReFS uses B+ trees for all on-disk structures, including all metadata and file data.[2][8] Metadata and file data are organized into tables similar to a relational database. The file size, number of files in a folder, total volume size, and number of folders in a volume are limited by 64-bit numbers; as a result, ReFS supports a maximum file size of 35 petabytes and a maximum volume size of 35 petabytes.[3]
ReFS employs an allocation-on-write update strategy for metadata,[2] which allocates new chunks for every update transaction and uses large IO batches. All ReFS metadata have 64-bit checksums which are stored independently. The file data can have an optional checksum in a separate "integrity stream", which uses a modified CRC-32C algorithm to check allocation units,[9] in which case the file update strategy also implements allocation-on-write for file data; this is controlled by a new "integrity" attribute applicable to both files and directories. If file data or metadata become corrupt, the file can be deleted without taking the whole volume offline for maintenance, and then be restored from backup. As a result of the built-in resiliency, administrators do not need to periodically run error-checking tools such as CHKDSK when using ReFS. In contrast, NTFS calculates checksums only for metadata; checking of sectors is done by the storage hardware (such as the sector CRC-32 command of SATA and NVMe).[10]
ReFS supports only a subset of NTFS features, and only supports Win32 APIs that are "widely adopted". It does not require new system APIs, and most file system filters continue to work with ReFS volumes.[2] ReFS supports many existing Windows and NTFS features such as BitLocker encryption, Access Control Lists, the USN Journal, change notifications,[11] symbolic links, junction points, mount points, reparse points, volume snapshots, file IDs, and oplocks. ReFS seamlessly integrates with Storage Spaces,[2] a storage virtualization layer that allows data mirroring and striping, as well as sharing storage pools between machines.[12] ReFS resiliency features enhance the mirroring feature provided by Storage Spaces and can detect whether any mirrored copies of files become corrupt using a data scrubbing process,[8] which periodically reads all mirror copies and verifies their checksums, then replaces bad copies with good ones.
Microsoft Windows and Windows Server include ReFSUtil, a command-line utility that can be used to diagnose heavily damaged ReFS volumes, identify remaining files, and copy those files to another volume.[13]
Some NTFS features are not implemented in ReFS. These include object IDs, 8.3 filenames, NTFS compression, Encrypting File System (EFS), transactional NTFS, extended attributes, and disk quotas.[7][2][14] Dynamic disks with mirrored or striped volumes are replaced with mirrored or striped storage pools provided by Storage Spaces; however, automated error-correction is only supported on mirrored spaces. Data deduplication was missing in early versions of ReFS.[2] It was implemented in v3.2, debuting in Windows Server v1709.[4]
Support for alternate data streams and hard links was initially not implemented in ReFS. In Windows 8.1 64-bit and Server 2012 R2, the file system reacquired support for alternate data streams, with lengths of up to 128K, and automatic correction of corruption when integrity streams are used on parity spaces.[15] ReFS had initially been unsuitable for Microsoft SQL Server instance allocation due to the absence of alternate data streams.[16] Hard links were introduced with preview versions of Windows Server 2022 but are not yet available in Windows 11.
ReFS was initially added to Windows Server 2012 only, with the aim of gradual migration to consumer systems in future versions; this was achieved as of Windows 8.1.[3] The initial versions didn't have some of the NTFS features, such as disk quotas, alternate data streams, and extended attributes. Some of these were implemented in later versions of ReFS.
In early versions (2012–2013), ReFS was similar to or slightly faster than NTFS in most tests,[17] but far slower when full integrity checking was enabled, a result attributed to the relative newness of ReFS.[18][self-published source][19][self-published source]
The ability to create ReFS volumes was removed in Windows 10's 2017 Fall Creators Update for all editions except Enterprise and Pro for Workstations.[5][why?]
Starting with Windows Server 2022 and Windows 11 build 22557, the boot environment natively supports ReFS, allowing the system to be installed and run, in a special way, on a volume formatted with ReFS v3. A volume formatted with ReFS v1 cannot be booted from.[20]
Starting with Windows 11 build 22621.2338, ReFS is re-introduced via a Dev Drive feature, allowing fixed storage drives and VHDs to be formatted as ReFS, with special file and Microsoft Defender policies added during use.
The cluster size of a ReFS volume is either 4 KB or 64 KB.[21]
At the Storage Developer Conference 2015, a Microsoft developer presented enhancements of ReFS expected to be released with Windows Server 2016 and included in Technical Preview 4, titled "ReFS v2".[22] It highlighted that ReFS now included capabilities for very high-speed moving, reordering, and cloning of blocks between files[23] (which can be done for all blocks of a file). This is particularly needed for virtualization, and is stated to allow fast provisioning, diff merging, and tiering. Other enhancements cover the redo log (for synchronous disk writes), parallelization, efficient tracking of uninitialized sparse data and files, and efficient 4K I/O.[22]
Windows Server 2022 (using ReFS version 3.7) supports file-level snapshots.[3]
Windows Insider Preview 22H2 and 23H2 (builds 226** and 25***) support ReFS volume compression using the LZ4 and zstd algorithms.[24]
ReFS has several versions, with various degrees of compatibility between operating system versions. Aside from development versions of the filesystem, later operating system versions can usually mount filesystems created with earlier OS versions (backwards compatibility). Some features may not be compatible with the feature set of the OS. The version, cluster size and other features of the filesystem can be queried with the command fsutil fsinfo refsinfo volumename.
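For example, that version information can be read from a script by simply invoking the same command; the sketch below does so with Python's subprocess module (the drive letter is hypothetical, and the command typically needs an elevated prompt on Windows):

```python
import subprocess

# Query ReFS version, cluster size and related details of a volume.
result = subprocess.run(
    ["fsutil", "fsinfo", "refsinfo", "D:"],   # hypothetical drive letter
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```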
A number of issues have been identified or suggested for ReFS when running on Storage Spaces.
Like ReFS, ZFS, Bcachefs, and Btrfs are designed to integrate data protection, snapshots, and background error correction.
In 2012, Phoronix wrote an analysis[27] of ReFS vs Btrfs. At the time, their features were similar, with both supporting checksums, RAID-like use of multiple disks, and error correction. However, ReFS lacked copy-on-write snapshots and compression, both found in Btrfs and ZFS.
In 2014, BetaNews wrote a review of ReFS and assessed its readiness for production use.[28] The review concluded that ReFS had at least some advantages over two file systems then available for file servers running Unix-like operating systems, ZFS and ReiserFS.
ZFS (used in Solaris, illumos, FreeBSD and others) was widely criticized for its comparatively extreme memory requirements of many gigabytes of RAM for online deduplication. However, online deduplication was not enabled by default in ZFS and was not supported at the time by ReFS (it has since been added), so not enabling ZFS online deduplication yielded a more even comparison between the two file systems, as ZFS then has a memory requirement of only a few hundred megabytes.[29]
As of November 2019, Microsoft has not published any specifications for ReFS, nor have any working open-source drivers been made. A third-party open-source project to document ReFS is on GitHub.[30][31]
Paragon Software Group provides a closed-source driver for Windows and Linux.
Source: https://en.wikipedia.org/wiki/ReFS
Windows Server 2012, codenamed "Windows Server 8", is the ninth major version of the Windows NT operating system produced by Microsoft to be released under the Windows Server brand name. It is the server version of Windows based on Windows 8 and succeeds the Windows 7-based Windows Server 2008 R2, released nearly three years earlier. Two pre-release versions, a developer preview and a beta version, were released during development. The software was officially launched on September 4, 2012, which was the month before the release of Windows 8.[4] It was succeeded by Windows Server 2012 R2. Mainstream support ended on October 9, 2018, and extended support ended on October 10, 2023. It is eligible for the paid Extended Security Updates (ESU) program, which offers continued security updates until October 13, 2026.
It removed support for Itanium and processors without PAE, SSE2 and NX.[5] Four editions were released. Various features were added or improved over Windows Server 2008 R2 (with many placing an emphasis on cloud computing), such as an updated version of Hyper-V, an IP address management role, a new version of Windows Task Manager, and ReFS, a new file system. Windows Server 2012 received generally good reviews in spite of having included the same controversial Metro-based user interface seen in Windows 8, which includes the Charms Bar for quick access to settings in the desktop environment.
It is the final version of Windows Server that supports processors without CMPXCHG16b, PrefetchW, LAHF and SAHF.
As of April 2017, 35% of servers were running Windows Server 2012, surpassing the usage share of Windows Server 2008.[6]
Windows Server 2012, codenamed "Windows Server 8",[7] is the fifth release of the Windows Server family of operating systems developed concurrently with Windows 8.[8][9]
Microsoft introduced Windows Server 2012 and its developer preview at the BUILD 2011 conference on September 9, 2011.[10] However, unlike Windows 8, the developer preview of Windows Server 2012 was only made available to MSDN subscribers.[11] It included a graphical user interface (GUI) based on the Metro design language and a new Server Manager, a graphical application used for server management.[12] On February 16, 2012, Microsoft released an update for the developer preview build that extended its expiry date from April 8, 2012 to January 15, 2013.[13]
Before Windows Server 2012 was finalized, two test builds were made public. A public beta version of Windows Server 2012 was released along with the Windows 8 Consumer Preview on February 29, 2012.[8] On April 17, 2012, Microsoft revealed "Windows Server 2012" as the final name for the operating system.[7] The release candidate of Windows Server 2012 was released on May 31, 2012, along with the Windows 8 Release Preview.[9]
The product was released to manufacturing on August 1, 2012 (along with Windows 8) and became generally available on September 4 of that year.[4] However, not all editions of Windows Server 2012 were released at the same time. Windows Server 2012 Essentials was released to manufacturing on October 9, 2012[14] and was made generally available on November 1, 2012.[15] As of September 23, 2012, all students subscribed to the DreamSpark program can download Windows Server 2012 Standard or Datacenter free of charge.[16]
Windows Server 2012 is based on Windows 8 and is the second version of Windows Server which runs only on 64-bit CPUs.[17] Coupled with fundamental changes in the structure of the client backups and the shared folders, there is no clear method for migrating from the previous version to Windows Server 2012.
Unlike its predecessor, Windows Server 2012 users can switch between "Server Core" and "Server with a GUI" installation options without a full re-installation. Server Core – an option with a command-line interface only – is now the recommended configuration. There is also a third installation option that allows some GUI elements such as MMC and Server Manager to run, but without the normal desktop, shell or default programs like File Explorer.[12]
Server Manager has been redesigned with an emphasis on easing management of multiple servers.[18] The operating system, like Windows 8, uses the Metro-based user interface unless installed in Server Core mode.[19] The Windows Store is available by installing the desktop experience feature from Server Manager, but is not installed by default.[20] Windows PowerShell in this version has over 2300 cmdlets, compared to around 200 in Windows Server 2008 R2.[21]
Windows Server 2012 includes a new version of Windows Task Manager together with the old version.[22] In the new version the tabs are hidden by default, showing applications only. In the new Processes tab, the processes are displayed in varying shades of yellow, with darker shades representing heavier resource use.[23] Information found in the older versions is now moved to the new Details tab. The Performance tab shows "CPU", "Memory", "Disk", "Wi-Fi" and "Ethernet" graphs. Unlike the Windows 8 version of Task Manager (which looks similar), the "Disk" activity graph is not enabled by default. The CPU tab no longer displays individual graphs for every logical processor on the system by default, although that remains an option. Additionally, it can display data for each non-uniform memory access (NUMA) node. When displaying data for each logical processor for machines with more than 64 logical processors, the CPU tab now displays simple utilization percentages on heat-mapping tiles.[24] The color used for these heat maps is blue, with darker shades again indicating heavier utilization. Hovering the cursor over any logical processor's data now shows the NUMA node of that processor and its ID, if applicable. Additionally, a new Startup tab has been added that lists startup applications,[25] however this tab does not exist in Windows Server 2012.[26] The new Task Manager recognizes when a Windows Store app has the "Suspended" status.
Windows Server 2012 has an IP address management role for discovering, monitoring, auditing, and managing the IP address space used on a corporate network. IPAM is used for the management and monitoring of Domain Name System (DNS) and Dynamic Host Configuration Protocol (DHCP) servers. Both IPv4 and IPv6 are fully supported.[27]
Windows Server 2012 has a number of changes to Active Directory from the version shipped with Windows Server 2008 R2. The Active Directory Domain Services installation wizard has been replaced by a new section in Server Manager, and a GUI has been added to the Active Directory Recycle Bin.[28] Multiple password policies can be set in the same domain.[29] Active Directory in Windows Server 2012 is now aware of any changes resulting from virtualization, and virtualized domain controllers can be safely cloned. Upgrades of the domain functional level to Windows Server 2012 are simplified; they can be performed entirely in Server Manager. Active Directory Federation Services is no longer required to be downloaded when installed as a role, and claims which can be used by Active Directory Federation Services have been introduced into the Kerberos token. Windows PowerShell commands used by the Active Directory Administrative Center can be viewed in a "PowerShell History Viewer".[30][31]
Windows Server 2012, along with Windows 8, includes a new version of Hyper-V,[32] as presented at the Microsoft BUILD event.[33] Many new features have been added to Hyper-V, including network virtualization, multi-tenancy, storage resource pools, cross-premises connectivity, and cloud backup. Additionally, many of the former restrictions on resource consumption have been greatly lifted. Each virtual machine in this version of Hyper-V can access up to 64 virtual processors, up to 1 terabyte of memory, and up to 64 terabytes of virtual disk space per virtual hard disk (using a new .vhdx format).[34][35] Up to 1024 virtual machines can be active per host, and up to 8000 can be active per failover cluster.[36] SLAT is a required processor feature for Hyper-V on Windows 8, while for Windows Server 2012 it is only required for the supplementary RemoteFX role.[37]
Resilient File System (ReFS),[38] codenamed "Protogon",[39] is a new file system in Windows Server 2012 initially intended for file servers that improves on NTFS in some respects, introducing several major new features.[40][41]
Some NTFS features are not supported in ReFS, including object IDs, short names, file compression, file-level encryption (EFS), user data transactions, hard links, extended attributes, and disk quotas.[39][40] Sparse files are supported.[44][45] Support for named streams is not implemented in Windows 8 and Windows Server 2012, though it was later added in Windows 8.1 and Windows Server 2012 R2.[46] ReFS does not itself offer data deduplication.[40] Dynamic disks with mirrored or striped volumes are replaced with mirrored or striped storage pools provided by Storage Spaces. In Windows Server 2012, automated error-correction with integrity streams is only supported on mirrored spaces; automatic recovery on parity spaces was added in Windows 8.1 and Windows Server 2012 R2.[46] Booting from ReFS is not supported either.
Windows Server 2012 includes version 8.0 of Internet Information Services (IIS). The new version contains new features such as SNI, CPU usage caps for particular websites,[47] centralized management of SSL certificates, WebSocket support and improved support for NUMA, but few other substantial changes were made.[48]
Remote Desktop Protocol has new functions such as Adaptive Graphics (progressive rendering and related techniques), automatic selection of TCP or UDP as transport protocol, multi-touch support, DirectX 11 support for vGPU, USB redirection supported independently of vGPU support, etc.[49] A "connection quality" button is displayed in the RDP client connection bar for RDP 8.0 connections; clicking on it provides further information about the connection, including whether UDP is in use or not.[50]
Windows Server 2012 supports higher maximum hardware specifications than its predecessor, Windows Server 2008 R2.[35][51]
Windows Server 2012 runs only on x86-64 processors. Unlike older versions, Windows Server 2012 does not support Itanium.[5]
Upgrades from Windows Server 2008 and Windows Server 2008 R2 are supported, although upgrades from prior releases are not.[53]
Windows Server 2012 has four editions: Foundation, Essentials, Standard and Datacenter.[54][55][56][57][51]
Reviews of Windows Server 2012 have been generally positive.[60][61][62] Simon Bisson of ZDNet described it as "ready for the datacenter, today,"[60] while Tim Anderson of The Register said that "The move towards greater modularity, stronger automation and improved virtualisation makes perfect sense in a world of public and private clouds" but remarked that "That said, the capability of Windows to deliver obscure and time-consuming errors is unchanged" and concluded that "Nevertheless, this is a strong upgrade overall."[61]
InfoWorld noted that Server 2012's use of Windows 8's panned "Metro" user interface was countered by Microsoft's increasing emphasis on the Server Core mode, which had been "fleshed out with new depth and ease-of-use features", and by increased use of the "practically mandatory" PowerShell.[63] However, Michael Otey of Windows IT Pro expressed dislike of the new Metro interface and the lack of ability to use the older desktop interface alone, saying that most users of Windows Server manage their servers using the graphical user interface rather than PowerShell.[64]
Paul Ferrill wrote that "Windows Server 2012 Essentials provides all the pieces necessary to provide centralized file storage, client backups, and remote access,"[65] but Tim Anderson contended that "Many businesses that are using SBS 2011 and earlier will want to stick with what they have", citing the absence of Exchange, the lack of ability to synchronize with Active Directory Federation Services and the 25-user limit,[66] while Paul Thurrott wrote "you should choose Foundation only if you have at least some in-company IT staff and/or are comfortable outsourcing management to a Microsoft partner or solution provider" and "Essentials is, in my mind, ideal for any modern startup of just a few people."[67]
A second release, Windows Server 2012 R2, which is derived from the Windows 8.1 codebase, was released to manufacturing on August 27, 2013[68] and was made generally available by Microsoft on October 18, 2013.[69] An updated version, formally designated Windows Server 2012 R2 Update, was released in April 2014.[70][71]
Microsoft originally planned to end mainstream support for Windows Server 2012 and Windows Server 2012 R2 on January 9, 2018, with extended support ending on January 10, 2023. In order to provide customers the standard transition lifecycle timeline, Microsoft extended Windows Server 2012 and 2012 R2 support in March 2017 by 9 months. Windows Server 2012 reached the end of mainstream support on October 9, 2018 and entered the extended support phase, which ended on October 10, 2023.[72][73][74]
Microsoft announced in July 2021 that they will distribute paid Extended Security Updates for volume licensed editions of Windows Server 2012 and Windows Server 2012 R2 for up to 3 years after the end of extended support.[75]For Windows Server 2012 and Windows Server 2012 R2, these updates will last until October 13, 2026. This will mark the final end of all security updates for the Windows NT 6.2 product line after 14 years, 2 months and 12 days and will also mark the final end of all security updates for the Windows NT 6.3 product line after 13 years, 1 month and 16 days.
Source: https://en.wikipedia.org/wiki/Windows_Server_2012
The partition type (or partition ID) in a partition's entry in the partition table inside a master boot record (MBR) is a byte value intended to specify the file system the partition contains or to flag special access methods used to access these partitions (e.g. special CHS mappings, LBA access, logical mapped geometries, special driver access, hidden partitions, secured or encrypted file systems, etc.).
Lists of assigned partition types to be used in the partition table in the MBR were originally maintained by IBM and Microsoft internally. As the market for PC operating systems and disk tools grew and opened up, other vendors had a need to assign special partition types to their products as well. As Microsoft neither documented all partition types already assigned by them nor wanted to maintain foreign assignments, third parties started to simply assign partition types on their own behalf in a mostly uncoordinated trial-and-error manner. This led to various conflicting assignments, sometimes causing severe compatibility problems between certain products.[1]
Several industry experts, including Hale Landis, Ralf D. Brown, Matthias R. Paul, and Andries E. Brouwer, started in the 1990s to research partition types and published (and later synchronized) partition type lists in order to help document the industry de facto standard and thereby reduce the risk of further conflicts. Some of them also actively helped to maintain software dealing with partitions to work with the updated lists, indicated conflicts, devised additional detection methods and work-arounds for vendors, or engaged in coordinating new non-conflicting partition type assignments as well.
It is up to an operating system's boot loader or kernel how to interpret the value. So the table specifies which operating systems or disk-related products introduced an ID and what file system or special partition type they mapped it to. Partitions with partition types unknown to the software should be treated as reserved but occupied disk storage space which should not be dealt with by the software, save for partition managers.
While the list is not officially maintained,[1] new assignments should be coordinated.
In particular, temporary partition type assignments for local or experimental projects can utilize type 7Fh in order to avoid conflicts with already assigned types. This type was specially reserved for individual use as part of the Alternative OS Development Partition Standard (AODPS) initiative since 2002.[2]
Extensive lists of known master boot record partition types on IBM PC compatible computers have been compiled; a few well-known type bytes are illustrated in the sketch below.
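As a small, hand-picked illustration (not the full list), the sketch below maps a few widely used type bytes to their common interpretations and shows the recommended handling of unknown values:

```python
# A small selection of well-known MBR partition type bytes.
PARTITION_TYPES = {
    0x05: "Extended partition (CHS)",
    0x06: "FAT16B",
    0x07: "NTFS / exFAT / HPFS (IFS)",
    0x0B: "FAT32 (CHS)",
    0x0C: "FAT32 (LBA)",
    0x0E: "FAT16B (LBA)",
    0x0F: "Extended partition (LBA)",
    0x7F: "Reserved for individual/experimental use (AODPS)",
    0x82: "Linux swap",
    0x83: "Linux native file system",
    0xA5: "FreeBSD slice",
    0xEE: "GPT protective MBR",
    0xEF: "EFI System Partition",
}

def describe(type_byte: int) -> str:
    # Unknown types should be treated as reserved but occupied space.
    return PARTITION_TYPES.get(type_byte, "unknown (treat as reserved, occupied space)")

print(describe(0x83))   # -> "Linux native file system"
print(describe(0xEE))   # -> "GPT protective MBR"
```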
Source: https://en.wikipedia.org/wiki/List_of_partition_IDs
A master boot record (MBR) is a type of boot sector in the first block of partitioned computer mass storage devices like fixed disks or removable drives intended for use with IBM PC-compatible systems and beyond. The concept of MBRs was publicly introduced in 1983 with PC DOS 2.0.
The MBR holds the information on how the disk's sectors (also known as "blocks") are divided into partitions, each partition notionally containing a file system. The MBR also contains executable code to function as a loader for the installed operating system—usually by passing control over to the loader's second stage, or in conjunction with each partition's volume boot record (VBR). This MBR code is usually referred to as a boot loader.
The organization of the partition table in the MBR limits the maximum addressable storage space of a partitioned disk to 2 TiB (2^32 × 512 bytes).[1] Approaches to slightly raise this limit utilizing 33-bit arithmetic or 4096-byte sectors are not officially supported, as they fatally break compatibility with existing boot loaders, most MBR-compliant operating systems and associated system tools, and may cause serious data corruption when used outside of narrowly controlled system environments. Therefore, the MBR-based partitioning scheme is in the process of being superseded by the GUID Partition Table (GPT) scheme in new computers. A GPT can coexist with an MBR in order to provide some limited form of backward compatibility for older systems.
MBRs are not present on non-partitioned media such as floppies, superfloppies or other storage devices configured to behave as such, nor are they necessarily present on drives used in non-PC platforms.
Support for partitioned media, and thereby the master boot record (MBR), was introduced with IBM PC DOS 2.0 in March 1983 in order to support the 10 MB hard disk of the then-new IBM Personal Computer XT, still using the FAT12 file system. The original version of the MBR was written by David Litton of IBM in June 1982. The partition table supported up to four primary partitions. This did not change when FAT16 was introduced as a new file system with DOS 3.0. Support for an extended partition, a special primary partition type used as a container to hold other partitions, was added with DOS 3.2, and nested logical drives inside an extended partition came with DOS 3.30. Since MS-DOS, PC DOS, OS/2 and Windows were never enabled to boot off them, the MBR format and boot code remained almost unchanged in functionality (except for some third-party implementations) throughout the eras of DOS and OS/2 up to 1996.
In 1996, support for logical block addressing (LBA) was introduced in Windows 95B and MS-DOS 7.10 (not to be confused with IBM PC DOS 7.1) in order to support disks larger than 8 GB. Disk timestamps were also introduced.[2] This also reflected the idea that the MBR is meant to be operating system and file system independent. However, this design rule was partially compromised in more recent Microsoft implementations of the MBR, which enforce CHS access for FAT16B and FAT32 partition types 0x06/0x0B, whereas LBA is used for 0x0E/0x0C.
Despite sometimes poor documentation of certain intrinsic details of the MBR format (which occasionally caused compatibility problems), it has been widely adopted as a de facto industry standard, due to the broad popularity of PC-compatible computers and its semi-static nature over decades. This was even to the extent of being supported by computer operating systems for other platforms. Sometimes this was in addition to other pre-existing or cross-platform standards for bootstrapping and partitioning.[3]
MBR partition entries and the MBR boot code used in commercial operating systems, however, are limited to 32 bits.[1] Therefore, the maximum disk size supported on disks using 512-byte sectors (whether real or emulated) by the MBR partitioning scheme (without 33-bit arithmetic) is limited to 2 TiB.[1] Consequently, a different partitioning scheme must be used for larger disks, as they have become widely available since 2010. The MBR partitioning scheme is therefore in the process of being superseded by the GUID Partition Table (GPT). The official approach does little more than ensure data integrity by employing a protective MBR. Specifically, it does not provide backward compatibility with operating systems that do not support the GPT scheme as well. Meanwhile, multiple forms of hybrid MBRs have been designed and implemented by third parties in order to maintain partitions located in the first physical 2 TiB of a disk in both partitioning schemes "in parallel" and/or to allow older operating systems to boot off GPT partitions as well. The present non-standard nature of these solutions causes various compatibility problems in certain scenarios.
The MBR consists of 512 or more bytes located in the first sector of the drive.
It may contain one or more of the following: bootstrap code for loading an operating system, a partition table describing the device's partitions, and an optional disk signature and disk timestamp.
IBM PC DOS 2.0 introduced the FDISK utility to set up and maintain MBR partitions. When a storage device has been partitioned according to this scheme, its MBR contains a partition table describing the locations, sizes, and other attributes of linear regions referred to as partitions.
The partitions themselves may also contain data to describe more complex partitioning schemes, such as extended boot records (EBRs), BSD disklabels, or Logical Disk Manager metadata partitions.[8]
The MBR is not located in a partition; it is located at the first sector of the device (physical offset 0), preceding the first partition. (The boot sector present on a non-partitioned device or within an individual partition is called a volume boot record instead.) In cases where the computer is running a DDO BIOS overlay or boot manager, the partition table may be moved to some other physical location on the device; e.g., Ontrack Disk Manager often placed a copy of the original MBR contents in the second sector, then hid itself from any subsequently booted OS or application, so the MBR copy was treated as if it were still residing in the first sector.
By convention, there are exactly four primary partition table entries in the MBR partition table scheme, although some operating systems and system tools extended this to five (Advanced Active Partitions (AAP) with PTS-DOS 6.60[9] and DR-DOS 7.07), eight (AST and NEC MS-DOS 3.x[10][11] as well as Storage Dimensions SpeedStor), or even sixteen entries (with Ontrack Disk Manager).
An artifact of hard disk technology from the era of the PC XT, the partition table subdivides a storage medium using units of cylinders, heads, and sectors (CHS addressing). These values no longer correspond to their namesakes in modern disk drives, as well as being irrelevant in other devices such as solid-state drives, which do not physically have cylinders or heads.
In the CHS scheme, sector indices have (almost) always begun with sector 1 rather than sector 0 by convention, and due to an error in all versions of MS-DOS/PC DOS up to and including 7.10, the number of heads is generally limited to 255[h] instead of 256. When a CHS address is too large to fit into these fields, the tuple (1023, 254, 63) is typically used today, although on older systems, and with older disk tools, the cylinder value often wrapped around modulo the CHS barrier near 8 GB, causing ambiguity and risks of data corruption. (If the situation involves a "protective" MBR on a disk with a GPT, Intel's Extensible Firmware Interface specification requires that the tuple (1023, 255, 63) be used.) The 10-bit cylinder value is recorded within two bytes in order to facilitate making calls to the original/legacy INT 13h BIOS disk access routines, where 16 bits were divided into sector and cylinder parts, and not on byte boundaries.[13]
Due to the limits of CHS addressing,[16][17] a transition was made to using LBA, or logical block addressing. Both the partition length and partition start address are sector values stored in the partition table entries as 32-bit quantities. The sector size used to be considered fixed at 512 (2^9) bytes, and a broad range of important components including chipsets, boot sectors, operating systems, database engines, partitioning tools, backup and file system utilities and other software had this value hard-coded. Since the end of 2009, disk drives employing 4096-byte sectors (4Kn or Advanced Format) have been available, although the size of the sector for some of these drives was still reported as 512 bytes to the host system through conversion in the hard-drive firmware; such drives are referred to as 512-emulation drives (512e).
Since block addresses and sizes are stored in the partition table of an MBR using 32 bits, the maximum size, as well as the highest start address, of a partition using drives that have 512-byte sectors (actual or emulated) cannot exceed 2 TiB − 512 bytes (2,199,023,255,040 bytes, or 4,294,967,295 (2^32 − 1) sectors × 512 (2^9) bytes per sector).[1] Alleviating this capacity limitation was one of the prime motivations for the development of the GPT.
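To make the on-disk layout concrete, the sketch below decodes the classic partition table: four 16-byte entries starting at offset 446, each holding a status byte, CHS start and end fields, the type byte, and the 32-bit LBA start and sector-count fields, followed by the 0x55AA boot signature at offset 510. It also checks the 2 TiB − 512 bytes figure quoted above. The device path in the usage comment is hypothetical.

```python
import struct

SECTOR = 512

def parse_mbr(raw: bytes):
    """Decode the four primary partition entries of a classic MBR sector."""
    if len(raw) < SECTOR or raw[510:512] != b"\x55\xaa":
        raise ValueError("missing 0x55AA boot signature - not a valid MBR")

    partitions = []
    for i in range(4):
        entry = raw[446 + 16 * i : 446 + 16 * (i + 1)]
        # status (1 byte), CHS start (3 bytes, skipped), type (1 byte),
        # CHS end (3 bytes, skipped), LBA start (4 bytes), sector count (4 bytes)
        status, type_byte, lba_start, num_sectors = struct.unpack("<B3xB3xII", entry)
        if type_byte == 0x00:
            continue  # empty slot
        partitions.append({
            "bootable": status == 0x80,
            "type": type_byte,
            "lba_start": lba_start,
            "size_bytes": num_sectors * SECTOR,
        })
    return partitions

# The 32-bit sector fields give the ceiling quoted above:
assert (2**32 - 1) * SECTOR == 2_199_023_255_040   # 2 TiB - 512 bytes

# Hypothetical usage (requires read access to the raw device):
# with open("/dev/sda", "rb") as disk:
#     print(parse_mbr(disk.read(512)))
```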
Since partitioning information is stored in the MBR partition table using a beginning block address and a length, it may in theory be possible to define partitions in such a way that the allocated space for a disk with 512-byte sectors gives a total size approaching 4 TiB, if all but one partition are located below the 2 TiB limit and the last one is assigned as starting at or close to block 2^32 − 1 and specified with a size of up to 2^32 − 1, thereby defining a partition that requires 33 rather than 32 bits for the sector address to be accessed. However, in practice, only certain LBA-48-enabled operating systems, including Linux, FreeBSD and Windows 7,[18] that use 64-bit sector addresses internally actually support this. Due to code space constraints and the nature of the MBR partition table to only support 32 bits, boot sectors, even if enabled to support LBA-48 rather than LBA-28, often use 32-bit calculations, unless they are specifically designed to support the full address range of LBA-48 or are intended to run on 64-bit platforms only. Any boot code or operating system using 32-bit sector addresses internally would cause addresses to wrap around accessing this partition and thereby result in serious data corruption over all partitions.
For disks that present a sector size other than 512 bytes, such as USB external drives, there are limitations as well. A sector size of 4096 results in an eight-fold increase in the size of a partition that can be defined using MBR, allowing partitions up to 16 TiB (2^32 × 4096 bytes) in size.[19] Versions of Windows more recent than Windows XP support the larger sector sizes, as does Mac OS X, and Linux has supported larger sector sizes since 2.6.31[20] or 2.6.32,[21] but issues with boot loaders, partitioning tools and computer BIOS implementations present certain limitations,[22] since they are often hard-wired to reserve only 512 bytes for sector buffers, causing memory to become overwritten for larger sector sizes. This may cause unpredictable behaviour as well, and should therefore be avoided when compatibility and standard conformity is an issue.
Where a data storage device has been partitioned with the GPT scheme, the master boot record will still contain a partition table, but its only purpose is to indicate the existence of the GPT and to prevent utility programs that understand only the MBR partition table scheme from creating any partitions in what they would otherwise see as free space on the disk, thereby accidentally erasing the GPT.
OnIBM PC-compatiblecomputers, thebootstrappingfirmware(contained within theROMBIOS) loads and executes the master boot record.[23]ThePC/XT (type 5160)used anIntel 8088microprocessor. In order to remain compatible, all x86 BIOS architecture systems start with the microprocessor in anoperating modereferred to asreal mode. The BIOS reads the MBR from the storage device intophysical memory, and then it directs the microprocessor to the start of the boot code. The BIOS will switch the processor to real mode, then begin to execute the MBR program, and so the beginning of the MBR is expected to contain real-modemachine code.[23]
Since the BIOS bootstrap routine loads and runs exactly one sector from the physical disk, having the partition table in the MBR with the boot code simplifies the design of the MBR program. It contains a small program that loads theVolume Boot Record(VBR) of the targeted partition. Control is then passed to this code, which is responsible for loading the actual operating system. This process is known aschain loading.
Popular MBR code programs were created for booting PC DOS and MS-DOS, and similar boot code remains in wide use. These boot sectors expect the FDISK partition table scheme to be in use and scan the list of partitions in the MBR's embedded partition table to find the only one that is marked with the active flag.[24] The code then loads and runs the volume boot record (VBR) of the active partition.
There are alternative boot code implementations, some of which are installed by boot managers, which operate in a variety of ways. Some MBR code loads additional code for a boot manager from the first track of the disk, which it assumes to be "free" space that is not allocated to any disk partition, and executes it. An MBR program may interact with the user to determine which partition on which drive should boot, and may transfer control to the MBR of a different drive. Other MBR code contains a list of disk locations (often corresponding to the contents of files in a filesystem) of the remainder of the boot manager code to load and to execute. (The first approach relies on behavior that is not universal across all disk partitioning utilities, most notably those that read and write GPTs. The last requires that the embedded list of disk locations be updated when changes are made that would relocate the remainder of the code.)
On machines that do not usex86processors, or on x86 machines with non-BIOS firmware such asOpen FirmwareorExtensible Firmware Interface(EFI) firmware, this design is unsuitable, and the MBR is not used as part of the system bootstrap.[25]EFI firmware is instead capable of directly understanding the GPT partitioning scheme and theFATfilesystem format, and loads and runs programs held as files in theEFI System partition.[26]The MBR will be involved only insofar as it might contain a partition table for compatibility purposes if the GPT partition table scheme has been used.
There is some MBR replacement code that emulates EFI firmware's bootstrap, which makes non-EFI machines capable of booting from disks using the GPT partitioning scheme. It detects a GPT, places the processor in the correct operating mode, and loads the EFI compatible code from disk to complete this task.
In addition to the bootstrap code and a partition table, master boot records may contain adisk signature. This is a 32-bit value that is intended to identify uniquely the disk medium (as opposed to the disk unit—the two not necessarily being the same for removable hard disks).
The disk signature was introduced by Windows NT version 3.5, but it is now used by several operating systems, including theLinux kernelversion 2.6 and later. Linux tools can use the NT disk signature to determine which disk the machine booted from.[27]
Windows NT (and later Microsoft operating systems) uses the disk signature as an index to all the partitions on any disk ever connected to the computer under that OS; these signatures are kept inWindows Registrykeys, primarily for storing the persistent mappings between disk partitions and drive letters. It may also be used in Windows NTBOOT.INIfiles (though most do not), to describe the location of bootable Windows NT (or later) partitions.[28]One key (among many), where NT disk signatures appear in a Windows 2000/XP registry, is:
If a disk's signature stored in the MBR was A8 E1 B9 D2 (in that order) and its first partition corresponded with logical drive C: under Windows, then the REG_BINARY data under the key value \DosDevices\C: would be: A8 E1 B9 D2 00 7E 00 00 00 00 00 00.
The first four bytes are said disk signature. (In other keys, these bytes may appear in reverse order from that found in the MBR sector.) These are followed by eight more bytes forming a 64-bit integer, in little-endian notation, which give the byte offset of this partition. In this case, 00 7E corresponds to the hexadecimal value 0x7E00 (32,256). Assuming the drive in question reports a sector size of 512 bytes, dividing this byte offset by 512 gives 63, which is the physical sector number (or LBA) containing the first sector of the partition (unlike the sector count used in the sectors value of CHS tuples, which counts from one, the absolute or LBA sector value starts counting from zero).
If this disk had another partition with the values00 F8 93 71 02following the disk signature (under, e.g., the key value\DosDevices\D:), it would begin at byte offset0x00027193F800(10,495,457,280), which is also the first byte of physical sector20,498,940.
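The two registry examples above can be reproduced with a small parser; the following Python sketch assumes only the value layout described in the text (a 4-byte signature followed by a 64-bit little-endian byte offset) and is not an official API.

```python
import struct

def parse_mbr_mounted_device(value: bytes):
    """Decode a MountedDevices value for an MBR disk: a 4-byte NT disk
    signature followed by the partition's byte offset as a 64-bit
    little-endian integer."""
    signature = value[:4]
    byte_offset = struct.unpack_from('<Q', value, 4)[0]
    return signature.hex(' ').upper(), byte_offset

# Drive C: example from the text
sig, offset = parse_mbr_mounted_device(bytes.fromhex('A8E1B9D2007E000000000000'))
print(sig, offset, offset // 512)          # A8 E1 B9 D2  32256  63

# Drive D: example from the text
_, offset_d = parse_mbr_mounted_device(bytes.fromhex('A8E1B9D200F8937102000000'))
print(offset_d, offset_d // 512)           # 10495457280  20498940
```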
Starting with Windows Vista, the disk signature is also stored in the Boot Configuration Data (BCD) store, and the boot process depends on it.[29] If the disk signature changes, cannot be found or has a conflict, Windows is unable to boot.[30] Unless Windows is forced to use the overlapping part of the LBA address of the Advanced Active Partition entry as a pseudo-disk signature, Windows' usage conflicts with the Advanced Active Partition feature of PTS-DOS 7 and DR-DOS 7.07, in particular if their boot code is located outside the first 8 GB of the disk, so that LBA addressing must be used.
The MBR originated in thePC XT.[31]IBM PC-compatiblecomputers arelittle-endian, which means theprocessorstores numeric values spanning two or more bytes in memoryleast significant bytefirst. The format of the MBR on media reflects this convention. Thus, the MBR signature will appear in adisk editoras the sequence55 AA.[a]
The bootstrap sequence in the BIOS will load the first valid MBR that it finds into the computer'sphysical memoryataddress0x7C00to0x7DFF.[31]The last instruction executed in the BIOS code will be a "jump" to that address in order to direct execution to the beginning of the MBR copy. The primary validation for most BIOSes is the signature at offset0x01FE, although a BIOS implementer may choose to include other checks, such as verifying that the MBR contains a valid partition table without entries referring to sectors beyond the reported capacity of the disk.
To the BIOS, removable (e.g. floppy) and fixed disks are essentially the same. For either, the BIOS reads the first physical sector of the media into RAM at absolute address0x7C00, checks the signature in the last two bytes of the loaded sector, and then, if the correct signature is found, transfers control to the first byte of the sector with a jump (JMP) instruction. The only real distinction that the BIOS makes is that (by default, or if the boot order is not configurable) it attempts to boot from the first removable disk before trying to boot from the first fixed disk. From the perspective of the BIOS, the action of the MBR loading a volume boot record into RAM is exactly the same as the action of a floppy disk volume boot record loading the object code of an operating system loader into RAM. In either case, the program that the BIOS loaded is going about the work of chain loading an operating system.
While the MBRboot sectorcode expects to be loaded at physical address0x0000:0x7C00,[i]all the memory from physical address0x0000:0x0501(address0x0000:0x0500is the last one used by a Phoenix BIOS)[13]to0x0000:0x7FFF,[31]later relaxed to0x0000:0xFFFF[32](and sometimes[j]up to0x9000:0xFFFF)—the end of the first 640KB—is available in real mode.[k]TheINT 12hBIOS interrupt callmay help in determining how much memory can be allocated safely (by default, it simply reads the base memory size in KB fromsegment:offset location0x0040:0x0013, but it may be hooked by other resident pre-boot software like BIOS overlays,RPLcode or viruses to reduce the reported amount of available memory in order to keep other boot stage software like boot sectors from overwriting them).
The last 66 bytes of the 512-byte MBR are reserved for the partition table and other information, so the MBR boot sector program must be small enough to fit within 446 bytes.
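A minimal sketch of this layout (Python; the checks shown are illustrative, not a specification of what any particular BIOS or boot manager actually verifies):

```python
MBR_SIZE = 512
TABLE_OFFSET = 446           # boot code and other data occupy bytes 0..445
ENTRY_SIZE = 16              # four 16-byte partition entries: bytes 446..509
SIGNATURE_OFFSET = 510       # 0x55 0xAA in the last two bytes

def looks_like_mbr(sector: bytes) -> bool:
    """Return True if the sector has the size and boot signature of an MBR."""
    return len(sector) == MBR_SIZE and sector[SIGNATURE_OFFSET:] == b'\x55\xAA'

def partition_entries(sector: bytes):
    """Yield the four raw 16-byte partition table entries."""
    for i in range(4):
        yield sector[TABLE_OFFSET + ENTRY_SIZE * i : TABLE_OFFSET + ENTRY_SIZE * (i + 1)]
```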
The MBR code examines the partition table, selects a suitable partition and loads the program that will perform the next stage of the boot process, usually by making use of INT 13hBIOS calls. The MBR bootstrap code loads and runs (a boot loader- or operating system-dependent)volume boot recordcode that is located at the beginning of the "active" partition. The volume boot record will fit within a 512-byte sector, but it is safe for the MBR code to load additional sectors to accommodate boot loaders longer than one sector, provided they do not make any assumptions on what the sector size is. In fact, at least 1 KB of RAM is available at address0x7C00in every IBM XT- and AT-class machine, so a 1 KB sector could be used with no problem. Like the MBR, a volume boot record normally expects to be loaded at address0x0000:0x7C00. This derives from the fact that the volume boot record design originated on unpartitioned media, where a volume boot record would be directly loaded by the BIOS boot procedure; as mentioned above, the BIOS treats MBRs and volume boot records (VBRs)[l]exactly alike. Since this is the same location where the MBR is loaded, one of the first tasks of an MBR is torelocateitself somewhere else in memory. The relocation address is determined by the MBR, but it is most often0x0000:0x0600(for MS-DOS/PC DOS, OS/2 and Windows MBR code) or0x0060:0x0000(most DR-DOS MBRs). (Even though both of these segmented addresses resolve to the same physical memory address in real mode, forApple Darwinto boot, the MBR must be relocated to0x0000:0x0600instead of0x0060:0x0000, since the code depends on the DS:SI pointer to the partition entry provided by the MBR, but it erroneously refers to it via0x0000:SI only.[33]) It is important not to relocate to other addresses in memory because manyVBRswill assume a certain standard memory layout when loading their boot file.
The Status field in a partition table record is used to indicate an active partition. Standard-conformant MBRs will allow only one partition to be marked active and use this as part of a sanity check to determine the existence of a valid partition table. They will display an error message if more than one partition has been marked active. Some non-standard MBRs will not treat this as an error condition and simply use the first partition in the table that is marked active.
Traditionally, values other than 0x00 (not active) and 0x80 (active) were invalid, and the bootstrap program would display an error message upon encountering them. However, the Plug and Play BIOS Specification and BIOS Boot Specification (BBS) allowed other devices to become bootable as well since 1994.[32][34] Consequently, with the introduction of MS-DOS 7.10 (Windows 95B) and higher, the MBR started to treat a set bit 7 as the active flag and showed an error message only for values 0x01..0x7F. It continued to treat the entry as the physical drive unit to be used when loading the corresponding partition's VBR later on, thereby now also accepting boot drives other than 0x80 as valid; however, MS-DOS did not make use of this extension by itself. Storing the actual physical drive number in the partition table does not normally cause backward compatibility problems, since the value will differ from 0x80 only on drives other than the first one (which have not been bootable before, anyway). However, even with systems enabled to boot off other drives, the extension may still not work universally, for example, after the BIOS assignment of physical drives has changed when drives are removed, added or swapped. Therefore, per the BIOS Boot Specification (BBS),[32] it is best practice for a modern MBR accepting bit 7 as the active flag to pass on the DL value originally provided by the BIOS instead of using the entry in the partition table.
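The status-byte handling described above can be summarized in a few lines; this is a hypothetical sketch of the decision logic, not code taken from any particular MBR implementation:

```python
def interpret_status(status: int, bios_dl: int):
    """Classify a partition entry's status byte as a bit-7-aware MBR would,
    and pick the boot drive per the BBS recommendation (pass on DL)."""
    if status == 0x00:
        return 'inactive', None
    if status & 0x80:
        # Active: prefer the BIOS-provided unit (DL) over the stored value.
        return 'active', bios_dl
    return 'invalid', None       # 0x01..0x7F: report an error
```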
The MBR is loaded at memory location0x0000:0x7C00and with the followingCPUregisters set up when the prior bootstrap loader (normally theIPLin the BIOS) passes execution to it by jumping to0x0000:0x7C00in the CPU'sreal mode.
Systems withPlug-and-PlayBIOS or BBS support will provide a pointer to PnP data in addition to DL:[32][34]
By convention, a standard conformant MBR passes execution to a successfully loaded VBR, loaded at memory location0x0000:0x7C00, by jumping to0x0000:0x7C00in the CPU's real mode with the following registers maintained or specifically set up:
The MBR code passes additional information to the VBR in many implementations:
Under DR-DOS 7.07 an extended interface may be optionally provided by the extended MBR and in conjunction with LOADER:
In conjunction with GPT, anEnhanced Disk Drive Specification(EDD) 4Hybrid MBRproposal recommends another extension to the interface:[37]
Though it is possible to manipulate thebytesin the MBR sector directly using variousdisk editors, there are tools to write fixed sets of functioning code to the MBR. Since MS-DOS 5.0, the programFDISKhas included the switch/MBR, which will rewrite the MBR code.[38]UnderWindows 2000andWindows XP, theRecovery Consolecan be used to write new MBR code to a storage device using itsfixmbrcommand. UnderWindows VistaandWindows 7, theRecovery Environmentcan be used to write new MBR code using theBOOTREC /FIXMBRcommand.
Some third-party utilities may also be used for directly editing the contents of partition tables (without requiring any knowledge of hexadecimal or disk/sector editors), such as MBRWizard.[o]
ddis a POSIX command commonly used to read or write any location on a storage device, MBR included. InLinux, ms-sys may be used to install a Windows MBR. TheGRUBandLILOprojects have tools for writing code to the MBR sector, namelygrub-installandlilo -mbr. The GRUB Legacy interactive console can write to the MBR, using thesetupandembedcommands, but GRUB2 currently requiresgrub-installto be run from within an operating system.
Various programs are able to create a "backup" of both the primary partition table and the logical partitions in the extended partition.
Linuxsfdisk(on aSystemRescueCD) is able to save a backup of the primary and extended partition table. It creates a file that can be read in a text editor, or this file can be used by sfdisk to restore the primary/extended partition table. An example command to back up the partition table issfdisk -d /dev/hda > hda.outand to restore issfdisk /dev/hda < hda.out. It is possible to copy the partition table from one disk to another this way, useful for setting up mirroring, but sfdisk executes the command without prompting/warnings usingsfdisk -d /dev/sda | sfdisk /dev/sdb.[39]
|
https://en.wikipedia.org/wiki/Master_Boot_Record
|
TheGUID Partition Table(GPT) is a standard for the layout ofpartition tablesof a physicalcomputer storage device, such as ahard disk driveorsolid-state drive. It is part of theUnified Extensible Firmware Interface(UEFI) standard.
It has several advantages overmaster boot record(MBR) partition tables, such as support for more than four primary partitions and 64-bit rather than 32-bitlogical block addresses(LBA) for blocks on a storage device. The larger LBA size supports larger disks.
Some BIOSes support GPT partition tables as well as MBR partition tables, in order to support larger disks than MBR partition tables can support.
GPT usesuniversally unique identifiers(UUIDs), which are also known as globally unique identifiers (GUIDs), to identify partitions and partition types.
All modern personal computeroperating systemssupport GPT. Some, includingmacOSandMicrosoft Windowson the x86 architecture, support booting from GPT partitions only on systems with EFI firmware, butFreeBSDand mostLinux distributionscan boot from GPT partitions on systems with either the BIOS or the EFI firmware interface.
The Master Boot Record (MBR) partitioning scheme, widely used since the early 1980s, had limitations when it came to modern hardware. The available size for block addresses and related information is limited to 32 bits. For hard disks with 512‑byte sectors, the MBR partition table entries allow a maximum size of 2TiB(2³² × 512‑bytes) or 2.20TB(2.20 × 10¹² bytes).[1]
In the late 1990s, Intel developed a new partition table format as part of what eventually became the Unified Extensible Firmware Interface (UEFI). The GUID Partition Table is specified in chapter 5 of the UEFI 2.11 specification.[2]: 111 GPT uses 64 bits for logical block addresses, allowing a maximum disk size of 2⁶⁴ sectors. For disks with 512‑byte sectors, the maximum size is 8 ZiB (2⁶⁴ × 512 bytes) or 9.44 ZB (9.44 × 10²¹ bytes).[1] For disks with 4,096‑byte sectors the maximum size is 64 ZiB (2⁶⁴ × 4,096 bytes) or 75.6 ZB (75.6 × 10²¹ bytes).
In 2010, hard-disk manufacturers introduced drives with 4,096‑byte sectors (Advanced Format).[3]For compatibility with legacy hardware and software, those drives include an emulation technology (512e) that presents 512‑byte sectors to the entity accessing the hard drive, despite their underlying 4,096‑byte physical sectors.[4]Performance could be degraded on write operations, when the drive is forced to perform two read-modify-write operations to satisfy a single misaligned 4,096‑byte write operation.[4]Since April 2014, enterprise-class drives without emulation technology (4K native) have been available on the market.[5][6]
Readiness of the support for 4 KB logical sectors within operating systems differs among their types, vendors and versions.[7]For example,Microsoft Windowssupports 4K native drives sinceWindows 8andWindows Server 2012(both released in 2012) inUEFI.[8]
Like MBR, GPT uses logical block addressing (LBA) in place of the historical cylinder-head-sector (CHS) addressing. The protective MBR is stored at LBA 0, and the GPT header is in LBA 1. The GPT header has a pointer to the partition table (Partition Entry Array), which is typically at LBA 2. Each entry in the partition table has the same size, which is 128, 256, 512, etc., bytes; typically it is 128 bytes. The UEFI specification stipulates that a minimum of 16,384 bytes, regardless of sector size, be allocated for the Partition Entry Array. Thus, on a disk with 512-byte sectors, at least 32 sectors are used for the Partition Entry Array, and the first usable block is at LBA 34 or higher, while on a 4,096-byte sector disk, at least 4 sectors are used for the Partition Entry Array, and the first usable block is at LBA 6 or higher. In addition to the primary GPT header and Partition Entry Array, stored at the beginning of the disk, there is a backup GPT header and Partition Entry Array, stored at the end of the disk. The backup GPT header must be at the last block on the disk (LBA −1) and the backup Partition Entry Array is placed between the end of the last partition and the last block.[2]: pp. 115-120, §5.3
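The first-usable-block figures above follow from the minimum Partition Entry Array size; a small sketch of the calculation (Python, assuming the array starts at LBA 2 as described):

```python
def first_usable_lba(sector_size: int, entries: int = 128, entry_size: int = 128) -> int:
    """Smallest first usable LBA for a primary GPT: protective MBR (LBA 0),
    GPT header (LBA 1), then a Partition Entry Array of at least 16,384 bytes."""
    array_bytes = max(entries * entry_size, 16_384)
    array_sectors = -(-array_bytes // sector_size)   # ceiling division
    return 2 + array_sectors

print(first_usable_lba(512))    # 34
print(first_usable_lba(4096))   # 6
```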
For limited backward compatibility, the space of the legacyMaster Boot Record(MBR) is still reserved in the GPT specification, but it is now used in a way that prevents MBR-based disk utilities from misrecognizing and possibly overwriting GPT disks. This is referred to as aprotective MBR.[9]
A single partition of type EEh, encompassing the entire GPT drive (where "entire" actually means as much of the drive as can be represented in an MBR), is defined, which identifies the disk as GPT-partitioned. Operating systems and tools which cannot read GPT disks will generally recognize the disk as containing one partition of unknown type and no empty space, and will typically refuse to modify the disk unless the user explicitly requests and confirms the deletion of this partition. This minimizes accidental erasures.[9] Furthermore, GPT-aware OSes may check the protective MBR, and if the enclosed partition type is not EEh or if there are multiple partitions defined on the target device, the OS may refuse to manipulate the partition table.[10]
If the actual size of the disk exceeds the maximum partition size representable using the legacy 32-bit LBA entries in the MBR partition table, the recorded size of this partition is clipped at the maximum, thereby ignoring the rest of the disk. This amounts to a maximum reported size of 2 TiB, assuming a disk with 512 bytes per sector (see512e). It would result in 16 TiB with 4 KiB sectors (4Kn), but since many older operating systems and tools are hard coded for a sector size of 512 bytes or are limited to 32-bit calculations, exceeding the 2 TiB limit could cause compatibility problems.[9]
In operating systems that support GPT-based boot through BIOS services rather than EFI, the first sector may also still be used to store the first stage of the bootloader code, but modified to recognize GPT partitions. The bootloader in the MBR must not assume a sector size of 512 bytes.[9]
The partition table header defines the usable blocks on the disk. It also defines the number and size of the partition entries that make up the partition table (offsets 80 and 84 in the table).[2]: 117-118
After the primary header and before the backup header, the Partition Entry Array describes partitions, using a minimum size of 128 bytes for each entry block.[13]The starting location of the array on disk, and the size of each entry, are given in the GPT header. The first 16 bytes of each entry designate the partition type's globally unique identifier (GUID). For example, the GUID for anEFI system partitionisC12A7328-F81F-11D2-BA4B-00A0C93EC93B. The second 16 bytes are a GUID unique to the partition. Then follow the starting and ending 64 bit LBAs, partition attributes, and the 36 character (max.)Unicodepartition name. As is the nature and purpose of GUIDs and as per RFC 4122, no central registry is needed to ensure the uniqueness of the GUID partition type designators.[14][2]:1970
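The entry layout just described can be decoded mechanically; the following Python sketch assumes the default 128-byte entry size and the field order given above (it is illustrative, not a reference implementation of the UEFI specification):

```python
import struct
import uuid

def parse_gpt_entry(entry: bytes) -> dict:
    """Decode one GPT partition entry: type GUID, unique GUID, first and
    last LBA, attribute bits, and the UTF-16LE partition name."""
    type_guid   = uuid.UUID(bytes_le=bytes(entry[0:16]))
    unique_guid = uuid.UUID(bytes_le=bytes(entry[16:32]))
    first_lba, last_lba, attributes = struct.unpack_from('<QQQ', entry, 32)
    name = entry[56:128].decode('utf-16-le').rstrip('\x00')
    return {'type_guid': str(type_guid).upper(), 'unique_guid': str(unique_guid).upper(),
            'first_lba': first_lba, 'last_lba': last_lba,
            'attributes': attributes, 'name': name}

# An EFI system partition entry would report the type GUID
# C12A7328-F81F-11D2-BA4B-00A0C93EC93B mentioned in the text.
```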
The 64-bit partition table attributes are divided between 48-bit common attributes for all partition types and 16-bit type-specific attributes:
Microsoft defines the type-specific attributes forbasic data partitionas:[16][17]
Google defines the type-specific attributes for ChromeOS kernel as:[18]
Windows 7 and earlier do not support UEFI on 32-bit platforms, and therefore do not allow booting from GPT partitions.[33]
Limited to 128 partitions per disk.[33]
"Partition type GUID" means that each partition type is strictly identified by a GUID number unique to that type, and therefore partitions of the same type will all have the same "partition type GUID". Each partition also has a "partition unique GUID" as a separate entry, which as the name implies is a unique id for each partition.
|
https://en.wikipedia.org/wiki/GUID_Partition_Table
|
Apple Partition Map(APM) is apartitionscheme used to define the low-level organization of data on disks formatted for use with68kandPowerPCMacintoshcomputers. It was introduced with theMacintosh II.[1]
Disks using the Apple Partition Map are divided intological blocks, with 512 bytes usually belonging to eachblock. The first block,Block 0, contains an Apple-specific data structure called "Driver Descriptor Map" for theMacintosh ToolboxROM to load driver updates and patches before loading from an MFS or HFS partition.[2]Because APM allows 32 bits worth of logical blocks, the historical size of an APM formatted disk using small blocks[3]is limited to 2TiB.[4]
TheApple Partition Mapmaps out all space used (including the map) and unused (free space) on disk, unlike the minimal x86master boot recordthat only accounts for used non-map partitions. This means that every block on the disk (with the exception of the first block,Block 0) belongs to a partition.
Some hybrid disks contain both anISO 9660primary volume descriptor and an Apple Partition Map, thus allowing the disc to work on different types of computers, including Apple systems.
For accessing volumes, both APM andGPTpartitions can be used in a standard manner withMac OS X Tiger(10.4) and higher. For starting an operating system,PowerPC-based systemscan only boot from APM disks.[5]In contrast,Intel-based systemsgenerally boot from GPT disks.[1][6][7]Nevertheless, older Intel-based Macs are able to boot from APM, GPT (GUID Partition Table) and MBR (Master Boot Record, using theBIOS-Emulation called EFI-CSM i.e. theCompatibility Support Moduleprovided byEFI).
Intel-based models that came with Mac OS X Tiger (10.4) or Leopard (10.5) preinstalled had to be able to boot from both APM and GPT disks due to the installation media for these universal versions of Mac OS X, which are APM partitioned in order to remain compatible with PowerPC-based systems.[8] However, the installation of OS X on an Intel-based Mac demands a GPT-partitioned disk or will refuse to continue, the same way installation on a PowerPC-based system will demand an APM-partitioned destination volume. An already installed OS X cloned to an APM partition on an Intel system remains bootable even on 2011 Intel-based Macs. Despite this apparent APM support, Apple never officially supported booting from an internal APM disk on an Intel-based system. The one exception for a universal version of Mac OS X (Tiger or Leopard) is an official Apple document describing how to set up a dual-bootable external APM disk for use with PowerPC and Intel.[9]
Each entry of the partition table is the size of one data block, which is normally 512 bytes.[1][10] Because the partition table itself is also a partition, the size of this first partition limits the number of entries in the partition table.
The normal case is that 64 sectors (64 × 512 = 32 KB) are used by theApple Partition Map: one block for theDriver Descriptor MapasBlock 0, one block for the partition table itself and 62 blocks for a maximum of 62 data partitions.[11]
Each partition entry includes the starting sector and the size, as well as a name, a type, a position of the data area, and a possible boot code. It also includes the total number of partitions in that partition table.[12] This ensures that, after reading the first partition table entry, the firmware knows how many more blocks to read from the media in order to process every partition table entry. All entries are in big-endian byte order.[citation needed]
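As a concrete illustration, the sketch below decodes the commonly used leading fields of such an entry in Python. The byte offsets follow Apple's classic partition map structure ('PM' signature, map entry count, start, length, name, type, status) and should be treated as an assumption of this example rather than something stated above.

```python
import struct

def parse_apm_entry(block: bytes) -> dict:
    """Decode the leading fields of an Apple Partition Map entry
    (all multi-byte values big-endian)."""
    if block[0:2] != b'PM':
        raise ValueError('not an APM partition entry')
    map_entries, start, length = struct.unpack_from('>III', block, 4)
    name  = block[16:48].split(b'\x00', 1)[0].decode('ascii', 'replace')
    ptype = block[48:80].split(b'\x00', 1)[0].decode('ascii', 'replace')
    status = struct.unpack_from('>I', block, 88)[0]
    return {'entries_in_map': map_entries, 'start_block': start,
            'block_count': length, 'name': name, 'type': ptype,
            'status_flags': status}
```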
Types beginning with "Apple_" are reserved for assignment by Apple; all other custom-defined types are free to use. However, registration with Apple is encouraged.
Partition status is abit fieldcomposed of the flags:
|
https://en.wikipedia.org/wiki/Apple_Partition_Map
|
In computing, a rigid disk block (RDB) is the block on a hard disk where the Amiga series of computers store the disk's partition and filesystem information. The IBM PC equivalent of the Amiga's RDB is the master boot record (MBR).
Unlike its PC equivalent, the RDB doesn't directly contain metadata for each partition. Instead it points to alinked listof partition blocks, which contain the actual partition data. The partition data includes the start, length, filesystem, boot priority, buffer memory type and "flavor", though the latter was never used. Because there is no limitation in partition block count, there is no need to distinguish primary and extended types and all partitions are equal in stature and architecture.
Additionally, it may point to additional filesystem drivers, allowing the Amiga to boot from filesystems not directly supported by the ROM, such asPFSorSFS.
The data in the rigid disk block must start with theASCIIbytes "RDSK". Furthermore, its position is not restricted to the very first block of a volume, instead it could be located anywhere within its first 16 blocks. Thus it could safely coexist with a master boot record, which is forced to be found at block 0.
Nearly all Amiga hard disk controllers support the RDB standard, enabling the user to exchange disks between controllers.
|
https://en.wikipedia.org/wiki/Amiga_Rigid_Disk_Block
|
This article presents atimelineof events in the history of 16-bitx86DOS-familydisk operating systemsfrom 1980 to present.Non-x86 operating systems named "DOS"are not part of the scope of this timeline.
Also presented is a timeline of events in the history of the 8-bit8080-based and 16-bit x86-basedCP/Moperating systems from 1974 to 2014, as well as the hardware and software developments from 1973 to 1995 which formed the foundation for the initial version and subsequent enhanced versions of these operating systems.
DOS releases have been in the forms of:
IBM combined SYSINIT with its customized ROM-BIOS interface code to create the BIOS extensionsfileIBMBIO.COM, the DOS-BIOS which deals withinput/outputhandling, ordevicehandling, and added a few external commands of their own:COMP,DISKCOMP,DISKCOPY, andMODE(configureprinter) to finish their product. The 160 KB DOS diskette also included 23 sample BASICprogramsdemonstrating the abilities of the PC, including the gameDONKEY.BAS. The twosystem files, IBMBIO.COM and IBMDOS.COM, arehidden. The first sector of DOS-formatted diskettes is theboot record. Two copies of the File Allocation Table occupy the two sectors which follow the boot record. Sectors four through seven hold theroot directory. The remaining 313 sectors (160,256 bytes) store the data contents of files. Disk space is allocated inclusters, which are one-sector in length. Because an 8-bit FAT can't support over 300 clusters, Paterson implemented a new 12-bit FAT, which would be calledFAT12.[D]DOS 1.0 diskettes have up to 64 32-byte directory entries, holding the 8-bytefilename, 3-bytefilename extension, 1-bytefile attribute(with a hidden bit, system bit and six undefined bits), 12 bytes reserved for future use, 2-byte last modified date, 2-byte starting cluster number and 4-bytefile size. The two standard formats for program files areCOMandEXE; aProgram Segment Prefixis built when they are loaded into memory. The third kind of command processing file is thebatch file.AUTOEXEC.BATis checked for, and executed by COMMAND.COM at start-up.[83]Special batch file commands arePAUSEandREM. I/O is madedevice independentby treatingperipheralsas if they were files. Whenever thereserved filenamesCON:(console),PRN:(printer), orAUX:(auxiliaryserial port) appear in theFile Control Blockof a file named in a command, all operations are directed to the device.[24]Thevideo controller, floppy disk controller, further memory, serial andparallel portsare added via up to five 8-bitISAexpansion cards. Delivery of the computer is scheduled for October.[86]
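The 32-byte directory entry described above maps directly onto a small record; the following Python sketch decodes it, with the hidden and system bit positions assumed to match the later standard FAT attribute definitions (an assumption of this example, not a statement of the text):

```python
import struct

def parse_dos1_dirent(entry: bytes) -> dict:
    """Decode a 32-byte DOS 1.0 directory entry: 8-byte name, 3-byte
    extension, attribute byte, 12 reserved bytes, then date, starting
    cluster and file size (multi-byte fields little-endian)."""
    name = entry[0:8].rstrip(b' ').decode('ascii', 'replace')
    ext  = entry[8:11].rstrip(b' ').decode('ascii', 'replace')
    attr = entry[11]
    date, cluster, size = struct.unpack_from('<HHI', entry, 24)
    return {'name': name, 'ext': ext,
            'hidden': bool(attr & 0x02), 'system': bool(attr & 0x04),
            'modified_date': date, 'start_cluster': cluster, 'size': size}
```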
In addition to Microsoft's new commands in MS-DOS 2.0 (above), IBM adds more includingFDISK, the fixed disk[F]setup program, used to write themaster boot recordwhich supports up to fourpartitionson hard drives. Only one DOS partition is allowed, the others are intended for other operating systems such as CP/M-86, UCSD p-System and Xenix. The fixed disk has 10,618,880 bytes[G]of raw space.
The DOS partition on the fixed disk continues to use the FAT12 format, but with adaptations to support the much larger size of the fixed disk partition compared to floppy disks. Space in the user data area of the disk is allocated in clusters which are fixed at 8 sectors each. With DOS the only partition, the combined overhead is 50 sectors[H]leaving 10,592,256 bytes[I]for user data.[83]ABIOS parameter block(BPB) is added to volume boot records.
PC DOS does not include the FC command, which is similar to COMP. DOS 2 is about 12 KB larger than DOS 1.1 – despite its complex new features, it's only 24 KB of code.[24][134][135][136]Under pressure from IBM to leave sufficient memory available for applications on smaller PC systems, the developers had reduced the system size from triple that of DOS 1.1.[21]Peter Norton found many problems with the release.Interrupts25h and 26h, which read or write complete sectors, redefined their rules for absolute sector addressing, "sabotaging" programs using these services.[83][137]The XT motherboard uses 64-kilobit DIP chips, supporting up to 256 KB on board. With 384 KB on expansion cards, users could officially reach the 640 KB barrier ofconventional memory.[138]The power supply capacity was doubled to about 130 watts, to accommodate the hard drive.[139]
The other EMS 4.0 partners are evaluating the XMS spec, but stopped short of endorsing it.[212][360]
Excluding maintenance releases, this is the last version of Windows that could run on 8088 and 8086-based XT-class PCs (in real mode).
|
https://en.wikipedia.org/wiki/Timeline_of_DOS_operating_systems
|
Microsoft Windowswas announced byBill Gateson November 10, 1983, 2 years before it was first released.[1]Microsoft introduced Windows as agraphical user interfaceforMS-DOS, which had been introduced two years earlier, on August 12, 1981. The product line evolved in the 1990s from anoperating environmentinto a fully complete, modernoperating systemover two lines of development, each with their own separate codebase.
The first versions of Windows (1.0 through to 3.11) weregraphical shellsthat ran from MS-DOS.Windows 95, though still being based on MS-DOS, was its own operating system. Windows 95 also had a significant amount of 16-bit code ported from Windows 3.1.[2][3][4]Windows 95 introduced multiple features that have been part of the product ever since, including theStart menu, thetaskbar, andWindows Explorer(renamed File Explorer in Windows 8). In 1997, Microsoft releasedInternet Explorer 4which included the (at the time controversial)Windows Desktop Update. It aimed to integrate Internet Explorer and thewebinto the user interface and also brought new features into Windows, such as the ability to displayJPEGimages as the desktop wallpaper and single window navigation in Windows Explorer. In 1998, Microsoft released Windows 98, which also included the Windows Desktop Update and Internet Explorer 4 by default. The inclusion of Internet Explorer 4 and the Desktop Update led to anantitrust case in the United States. Windows 98 included USB support out of the box, and alsoplug and play, which allows devices to work when plugged in without requiring a system reboot or manual configuration.Windows Me, the last DOS-based version of Windows, was aimed at consumers and released in 2000. It introducedSystem Restore,Help and Support Center, updated versions of theDisk Defragmenterand other system tools.
In 1993, Microsoft releasedWindows NT 3.1, the first version of the newly developedWindows NToperating system, followed byWindows NT 3.5in 1994, andWindows NT 3.51in 1995. "NT" is an initialism for "New Technology".[3]Unlike theWindows 9xseries of operating systems, it was a fully 32-bit operating system. NT 3.1 introducedNTFS, a file system designed to replace the olderFile Allocation Table(FAT) which was used by DOS and the DOS-based Windows operating systems. In 1996,Windows NT 4.0was released, which included a fully 32-bit version of Windows Explorer written specifically for it, making the operating system work like Windows 95. Windows NT was originally designed to be used on high-end systems and servers, but with the release ofWindows 2000, many consumer-oriented features from Windows 95 and Windows 98 were included, such as theWindows Desktop Update,Internet Explorer 5, USB support andWindows Media Player. These consumer-oriented features were further extended inWindows XPin 2001, which included a new visual style calledLuna, a more user-friendly interface, updated versions of Windows Media Player andInternet Explorer 6by default, and extended features from Windows Me, such as the Help and Support Center and System Restore.Windows Vista, which was released in 2007, focused on securing the Windows operating system againstcomputer virusesand othermalicious softwareby introducing features such asUser Account Control. New features includeWindows Aero, updated versions of the standard games (e.g.Solitaire), Windows Movie Maker, and Windows Mail to replaceOutlook Express. Despite this, Windows Vista was critically panned for its poor performance on older hardware and its at-the-time high system requirements.Windows 7followed in 2009 nearly three years after its launch, and despite it technically having higher system requirements,[5][6]reviewers noted that it ran better than Windows Vista.[7]Windows 7 removed many applications, such asWindows Movie Maker,Windows Photo GalleryandWindows Mail, instead requiring users to download separateWindows Live Essentialsto gain some of those features and other online services.Windows 8, which was released in 2012, introduced many controversial changes, such as the replacement of the Start menu with the Start Screen, the removal of the Aero interface in favor of a flat, colored interface as well as the introduction of "Metro" apps (later renamed toUniversal Windows Platform apps), and the Charms Bar user interface element, all of which received considerable criticism from reviewers.[8][9][10]Windows 8.1, a free upgrade to Windows 8, was released in 2013.[11]
The following version of Windows,Windows 10, which was released in 2015, reintroduced the Start menu and added the ability to run Universal Windows Platform apps in a window instead of always in full screen. Windows 10 was generally well-received, with many reviewers stating that Windows 10 is what Windows 8 should have been.[12][13][14]
The latest version of Windows,Windows 11, was released to the general public on October 5, 2021. Windows 11 incorporates a redesigned user interface, including a new Start menu, a visual style featuring rounded corners, and a new layout for the Microsoft Store,[15]and also includedMicrosoft Edgeby default.
Windows 1.0, the first independent version of Microsoft Windows, released on November 20, 1985, achieved little popularity. The project was briefly codenamed "Interface Manager" before the windowing system was implemented (contrary to popular belief, that was never the product's actual name), and Rowland Hanson, the head of marketing at Microsoft, convinced the company that the name Windows would be more appealing to customers.[16]
Windows 1.0 was not a complete operating system, but rather an "operating environment" that extendedMS-DOS, and shared the latter's inherent flaws.
The first version of Microsoft Windows included a simple graphics painting program calledWindows Paint;Windows Write, a simpleword processor; an appointment calendar; a card-filer; anotepad; a clock; acontrol panel; acomputer terminal;Clipboard; andRAMdriver. It also included theMS-DOS Executiveand a game calledReversi.
Microsoft had worked withApple Computerto develop applications for Apple's newMacintoshcomputer, which featured agraphical user interface. As part of the related business negotiations, Microsoft had licensed certain aspects of the Macintosh user interface from Apple; in later litigation, a district court summarized these aspects as "screen displays".
In the development of Windows 1.0, Microsoft intentionally limited its borrowing of certain GUI elements from the Macintosh user interface, to comply with its license. For example, windows were only displayed "tiled" on the screen; that is, they could not overlap or overlie one another.
On December 31, 2001, Microsoft declared Windows 1.0 obsolete and stopped providing support and updates for the system.
During the mid to late 1980s, Microsoft and IBM had cooperatively been developing OS/2 as a successor to DOS. OS/2 would take full advantage of the protected mode of the Intel 80286 processor and up to 16 MB of memory. OS/2 1.0, released in 1987, supported swapping and multitasking and allowed running of DOS executables.
IBM licensed Windows'GUIfor OS/2 asPresentation Manager, and the two companies stated that it and Windows 2.0 would be almost identical.[17]Presentation Manager was not available with OS/2 until version 1.1, released in 1988. ItsAPIwas incompatible with Windows. Version 1.2, released in 1989, introduced a newfile system,HPFS, to replace theFATfile system.
By the early 1990s, conflicts developed in the Microsoft/IBM relationship. They cooperated with each other in developing their PC operating systems and had access to each other's code. Microsoft wanted to further develop Windows, while IBM desired for future work to be based on OS/2. In an attempt to resolve this tension, IBM and Microsoft agreed that IBM would develop OS/2 2.0, to replace OS/2 1.3 and Windows 3.0, while Microsoft would develop the next version, OS/2 3.0.
This agreement soon fell apart however, and the Microsoft/IBM relationship was terminated. IBM continued to develop OS/2, while Microsoft changed the name of its (as yet unreleased) OS/2 3.0 toWindows NT. Both retained the rights to use OS/2 and Windows technology developed up to the termination of the agreement; Windows NT, however, was to be written anew, mostly independently (see below).
After an interim 1.3 version to fix up many remaining problems with the 1.x series, IBM released OS/2 version 2.0 in 1992. This was a major improvement: it featured a new, object-oriented GUI, the Workplace Shell (WPS), that included a desktop and was considered by many to be OS/2's best feature. Microsoft would later imitate much of it in Windows 95. Version 2.0 also provided a full 32-bit API, offered smooth multitasking and could take advantage of the 4 gigabytes of address space provided by theIntel 80386. Still, much of the system had 16-bit code internally which required, among other things, device drivers to be 16-bit code as well. This was one of the reasons for the chronic shortage of OS/2 drivers for the latest devices. Version 2.0 could also run DOS and Windows 3.0 programs, since IBM had retained the right to use the DOS and Windows code as a result of the breakup.
Microsoft Windows version 2.0 (2.01 and 2.03 internally) came out on December 9, 1987, and proved slightly more popular than its predecessor. Much of the popularity forWindows 2.0came by way of its inclusion as a "run-time version" with Microsoft's new graphical applications,ExcelandWord for Windows. They could be run from MS-DOS, executing Windows for the duration of their activity, and closing down Windows upon exit.
Microsoft Windows received a major boost around this time whenAldus PageMakerappeared in a Windows version, having previously run only onMacintosh. Some computer historians[who?]date this, the first appearance of a significantandnon-Microsoft application for Windows, as the start of the success of Windows.
Like prior versions of Windows, version 2.0 could use thereal-modememorymodel, which confined it to a maximum of 1megabyteof memory. In such a configuration, it could run under another multitasker likeDESQview, which used the286protected mode. It was also the first version to support theHigh Memory Areawhen running on an Intel 80286 compatible processor. This edition was renamedWindows/286with the release of Windows 2.1.
A separateWindows/386edition had aprotected modekernel, which required an 80386 compatible processor, withLIM-standard EMSemulationandVxDdrivers in the kernel. All Windows and DOS-based applications at the time were real mode, and Windows/386 could run them over the protected mode kernel by using thevirtual 8086 mode, which was new with the 80386 processor.
Version 2.1 came out on May 27, 1988, followed by version 2.11 on March 13, 1989; they included a few minor changes.
InApple Computer, Inc. v. Microsoft Corp., version 2.03, and later 3.0, faced challenges from Apple over its overlapping windows and other features Apple charged mimicked the ostensibly copyrighted "look and feel" of its operating system and "embodie[d] and generated a copy of the Macintosh" in its OS. Judge William Schwarzer dropped all but 10 of Apple's 189 claims of copyright infringement, and ruled that most of the remaining 10 were over uncopyrightable ideas.[18]
On December 31, 2001, Microsoft declared Windows 2.x obsolete and stopped providing support and updates for the system.
Windows 3.0, released in May 1990, improved capabilities given to native applications. It also allowed users to bettermultitaskolder MS-DOS based software compared to Windows/386, thanks to the introduction ofvirtual memory.
Windows 3.0's user interface finally resembled a serious competitor to the user interface of theMacintoshcomputer. PCs had improved graphics by this time, due toVGAvideo cards, and the protected/enhanced mode allowed Windows applications to use more memory in a more painless manner than their DOS counterparts could. Windows 3.0 could run in real, standard, or 386 enhanced modes, and was compatible with any Intel processor from the8086/8088up to the80286and80386. This was the first version to run Windows programs in protected mode, although the 386 enhanced modekernelwas an enhanced version of the protected mode kernel in Windows/386.
Windows 3.0 received two updates. A few months after introduction, Windows 3.0a was released as a maintenance release, resolving bugs and improving stability. A "multimedia" version, Windows 3.0 with Multimedia Extensions 1.0, was released in October 1991. This was bundled with "multimedia upgrade kits", comprising aCD-ROM driveand asound card, such as theCreative LabsSound Blaster Pro. This version was the precursor to the multimedia features available inWindows 3.1(first released in April 1992) and later, and was part of Microsoft's specification for theMultimedia PC.
The features listed above and growing market support from application software developers made Windows 3.0 wildly successful, selling around 10 million copies in the two years before the release of version 3.1. Windows 3.0 became a major source of income for Microsoft, and led the company to revise some of its earlier plans. Support was discontinued on December 31, 2001.[19]
In response to the impending release of OS/2 2.0, Microsoft developedWindows 3.1(first released in April 1992), which included several improvements to Windows 3.0, such as display ofTrueTypescalable fonts (developed jointly with Apple), improved disk performance in 386 Enhanced Mode, multimedia support, and bugfixes. It also removed Real Mode, and only ran on an80286or better processor. Later Microsoft also released Windows 3.11, a touch-up to Windows 3.1 which included all of the patches and updates that followed the release of Windows 3.1 in 1992.
In 1992 and 1993, Microsoft released Windows for Workgroups (WfW), which was available both as an add-on for existing Windows 3.1 installations and in a version that included the base Windows environment and the networking extensions all in one package. Windows for Workgroups included improved network drivers and protocol stacks, and support for peer-to-peer networking. There were two versions of Windows for Workgroups – 3.1 and 3.11. Unlike prior versions, Windows for Workgroups 3.11 ran in 386 Enhanced Mode only, and needed at least an80386SXprocessor. One optional download for WfW was the "Wolverine" TCP/IP protocol stack, which allowed for easy access to the Internet through corporate networks.
All these versions continued version 3.0's impressive sales pace. Even though the 3.1x series still lacked most of the important features of OS/2, such as long file names, a desktop, or protection of the system against misbehaving applications, Microsoft quickly took over the OS and GUI markets for theIBM PC. TheWindows APIbecame the de facto standard for consumer software.
On December 31, 2001, Microsoft declared Windows 3.1 obsolete and stopped providing support and updates for the system. However,OEMlicensing for Windows for Workgroups 3.11 onembedded systemscontinued to be available until November 1, 2008.[20]
Meanwhile, Microsoft continued to develop Windows NT. The main architect of the system wasDave Cutler, one of the chief architects ofVAX/VMSatDigital Equipment Corporation.[21]Microsoft hired him in October 1988 to create a successor to OS/2, but Cutler created a completely new system instead. Cutler had been developing a follow-on to VMS at DEC calledMICA, and when DEC dropped the project he brought the expertise and around 20 engineers with him to Microsoft.
Windows NT Workstation (Microsoft marketing wanted Windows NT to appear to be a continuation of Windows 3.1) arrived in Beta form to developers at the July 1992Professional Developers ConferenceinSan Francisco.[22]Microsoft announced at the conference its intentions to develop a successor to both Windows NT and Windows 3.1's replacement (Windows 95, codenamed Chicago), which would unify the two into one operating system. This successor was codenamedCairo. In hindsight, Cairo was a much more difficult project than Microsoft had anticipated and, as a result, NT and Chicago would not be unified untilWindows XP—albeitWindows 2000, oriented to business, had already unified most of the system's bolts and gears, it was XP that was sold to home consumers like Windows 95 and came to be viewed as the final unified OS. Parts of Cairo have still not made it into Windows as of 2025[update]: most notably, theWinFSfile system, which was the much touted Object File System of Cairo. Microsoft announced in 2006 that they would not make a separate release of WinFS for Windows XP and Windows Vista[23]and would gradually incorporate the technologies developed for WinFS in other products and technologies, notablyMicrosoft SQL Server.
Driver support was lacking due to the increased programming difficulty in dealing with NT's superior hardware abstraction model. This problem plagued the NT line all the way through Windows 2000. Programmers complained that it was too hard to write drivers for NT, and hardware developers were not going to go through the trouble of developing drivers for a small segment of the market. Additionally, although allowing for good performance and fuller exploitation of system resources, it was also resource-intensive on limited hardware, and thus was only suitable for larger, more expensive machines.
However, these same features made Windows NT perfect for theLANserver market (which in 1993 was experiencing a rapid boom, as office networking was becoming common). NT also had advanced network connectivity options andNTFS, an efficient file system. Windows NT version 3.51 was Microsoft's entry into this field, and took away market share from Novell (the dominant player) in the following years.
One of Microsoft's biggest advances initially developed for Windows NT was a new 32-bit API, to replace the legacy 16-bitWindows API. This API was calledWin32, and from then on Microsoft referred to the older 16-bit API asWin16. The Win32 API had three levels of implementation: the complete one for Windows NT, a subset for Chicago (originally calledWin32c) missing features primarily of interest to enterprise customers (at the time) such as security andUnicodesupport, and a more limited subset calledWin32swhich could be used on Windows 3.1 systems. Thus Microsoft sought to ensure some degree of compatibility between the Chicago design and Windows NT, even though the two systems had radically different internal architectures.
Windows NT was the first Windows operating system based on ahybrid kernel. The hybrid kernel was designed as a modifiedmicrokernel, influenced by theMach microkerneldeveloped byRichard Rashidat Carnegie Mellon University, but without meeting all of the criteria of a pure microkernel.
As released, Windows NT 3.x went through three versions (3.1, 3.5, and 3.51), changes were primarily internal and reflected back end changes. The 3.5 release added support for new types of hardware and improved performance and data reliability; the 3.51 release was primarily to update the Win32 APIs to be compatible with software being written for the Win32c APIs in what became Windows 95. Support for Windows NT 3.51 ended in 2001 and 2002 for the Workstation and Server editions, respectively.
AfterWindows 3.11, Microsoft began to develop a new consumer-oriented version of the operating system codenamed Chicago. Chicago was designed to have support for 32-bit preemptive multitasking like OS/2 and Windows NT, although a 16-bit kernel would remain for the sake of backward compatibility. The Win32APIfirst introduced with Windows NT was adopted as the standard 32-bit programming interface, with Win16 compatibility being preserved through a technique known as "thunking". A newobject-orientedGUI was not originally planned as part of the release, although elements of the Cairo user interface were borrowed and added as other aspects of the release (notably Plug and Play) slipped.
Microsoft did not change all of the Windows code to 32-bit; parts of it remained 16-bit (albeit not directly usingreal mode) for reasons of compatibility, performance, and development time. Additionally it was necessary to carry over design decisions from earlier versions of Windows for reasons of backwards compatibility, even if these design decisions no longer matched a more modern computing environment. These factors eventually began to impact the operating system's efficiency and stability.
Microsoft marketing adoptedWindows 95as the product name for Chicago when it was released on August 24, 1995. Microsoft had a double gain from its release: first, it made it impossible for consumers to run Windows 95 on a cheaper, non-Microsoft DOS, secondly, although traces of DOS were never completely removed from the system and MS DOS 7 would be loaded briefly as a part of thebootingprocess, Windows 95 applications ran solely in 386 enhanced mode, with a flat 32-bit address space andvirtual memory. These features make it possible for Win32 applications to address up to 2gigabytesof virtual RAM (with another 2 GB reserved for the operating system), and in theory prevented them from inadvertently corrupting the memory space of other Win32 applications. In this respect the functionality of Windows 95 moved closer toWindows NT, although Windows 95/98/Me did not support more than 512megabytesof physical RAM without obscure system tweaks. Three years after its introduction, Windows 95 was succeeded byWindows 98.
IBMcontinued to market OS/2, producing later versions in OS/2 3.0 and 4.0 (also called Warp). Responding to complaints about OS/2 2.0's high demands on computer hardware, version 3.0 was significantly optimized both for speed and size. Before Windows 95 was released, OS/2 Warp 3.0 was even shipped pre-installed with several large German hardware vendor chains. However, with the release of Windows 95, OS/2 began to lose market share.
It is probably impossible to choose one specific reason why OS/2 failed to gain much market share. While OS/2 continued to run Windows 3.1 applications, it lacked support for anything but theWin32ssubset of Win32 API (see above). Unlike with Windows 3.1, IBM did not have access to the source code for Windows 95 and was unwilling to commit the time and resources to emulate the moving target of the Win32 API. IBM later introduced OS/2 into theUnited States v. Microsoftcase, blaming unfair marketing tactics on Microsoft's part.
Microsoft went on to release five different versions of Windows 95:
OSR2, OSR2.1, and OSR2.5 were not released to the general public; rather, they were available only to OEMs that would preload the OS onto computers. Some companies sold new hard drives with OSR2 preinstalled (officially justifying this as needed due to the hard drive's capacity).
The firstMicrosoft Plus!add-on pack was sold for Windows 95. Microsoft ended extended support for Windows 95 on December 31, 2001.
Microsoft released the successor to NT 3.51,Windows NT 4.0, on August 24, 1996, one year after the release of Windows 95. It was Microsoft's primary business-oriented operating system until the introduction ofWindows 2000. Major new features included the new Explorer shell from Windows 95, scalability and feature improvements to the corearchitecture, kernel,USER32,COMandMSRPC.[24]
Windows NT 4.0 came in five versions:
Microsoft ended mainstream support for Windows NT 4.0 Workstation on June 30, 2002, and ended extended support on June 30, 2004, while Windows NT 4.0 Server mainstream support ended on December 31, 2002, and extended support ended on December 31, 2004. Both editions were succeeded byWindows 2000Professional and the Windows 2000 Server Family, respectively.[25][26][27]
Microsoft ended mainstream support for Windows NT 4.0 Embedded on June 30, 2003, and ended extended support on July 11, 2006. This edition was succeeded byWindows XP Embedded.
On June 25, 1998, Microsoft released Windows 98 (code-named Memphis), three years after the release of Windows 95, two years after the release of Windows NT 4.0, and 21 months before the release of Windows 2000. It included new hardware drivers and the FAT32 file system, which supports disk partitions larger than 2 GB (first introduced in Windows 95 OSR2). USB support in Windows 98 was marketed as a vast improvement over Windows 95. The release continued the controversial bundling of the Internet Explorer browser with the operating system that had started with Windows 95 OEM Service Release 1. The practice eventually led to the filing of the United States v. Microsoft case, dealing with the question of whether Microsoft was introducing unfair practices into the market in an effort to eliminate competition from other companies such as Netscape.[28]
In 1999, Microsoft released Windows 98 Second Edition, an interim release. One of the more notable new features was the addition ofInternet Connection Sharing, a form ofnetwork address translation, allowing several machines on a LAN (Local Area Network) to share a singleInternet connection. Hardware support through device drivers was increased and this version shipped with Internet Explorer 5. Many minor problems that existed in the first edition were fixed making it, according to many, the most stable release of theWindows 9xfamily.[29]
Mainstream support for Windows 98 and 98 SE ended on June 30, 2002. Extended support ended on July 11, 2006.
Microsoft released Windows 2000 on February 17, 2000, as the successor to Windows NT 4.0, 17 months after the release of Windows 98. It carries the version number Windows NT 5.0 and was Microsoft's business-oriented operating system from its release until 2001, when it was succeeded by Windows XP. Windows 2000 received four official service packs. It was successfully deployed in both the server and the workstation markets. Amongst Windows 2000's most significant new features was Active Directory, a near-complete replacement of the NT 4.0 Windows Server domain model, which built on industry-standard technologies like DNS, LDAP, and Kerberos to connect machines to one another. Terminal Services, previously only available as a separate edition of NT 4, was expanded to all server versions. A number of features from Windows 98 were also incorporated, such as an improved Device Manager, Windows Media Player, and a revised DirectX that made it possible for the first time for many modern games to work on the NT kernel. Windows 2000 is also the last NT-kernel Windows operating system to lack product activation.
While Windows 2000 upgrades were available for Windows 95 and Windows 98, it was not intended for home users.[30]
Windows 2000 was available in four editions:
Microsoft ended support for both Windows 2000 andWindows XP Service Pack 2on July 13, 2010.
On September 14, 2000, Microsoft released a successor to Windows 98 calledWindows Me, short for "Millennium Edition". It was the last DOS-based operating system from Microsoft. Windows Me introduced a new multimedia-editing application calledWindows Movie Maker, came standard with Internet Explorer 5.5 andWindows Media Player 7, and debuted the first version ofSystem Restore– a recovery utility that enables the operating system to revert system files back to a prior date and time. System Restore was a notable feature that would continue to thrive in all later versions of Windows.
Windows Me was conceived as a quick one-year project that served as a stopgap release between Windows 98 and Windows XP. Many of the new features were available from theWindows Update siteas updates for older Windows versions (System RestoreandWindows Movie Makerwere exceptions). Windows Me was criticized for stability issues, as well as for lackingreal modeDOS support, to the point of being referred to as the "Mistake Edition".[31]Windows Me was the last operating system to be based on the Windows 9x (monolithic) kernel andMS-DOS, with its successorWindows XPbeing based on Microsoft'sWindows NT kernelinstead.
On October 25, 2001, Microsoft released Windows XP (codenamed "Whistler"). The merging of the Windows NT/2000 and Windows 95/98/Me lines was finally achieved with Windows XP. Windows XP uses the Windows NT 5.1 kernel, marking the entrance of the Windows NT core to the consumer market to replace the aging Windows 9x branch. The initial release was met with considerable criticism, particularly in the area of security, leading to the release of three major Service Packs. Windows XP SP1 was released in September 2002, SP2 in August 2004 and SP3 in April 2008. Service Pack 2 provided significant improvements and encouraged widespread adoption of XP among both home and business users. Windows XP was one of Microsoft's longest-running flagship operating systems, serving in that role for more than five years from its public release on October 25, 2001, until January 30, 2007, when it was succeeded by Windows Vista.
Windows XP is available in a number of versions:
On April 25, 2003, Microsoft launched Windows Server 2003, a notable update to Windows 2000 Server encompassing many new security features, a new "Manage Your Server" wizard that simplifies configuring a machine for specific roles, and improved performance. It is based on the Windows NT 5.2 kernel. A few services not essential for server environments are disabled by default for stability reasons, most noticeably the "Windows Audio" and "Themes" services; users have to enable them manually to get sound or the "Luna" look of Windows XP. Hardware acceleration for display is also turned off by default; users have to raise the acceleration level themselves if they trust the display card driver.
In December 2005, Microsoft released Windows Server 2003 R2, which is actually Windows Server 2003 with SP1 (Service Pack1), together with anadd-onpackage.
Among the newfeaturesare a number of management features for branch offices, file serving, printing and company-wide identity integration.
Windows Server 2003 is available in six editions:
Windows Server 2003 R2, an update of Windows Server 2003, was released to manufacturing on December 6, 2005. It is distributed on two CDs, with one CD being the Windows Server 2003 SP1 CD. The other CD adds many optionally installable features for Windows Server 2003. The R2 update was released for all x86 and x64 versions, except Windows Server 2003 R2 Enterprise Edition, which was not released for Itanium.
On April 25, 2005, Microsoft released Windows XP Professional x64 Edition and Windows Server 2003, x64 Editions in Standard, Enterprise and Datacenter SKUs. Windows XP Professional x64 Edition is an edition ofWindows XPforx86-64personal computers. It is designed to use the expanded 64-bit memory address space provided by the x86–64 architecture.[32]
Windows XP Professional x64 Edition is based on theWindows Server 2003codebase, with the server features removed and client features added. BothWindows Server 2003 x64and Windows XP Professional x64 Edition use identical kernels.[33]
Windows XPProfessionalx64 Editionis not to be confused withWindows XP64-bit Edition, as the latter was designed forIntelItaniumprocessors.[34][35]During the initial development phases, Windows XP Professional x64 Edition was namedWindows XP 64-Bit Edition for 64-Bit Extended Systems.[36]
In July 2006, Microsoft released a thin-client version of Windows XP Service Pack 2, called Windows Fundamentals for Legacy PCs (WinFLP). It is only available to Software Assurance customers. The aim of WinFLP is to give companies a viable upgrade option for older PCs running Windows 95, 98, and Me, one that will be supported with patches and updates for the next several years. Most user applications will typically be run on a remote machine using Terminal Services or Citrix.
While visually the same as Windows XP, it has some differences. For example, if the screen has been set to 16-bit colors, the Windows 2000 recycle bin icon and some 16-bit XP icons will show. Paint and some games such as Solitaire are also absent.
Windows Home Server (code-named Q, Quattro) is a server product based onWindows Server 2003, designed for consumer use. The system was announced on January 7, 2007, byBill Gates. Windows Home Server can be configured and monitored using a console program that can be installed on a client PC. Such features as Media Sharing, local and remote drive backup and file duplication are all listed as features. The release of Windows Home Server Power Pack 3 added support forWindows 7to Windows Home Server.
Windows Vista was released on November 30, 2006, to business customers; consumer versions followed on January 30, 2007. Windows Vista was intended to have enhanced security by introducing a new restricted user mode called User Account Control, replacing the "administrator-by-default" philosophy of Windows XP. Vista was the target of much criticism and negative press and in general was not well regarded; this was seen as leading to the relatively swift release of Windows 7.
One major difference between Vista and earlier versions of Windows (Windows 95 and later) was that the original start button was replaced with a Windows icon in a circle (called the Start Orb). Vista also featured new graphics features, the Windows Aero GUI, new applications (such as Windows Calendar, Windows DVD Maker and some new games including Chess, Mahjong, and Purble Place),[37] Internet Explorer 7, Windows Media Player 11, and a large number of underlying architectural changes. Windows Vista had the version number NT 6.0. During its lifetime, Windows Vista received two service packs.
Windows Vista shipped insix editions:[38]
All editions (except the Starter edition) were available in both 32-bit and 64-bit versions. The biggest advantage of the 64-bit version was breaking the 4-gigabyte memory barrier, beyond which 32-bit computers cannot fully address memory.
Windows Server 2008, released on February 27, 2008, was originally known as Windows Server Codename "Longhorn". Windows Server 2008 built on the technological and security advances first introduced with Windows Vista, and was significantly more modular than its predecessor, Windows Server 2003.
Windows Server 2008 shipped in ten editions:
Windows 7 was released to manufacturing on July 22, 2009, and reached general retail availability on October 22, 2009.[39][40]Since its release, Windows 7 had one service pack.
Some features of Windows 7 were faster booting, Device Stage, Windows PowerShell, less obtrusive User Account Control, multi-touch, and improved window management. The interface was renewed with a bigger taskbar and some improvements to the search system and the Start menu.[41] Features included in Windows Vista but absent from Windows 7 include the sidebar (although gadgets remain) and several programs that were removed in favor of downloading their Windows Live counterparts. Windows 7 met with positive reviews, which said the OS was faster and easier to use than Windows Vista.
Windows 7 shipped insix editions:[42]
In some countries in theEuropean Union, there were other editions that lacked some features such as Windows Media Player, Windows Media Center and Internet Explorer—these editions were called names such as "Windows 7 N."
Microsoft focused on selling Windows 7 Home Premium and Professional. All editions, except the Starter edition, were available in both 32-bit and 64-bit versions.
Unlike the corresponding Vista editions, the Professional and Enterprise editions were supersets of the Home Premium edition.
At theProfessional Developers Conference(PDC) 2008, Microsoft also announcedWindows Server 2008 R2, as the server variant ofWindows 7. Windows Server 2008 R2 shipped in 64-bit versions (x64andItanium) only.
In 2010, Microsoft released Windows Thin PC or WinTPC, a feature- and size-reduced, locked-down version of Windows 7 expressly designed to turn older PCs into thin clients. WinTPC was available to Software Assurance customers and relied on cloud computing in a business network. Wireless operation is supported, since WinTPC has full wireless stack integration, although it may not perform as well as a wired connection.[43][44]
Windows Home Server 2011, code named 'Vail',[45] was released on April 6, 2011.[46] Windows Home Server 2011 is built on the Windows Server 2008 R2 code base and removed the Drive Extender drive pooling technology found in the original Windows Home Server release.[47] Windows Home Server 2011 is considered a "major release".[45] Its predecessor was built on Windows Server 2003. WHS 2011 only supports x86-64 hardware.
Microsoft decided to discontinue Windows Home Server 2011 on July 5, 2012, while including its features into Windows Server 2012 Essentials.[48]Windows Home Server 2011 was supported until April 12, 2016.[49]
On June 1, 2011, Microsoft previewed Windows 8 at bothComputex Taipeiand theD9: All Things Digitalconference in California.[50][51]The first public preview of Windows Server 2012 was shown by Microsoft at the 2011 Microsoft Worldwide Partner Conference.[52]Windows 8 Release Preview and Windows Server 2012 Release Candidate were both released on May 31, 2012.[53]Product development on Windows 8 was completed on August 1, 2012, and it was released to manufacturing the same day.[54]Windows Server 2012 went on sale to the public on September 4, 2012. Windows 8 went on sale to the public on October 26, 2012. One edition,Windows RT, runs on some system-on-a-chip devices with mobile32-bitARM (ARMv7) processors. Windows 8 features a redesigned user interface, designed to make it easier for touchscreen users to use Windows. The interface introduced an updated Start menu known as the Start screen, and a new full-screen application platform. The desktop interface is also present for running windowed applications, although Windows RT will not run any desktop applications not included in the system. On the Building Windows 8 blog, it was announced that a computer running Windows 8 can boot up much faster than Windows 7.[55]New features also includeUSB 3.0support, theWindows Store, the ability to run from USB drives withWindows To Go, and others.
Windows 8 is available in the following editions:
Microsoft ended support forWindows 8on January 12, 2016.
Windows 8.1 and Windows Server 2012 R2 were released on October 17, 2013. Windows 8.1 is available as an update in the Windows Store for Windows 8 users only and is also available to download for clean installation.[56] The update adds new options for resizing the live tiles on the Start screen.[57] Windows 8 was given the kernel number NT 6.2, with its successor 8.1 receiving the kernel number 6.3. Neither had any service packs, although many consider Windows 8.1 to be a service pack for Windows 8. However, Windows 8.1 received two main updates in 2014.[58] Both versions received some criticism due to the removal of the Start menu and difficulties in performing some tasks and commands.
For users not already running Windows 8, Windows 8.1 is available in the same editions as its predecessor.
Microsoft ended support for Windows 8.1 on January 10, 2023.
Windows 10 was unveiled on September 30, 2014, as the successor to Windows 8, and was released on July 29, 2015.[59] It was distributed without charge to Windows 7 and 8.1 users for one year after release. A number of new features debuted in Windows 10, including Cortana, the Microsoft Edge web browser, the ability to view Windows Store apps in a window instead of fullscreen, the return of the Start menu, virtual desktops, revamped core apps, Continuum, and a unified Settings app. Like its successor, the operating system was announced as a service OS that would receive constant performance and stability updates. Unlike Windows 8, Windows 10 received mostly positive reviews, which praised its improved stability and practicality over its predecessor; however, it drew some criticism for mandatory update installation, privacy concerns and advertising-supported software tactics.
Although Microsoft claimed Windows 10 would be the last Windows version, a new major release, Windows 11, was eventually announced in 2021. That made Windows 10 Microsoft's longest-serving flagship operating system, from its public release on July 29, 2015, until October 5, 2021, when Windows 11 was released, a span of six years. Windows 10 received thirteen main updates.
Windows Server 2016is a release of the Microsoft Windows Server operating system that was unveiled on September 30, 2014. Windows Server 2016 was officially released at Microsoft'sIgniteConference, September 26–30, 2016.[67]It is based on the Windows 10 Anniversary Update codebase.
Windows Server 2019is a release of the Microsoft Windows Server operating system that was announced on March 20, 2018. The firstWindows Insiderpreview version was released on the same day. It was released for general availability on October 2, 2018. Windows Server 2019 is based on the Windows 10 October 2018 Update codebase.
On October 6, 2018, distribution of Windows version 1809 (build 17763) was paused while Microsoft investigated an issue with user data being deleted during an in-place upgrade. It affected systems where a user profile folder (e.g. Documents, Music or Pictures) had been moved to another location, but data was left in the original location. As Windows Server 2019 is based on the Windows version 1809 codebase, it too was removed from distribution at the time, but was re-released on November 13, 2018. Thesoftware product life cyclefor Server 2019 was reset in accordance with the new release date.
Windows Server 2022 was released on August 18, 2021. It is the first NT server version that does not share its build number with any client counterpart, although its version designation is 21H2, the same as the Windows 10 November 2021 Update.
Windows 11 is the latest release of Windows NT, and the successor to Windows 10. It was unveiled on June 24, 2021, and was released on October 5, 2021,[68] serving as a free upgrade to compatible Windows 10 devices. The system incorporates a renewed interface called "Mica", which includes translucent backgrounds, rounded edges and color combinations. The taskbar's icons are center-aligned by default, while the Start menu replaces the "Live Tiles" with pinned apps and recommended apps and files. The MSN widget panel, the Microsoft Store, and the file browser, among other applications, have also been redesigned. However, some features and programs such as Cortana, Internet Explorer (replaced by Microsoft Edge as the default web browser) and Paint 3D were removed. Apps like 3D Viewer, Paint 3D, Skype and OneNote for Windows 10 can be downloaded from the Microsoft Store.[69] Beginning in 2021, Windows 11 included compatibility with Android applications through the Amazon Appstore running on the Windows Subsystem for Android; however, Microsoft has announced that support for Android apps will end in March 2025. Windows 11 received a positive reception from critics. While it was praised for its redesigned interface and increased security and productivity, it was criticized for its high system requirements (which include an installed TPM 2.0 chip, enabling the Secure Boot protocol, and UEFI firmware) and for various UI changes and regressions (such as requiring a Microsoft account for first-time setup, preventing users from changing default browsers, and an inconsistent dark theme) compared to Windows 10.[70][71][72]
Windows Server 2025 follows Windows Server 2022 and was released on November 1, 2024. It is graphically based on Windows 11 and includes features such as hotpatching, among others.
|
https://en.wikipedia.org/wiki/History_of_Microsoft_Windows
|
fdiskis acommand-line utilityfordisk partitioning. It has been part ofDOS,DRFlexOS,IBMOS/2, and early versions ofMicrosoft Windows, as well as certain ports ofFreeBSD,[2]NetBSD,[3]OpenBSD,[4]DragonFly BSD[5]andmacOS[6]for compatibility reasons.Windows 2000and its successors have replaced fdisk with a more advanced tool calleddiskpart.
IBMintroduced the first version of fdisk (officially dubbed "Fixed Disk Setup Program") in March 1983, with the release of theIBM PC/XTcomputer (the first PC to store data on ahard disk) and theIBM PC DOS2.0 operating system. fdisk version 1.0 can create oneFAT12partition, delete it, change theactive partition, or display partition data. fdisk writes themaster boot record, which supports up to four partitions. The other three were intended for other operating systems such asCP/M-86andXenix, which were expected to have their own partitioning utilities.
Microsoft first added fdisk toMS-DOSin version 3.2.[7]MS-DOS versions 2.0 through 3.10 included OEM-specific partitioning tools, which may have been named fdisk.
PC DOS 3.0, released in August 1984, added support forFAT16partitions to handle larger hard disks more efficiently. PC DOS 3.30, released in April 1987, added support forextended partitions. (These partitions do not store data directly but can contain up to 23logical drives.) In both cases, fdisk was modified to work with FAT16 and extended partitions. Support forFAT16Bwas first added to Compaq's fdisk in MS-DOS 3.31. FAT16B later became available with MS-DOS and PC DOS 4.0.
The undocumented/mbrswitch in fdisk, which could repair themaster boot record, soon became popular.
IBM PC DOS 7.10 shipped with the new fdisk32 utility.
ROM-DOS,[8] DR DOS 6.0,[9] FlexOS,[10] PTS-DOS 2000 Pro,[11] and FreeDOS[12] include an implementation of the fdisk command.
Windows 95, Windows 98, and Windows ME shipped with a derivative of the MS-DOS fdisk. Windows 2000 and its successors, however, came with the more advanced diskpart and the graphical Disk Management utilities.
Starting with Windows 95 OSR2, fdisk supports theFAT32file system.[13]
The version of fdisk that ships with Windows 95 does not report the correct size of a hard disk that is larger than 64 GB. An updated fdisk is available from Microsoft to correct this issue.[14]In addition, fdisk cannot create partitions larger than 512 GB, even though FAT32 supports partitions as big as 2 TB. This limitation applies to all versions of fdisk supplied with Windows 95 OSR 2.1, Windows 98 and Windows ME.
Before version 4.0,OS/2shipped with two partition table managers. These were thetext modefdisk[15]and thegraphicalfdiskpm.[16]The two have identical functionality, and can manipulate both FAT partitions and the more advancedHPFSpartitions.
OS/2 4.5 and higher (includingeComStationandArcaOS) can use theJFSfile system, as well as FAT and HPFS. They replaced fdisk with theLogical Volume Manager(LVM).
fdisk forMach Operating Systemwas written by Robert Baron. It was ported to386BSDby Julian Elischer,[17]and the implementation is being used byFreeBSD,[2]NetBSD[3]andDragonFly BSD,[5]all as of 2019, as well as the early versions ofOpenBSDbetween 1995 and 1997 before OpenBSD 2.2.[1]
Tobias Weingartner re-wrote fdisk in 1997 before OpenBSD 2.2,[4] which was subsequently forked by Apple Computer, Inc. in 2002, and is still used as the basis for fdisk on macOS as of 2019.[6]
For native partitions, BSD systems traditionally useBSD disklabel, and fdisk partitioning is supported only on certain architectures (for compatibility reasons) and only in addition to the BSD disklabel (which is mandatory).
In Linux, fdisk is a part of a standard package distributed by the Linux Kernel organization,util-linux. The original program was written by Andries E. Brouwer and A. V. Le Blanc and was later rewritten by Karel Zak and Davidlohr Bueso when they forked the util-linux package in 2006. An alternative,ncurses-based program,cfdisk, allows users to create partition layouts via atext-based user interface(TUI).[18]
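For illustration, typical invocations might look like the following (device names such as /dev/sda are placeholders, and options can vary slightly between util-linux versions):

    # list the partition tables of all detected disks (requires root)
    fdisk -l
    # interactively edit the partition table of a specific disk
    fdisk /dev/sda
    # perform the same task with the menu-driven, ncurses-based TUI
    cfdisk /dev/sda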
|
https://en.wikipedia.org/wiki/FDISK
|
TheCXFS file system(ClusteredXFS) is aproprietaryshared disk file systemdesigned bySilicon Graphics(SGI) specifically to be used in astorage area network(SAN) environment.
A significant difference between CXFS and other shared disk file systems is that data andmetadataare managed separately from each other. CXFS provides direct access to data via the SAN for all hosts which will act as clients. This means that a client is able to access file data via the fiber connection to the SAN, rather than over alocal area networksuch asEthernet(as is the case in most other distributed file systems, likeNFS). File metadata however, is managed via ametadata broker. The metadata communication is performed via TCP/IP and Ethernet.
Another difference is that file locks are managed by the metadata broker, rather than the individual host clients. This results in the elimination of a number of problems which typically plague distributed file systems.
Though CXFS supports having a heterogeneous environment (includingSolaris,Linux,Mac OS X,AIXandWindows), either SGI'sIRIXOperating System orLinuxis required to be installed on the host which acts as the metadata broker.
|
https://en.wikipedia.org/wiki/CXFS
|
Stratis is a user-space configuration daemon that configures and monitors existing components of Linux's underlying storage stack, logical volume management (LVM) and the XFS filesystem, via D-Bus.
Stratis is not a user-level filesystem like the Filesystem in Userspace (FUSE) system. The Stratis configuration daemon was originally developed by Red Hat to have feature parity with ZFS and Btrfs. The hope was that, because the Stratis configuration daemon runs in userland, it would reach maturity more quickly than the years of kernel-level development required by the ZFS and Btrfs file systems.[2][3] It is built upon the enterprise-tested components LVM and XFS, with over a decade of enterprise deployments, and on the lessons learned from System Storage Manager in Red Hat Enterprise Linux 7.[4]
Stratis provides ZFS/Btrfs-style features by integrating layers of existing technology: Linux'sdevice mappersubsystem, and the XFS filesystem. Thestratisddaemon manages collections of block devices, and provides a D-BusAPI. Thestratis-cliDNFpackageprovides a command-line toolstratis, which itself uses the D-Bus API to communicate withstratisd.
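For illustration, a minimal session might resemble the following sketch; the pool, filesystem, and device names are placeholders, and the exact syntax can differ between stratis-cli versions:

    # create a pool named "pool1" backed by a single block device
    stratis pool create pool1 /dev/sdb
    # create an XFS-backed filesystem within that pool
    stratis filesystem create pool1 fs1
    # inspect the resulting objects
    stratis pool list
    stratis filesystem list

Each subcommand is translated by stratis-cli into D-Bus calls that stratisd then carries out against the underlying storage layers.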
|
https://en.wikipedia.org/wiki/Stratis_(configuration_daemon)
|
A versioning file system is any computer file system which allows a computer file to exist in several versions at the same time. Thus it is a form of revision control. Most common versioning file systems keep a number of old copies of the file. Some limit the number of changes per minute or per hour to avoid storing large numbers of trivial changes. Others instead take periodic snapshots whose contents can be accessed using methods similar to those for normal file access.
A versioning file system is similar to a periodicbackup, with several key differences.
Versioning file systems provide some of the features ofrevision control systems. However, unlike most revision control systems, they are transparent to users, not requiring a separate "commit" step to record a new revision.
Versioning file systems should not be confused withjournaling file systems. Whereasjournaling file systemswork by keeping a log of the changes made to a file before committing those changes to that file system (and overwriting the prior version), a versioning file system keeps previous copies of a file when saving new changes. The two features serve different purposes and are not mutually exclusive.
Some object storage implementations, such as Amazon S3, offer object versioning.
An early implementation of versioning, possibly the first, was in MIT'sITS. In ITS, a filename consisted of two six-character parts; if the second part was numeric (consisted only of digits), it was treated as a version number. When specifying a file to open for read or write, one could supply a second part of ">"; when reading, this meant to open the highest-numbered version of the file; when writing, it meant to increment the highest existing version number and create the new version for writing.
Another early implementation of versioning was inTENEX, which becameTOPS-20.[1]
A powerful example of a file versioning system is built into the RSX-11 and OpenVMS operating systems from Digital Equipment Corporation. In essence, whenever an application opens a file for writing, the file system automatically creates a new instance of the file, with a version number appended to the name. Version numbers start at 1 and count upward as new instances of a file are created. When an application opens a file for reading, it can either specify the exact file name including version number, or just the file name without the version number, in which case the most recent instance of the file is opened.
The "purge"DCL/CCLcommand can be used at any time to manage the number of versions in a specific directory. By default, all but the highest numbered versions of all files in the current directory will be deleted; this behavior can be overridden with the /keep=n switch and/or by specifying directory path(s) and/or filename patterns. VMS systems are often scripted to purge user directories on a regular schedule; this is sometimes misconstrued by end-users as a property of the versioning system.
On February 8, 2004, Kiran-Kumar Muniswamy-Reddy, Charles P. Wright, Andrew Himmer, and Erez Zadok (all fromStony Brook University) proposed a stackable file system Versionfs, providing a versioning layer on top of any other Linux file systems.[3]
The Lisp Machine File System supports versioning. This was provided by implementations from MIT, LMI, Symbolics and Texas Instruments. Such an operating system wasSymbolics Genera.
Starting withLion(10.7),macOShas a feature calledVersionswhich allowsTime Machine-like saving and browsing of past versions of documents for applications written to use Versions. This functionality, however, takes place at the application layer, not the filesystem layer;[4]Lion and later releases do not incorporate a true versioning file system.
HTFS, adopted as the primary filesystem forSCO OpenServerin 1995, supports file versioning. Versioning is enabled on a per-directory basis by setting the directory's setuid bit, which is inherited when subdirectories are created. If versioning is enabled, a new file version is created when a file or directory is removed, or when an existing file is opened with truncation. Non-current versions remain in the filesystem namespace, under the name of the original file but with a suffix attached consisting of a semicolon and version sequence number. All but the current version are hidden from directory reads (unless the SHOWVERSIONS environment variable is set), but versions are otherwise accessible for all normal operations. The environment variable and general accessibility allow versions to be managed with the usual filesystem utilities, though there is also an "undelete" command that can be used to purge and restore files, enable and disable versioning on directories, etc.
The following are not versioning filesystems, but allow similar functionality.
|
https://en.wikipedia.org/wiki/Versioning_file_system
|
Double boot(also known ascold double boot,double cold boot,double POST,power-onauto reboot, orfake boot) is a feature of theBIOS, and may occur after changes to the BIOS' settings or thesystem's configuration, or apower failurewhile the system was in one of certainsleep modes.
Changing certain parameters in the BIOS causes this behavior, even for settings as simple as the current CPU and memory clocks; such changes require a reboot to take effect. If the computer had lost all power and has just been plugged back in, those parameters have to be applied again, and because applying them requires a reboot, the computer performs a quick reset to put the values set in the BIOS into effect.[1] As long as the power supply keeps receiving power, the parameters do not need to be re-applied, even after the computer is turned off.
In a double boot, thePCwill power on for about two seconds, off for about a second, turn back on, display thePOSTscreen, and then continue to boot up normally.
|
https://en.wikipedia.org/wiki/Double_boot
|
TheExtended System Configuration Data(ESCD) is a specification for configuringx86computers of theISA PNPera. The specification was developed byCompaq,IntelandPhoenix Technologies. It consists of a method for storing configuration information innonvolatile BIOS memoryand threeBIOSfunctions for working with that data.[1][2]
The ESCD data may at one time have been stored in the latter portion of the 128 byte extended bank of battery-backed CMOS RAM but eventually it became too large and so was moved to BIOS flash.[3][4]
Information about ISA PnP devices is stored in it. It is used by the BIOS to allocate resources for devices like expansion cards. The ESCD data is stored using the data serialization format used for EISA, and starts with the "ACFG" signature in ASCII. PCI configuration can also be stored in ESCD, using virtual slots.[5] Typical storage usage for ESCD data is 2–4 KB.
The BIOS also updates the ESCD each time thehardwareconfiguration changes, after deciding how to re-allocate resources likeIRQandmemory mappingranges. After the ESCD has been updated, the decision need not be made again, which thereafter results in faster startup without conflicts until the next hardware configuration change.
|
https://en.wikipedia.org/wiki/Extended_System_Configuration_Data
|
Input/Output Control System(IOCS) is any of several packages on earlyIBMentry-level andmainframecomputers that providedlow levelaccess torecordson peripheral equipment. IOCS provides functionality similar to 1960s packages from other vendors, e.g.,File Control Processor(FCP)[1]in RCA 3301 Realcom Operating System,GEFRC[2]inGECOS, and to the laterRecord Management Services[3](RMS) inDECVAX/VMS(laterOpenVMS.)
Computers in the 1950s and 1960s typically dealt with data that were organized into records either by the nature of the media, e.g., lines of print, or by application requirements. IOCS was intended to allowAssembler languageprogrammers to read and write records without having to worry about the details of the various devices or the blocking of logical records into physical records. IOCS provided the run time I/O support for several compilers.
Computers of this era often did not haveoperating systemsin the modern sense. Application programs called IOCS routines in aresident monitor, or included macro instructions that expanded to IOCS routines.
In some cases[4]IOCS was designed to coexist withSimultaneous Peripheral Operations On-line(SPOOL)[5]software.
The level of access is higher than that provided by BIOS and BDOS in the PC world; in fact, IOCS has no support for character-oriented I/O, primarily because the systems for which it was designed didn't support it. Versions of IOCS existed for the IBM 705 III,[6] 1401/1440/1460, 1410/7010, 7070/7072/7074,[7][8][9] 7080[10] and 7040/7044/7090/7094.[11] These systems heavily influenced the data management components of the operating systems[12] for the System/360; the name IOCS was carried through in DOS/360 through z/VSE,[13] with a distinction between Logical IOCS (LIOCS)[14] and Physical IOCS (PIOCS).[14]
Although some technical details and nomenclature are different among the various IOCS packages, the fundamental concepts are the same. For concreteness, the discussion and examples in this article will mostly be in terms of 7070 IOCS.[7][8]Also, multiple continuation lines will be shown as ellipses (...) when they don't serve to illustrate the narrative.
An IOCS program must do three things, each discussed in a subsection below.
For the 7070 these are done using 7070Autocoder[15][16]declarative statements andmacro instructions.
IOCS supported several classes of I/O equipment
Some services offered by IOCS were not needed by all applications, e.g., checkpoints, label processing. An IOCS program must identify the particular devices types and services it uses. A 7070 IOCS program must specify one or more DIOCS[7]: 16–19[15]: 22–25statements:[b]
These declarative statements identify index registers reserved for the use of IOCS, indicate channels used, indicate whether the program is to coexist withSPOOLand provide processing options. The END DIOCS statement causes the assembly of IOCS unless a preassembled version is requested. The first (general) form is omitted when the D729 form is used.
In some other IOCS packages similar functions are provided by control cards.
An IOCS program must create a control block for each file, specifying information unique to the file. For 7070 IOCS these are entries in theFile Specification Tablefor tape files, each of which is generated by a DTF[7]: 19–26[15]: 26–28statement, or separate control blocks generated by DDF[8]: 31–37[15]: 29–30or DUF[7]: 44–47[15]: 31–33statements.
In some other IOCS packages similar functions are provided by control cards.
For example, such file-definition statements could describe a tape file on channel 1 called OUT, a sequential 1301/1302 disk file called DAFILE, and a card file called CONSFILE.
Any IOCS program must specify the actions that it wishes to perform. In 7070 IOCS this is done with processing macros.[b]
In some other IOCS packages similar functions are provided by explicit subroutine calls.
|
https://en.wikipedia.org/wiki/Input/Output_Control_System
|
Advanced Configuration and Power Interface(ACPI) is anopen standardthatoperating systemscan use to discover and configurecomputer hardwarecomponents, to performpower management(e.g. putting unused hardware components to sleep), auto configuration (e.g.Plug and Playandhot swapping), and status monitoring. It was first released in December 1996. ACPI aims to replaceAdvanced Power Management(APM), theMultiProcessor Specification, and thePlug and Play BIOS(PnP) Specification.[1]ACPI brings power management under the control of the operating system, as opposed to the previous BIOS-centric system that relied on platform-specific firmware to determine power management and configuration policies.[2]The specification is central to theOperating System-directed configuration and Power Management(OSPM) system. ACPI defineshardware abstractioninterfaces between the device's firmware (e.g.BIOS,UEFI), thecomputer hardwarecomponents, and theoperating systems.[3][4]
Internally, ACPI advertises the available components and their functions to theoperating system kernelusing instruction lists ("methods") provided through the systemfirmware(UEFIorBIOS), which the kernel parses. ACPI then executes the desired operations written inACPI Machine Language(such as the initialization of hardware components) using an embedded minimalvirtual machine.
Intel, Microsoft and Toshiba originally developed the standard, while HP, Huawei and Phoenix also participated later. In October 2013, the ACPI Special Interest Group (ACPI SIG), the original developers of the ACPI standard, agreed to transfer all assets to the UEFI Forum, in which all future development will take place.[5] The latest version of the standard, 6.5, was released in August 2022.[6]
The firmware-level ACPI has three main components: the ACPI tables, the ACPI BIOS, and the ACPI registers. The ACPI BIOS generates ACPI tables and loads ACPI tables intomain memory. Much of the firmware ACPI functionality is provided inbytecodeofACPI Machine Language(AML), aTuring-complete,domain-specificlow-level language, stored in the ACPI tables.[7]To make use of the ACPI tables, the operating system must have aninterpreterfor the AML bytecode. A reference AML interpreter implementation is provided by the ACPI Component Architecture (ACPICA). At the BIOS development time, AML bytecode is compiled from the ASL (ACPI Source Language) code.[8][9]
TheACPI Component Architecture(ACPICA), mainly written by Intel's engineers, provides anopen-sourceplatform-independent reference implementation of the operating system–related ACPI code.[10]The ACPICA code is used by Linux,Haiku,ArcaOS[11]andFreeBSD,[8]which supplement it with their operating-system specific code.
The first revision of the ACPI specification was released in December 1996, supporting 16, 24 and32-bitaddressing spaces. It was not until August 2000 that ACPI received64-bitaddress support as well as support for multiprocessor workstations and servers with revision 2.0.
In 1999, thenMicrosoftCEOBill Gatesstated in an e-mail thatLinuxwould benefit from ACPI without them having to do work and suggested to make it Windows-only.[12][13][14]
In September 2004, revision 3.0 was released, bringing to the ACPI specification support forSATAinterfaces,PCI Expressbus,multiprocessorsupport for more than 256 processors,ambient light sensorsand user-presence devices, as well as extending the thermal model beyond the previous processor-centric support.
Released in June 2009, revision 4.0 of the ACPI specification added various new features to the design; most notable are theUSB 3.0support, logical processor idling support, andx2APICsupport.
Initially ACPI was exclusive tox86architecture; Revision 5.0 of the ACPI specification was released in December 2011,[15]which added theARM architecturesupport. The revision 5.1 was released in July 2014.[16]
The latest specification revision is 6.5, which was released in August 2022.[6]
Microsoft'sWindows 98was the first operating system to implement ACPI,[17][18]but its implementation was somewhat buggy or incomplete,[19][20]although some of the problems associated with it were caused by the first-generation ACPI hardware.[21]Other operating systems, including later versions ofWindows,macOS(x86 macOS only),eComStation,ArcaOS,[22]FreeBSD(since FreeBSD 5.0[23]),NetBSD(since NetBSD 1.6[24]),OpenBSD(since OpenBSD 3.8[25]),HP-UX,OpenVMS,Linux,GNU/HurdandPCversions ofSolaris, have at least some support for ACPI.[26]Some newer operating systems, likeWindows Vista, require the computer to have an ACPI-compliant BIOS, and sinceWindows 8, theS0ix/Modern Standbystate was implemented.[27]
Windows operating systems use acpi.sys[28]to access ACPI events.
The 2.4 series of the Linux kernel had only minimal support for ACPI, with better support implemented (and enabled by default) from kernel version 2.6.0 onwards.[29]Old ACPI BIOS implementations tend to be quite buggy, and consequently are not supported by later operating systems. For example,Windows 2000,Windows XP, andWindows Server 2003only use ACPI if the BIOS date is after January 1, 1999.[30]Similarly, Linux kernel 2.6 may not use ACPI if the BIOS date is before January 1, 2001.[29]
Linux-based operating systems can provide handling of ACPI events via acpid.[31]
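For illustration, acpid is typically driven by small event/action rule files similar to the following sketch; the path and rule shown are examples rather than a distribution default:

    # /etc/acpi/events/powerbtn (example rule file)
    # run the action whenever a power-button event is reported
    event=button/power.*
    action=/sbin/shutdown -h now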
Once an OSPM-compatible operating system activates ACPI, it takes exclusive control of all aspects of power management and device configuration. The OSPM implementation must expose an ACPI-compatible environment to device drivers, which exposes certain system, device and processor states.
The ACPI Specification defines the following four global "Gx" states and six sleep "Sx" states for an ACPI-compliant computer system:[32][33]
The specification also defines aLegacystate: the state of an operating system which does not support ACPI. In this state, the hardware and power are not managed via ACPI, effectively disabling ACPI.
The device statesD0–D3are device dependent:
The CPU power statesC0–C3are defined as follows:
While a device or processor operates (D0 and C0, respectively), it can be in one of several power-performance states. These states are implementation-dependent. P0 is always the highest-performance state, with P1 to Pn being successively lower-performance states. The total number of states is device or processor dependent, but can be no greater than 16.[41]
P-states have become known asSpeedStepinIntelprocessors, asPowerNow!orCool'n'QuietinAMDprocessors, and asPowerSaverinVIAprocessors.
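On Linux, the P-state policy currently in effect can usually be inspected through the cpufreq sysfs interface; which files are present depends on the scaling driver in use, for example:

    # scaling driver and governor for CPU 0
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    # frequency range (in kHz) the governor is allowed to choose from
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq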
ACPI-compliant systems interact with hardware through either a "Function Fixed Hardware (FFH) Interface", or a platform-independent hardware programming model which relies on platform-specific ACPI Machine Language (AML) provided by theoriginal equipment manufacturer(OEM).
Function Fixed Hardware interfaces are platform-specific features, provided by platform manufacturers for the purposes of performance and failure recovery. StandardIntel-basedPCshave a fixed function interface defined by Intel,[43]which provides a set of core functionality that reduces an ACPI-compliant system's need for full driver stacks for providing basic functionality during boot time or in the case of major system failure.
ACPI Platform Error Interface (APEI) is a specification for reporting hardware errors (e.g., from the chipset or RAM) to the operating system.
ACPI defines many tables that provide the interface between an ACPI-compliantoperating systemand system firmware (BIOSorUEFI). This includes RSDP, RSDT, XSDT, FADT, FACS, DSDT, SSDT, MADT, and MCFG, for example.[44][45]
The tables allow description of system hardware in a platform-independent manner, and are presented as either fixed-formatted data structures or in AML. The main AML table is the DSDT (differentiated system description table). The AML can be decompiled by tools like Intel's iASL (open-source, part of ACPICA) for purposes like patching the tables for expanding OS compatibility.[46][47]
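For instance, on a Linux system with the ACPICA tools installed, the tables can be dumped and the DSDT decompiled back to ASL; the file names shown are the tools' usual defaults and may differ by version:

    # dump each ACPI table to a separate binary file in the current directory
    sudo acpidump -b
    # decompile the DSDT from AML bytecode to ACPI Source Language (produces dsdt.dsl)
    iasl -d dsdt.dat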
The Root System Description Pointer (RSDP) is located in a platform-dependent manner, and describes the rest of the tables.
A custom ACPI table called the Windows Platform Binary Table (WPBT) is used by Microsoft to allow vendors to add software into the Windows OS automatically. Some vendors, such asLenovo, have been caught using this feature to install harmful software such asSuperfish.[48]Samsungshipped PCs with Windows Update disabled.[48]Windows versions older than Windows 7 do not support this feature, but alternative techniques can be used. This behavior has been compared torootkits.[49][50]
In November 2003,Linus Torvalds—author of theLinux kernel—described ACPI as "a complete design disaster in every way".[51][52]
|
https://en.wikipedia.org/wiki/ACPI
|
Incomputing, theSystem Management BIOS(SMBIOS) specification definesdata structures(and access methods) that can be used to read management information produced by theBIOSof acomputer.[1]This eliminates the need for theoperating systemto probe hardware directly to discover what devices are present in the computer. The SMBIOS specification is produced by theDistributed Management Task Force(DMTF), a non-profitstandards development organization. The DMTF estimates that two billion client and server systems implement SMBIOS.[2]
SMBIOS was originally known as Desktop Management BIOS (DMIBIOS), since it interacted with theDesktop Management Interface(DMI).[3]
The DMTF released version 3.7.1 of the specification on May 24, 2024.[4]
Version 1 of the Desktop Management BIOS (DMIBIOS) specification was produced byPhoenix Technologiesin or before 1996.[5][6]
Version 2.0 of the Desktop Management BIOS specification was released on March 6, 1996 byAmerican Megatrends(AMI),Award Software,Dell,Intel, Phoenix Technologies, andSystemSoft Corporation. It introduced 16-bit plug-and-play functions used to access the structures from Windows 95.[7]
The last version to be published directly by vendors was 2.3 on August 12, 1998. The authors were American Megatrends, Award Software,Compaq, Dell,Hewlett-Packard, Intel,International Business Machines(IBM), Phoenix Technologies, and SystemSoft Corporation.
Circa 1999, theDistributed Management Task Force(DMTF) took ownership of the specification. The first version published by the DMTF was 2.3.1 on March 16, 1999. At approximately the same timeMicrosoftstarted to require thatOEMsand BIOS vendors support the interface/data-set in order to have Microsoftcertification.
Version 3.0.0, introduced in February 2015, added a 64-bit entry point, which can coexist with the previously defined 32-bit entry point.
Version 3.4.0 was released in August 2020.[8]
Version 3.5.0 was released in September 2021.[9]
Version 3.6.0 was released in June 2022.[10]
Version 3.7.0 was released in July 2023.[11]
The SMBIOS table consists of an entry point (two types are defined, 32-bit and 64-bit), and a variable number of structures that describe platform components and features. These structures are occasionally referred to as "tables" or "records" in third-party documentation.
As of version 3.3.0, the SMBIOS specification defines the following structure types:[12][13]
The EFI configuration table (EFI_CONFIGURATION_TABLE) contains entries pointing to the SMBIOS 2 and/or SMBIOS 3 tables.[14]There are several ways to access the data, depending on the platform and operating system.
In theUEFI Shell, theSmbiosViewcommand can retrieve and display the SMBIOS data.[15][16]One can often enter the UEFI shell by entering the system firmware settings, and then selecting the shell as a boot option (as opposed to a DVD drive or hard drive).
ForLinux,FreeBSD, etc., thedmidecodeutility can be used.
MicrosoftspecifiesWMIas the preferred mechanism for accessing SMBIOS information fromMicrosoft Windows.[17][18]
On Windows systems that support it (XP and later), some SMBIOS information can be viewed with either theWMICutility with 'BIOS'/'MEMORYCHIP'/'BASEBOARD' and similar parameters, or by looking in the Windows Registry under HKLM\HARDWARE\DESCRIPTION\System.
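For illustration, roughly equivalent queries on different platforms might look like the following; the output and available fields vary by system:

    # Linux / FreeBSD: decode the BIOS and System Information structures
    sudo dmidecode -t bios -t system
    # UEFI Shell: display SMBIOS structures of type 1 (System Information)
    smbiosview -t 1
    # Windows: query similar data through WMI from a command prompt
    wmic bios get Manufacturer,SMBIOSBIOSVersion,ReleaseDate
    wmic baseboard get Manufacturer,Product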
Various software utilities can retrieve raw SMBIOS data, including FirmwareTablesView[19]andAIDA64.
Table and structure creation is normally up to the system firmware/BIOS. TheUEFI Platform Initialization(PI) specification includes an SMBIOS protocol (EFI_SMBIOS_PROTOCOL) that allows components to submit SMBIOS structures for inclusion, and enables the producer to create the SMBIOS table for a platform.[20]
Platform virtualization softwarecan also generate SMBIOS tables for use inside VMs, for instanceQEMU.[21]
If the SMBIOS data is not generated and filled correctly then the machine may behave unexpectedly. For example, aMini PCthat advertisesChassis Information | Type = Tabletmay behave unexpectedly using Linux. A desktop manager likeGNOMEwill attempt to monitor a non-existent battery and shut down the screen and network interfaces when the missing battery drops below a threshold. Additionally, if theChassis Information | Manufactureris not filled in correctly then work-arounds for the incorrectType = Tabletproblem cannot be applied.[22]
|
https://en.wikipedia.org/wiki/System_Management_BIOS
|
Unified Extensible Firmware Interface(UEFI,/ˈjuːɪfaɪ/or as an acronym)[c]is aspecificationfor the firmwarearchitectureof acomputing platform. When a computeris powered on, the UEFI-implementation is typically the first that runs, before starting theoperating system. Examples includeAMI Aptio,Phoenix SecureCore,TianoCore EDK II,InsydeH2O.
UEFI replaces theBIOSthat was present in theboot ROMof allpersonal computersthat areIBM PC compatible,[5][6]although it can providebackwards compatibilitywith the BIOS usingCSM booting. Unlike its predecessor, BIOS, which is ade factostandard originally created byIBMas proprietary software, UEFI is an open standard maintained by an industryconsortium. Like BIOS, most UEFI implementations are proprietary.
Inteldeveloped the originalExtensible Firmware Interface(EFI) specification. The last Intel version of EFI was 1.10 released in 2005. Subsequent versions have been developed as UEFI by theUEFI Forum.
UEFI is independent of platform and programming language, butCis used for the reference implementation TianoCore EDKII.
The original motivation for EFI came during early development of the first Intel–HPItaniumsystems in the mid-1990s.BIOSlimitations (such as 16-bitreal mode, 1 MB addressable memory space,[7]assembly languageprogramming, andPC AThardware) had become too restrictive for the larger server platforms Itanium was targeting.[8]The effort to address these concerns began in 1998 and was initially calledIntel Boot Initiative.[9]It was later renamed toExtensible Firmware Interface(EFI).[10][11]
The firstopen sourceUEFI implementation, Tiano, was released by Intel in 2004. Tiano has since then been superseded by EDK[12]and EDK II[13]and is now maintained by the TianoCore community.[14]
In July 2005, Intel ceased its development of the EFI specification at version 1.10, and contributed it to theUnified EFI Forum, which has developed the specification as theUnified Extensible Firmware Interface(UEFI). The original EFI specification remains owned by Intel, which exclusively provides licenses for EFI-based products, but the UEFI specification is owned by the UEFI Forum.[8][15]
Version 2.0 of the UEFI specification was released on 31 January 2006. It addedcryptographyand security.
Version 2.1 of the UEFI specification was released on 7 January 2007. It added network authentication and theuser interfacearchitecture ('Human Interface Infrastructure' in UEFI).
In October 2018, Arm announcedArm ServerReady, a compliance certification program for landing the generic off-the-shelf operating systems andhypervisorson Arm-based servers. The program requires the system firmware to comply with Server Base Boot Requirements (SBBR). SBBR requires UEFI,ACPIandSMBIOScompliance. In October 2020, Arm announced the extension of the program to theedgeandIoTmarket. The new program name isArm SystemReady. Arm SystemReady defined the Base Boot Requirements (BBR) specification that currently provides three recipes, two of which are related to UEFI: 1) SBBR: which requires UEFI, ACPI and SMBIOS compliance suitable for enterprise level operating environments such as Windows, Red Hat Enterprise Linux, and VMware ESXi; and 2) EBBR: which requires compliance to a set of UEFI interfaces as defined in the Embedded Base Boot Requirements (EBBR) suitable for embedded environments such as Yocto. Many Linux and BSD distros can support both recipes.
In December 2018,Microsoftannounced Project Mu, a fork of TianoCore EDK II used inMicrosoft SurfaceandHyper-Vproducts. The project promotes the idea offirmware as a service.[16]
The latest UEFI specification, version 2.11, was published in December 2024.[17]
The interface defined by the EFI specification includes data tables that contain platform information, and boot and runtime services that are available to the OS loader and OS. UEFI firmware provides several technical advantages over a BIOS:[18]
With UEFI, it is possible to store product keys for operating systems such as Windows, on the UEFI firmware of the device.[21][22][23]UEFI is required forSecure Booton devices shipping with Windows 8[24][25]and above.
It is also possible for operating systems to access UEFI configuration data.[26]
As of version 2.5, processor bindings exist for Itanium, x86, x86-64, ARM (AArch32) and ARM64 (AArch64).[27] Only little-endian processors can be supported.[28] Unofficial UEFI support is under development for POWERPC64 by implementing TianoCore on top of OPAL,[29] the OpenPOWER abstraction layer, running in little-endian mode.[30] Similar projects exist for MIPS[31] and RISC-V.[32] As of UEFI 2.7, RISC-V processor bindings have been officially established for 32-, 64- and 128-bit modes.[33]
Standard PC BIOS is limited to a 16-bit processor mode and 1 MB of addressable memory space, resulting from the design based on theIBM 5150that used a 16-bitIntel 8088processor.[8][34]In comparison, the processor mode in a UEFI environment can be either 32-bit (IA-32, AArch32) or 64-bit (x86-64, Itanium, and AArch64).[8][35]64-bit UEFI firmware implementations supportlong mode, which allows applications in the preboot environment to use 64-bit addressing to get direct access to all of the machine's memory.[36]
UEFI requires the firmware and operating system loader (or kernel) to be size-matched; that is, a 64-bit UEFI firmware implementation can load only a 64-bit operating system (OS) boot loader or kernel (unless the CSM-basedlegacy bootis used) and the same applies to 32-bit. After the system transitions fromboot servicestoruntime services, the operating system kernel takes over. At this point, the kernel can change processor modes if it desires, but this bars usage of the runtime services (unless the kernel switches back again).[37]: sections 2.3.2 and 2.3.4As of version 3.15, theLinux kernelsupports 64-bit kernels to bebootedon 32-bit UEFI firmware implementations running onx86-64CPUs, withUEFI handoversupport from a UEFI boot loader as the requirement.[38]UEFI handover protocoldeduplicatesthe UEFI initialization code between the kernel and UEFI boot loaders, leaving the initialization to be performed only by the Linux kernel'sUEFI boot stub.[39][40]
In addition to the standard PC disk partition scheme that uses a master boot record (MBR), UEFI also works with the GUID Partition Table (GPT) partitioning scheme, which is free from many of the limitations of MBR. In particular, the MBR limits on the number and size of disk partitions (up to four primary partitions per disk, and up to 2 TB (2 × 2⁴⁰ bytes) per disk) are relaxed.[41] More specifically, GPT allows for a maximum disk and partition size of 8 ZiB (8 × 2⁷⁰ bytes).[42][43]
Support for GPT inLinuxis enabled by turning on the optionCONFIG_EFI_PARTITION(EFI GUID Partition Support) during kernel configuration.[44]This option allows Linux to recognize and use GPT disks after the system firmware passes control over the system to Linux.
For reverse compatibility, Linux can use GPT disks in BIOS-based systems for both data storage and booting, as bothGRUB 2and Linux are GPT-aware. Such a setup is usually referred to asBIOS-GPT.[45][unreliable source?]As GPT incorporates the protective MBR, a BIOS-based computer can boot from a GPT disk using a GPT-aware boot loader stored in the protective MBR'sbootstrap code area.[43]In the case of GRUB, such a configuration requires aBIOS boot partitionfor GRUB to embed its second-stage code due to absence of the post-MBR gap in GPT partitioned disks (which is taken over by the GPT'sPrimary HeaderandPrimary Partition Table). Commonly 1MBin size, this partition'sGlobally Unique Identifier(GUID) in GPT scheme is21686148-6449-6E6F-744E-656564454649and is used by GRUB only in BIOS-GPT setups. From GRUB's perspective, no such partition type exists in case of MBR partitioning. This partition is not required if the system is UEFI-based because no embedding of the second-stage code is needed in that case.[19][43][45]
UEFI systems can access GPT disks and boot directly from them, which allows Linux to use UEFI boot methods. Booting Linux from GPT disks on UEFI systems involves creation of anEFI system partition(ESP), which contains UEFI applications such as bootloaders, operating system kernels, and utility software.[46][47][48][unreliable source?]Such a setup is usually referred to asUEFI-GPT, while ESP is recommended to be at least 512 MB in size and formatted with a FAT32 filesystem for maximum compatibility.[43][45][49][unreliable source?]
Forbackward compatibility, some UEFI implementations also support booting from MBR-partitioned disks through the Compatibility Support Module (CSM) that provides legacy BIOS compatibility.[50]In that case, booting Linux on UEFI systems is the same as on legacy BIOS-based systems.
Some of the EFI's practices and data formats mirror those ofMicrosoft Windows.[51][52]
The 64-bit versions ofWindows VistaSP1 and later and 64-bit versions ofWindows 8,8.1,10, and11can boot from a GPT disk that is larger than 2TB.
EFI defines two types of services:boot servicesandruntime services. Boot services are available only while the firmware owns the platform (i.e., before theExitBootServices()call), and they include text and graphical consoles on various devices, and bus, block and file services. Runtime services are still accessible while the operating system is running; they include services such as date, time andNVRAMaccess.
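As a rough sketch of this hand-off as seen from an OS loader written against the EDK II headers (the two-descriptor slack and the omitted error handling are conventions of this sketch, not requirements of the specification):

#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>   // provides gBS (boot services table)

// Hedged sketch: fetch the current memory map, then leave boot services.
EFI_STATUS LeaveBootServices (EFI_HANDLE ImageHandle)
{
  UINTN                  MapSize = 0, MapKey, DescSize;
  UINT32                 DescVersion;
  EFI_MEMORY_DESCRIPTOR  *Map = NULL;
  EFI_STATUS             Status;

  // First call with a zero-sized buffer just reports the required size.
  gBS->GetMemoryMap (&MapSize, Map, &MapKey, &DescSize, &DescVersion);
  MapSize += 2 * DescSize;                 // slack: the pool allocation below changes the map
  gBS->AllocatePool (EfiLoaderData, MapSize, (VOID **)&Map);

  Status = gBS->GetMemoryMap (&MapSize, Map, &MapKey, &DescSize, &DescVersion);
  if (EFI_ERROR (Status)) {
    return Status;
  }

  // After this succeeds, only the runtime services remain callable.
  return gBS->ExitBootServices (ImageHandle, MapKey);
}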
Beyond loading an OS, UEFI can runUEFI applications, which reside as files on theEFI system partition. They can be executed from the UEFI Shell, by the firmware'sboot manager, or by other UEFI applications.UEFI applicationscan be developed and installed independently of theoriginal equipment manufacturers(OEMs).
A type of UEFI application is an OS boot loader such asGRUB,rEFInd,Gummiboot, andWindows Boot Manager, which loads some OS files into memory and executes them. Also, an OS boot loader can provide a user interface to allow the selection of another UEFI application to run. Utilities like the UEFI Shell are also UEFI applications.
EFI defines protocols as a set of software interfaces used for communication between two binary modules. All EFI drivers must provide services to others via protocols. The EFI Protocols are similar to theBIOS interrupt calls.
In addition to standard instruction set architecture-specific device drivers, EFI provides for an ISA-independent device driver stored in non-volatile memory as EFI byte code or EBC. System firmware has an interpreter for EBC images. In that sense, EBC is analogous to Open Firmware, the ISA-independent firmware used in PowerPC-based Apple Macintosh and Sun Microsystems SPARC computers, among others.
Some architecture-specific (non-EFI Byte Code) EFI drivers for some device types can have interfaces for use by the OS. This allows the OS to rely on EFI for drivers to perform basic graphics and network functions before, and if, operating-system-specific drivers are loaded.
In other cases, an EFI driver can be a filesystem driver that allows booting from other types of disk volumes. Examples include efifs, a set of drivers for 37 file systems (based on GRUB 2 code),[56] used by Rufus for chain-loading NTFS ESPs.[57]
The EFI 1.0 specification defined a UGA (Universal Graphic Adapter) protocol as a way to support graphics features. UEFI did not include UGA and replaced it withGOP (Graphics Output Protocol).[58]
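For illustration, a UEFI application might locate the GOP through the generic protocol-location boot service; the following is a minimal sketch using EDK II conventions, with error handling trimmed:

#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>   // gBS
#include <Protocol/GraphicsOutput.h>            // EFI_GRAPHICS_OUTPUT_PROTOCOL

// Sketch: find the first Graphics Output Protocol instance and read the
// resolution of the current video mode.
EFI_STATUS QueryGop (UINT32 *Width, UINT32 *Height)
{
  EFI_GRAPHICS_OUTPUT_PROTOCOL  *Gop;
  EFI_STATUS                    Status;

  Status = gBS->LocateProtocol (&gEfiGraphicsOutputProtocolGuid, NULL, (VOID **)&Gop);
  if (EFI_ERROR (Status)) {
    return Status;                 // no GOP-capable display found
  }
  *Width  = Gop->Mode->Info->HorizontalResolution;
  *Height = Gop->Mode->Info->VerticalResolution;
  return EFI_SUCCESS;
}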
UEFI 2.1 defined a "Human Interface Infrastructure" (HII) to manage user input, localized strings, fonts, and forms (in theHTMLsense). These enableoriginal equipment manufacturers(OEMs) orindependent BIOS vendors(IBVs) to design graphical interfaces for pre-boot configuration. UEFI usesUTF-16to encode strings by default.
Most early UEFI firmware implementations were console-based. Today many UEFI firmware implementations are GUI-based.
An EFI system partition, often abbreviated to ESP, is adata storage devicepartition that is used in computers adhering to the UEFI specification. Accessed by the UEFI firmware when a computer is powered up, it stores UEFI applications and the files these applications need to run, including operating systemboot loaders. Supportedpartition tableschemes includeMBRandGPT, as well asEl Toritovolumes on optical discs.[37]: section 2.6.2For use on ESPs, UEFI defines a specific version of theFAT file system, which is maintained as part of the UEFI specification and independently from the original FAT specification, encompassing theFAT32,FAT16andFAT12file systems.[37]: section 12.3[59][60][61]The ESP also provides space for a boot sector as part of the backward BIOS compatibility.[50]
Unlike the legacy PC BIOS, UEFI does not rely onboot sectors, defining instead a boot manager as part of the UEFI specification. When a computer is powered on, the boot manager checks the boot configuration and, based on its settings, then executes the specified OSboot loaderoroperating system kernel(usually boot loader[62]). The boot configuration is defined by variables stored inNVRAM, including variables that indicate the file system paths to OS loaders or OS kernels.
OS boot loaders can be automatically detected by UEFI, which enables easybootingfrom removable devices such asUSB flash drives. This automated detection relies on standardized file paths to the OS boot loader, with the path varying depending on thecomputer architecture. The format of the file path is defined as<EFI_SYSTEM_PARTITION>\EFI\BOOT\BOOT<MACHINE_TYPE_SHORT_NAME>.EFI; for example, the file path to the OS loader on anx86-64system is\efi\boot\bootx64.efi,[37]and\efi\boot\bootaa64.efion ARM64 architecture.
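The boot configuration itself is visible to UEFI applications through the runtime variable services; a small sketch (EDK II style) that reads the BootOrder variable might look like the following, where the caller-supplied buffer is assumed to be large enough:

#include <Uefi.h>
#include <Library/UefiRuntimeServicesTableLib.h>  // gRT (runtime services table)
#include <Guid/GlobalVariable.h>                  // gEfiGlobalVariableGuid

// Sketch: read the BootOrder NVRAM variable, an array of UINT16 option
// numbers that the boot manager walks when choosing what to load.
EFI_STATUS ReadBootOrder (UINT16 *Order, UINTN *Count)
{
  UINTN       Size = *Count * sizeof (UINT16);
  UINT32      Attributes;
  EFI_STATUS  Status;

  Status = gRT->GetVariable (L"BootOrder", &gEfiGlobalVariableGuid,
                             &Attributes, &Size, Order);
  if (!EFI_ERROR (Status)) {
    *Count = Size / sizeof (UINT16);   // each entry names a Boot#### variable
  }
  return Status;
}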
Booting UEFI systems from GPT-partitioned disks is commonly calledUEFI-GPT booting. Despite the fact that the UEFI specification requires MBR partition tables to be fully supported,[37]some UEFI firmware implementations immediately switch to the BIOS-based CSM booting depending on the type of boot disk's partition table, effectively preventing UEFI booting to be performed fromEFI System Partitionon MBR-partitioned disks.[50]Such a boot scheme is commonly calledUEFI-MBR.
It is also common for a boot manager to have a textual user interface so the user can select the desired OS (or setup utility) from a list of available boot options.
On PC platforms, BIOS firmware that supports UEFI boot can be called a UEFI BIOS, although it may not support the CSM boot method, as modern x86 PCs have deprecated the use of CSM.
To ensure backward compatibility, UEFI firmware implementations on PC-class machines could support booting in legacy BIOS mode from MBR-partitioned disks through theCompatibility Support Module (CSM)that provides legacy BIOS compatibility. In this scenario, booting is performed in the same way as on legacy BIOS-based systems, by ignoring the partition table and relying on the content of aboot sector.[50]
BIOS-style booting from MBR-partitioned disks is commonly calledBIOS-MBR, regardless of it being performed on UEFI or legacy BIOS-based systems. Furthermore, booting legacy BIOS-based systems from GPT disks is also possible, and such a boot scheme is commonly calledBIOS-GPT.
TheCompatibility Support Moduleallows legacy operating systems and some legacyoption ROMsthat do not support UEFI to still be used.[63]It also provides required legacySystem Management Mode(SMM) functionality, calledCompatibilitySmm, as an addition to features provided by the UEFI SMM. An example of such a legacy SMM functionality is providing USB legacy support for keyboard and mouse, by emulating their classicPS/2counterparts.[63]
In November 2017, Intel announced that it planned to phase out CSM support for client platforms by 2020.[64]
In July 2022, Kaspersky published information regarding a rootkit designed to chain-boot malicious code on machines using Intel's H81 chipset and the Compatibility Support Module of affected motherboards.[65]
In August 2023, Intel announced that it planned to phase out CSM support for server platforms by 2024.[66]
Currently[when?]most computers based on Intel platforms do not support CSM.[citation needed]
The UEFI specification includes support for booting over network via thePreboot eXecution Environment(PXE). PXE bootingnetwork protocolsincludeInternet Protocol(IPv4andIPv6),User Datagram Protocol(UDP),Dynamic Host Configuration Protocol(DHCP),Trivial File Transfer Protocol(TFTP) andiSCSI.[37][67]
OS images can be remotely stored onstorage area networks(SANs), withInternet Small Computer System Interface(iSCSI) andFibre Channel over Ethernet(FCoE) as supported protocols for accessing the SANs.[37][68][69]
Version 2.5 of the UEFI specification adds support for accessing boot images overHTTP.[70]
The UEFI specification defines a protocol known as Secure Boot, which can secure the boot process by preventing the loading of UEFI drivers or OS boot loaders that are not signed with an acceptable digital signature. The details of how these drivers are signed are specified in the UEFI specification.[71] When Secure Boot is enabled, it is initially placed in "setup" mode, which allows a public key known as the "platform key" (PK) to be written to the firmware. Once the key is written, Secure Boot enters "User" mode, where only UEFI drivers and OS boot loaders signed with the platform key can be loaded by the firmware. Additional "key exchange keys" (KEK) can be added to a database stored in memory to allow other certificates to be used, but they must still have a connection to the private portion of the platform key.[72] Secure Boot can also be placed in "Custom" mode, where additional public keys can be added to the system that do not match the private key.[73]
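From a UEFI application's point of view, the resulting Secure Boot state is exposed through architecturally defined, read-only variables; a minimal sketch (EDK II style) that checks it could look like:

#include <Uefi.h>
#include <Library/UefiRuntimeServicesTableLib.h>  // gRT
#include <Guid/GlobalVariable.h>                  // gEfiGlobalVariableGuid

// Sketch: query the read-only "SecureBoot" global variable.
// A value of 1 means the firmware is enforcing signature checks.
BOOLEAN IsSecureBootEnabled (VOID)
{
  UINT8  SecureBoot = 0;
  UINTN  Size = sizeof (SecureBoot);

  if (EFI_ERROR (gRT->GetVariable (L"SecureBoot", &gEfiGlobalVariableGuid,
                                   NULL, &Size, &SecureBoot))) {
    return FALSE;   // variable absent: firmware predates Secure Boot
  }
  return SecureBoot == 1;
}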
Secure Boot is supported by Windows 8 and 8.1, Windows Server 2012 and 2012 R2, Windows 10, Windows Server 2016, 2019, and 2022, and Windows 11, VMware vSphere 6.5[74] and a number of Linux distributions including Fedora (since version 18), openSUSE (since version 12.3), RHEL (since version 7), CentOS (since version 7[75]), Debian (since version 10),[76] Ubuntu (since version 12.04.2), Linux Mint (since version 21.3),[77][78] and AlmaLinux OS (since version 8.4[79]). As of January 2025[update], FreeBSD support is in a planning stage.[80]
UEFI provides ashell environment, which can be used to execute other UEFI applications, including UEFIboot loaders.[48]Apart from that, commands available in the UEFI shell can be used for obtaining various other information about the system or the firmware, including getting the memory map (memmap), modifying boot manager variables (bcfg), running partitioning programs (diskpart), loading UEFI drivers, and editing text files (edit).[81][unreliable source?][82][83]
Source code for a UEFI shell can be downloaded from Intel's TianoCore[broken anchor] UDK/EDK2 project.[84] A pre-built ShellBinPkg is also available.[85] Shell v2 works best in UEFI 2.3+ systems and is recommended over Shell v1 on those systems. Shell v1 should work in all UEFI systems.[81][86][87]
Methods used for launching the UEFI shell depend on the manufacturer and model of the system motherboard. Some of them already provide a direct option in the firmware setup for launching the shell; in this case, a compiled x86-64 version of the shell needs to be made available as <EFI_SYSTEM_PARTITION>/SHELLX64.EFI. Other systems have an already embedded UEFI shell which can be launched by an appropriate key press combination.[88][unreliable source?][89] For other systems, the solution is either creating an appropriate USB flash drive or manually adding (bcfg) a boot option associated with the compiled version of the shell.[83][88][90][unreliable source?][91][unreliable source?]
The following is a list ofcommandssupported by the EFI shell.[82]
Extensions to UEFI can be loaded from virtually anynon-volatilestorage device attached to the computer. For example, anoriginal equipment manufacturer(OEM) can distribute systems with anEFI system partitionon the hard drive, which would add additional functions to the standard UEFI firmware stored on the motherboard'sROM.
UEFI Capsule defines a Firmware-to-OS firmware update interface, marketed as modern and secure.[92]Windows 8,Windows 8.1,Windows 10,[93]andFwupdfor Linux each support the UEFI Capsule.
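As a hedged sketch of the OS-side half of this interface, using the runtime services table from EDK II (the capsule contents, GUID and flags are platform-specific and are assumed to have been prepared elsewhere):

#include <Uefi.h>
#include <Library/UefiRuntimeServicesTableLib.h>   // gRT (runtime services table)

// Hedged sketch only: hand a vendor-prepared capsule image to the firmware.
// Real tools such as fwupd construct and deliver actual capsules.
EFI_STATUS SubmitCapsule (EFI_CAPSULE_HEADER *Capsule)
{
  EFI_CAPSULE_HEADER  *Array[1] = { Capsule };
  UINT64              MaxSize;
  EFI_RESET_TYPE      Reset;
  EFI_STATUS          Status;

  // Ask the firmware whether it can accept this capsule at all.
  Status = gRT->QueryCapsuleCapabilities (Array, 1, &MaxSize, &Reset);
  if (EFI_ERROR (Status)) {
    return Status;
  }

  // No persist-across-reset flag is set, so no scatter-gather list is needed.
  return gRT->UpdateCapsule (Array, 1, 0);
}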
LikeBIOS, UEFI initializes and tests system hardware components (e.g. memory training, PCIe link training, USB link training on typical x86 systems), and then loads theboot loaderfrom amass storage deviceor through anetwork connection. Inx86systems, the UEFI firmware is usually stored in theNOR flashchip of the motherboard.[94][95]In some ARM-based Android and Windows Phone devices, the UEFI boot loader is stored in theeMMCoreUFSflash memory.
UEFI machines can have one of the following classes, which were used to help ease the transition to UEFI:[96]
Starting from the 10th Gen Intel Core, Intel no longer provides LegacyVideo BIOSfor the iGPU (Intel Graphics Technology). Legacy boot with those CPUs requires a Legacy Video BIOS, which can still be provided by a video card.[citation needed]
The Security (SEC) phase is the first stage of the UEFI boot, although platform-specific binary code (e.g., Intel ME, AMD PSP, CPU microcode) may precede it. It consists of minimal code written in assembly language for the specific architecture. It initializes temporary memory (often CPU cache-as-RAM (CAR), or SoC on-chip SRAM) and serves as the system's software root of trust, with the option of verifying PEI before hand-off.
The Pre-EFI Initialization (PEI) stage is the second stage of UEFI boot. It consists of a dependency-aware dispatcher that loads and runs PEI modules (PEIMs) to handle early hardware initialization tasks such as main memory initialization (initializing the memory controller and DRAM) and firmware recovery operations. Additionally, it is responsible for discovering the current boot mode and handling many ACPI S3 operations. In the case of ACPI S3 resume, it is responsible for restoring many hardware registers to a pre-sleep state. PEI also uses CAR. Initialization at this stage involves creating data structures in memory and establishing default values within these structures.[98]
This stage has several components, including the PEI Foundation, PEIMs and PPIs. Because few resources are available at this point, the stage must remain minimal and perform only the preparations needed for the next stage (DXE), which is much richer.
After the SEC phase hands off, responsibility for the platform is taken by the PEI Foundation; its main components are described below.
Its dispatcher component is responsible for invoking PEIMs and managing their dependencies.
PEIMs are minimal PEI drivers responsible for initializing hardware such as permanent memory, the CPU, the chipset and the motherboard. Each PEIM has a single responsibility and focuses on a single initialization task. These drivers can come from different vendors.
A PPI (PEIM-to-PEIM interface) is a data structure composed of GUID/pointer pairs. PPIs are discovered by PEIMs through the PEI services.
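A hedged sketch of how a PEIM might publish a PPI under EDK II conventions follows; the MY_EXAMPLE_PPI interface and its GUID are hypothetical, while the descriptor layout and the PeiServicesInstallPpi() call follow the PI specification:

#include <PiPei.h>
#include <Library/PeiServicesLib.h>

// Hypothetical PPI: one GUID plus one interface structure.
typedef struct {
  EFI_STATUS (EFIAPI *DoSomething)(VOID);
} MY_EXAMPLE_PPI;

STATIC EFI_STATUS EFIAPI DoSomethingImpl (VOID) { return EFI_SUCCESS; }

STATIC MY_EXAMPLE_PPI mExamplePpi = { DoSomethingImpl };

STATIC EFI_GUID mExamplePpiGuid = { 0x12345678, 0x1234, 0x1234,
  { 0x12, 0x34, 0x12, 0x34, 0x12, 0x34, 0x12, 0x34 } };   // made-up GUID

// Descriptor: flags, GUID pointer, interface pointer.
STATIC EFI_PEI_PPI_DESCRIPTOR mExamplePpiDesc = {
  EFI_PEI_PPI_DESCRIPTOR_PPI | EFI_PEI_PPI_DESCRIPTOR_TERMINATE_LIST,
  &mExamplePpiGuid,
  &mExamplePpi
};

// Installing the descriptor makes the PPI discoverable by other PEIMs
// through PeiServicesLocatePpi().
EFI_STATUS PublishExamplePpi (VOID)
{
  return PeiServicesInstallPpi (&mExamplePpiDesc);
}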
After this minimal initialization of the system, the PEI Foundation locates the DXE Foundation and passes control to it, dispatching it through a special PPI called the DXE IPL (Initial Program Load) PPI.
The Driver Execution Environment (DXE) stage consists of C modules and a dependency-aware dispatcher. With main memory now available, the CPU, chipset, mainboard and other I/O devices are initialized in DXE and BDS. Initialization at this stage involves assigning EFI device paths to the hardware connected to the motherboard, and transferring configuration data to the hardware.[99]
Boot Device Selection (BDS) is a part of the DXE.[100][101] In this stage, boot devices are initialized, and UEFI drivers or Option ROMs of PCI devices are executed according to architecturally defined variables stored in NVRAM.
This is the stage between boot device selection and hand-off to the OS. At this point one may enter a UEFI shell, or execute a UEFI application such as the OS boot loader.
The UEFI hands off to the operating system (OS) after ExitBootServices() is executed. A UEFI-compatible OS is responsible for exiting boot services, which triggers the firmware to unload all no-longer-needed code and data, leaving only runtime services code/data, e.g. SMM and ACPI.[102][failed verification] A typical modern OS will prefer to use its own programs (such as kernel drivers) to control hardware devices.
When a legacy OS is used, CSM will handle this call ensuring the system is compatible with legacy BIOS expectations.
Intel's implementation of EFI is theIntel Platform Innovation Framework, codenamedTiano. Tiano runs on Intel'sXScale,Itanium,IA-32andx86-64processors, and is proprietary software, although a portion of the code has been released under theBSD licenseorEclipse Public License(EPL) asTianoCore EDK II. TianoCore can be used as a payload forcoreboot.[103]
Phoenix Technologies' implementation of UEFI is branded as SecureCore Technology (SCT).[104]American Megatrendsoffers its own UEFI firmware implementation known as Aptio,[105]whileInsyde Softwareoffers InsydeH2O,[106]and Byosoft offers ByoCore.
In December 2018,Microsoftreleased an open source version of its TianoCore EDK2-based UEFI implementation from theSurfaceline,Project Mu.[107]
An implementation of the UEFI API was introduced into the Universal Boot Loader (Das U-Boot) in 2017.[108]On theARMv8architectureLinuxdistributions use the U-Boot UEFI implementation in conjunction withGNU GRUBfor booting (e.g.SUSE Linux[109]), the same holds true for OpenBSD.[110]For booting from iSCSIiPXEcan be used as a UEFI application loaded by U-Boot.[111]
Intel's firstItaniumworkstations and servers, released in 2000, implemented EFI 1.02.
Hewlett-Packard's firstItanium 2systems, released in 2002, implemented EFI 1.10; they were able to bootWindows,Linux,FreeBSDandHP-UX;OpenVMSadded UEFI capability in June 2003.
In January 2006,Apple Inc.shipped its firstIntel-based Macintosh computers. These systems used EFI instead ofOpen Firmware, which had been used on its previous PowerPC-based systems.[112]On 5 April 2006, Apple first releasedBoot Camp, which produces a Windows drivers disk and a non-destructive partitioning tool to allow the installation of Windows XP or Vista without requiring a reinstallation of Mac OS X (now macOS). A firmware update was also released that added BIOS compatibility to its EFI implementation. Subsequent Macintosh models shipped with the newer firmware.[113]
During 2005, more than one million Intel systems shipped with Intel's implementation of UEFI.[114][failed verification]New mobile, desktop and server products, using Intel's implementation of UEFI, started shipping in 2006. For instance, boards that use the Intel 945 chipset series use Intel's UEFI firmware implementation.
Since 2005, EFI has also been implemented on non-PC architectures, such asembedded systemsbased onXScalecores.[114]
The EDK (EFI Developer Kit) includes an NT32 target, which allows EFI firmware and EFI applications to run within a Windows application. However, EDK NT32 allows no direct hardware access, so only a subset of EFI applications and drivers can be executed by the EDK NT32 target.
In 2008, more x86-64 systems adopted UEFI. While many of these systems still allow booting only the BIOS-based OSes via the Compatibility Support Module (CSM) (thus not appearing to the user to be UEFI-based), other systems started to allow booting UEFI-based OSes. For example, IBM x3450 server,MSImotherboards with ClickBIOS, HP EliteBook Notebook PCs.
In 2009, IBM shippedSystem xmachines (x3550 M2, x3650 M2, iDataPlex dx360 M2) andBladeCenterHS22 with UEFI capability. Dell shipped PowerEdge T610, R610, R710, M610 and M710 servers with UEFI capability. More commercially available systems are mentioned in a UEFI whitepaper.[115]
In 2011, major vendors (such asASRock,Asus,Gigabyte, andMSI) launched several consumer-oriented motherboards using the Intel6-seriesLGA 1155chipset and AMD 9 SeriesAM3+chipsets with UEFI.[116]
With the release of Windows 8 in October 2012, Microsoft's certification requirements now require that computers include firmware that implements the UEFI specification. Furthermore, if the computer supports the "Connected Standby" feature of Windows 8 (which allows devices to have power management comparable tosmartphones, with an almost instantaneous return from standby mode), then the firmware is not permitted to contain a Compatibility Support Module (CSM). As such, systems that support Connected Standby are incapable of booting Legacy BIOS operating systems.[117][118]
In October 2017, Intel announced that it would remove legacy PC BIOS support from all its products by 2020, in favor of UEFI Class 3.[119]By 2019, all computers based on Intel platforms no longer have legacy PC BIOS support.
An operating system that can be booted from a (U)EFI is called a (U)EFI-aware operating system, as defined by the (U)EFI specification. Here the term booted from a (U)EFI means directly booting the system using a (U)EFI operating system loader stored on any storage device. The default location for the operating system loader is <EFI_SYSTEM_PARTITION>/EFI/BOOT/BOOT<MACHINE_TYPE_SHORT_NAME>.EFI, where the short name of the machine type can be IA32, X64, IA64, ARM or AA64.[37] Some operating system vendors may have their own boot loaders. They may also change the default boot location.
EDK2 Application Development Kit(EADK) makes it possible to usestandard C libraryfunctions in UEFI applications. EADK can be freely downloaded from theIntel's TianoCore UDK / EDK2SourceForgeproject. As an example, a port of thePythoninterpreter is made available as a UEFI application by using the EADK.[158]The development has moved to GitHub since UDK2015.[159]
A minimalistic "hello, world" C program written using EADK looks similar to itsusual C counterpart:
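One minimal sketch, assuming the EADK's StdLib package supplies the C entry point and printf:

#include <stdio.h>

// With the EADK StdLib, a UEFI application can use the ordinary C entry
// point and standard I/O, just like a hosted C program.
int main (int argc, char **argv)
{
    printf ("hello, world\n");
    return 0;
}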
Numerous digital rights activists have protested UEFI.Ronald G. Minnich, a co-author ofcoreboot, andCory Doctorow, a digital rights activist, have criticized UEFI as an attempt to remove the ability of the user to truly control the computer.[160][161]It does not solve the BIOS's long-standing problems of requiring two different drivers—one for the firmware and one for the operating system—for most hardware.[162]
Open-source project TianoCore also provides UEFIs.[163]TianoCore lacks the specialized firmware drivers and modules that initialize chipset functions, but TianoCore is one of many payload options ofcoreboot. The development of coreboot requires cooperation from chipset manufacturers to provide the specifications needed to develop initialization drivers.
In 2011, Microsoft announced that computers certified to run itsWindows 8operating system had to ship with Microsoft's public key enrolled and Secure Boot enabled, which implies that using UEFI is a requirement for these devices.[164][165]Following the announcement, the company was accused by critics and free software/open source advocates (including theFree Software Foundation) of trying to use the Secure Boot functionality of UEFI tohinder or outright preventthe installation of alternative operating systems such asLinux. Microsoft denied that the Secure Boot requirement was intended to serve as a form oflock-in, and clarified its requirements by stating that x86-based systems certified for Windows 8 must allow Secure Boot to enter custom mode or be disabled, but not on systems using theARM architecture.[73][166]Windows 10allowsOEMsto decide whether or not Secure Boot can be managed by users of their x86 systems.[167]
Other developers raised concerns about the legal and practical issues of implementing support for Secure Boot on Linux systems in general. FormerRed HatdeveloperMatthew Garrettnoted that conditions in theGNU General Public License version 3may prevent the use of theGNU GRand Unified Bootloaderwithout a distribution's developer disclosing the private key (however, theFree Software Foundationhas since clarified its position, assuring that the responsibility to make keys available was held by the hardware manufacturer),[168][122]and that it would also be difficult for advanced users to build customkernelsthat could function with Secure Boot enabled without self-signing them.[166]Other developers suggested that signed builds of Linux with another key could be provided, but noted that it would be difficult to persuade OEMs to ship their computers with the required key alongside the Microsoft key.[6]
Several major Linux distributions have developed different implementations for Secure Boot. Garrett himself developed a minimal bootloader known as a shim, which is a precompiled, signed bootloader that allows the user to individually trust keys provided by Linux distributions.[169]Ubuntu 12.10uses an older version of shim[which?]pre-configured for use withCanonical's own key that verifies only the bootloader and allows unsigned kernels to be loaded; developers believed that the practice of signing only the bootloader is more feasible, since a trusted kernel is effective at securing only theuser space, and not the pre-boot state for which Secure Boot is designed to add protection. That also allows users to build their own kernels and use customkernel modulesas well, without the need to reconfigure the system.[122][170][171]Canonical also maintains its own private key to sign installations of Ubuntu pre-loaded on certified OEM computers that run the operating system, and also plans to enforce a Secure Boot requirement as well—requiring both a Canonical key and a Microsoft key (for compatibility reasons) to be included in their firmware.Fedoraalso uses shim,[which?]but requires that both the kernel and its modules be signed as well.[170]shim has Machine Owner Key (MOK) that can be used to sign locally-compiled kernels and other software not signed by distribution maintainer.[172]
It has been disputed whether the operating system kernel and its modules must be signed as well; while the UEFI specifications do not require it, Microsoft has asserted that their contractual requirements do, and that it reserves the right to revoke any certificates used to sign code that can be used to compromise the security of the system.[171] In Windows, if Secure Boot is enabled, all kernel drivers must be digitally signed; non-WHQL drivers may be refused and fail to load. In February 2013, another Red Hat developer attempted to submit a patch to the Linux kernel that would allow it to parse Microsoft's authenticode signing using a master X.509 key embedded in PE files signed by Microsoft. However, the proposal was criticized by Linux creator Linus Torvalds, who attacked Red Hat for supporting Microsoft's control over the Secure Boot infrastructure.[173]
On 26 March 2013, theSpanishfree software development group Hispalinux filed a formal complaint with theEuropean Commission, contending that Microsoft's Secure Boot requirements on OEM systems were "obstructive" andanti-competitive.[174]
At theBlack Hat conferencein August 2013, a group of security researchers presented a series of exploits in specific vendor implementations of UEFI that could be used to exploit Secure Boot.[175]
In August 2016 it was reported that two security researchers had found the "golden key" security key Microsoft uses in signing operating systems.[176] Technically, no key was exposed; however, an exploitable binary signed by the key was. This allows any software to run as though it was genuinely signed by Microsoft and exposes the possibility of rootkit and bootkit attacks. It also makes patching the fault impossible, since any patch can be replaced (downgraded) by the (signed) exploitable binary. Microsoft responded in a statement that the vulnerability only exists in the ARM architecture and Windows RT devices, and released two patches; however, the patches do not (and cannot) remove the vulnerability, which would require key replacements in end user firmware to fix.[citation needed]
On March 1, 2023, researchers from the cybersecurity firm ESET reported "the first in-the-wild UEFI bootkit bypassing UEFI Secure Boot", named BlackLotus; their public analysis describes how its mechanics exploit the fact that the patches "do not (and cannot) remove the vulnerability".[177][178]
In August 2024, the Windows 11 and Windows 10 security updates applied Secure Boot Advanced Targeting (SBAT) settings to the device's UEFI NVRAM, which caused some Linux distributions to fail to load. SBAT is a protocol supported in newer versions of Windows Boot Manager and shim that refuses to load buggy or vulnerable intermediate bootloaders (usually older versions of Windows Boot Manager and GRUB) during the boot process. The change was reverted the next month.[179]
ManyLinux distributionssupport UEFI Secure Boot as of January 2025[update], such asRHEL(RHEL 7 and later),CentOS(CentOS 7 and later[180]),Ubuntu,Fedora,Debian(Debian 10 and later[181]),OpenSUSE, andSUSE Linux Enterprise.[182]
The increased prominence of UEFI firmware in devices has also led to a number of technical problems blamed on their respective implementations.[183]
Following the release of Windows 8 in late 2012, it was discovered that certainLenovocomputer models with Secure Boot had firmware that was hardcoded to allow only executables named "Windows Boot Manager" or "Red Hat Enterprise Linux" to load, regardless of any other setting.[184]Other problems were encountered by severalToshibalaptop models with Secure Boot that were missing certain certificates required for its proper operation.[183]
In January 2013, a bug surrounding the UEFI implementation on someSamsunglaptops was publicized, which caused them to bebrickedafter installing a Linux distribution in UEFI mode. While potential conflicts with a kernel module designed to access system features on Samsung laptops were initially blamed (also prompting kernel maintainers to disable the module on UEFI systems as a safety measure), Matthew Garrett discovered that the bug was actually triggered by storing too many UEFI variables to memory, and that the bug could also be triggered under Windows under certain conditions. In conclusion, he determined that the offending kernel module had caused kernel message dumps to be written to the firmware, thus triggering the bug.[54][185][186]
https://en.wikipedia.org/wiki/UEFI
Das U-Boot(subtitled "the Universal Boot Loader" and often shortened toU-Boot; seeHistoryfor more about the name) is anopen-sourceboot loaderused inembedded devicesto perform various low-level hardware initialization tasks and boot the device's operating system kernel. It is available for a number ofcomputer architectures, includingM68000,ARM,Blackfin,MicroBlaze,AArch64,MIPS,Nios II,SuperH,PPC,Power ISA,RISC-V,LoongArchandx86.
U-Boot is both a first-stage and second-stage bootloader. It is loaded by the system's ROM (e.g. on-chip ROM of an ARM CPU) from a supported boot device, such as an SD card, SATA drive, NOR flash (e.g. usingSPIorI²C), or NAND flash. If there are size constraints, U-Boot may be split into two stages: the platform would load a small SPL (Secondary Program Loader), which is a stripped-down version of U-Boot, and the SPL would do some initial hardware configuration (e.g.DRAMinitialization using CPU cache as RAM) and load the larger, fully featured version of U-Boot.[3][4][5]Regardless of whether the SPL is used, U-Boot performs both first-stage (e.g., configuringmemory controller,SDRAM,mainboardand other I/O devices) and second-stage booting (e.g., loadingOS kerneland other related files from storage device).
U-Boot implements a subset of theUEFIspecification as defined in the Embedded Base Boot Requirements (EBBR) specification.[6]UEFI binaries likeGRUBor theLinuxkernel can be booted via the boot manager or from the command-line interface.
U-Boot runs acommand-line interfaceon a console or a serial port. Using the CLI, users can load and boot a kernel, possibly changing parameters from the default. There are also commands to read device information, read and write flash memory, download files (kernels, boot images, etc.) from the serial port or network, manipulatedevice trees, and work with environment variables (which can be written to persistent storage, and are used to control U-Boot behavior such as the default boot command and timeout before auto-booting, as well as hardware data such as the Ethernet MAC address).
Unlike PC bootloaders which obscure or automatically choose the memory locations of the kernel and other boot data, U-Boot requires its boot commands to explicitly specify the physical memory addresses as destinations for copying data (kernel, ramdisk, device tree, etc.) and for jumping to the kernel and as arguments for the kernel. Because U-Boot's commands are fairly low-level, it takes several steps to boot a kernel, but this also makes U-Boot more flexible than other bootloaders, since the same commands can be used for more general tasks. It's even possible to upgrade U-Boot using U-Boot, simply by reading the new bootloader from somewhere (local storage, or from the serial port or network) into memory, and writing that data to persistent storage where the bootloader belongs.
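As a sketch of how such console commands are added to U-Boot itself (assuming a reasonably recent source tree, where the handler takes a struct cmd_tbl pointer; the hello command is hypothetical):

#include <common.h>
#include <command.h>

// Hypothetical command handler: prints a line and reports success.
static int do_hello(struct cmd_tbl *cmdtp, int flag, int argc,
		    char *const argv[])
{
	printf("hello from the U-Boot CLI\n");
	return CMD_RET_SUCCESS;
}

// Register the command with the CLI: name, max args, non-repeatable,
// handler, short usage text, long help text.
U_BOOT_CMD(
	hello, 1, 0, do_hello,
	"print a greeting",
	""
);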
U-Boot has support for USB, so it can use a USB keyboard to operate the console (in addition to input from the serial port), and it can access and boot from USB Mass Storage devices such as SD card readers.
U-Boot boots an operating system by reading the kernel and any other required data (e.g. device tree or ramdisk image) into memory, and then executing the kernel with the appropriate arguments.
U-Boot's commands are actually generalized commands which can be used to read or write any arbitrary data. Using these commands, data can be read from or written to any storage system that U-Boot supports, which include:
(Note: These are boot sources from which U-Boot is capable of loading data (e.g. a kernel or ramdisk image) into memory. U-Boot itself must be booted by the platform, and that must be done from a device that the platform's ROM is capable of booting from, which naturally depends on the platform.)
On some embedded device implementations, the CPU or SoC will locate and load the bootloader (such as Das U-Boot) from the boot partition (such asext4orFATfilesystems) directly.
U-Boot does not need to be able to read a filesystem in order for the kernel to use it as a root filesystem or initial ramdisk; U-Boot simply provides an appropriate parameter to the kernel, and/or copies the data to memory without understanding its contents.
However, U-Boot can also read from (and in some cases, write to) filesystems. This way, rather than requiring the data that U-Boot will load to be stored at a fixed location on the storage device, U-Boot can read the filesystem to search for and load the kernel, device tree, etc., by pathname.
U-Boot includes support for these filesystems:
Device treeis a data structure for describing hardware layout. Using Device tree, a vendor might be able to use a less modifiedmainlineU-Boot on otherwise special purpose hardware. As also adopted by the Linux kernel, Device tree is intended to ameliorate the situation in theembeddedindustry, where a vast number of product specificforks(of U-Boot and Linux) exist. The ability to run mainline software practically gives customers indemnity against lack of vendor updates.
The project started as an 8xx PowerPC bootloader called 8xxROM written by Magnus Damm.[7] In October 1999, Wolfgang Denk moved the project to SourceForge.net and renamed it PPCBoot, because SF.net did not allow project names starting with digits.[7] Version 0.4.1 of PPCBoot was first publicly released on July 19, 2000.
In 2002 a previous version of thesource codewas brieflyforkedinto a product calledARMBoot, but was merged back into the PPCBoot project shortly thereafter. On October 31, 2002PPCBoot−2.0.0was released. This marked the last release under the PPCBoot name, as it was renamed to reflect its ability to work on other architectures besides the PPC ISA.[8][9]
PPCBoot−2.0.0 becameU−Boot−0.1.0in November 2002, expanded to work on thex86processor architecture. Additional architecture capabilities were added in the following months:MIPS32in March 2003,MIPS64in April,Nios IIin October,ColdFirein December, andMicroBlazein April 2004. The May 2004 release of U-Boot-1.1.2 worked on the products of 216 board manufacturers across the various architectures.[9]
The current nameDas U-Bootadds aGerman definite article, to create a bilingualpunon the classic 1981 German submarine filmDas Boot, which takes place on a World War II GermanU-boat. It isfree softwarereleased under the terms of theGNU General Public License. It can be built on an x86 PC for any of its intended architectures using a cross development GNUtoolchain, for example crosstool, the Embedded Linux Development Kit (ELDK) or OSELAS.Toolchain.
The importance of U-Boot in embedded Linux systems is quite succinctly stated in the bookBuilding Embedded Linux Systems, by Karim Yaghmour, whose text about U-Boot begins, "Though there are quite a few other bootloaders, 'Das U-Boot', the universal bootloader, is arguably the richest, most flexible, and most actively developed open source bootloader available."[10]
In 2025, multiple vulnerabilities discovered in 2024 were disclosed in U-Boot.[11] By abusing U-Boot's filesystem support (ext4, SquashFS) through manually modified filesystem data structures, an attacker can cause an integer overflow, a stack overflow or a heap overflow. As a result, an attacker can achieve arbitrary code execution and bypass the boot chain of trust. These issues are mitigated in version v2025.01-rc1.
https://en.wikipedia.org/wiki/Das_U-Boot
Inoperating systems,memory managementis the function responsible for managing the computer'sprimary memory.[1]: 105–208
The memory management function keeps track of the status of each memory location, eitherallocatedorfree. It determines how memory is allocated among competing processes, deciding which gets memory, when they receive it, and how much they are allowed. When memory is allocated it determines which memory locations will be assigned. It tracks when memory is freed orunallocatedand updates the status.
This is distinct fromapplication memory management, which is how a process manages the memory assigned to it by the operating system.
Single allocationis the simplest memory management technique. All the computer's memory, usually with the exception of a small portion reserved for the operating system, is available to a single application.MS-DOSis an example of a system that allocates memory in this way. Anembedded systemrunning a single application might also use this technique.
A system using single contiguous allocation may stillmultitaskbyswappingthe contents of memory to switch among users. Early versions of theMUSICoperating system used this technique.
Partitioned allocationdivides primary memory into multiplememory partitions, usually contiguous areas of memory. Each partition might contain all the information for a specificjobortask. Memory management consists of allocating a partition to a job when it starts and unallocating it when the job ends.
Partitioned allocation usually requires some hardware support to prevent the jobs from interfering with one another or with the operating system. TheIBM System/360uses alock-and-keytechnique. TheUNIVAC 1108,PDP-6andPDP-10, andGE-600 seriesusebase and boundsregisters to indicate the ranges of accessible memory.
Partitions may be eitherstatic, that is defined atInitial Program Load(IPL) orboot time, or by thecomputer operator, ordynamic, that is, automatically created for a specific job.IBM System/360 Operating SystemMultiprogramming with a Fixed Number of Tasks(MFT) is an example of static partitioning, andMultiprogramming with a Variable Number of Tasks(MVT) is an example of dynamic. MVT and successors use the termregionto distinguish dynamic partitions from static ones in other systems.[2]
Partitions may be relocatable with base registers, as in the UNIVAC 1108, PDP-6 and PDP-10, and GE-600 series. Relocatable partitions are able to becompactedto provide larger chunks of contiguous physical memory. Compaction moves "in-use" areas of memory to eliminate "holes" or unused areas of memory caused by process termination in order to create larger contiguous free areas.[3]
Some systems allow partitions to beswapped outtosecondary storageto free additional memory. Early versions of IBM'sTime Sharing Option(TSO) swapped users in and out oftime-sharingpartitions.[4][a]
Paged allocationdivides the computer's primary memory into fixed-size units calledpage frames, and the program's virtualaddress spaceintopagesof the same size. The hardwarememory management unitmaps pages to frames. The physical memory can be allocated on a page basis while the address space appears contiguous.
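A toy sketch of the address split that paging performs; the 4 KiB page size and the page-to-frame mapping below are invented for illustration:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u   /* assumed page/frame size */

/* Toy page table: page number -> frame number. */
static uint32_t frame_of_page[1024] = { [0] = 7, [1] = 3 };

uint32_t translate(uint32_t vaddr)
{
    uint32_t page   = vaddr / PAGE_SIZE;   /* which page of the address space */
    uint32_t offset = vaddr % PAGE_SIZE;   /* position inside that page */
    return frame_of_page[page] * PAGE_SIZE + offset;
}

int main(void)
{
    /* 0x1234 lies in page 1, which this toy table maps to frame 3 (0x3234). */
    printf("0x%x -> 0x%x\n", 0x1234u, translate(0x1234u));
    return 0;
}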
Usually, with paged memory management, each job runs in its own address space. However, there are somesingle address space operating systemsthat run all processes within a single address space, such asIBM i, which runs all processes within a large address space, and IBMOS/VS1andOS/VS2 (SVS), which ran all jobs in a single 16MiB virtual address space.
Paged memory can bedemand-pagedwhen the system can move pages as required between primary and secondary memory.
Segmented memoryis the only memory management technique that does not provide the user's program with a "linear and contiguous address space."[1]: 165Segmentsare areas of memory that usually correspond to a logical grouping of information such as a code procedure or a data array. Segments require hardware support in the form of asegment tablewhich usually contains the physical address of the segment in memory, its size, and other data such as access protection bits and status (swapped in, swapped out, etc.)
Segmentation allows better access protection than other schemes because memory references are relative to a specific segment and the hardware will not permit the application to reference memory not defined for that segment.
It is possible to implement segmentation with or without paging. Without paging support the segment is the physical unit swapped in and out of memory if required. With paging support the pages are usually the unit of swapping and segmentation only adds an additional level of security.
Addresses in a segmented system usually consist of the segment id and an offset relative to the segment base address, defined to be offset zero.
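A corresponding toy sketch of segmented translation with a bounds check, using an invented segment table:

#include <stdint.h>
#include <stdbool.h>

struct segment { uint32_t base; uint32_t limit; };

/* Invented segment table for illustration only. */
static struct segment seg_table[] = {
    { 0x10000, 0x4000 },   /* segment 0: code */
    { 0x20000, 0x1000 },   /* segment 1: data */
};

bool seg_translate(uint32_t seg, uint32_t offset, uint32_t *phys)
{
    if (offset >= seg_table[seg].limit)
        return false;                       /* hardware would raise a fault */
    *phys = seg_table[seg].base + offset;   /* offset 0 maps to the base */
    return true;
}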
The IntelIA-32(x86) architecture allows a process to have up to 16,383 segments of up to 4GiB each. IA-32 segments are subdivisions of the computer'slinear address space, the virtual address space provided by the paging hardware.[5]
TheMulticsoperating system is probably the best known system implementing segmented memory. Multics segments are subdivisions of the computer'sphysical memoryof up to 256 pages, each page being 1K 36-bit words in size, resulting in a maximum segment size of 1MiB (with 9-bit bytes, as used in Multics). A process could have up to 4046 segments.[6]
Rollout/rollin (RO/RI) is a computer operating system memory management technique where the entire non-sharedcode and data of a running program is swapped out toauxiliary memory(disk or drum) to freemain storagefor another task. Programs may be rolled out "by demand end or...when waiting for some long event."[7]Rollout/rollin was commonly used intime-sharingsystems,[8]where the user's "think time" was relatively long compared to the time to do the swap.
Unlikevirtual storage—paging or segmentation, rollout/rollin does not require any special memory management hardware; however, unless the system has relocation hardware such as amemory maporbase and boundsregisters, the program must be rolled back in to its original memory locations. Rollout/rollin has been largely superseded by virtual memory.
Rollout/rollin was an optional feature ofOS/360 Multiprogramming with a Variable number of Tasks (MVT)
Rollout/rollin allows the temporary, dynamic expansion of a particular job beyond its originally specified region. When a job needs more space, rollout/rollin attempts to obtain unassigned storage for the job's use. If there is no such unassigned storage, another job is rolled out—i.e., is transferred to auxiliary storage—so that its region may be used by the first job. When released by the first job, this additional storage is again available, either (1) as unassigned storage, if that was its source, or (2) to receive the job to be transferred back into main storage (rolled in).[9]
In OS/360, rollout/rollin was used only for batch jobs, and rollin does not occur until the jobstep borrowing the region terminates.
https://en.wikipedia.org/wiki/Memory_management_(operating_systems)
Container Linux(formerlyCoreOS Linux) is a discontinuedopen-sourcelightweightoperating systembased on theLinux kerneland designed for providing infrastructure forclustereddeployments. One of its focuses wasscalability. As an operating system, Container Linux provided only the minimal functionality required for deploying applications insidesoftware containers, together with built-in mechanisms forservice discoveryand configuration sharing.[10][11][12][13][14]
Container Linux shares foundations withGentoo Linux,[15][16]ChromeOS, andChromiumOSthrough a commonsoftware development kit(SDK). Container Linux adds new functionality and customization to this shared foundation to support server hardware and use cases.[13][17]: 7:02CoreOS was developed primarily byAlex Polvi, Brandon Philips, and Michael Marineau,[12]with its major features available as astable release.[18][19][20]
The CoreOS team announced theend-of-lifefor Container Linux on May 26, 2020,[1]offeringFedora CoreOS,[21]and RHEL CoreOS as its replacement, both based onRed Hat Enterprise Linux.
Container Linux provides nopackage manageras a way for distributing payload applications, requiring instead all applications to run inside their containers. Serving as a single control host, a Container Linux instance uses the underlyingoperating-system-level virtualizationfeatures of the Linux kernel to create and configure multiple containers that perform as isolatedLinuxsystems. That way,resourcepartitioning between containers is performed through multiple isolateduserspaceinstances, instead of using ahypervisorand providing full-fledgedvirtual machines. This approach relies on the Linux kernel'scgroupsandnamespacesfunctionalities,[22][23]which together provide abilities to limit, account and isolate resource usage (CPU, memory, diskI/O, etc.) for the collections of userspaceprocesses.[11][14][24]
Initially, Container Linux exclusively used Docker as a component providing an additional layer of abstraction and interface[25] to the operating-system-level virtualization features of the Linux kernel, as well as providing a standardized format for containers that allows applications to run in different environments.[11][24] In December 2014, CoreOS released and started to support rkt (initially released as Rocket) as an alternative to Docker, providing through it another standardized format of the application-container images, the related definition of the container runtime environment, and a protocol for discovering and retrieving container images.[26][27][28][29] CoreOS provides rkt as an implementation of the so-called app container (appc) specification that describes the required properties of the application container image (ACI). CoreOS created appc and ACI as an independent, committee-steered set of specifications[30][31] aimed at becoming part of the vendor- and operating-system-independent Open Container Initiative (OCI; initially named the Open Container Project, OCP) containerization standard,[32] which was announced[by whom?] in June 2015.[33][34][35]
Container Linux usesebuildscripts from Gentoo Linux for automatedcompilationof its system components,[15][16]and usessystemdas its primaryinitsystem, with tight integration between systemd and various Container Linux's internal mechanisms.[11][36]
Container Linux achieves additional security and reliability of its operating systemupdatesby employingFastPatchas a dual-partition scheme for the read-only part of its installation, meaning that the updates are performed as a whole and installed onto a passive secondary bootpartitionthat becomes active upon a reboot orkexec. This approach avoids possible issues arising from updating only certain parts of the operating system, ensures easy rollbacks to a known-to-be-stable version of the operating system, and allows each boot partition to besignedfor additional security.[11][14][37]The root partition and itsroot file systemare automatically resized to fill all available disk-space upon reboots; while the root partition provides read-write storage space, the operating system itself ismountedread-only under/usr.[38][39][40]
To ensure that only a certain part of theclusterreboots at once when the operating system updates are applied, preserving the resources required for running deployed applications, CoreOS provideslocksmithas arebootmanager for Container Linux.[41]Using locksmith, one can select between different update strategies that are determined by how the reboots are performed as the last step in applying updates; for example, one can configure how many cluster members are allowed to reboot simultaneously. Internally, locksmith operates as thelocksmithddaemonthat runs on cluster members, while thelocksmithctlcommand-line utilitymanages configuration parameters.[42][43]Locksmith is written in theGo languageand distributed under the terms of theApache License 2.0.[44]
The updates distribution system employed by Container Linux is based onGoogle's open-sourceOmahaproject, which provides a mechanism for rolling out updates and the underlyingrequest–responseprotocol based onXML.[6][45][46]Additionally, CoreOS providesCoreUpdateas a web-baseddashboardfor the management of cluster-wide updates. Operations available through CoreUpdate include assigning cluster members to different groups that share customized update policies, reviewing cluster-wide breakdowns of Container Linux versions, stopping and restarting updates, and reviewing recorded update logs. CoreUpdate also provides anHTTP-basedAPIthat allows its integration into third-party utilities ordeployment systems.[37][47][48]
Container Linux provides etcd, a daemon that runs across all computers in a cluster and provides a dynamic configuration registry, allowing various configuration data to be easily and reliably shared between the cluster members.[6][38]Since thekey–value datastored withinetcdis automaticallydistributedandreplicatedwith automatedmaster electionandconsensusestablishment using theRaftalgorithm, all changes in stored data are reflected across the entire cluster, while the achievedredundancyprevents failures of single cluster members from causing data loss.[29][50]Beside the configuration management,etcdalso providesservice discoveryby allowing deployed applications to announce themselves and the services they offer. Communication withetcdis performed through an exposedREST-based API, which internally usesJSONon top of HTTP; the API may be used directly (throughcurlorwget, for example), or indirectly throughetcdctl, which is a specialized command-line utility also supplied by CoreOS.[11][14][51][52][53]etcd is also used inKubernetessoftware.
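As a minimal sketch of using that HTTP API directly from C via libcurl rather than through curl or etcdctl (the key name, listening address and v2 endpoint are assumptions about a particular deployment):

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    /* GET the value stored under /message; etcd answers with a JSON document,
       which the default write callback prints to stdout. */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "http://127.0.0.1:2379/v2/keys/message");
    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK ? 0 : 1;
}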
Container Linux also provides thefleetcluster manager, which controls Container Linux's separate systemd instances at the cluster level. As of 2017, "fleet" is no longer actively developed and is deprecated in favor of Kubernetes.[54]By usingfleetd, Container Linux creates a distributedinit systemthat ties together separate systemd instances and a cluster-wideetcddeployment;[50]internally,fleetddaemon communicates with localsystemdinstances overD-Bus, and with theetcddeployment through its exposed API. Usingfleetdallows the deployment of single or multiplecontainerscluster-wide, with more advanced options includingredundancy,failover, deployment to specific cluster members, dependencies between containers, and grouped deployment of containers. A command-line utility calledfleetctlis used to configure and monitor this distributed init system;[55]internally, it communicates with thefleetddaemon using a JSON-based API on top of HTTP, which may also be used directly. When used locally on a cluster member,fleetctlcommunicates with the localfleetdinstance over aUnix domain socket; when used from an external host,SSH tunnelingis used with authentication provided throughpublic SSH keys.[56][57][58][59][60]
All of the above-mentioned daemons and command-line utilities (etcd,etcdctl,fleetdandfleetctl) are written in the Go language and distributed under the terms of the Apache License 2.0.[8][61]
When running on dedicated hardware, Container Linux can be either permanently installed on local storage, such as a hard disk drive (HDD) or solid-state drive (SSD),[62] or booted remotely over a network using Preboot Execution Environment (PXE) in general, or iPXE as one of its implementations.[63][64] CoreOS also supports deployments on various hardware virtualization platforms, including Amazon EC2, DigitalOcean, Google Compute Engine, Microsoft Azure, OpenStack, QEMU/KVM, Vagrant and VMware.[14][65][66][67] Container Linux may also be installed on Citrix XenServer, for which a CoreOS "template" exists.
Container Linux can also be deployed through its commercial distribution calledTectonic, which additionally integrates Google'sKubernetesas a cluster management utility. As of April 2015[update], Tectonic is planned to be offered asbeta softwareto select customers.[30][68][69]Furthermore, CoreOS providesFlannelas a component, implementing anoverlay networkrequired primarily for the integration with Kubernetes.[30][70][71]
As of February 2015[update], Container Linux supports only thex86-64architecture.[6]
Following its acquisition of CoreOS, Inc.[72]in January 2018, Red Hat announced[73]that it would be merging CoreOS Container Linux with Red Hat's Project Atomic to create a new operating system, Red Hat CoreOS, while aligning the upstream Fedora Project open source community around Fedora CoreOS, combining technologies from both predecessors.
On March 6, 2018, Kinvolk GmbH announced[74]Flatcar Container Linux, a derivative of CoreOS Container Linux. This tracks the upstream CoreOS alpha, beta, and stable channel releases, with an experimental Edge release channel added in May 2019.[75]
LWN.netreviewed CoreOS in 2014:[76]
For those who are putting together large, distributed systems—web applications being a prime example—CoreOS would appear to have a lot of interesting functionality. It should allow applications of that type to grow and shrink as needed with demand, as well as provide a stable platform where upgrades are not a constant headache. For "massive server deployments", CoreOS, or something with many of the same characteristics, looks like the future.
https://en.wikipedia.org/wiki/Container_Linux
Insystem administration,orchestrationis theautomatedconfiguration, coordination,[1]deployment,development, andmanagementofcomputer systemsandsoftware.[2]Many tools existto automate server configuration and management.
Orchestration is often discussed in the context ofservice-oriented architecture,virtualization,provisioning,converged infrastructureand dynamicdata centertopics. Orchestration in this sense is about aligning the business request with the applications, data, and infrastructure.[3]
In the context ofcloud computing, the main difference betweenworkflow automationand orchestration is that workflows are processed and completed as processes within a single domain for automation purposes, whereas orchestration includes a workflow and provides a directed action towards larger goals and objectives.[2]
In this context, and with the overall aim of achieving specific goals and objectives (described through quality of service parameters), such as meeting application performance goals at minimized cost[4] or maximizing application performance within budget constraints,[5] cloud management solutions also encompass frameworks for workflow mapping and management.
|
https://en.wikipedia.org/wiki/Orchestration_(computing)
|
Flatpakis autilityforsoftware deploymentandpackage managementforLinux. It provides asandboxenvironment in which users can runapplication softwarein (partial) isolation from the rest of the system.[5][6]Flatpak was known asxdg-app until 2016.[7]
Applications using Flatpak need permissions to access resources such asBluetooth, sound (withPulseAudio),network, andfiles. These permissions are configured by the maintainer of the Flatpak and can be added or removed by users on their system.[8][9]
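As a hedged illustration of how a user might adjust these permissions from a script, the following Python sketch shells out to the flatpak command-line tool; the application ID org.example.App is a placeholder, and the exact flags should be verified against the installed Flatpak version.

import subprocess

# Placeholder application ID (assumption, not from the article).
APP_ID = "org.example.App"

def revoke_home_access(app_id: str) -> None:
    # Remove the sandboxed app's access to the user's home directory.
    subprocess.run(["flatpak", "override", "--user",
                    "--nofilesystem=home", app_id], check=True)

def show_permissions(app_id: str) -> str:
    # Show the static permissions declared by the Flatpak's maintainer.
    result = subprocess.run(["flatpak", "info", "--show-permissions", app_id],
                            capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    revoke_home_access(APP_ID)
    print(show_permissions(APP_ID))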
Another key feature of Flatpak allows application developers to directly provide updates to users without going throughLinux distributions, and without having to package and test the application separately for each distribution.[10]
Because Flatpak runs in a sandbox (which provides a separate,ABI-stable version of common system libraries), it uses more space on the system than common native packages. However,OSTree, a technology underlying Flatpak,deduplicatesmatching files. This means that the first few Flatpak installations will occupy more space, but as more packages are added, the system will use space more efficiently.[11]
Flathub, a centralized repository (or remote source in the Flatpak terminology) located atflathub.org, is thede factostandardfor obtaining applications packaged with Flatpak.[12]Packages are contributed by both Flathub administrators and application developers, with a stated preference for submissions from the developers themselves.[13]
AlthoughFlathubis the de facto source for applications packaged with Flatpak, it is possible to host a Flatpak repository that is independent of Flathub.[14][15][16]
Theoretically, Flatpak apps can be installed on any existing and futureLinux distribution, including those installed with theWindows Subsystem for Linuxcompatibility layer, so long asBubblewrapandOSTreeare available.
It can also be used onLinux kernel-based systems likeChromeOS.[17]
|
https://en.wikipedia.org/wiki/Flatpak
|
cgroups(abbreviated fromcontrol groups) is aLinux kernelfeature that limits, accounts for, and isolates theresource usage(CPU, memory, disk I/O, etc.[1]) of a collection ofprocesses.
Engineers atGooglestarted the work on this feature in 2006 under the name "process containers".[2]In late 2007, the nomenclature changed to "control groups" to avoid confusion caused by multiple meanings of the term "container" in the Linux kernel context, and the control groups functionality was merged into theLinux kernel mainlinein kernel version 2.6.24, which was released in January 2008.[3]Since then, developers have added many new features and controllers, such as support forkernfsin 2014,[4]firewalling,[5]and unified hierarchy.[6]cgroup v2 was merged in Linux kernel 4.5[7]with significant changes to the interface and internal functionality.[8]
There are two versions of cgroups.
Cgroups was originally written by Paul Menage and Rohit Seth, and merged into the mainline Linux kernel in 2007. This original implementation is now referred to as cgroups version 1.[9]
Development and maintenance of cgroups was then taken over by Tejun Heo, who redesigned and rewrote them. This rewrite is now called version 2; the documentation of cgroup-v2 first appeared in Linux kernel 4.5, released on 14 March 2016.[7]
Unlike v1, cgroup v2 has only a single process hierarchy and discriminates between processes, not threads.
One of the design goals of cgroups is to provide a unified interface to many different use cases, from controlling single processes (by using nice, for example) to full operating system-level virtualization (as provided by OpenVZ, Linux-VServer or LXC, for example). Cgroups provides resource limiting, prioritization, accounting and control for the groups of processes it manages.
A control group (abbreviated as cgroup) is a collection of processes that are bound by the same criteria and associated with a set of parameters or limits. These groups can be hierarchical, meaning that each group inherits limits from its parent group. The kernel provides access to multiple controllers (also called subsystems) through the cgroup interface;[3]for example, the "memory" controller limits memory use, "cpuacct" accounts CPU usage, etc.
Control groups can be used in multiple ways: by accessing the cgroup virtual file system manually; by creating and managing groups with tools such as cgcreate, cgexec and cgclassify from libcgroup; through a rules-engine daemon that automatically moves processes of certain users, groups or commands into cgroups as specified in its configuration; or indirectly through other software that uses cgroups, such as Docker, LXC, libvirt and systemd.
The Linux kernel documentation contains some technical details of the setup and use of control groups version 1[21] and version 2.[22] The systemd-cgtop[23] command can be used to show the top control groups by their resource usage.
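For a concrete sense of the cgroup v2 interface, the following Python sketch creates a group by making a directory in the cgroup file system, applies a memory limit through the "memory" controller, and moves the current process into the group. It assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup, the memory controller enabled for child groups, and sufficient privileges; the group name "demo" is arbitrary.

import os

# Assumptions: cgroup v2 mounted at /sys/fs/cgroup, memory controller
# enabled for child groups, and enough privileges to create groups.
CGROUP_ROOT = "/sys/fs/cgroup"
GROUP = os.path.join(CGROUP_ROOT, "demo")

def create_group_with_memory_limit(limit_bytes: int) -> None:
    os.makedirs(GROUP, exist_ok=True)      # creating the directory creates the cgroup
    with open(os.path.join(GROUP, "memory.max"), "w") as f:
        f.write(str(limit_bytes))          # limit enforced by the "memory" controller

def add_current_process() -> None:
    with open(os.path.join(GROUP, "cgroup.procs"), "w") as f:
        f.write(str(os.getpid()))          # move this process into the group

if __name__ == "__main__":
    create_group_with_memory_limit(256 * 1024 * 1024)   # 256 MiB
    add_current_process()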
Redesign of cgroups started in 2013,[24]with additional changes brought by versions 3.15 and 3.16 of the Linux kernel.[25][26][27]
While not technically part of the cgroups work, a related feature of the Linux kernel isnamespace isolation, where groups of processes are separated such that they cannot "see" resources in other groups. For example, a PID namespace provides a separate enumeration ofprocess identifierswithin each namespace. Also available are mount, user, UTS (Unix Time Sharing), network and SysV IPC namespaces.
Namespaces are created with the "unshare" command orsyscall, or as "new" flags in a "clone" syscall.[33]
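As a minimal sketch of the unshare() path, the following Python code calls the syscall through libc to enter a new UTS namespace and change the hostname there; it requires root (CAP_SYS_ADMIN), and the flag value is copied from the Linux uapi headers.

import ctypes
import ctypes.util
import socket

# Flag for a new UTS namespace, as defined in the Linux uapi headers.
CLONE_NEWUTS = 0x04000000

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def new_uts_namespace(hostname: str) -> None:
    # Requires CAP_SYS_ADMIN; moves only this process into a new UTS namespace.
    if libc.unshare(CLONE_NEWUTS) != 0:
        raise OSError(ctypes.get_errno(), "unshare(CLONE_NEWUTS) failed")
    # The hostname change is visible only inside the new namespace.
    if libc.sethostname(hostname.encode(), len(hostname)) != 0:
        raise OSError(ctypes.get_errno(), "sethostname failed")

if __name__ == "__main__":
    new_uts_namespace("sandboxed-host")
    print(socket.gethostname())   # prints "sandboxed-host" for this process only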
The "ns" subsystem was added early in cgroups development to integrate namespaces and control groups. If the "ns" cgroup was mounted, each namespace would also create a new group in the cgroup hierarchy. This was an experiment that was later judged to be a poor fit for the cgroups API, and removed from the kernel.
Linux namespaces were inspired by the more general namespace functionality used heavily throughoutPlan 9 from Bell Labs.[34]
Kernfs was introduced into the Linux kernel with version 3.14 in March 2014, the main author being Tejun Heo.[35] One of the main motivators for a separate kernfs was the cgroups file system. Kernfs was basically created by splitting off some of the sysfs logic into an independent entity, making it easier for other kernel subsystems to implement their own virtual file systems with handling for device connect and disconnect, dynamic creation and removal, and other attributes. Redesign continued into version 3.15 of the Linux kernel.[36]
Kernel memory control groups (kmemcg) were merged into version 3.8 (released 18 February 2013) of the Linux kernel mainline.[37][38][39] The kmemcg controller can limit the amount of memory that the kernel can utilize to manage its own internal processes.
Linux kernel 4.19 (October 2018) introduced cgroup awareness in the OOM killer implementation, which adds the ability to kill a cgroup as a single unit and so guarantee the integrity of the workload.[40]
Various projects use cgroups as their basis, includingCoreOS,Docker(in 2013),Hadoop,Jelastic,Kubernetes,[41]lmctfy(Let Me Contain That For You),LXC(Linux Containers),systemd,Mesosand Mesosphere,[41]andHTCondor.
Major Linux distributions have also adopted cgroups; for example, Red Hat Enterprise Linux (RHEL) 6.0 shipped them in November 2010, three years before adoption by the mainline Linux kernel.[42]
On 29 October 2019, the Fedora Project modified Fedora 31 to use cgroups v2 by default.[43]
|
https://en.wikipedia.org/wiki/Cgroups
|
Namespacesare a feature of theLinux kernelthat partition kernel resources such that one set ofprocessessees one set of resources, while another set of processes sees a different set of resources. The feature works by having the same namespace for a set of resources and processes, but those namespaces refer to distinct resources. Resources may exist in multiple namespaces. Examples of such resources are process IDs, host-names, user IDs, file names, some names associated with network access, andInter-process communication.
Namespaces are a required aspect of functioningcontainersin Linux. The term "namespace" is often used to denote a specific type of namespace (e.g., process ID) as well as for a particular space of names.[1]
A Linux system begins with a single namespace of each type, used by all processes. Processes can create additional namespaces and can also join different namespaces.
Linux namespaces were inspired by the wider namespace functionality used heavily throughout Plan 9 from Bell Labs.[2] Linux namespaces originated in 2002 in the 2.4.19 kernel with work on the mount namespace kind. Additional kinds of namespaces were added beginning in 2006[3] and have continued to be added since.
Adequate container support functionality was finished in kernel version 3.8[4][5] with the introduction of user namespaces.[6]
Sincekernelversion 5.6, there are 8 kinds of namespaces. Namespace functionality is the same across all kinds: each process is associated with a namespace and can only see or use the resources associated with that namespace, and descendant namespaces where applicable. This way, each process (or process group thereof) can have a unique view on the resources. Which resource is isolated depends on the kind of namespace that has been created for a given process group.
Mount namespaces controlmount points. Upon creation the mounts from the current mount namespace are copied to the new namespace, but mount points created afterwards do not propagate between namespaces (using shared subtrees, it is possible to propagate mount points between namespaces[7]).
The clone flag used to create a new namespace of this type is CLONE_NEWNS - short for "NEW NameSpace". This term is not descriptive (it does not tell which kind of namespace is to be created) because mount namespaces were the first kind of namespace and designers did not anticipate there being any others.
The PID namespace provides processes with an independent set of process IDs (PIDs) from other namespaces. PID namespaces are nested, meaning that when a new process is created it will have a PID in each namespace from its current namespace up to the initial PID namespace. Hence, the initial PID namespace is able to see all processes, albeit with PIDs that differ from those seen by other namespaces.
The first process created in a PID namespace is assigned the process ID number 1 and receives most of the same special treatment as the normalinitprocess, most notably thatorphaned processeswithin the namespace are attached to it. This also means that the termination of this PID 1 process will immediately terminate all processes in its PID namespace and any descendants.[8]
Network namespaces virtualize the network stack. On creation, a network namespace contains only a loopback interface. Each network interface (physical or virtual) is present in exactly one namespace and can be moved between namespaces.
Each namespace will have a private set ofIP addresses, its ownrouting table,socketlisting, connection tracking table,firewall, and other network-related resources.
Destroying a network namespace destroys any virtual interfaces within it and moves any physical interfaces within it back to the initial network namespace.
IPC namespaces isolate processes from SysV style inter-process communication. This prevents processes in different IPC namespaces from using, for example, the SHM family of functions to establish a range of shared memory between the two processes. Instead, each process can use the same identifiers for a shared memory region and yet obtain two distinct regions.
UTS (UNIXTime-Sharing) namespaces allow a single system to appear to have differenthostanddomain namesto different processes. When a process creates a new UTS namespace, the hostname and domain of the new UTS namespace are copied from the corresponding values in the caller's UTS namespace.[9]
User namespaces are a feature to provide both privilege isolation and user identification segregation across multiple sets of processes, available since kernel 3.8.[10]With administrative assistance, it is possible to build a container with seeming administrative rights without actually giving elevated privileges to user processes. Like the PID namespace, user namespaces are nested, and each new user namespace is considered to be a child of the user namespace that created it.
A user namespace contains a mapping table converting user IDs from the container's point of view to the system's point of view. This allows, for example, therootuser to have user ID 0 in the container but is actually treated as user ID 1,400,000 by the system for ownership checks. A similar table is used for group ID mappings and ownership checks.
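The mapping table can be written directly, as in the hedged Python sketch below: the process creates a user namespace and then maps UID 0 inside the namespace to its own UID outside it. This generally works unprivileged only on kernels and distributions that permit unprivileged user namespaces.

import ctypes
import ctypes.util
import os

# Flag for a new user namespace, as defined in the Linux uapi headers.
CLONE_NEWUSER = 0x10000000

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def map_root_to_current_user() -> None:
    uid, gid = os.getuid(), os.getgid()
    if libc.unshare(CLONE_NEWUSER) != 0:
        raise OSError(ctypes.get_errno(), "unshare(CLONE_NEWUSER) failed")
    # Each mapping line is "<id inside ns> <id outside ns> <range length>".
    with open("/proc/self/setgroups", "w") as f:
        f.write("deny")                    # required before writing gid_map
    with open("/proc/self/uid_map", "w") as f:
        f.write(f"0 {uid} 1")
    with open("/proc/self/gid_map", "w") as f:
        f.write(f"0 {gid} 1")

if __name__ == "__main__":
    map_root_to_current_user()
    print(os.getuid())   # 0: this process now appears as root inside its namespace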
To facilitate privilege isolation of administrative actions, each namespace type is considered owned by a user namespace based on the active user namespace at the moment of creation. A user with administrative privileges in the appropriate user namespace will be allowed to perform administrative actions within that other namespace type. For example, if a process has administrative permission to change the IP address of a network interface, it may do so as long as its own user namespace is the same as (or ancestor of) the user namespace that owns the network namespace. Hence, the initial user namespace has administrative control over all namespace types in the system.[11]
Thecgroupnamespace type hides the identity of thecontrol groupof which the process is a member. A process in such a namespace, checking which control group any process is part of, would see a path that is actually relative to the control group set at creation time, hiding its true control group position and identity. This namespace type has existed since March 2016 in Linux 4.6.[12][13]
The time namespace allows processes to see different system times in a way similar to the UTS namespace. It was proposed in 2018 and was released in Linux 5.6, in March 2020.[14]
The syslog namespace was proposed by Rui Xiang, an engineer atHuawei, but wasn't merged into the Linux kernel.[15]systemdimplemented a similar feature called “journal namespace” in February 2020.[16]
The kernel assigns each process a symbolic link per namespace kind in/proc/<pid>/ns/. The inode number pointed to by this symlink is the same for each process in this namespace. This uniquely identifies each namespace by the inode number pointed to by one of its symlinks.
Reading the symlink via readlink returns a string containing the namespace kind name and the inode number of the namespace.
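A short Python sketch of this inspection mechanism, using nothing beyond the standard library:

import os

def namespaces(pid="self"):
    # List the per-namespace symlinks the kernel exposes under /proc/<pid>/ns/.
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

if __name__ == "__main__":
    for kind, target in namespaces().items():
        print(f"{kind:10s} -> {target}")   # e.g. "pid        -> pid:[4026531836]"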
Three syscalls can directly manipulate namespaces: clone, which can create a new process in new namespaces; unshare, which moves the calling process into new namespaces; and setns, which joins an existing namespace referred to by a file descriptor.
If a namespace is no longer referenced, it will be deleted; how the contained resources are handled depends on the namespace kind. Namespaces can be referenced in three ways: by a process belonging to the namespace, by an open file descriptor to the namespace's /proc/<pid>/ns/ file, and by a bind mount of that file.
Various container software use Linux namespaces in combination withcgroupsto isolate their processes, includingDocker[17]andLXC.
Other applications, such as Google Chrome, make use of namespaces to isolate their own processes that are at risk of attack from the internet.[18]
There is also an unshare wrapper in util-linux, which exposes this functionality as a command-line tool.
|
https://en.wikipedia.org/wiki/Linux_namespaces
|
Ahypervisor, also known as avirtual machine monitor(VMM) orvirtualizer, is a type of computersoftware,firmwareorhardwarethat creates and runsvirtual machines. A computer on which a hypervisor runs one or more virtual machines is called ahost machine, and each virtual machine is called aguest machine. The hypervisor presents the guest operating systems with avirtual operating platformand manages the execution of the guest operating systems. Unlike anemulator, the guest executes most instructions on the native hardware.[1]Multiple instances of a variety of operating systems may share the virtualized hardware resources: for example,Linux,Windows, andmacOSinstances can all run on a single physicalx86machine. This contrasts withoperating-system–level virtualization, where all instances (usually calledcontainers) must share a single kernel, though the guest operating systems can differ inuser space, such as differentLinux distributionswith the same kernel.
The termhypervisoris a variant ofsupervisor, a traditional term for thekernelof anoperating system: the hypervisor is the supervisor of the supervisors,[2]withhyper-used as a stronger variant ofsuper-.[a]The term dates to circa 1970;[3]IBM coined it for software that ranOS/360and the 7090 emulator concurrently on the360/65[4]and later used it for the DIAG handler of CP-67. In the earlierCP/CMS(1967) system, the termControl Programwas used instead.
Some literature, especially inmicrokernelcontexts, makes a distinction betweenhypervisorandvirtual machine monitor(VMM). There, both components form the overallvirtualization stackof a certain system.Hypervisorrefers tokernel-spacefunctionality and VMM touser-spacefunctionality. Specifically in these contexts, ahypervisoris a microkernel implementing virtualization infrastructure that must run in kernel-space for technical reasons, such asIntel VMX. Microkernels implementing virtualization mechanisms are also referred to asmicrohypervisor.[5][6]Applying this terminology toLinux,KVMis ahypervisorandQEMUorCloud Hypervisorare VMMs utilizing KVM as hypervisor.[7]
In his 1973 thesis, "Architectural Principles for Virtual Computer Systems," Robert P. Goldberg classified two types of hypervisor:[1] type-1 (native or bare-metal) hypervisors, which run directly on the host's hardware to control the hardware and manage guest operating systems, and type-2 (hosted) hypervisors, which run as a program on a conventional operating system, with guest operating systems running as processes on the host.
The distinction between these two types is not always clear. For instance,KVMandbhyvearekernel modules[9]that effectively convert the host operating system to a type-1 hypervisor.[10]
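As a small, hedged illustration of the KVM kernel-module approach, the following Python snippet probes the /dev/kvm device with the KVM_GET_API_VERSION ioctl (0xAE00 in the Linux uapi headers); it assumes read/write access to /dev/kvm, which usually requires membership in the kvm group or root.

import fcntl
import os

# KVM_GET_API_VERSION = _IO(0xAE, 0x00) in the Linux uapi headers.
KVM_GET_API_VERSION = 0xAE00

def kvm_api_version() -> int:
    fd = os.open("/dev/kvm", os.O_RDWR)
    try:
        return fcntl.ioctl(fd, KVM_GET_API_VERSION)
    finally:
        os.close(fd)

if __name__ == "__main__":
    print("KVM API version:", kvm_api_version())   # stable KVM reports 12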
The first hypervisors providingfull virtualizationwere the test toolSIMMONand the one-offIBM CP-40research system, which began production use in January 1967 and became the first version of the IBMCP/CMSoperating system. CP-40 ran on aS/360-40modified at theCambridge Scientific Centerto supportdynamic address translation, a feature that enabled virtualization. Prior to this time, computer hardware had only been virtualized to the extent to allow multiple user applications to run concurrently, such as inCTSSandIBM M44/44X. With CP-40, the hardware'ssupervisor statewas virtualized as well, allowing multiple operating systems to run concurrently in separatevirtual machinecontexts.
Programmers soon implemented CP-40 (asCP-67) for theIBM System/360-67, the first production computer system capable of full virtualization. IBM shipped this machine in 1966; it includedpage-translation-tablehardware for virtual memory and other techniques that allowed a full virtualization of all kernel tasks, including I/O and interrupt handling. (The "official" operating system, the ill-fatedTSS/360, did not employ full virtualization.) Both CP-40 and CP-67 began production use in 1967.CP/CMSwas available to IBM customers from 1968 to early 1970s, in source code form without support.
CP/CMSformed part of IBM's attempt to build robusttime-sharingsystems for itsmainframecomputers. By running multiple operating systems concurrently, the hypervisor increased system robustness and stability: Even if one operating system crashed, the others would continue working without interruption. Indeed, this even allowedbetaor experimental versions of operating systems—or even of new hardware[11]—to be deployed and debugged, without jeopardizing the stable main production system, and without requiring costly additional development systems.
IBM announced itsSystem/370series in 1970 without thevirtual memoryfeature needed for virtualization, but added it in the August 1972 Advanced Function announcement. Virtualization has been featured in all successor systems, such that all modern-day IBM mainframes, including thezSeriesline, retain backward compatibility with the 1960s-era IBM S/360 line. The 1972 announcement also includedVM/370, a reimplementation ofCP/CMSfor the S/370. UnlikeCP/CMS, IBM provided support for this version (though it was still distributed in source code form for several releases).VMstands forVirtual Machine, emphasizing that all, not just some, of the hardware interfaces are virtualized. Both VM and CP/CMS enjoyed early acceptance and rapid development by universities, corporate users, andtime-sharingvendors, as well as within IBM. Users played an active role in ongoing development, anticipating trends seen in modernopen sourceprojects. However, in a series of disputed and bitter battles, time-sharing lost out tobatch processingthrough IBM political infighting, and VM remained IBM's "other" mainframe operating system for decades, losing toMVS. It enjoyed a resurgence of popularity and support from 2000 as thez/VMproduct, for example as the platform forLinux on IBM Z.
As mentioned above, the VM control program includes ahypervisor-callhandler that intercepts DIAG ("Diagnose", opcode x'83') instructions used within a virtual machine. This provides fast-path non-virtualized execution of file-system access and other operations (DIAG is a model-dependent privileged instruction, not used in normal programming, and thus is not virtualized. It is therefore available for use as a signal to the "host" operating system). When first implemented inCP/CMSrelease 3.1, this use of DIAG provided an operating system interface that was analogous to theSystem/360Supervisor Call instruction(SVC), but that did not require altering or extending the system's virtualization of SVC.
In 1985 IBM introduced thePR/SMhypervisor to managelogical partitions(LPAR).
Several factors led to a resurgence around 2005 in the use of virtualization technology among Unix, Linux, and other Unix-like operating systems.[12]
Major Unix vendors, includingHP,IBM,SGI, andSun Microsystems, have been selling virtualized hardware since before 2000. These have generally been large, expensive systems (in the multimillion-dollar range at the high end), although virtualization has also been available on some low- and mid-range systems, such as IBMpSeriesservers,HP Superdomeseries machines, andSun/OracleT-series CoolThreads servers.
AlthoughSolarishas always been the only guest domain OS officially supported by Sun/Oracle on theirLogical Domainshypervisor, as of late 2006,Linux(Ubuntu and Gentoo), andFreeBSDhave been ported to run on top of the hypervisor (and can all run simultaneously on the same processor, as fully virtualized independent guest OSes). Wind River "Carrier Grade Linux" also runs on Sun's Hypervisor.[13]Full virtualization onSPARCprocessors proved straightforward: since its inception in the mid-1980s Sun deliberately kept the SPARC architecture clean of artifacts that would have impeded virtualization. (Compare with virtualization on x86 processors below.)[14]
HPE provides HP Integrity Virtual Machines (Integrity VM) to host multiple operating systems on their Itanium-powered Integrity systems. Itanium can run HP-UX, Linux, Windows and OpenVMS, and these environments are also supported as virtual servers on HP's Integrity VM platform. The HP-UX operating system hosts the Integrity VM hypervisor layer, which allows many features of HP-UX to be taken advantage of and provides major differentiation between this platform and other commodity platforms, such as processor hotswap, memory hotswap, and dynamic kernel updates without system reboot. While it heavily leverages HP-UX, the Integrity VM hypervisor is really a hybrid that runs on bare metal while guests are executing. Running normal HP-UX applications on an Integrity VM host is heavily discouraged, because Integrity VM implements its own memory management, scheduling and I/O policies that are tuned for virtual machines and are not as effective for normal applications. HPE also provides more rigid partitioning of their Integrity and HP9000 systems by way of VPAR and nPar technology, the former offering shared resource partitioning and the latter offering complete I/O and processing isolation. The flexibility of the virtual server environment (VSE) has led to its more frequent use in newer deployments.
IBM provides virtualization partition technology known aslogical partitioning(LPAR) onSystem/390,zSeries,pSeriesandIBM AS/400systems. For IBM's Power Systems, the POWER Hypervisor (PHYP) is a native (bare-metal) hypervisor in firmware and provides isolation between LPARs. Processor capacity is provided to LPARs in either a dedicated fashion or on an entitlement basis where unused capacity is harvested and can be re-allocated to busy workloads. Groups of LPARs can have their processor capacity managed as if they were in a "pool" - IBM refers to this capability as Multiple Shared-Processor Pools (MSPPs) and implements it in servers with thePOWER6processor. LPAR and MSPP capacity allocations can be dynamically changed. Memory is allocated to each LPAR (at LPAR initiation or dynamically) and is address-controlled by the POWER Hypervisor. For real-mode addressing by operating systems (AIX,Linux,IBM i), thePowerprocessors (POWER4onwards) have designed virtualization capabilities where a hardware address-offset is evaluated with the OS address-offset to arrive at the physical memory address. Input/Output (I/O) adapters can be exclusively "owned" by LPARs or shared by LPARs through an appliance partition known as the Virtual I/O Server (VIOS). The Power Hypervisor provides for high levels of reliability, availability and serviceability (RAS) by facilitating hot add/replace of multiple parts (model dependent: processors, memory, I/O adapters, blowers, power units, disks, system controllers, etc.)
Similar trends have occurred with x86/x86-64 server platforms, whereopen-sourceprojects such asXenhave led virtualization efforts. These include hypervisors built on Linux and Solaris kernels as well as custom kernels. Since these technologies span from large systems down to desktops, they are described in the next section.
X86 virtualization was introduced in the 1990s in the form of software emulation, as included in Bochs.[15] Intel and AMD released their first x86 processors with hardware virtualization in 2005, with Intel VT-x (code-named Vanderpool) and AMD-V (code-named Pacifica).
An alternative approach requires modifying the guest operating system to make asystem callto the underlying hypervisor, rather than executing machine I/O instructions that the hypervisor simulates. This is calledparavirtualizationinXen, a "hypercall" inParallels Workstation, and a "DIAGNOSE code" in IBMVM. Some microkernels, such asMachandL4, are flexible enough to allow paravirtualization of guest operating systems.
Embedded hypervisors, targetingembedded systemsand certainreal-time operating system(RTOS) environments, are designed with different requirements when compared to desktop and enterprise systems, including robustness, security andreal-timecapabilities. The resource-constrained nature of multiple embedded systems, especially battery-powered mobile systems, imposes a further requirement for small memory-size and low overhead. Finally, in contrast to the ubiquity of the x86 architecture in the PC world, the embedded world uses a wider variety of architectures and less standardized environments. Support for virtualization requiresmemory protection(in the form of amemory management unitor at least a memory protection unit) and a distinction betweenuser modeandprivileged mode, which rules out mostmicrocontrollers. This still leavesx86,MIPS,ARMandPowerPCas widely deployed architectures on medium- to high-end embedded systems.[16]
As manufacturers of embedded systems usually have the source code to their operating systems, they have less need for full virtualization in this space. Instead, the performance advantages of paravirtualization make this usually the virtualization technology of choice. Nevertheless, ARM and MIPS have recently added full virtualization support as an IP option and have included it in their latest high-end processors and architecture versions, such as ARM Cortex-A15 MPCore and ARMv8 EL2.
Other differences between virtualization in server/desktop and embedded environments include requirements for efficient sharing of resources across virtual machines, high-bandwidth, low-latency inter-VM communication, a global view of scheduling and power management, and fine-grained control of information flows.[17]
The use of hypervisor technology bymalwareandrootkitsinstalling themselves as a hypervisor below the operating system, known ashyperjacking, can make them more difficult to detect because the malware could intercept any operations of the operating system (such as someone entering a password) without the anti-malware software necessarily detecting it (since the malware runs below the entire operating system). Implementation of the concept has allegedly occurred in theSubVirtlaboratory rootkit (developed jointly byMicrosoftandUniversity of Michiganresearchers[18]) as well as in theBlue Pill malwarepackage. However, such assertions have been disputed by others who claim that it would be possible to detect the presence of a hypervisor-based rootkit.[19]
In 2009, researchers from Microsoft andNorth Carolina State Universitydemonstrated a hypervisor-layer anti-rootkit calledHooksafethat can provide generic protection against kernel-moderootkits.[20]
|
https://en.wikipedia.org/wiki/Hypervisor
|
Portable application creators allow the creation ofportable applications(also called portable apps). They usually useapplication virtualization.
No agent or client is required for these (also called "agentless" solutions).
|
https://en.wikipedia.org/wiki/Portable_application_creators
|
TheOpen Container Initiative(OCI) is aLinux Foundationproject, started in June 2015 byDocker,CoreOS, and the maintainers of appc (short for "App Container") to designopen standardsforoperating system-level virtualization(containers).[1][2][3]At launch, OCI was focused onLinux containersand subsequent work has extended it to other operating systems.[4][5][6]
There are currently three OCI specifications in development and use: theRuntime Specification(runtime-spec), theImage Specification(image-spec), and theDistribution Specification(distribution-spec).
The OCI organization includes the development ofrunc, which is the reference implementation of the runtime-spec,[7][8]a container runtime that implements their specification and serves as a basis for other higher-level tools. runc was first released in July 2015 as version 0.0.1[9]and it reached version 1.0.0 on June 22, 2021.[10]
The OCI Image Format Project was split out from the Runtime Project into its own specification on March 23, 2016.[11]The image-spec is a software shipping container image format spec (OCI Image Format) that reached version 1.0.0 on July 19, 2017.[12]
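To make the image-spec concrete, the following sketch builds a minimal OCI image manifest as a Python dictionary; the media types come from the specification, while the digests and sizes are placeholders.

import json

# Minimal OCI image manifest sketch; digests and sizes below are placeholders.
manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "config": {
        "mediaType": "application/vnd.oci.image.config.v1+json",
        "digest": "sha256:" + "0" * 64,
        "size": 7023,
    },
    "layers": [
        {
            "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
            "digest": "sha256:" + "1" * 64,
            "size": 32654,
        }
    ],
}

print(json.dumps(manifest, indent=2))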
The OCI Distribution Spec Project defines the distribution-spec, an API protocol to facilitate and standardize the distribution of content. The distribution-spec was created on March 8, 2018 from a Proposal for a JSON Registry API V2.1.[13]The distribution-spec reached version 1.0.0 on April 26, 2021.[14]
|
https://en.wikipedia.org/wiki/Open_Container_Initiative
|
Asandboxis atesting environmentthat isolates untestedcodechanges and outright experimentation from theproduction environmentor repository[1]in the context ofsoftware development, includingweb development,automation,revision control,configuration management(see alsochange management), andpatch management.
Sandboxing protects "live" servers and their data, vetted source code distributions, and other collections of code, data and/or content, proprietary or public, from changes that could be damaging to a mission-critical system or which could simply be difficult torevert, regardless of the intent of the author of those changes. Sandboxes replicate at least the minimal functionality needed to accurately test the programs or other code under development (e.g. usage of the sameenvironment variablesas, or access to an identical database to that used by, the stable prior implementation intended to be modified; there are many other possibilities, as the specific functionality needs vary widely with the nature of the code and the application[s] for which it is intended).
The concept of sandboxing is built intorevision control softwaresuch asGit,CVSandSubversion (SVN), in which developers "check out" acopyof the source code tree, or a branch thereof, to examine and work on. After the developer has fully tested the code changes in their own sandbox, the changes would be checked back into and merged with the repository and thereby made available to other developers or end users of the software.[2]
By further analogy, the term "sandbox" can also be applied in computing and networking to other temporary or indefinite isolation areas, such assecurity sandboxesandsearch engine sandboxes(both of which have highly specific meanings), that prevent incoming data from affecting a "live" system (or aspects thereof) unless/until defined requirements or criteria have been met.
Sandboxing (see also 'soft launching') is often considered a best practice when making any changes to a system, regardless of whether that change is considered 'development', a modification of configuration state, or updating the system.[3]
The term sandbox is commonly used for the development ofweb servicesto refer to amirroredproduction environment for use by external developers. Typically, a third-party developer will develop and create an application that will use a web service from the sandbox, which is used to allow a third-party team to validate their code before migrating it to the production environment.Microsoft,[4]Google,Amazon,[5]Salesforce,[6]PayPal,[7]eBay,[8]andYahoo,[9]among others, provide such services.
Wikisalso typically employ a shared sandbox model of testing, though it is intended principally for learning and outright experimentation with features rather than for testing of alterations to existing content (the wiki analog of source code). An edit preview mode is usually used instead to test specific changes made to the texts or layout of wiki pages.
|
https://en.wikipedia.org/wiki/Sandbox_(software_development)
|
Aseparation kernelis a type of securitykernelused to simulate a distributed environment. The concept was introduced byJohn Rushbyin a 1981 paper.[1]Rushby proposed the separation kernel as a solution to the difficulties and problems that had arisen in the development and verification of large, complex security kernels that were intended to "provide multilevel secure operation on general-purpose multi-user systems." According to Rushby, "the task of a separation kernel is to create an environment which is indistinguishable from that provided by a physically distributed system: it must appear as if each regime is a separate, isolated machine and that information can only flow from one machine to another along known external communication lines. One of the properties we must prove of a separation kernel, therefore, is that there are no channels for information flow between regimes other than those explicitly provided."
A variant of the separation kernel, the partitioning kernel, has gained acceptance in the commercial aviation community as a way of consolidating multiple functions onto a single processor, perhaps ofmixed criticality. Commercialreal-time operating systemproducts in this genre have been used byaircraft manufacturersfor safety-critical avionics applications.
In 2007 the Information Assurance Directorate of the U.S.National Security Agency(NSA) published the Separation Kernel Protection Profile (SKPP),[2]a security requirements specification for separation kernels suitable to be used in the most hostile threat environments. The SKPP describes, inCommon Criteria[3]parlance, a class of modern products that provide the foundational properties of Rushby's conceptual separation kernel. It defines the security functional and assurance requirements for the construction and evaluation of separation kernels while yet providing some latitude in the choices available to developers.
The SKPP defines separation kernel as "hardware and/or firmware and/or software mechanisms whose primary function is to establish, isolate and separate multiple partitions and control information flow between the subjects and exported resources allocated to those partitions." Further, the separation kernel's core functional requirements include:
The separation kernel allocates all exported resources under its control into partitions. The partitions are isolated except for explicitly allowed information flows. The actions of a subject in one partition are isolated from (viz., cannot be detected by or communicated to) subjects in another partition, unless that flow has been allowed. The partitions and flows are defined in configuration data. Note that 'partition' and 'subject' are orthogonal abstractions. 'Partition,' as indicated by its mathematical genesis, provides for a set-theoretic grouping of system entities, whereas 'subject' allows us to reason about the individual active entities of a system. Thus, a partition (a collection, containing zero or more elements) is not a subject (an active element), but may contain zero or more subjects.[2]The separation kernel provides to its hosted software programs high-assurance partitioning and information flow control properties that are both tamperproof and non-bypassable. These capabilities provide a configurable trusted foundation for a variety of system architectures.[2]
In 2011, the Information Assurance Directorate sunset the SKPP. The NSA will no longer certify specific operating systems, including separation kernels, against the SKPP, noting that "conformance to this protection profile, by itself, does not offer sufficient confidence that national security information is appropriately protected in the context of a larger system in which the conformant product is integrated".[5]
The seL4 microkernel has a formal proof that it can be configured as a separation kernel.[6] This, combined with its formally enforced information-flow properties,[7] makes it an example of a high level of assurance. The Muen[8] separation kernel is also a formally verified open-source separation kernel for x86 machines.
|
https://en.wikipedia.org/wiki/Separation_kernel
|
Serverless computing is "a cloud service category in which the customer can use different cloud capability types without the customer having to provision, deploy and manage either hardware or software resources, other than providing customer application code or providing customer data. Serverless computing represents a form of virtualized computing", according to ISO/IEC 22123-2.[1] Serverless computing is a broad ecosystem that includes the cloud provider, Function as a Service, managed services, tools, frameworks, engineers, stakeholders, and other interconnected elements, according to Sheen Brisals.[2]
Serverlessis amisnomerin the sense that servers are still used by cloud service providers to execute code fordevelopers. The definition of serverless computing has evolved over time, leading to varied interpretations. According to Ben Kehoe, serverless represents a spectrum rather than a rigid definition. Emphasis should shift from strict definitions and specific technologies to adopting a serverless mindset, focusing on leveraging serverless solutions to address business challenges.[3]
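The developer-facing unit in this model is typically a small function handed to the provider. The sketch below shows what such a function can look like in the style of AWS Lambda's Python runtime; the event fields are assumptions chosen for illustration.

import json

def handler(event, context):
    # The provider invokes this handler with an event payload and a context
    # object; servers, scaling and the runtime are managed by the provider.
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

if __name__ == "__main__":
    # Local smoke test; in production the cloud provider calls handler().
    print(handler({"name": "serverless"}, None))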
Serverless computing does not eliminate complexity but shifts much of it from the operations team to the development team. However, this shift is not absolute, as operations teams continue to manage aspects such as identity and access management (IAM), networking, security policies, and cost optimization. Additionally, while breaking down applications into finer-grained components can increase management complexity, the relationship between granularity and management difficulty is not strictly linear. There is often an optimal level of modularization where the benefits outweigh the added management overhead.[4][2]
According to Yan Cui, serverless should be adopted only when it helps to deliver customer value faster. And while adopting, organizations should take small steps and de-risk along the way.[5]
Serverless applications are prone to fallacies of distributed computing. In addition, they are prone to a number of further fallacies of their own.[6][7]
Monitoring and debugging serverless applications can present unique challenges due to their distributed, event-driven nature and proprietary environments. Traditional tools may fall short, making it difficult to track execution flows across services. However, modern solutions such as distributed tracing tools (e.g., AWS X-Ray, Datadog), centralized logging, and cloud-agnostic observability platforms are mitigating these challenges. Emerging technologies like OpenTelemetry, AI-powered anomaly detection, and serverless-specific frameworks are further improving visibility and root cause analysis. While challenges persist, advancements in monitoring and debugging tools are steadily addressing these limitations.[8][9]
According toOWASP, serverless applications are vulnerable to variations of traditional attacks, insecure code, and some serverless-specific attacks (like Denial of Wallet[10]). So, the risks have changed and attack prevention requires a shift in mindset.[11][12]
Serverless computing is provided as a third-party service. Applications and software that run in the serverless environment are by default locked to a specific cloud vendor. This issue is exacerbated in serverless computing, as with its increased level of abstraction, public vendors only allow customers to upload code to a FaaS platform without the authority to configure underlying environments. More importantly, when considering a more complex workflow that includes Backend-as-a-Service (BaaS), a BaaS offering can typically only natively trigger a FaaS offering from the same provider. This makes the workload migration in serverless computing virtually impossible. Therefore, considering how to design and deploy serverless workflows from amulti-cloudperspective seems promising and is starting to prevail.[13][14][15]
Serverless computing may not be ideal for certainhigh-performance computing(HPC) workloads due to resource limits often imposed by cloud providers, including maximum memory, CPU, and runtime restrictions. For workloads requiring sustained or predictable resource usage, bulk-provisioned servers can sometimes be more cost-effective than the pay-per-use model typical of serverless platforms. However, serverless computing is increasingly capable of supporting specific HPC workloads, particularly those that are highly parallelizable and event-driven, by leveraging its scalability and elasticity. The suitability of serverless computing for HPC continues to evolve with advancements in cloud technologies.[16][17][18]
The "Grain of Sand Anti-pattern" refers to the creation of excessively small components (e.g., functions) within a system, often resulting in increased complexity, operational overhead, and performance inefficiencies.[19]"Lambda Pinball" is a related anti-pattern that can occur in serverless architectures when functions (e.g., AWS Lambda, Azure Functions) excessively invoke each other in fragmented chains, leading to latency, debugging and testing challenges, and reduced observability.[20]These anti-patterns are associated with the formation of a distributed monolith.
These anti-patterns are often addressed through the application of clear domain boundaries, which distinguish between public and published interfaces.[20][21]Public interfaces are technically accessible interfaces, such as methods, classes, API endpoints, or triggers, but they do not come with formal stability guarantees. In contrast, published interfaces involve an explicit stability contract, including formal versioning, thorough documentation, a defined deprecation policy, and often support for backward compatibility. Published interfaces may also require maintaining multiple versions simultaneously and adhering to formal deprecation processes when breaking changes are introduced.[21]
Fragmented chains of function calls are often observed in systems where serverless components (functions) interact with other resources in complex patterns, sometimes described as spaghetti architecture or a distributed monolith. In contrast, systems exhibiting clearer boundaries typically organize serverless components into cohesive groups, where internal public interfaces manage inter-component communication, and published interfaces define communication across group boundaries. This distinction highlights differences in stability guarantees and maintenance commitments, contributing to reduced dependency complexity.[20][21]
Additionally, patterns associated with excessive serverless function chaining are sometimes addressed through architectural strategies that emphasize native service integrations instead of individual functions, a concept referred to as the functionless mindset. However, this approach is noted to involve a steeper learning curve, and integration limitations may vary even within the same cloud vendor ecosystem.[2]
Reporting on serverless databases presents challenges, as retrieving data for a reporting service can either break thebounded contexts, reduce the timeliness of the data, or do both. This applies regardless of whether data is pulled directly from databases, retrieved via HTTP, or collected in batches. Mark Richards refers to this as the "Reach-in Reporting Antipattern".[19]A possible alternative to this approach is for databases to asynchronously push the necessary data to the reporting service instead of the reporting service pulling it. While this method requires a separate contract between services and the reporting service and can be complex to implement, it helps preserve bounded contexts while maintaining a high level of data timeliness.[19]
AdoptingDevSecOpspractices can help improve the use and security of serverless technologies.[22]
In serverless applications, the distinction between infrastructure and business logic is often blurred, with applications typically distributed across multiple services. To maximize the effectiveness of testing, integration testing is emphasized for serverless applications.[5]Additionally, to facilitate debugging and implementation,orchestrationis used within thebounded context, whilechoreographyis employed between different bounded contexts.[5]
Ephemeral resources are typically kept together to maintain high cohesion. However, shared resources with long spin-up times, such as AWS RDS clusters and landing zones, are often managed in separate repositories, deployment pipelines, and stacks.[5]
|
https://en.wikipedia.org/wiki/Serverless_computing
|
Snapis a softwarepackaginganddeploymentsystem developed byCanonicalforoperating systemsthat use theLinuxkernel and thesystemdinitsystem. The packages, calledsnaps, and the tool for using them,snapd, work across a range ofLinux distributions[3]and allowupstreamsoftware developers to distribute their applications directly to users. Snaps are self-contained applications running in a sandbox with mediated access to the host system. Snap was originally released forcloudapplications[4]but was later ported to also work forInternet of Thingsdevices[5][6]and desktop[7][8]applications.
Applications in a Snap run in a container with limited access to the host system. UsingInterfaces, users can give an application mediated access to additional features of the host such as recording audio, accessing USB devices and recording video.[9][10][11]These interfaces mediate regular Linux APIs so that applications can function in the sandbox without needing to be rewritten. Desktop applications can also use the XDG Desktop Portals, a standardized API originally created by theFlatpakproject (originally called xdg-app) to give sandboxed desktop applications access to host resources.[12][13]These portals often provide a better user experience compared to the native Linux APIs because they prompt the user for permission to use resources such as a webcam at the time the application uses them. The downside is that applications and toolkits need to be rewritten in order to use these newer APIs.
The Snap sandbox also supports sharing data andUnix socketsbetween Snaps.[14]This is often used to share common libraries and application frameworks between Snaps to reduce the size of Snaps by avoiding duplication.[15][16]
The Snap sandbox heavily relies on theAppArmorLinux Security Module from the upstreamLinux kernel. Because only one "major"Linux Security Module(LSM) can be active at the same time,[17]the Snap sandbox is much less secure when another major LSM is enabled. As a result, on distributions such asFedorawhich enableSELinuxby default, the Snap sandbox is heavily degraded. Although Canonical is working with many other developers and companies to make it possible for multiple LSMs to run at the same time, this solution is still a long time away.[18][17][19]
Multiple times a day, snapd checks for available updates of all Snaps and installs them in the background using anatomic operation. Updates can be reverted[20][21]and usedelta encodingto reduce their download size.[22][23][24]
Publishers can release and update multiple versions of their software in parallel usingchannels. Each channel has a specifictrackandrisk, which indicate theversionandstabilityof the software released on that channel. When installing an application, Snap defaults to using thelatest/stablechannel, which will automatically update to new major releases of the software when they become available. Publishers can create additional channels to give users the possibility to stick to specific major releases of their software. For example, a2.0/stablechannel would allow users to stick to the 2.0 version of the software and only get minor updates without the risk of backwards incompatible changes. When the publisher releases a new major version in a new channel, users can manually update to the next version when they choose.[25][26][27][28]
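A hedged sketch of working with channels from a script, using the snap command-line tool; "hello" is just an example snap name, and the channel strings follow the track/risk form described above.

import subprocess

def install_from_channel(snap_name: str, channel: str) -> None:
    # Install a snap from a specific track/risk channel.
    subprocess.run(["snap", "install", snap_name, f"--channel={channel}"],
                   check=True)

def switch_channel(snap_name: str, channel: str) -> None:
    # Refreshing with a different channel moves the snap to that track/risk.
    subprocess.run(["snap", "refresh", snap_name, f"--channel={channel}"],
                   check=True)

if __name__ == "__main__":
    install_from_channel("hello", "latest/stable")
    switch_channel("hello", "latest/edge")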
The schedule, frequency and timing of automatic updates can be configured by users. Users can also pause automatic updates for a certain period of time, or indefinitely.[29][30][31]Updates are automatically paused on metered connections.[32][33]
Snapcraft is a tool for developers to package their programs in the Snap format.[36]It runs on any Linux distribution supported by Snap,macOS[37]andMicrosoft Windows.[38]Snapcraft builds the packages in aVirtual Machineusing Multipass,[39]in order to ensure the result of a build is the same, regardless of which distribution or operating system it is built on.[40]Snapcraft supports multiple build tools and programming languages, such asGo,Java,JavaScript,Python,C/C++andRust. It also allows importing application metadata from multiple sources such asAppStream,git, shell scripts andsetup.pyfiles.[37][41]
The Snap Store allows developers to publish their snap-packaged applications.[42] All apps uploaded to the Snap Store undergo automatic testing, including a malware scan. However, the scan does not catch all issues. In one case in May 2018, two applications by the same developer were found to contain a cryptocurrency miner which ran in the background during application execution. In 2024, fake cryptocurrency wallets that would steal the user's funds were uploaded and, when taken down by Canonical, were simply reuploaded from a new account.[43] Although the Snap sandbox attempts to reduce the impact of a malicious app, multiple exploits have been found that allow malicious Snaps to escape the sandbox and gain direct access to the user's data.[44][45] Canonical recommends users only install Snaps from publishers trusted by the user.[46][47]
Snapsareself-containedpackages that work across a range ofLinux distributions. This is unlike traditional Linux package management approaches, which require specifically adapted packages for each Linux distribution.[48][49]
The snapfile formatis a single compressedfilesystemusing theSquashFSformat with the extension.snap. This filesystem contains the application, libraries it depends on, and declarative metadata. This metadata is interpreted by snapd to set up an appropriately shaped securesandboxfor that application. After installation, the snap is mounted by the host operating system and decompressed on the fly when the files are used.[50][28]Although this has the advantage that snaps use less disk space, it also means some large applications start more slowly.[51][52]
Snap supports any class of Linux application such as desktop applications, server tools, IoT apps and even system services such as the printer driver stack.[53][54]To ensure this, Snap relies onsystemdfor features such as running socket-activated system services in a Snap.[55]This causes Snap to work best only on distributions that can adopt thatinit system.[56]
Snap initially only supported the all-SnapUbuntuCore distribution, but in June 2016, it was ported to a wide range of Linux distributions to become a format for universal Linux packages.[57]Snap requiresSystemdwhich is available in most, but not all, Linux distributions. OtherUnix-likesystems (e.g.FreeBSD) are not supported.[58]ChromeOSdoes not support Snap directly, only through Linux distributions installed in it that support Snap, such asGallium OS.[59]
Ubuntu and its official derivatives pre-install Snap by default, as do other Ubuntu-based distributions such as KDE Neon and Zorin OS.[60] Solus currently plans to drop Snap, to reduce the burden of maintaining the AppArmor patches needed for strict Snap confinement.[61] Zorin OS removed Snap as a default package in the Zorin OS 17 release.[62] While other official Ubuntu derivatives such as Kubuntu, Xubuntu, and Ubuntu MATE have also shipped with the competing Flatpak as a complement, they will no longer do so beginning with Ubuntu 23.04, meaning that it must be installed manually by the user.[63]
A number of notable desktop software development companies publish their software in the Snap Store, includingGoogle,[64]JetBrains,[65]KDE,[66]Microsoft(for Linux versions of e.g. .NET Core 3.1,[67]Visual Studio Code,Skype,[68]andPowerShell),Mozilla[69]andSpotify.[70]Snaps are also used inInternet-of-Thingsenvironments, ranging from consumer-facing products[71]to enterprise device management gateways[72]andsatellite communicationnetworks.[73][74]Finally, Snap is also used by developers of server applications such asInfluxDB,[75]Kata Containers,[76]Nextcloud[77]andTravis CI.[78]
Snap has received mixed reaction from the developer community. On Snap's promotional site,Herokupraised Snap's auto-update as it fits their fast release schedule well.Microsoftmentions its ease of use and Snap beingYAML-based, as well as it being distribution-agnostic.JetBrainssays the Snap Store gives their tools more exposure,[79]although some users claim launching the tools takes much longer when it's installed from the Snap Store than when it's installed another way.[80]
Others have objected to the closed-source nature of the Snap Store. Clément Lefèbvre (Linux Mintfounder and project leader[81][82]) has written that Snap is biased and has a conflict of interest. The reasons he cited include it being governed by Canonical and locked to their store, and also that Snap works better on Ubuntu than on other distributions.[83]He later announced that the installing of Snap would be blocked byAPTin Linux Mint,[84][85]although a way to disable this restriction would be documented.[86]
On recent versions of Ubuntu, Canonical has migrated certain packages exclusively to Snap, such asChromiumandFirefox[87]web browsers.[88][42]The replacement of Firefox led to mixed reception from users due to performance issues with the Snap version, especially on startup.[87]
|
https://en.wikipedia.org/wiki/Snap_(software)
|
Software-defined storage(SDS) is a marketing term forcomputer data storagesoftware for policy-based provisioning and management of data storage independent of the underlying hardware. Software-defined storage typically includes a form ofstorage virtualizationto separate the storage hardware from the software that manages it.[1]The software enabling a software-defined storage environment may also provide policy management for features such asdata deduplication, replication,thin provisioning, snapshots, copy-on-write clones, tiering and backup.
Software-defined storage (SDS) hardware may or may not also have abstraction, pooling, or automation software of its own. When implemented as software only in conjunction with commodity servers with internal disks, it may suggest software such as a virtual or globalfile systemordistributed block storage. If it is software layered over sophisticated large storage arrays, it suggests software such asstorage virtualizationorstorage resource management, categories of products that address separate and different problems. If the policy and management functions also include a form ofartificial intelligenceto automate protection and recovery, it can be considered as intelligent abstraction.[2]Software-defined storage may be implemented via appliances over a traditionalstorage area network(SAN), or implemented asnetwork-attached storage(NAS), or usingobject-based storage. In March 2014 theStorage Networking Industry Association(SNIA) began a report on software-defined storage.[3]
VMware used the marketing term "software-defined data center" (SDDC) for a broader concept wherein all the virtual storage, server, networking and security resources required by an application can be defined by software and provisioned automatically.[4][5]Other smaller companies then adopted the term "software-defined storage", such asCleversafe(acquired byIBM), andOpenIO.
Based on concepts similar to those of software-defined networking (SDN),[6]interest in SDS rose after VMware acquired Nicira for over a billion dollars in 2012.
Data storage vendors used various definitions for software-defined storage depending on their product-line.Storage Networking Industry Association(SNIA), a standards group, attempted a multi-vendor, negotiated definition with examples.[7]
The software-defined storage industry is projected to reach $86 billion by 2023.[8]
Building on the concept of VMware, esurfing cloud has launched a new software-defined storage product called HBlock. HBlock is a lightweight storage cluster controller that operates in user mode. It can be installed on any Linux operating system as a regular application without root access, and deployed alongside other applications on the server. HBlock integrates unused disk space across various servers to create high-performance and highly available virtual disks. These virtual disks can be mounted to local or other remote servers using the standard iSCSI protocol, revitalizing storage resources on-site without impacting existing operations or requiring additional hardware purchases.[9]
Characteristics of software-defined storage may include the following features:[10]
Incomputing, astorage hypervisoris a software program which can run on a physical server hardware platform, on avirtual machine, inside a hypervisor OS or in the storage network. It may co-reside with virtual machinesupervisorsor have exclusive control of its platform. Similar to virtual serverhypervisorsa storage hypervisor may run on a specific hardware platform, a specific hardware architecture, or be hardware independent.[11]
The storage hypervisor software virtualizes the individual storage resources it controls and creates one or more flexible pools of storage capacity. In this way it separates the direct link between physical and logical resources, in parallel to virtual server hypervisors. By moving storage management into an isolated layer, it also helps to increase system uptime and high availability. "Similarly, a storage hypervisor can be used to manage virtualized storage resources to increase utilization rates of disk while maintaining high reliability."[12]
The storage hypervisor, a centrally-managed supervisory software program, provides a comprehensive set of storage control and monitoring functions that operate as a transparent virtual layer across consolidated disk pools to improve theiravailability, speed and utilization.
Storage hypervisors enhance the combined value of multipledisk storagesystems, including dissimilar and incompatible models, by supplementing their individual capabilities with extended provisioning, data protection, replication and performance acceleration services.
In contrast to embedded software or disk controller firmware confined to a packaged storage system or appliance, the storage hypervisor and its functionality span different models, brands, and types of storage – including SSD (solid-state disks), SAN (storage area network), DAS (direct-attached storage), and unified storage (SAN and NAS) – covering a wide range of price and performance characteristics or tiers. The underlying devices need not be explicitly integrated with each other nor bundled together.
A storage hypervisor enables hardware interchangeability. The storage hardware underlying a storage hypervisor matters only in a generic way with regard to performance and capacity. While underlying "features" may be passed through the hypervisor, the benefits of a storage hypervisor underline its ability to present uniform virtual devices and services from dissimilar and incompatible hardware, thus making these devices interchangeable. Continuous replacement and substitution of the underlying physical storage may take place, without altering or interrupting the virtual storage environment that is presented.
The storage hypervisor manages, virtualizes and controls all storage resources, allocating and providing the needed attributes (performance, availability) and services (automatedprovisioning,snapshots,replication), either directly or over a storage network, as required to serve the needs of each individual environment.
The term "hypervisor" within "storage hypervisor" is so named because it goes beyond a supervisor,[13]it is conceptually a level higher than a supervisor and therefore acts as the next higher level of management and intelligence that sits above and spans its control over device-level storage controllers, disk arrays, and virtualization middleware.
A storage hypervisor has also been defined as a higher level of storage virtualization[14]software, providing a "Consolidation and cost: Storage pooling increases utilization and decreases costs. Business availability: Data mobility of virtual volumes can improve availability. Application support: Tiered storage optimization aligns storage costs with required application service levels".[15]The term has also been used in reference to use cases including its reference to its role with storage virtualization in disaster recovery[16]and, in a more limited way, defined as a volume migration capability across SANs.[17]
An analogy can be drawn between the concept of a server hypervisor and the concept of a storage hypervisor. By virtualizing servers, server hypervisors (VMware ESX,Microsoft Hyper-V, Citrix Hypervisor,Linux KVM,Xen,z/VM) increased the utilization rates for server resources, and provided management flexibility by de-coupling servers from hardware. This led to cost savings in server infrastructure since fewer physical servers were needed to handle the same workload, and provided flexibility in administrative operations like backup, failover and disaster recovery.
A storage hypervisor does for storage resources what the server hypervisor did for server resources. A storage hypervisor changes how the server hypervisor handles storage I/O to get more performance out of existing storage resources, and increases efficiency in storage capacity consumption, storage provisioning and snapshot/clone technology. A storage hypervisor, like a server hypervisor, increases performance and management flexibility for improved resource utilization.
|
https://en.wikipedia.org/wiki/Storage_hypervisor
|
Avirtual private server(VPS) orvirtual dedicated server(VDS) is avirtual machinesoldas a serviceby anInternet hostingcompany.[1]
A virtual private server runs its own copy of anoperating system(OS), and customers may havesuperuser-level access to that operating system instance, so they can install almost any software that runs on that OS. For many purposes, it is functionally equivalent to adedicated physical serverand, being software-defined, can be created and configured more easily. A virtual server costs less than an equivalent physical server. However, as virtual servers share the underlying physical hardware with other VPS, performance may be lower depending on the workload of any other executing virtual machines.[1]
The force driving servervirtualizationis similar to that which led to the development oftime-sharingandmultiprogrammingin the past. Although the resources are still shared, as under the time-sharing model, virtualization provides a higher level of security, dependent on the type of virtualization used, as the individual virtual servers are mostly isolated from each other and may run their own full-fledgedoperating systemwhich can be independently rebooted as a virtual instance.
Partitioning a single server to appear as multiple servers has been increasingly common onmicrocomputerssince the release ofVMware ESX Serverin 2001.[2]VMware later replaced ESX Server with VMware ESXi, a more lightweight hypervisor architecture that eliminated the Linux-based Console Operating System (COS) used in the original ESX.[3]The physical server typically runs ahypervisorwhich is tasked with creating, releasing, and managing the resources of "guest" operating systems, orvirtual machines. These guest operating systems are allocated a share of resources of the physical server, typically in a manner in which the guest is not aware of any other physical resources except for those allocated to it by the hypervisor. As a VPS runs its own copy of its operating system, customers havesuperuser-level access to that operating system instance, and can install almost any software that runs on the OS; however, due to the number of virtualization clients typically running on a single machine, a VPS generally has limited processor time,RAM, and disk space.[4]
There are several approaches to virtualization. Inhardware virtualization, ahypervisorsuch as theKernel-based Virtual Machineallows each virtual machine (VM) to run its own independent kernel, providing greater isolation from the host system. By contrast,container-based virtualization—for exampleOpenVZ—shares the host kernel among multiple containers. This can improve resource efficiency, but usually offers less isolation and fewer customization options for each instance.[5]
Many companies offer virtual private server hosting or virtualdedicated serverhosting as an extension forweb hostingservices. There are several challenges to consider when licensing proprietary software inmulti-tenantvirtual environments.
Withunmanagedorself-managedhosting, the customer is left to administer their own server instance.
Unmetered hosting is generally offered with no limit on the amount of data transferred on a fixed-bandwidth line. Usually, unmetered hosting is offered with 10 Mbit/s, 100 Mbit/s, or 1000 Mbit/s (with some as high as 10 Gbit/s). This means that the customer is theoretically able to use approximately 3 TB per month on 10 Mbit/s, or up to approximately 300 TB per month on a 1000 Mbit/s line, although in practice the values will be significantly lower. In a virtual private server, this will be shared bandwidth, and a fair-usage policy usually applies. Unlimited hosting is also commonly marketed but is generally limited by acceptable-use policies and terms of service. Offers of unlimited disk space and bandwidth are always false due to cost, carrier capacities, and technological boundaries.
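These approximate figures follow from the length of a month: a 30-day month contains 2,592,000 seconds, so a line saturated at 10 Mbit/s can transfer at most{\displaystyle 10\ {\text{Mbit/s}}\times 2{,}592{,}000\ {\text{s}}\div 8\ {\text{bits/byte}}\approx 3.24\times 10^{12}\ {\text{bytes}}\approx 3\ {\text{TB}},}and a 1000 Mbit/s line about one hundred times that.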
|
https://en.wikipedia.org/wiki/Virtual_private_server
|
Virtual resource partitioning(VRP) is anoperating system-level virtualizationtechnology that allocates computing resources (such as CPU and I/O) to transactions. Conventional virtualization technologies allocate resources on anoperating system(Windows,Linux...) wide basis. VRP works two levels deeper by allowing regulation and control of the resources used by specific transactions within an application.[1]
In many computerized environments, a single user, application, or transaction can appropriate all server resources and, by doing so, degrade the quality of service and user experience of other active users, applications, or transactions. For example, a single report in adata warehouseenvironment can monopolize data access by demanding large amounts of data. Similarly, a CPU-bound application may consume all server processing power and starve other activities.
VRP allows balancing, regulating, and manipulating the resource consumption of individual transactions, thereby improving the overall quality of service, compliance with service-level agreements, and the end-user experience.
VRP is usually implemented in the OS in a way that is completely transparent to the application or transaction. The technology creates virtual resource "lanes", each of which has access to a controllable amount of resources, and redirects specific transactions to those lanes allowing them to take more or less resources.
VRP can be implemented in any OS and is available onWindows,Red Hat,Suse,HP-UX,Solaris,tru64,AIXand others.
In each OS, applications communicate with the OS kernel in a specific way, which requires a different VRP implementation. A safe implementation of VRP usually combines several resource-allocation techniques. Transaction types, consumed resources, and kernel state vary rapidly, so the VRP implementation must adapt to such changes in real time.
|
https://en.wikipedia.org/wiki/Virtual_resource_partitioning
|
Inalgebra, thezero-product propertystates that the product of twononzero elementsis nonzero. In other words,ifab=0,thena=0orb=0.{\displaystyle {\text{if }}ab=0,{\text{ then }}a=0{\text{ or }}b=0.}
This property is also known as therule of zero product, thenull factor law, themultiplication property of zero, thenonexistence of nontrivialzero divisors, or one of the twozero-factor properties.[1]All of thenumber systemsstudied inelementary mathematics— theintegersZ{\displaystyle \mathbb {Z} }, therational numbersQ{\displaystyle \mathbb {Q} }, thereal numbersR{\displaystyle \mathbb {R} }, and thecomplex numbersC{\displaystyle \mathbb {C} }— satisfy the zero-product property. In general, aringwhich satisfies the zero-product property is called adomain.
SupposeA{\displaystyle A}is an algebraic structure. We might ask, doesA{\displaystyle A}have the zero-product property? In order for this question to have meaning,A{\displaystyle A}must have both additive structure and multiplicative structure.[2]Usually one assumes thatA{\displaystyle A}is aring, though it could be something else, e.g. the set of nonnegative integers{0,1,2,…}{\displaystyle \{0,1,2,\ldots \}}with ordinary addition and multiplication, which is only a (commutative)semiring.
Note that ifA{\displaystyle A}satisfies the zero-product property, and ifB{\displaystyle B}is a subset ofA{\displaystyle A}, thenB{\displaystyle B}also satisfies the zero product property: ifa{\displaystyle a}andb{\displaystyle b}are elements ofB{\displaystyle B}such thatab=0{\displaystyle ab=0}, then eithera=0{\displaystyle a=0}orb=0{\displaystyle b=0}becausea{\displaystyle a}andb{\displaystyle b}can also be considered as elements ofA{\displaystyle A}.
SupposeP{\displaystyle P}andQ{\displaystyle Q}are univariate polynomials with real coefficients, andx{\displaystyle x}is a real number such thatP(x)Q(x)=0{\displaystyle P(x)Q(x)=0}. (Actually, we may allow the coefficients andx{\displaystyle x}to come from any integral domain.) By the zero-product property, it follows that eitherP(x)=0{\displaystyle P(x)=0}orQ(x)=0{\displaystyle Q(x)=0}. In other words, the roots ofPQ{\displaystyle PQ}are precisely the roots ofP{\displaystyle P}together with the roots ofQ{\displaystyle Q}.
Thus, one can usefactorizationto find the roots of a polynomial. For example, the polynomialx3−2x2−5x+6{\displaystyle x^{3}-2x^{2}-5x+6}factorizes as(x−3)(x−1)(x+2){\displaystyle (x-3)(x-1)(x+2)}; hence, its roots are precisely 3, 1, and −2.
In general, supposeR{\displaystyle R}is an integral domain andf{\displaystyle f}is amonicunivariate polynomial of degreed≥1{\displaystyle d\geq 1}with coefficients inR{\displaystyle R}. Suppose also thatf{\displaystyle f}hasd{\displaystyle d}distinct rootsr1,…,rd∈R{\displaystyle r_{1},\ldots ,r_{d}\in R}. It follows (but we do not prove here) thatf{\displaystyle f}factorizes asf(x)=(x−r1)⋯(x−rd){\displaystyle f(x)=(x-r_{1})\cdots (x-r_{d})}. By the zero-product property, it follows thatr1,…,rd{\displaystyle r_{1},\ldots ,r_{d}}are theonlyroots off{\displaystyle f}: any root off{\displaystyle f}must be a root of(x−ri){\displaystyle (x-r_{i})}for somei{\displaystyle i}. In particular,f{\displaystyle f}has at mostd{\displaystyle d}distinct roots.
If howeverR{\displaystyle R}is not an integral domain, then the conclusion need not hold. For example, the cubic polynomialx3+3x2+2x{\displaystyle x^{3}+3x^{2}+2x}has six roots inZ6{\displaystyle \mathbb {Z} _{6}}(though it has only three roots inZ{\displaystyle \mathbb {Z} }).
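To see this explicitly, note that{\displaystyle x^{3}+3x^{2}+2x=x(x+1)(x+2)}, and substituting the six elements of{\displaystyle \mathbb {Z} _{6}}gives{\displaystyle 0\cdot 1\cdot 2=0,\ 1\cdot 2\cdot 3=6,\ 2\cdot 3\cdot 4=24,\ 3\cdot 4\cdot 5=60,\ 4\cdot 5\cdot 6=120,\ 5\cdot 6\cdot 7=210,}each of which is congruent to 0 modulo 6, so every element of{\displaystyle \mathbb {Z} _{6}}is a root even though the polynomial has degree three.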
|
https://en.wikipedia.org/wiki/Zero-product_property
|
This is aglossary of commutative algebra.
See alsolist of algebraic geometry topics,glossary of classical algebraic geometry,glossary of algebraic geometry,glossary of ring theoryandglossary of module theory.
In this article, all rings are assumed to becommutativewith identity 1.
|
https://en.wikipedia.org/wiki/Glossary_of_commutative_algebra
|
Inmathematics, and more specifically incombinatorial commutative algebra, azero-divisor graphis anundirected graphrepresenting thezero divisorsof acommutative ring. It has elements of theringas itsvertices, and pairs of elements whose product is zero as itsedges.[1]
There are two variations of the zero-divisor graph commonly used.
In the original definition ofBeck (1988), the vertices represent all elements of the ring.[2]In a later variant studied byAnderson & Livingston (1999), the vertices represent only thezero divisorsof the given ring.[3]
Ifn{\displaystyle n}is asemiprime number(the product of twoprime numbers)
then the zero-divisor graph of the ring ofintegersmodulon{\displaystyle n}(with only the zero divisors as its vertices) is either acomplete graphor acomplete bipartite graph.
It is a complete graphKp−1{\displaystyle K_{p-1}}in the case thatn=p2{\displaystyle n=p^{2}}for some prime numberp{\displaystyle p}. In this case the vertices are all the nonzero multiples ofp{\displaystyle p}, and the product of any two of these numbers is zero modulop2{\displaystyle p^{2}}.[3]
It is a complete bipartite graphKp−1,q−1{\displaystyle K_{p-1,q-1}}in the case thatn=pq{\displaystyle n=pq}for two distinct prime numbersp{\displaystyle p}andq{\displaystyle q}. The two sides of the bipartition are thep−1{\displaystyle p-1}nonzero multiples ofq{\displaystyle q}and theq−1{\displaystyle q-1}nonzero multiples ofp{\displaystyle p}, respectively. Two numbers (that are not themselves zero modulon{\displaystyle n}) multiply to zero modulon{\displaystyle n}if and only ifone is a multiple ofp{\displaystyle p}and the other is a multiple ofq{\displaystyle q}, so this graph has an edge between each pair of vertices on opposite sides of the bipartition, and no other edges. More generally, the zero-divisor graph is a complete bipartite graph for any ring that is aproductof twointegral domains.[3]
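For example, for{\displaystyle n=15=3\cdot 5}the zero divisors of the integers modulo 15 are the multiples of 5 (namely 5 and 10) and the multiples of 3 (namely 3, 6, 9, and 12); since products such as{\displaystyle 5\cdot 3=15\equiv 0}and{\displaystyle 10\cdot 12=120\equiv 0{\pmod {15}}}vanish, every vertex on one side is joined to every vertex on the other, giving the complete bipartite graph{\displaystyle K_{2,4}}.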
The onlycycle graphsthat can be realized as zero-divisor graphs (with zero divisors as vertices) are the cycles of length 3 or 4.[3]The onlytreesthat may be realized as zero-divisor graphs are thestars(complete bipartite graphs that are trees) and the five-vertex tree formed as the zero-divisor graph ofZ2×Z4{\displaystyle \mathbb {Z} _{2}\times \mathbb {Z} _{4}}.[1][3]
In the version of the graph that includes all elements, 0 is auniversal vertex, and the zero divisors can be identified as the vertices that have a neighbor other than 0.
Because it has a universal vertex, the graph of all ring elements is alwaysconnectedand hasdiameterat most two. The graph of all zero divisors is non-empty for every ring that is not anintegral domain. It remains connected, has diameter at most three,[3]and (if it contains acycle) hasgirthat most four.[4][5]
The zero-divisor graph of a ring that is not an integral domain is finite if and only if the ring isfinite.[3]More concretely, if the graph has maximumdegreed{\displaystyle d}, the ring has at most(d2−2d+2)2{\displaystyle (d^{2}-2d+2)^{2}}elements.
If the ring and the graph are infinite, every edge has an endpoint with infinitely many neighbors.[1]
Beck (1988)conjecturedthat (like theperfect graphs) zero-divisor graphs always have equalclique numberandchromatic number. However, this is not true; acounterexamplewas discovered byAnderson & Naseer (1993).[6]
|
https://en.wikipedia.org/wiki/Zero-divisor_graph
|
Inabstract algebra, thesedenionsform a 16-dimensionalnoncommutativeandnonassociativealgebraover thereal numbers, usually represented by the capital letter S, boldfaceSorblackboard boldS{\displaystyle \mathbb {S} }.
The sedenions are obtained by applying theCayley–Dickson constructionto theoctonions, which can be mathematically expressed asS=CD(O,1){\displaystyle \mathbb {S} ={\mathcal {CD}}(\mathbb {O} ,1)}.[1]As such, the octonions areisomorphicto asubalgebraof the sedenions. Unlike the octonions, the sedenions are not analternative algebra. Applying the Cayley–Dickson construction to the sedenions yields a 32-dimensional algebra, called thetrigintaduonionsor sometimes the 32-nions.[2]
The termsedenionis also used for other 16-dimensional algebraic structures, such as a tensor product of two copies of thebiquaternions, or the algebra of 4 × 4matricesover the real numbers, or that studied bySmith (1995).
Every sedenion is alinear combinationof the unit sedenionse0{\displaystyle e_{0}},e1{\displaystyle e_{1}},e2{\displaystyle e_{2}},e3{\displaystyle e_{3}}, ...,e15{\displaystyle e_{15}},
which form abasisof thevector spaceof sedenions. Every sedenion can be represented in the form
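{\displaystyle x=x_{0}e_{0}+x_{1}e_{1}+x_{2}e_{2}+\cdots +x_{15}e_{15},}where the coefficients{\displaystyle x_{i}}are real numbers.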
Addition and subtraction are defined by the addition and subtraction of corresponding coefficients and multiplication isdistributiveover addition.
Like other algebras based on theCayley–Dickson construction, the sedenions contain the algebra they were constructed from. So they contain the octonions (generated bye0{\displaystyle e_{0}}toe7{\displaystyle e_{7}}in the table below), and therefore also thequaternions(generated bye0{\displaystyle e_{0}}toe3{\displaystyle e_{3}}),complex numbers(generated bye0{\displaystyle e_{0}}ande1{\displaystyle e_{1}}) and real numbers (generated bye0{\displaystyle e_{0}}).
Likeoctonions,multiplicationof sedenions is neithercommutativenorassociative. However, in contrast to the octonions, the sedenions do not even have the property of beingalternative. They do, however, have the property ofpower associativity, which means that, for any elementx{\displaystyle x}ofS{\displaystyle \mathbb {S} }, the powerxn{\displaystyle x^{n}}is well defined. They are alsoflexible.
The sedenions have a multiplicativeidentity elemente0{\displaystyle e_{0}}and multiplicative inverses, but they are not adivision algebrabecause they havezero divisors: two nonzero sedenions can be multiplied to obtain zero, for example(e3+e10)(e6−e15){\displaystyle (e_{3}+e_{10})(e_{6}-e_{15})}. Allhypercomplex numbersystems after sedenions that are based on the Cayley–Dickson construction also contain zero divisors.
The sedenion multiplication table is shown below:
From the above table, we can see that:
The sedenions are not fully anti-associative. Choose any four generators,i,j,k{\displaystyle i,j,k}andl{\displaystyle l}. The following 5-cycle shows that these five relations cannot all be anti-associative.
(ij)(kl)=−((ij)k)l=(i(jk))l=−i((jk)l)=i(j(kl))=−(ij)(kl)=0{\displaystyle (ij)(kl)=-((ij)k)l=(i(jk))l=-i((jk)l)=i(j(kl))=-(ij)(kl)=0}
In particular, in the table above, usinge1,e2,e4{\displaystyle e_{1},e_{2},e_{4}}ande8{\displaystyle e_{8}}the last expression associates:(e1e2)e12=e1(e2e12)=−e15{\displaystyle (e_{1}e_{2})e_{12}=e_{1}(e_{2}e_{12})=-e_{15}}
The particular sedenion multiplication table shown above is represented by 35 triads. The table and its triads have been constructed from anoctonionrepresented by the bolded set of 7 triads usingCayley–Dickson construction. It is one of 480 possible sets of 7 triads (one of two shown in the octonion article) and is the one based on the Cayley–Dickson construction ofquaternionsfrom two possible quaternion constructions from thecomplex numbers. The binary representations of the indices of these triplesbitwise XORto 0. These 35 triads are:
{{1, 2, 3},{1, 4, 5},{1, 7, 6}, {1, 8, 9}, {1, 11, 10}, {1, 13, 12}, {1, 14, 15},{2, 4, 6},{2, 5, 7}, {2, 8, 10}, {2, 9, 11}, {2, 14, 12}, {2, 15, 13},{3, 4, 7},{3, 6, 5}, {3, 8, 11}, {3, 10, 9}, {3, 13, 14}, {3, 15, 12}, {4, 8, 12}, {4, 9, 13},{4, 10, 14}, {4, 11, 15}, {5, 8, 13}, {5, 10, 15}, {5, 12, 9}, {5, 14, 11}, {6, 8, 14},{6, 11, 13}, {6, 12, 10}, {6, 15, 9}, {7, 8, 15}, {7, 9, 14}, {7, 12, 11}, {7, 13, 10} }
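As a quick, standalone illustration of this XOR property, the small check below (not part of the article itself) verifies a few of the triads listed above:

```cpp
#include <cstdio>

int main() {
    // A few of the 35 triads listed above; each triple of indices XORs to zero.
    int triads[][3] = {
        {1, 2, 3}, {1, 4, 5}, {1, 7, 6}, {2, 4, 6}, {3, 4, 7},
        {4, 8, 12}, {5, 10, 15}, {6, 11, 13}, {7, 9, 14}, {7, 13, 10},
    };
    for (auto &t : triads)
        std::printf("%2d ^ %2d ^ %2d = %d\n", t[0], t[1], t[2], t[0] ^ t[1] ^ t[2]);
    return 0;
}
```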
The list of 84 sets of zero divisors{ea,eb,ec,ed}{\displaystyle \{e_{a},e_{b},e_{c},e_{d}\}}, where(ea+eb)∘(ec+ed)=0{\displaystyle (e_{a}+e_{b})\circ (e_{c}+e_{d})=0}:
Sedenion zero divisors {ea, eb, ec, ed} where (ea + eb) ∘ (ec + ed) = 0, with 1 ≤ a ≤ 6, c > a, and 9 ≤ b ≤ 15; in the first and third columns 9 ≤ d ≤ 15, while in the second and fourth columns −9 ≥ d ≥ −15:

{e1, e10, e5, e14}  {e1, e10, e4, −e15}  {e1, e10, e7, e12}  {e1, e10, e6, −e13}
{e1, e11, e4, e14}  {e1, e11, e6, −e12}  {e1, e11, e5, e15}  {e1, e11, e7, −e13}
{e1, e12, e2, e15}  {e1, e12, e3, −e14}  {e1, e12, e6, e11}  {e1, e12, e7, −e10}
{e1, e13, e6, e10}  {e1, e13, e2, −e14}  {e1, e13, e7, e11}  {e1, e13, e3, −e15}
{e1, e14, e2, e13}  {e1, e14, e4, −e11}  {e1, e14, e3, e12}  {e1, e14, e5, −e10}
{e1, e15, e3, e13}  {e1, e15, e2, −e12}  {e1, e15, e4, e10}  {e1, e15, e5, −e11}
{e2, e9, e4, e15}  {e2, e9, e5, −e14}  {e2, e9, e6, e13}  {e2, e9, e7, −e12}
{e2, e11, e5, e12}  {e2, e11, e4, −e13}  {e2, e11, e6, e15}  {e2, e11, e7, −e14}
{e2, e12, e3, e13}  {e2, e12, e5, −e11}  {e2, e12, e7, e9}  {e2, e13, e3, −e12}
{e2, e13, e4, e11}  {e2, e13, e6, −e9}  {e2, e14, e5, e9}  {e2, e14, e3, −e15}
{e2, e14, e7, e11}  {e2, e15, e4, −e9}  {e2, e15, e3, e14}  {e2, e15, e6, −e11}
{e3, e9, e6, e12}  {e3, e9, e4, −e14}  {e3, e9, e7, e13}  {e3, e9, e5, −e15}
{e3, e10, e4, e13}  {e3, e10, e5, −e12}  {e3, e10, e7, e14}  {e3, e10, e6, −e15}
{e3, e12, e5, e10}  {e3, e12, e6, −e9}  {e3, e14, e4, e9}  {e3, e13, e4, −e10}
{e3, e15, e5, e9}  {e3, e13, e7, −e9}  {e3, e15, e6, e10}  {e3, e14, e7, −e10}
{e4, e9, e7, e10}  {e4, e9, e6, −e11}  {e4, e10, e5, e11}  {e4, e10, e7, −e9}
{e4, e11, e6, e9}  {e4, e11, e5, −e10}  {e4, e13, e6, e15}  {e4, e13, e7, −e14}
{e4, e14, e7, e13}  {e4, e14, e5, −e15}  {e4, e15, e5, e14}  {e4, e15, e6, −e13}
{e5, e10, e6, e9}  {e5, e9, e6, −e10}  {e5, e11, e7, e9}  {e5, e9, e7, −e11}
{e5, e12, e7, e14}  {e5, e12, e6, −e15}  {e5, e15, e6, e12}  {e5, e14, e7, −e12}
{e6, e11, e7, e10}  {e6, e10, e7, −e11}  {e6, e13, e7, e12}  {e6, e12, e7, −e13}
Moreno (1998)showed that the space of pairs of norm-one sedenions that multiply to zero ishomeomorphicto the compact form of the exceptionalLie groupG2. (Note that in his paper, a "zero divisor" means apairof elements that multiply to zero.)
Guillard & Gresnigt (2019)demonstrated that the three generations ofleptonsandquarksthat are associated with unbrokengauge symmetrySU(3)c×U(1)em{\displaystyle \mathrm {SU(3)_{c}\times U(1)_{em}} }can be represented using the algebra of the complexified sedenionsC⊗S{\displaystyle \mathbb {C\otimes S} }. Their reasoning follows that a primitiveidempotentprojectorρ+=1/2(1+ie15){\displaystyle \rho _{+}=1/2(1+ie_{15})}— wheree15{\displaystyle e_{15}}is chosen as animaginary unitakin toe7{\displaystyle e_{7}}forO{\displaystyle \mathbb {O} }in theFano plane— thatactson thestandard basisof the sedenions uniquely divides the algebra into three sets ofsplit basiselements forC⊗O{\displaystyle \mathbb {C\otimes O} }, whose adjointleft actionson themselvesgenerate three copies of theClifford algebraCl(6){\displaystyle \mathrm {C} l(6)}which in-turn containminimal left idealsthat describe a single generation offermionswith unbrokenSU(3)c×U(1)em{\displaystyle \mathrm {SU(3)_{c}\times U(1)_{em}} }gauge symmetry. In particular, they note thattensor productsbetween normed division algebras generate zero divisors akin to those insideS{\displaystyle \mathbb {S} }, where forC⊗O{\displaystyle \mathbb {C\otimes O} }the lack of alternativity and associativity does not affect the construction of minimal left ideals since their underlying split basis requires only two basis elements to be multiplied together, in-which associativity or alternativity are uninvolved. Still, these ideals constructed from an adjoint algebra of left actions of the algebra on itself remain associative, alternative, andisomorphicto a Clifford algebra. Altogether, this permits three copies of(C⊗O)L≅Cl(6){\displaystyle (\mathbb {C\otimes O} )_{L}\cong \mathrm {Cl(6)} }to exist inside(C⊗S)L{\displaystyle \mathbb {(C\otimes S)} _{L}}. Furthermore, these three complexified octonion subalgebras are not independent; they share a commonCl(2){\displaystyle \mathrm {C} l(2)}subalgebra, which the authors note could form a theoretical basis forCKMandPMNSmatrices that, respectively, describequark mixingandneutrino oscillations.
Sedenion neural networks provide[further explanation needed]a means of efficient and compact expression in machine learning applications and have been used in solving multiple time-series and traffic forecasting problems.[4][5]
|
https://en.wikipedia.org/wiki/Sedenion
|
Indeterminate formis a mathematical expression that can obtain any value depending on circumstances. Incalculus, it is usually possible to compute thelimitof the sum, difference, product, quotient or power of two functions by taking the corresponding combination of the separate limits of each respective function. For example,
limx→c(f(x)+g(x))=limx→cf(x)+limx→cg(x),limx→c(f(x)g(x))=limx→cf(x)⋅limx→cg(x),{\displaystyle {\begin{aligned}\lim _{x\to c}{\bigl (}f(x)+g(x){\bigr )}&=\lim _{x\to c}f(x)+\lim _{x\to c}g(x),\\[3mu]\lim _{x\to c}{\bigl (}f(x)g(x){\bigr )}&=\lim _{x\to c}f(x)\cdot \lim _{x\to c}g(x),\end{aligned}}}
and likewise for other arithmetic operations; this is sometimes called thealgebraic limit theorem. However, certain combinations of particular limiting values cannot be computed in this way, and knowing the limit of each function separately does not suffice to determine the limit of the combination. In these particular situations, the limit is said to take anindeterminate form, described by one of the informal expressions
00,∞∞,0×∞,∞−∞,00,1∞,or∞0,{\displaystyle {\frac {0}{0}},~{\frac {\infty }{\infty }},~0\times \infty ,~\infty -\infty ,~0^{0},~1^{\infty },{\text{ or }}\infty ^{0},}
among a wide variety of uncommon others, where each expression stands for the limit of a function constructed by an arithmetical combination of two functions whose limits respectively tend to0,{\displaystyle 0,}1,{\displaystyle 1,}or∞{\displaystyle \infty }as indicated.[1]
A limit taking one of these indeterminate forms might tend to zero, might tend to any finite value, might tend to infinity, or might diverge, depending on the specific functions involved. A limit which unambiguously tends to infinity, for instancelimx→01/x2=∞,{\textstyle \lim _{x\to 0}1/x^{2}=\infty ,}is not considered indeterminate.[2]The term was originally introduced byCauchy's studentMoignoin the middle of the 19th century.
The most common example of an indeterminate form is the quotient of two functions each of which converges to zero. This indeterminate form is denoted by0/0{\displaystyle 0/0}. For example, asx{\displaystyle x}approaches0,{\displaystyle 0,}the ratiosx/x3{\displaystyle x/x^{3}},x/x{\displaystyle x/x}, andx2/x{\displaystyle x^{2}/x}go to∞{\displaystyle \infty },1{\displaystyle 1}, and0{\displaystyle 0}respectively. In each case, if the limits of the numerator and denominator are substituted, the resulting expression is0/0{\displaystyle 0/0}, which is indeterminate. In this sense,0/0{\displaystyle 0/0}can take on the values0{\displaystyle 0},1{\displaystyle 1}, or∞{\displaystyle \infty }, by appropriate choices of functions to put in the numerator and denominator. A pair of functions for which the limit is any particular given value may in fact be found. Even more surprising, perhaps, the quotient of the two functions may in fact diverge, and not merely diverge to infinity. For example,xsin(1/x)/x{\displaystyle x\sin(1/x)/x}.
So the fact that twofunctionsf(x){\displaystyle f(x)}andg(x){\displaystyle g(x)}converge to0{\displaystyle 0}asx{\displaystyle x}approaches somelimit pointc{\displaystyle c}is insufficient to determine thelimit{\displaystyle \lim _{x\to c}{\frac {f(x)}{g(x)}}.}
An expression that arises in some way other than by applying the algebraic limit theorem may have the same form as an indeterminate form. However, it is not appropriate to call an expression an "indeterminate form" if it arises outside the context of determining limits.
An example is the expression00{\displaystyle 0^{0}}. Whether this expression is left undefined, or is defined to equal1{\displaystyle 1}, depends on the field of application and may vary between authors. For more, see the articleZero to the power of zero. Note that0∞{\displaystyle 0^{\infty }}and other expressions involving infinityare not indeterminate forms.
The indeterminate form0/0{\displaystyle 0/0}is particularly common incalculus, because it often arises in the evaluation ofderivativesusing their definition in terms of limit.
As mentioned above,{\displaystyle \lim _{x\to 0}{\frac {x}{x}}=1,}while{\displaystyle \lim _{x\to 0}{\frac {x^{2}}{x}}=0.}This is enough to show that{\displaystyle 0/0}is an indeterminate form. Other examples with this indeterminate form include the classical limits{\displaystyle \lim _{x\to 0}{\frac {\sin x}{x}}}and{\displaystyle \lim _{x\to 0}{\frac {1-\cos x}{x}}.}
Direct substitution of the number thatx{\displaystyle x}approaches into any of these expressions shows that these examples correspond to the indeterminate form0/0{\displaystyle 0/0}, but these limits can assume many different values. Any desired valuea{\displaystyle a}can be obtained for this indeterminate form as follows:
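One choice that works is{\displaystyle \lim _{x\to 0}{\frac {ax}{x}}=a.}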
The value∞{\displaystyle \infty }can also be obtained (in the sense of divergence to infinity):
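For example,{\displaystyle \lim _{x\to 0}{\frac {x}{x^{3}}}=\lim _{x\to 0}{\frac {1}{x^{2}}}=\infty .}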
The following limits illustrate that the expression00{\displaystyle 0^{0}}is an indeterminate form:limx→0+x0=1,limx→0+0x=0.{\displaystyle {\begin{aligned}\lim _{x\to 0^{+}}x^{0}&=1,\\\lim _{x\to 0^{+}}0^{x}&=0.\end{aligned}}}
Thus, in general, knowing thatlimx→cf(x)=0{\displaystyle \textstyle \lim _{x\to c}f(x)\;=\;0}andlimx→cg(x)=0{\displaystyle \textstyle \lim _{x\to c}g(x)\;=\;0}is not sufficient to evaluate the limitlimx→cf(x)g(x).{\displaystyle \lim _{x\to c}f(x)^{g(x)}.}
If the functionsf{\displaystyle f}andg{\displaystyle g}areanalyticatc{\displaystyle c}, andf{\displaystyle f}is positive forx{\displaystyle x}sufficiently close (but not equal) toc{\displaystyle c}, then the limit off(x)g(x){\displaystyle f(x)^{g(x)}}will be1{\displaystyle 1}.[3]Otherwise, use the transformation in thetablebelow to evaluate the limit.
The expression1/0{\displaystyle 1/0}is not commonly regarded as an indeterminate form, because if the limit off/g{\displaystyle f/g}exists then there is no ambiguity as to its value, as it always diverges. Specifically, iff{\displaystyle f}approaches1{\displaystyle 1}andg{\displaystyle g}approaches0,{\displaystyle 0,}thenf{\displaystyle f}andg{\displaystyle g}may be chosen so that:
In each case the absolute value|f/g|{\displaystyle |f/g|}approaches+∞{\displaystyle +\infty }, and so the quotientf/g{\displaystyle f/g}must diverge, in the sense of theextended real numbers(in the framework of theprojectively extended real line, the limit is theunsigned infinity∞{\displaystyle \infty }in all three cases[4]). Similarly, any expression of the forma/0{\displaystyle a/0}witha≠0{\displaystyle a\neq 0}(includinga=+∞{\displaystyle a=+\infty }anda=−∞{\displaystyle a=-\infty }) is not an indeterminate form, since a quotient giving rise to such an expression will always diverge.
The expression0∞{\displaystyle 0^{\infty }}is not an indeterminate form. The expression0+∞{\displaystyle 0^{+\infty }}obtained from consideringlimx→cf(x)g(x){\displaystyle \lim _{x\to c}f(x)^{g(x)}}gives the limit0,{\displaystyle 0,}provided thatf(x){\displaystyle f(x)}remains nonnegative asx{\displaystyle x}approachesc{\displaystyle c}. The expression0−∞{\displaystyle 0^{-\infty }}is similarly equivalent to1/0{\displaystyle 1/0}; iff(x)>0{\displaystyle f(x)>0}asx{\displaystyle x}approachesc{\displaystyle c}, the limit comes out as+∞{\displaystyle +\infty }.
To see why, letL=limx→cf(x)g(x),{\displaystyle L=\lim _{x\to c}f(x)^{g(x)},}wherelimx→cf(x)=0,{\displaystyle \lim _{x\to c}{f(x)}=0,}andlimx→cg(x)=∞.{\displaystyle \lim _{x\to c}{g(x)}=\infty .}By taking the natural logarithm of both sides and usinglimx→clnf(x)=−∞,{\displaystyle \lim _{x\to c}\ln {f(x)}=-\infty ,}we get thatlnL=limx→c(g(x)×lnf(x))=∞×−∞=−∞,{\displaystyle \ln L=\lim _{x\to c}({g(x)}\times \ln {f(x)})=\infty \times {-\infty }=-\infty ,}which means thatL=e−∞=0.{\displaystyle L={e}^{-\infty }=0.}
The adjectiveindeterminatedoesnotimply that the limit does not exist, as many of the examples above show. In many cases, algebraic elimination,L'Hôpital's rule, or other methods can be used to manipulate the expression so that the limit can be evaluated.
When two variablesα{\displaystyle \alpha }andβ{\displaystyle \beta }converge to zero at the same limit point andlimβα=1{\displaystyle \textstyle \lim {\frac {\beta }{\alpha }}=1}, they are calledequivalent infinitesimal(equiv.α∼β{\displaystyle \alpha \sim \beta }).
Moreover, if variablesα′{\displaystyle \alpha '}andβ′{\displaystyle \beta '}are such thatα∼α′{\displaystyle \alpha \sim \alpha '}andβ∼β′{\displaystyle \beta \sim \beta '}, then:
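{\displaystyle \lim {\frac {\beta }{\alpha }}=\lim {\frac {\beta '}{\alpha '}},}whenever the limit on either side exists.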
Here is a brief proof:
Suppose there are two equivalent infinitesimalsα∼α′{\displaystyle \alpha \sim \alpha '}andβ∼β′{\displaystyle \beta \sim \beta '}.
limβα=limββ′α′β′α′α=limββ′limα′αlimβ′α′=limβ′α′{\displaystyle \lim {\frac {\beta }{\alpha }}=\lim {\frac {\beta \beta '\alpha '}{\beta '\alpha '\alpha }}=\lim {\frac {\beta }{\beta '}}\lim {\frac {\alpha '}{\alpha }}\lim {\frac {\beta '}{\alpha '}}=\lim {\frac {\beta '}{\alpha '}}}
For the evaluation of the indeterminate form0/0{\displaystyle 0/0}, one can make use of the following facts about equivalentinfinitesimals(e.g.,x∼sinx{\displaystyle x\sim \sin x}ifxbecomes closer to zero):[5]
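Commonly used equivalences as{\displaystyle x\to 0}include{\displaystyle \sin x\sim x,\quad \tan x\sim x,\quad 1-\cos x\sim {\tfrac {x^{2}}{2}},\quad e^{x}-1\sim x,\quad \ln(1+x)\sim x.}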
For example:
limx→01x3[(2+cosx3)x−1]=limx→0exln2+cosx3−1x3=limx→01x2ln2+cosx3=limx→01x2ln(cosx−13+1)=limx→0cosx−13x2=limx→0−x26x2=−16{\displaystyle {\begin{aligned}\lim _{x\to 0}{\frac {1}{x^{3}}}\left[\left({\frac {2+\cos x}{3}}\right)^{x}-1\right]&=\lim _{x\to 0}{\frac {e^{x\ln {\frac {2+\cos x}{3}}}-1}{x^{3}}}\\&=\lim _{x\to 0}{\frac {1}{x^{2}}}\ln {\frac {2+\cos x}{3}}\\&=\lim _{x\to 0}{\frac {1}{x^{2}}}\ln \left({\frac {\cos x-1}{3}}+1\right)\\&=\lim _{x\to 0}{\frac {\cos x-1}{3x^{2}}}\\&=\lim _{x\to 0}-{\frac {x^{2}}{6x^{2}}}\\&=-{\frac {1}{6}}\end{aligned}}}
In the 2nd equality,ey−1∼y{\displaystyle e^{y}-1\sim y}wherey=xln2+cosx3{\displaystyle y=x\ln {2+\cos x \over 3}}asy{\displaystyle y}approaches 0 is used, andy∼ln(1+y){\displaystyle y\sim \ln {(1+y)}}wherey=cosx−13{\displaystyle y={{\cos x-1} \over 3}}is used in the 4th equality, and1−cosx∼x22{\displaystyle 1-\cos x\sim {x^{2} \over 2}}is used in the 5th equality.
L'Hôpital's rule is a general method for evaluating the indeterminate forms0/0{\displaystyle 0/0}and∞/∞{\displaystyle \infty /\infty }. This rule states that (under appropriate conditions)
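{\displaystyle \lim _{x\to c}{\frac {f(x)}{g(x)}}=\lim _{x\to c}{\frac {f'(x)}{g'(x)}},}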
wheref′{\displaystyle f'}andg′{\displaystyle g'}are thederivativesoff{\displaystyle f}andg{\displaystyle g}. (Note that this rule doesnotapply to expressions∞/0{\displaystyle \infty /0},1/0{\displaystyle 1/0}, and so on, as these expressions are not indeterminate forms.) These derivatives will allow one to perform algebraic simplification and eventually evaluate the limit.
L'Hôpital's rule can also be applied to other indeterminate forms, using first an appropriate algebraic transformation. For example, to evaluate the form 00:
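One such transformation is{\displaystyle \lim _{x\to c}f(x)^{g(x)}=\exp \left(\lim _{x\to c}{\frac {\ln f(x)}{1/g(x)}}\right).}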
The right-hand side is of the form∞/∞{\displaystyle \infty /\infty }, so L'Hôpital's rule applies to it. Note that this equation is valid (as long as the right-hand side is defined) because thenatural logarithm(ln) is acontinuous function; it is irrelevant how well-behavedf{\displaystyle f}andg{\displaystyle g}may (or may not) be as long asf{\displaystyle f}is asymptotically positive. (the domain of logarithms is the set of all positive real numbers.)
Although L'Hôpital's rule applies to both0/0{\displaystyle 0/0}and∞/∞{\displaystyle \infty /\infty }, one of these forms may be more useful than the other in a particular case (because of the possibility of algebraic simplification afterwards). One can change between these forms by transformingf/g{\displaystyle f/g}to(1/g)/(1/f){\displaystyle (1/g)/(1/f)}.
The following table lists the most common indeterminate forms and the transformations for applying l'Hôpital's rule.
|
https://en.wikipedia.org/wiki/0/0
|
Johann Bernoulli[a](also known asJeanin French orJohnin English; 6 August [O.S.27 July] 1667 – 1 January 1748) was aSwissmathematician and was one of the many prominent mathematicians in theBernoulli family. He is known for his contributions toinfinitesimal calculusand educatingLeonhard Eulerin the pupil's youth.
Johann was born inBasel, the son of Nicolaus Bernoulli, anapothecary, and his wife, Margarethe Schongauer, and began studying medicine atUniversity of Basel. His father desired that he study business so that he might take over the family spice trade, but Johann Bernoulli did not like business and convinced his father to allow him to study medicine instead. Johann Bernoulli began studying mathematics on the side with his older brotherJacob Bernoulli.[5]Throughout Johann Bernoulli's education atBasel University, the Bernoulli brothers worked together, spending much of their time studying the newly discovered infinitesimal calculus. They were among the first mathematicians to not only study and understandcalculusbut to apply it to various problems.[6]In 1690,[7]he completed a degree dissertation in medicine,[8]reviewed byGottfried Leibniz,[7]whose title wasDe Motu musculorum et de effervescent et fermentation.[9]
After graduating from Basel University, Johann Bernoulli moved to teachdifferential equations. Later, in 1694, he married Dorothea Falkner, the daughter of analdermanof Basel, and soon after accepted a position as the professor of mathematics at theUniversity of Groningen. At the request of hisfather-in-law, Bernoulli began the voyage back to his home town of Basel in 1705. Just after setting out on the journey he learned of his brother's death totuberculosis. Bernoulli had planned on becoming the professor of Greek at Basel University upon returning but instead was able to take over as professor of mathematics, his older brother's former position. As a student ofLeibniz's calculus, Bernoulli sided with him in 1713 in theLeibniz–Newton debateover who deserved credit for the discovery of calculus. Bernoulli defended Leibniz by showing that he had solved certain problems with his methods thatNewtonhad failed to solve. Bernoulli also promotedDescartes'vortex theoryoverNewton's theory of gravitation. This ultimately delayed acceptance of Newton's theory incontinental Europe.[10]
In 1724, Johann Bernoulli entered a competition sponsored by the FrenchAcadémie Royale des Sciences, which posed the question:
In defending a view previously espoused by Leibniz, he found himself postulating an infinite external force required to make the body elastic by overcoming the infinite internal force making the body hard. In consequence, he was disqualified for the prize, which was won byMaclaurin. However, Bernoulli's paper was subsequently accepted in 1726 when the Académie considered papers regarding elastic bodies, for which the prize was awarded to Pierre Mazière. Bernoulli received an honourable mention in both competitions.
Although Johann and his brother Jacob Bernoulli worked together before Johann graduated from Basel University, shortly after this, the two developed a jealous and competitive relationship. Johann was jealous of Jacob's position and the two often attempted to outdo each other. After Jacob's death, Johann's jealousy shifted toward his own talented son,Daniel. In 1738 the father–son duo nearly simultaneously published separate works onhydrodynamics(Daniel'sHydrodynamicain 1738 and Johann'sHydraulicain 1743). Johann attempted to take precedence over his son by purposely and falsely predating his work six years prior to his son's.[11][12]
The Bernoulli brothers often worked on the same problems, but not without friction. Their most bitter dispute concerned thebrachistochrone curveproblem, or the equation for the path followed by a particle from one point to another in the shortest amount of time, if the particle is acted upon by gravity alone. Johann presented the problem in 1696, offering a reward for its solution. Entering the challenge, Johann proposed the cycloid, the path of a point on a moving wheel, also pointing out the relation this curve bears to the path taken by a ray of light passing through layers of varied density. Jacob proposed the same solution, but Johann's derivation of the solution was incorrect, and he presented his brother Jacob's derivation as his own.[13]
Bernoulli was hired byGuillaume de l'Hôpitalfor tutoring in mathematics. Bernoulli and l'Hôpital signed a contract which gave l'Hôpital the right to use Bernoulli's discoveries as he pleased. L'Hôpital authored the first textbook on infinitesimal calculus,Analyse des Infiniment Petits pour l'Intelligence des Lignes Courbesin 1696, which mainly consisted of the work of Bernoulli, including what is now known asl'Hôpital's rule.[14][15][16]Subsequently, in letters to Leibniz,Pierre Varignonand others, Bernoulli complained that he had not received enough credit for his contributions, in spite of the preface of his book:
I recognize I owe much to the insights of the Messrs. Bernoulli, especially to those of the younger (John), currently a professor in Groningen. I did unceremoniously use their discoveries, as well as those of Mr. Leibniz. For this reason I consent that they claim as much credit as they please, and will content myself with what they will agree to leave me.
|
https://en.wikipedia.org/wiki/Johann_Bernoulli#Disputes_and_controversy
|
InIBM System/360through present dayz/Architecture, anaddress constantor"adcon"is anassembly languagedata typewhich contains theaddressof a location incomputer memory. An address constant can be one, two, three or four bytes long, although an adcon of less than four bytes is conventionally used to hold an expression for a small integer such as a length, a relative address, or an index value, and does not represent an address at all. Address constants are defined using an assembler language"DC"statement.
Other computer systems have similar facilities, although different names may be used.
A-type adcons normally store a four-byte relocatable address; however, it is possible to specify the length of the constant. For example,AL1(stuff)defines a one-byte adcon, useful mainly for small constants with relocatable values. Other adcon types can similarly have a length specification.
V-type adcons store an external reference to be resolved by thelink-editor.
Y-type adcons are used for two-byte (halfword) addresses. 'Y' adcons can directly address up to 32K bytes of storage, and are not widely used since early System/360 assemblers did not support a 'Y' data type. EarlyDOS/360andBOS/360systems made more use of Y adcons, since the machines these systems ran on had limited storage. The notation 'AL2(value)' is now usually used in preference to 'Y(value)' to define a 16-bit value.
Q-type address constants contain not actual addresses but adisplacementin theExternal Dummy Section– similar to the LinuxGlobal Offset Table(seePosition-independent code). A J-type adcon is set by the linkage editor to hold the cumulative length of the External Dummy Section, and does not actually contain an address.
Other types of address constants areR, which had special significance forTSS/360to address thePSECT, andS, which stores an address inbase-displacementformat – a 16-bit value containing a four-bit general register number and a twelve-bit displacement, the same format in which addresses are encoded in instructions.
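As an illustration of this base-displacement layout, the sketch below packs a register number and a displacement into 16 bits; the values and the helper program are made up for the example and are not part of any assembler.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    unsigned reg  = 12;     // base register number, 0-15 (four bits)
    unsigned disp = 0x0A8;  // displacement, 0-4095 (twelve bits)
    // Pack the register into the high four bits and the displacement into the low twelve.
    std::uint16_t scon = static_cast<std::uint16_t>((reg << 12) | (disp & 0xFFFu));
    std::printf("S-type constant: 0x%04X\n", scon);
    return 0;
}
```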
System z supports typesAD,JD,QD, andVD, which represent 8-byte (doubleword) versions of types 'A', 'J', 'Q', and 'V' to hold 64-bit addresses.
Thenominal valueof the 'DC' is a list of expressions enclosed in parentheses. Expressions can beabsolute,relocatable, orcomplex relocatable.
An absolute expression can be completely evaluated at assembly time and does not require further processing by the linkage editor. For example,DC A(4900796)has an absolute nominal value.
A relocatable expression is one that contains one or more terms that requirerelocationby the linkage editor when the program is linked; for example, an adcon such asDC A(symbol), wheresymbolis a label defined within the program, has a relocatable nominal value.
A complex relocatable expression contains terms that relate to addresses in different source modules. For example,DC A(X-Y)where 'X' and 'Y' are in different modules.
All these are valid adcon's:-
|
https://en.wikipedia.org/wiki/Address_constant
|
Incomputer science, abounded pointeris apointerthat is augmented with additional information that enables the storage bounds within which it may point to be deduced.[1]This additional information sometimes takes the form of two pointers holding the upper and loweraddressesof the storage occupied by the object to which the bounded pointer points.
Use of bound information makes it possible for acompilerto generate code that performsbounds checking, i.e. that tests if a pointer's value lies within the bounds prior to dereferencing the pointer or modifying the value of the pointer. If the bounds are violated some kind ofexceptionmay be raised. This is especially useful for data constructs such asarraysinC.
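A minimal sketch of the idea follows, assuming a small class that stores the bounds alongside the pointer and checks them on each dereference; this is an illustration of the concept, not the mechanism used by any particular compiler.

```cpp
#include <cstddef>
#include <stdexcept>

// The pointer value is stored together with the lower and upper bounds of the
// storage it may legally point into; every dereference is checked against them.
template <typename T>
class BoundedPtr {
    T *ptr;    // current pointer value
    T *lower;  // first valid element
    T *upper;  // one past the last valid element
public:
    BoundedPtr(T *base, std::size_t n) : ptr(base), lower(base), upper(base + n) {}

    T &operator*() const {
        if (ptr < lower || ptr >= upper)
            throw std::out_of_range("bounded pointer dereferenced outside its bounds");
        return *ptr;
    }
    BoundedPtr &operator+=(std::ptrdiff_t d) { ptr += d; return *this; }  // moving is allowed;
                                                                          // only dereferencing is checked
};

int main() {
    int a[4] = {1, 2, 3, 4};
    BoundedPtr<int> p(a, 4);
    p += 3;
    int last = *p;   // ok: still within bounds
    p += 1;          // now one past the end
    // *p would throw std::out_of_range here
    (void)last;
    return 0;
}
```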
|
https://en.wikipedia.org/wiki/Bounded_pointer
|
In someprogramming languages,constis atype qualifier(akeywordapplied to adata type) that indicates that the data is read-only. While this can be used to declareconstants,constin theC familyof languages differs from similar constructs in other languages in that it is part of thetype, and thus has complicated behavior when combined withpointers, references,composite data types, andtype-checking. In other languages, the data is not in a singlememory location, but copied atcompile timefor each use.[1]Languages which use it includeC,C++,D,JavaScript,Julia, andRust.
When applied in anobjectdeclaration,[a]it indicates that the object is aconstant: itsvaluemay not be changed, unlike avariable. This basic use – to declare constants – has parallels in many other languages.
However, unlike in other languages, in the C family of languages theconstis part of thetype, not part of theobject. For example, in C,int const x = 1;declares an objectxofint consttype – theconstis part of the type, as if it were parsed "(int const) x" – while inAda,X : constant INTEGER := 1;declares a constant (a kind of object)XofINTEGERtype: theconstantis part of theobject, but not part of thetype.
This has two subtle results. Firstly,constcan be applied to parts of a more complex type – for example,int const * const x;declares a constant pointer to a constant integer, whileint const * x;declares a variable pointer to a constant integer, andint * const x;declares a constant pointer to a variable integer. Secondly, becauseconstis part of the type, it must match as part of type-checking. For example, the following code is invalid:
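A minimal sketch of the kind of code meant here, assuming f takes a modifiable integer by reference (the names are illustrative, and the code is deliberately ill-formed at the marked line):

```cpp
void f(int &x);      // f is declared to take a modifiable (non-const) integer

int main() {
    int const i = 1; // i is a constant integer
    f(i);            // error: an 'int&' parameter cannot bind to the const object 'i'
    return 0;
}
```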
because the argument tofmust be avariableinteger, butiis aconstantinteger. This matching is a form ofprogram correctness, and is known asconst-correctness. This allows a form ofprogramming by contract, where functions specify as part of theirtype signaturewhether they modify their arguments or not, and whether theirreturn valueis modifiable or not. This type-checking is primarily of interest in pointers and references – not basic value types like integers – but also forcomposite data typesor templated types such ascontainers. It is concealed by the fact that theconstcan often be omitted, due totype coercion(implicittype conversion) and C beingcall-by-value(C++ and D are either call-by-value or call-by-reference).
The idea of const-ness does not imply that the variable as it is stored incomputer memoryis unwritable. Rather,const-ness is acompile-timeconstruct that indicates what a programmershoulddo, not necessarily what theycando. Note, however, that in the case of predefined data (such aschar const *string literals), Cconstisoftenunwritable.
While a constant does not change its value while the program is running, an object declaredconstmay indeed change its value while the program is running. A common example is read-only registers within embedded systems, such as the current state of a digital input. The data registers for digital inputs are often declared as bothconstandvolatile. The content of these registers may change without the program doing anything (volatile), but it would be ill-formed for the program to attempt to write to them (const).
In addition, a (non-static) member-function can be declared asconst. In this case, thethispointerinside such a function is of typeobject_type const *rather than merely of typeobject_type *.[2]This means that non-const functions for this object cannot be called from inside such a function, nor canmember variablesbe modified. In C++, a member variable can be declared asmutable, indicating that this restriction does not apply to it. In some cases, this can be useful, for example withcaching,reference counting, anddata synchronization. In these cases, the logical meaning (state) of the object is unchanged, but the object is not physically constant since its bitwise representation may change.
In C, C++, and D, all data types, including those defined by the user, can be declaredconst, and const-correctness dictates that all variables or objects should be declared as such unless they need to be modified. Such proactive use ofconstmakes values "easier to understand, track, and reason about",[3]and it thus increases the readability and comprehensibility of code and makes working in teams and maintaining code simpler because it communicates information about a value's intended use. This can help thecompileras well as the developer when reasoning about code. It can also enable anoptimizing compilerto generate more efficient code.[4]
For simple non-pointer data types, applying theconstqualifier is straightforward. It can go on either side of some types for historical reasons (for example,const char foo = 'a';is equivalent tochar const foo = 'a';). On some implementations, usingconsttwice (for instance,const char constorchar const const) generates a warning but not an error.
For pointer and reference types, the meaning ofconstis more complicated – either the pointer itself, or the value being pointed to, or both, can beconst. Further, the syntax can be confusing. A pointer can be declared as aconstpointer to a writable value, a writable pointer to aconstvalue, or aconstpointer to aconstvalue. Aconstpointer cannot be reassigned to point to a different object from the one it is initially assigned, but it can be used to modify the value that it points to (called thepointee).[5][6][7][8][9]Reference variables in C++ are an alternate syntax forconstpointers. A pointer to aconstobject, on the other hand, can be reassigned to point to another memory location (which should be an object of the same type or of a convertible type), but it cannot be used to modify the memory that it is pointing to. Aconstpointer to aconstobject can also be declared, and can neither be used to modify the pointee nor be reassigned to point to another object. The following code illustrates these subtleties:
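One way to write these combinations out is sketched below; the identifier names are illustrative rather than prescribed.

```cpp
int value = 1;
int other = 2;

int *ptr = &value;                          // writable pointer to a writable int
int const *ptrToConst = &value;             // the int is read-only through this pointer
int *const constPtr = &value;               // the pointer itself cannot be re-aimed
int const *const constPtrToConst = &value;  // neither the pointer nor the pointee may change

int main() {
    *ptr = 5;              // ok: modify the pointee
    ptr = &other;          // ok: re-aim the pointer
    ptrToConst = &other;   // ok: the pointer itself is writable
    // *ptrToConst = 5;    // error: the pointee is const through this pointer
    *constPtr = 5;         // ok: the pointee is writable
    // constPtr = &other;  // error: the pointer itself is const
    // *constPtrToConst = 5;      // error
    // constPtrToConst = &other;  // error
    return 0;
}
```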
Following the usual C convention for declarations, declaration follows use, and the * in a pointer is written on the pointer, indicating dereferencing. For example, in the declaration int *ptr, the dereferenced form *ptr is an int, while the reference form ptr is a pointer to an int. Thus const modifies the name to its right. The C++ convention is instead to associate the * with the type, as in int* ptr, and read const as modifying the type to the left. int const * ptrToConst can thus be read as "*ptrToConst is an int const" (the value is constant), or "ptrToConst is an int const *" (the pointer is a pointer to a constant integer), as in the sketch above.
Following C++ convention of analyzing the type, not the value, arule of thumbis to read the declaration from right to left. Thus, everything to the left of the star can be identified as the pointed type and everything to the right of the star are the pointer properties. For instance, in our example above,int const *can be read as a writable pointer that refers to a non-writable integer, andint * constcan be read as a non-writable pointer that refers to a writable integer.
A more generic rule that helps you understand complex declarations and definitions works like this: start at the identifier, read to the right as far as possible (stopping at a closing parenthesis or the end of the declaration), then read to the left as far as possible (stopping at an opening parenthesis or the start of the declaration), then step out of the enclosing parentheses, if any, and repeat.
Here is an example:
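The original example is not preserved; as a rough sketch (the names x and f are illustrative), the rule reads these declarations as follows:

    // x is an array of 3 constant pointers to constant int
    int const *const x[3] = { nullptr, nullptr, nullptr };

    // f is a function taking a char and returning a pointer to a constant int
    int const *f(char);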
When reading to the left, it is important that you read the elements from right to left. So an int const * becomes a pointer to a const int and not a const pointer to an int.
In some cases C/C++ allows theconstkeyword to be placed to the left of the type. Here are some examples:
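A brief sketch of this left-hand placement (equivalent right-hand forms are given in the comments; the names are illustrative):

    const int x = 1;         // same as: int const x = 1;
    const char *s = "abc";   // same as: char const *s; a pointer to constant char
    const int *p = &x;       // same as: int const *p; const applies to the int, not to the pointer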
Although C/C++ allows such definitions (which closely match the English language when reading the definitions from left to right), the compiler still reads the definitions according to the abovementioned procedure: from right to left. But putting const before what must be constant quickly introduces mismatches between what you intend to write and what the compiler decides you wrote. Consider pointers to pointers:
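A sketch of the pointer-to-pointer case (the names are illustrative); note that the leading const in the last declaration still binds to the int, not to the outer pointer:

    int **ptr1;                  // pointer to a pointer to int
    int const **ptr2;            // pointer to a pointer to a constant int
    int *const *ptr3;            // pointer to a constant pointer to int
    int **const ptr4 = nullptr;  // constant pointer to a pointer to int
    const int **ptr5;            // same type as ptr2: const binds to int, not to the first *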
As a final note regarding pointer definitions: always write the pointer symbol (the *) as far to the right as possible. Attaching the pointer symbol to the type is tricky, as it strongly suggests that every name declared in that statement is a pointer, which is not necessarily the case. Here are some examples:
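For instance (a sketch with illustrative names):

    int* a, b;   // declares a as a pointer to int, but b as a plain int
    int *c, *d;  // declares two pointers to int; the * is written on each name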
Bjarne Stroustrup's FAQ recommends only declaring one variable per line if using the C++ convention, to avoid this issue.[10]
The same considerations apply to defining references and rvalue references:
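A minimal sketch (illustrative names):

    void example()
    {
        int i = 0;
        int &ref = i;               // reference through which i can be modified
        int const &refToConst = i;  // reference through which i cannot be modified
        int &&rref = i + 1;         // rvalue reference, bound to a temporary
    }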
More complicated declarations are encountered when using multidimensional arrays and references (or pointers) to pointers. Although it is sometimes argued[who?]that such declarations are confusing and error-prone and that they therefore should be avoided or be replaced by higher-level structures, the procedure described at the top of this section can always be used without introducing ambiguities or confusion.
constcan be declared both on function parameters and on variables (staticor automatic, including global or local). The interpretation varies between uses. Aconststatic variable (global variable or static local variable) is a constant, and may be used for data like mathematical constants, such asdouble const PI = 3.14159– realistically longer, or overall compile-time parameters. Aconstautomatic variable (non-static local variable) means thatsingle assignmentis happening, though a different value may be used each time, such asint const x_squared = x * x. Aconstparameter in pass-by-reference means that the referenced value is not modified – it is part of thecontract– while aconstparameter in pass-by-value (or the pointer itself, in pass-by-reference) does not add anything to the interface (as the value has been copied), but indicates that internally, the function does not modify the local copy of the parameter (it is a single assignment). For this reason, some favor usingconstin parameters only for pass-by-reference, where it changes the contract, but not for pass-by-value, where it exposes the implementation.
In order to take advantage of thedesign by contractapproach for user-defined types (structs and classes), which can have methods as well as member data, the programmer may tag instance methods asconstif they don't modify the object's data members.
Applying theconstqualifier to instance methods thus is an essential feature for const-correctness, and is not available in many otherobject-orientedlanguages such asJavaandC#or inMicrosoft'sC++/CLIorManaged Extensions for C++.
Whileconstmethods can be called byconstand non-constobjects alike, non-constmethods can only be invoked by non-constobjects.
Theconstmodifier on an instance method applies to the object pointed to by the "this" pointer, which is an implicit argument passed to all instance methods.
Thus havingconstmethods is a way to apply const-correctness to the implicit "this" pointer argument just like other arguments.
This example illustrates:
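The original listing is not preserved; the following is a minimal sketch consistent with the discussion below (a class C with a const Get() and a non-const Set()):

    class C
    {
        int i;
    public:
        int Get() const    // note the "const" tag
        { return i; }
        void Set(int j)    // note the absence of "const"
        { i = j; }
    };

    void Foo(C &nonConstC, C const &constC)
    {
        int y = nonConstC.Get();  // OK
        int x = constC.Get();     // OK: Get() is const

        nonConstC.Set(10);        // OK: nonConstC is modifiable
        // constC.Set(10);        // error: Set() might modify constC
    }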
In the above code, the implicit "this" pointer toSet()has the type "C *const"; whereas the "this" pointer toGet()has type "C const *const", indicating that the method cannot modify its object through the "this" pointer.
Often the programmer will supply both aconstand a non-constmethod with the same name (but possibly quite different uses) in a class to accommodate both types of callers. Consider:
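A minimal sketch of such a pair of methods (the class name MyArray matches the discussion below; the member layout is illustrative):

    class MyArray
    {
        int data[100];
    public:
        int       &Get(int i)       { return data[i]; }  // the caller may modify the element
        int const &Get(int i) const { return data[i]; }  // the caller may only observe it
    };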
Theconst-ness of the calling object determines which version ofMyArray::Get()will be invoked and thus whether or not the caller is given a reference with which he can manipulate or only observe the private data in the object.
The two methods technically have different signatures because their "this" pointers have different types, allowing the compiler to choose the right one. (Returning aconstreference to anint, instead of merely returning theintby value, may be overkill in the second method, but the same technique can be used for arbitrary types, as in theStandard Template Library.)
There are several loopholes to pure const-correctness in C and C++. They exist primarily for compatibility with existing code.
The first, which applies only to C++, is the use ofconst_cast, which allows the programmer to strip theconstqualifier, making any object modifiable.
The necessity of stripping the qualifier arises when using existing code and libraries that cannot be modified but which are not const-correct. For instance, consider this code:
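A minimal sketch (LibraryFunc and ptr follow the names used below; the wrapper CallLibraryFunc is illustrative):

    void LibraryFunc(int *ptr, int size);  // existing function without const on its parameter

    void CallLibraryFunc(int const *ptr, int size)
    {
        // LibraryFunc(ptr, size);                  // error: would discard the const qualifier
        LibraryFunc(const_cast<int *>(ptr), size);  // strips const; safe only if LibraryFunc does not write through ptr
    }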
However, any attempt to modify an object that is itself declaredconstby means of aconst castresults in undefined behavior according to the ISO C++ Standard.
In the example above, ifptrreferences a global, local, or member variable declared asconst, or an object allocated on the heap vianew int const, the code is only correct ifLibraryFuncreally does not modify the value pointed to byptr.
C needs a similar loophole because of the way variables with static storage duration are initialized. Such variables may be defined with an initial value, but the initializer can use only constants like string constants and other literals, and may not use non-constant elements like variable names, regardless of whether the initializer elements are declared const or not, or whether the static-duration variable is being declared const or not. There is a non-portable way to initialize a const variable that has static storage duration: by carefully constructing a typecast on the left-hand side of a later assignment, a const variable can be written to, effectively stripping away the const attribute and 'initializing' it with non-constant elements like other const variables. Writing into a const variable this way may work as intended, but it causes undefined behavior and seriously contradicts const-correctness:
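A minimal sketch of the cast-on-the-left trick (in the scenario described above the variable has static storage duration; the names threshold and late_init are illustrative):

    static int const threshold = 0;  /* static storage duration */

    void late_init(int value)
    {
        *(int *)&threshold = value;  /* casts away const to "initialize" it later: undefined behavior */
    }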
Another loophole[11]applies both to C and C++. Specifically, the languages dictate that member pointers and references are "shallow" with respect to theconst-ness of their owners – that is, a containing object that isconsthas allconstmembers except that member pointees (and referees) are still mutable. To illustrate, consider this C++ code:
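The original listing is not preserved; the following is a sketch consistent with the discussion below (a struct S holding a pointer member, passed as const to Foo()):

    struct S
    {
        int val;
        int *ptr;
    };

    void Foo(S const &s)
    {
        int i = 42;
        // s.val = i;   // error: s is const, so s.val is const
        // s.ptr = &i;  // error: the pointer member itself is const
        *s.ptr = i;     // allowed: the pointee is not const ("shallow" const-ness)
    }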
Although the object s passed to Foo() is constant, which makes all of its members constant, the pointee accessible through s.ptr is still modifiable, though this may not be desirable from the standpoint of const-correctness because s might solely own the pointee.
For this reason, Meyers argues that the default for member pointers and references should be "deep"const-ness, which could be overridden by amutablequalifier when the pointee is not owned by the container, but this strategy would create compatibility issues with existing code.
Thus, for historical reasons[citation needed], this loophole remains open in C and C++.
The latter loophole can be closed by using a class to hide the pointer behind aconst-correct interface, but such classes either do not support the usual copy semantics from aconstobject (implying that the containing class cannot be copied by the usual semantics either) or allow other loopholes by permitting the stripping ofconst-ness through inadvertent or intentional copying.
Finally, several functions in the C standard library violate const-correctness before C23, as they accept a const pointer to a character string and return a non-const pointer to a part of the same string. strstr and strchr are among these functions.
Some implementations of the C++ standard library, such as Microsoft's[12]try to close this loophole by providing twooverloadedversions of some functions: a "const" version and a "non-const" version.
The use of the type system to express constancy leads to various complexities and problems, and has accordingly been criticized and not adopted outside the narrow C family of C, C++, and D. Java and C#, which are heavily influenced by C and C++, both explicitly rejected const-style type qualifiers, instead expressing constancy by keywords that apply to the identifier (final in Java, const and readonly in C#). Even within C and C++, the use of const varies significantly, with some projects and organizations using it consistently, and others avoiding it.
Theconsttype qualifier causes difficulties when the logic of a function is agnostic to whether its input is constant or not, but returns a value which should be of the same qualified type as an input. In other words, for these functions, if the input is constant (const-qualified), the return value should be as well, but if the input is variable (notconst-qualified), the return value should be as well. Because thetype signatureof these functions differs, it requires two functions (or potentially more, in case of multiple inputs) with the same logic – a form ofgeneric programming.
This problem arises even for simple functions in the C standard library, notably strchr; this observation is credited by Ritchie to Tom Plum in the mid 1980s.[13] The strchr function locates a character in a string; formally, it returns a pointer to the first occurrence of the character c in the string s, and in classic C (K&R C) its prototype is:
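(Reconstructed; the original snippet is not preserved.)

    char *strchr(char *s, int c);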
Thestrchrfunction does not modify the input string, but the return value is often used by the caller to modify the string, such as:
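For example, a caller might write through the returned pointer (a sketch; the function name split_at_equals is illustrative):

    #include <string.h>

    void split_at_equals(char *buf)
    {
        char *p = strchr(buf, '=');
        if (p != NULL)
            *p = '\0';  /* the caller modifies the string through the returned pointer */
    }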
Thus on the one hand the input string can be const (since it is not modified by the function), and if the input string is const the return value should be as well – most simply because it might return exactly the input pointer, if the first character is a match – but on the other hand the return value should not be const if the original string was not const, since the caller may wish to use the pointer to modify the original string.
In C++ this is done viafunction overloading, typically implemented via atemplate, resulting in two functions, so that the return value has the sameconst-qualified type as the input:[b]
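The overload pair looks roughly like this (declarations only):

    char       *strchr(char       *s, int c);
    char const *strchr(char const *s, int c);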
These can in turn be defined by a template:
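A sketch of such a template (T is expected to be char or char const; this is an illustration, not the standard library's actual definition):

    template<typename T>
    T *strchr(T *s, int c)
    {
        for (;; ++s) {
            if (*s == c)
                return s;        // found (also matches the terminating '\0' when c == 0)
            if (*s == '\0')
                return nullptr;  // not found
        }
    }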
In D this is handled via theinoutkeyword, which acts as a wildcard for const, immutable, or unqualified (variable), yielding:[14][c]
However, in C neither of these is possible[d]since C does not have function overloading, and instead, this is handled by having a single function where the input is constant but the output is writable:
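(Reconstructed: the pre-C23 C prototype.)

    char *strchr(const char *s, int c);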
This allows idiomatic C code but does strip the const qualifier if the input actually was const-qualified, violating type safety. This solution was proposed by Ritchie and subsequently adopted. This difference is one of the failures ofcompatibility of C and C++.
SinceC23, this problem is solved with the use of generic functions.strchrand the other functions affected by the issue will return aconstpointer if one was passed to them and an unqualified pointer if an unqualified pointer was passed to them.[15]
In Version 2 of theD programming language, two keywords relating to const exist.[16]Theimmutablekeyword denotes data that cannot be modified through any reference.
Theconstkeyword denotes a non-mutable view of mutable data.
Unlike C++const, Dconstandimmutableare "deep" ortransitive, and anything reachable through aconstorimmutableobject isconstorimmutablerespectively.
Example of const vs. immutable in D
Example of transitive or deep const in D
constwas introduced byBjarne StroustrupinC with Classes, the predecessor toC++, in 1981, and was originally calledreadonly.[17][18]As to motivation, Stroustrup writes:[18]
The first use, as a scoped and typed alternative to macros, was analogously fulfilled for function-like macros via theinlinekeyword. Constant pointers, and the* constnotation, were suggested by Dennis Ritchie and so adopted.[18]
constwas then adopted in C as part of standardization, and appears inC89(and subsequent versions) along with the other type qualifier,volatile.[19]A further qualifier,noalias, was suggested at the December 1987 meeting of the X3J11 committee, but was rejected; its goal was ultimately fulfilled by therestrictkeyword inC99. Ritchie was not very supportive of these additions, arguing that they did not "carry their weight", but ultimately did not argue for their removal from the standard.[20]
D subsequently inheritedconstfrom C++, where it is known as atype constructor(nottype qualifier) and added two further type constructors,immutableandinout, to handle related use cases.[e]
Other languages do not follow C/C++ in having constancy part of the type, though they often have superficially similar constructs and may use theconstkeyword. Typically this is only used for constants (constant objects).
C# has aconstkeyword, but with radically different and simpler semantics: it means a compile-time constant, and is not part of the type.
Nimhas aconstkeyword similar to that of C#: it also declares a compile-time constant rather than forming part of the type. However, in Nim, a constant can be declared from any expression that can be evaluated at compile time.[21]In C#, only C# built-in types can be declared asconst; user-defined types, including classes, structs, and arrays, cannot beconst.[22]
Java does not haveconst– it instead hasfinal, which can be applied to local "variable" declarations and applies to theidentifier, not the type. It has a different object-oriented use for object members, which is the origin of the name.
The Java language specification regardsconstas a reserved keyword – i.e., one that cannot be used as variable identifier – but assigns no semantics to it: it is areserved word(it cannot be used in identifiers) but not akeyword(it has no special meaning). The keyword was included as a means for Java compilers to detect and warn about the incorrect usage of C++ keywords.[23]An enhancement request ticket for implementingconstcorrectness exists in theJava Community Process, but was closed in 2005 on the basis that it was impossible to implement in a backwards-compatible fashion.[24]
The contemporaryAda 83independently had the notion of a constant object and aconstantkeyword,[25][f]withinput parametersand loop parameters being implicitly constant. Here theconstantis a property of the object, not of the type.
JavaScripthas aconstdeclaration that defines ablock-scopedvariable that cannot be reassigned nor redeclared. It defines a read-only reference to a variable that cannot be redefined, but in some situations the value of the variable itself may potentially change, such as if the variable refers to an object and a property of it is altered.[26]
|
https://en.wikipedia.org/wiki/Cray_pointer
|
Incomputer science,dynamic dispatchis the process of selecting which implementation of apolymorphicoperation (methodor function) to call atrun time. It is commonly employed in, and considered a prime characteristic of,object-oriented programming(OOP) languages and systems.[1]
Object-oriented systems model a problem as a set of interacting objects that enact operations referred to by name. Polymorphism is the phenomenon wherein somewhat interchangeable objects each expose an operation of the same name but possibly differing in behavior. As an example, aFileobject and aDatabaseobject both have aStoreRecordmethod that can be used to write a personnel record to storage. Their implementations differ. A program holds a reference to an object which may be either aFileobject or aDatabaseobject. Which it is may have been determined by a run-time setting, and at this stage, the program may not know or care which. When the program callsStoreRecordon the object, something needs to choose which behavior gets enacted. If one thinks of OOP assending messagesto objects, then in this example the program sends aStoreRecordmessage to an object of unknown type, leaving it to the run-time support system to dispatch the message to the right object. The object enacts whichever behavior it implements.[2]
Dynamic dispatch contrasts withstatic dispatch, in which the implementation of a polymorphic operation is selected atcompile time. The purpose of dynamic dispatch is to defer the selection of an appropriate implementation until the run time type of a parameter (or multiple parameters) is known.
Dynamic dispatch is different fromlate binding(also known as dynamic binding).Name bindingassociates a name with an operation. A polymorphic operation has several implementations, all associated with the same name. Bindings can be made at compile time or (with late binding) at run time. With dynamic dispatch, one particular implementation of an operation is chosen at run time. While dynamic dispatch does not imply late binding, late binding does imply dynamic dispatch, since the implementation of a late-bound operation is not known until run time.[citation needed]
The choice of which version of a method to call may be based either on a single object, or on a combination of objects. The former is calledsingle dispatchand is directly supported by common object-oriented languages such asSmalltalk,C++,Java,C#,Objective-C,Swift,JavaScript, andPython. In these and similar languages, one may call a method fordivisionwith syntax that resembles
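(reconstructed snippet; the names dividend and divisor are taken from the following sentence)

    dividend.divide(divisor)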
where the parameters are optional. This is thought of as sending a message nameddividewith parameterdivisortodividend. An implementation will be chosen based only ondividend's type (perhapsrational,floating point,matrix), disregarding the type or value ofdivisor.
By contrast, some languages dispatch methods or functions based on the combination of operands; in the division case, the types of thedividendanddivisortogether determine whichdivideoperation will be performed. This is known asmultiple dispatch. Examples of languages that support multiple dispatch areCommon Lisp,Dylan, andJulia.
A language may be implemented with different dynamic dispatch mechanisms. The choices of the dynamic dispatch mechanism offered by a language to a large extent alter the programming paradigms that are available or are most natural to use within a given language.
Normally, in a typed language, the dispatch mechanism will be performed based on the type of the arguments (most commonly based on the type of the receiver of a message). Languages with weak or no typing systems often carry a dispatch table as part of the object data for each object. This allowsinstance behaviouras each instance may map a given message to a separate method.
Some languages offer a hybrid approach.
Dynamic dispatch will always incur an overhead so some languages offer static dispatch for particular methods.
C++ uses early binding and offers both dynamic and static dispatch. The default form of dispatch is static. To get dynamic dispatch the programmer must declare a method asvirtual.
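A minimal sketch (the types Shape and Circle are illustrative):

    #include <iostream>

    struct Shape {
        virtual void draw() const { std::cout << "Shape\n"; }  // dynamically dispatched
        void id() const { std::cout << "static\n"; }           // statically dispatched
        virtual ~Shape() = default;
    };

    struct Circle : Shape {
        void draw() const override { std::cout << "Circle\n"; }
    };

    int main() {
        Circle c;
        Shape &s = c;
        s.draw();  // prints "Circle": the implementation is chosen at run time
        s.id();    // resolved at compile time against Shape
    }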
C++ compilers typically implement dynamic dispatch with a data structure called avirtual function table(vtable) that defines the name-to-implementation mapping for a given class as a set of member function pointers. This is purely an implementation detail, as the C++ specification does not mention vtables. Instances of that type will then store a pointer to this table as part of their instance data, complicating scenarios whenmultiple inheritanceis used. Since C++ does not support late binding, the virtual table in a C++ object cannot be modified at runtime, which limits the potential set of dispatch targets to a finite set chosen at compile time.
Type overloading does not produce dynamic dispatch in C++ as the language considers the types of the message parameters part of the formal message name. This means that the message name the programmer sees is not the formal name used for binding.
In Go, Rust and Nim, a more versatile variation of early binding is used. Vtable pointers are carried with object references as 'fat pointers' ('interfaces' in Go, or 'trait objects' in Rust[3][4]).
This decouples the supported interfaces from the underlying data structures. Each compiled library needn't know the full range of interfaces supported in order to correctly use a type, just the specific vtable layout that they require. Code can pass around different interfaces to the same piece of data to different functions. This versatility comes at the expense of extra data with each object reference, which is problematic if many such references are stored persistently.
The term fat pointer simply refers to a pointer with additional associated information. The additional information may be a vtable pointer for dynamic dispatch described above, but is more commonly the associated object's size to describe e.g. a slice.[citation needed]
Smalltalk uses a type-based message dispatcher. Each instance has a single type whose definition contains the methods. When an instance receives a message, the dispatcher looks up the corresponding method in the message-to-method map for the type and then invokes the method.
Because a type can have a chain of base types, this look-up can be expensive. A naive implementation of Smalltalk's mechanism would seem to have a significantly higher overhead than that of C++ and this overhead would be incurred for every message that an object receives.
Real Smalltalk implementations often use a technique known asinline caching[5]that makes method dispatch very fast. Inline caching basically stores the previous destination method address and object class of the call site (or multiple pairs for multi-way caching). The cached method is initialized with the most common target method (or just the cache miss handler), based on the method selector. When the method call site is reached during execution, it just calls the address in the cache. (In a dynamic code generator, this call is a direct call as the direct address is back patched by cache miss logic.) Prologue code in the called method then compares the cached class with the actual object class, and if they don't match, execution branches to a cache miss handler to find the correct method in the class. A fast implementation may have multiple cache entries and it often only takes a couple of instructions to get execution to the correct method on an initial cache miss. The common case will be a cached class match, and execution will just continue in the method.
Out-of-line caching can also be used in the method invocation logic, using the object class and method selector. In one design, the class and method selector are hashed, and used as an index into a method dispatch cache table.
As Smalltalk is a reflective language, many implementations allow mutating individual objects into objects with dynamically generated method lookup tables. This allows altering object behavior on a per object basis. A whole category of languages known asprototype-based languageshas grown from this, the most famous of which areSelfandJavaScript. Careful design of the method dispatch caching allows even prototype-based languages to have high-performance method dispatch.
Many other dynamically typed languages, includingPython,Ruby,Objective-CandGroovyuse similar approaches.
|
https://en.wikipedia.org/wiki/Fat_pointer
|
Afunction pointer, also called asubroutine pointerorprocedure pointer, is apointerreferencing executable code, rather than data.Dereferencingthe function pointer yields the referencedfunction, which can be invoked and passed arguments just as in a normal function call. Such an invocation is also known as an "indirect" call, because the function is being invokedindirectlythrough a variable instead ofdirectlythrough a fixed identifier or address.
Function pointers allow different code to be executed at runtime. They can also be passed to a function to enablecallbacks.
Function pointers are supported bythird-generationprogramming languages(such asPL/I,COBOL,Fortran,[1]dBASEdBL[clarification needed], andC) andobject-oriented programminglanguages (such asC++,C#, andD).[2]
The simplest implementation of a function (or subroutine) pointer is as avariablecontaining theaddressof the function within executable memory. Olderthird-generation languagessuch asPL/IandCOBOL, as well as more modern languages such asPascalandCgenerally implement function pointers in this manner.[3]
The following C program illustrates the use of two function pointers:
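The original listing is not preserved; a minimal sketch with two illustrative functions, add and mul:

    #include <stdio.h>

    static double add(double a, double b) { return a + b; }
    static double mul(double a, double b) { return a * b; }

    int main(void)
    {
        double (*op1)(double, double) = add;  /* first function pointer */
        double (*op2)(double, double) = mul;  /* second function pointer */

        printf("%g %g\n", op1(2.0, 3.0), op2(2.0, 3.0));  /* indirect calls */
        return 0;
    }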
The next program uses a function pointer to invoke one of two functions (sinorcos) indirectly from another function (compute_sum, computing an approximation of the function'sRiemann integration). The program operates by having functionmaincall functioncompute_sumtwice, passing it a pointer to the library functionsinthe first time, and a pointer to functioncosthe second time. Functioncompute_sumin turn invokes one of the two functions indirectly by dereferencing its function pointer argumentfuncpmultiple times, adding together the values that the invoked function returns and returning the resulting sum. The two sums are written to the standard output bymain.
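A sketch consistent with that description (the numerical details, such as the number of sample points, are illustrative):

    #include <math.h>
    #include <stdio.h>

    /* Approximates the Riemann integral of *funcp over [lo, hi] by the midpoint rule. */
    static double compute_sum(double (*funcp)(double), double lo, double hi)
    {
        double sum = 0.0;
        int i;
        for (i = 0; i < 100; i++) {
            double x = lo + (i + 0.5) * (hi - lo) / 100.0;
            sum += (*funcp)(x);  /* indirect call through the function pointer */
        }
        return sum * (hi - lo) / 100.0;
    }

    int main(void)
    {
        printf("%g\n", compute_sum(sin, 0.0, 1.0));  /* pass a pointer to sin */
        printf("%g\n", compute_sum(cos, 0.0, 1.0));  /* pass a pointer to cos */
        return 0;
    }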
Functors, or function objects, are similar to function pointers, and can be used in similar ways. A functor is an object of a class type that implements thefunction-call operator, allowing the object to be used within expressions using the same syntax as a function call. Functors are more powerful than simple function pointers, being able to contain their own data values, and allowing the programmer to emulateclosures. They are also used as callback functions if it is necessary to use a member function as a callback function.[4]
Many "pure" object-oriented languages do not support function pointers. Something similar can be implemented in these kinds of languages, though, usingreferencestointerfacesthat define a singlemethod(member function).CLI languagessuch asC#andVisual Basic .NETimplementtype-safefunction pointers withdelegates.
In other languages that supportfirst-class functions, functions are regarded as data, and can be passed, returned, and created dynamically directly by other functions, eliminating the need for function pointers.
Extensively using function pointers to call functions may produce a slow-down for the code on modern processors, because a branch predictor may not be able to figure out where to branch to (it depends on the value of the function pointer at run time), although this effect can be overstated as it is often amply compensated for by significantly reduced non-indexed table lookups.
C++ includes support forobject-oriented programming, so classes can havemethods(usually referred to as member functions). Non-static member functions (instance methods) have an implicit parameter (thethispointer) which is the pointer to the object it is operating on, so the type of the object must be included as part of the type of the function pointer. The method is then used on an object of that class by using one of the "pointer-to-member" operators:.*or->*(for an object or a pointer to object, respectively).[dubious–discuss]
Although function pointers in C and C++ can be implemented as simple addresses, so that typicallysizeof(Fx)==sizeof(void *), member pointers in C++ are sometimes implemented as "fat pointers", typically two or three times the size of a simple function pointer, in order to deal withvirtual methodsandvirtual inheritance[citation needed].
In C++, in addition to the method used in C, it is also possible to use the C++ standard library class templatestd::function, of which the instances are function objects:
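A sketch of these techniques (the type Number and its members are illustrative); it also shows the member-pointer and static-member cases discussed next:

    #include <functional>
    #include <iostream>

    struct Number {
        int value;
        int twice() const { return 2 * value; }  // non-static member function
        static int zero() { return 0; }          // static member function
    };

    typedef int (Number::*NumberMemFn)() const;  // typedef for a pointer to member function

    int main() {
        // A std::function instance is a function object wrapping a callable:
        std::function<int(const Number &)> f = &Number::twice;
        Number n{21};
        std::cout << f(n) << '\n';               // prints 42

        // A traditional pointer to member function, invoked with the .* operator:
        NumberMemFn pmf = &Number::twice;
        std::cout << (n.*pmf)() << '\n';         // also prints 42

        // A pointer to a static member function is an ordinary function pointer:
        int (*pf)() = &Number::zero;
        std::cout << pf() << '\n';               // prints 0
    }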
This is how C++ uses function pointers when dealing with member functions of classes or structs. These are invoked using an object pointer or a this call. They are type safe in that you can only call members of that class (or derivatives) using a pointer of that type. This example also demonstrates the use of a typedef for the pointer to member function added for simplicity. Function pointers to static member functions are done in the traditional 'C' style because there is no object pointer for this call required.
The C and C++ syntax given above is the canonical one used in all the textbooks - but it's difficult to read and explain. Even the above typedef examples use this syntax. However, every C and C++ compiler supports a more clear and concise mechanism to declare function pointers: use typedef, but don't store the pointer as part of the definition. Note that the only way this kind of typedef can actually be used is with a pointer - but that highlights the pointer-ness of it.
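A sketch of this style (the typedef name Fn is the one referred to in the next paragraph; square is illustrative):

    typedef int Fn(int);  // Fn is the function type itself, not a pointer to it

    int square(int x) { return x * x; }

    Fn *fp = square;      // the * makes the pointer-ness explicit at the point of use
    int nine = fp(3);     // indirect call through fp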
These examples use the above definitions. In particular, note that the above definition forFncan be used in pointer-to-member-function definitions:
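Continuing the sketch above (the class C and its member are illustrative):

    struct C
    {
        int twice(int x) { return 2 * x; }
    };

    Fn C::*pmf = &C::twice;    // pointer to a member function of C with type int(int)

    int call(C &obj, int v)
    {
        return (obj.*pmf)(v);  // invoked through an object with the .* operator
    }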
PL/Iprocedures can be nested, that is, procedure A may contain procedure B, which in turn may contain C. In addition to data declared in B, B can also reference any data declared in A, as long as it doesn’t override the definition. Likewise C can reference data in both A and B. Therefore, PL/I ENTRY variables need to containcontext,[5]to provide procedure C with the addresses of the values of data in B and A at the time C was called.
|
https://en.wikipedia.org/wiki/Function_pointer
|
In amultithreadedcomputingenvironment,hazard pointersare one approach to solving the problems posed bydynamic memory managementof the nodes in alock-freedata structure. These problems generally arise only in environments that don't haveautomatic garbage collection.[1]
Any lock-free data structure that uses thecompare-and-swapprimitive must deal with theABA problem. For example, in a lock-free stack represented as an intrusively linked list, one thread may be attempting to pop an item from the front of the stack (A → B → C). It remembers the second-from-top value "B", and then performscompare_and_swap(target=&head,newvalue=B,expected=A). Unfortunately, in the middle of this operation, another thread may have done two pops and then pushed A back on top, resulting in the stack (A → C). The compare-and-swap succeeds in swapping `head` with `B`, and the result is that the stack now contains garbage (a pointer to the freed element "B").
Furthermore, any lock-free algorithm containing code of the form
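Roughly (a reconstruction; the names Node and nextNode are illustrative, while this->head and currentNode match the discussion below):

    Node *currentNode = this->head;      // assume that the load from this->head is atomic
    Node *nextNode = currentNode->next;  // currentNode may already have been freed by another thread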
suffers from another major problem, in the absence of automatic garbage collection. In between those two lines, it is possible that another thread may pop the node pointed to by this->head and deallocate it, meaning that the memory access through currentNode on the second line reads deallocated memory (which may in fact already be in use by some other thread for a completely different purpose).
Hazard pointers can be used to address both of these problems. In a hazard-pointer system, each thread keeps a list of hazard pointers indicating which nodes the thread is currently accessing. (In many systems this "list" is often limited to only one[1][2] or two elements.) Nodes on the hazard pointer list must not be modified or deallocated by any other thread.
Each reader thread owns a single-writer/multi-reader shared pointer called a "hazard pointer." When a reader thread assigns the address of a map to its hazard pointer, it is basically announcing to other threads (writers), "I am reading this map. You can replace it if you want, but don't change its contents and certainly keep your delete-ing hands off it."
When a thread wishes to remove a node, it places it on a list of nodes "to be freed later", but does not actually deallocate the node's memory until no other thread's hazard list contains the pointer. This manual garbage collection can be done by a dedicated garbage-collection thread (if the list "to be freed later" is shared by all the threads); alternatively, cleaning up the "to be freed" list can be done by each worker thread as part of an operation such as "pop" (in which case each worker thread can be responsible for its own "to be freed" list).
In 2002,Maged MichaelofIBMfiled an application for a U.S. patent on the hazard pointer technique,[3]but the application was abandoned in 2010.
Alternatives to hazard pointers includereference counting.[1]
|
https://en.wikipedia.org/wiki/Hazard_pointer
|
Incomputer programming, aniteratoris anobjectthat progressively provides access to each item of acollection, in order.[1][2][3]
A collection may provide multiple iterators via itsinterfacethat provide items in different orders, such as forwards and backwards.
An iterator is often implemented in terms of the structure underlying a collection implementation and is often tightlycoupledto the collection to enable the operational semantics of the iterator.
An iterator is behaviorally similar to adatabase cursor.
Iterators date to theCLUprogramming language in 1974.
An iterator provides access to an element of a collection (element access) and can change its internal state to provide access to the next element (element traversal).[4]It also provides for creation and initialization to a first element and indicates whether all elements have been traversed. In some programming contexts, an iterator provides additional functionality.
An iterator allows a consumer to process each element of a collection while isolating the consumer from the internal structure of the collection.[2]The collection can store elements in any manner while the consumer can access them as a sequence.
In object-oriented programming, an iterator class is usually designed in tight coordination with the corresponding collection class. Usually, the collection provides the methods for creating iterators.
Aloop counteris sometimes also referred to as a loop iterator. Aloop counter, however, only provides the traversal functionality and not the element access functionality.
One way of implementing an iterator is via a restricted form ofcoroutine, known as agenerator. By contrast with asubroutine, a generator coroutine canyieldvalues to its caller multiple times, instead of returning just once. Most iterators are naturally expressible as generators, but because generators preserve their local state between invocations, they're particularly well-suited for complicated, stateful iterators, such astree traversers. There are subtle differences and distinctions in the use of the terms "generator" and "iterator", which vary between authors and languages.[5]InPython, a generator is an iteratorconstructor: a function that returns an iterator. An example of a Python generator returning an iterator for theFibonacci numbersusing Python'syieldstatement follows:
Aninternal iteratoris ahigher-order function(often takinganonymous functions) that traverses a collection while applying a function to each element. For example, Python'smapfunction applies a caller-defined function to each element:
Some object-oriented languages such asC#,C++(later versions),Delphi(later versions),Go,Java(later versions),Lua,Perl,Python,Rubyprovide anintrinsicway of iterating through the elements of a collection without an explicit iterator. An iterator object may exist, but is not represented in the source code.[4][6]
An implicit iterator is often manifest in language syntax asforeach.
In Python, a collection object can be iterated directly:
In Ruby, iteration requires accessing an iterator property:
This iteration style is sometimes called "internal iteration" because its code fully executes within the context of the iterable object (that controls all aspects of iteration), and the programmer only provides the operation to execute at each step (using ananonymous function).
Languages that supportlist comprehensionsor similar constructs may also make use of implicit iterators during the construction of the result list, as in Python:
Sometimes the implicit hidden nature is only partial. TheC++language has a few function templates for implicit iteration, such asfor_each(). These functions still require explicit iterator objects as their initial input, but the subsequent iteration does not expose an iterator object to the user.
Iterators are a useful abstraction ofinput streams– they provide a potentially infinite iterable (but not necessarily indexable) object. Several languages, such as Perl and Python, implement streams as iterators. In Python, iterators are objects representing streams of data.[7]Alternative implementations of stream includedata-drivenlanguages, such asAWKandsed.
Instead of using an iterator, many languages allow the use of a subscript operator and aloop counterto access each element. Although indexing may be used with collections, the use of iterators may have advantages such as:[8]
The ability of a collection to be modified while iterating through its elements has become necessary in modernobject-oriented programming, where the interrelationships between objects and the effects of operations may not be obvious. By using an iterator one is isolated from these sorts of consequences. This assertion must however be taken with a grain of salt, because more often than not, for efficiency reasons, the iterator implementation is so tightly bound to the collection that it does preclude modification of the underlying collection without invalidating itself.
For collections that may move around their data in memory, the only way to not invalidate the iterator is, for the collection, to somehow keep track of all the currently alive iterators and update them on the fly. Since the number of iterators at a given time may be arbitrarily large in comparison to the size of the tied collection, updating them all will drastically impair the complexity guarantee on the collection's operations.
An alternative way to keep the number of updates bounded relative to the collection size would be to use a kind of handle mechanism, that is, a collection of indirect pointers to the collection's elements that must be updated with the collection, and to let the iterators point to these handles instead of directly to the data elements. But this approach will negatively impact the iterator's performance, since it must perform a double pointer dereference to access the actual data element. This is usually not desirable, because many algorithms using iterators invoke the data access operation more often than the advance method. It is therefore especially important to have iterators with very efficient data access.
All in all, this is always a trade-off between security (iterators remain always valid) and efficiency. Most of the time, the added security is not worth the efficiency price to pay for it. Using an alternative collection (for example a singly linked list instead of a vector) would be a better choice (globally more efficient) if the stability of the iterators is needed.
Iterators can be categorised according to their functionality. Here is a (non-exhaustive) list of iterator categories:[9][10]
Different languages or libraries used with these languages define iterator types. Some of them are[13]
Iterators in the.NET Framework(i.e. C#) are called "enumerators" and represented by theIEnumeratorinterface.[16]: 189–190, 344[17]: 53–54IEnumeratorprovides aMoveNext()method, which advances to the next element and indicates whether the end of the collection has been reached;[16]: 344[17]: 55–56[18]: 89aCurrentproperty, to obtain the value of the element currently being pointed at.[16]: 344[17]: 56[18]: 89and an optionalReset()method,[16]: 344to rewind the enumerator back to its initial position. The enumerator initially points to a special value before the first element, so a call toMoveNext()is required to begin iterating.
Enumerators are typically obtained by calling the GetEnumerator() method of an object implementing the IEnumerable interface.[17]: 54–56[18]: 54–56 Container classes typically implement this interface. However, the foreach statement in C# can operate on any object providing such a method, even if it does not implement IEnumerable (duck typing).[18]: 89 Both interfaces were expanded into generic versions in .NET 2.0.
The following shows a simple use of iterators in C# 2.0:
C# 2.0 also supportsgenerators: a method that is declared as returningIEnumerator(orIEnumerable), but uses the "yield return" statement to produce a sequence of elements instead of returning an object instance, will be transformed by the compiler into a new class implementing the appropriate interface.
TheC++language makes wide use of iterators in itsStandard Libraryand describes several categories of iterators differing in the repertoire of operations they allow. These includeforward iterators,bidirectional iterators, andrandom access iterators, in order of increasing possibilities. All of the standard container template types provide iterators of one of these categories. Iterators generalize pointers to elements of an array (which indeed can be used as iterators), and their syntax is designed to resemble that ofCpointer arithmetic, where the*and->operators are used to reference the element to which the iterator points and pointer arithmetic operators like++are used to modify iterators in the traversal of a container.
Traversal using iterators usually involves a single varying iterator, and two fixed iterators that serve to delimit a range to be traversed. The distance between the limiting iterators, in terms of the number of applications of the operator++needed to transform the lower limit into the upper one, equals the number of items in the designated range; the number of distinct iterator values involved is one more than that. By convention, the lower limiting iterator "points to" the first element in the range, while the upper limiting iterator does not point to any element in the range, but rather just beyond the end of the range.
For traversal of an entire container, thebegin()method provides the lower limit, andend()the upper limit. The latter does not reference any element of the container at all but is a valid iterator value that can be compared against.
The following example shows a typical use of an iterator.
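The original listing is not preserved; a minimal sketch:

    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<int> items = {1, 2, 3};

        for (std::vector<int>::const_iterator it = items.begin(); it != items.end(); ++it)
            std::cout << *it << '\n';  // dereference the iterator to access the element
    }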
Iterator types are separate from the container types they are used with, though the two are often used in concert. The category of the iterator (and thus the operations defined for it) usually depends on the type of container, with for instance arrays or vectors providing random access iterators, but sets (which use a linked structure as implementation) only providing bidirectional iterators. One container type can have more than one associated iterator type; for instance the std::vector<T> container type allows traversal either using (raw) pointers to its elements (of type T *), or values of a special type std::vector<T>::iterator, and yet another type is provided for "reverse iterators", whose operations are defined in such a way that an algorithm performing a usual (forward) traversal will actually do traversal in reverse order when called with reverse iterators. Most containers also provide a separate const_iterator type, for which operations that would allow changing the values pointed to are intentionally not defined.
Simple traversal of a container object or a range of its elements (including modification of those elements unless aconst_iteratoris used) can be done using iterators alone. But container types may also provide methods likeinsertorerasethat modify the structure of the container itself; these are methods of the container class, but in addition require one or more iterator values to specify the desired operation. While it is possible to have multiple iterators pointing into the same container simultaneously, structure-modifying operations may invalidate certain iterator values (the standard specifies for each case whether this may be so); using an invalidated iterator is an error that will lead to undefined behavior, and such errors need not be signaled by the run time system.
Implicit iteration is also partially supported by C++ through the use of standard function templates, such asstd::for_each(),std::copy()andstd::accumulate().
When used they must be initialized with existing iterators, usuallybeginandend, that define the range over which iteration occurs. But no explicit iterator object is subsequently exposed as the iteration proceeds. This example shows the use offor_each.
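A minimal sketch:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    void print(int x) { std::cout << x << '\n'; }

    int main()
    {
        std::vector<int> v = {1, 2, 3};
        std::for_each(v.begin(), v.end(), print);  // begin/end delimit the range; no iterator is exposed to print
    }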
The same can be achieved usingstd::copy, passing astd::ostream_iteratorvalue as third iterator:
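For example (a sketch):

    #include <algorithm>
    #include <iostream>
    #include <iterator>
    #include <vector>

    int main()
    {
        std::vector<int> v = {1, 2, 3};
        std::copy(v.begin(), v.end(), std::ostream_iterator<int>(std::cout, "\n"));
    }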
Since C++11, lambda function syntax can be used to specify the operation to be applied inline, avoiding the need to define a named function. Here is an example of for-each iteration using a lambda function:
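A minimal sketch:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<int> v = {1, 2, 3};
        std::for_each(v.begin(), v.end(),
                      [](int x) { std::cout << x << '\n'; });  // the operation is given inline as a lambda
    }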
Introduced in theJavaJDK 1.2 release, thejava.util.Iteratorinterface allows the iteration of container classes. EachIteratorprovides anext()andhasNext()method,[19]: 294–295and may optionally support aremove()[19]: 262, 266method. Iterators are created by the corresponding container class, typically by a method namediterator().[20][19]: 99[19]: 217
Thenext()method advances the iterator and returns the value pointed to by the iterator. The first element is obtained upon the first call tonext().[19]: 294–295To determine when all the elements in the container have been visited thehasNext()test method is used.[19]: 262The following example shows a simple use of iterators:
To show thathasNext()can be called repeatedly, we use it to insert commas between the elements but not after the last element.
This approach does not properly separate the advance operation from the actual data access. If the data element must be used more than once for each advance, it needs to be stored in a temporary variable. When an advance is needed without data access (i.e. to skip a given data element), the access is nonetheless performed, though the returned value is ignored in this case.
For collection types that support it, the remove() method of the iterator removes the most recently visited element from the container while keeping the iterator usable. Adding or removing elements by calling the methods of the container (even from the same thread) makes the iterator unusable, and an attempt to get the next element then throws an exception. An exception is also thrown if there are no more elements remaining (hasNext() has previously returned false).
Additionally, forjava.util.Listthere is ajava.util.ListIteratorwith a similar API but that allows forward and backward iteration, provides its current index in the list and allows setting of the list element at its position.
TheJ2SE5.0 release of Java introduced theIterableinterface to support an enhancedfor(foreach) loop for iterating over collections and arrays.Iterabledefines theiterator()method that returns anIterator.[19]: 266Using the enhancedforloop, the preceding example can be rewritten as
Some containers also use the older (since 1.0)Enumerationclass. It provideshasMoreElements()andnextElement()methods but has no methods to modify the container.
InScala, iterators have a rich set of methods similar to collections, and can be used directly in for loops. Indeed, both iterators and collections inherit from a common base trait -scala.collection.TraversableOnce. However, because of the rich set of methods available in the Scala collections library, such asmap,collect,filteretc., it is often not necessary to deal with iterators directly when programming in Scala.
Java iterators and collections can be automatically converted into Scala iterators and collections, respectively, simply by adding the single line
to the file. TheJavaConversionsobject provides implicit conversions to do this. Implicit conversions are a feature of Scala: methods that, when visible in the current scope, automatically insert calls to themselves into relevant expressions at the appropriate place to make them typecheck when they otherwise would not.
MATLABsupports both external and internal implicit iteration using either "native" arrays orcellarrays. In the case of external iteration where the onus is on the user to advance the traversal and request next elements, one can define a set of elements within an array storage structure and traverse the elements using thefor-loop construct. For example,
traverses an array of integers using theforkeyword.
In the case of internal iteration where the user can supply an operation to the iterator to perform over every element of a collection, many built-in operators and MATLAB functions are overloaded to execute over every element of an array and return a corresponding output array implicitly. Furthermore, thearrayfunandcellfunfunctions can be leveraged for performing custom or user defined operations over "native" arrays andcellarrays respectively. For example,
defines a primary functionsimpleFunthat implicitly applies custom subfunctionmyCustomFunto each element of an array using built-in functionarrayfun.
Alternatively, it may be desirable to abstract the mechanisms of the array storage container from the user by defining a custom object-oriented MATLAB implementation of the Iterator Pattern. Such an implementation supporting external iteration is demonstrated in MATLAB Central File Exchange itemDesign Pattern: Iterator (Behavioral). This is written in the new class-definition syntax introduced with MATLAB software version 7.6 (R2008a) and features a one-dimensionalcellarray realization of theList Abstract Data Type(ADT) as the mechanism for storing a heterogeneous (in data type) set of elements. It provides the functionality for explicit forwardListtraversal with thehasNext(),next()andreset()methods for use in awhile-loop.
PHP'sforeachloopwas introduced in version 4.0 and made compatible with objects as values in 4.0 Beta 4.[21]However, support for iterators was added in PHP 5 through the introduction of the internal[22]Traversableinterface.[23]The two main interfaces for implementation in PHP scripts that enable objects to be iterated via theforeachloop areIteratorandIteratorAggregate. The latter does not require the implementing class to declare all required methods, instead it implements anaccessormethod (getIterator) that returns an instance ofTraversable. TheStandard PHP Libraryprovides several classes to work with special iterators.[24]PHP also supportsGeneratorssince 5.5.[25]
The simplest implementation is to wrap an array; this can be useful for type hinting and information hiding.
All methods of the example class are used during the execution of a complete foreach loop (foreach ($iterator as $key => $current) {}). The iterator's methods are executed in the following order:
The next example illustrates a PHP class that implements theTraversableinterface, which could be wrapped in anIteratorIteratorclass to act upon the data before it is returned to theforeachloop. The usage together with theMYSQLI_USE_RESULTconstant allows PHP scripts to iterate result sets with billions of rows with very little memory usage. These features are not exclusive to PHP nor to its MySQL class implementations (e.g. thePDOStatementclass implements theTraversableinterface as well).
Iterators inPythonare a fundamental part of the language and in many cases go unseen as they are implicitly used in thefor(foreach) statement, inlist comprehensions, and ingenerator expressions. All of Python's standard built-incollectiontypes support iteration, as well as many classes that are part of the standard library. The following example shows typical implicit iteration over a sequence:
Python dictionaries (a form ofassociative array) can also be directly iterated over, when the dictionary keys are returned; or theitems()method of a dictionary can be iterated over where it yields corresponding key,value pairs as a tuple:
Iterators however can be used and defined explicitly. For any iterable sequence type or class, the built-in functioniter()is used to create an iterator object. The iterator object can then be iterated with thenext()function, which uses the__next__()method internally, which returns the next element in the container. (The previous statement applies to Python 3.x. In Python 2.x, thenext()method is equivalent.) AStopIterationexception will be raised when no more elements are left. The following example shows an equivalent iteration over a sequence using explicit iterators:
Any user-defined class can support standard iteration (either implicit or explicit) by defining an__iter__()method that returns an iterator object. The iterator object then needs to define a__next__()method that returns the next element.
Python'sgeneratorsimplement this iterationprotocol.
Iterators inRakuare a fundamental part of the language, although usually users do not have to care about iterators. Their usage is hidden behind iteration APIs such as theforstatement,map,grep, list indexing with.[$idx], etc.
The following example shows typical implicit iteration over a collection of values:
Raku hashes can also be directly iterated over; this yields key-valuePairobjects. Thekvmethod can be invoked on the hash to iterate over the key and values; thekeysmethod to iterate over the hash's keys; and thevaluesmethod to iterate over the hash's values.
Iterators however can be used and defined explicitly. For any iterable type, there are several methods that control different aspects of the iteration process. For example, theiteratormethod is supposed to return anIteratorobject, and thepull-onemethod is supposed to produce and return the next value if possible, or return the sentinel valueIterationEndif no more values could be produced. The following example shows an equivalent iteration over a collection using explicit iterators:
All iterable types in Raku compose theIterablerole,Iteratorrole, or both. TheIterableis quite simple and only requires theiteratorto be implemented by the composing class. TheIteratoris more complex and provides a series of methods such aspull-one, which allows for a finer operation of iteration in several contexts such as adding or eliminating items, or skipping over them to access other items. Thus, any user-defined class can support standard iteration by composing these roles and implementing theiteratorand/orpull-onemethods.
TheDNAclass represents a DNA strand and implements theiteratorby composing theIterablerole. The DNA strand is split into a group of trinucleotides when iterated over:
TheRepeaterclass composes both theIterableandIteratorroles:
Ruby implements iterators quite differently; all iterations are done by means of passing callback closures to container methods - this way Ruby not only implements basic iteration but also several patterns of iteration like function mapping, filters and reducing. Ruby also supports an alternative syntax for the basic iterating methodeach, the following three examples are equivalent:
...and...
or even shorter
Ruby can also iterate over fixed lists by usingEnumerators and either calling their#nextmethod or doing a for each on them, as above.
Rust makes use of external iterators throughout the standard library, including in itsforloop, which implicitly calls thenext()method of an iterator until it is consumed. The most basicforloop for example iterates over aRangetype:
Specifically, theforloop will call a value'sinto_iter()method, which returns an iterator that in turn yields the elements to the loop. Theforloop (or indeed, any method that consumes the iterator), proceeds until thenext()method returns aNonevalue (iterations yielding elements return aSome(T)value, whereTis the element type).
All collections provided by the standard library implement theIntoIteratortrait (meaning they define theinto_iter()method). Iterators themselves implement theIteratortrait, which requires defining thenext()method. Furthermore, any type implementingIteratoris automatically provided an implementation forIntoIteratorthat returns itself.
Iterators support various adapters (map(),filter(),skip(),take(), etc.) as methods provided automatically by theIteratortrait.
Users can create custom iterators by creating a type implementing theIteratortrait. Custom collections can implement theIntoIteratortrait and return an associated iterator type for their elements, enabling their use directly inforloops. Below, theFibonaccitype implements a custom, unbounded iterator:
|
https://en.wikipedia.org/wiki/Iterator
|
Incomputer programming, anopaque pointeris a special case of anopaque data type, adata typedeclared to be apointerto arecordordata structureof some unspecified type.
Opaque pointers are present in severalprogramming languagesincludingAda,C,C++,DandModula-2.
If the language is strongly typed, programs and procedures that have no other information about an opaque pointer type T can still declare variables, arrays, and record fields of type T, assign values of that type, and compare those values for equality. However, they will not be able to dereference such a pointer, and can only change the object's content by calling some procedure that has the missing information.
Opaque pointers are a way to hide theimplementationdetails of aninterfacefrom ordinary clients, so that theimplementationmay be changed without the need to recompile themodulesusing it. This benefits the programmer as well since a simple interface can be created, and most details can be hidden in another file.[1]This is important for providingbinary code compatibilitythrough different versions of ashared library, for example.
This technique is described in Design Patterns as the Bridge pattern. It is sometimes referred to as "handle classes",[2] the "Pimpl idiom" (for "pointer to implementation idiom"),[3] "Compiler firewall idiom",[4] "d-pointer" or "Cheshire Cat", especially among the C++ community.[2]
The type Handle is an opaque pointer to the real implementation, which is not defined in the specification. Note that the type is not only private (to forbid clients from accessing the type directly, allowing access only through the operations), but also limited (to prevent copying of the data structure, thus avoiding dangling references).
These types are sometimes called "Taft types"—named afterTucker Taft, the main designer of Ada 95—because they were introduced in the so-called Taft Amendment to Ada 83.[5]
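In C, the idiom might be sketched as follows; struct obj and the obj.h header are the names used in the discussion below, while the accessor functions (obj_create, obj_get_value, and so on) are only illustrative:

    /* obj.h -- public interface: clients see only an incomplete type */
    struct obj;                    /* the definition is hidden from clients */
    typedef struct obj obj;

    obj *obj_create(void);
    int  obj_get_value(obj const *o);
    void obj_set_value(obj *o, int value);
    void obj_destroy(obj *o);

    /* obj.c -- private implementation: the full definition lives only here */
    #include <stdlib.h>

    struct obj { int value; };

    obj *obj_create(void)                 { return calloc(1, sizeof(obj)); }
    int  obj_get_value(obj const *o)      { return o->value; }
    void obj_set_value(obj *o, int value) { o->value = value; }
    void obj_destroy(obj *o)              { free(o); }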
This example demonstrates a way to achieve the information hiding (encapsulation) aspect of object-oriented programming using the C language. If someone wanted to change the definition of struct obj, it would be unnecessary to recompile any other modules in the program that use the obj.h header file unless the API was also changed. Note that it may be desirable for the functions to check that the passed pointer is not NULL, but such checks have been omitted above for brevity.
The d-pointer pattern is one of the implementations of the opaque pointer. It is commonly used in C++ classes because of its advantages. A d-pointer is a private data member of the class that points to an instance of a structure. This method allows class declarations to omit private data members, except for the d-pointer itself.[6] As a result, private implementation details can change without altering the class's public header or breaking binary compatibility.
One side benefit is that compilations are faster because the header file changes less often. A possible disadvantage of the d-pointer pattern is indirect member access through the pointer (e.g., to an object in dynamic storage), which is sometimes slower than access to a plain, non-pointer member. The d-pointer is heavily used in the Qt[7] and KDE libraries.
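A minimal sketch of the pattern, using an illustrative Widget class whose private members live in a hypothetical WidgetPrivate structure:

    // widget.h -- public header: private data is reduced to a single d-pointer
    class WidgetPrivate;                 // forward declaration only

    class Widget
    {
    public:
        Widget();
        ~Widget();
        int value() const;
    private:
        WidgetPrivate *d;                // the d-pointer
    };

    // widget.cpp -- the real private members live behind the pointer
    class WidgetPrivate
    {
    public:
        int value = 0;
        // further private members can be added or removed here without
        // changing the public header or the size of Widget itself
    };

    Widget::Widget()  : d(new WidgetPrivate) {}
    Widget::~Widget() { delete d; }
    int Widget::value() const { return d->value; }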
|
https://en.wikipedia.org/wiki/Opaque_pointer
|
In someprogramming languages,constis atype qualifier(akeywordapplied to adata type) that indicates that the data is read-only. While this can be used to declareconstants,constin theC familyof languages differs from similar constructs in other languages in that it is part of thetype, and thus has complicated behavior when combined withpointers, references,composite data types, andtype-checking. In other languages, the data is not in a singlememory location, but copied atcompile timefor each use.[1]Languages which use it includeC,C++,D,JavaScript,Julia, andRust.
When applied in anobjectdeclaration,[a]it indicates that the object is aconstant: itsvaluemay not be changed, unlike avariable. This basic use – to declare constants – has parallels in many other languages.
However, unlike in other languages, in the C family of languages the const is part of the type, not part of the object. For example, in C, int const x = 1; declares an object x of type int const – the const is part of the type, as if it were parsed "(int const) x" – while in Ada, X : constant INTEGER := 1; declares a constant (a kind of object) X of type INTEGER: the constant is part of the object, but not part of the type.
This has two subtle results. Firstly, const can be applied to parts of a more complex type – for example, int const * const x; declares a constant pointer to a constant integer, while int const * x; declares a variable pointer to a constant integer, and int * const x; declares a constant pointer to a variable integer. Secondly, because const is part of the type, it must match as part of type-checking. For example, the following code is invalid:
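A sketch of such invalid code, using the names f and i referred to below (the exact declarations are illustrative):

    void f(int& x);        // f may modify its argument

    void caller()
    {
        int const i = 1;
        f(i);              // error: i is a constant integer, but f requires a modifiable one
    }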
because the argument to f must be a variable integer, but i is a constant integer. This matching is a form of program correctness, and is known as const-correctness. This allows a form of programming by contract, where functions specify as part of their type signature whether they modify their arguments or not, and whether their return value is modifiable or not. This type-checking is primarily of interest in pointers and references – not basic value types like integers – but also for composite data types or templated types such as containers. It is concealed by the fact that the const can often be omitted, due to type coercion (implicit type conversion) and C being call-by-value (C++ and D are either call-by-value or call-by-reference).
The idea of const-ness does not imply that the variable as it is stored incomputer memoryis unwritable. Rather,const-ness is acompile-timeconstruct that indicates what a programmershoulddo, not necessarily what theycando. Note, however, that in the case of predefined data (such aschar const *string literals), Cconstisoftenunwritable.
While a constant does not change its value while the program is running, an object declared const may indeed change its value while the program is running. A common example is read-only registers within embedded systems, such as the current state of a digital input. The data registers for digital inputs are often declared as both const and volatile: the content of these registers may change without the program doing anything (volatile), but it would be ill-formed for the program to attempt to write to them (const).
In addition, a (non-static) member-function can be declared asconst. In this case, thethispointerinside such a function is of typeobject_type const *rather than merely of typeobject_type *.[2]This means that non-const functions for this object cannot be called from inside such a function, nor canmember variablesbe modified. In C++, a member variable can be declared asmutable, indicating that this restriction does not apply to it. In some cases, this can be useful, for example withcaching,reference counting, anddata synchronization. In these cases, the logical meaning (state) of the object is unchanged, but the object is not physically constant since its bitwise representation may change.
In C, C++, and D, all data types, including those defined by the user, can be declaredconst, and const-correctness dictates that all variables or objects should be declared as such unless they need to be modified. Such proactive use ofconstmakes values "easier to understand, track, and reason about",[3]and it thus increases the readability and comprehensibility of code and makes working in teams and maintaining code simpler because it communicates information about a value's intended use. This can help thecompileras well as the developer when reasoning about code. It can also enable anoptimizing compilerto generate more efficient code.[4]
For simple non-pointer data types, applying theconstqualifier is straightforward. It can go on either side of some types for historical reasons (for example,const char foo = 'a';is equivalent tochar const foo = 'a';). On some implementations, usingconsttwice (for instance,const char constorchar const const) generates a warning but not an error.
For pointer and reference types, the meaning of const is more complicated – either the pointer itself, or the value being pointed to, or both, can be const. Further, the syntax can be confusing. A pointer can be declared as a const pointer to a writable value, a writable pointer to a const value, or a const pointer to a const value. A const pointer cannot be reassigned to point to a different object from the one it is initially assigned, but it can be used to modify the value that it points to (called the pointee).[5][6][7][8][9] Reference variables in C++ are an alternate syntax for const pointers. A pointer to a const object, on the other hand, can be reassigned to point to another memory location (which should be an object of the same type or of a convertible type), but it cannot be used to modify the memory that it is pointing to. A const pointer to a const object can also be declared, and can neither be used to modify the pointee nor be reassigned to point to another object. The following code illustrates these subtleties:
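A sketch along these lines, in which the function name Foo and the parameter names (ptr, ptrToConst, constPtr, constPtrToConst) are merely descriptive labels:

    void Foo(int*             ptr,
             int const*       ptrToConst,
             int* const       constPtr,
             int const* const constPtrToConst)
    {
        *ptr = 0;                    // OK: modifies the pointee
        ptr  = nullptr;              // OK: modifies the pointer

        *ptrToConst = 0;             // error: cannot modify the pointee
        ptrToConst  = nullptr;       // OK: modifies the pointer

        *constPtr = 0;               // OK: modifies the pointee
        constPtr  = nullptr;         // error: cannot modify the pointer

        *constPtrToConst = 0;        // error: cannot modify the pointee
        constPtrToConst  = nullptr;  // error: cannot modify the pointer
    }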
Following the usual C convention for declarations, declaration follows use, and the * in a pointer is written on the pointer, indicating dereferencing. For example, in the declaration int *ptr, the dereferenced form *ptr is an int, while the reference form ptr is a pointer to an int. Thus const modifies the name to its right. The C++ convention is instead to associate the * with the type, as in int* ptr, and read the const as modifying the type to the left. int const * ptrToConst can thus be read as "*ptrToConst is an int const" (the value is constant), or "ptrToConst is an int const *" (the pointer is a pointer to a constant integer). Thus:
Following the C++ convention of analyzing the type, not the value, a rule of thumb is to read the declaration from right to left. Thus, everything to the left of the star can be identified as the pointed-to type and everything to the right of the star as the pointer's properties. For instance, in our example above, int const * can be read as a writable pointer that refers to a non-writable integer, and int * const can be read as a non-writable pointer that refers to a writable integer.
A more generic rule that helps with complex declarations and definitions is to start at the identifier and read the declaration element by element, from right to left. Here is an example:
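For instance, the following declaration (the name ptr is arbitrary) can be decoded by reading its elements from right to left, as annotated:

    int const * const * ptr;   // reading right to left, starting at the name:
                                //   ptr        ... is a
                                //   *          ... pointer to a
                                //   const      ... constant
                                //   *          ... pointer to a
                                //   int const  ... constant int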
When reading to the left, it is important to read the elements from right to left. So an int const * becomes a pointer to a const int and not a const pointer to an int.
In some cases C/C++ allows the const keyword to be placed to the left of the type. Here are some examples:
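A few illustrative declarations, each noting the equivalent const-on-the-right spelling in its comment:

    const int answer = 42;          // same as: int const answer = 42;
    const char* message = "hi";     // same as: char const* message;  pointer to constant char
    const int* const p = &answer;   // same as: int const* const p;   const pointer to const int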
Although C/C++ allows such definitions (which closely match the English language when reading the definitions from left to right), the compiler still reads the definitions according to the abovementioned procedure: from right to left. But putting const before what must be constant quickly introduces mismatches between what you intend to write and what the compiler decides you wrote. Consider pointers to pointers:
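A sketch of the kind of mismatch meant here (the names are illustrative):

    int  value = 0;
    int* ptr_to_value = &value;

    const int** pp = nullptr;                    // const still binds to int: pointer to pointer to const int
    int const** same_type = pp;                  // exactly the same type, with const written on the right
    int* const* maybe_intended = &ptr_to_value;  // pointer to const pointer to int -- perhaps what was meant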
As a final note regarding pointer definitions: always write the pointer symbol (the *) as far as possible to the right. Attaching the pointer symbol to the type is tricky, as it strongly suggests a pointer type, which isn't the case. Here are some examples:
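For example (a, b, c, d and e are arbitrary names):

    int* a, b;      // declares a as "pointer to int" but b as a plain int
    int *c, *d;     // both c and d are pointers to int
    int* e;         // declaring one name per line avoids the ambiguity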
Bjarne Stroustrup's FAQ recommends only declaring one variable per line if using the C++ convention, to avoid this issue.[10]
The same considerations apply to defining references and rvalue references:
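A brief C++ sketch with arbitrary names:

    void references_demo()
    {
        int x = 0;
        int const& cref = x;   // reference to const int: x can be read but not modified through cref
        int&       ref  = x;   // reference to (modifiable) int
        int&&      temp = 1;   // rvalue reference bound to a temporary
        ref = cref + temp;     // uses all three; x becomes 1
    }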
More complicated declarations are encountered when using multidimensional arrays and references (or pointers) to pointers. Although it is sometimes argued[who?]that such declarations are confusing and error-prone and that they therefore should be avoided or be replaced by higher-level structures, the procedure described at the top of this section can always be used without introducing ambiguities or confusion.
const can be declared both on function parameters and on variables (static or automatic, including global or local). The interpretation varies between uses. A const static variable (global variable or static local variable) is a constant, and may be used for data like mathematical constants, such as double const PI = 3.14159 – realistically written with more digits – or overall compile-time parameters. A const automatic variable (non-static local variable) means that single assignment is happening, though a different value may be used each time, such as int const x_squared = x * x. A const parameter in pass-by-reference means that the referenced value is not modified – it is part of the contract – while a const parameter in pass-by-value (or the pointer itself, in pass-by-reference) does not add anything to the interface (as the value has been copied), but indicates that internally, the function does not modify the local copy of the parameter (it is a single assignment). For this reason, some favor using const in parameters only for pass-by-reference, where it changes the contract, but not for pass-by-value, where it exposes the implementation.
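A small sketch of these cases (the function names and the std::string parameter are illustrative only):

    #include <string>

    double const PI = 3.14159;                   // const static/global variable: a true constant

    void log_message(std::string const& text);   // pass by reference: promises not to modify the caller's string
    int  scale(int const factor);                // pass by value: const only constrains the function's local copy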
In order to take advantage of thedesign by contractapproach for user-defined types (structs and classes), which can have methods as well as member data, the programmer may tag instance methods asconstif they don't modify the object's data members.
Applying theconstqualifier to instance methods thus is an essential feature for const-correctness, and is not available in many otherobject-orientedlanguages such asJavaandC#or inMicrosoft'sC++/CLIorManaged Extensions for C++.
Whileconstmethods can be called byconstand non-constobjects alike, non-constmethods can only be invoked by non-constobjects.
Theconstmodifier on an instance method applies to the object pointed to by the "this" pointer, which is an implicit argument passed to all instance methods.
Thus havingconstmethods is a way to apply const-correctness to the implicit "this" pointer argument just like other arguments.
This example illustrates:
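A sketch consistent with the description below; only the class name C and the methods Set() and Get() are taken from that description, the rest is illustrative:

    class C
    {
        int i;
    public:
        int Get() const    // inside Get(), "this" has type C const *const
        { return i; }

        void Set(int j)    // inside Set(), "this" has type C *const
        { i = j; }
    };

    void User(C& nonConstC, C const& constC)
    {
        nonConstC.Set(10);       // fine: nonConstC is modifiable
        int a = constC.Get();    // fine: Get() is a const method
        // constC.Set(10);       // error: Set() is non-const but constC is const
        (void)a;
    }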
In the above code, the implicit "this" pointer to Set() has the type "C *const"; whereas the "this" pointer to Get() has the type "C const *const", indicating that the method cannot modify its object through the "this" pointer.
Often the programmer will supply both a const and a non-const method with the same name (but possibly quite different uses) in a class to accommodate both types of callers. Consider:
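One possible sketch of such a pair of overloads (the member data and index type are illustrative):

    class MyArray
    {
        int data[100] = {};
    public:
        int&       Get(int i)       { return data[i]; }   // chosen for non-const MyArray objects
        int const& Get(int i) const { return data[i]; }   // chosen for const MyArray objects
    };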
The const-ness of the calling object determines which version of MyArray::Get() will be invoked, and thus whether the caller is given a reference through which it can manipulate the private data in the object or may only observe it.
The two methods technically have different signatures because their "this" pointers have different types, allowing the compiler to choose the right one. (Returning aconstreference to anint, instead of merely returning theintby value, may be overkill in the second method, but the same technique can be used for arbitrary types, as in theStandard Template Library.)
There are several loopholes to pure const-correctness in C and C++. They exist primarily for compatibility with existing code.
The first, which applies only to C++, is the use ofconst_cast, which allows the programmer to strip theconstqualifier, making any object modifiable.
The necessity of stripping the qualifier arises when using existing code and libraries that cannot be modified but which are not const-correct. For instance, consider this code:
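A sketch of the situation, in which LibraryFunc stands for a hypothetical, non-const-correct library routine and CallerFunc for the const-correct caller:

    // Hypothetical library routine that is not const-correct: it never modifies
    // the buffer, but its parameter is not declared const.
    void LibraryFunc(int* ptr, int size);

    void CallerFunc(int const* ptr, int size)
    {
        // LibraryFunc(ptr, size);                  // error: int const* does not convert to int*
        LibraryFunc(const_cast<int*>(ptr), size);   // compiles: the const qualifier is stripped
    }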
However, any attempt to modify an object that is itself declared const by means of a const_cast results in undefined behavior according to the ISO C++ Standard.
In the example above, ifptrreferences a global, local, or member variable declared asconst, or an object allocated on the heap vianew int const, the code is only correct ifLibraryFuncreally does not modify the value pointed to byptr.
The C language needs a similar loophole because of a particular situation. Variables with static storage duration are allowed to be defined with an initial value, but the initializer can use only constants such as string literals and other literal values; it is not allowed to use non-constant elements such as variable names, regardless of whether those elements or the static-duration variable being declared are const. There is, however, a non-portable way to initialize a const variable that has static storage duration: by carefully constructing a typecast on the left-hand side of a later assignment, the const variable can be written to, effectively stripping away the const attribute and 'initializing' it with non-constant elements such as other const variables. Writing into a const variable this way may work as intended, but it causes undefined behavior and seriously contradicts const-correctness:
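A sketch of this C loophole (the variable names are illustrative):

    static int const a = 1;
    static int const b;            /* const object with static storage duration, no initializer */

    void init_b(void)
    {
        *(int *)&b = a;            /* typecast on the left-hand side strips const and "initializes"
                                      b from another const variable: undefined behavior           */
    }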
Another loophole[11] applies both to C and C++. Specifically, the languages dictate that member pointers and references are "shallow" with respect to the const-ness of their owners – that is, a containing object that is const has all const members, except that member pointees (and referees) are still mutable. To illustrate, consider this C++ code:
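A sketch of the situation, using the names S, Foo, and ptr referred to below; the member layout is illustrative:

    struct S
    {
        int  val;
        int* ptr;
    };

    void Foo(S const& s)
    {
        int i = 42;
        // s.val = i;    // error: val is const because s is const
        // s.ptr = &i;   // error: the pointer member itself is const
        *s.ptr = i;      // allowed: the pointee is not const ("shallow" const-ness)
    }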
Although the object s passed to Foo() is constant, which makes all of its members constant, the pointee accessible through s.ptr is still modifiable, though this may not be desirable from the standpoint of const-correctness because s might solely own the pointee.
For this reason, Meyers argues that the default for member pointers and references should be "deep"const-ness, which could be overridden by amutablequalifier when the pointee is not owned by the container, but this strategy would create compatibility issues with existing code.
Thus, for historical reasons[citation needed], this loophole remains open in C and C++.
The latter loophole can be closed by using a class to hide the pointer behind aconst-correct interface, but such classes either do not support the usual copy semantics from aconstobject (implying that the containing class cannot be copied by the usual semantics either) or allow other loopholes by permitting the stripping ofconst-ness through inadvertent or intentional copying.
Finally, several functions in theC standard libraryviolate const-correctness beforeC23, as they accept aconstpointer to a character string and return a non-constpointer to a part of the same string.strstrandstrchrare among these functions.
Some implementations of the C++ standard library, such as Microsoft's[12]try to close this loophole by providing twooverloadedversions of some functions: a "const" version and a "non-const" version.
The use of the type system to express constancy leads to various complexities and problems, and has accordingly been criticized and not adopted outside the narrow C family of C, C++, and D. Java and C#, which are heavily influenced by C and C++, both explicitly rejectedconst-style type qualifiers, instead expressing constancy by keywords that apply to the identifier (finalin Java,constandreadonlyin C#). Even within C and C++, the use ofconstvaries significantly, with some projects and organizations using it consistently, and others avoiding it.
Theconsttype qualifier causes difficulties when the logic of a function is agnostic to whether its input is constant or not, but returns a value which should be of the same qualified type as an input. In other words, for these functions, if the input is constant (const-qualified), the return value should be as well, but if the input is variable (notconst-qualified), the return value should be as well. Because thetype signatureof these functions differs, it requires two functions (or potentially more, in case of multiple inputs) with the same logic – a form ofgeneric programming.
This problem arises even for simple functions in the C standard library, notably strchr; this observation is credited by Ritchie to Tom Plum in the mid-1980s.[13] The strchr function locates a character in a string; formally, it returns a pointer to the first occurrence of the character c in the string s, and in classic (K&R) C its prototype is:
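Roughly, the pre-const declaration can be sketched as:

    char *strchr(char *s, int c);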
The strchr function does not modify the input string, but the return value is often used by the caller to modify the string, such as:
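For instance, a caller might split a string in place through the returned pointer; a small C sketch (the buffer contents and function name are illustrative):

    #include <string.h>

    void split_key_value(void)
    {
        char buffer[] = "key=value";
        char *p = strchr(buffer, '=');   /* locate the separator                            */
        if (p != NULL)
            *p = '\0';                   /* modify the string through the returned pointer  */
    }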
Thus, on the one hand, the input string can be const (since it is not modified by the function), and if the input string is const the return value should be as well – most simply because it might return exactly the input pointer, if the first character is a match – but, on the other hand, the return value should not be const if the original string was not const, since the caller may wish to use the pointer to modify the original string.
In C++ this is done via function overloading, typically implemented via a template, resulting in two functions, so that the return value has the same const-qualified type as the input:[b]
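The resulting pair of declarations, essentially as found in the C++ standard library header <cstring>:

    char const* strchr(char const* s, int c);   // const in, const out
    char*       strchr(char*       s, int c);   // non-const in, non-const out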
These can in turn be defined by a template:
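One way such a template might be written (strchr_impl is a hypothetical name; the real library overloads need not be implemented this way):

    // CharT is deduced as either "char" or "char const", so the return type
    // automatically carries the same const qualification as the argument.
    template <typename CharT>
    CharT* strchr_impl(CharT* s, int c)
    {
        for (;; ++s) {
            if (*s == static_cast<char>(c))
                return s;              // also matches the terminating '\0' when c == 0
            if (*s == '\0')
                return nullptr;
        }
    }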
In D this is handled via theinoutkeyword, which acts as a wildcard for const, immutable, or unqualified (variable), yielding:[14][c]
However, in C neither of these is possible[d] since C does not have function overloading; instead, this is handled by having a single function where the input is constant but the output is writable:
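The C declaration (before C23) is essentially:

    char *strchr(const char *s, int c);   /* const input, but writable (non-const) return value */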
This allows idiomatic C code, but it strips the const qualifier if the input actually was const-qualified, violating type safety. This solution was proposed by Ritchie and subsequently adopted. This difference is one of the failures of compatibility between C and C++.
SinceC23, this problem is solved with the use of generic functions.strchrand the other functions affected by the issue will return aconstpointer if one was passed to them and an unqualified pointer if an unqualified pointer was passed to them.[15]
In Version 2 of theD programming language, two keywords relating to const exist.[16]Theimmutablekeyword denotes data that cannot be modified through any reference.
Theconstkeyword denotes a non-mutable view of mutable data.
Unlike C++const, Dconstandimmutableare "deep" ortransitive, and anything reachable through aconstorimmutableobject isconstorimmutablerespectively.
Example of const vs. immutable in D
Example of transitive or deep const in D
constwas introduced byBjarne StroustrupinC with Classes, the predecessor toC++, in 1981, and was originally calledreadonly.[17][18]As to motivation, Stroustrup writes:[18]
The first use, as a scoped and typed alternative to macros, was analogously fulfilled for function-like macros via theinlinekeyword. Constant pointers, and the* constnotation, were suggested by Dennis Ritchie and so adopted.[18]
constwas then adopted in C as part of standardization, and appears inC89(and subsequent versions) along with the other type qualifier,volatile.[19]A further qualifier,noalias, was suggested at the December 1987 meeting of the X3J11 committee, but was rejected; its goal was ultimately fulfilled by therestrictkeyword inC99. Ritchie was not very supportive of these additions, arguing that they did not "carry their weight", but ultimately did not argue for their removal from the standard.[20]
D subsequently inheritedconstfrom C++, where it is known as atype constructor(nottype qualifier) and added two further type constructors,immutableandinout, to handle related use cases.[e]
Other languages do not follow C/C++ in having constancy part of the type, though they often have superficially similar constructs and may use theconstkeyword. Typically this is only used for constants (constant objects).
C# has aconstkeyword, but with radically different and simpler semantics: it means a compile-time constant, and is not part of the type.
Nimhas aconstkeyword similar to that of C#: it also declares a compile-time constant rather than forming part of the type. However, in Nim, a constant can be declared from any expression that can be evaluated at compile time.[21]In C#, only C# built-in types can be declared asconst; user-defined types, including classes, structs, and arrays, cannot beconst.[22]
Java does not haveconst– it instead hasfinal, which can be applied to local "variable" declarations and applies to theidentifier, not the type. It has a different object-oriented use for object members, which is the origin of the name.
The Java language specification regardsconstas a reserved keyword – i.e., one that cannot be used as variable identifier – but assigns no semantics to it: it is areserved word(it cannot be used in identifiers) but not akeyword(it has no special meaning). The keyword was included as a means for Java compilers to detect and warn about the incorrect usage of C++ keywords.[23]An enhancement request ticket for implementingconstcorrectness exists in theJava Community Process, but was closed in 2005 on the basis that it was impossible to implement in a backwards-compatible fashion.[24]
The contemporaryAda 83independently had the notion of a constant object and aconstantkeyword,[25][f]withinput parametersand loop parameters being implicitly constant. Here theconstantis a property of the object, not of the type.
JavaScripthas aconstdeclaration that defines ablock-scopedvariable that cannot be reassigned nor redeclared. It defines a read-only reference to a variable that cannot be redefined, but in some situations the value of the variable itself may potentially change, such as if the variable refers to an object and a property of it is altered.[26]
|
https://en.wikipedia.org/wiki/Pointee
|
Incomputer science,pointer swizzlingis the conversion of references based on name orpositioninto directpointerreferences (memory addresses). It is typically performed duringdeserializationorloadingof a relocatable object from a disk file, such as anexecutable fileor pointer-baseddata structure.
The reverse operation, replacing memory pointers with position-independent symbols or positions, is sometimes referred to asunswizzling, and is performed duringserialization(saving). Alternatively, both operations can also be referred to as swizzling.
It is easy to create alinked listdata structure using elements like this:
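For example, a node might be declared like this (the field names are illustrative):

    struct node
    {
        int          data;
        struct node *next;    /* direct pointer (memory address) to the next element */
    };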
But saving the list to a file and then reloading it will (on most operating systems) break every link and render the list useless because the nodes will almost never be loaded into the same memory locations. One way to usefully save and retrieve the list is to assign a unique id number to each node and thenunswizzlethe pointers by turning them into a field indicating the id number of the next node:
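A sketch of the corresponding on-disk record, keeping the id_number_of_next_node field named in the text and adding an illustrative id for the node itself:

    struct node_unswizzled
    {
        int data;
        int id;                        /* this node's own unique number                        */
        int id_number_of_next_node;    /* position-independent reference replacing the pointer */
    };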
Records like these can be saved to a file in any order and reloaded without breaking the list. Other options include saving the file offset of the next node or a number indicating its position in the sequence of saved records, or simply saving the nodes in-order to the file.
After loading such a list, finding a node based on its id number is cumbersome and inefficient (a serial search), whereas traversal via the original "next" pointers was very fast. To convert the list back to its original form, or swizzle the pointers, requires finding the address of each node and turning the id_number_of_next_node fields back into direct pointers to the right node.
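A sketch of such a swizzling pass, assuming the nodes have been loaded into a vector and their saved id numbers are available (all names here are illustrative):

    #include <cstddef>
    #include <unordered_map>
    #include <vector>

    struct Node { int id; int data; Node* next; };

    // Rebuild direct pointers from the saved id numbers (0 meaning "no next node").
    void swizzle(std::vector<Node>& nodes, std::vector<int> const& id_of_next)
    {
        std::unordered_map<int, Node*> address_of;           // id -> address after loading
        for (Node& n : nodes)
            address_of[n.id] = &n;

        for (std::size_t i = 0; i < nodes.size(); ++i)
            nodes[i].next = id_of_next[i] != 0 ? address_of[id_of_next[i]] : nullptr;
    }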
There are a potentially unlimited number of forms into which a pointer can be unswizzled, but some of the most popular include:
Swizzling in the general case can be complicated. The referencegraphof pointers might contain an arbitrary number ofcycles; this complicates maintaining a mapping from the old unswizzled values to the new addresses.Associative arraysare useful for maintaining the mapping, while algorithms such asbreadth-first searchhelp to traverse the graph, although both of these require extra storage. Variousserializationlibrariesprovide general swizzling systems. In many cases, however, swizzling can be performed with simplifying assumptions, such as atreeorliststructure of references.
The different types of swizzling are:
For security, unswizzling and swizzling must be implemented with great caution. In particular, an attacker's presentation of a specially crafted file may allow access to addresses outside of the expected and proper bounds. In systems with weak memory protection this can lead to exposure of confidential data or modification of code likely to be executed. If the system does not implement guards against execution of data the system may be severely compromised by the installation of various kinds ofmalware.
Methods of protection include verifications prior to releasing the data to an application:
|
https://en.wikipedia.org/wiki/Pointer_swizzling
|
Incomputer programming, areferenceis a value that enables a program to indirectly access a particulardatum, such as avariable's value or arecord, in thecomputer'smemoryor in some otherstorage device. The reference is said toreferto the datum, and accessing the datum is calleddereferencingthe reference. A reference is distinct from the datum itself.
A reference is anabstract data typeand may be implemented in many ways. Typically, a reference refers to data stored in memory on a given system, and its internal value is thememory addressof the data, i.e. a reference is implemented as apointer. For this reason a reference is often said to "point to" the data. Other implementations include an offset (difference) between the datum's address and some fixed "base" address, anindex, oridentifierused in alookupoperation into anarrayortable, an operating systemhandle, aphysical addresson a storage device, or a network address such as aURL.
A referenceRis a value that admits one operation,dereference(R), which yields a value. Usually the reference is typed so that it returns values of a specific type, e.g.:[1][2]
Often the reference also admits an assignment operationstore(R,x), meaning it is anabstract variable.[1]
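In C++, for example, an ordinary reference supports both operations; a brief sketch:

    void demo()
    {
        int datum = 10;
        int& r = datum;    // r is a reference to datum

        int a = r;         // dereference(r): yields the value 10
        r = 20;            // store(r, 20): datum now holds 20
        (void)a;
    }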
References are widely used inprogramming, especially to efficiently pass large or mutable data asargumentstoprocedures, or to share such data among various uses. In particular, a reference may point to a variable or record that contains references to other data. This idea is the basis ofindirect addressingand of manylinked data structures, such aslinked lists. References increase flexibility in where objects can be stored, how they are allocated, and how they are passed between areas ofcode. As long as one can access a reference to the data, one can access the data through it, and the data itself need not be moved. They also make sharing of data between different code areas easier; each keeps a reference to it.
References can cause significant complexity in a program, partially due to the possibility ofdanglingandwild referencesand partially because thetopologyof data with references is adirected graph, whose analysis can be quite complicated. Nonetheless, references are still simpler to analyze thanpointersdue to the absence ofpointer arithmetic.
The mechanism of references, if varying in implementation, is a fundamental programming language feature common to nearly all modern programming languages. Even some languages that support no direct use of references have some internal or implicit use. For example, thecall by referencecalling convention can be implemented with either explicit or implicit use of references.
Pointersare the most primitive type of reference. Due to their intimate relationship with the underlying hardware, they are one of the most powerful and efficient types of references. However, also due to this relationship, pointers require a strong understanding by the programmer of the details of memory architecture. Because pointers store a memory location's address, instead of a value directly, inappropriate use of pointers can lead toundefined behaviorin a program, particularly due todangling pointersorwild pointers.Smart pointersareopaque data structuresthat act like pointers but can only be accessed through particular methods.
Ahandleis an abstract reference, and may be represented in various ways. A common example arefile handles(the FILE data structure in theC standard I/O library), used to abstract file content. It usually represents both the file itself, as when requesting alockon the file, and a specific position within the file's content, as when reading a file.
Indistributed computing, the reference may contain more than an address or identifier; it may also include an embedded specification of the network protocols used to locate and access the referenced object, the way information is encoded or serialized. Thus, for example, aWSDLdescription of a remote web service can be viewed as a form of reference; it includes a complete specification of how to locate and bind to a particularweb service. A reference to alive distributed objectis another example: it is a complete specification for how to construct a small software component called aproxythat will subsequently engage in a peer-to-peer interaction, and through which the local machine may gain access to data that is replicated or exists only as a weakly consistent message stream. In all these cases, the reference includes the full set of instructions, or a recipe, for how to access the data; in this sense, it serves the same purpose as an identifier or address in memory.
If we have a set of keysKand a set of data objectsD, any well-defined (single-valued) function fromKtoD∪ {null} defines a type of reference, wherenullis the image of a key not referring to anything meaningful.
An alternative representation of such a function is a directed graph called areachability graph. Here, each datum is represented by a vertex and there is an edge fromutovif the datum inurefers to the datum inv. The maximumout-degreeis one. These graphs are valuable ingarbage collection, where they can be used to separate accessible frominaccessible objects.
In many data structures, large, complex objects are composed of smaller objects. These objects are typically stored in one of two ways:
Internal storage is usually more efficient, because there is a space cost for the references anddynamic allocationmetadata, and a time cost associated with dereferencing a reference and with allocating the memory for the smaller objects. Internal storage also enhanceslocality of referenceby keeping different parts of the same large object close together in memory. However, there are a variety of situations in which external storage is preferred:
Some languages, such asJava,Smalltalk,Python, andScheme, do not support internal storage. In these languages, all objects are uniformly accessed through references.
Inassembly language, it is typical to express references using either raw memory addresses or indexes into tables. These work, but are somewhat tricky to use, because an address tells you nothing about the value it points to, not even how large it is or how to interpret it; such information is encoded in the program logic. The result is that misinterpretations can occur in incorrect programs, causing bewildering errors.
One of the earliest opaque references was that of theLisplanguagecons cell, which is simply arecordcontaining two references to other Lisp objects, including possibly other cons cells. This simple structure is most commonly used to build singlylinked lists, but can also be used to build simplebinary treesand so-called "dotted lists", which terminate not with a null reference but a value.
The pointer, as found in C, is still one of the most popular types of references today. It is similar to the assembly representation of a raw address, except that it carries a static datatype which can be used at compile time to ensure that the data it refers to is not misinterpreted. However, because C has a weak type system which can be violated using casts (explicit conversions between various pointer types and between pointer types and integers), misinterpretation is still possible, if more difficult. Its successor C++ tried to increase type safety of pointers with new cast operators, a reference type &, and smart pointers in its standard library, but still retained the ability to circumvent these safety mechanisms for compatibility.
Fortran does not have an explicit representation of references, but does use them implicitly in itscall-by-referencecalling semantics. AFortranreference is best thought of as analiasof another object, such as a scalar variable or a row or column of an array. There is no syntax to dereference the reference or manipulate the contents of the referent directly. Fortran references can be null. As in other languages, these references facilitate the processing of dynamic structures, such as linked lists, queues, and trees.
A number of object-oriented languages such asEiffel,Java,C#, andVisual Basichave adopted a much more opaque type of reference, usually referred to as simply areference. These references have types like C pointers indicating how to interpret the data they reference, but they are typesafe in that they cannot be interpreted as a raw address and unsafe conversions are not permitted. References are extensively used to access andassignobjects. References are also used in function/methodcalls or message passing, andreference countsare frequently used to performgarbage collectionof unused objects.
InStandard ML,OCaml, and many other functional languages, most values are persistent: they cannot be modified by assignment. Assignable "reference cells" providemutable variables, data that can be modified. Such reference cells can hold any value, and so are given thepolymorphictypeα ref, whereαis to be replaced with the type of value pointed to. These mutable references can be pointed to different objects over their lifetime. For example, this permits building of circular data structures. The reference cell is functionally equivalent to a mutable array of length 1.
To preserve safety and efficient implementations, references cannot betype-castin ML, nor can pointer arithmetic be performed. In the functional paradigm, many structures that would be represented using pointers in a language like C are represented using other facilities, such as the powerfulalgebraic datatypemechanism. The programmer is then able to enjoy certain properties (such as the guarantee of immutability) while programming, even though the compiler often uses machine pointers "under the hood".
Perl supports hard references, which function similarly to those in other languages, and symbolic references, which are just string values that contain the names of variables. When a value that is not a hard reference is dereferenced, Perl treats it as a symbolic reference and yields the variable with the name given by the value.[3] PHP has a similar feature in the form of its $$var syntax.[4]
|
https://en.wikipedia.org/wiki/Reference_(computer_science)
|
Incomputer science, atagged pointeris apointer(concretely amemory address) with additional data associated with it, such as anindirection bitorreference count. This additional data is often "folded" into the pointer, meaning stored inline in the data representing the address, taking advantage of certain properties of memory addressing. The name comes from "tagged architecture" systems, which reserved bits at the hardware level to indicate the significance of each word; the additional data is called a "tag" or "tags", though strictly speaking "tag" refers to data specifying atype,not other data; however, the usage "tagged pointer" is ubiquitous.
There are various techniques for folding tags into a pointer.[1][unreliable source?]
Most architectures arebyte-addressable(the smallest addressable unit is a byte), but certain types of data will often bealignedto the size of the data, often awordor multiple thereof. This discrepancy leaves a few of theleast significant bitsof the pointer unused, which can be used for tags – most often as abit field(each bit a separate tag) – as long as code that uses the pointermasks outthese bits before accessing memory. E.g., on a32-bitarchitecture (for both addresses and word size), a word is 32 bits = 4 bytes, so word-aligned addresses are always a multiple of 4, hence end in 00, leaving the last 2 bits available; while on a64-bitarchitecture, a word is 64 bits = 8 bytes, so word-aligned addresses end in 000, leaving the last 3 bits available. In cases where data is aligned at a multiple of word size, further bits are available. In case ofword-addressablearchitectures, word-aligned data does not leave any bits available, as there is no discrepancy between alignment and addressing, but data aligned at a multiple of word size does.
Conversely, in some operating systems,virtual addressesare narrower than the overall architecture width, which leaves themost significant bitsavailable for tags; this can be combined with the previous technique in case of aligned addresses. This is particularly the case on 64-bit architectures, as 64 bits of address space are far above the data requirements of all but the largest applications, and thus manypractical 64-bit processorshave narrower addresses. Note that the virtual address width may be narrower than thephysical addresswidth, which in turn may be narrower than the architecture width; for tagging of pointers inuser space, the virtual address space provided by the operating system (in turn provided by thememory management unit) is the relevant width. In fact, some processors specifically forbid use of such tagged pointers at the processor level, notablyx86-64, which requires the use ofcanonical form addressesby the operating system, with most significant bits all 0s or all 1s.
Lastly, thevirtual memorysystem in most modernoperating systemsreserves a block of logical memory around address 0 as unusable. This means that, for example, a pointer to 0 is never a valid pointer and can be used as a specialnull pointervalue. Unlike the previously mentioned techniques, this only allows a single special pointer value, not extra data for pointers generally.
One of the earliest examples of hardware support for tagged pointers in a commercial platform was theIBM System/38.[2]IBM later added tagged pointer support to thePowerPCarchitecture to support theIBM ioperating system, which is an evolution of the System/38 platform.[3]
A significant example of the use of tagged pointers is the Objective-C runtime on iOS 7 on ARM64, notably used on the iPhone 5S. In iOS 7, virtual addresses contain only 33 bits of address information but are 64 bits long, leaving 31 bits for tags. Objective-C class pointers are 8-byte aligned, freeing up an additional 3 bits of address space, and the tag fields are used for many purposes, such as storing a reference count and whether the object has a destructor.[4][5]
Early versions of the classic Mac OS used tagged addresses called Handles to store references to data objects. The high bits of the address indicated whether the data object was locked, purgeable, and/or originated from a resource file, respectively. This caused compatibility problems when Mac OS addressing advanced from 24 bits to 32 bits in System 7.[6]
Use of zero to represent a null pointer is extremely common, with many programming languages (such asAda) explicitly relying on this behavior. In theory, other values in an operating system-reserved block of logical memory could be used to tag conditions other than a null pointer, but these uses appear to be rare, perhaps because they are at bestnon-portable. It is generally accepted practice in software design that if a special pointer value distinct from null (such as asentinelin certaindata structures) is needed, the programmer should explicitly provide for it.
Taking advantage of the alignment of pointers provides more flexibility than null pointers/sentinels because it allows pointers to be tagged with information about the type of data pointed to, conditions under which it may be accessed, or other similar information about the pointer's use. This information can be provided along with every valid pointer. In contrast, null pointers/sentinels provide only a finite number of tagged values distinct from valid pointers.
In atagged architecture, a number of bits in every word of memory are reserved to act as a tag. Tagged architectures, such as theLisp machines, often have hardware support for interpreting and processing tagged pointers.
GNU libcmalloc()provides 8-byte aligned memory addresses for 32-bit platforms, and 16-byte alignment for 64-bit platforms.[7]Larger alignment values can be obtained withposix_memalign().[8]
In the following C code, the value of zero is used to indicate a null pointer:
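A sketch of this in C (the node type and function are illustrative):

    #include <stddef.h>

    struct node;                      /* some pointed-to type                            */
    struct node *head = NULL;         /* the value zero (NULL) means "refers to nothing" */

    int list_is_empty(void)
    {
        return head == NULL;          /* compare against the special null value          */
    }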
Here, the programmer has provided a global variable, whose address is then used as a sentinel:
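A sketch of this technique (the names sentinel_storage and SENTINEL are illustrative):

    struct node { int data; struct node *next; };

    /* A global object whose address acts as the sentinel: it is a valid, unique
       address that can never equal NULL or the address of a real payload node. */
    static struct node sentinel_storage;
    static struct node *const SENTINEL = &sentinel_storage;

    int is_sentinel(struct node const *p)
    {
        return p == SENTINEL;
    }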
Assume we have a data structure table_entry that is always aligned to a 16-byte boundary. In other words, the least significant 4 bits of a table entry's address are always 0 (2^4 = 16). We could use these 4 bits to mark the table entry with extra information. For example, bit 0 might mean read-only, bit 1 might mean dirty (the table entry needs to be updated), and so on.
If pointers are 16-bit values, then:
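A sketch of how such tag bits might be packed and masked off before use (the macro and function names are illustrative):

    #include <stdint.h>

    #define TAG_READ_ONLY 0x1u   /* bit 0: entry is read-only                        */
    #define TAG_DIRTY     0x2u   /* bit 1: entry needs to be written back            */
    #define TAG_MASK      0xFu   /* 16-byte alignment leaves the low 4 bits for tags */

    struct table_entry;          /* entries are assumed to be 16-byte aligned        */

    static inline struct table_entry *entry_address(uint16_t tagged)
    {
        return (struct table_entry *)(uintptr_t)(tagged & ~TAG_MASK);   /* strip tags */
    }

    static inline unsigned entry_tags(uint16_t tagged)
    {
        return tagged & TAG_MASK;                                       /* keep tags  */
    }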
The major advantage of tagged pointers is that they take up less space than a pointer along with a separate tag field. This can be especially important when a pointer is a return value from afunction. It can also be important in large tables of pointers.
A more subtle advantage is that by storing a tag in the same place as the pointer, it is often possible to guarantee theatomicityof an operation that updates both the pointer and its tag without externalsynchronizationmechanisms.[further explanation needed]This can be an extremely large performance gain, especially in operating systems.
Tagged pointers have some of the same difficulties asxor linked lists, although to a lesser extent. For example, not alldebuggerswill be able to properly follow tagged pointers; however, this is not an issue for a debugger that is designed with tagged pointers in mind.
The use of zero to represent a null pointer does not suffer from these disadvantages: it is pervasive, most programming languages treat zero as a special null value, and it has thoroughly proven its robustness. An exception is the way that zero participates inoverload resolutionin C++, where zero is treated as an integer rather than a pointer; for this reason the special valuenullptris preferred over the integer zero. However, with tagged pointers zeros are usually not used to represent null pointers.
|
https://en.wikipedia.org/wiki/Tagged_pointer
|
Incomputer programming, avariableis an abstract storage location paired with an associatedsymbolic name, which contains some known or unknown quantity ofdataorobjectreferred to as avalue; or in simpler terms, a variable is a named container for a particular set of bits ortype of data(likeinteger,float,string, etc...).[1]A variable can eventually be associated with or identified by amemory address. The variable name is the usual way toreferencethe stored value, in addition to referring to the variable itself, depending on the context. This separation of name and content allows the name to be used independently of the exact information it represents. The identifier in computersource codecan beboundto avalueduringrun time, and the value of the variable may thus change during the course ofprogram execution.[2][3][4][5]
Variables in programming may not directly correspond to the concept ofvariables in mathematics. The latter isabstract, having no reference to a physical object such as storage location. The value of a computing variable is not necessarily part of anequationorformulaas in mathematics. Variables incomputer programmingare frequently given long names to make them relatively descriptive of their use, whereas variables in mathematics often have terse, one- or two-character names for brevity in transcription and manipulation.
A variable's storage location may be referenced by several different identifiers, a situation known asaliasing. Assigning a value to the variable using one of the identifiers will change the value that can be accessed through the other identifiers.
Compilershave to replace variables' symbolic names with the actual locations of the data. While a variable's name, type, and location often remain fixed, the data stored in the location may be changed during program execution.
Inimperativeprogramming languages, values can generally beaccessedorchangedat any time. Inpurefunctionalandlogic languages, variables areboundto expressions and keep a single value during their entirelifetimedue to the requirements ofreferential transparency. In imperative languages, the same behavior is exhibited by (named)constants(symbolic constants), which are typically contrasted with (normal) variables.
Depending on thetype systemof a programming language, variables may only be able to store a specifieddata type(e.g.integerorstring). Alternatively, a datatype may be associated only with the current value, allowing a single variable to store anything supported by the programming language. Variables are the containers for storing the values.
Variables and scope:
An identifier referencing a variable can be used to access the variable in order to read out the value, or alter the value, or edit otherattributesof the variable, such as access permission,locks,semaphores, etc.
For instance, a variable might be referenced by the identifier "total_count" and the variable can contain the number 1956. If the same variable is referenced by the identifier "r" as well, and if using this identifier "r", the value of the variable is altered to 2009, then reading the value using the identifier "total_count" will yield a result of 2009 and not 1956.
If a variable is only referenced by a single identifier, that identifier can simply be calledthe name of the variable; otherwise, we can speak of it asone of the names of the variable. For instance, in the previous example the identifier "total_count" is the name of the variable in question, and "r" is another name of the same variable.
Thescopeof a variable describes where in a program's text the variable may be used, while theextent(also calledlifetime) of a variable describes when in a program's execution the variable has a (meaningful) value. The scope of a variable affects its extent. The scope of a variable is actually a property of the name of the variable, and the extent is a property of the storage location of the variable. These should not be confused withcontext(also calledenvironment), which is a property of the program, and varies by point in the program's text or execution—seescope: an overview. Further,object lifetimemay coincide with variable lifetime, but in many cases is not tied to it.
Scopeis an important part of thename resolutionof a variable. Most languages define a specificscopefor each variable (as well as any other named entity), which may differ within a given program. The scope of a variable is the portion of the program's text for which the variable's name has meaning and for which the variable is said to be "visible". Entrance into that scope typically begins a variable's lifetime (as it comes into context) and exit from that scope typically ends its lifetime (as it goes out of context). For instance, a variable with "lexical scope" is meaningful only within a certain function/subroutine, or more finely within a block of expressions/statements (accordingly withfunction scopeorblock scope); this is static resolution, performable at parse-time or compile-time. Alternatively, a variable withdynamic scopeis resolved at run-time, based on a global bindingstackthat depends on the specificcontrol flow. Variables only accessible within a certain functions are termed "local variables". A "global variable", or one with indefinite scope, may be referred to anywhere in the program.
Extent, on the other hand, is a runtime (dynamic) aspect of a variable. Eachbindingof a variable to a value can have its ownextentat runtime. The extent of the binding is the portion of the program's execution time during which the variable continues to refer to the same value or memory location. A running program may enter and leave a given extent many times, as in the case of aclosure.
Unless the programming language featuresgarbage collection, a variable whose extent permanently outlasts its scope can result in amemory leak, whereby the memory allocated for the variable can never be freed since the variable which would be used to reference it for deallocation purposes is no longer accessible. However, it can be permissible for a variable binding to extend beyond its scope, as occurs in Lispclosuresand Cstatic local variables; when execution passes back into the variable's scope, the variable may once again be used. A variable whose scope begins before its extent does is said to beuninitializedand often has an undefined, arbitrary value if accessed (seewild pointer), since it has yet to be explicitly given a particular value. A variable whose extent ends before its scope may become adangling pointerand deemed uninitialized once more since its value has been destroyed. Variables described by the previous two cases may be said to beout of extentorunbound. In many languages, it is an error to try to use the value of a variable when it is out of extent. In other languages, doing so may yieldunpredictable results. Such a variable may, however, be assigned a new value, which gives it a new extent.
For space efficiency, the memory needed for a variable may be allocated only when the variable is first used and freed when it is no longer needed. A variable is only needed while it is in scope, but beginning each variable's lifetime as soon as it enters scope can reserve space for variables that are never actually used. To avoid wasting such space, compilers often warn programmers if a variable is declared but not used.
It is considered good programming practice to make the scope of variables as narrow as feasible so that different parts of a program do not accidentally interact with each other by modifying each other's variables. Doing so also preventsaction at a distance. Common techniques for doing so are to have different sections of a program use differentname spaces, or to make individual variables "private" through eitherdynamic variable scopingorlexical variable scoping.
Many programming languages employ a reserved value (often namednullornil) to indicate an invalid or uninitialized variable.
Instatically typedlanguages such asC,C++,JavaorC#, a variable also has atype, meaning that only certain kinds of values can be stored in it. For example, a variable of type "integer" is prohibited from storing text values.[6]
Indynamically typedlanguages such asPython, a variable's type is inferred by its value, and can change according to its value. InCommon Lisp, both situations exist simultaneously: A variable is given a type (if undeclared, it is assumed to beT, the universalsupertype) which exists at compile time. Values also have types, which can be checked and queried at runtime.
Typing of variables also allowspolymorphismsto be resolved at compile time. However, this is different from the polymorphism used in object-oriented function calls (referred to asvirtual functionsinC++) which resolves the call based on the value type as opposed to the supertypes the variable is allowed to have.
Variables often store simple data, like integers and literal strings, but some programming languages allow a variable to store values of otherdatatypesas well. Such languages may also enable functions to beparametric polymorphic. These functions operate like variables to represent data of multiple types. For example, a function namedlengthmay determine the length of a list. Such alengthfunction may be parametric polymorphic by including a type variable in itstype signature, since the number of elements in the list is independent of the elements' types.
Theformal parameters(orformal arguments) of functions are also referred to as variables. For instance, in thisPythoncode segment,
the variable namedxis aparameterbecause it is given a value when the function is called. The integer 5 is theargumentwhich givesxits value. In most languages, function parameters have local scope. This specific variable namedxcan only be referred to within theaddtwofunction (though of course other functions can also have variables calledx).
The specifics of variable allocation and the representation of their values vary widely, both among programming languages and among implementations of a given language. Many language implementations allocate space forlocal variables, whose extent lasts for a single function call on thecall stack, and whose memory is automatically reclaimed when the function returns. More generally, inname binding, the name of a variable is bound to the address of some particular block (contiguous sequence) of bytes in memory, and operations on the variable manipulate that block.Referencingis more common for variables whose values have large or unknown sizes when the code is compiled. Such variables reference the location of the value instead of storing the value itself, which is allocated from a pool of memory called theheap.
Bound variables have values. A value, however, is an abstraction, an idea; in implementation, a value is represented by somedata object, which is stored somewhere in computer memory. The program, or theruntime environment, must set aside memory for each data object and, since memory is finite, ensure that this memory is yielded for reuse when the object is no longer needed to represent some variable's value.
Objects allocated from the heap must be reclaimed, especially when they are no longer needed. In a garbage-collected language (such as C#, Java, Python, Golang and Lisp), the runtime environment automatically reclaims objects when extant variables can no longer refer to them. In non-garbage-collected languages, such as C, the program (and the programmer) must explicitly allocate memory and later free it to reclaim it. Failure to do so leads to memory leaks, in which the heap is depleted as the program runs, risking eventual failure from exhausting available memory.
When a variable refers to adata structurecreated dynamically, some of its components may be only indirectly accessed through the variable. In such circumstances, garbage collectors (or analogous program features in languages that lack garbage collectors) must deal with a case where only a portion of the memory reachable from the variable needs to be reclaimed.
Unlike their mathematical counterparts, programming variables and constants commonly take multiple-character names, e.g.COSTortotal. Single-character names are most commonly used only for auxiliary variables; for instance,i,j,kforarray indexvariables.
Some naming conventions are enforced at the language level as part of the language syntax which involves the format of valid identifiers. In almost all languages, variable names cannot start with a digit (0–9) and cannot contain whitespace characters. Whether or not punctuation marks are permitted in variable names varies from language to language; many languages only permit theunderscore("_") in variable names and forbid all other punctuation. In some programming languages,sigils(symbols or punctuation) are affixed to variable identifiers to indicate the variable's datatype or scope.
Case-sensitivity of variable names also varies between languages, and some languages require the use of a certain case in naming certain entities.[note 1] Most modern languages are case-sensitive; some older languages are not. Some languages reserve certain forms of variable names for their own internal use; in many languages, names beginning with two underscores ("__") often fall under this category.
However, beyond the basic restrictions imposed by a language, the naming of variables is largely a matter of style. At the machine code level, variable names are not used, so the exact names chosen do not matter to the computer. Names thus serve only to identify variables for programmers and to make programs easier to write and understand. Using poorly chosen variable names can make code even harder to review than non-descriptive names, so names that are clear are often encouraged.[7][8]
Programmers often create and adhere to code style guidelines that offer guidance on naming variables or impose a precise naming scheme. Shorter names are faster to type but are less descriptive; longer names often make programs easier to read and the purpose of variables easier to understand. However, extreme verbosity in variable names can also lead to less comprehensible code.
Variables can be classified by their lifetime into four categories: static, stack-dynamic, explicit heap-dynamic, and implicit heap-dynamic. A static variable, often used as a global variable, is bound to a memory cell before execution begins and remains bound to the same cell until termination; typical examples are the static variables of C and C++. A stack-dynamic variable, also known as a local variable, is bound when its declaration statement is executed and is deallocated when the procedure returns; the main examples are local variables in C subprograms and Java methods. Explicit heap-dynamic variables are nameless (abstract) memory cells that are allocated and deallocated by explicit run-time instructions specified by the programmer; the main examples are dynamic objects in C++ (via new and delete) and all objects in Java. Implicit heap-dynamic variables are bound to heap storage only when they are assigned values, and allocation and release occur when values are reassigned; as a result, implicit heap-dynamic variables have the highest degree of flexibility. The main examples are some variables in JavaScript and PHP, and all variables in APL.
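A minimal C sketch of the first three categories (the identifiers are illustrative, and implicit heap-dynamic variables have no direct C counterpart):

#include <stdlib.h>

int counter;                       /* static: bound to storage before execution begins,
                                      kept until the program terminates */

void example(void)
{
    int local = 0;                 /* stack-dynamic: bound when the declaration is reached,
                                      deallocated when the function returns */

    int *p = malloc(sizeof *p);    /* explicit heap-dynamic: a nameless cell allocated
                                      by an explicit run-time call */
    if (p) {
        *p = local + counter;
        free(p);                   /* ...and released by an explicit call */
    }
    /* Implicit heap-dynamic variables (e.g. in JavaScript, PHP or APL) are bound to
       heap storage only when a value is assigned; C has no direct equivalent. */
}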
|
https://en.wikipedia.org/wiki/Variable_(computer_science)
|
Zero-based numberingis a way ofnumberingin which the initial element of asequenceis assigned theindex0, rather than the index 1 as is typical in everyday non-mathematical or non-programming circumstances. Under zero-based numbering, the initial element is sometimes termed thezerothelement,[1]rather than thefirstelement;zerothis acoinedordinal numbercorresponding to the numberzero. In some cases, an object or value that does not (originally) belong to a given sequence, but which could be naturally placed before its initial element, may be termed the zeroth element. There is no wide agreement regarding the correctness of using zero as an ordinal (nor regarding the use of the termzeroth), as it creates ambiguity for all subsequent elements of the sequence when lacking context.
Numbering sequences starting at 0 is quite common in mathematics notation, in particular incombinatorics, though programming languages for mathematics usually index from 1.[2][3][4]Incomputer science,arrayindices usually start at 0 in modern programming languages, so computer programmers might usezerothin situations where others might usefirst, and so forth. In some mathematical contexts, zero-based numbering can be used without confusion, when ordinal forms have well established meaning with an obvious candidate to come beforefirst; for instance, azeroth derivativeof a function is the function itself, obtained bydifferentiatingzero times. Such usage corresponds to naming an element not properly belonging to the sequence but preceding it: the zeroth derivative is not really a derivative at all. However, just as thefirst derivativeprecedes thesecond derivative, so also does thezeroth derivative(or the original function itself) precede thefirst derivative.
Martin Richards, creator of theBCPLlanguage (a precursor ofC), designed arrays initiating at 0 as the natural position to start accessing the array contents in the language, since the value of apointerpused as an address accesses the positionp+ 0in memory.[5][6]BCPL was first compiled for theIBM 7094; the language introduced norun-timeindirection lookups, so the indirection optimization provided by these arrays was done at compile time.[6]The optimization was nevertheless important.[6][7]
In 1982 Edsger W. Dijkstra, in his pertinent note Why numbering should start at zero,[8] argued that array subscripts should start at zero, zero being the most natural number. He considered the possible designs of array ranges expressed as chained inequalities, combining strict and non-strict inequalities into four possibilities, and concluded that zero-based arrays are best represented by non-overlapping half-open index ranges starting at zero, alluding to open, half-open and closed intervals as used with the real numbers. Dijkstra's criteria for preferring this convention are, in detail, that it represents empty sequences in a more natural way (a ≤ i < a ?) than closed "intervals" do (a ≤ i ≤ (a − 1) ?), and that with half-open "intervals" of naturals, the length of a sub-sequence equals the upper minus the lower bound (a ≤ i < b gives (b − a) possible values for i, with a, b, i all integers).
This usage follows from design choices embedded in many influentialprogramming languages, includingC,Java, andLisp. In these three, sequence types (C arrays, Java arrays and lists, and Lisp lists and vectors) are indexed beginning with the zero subscript. Particularly in C, where arrays are closely tied topointerarithmetic, this makes for a simpler implementation: the subscript refers to an offset from the starting position of an array, so the first element has an offset of zero.
Referencing memory by an address and an offset is represented directly incomputer hardwareon virtually all computer architectures, so this design detail in C makes compilation easier, at the cost of some human factors. In this context using "zeroth" as an ordinal is not strictly correct, but a widespread habit in this profession. Other programming languages, such asFortranorCOBOL, have array subscripts starting with one, because they were meant ashigh-level programming languages, and as such they had to have a correspondence to the usualordinal numberswhich predate theinvention of the zeroby a long time.
Pascal allows the range of an array to be of any ordinal type (including enumerated types). APL allows setting the index origin to 0 or 1 programmatically at runtime.[9][10] Some recent languages, such as Lua and Visual Basic, have adopted one-based indexing for the same reason.
Zero is the lowest unsigned integer value, one of the most fundamental types in programming and hardware design. In computer science,zerois thus often used as the base case for many kinds of numericalrecursion. Proofs and other sorts of mathematical reasoning in computer science often begin with zero. For these reasons, in computer science it is not unusual to number from zero rather than one.
If an array is used to represent a cycle, it is convenient to obtain the index with amodulo function, which can result in zero.
With zero-based numbering, a range can be expressed as the half-openinterval,[0,n), as opposed to the closed interval,[1,n]. Empty ranges, which often occur in algorithms, are tricky to express with a closed interval without resorting to obtuse conventions like[1, 0]. Because of this property, zero-based indexing potentially reducesoff-by-oneandfencepost errors.[8]On the other hand, the repeat countnis calculated in advance, making the use of counting from 0 ton− 1(inclusive) less intuitive. Some authors prefer one-based indexing, as it corresponds more closely to how entities are indexed in other contexts.[11]
Another property of this convention is in the use ofmodular arithmeticas implemented in modern computers. Usually, themodulo functionmaps any integer moduloNto one of the numbers0, 1, 2, ...,N− 1, whereN≥ 1. Because of this, many formulas in algorithms (such as that for calculating hash table indices) can be elegantly expressed in code using the modulo operation when array indices start at zero.
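Both points can be seen in a short, illustrative C fragment (the array contents and step count are arbitrary):

#include <stdio.h>

int main(void)
{
    int a[5] = {10, 20, 30, 40, 50};
    int n = 5;

    /* The half-open range [0, n): the body runs exactly n times,
       and n == 0 naturally expresses the empty range. */
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");

    /* Treating the array as a cycle: the modulo operation maps any
       step count back into the valid index range 0 .. n - 1. */
    for (int step = 0; step < 8; step++)
        printf("%d ", a[step % n]);
    printf("\n");
    return 0;
}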
Pointer operations can also be expressed more elegantly on a zero-based index due to the underlying address/offset logic mentioned above. To illustrate, suppose a is the memory address of the first element of an array, and i is the index of the desired element. To compute the address of the desired element, if the index numbers count from 1, the desired address is computed by this expression: a + s × (i − 1),
where s is the size of each element. In contrast, if the index numbers count from 0, the expression becomes a + s × i.
This simpler expression is more efficient to compute atrun time.
However, a language wishing to index arrays from 1 could adopt the convention that every array address is represented by a′ = a – s; that is, rather than using the address of the first array element, such a language would use the address of a fictitious element located immediately before the first actual element. The indexing expression for a 1-based index would then be a′ + s × i.
Hence, the efficiency benefit at run time of zero-based indexing is not inherent, but is an artifact of the decision to represent an array with the address of its first element rather than the address of the fictitious zeroth element. However, the address of that fictitious element could very well be the address of some other item in memory not related to the array.
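C makes this address/offset relationship explicit: a[i] is defined as *(a + i), so the first element sits at offset zero from the array's starting address. A tiny illustrative snippet:

#include <stdio.h>

int main(void)
{
    int a[3] = {7, 8, 9};
    /* &a[i] and a + i denote the same address; the compiler scales i
       by sizeof(int), the "s" in the expressions above. */
    printf("%d %d\n", a[0], *(a + 0));                  /* prints: 7 7 */
    printf("%p %p\n", (void *)&a[2], (void *)(a + 2));  /* same address twice */
    return 0;
}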
Superficially, the fictitious element doesn't scale well to multidimensional arrays. Indexing multidimensional arrays from zero makes a naive (contiguous) conversion to a linear address space (systematically varying one index after the other) look simpler than when indexing from one. For instance, when mapping the three-dimensional array A[P][N][M] to a linear array L[M ⋅ N ⋅ P], both with M ⋅ N ⋅ P elements, the index r in the linear array to access a specific element with L[r] = A[z][y][x] in zero-based indexing, i.e. [0 ≤ x < P], [0 ≤ y < N], [0 ≤ z < M], and [0 ≤ r < M ⋅ N ⋅ P], is calculated by r = (z ⋅ N + y) ⋅ P + x.
Organizing all arrays with 1-based indices ([1 ≤ x′ ≤ P], [1 ≤ y′ ≤ N], [1 ≤ z′ ≤ M], [1 ≤ r′ ≤ M ⋅ N ⋅ P]), and assuming an analogous arrangement of the elements, gives r′ = ((z′ − 1) ⋅ N + (y′ − 1)) ⋅ P + x′
to access the same element, which arguably looks more complicated. Of course,r′ =r+ 1,since[z=z′ – 1],[y=y′ – 1],and[x=x′ – 1].A simple and everyday-life example ispositional notation, which the invention of the zero made possible. In positional notation, tens, hundreds, thousands and all other digits start with zero, only units start at one.[12]
This situation can lead to some confusion in terminology. In a zero-based indexing scheme, the first element is "element number zero"; likewise, the twelfth element is "element number eleven". Therefore, an analogy from the ordinal numbers to the quantity of objects numbered appears; the highest index ofnobjects will ben− 1, and it refers to thenth element. For this reason, the first element is sometimes referred to as thezerothelement, in an attempt to avoid confusion.
Inmathematics, many sequences of numbers or ofpolynomialsare indexed by nonnegative integers, for example, theBernoulli numbersand theBell numbers.
In bothmechanicsandstatistics, the zerothmomentis defined, representing total mass in the case of physicaldensity, or total probability, i.e. one, for aprobability distribution.
Thezeroth law of thermodynamicswas formulated after the first, second, and third laws, but considered more fundamental, thus its name.
In biology, an organism is said to have zero-order intentionality if it shows "no intention of anything at all". This would include a situation where the organism's genetically predetermined phenotype results in a fitness benefit to itself, because it did not "intend" to express its genes.[13]In the similar sense, a computer may be considered from this perspective a zero-order intentional entity, as it does not "intend" to express the code of the programs it runs.[14]
In biological or medical experiments, the first day of an experiment is often numbered as day 0.[15]
Patient zero (orindex case) is the initialpatientin thepopulation sampleof anepidemiologicalinvestigation.
Theyear zerodoes not exist in the widely usedGregorian calendaror in its predecessor, theJulian calendar. Under those systems, the year1 BCis followed byAD 1. However, there is a year zero inastronomical year numbering(where it coincides with the Julian year 1 BC) and inISO 8601:2004(where it coincides with the Gregorian year 1 BC), as well as in allBuddhistandHindu calendars.
In many countries, theground floorin buildings is considered as floor number 0 rather than as the "1st floor", the naming convention usually found in the United States of America. This makes a consistent set with underground floors marked with negative numbers.
While the ordinal of 0 mostly finds use in communities directly connected to mathematics, physics, and computer science, there are also instances in classical music. The composerAnton Brucknerregarded his earlySymphony in D minorto be unworthy of including in the canon of his works, and he wrotegilt nicht("doesn't count") on the score and a circle with a crossbar, intending it to mean "invalid". But posthumously, this work came to be known asSymphony No. 0 in D minor, even though it was actually written afterSymphony No. 1 in C minor. There is an even earlierSymphony in F minorof Bruckner's, which is sometimes calledNo. 00. The Russian composerAlfred Schnittkealso wrote aSymphony No. 0.
In some universities, including Oxford and Cambridge, "week 0" or occasionally "noughth week" refers to the week before the first week of lectures in a term. In Australia, some universities refer to this as "O week", which serves as a pun on "orientation week". As a parallel, the introductory weeks at university educations inSwedenare generally callednollning(zeroing).
TheUnited States Air Forcestarts basic training each Wednesday, and the first week (of eight) is considered to begin with the following Sunday. The four days before that Sunday are often referred to as "zero week".
24-hour clocksand the international standardISO 8601use 0 to denote the first (zeroth) hour of the day, consistent with using the 0 to denote the first (zeroth) minute of the hour and the first (zeroth) second of the minute. Also, the12-hour clocksused inJapanuse 0 to denote the hour immediately after midnight and noon in contrast to 12 used elsewhere, in order to avoid confusionwhether 12 a.m. and 12 p.m. represent noon or midnight.
Robert Crumb's drawings for the first issue ofZap Comixwere stolen, so he drew a whole new issue, which was published as issue 1. Later he re-inked his photocopies of the stolen artwork and published it as issue 0.
TheBrussels ringroad in Belgium is numbered R0. It was built after the ring road aroundAntwerp, but Brussels (being the capital city) was deemed deserving of a more basic number. Similarly the (unfinished) orbital motorway aroundBudapestin Hungary is calledM0.
Zero is sometimes usedin street addresses, especially in schemes where even numbers are one side of the street and odd numbers on the other. A case in point isChrist ChurchonHarvard Square, whose address is 0 Garden Street.
Formerly inFormula One, when a defending world champion did not compete in the following season, the number 1 was not assigned to any driver, but one driver of the world champion team would carry the number 0, and the other, number 2. This did happen both in 1993 and 1994 withDamon Hillcarrying the number 0 in both seasons, as defending championNigel Mansellquit after 1992, and defending championAlain Prostquit after 1993. However, in 2014 the series moved to drivers carrying career-long personalised numbers, instead of team-allocated numbers, other than the defending champion still having the option to carry number 1. Therefore 0 is no longer used in this scenario. It is not clear if it is available as a driver's chosen number, or whether they must be between 2 and 99, but it has not been used to date under this system.
Some team sports allow 0 to be chosen as a player'suniform number(in addition to the typical range of 1-99). The NFL voted to allow this from 2023 onwards.
A chronological prequel of a series may be numbered as 0, such asRing 0: BirthdayorZork Zero.
TheSwiss Federal Railwaysnumber certain classes of rolling stock from zero, for example,Re 460000 to 118.
In the realm of fiction,Isaac Asimoveventually added a Zeroth Law to hisThree Laws of Robotics, essentially making them four laws.
A standardroulettewheel contains the number 0 as well as 1-36. It appears in green, so is classed as neither a "red" nor "black" number for betting purposes. The card gameUnohas number cards running from 0 to 9 along with special cards, within each coloured suit.
TheFour Essential Freedoms of Free Softwareare numbered starting from zero. This is for historical reasons: the list originally had only three freedoms, and when the fourth was added it was placed in the zeroth position as it was considered more basic.
|
https://en.wikipedia.org/wiki/Zero-based_numbering
|
Cache hierarchy,ormulti-level cache, is a memory architecture that uses a hierarchy of memory stores based on varying access speeds to cache data. Highly requested data is cached in high-speed access memory stores, allowing swifter access bycentral processing unit(CPU) cores.
Cache hierarchy is a form and part ofmemory hierarchyand can be considered a form oftiered storage.[1]This design was intended to allow CPU cores to process faster despite thememory latencyofmain memoryaccess. Accessing main memory can act as a bottleneck forCPU core performanceas the CPU waits for data, while making all of main memory high-speed may be prohibitively expensive. High-speed caches are a compromise allowing high-speed access to the data most-used by the CPU, permitting a fasterCPU clock.[2]
In the history of computer and electronic chip development, there was a period when increases in CPU speed outpaced the improvements in memory access speed.[3]The gap between the speed of CPUs and memory meant that the CPU would often be idle.[4]CPUs were increasingly capable of running and executing larger amounts of instructions in a given time, but the time needed to access data from main memory prevented programs from fully benefiting from this capability.[5]This issue motivated the creation of memory models with higher access rates in order to realize the potential of faster processors.[6]
This resulted in the concept ofcache memory, first proposed byMaurice Wilkes, a British computer scientist at the University of Cambridge in 1965. He called such memory models "slave memory".[7]Between roughly 1970 and 1990, papers and articles byAnant Agarwal,Alan Jay Smith,Mark D. Hill, Thomas R. Puzak, and others discussed better cache memory designs. The first cache memory models were implemented at the time, but even as researchers were investigating and proposing better designs, the need for faster memory models continued. This need resulted from the fact that although early cache models improved data access latency, with respect to cost and technical limitations it was not feasible for a computer system's cache to approach the size of main memory. From 1990 onward, ideas such as adding another cache level (second-level), as a backup for the first-level cache were proposed.Jean-Loup Baer, Wen-Hann Wang, Andrew W. Wilson, and others have conducted research on this model. When several simulations and implementations demonstrated the advantages of two-level cache models, the concept of multi-level caches caught on as a new and generally better model of cache memories. Since 2000, multi-level cache models have received widespread attention and are currently implemented in many systems, such as the three-level caches that are present in Intel's Core i7 products.[8]
Accessing main memory for each instruction execution may result in slow processing, with the clock speed depending on the time required to find and fetch the data. In order to hide this memory latency from the processor, data caching is used.[9]Whenever the data is required by the processor, it is fetched from the main memory and stored in the smaller memory structure called a cache. If there is any further need of that data, the cache is searched first before going to the main memory.[10]This structure resides closer to the processor in terms of the time taken to search and fetch data with respect to the main memory.[11]The advantages of using cache can be proven by calculating the average access time (AAT) for the memory hierarchy with and without the cache.[12]
Caches, being small in size, may result in frequent misses – when a search of the cache does not provide the sought-after information – resulting in a call to main memory to fetch data. Hence, the AAT is affected by the miss rate of each structure from which it searches for the data.[13]
AAT for main memory is given by the hit time of main memory. AAT for a cache can be given by hit time(cache) + miss rate(cache) × miss penalty(cache), where the miss penalty is the time needed to fetch the data from the next lower level of the hierarchy (ultimately the main memory).
The hit time for caches is less than the hit time for the main memory, so the AAT for data retrieval is significantly lower when accessing data through the cache rather than main memory.[14]
While using the cache may improve memory latency, it may not always result in the required improvement for the time taken to fetch data due to the way caches are organized and traversed. For example, direct-mapped caches that are the same size usually have a higher miss rate than fully associative caches. This may also depend on the benchmark of the computer testing the processor and on the pattern of instructions. But using a fully associative cache may result in more power consumption, as it has to search the whole cache every time. Due to this, the trade-off between power consumption (and associated heat) and the size of the cache becomes critical in the cache design.[13]
In the case of a cache miss, the purpose of using such a structure will be rendered useless and the computer will have to go to the main memory to fetch the required data. However, with amultiple-level cache, if the computer misses the cache closest to the processor (level-one cache or L1) it will then search through the next-closest level(s) of cache and go to main memory only if these methods fail. The general trend is to keep the L1 cache small and at a distance of 1–2 CPU clock cycles from the processor, with the lower levels of caches increasing in size to store more data than L1, hence being more distant but with a lower miss rate. This results in a better AAT.[15]The number of cache levels can be designed by architects according to their requirements after checking for trade-offs between cost, AATs, and size.[16][17]
With the technology scaling that allowed memory systems to be accommodated on a single chip, most modern-day processors have up to three or four cache levels.[18] The reduction in the AAT can be understood by this example, where the computer checks AAT for different configurations up to L3 caches.
Example: main memory = 50ns, L1 = 1 ns with 10% miss rate, L2 = 5 ns with 1% miss rate, L3 = 10 ns with 0.2% miss rate.
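Applying the AAT formula above, and treating each level's miss penalty as the AAT of the level below it, these numbers give approximately:
with no cache, AAT = 50 ns;
with an L1 cache only, AAT = 1 ns + 0.10 × 50 ns = 6 ns;
with L1 and L2, AAT = 1 ns + 0.10 × (5 ns + 0.01 × 50 ns) = 1.55 ns;
with L1, L2 and L3, AAT = 1 ns + 0.10 × (5 ns + 0.01 × (10 ns + 0.002 × 50 ns)) ≈ 1.51 ns.
Each added level lowers the average access time further, though with diminishing returns.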
In a banked cache, the cache is divided into a cache dedicated toinstructionstorage and a cache dedicated to data. In contrast, a unified cache contains both the instructions and data in the same cache.[22]During a process, the L1 cache (or most upper-level cache in relation to its connection to the processor) is accessed by the processor to retrieve both instructions and data. Requiring both actions to be implemented at the same time requires multiple ports and more access time in a unified cache. Having multiple ports requires additional hardware and wiring, leading to a significant structure between the caches and processing units.[23]To avoid this, the L1 cache is often organized as a banked cache which results in fewer ports, less hardware, and generally lower access times.[13]
Modern processors have split caches, and in systems with multilevel caches higher level caches may be unified while lower levels split.[24]
Whether a block present in the upper cache layer can also be present in the lower cache level is governed by the memory system'sinclusion policy, which may be inclusive, exclusive or non-inclusive non-exclusive (NINE).[citation needed]
With an inclusive policy, all the blocks present in the upper-level cache have to be present in the lower-level cache as well. Each upper-level cache component is a subset of the lower-level cache component. In this case, since there is a duplication of blocks, there is some wastage of memory. However, checking is faster.[citation needed]
Under an exclusive policy, all the cache hierarchy components are completely exclusive, so that any element in the upper-level cache will not be present in any of the lower cache components. This enables complete usage of the cache memory. However, there is a high memory-access latency.[25]
The above policies require a set of rules to be followed in order to implement them. If none of these are enforced, the resulting inclusion policy is called non-inclusive non-exclusive (NINE): a block in the upper-level cache may or may not also be present in the lower-level cache.[21]
There are two policies which define the way in which a modified cache block will be updated in the main memory: write through and write back.[citation needed]
In the case of write through policy, whenever the value of the cache block changes, it is further modified in the lower-level memory hierarchy as well.[26]This policy ensures that the data is stored safely as it is written throughout the hierarchy.
However, in the case of the write back policy, the changed cache block will be updated in the lower-level hierarchy only when the cache block is evicted. A "dirty bit" is attached to each cache block and set whenever the cache block is modified.[27] During eviction, blocks with a set dirty bit are written to the lower-level hierarchy. Under this policy, there is a risk of data loss, as the most recently changed copy of a datum exists only in the cache, so some corrective techniques must be employed.
In case of a write where the byte is not present in the cache block, the byte may be brought to the cache as determined by a write allocate or write no-allocate policy.[28]Write allocate policy states that in case of a write miss, the block is fetched from the main memory and placed in the cache before writing.[29]In the write no-allocate policy, if the block is missed in the cache it will write in the lower-level memory hierarchy without fetching the block into the cache.[30]
The common combinations of the policies are "write back, write allocate" and "write through, write no-allocate".
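A minimal C sketch of the "write back, write allocate" combination for a toy direct-mapped cache (NUM_LINES, LINE_SIZE, the simulated memory array and the helper functions are illustrative, not any particular hardware design):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define LINE_SIZE 64
#define NUM_LINES 256
#define MEM_SIZE  (1 << 20)          /* addresses are assumed to fit in this toy memory */

struct cache_line {
    bool     valid;
    bool     dirty;                  /* set when the cached copy has been modified */
    uint64_t tag;
    uint8_t  data[LINE_SIZE];
};

static struct cache_line cache[NUM_LINES];
static uint8_t memory[MEM_SIZE];     /* stands in for the lower-level hierarchy */

static void fetch_line(uint64_t addr, uint8_t *buf)       { memcpy(buf, &memory[addr], LINE_SIZE); }
static void write_line(uint64_t addr, const uint8_t *buf) { memcpy(&memory[addr], buf, LINE_SIZE); }

void cache_write_byte(uint64_t addr, uint8_t value)
{
    uint64_t block = addr / LINE_SIZE;
    uint64_t index = block % NUM_LINES;
    uint64_t tag   = block / NUM_LINES;
    struct cache_line *line = &cache[index];

    if (!line->valid || line->tag != tag) {                  /* write miss */
        if (line->valid && line->dirty)                       /* lazy write-back of the victim block */
            write_line((line->tag * NUM_LINES + index) * LINE_SIZE, line->data);
        fetch_line(block * LINE_SIZE, line->data);            /* write allocate: fetch before writing */
        line->valid = true;
        line->tag   = tag;
    }
    line->data[addr % LINE_SIZE] = value;                     /* the write itself happens in the cache */
    line->dirty = true;                                       /* memory is updated only on eviction */
}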
A private cache is assigned to one particular core in a processor, and cannot be accessed by any other cores. In some architectures, each core has its own private cache; this creates the risk of duplicate blocks in a system's cache architecture, which results in reduced capacity utilization. However, this type of design choice in a multi-layer cache architecture can also be good for a lower data-access latency.[28][31][32]
A shared cache is a cache which can be accessed by multiple cores.[33] Since it is shared, each block in the cache is unique and not duplicated, which tends to give a higher hit rate. However, data-access latency can increase as multiple cores try to access the same cache.[34]
Inmulti-core processors, the design choice to make a cache shared or private impacts the performance of the processor.[35]In practice, the upper-level cache L1 (or sometimes L2)[36][37]is implemented as private and lower-level caches are implemented as shared. This design provides high access rates for the high-level caches and low miss rates for the lower-level caches.[35]
|
https://en.wikipedia.org/wiki/Cache_hierarchy
|
Incomputer science,locality of reference, also known as theprinciple of locality,[1]is the tendency of a processor to access the same set of memory locations repetitively over a short period of time.[2]There are two basic types of reference locality – temporal and spatial locality. Temporal locality refers to the reuse of specific data and/or resources within a relatively small time duration. Spatial locality (also termeddata locality[3]) refers to the use of data elements within relatively close storage locations. Sequential locality, a special case of spatial locality, occurs when data elements are arranged and accessed linearly, such as traversing the elements in a one-dimensionalarray.
Locality is a type ofpredictablebehavior that occurs in computer systems. Systems that exhibit stronglocality of referenceare great candidates for performance optimization through the use of techniques such as thecaching,prefetchingfor memory and advancedbranch predictorsof a processor core.
There are several different types of locality of reference, including temporal, spatial, branch, and equidistant locality.
In order to benefit from temporal and spatial locality, which occur frequently, most of the information storage systems arehierarchical. Equidistant locality is usually supported by a processor's diverse nontrivial increment instructions. For branch locality, the contemporary processors have sophisticated branch predictors, and on the basis of this prediction the memory manager of the processor tries to collect and preprocess the data of plausible alternatives.
There are several reasons for locality. These reasons are either goals to achieve or circumstances to accept, depending on the aspect. They are not disjoint, and they range from general properties of program behavior to special cases.
If most of the time the substantial portion of the references aggregate into clusters, and if the shape of this system of clusters can be well predicted, then it can be used for performance optimization. There are several ways to benefit from locality using optimization techniques, in both hardware and software; hierarchical memory, described below, is among the most common.
Hierarchical memory is a hardware optimization that takes the benefits of spatial and temporal locality and can be used on several levels of the memory hierarchy.Pagingobviously benefits from temporal and spatial locality. A cache is a simple example of exploiting temporal locality, because it is a specially designed, faster but smaller memory area, generally used to keep recently referenced data and data near recently referenced data, which can lead to potential performance increases.
Data elements in a cache do not necessarily correspond to data elements that are spatially close in the main memory; however, data elements are brought into cache onecache lineat a time. This means that spatial locality is again important: if one element is referenced, a few neighboring elements will also be brought into cache. Finally, temporal locality plays a role on the lowest level, since results that are referenced very closely together can be kept in themachine registers. Some programming languages (such asC) allow the programmer to suggest that certain variables be kept in registers.
Data locality is a typical memory reference feature of regular programs (though many irregular memory access patterns exist). It makes the hierarchical memory layout profitable. In computers, memory is divided into a hierarchy in order to speed up data accesses. The lower levels of the memory hierarchy tend to be slower, but larger. Thus, a program will achieve greater performance if it uses memory while it is cached in the upper levels of the memory hierarchy and avoids bringing other data into the upper levels of the hierarchy that will displace data that will be used shortly in the future. This is an ideal, and sometimes cannot be achieved.
A typical memory hierarchy ranges from processor registers through several levels of cache and main memory down to disk storage; each level is larger but slower than the one above it, and the actual access times, sizes and number of levels vary between systems.
Modern machines tend to read blocks of lower memory into the next level of the memory hierarchy. If this displaces used memory, theoperating systemtries to predict which data will be accessed least (or latest) and move it down the memory hierarchy. Prediction algorithms tend to be simple to reduce hardware complexity, though they are becoming somewhat more complicated.
A common example ismatrix multiplication:
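The original listing is not shown here; a minimal C sketch of the two loop orderings discussed below, for N × N matrices A, B and C (global arrays in C are zero-initialized), might be:

#define N 512
double A[N][N], B[N][N], C[N][N];

/* First ordering: the inner loop runs over k, so B[k][j] jumps a whole
   row of B on every iteration and misses the cache. */
void multiply_ijk(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++)
                C[i][j] += A[i][k] * B[k][j];
}

/* Second ordering: swapping the j and k loops makes the inner loop run
   over j, so B[k][j] and C[i][j] are accessed along contiguous rows and
   A[i][k] can stay in a register. */
void multiply_ikj(void)
{
    for (int i = 0; i < N; i++)
        for (int k = 0; k < N; k++)
            for (int j = 0; j < N; j++)
                C[i][j] += A[i][k] * B[k][j];
}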
By switching the looping order forjandk, the speedup in large matrix multiplications becomes dramatic, at least for languages that put contiguous array elements in the last dimension. This will not change the mathematical result, but it improves efficiency. In this case, "large" means, approximately, more than 100,000 elements in each matrix, or enough addressable memory such that the matrices will not fit in L1 and L2 caches.
The reason for this speedup is that in the first case, the reads ofA[i][k]are in cache (since thekindex is the contiguous, last dimension), butB[k][j]is not, so there is a cache miss penalty onB[k][j].C[i][j]is irrelevant, because it can behoistedout of the inner loop -- the loop variable there isk.
In the second case, the reads and writes ofC[i][j]are both in cache, the reads ofB[k][j]are in cache, and the read ofA[i][k]can be hoisted out of the inner loop.
Thus, the second example has no cache miss penalty in the inner loop while the first example has a cache penalty.
On a year 2014 processor, the second case is approximately five times faster than the first case, when written inCand compiled withgcc -O3. (A careful examination of the disassembled code shows that in the first case,GCCusesSIMDinstructions and in the second case it does not, but the cache penalty is much worse than the SIMD gain.)[citation needed]
Temporal locality can also be improved in the above example by using a technique calledblocking. The larger matrix can be divided into evenly sized sub-matrices, so that the smaller blocks can be referenced (multiplied) several times while in memory. Note that this example works for square matrices of dimensions SIZE x SIZE, but it can easily be extended for arbitrary matrices by substituting SIZE_I, SIZE_J and SIZE_K where appropriate.
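The blocked listing is likewise not shown; a minimal C sketch of the idea, assuming square SIZE × SIZE matrices and a single block size BLOCK that divides SIZE evenly, might be:

#define SIZE  512
#define BLOCK 64        /* chosen so that a few BLOCK × BLOCK tiles fit in a cache level */

double A[SIZE][SIZE], B[SIZE][SIZE], C[SIZE][SIZE];

void multiply_blocked(void)
{
    for (int ii = 0; ii < SIZE; ii += BLOCK)
        for (int kk = 0; kk < SIZE; kk += BLOCK)
            for (int jj = 0; jj < SIZE; jj += BLOCK)
                /* Multiply one BLOCK × BLOCK tile of A by one tile of B,
                   accumulating into a tile of C; each tile is reused many
                   times while it is resident in the cache. */
                for (int i = ii; i < ii + BLOCK; i++)
                    for (int k = kk; k < kk + BLOCK; k++)
                        for (int j = jj; j < jj + BLOCK; j++)
                            C[i][j] += A[i][k] * B[k][j];
}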
The temporal locality of the above solution is provided because a block can be used several times before moving on, so that it is moved in and out of memory less often. Spatial locality is improved because elements with consecutive memory addresses tend to be pulled up the memory hierarchy together.
|
https://en.wikipedia.org/wiki/Locality_of_reference#Spatial_and_temporal_locality_usage
|
Incomputing, acache(/kæʃ/ⓘKASH)[1]is a hardware or software component that stores data so that future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation or a copy of data stored elsewhere. Acache hitoccurs when the requested data can be found in a cache, while acache missoccurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests that can be served from the cache, the faster the system performs.[2]
To be cost-effective, caches must be relatively small. Nevertheless, caches are effective in many areas of computing because typicalcomputer applicationsaccess data with a high degree oflocality of reference. Such access patterns exhibit temporal locality, where data is requested that has been recently requested, and spatial locality, where data is requested that is stored near data that has already been requested.
In memory design, there is an inherent trade-off between capacity and speed because larger capacity implies larger size and thus greater physical distances for signals to travel causingpropagation delays. There is also a tradeoff between high-performance technologies such asSRAMand cheaper, easily mass-produced commodities such asDRAM,flash, orhard disks.
Thebufferingprovided by a cache benefits one or both oflatencyandthroughput(bandwidth).
A larger resource incurs a significant latency for access – e.g. it can take hundreds of clock cycles for a modern 4 GHz processor to reach DRAM. This is mitigated by reading large chunks into the cache, in the hope that subsequent reads will be from nearby locations and can be read from the cache. Prediction or explicitprefetchingcan be used to guess where future reads will come from and make requests ahead of time; if done optimally, the latency is bypassed altogether.
The use of a cache also allows for higher throughput from the underlying resource, by assembling multiple fine-grain transfers into larger, more efficient requests. In the case of DRAM circuits, the additional throughput may be gained by using a wider data bus.
Hardware implements cache as ablockof memory for temporary storage of data likely to be used again.Central processing units(CPUs),solid-state drives(SSDs) and hard disk drives (HDDs) frequently include hardware-based cache, whileweb browsersandweb serverscommonly rely on software caching.
A cache is made up of a pool of entries. Each entry has associateddata, which is a copy of the same data in somebacking store. Each entry also has atag, which specifies the identity of the data in the backing store of which the entry is a copy.
When the cache client (a CPU, web browser,operating system) needs to access data presumed to exist in the backing store, it first checks the cache. If an entry can be found with a tag matching that of the desired data, the data in the entry is used instead. This situation is known as acache hit. For example, a web browser program might check its local cache on disk to see if it has a local copy of the contents of a web page at a particularURL. In this example, the URL is the tag, and the content of the web page is the data. The percentage of accesses that result in cache hits is known as thehit rateorhit ratioof the cache.
The alternative situation, when the cache is checked and found not to contain any entry with the desired tag, is known as acache miss. This requires a more expensive access of data from the backing store. Once the requested data is retrieved, it is typically copied into the cache, ready for the next access.
During a cache miss, some other previously existing cache entry is typically removed in order to make room for the newly retrieved data. The heuristic used to select the entry to replace is known as the replacement policy. One popular replacement policy, least recently used (LRU), replaces the oldest entry, the entry that was accessed least recently. More sophisticated caching algorithms also take into account the frequency of use of entries.
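A minimal C sketch of LRU replacement for a tiny fully associative cache (the entry count, tag type and backing-store stand-in are illustrative):

#include <stdint.h>

#define ENTRIES 4

struct entry {
    int      valid;
    uint64_t tag;          /* identifies which backing-store item is cached */
    uint64_t last_used;    /* logical time of the most recent access */
    int      value;
};

static struct entry cache[ENTRIES];
static uint64_t now;

static int slow_lookup(uint64_t tag)   /* stand-in for the expensive backing-store access */
{
    return (int)(tag * 2);
}

int lru_get(uint64_t tag)
{
    int victim = 0;
    for (int i = 0; i < ENTRIES; i++) {
        if (cache[i].valid && cache[i].tag == tag) {    /* cache hit */
            cache[i].last_used = ++now;
            return cache[i].value;
        }
        if (!cache[i].valid || cache[i].last_used < cache[victim].last_used)
            victim = i;                                  /* remember the least recently used slot */
    }
    /* Cache miss: evict the least recently used entry and store the fetched value. */
    cache[victim] = (struct entry){ 1, tag, ++now, slow_lookup(tag) };
    return cache[victim].value;
}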
Cache writes must eventually be propagated to the backing store. The timing for this is governed by the write policy. The two primary write policies are write-through, in which every write to the cache is also made synchronously to the backing store, and write-back, in which a write is initially made only to the cache and propagated to the backing store later.[3]
A write-back cache is more complex to implement since it needs to track which of its locations have been written over and mark them asdirtyfor later writing to the backing store. The data in these locations are written back to the backing store only when they are evicted from the cache, a process referred to as alazy write. For this reason, a read miss in a write-back cache may require two memory accesses to the backing store: one to write back the dirty data, and one to retrieve the requested data. Other policies may also trigger data write-back. The client may make many changes to data in the cache, and then explicitly notify the cache to write back the data.
Write operations do not return data. Consequently, a decision needs to be made for write misses: whether or not to load the data into the cache. This is determined by the write-miss policy: with write allocate (also called fetch on write), the data at the missed-write location is loaded into the cache, and the write is then handled like a write hit; with no-write allocate (also called write around), the data is written directly to the backing store and is not loaded into the cache.
While either write policy can be combined with either write-miss policy, they are typically paired as follows: a write-back cache usually uses write allocate, hoping that subsequent writes or reads to the same block will hit in the cache, whereas a write-through cache usually uses no-write allocate.[4][5]
Entities other than the cache may change the data in the backing store, in which case the copy in the cache may become out-of-date orstale. Alternatively, when the client updates the data in the cache, copies of that data in other caches will become stale. Communication protocols between the cache managers that keep the data consistent are associated withcache coherence.
On a cache read miss, caches with ademand pagingpolicyread the minimum amount from the backing store. A typical demand-paging virtual memory implementation reads one page of virtual memory (often 4 KB) from disk into the disk cache in RAM. A typical CPU reads a single L2 cache line of 128 bytes from DRAM into the L2 cache, and a single L1 cache line of 64 bytes from the L2 cache into the L1 cache.
Caches with aprefetch input queueor more generalanticipatory paging policygo further—they not only read the data requested, but guess that the next chunk or two of data will soon be required, and so prefetch that data into the cache ahead of time. Anticipatory paging is especially helpful when the backing store has a long latency to read the first chunk and much shorter times to sequentially read the next few chunks, such asdisk storageand DRAM.
A few operating systems go further with aloaderthat always pre-loads the entire executable into RAM. A few caches go even further, not only pre-loading an entire file, but also starting to load other related files that may soon be requested, such as thepage cacheassociated with aprefetcheror theweb cacheassociated withlink prefetching.
Small memories on or close to the CPU can operate faster than the much largermain memory.[6]Most CPUs since the 1980s have used one or more caches, sometimesin cascaded levels; modern high-endembedded,desktopand servermicroprocessorsmay have as many as six types of cache (between levels and functions).[7]Some examples of caches with a specific function are theD-cache,I-cacheand thetranslation lookaside bufferfor thememory management unit(MMU).
Earliergraphics processing units(GPUs) often had limited read-onlytexture cachesand usedswizzlingto improve 2Dlocality of reference.Cache misseswould drastically affect performance, e.g. ifmipmappingwas not used. Caching was important to leverage 32-bit (and wider) transfers for texture data that was often as little as 4 bits per pixel.
As GPUs advanced, supportinggeneral-purpose computing on graphics processing unitsandcompute kernels, they have developed progressively larger and increasingly general caches, includinginstruction cachesforshaders, exhibiting functionality commonly found in CPU caches. These caches have grown to handlesynchronization primitivesbetween threads andatomic operations, and interface with a CPU-style MMU.
Digital signal processorshave similarly generalized over the years. Earlier designs usedscratchpad memoryfed bydirect memory access, but modern DSPs such asQualcomm Hexagonoften include a very similar set of caches to a CPU (e.g.Modified Harvard architecturewith shared L2, split L1 I-cache and D-cache).[8]
A memory management unit (MMU) that fetches page table entries from main memory has a specialized cache, used for recording the results ofvirtual addresstophysical addresstranslations. This specialized cache is called a translation lookaside buffer (TLB).[9]
Information-centric networking(ICN) is an approach to evolve theInternetinfrastructure away from a host-centric paradigm, based on perpetual connectivity and theend-to-end principle, to a network architecture in which the focal point is identified information. Due to the inherent caching capability of the nodes in an ICN, it can be viewed as a loosely connected network of caches, which has unique requirements for caching policies. However, ubiquitous content caching introduces the challenge to content protection against unauthorized access, which requires extra care and solutions.[10]
Unlike proxy servers, in ICN the cache is a network-level solution. Therefore, it has rapidly changing cache states and higher request arrival rates; moreover, smaller cache sizes impose different requirements on the content eviction policies. In particular, eviction policies for ICN should be fast and lightweight. Various cache replication and eviction schemes for different ICN architectures and applications have been proposed.[citation needed]
The time aware least recently used (TLRU) is a variant of LRU designed for the situation where the stored contents in cache have a valid lifetime. The algorithm is suitable in network cache applications, such as ICN,content delivery networks(CDNs) and distributed networks in general. TLRU introduces a new term: time to use (TTU). TTU is a time stamp on content which stipulates the usability time for the content based on the locality of the content and information from the content publisher. Owing to this locality-based time stamp, TTU provides more control to the local administrator to regulate in-network storage.
In the TLRU algorithm, when a piece of content arrives, a cache node calculates the local TTU value based on the TTU value assigned by the content publisher. The local TTU value is calculated by using a locally-defined function. Once the local TTU value is calculated the replacement of content is performed on a subset of the total content stored in cache node. The TLRU ensures that less popular and short-lived content should be replaced with incoming content.[11]
The least frequent recently used (LFRU) cache replacement scheme combines the benefits of LFU and LRU schemes. LFRU is suitable for network cache applications, such as ICN, CDNs and distributed networks in general. In LFRU, the cache is divided into two partitions called privileged and unprivileged partitions. The privileged partition can be seen as a protected partition. If content is highly popular, it is pushed into the privileged partition. Replacement of the privileged partition is done by first evicting content from the unprivileged partition, then pushing content from the privileged partition to the unprivileged partition, and finally inserting new content into the privileged partition. In the above procedure, the LRU is used for the privileged partition and an approximated LFU (ALFU) scheme is used for the unprivileged partition. The basic idea is to cache the locally popular content with the ALFU scheme and push the popular content to the privileged partition.[12]
In 2011, the use of smartphones with weather forecasting options was overly taxingAccuWeatherservers; two requests from the same area would generate separate requests. An optimization by edge-servers to truncate the GPS coordinates to fewer decimal places meant that the cached results from a nearby query would be used. The number of to-the-server lookups per day dropped by half.[13]
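A minimal C sketch of the coordinate-truncation idea (the key format and two-decimal precision are illustrative, not AccuWeather's actual scheme): rounding coordinates to fewer decimal places makes nearby queries produce the same cache key, so one cached forecast serves a whole area.

#include <stdio.h>

void make_cache_key(double lat, double lon, char *key, size_t len)
{
    /* two decimal places is roughly one kilometre of resolution */
    snprintf(key, len, "%.2f,%.2f", lat, lon);
}

int main(void)
{
    char a[32], b[32];
    make_cache_key(40.71281, -74.00602, a, sizeof a);
    make_cache_key(40.71433, -74.00977, b, sizeof b);
    printf("%s\n%s\n", a, b);   /* both print "40.71,-74.01", so they share one cache entry */
    return 0;
}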
While CPU caches are generally managed entirely by hardware, a variety of software manages other caches. Thepage cachein main memory is managed by theoperating system kernel.
While thedisk buffer, which is an integrated part of the hard disk drive or solid state drive, is sometimes misleadingly referred to asdisk cache, its main functions are write sequencing and read prefetching. High-enddisk controllersoften have their own on-board cache for the hard disk drive's data blocks.
Finally, a fast local hard disk drive can also cache information held on even slower data storage devices, such as remote servers (web cache) or localtape drivesoroptical jukeboxes; such a scheme is the main concept ofhierarchical storage management. Also, fast flash-based solid-state drives (SSDs) can be used as caches for slower rotational-media hard disk drives, working together ashybrid drives.
Web browsers andweb proxy servers, either locally or at theInternet service provider(ISP), employ web caches to store previous responses from web servers, such asweb pagesandimages. Web caches reduce the amount of information that needs to be transmitted across the network, as information previously stored in the cache can often be re-used. This reduces bandwidth and processing requirements of the web server, and helps to improveresponsivenessfor users of the web.[14]
Another form of cache isP2P caching, where the files most sought for bypeer-to-peerapplications are stored in an ISP cache to accelerate P2P transfers. Similarly, decentralised equivalents exist, which allow communities to perform the same task for P2P traffic, for example, Corelli.[15]
A cache can store data that is computed on demand rather than retrieved from a backing store.Memoizationis anoptimizationtechnique that stores the results of resource-consumingfunction callswithin a lookup table, allowing subsequent calls to reuse the stored results and avoid repeated computation. It is related to thedynamic programmingalgorithm design methodology, which can also be thought of as a means of caching.
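A minimal C sketch of memoization (the function and table size are illustrative): results of an expensive recursive computation are stored in a lookup table, so repeated calls with the same argument become cache hits instead of recomputations.

#include <stdint.h>

#define MAX_N 94                 /* fib(93) is the largest value that fits in a uint64_t */

static uint64_t memo[MAX_N];     /* 0 means "not computed yet"; fib(n) > 0 for all n >= 1 */

uint64_t fib(unsigned n)
{
    if (n < 2)
        return n;                /* base cases are cheap and need no caching */
    if (n >= MAX_N)
        return 0;                /* out of range for this toy table */
    if (memo[n] != 0)            /* cache hit: reuse the stored result */
        return memo[n];
    memo[n] = fib(n - 1) + fib(n - 2);   /* cache miss: compute once, then store */
    return memo[n];
}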
Acontent delivery network(CDN) is a network of distributed servers that deliver pages and otherweb contentto a user, based on the geographic locations of the user, the origin of the web page and the content delivery server.
CDNs were introduced in the late 1990s as a way to speed up the delivery of static content, such as HTML pages, images and videos. By replicating content on multiple servers around the world and delivering it to users based on their location, CDNs can significantly improve the speed and availability of a website or application. When a user requests a piece of content, the CDN will check to see if it has a copy of the content in its cache. If it does, the CDN will deliver the content to the user from the cache.[16]
Acloud storage gatewayis ahybrid cloud storagedevice that connects a local network to one or morecloud storage services, typicallyobject storageservices such asAmazon S3. It provides a cache for frequently accessed data, providing high speed local access to frequently accessed data in the cloud storage service. Cloud storage gateways also provide additional benefits such as accessing cloud object storage through traditional file serving protocols as well as continued access to cached data during connectivity outages.[17]
The BINDDNSdaemon caches a mapping of domain names toIP addresses, as does a resolver library.
Write-through operation is common when operating over unreliable networks (like an Ethernet LAN), because of the enormous complexity of the coherency protocol required between multiple write-back caches when communication is unreliable. For instance, web page caches andclient-sidenetwork file systemcaches (like those inNFSorSMB) are typically read-only or write-through specifically to keep the network protocol simple and reliable.
Search enginesalso frequently make web pages they have indexed available from their cache. For example,Googleprovides a "Cached" link next to each search result. This can prove useful when web pages from a web server are temporarily or permanently inaccessible.
Database cachingcan substantially improve the throughput ofdatabaseapplications, for example in the processing ofindexes,data dictionaries, and frequently used subsets of data.
Adistributed cache[18]uses networked hosts to provide scalability, reliability and performance to the application.[19]The hosts can be co-located or spread over different geographical regions.
The semantics of a "buffer" and a "cache" are not totally different; even so, there are fundamental differences in intent between the process of caching and the process of buffering.
Fundamentally, caching realizes a performance increase for transfers of data that is being repeatedly transferred. While a caching system may realize a performance increase upon the initial (typically write) transfer of a data item, this performance increase is due to buffering occurring within the caching system.
With read caches, a data item must have been fetched from its residing location at least once in order for subsequent reads of the data item to realize a performance increase by virtue of being able to be fetched from the cache's (faster) intermediate storage rather than the data's residing location. With write caches, a performance increase of writing a data item may be realized upon the first write of the data item by virtue of the data item immediately being stored in the cache's intermediate storage, deferring the transfer of the data item to its residing storage at a later stage or else occurring as a background process. Contrary to strict buffering, a caching process must adhere to a (potentially distributed) cache coherency protocol in order to maintain consistency between the cache's intermediate storage and the location where the data resides. Buffering, on the other hand, reduces the number of transfers of otherwise novel data between communicating processes, provides an intermediary for processes that are incapable of direct transfers between each other, or ensures a minimum data size or representation required by at least one of the processes involved in a transfer.
With typical caching implementations, a data item that is read or written for the first time is effectively being buffered; and in the case of a write, mostly realizing a performance increase for the application from where the write originated. Additionally, the portion of a caching protocol where individual writes are deferred to a batch of writes is a form of buffering. The portion of a caching protocol where individual reads are deferred to a batch of reads is also a form of buffering, although this form may negatively impact the performance of at least the initial reads (even though it may positively impact the performance of the sum of the individual reads). In practice, caching almost always involves some form of buffering, while strict buffering does not involve caching.
A buffer is a temporary memory location that is traditionally used because CPUinstructionscannot directly address data stored in peripheral devices. Thus, addressable memory is used as an intermediate stage. Additionally, such a buffer may be feasible when a large block of data is assembled or disassembled (as required by a storage device), or when data may be delivered in a different order than that in which it is produced. Also, a whole buffer of data is usually transferred sequentially (for example to hard disk), so buffering itself sometimes increases transfer performance or reduces the variation or jitter of the transfer's latency as opposed to caching where the intent is to reduce the latency. These benefits are present even if the buffered data are written to the buffer once and read from the buffer once.
A cache also increases transfer performance. A part of the increase similarly comes from the possibility that multiple small transfers will combine into one large block. But the main performance-gain occurs because there is a good chance that the same data will be read from cache multiple times, or that written data will soon be read. A cache's sole purpose is to reduce accesses to the underlying slower storage. Cache is also usually anabstraction layerthat is designed to be invisible from the perspective of neighboring layers.
https://en.wikipedia.org/wiki/Cache_(computing)#Buffer_vs._cache
A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory.[1] A cache is a smaller, faster memory, located closer to a processor core, which stores copies of the data from frequently used main memory locations. Most CPUs have a hierarchy of multiple cache levels (L1, L2, often L3, and rarely even L4), with different instruction-specific and data-specific caches at level 1.[2] The cache memory is typically implemented with static random-access memory (SRAM), which in modern CPUs is by far the largest part of them by chip area, but SRAM is not always used for all levels (of I- or D-cache), or even any level; sometimes some or all of the later levels are implemented with eDRAM.
Other types of caches exist (that are not counted towards the "cache size" of the most important caches mentioned above), such as thetranslation lookaside buffer(TLB) which is part of thememory management unit(MMU) which most CPUs have.
When trying to read from or write to a location in the main memory, the processor checks whether the data from that location is already in the cache. If so, the processor will read from or write to the cache instead of the much slower main memory.
Many modern desktop, server, and industrial CPUs have at least three independent levels of caches (L1, L2 and L3) and different types of caches, such as separate instruction and data caches and translation lookaside buffers.
Early examples of CPU caches include the Atlas 2[3] and the IBM System/360 Model 85[4][5] in the 1960s. The first CPUs that used a cache had only one level of cache; unlike later level 1 caches, it was not split into L1d (for data) and L1i (for instructions). Split L1 cache started in 1976 with the IBM 801 CPU,[6][7] became mainstream in the late 1980s, and in 1997 entered the embedded CPU market with the ARMv5TE. In 2015, even sub-dollar SoCs split the L1 cache. They also have L2 caches and, for larger processors, L3 caches as well. The L2 cache is usually not split, and acts as a common repository for the already split L1 cache. Every core of a multi-core processor has a dedicated L1 cache, which is usually not shared between the cores. The L2 cache, and higher-level caches, may be shared between the cores. L4 cache is currently uncommon, and is generally dynamic random-access memory (DRAM) on a separate die or chip, rather than static random-access memory (SRAM). An exception to this is when eDRAM is used for all levels of cache, down to L1. Historically L1 was also on a separate die; however, bigger die sizes have allowed integration of it as well as other cache levels, with the possible exception of the last level. Each extra level of cache tends to be bigger and optimized differently.
Caches, like RAM historically, have generally been sized in powers of two: 2, 4, 8, 16, etc. KiB. For MiB sizes (i.e. for larger non-L1 caches) the pattern broke down fairly early, to allow larger caches without being forced into the doubling-in-size paradigm; the Intel Core 2 Duo, for example, had a 3 MiB L2 cache in April 2008. This happened much later for L1 caches, as their size is generally still a small number of KiB. The IBM zEC12 from 2012 is an exception, with an unusually large 96 KiB L1 data cache for its time; the IBM z13 has a 96 KiB L1 instruction cache (and a 128 KiB L1 data cache),[8] and Intel Ice Lake-based processors from 2018 have a 48 KiB L1 data cache and a 48 KiB L1 instruction cache. In 2020, some Intel Atom CPUs (with up to 24 cores) have cache sizes that are multiples of 4.5 MiB and 15 MiB.[9][10]
Data is transferred between memory and cache in blocks of fixed size, calledcache linesorcache blocks. When a cache line is copied from memory into the cache, a cache entry is created. The cache entry will include the copied data as well as the requested memory location (called a tag).
When the processor needs to read or write a location in memory, it first checks for a corresponding entry in the cache. The cache checks for the contents of the requested memory location in any cache lines that might contain that address. If the processor finds that the memory location is in the cache, acache hithas occurred. However, if the processor does not find the memory location in the cache, acache misshas occurred. In the case of a cache hit, the processor immediately reads or writes the data in the cache line. For a cache miss, the cache allocates a new entry and copies data from main memory, then the request is fulfilled from the contents of the cache.
To make room for the new entry on a cache miss, the cache may have to evict one of the existing entries. The heuristic it uses to choose the entry to evict is called the replacement policy. The fundamental problem with any replacement policy is that it must predict which existing cache entry is least likely to be used in the future. Predicting the future is difficult, so there is no perfect method to choose among the variety of replacement policies available. One popular replacement policy,least-recently used(LRU), replaces the least recently accessed entry.
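As a rough illustration of LRU replacement, here is a minimal Python sketch of a tiny fully associative cache; the capacity, the backing-store function, and all names are hypothetical and chosen only for the example.

from collections import OrderedDict

class LRUCache:
    # Minimal fully associative cache with least-recently-used eviction.
    def __init__(self, capacity, fetch_from_memory):
        self.capacity = capacity        # number of entries the cache can hold
        self.fetch = fetch_from_memory  # stands in for the slow backing store
        self.entries = OrderedDict()    # ordering tracks recency of use

    def read(self, address):
        if address in self.entries:               # cache hit
            self.entries.move_to_end(address)     # mark as most recently used
            return self.entries[address]
        data = self.fetch(address)                # cache miss: go to the slow store
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)      # evict the least recently used entry
        self.entries[address] = data
        return data

# Toy backing store where each address holds its square.
cache = LRUCache(capacity=2, fetch_from_memory=lambda a: a * a)
cache.read(1); cache.read(2); cache.read(1)
cache.read(3)                 # evicts address 2, the least recently used
print(list(cache.entries))    # [1, 3]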
Marking some memory ranges as non-cacheable can improve performance, by avoiding caching of memory regions that are rarely re-accessed. This avoids the overhead of loading something into the cache without having any reuse. Cache entries may also be disabled or locked depending on the context.
If data are written to the cache, at some point they must also be written to main memory; the timing of this write is known as the write policy. In awrite-throughcache, every write to the cache causes a write to main memory. Alternatively, in awrite-backor copy-back cache, writes are not immediately mirrored to the main memory, and the cache instead tracks which locations have been written over, marking them asdirty. The data in these locations are written back to the main memory only when those data are evicted from the cache. For this reason, a read miss in a write-back cache may sometimes require two memory accesses to service: one to first write the dirty location to main memory, and then another to read the new location from memory. Also, a write to a main memory location that is not yet mapped in a write-back cache may evict an already dirty location, thereby freeing that cache space for the new memory location.
There are intermediate policies as well. The cache may be write-through, but the writes may be held in a store data queue temporarily, usually so multiple stores can be processed together (which can reduce bus turnarounds and improve bus utilization).
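The contrast between the two basic write policies can be sketched in a few lines of Python; the classes below are illustrative only and model no particular hardware.

class WriteBackCache:
    # Writes stay in the cache and reach the backing store only when a
    # dirty line is evicted.
    def __init__(self, memory):
        self.memory = memory   # dict standing in for main memory
        self.lines = {}        # address -> cached value
        self.dirty = set()     # addresses written but not yet propagated

    def write(self, address, value):
        self.lines[address] = value
        self.dirty.add(address)            # defer the memory update

    def evict(self, address):
        if address in self.dirty:
            self.memory[address] = self.lines[address]   # write back dirty data
            self.dirty.discard(address)
        self.lines.pop(address, None)

class WriteThroughCache(WriteBackCache):
    def write(self, address, value):
        self.lines[address] = value
        self.memory[address] = value       # every write also goes to memory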
Cached data from the main memory may be changed by other entities (e.g., peripherals usingdirect memory access(DMA) or another core in amulti-core processor), in which case the copy in the cache may become out-of-date or stale. Alternatively, when a CPU in amultiprocessorsystem updates data in the cache, copies of data in caches associated with other CPUs become stale. Communication protocols between the cache managers that keep the data consistent are known ascache coherenceprotocols.
Cache performance measurementhas become important in recent times where the speed gap between the memory performance and the processor performance is increasing exponentially. The cache was introduced to reduce this speed gap. Thus knowing how well the cache is able to bridge the gap in the speed of processor and memory becomes important, especially in high-performance systems. The cache hit rate and the cache miss rate play an important role in determining this performance. To improve the cache performance, reducing the miss rate becomes one of the necessary steps among other steps. Decreasing the access time to the cache also gives a boost to its performance and helps with optimization.
The time taken to fetch one cache line from memory (readlatencydue to a cache miss) matters because the CPU will run out of work while waiting for the cache line. When a CPU reaches this state, it is called a stall. As CPUs become faster compared to main memory, stalls due to cache misses displace more potential computation; modern CPUs can execute hundreds of instructions in the time taken to fetch a single cache line from main memory.
Various techniques have been employed to keep the CPU busy during this time, includingout-of-order executionin which the CPU attempts to execute independent instructions after the instruction that is waiting for the cache miss data. Another technology, used by many processors, issimultaneous multithreading(SMT), which allows an alternate thread to use the CPU core while the first thread waits for required CPU resources to become available.
Theplacement policydecides where in the cache a copy of a particular entry of main memory will go. If the placement policy is free to choose any entry in the cache to hold the copy, the cache is calledfully associative. At the other extreme, if each entry in the main memory can go in just one place in the cache, the cache isdirect-mapped. Many caches implement a compromise in which each entry in the main memory can go to any one of N places in the cache, and are described as N-way set associative.[11]For example, the level-1 data cache in anAMD Athlonis two-way set associative, which means that any particular location in main memory can be cached in either of two locations in the level-1 data cache.
Choosing the right value of associativity involves a trade-off. If there are ten places to which the placement policy could have mapped a memory location, then to check if that location is in the cache, ten cache entries must be searched. Checking more places takes more power and chip area, and potentially more time. On the other hand, caches with more associativity suffer fewer misses (see conflict misses), so that the CPU wastes less time reading from the slow main memory. The general guideline is that doubling the associativity, from direct mapped to two-way, or from two-way to four-way, has about the same effect on raising the hit rate as doubling the cache size. However, increasing associativity beyond four does not improve hit rate as much,[12] and is generally done for other reasons (see virtual aliasing). Some CPUs can dynamically reduce the associativity of their caches in low-power states, which acts as a power-saving measure.[13]
In order from worse but simpler to better but more complex:
In this cache organization, each location in the main memory can go in only one entry in the cache. Therefore, a direct-mapped cache can also be called a "one-way set associative" cache. It does not have a placement policy as such, since there is no choice of which cache entry's contents to evict. This means that if two locations map to the same entry, they may continually knock each other out. Although simpler, a direct-mapped cache needs to be much larger than an associative one to give comparable performance, and it is more unpredictable. Let x be the block number in the cache, y the block number in memory, and n the number of blocks in the cache; then the mapping is given by the equation x = y mod n.
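As a minimal sketch of that mapping, assuming a hypothetical cache of n = 8 blocks:

n = 8                      # number of blocks in the (hypothetical) cache
for y in (3, 11, 19):      # memory block numbers
    x = y % n              # cache block each one maps to: x = y mod n
    print(y, "->", x)      # 3, 11 and 19 all map to block 3, so they evict each other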
If each location in the main memory can be cached in either of two locations in the cache, one logical question is: which one of the two? The simplest and most commonly used scheme is to use the least significant bits of the memory location's index as the index for the cache memory, and to have two entries for each index. One benefit of this scheme is that the tags stored in the cache do not have to include that part of the main memory address which is implied by the cache memory's index. Since the cache tags have fewer bits, they require fewer transistors, take less space on the processor circuit board or on the microprocessor chip, and can be read and compared faster. Also, the LRU replacement algorithm is especially simple, since only one bit needs to be stored for each pair.
One of the advantages of a direct-mapped cache is that it allows simple and fastspeculation. Once the address has been computed, the one cache index which might have a copy of that location in memory is known. That cache entry can be read, and the processor can continue to work with that data before it finishes checking that the tag actually matches the requested address.
The idea of having the processor use the cached data before the tag match completes can be applied to associative caches as well. A subset of the tag, called ahint, can be used to pick just one of the possible cache entries mapping to the requested address. The entry selected by the hint can then be used in parallel with checking the full tag. The hint technique works best when used in the context of address translation, as explained below.
Other schemes have been suggested, such as theskewed cache,[14]where the index for way 0 is direct, as above, but the index for way 1 is formed with ahash function. A good hash function has the property that addresses which conflict with the direct mapping tend not to conflict when mapped with the hash function, and so it is less likely that a program will suffer from an unexpectedly large number of conflict misses due to a pathological access pattern. The downside is extra latency from computing the hash function.[15]Additionally, when it comes time to load a new line and evict an old line, it may be difficult to determine which existing line was least recently used, because the new line conflicts with data at different indexes in each way;LRUtracking for non-skewed caches is usually done on a per-set basis. Nevertheless, skewed-associative caches have major advantages over conventional set-associative ones.[16]
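A minimal sketch of that indexing idea follows, with a made-up XOR-based hash standing in for a real skewing function; the set count and the hash itself are assumptions for illustration only.

SETS = 64                          # sets per way (hypothetical)

def index_way0(block_number):
    return block_number % SETS     # direct mapping, as in a conventional cache

def index_way1(block_number):
    # Toy hash: fold in the bits above the index so that blocks which
    # conflict in way 0 tend not to conflict in way 1.
    return (block_number ^ (block_number // SETS)) % SETS

for b in (5, 69, 133):             # all three collide in way 0 (index 5)
    print(b, index_way0(b), index_way1(b))   # but land on distinct indices in way 1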
A true set-associative cache tests all the possible ways simultaneously, using something like acontent-addressable memory. A pseudo-associative cache tests each possible way one at a time. A hash-rehash cache and a column-associative cache are examples of a pseudo-associative cache.
In the common case of finding a hit in the first way tested, a pseudo-associative cache is as fast as a direct-mapped cache, but it has a much lower conflict miss rate than a direct-mapped cache, closer to the miss rate of a fully associative cache.[15]
Comparing with a direct-mapped cache, a set associative cache has a reduced number of bits for its cache set index that maps to a cache set, where multiple ways or blocks stay, such as 2 blocks for a 2-way set associative cache and 4 blocks for a 4-way set associative cache. Comparing with a direct-mapped cache, the unused cache index bits become part of the tag bits. For example, a 2-way set associative cache contributes 1 bit to the tag and a 4-way set associative cache contributes 2 bits to the tag. The basic idea of the multicolumn cache[17] is to use the set index to map to a cache set as a conventional set associative cache does, and to use the added tag bits to index a way in the set. For example, in a 4-way set associative cache, the two bits are used to index way 00, way 01, way 10, and way 11, respectively. This double cache indexing is called a "major location mapping", and its latency is equivalent to a direct-mapped access. Extensive experiments in multicolumn cache design[17] show that the hit ratio to major locations is as high as 90%. If a cache mapping conflicts with a cache block in the major location, the existing cache block will be moved to another cache way in the same set, which is called the "selected location". Because the newly indexed cache block is a most recently used (MRU) block, it is placed in the major location in the multicolumn cache with a consideration of temporal locality. Since the multicolumn cache is designed for a cache with high associativity, the number of ways in each set is high; thus, it is easy to find a selected location in the set. An index for the selected location is maintained by additional hardware for the major location in a cache block.[citation needed]
A multicolumn cache maintains a high hit ratio due to its high associativity, and has a low latency comparable to that of a direct-mapped cache due to its high percentage of hits in major locations. The concepts of major locations and selected locations in multicolumn caches have been used in several cache designs, including the ARM Cortex R chip,[18] Intel's way-predicting cache memory,[19] IBM's reconfigurable multi-way associative cache memory[20] and Oracle's dynamic cache replacement way selection based on address tag bits.[21]
Cache row entries usually have the following structure: a tag, a data block and flag bits.
The data block (cache line) contains the actual data fetched from the main memory. The tag contains (part of) the address of the actual data fetched from the main memory. The flag bits are discussed below.
The "size" of the cache is the amount of main memory data it can hold. This size can be calculated as the number of bytes stored in each data block times the number of blocks stored in the cache. (The tag, flag anderror correction codebits are not included in the size,[22]although they do affect the physical area of a cache.)
An effective memory address which goes along with the cache line (memory block) is split (MSB to LSB) into the tag, the index and the block offset.[23][24]
The index describes which cache set the data has been put in. The index length is ⌈log2(s)⌉ bits for s cache sets.
The block offset specifies the desired data within the stored data block within the cache row. Typically the effective address is in bytes, so the block offset length is ⌈log2(b)⌉ bits, where b is the number of bytes per data block.
The tag contains the most significant bits of the address, which are checked against all rows in the current set (the set has been retrieved by index) to see if this set contains the requested address. If it does, a cache hit occurs. The tag length in bits is: tag length = address length − index length − block offset length.
Some authors refer to the block offset as simply the "offset"[25]or the "displacement".[26][27]
The original Pentium 4 processor had a four-way set associative L1 data cache of 8 KiB in size, with 64-byte cache blocks. Hence, there are 8 KiB / 64 = 128 cache blocks. The number of sets is equal to the number of cache blocks divided by the number of ways of associativity, which leads to 128 / 4 = 32 sets, and hence 2^5 = 32 different indices. There are 2^6 = 64 possible offsets. Since the CPU address is 32 bits wide, this implies 32 − 5 − 6 = 21 bits for the tag field.
The original Pentium 4 processor also had an eight-way set associative L2 integrated cache 256 KiB in size, with 128-byte cache blocks. This implies 32 − 8 − 7 = 17 bits for the tag field.[25]
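These field widths can be re-derived with a short calculation; the Python sketch below only reproduces the numbers given in the text and assumes nothing beyond them.

import math

def address_split(cache_bytes, ways, block_bytes, address_bits):
    # Return (tag, index, offset) field widths for a set-associative cache.
    blocks = cache_bytes // block_bytes
    sets = blocks // ways
    index_bits = int(math.log2(sets))
    offset_bits = int(math.log2(block_bytes))
    tag_bits = address_bits - index_bits - offset_bits
    return tag_bits, index_bits, offset_bits

print(address_split(8 * 1024, 4, 64, 32))      # Pentium 4 L1 data cache: (21, 5, 6)
print(address_split(256 * 1024, 8, 128, 32))   # Pentium 4 L2 cache:      (17, 8, 7)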
An instruction cache requires only one flag bit per cache row entry: a valid bit. The valid bit indicates whether or not a cache block has been loaded with valid data.
On power-up, the hardware sets all the valid bits in all the caches to "invalid". Some systems also set a valid bit to "invalid" at other times, such as when multi-masterbus snoopinghardware in the cache of one processor hears an address broadcast from some other processor, and realizes that certain data blocks in the local cache are now stale and should be marked invalid.
A data cache typically requires two flag bits per cache line – a valid bit and adirty bit. Having a dirty bit set indicates that the associated cache line has been changed since it was read from main memory ("dirty"), meaning that the processor has written data to that line and the new value has not propagated all the way to main memory.
A cache miss is a failed attempt to read or write a piece of data in the cache, which results in a main memory access with much longer latency. There are three kinds of cache misses: instruction read miss, data read miss, and data write miss.
Cache read misses from an instruction cache generally cause the largest delay, because the processor, or at least the thread of execution, has to wait (stall) until the instruction is fetched from main memory. Cache read misses from a data cache usually cause a smaller delay, because instructions not dependent on the cache read can be issued and continue execution until the data are returned from main memory, and the dependent instructions can resume execution. Cache write misses to a data cache generally cause the shortest delay, because the write can be queued and there are few limitations on the execution of subsequent instructions; the processor can continue until the queue is full. For a detailed introduction to the types of misses, see cache performance measurement and metric.
Most general purpose CPUs implement some form ofvirtual memory. To summarize, either each program running on the machine sees its own simplifiedaddress space, which contains code and data for that program only, or all programs run in a common virtual address space. A program executes by calculating, comparing, reading and writing to addresses of its virtual address space, rather than addresses of physical address space, making programs simpler and thus easier to write.
Virtual memory requires the processor to translate virtual addresses generated by the program into physical addresses in main memory. The portion of the processor that does this translation is known as thememory management unit(MMU). The fast path through the MMU can perform those translations stored in thetranslation lookaside buffer(TLB), which is a cache of mappings from the operating system'spage table, segment table, or both.
For the purposes of the present discussion, there are three important features of address translation: latency (the physical address is available from the MMU only some time after the virtual address is available from the address generator), aliasing (multiple virtual addresses can map to a single physical address), and granularity (the virtual address space is broken up into pages, each of which can be mapped independently).
One early virtual memory system, theIBM M44/44X, required an access to a mapping table held incore memorybefore every programmed access to main memory.[28][NB 1]With no caches, and with the mapping table memory running at the same speed as main memory this effectively cut the speed of memory access in half. Two early machines that used apage tablein main memory for mapping, theIBM System/360 Model 67and theGE 645, both had a small associative memory as a cache for accesses to the in-memory page table. Both machines predated the first machine with a cache for main memory, theIBM System/360 Model 85, so the first hardware cache used in a computer system was not a data or instruction cache, but rather a TLB.
Caches can be divided into four types, based on whether the index and tag correspond to physical or virtual addresses: physically indexed, physically tagged (PIPT); virtually indexed, virtually tagged (VIVT); virtually indexed, physically tagged (VIPT); and physically indexed, virtually tagged (PIVT).
The speed of this recurrence (theload latency) is crucial to CPU performance, and so most modern level-1 caches are virtually indexed, which at least allows the MMU's TLB lookup to proceed in parallel with fetching the data from the cache RAM.
But virtual indexing is not the best choice for all cache levels. The cost of dealing with virtual aliases grows with cache size, and as a result most level-2 and larger caches are physically indexed.
Caches have historically used both virtual and physical addresses for the cache tags, although virtual tagging is now uncommon. If the TLB lookup can finish before the cache RAM lookup, then the physical address is available in time for tag compare, and there is no need for virtual tagging. Large caches, then, tend to be physically tagged, and only small, very low latency caches are virtually tagged. In recent general-purpose CPUs, virtual tagging has been superseded by virtual hints, as described below.
A cache that relies on virtual indexing and tagging becomes inconsistent after the same virtual address is mapped into different physical addresses (homonym), which can be solved by using physical address for tagging, or by storing the address space identifier in the cache line. However, the latter approach does not help against thesynonymproblem, in which several cache lines end up storing data for the same physical address. Writing to such locations may update only one location in the cache, leaving the others with inconsistent data. This issue may be solved by using non-overlapping memory layouts for different address spaces, or otherwise the cache (or a part of it) must be flushed when the mapping changes.[34]
The great advantage of virtual tags is that, for associative caches, they allow the tag match to proceed before the virtual to physical translation is done. However, coherence probes and evictions present a physical address for action. The hardware must have some means of converting the physical addresses into a cache index, generally by storing physical tags as well as virtual tags. For comparison, a physically tagged cache does not need to keep virtual tags, which is simpler. When a virtual to physical mapping is deleted from the TLB, cache entries with those virtual addresses will have to be flushed somehow. Alternatively, if cache entries are allowed on pages not mapped by the TLB, then those entries will have to be flushed when the access rights on those pages are changed in the page table.
It is also possible for the operating system to ensure that no virtual aliases are simultaneously resident in the cache. The operating system makes this guarantee by enforcing page coloring, which is described below. Some early RISC processors (SPARC, RS/6000) took this approach. It has not been used recently, as the hardware cost of detecting and evicting virtual aliases has fallen and the software complexity and performance penalty of perfect page coloring has risen.
It can be useful to distinguish the two functions of tags in an associative cache: they are used to determine which way of the entry set to select, and they are used to determine if the cache hit or missed. The second function must always be correct, but it is permissible for the first function to guess, and get the wrong answer occasionally.
Some processors (e.g. early SPARCs) have caches with both virtual and physical tags. The virtual tags are used for way selection, and the physical tags are used for determining hit or miss. This kind of cache enjoys the latency advantage of a virtually tagged cache, and the simple software interface of a physically tagged cache. It bears the added cost of duplicated tags, however. Also, during miss processing, the alternate ways of the cache line indexed have to be probed for virtual aliases and any matches evicted.
The extra area (and some latency) can be mitigated by keepingvirtual hintswith each cache entry instead of virtual tags. These hints are a subset or hash of the virtual tag, and are used for selecting the way of the cache from which to get data and a physical tag. Like a virtually tagged cache, there may be a virtual hint match but physical tag mismatch, in which case the cache entry with the matching hint must be evicted so that cache accesses after the cache fill at this address will have just one hint match. Since virtual hints have fewer bits than virtual tags distinguishing them from one another, a virtually hinted cache suffers more conflict misses than a virtually tagged cache.
Perhaps the ultimate reduction of virtual hints can be found in the Pentium 4 (Willamette and Northwood cores). In these processors the virtual hint is effectively two bits, and the cache is four-way set associative. Effectively, the hardware maintains a simple permutation from virtual address to cache index, so that nocontent-addressable memory(CAM) is necessary to select the right one of the four ways fetched.
Large physically indexed caches (usually secondary caches) run into a problem: the operating system rather than the application controls which pages collide with one another in the cache. Differences in page allocation from one program run to the next lead to differences in the cache collision patterns, which can lead to very large differences in program performance. These differences can make it very difficult to get a consistent and repeatable timing for a benchmark run.
To understand the problem, consider a CPU with a 1 MiB physically indexed direct-mapped level-2 cache and 4 KiB virtual memory pages. Sequential physical pages map to sequential locations in the cache until after 256 pages the pattern wraps around. We can label each physical page with a color of 0–255 to denote where in the cache it can go. Locations within physical pages with different colors cannot conflict in the cache.
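Under those assumptions (a 1 MiB direct-mapped cache and 4 KiB pages), the color of a physical page is just a few bits of its page number; a minimal sketch:

PAGE_SIZE = 4 * 1024               # 4 KiB pages
CACHE_SIZE = 1024 * 1024           # 1 MiB direct-mapped cache
COLORS = CACHE_SIZE // PAGE_SIZE   # 256 colors

def page_color(physical_address):
    page_number = physical_address // PAGE_SIZE
    return page_number % COLORS    # pages with the same color contend for
                                   # the same region of the cache

print(page_color(0x000000))   # 0
print(page_color(0x100000))   # 0  (1 MiB apart: same color, so they can conflict)
print(page_color(0x003000))   # 3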
Programmers attempting to make maximum use of the cache may arrange their programs' access patterns so that only 1 MiB of data need be cached at any given time, thus avoiding capacity misses. But they should also ensure that the access patterns do not have conflict misses. One way to think about this problem is to divide up the virtual pages the program uses and assign them virtual colors in the same way as physical colors were assigned to physical pages before. Programmers can then arrange the access patterns of their code so that no two pages with the same virtual color are in use at the same time. There is a wide literature on such optimizations (e.g.loop nest optimization), largely coming from theHigh Performance Computing (HPC)community.
The snag is that while all the pages in use at any given moment may have different virtual colors, some may have the same physical colors. In fact, if the operating system assigns physical pages to virtual pages randomly and uniformly, it is extremely likely that some pages will have the same physical color, and then locations from those pages will collide in the cache (this is thebirthday paradox).
The solution is to have the operating system attempt to assign different physical color pages to different virtual colors, a technique calledpage coloring. Although the actual mapping from virtual to physical color is irrelevant to system performance, odd mappings are difficult to keep track of and have little benefit, so most approaches to page coloring simply try to keep physical and virtual page colors the same.
If the operating system can guarantee that each physical page maps to only one virtual color, then there are no virtual aliases, and the processor can use virtually indexed caches with no need for extra virtual alias probes during miss handling. Alternatively, the OS can flush a page from the cache whenever it changes from one virtual color to another. As mentioned above, this approach was used for some early SPARC and RS/6000 designs.
The software page coloring technique has been used to effectively partition the shared Last level Cache (LLC) in multicore processors.[35]This operating system-based LLC management in multicore processors has been adopted by Intel.[36]
Modern processors have multiple interacting on-chip caches. The operation of a particular cache can be completely specified by the cache size, the cache block size, the number of blocks in a set, the cache set replacement policy, and the cache write policy (write-through or write-back).[25]
While all of the cache blocks in a particular cache are the same size and have the same associativity, typically the "lower-level" caches (called Level 1 cache) have a smaller number of blocks, smaller block size, and fewer blocks in a set, but have very short access times. "Higher-level" caches (i.e. Level 2 and above) have progressively larger numbers of blocks, larger block size, more blocks in a set, and relatively longer access times, but are still much faster than main memory.
Cache entry replacement policy is determined by acache algorithmselected to be implemented by the processor designers. In some cases, multiple algorithms are provided for different kinds of work loads.
Pipelined CPUs access memory from multiple points in thepipeline: instruction fetch,virtual-to-physicaladdress translation, and data fetch (seeclassic RISC pipeline). The natural design is to use different physical caches for each of these points, so that no one physical resource has to be scheduled to service two points in the pipeline. Thus the pipeline naturally ends up with at least three separate caches (instruction,TLB, and data), each specialized to its particular role.
Avictim cacheis a cache used to hold blocks evicted from a CPU cache upon replacement. The victim cache lies between the main cache and its refill path, and holds only those blocks of data that were evicted from the main cache. The victim cache is usually fully associative, and is intended to reduce the number of conflict misses. Many commonly used programs do not require an associative mapping for all the accesses. In fact, only a small fraction of the memory accesses of the program require high associativity. The victim cache exploits this property by providing high associativity to only these accesses. It was introduced byNorman Jouppifrom DEC in 1990.[37]
Intel'sCrystalwell[38]variant of itsHaswellprocessors introduced an on-package 128 MiBeDRAMLevel 4 cache which serves as a victim cache to the processors' Level 3 cache.[39]In theSkylakemicroarchitecture the Level 4 cache no longer works as a victim cache.[40]
One of the more extreme examples of cache specialization is thetrace cache(also known asexecution trace cache) found in theIntelPentium 4microprocessors. A trace cache is a mechanism for increasing the instruction fetch bandwidth and decreasing power consumption (in the case of the Pentium 4) by storing traces ofinstructionsthat have already been fetched and decoded.[41]
A trace cache stores instructions either after they have been decoded, or as they are retired. Generally, instructions are added to trace caches in groups representing either individualbasic blocksor dynamic instruction traces. The Pentium 4's trace cache storesmicro-operationsresulting from decoding x86 instructions, providing also the functionality of a micro-operation cache. Having this, the next time an instruction is needed, it does not have to be decoded into micro-ops again.[42]: 63–68
The Write Coalescing Cache[43] is a special cache that is part of the L2 cache in AMD's Bulldozer microarchitecture. Stores from both L1D caches in the module go through the WCC, where they are buffered and coalesced.
The WCC's task is to reduce the number of writes to the L2 cache.
Amicro-operation cache(μop cache,uop cacheorUC)[44]is a specialized cache that storesmicro-operationsof decoded instructions, as received directly from theinstruction decodersor from the instruction cache. When an instruction needs to be decoded, the μop cache is checked for its decoded form which is re-used if cached; if it is not available, the instruction is decoded and then cached.
One of the early works describing μop cache as an alternative frontend for the IntelP6 processor familyis the 2001 paper"Micro-Operation Cache: A Power Aware Frontend for Variable Instruction Length ISA".[45]Later, Intel included μop caches in itsSandy Bridgeprocessors and in successive microarchitectures likeIvy BridgeandHaswell.[42]: 121–123[46]AMD implemented a μop cache in theirZen microarchitecture.[47]
Fetching complete pre-decoded instructions eliminates the need to repeatedly decode variable length complex instructions into simpler fixed-length micro-operations, and simplifies the process of predicting, fetching, rotating and aligning fetched instructions. A μop cache effectively offloads the fetch and decode hardware, thus decreasingpower consumptionand improving the frontend supply of decoded micro-operations. The μop cache also increases performance by more consistently delivering decoded micro-operations to the backend and eliminating various bottlenecks in the CPU's fetch and decode logic.[45][46]
A μop cache has many similarities with a trace cache, although a μop cache is much simpler thus providing better power efficiency; this makes it better suited for implementations on battery-powered devices. The main disadvantage of the trace cache, leading to its power inefficiency, is the hardware complexity required for itsheuristicdeciding on caching and reusing dynamically created instruction traces.[48]
Abranch target cacheorbranch target instruction cache, the name used onARM microprocessors,[49]is a specialized cache which holds the first few instructions at the destination of a taken branch. This is used by low-powered processors which do not need a normal instruction cache because the memory system is capable of delivering instructions fast enough to satisfy the CPU without one. However, this only applies to consecutive instructions in sequence; it still takes several cycles of latency to restart instruction fetch at a new address, causing a few cycles of pipeline bubble after a control transfer. A branch target cache provides instructions for those few cycles avoiding a delay after most taken branches.
This allows full-speed operation with a much smaller cache than a traditional full-time instruction cache.
Smart cacheis alevel 2orlevel 3caching method for multiple execution cores, developed byIntel.
Smart Cache shares the actual cache memory between the cores of amulti-core processor. In comparison to a dedicated per-core cache, the overallcache missrate decreases when cores do not require equal parts of the cache space. Consequently, a single core can use the full level 2 or level 3 cache while the other cores are inactive.[50]Furthermore, the shared cache makes it faster to share memory among different execution cores.[51]
Another issue is the fundamental tradeoff between cache latency and hit rate. Larger caches have better hit rates but longer latency. To address this tradeoff, many computers use multiple levels of cache, with small fast caches backed up by larger, slower caches. Multi-level caches generally operate by checking the fastest but smallest cache,level 1(L1), first; if it hits, the processor proceeds at high speed. If that cache misses, the slower but larger next level cache,level 2(L2), is checked, and so on, before accessing external memory.
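A schematic sketch of that lookup order follows; the per-level latencies are hypothetical round numbers, used purely for illustration.

def access(address, l1, l2, memory):
    # Check L1 first, then L2, then main memory; return (data, cost in cycles).
    if address in l1:
        return l1[address], 4          # L1 hit: fastest case
    if address in l2:
        data = l2[address]
        l1[address] = data             # fill L1 on the way back
        return data, 12                # L2 hit: slower
    data = memory[address]             # miss in both cache levels
    l2[address] = data                 # fill both caches for future accesses
    l1[address] = data
    return data, 200                   # main-memory access: slowest by far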
As the latency difference between main memory and the fastest cache has become larger, some processors have begun to utilize as many as three levels of on-chip cache. Price-sensitive designs used this to pull the entire cache hierarchy on-chip, but by the 2010s some of the highest-performance designs returned to having large off-chip caches, often implemented in eDRAM and mounted on a multi-chip module, as a fourth cache level. In rare cases, such as in the mainframe CPU IBM z15 (2019), all levels down to L1 are implemented with eDRAM, replacing SRAM entirely for the caches (SRAM is still used for registers[citation needed]). Apple's ARM-based Apple silicon series, starting with the A14 and M1, have a 192 KiB L1i cache for each of the high-performance cores, an unusually large amount; the high-efficiency cores, however, only have 128 KiB. Since then other processors such as Intel's Lunar Lake and Qualcomm's Oryon have also implemented similar L1i cache sizes.
The benefits of L3 and L4 caches depend on the application's access patterns. Examples of products incorporating L3 and L4 caches include the following:
Finally, at the other end of the memory hierarchy, the CPUregister fileitself can be considered the smallest, fastest cache in the system, with the special characteristic that it is scheduled in software—typically by a compiler, as it allocates registers to hold values retrieved from main memory for, as an example,loop nest optimization. However, withregister renamingmost compiler register assignments are reallocated dynamically by hardware at runtime into a register bank, allowing the CPU to break false data dependencies and thus easing pipeline hazards.
Register files sometimes also have hierarchy: TheCray-1(circa 1976) had eight address "A" and eight scalar data "S" registers that were generally usable. There was also a set of 64 address "B" and 64 scalar data "T" registers that took longer to access, but were faster than main memory. The "B" and "T" registers were provided because the Cray-1 did not have a data cache. (The Cray-1 did, however, have an instruction cache.)
When considering a chip withmultiple cores, there is a question of whether the caches should be shared or local to each core. Implementing shared cache inevitably introduces more wiring and complexity. But then, having one cache perchip, rather thancore, greatly reduces the amount of space needed, and thus one can include a larger cache.
Typically, sharing the L1 cache is undesirable because the resulting increase in latency would make each core run considerably slower than a single-core chip. However, for the highest-level cache, the last one called before accessing memory, having a global cache is desirable for several reasons, such as allowing a single core to use the whole cache, reducing data redundancy by making it possible for different processes or threads to share cached data, and reducing the complexity of utilized cache coherency protocols.[53]For example, an eight-core chip with three levels may include an L1 cache for each core, one intermediate L2 cache for each pair of cores, and one L3 cache shared between all cores.
A shared highest-level cache, which is called before accessing memory, is usually referred to as alast level cache(LLC). Additional techniques are used for increasing the level of parallelism when LLC is shared between multiple cores, including slicing it into multiple pieces which are addressing certain ranges of memory addresses, and can be accessed independently.[54]
In a separate cache structure, instructions and data are cached separately, meaning that a cache line is used to cache either instructions or data, but not both; various benefits have been demonstrated with separate data and instructiontranslation lookaside buffers.[55]In a unified structure, this constraint is not present, and cache lines can be used to cache both instructions and data.
Multi-level caches introduce new design decisions. For instance, in some processors, all data in the L1 cache must also be somewhere in the L2 cache. These caches are calledstrictly inclusive. Other processors (like theAMD Athlon) haveexclusivecaches: data are guaranteed to be in at most one of the L1 and L2 caches, never in both. Still other processors (like the IntelPentium II,III, and4) do not require that data in the L1 cache also reside in the L2 cache, although it may often do so. There is no universally accepted name for this intermediate policy;[56][57]two common names are "non-exclusive" and "partially-inclusive".
The advantage of exclusive caches is that they store more data. This advantage is larger when the exclusive L1 cache is comparable to the L2 cache, and diminishes if the L2 cache is many times larger than the L1 cache. When the L1 misses and the L2 hits on an access, the hitting cache line in the L2 is exchanged with a line in the L1. This exchange is quite a bit more work than just copying a line from L2 to L1, which is what an inclusive cache does.[57]
One advantage of strictly inclusive caches is that when external devices or other processors in a multiprocessor system wish to remove a cache line from the processor, they need only have the processor check the L2 cache. In cache hierarchies which do not enforce inclusion, the L1 cache must be checked as well. As a drawback, there is a correlation between the associativities of L1 and L2 caches: if the L2 cache does not have at least as many ways as all L1 caches together, the effective associativity of the L1 caches is restricted. Another disadvantage of inclusive cache is that whenever there is an eviction in L2 cache, the (possibly) corresponding lines in L1 also have to get evicted in order to maintain inclusiveness. This is quite a bit of work, and would result in a higher L1 miss rate.[57]
Another advantage of inclusive caches is that the larger cache can use larger cache lines, which reduces the size of the secondary cache tags. (Exclusive caches require both caches to have the same size cache lines, so that cache lines can be swapped on a L1 miss, L2 hit.) If the secondary cache is an order of magnitude larger than the primary, and the cache data are an order of magnitude larger than the cache tags, this tag area saved can be comparable to the incremental area needed to store the L1 cache data in the L2.[58]
Scratchpad memory(SPM), also known as scratchpad, scratchpad RAM or local store in computer terminology, is a high-speed internal memory used for temporary storage of calculations, data, and other work in progress.
To illustrate both specialization and multi-level caching, here is the cache hierarchy of the K8 core in the AMDAthlon 64CPU.[59]
The K8 has four specialized caches: an instruction cache, an instruction TLB, a data TLB, and a data cache, each specialized for its particular role.
The K8 also has multiple-level caches. There are second-level instruction and data TLBs, which store only PTEs mapping 4 KiB. Both instruction and data caches, and the various TLBs, can fill from the largeunifiedL2 cache. This cache is exclusive to both the L1 instruction and data caches, which means that any 8-byte line can only be in one of the L1 instruction cache, the L1 data cache, or the L2 cache. It is, however, possible for a line in the data cache to have a PTE which is also in one of the TLBs—the operating system is responsible for keeping the TLBs coherent by flushing portions of them when the page tables in memory are updated.
The K8 also caches information that is never stored in memory—prediction information. These caches are not shown in the above diagram. As is usual for this class of CPU, the K8 has fairly complexbranch prediction, with tables that help predict whether branches are taken and other tables which predict the targets of branches and jumps. Some of this information is associated with instructions, in both the level 1 instruction cache and the unified secondary cache.
The K8 uses an interesting trick to store prediction information with instructions in the secondary cache. Lines in the secondary cache are protected from accidental data corruption (e.g. by analpha particlestrike) by eitherECCorparity, depending on whether those lines were evicted from the data or instruction primary caches. Since the parity code takes fewer bits than the ECC code, lines from the instruction cache have a few spare bits. These bits are used to cache branch prediction information associated with those instructions. The net result is that the branch predictor has a larger effective history table, and so has better accuracy.
Other processors have other kinds of predictors (e.g., the store-to-load bypass predictor in theDECAlpha 21264), and various specialized predictors are likely to flourish in future processors.
These predictors are caches in that they store information that is costly to compute. Some of the terminology used when discussing predictors is the same as that for caches (one speaks of ahitin a branch predictor), but predictors are not generally thought of as part of the cache hierarchy.
The K8 keeps the instruction and data cachescoherentin hardware, which means that a store into an instruction closely following the store instruction will change that following instruction. Other processors, like those in the Alpha and MIPS family, have relied on software to keep the instruction cache coherent. Stores are not guaranteed to show up in the instruction stream until a program calls an operating system facility to ensure coherency.
In computer engineering, atag RAMis used to specify which of the possible memory locations is currently stored in a CPU cache.[60][61]For a simple, direct-mapped design fastSRAMcan be used. Higherassociative cachesusually employcontent-addressable memory.
Cachereadsare the most common CPU operation that takes more than a single cycle. Program execution time tends to be very sensitive to the latency of a level-1 data cache hit. A great deal of design effort, and often power and silicon area are expended making the caches as fast as possible.
The simplest cache is a virtually indexed direct-mapped cache. The virtual address is calculated with an adder, the relevant portion of the address extracted and used to index an SRAM, which returns the loaded data. The data are byte aligned in a byte shifter, and from there are bypassed to the next operation. There is no need for any tag checking in the inner loop – in fact, the tags need not even be read. Later in the pipeline, but before the load instruction is retired, the tag for the loaded data must be read, and checked against the virtual address to make sure there was a cache hit. On a miss, the cache is updated with the requested cache line and the pipeline is restarted.
An associative cache is more complicated, because some form of tag must be read to determine which entry of the cache to select. An N-way set-associative level-1 cache usually reads all N possible tags and N data in parallel, and then chooses the data associated with the matching tag. Level-2 caches sometimes save power by reading the tags first, so that only one data element is read from the data SRAM.
The following example is intended to clarify the manner in which the various fields of the address are used. Address bit 31 is most significant, bit 0 is least significant. It describes the SRAMs, indexing, and multiplexing for a 4 KiB, 2-way set-associative, virtually indexed and virtually tagged cache with 64 byte (B) lines, a 32-bit read width and a 32-bit virtual address.
Because the cache is 4 KiB and has 64 B lines, there are just 64 lines in the cache, and we read two at a time from a Tag SRAM which has 32 rows, each with a pair of 21 bit tags. Although any function of virtual address bits 31 through 6 could be used to index the tag and data SRAMs, it is simplest to use the least significant bits.
Similarly, because the cache is 4 KiB and has a 4 B read path, and reads two ways for each access, the Data SRAM is 512 rows by 8 bytes wide.
A more modern cache might be 16 KiB, 4-way set-associative, virtually indexed, virtually hinted, and physically tagged, with 32 B lines, 32-bit read width and 36-bit physical addresses. The read path recurrence for such a cache looks very similar to the path above. Instead of tags, virtual hints are read, and matched against a subset of the virtual address. Later on in the pipeline, the virtual address is translated into a physical address by the TLB, and the physical tag is read (just one, as the virtual hint supplies which way of the cache to read). Finally the physical address is compared to the physical tag to determine if a hit has occurred.
Some SPARC designs have improved the speed of their L1 caches by a few gate delays by collapsing the virtual address adder into the SRAM decoders. Seesum-addressed decoder.
The early history of cache technology is closely tied to the invention and use of virtual memory.[citation needed] Because of the scarcity and cost of semiconductor memories, early mainframe computers in the 1960s used a complex hierarchy of physical memory, mapped onto a flat virtual memory space used by programs. The memory technologies would span semiconductor, magnetic core, drum and disc. Virtual memory seen and used by programs would be flat, and caching would be used to fetch data and instructions into the fastest memory ahead of processor access. Extensive studies were done to optimize the cache sizes. Optimal values were found to depend greatly on the programming language used, with Algol needing the smallest and Fortran and Cobol needing the largest cache sizes.
In the early days of microcomputer technology, memory access was only slightly slower thanregisteraccess. But since the 1980s[62]the performance gap between processor and memory has been growing. Microprocessors have advanced much faster than memory, especially in terms of their operatingfrequency, so memory became a performancebottleneck. While it was technically possible to have all the main memory as fast as the CPU, a more economically viable path has been taken: use plenty of low-speed memory, but also introduce a small high-speed cache memory to alleviate the performance gap. This provided an order of magnitude more capacity—for the same price—with only a slightly reduced combined performance.
The first documented uses of a TLB were on theGE 645[63]and theIBM360/67,[64]both of which used an associative memory as a TLB.
The first documented use of an instruction cache was on theCDC 6600.[65]
The first documented use of a data cache was on theIBMSystem/360 Model 85.[66]
The68010, released in 1982, has a "loop mode" which can be considered a tiny and special-case instruction cache that accelerates loops that consist of only two instructions. The68020, released in 1984, replaced that with a typical instruction cache of 256 bytes, being the first 68k series processor to feature true on-chip cache memory.
The68030, released in 1987, is basically a 68020 core with an additional 256-byte data cache, an on-chipmemory management unit(MMU), a process shrink, and added burst mode for the caches.
The68040, released in 1990, has split instruction and data caches of four kilobytes each.
The68060, released in 1994, has the following: 8 KiB data cache (four-way associative), 8 KiB instruction cache (four-way associative), 96-byte FIFO instruction buffer, 256-entry branch cache, and 64-entry address translation cache MMU buffer (four-way associative).
As thex86microprocessors reached clock rates of 20 MHz and above in the386, small amounts of fast cache memory began to be featured in systems to improve performance. This was because theDRAMused for main memory had significant latency, up to 120 ns, as well as refresh cycles. The cache was constructed from more expensive, but significantly faster,SRAMmemory cells, which at the time had latencies around 10–25 ns. The early caches were external to the processor and typically located on the motherboard in the form of eight or nineDIPdevices placed in sockets to enable the cache as an optional extra or upgrade feature.
Some versions of the Intel 386 processor could support 16 to 256 KiB of external cache.
With the486processor, an 8 KiB cache was integrated directly into the CPU die. This cache was termed Level 1 or L1 cache to differentiate it from the slower on-motherboard, or Level 2 (L2) cache. These on-motherboard caches were much larger, with the most common size being 256 KiB. There were some system boards that contained sockets for the Intel 485Turbocachedaughtercardwhich had either 64 or 128 Kbyte of cache memory.[67][68]The popularity of on-motherboard cache continued through thePentium MMXera but was made obsolete by the introduction ofSDRAMand the growing disparity between bus clock rates and CPU clock rates, which caused on-motherboard cache to be only slightly faster than main memory.
The next development in cache implementation in the x86 microprocessors began with thePentium Pro, which brought the secondary cache onto the same package as the microprocessor, clocked at the same frequency as the microprocessor.
On-motherboard caches enjoyed prolonged popularity thanks to theAMD K6-2andAMD K6-IIIprocessors that still usedSocket 7, which was previously used by Intel with on-motherboard caches. K6-III included 256 KiB on-die L2 cache and took advantage of the on-board cache as a third level cache, named L3 (motherboards with up to 2 MiB of on-board cache were produced). After the Socket 7 became obsolete, on-motherboard cache disappeared from the x86 systems.
The three-level caches were used again first with the introduction of multiple processor cores, where the L3 cache was added to the CPU die. It became common for the total cache sizes to be increasingly larger in newer processor generations, and recently (as of 2011) it is not uncommon to find Level 3 cache sizes of tens of megabytes.[69]
Intelintroduced a Level 4 on-package cache with theHaswellmicroarchitecture.Crystalwell[38]Haswell CPUs, equipped with theGT3evariant of Intel's integrated Iris Pro graphics, effectively feature 128 MiB of embedded DRAM (eDRAM) on the same package. This L4 cache is shared dynamically between the on-die GPU and CPU, and serves as avictim cacheto the CPU's L3 cache.[39]
The Apple M1 CPU has a 128 or 192 KiB L1 instruction cache for each core (important for latency and single-thread performance), depending on the core type. This is an unusually large L1 cache for any CPU type, not just for a laptop; the total cache memory size, which matters more for throughput, is not unusually large for a laptop, and much larger total (e.g. L3 or L4) sizes are available in IBM's mainframes.
Early cache designs focused entirely on the direct cost of cache andRAMand average execution speed.
More recent cache designs also considerenergy efficiency, fault tolerance, and other goals.[70][71]
There are several tools available to computer architects to help explore tradeoffs between the cache cycle time, energy, and area; the CACTI cache simulator[72]and the SimpleScalar instruction set simulator are two open-source options.
A multi-ported cache is a cache which can serve more than one request at a time. When accessing a traditional cache we normally use a single memory address, whereas in a multi-ported cache we may request N addresses at a time, where N is the number of ports connecting the processor and the cache. The benefit of this is that a pipelined processor may access memory from different phases in its pipeline. Another benefit is that it supports superscalar processors, which can issue several memory accesses in the same cycle, across the different cache levels.
|
https://en.wikipedia.org/wiki/CPU_cache#Cache_hierarchy_in_a_modern_processor
|
Random-access memory(RAM;/ræm/) is a form ofelectronic computer memorythat can be read and changed in any order, typically used to store workingdataandmachine code.[1][2]Arandom-accessmemory device allows data items to bereador written in almost the same amount of time irrespective of the physical location of data inside the memory, in contrast with other direct-access data storage media (such ashard disksandmagnetic tape), where the time required to read and write data items varies significantly depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement.
In today's technology, random-access memory takes the form ofintegrated circuit(IC) chips withMOS(metal–oxide–semiconductor)memory cells. RAM is normally associated withvolatiletypes of memory where stored information is lost if power is removed. The two main types of volatile random-accesssemiconductor memoryarestatic random-access memory(SRAM) anddynamic random-access memory(DRAM).
Non-volatile RAM has also been developed[3]and other types ofnon-volatile memoriesallow random access for read operations, but either do not allow write operations or have other kinds of limitations. These include most types ofROMandNOR flash memory.
The use of semiconductor RAM dates back to 1965 when IBM introduced the monolithic (single-chip) 16-bit SP95 SRAM chip for theirSystem/360 Model 95computer, andToshibaused bipolar DRAM memory cells for its 180-bit Toscal BC-1411electronic calculator, both based onbipolar transistors. While it offered higher speeds thanmagnetic-core memory, bipolar DRAM could not compete with the lower price of the then-dominant magnetic-core memory.[4]In 1966, Dr.Robert Dennardinvented modern DRAM architecture in which there's a single MOS transistor per capacitor.[5]The first commercial DRAM IC chip, the 1KIntel 1103, was introduced in October 1970.Synchronous dynamic random-access memory(SDRAM) was reintroduced with theSamsungKM48SL2000 chip in 1992.
Early computers usedrelays,mechanical counters[6]ordelay linesfor main memory functions. Ultrasonic delay lines wereserial deviceswhich could only reproduce data in the order it was written.Drum memorycould be expanded at relatively low cost but efficient retrieval of memory items requires knowledge of the physical layout of the drum to optimize speed. Latches built out oftriode vacuum tubes, and later, out ofdiscrete transistors, were used for smaller and faster memories such asregisters. Such registers were relatively large and too costly to use for large amounts of data; generally only a few dozen or few hundred bits of such memory could be provided.
The first practical form of random-access memory was theWilliams tube. It stored data as electrically charged spots on the face of acathode-ray tube. Since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access. The capacity of the Williams tube was a few hundred to around a thousand bits, but it was much smaller, faster, and more power-efficient than using individual vacuum tube latches. Developed at theUniversity of Manchesterin England, the Williams tube provided the medium on which the first electronically stored program was implemented in theManchester Babycomputer, which first successfully ran a program on 21 June, 1948.[7]In fact, rather than the Williams tube memory being designed for the Baby, the Baby was atestbedto demonstrate the reliability of the memory.[8][9]
Magnetic-core memorywas invented in 1947 and developed up until the mid-1970s. It became a widespread form of random-access memory, relying on an array of magnetized rings. By changing the sense of each ring's magnetization, data could be stored with one bit stored per ring. Since every ring had a combination of address wires to select and read or write it, access to any memory location in any sequence was possible. Magnetic core memory was the standard form ofcomputer memoryuntil displaced bysemiconductor memoryinintegrated circuits(ICs) during the early 1970s.[10]
Prior to the development of integratedread-only memory(ROM) circuits,permanent(orread-only) random-access memory was often constructed usingdiode matricesdriven byaddress decoders, or specially woundcore rope memoryplanes.[citation needed]
Semiconductor memoryappeared in the 1960s with bipolar memory, which usedbipolar transistors. Although it was faster, it could not compete with the lower price of magnetic core memory.[11]
In 1957, Frosch and Derick manufactured the first silicon dioxide field-effect transistors at Bell Labs, the first transistors in which drain and source were adjacent at the surface.[12]Subsequently, in 1960, a team demonstrated a workingMOSFETat Bell Labs.[13][14]This led to the development ofmetal–oxide–semiconductor(MOS) memory by John Schmidt atFairchild Semiconductorin 1964.[10][15]In addition to higher speeds, MOSsemiconductor memorywas cheaper and consumed less power than magnetic core memory.[10]The development ofsilicon-gateMOS integrated circuit(MOS IC) technology byFederico Fagginat Fairchild in 1968 enabled the production of MOSmemory chips.[16]MOS memory overtook magnetic core memory as the dominant memory technology in the early 1970s.[10]
Integrated bipolarstatic random-access memory(SRAM) was invented by Robert H. Norman atFairchild Semiconductorin 1963.[17]It was followed by the development of MOS SRAM by John Schmidt at Fairchild in 1964.[10]SRAM became an alternative to magnetic-core memory, but required six MOS transistors for eachbitof data.[18]Commercial use of SRAM began in 1965, whenIBMintroduced the SP95 memory chip for theSystem/360 Model 95.[11]
Dynamic random-access memory(DRAM) allowed replacement of a 4 or 6-transistor latch circuit by a single transistor for each memory bit, greatly increasing memory density at the cost of volatility. Data was stored in the tiny capacitance of each transistor and had to be periodicallyrefreshedevery few milliseconds before the charge could leak away.
Toshiba's Toscal BC-1411electronic calculator, which was introduced in 1965,[19][20][21]used a form of capacitor bipolar DRAM, storing 180-bit data on discretememory cells, consisting ofgermaniumbipolar transistors and capacitors.[20][21]Capacitors had also been used for earlier memory schemes, such as the drum of theAtanasoff–Berry Computer, theWilliams tubeand theSelectron tube. While it offered higher speeds than magnetic-core memory, bipolar DRAM could not compete with the lower price of the then-dominant magnetic-core memory.[22]
In 1966,Robert Dennard, while examining the characteristics of MOS technology, found it was capable of buildingcapacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, and the MOS transistor could control writing the charge to the capacitor. This led to his development of modern DRAM architecture for which there is a single MOS transistor per capacitor.[18]In 1967, Dennard filed a patent under IBM for a single-transistor DRAM memory cell, based on MOS technology.[18][23]The first commercial DRAM IC chip was theIntel 1103, which wasmanufacturedon an8μmMOS process with a capacity of 1kbit, and was released in 1970.[10][24][25]
The earliest DRAMs were often synchronized with the CPU clock and were used with early microprocessors. In the mid-1970s, DRAMs moved to the asynchronous design, but in the 1990s returned to synchronous operation.[26][27]In 1992 Samsung released KM48SL2000, which had a capacity of 16Mbit.[28][29]The first commercialdouble data rateSDRAM was Samsung's 64MbitDDR SDRAM, released in June 1998.[30]GDDR(graphics DDR) is a form ofSGRAM(synchronous graphics RAM), which was first released by Samsung as a 16Mbit memory chip in 1998.[31]
The two widely used forms of modern RAM arestatic RAM(SRAM) anddynamic RAM(DRAM). In SRAM, abit of datais stored using the state of amemory cell, typically using six MOSFETs. This form of RAM is more expensive to produce, but is generally faster and requires less static power than DRAM. In modern computers, SRAM is often used ascache memory for the CPU. DRAM stores a bit of data using a transistor andcapacitorpair (typically a MOSFET andMOS capacitor, respectively),[32]which together comprise a DRAM cell. The capacitor holds a high or low charge (1 or 0, respectively), and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers.
Both static and dynamic RAM are consideredvolatile, as their state is lost or reset when power is removed from the system. By contrast,read-only memory(ROM) stores data by permanently enabling or disabling selected transistors, such that the memory cannot be altered. Writable variants of ROM (such asEEPROMandNOR flash) share properties of both ROM and RAM, enabling data topersistwithout power and to be updated without requiring special equipment.ECC memory(which can be either SRAM or DRAM) includes special circuitry to detect and/or correct random faults (memory errors) in the stored data, usingparity bitsorerror correction codes.
In general, the termRAMrefers solely to solid-state memory devices (either DRAM or SRAM), and more specifically the main memory in most computers. In optical storage, the termDVD-RAMis somewhat of a misnomer since it is not truly random access; it behaves much like a hard disk drive, if somewhat slower. As an aside, unlikeCD-RWorDVD-RW, DVD-RAM does not need to be erased before reuse.
The memory cell is the fundamental building block ofcomputer memory. The memory cell is anelectronic circuitthat stores onebitof binary information and it must be set to store a logic 1 (high voltage level) and reset to store a logic 0 (low voltage level). Its value is maintained/stored until it is changed by the set/reset process. The value in the memory cell can be accessed by reading it.
In SRAM, the memory cell is a type offlip-flopcircuit, usually implemented usingFETs. This means that SRAM requires very low power when not being accessed, but it is expensive and has low storage density.
A second type, DRAM, is based around a capacitor. Charging and discharging this capacitor can store a "1" or a "0" in the cell. However, the charge in this capacitor slowly leaks away, and must be refreshed periodically. Because of this refresh process, DRAM uses more power, but it can achieve greater storage densities and lower unit costs compared to SRAM.
To be useful, memory cells must be readable and writable. Within the RAM device, multiplexing and demultiplexing circuitry is used to select memory cells. Typically, a RAM device has a set of address linesA0,A1,...An{\displaystyle A_{0},A_{1},...A_{n}}, and for each combination of bits that may be applied to these lines, a set of memory cells are activated. Due to this addressing, RAM devices virtually always have a memory capacity that is a power of two.
Usually several memory cells share the same address. For example, a 4-bit "wide" RAM chip has four memory cells for each address. Often the width of the memory and that of the microprocessor are different; for a 32-bit microprocessor, eight 4-bit RAM chips would be needed.
Often more addresses are needed than can be provided by a single device. In that case, multiplexors external to the device are used to activate the correct device that is being accessed. RAM is often byte addressable, although it is also possible to make RAM that is word-addressable.[33][34]
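As an illustration of the addressing described above (a sketch, not drawn from the cited sources; all sizes and names are invented for the example), the following C program models a RAM chip with 10 address lines and a 4-bit data width, giving 2^10 = 1024 addressable locations, and combines eight such chips sharing the same address lines into a 32-bit word:

#include <stdint.h>
#include <stdio.h>

/* Toy model: a RAM "chip" with 10 address lines (A0..A9) and a 4-bit data
   width has 2^10 = 1024 addressable locations, a power-of-two capacity. */
#define ADDR_BITS 10
#define CAPACITY  (1u << ADDR_BITS)
#define NCHIPS    8                    /* eight 4-bit chips make a 32-bit word */

static uint8_t chip[NCHIPS][CAPACITY]; /* each cell stores one 4-bit nibble */

/* All chips share the same address lines; each contributes 4 bits of the word. */
static void write32(uint32_t addr, uint32_t word) {
    for (int c = 0; c < NCHIPS; c++)
        chip[c][addr & (CAPACITY - 1)] = (word >> (4 * c)) & 0xF;
}

static uint32_t read32(uint32_t addr) {
    uint32_t word = 0;
    for (int c = 0; c < NCHIPS; c++)
        word |= (uint32_t)chip[c][addr & (CAPACITY - 1)] << (4 * c);
    return word;
}

int main(void) {
    write32(42, 0xDEADBEEF);
    printf("capacity per chip = %u locations, read back = 0x%08X\n",
           CAPACITY, (unsigned)read32(42));
    return 0;
}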
One can read and over-write data in RAM. Many computer systems have a memory hierarchy consisting ofprocessor registers, on-dieSRAMcaches, externalcaches,DRAM,pagingsystems andvirtual memoryorswap spaceon a hard drive. This entire pool of memory may be referred to as "RAM" by many developers, even though the various subsystems can have very differentaccess times, violating the original concept behind therandom accessterm in RAM. Even within a hierarchy level such as DRAM, the specific row, column, bank,rank, channel, orinterleaveorganization of the components make the access time variable, although not to the extent that access time to rotatingstorage mediaor a tape is variable. The overall goal of using a memory hierarchy is to obtain the fastest possible average access time while minimizing the total cost of the entire memory system (generally, the memory hierarchy follows the access time with the fast CPU registers at the top and the slow hard drive at the bottom).
In many modern personal computers, the RAM comes in an easily upgraded form of modules calledmemory modulesor DRAM modules about the size of a few sticks of chewing gum. These can be quickly replaced should they become damaged or when changing needs demand more storage capacity. As suggested above, smaller amounts of RAM (mostly SRAM) are also integrated in theCPUand otherICson themotherboard, as well as in hard-drives,CD-ROMs, and several other parts of the computer system.
In addition to serving as temporary storage and working space for the operating system and applications, RAM is used in numerous other ways.
Most modern operating systems employ a method of extending RAM capacity, known as "virtual memory". A portion of the computer'shard driveis set aside for apaging fileor ascratch partition, and the combination of physical RAM and the paging file form the system's total memory. (For example, if a computer has 2 GB (1024³ B) of RAM and a 1 GB page file, the operating system has 3 GB total memory available to it.) When the system runs low on physical memory, it can "swap" portions of RAM to the paging file to make room for new data, as well as to read previously swapped information back into RAM. Excessive use of this mechanism results inthrashingand generally hampers overall system performance, mainly because hard drives are far slower than RAM.
Software can "partition" a portion of a computer's RAM, allowing it to act as a much faster hard drive that is called aRAM disk. A RAM disk loses the stored data when the computer is shut down, unless memory is arranged to have a standby battery source, or changes to the RAM disk are written out to a nonvolatile disk. The RAM disk is reloaded from the physical disk upon RAM disk initialization.
Sometimes, the contents of a relatively slow ROM chip are copied to read/write memory to allow for shorter access times. The ROM chip is then disabled while the initialized memory locations are switched in on the same block of addresses (often write-protected). This process, sometimes calledshadowing, is fairly common in both computers andembedded systems.
As a common example, theBIOSin typical personal computers often has an option called "use shadow BIOS" or similar. When enabled, functions that rely on data from the BIOS's ROM instead use DRAM locations (most can also toggle shadowing of video card ROM or other ROM sections). Depending on the system, this may not result in increased performance, and may cause incompatibilities. For example, some hardware may be inaccessible to theoperating systemif shadow RAM is used. On some systems the benefit may be hypothetical because the BIOS is not used after booting in favor of direct hardware access. Free memory is reduced by the size of the shadowed ROMs.[35]
Thememory wallis the growing disparity of speed between CPU and the response time of memory (known asmemory latency) outside the CPU chip. An important reason for this disparity is the limited communication bandwidth beyond chip boundaries, which is also referred to asbandwidth wall. From 1986 to 2000,CPUspeed improved at an annual rate of 55% while off-chip memory response time only improved at 10%. Given these trends, it was expected that memory latency would become an overwhelmingbottleneckin computer performance.[36]
Another reason for the disparity is the enormous increase in the size of memory since the start of the PC revolution in the 1980s. Originally, PCs contained less than 1 mebibyte of RAM, which often had a response time of 1 CPU clock cycle, meaning that it required 0 wait states. Larger memory units are inherently slower than smaller ones of the same type, simply because it takes longer for signals to traverse a larger circuit. Constructing a memory unit of many gibibytes with a response time of one clock cycle is difficult or impossible. Today's CPUs often still have a mebibyte of 0 wait state cache memory, but it resides on the same chip as the CPU cores due to the bandwidth limitations of chip-to-chip communication. It must also be constructed from static RAM, which is far more expensive than the dynamic RAM used for larger memories. Static RAM also consumes far more power.
CPU speed improvements slowed significantly partly due to major physical barriers and partly because current CPU designs have already hit the memory wall in some sense.Intelsummarized these causes in a 2005 document.[37]
First of all, as chip geometries shrink and clock frequencies rise, the transistorleakage currentincreases, leading to excess power consumption and heat... Secondly, the advantages of higher clock speeds are in part negated by memory latency, since memory access times have not been able to keep pace with increasing clock frequencies. Third, for certain applications, traditional serial architectures are becoming less efficient as processors get faster (due to the so-calledvon Neumann bottleneck), further undercutting any gains that frequency increases might otherwise buy. In addition, partly due to limitations in the means of producing inductance within solid state devices,resistance-capacitance(RC) delays in signal transmission are growing as feature sizes shrink, imposing an additional bottleneck that frequency increases don't address.
The RC delays in signal transmission were also noted in "Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures"[38]which projected a maximum of 12.5% average annual CPU performance improvement between 2000 and 2014.
A different concept is the processor-memory performance gap, which can be addressed by3D integrated circuitsthat reduce the distance between the logic and memory aspects that are further apart in a 2D chip.[39]Memory subsystem design requires a focus on the gap, which is widening over time.[40]The main method of bridging the gap is the use ofcaches; small amounts of high-speed memory that houses recent operations and instructions nearby the processor, speeding up the execution of those operations or instructions in cases where they are called upon frequently. Multiple levels of caching have been developed to deal with the widening gap, and the performance of high-speed modern computers relies on evolving caching techniques.[41]There can be up to a 53% difference between the growth in speed of processor and the lagging speed of main memory access.[42]
Solid-state driveshave continued to increase in speed, from ~400 MB/s viaSATA3in 2012 up to ~7 GB/s viaNVMe/PCIein 2024, closing the gap between RAM and hard disk speeds, although RAM continues to be an order of magnitude faster, with single-laneDDR58000MHz capable of 128 GB/s, and modernGDDReven faster. Fast, cheap,non-volatilesolid state drives have replaced some functions formerly performed by RAM, such as holding certain data for immediate availability inserver farms- 1terabyteof SSD storage can be had for $200, while 1 TB of RAM would cost thousands of dollars.[43][44]
|
https://en.wikipedia.org/wiki/Memory_wall
|
Hierarchical storage management(HSM), also known astiered storage,[1]is adata storageanddata managementtechnique that automatically moves data between high-cost and low-coststorage media. HSM systems exist because high-speed storage devices, such assolid-state drivearrays, are more expensive (perbytestored) than slower devices, such ashard disk drives,optical discsand magnetictape drives. While it would be ideal to have all data available on high-speed devices all the time, this is prohibitively expensive for many organizations. Instead, HSM systems store the bulk of the enterprise's data on slower devices, and then copy data to faster disk drives when needed. The HSM system monitors the way data is used and makes best guesses as to which data can safely be moved to slower devices and which data should stay on the fast devices.
HSM may also be used where more robust storage is available for long-term archiving, but this is slow to access. This may be as simple as anoff-site backup, for protection against a building fire.
HSM is a long-established concept, dating back to the beginnings of commercial data processing. The techniques used, though, have changed significantly as new technology becomes available, both for storage and for long-distance communication of large data sets. The scale of measures such as 'size' and 'access time' has changed dramatically. Despite this, many of the underlying concepts keep returning to favour years later, although at much larger or faster scales.[1]
In a typical HSM scenario, data that is frequently used is stored on a warm storage device, such as a solid-state drive (SSD). Data that is infrequently accessed is, after some time, migrated to a slower, high-capacity cold storage tier. If a user does access data which is on the cold storage tier, it is automatically moved back to warm storage. The advantage is that the total amount of stored data can be much larger than the capacity of the warm storage device, but since only rarely used files are on cold storage, most users will usually not notice any slowdown.
Conceptually, HSM is analogous to thecachefound in most computerCPUs, where small amounts of expensiveSRAMmemory running at very high speeds is used to store frequently used data, but theleast recently useddata is evicted to the slower but much larger mainDRAMmemory when new data has to be loaded.
In practice, HSM is typically performed by dedicated software, such asIBM Tivoli Storage Manager, orOracle'sSAM-QFS.
The deletion of files from a higher level of the hierarchy (e.g. magnetic disk) after they have been moved to a lower level (e.g. optical media) is sometimes calledfile grooming.[2]
Hierarchical Storage Manager (HSM, then DFHSM and finallyDFSMShsm) was first[citation needed]implemented byIBMon March 31, 1978 forMVSto reduce the cost of data storage, and to simplify the retrieval of data from slower media. The user would not need to know where the data was stored and how to get it back; the computer would retrieve the data automatically. The only difference to the user was the speed at which data was returned. HSM could originally migrate datasets only to disk volumes and virtual volumes on anIBM 3850Mass Storage Facility, but a later release supported magnetic tape volumes for migration level 2 (ML2).
Later, IBM ported HSM to itsAIX operating system, and then to otherUnix-likeoperating systems such asSolaris,HP-UXandLinux.
CSIRO Australia's Division of Computing Research implemented an HSM in its DAD (Drums and Display) operating system with its Document Region in the 1960s, with copies of documents being written to 7-track tape and automatic retrieval upon access to the documents.
HSM was also implemented on the DECVAX/VMSsystems and the Alpha/VMS systems. The first implementation date should be readily determined from the VMS System Implementation Manuals or the VMS Product Description Brochures.
More recently, the development ofSerial ATA(SATA) disks has created a significant market for three-stage HSM: files are migrated from high-performanceFibre Channelstorage area networkdevices to somewhat slower but much cheaper SATAdisk arraystotaling severalterabytesor more, and then eventually from the SATA disks to tape.
HSM is often used for deep archival storage of data to be held long term at low cost. Automated tape robots can silo large quantities of data efficiently with low power consumption.
Some HSM software products allow the user to place portions of data files on high-speed disk cache and the rest on tape. This is used in applications that stream video over the internet—the initial portion of a video is delivered immediately from disk while a robot finds, mounts and streams the rest of the file to the end user. Such a system greatly reduces disk cost for large content provision systems.
HSM software is today used also for tiering betweenhard disk drivesandflash memory, with flash memory being over 30 times faster than magnetic disks, but disks being considerably cheaper.
The key factor behind HSM is a data migration policy that controls the file transfers in the system. More precisely, the policy decides which tier a file should be stored in, so that the entire storage system can be well organized and have the shortest possible response time to requests. There are several algorithms realizing this process, such as least recently used (LRU) replacement,[3]Size-Temperature Replacement (STP), and Heuristic Threshold (STEP).[4]Recent research has also proposed intelligent policies based on machine learning techniques.[5]
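As a rough sketch of such a policy (not taken from any particular HSM product; the structure, names and the 30-day threshold are invented for the example), the following C code demotes a warm-tier file that has been idle longer than a threshold and promotes a cold-tier file as soon as it is accessed again:

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

enum tier { TIER_WARM, TIER_COLD };

struct file_entry {
    time_t    last_access;   /* updated on every read or write */
    enum tier current_tier;
};

#define DEMOTE_AFTER_SECONDS (30 * 24 * 3600)   /* e.g. idle for 30 days */

/* Candidate for migration to the cold tier: on warm storage and idle too long. */
static bool should_demote(const struct file_entry *f, time_t now) {
    return f->current_tier == TIER_WARM &&
           (now - f->last_access) > DEMOTE_AFTER_SECONDS;
}

/* Called when the file is actually accessed: cold files move back to warm. */
static bool should_promote(const struct file_entry *f) {
    return f->current_tier == TIER_COLD;
}

int main(void) {
    time_t now = time(NULL);
    struct file_entry f = { now - 40 * 24 * 3600, TIER_WARM };  /* idle 40 days */
    printf("demote? %s\n", should_demote(&f, now) ? "yes" : "no");
    printf("promote on access? %s\n", should_promote(&f) ? "yes" : "no");
    return 0;
}

Real systems typically combine such recency information with file size, access frequency and administrator-defined rules rather than relying on a single threshold.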
While tiering solutions and caching may look the same on the surface, the fundamental differences lie in the way the faster storage is utilized and the algorithms used to detect and accelerate frequently accessed data.[6]
Caching operates by making a copy of frequently accessed blocks of data, storing the copy in the faster storage device, and using this copy instead of the original data source on the slower, high-capacity backend storage. Every time a storage read occurs, the caching software looks to see whether a copy of this data already exists on the cache and uses that copy, if available. Otherwise, the data is read from the slower, high-capacity storage.[6]
Tiering, on the other hand, operates very differently. Rather than making acopyof frequently accessed data into fast storage, tieringmovesdata across tiers, for example, by relocatingcold datato low-cost, high-capacity nearline storage devices.[7][6]The basic idea is that mission-critical and highly accessed or "hot" data is stored in an expensive medium such as SSD to take advantage of high I/O performance, while rarely accessed or "cold" data is stored onnearline storagemedia such as HDDs andtapes, which are inexpensive.[8]Thus, the "data temperature", or activity level, determines theprimary storage hierarchy.[9]
|
https://en.wikipedia.org/wiki/Hierarchical_storage_management
|
Incomputing, amemory access patternorIO access patternis the pattern with which a system or program reads and writesmemoryonsecondary storage. These patterns differ in the level oflocality of referenceand drastically affectcacheperformance,[1]and also have implications for the approach toparallelism[2][3]and distribution of workload inshared memory systems.[4]Further,cache coherencyissues can affectmultiprocessorperformance,[5]which means that certain memory access patterns place a ceiling on parallelism (whichmanycoreapproaches seek to break).[6]
Computer memoryis usually described as "random access", but traversals by software will still exhibit patterns that can be exploited for efficiency. Various tools exist to help system designers[7]and programmers understand, analyse and improve the memory access pattern, includingVTuneandVectorization Advisor,[8][9][10][11][12]including tools to addressGPUmemory access patterns.[13]
Memory access patterns also have implications forsecurity,[14][15]which motivates some to try and disguise a program's activity forprivacyreasons.[16][17]
The simplest extreme is thesequential accesspattern, where data is read, processed, and written out with straightforward incremented/decremented addressing. These access patterns are highly amenable toprefetching.
Stridedor simple 2D, 3D access patterns (e.g., stepping throughmulti-dimensional arrays) are similarly easy to predict, and are found in implementations oflinear algebraalgorithms andimage processing.Loop tilingis an effective approach.[19]Some systems withDMAprovided a strided mode for transferring data between subtiles of larger2D arraysandscratchpad memory.[20]
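The following C sketch (array size and block factor chosen arbitrarily) contrasts a sequential row-major traversal with a strided column-order traversal of the same array, and shows one way loop tiling can restructure the strided traversal so that the cache lines touched by a block are reused before being evicted:

#include <stddef.h>

#define N 1024
#define B 64
static double a[N][N];
static double sum;

/* Sequential access: the inner loop walks consecutive addresses (stride 1). */
static void row_major(void) {
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            sum += a[i][j];
}

/* Strided access: the inner loop advances by N*sizeof(double) bytes per step,
   touching a different cache line on almost every iteration. */
static void column_major(void) {
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            sum += a[i][j];
}

/* Loop tiling applied to the column-order traversal: within each BxB block the
   same few cache lines are revisited for B consecutive columns. */
static void column_major_tiled(void) {
    for (size_t ii = 0; ii < N; ii += B)
        for (size_t jj = 0; jj < N; jj += B)
            for (size_t j = jj; j < jj + B; j++)
                for (size_t i = ii; i < ii + B; i++)
                    sum += a[i][j];
}

int main(void) { row_major(); column_major(); column_major_tiled(); return 0; }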
A linear access pattern is closely related to "strided", where amemory addressmay be computed from a linear combination of some index. Stepping through indices sequentially with a linear pattern yieldsstrided access. A linear access pattern for writes (with any access pattern for non-overlapping reads) may guarantee that an algorithm can be parallelised, which is exploited in systems supportingcompute kernels.
Nearest neighbor memory access patterns appear in simulation, and are related to sequential or strided patterns. An algorithm may traverse a data structure using information from the nearest neighbors of a data element (in one or more dimensions) to perform a calculation. These are common in physics simulations operating on grids.[21]Nearest neighbor can also refer to inter-node communication in a cluster; physics simulations which rely on such local access patterns can be parallelized with the data partitioned into cluster nodes, with purely nearest-neighbor communication between them, which may have advantages for latency and communication bandwidth. This use case maps well ontotorus network topology.[22]
In3D rendering, access patterns fortexture mappingandrasterizationof small primitives (with arbitrary distortions of complex surfaces) are far from linear, but can still exhibit spatial locality (e.g., inscreen spaceortexture space). This can be turned into goodmemorylocality via some combination ofmorton order[23]andtilingfortexture mapsandframe bufferdata (mapping spatial regions onto cache lines), or by sorting primitives viatile based deferred rendering.[24]It can also be advantageous to store matrices in morton order inlinear algebra libraries.[25]
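As an illustration of the idea, a Morton (Z-order) index interleaves the bits of the two coordinates, so that points that are close in 2D tend to be close in the 1D memory layout; a simple (not particularly fast) C version:

#include <stdint.h>
#include <stdio.h>

/* Interleave the bits of x and y: x occupies the even bits, y the odd bits. */
static uint64_t morton2d(uint32_t x, uint32_t y) {
    uint64_t z = 0;
    for (unsigned bit = 0; bit < 32; bit++) {
        z |= (uint64_t)((x >> bit) & 1u) << (2 * bit);
        z |= (uint64_t)((y >> bit) & 1u) << (2 * bit + 1);
    }
    return z;
}

int main(void) {
    /* Neighbouring texels (2,3) and (3,3) map to adjacent Morton indices. */
    printf("%llu %llu\n",
           (unsigned long long)morton2d(2, 3),
           (unsigned long long)morton2d(3, 3));
    return 0;
}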
Ascattermemory access pattern combines sequential reads with indexed/random addressing for writes.[26]Compared to gather, it may place less load on a cache hierarchy since aprocessing elementmay dispatch writes in a "fire and forget" manner (bypassing a cache altogether), whilst using predictable prefetching (or even DMA) for its source data.
However, it may be harder to parallelise since there is no guarantee the writes do not interact,[27]and many systems are still designed assuming that a hardware cache will coalesce many small writes into larger ones.
In the past,forward texture mappingattempted to handle the randomness with "writes", whilst sequentially reading source texture information.
ThePlayStation 2console used conventional inverse texture mapping, but handled any scatter/gather processing "on-chip" using EDRAM, whilst 3D model (and a lot of texture data) from main memory was fed sequentially by DMA. This is why it lacked support for indexed primitives, and sometimes needed to manage textures "up front" in thedisplay list.
In agathermemory access pattern, reads are randomly addressed or indexed, whilst the writes are sequential (or linear).[26]An example is found ininverse texture mapping, where data can be written out linearly acrossscan lines, whilst random access texture addresses are calculated perpixel.
Compared to scatter, the disadvantage is that caching (and bypassing latencies) is now essential for efficient reads of small elements; however, it is easier to parallelise since the writes are guaranteed to not overlap. As such the gather approach is more common forgpgpuprogramming,[27]where the massive threading (enabled by parallelism) is used to hide read latencies.[27]
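A minimal C sketch of the two patterns (the function names are illustrative):

#include <stddef.h>

/* Gather: reads are indexed/random, writes are sequential. */
void gather(double *dst, const double *src, const size_t *idx, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = src[idx[i]];        /* random read, sequential write */
}

/* Scatter: reads are sequential, writes are indexed/random. */
void scatter(double *dst, const double *src, const size_t *idx, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[idx[i]] = src[i];        /* sequential read, random write */
}

In the gather form the iterations can run in parallel without conflicts, because each iteration writes a distinct dst[i]; in the scatter form parallelisation is only safe if the index array contains no duplicates (or if conflicting writes are otherwise resolved), which echoes the point made above.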
An algorithm may gather data from one source, perform some computation in local or on chip memory, and scatter results elsewhere. This is essentially the full operation of aGPUpipeline when performing3D rendering- gathering indexed vertices and textures, and scattering shaded pixels inscreen space. Rasterization of opaque primitives using a depth buffer is "commutative", allowing reordering, which facilitates parallel execution. In the general case synchronisation primitives would be needed.
At the opposite extreme is a truly random memory access pattern. A few multiprocessor systems are specialised to deal with these.[28]ThePGASapproach may help by sorting operations by data on the fly (useful when the problem *is* figuring out the locality of unsorted data).[21]Data structures which rely heavily onpointer chasingcan often produce poorlocality of reference, although sorting can sometimes help. Given a truly random memory access pattern, it may be possible to break it down (including scatter or gather stages, or other intermediate sorting) which may improve the locality overall; this is often a prerequisite forparallelizing.
Data-oriented designis an approach intended to maximise the locality of reference, by organising data according to how it is traversed in the various stages of a program (i.e., organising the data layout so that it explicitly mirrors the access pattern), in contrast to the more commonobject orientedapproach.[1]
Locality of referencerefers to a property exhibited by memory access patterns. A programmer will change the memory access pattern (by reworking algorithms) to improve the locality of reference,[29]and/or to increase potential for parallelism.[26]A programmer or system designer may create frameworks or abstractions (e.g.,C++ templatesorhigher-order functions) thatencapsulatea specific memory access pattern.[30][31]
Different considerations for memory access patterns appear in parallelism beyond locality of reference, namely the separation of reads and writes. E.g.: even if the reads and writes are "perfectly" local, it can be impossible to parallelise due todependencies; separating the reads and writes into separate areas yields a different memory access pattern, which may initially appear worse in pure locality terms but is desirable for leveraging modern parallel hardware.[26]
Locality of reference may also refer to individual variables (e.g., the ability of acompilerto cache them inregisters), whilst the term memory access pattern only refers to data held in an indexable memory (especiallymain memory).
|
https://en.wikipedia.org/wiki/Memory_access_pattern
|
Communication-avoiding algorithmsminimize movement of data within amemory hierarchyin order to improve running time and energy consumption. They minimize the total of two costs (in terms of time and energy): arithmetic and communication. Communication, in this context, refers to moving data, either between levels of memory or between multiple processors over a network. It is much more expensive than arithmetic.[1]
A common computational model in analyzing communication-avoiding algorithms is the two-level memory model: a fast memory of limited size M close to the processor (e.g. a CPU cache) and an arbitrarily large but slow memory (e.g. DRAM). Data must be moved into fast memory before it can be operated on, and the communication cost is the number of words moved between the two levels.
Corollary 6.2:[2]
Theorem—Given matricesA,B,C{\displaystyle A,B,C}of sizesn×m,m×k,n×k{\displaystyle n\times m,m\times k,n\times k}, thenAB+C{\displaystyle AB+C}has communication complexityΩ(max(mkn/M1/2,nm+mk+nk)){\displaystyle \Omega (\max(mkn/M^{1/2},nm+mk+nk))}.
This lower bound is achievable bytiling matrix multiplication.
More general results for other numerical linear algebra operations can be found in the references.[3]The following proof is adapted from one of them.[4]
We can draw the computation graph ofD=AB+C{\displaystyle D=AB+C}as a cube of lattice points, each point is of form(i,j,k){\displaystyle (i,j,k)}. SinceD[i,k]=∑jA[i,j]B[j,k]+C[i,k]{\displaystyle D[i,k]=\sum _{j}A[i,j]B[j,k]+C[i,k]}, computingAB+C{\displaystyle AB+C}requires the processor to have access to each point within the cube at least once. So the problem becomes covering themnk{\displaystyle mnk}lattice points with a minimal amount of communication.
IfM{\displaystyle M}is large, then we can simply load allmn+nk+mk{\displaystyle mn+nk+mk}entries then writenk{\displaystyle nk}entries. This is uninteresting.
IfM{\displaystyle M}is small, then we can divide the minimal-communication algorithm into separate segments. During each segment, it performs exactlyM{\displaystyle M}reads to cache, and any number of writes from cache.
During each segment, the processor has access to at most2M{\displaystyle 2M}different points fromA,B,C{\displaystyle A,B,C}.
LetE{\displaystyle E}be the set of lattice points covered during this segment. Then by theLoomis–Whitney inequality,
|E|≤|π1(E)||π2(E)||π3(E)|{\displaystyle |E|\leq {\sqrt {|\pi _{1}(E)||\pi _{2}(E)||\pi _{3}(E)|}}}with constraint∑i|πi(E)|≤2M{\displaystyle \sum _{i}|\pi _{i}(E)|\leq 2M}.
By theinequality of arithmetic and geometric means, we have|E|≤(23M)3/2{\displaystyle |E|\leq \left({\frac {2}{3}}M\right)^{3/2}}, with extremum reached whenπi(E)=23M{\displaystyle \pi _{i}(E)={\frac {2}{3}}M}.
Thus the arithmetic intensity is bounded above byCM1/2{\displaystyle CM^{1/2}}whereC=(2/3)3/2{\displaystyle C=(2/3)^{3/2}}, and so the communication is bounded below bynmkCM1/2{\displaystyle {\frac {nmk}{CM^{1/2}}}}.
Direct computation verifies that the tiling matrix multiplication algorithm reaches the lower bound.
Consider the following running-time model:[5]let γ be the time per arithmetic operation (FLOP) and β the time per word of data moved between slow and fast memory.
⇒ Total running time = γ·(no. of FLOPs) + β·(no. of words moved)
Since β >> γ as measured in time and energy, communication cost dominates computation cost. Technological trends[6]indicate that the relative cost of communication is increasing on a variety of platforms, fromcloud computingtosupercomputersto mobile devices. The report also predicts that the gap betweenDRAMaccess time and FLOPs will increase 100× over the coming decade to balance power usage between processors and DRAM.[1]
Energy consumption increases by orders of magnitude as we go higher in the memory hierarchy.[7]
United States president Barack Obama cited communication-avoiding algorithms in the FY 2012 Department of Energy budget request to Congress:[1]
New Algorithm Improves Performance and Accuracy on Extreme-Scale Computing Systems. On modern computer architectures, communication between processors takes longer than the performance of afloating-point arithmeticoperation by a given processor. ASCR researchers have developed a new method, derived from commonly used linear algebra methods, to minimize communications between processors and the memory hierarchy, by reformulating the communication patterns specified within the algorithm. This method has been implemented in the TRILINOS framework, a highly-regarded suite of software, which provides functionality for researchers around the world to solve large scale, complex multi-physics problems.
Communication-avoiding algorithms are designed with the following objectives:
The following simple example[1]demonstrates how these are achieved.
Let A, B and C be square matrices of ordern×n. The following naive algorithm implements C = C + A * B:
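A standard triply nested loop consistent with the stated arithmetic cost (shown here as a C sketch, using row-major n-by-n matrices):

/* Naive algorithm: C = C + A * B for n-by-n matrices. */
void matmul_naive(int n, double A[n][n], double B[n][n], double C[n][n]) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                C[i][j] += A[i][k] * B[k][j];
}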
Arithmetic cost (time complexity): n²(2n − 1), i.e. O(n³) for sufficiently large n.
Rewriting this algorithm with the communication cost labelled at each step:
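The same loop nest, with the traffic between slow and fast memory noted in comments (the per-step totals are those implied by the communication cost stated below, assuming only a few rows, columns and scalars fit in fast memory at once):

void matmul_naive_labelled(int n, double A[n][n], double B[n][n], double C[n][n]) {
    for (int i = 0; i < n; i++) {
        /* read row i of A into fast memory          : n^2 reads in total  */
        for (int j = 0; j < n; j++) {
            /* read C[i][j] into fast memory         : n^2 reads in total  */
            /* read column j of B into fast memory   : n^3 reads in total  */
            for (int k = 0; k < n; k++)
                C[i][j] += A[i][k] * B[k][j];
            /* write C[i][j] back to slow memory     : n^2 writes in total */
        }
    }
}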
Fast memory may be defined as the local processor memory (CPU cache) of size M and slow memory may be defined as the DRAM.
Communication cost (reads/writes): n³ + 3n², or O(n³).
Since total running time = γ·O(n³) + β·O(n³) and β >> γ, the communication cost is dominant. The blocked (tiled) matrix multiplication algorithm[1]reduces this dominant term:
Consider A, B and C to ben/b-by-n/bmatrices ofb-by-bsub-blocks where b is called the block size; assume threeb-by-bblocks fit in fast memory.
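A C sketch of the blocked version, assuming for simplicity that the block size b divides n:

/* Blocked (tiled) algorithm: A, B and C are viewed as (n/b)-by-(n/b) grids of
   b-by-b blocks, and three b-by-b blocks are assumed to fit in fast memory. */
void matmul_blocked(int n, int b, double A[n][n], double B[n][n], double C[n][n]) {
    for (int i = 0; i < n; i += b)
        for (int j = 0; j < n; j += b) {
            /* read block C(i,j) into fast memory         : n^2 reads    */
            for (int k = 0; k < n; k += b) {
                /* read blocks A(i,k) and B(k,j)          : 2n^3/b reads */
                for (int ii = i; ii < i + b; ii++)
                    for (int jj = j; jj < j + b; jj++)
                        for (int kk = k; kk < k + b; kk++)
                            C[ii][jj] += A[ii][kk] * B[kk][jj];
            }
            /* write block C(i,j) back to slow memory     : n^2 writes   */
        }
}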
Communication cost: 2n³/b + 2n² reads/writes << 2n³ arithmetic cost
Making b as large as possible, so that three b-by-b blocks still fit in fast memory (3b² ≤ M, i.e. b on the order of √M), the communication cost becomes O(n³/√M + n²), which matches the Ω(n³/√M) lower bound implied by the theorem above with m = n = k.
Most of the approaches investigated in the past to address this problem rely on scheduling or tuning techniques that aim at overlapping communication with computation. However, this approach can lead to an improvement of at most a factor of two. Ghosting is a different technique for reducing communication, in which a processor redundantly stores and computes data from neighboring processors for future computations.Cache-oblivious algorithmsrepresent a different approach introduced in 1999 forfast Fourier transforms,[8]and then extended to graph algorithms, dynamic programming, etc. They were also applied to several operations in linear algebra,[9][10][11]such as dense LU and QR factorizations. The design of architecture specific algorithms is another approach that can be used for reducing the communication in parallel algorithms, and there are many examples in the literature of algorithms that are adapted to a given communication topology.[12]
|
https://en.wikipedia.org/wiki/Communication-avoiding_algorithm
|
Incomputer architecture,predicationis a feature that provides an alternative toconditionaltransfer ofcontrol, as implemented by conditionalbranchmachineinstructions. Predication works by having conditional (predicated) non-branch instructions associated with apredicate, aBoolean valueused by the instruction to control whether the instruction is allowed to modify the architectural state or not. If the predicate specified in the instruction is true, the instruction modifies the architectural state; otherwise, the architectural state is unchanged. For example, a predicated move instruction (a conditional move) will only modify the destination if the predicate is true. Thus, instead of using a conditional branch to select an instruction or a sequence of instructions toexecutebased on the predicate that controls whether the branch occurs, the instructions to be executed are associated with that predicate, so that they will be executed, or not executed, based on whether that predicate is true or false.[1]
Vector processors, someSIMDISAs (such asAVX2andAVX-512) andGPUsin general make heavy use of predication, applying one bit of a conditionalmask vectorto the corresponding elements in the vector registers being processed, whereas scalar predication in scalar instruction sets needs only the one predicate bit. Predicate masks become particularly powerful invector processingwhen anarrayofcondition codes, one per vector element, may feed back into predicate masks that are then applied to subsequent vector instructions.
Mostcomputer programscontainconditionalcode, which will be executed only under specific conditions depending on factors that cannot be determined beforehand, for example depending on user input. As the majority ofprocessorssimply execute the nextinstructionin a sequence, the traditional solution is to insertbranchinstructions that allow a program to conditionally branch to a different section of code, thus changing the next step in the sequence. This was sufficient until designers began improving performance by implementinginstruction pipelining, a method which is slowed down by branches. For a more thorough description of the problems which arose, and a popular solution, seebranch predictor.
Luckily, one of the more common patterns of code that normally relies on branching has a more elegant solution. Consider the followingpseudocode:[1]
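A minimal example of such conditional code (the function names match those referred to below):

if (condition) {
    do_something();
} else {
    do_something_else();
}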
On a system that uses conditional branching, this might translate tomachine instructionslooking similar to:[1]
With predication, all possible branch paths are coded inline, but some instructions execute while others do not. The basic idea is that each instruction is associated with a predicate (the word here used similarly to its usage inpredicate logic) and that the instruction will only be executed if the predicate is true. The machine code for the above example using predication might look something like this:[1]
Besides eliminating branches, less code is needed in total, provided the architecture provides predicated instructions. While this does not guarantee faster execution in general, it will if thedo_somethinganddo_something_elseblocks of code are short enough.
Predication's simplest form ispartial predication, where the architecture hasconditional moveorconditional selectinstructions. Conditional move instructions write the contents of one register over another only if the predicate's value is true, whereas conditional select instructions choose which of two registers has its contents written to a third based on the predicate's value. A more generalized and capable form isfull predication. Full predication has a set of predicate registers for storing predicates (which allows multiple nested or sequential branches to be simultaneously eliminated) and most instructions in the architecture have a register specifier field to specify which predicate register supplies the predicate.[2]
The main purpose of predication is to avoid jumps over very small sections of program code, increasing the effectiveness ofpipelinedexecution and avoiding problems with thecache. It also has a number of more subtle benefits:
Predication's primary drawback is in increased encoding space. In typical implementations, every instruction reserves a bitfield for the predicate specifying under what conditions that instruction should have an effect. When available memory is limited, as onembedded devices, this space cost can be prohibitive. However, some architectures such asThumb-2are able to avoid this issue (see below). Other detriments are the following:[3]
Predication is most effective when paths are balanced or when the longest path is the most frequently executed,[3]but determining such a path is very difficult at compile time, even in the presence ofprofiling information.
Predicated instructions were popular in European computer designs of the 1950s, including theMailüfterl(1955), theZuse Z22(1955), theZEBRA(1958), and theElectrologica X1(1958). TheIBM ACS-1design of 1967 allocated a "skip" bit in its instruction formats, and the CDC Flexible Processor in 1976 allocated three conditional execution bits in its microinstruction formats.
Hewlett-Packard'sPA-RISCarchitecture (1986) had a feature callednullification, which allowed most instructions to be predicated by the previous instruction.IBM'sPOWER architecture(1990) featured conditional move instructions. POWER's successor,PowerPC(1993), dropped these instructions.Digital Equipment Corporation'sAlphaarchitecture (1992) also featured conditional move instructions.MIPSgained conditional move instructions in 1994 with the MIPS IV version; andSPARCwas extended in Version 9 (1994) with conditional move instructions for both integer and floating-point registers.
In theHewlett-Packard/IntelIA-64architecture, most instructions are predicated. The predicates are stored in 64 special-purpose predicateregisters; and one of the predicate registers is always true so thatunpredicatedinstructions are simply instructions predicated with the value true. The use of predication is essential in IA-64's implementation ofsoftware pipeliningbecause it avoids the need for writing separated code for prologs and epilogs.[clarification needed]
In thex86architecture, a family of conditional move instructions (CMOVandFCMOV) were added to the architecture by theIntelPentium Pro(1995) processor. TheCMOVinstructions copied the contents of the source register to the destination register depending on a predicate supplied by the value of the flag register.
In theARM architecture, the original 32-bit instruction set provides a feature calledconditional executionthat allows most instructions to be predicated by one of 13 predicates that are based on some combination of the four condition codes set by the previous instruction. ARM'sThumbinstruction set (1994) dropped conditional execution to reduce the size of instructions so they could fit in 16 bits, but its successor,Thumb-2(2003) overcame this problem by using a special instruction which has no effect other than to supply predicates for the following four instructions. The 64-bit instruction set introduced in ARMv8-A (2011) replaced conditional execution with conditional selection instructions.
SomeSIMDinstruction sets, like AVX2, have the ability to use a logicalmaskto conditionally load/store values to memory, a parallel form of the conditional move, and may also apply individual mask bits to individual arithmetic units executing a parallel operation. The technique is known inFlynn's taxonomyas "associative processing".
This form of predication is also used invector processorsandsingle instruction, multiple threadsGPU computing. All the techniques, advantages and disadvantages of single scalar predication apply just as well to the parallel processing case.
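As a concrete, illustrative example of per-element predication via a SIMD mask, the following C sketch uses AVX2 intrinsics to increment only the elements that satisfy a comparison; both outcomes are computed for every lane and a blend selects the result per lane, so no per-element branch is needed (requires a compiler and CPU with AVX2 support, e.g. compiled with -mavx2):

#include <immintrin.h>
#include <stdio.h>

/* For each element: out[i] = (a[i] > threshold) ? a[i] + 1 : a[i]; */
void add_one_if_greater(const int *a, int *out, int n, int threshold) {
    __m256i thr = _mm256_set1_epi32(threshold);
    __m256i one = _mm256_set1_epi32(1);
    int i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256i v    = _mm256_loadu_si256((const __m256i *)(a + i));
        __m256i mask = _mm256_cmpgt_epi32(v, thr);         /* per-lane predicate      */
        __m256i incr = _mm256_add_epi32(v, one);            /* computed for all lanes  */
        __m256i res  = _mm256_blendv_epi8(v, incr, mask);   /* keep v where mask clear */
        _mm256_storeu_si256((__m256i *)(out + i), res);
    }
    for (; i < n; i++)                                       /* scalar tail */
        out[i] = (a[i] > threshold) ? a[i] + 1 : a[i];
}

int main(void) {
    int a[10] = {1, 5, 9, 2, 8, 3, 7, 4, 6, 0}, out[10];
    add_one_if_greater(a, out, 10, 4);
    for (int i = 0; i < 10; i++) printf("%d ", out[i]);
    printf("\n");
    return 0;
}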
|
https://en.wikipedia.org/wiki/Branch_predication
|
Incomputing,code generationis part of the process chain of acompiler, in which anintermediate representationofsource codeis converted into a form (e.g.,machine code) that can be readily executed by the target system.
Sophisticated compilers typically performmultiple passesover various intermediate forms. This multi-stage process is used because manyalgorithmsforcode optimizationare easier to apply one at a time, or because the input to one optimization relies on the completed processing performed by another optimization. This organization also facilitates the creation of a single compiler that can target multiple architectures, as only the last of the code generation stages (thebackend) needs to change from target to target. (For more information on compiler design, seeCompiler.)
The input to the code generator typically consists of aparse treeor anabstract syntax tree.[1]The tree is converted into a linear sequence of instructions, usually in anintermediate languagesuch asthree-address code. Further stages of compilation may or may not be referred to as "code generation", depending on whether they involve a significant change in the representation of the program. (For example, apeephole optimizationpass would not likely be called "code generation", although a code generator might incorporate a peephole optimization pass.)
In addition to the basic conversion from an intermediate representation into a linear sequence of machine instructions, a typical code generator tries to optimize the generated code in some way.
Tasks which are typically part of a sophisticated compiler's "code generation" phase include:
Instruction selection is typically carried out by doing arecursivepostorder traversalon the abstract syntax tree, matching particular tree configurations against templates; for example, the treeW := ADD(X,MUL(Y,Z))might be transformed into a linear sequence of instructions by recursively generating the sequences fort1 := Xandt2 := MUL(Y,Z), and then emitting the instructionADD W, t1, t2.
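A toy C sketch of this recursive post-order selection over an expression tree, emitting three-address pseudo-instructions for the example above (the node representation and instruction names are illustrative, not those of any particular compiler):

#include <stdio.h>

typedef enum { LEAF, ADD, MUL } Op;

typedef struct Node {
    Op op;
    const char *name;           /* for LEAF nodes: the variable name */
    struct Node *left, *right;
} Node;

static int next_temp = 0;

/* Post-order traversal: emit code for the children first, then for the node.
   Returns the name of the temporary (or variable) holding the result. */
static const char *select_insns(Node *n, char temps[][8]) {
    if (n->op == LEAF)
        return n->name;
    const char *l = select_insns(n->left,  temps);
    const char *r = select_insns(n->right, temps);
    sprintf(temps[next_temp], "t%d", next_temp + 1);
    const char *t = temps[next_temp++];
    printf("%s %s, %s, %s\n", n->op == ADD ? "ADD" : "MUL", t, l, r);
    return t;
}

int main(void) {
    /* W := ADD(X, MUL(Y, Z)) */
    Node x = {LEAF, "X", 0, 0}, y = {LEAF, "Y", 0, 0}, z = {LEAF, "Z", 0, 0};
    Node mul = {MUL, 0, &y, &z};
    Node add = {ADD, 0, &x, &mul};
    char temps[16][8];
    printf("MOV W, %s\n", select_insns(&add, temps));
    return 0;
}

Run on the example tree, this prints MUL t1, Y, Z, then ADD t2, X, t1, and finally MOV W, t2 — one possible linearisation of the tree above (the in-text example instead copies X into a temporary first).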
In a compiler that uses an intermediate language, there may be two instruction selection stages—one to convert the parse tree into intermediate code, and a second phase much later to convert the intermediate code into instructions from theinstruction setof the target machine. This second phase does not require a tree traversal; it can be done linearly, and typically involves a simple replacement of intermediate-language operations with their correspondingopcodes. However, if the compiler is actually alanguage translator(for example, one that convertsJavatoC++), then the second code-generation phase may involvebuildinga tree from the linear intermediate code.
When code generation occurs atruntime, as injust-in-time compilation(JIT), it is important that the entire process beefficientwith respect to space and time. For example, whenregular expressionsare interpreted and used to generate code at runtime, a non-deterministicfinite-state machineis often generated instead of a deterministic one, because usually the former can be created more quickly and occupies less memory space than the latter. Despite its generally generating less efficient code, JIT code generation can take advantage ofprofilinginformation that is available only at runtime.
The fundamental task of taking input in one language and producing output in a non-trivially different language can be understood in terms of the coretransformationaloperations offormal language theory. Consequently, some techniques that were originally developed for use in compilers have come to be employed in other ways as well. For example,YACC(Yet AnotherCompiler-Compiler) takes input inBackus–Naur formand converts it to a parser inC. Though it was originally created for automatic generation of a parser for a compiler, yacc is also often used to automate writing code that needs to be modified each time specifications are changed.[3]
Manyintegrated development environments(IDEs) support some form of automaticsource-code generation, often using algorithms in common with compiler code generators, although commonly less complicated. (See also:Program transformation,Data transformation.)
In general, a syntax and semantic analyzer tries to retrieve the structure of the program from the source code, while a code generator uses this structural information (e.g.,data types) to produce code. In other words, the formeraddsinformation while the latterlosessome of the information. One consequence of this information loss is thatreflectionbecomes difficult or even impossible. To counter this problem, code generators often embed syntactic and semantic information in addition to the code necessary for execution.
|
https://en.wikipedia.org/wiki/Code_generation_(compiler)
|
Theinstruction unit(I-unitorIU), also called, e.g.,instruction fetch unit(IFU),instruction issue unit(IIU), orinstruction sequencing unit(ISU), in acentral processing unit(CPU) is responsible for organizing the program instructions to be fetched from memory and executed in an appropriate order, and for forwarding them to anexecution unit(E-unitorEU). The I-unit may also perform, e.g., address resolution and pre-fetching before forwarding an instruction. It is a part of thecontrol unit, which in turn is part of the CPU.[1]
In the simplest style ofcomputer architecture, theinstruction cycleis very rigid, and runs exactly as specified by theprogrammer. In the instruction fetch part of the cycle, the value of theinstruction pointer(IP) register is the address of the next instruction to be fetched. This value is placed on theaddress busand sent to thememory unit; the memory unit returns the instruction at that address, and it is latched into theinstruction register(IR); and the value of the IP is incremented or over-written by a new value (in the case of a jump or branch instruction), ready for the next instruction cycle.
This becomes a lot more complicated, though, once performance-enhancing features are added, such asinstruction pipelining,out-of-order execution, and even just the introduction of a simpleinstruction cache.[2]
|
https://en.wikipedia.org/wiki/Instruction_unit
|
Incomputer engineering,out-of-order execution(or more formallydynamic execution) is aninstruction schedulingparadigm used in high-performancecentral processing unitsto make use ofinstruction cyclesthat would otherwise be wasted. In this paradigm, a processor executesinstructionsin an order governed by the availability of input data and execution units,[1]rather than by their original order in a program.[2][3]In doing so, the processor can avoid being idle while waiting for the preceding instruction to complete and can, in the meantime, process the next instructions that are able to run immediately and independently.[4]
Out-of-order execution is a restricted form ofdataflow architecture, which was a major research area incomputer architecturein the 1970s and early 1980s.
The first machine to use out-of-order execution was theCDC 6600(1964), designed byJames E. Thornton, which uses ascoreboardto avoid conflicts. It permits an instruction to execute if its source operand (read) registers are not to be written to by any unexecuted earlier instruction (true dependency) and if its destination (write) register is not a register used by any unexecuted earlier instruction (false dependency). The 6600 lacks the means to avoidstallinganexecution uniton false dependencies (write after write(WAW) andwrite after read(WAR) conflicts, respectively termedfirst-order conflictandthird-order conflictby Thornton, who termed true dependencies (read after write(RAW)) second-order conflicts) because each address has only a single location referable by it. The WAW is worse than WAR for the 6600, because when an execution unit encounters a WAR, the other execution units still receive and execute instructions, but upon a WAW the assignment of instructions to execution units stops, and they cannot receive any further instructions until the WAW-causing instruction's destination register has been written to by the earlier instruction.[5]
About two years later, the IBM System/360 Model 91 (1966) introduced register renaming with Tomasulo's algorithm,[6] which eliminates the false dependencies (WAW and WAR), making full out-of-order execution possible. An instruction that writes into a register rn can be executed before an earlier instruction that uses rn by actually writing into an alternative (renamed) register alt-rn; alt-rn is turned into the normal register rn only when all earlier instructions addressing rn have executed, but until then rn is given to earlier instructions and alt-rn to later ones addressing rn.
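A simplified sketch of the renaming idea itself (not the Model 91's Common Data Bus mechanism described next; the register names, the map-based table, and the free list are invented for this illustration): each architectural destination is mapped to a fresh physical register, so a later write to rn no longer conflicts with earlier instructions that read or write rn.

import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;

// Simplified register-renaming table (illustrative only).
final class RenameTable {
    private final Map<String, Integer> archToPhys = new HashMap<>(); // e.g. "r3" -> physical register 17
    private final ArrayDeque<Integer> freeList = new ArrayDeque<>(); // unused physical registers

    RenameTable(int physicalRegisters) {
        for (int p = 0; p < physicalRegisters; p++) freeList.add(p);
    }

    // Rename one instruction: sources read the current mapping; the destination is given a fresh
    // physical register (the "alt-rn" above), removing WAW and WAR conflicts on the architectural name.
    int[] rename(String[] sourceRegs, String destReg) {
        int[] physSources = new int[sourceRegs.length];
        for (int i = 0; i < sourceRegs.length; i++)
            physSources[i] = archToPhys.getOrDefault(sourceRegs[i], -1); // -1: no producer renamed yet
        int physDest = freeList.remove();    // take a free physical register for the new value
        archToPhys.put(destReg, physDest);   // later instructions reading destReg now see the new mapping
        return physSources;
    }
}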
In the Model 91 the register renaming is implemented by abypasstermedCommon Data Bus(CDB) and memory source operand buffers, leaving the physical architectural registers unused for many cycles as the oldest state of registers addressed by any unexecuted instruction is found on the CDB. Another advantage the Model 91 has over the 6600 is the ability to execute instructions out-of-order in the sameexecution unit, not just between the units like the 6600. This is accomplished byreservation stations, from which instructions go to the execution unit when ready, as opposed to the FIFO queue of each execution unit of the 6600. The Model 91 is also capable of reordering loads and stores to execute before the preceding loads and stores,[7]unlike the 6600, which only has a limited ability to move loads past loads, and stores past stores, but not loads past stores and stores past loads.[8]Only the floating-point registers of the Model 91 are renamed, making it subject to the same WAW and WAR limitations as the CDC 6600 when running fixed-point calculations. The 91 and 6600 both also suffer fromimprecise exceptions, which needed to be solved before out-of-order execution could be applied generally and made practical outside supercomputers.
To haveprecise exceptions, the proper in-order state of the program's execution must be available upon an exception. By 1985 various approaches were developed as described byJames E. Smithand Andrew R. Pleszkun.[9]TheCDC Cyber 205was a precursor, as upon a virtual memory interrupt the entire state of the processor (including the information on the partially executed instructions) is saved into aninvisible exchange package, so that it can resume at the same state of execution.[10]However to make all exceptions precise, there has to be a way to cancel the effects of instructions. The CDC Cyber 990 (1984) implements precise interrupts by using a history buffer, which holds the old (overwritten) values of registers that are restored when an exception necessitates the reverting of instructions.[9]Through simulation, Smith determined that adding a reorder buffer (or history buffer or equivalent) to theCray-1Swould reduce the performance of executing the first 14Livermore loops(unvectorized) by only 3%.[9]Important academic research in this subject was led byYale Pattwith hisHPSmsimulator.[11]
In the 1980s many earlyRISCmicroprocessors, like theMotorola 88100, had out-of-order writeback to the registers, resulting in imprecise exceptions. Instructions started execution in order, but some (e.g. floating-point) took more cycles to complete execution. However, the single-cycle execution of the most basic instructions greatly reduced the scope of the problem compared to the CDC 6600.
Smith also researched how to make different execution units operate more independently of each other and of the memory, front-end, and branching.[12]He implemented those ideas in theAstronauticsZS-1 (1988), featuring a decoupling of the integer/load/storepipelinefrom the floating-point pipeline, allowing inter-pipeline reordering. The ZS-1 was also capable of executing loads ahead of preceding stores. In his 1984 paper, he opined that enforcing the precise exceptions only on the integer/memory pipeline should be sufficient for many use cases, as it even permitsvirtual memory. Each pipeline had an instruction buffer to decouple it from the instruction decoder, to prevent the stalling of the front end. To further decouple the memory access from execution, each of the two pipelines was associated with two addressablequeuesthat effectively performed limited register renaming.[7]A similar decoupled architecture had been used a bit earlier in the Culler 7.[13]The ZS-1's ISA, like IBM's subsequent POWER, aided the early execution of branches.
With thePOWER1(1990), IBM returned to out-of-order execution. It was the first processor to combine register renaming (though again only floating-point registers) with precise exceptions. It uses aphysical register file(i.e. a dynamically remapped file with both uncommitted and committed values) instead of a reorder buffer, but the ability to cancel instructions is needed only in the branch unit, which implements a history buffer (namedprogram counter stackby IBM) to undo changes to count, link, and condition registers. The reordering capability of even the floating-point instructions is still very limited; due to POWER1's inability to reorder floating-point arithmetic instructions (results became available in-order), their destination registers aren't renamed. POWER1 also doesn't havereservation stationsneeded for out-of-order use of the same execution unit.[14][15]The next year IBM'sES/9000model 900 had register renaming added for the general-purpose registers. It also hasreservation stationswith six entries for the dual integer unit (each cycle, from the six instructions up to two can be selected and then executed) and six entries for the FPU. Other units have simple FIFO queues. The reordering distance is up to 32 instructions.[16]The A19 ofUnisys'A-series of mainframeswas also released in 1991 and was claimed to have out-of-order execution, and one analyst called the A19's technology three to five years ahead of the competition.[17][18]
The firstsuperscalarsingle-chip processors(Intel i960CAin 1989) used a simple scoreboarding scheduling like the CDC 6600 had a quarter of a century earlier. In 1992–1996 a rapid advancement of techniques, enabled byincreasing transistor counts, saw proliferation down topersonal computers. TheMotorola 88110(1992) used a history buffer to revert instructions.[19]Loads could be executed ahead of preceding stores. While stores and branches were waiting to start execution, subsequent instructions of other types could keep flowing through all the pipeline stages, including writeback. The 12-entry capacity of the history buffer placed a limit on the reorder distance.[20][21][22]ThePowerPC 601(1993) was an evolution of theRISC Single Chip, itself a simplification of POWER1. The 601 permitted branch and floating-point instructions to overtake the integer instructions already in the fetched instruction queue, the lowest four entries of which were scanned for dispatchability. In the case of a cache miss, loads and stores could be reordered. Only the link and count registers could be renamed.[28]In the fall of 1994NexGenandIBM with Motorolabrought the renaming of general-purpose registers to single-chip CPUs. NexGen's Nx586 was the firstx86processor capable of out-of-order execution and featured a reordering distance of up to 14micro-operations.[29]ThePowerPC 603renamed both the general-purpose and FP registers. Each of the four non-branch execution units can have one instruction wait in front of it without blocking the instruction flow to the other units. A five-entryreorder bufferlets no more than four instructions overtake an unexecuted instruction. Due to a store buffer, a load can access cache ahead of a preceding store.[30][31]
PowerPC 604(1995) was the first single-chip processor withexecution unit-level reordering, as three out of its six units each had a two-entry reservation station permitting the newer entry to execute before the older. The reorder buffer capacity is 16 instructions. A four-entry load queue and a six-entry store queue track the reordering of loads and stores upon cache misses.[32]HAL SPARC64(1995) exceeded the reordering capacity of theES/9000model 900 by having three 8-entry reservation stations for integer, floating-point, andaddress generation unit, and a 12-entry reservation station for load/store, which permits greater reordering of cache/memory access than preceding processors. Up to 64 instructions can be in a reordered state at a time.[33][34]Pentium Pro(1995) introduced aunified reservation station, which at the 20 micro-OP capacity permitted very flexible reordering, backed by a 40-entry reorder buffer. Loads can be reordered ahead of both loads and stores.[35]
The practically attainableper-cycle rate of executionrose further as full out-of-order execution was further adopted bySGI/MIPS(R10000) andHPPA-RISC(PA-8000) in 1996. The same yearCyrix 6x86andAMD K5brought advanced reordering techniques into mainstream personal computers. SinceDEC Alphagained out-of-order execution in 1998 (Alpha 21264), the top-performing out-of-order processor cores have been unmatched by in-order cores other thanHP/IntelItanium 2andIBM POWER6, though the latter had an out-of-orderfloating-point unit.[36]The other high-end in-order processors fell far behind, namelySun'sUltraSPARC III/IV, and IBM'smainframeswhich had lost the out-of-order execution capability for the second time, remaining in-order into thez10generation. Later big in-order processors were focused on multithreaded performance, but eventually theSPARC T seriesandXeon Phichanged to out-of-order execution in 2011 and 2016 respectively.[citation needed]
Almost all processors for phones and other lower-end applications remained in-order until c. 2010. First, Qualcomm's Scorpion (reordering distance of 32) shipped in Snapdragon,[37] and a bit later Arm's A9 succeeded the A8. For low-end x86 personal computers, the in-order Bonnell microarchitecture in early Intel Atom processors was first challenged by AMD's Bobcat microarchitecture, and in 2013 was succeeded by the out-of-order Silvermont microarchitecture.[38] Because the complexity of out-of-order execution precludes achieving the lowest possible power consumption, cost and size, in-order execution is still prevalent in microcontrollers and embedded systems, as well as in phone-class cores such as Arm's A55 and A510 in big.LITTLE configurations.
Out-of-order execution is more sophisticated than the baseline of in-order execution. In pipelined in-order processors, execution of instructions overlaps in pipelined fashion, with each instruction requiring multiple clock cycles to complete. The consequence is that results from a previous instruction may lag behind where they are needed by the next. In-order execution still has to keep track of these dependencies; its approach, however, is quite unsophisticated: stall, every time. Out-of-order execution uses much more sophisticated data-tracking techniques, as described below.
In earlier processors, the processing of instructions is performed in an instruction cycle normally consisting of the following steps: (1) instruction fetch; (2) if the input operands are available (in registers, for instance), the instruction is dispatched to the appropriate functional unit, and if one or more operands are unavailable during the current clock cycle (generally because they are being fetched from memory), the processor stalls until they are available; (3) the instruction is executed by the appropriate functional unit; (4) the functional unit writes the results back to the register file.
Often, an in-order processor has abit vectorrecording which registers will be written to by a pipeline.[39]If any input operands have the corresponding bit set in this vector, the instruction stalls. Essentially, the vector performs a greatly simplified role of protecting against register hazards. Thus out-of-order execution uses 2D matrices whereas in-order execution uses a 1D vector for hazard avoidance.
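A Java sketch of that 1-D check (the class, its method names, and the register indices are invented for this illustration; no particular pipeline is being modeled):

// Sketch of the in-order stall check described above: a 1-D "pending write" bit vector.
final class InOrderScoreboard {
    private final boolean[] pendingWrite;   // bit i is set while register i will still be written by the pipeline

    InOrderScoreboard(int registers) { pendingWrite = new boolean[registers]; }

    // An instruction may issue only if none of its source registers has its bit set.
    boolean canIssue(int[] sourceRegs) {
        for (int r : sourceRegs)
            if (pendingWrite[r]) return false;   // hazard on an input operand: stall
        return true;
    }

    void onIssue(int destReg)     { pendingWrite[destReg] = true;  }  // mark destination as in flight
    void onWriteback(int destReg) { pendingWrite[destReg] = false; }  // result now visible to later instructions
}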
This new paradigm breaks up the processing of instructions into these steps:[40] (1) instruction fetch; (2) instruction dispatch to an instruction queue (also called an instruction buffer or reservation stations); (3) the instruction waits in the queue until its input operands are available, and it may leave the queue before earlier, older instructions; (4) the instruction is issued to the appropriate functional unit and executed; (5) the results are queued; (6) only after all older instructions have had their results written back to the register file is this result written back to the register file; this final step is called the graduation or retire stage.
The key concept of out-of-order processing is to allow the processor to avoid a class of stalls that occur when the data needed to perform an operation are unavailable. In the outline above, the processor avoids the stall that occurs in step 2 of the in-order processor when the instruction is not completely ready to be processed due to missing data.
Out-of-order processors fill these slots in time with other instructions that are ready, then reorder the results at the end to make it appear that the instructions were processed as normal. The way the instructions are ordered in the original computer code is known as program order; in the processor they are handled in data order, the order in which the data becomes available in the processor's registers. Fairly complex circuitry is needed to convert from one ordering to the other and maintain a logical ordering of the output.
The benefit of out-of-order processing grows as theinstruction pipelinedeepens and the speed difference betweenmain memory(orcache memory) and the processor widens. On modern machines, the processor runs many times faster than the memory, so during the time an in-order processor spends waiting for data to arrive, it could have theoretically processed a large number of instructions.
One of the differences created by the new paradigm is the creation of queues that allow the dispatch step to be decoupled from the issue step and the graduation stage to be decoupled from the execute stage. An early name for the paradigm wasdecoupled architecture. In the earlierin-orderprocessors, these stages operated in a fairlylock-step, pipelined fashion.
The fetch and decode stages are separated from the execute stage in a pipelined processor by using a buffer. The buffer's purpose is to partition the memory access and execute functions in a computer program and achieve high performance by exploiting the fine-grain parallelism between the two.[41] In doing so, it effectively hides all memory latency from the processor's perspective.
A larger buffer can, in theory, increase throughput. However, if the processor has abranch mispredictionthen the entire buffer may need to be flushed, wasting a lot ofclock cyclesand reducing the effectiveness. Furthermore, larger buffers create more heat and use morediespace. For this reason processor designers today favor amulti-threadeddesign approach.
Decoupled architectures are generally thought of as not useful for general-purpose computing as they do not handle control-intensive code well.[42] Control-intensive code includes such things as nested branches that occur frequently in operating system kernels. Decoupled architectures play an important role in scheduling in very long instruction word (VLIW) architectures.[43]
The queue for results is necessary to resolve issues such as branch mispredictions and exceptions. The results queue allows programs to be restarted after an exception and for the instructions to be completed in program order. The queue allows results to be discarded due to mispredictions on older branch instructions and exceptions taken on older instructions. The ability to issue instructions past branches that have yet to be resolved is known asspeculative execution.
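A rough Java sketch of such a results (reorder) queue, with entry contents and the squash policy simplified for illustration (the class and its methods are invented for this example): entries are allocated in program order, may complete out of order, and retire only from the head in program order.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Simplified reorder buffer / results queue (illustrative only).
final class ReorderBuffer {
    static final class Entry {
        final String instruction;
        boolean completed = false;
        Entry(String instruction) { this.instruction = instruction; }
    }

    private final ArrayDeque<Entry> entries = new ArrayDeque<>();

    Entry dispatch(String instruction) {            // program order: append at the tail
        Entry e = new Entry(instruction);
        entries.addLast(e);
        return e;
    }

    void complete(Entry e) { e.completed = true; }  // execution may finish in any order

    List<String> retire() {                         // commit results strictly in program order, from the head
        List<String> retired = new ArrayList<>();
        while (!entries.isEmpty() && entries.peekFirst().completed)
            retired.add(entries.removeFirst().instruction);
        return retired;
    }

    void squash() { entries.clear(); }              // discard all uncommitted results, e.g. after a mispredicted branch
}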
Out-of-order designs differ in several implementation choices. Are the instructions dispatched to a centralized queue or to multiple distributed queues? Is there an actual results queue, or are the results written directly into a register file? In the latter case, the queueing function is handled by register maps that hold the register renaming information for each instruction in flight.
|
https://en.wikipedia.org/wiki/Out-of-order_execution
|
Incomputer science, adynamic array,growable array,resizable array,dynamic table,mutable array, orarray listis arandom access, variable-sizelist data structurethat allows elements to be added or removed. It is supplied withstandard librariesin many modern mainstreamprogramming languages. Dynamic arrays overcome a limit of staticarrays, which have a fixed capacity that needs to be specified atallocation.
A dynamic array is not the same thing as adynamically allocatedarray orvariable-length array, either of which is an array whose size is fixed when the array is allocated, although a dynamic array may use such a fixed-size array as a back end.[1]
A simple dynamic array can be constructed by allocating an array of fixed-size, typically larger than the number of elements immediately required. The elements of the dynamic array are stored contiguously at the start of the underlying array, and the remaining positions towards the end of the underlying array are reserved, or unused. Elements can be added at the end of a dynamic array inconstant timeby using the reserved space, until this space is completely consumed. When all space is consumed, and an additional element is to be added, then the underlying fixed-size array needs to be increased in size. Typically resizing is expensive because it involves allocating a new underlying array and copying each element from the original array. Elements can be removed from the end of a dynamic array in constant time, as no resizing is required. The number of elements used by the dynamic array contents is itslogical sizeorsize, while the size of the underlying array is called the dynamic array'scapacityorphysical size, which is the maximum possible size without relocating data.[2]
A fixed-size array will suffice in applications where the maximum logical size is fixed (e.g. by specification), or can be calculated before the array is allocated. A dynamic array might be preferred if the maximum logical size is unknown or hard to calculate before the array is allocated, if that maximum is likely to change over time, or if the amortized cost of occasional resizing is acceptable.
To avoid incurring the cost of resizing many times, dynamic arrays resize by a large amount, such as doubling in size, and use the reserved space for future expansion. The operation of adding an element to the end might work as follows:
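A minimal Java sketch of that append path (the class name, the doubling growth policy, and the starting capacity of 4 are illustrative choices, not a prescribed implementation): when the reserved space is exhausted, the backing array is reallocated at a larger capacity and the elements are copied over.

import java.util.Arrays;

// Minimal dynamic array of ints (illustrative only; real library classes add many more checks).
final class IntDynamicArray {
    private int[] data = new int[4];   // underlying fixed-size array (capacity / physical size)
    private int size = 0;              // logical size: number of elements in use

    void append(int value) {
        if (size == data.length)                         // reserved space exhausted:
            data = Arrays.copyOf(data, data.length * 2); // allocate a larger array and copy (the expensive case)
        data[size++] = value;                            // common case: constant-time write into reserved space
    }

    int get(int index) { return data[index]; }           // constant-time random access
    int size() { return size; }
}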
As n elements are inserted, the capacities form a geometric progression. Expanding the array by any constant proportion a ensures that inserting n elements takes O(n) time overall, meaning that each insertion takes amortized constant time. Many dynamic arrays also deallocate some of the underlying storage if its size drops below a certain threshold, such as 30% of the capacity. This threshold must be strictly smaller than 1/a in order to provide hysteresis (provide a stable band to avoid repeatedly growing and shrinking) and support mixed sequences of insertions and removals with amortized constant cost.
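One way to see the amortized bound, assuming a constant growth factor a > 1, an initial capacity c, and resizes that occur only when the array is full (counting one unit of work per element copied):

\sum_{k=0}^{m} a^{k} c \;=\; c\,\frac{a^{m+1}-1}{a-1} \;\le\; \frac{a}{a-1}\, n, \qquad \text{where } a^{m} c \le n \text{ is the size at the last resize,}

so inserting n elements costs O(n) in total copies plus n constant-time writes, i.e. O(1) amortized per insertion. The same factor a/(a−1) reappears below as the approximate average time per insertion.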
Dynamic arrays are a common example when teachingamortized analysis.[3][4]
The growth factor for the dynamic array depends on several factors including a space-time trade-off and algorithms used in the memory allocator itself. For growth factor a, the average time per insertion operation is about a/(a−1), while the number of wasted cells is bounded above by (a−1)n[citation needed]. If the memory allocator uses a first-fit allocation algorithm, then growth factor values such as a = 2 can cause dynamic array expansion to run out of memory even though a significant amount of memory may still be available.[5] There have been various discussions on ideal growth factor values, including proposals for the golden ratio as well as the value 1.5.[6] Many textbooks, however, use a = 2 for simplicity and analysis purposes.[3][4]
Below are growth factors used by several popular implementations:
The dynamic array has performance similar to an array, with the addition of new operations to add and remove elements:
Dynamic arrays benefit from many of the advantages of arrays, including goodlocality of referenceanddata cacheutilization, compactness (low memory use), andrandom access. They usually have only a small fixed additional overhead for storing information about the size and capacity. This makes dynamic arrays an attractive tool for buildingcache-friendlydata structures. However, in languages like Python or Java that enforce reference semantics, the dynamic array generally will not store the actual data, but rather it will storereferencesto the data that resides in other areas of memory. In this case, accessing items in the array sequentially will actually involve accessing multiple non-contiguous areas of memory, so the many advantages of the cache-friendliness of this data structure are lost.
Compared tolinked lists, dynamic arrays have faster indexing (constant time versus linear time) and typically faster iteration due to improved locality of reference; however, dynamic arrays require linear time to insert or delete at an arbitrary location, since all following elements must be moved, while linked lists can do this in constant time. This disadvantage is mitigated by thegap bufferandtiered vectorvariants discussed underVariantsbelow. Also, in a highlyfragmentedmemory region, it may be expensive or impossible to find contiguous space for a large dynamic array, whereas linked lists do not require the whole data structure to be stored contiguously.
Abalanced treecan store a list while providing all operations of both dynamic arrays and linked lists reasonably efficiently, but both insertion at the end and iteration over the list are slower than for a dynamic array, in theory and in practice, due to non-contiguous storage and tree traversal/manipulation overhead.
Gap buffersare similar to dynamic arrays but allow efficient insertion and deletion operations clustered near the same arbitrary location. Somedequeimplementations usearray deques, which allow amortized constant time insertion/removal at both ends, instead of just one end.
Goodrich[16] presented a dynamic array algorithm called tiered vectors that provides O(n^(1/k)) performance for insertions and deletions from anywhere in the array, and O(k) get and set, where k ≥ 2 is a constant parameter.
Hashed array tree (HAT) is a dynamic array algorithm published by Sitarski in 1996.[17] Hashed array tree wastes order n^(1/2) amount of storage space, where n is the number of elements in the array. The algorithm has O(1) amortized performance when appending a series of objects to the end of a hashed array tree.
In a 1999 paper,[18] Brodnik et al. describe a tiered dynamic array data structure, which wastes only n^(1/2) space for n elements at any point in time, and they prove a lower bound showing that any dynamic array must waste this much space if the operations are to remain amortized constant time. Additionally, they present a variant where growing and shrinking the buffer has not only amortized but worst-case constant time.
Bagwell (2002)[19]presented the VList algorithm, which can be adapted to implement a dynamic array.
Naïve resizable arrays -- also called "the worst implementation" of resizable arrays -- keep the allocated size of the array exactly big enough for all the data it contains, perhaps by callingreallocfor each and every item added to the array. Naïve resizable arrays are the simplest way of implementing a resizable array in C. They don't waste any memory, but appending to the end of the array always takes Θ(n) time.[17][20][21][22][23]Linearly growing arrays pre-allocate ("waste") Θ(1) space every time they re-size the array, making them many times faster than naïve resizable arrays -- appending to the end of the array still takes Θ(n) time but with a much smaller constant.
Naïve resizable arrays and linearly growing arrays may be useful when a space-constrained application needs lots of small resizable arrays;
they are also commonly used as an educational example leading to exponentially growing dynamic arrays.[24]
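A Java sketch of the naïve approach just described (standing in for the C realloc idiom; the class is invented for this example): the backing array is kept exactly full, so every append reallocates and copies all existing elements.

import java.util.Arrays;

// Naïve resizable array: Θ(n) work per append, no wasted space (illustrative sketch).
final class NaiveIntArray {
    private int[] data = new int[0];

    void append(int value) {
        data = Arrays.copyOf(data, data.length + 1); // grow by exactly one slot each time and copy everything
        data[data.length - 1] = value;
    }

    int get(int index) { return data[index]; }
    int size() { return data.length; }
}

Replacing data.length + 1 with data.length + K for some fixed K gives the linearly growing variant: each copy is amortized over K appends, which shrinks the constant factor, but as noted above appending still costs Θ(n) time.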
C++'sstd::vectorandRust'sstd::vec::Vecare implementations of dynamic arrays, as are theArrayList[25]classes supplied with theJavaAPI[26]: 236and the.NET Framework.[27][28]: 22
The genericList<>class supplied with version 2.0 of the .NET Framework is also implemented with dynamic arrays.Smalltalk'sOrderedCollectionis a dynamic array with dynamic start and end-index, making the removal of the first element also O(1).
Python'slistdatatype implementation is a dynamic array the growth pattern of which is: 0, 4, 8, 16, 24, 32, 40, 52, 64, 76, ...[29]
DelphiandDimplement dynamic arrays at the language's core.
Ada'sAda.Containers.Vectorsgeneric package provides dynamic array implementation for a given subtype.
Many scripting languages such asPerlandRubyoffer dynamic arrays as a built-inprimitive data type.
Several cross-platform frameworks provide dynamic array implementations forC, includingCFArrayandCFMutableArrayinCore Foundation, andGArrayandGPtrArrayinGLib.
Common Lisp provides rudimentary support for resizable vectors by allowing the built-in array type to be configured as adjustable, with the location of insertion tracked by the fill-pointer.
|
https://en.wikipedia.org/wiki/Dynamic_array
|
In theJava programminglanguage,heappollutionis a situation that arises when a variable of aparameterized typerefers to an object that is not of that parameterized type.[1]This situation is normally detected duringcompilationand indicated with anunchecked warning.[1]Later, duringruntimeheap pollution will often cause aClassCastException.[2]
Heap pollution in Java can occur when type arguments and variables are notreifiedat run-time. As a result, different parameterized types areimplementedby the same class orinterfaceat run time. All invocations of a givengenerictype declaration share asingle run-time implementation. This results in the possibility of heap pollution.[2]
Under certain conditions, a variable of a parameterized type may refer to an object that is not of that parameterized type. The variable will always refer to an object that is an instance of a class that implements the parameterized type.
Heap Pollution in a non-varargscontext
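A minimal sketch of how heap pollution can arise without varargs, assuming the standard pattern of assigning a raw-typed list to a parameterized variable: the code compiles with an unchecked warning, and the failure only surfaces later as a ClassCastException.

import java.util.ArrayList;
import java.util.List;

public class HeapPollutionDemo {
    public static void main(String[] args) {
        List rawList = new ArrayList<Number>();  // raw type: generic type information is discarded
        List<String> strings = rawList;          // unchecked warning: possible heap pollution

        rawList.add(Integer.valueOf(42));        // legal through the raw reference...
        String s = strings.get(0);               // ...but the implicit cast to String fails at run time
        System.out.println(s);                   // never reached: ClassCastException thrown above
    }
}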
|
https://en.wikipedia.org/wiki/Heap_pollution
|
Flat memory modelorlinear memory modelrefers to amemory addressingparadigm in which "memoryappears to the program as a single contiguousaddress space."[1]TheCPUcan directly (andlinearly)addressall of the availablememorylocations without having to resort to any sort ofbank switching,memory segmentationorpagingschemes.
Memory management andaddress translationcan still be implementedon top ofa flat memory model in order to facilitate theoperating system's functionality, resource protection,multitaskingor to increase the memory capacity beyond the limits imposed by the processor's physical address space, but the key feature of a flat memory model is that the entire memory space is linear, sequential and contiguous.
In a simple controller, or in asingle taskingembedded application, where memory management is not needed nor desirable, the flat memory model is the most appropriate, because it provides the simplest interface from the programmer's point of view, with direct access to all memory locations and minimum design complexity.
In a general purpose computer system, which requires multitasking, resource allocation, and protection, the flat memory system must be augmented by some memory management scheme, which is typically implemented through a combination of dedicated hardware (inside or outside the CPU) and software built into the operating system. The flat memory model (at the physical addressing level) still provides the greatest flexibility for implementing this type of memory management.
Most modern memory models fall into one of three categories: the flat memory model, the paged memory model, and the segmented memory model.
Within the x86 architectures, when operating in the real mode (or emulation), the physical address is computed as:[2] PhysicalAddress = Segment × 16 + Offset
(I.e., the 16-bit segment register is shifted left by 4 bits and added to a 16-bit offset, resulting in a 20-bit address.)
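As a small illustration of that calculation (Java used here purely for the arithmetic; the class and method are invented for this example):

// Real-mode (segment:offset) physical address calculation, as described above.
public class RealModeAddress {
    static int physicalAddress(int segment, int offset) {
        // 16-bit segment shifted left by 4 bits, plus 16-bit offset, gives a 20-bit address
        return ((segment & 0xFFFF) << 4) + (offset & 0xFFFF);
    }

    public static void main(String[] args) {
        // e.g. segment 0xF000, offset 0xFFF0 -> 0xFFFF0 (the classic 8086 reset vector)
        System.out.printf("%05X%n", physicalAddress(0xF000, 0xFFF0));
    }
}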
|
https://en.wikipedia.org/wiki/Linear_address_space
|
Incomputer science, asingle address space operating system(orSASOS) is anoperating systemthat provides only one globally sharedaddress spacefor allprocesses. In a single address space operating system, numerically identical (virtual memory)logical addressesin different processes all refer to exactly the same byte of data.[1]
In a traditional OS with private per-process address spaces, memory protection is based on address space boundaries ("address space isolation"). Single address-space operating systems instead make translation and protection orthogonal, which in no way weakens protection.[2][3] The core advantage is that pointers (i.e. memory references) have global validity: a pointer's meaning is independent of the process using it. This allows pointer-connected data structures to be shared across processes and made persistent, i.e. stored on backing store.
Someprocessor architectureshave direct support for protection independent of translation. On such architectures, a SASOS may be able to perform context switches faster than a traditional OS. Such architectures includeItanium, and Version 5 of theArm architecture, as well ascapability architecturessuch as CHERI.[4]
A SASOS should not be confused with aflat memory model, which provides no address translation and generally no memory protection. In contrast, a SASOS makes protection orthogonal to translation: it may be possible to name a data item (i.e. know its virtual address) while not being able to access it.
SASOS projects using hardware-based protection include the following:
Related are OSes that provide protection through language-level type safety
|
https://en.wikipedia.org/wiki/Single_address_space_operating_system
|
ABrowser Helper Object(BHO) is aDLLmoduledesigned as apluginfor theMicrosoftInternet Explorerweb browserto provide added functionality. BHOs were introduced in October 1997 with the release ofversion 4of Internet Explorer. Most BHOs are loaded once by each new instance of Internet Explorer. However, in the case ofWindows Explorer, a new instance is launched for each window.
BHOs are still supported as of Windows 10, throughInternet Explorer 11, while BHOs are not supported inMicrosoft Edge.
Each time a new instance of Internet Explorer starts, it checks theWindows Registryfor the keyHKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Browser Helper Objects. If Internet Explorer finds this key in the registry, it looks for aCLSIDkey listed below the key. The CLSID keys under Browser Helper Objects tell the browser which BHOs to load. Removing the registry key prevents the BHO from being loaded. For each CLSID that is listed below the BHO key, Internet Explorer calls CoCreateInstance to start the instance of the BHO in the same process space as the browser. If the BHO is started and implements the IObjectWithSite interface, it can control and receive events from Internet Explorer. BHOs can be created in any language that supportsCOM.[1]
Some modules enable the display of different file formats not ordinarily interpretable by the browser. TheAdobe Acrobatplug-in that allows Internet Explorer users to readPDFfiles within their browser is a BHO.
Other modules add toolbars to Internet Explorer, such as theAlexa Toolbarthat provides a list of web sites related to the one you are currently browsing, or theGoogle Toolbarthat adds a toolbar with a Google search box to the browseruser interface.
The Conduit toolbars are based on a BHO that can be used onInternet Explorer 7and up. This BHO provides a search facility that connects toMicrosoft'sBingsearch.
The BHOAPIexposeshooksthat allow the BHO to access theDocument Object Model(DOM) of the current page and to control navigation. Because BHOs have unrestricted access to the Internet Explorer event model, some forms ofmalware(such as adware and spyware) have also been created as BHOs.[2][3]
For example, theDownload.jectmalware is a BHO that is activated when a secureHTTPconnection is made to a financial institution, then begins torecord keystrokesfor the purpose of capturing user passwords. TheMyWay Searchbartracks users' browsing patterns and passes the information it records to third parties. TheC2.LOPmalware adds links and popups of its own to web pages in order to drive users topay-per-clickwebsites.[citation needed]
Many BHOs introduce visible changes to a browser's interface, such as installing toolbars inInternet Explorerand the like, but others run without any change to the interface. This renders it easy for malicious coders to conceal the actions of their browser add-on, especially since, after being installed, the BHO seldom requires permission before performing further actions. For instance, variants of the ClSpring trojan use BHOs to install scripts to provide a number of instructions to be performed such as adding and deleting registry values and downloading additional executable files, all completely transparently to the user.[4]
In response to the problems associated with BHOs and similar extensions to Internet Explorer, Microsoft debuted anAdd-on ManagerinInternet Explorer 6with the release ofService Pack 2forWindows XP(updating it to IE6 Security Version 1, a.k.a. SP2). This utility displays a list of all installed BHOs,browser extensionsandActiveX controls, and allows the user to enable or disable them at will. There are also free tools (such as BHODemon) that list installed BHOs and allow the user to disable malicious extensions.Spybot S&Dadvanced mode has a similar tool built in to allow the user to disable installed BHO.
|
https://en.wikipedia.org/wiki/Browser_Helper_Object
|
Abulletin board system(BBS), also called acomputer bulletin board service(CBBS),[1]is acomputer serverrunningsoftwarethat allows users to connect to the system using aterminal program. Once logged in, the user performs functions such asuploadinganddownloadingsoftware and data, reading news and bulletins, and exchanging messages with other users through publicmessage boardsand sometimes via directchatting. In the early 1980s, message networks such asFidoNetwere developed to provide services such asNetMail, which is similar to internet-basedemail.[2]
Many BBSes also offeredonline gamesin which users could compete with each other. BBSes with multiple phone lines often providedchat rooms, allowing users to interact with each other. Bulletin board systems were in many ways a precursor to the modern form of theWorld Wide Web,social networks, and other aspects of theInternet. Low-cost, high-performanceasynchronousmodemsdrove the use ofonline servicesand BBSes through the early 1990s.InfoWorldestimated that there were 60,000 BBSes serving 17 million users in the United States alone in 1994, a collective market much larger than major online services such asCompuServe.
The introduction of inexpensivedial-up internet serviceand theMosaic web browseroffered ease of use and global access that BBS and online systems did not provide, and led to a rapid crash in the market starting in late 1994 to early 1995. Over the next year, many of theleading BBS software providerswentbankruptand tens of thousands of BBSes disappeared.[3]Today, BBSing survives largely as a nostalgic hobby in most parts of the world, but it is still a popular form of communication for middle aged Taiwanese (seePTT Bulletin Board System).[4]Most surviving BBSes are accessible overTelnetand typically offer free email accounts,FTPservices, andIRC. Some offer access through packet switched networks orpacket radioconnections.[1]
A precursor to the public bulletin board system wasCommunity Memory, which started in August 1973 inBerkeley, California.Microcomputersdid not exist at that time, and modems were both expensive and slow. Community Memory ran on amainframe computerand was accessed through terminals located in severalSan Francisco Bay Areaneighborhoods.[5][6]The poor quality of the original modem connecting the terminals to the mainframe promptedCommunity Memoryhardware person,Lee Felsenstein, to invent thePennywhistle modem, whose design was influential in the mid-1970s.
Community Memory allowed the user to type messages into acomputer terminalafter inserting a coin, and offered a "pure" bulletin board experience with public messages only (no email or other features). It did offer the ability to tag messages with keywords, which the user could use in searches. The system acted primarily in the form of a buy and sell system with the tags taking the place of the more traditionalclassifications. But users found ways to express themselves outside these bounds, and the system spontaneously created stories, poetry and other forms of communications. The system was expensive to operate, and when their host machine became unavailable and a new one could not be found, the system closed in January 1975.
Similar functionality was available to mostmainframeusers, which might be considered a sort of ultra-local BBS when used in this fashion. Commercial systems, expressly intended to offer these features to the public, became available in the late 1970s and formed theonline servicemarket that lasted into the 1990s. One particularly influential example wasPLATO, which had thousands of users by the late 1970s, many of whom used the messaging andchat roomfeatures of the system in the same way that would later become common on BBSes.
Early modems were generally either expensive or very simple devices usingacoustic couplersto handle telephone operation. The user would pick up the phone, dial a number, then press the handset into rubber cups on the top of the modem. Disconnecting at the end of a call required the user to pick up the handset and return it to the phone. Examples of direct-connecting modems did exist, and these often allowed the host computer to send it commands to answer or hang up calls, but these were very expensive devices used by large banks and similar companies.
With the introduction ofmicrocomputerswith expansion slots, like theS-100 busmachines andApple II, it became possible for the modem to communicate instructions and data on separate lines. These machines typically only supported asynchronous communications, andsynchronousmodems were much more expensive than asynchronous modems. A number of modems of this sort were available by the late 1970s. This made the BBS possible for the first time, as it allowed software on the computer to pick up an incoming call, communicate with the user, and then hang up the call when the user logged off.
The first publicdial-upBBS was developed byWard ChristensenandRandy Suess, members of the Chicago Area Computer Hobbyists' Exchange (CACHE). According to an early interview, when Chicago was snowed under during theGreat Blizzard of 1978, the two began preliminary work on theComputerized Bulletin Board System, orCBBS.[7]The system came into existence largely through a fortuitous combination of Christensen having a spare S-100 bus computer and an early Hayes internal modem, and Suess's insistence that the machine be placed at his house inChicagowhere it would be a local phone call for more users. Christensen patterned the system after thecork boardhis local computer club used to post information like "need a ride". CBBS officially went online on 16 February 1978.[8][9]CBBS, which kept a count of callers, reportedly connected 253,301 callers before it was finally retired.[citation needed]
A key innovation required for the popularization of the BBS was theSmartmodemmanufactured byHayes Microcomputer Products. Internal modems like the ones used by CBBS and similar early systems were usable, but generally expensive due to the manufacturer having to make a different modem for every computer platform they wanted to target. They were also limited to those computers with internal expansion, and could not be used with other useful platforms likevideo terminals. External modems were available for these platforms but required the phone to be dialed using a conventional handset.[a]Internal modems could be software-controlled to perform outbound and inbound calls, but external modems had only the data pins to communicate with the host system.
Hayes' solution to the problem was to use a smallmicrocontrollerto implement a system that examined the data flowing into the modem from the host computer, watching for certain command strings. This allowed commands to be sent to and from the modem using the same data pins as all the rest of the data, meaning it would work on any system that could support even the most basic modems. The Smartmodem could pick up the phone, dial numbers, and hang up again, all without any operator intervention. The Smartmodem was not necessary for BBS use but made overall operation dramatically simpler. It also improved usability for the caller, as most terminal software allowed different phone numbers to be stored and dialed on command, allowing the user to easily connect to a series of systems.
The introduction of the Smartmodem led to the first real wave of BBS systems. Limited in speed and storage capacity, these systems were normally dedicated solely to messaging, private email and public forums. File transfers were extremely slow at these speeds, and file libraries were typically limited to text files containing lists of other BBS systems. These systems attracted a particular type of user who used the BBS as a unique type of communications medium, and when these local systems were crowded from the market in the 1990s, their loss was lamented for many years.[citation needed]
Speed improved with the introduction of 1200bit/sasynchronous modems in theearly 1980s, giving way to 2400 bit/s fairly rapidly. The improved performance led to a substantial increase in BBS popularity. Most of the information was displayed using ordinaryASCIItext orANSI art, but a number of systems attempted character-basedgraphical user interfaces(GUIs) which began to be practical at 2400 bit/s.
There was a lengthy delay before 9600 bit/s models began to appear on the market. 9600 bit/s was not even established as a strong standard beforeV.32bisat 14.4 kbit/s took over in the early 1990s. This period also saw the rapid rise in capacity and a dramatic drop in the price ofhard drives. By the late 1980s, many BBS systems had significant file libraries, and this gave rise to leeching – users calling BBSes solely for their files. These users would use the modem for some time, leaving less time for other users, who gotbusy signals. The resulting upheaval eliminated many of the pioneering message-centric systems.[10]
This also gave rise to a new class of BBS systems, dedicated solely to file upload and downloads. These systems charged for access, typically a flat monthly fee, compared to the per-hour fees charged byEvent Horizons BBSand most online services. Many third-party services were developed to support these systems, offering simple credit cardmerchant accountgateways for the payment of monthly fees, and entire file libraries oncompact diskthat made initial setup very easy. Early 1990s editions ofBoardwatchwere filled with ads for single-click install solutions dedicated to these newsysops. While this gave the market a bad reputation, it also led to its greatest success. During the early 1990s, there were a number of mid-sized software companies dedicated to BBS software, and the number of BBSes in service reached its peak.
Towards the early 1990s, BBS became so popular that it spawned three monthly magazines,Boardwatch,BBS Magazine, and in Asia and Australia,Chips 'n Bits Magazinewhich devoted extensive coverage of the software and technology innovations and people behind them, and listings to US and worldwide BBSes.[11]In addition, in the US, a major monthly magazine,Computer Shopper, carried a list of BBSes along with a brief abstract of each of their offerings.
Through the late 1980s and early 1990s, there was considerable experimentation with ways to develop user-friendly interfaces for BBSes. Almost every popular system used ANSI-based color menus to make reading easier on capable hardware and terminal emulators, and most also allowed cursor commands to offer command-line recall and similar features. Another common feature was the use ofautocompleteto make menu navigation simpler, a feature that would not re-appear on the Web until decades later.
A number of systems also made forays into GUI-based interfaces, either using character graphics sent from the host, or using custom GUI-based terminal systems. The latter initially appeared on theMacintoshplatform, whereTeleFinderandFirstClassbecame very popular. FirstClass offered a host of features that would be difficult or impossible under a terminal-based solution, including bi-directional information flow and non-blocking operation that allowed the user to exchange files in both directions while continuing to use the message system and chat, all in separate windows. Will Price's "Hermes", released in 1988, combined a familiar PC style with Macintosh GUI interface.[12](Hermes was already "venerable" by 1994 although the Hermes II release remained popular.[13][14])Skypixfeatured on Amiga a completemarkup language. It used a standardized set of icons to indicate mouse driven commands available online and to recognize different filetypes present on BBS storage media. It was capable of transmitting data like images, audio files, and audio clips between users linked to the same BBS or off-line if the BBS was in the circuit of the FidoNet organization.
On the PC, efforts were more oriented to extensions of the original terminal concept, with the GUI being described in the information on the host. One example was theRemote Imaging Protocol, essentially a picture description system, which remained relatively obscure. Probably the ultimate development of this style of operation was the dynamic page implementation of theUniversity of Southern CaliforniaBBS (USCBBS) by Susan Biddlecomb, which predated the implementation of theHTMLDynamic web page. A complete Dynamic web page implementation was accomplished usingTBBSwith aTDBSadd-on presenting a complete menu system individually customized for each user.
The demand for complex ANSI and ASCII screens and larger file transfers taxed availablechannel capacity, which in turn increased demand for faster modems. 14.4 kbit/s modems were standard for a number of years while various companies attempted to introduce non-standard systems with higher performance – normally about 19.2 kbit/s. Another delay followed due to a longV.34standards process before 28.8 kbit/s was released, only to be quickly replaced by 33.6 kbit/s, and then 56 kbit/s.
These increasing speeds had the side effect of dramatically reducing the noticeable effects of channel efficiency. When modems were slow, considerable effort was put into developing the most efficient protocols and display systems possible.TCP/IPran slowly over 1200 bit/s modems.56 kbit/s modemscould access the protocol suite more quickly than with slower modems. Dial-up Internet service became widely available in the mid-1990s to the general public outside of universities and research laboratories, and connectivity was included in most general-useoperating systemsby default as Internet access became popular.
These developments together resulted in the sudden obsolescence of bulletin board technology in 1995 and the collapse of its supporting market. Technically, Internet service offered an enormous advantage over BBS systems, as a single connection to the user'sInternet service providerallowed them to contact services around the world. In comparison, BBS systems relied on a direct point-to-point connection, so even dialing multiple local systems required multiple phone calls. Internet protocols also allowed a single connection to be used to contact multiple services simultaneously; for example, downloading files from anFTPlibrary while checking the weather on a local news website. Even with ashell account, it was possible to multitask usingjob controlor aterminal multiplexersuch asGNU Screen. In comparison, a connection to a BBS allowed access only to the information on that system.
According to theFidoNetNodelist, BBSes reached their peak usage around 1996, the same period when theWorld Wide WebandAOLbecame mainstream. BBSes rapidly declined in popularity thereafter, and were replaced by systems using the Internet for connectivity. Some of the larger commercial BBSes, such as MaxMegabyte andExecPC BBS, evolved intoInternet service providers.
The websitetextfiles.comis an archival history of BBSes. It includes a list of over 100,000 BBSes that once existed during a span of 20 years.[15]The creator and maintainer oftextfiles.com,Jason Scott, also producedBBS: The Documentary, a film that chronicles the history of BBSes and has interviews with well-known figures from the BBS heyday.
In the 2000s, most traditional BBS systems migrated to the Internet using Telnet or SSH protocols. As of September 2022, between 900 and 1000 are thought to be active via the Internet – fewer than 30 of these being of the traditional "dial-up" (modem) variety.[citation needed]
Unlike modern websites andonline servicesthat are typically hosted by third-party companies in commercialdata centers, BBS computers (especially for smaller boards) were typically operated from the system operator's home. As such, access could be unreliable, and in many cases, only one user could be on the system at a time. Only larger BBSes with multiple phone lines using specialized hardware, multitasking software, or aLANconnecting multiple computers, could host multiple simultaneous users.
The first BBSes each used their own unique software,[b]quite often written entirely or at least customized by the system operators themselves, running on earlyS-100 busmicrocomputersystems such as theAltair 8800,IMSAI 8080andCromemcounder theCP/Moperating system. Soon after, BBS software was being written for all of the majorhome computersystems of the late 1970s era – theApple II,Atari 8-bit computers,Commodore PET,TI-99/4A, andTRS-80being some of the most popular.
In 1981, theIBM Personal Computerwas introduced andMS-DOSsoon became the operating system on which the majority of BBS programs were run.RBBS-PC,portedover from the CP/M world, andFidoBBS, developed byTom Jennings(who later foundedFidoNet) were the first notable MS-DOS BBS programs. Many successful commercial BBS programs were developed, such asPCBoardBBS,RemoteAccessBBS, Magpie andWildcat! BBS. PopularfreewareBBS programs includedTelegardBBS andRenegade BBS, which both had early origins from leakedWWIVBBS source code.
BBS systems on other systems remained popular, especiallyhome computers, largely because they catered to the audience of users running those machines. The ubiquitousCommodore 64(introduced in 1982) was a common platform in the 1980s. Popular commercial BBS programs wereBlue Board,Ivory BBS,Color64andCNet 64. There was also a devoted contingent of BBS users on TI-99/4A computers, long afterTexas Instrumentshad discontinued the computer in the aftermath of theirprice warwith Commodore. Popular BBSes for the TI-99/4A included Techie, TIBBS (Texas Instruments Bulletin Board System), TI-COMM, and Zyolog.[16][17][18]In the early 1990s, a small number of BBSes were also running on the CommodoreAmiga. Popular BBS software for the Amiga were ABBS,Amiexpress, C-Net, StormforceBBS,Infinityand Tempest. There was also a small faction of devoted Atari BBSes that used the Atari 800, then the 800XL, and eventually the1040ST. The earlier machines generally lackedhard drivecapabilities, which limited them primarily to messaging.
MS-DOS continued to be the most popular operating system for BBS use up until the mid-1990s, and in the early years, most multi-node BBSes were running under a DOS based multitasker such asDESQviewor consisted of multiple computers connected via aLAN. In the late 1980s, a handful of BBS developers implemented multitasking communications routines inside their software, allowing multiple phone lines and users to connect to the same BBS computer. These included Galacticomm'sMajorBBS(later WorldGroup), eSoftThe Bread Board System(TBBS), and Falken. Other popular BBS's wereMaximusand Opus, with some associated applications such as BinkleyTerm being based on characters from theBerkley Breathedcartoon strip ofBloom County. Though most BBS software had been written inBASICorPascal(with some low-level routines written inassembly language), theClanguage was starting to gain popularity.
By 1995, many of the DOS-based BBSes had begun switching to modernmultitaskingoperating systems, such asOS/2,Windows 95, andLinux. One of the first graphics-based BBS applications wasExcalibur BBSwith low-bandwidth applications that required its own client for efficiency. This led to one of the earliest implementations of Electronic Commerce in 1996 with replication of partner stores around the globe. TCP/IP networking allowed most of the remaining BBSes to evolve and include Internet hosting capabilities. Recent BBS software, such asSynchronet,Mystic BBS, EleBBS,DOC, Magpie orWildcat! BBS, provide access using theTelnetprotocol rather than dialup, or by using legacy DOS-based BBS software with aFOSSIL-to-Telnet redirector such asNetFoss.
BBSes were generally text-based, rather thanGUI-based, and early BBSes conversed using the simpleASCIIcharacter set. However, some home computer manufacturers extended the ASCII character set to take advantage of the advanced color and graphics capabilities of their systems. BBS software authors included these extended character sets in their software, and terminal program authors included the ability to display them when a compatible system was called. Atari's native character set was known asATASCII, while most Commodore BBSes supportedPETSCII. PETSCII was also supported by the nationwide online serviceQuantum Link.[c]
The use of these custom character sets was generally incompatible between manufacturers. Unless a caller was using terminal emulation software written for, and running on, the same type of system as the BBS, the session would simply fall back to simple ASCII output. For example, aCommodore 64user calling an Atari BBS would use ASCII rather than the native character set of either. As time progressed, most terminal programs began using theASCIIstandard, but could use their native character set if it was available.
COCONET, a BBS system made by Coconut Computing, Inc., was released in 1988 and only supported a GUI (no text interface was initially available but eventually became available around 1990), and worked in EGA/VGA graphics mode, which made it stand out from text-based BBS systems. COCONET's bitmap andvector graphicsand support for multiple type fonts were inspired by thePLATO system, and the graphics capabilities were based on what was available in theBorland Graphics Interfacelibrary. A competing approach calledRemote Imaging Protocol(RIP) emerged and was promoted by Telegrafix in the early to mid-1990s but it never became widespread. Ateletexttechnology calledNAPLPSwas also considered, and although it became the underlying graphics technology behind theProdigy service, it never gained popularity in the BBS market. There were several GUI-based BBSes on theApple Macintoshplatform, includingTeleFinderandFirstClass, but these were mostly confined to the Mac market.
In the UK, the BBC Micro based OBBS software, available from Pace for use with their modems, optionally allowed for color and graphics using the Teletext-based graphics mode available on that platform. Other systems used the Viewdata protocols made popular in the UK by British Telecom's Prestel service and by the on-line magazine Micronet 800, which was busy giving away modems with its subscriptions.
Over time, terminal manufacturers started to supportANSI X3.64in addition to or instead of proprietary terminal control codes, e.g., color, cursor positioning.
The most popular form of online graphics wasANSI art, which combined theIBM Extended ASCIIcharacter set's blocks and symbols withANSIescape sequencesto allow changing colors on demand, provide cursor control and screen formatting, and even basic musical tones. During the late 1980s and early 1990s, most BBSes used ANSI to make elaborate welcome screens, and colorized menus, and thus, ANSI support was a sought-after feature in terminal client programs. The development of ANSI art became so popular that it spawned an entire BBS "artscene"subculturedevoted to it.
TheAmigaSkyline BBSsoftware in 1988 featured a scriptmarkup languagecommunication protocol calledSkypix[19]which was capable of giving the user a complete graphical interface, featuring rich graphics, changeable fonts, mouse-controlled actions, animations and sound.[20]
Today[when?], most BBS software that is still actively supported, such as Worldgroup,Wildcat! BBSandCitadel/UX, is Web-enabled, and the traditional text interface has been replaced (or operates concurrently) with a Web-based user interface. For those more nostalgic for the true BBS experience, one can use NetSerial (Windows) orDOSBox(Windows/*nix) to redirect DOS COM port software to telnet, allowing them to connect to Telnet BBSes using 1980s and 1990s era modemterminal emulationsoftware, likeTelix,Terminate,QmodemandProcomm Plus. Modern 32-bit terminal emulators such as mTelnet andSyncTerminclude native telnet support.
Since most early BBSes were run by computer hobbyists, content was largely technical, with user communities revolving around hardware and software discussions.
As the BBS phenomenon grew, so did the popularity of special interest boards. Bulletin Board Systems could be found for almost every hobby and interest. Popular interests included politics, religion, music,dating, andalternative lifestyles. Many system operators also adopted athemein which they customized their entire BBS (welcome screens, prompts, menus, and so on) to reflect that theme. Common themes were based onfantasy, or were intended to give the user the illusion of being somewhere else, such as in asanatorium, wizard's castle, or on apirate ship.
In the early days, the file download library consisted of files that the system operators obtained themselves from other BBSes and friends. Many BBSes inspected every file uploaded to their public file download library to ensure that the material did not violate copyright law. As time went on,sharewareCD-ROMs were sold with up to thousands of files on eachCD-ROM. Small BBSes copied each file individually to their hard drive. Some systems used a CD-ROM drive to make the files available. Advanced BBSes used Multiple CD-ROM disc changer units that switched 6 CD-ROM disks on demand for the caller(s). Large systems used all 26 DOS drive letters with multi-disk changers housing tens of thousands of copyright-free shareware or freeware files available to all callers. These BBSes were generally more family-friendly, avoiding the seedier side of BBSes. Access to these systems varied from single to multiple modem lines with some requiring little or no confirmed registration.
Some BBSes, called elite,WaReZ, or pirate boards, were exclusively used for distributingcracked software,phreakingmaterials, and other questionable or unlawful content. These BBSes often had multiple modems and phone lines, allowing several users to upload and download files at once. Most elite BBSes used some form of new user verification, where new users would have to apply for membership and attempt to prove that they were not a law enforcement officer or alamer.The largest elite boards accepted users by invitation only. Elite boards also spawned their own subculture and gave rise to theslangknown today asleetspeak.
Another common type of board was thesupport BBSrun by a manufacturer of computer products or software. These boards were dedicated to supporting users of the company's products with question and answer forums, news and updates, and downloads. Most of them were not a free call. Today, these services have moved to the Web.
Some general-purpose Bulletin Board Systems had special levels of access that were given to those who paid extra money, uploaded useful files or knew the system operator personally. These specialty and pay BBSes usually had something unique to offer their users, such as large file libraries,warez,pornography,chat roomsorInternetaccess.
Pay BBSes such as TheWELLand Echo NYC (now Internet forums rather than dial-up),ExecPC, PsudNetwork andMindVox(which folded in 1996) were admired for their close, friendly communities and quality discussion forums. However, many free BBSes also maintained close communities, and some even had annual or bi-annual events where users would travel great distances to meet face-to-face with their on-line friends. These events were especially popular with BBSes that offeredchat rooms.
Some of the BBSes that provided access to illegal content faced opposition. On July 12, 1985, in conjunction with a credit card fraud investigation, the Middlesex County, New Jersey Sheriff's department raided and seized The Private Sector BBS, which was the official BBS for the grey hat hacker quarterly 2600 Magazine at the time.[21] The notorious Rusty n Edie's BBS, in Boardman, Ohio, was raided by the FBI in January 1993 for trading unlicensed software, and later sued by Playboy for copyright infringement in November 1997. In Flint, Michigan, a 21-year-old man was charged with distributing child pornography through his BBS in March 1996.[22]
Most early BBSes operated as individual systems. Information contained on that BBS never left the system, and users would only interact with the information and user community on that BBS alone. However, as BBSes became more widespread, there evolved a desire to connect systems together to share messages and files with distant systems and users. The largest such network was FidoNet.
As it was prohibitively expensive for the hobbyist system operator to have a dedicated connection to another system, FidoNet was developed as a store and forward network. Private email (Netmail), public message boards (Echomail) and eventually even file attachments on a FidoNet-capable BBS would be bundled into one or more archive files over a set time interval. These archive files were then compressed with ARC or ZIP and forwarded to (or polled by) another nearby node or hub via a dialup Xmodem session. Messages would be relayed around various FidoNet hubs until they were eventually delivered to their destination. The hierarchy of FidoNet BBS nodes, hubs, and zones was maintained in a routing table called a Nodelist. Some larger BBSes or regional FidoNet hubs would make several transfers per day, some even to multiple nodes or hubs, and as such, transfers usually occurred at night or in the early morning when toll rates were lowest. In Fido's heyday, sending a Netmail message to a user on a distant FidoNet node, or participating in an Echomail discussion, could take days, especially if any FidoNet nodes or hubs in the message's route only made one transfer call per day.
FidoNet was platform-independent and would work with any BBS that was written to use it. BBSes that did not have integrated FidoNet capability could usually add it using an external FidoNetfront-endmailer such as SEAdog,FrontDoor, BinkleyTerm, InterMail or D'Bridge, and a mail processor such asFastEchoorSquish. The front-end mailer would conduct the periodic FidoNet transfers, while the mail processor would usually run just before and just after the mailer ran. This program would scan for and pack up new outgoing messages, and then unpack, sort and "toss" the incoming messages into a BBS user's local email box or into the BBS's local message bases reserved for Echomail. As such, these mail processors were commonly called "scanner/tosser/packers".
Many other BBS networks followed the example of FidoNet, using the same standards and the same software. These were called FidoNet Technology Networks (FTNs). They were usually smaller and targeted at selected audiences. Some networks usedQWKdoors, and others such asRelayNet(RIME) andWWIVnetused non-Fido software and standards.
Before commercial Internet access became common, thesenetworksof BBSes provided regional and internationale-mailand message bases. Some even providedgateways, such as UFGATE, by which members could send and receive e-mail to and from theInternetviaUUCP, and many FidoNet discussion groups were shared via gateway toUsenet. Elaborate schemes allowed users to download binary files, searchgopherspace, and interact with distantprograms, all using plain-text e-mail.
As the volume of FidoNet Mail increased and newsgroups from the early days of the Internet became available, satellite data downstream services became viable for larger systems. The satellite service provided access to FidoNet and Usenet newsgroups in large volumes at a reasonable fee. By connecting a small dish and receiver, a constant downstream of thousands of FidoNet and Usenet newsgroups could be received. The local BBS only needed to upload new outgoing messages via the modem network back to the satellite service. This method drastically reduced phone data transfers while dramatically increasing the number of message forums.
FidoNet is still in use today, though in a much smaller form, and many Echomail groups are still shared with Usenet via FidoNet to Usenet gateways. Widespread abuse of Usenet with spam and pornography has led many of these FidoNet gateways to cease operation completely.
Much of thesharewaremovement was started via user distribution of software through BBSes. A notable example wasPhil Katz's PKARC (and laterPKZIP, using the same ".zip"algorithmthatWinZipand other popular archivers now use); also other concepts of software distribution likefreeware,postcardwarelikeJPEGviewanddonationwarelike Red Ryder for the Macintosh first appeared on BBS sites.Doomfromid Softwareand nearly allApogee Softwaregames were distributed as shareware. The Internet has largely erased the distinction of shareware – most users now download the software directly from the developer's website rather than receiving it from another BBS user "sharing" it. Today, shareware often refers to electronically distributed software from a small developer.
Many commercial BBS software companies that continue to support their old BBS software products switched to the shareware model or made it entirely free. Some companies were able to make the move to the Internet and provide commercial products with BBS capabilities.
A classic BBS had:
The BBS software usually provides:[citation needed]
|
https://en.wikipedia.org/wiki/Bulletin_board_system
|
The jail mechanism is an implementation of FreeBSD's OS-level virtualisation that allows system administrators to partition a FreeBSD-derived computer system into several independent mini-systems called jails, all sharing the same kernel, with very little overhead.[1] It is implemented through a system call, jail(2),[2] as well as a userland utility, jail(8),[3] plus, depending on the system, a number of other utilities. The functionality was committed into FreeBSD in 1999 by Poul-Henning Kamp after some period of production use by a hosting provider, and was first released with FreeBSD 4.0; it is thus supported on a number of FreeBSD descendants, including DragonFly BSD, to this day.
The need for FreeBSD jails came from a small shared-environment hosting provider (R&D Associates, Inc., owned by Derrick T. Woolworth) and its desire to establish a clean, clear-cut separation between its own services and those of its customers, mainly for security and ease of administration (jail(8)). Instead of adding a new layer of fine-grained configuration options, the solution adopted by Poul-Henning Kamp was to compartmentalize the system – both its files and its resources – in such a way that only the right people are given access to the right compartments.[4]
Jails were first introduced in FreeBSD version 4.0, which was released on March 14, 2000.[5] Most of the original functionality is supported on DragonFly, and several of the new features have been ported as well.
FreeBSD jails mainly aim at three goals:
Unlike a chroot jail, which only restricts processes to a particular view of the filesystem, the FreeBSD jail mechanism restricts the activities of a process in a jail with respect to the rest of the system. In effect, jailed processes are sandboxed. They are bound to specific IP addresses, and a jailed process cannot access divert or routing sockets. Raw sockets are also disabled by default, but may be enabled by setting the security.jail.allow_raw_sockets sysctl option. Additionally, interaction between processes that are not running in the same jail is restricted.
The jail(8) utility and jail(2) system call first appeared in FreeBSD 4.0. New utilities (for example jls(8) to list jails) and system calls (for example jail_attach(2) to attach a new process to a jail) that render jail management much easier were added in FreeBSD 5.1. The jail subsystem received further significant updates with FreeBSD 7.2, including support for multiple IPv4 and IPv6 addresses per jail and support for binding jails to specific CPUs.
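As a rough, hypothetical illustration of the jail_attach(2) call mentioned above, the following Python sketch uses ctypes to invoke it from the C library; it assumes a FreeBSD host, root privileges, and an already-running jail whose ID (here 1) is a placeholder that would normally be looked up with jls(8).

# Minimal sketch: attach the current process to an existing FreeBSD jail.
# Assumes FreeBSD, root privileges, and a running jail; JID 1 is a placeholder.
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)
JID = 1  # hypothetical jail ID; list real IDs with jls(8)

# jail_attach(2) has the C signature: int jail_attach(int jid);
if libc.jail_attach(JID) != 0:
    err = ctypes.get_errno()
    raise OSError(err, os.strerror(err))

# The process is now confined to the jail's filesystem root,
# IP addresses and process namespace.
print("attached to jail", JID)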
With jail it is possible to create environments, each having its own set of utilities installed and its own configuration. Jails permit software packages to view the system egoistically, as if each package had the machine to itself. Jails can also have their own, independent, jailed superusers.[6]
The FreeBSD jail does not, however, achieve true virtualization; it does not allow the virtual machines to run kernel versions different from that of the base system. All jails share the same kernel. There is no support for clustering or process migration.
FreeBSD jails are an effective way to increase the security of a server because of the separation between the jailed environment and the rest of the system (the other jails and the base system).
FreeBSD jails are limited in the following ways:[6]
|
https://en.wikipedia.org/wiki/FreeBSD_jail
|
A free-net was originally a computer system or network that provided public access to digital resources and community information, including personal communications, through modem dialup via the public switched telephone network. The concept originated in the health sciences to provide online help for medical patients.[1][2] With the development of the Internet, free-net systems became the first to offer limited Internet access to the general public in support of their non-profit community work. The Cleveland Free-Net (cleveland.freenet.edu), founded in 1986, was the pioneering community network of this kind in the world.[3][4]
Any person with a personal computer, or with access from public terminals in libraries, could register for an account on a free-net, and was assigned an email address. Other services often included Usenet newsgroups, chat rooms, IRC, telnet, and archives of community information, delivered either with text-based Gopher software or later the World-Wide Web.
The word mark Free-Net was a registered trademark of the National Public Telecomputing Network (NPTN), founded in 1989 by Tom Grundner at Case Western Reserve University. NPTN was a non-profit organization dedicated to establishing and developing free, public-access digital information and communication services for the general public.[5] It closed operations in 1996, filing for Chapter 7 bankruptcy.[6] However, prior use of the term created some conflicts.[7] NPTN distributed the software package FreePort, developed at Case Western Reserve, that was used and licensed by many of the free-net sites.
The Internetdomain namefreenet.orgwas first registered by the Greater Detroit Free-Net (detroit.freenet.org), a non-profit community system in Detroit, MI, and a member of the NPTN. The Greater Detroit Free-Net provided other subdomains to several free-net systems during its operation from 1993 to approximately 2001.
Unlike commercialInternet service providers, free-nets originally provided direct terminal-based dialup, instead of other networked connections, such asPoint-to-Point Protocol(PPP). The development of Internet access with cheaper and faster connections, and the advent of theWorld-Wide Webmade the original free-net community concept obsolete.
A number of free-nets, including the original Cleveland Free-Net, have shut down or changed their focus. Free-nets have always been locally governed, so interpretation of their mission to remove barriers to access and provide a forum for community information, as well as services offered, can vary widely. As text-based Internet became less popular, some of the original free-nets have made available PPP dialup and more recently DSL services, as a revenue generating mechanism, with some now transitioning into thecommunity wireless movement.
Several free-net systems continue under new mission statements.Rochester Free-Net(Rochester, New York), for instance, focuses on hosting community service organizations (over 500 to date) as well as seminars about Internet use to the community at no charge.Austin FreeNet(Austin, Texas) now provides technology training and access to residents of the city, "fostering skills that enable people to succeed in a digital age."[8]
|
https://en.wikipedia.org/wiki/Free-net
|
Super Dimension Fortress (SDF, also known as freeshell.org) is a non-profit public access UNIX shell provider on the Internet. It has been in continual operation since 1987 as a non-profit social club. The name is derived from the Japanese anime series Super Dimension Fortress Macross; the original SDF server was a Bulletin board system created by Ted Uhlemann for fellow Japanese anime fans.[1] From its BBS roots, which have been well documented as part of the BBS: The Documentary project, SDF has grown into a feature-rich provider serving members around the world.
SDF provides freeUnix shellaccess,webhosting and many other features at the user membership level. Additional programs, capabilities and resources are available at "patron" and "sustaining" level memberships, which are granted with one-time or recurring dues in support of the SDF system.
The SDF network of systems that serves its membership currently includes NetBSD servers for regular use (running on DEC Alpha- and AMD Opteron-powered hardware) as well as retrocomputing environments: a TWENEX system running the Panda Distribution TOPS-20 MONITOR 7.1 on two XKL TOAD-2 computers,[2][3] a Symbolics Genera system, and an ITS system.[4]
SDF also hosts its own instances ofsocial mediawebsites from thefediverse, including aMastodonmicroblogging service,[5]aPixelfedimage sharing service,[6]and aLemmylink aggregator with discussion.[7]In addition, SDF hosts aMatrixchat server.[8]
SDF provides free Unix shell access and web hosting to its users. In addition, SDF provides increasingly rare services such asdial-up internet access, andGopherhosting. SDF is one of very few organizations in the world still actively promoting the gopher protocol,[9][10]an alternate protocol that existed at the introduction of the modern World Wide Web.[11]
The system contains thousands of programs and utilities, including acommand-lineBBS called BBOARD,[12]a chat program called COMMODE,[13]email programs, webmail, social networking programs, developer tools and games. Most of the applications hosted at SDF are accessed via the command-line, and SDF provides K-12 and college classrooms the free use of computing resources for Unix education.[14]
SDF also supports multiple retrocomputing experiences, including free user accounts onTOPS-20andSymbolics Generaoperating systems that are running live and accessible via the internet.
There are additional services that are made available on SDF systems to users who apply to be "patrons" and pay one-time dues of US$36 for "Lifetime Membership", and still more services are available at the US$9/quarter "sustaining membership" level, including services such as NextCloud and access to a large disc-array server. At the sustaining membership level, members are authorized to validate new users to SDF's free User level of membership (otherwise, new members may submit US$1 to be validated).
There are also specialized privileges which patron and sustaining level users can obtain to gain access to particular technologies, including mailing lists, Voice-over-IP, databases, virtual private network (VPN), and domain registration.
In 1987, Ted Uhlemann started SDF on anApple IIemicrocomputerrunning "Magic City Micro-BBS" underProDOS. The system was run as a "Japanese AnimeSIG" known as the SDF-1. In 1989, Uhlemann and Stephen Jones operated SDF very briefly as aDragCit Citadel BBSbefore attempting to use an Intel x86 UNIX clone calledCoherent.
Unhappy with the restrictive menu driven structure of existing BBS systems, Uhlemann, Jones and Daniel Finster created aUNIXSystem VBBS in 1990, initially running on ani386system, which later became anAT&T 3B2/400 and 500, and joined the lonestar.orgUUCPnetwork. Three additional phone lines were installed in late 1991.
In the fall of 1992, Uhlemann and Finster left SDF to start one of the first commercialInternetcompanies in Texas, Texas Metronet.
SDF continued to grow, expanding to ten lines in 1993 along with aSLIPconnection provided by cirr.com. UUCP was still heavily relied upon forUsenetnews and email.
In 1997, SDF (then with about 15,000 users) migrated toLinux. The migration to Linux marked a turning point, as the system started coming under attack like it never had before in its history. Jones calls the Linux periodthe dark age.
In part due to the number of attacks undertaken by malicious users against SDF, the years 2000 and 2001 saw SDF migrate from Linux toNetBSDand from Intelx86toDEC Alpha. This migration included relocation of the servers fromLewisville, TexastoSeattle,Washington. The Linux system was officially decommissioned on August 17, 2001. The occasion was captured in aCOMMODE Logpreserved by one of SDF's users.[15](COMMODE is aDEC TOPS-20chat systemported by Jones toUnixas an executableKornShellscript.)
Although SDF Public Access UNIX System was registered as an operating business in 1993 according to the Dallas County Records Office, it was not until October 1, 2001, that the SDF Public Access UNIX System was formed as a Delaware not-for-profit corporation and subsequently granted 501(c)(7) non-profit membership club status by the IRS.[16] SDF had operated under the auspices of the MALR corporation between 1995 and 2001.
As of May 2016, SDF was composed of 47,572 users from around the world.[citation needed] SDF users include engineers, computer programmers, students, artists and professionals.
SDF.org is a development site forNetBSD, and in 2018, SDF was the largest NetBSD installation in the world.[17]
|
https://en.wikipedia.org/wiki/SDF_Public_Access_Unix_System
|
Slirp (sometimes capitalized SLiRP) is a software program that emulates a PPP, SLIP, or CSLIP connection to the Internet using a text-based shell account. Its original purpose became largely obsolete as dedicated dial-up PPP connections and broadband Internet access became widely available and inexpensive. It then found additional use in connecting mobile devices, such as PDAs, via their serial ports. Another significant use case is firewall piercing/port forwarding.[1][2] One typical use of Slirp creates a general purpose network connection over an SSH session on which port forwarding is restricted. Another use case is to create external connectivity for unprivileged containers.
Shell accounts normally only allow the use ofcommand lineortext-basedsoftware, but by logging into a shell account and running Slirp on the remote server, a user can transform their shell account into a general purpose SLIP/PPP network connection, allowing them to run anyTCP/IP-based application—including standardGUIsoftware such as the formerly popularNetscape Navigator—on their computer. This was especially useful in the 1990s because simple shell accounts were less expensive and/or more widely available than full SLIP/PPP accounts.[3]
In the mid-1990s, numerous universities provided dial-up shell accounts (to their faculty, staff, and students). These command line-only connections became more versatile with SLIP/PPP, enabling the use of arbitrary TCP/IP-based applications. Many guides to using university dial-up connections with Slirp were published online. Use of TCP/IP emulation software like Slirp and its commercial competitor TIA was banned by some shell account providers, who believed its users violated their terms of service or consumed too much bandwidth.[4][5]
Slirp is also useful for connectingPDAsand other mobile devices to the Internet: by connecting such a device to a computer running Slirp, via aserial cableorUSB, the mobile device can connect to the Internet.[6]
Unlike a true SLIP/PPP connection, provided by a dedicated server, a Slirp connection does not strictly obey the principle of end-to-end connectivity envisioned by the Internet protocol suite. The remote end of the connection, running on the shell account, cannot allocate a new IP address and route traffic to it.[7] Thus the local computer cannot accept arbitrary incoming connections, although Slirp can use port forwarding to accept incoming traffic for specific ports.
This limitation is similar to that of network address translation. It can provide enhanced security as a side effect; it can also enforce policies and act as a firewall between the local computer and the Internet.[7]
Slirp is free software licensed under a BSD-like, modified 4-clause BSD license by its original author. After the original author stopped maintaining it, Kelly Price took over as maintainer.[8] There were no releases from Kelly Price after 2006. Debian maintainers have taken over some maintenance tasks, such as modifying Slirp to work correctly on 64-bit computers.[9] In 2019,[10] a more actively maintained Slirp repository was used by slirp4netns to provide network connectivity for unprivileged, rootless containers and VMs.
Despite being largely obsolete, Slirp had a great influence on the networking stacks used in virtual machines and other virtualized environments. The established practice for connecting virtual machines to the host's network stack was to use various packet injection mechanisms. Raw sockets, one such mechanism, were originally used for that purpose and, due to many problems and limitations, were later replaced with the TAP device.
Packet injection is a privileged operation that may introduce a security threat, something that the introduction of the TAP device solved only partially. The Slirp-derived NAT implementation brought a solution to this long-standing problem. It was discovered that Slirp has a full NAPT implementation as stand-alone user-space code, whereas other NAT engines are usually embedded into a network protocol stack and/or do not cooperate with the host OS when doing PAT (they use their own port ranges and require packet injection). The QEMU project adopted the appropriate code portions of the Slirp package and got permission from its original authors to re-license it under the 3-clause BSD license.[11] This license change allowed many other FOSS projects to adopt the QEMU-provided Slirp portions, which was (and still is) not possible with the original Slirp codebase because of license compatibility problems. Some of the notable adopters are the VDE and VirtualBox projects.
|
https://en.wikipedia.org/wiki/Slirp
|
Free software, libre software, or libreware,[1][2] sometimes known as freedom-respecting software, is computer software distributed under terms that allow users to run the software for any purpose as well as to study, change, and distribute it and any adapted versions.[3][4][5][6] Free software is a matter of liberty, not price; all users are legally free to do what they want with their copies of a free software (including profiting from them) regardless of how much is paid to obtain the program.[7][2] Computer programs are deemed "free" if they give end-users (not just the developer) ultimate control over the software and, subsequently, over their devices.[5][8]
The right to study and modify a computer program entails that thesource code—the preferred format for making changes—be made available to users of that program. While this is often called "access to source code" or "public availability", the Free Software Foundation (FSF) recommends against thinking in those terms,[9]because it might give the impression that users have an obligation (as opposed to a right) to give non-users a copy of the program.
Although the term "free software" had already been used loosely in the past and other permissive software like theBerkeley Software Distributionreleased in 1978 existed,[10]Richard Stallmanis credited with tying it to the sense under discussion and starting thefree software movementin 1983, when he launched theGNU Project: a collaborative effort to create a freedom-respectingoperating system, and to revive the spirit of cooperation once prevalent amonghackersduring the early days of computing.[11][12]
Free software differs from:
For software under the purview ofcopyrightto be free, it must carry asoftware licensewhereby the author grants users the aforementioned rights. Software that is not covered by copyright law, such as software in thepublic domain, is free as long as the source code is also in the public domain, or otherwise available without restrictions.
Proprietary software uses restrictive software licences orEULAsand usually does not provide users with the source code. Users are thus legally or technically prevented fromchangingthe software, and this results in reliance on the publisher to provide updates, help, and support. (See alsovendor lock-inandabandonware). Users often may notreverse engineer, modify, or redistribute proprietary software.[15][16]Beyond copyright law, contracts and a lack of source code, there can exist additional obstacles keeping users from exercising freedom over a piece of software, such assoftware patentsanddigital rights management(more specifically,tivoization).[17]
Free software can be a for-profit, commercial activity or not. Some free software is developed by volunteer computer programmers while other software is developed by corporations, or even by both.[18][7]
Although both definitions refer to almost equivalent corpora of programs, the Free Software Foundation recommends using the term "free software" rather than "open-source software" (an alternative, yet similar, concept coined in 1998), because the goals and messaging are quite dissimilar. According to the Free Software Foundation, "Open source" and its associated campaign mostly focus on the technicalities of thepublic development modeland marketing free software to businesses, while taking the ethical issue of user rights very lightly or even antagonistically.[19]Stallman has also stated that considering the practical advantages of free software is like considering the practical advantages of not being handcuffed, in that it is not necessary for an individual to consider practical reasons in order to realize that being handcuffed is undesirable in itself.[20]
The FSF also notes that "Open Source" has exactly one specific meaning in common English, namely that "you can look at the source code." It states that while the term "Free Software" can lead to two different interpretations, at least one of them is consistent with the intended meaning unlike the term "Open Source".[a]The loan adjective "libre" is often used to avoid the ambiguity of the word "free" in theEnglish language, and the ambiguity with the older usage of "free software" as public-domain software.[10](SeeGratis versus libre.)
The first formal definition of free software was published by FSF in February 1986.[21]That definition, written byRichard Stallman, is still maintained today and states that software is free software if people who receive a copy of the software have the following four freedoms.[22][23]The numbering begins with zero, not only as a spoof on the common usage ofzero-based numberingin programming languages, but also because "Freedom 0" was not initially included in the list, but later added first in the list as it was considered very important.
Freedoms 1 and 3 requiresource codeto be available because studying and modifying software without its source code can range from highly impractical to nearly impossible.
Thus, free software means thatcomputer usershave the freedom to cooperate with whom they choose, and to control the software they use. To summarize this into a remark distinguishinglibre(freedom) software fromgratis(zero price) software, the Free Software Foundation says: "Free software is a matter of liberty, not price. To understand the concept, you should think of 'free' as in 'free speech', not as in 'free beer'".[22](SeeGratis versus libre.)
In the late 1990s, other groups published their own definitions that describe an almost identical set of software. The most notable areDebian Free Software Guidelinespublished in 1997,[24]andThe Open Source Definition, published in 1998.
TheBSD-based operating systems, such asFreeBSD,OpenBSD, andNetBSD, do not have their own formal definitions of free software. Users of these systems generally find the same set of software to be acceptable, but sometimes seecopyleftas restrictive. They generally advocatepermissive free software licenses, which allow others to use the software as they wish, without being legallyforcedto provide the source code. Their view is that this permissive approach is more free. TheKerberos,X11, andApachesoftware licenses are substantially similar in intent and implementation.
There are thousands of free applications and many operating systems available on the Internet. Users can easily download and install those applications via apackage managerthat comes included with mostLinux distributions.
TheFree Software Directorymaintains a large database of free-software packages. Some of the best-known examples includeLinux-libre, Linux-based operating systems, theGNU Compiler CollectionandC library; theMySQLrelational database; theApacheweb server; and theSendmailmail transport agent. Other influential examples include theEmacstext editor; theGIMPraster drawing and image editor; theX Window Systemgraphical-display system; theLibreOfficeoffice suite; and theTeXandLaTeXtypesetting systems.
From the 1950s up until the early 1970s, it was normal for computer users to have thesoftware freedomsassociated with free software, which was typicallypublic-domain software.[10]Softwarewas commonly shared by individuals who used computers and by hardware manufacturers who welcomed the fact that people were making software that made their hardware useful. Organizations of users and suppliers, for example,SHARE, were formed to facilitate exchange of software. As software was often written in aninterpreted languagesuch asBASIC, thesource codewas distributed to use these programs. Software was also shared and distributed as printed source code (Type-in program) incomputer magazines(likeCreative Computing,SoftSide,Compute!,Byte, etc.) and books, like the bestsellerBASIC Computer Games.[25]By the early 1970s, the picture changed: software costs were dramatically increasing, a growing software industry was competing with the hardware manufacturer's bundled software products (free in that the cost was included in the hardware cost), leased machines required software support while providing no revenue for software, and some customers able to better meet their own needs did not want the costs of "free" software bundled with hardware product costs. InUnited States vs.IBM, filed January 17, 1969, the government charged that bundled software wasanti-competitive.[26]While some software might always be free, there would henceforth be a growing amount of software produced primarily for sale. In the 1970s and early 1980s, thesoftware industrybegan using technical measures (such as only distributingbinary copiesofcomputer programs) to preventcomputer usersfrom being able to study or adapt the software applications as they saw fit. In 1980,copyrightlaw was extended to computer programs.
In 1983,Richard Stallman, one of the original authors of the popularEmacsprogram and a longtime member of thehackercommunity at theMIT Artificial Intelligence Laboratory, announced theGNU Project, the purpose of which was to produce a completely non-proprietaryUnix-compatibleoperating system, saying that he had become frustrated with the shift in climate surrounding the computer world and its users. In his initial declaration of the project and its purpose, he specifically cited as a motivation his opposition to being asked to agree tonon-disclosure agreementsand restrictive licenses which prohibited the free sharing of potentially profitable in-development software, a prohibition directly contrary to the traditionalhacker ethic. Software development for theGNU operating systembegan in January 1984, and theFree Software Foundation(FSF) was founded in October 1985. He developed a free software definition and the concept of "copyleft", designed to ensuresoftware freedomfor all.
Some non-software industries are beginning to use techniques similar to those used in free software development for their research and development process; scientists, for example, are looking towards more open development processes, and hardware such as microchips is beginning to be developed with specifications released under copyleft licenses (see the OpenCores project, for instance). Creative Commons and the free-culture movement have also been largely influenced by the free software movement.
In 1983,Richard Stallman, longtime member of thehackercommunity at theMIT Artificial Intelligence Laboratory, announced the GNU Project, saying that he had become frustrated with the effects of the change in culture of the computer industry and its users.[27]Software development for the GNU operating system began in January 1984, and theFree Software Foundation(FSF) was founded in October 1985. An article outlining the project and its goals was published in March 1985 titled theGNU Manifesto. The manifesto included significant explanation of the GNU philosophy,Free Software Definitionand "copyleft" ideas.
The Linux kernel, started by Linus Torvalds, was released as freely modifiable source code in 1991. The first licence was a proprietary software licence. However, with version 0.12 in February 1992, he relicensed the project under the GNU General Public License.[28] Much like Unix, Torvalds' kernel attracted the attention of volunteer programmers. FreeBSD and NetBSD (both derived from 386BSD) were released as free software when the USL v. BSDi lawsuit was settled out of court in 1993. OpenBSD forked from NetBSD in 1995. Also in 1995, the Apache HTTP Server, commonly referred to as Apache, was released under the Apache License 1.0.
All free-software licenses must grant users all the freedoms discussed above. However, unless the applications' licenses are compatible, combining programs by mixing source code or directly linking binaries is problematic, because oflicense technicalities. Programs indirectly connected together may avoid this problem.
The majority of free software falls under a small set of licenses. The most popular of these licenses are:[30][31]
The Free Software Foundation and the Open Source Initiative both publish lists of licenses that they find to comply with their own definitions of free software and open-source software respectively:
The FSF list is not prescriptive: free-software licenses can exist that the FSF has not heard about, or considered important enough to write about. So it is possible for a license to be free and not in the FSF list. The OSI list only lists licenses that have been submitted, considered and approved. All open-source licenses must meet theOpen Source Definitionin order to be officially recognized as open source software. Free software, on the other hand, is a more informal classification that does not rely on official recognition. Nevertheless, software licensed under licenses that do not meet the Free Software Definition cannot rightly be considered free software.
Apart from these two organizations, theDebianproject is seen by some to provide useful advice on whether particular licenses comply with theirDebian Free Software Guidelines. Debian does not publish a list ofapprovedlicenses, so its judgments have to be tracked by checking what software they have allowed into their software archives. That is summarized at the Debian web site.[32]
It is rare that a license announced as being in compliance with the FSF guidelines does not also meet the Open Source Definition, although the reverse is not necessarily true (for example, the NASA Open Source Agreement is an OSI-approved license, but non-free according to the FSF).
There are different categories of free software.
Proponents of permissive and copyleft licenses disagree on whether software freedom should be viewed as anegativeorpositive liberty. Due to their restrictions on distribution, not everyone considers copyleft licenses to be free.[34]Conversely, a permissive license may provide an incentive to create non-free software by reducing the cost of developing restricted software. Since this is incompatible with the spirit of software freedom, many people consider permissive licenses to be less free than copyleft licenses.[35]
There is debate over thesecurityof free software in comparison to proprietary software, with a major issue beingsecurity through obscurity. A popular quantitative test in computer security is to use relative counting of known unpatched security flaws. Generally, users of this method advise avoiding products that lack fixes for known security flaws, at least until a fix is available.
Free software advocates strongly believe that this methodology is biased by counting more vulnerabilities for the free software systems, since their source code is accessible and their community is more forthcoming about what problems exist as a part offull disclosure,[39][40]and proprietary software systems can have undisclosed societal drawbacks, such as disenfranchising less fortunate would-be users of free programs. As users can analyse and trace the source code, many more people with no commercial constraints can inspect the code and find bugs and loopholes than a corporation would find practicable. According to Richard Stallman, user access to the source code makes deploying free software with undesirable hiddenspywarefunctionality far more difficult than for proprietary software.[41]
Some quantitative studies have been done on the subject.[42][43][44][45]
In 2006, OpenBSD started the first campaign against the use of binary blobs in kernels. Blobs are usually freely distributable device drivers for hardware from vendors that do not reveal driver source code to users or developers. This effectively restricts the users' freedom to modify the software and distribute modified versions. Also, since the blobs are undocumented and may have bugs, they pose a security risk to any operating system whose kernel includes them. The proclaimed aim of the campaign against blobs is to collect hardware documentation that allows developers to write free software drivers for that hardware, ultimately enabling all free operating systems to become or remain blob-free.
The issue of binary blobs in the Linux kernel and other device drivers motivated some developers in Ireland to launch gNewSense, a Linux-based distribution with all the binary blobs removed. The project received support from the Free Software Foundation and stimulated the creation, headed by the Free Software Foundation Latin America, of the Linux-libre kernel.[46] As of October 2012, Trisquel is the most popular FSF-endorsed Linux distribution ranked by Distrowatch (over 12 months).[47] While Debian is not endorsed by the FSF and does not use Linux-libre, it is also a popular distribution available without kernel blobs by default since 2011.[46]
The Linux community uses the term "blob" to refer to all nonfree firmware in a kernel whereas OpenBSD uses the term to refer to device drivers. The FSF does not consider OpenBSD to be blob free under the Linux community's definition of blob.[48]
Selling softwareunder any free-software licence is permissible, as is commercial use. This is true for licenses with or withoutcopyleft.[18][49][50]
Since free software may be freely redistributed, it is generally available at little or no fee. Free software business models are usually based on adding value such as customization, accompanying hardware, support, training, integration, or certification.[18]Exceptions exist however, where the user is charged to obtain a copy of the free application itself.[51]
Fees are usually charged for distribution on compact discs and bootable USB drives, or for services of installing or maintaining the operation of free software. Development of large, commercially used free software is often funded by a combination of user donations,crowdfunding, corporate contributions, and tax money. TheSELinuxproject at the United StatesNational Security Agencyis an example of a federally funded free-software project.
Proprietary software, on the other hand, tends to use a different business model, where a customer of the proprietary application pays a fee for a license to legally access and use it. This license may grant the customer the ability to configure some or no parts of the software themselves. Often some level of support is included in the purchase of proprietary software, but additional support services (especially for enterprise applications) are usually available for an additional fee. Some proprietary software vendors will also customize software for a fee.[52]
The Free Software Foundation encourages selling free software. As the Foundation has written, "distributing free software is an opportunity to raise funds for development. Don't waste it!".[7]For example, the FSF's own recommended license (theGNU GPL) states that "[you] may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee."[53]
Microsoft CEO Steve Ballmer stated in 2001 that "open source is not available to commercial companies. The way the license is written, if you use any open-source software, you have to make the rest of your software open source."[54] This misunderstanding is based on a requirement of copyleft licenses (like the GPL) that if one distributes modified versions of software, they must release the source and use the same license. This requirement does not extend to other software from the same developer.[55] The claim of incompatibility between commercial companies and free software is also a misunderstanding. There are several large companies, e.g. Red Hat and IBM (IBM acquired Red Hat in 2019),[56] which do substantial commercial business in the development of free software.[citation needed]
Free software played a significant part in the development of the Internet, the World Wide Web and the infrastructure ofdot-com companies.[57][58]Free software allows users to cooperate in enhancing and refining the programs they use; free software is apure public goodrather than aprivate good. Companies that contribute to free software increase commercialinnovation.[59]
"We migrated key functions from Windows to Linux because we needed an operating system that was stable and reliable – one that would give us in-house control. So if we needed to patch, adjust, or adapt, we could."
The economic viability of free software has been recognized by large corporations such asIBM,Red Hat, andSun Microsystems.[62][63][64][65][66]Many companies whose core business is not in the IT sector choose free software for their Internet information and sales sites, due to the lower initial capital investment and ability to freely customize the application packages. Most companies in the software business include free software in their commercial products if the licenses allow that.[18]
Free software is generally available at no cost and can result in permanently lower TCO (total cost of ownership) compared toproprietary software.[67]With free software, businesses can fit software to their specific needs by changing the software themselves or by hiring programmers to modify it for them. Free software often has no warranty, and more importantly, generally does not assign legal liability to anyone. However, warranties are permitted between any two parties upon the condition of the software and its usage. Such an agreement is made separately from the free software license.
A report by the Standish Group estimates that adoption of free software has caused a drop in revenue to the proprietary software industry by about $60 billion per year.[68] Eric S. Raymond argued that the term free software is too ambiguous and intimidating for the business community. Raymond promoted the term open-source software as a friendlier alternative for the business and corporate world.[69]
|
https://en.wikipedia.org/wiki/Free_software
|
In computer networks, a tunneling protocol is a communication protocol which allows for the movement of data from one network to another. Tunneling protocols can, for example, allow private network communications to be sent across a public network (such as the Internet), or allow one network protocol to be carried over an incompatible network, through a process called encapsulation.
Because tunneling involves repackaging the traffic data into a different form, perhaps withencryptionas standard, it can hide the nature of the traffic that is run through a tunnel.
Tunneling protocols work by using the data portion of a packet (the payload) to carry the packets that actually provide the service. Tunneling uses a layered protocol model such as those of the OSI or TCP/IP protocol suite, but usually violates the layering when using the payload to carry a service not normally provided by the network. Typically, the delivery protocol operates at an equal or higher level in the layered model than the payload protocol.
A tunneling protocol may, for example, allow a foreign protocol to run over a network that does not support that particular protocol, such as runningIPv6overIPv4.
Another important use is to provide services that are impractical or unsafe to be offered using only the underlying network services, such as providing a corporatenetwork addressto a remote user whose physical network address is not part of the corporate network.
Users can also use tunneling to "sneak through" a firewall, using a protocol that the firewall would normally block, but "wrapped" inside a protocol that the firewall does not block, such asHTTP. If the firewall policy does not specifically exclude this kind of "wrapping", this trick can function to get around the intended firewall policy (or any set of interlocked firewall policies).
Another HTTP-based tunneling method uses the HTTP CONNECT method/command. A client issues the HTTP CONNECT command to an HTTP proxy. The proxy then makes a TCP connection to a particular server:port, and relays data between that server:port and the client connection.[1] Because this creates a security hole, CONNECT-capable HTTP proxies commonly restrict access to the CONNECT method. The proxy allows connections only to specific ports, such as 443 for HTTPS.[2]
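As a small illustration of this handshake, Python's standard http.client module can ask a proxy to open such a tunnel; the proxy and destination host names below are placeholders, not real infrastructure.

# Sketch: tunnelling an HTTPS request through an HTTP proxy via CONNECT.
# Host names and ports are placeholder values.
import http.client

# Connect to the proxy itself first ...
conn = http.client.HTTPSConnection("proxy.example.com", 8080)
# ... then ask it to open a TCP tunnel to the real destination on port 443.
# Under the hood this sends:  CONNECT example.org:443 HTTP/1.1
conn.set_tunnel("example.org", 443)

conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)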
Other tunneling methods able to bypass network firewalls make use of different protocols such as DNS,[3] MQTT,[4] and SMS.[5]
As an example of network layer over network layer, Generic Routing Encapsulation (GRE), a protocol running over IP (IP protocol number 47), often serves to carry IP packets, with RFC 1918 private addresses, over the Internet using delivery packets with public IP addresses. In this case, the delivery and payload protocols are the same, but the payload addresses are incompatible with those of the delivery network.
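For illustration only, the basic GRE header is just four bytes: two bytes of flags/version followed by the EtherType of the payload (0x0800 for IPv4). The sketch below merely prepends that header to an already-serialised inner IP packet; it does not build the outer delivery IP header, which a raw socket or the operating system would normally supply.

# Sketch: prepending a minimal GRE header to an inner (payload) IP packet.
import struct

GRE_PROTO_IPV4 = 0x0800  # EtherType of the encapsulated protocol

def gre_encapsulate(inner_ip_packet: bytes) -> bytes:
    # Basic GRE header: no checksum/key/sequence flags, version 0,
    # followed by the 16-bit protocol type of the payload.
    header = struct.pack("!HH", 0x0000, GRE_PROTO_IPV4)
    return header + inner_ip_packet

# 'inner' stands in for a fully formed IPv4 packet built or captured elsewhere.
inner = b"\x45\x00" + b"\x00" * 18
print(len(gre_encapsulate(inner)), "bytes after encapsulation")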
It is also possible to establish a connection using the data link layer. The Layer 2 Tunneling Protocol (L2TP) allows the transmission of frames between two nodes. A tunnel is not encrypted by default: the TCP/IP protocol chosen determines the level of security.
SSH uses port 22 to enable data encryption of payloads being transmitted over a public network (such as the Internet) connection, thereby providing VPN functionality. IPsec has an end-to-end Transport Mode, but can also operate in a tunneling mode through a trusted security gateway.
To understand a particular protocol stack imposed by tunneling, network engineers must understand both the payload and delivery protocol sets.
Tunneling a TCP-encapsulating payload (such as PPP) over a TCP-based connection (such as SSH's port forwarding) is known as "TCP-over-TCP", and doing so can induce a dramatic loss in transmission performance, known as the TCP meltdown problem,[6][7] which is why virtual private network (VPN) software may instead use a protocol simpler than TCP for the tunnel connection. TCP meltdown occurs when a TCP connection is stacked on top of another. The underlying layer may detect a problem and attempt to compensate, and the layer above it then overcompensates because of that, and this overcompensation causes the delays and degraded transmission performance.
A Secure Shell (SSH) tunnel consists of an encrypted tunnel created through an SSH protocol connection. Users may set up SSH tunnels to transfer unencrypted traffic over a network through an encrypted channel. It is a software-based approach to network security and the result is transparent encryption.[8]
For example, Microsoft Windows machines can share files using the Server Message Block (SMB) protocol, a non-encrypted protocol. If one were to mount a Microsoft Windows file-system remotely through the Internet, someone snooping on the connection could see transferred files. To mount the Windows file-system securely, one can establish an SSH tunnel that routes all SMB traffic to the remote fileserver through an encrypted channel. Even though the SMB protocol itself contains no encryption, the encrypted SSH channel through which it travels offers security.
Once an SSH connection has been established, the tunnel starts with SSH listening to a port on the remote or local host. Any connections to it are forwarded to the specified address and port, originating from the opposing (remote or local, as previously) host.
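Stripped of SSH and encryption, that forwarding step is conceptually just a TCP relay. The following Python sketch shows the idea in isolation: it listens on a local port and copies bytes in both directions to a fixed destination; the addresses and ports are placeholder values, and this illustrates the relay concept rather than SSH itself.

# Sketch of plain TCP port forwarding (the relay an SSH tunnel performs,
# minus the encryption).  Addresses and ports are placeholders.
import socket
import threading

LISTEN_PORT = 8080                       # local port that clients connect to
TARGET = ("fileserver.example.org", 80)  # where traffic is relayed to

def pump(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes one way until the source side closes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def serve() -> None:
    listener = socket.create_server(("127.0.0.1", LISTEN_PORT))
    while True:
        client, _ = listener.accept()
        remote = socket.create_connection(TARGET)
        threading.Thread(target=pump, args=(client, remote), daemon=True).start()
        threading.Thread(target=pump, args=(remote, client), daemon=True).start()

if __name__ == "__main__":
    serve()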
TheTCP meltdown problemis often not a problem when using OpenSSH's port forwarding, because many use cases do not entail TCP-over-TCP tunneling; the meltdown is avoided because the OpenSSH client processes the local, client-side TCP connection in order to get to the actual payload that is being sent, and then sends that payload directly through the tunnel's own TCP connection to the server side, where the OpenSSH server similarly "unwraps" the payload in order to "wrap" it up again for routing to its final destination.[9]Naturally, this wrapping and unwrapping also occurs in the reverse direction of the bidirectional tunnel.
SSH tunnels provide a means to bypass firewalls that prohibit certain Internet services – so long as a site allows outgoing connections. For example, an organization may prohibit a user from accessing Internet web pages (port 80) directly without passing through the organization's proxy filter (which provides the organization with a means of monitoring and controlling what the user sees through the web). But users may not wish to have their web traffic monitored or blocked by the organization's proxy filter. If users can connect to an external SSH server, they can create an SSH tunnel to forward a given port on their local machine to port 80 on a remote web server. To access the remote web server, users would point their browser to the local port at http://localhost/
Some SSH clients support dynamic port forwarding that allows the user to create a SOCKS 4/5 proxy. In this case users can configure their applications to use their local SOCKS proxy server. This gives more flexibility than creating an SSH tunnel to a single port as previously described. SOCKS can free the user from the limitations of connecting only to a predefined remote port and server. If an application does not support SOCKS, a proxifier can be used to redirect the application to the local SOCKS proxy server. Some proxifiers, such as Proxycap, support SSH directly, thus avoiding the need for an SSH client.
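For illustration, the SOCKS4 request that such a dynamically forwarding client answers is tiny: a version byte, a command byte, the destination port and IPv4 address, and a NUL-terminated user-ID. The sketch below speaks that handshake directly to a local SOCKS port; the proxy port (1080) and the destination address are assumed placeholder values.

# Sketch: issuing a SOCKS4 CONNECT request to a local dynamic-forwarding
# proxy (for example one created with SSH's -D option).  Values are placeholders.
import socket
import struct

PROXY = ("127.0.0.1", 1080)               # assumed local SOCKS port
DEST_IP, DEST_PORT = "93.184.216.34", 80  # placeholder destination

s = socket.create_connection(PROXY)
# SOCKS4 request: version 4, command 1 (CONNECT), destination port and
# IPv4 address, then a (possibly empty) user id terminated by a NUL byte.
request = struct.pack("!BBH4s", 4, 1, DEST_PORT, socket.inet_aton(DEST_IP)) + b"\x00"
s.sendall(request)

reply = s.recv(8)
granted = len(reply) >= 2 and reply[1] == 0x5A   # 0x5A means request granted
print("tunnel granted" if granted else "request rejected")
# From here on, bytes written to 's' travel through the proxy to the destination.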
In recent versions of OpenSSH it is even allowed to create layer 2 or layer 3 tunnels if both ends have enabled such tunneling capabilities. This creates tun (layer 3, default) or tap (layer 2) virtual interfaces on both ends of the connection. This allows normal network management and routing to be used, and when used on routers, the traffic for an entire subnetwork can be tunneled. A pair of tap virtual interfaces function like an Ethernet cable connecting both ends of the connection and can join kernel bridges.
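As a rough sketch of what such a virtual interface looks like from user space (on Linux rather than via OpenSSH itself), the snippet below opens a tun device and reads raw IP packets from it; the ioctl constants are the commonly used Linux values, root privileges are required, and the interface name "tun0" is an assumption.

# Sketch: opening a Linux tun (layer 3) device and reading raw IP packets.
# Requires root; constants and the interface name are assumed Linux values.
import fcntl
import os
import struct

TUNSETIFF = 0x400454CA   # ioctl that configures the tun/tap device
IFF_TUN   = 0x0001       # layer 3 (tun) rather than layer 2 (tap)
IFF_NO_PI = 0x1000       # do not prepend packet-information bytes

tun = os.open("/dev/net/tun", os.O_RDWR)
ifreq = struct.pack("16sH", b"tun0", IFF_TUN | IFF_NO_PI)
fcntl.ioctl(tun, TUNSETIFF, ifreq)

# Each read now returns one raw IP packet routed to the tun0 interface;
# anything written back is injected into the host's network stack.
packet = os.read(tun, 2048)
print(len(packet), "bytes:", packet[:20].hex())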
Over the years, tunneling and data encapsulation in general have been frequently adopted for malicious reasons, in order to communicate covertly outside of a protected network.
In this context, known tunnels involve protocols such as HTTP,[10] SSH,[11] DNS,[12][13] and MQTT.[14]
|
https://en.wikipedia.org/wiki/Tunneling_protocol#Secure_shell_tunneling
|
The Big Electric Cat, named for an Adrian Belew song, was a public access computer system in New York City in the late 1980s, known on Usenet as node dasys1.
Based on a Stride Computer brand minicomputer running the UniStride Unix variant, the Big Electric Cat (sometimes known as BEC) provided dialup modem users with text terminal-based access to Usenet at no charge.
This was the first such system in New York and one of the first in the world. Previously, access to Usenet had been almost exclusively through systems at universities, or a few government and very few commercial installations. While Bulletin Board System culture and Fidonet existed at the time, systems which allowed the general public to access Usenet were virtually unknown. As with many early Internet and Usenet systems, a community, which had occasional outings to restaurants, began to form among users of the system.
BEC was started by four college students, with one of them, Rob Sweeney, owning the equipment. The other sysops were Charles Foreman, Lee Fischman, and Richard Newman.
A list of BBSes in the 212 Area Code[1]contains the following note, attributed to Lee Fischman
The movie referred to isBBS: The Documentary.[2]
BEC was not intended to be a profit-making operation, charging fees that were designed only to cover operating costs (Phrack reports $5 per month for an account at the end of 1989, though the system may have in fact been out of operation by then, and other sources note that the system was supported by donations)[3] and relying entirely on volunteer labor.[4]
In mid-1990,[5] after increasingly unreliable operation, The Big Electric Cat suffered what proved to be a fatal hardware failure, leaving a gap which was filled when some of its users founded one of the first commercial ISPs ever, Panix.[6]
2600 MagazinefounderEric Corleyused a Big Electric Cat account.[7]
|
https://en.wikipedia.org/wiki/The_Big_Electric_Cat
|
The Internet Adapter (TIA) was software created by Cyberspace Development in 1993 to allow Serial Line Internet Protocol (SLIP) connections over a shell account.[1][2] Point-to-Point Protocol (PPP) was added in 1995, by which time the software was marketed and sold by Intermind of Seattle. Shell accounts normally only allow the use of command-line or text-based software, but by logging into a shell account and starting the TIA daemon, a user could then run any TCP/IP-based application, including standard GUI software such as the then-popular Netscape Navigator, on their computer. This was especially useful at the time because simple shell accounts were much less expensive than full SLIP/PPP accounts. TIA was ported to a large number of Unix or Unix-like systems.
Usage of TIA declined rapidly with the advent of inexpensive PPP-enabled consumer-level dial-up access. Also, competition from alternatives such as the free software Slirp cut its market share. Cyberspace Development later sold its domain name and its owners went on to other projects, while Intermind moved on to push technology and automated data delivery.
|
https://en.wikipedia.org/wiki/The_Internet_Adapter
|
The Whole Earth 'Lectronic Link,[2] normally shortened to The WELL or The Well,[3] is a virtual community founded in 1985. It is one of the oldest continuously operating virtual communities. By 1993 it had 7,000 members, a staff of 12, and gross annual income of $2 million.[4] A 1997 feature in Wired magazine called it "The world's most influential online community."[5] In 2012, when it was last publicly offered for sale, it had 2,693 members.[6] It is best known for its Internet forums, but also provides email, shell accounts, and web pages. Discussion topics are organized into conferences that cover broad areas of interest.[2] User anonymity is prohibited.[2]
The WELL was started by Stewart Brand and Larry Brilliant in 1985. The name[7] follows the naming of some of Brand's earlier projects, including the Whole Earth Catalog. Initially The WELL was owned 50% by The Point Foundation, publishers of the Whole Earth Catalog and Whole Earth Review, and 50% by NETI Technologies Inc., a Vancouver-based company of which Larry Brilliant was at that time chairman. Its original management team—Matthew McClure, soon joined by Cliff Figallo and John Coate—collaborated with its early users to foster a sense of virtual community.[8] McClure, Coate and Figallo were all veterans of the 1970s commune called The Farm.
John Coate left the WELL to help create SFGate, the San Francisco Chronicle's first web site.[9] In 1991 Figallo hired Gail Ann Williams as a community manager. Williams, one of the principals of the satirical group the Plutonium Players, had been working in nonprofit theater management and was already an active member of the WELL.[10]
In 1992 Cliff Figallo also left his job at The WELL and long time WELL member Maurice Weitman was hired as general manager. Figallo's resignation letter to the Board cited changes in company approach: "I am too much identified with the permissive and accommodating attitude that has been part of The Well's growth to preside over a more restrictive régime."[8]
From 1994 to 1999 The WELL was owned by Bruce R. Katz, founder of Rockport, a manufacturer of walking shoes.[11] Katz upgraded the infrastructure and hired staff, but alarmed members with plans to franchise the WELL. "Let's just say there was a communications mismatch," Howard Rheingold wrote.[12]
In 1998 and earlier, The WELL was run as a California-based enterprise.[3]
In April 1999 it was acquired by Salon, several of whose founders, such as Scott Rosenberg, had previously been regular participants there. Wired reported, "The surprise move... gives Salon a dose of new credibility by tying it directly into a members-only community of scores of artists, writers, thinkers, scientists, programmers, and visionaries."[13]
In August 2005 Salon announced that it was looking for a buyer for The WELL, to concentrate on other business lines. In November 2006, a press release from The WELL said "As Salon has not found a suitable purchaser, it has determined that it is currently in the best interest of the company to retain this business and has therefore suspended all efforts to sell The WELL."[14]
In June 2012 Salon once again announced that it was looking for a buyer for The WELL as its subscriber base "did not bear financial promise". Salon also announced that it had entered into discussions with various parties interested in buying the well.com domain name and that the remaining WELL staff had been laid off at the end of May.[15]The community pledged money to take over The WELL itself and rehire important staff.[16]
In September 2012, Salon sold The WELL to a new corporation, The WELL Group Inc., owned by eleven investors who were all long-time members. The sale price was reported to be $400,000. Members have no official role in the management, but "can ... go back to what they do best: conversation. And complaining about the management."[17][18]The CEO was Earl Crabb, a programmer and supporter of the Bay Area folk music community, who died on February 20, 2015.[19]No announcement was made as to his successor.
The original hardware for the WELL was a VAX-11/750, which cost "a quarter of a million dollars and required a closet full of telephone lines and modems."[12] The WELL's core conferencing software, PicoSpan, is written in the C programming language and runs on Unix. PicoSpan was written by Marcus D. Watts for Network Technologies International (NETI). A license for PicoSpan, in exchange for a half interest in the company, was part of NETI's initial investment in The WELL (along with the VAX computer running the mt Xinu variant of Unix).[20] In 1996, the WELL began also using and licensing the "Engaged" conferencing software, which was built on top of PicoSpan and provides a Web-based user interface requiring less technological expertise from users. The Wall Street Journal was among the websites reported to use Engaged for online community.[21]
The WELL's conferencing system is organized into forums reflecting member interests, which include arts, health, business, regions, hobbies, spirituality, music, politics, games, software and many more. These community forums, known as conferences, are supervised by conference hosts who guide conversations and may enforce conference rules on civility and/or appropriateness. Initially all hosts were selected by staff members. In 1995, Gail Ann Williams changed the policies to enable user-created forums. Participants can create their own independent personal conferences—either viewable by any WELL member or privately viewable by those members on a restricted membership list—on any subject they please with any rules they like. Public conferences are open to all members, while private conferences are restricted to a list of users controlled by the conference hosts. Some "featured private" or "private independent" conferences (such as "Women on the WELL" and "Recovery") are listed in the WELL's directory and members may request admission to such conferences. Within the conferences, logged-in members can see the real name of the author of each post. The intent is to foster a more intimate community through "people taking responsibility for opinions, obsessions, insights, silliness, and an occasional faux pas."[22][23]
Women form a large percentage of the WELL's user community, and play strong leadership roles.[24]"[A]lthough women made up only 10 percent of people going online, they constituted 40 percent of the population on the WELL."[25]
Initially, in 1985, the WELL was adial-upbulletin board system(BBS) influenced byEIES.[26]Access to the WELL was via computer modem and phone line, then, when the internet opened to commercial traffic in the 1990s, the WELL became one of the original dial-up gatewayISPsto provide access to it. Over time, web technology evolved and support for dial-up access was dropped. Today users access the WELL via SSH or the web.
In addition to its conferencing services, the WELL also provided access to the Unix operating system for people who didn't have access to an institutional or corporate computer network, and management encouraged members to make and share Unix tools. This was described by early community manager John Coate as an early expression of what would later be called "maker culture."[27]Reflecting back on that era in 1993, Howard Rheingold said that these factors made it an attractive environment for "young computer wizards."[20]
Many early writings about the WELL stress members’ attempts to test utopian forms of self-government in the online community.Kevin Kellyrecalled the original goal was for the WELL to be cheap, open-ended, self-governing and self-designing.[20]Cliff Figallo said the "exercise of free speech and assembly in online interaction is among the most significant and important uses of electronic networking," and hoped that the WELL would be a grass-roots alternative to "electronic consumer shopping malls."[4]But members were shaken in 1990 when one popular and active member "scribbled" (deleted) all his posts, then died by suicide, despite other members’ attempts to reach out to him. A few years later, two members involved in a messy real-life relationship posted about it across several conferences, dividing the community and ultimately becoming a central narrative device forKatie Hafner'sbook about the WELL.[28][29]"People who had to live with each other, because they were all veteran addicts of the same social space, found themselves disliking one another," Howard Rheingold wrote.[20]
In retrospect, Gail Ann Williams said the "cyber utopianism" of the founders may have always been overly optimistic.[10]Although the ideal was egalitarian and democratic, the early pricing structure charged users based on their time spent connected to the service, which might have allowed wealthier users to dominate the conversations in what Williams called a "postocracy." Thomas Valovic, then a research manager with International Data Corporation and adjunct faculty at Northeastern University, theorized that a "single articulate and entertaining person" might be able to steer a discussion through "sheer number of postings," and that this tactic could be used effectively to spread propaganda: "The same, of course, is true of other online systems."[30]Valovic also noted that this early pricing structure gave an edge to people whose work subsidized their time on the WELL. For journalists whose work encouraged them to be online, the distinction between public and private discourse became blurred in WELL conversations, and it was not always easy to tell when people were speaking in their official roles: "[...]the online environment has a way of homogenizing work and play to the point that separating the two becomes increasingly difficult."[30]The WELL's utopianism was also challenged by its sale to Bruce Katz, whose vision for the company was more corporate. In her 1994 essay, “Pandora's Vox,” WELL memberCarmen Hermosilloobserved that by posting her thoughts and feelings where an online platform could profit from them, "I had commodified myself."[31]
On the other hand, during a panel at the 1994Conference on Human Factors in Computing Systems, Figallo reported that "encouraging the formation of core groups of users who shared their desire for minimal social disruption" had been generally successful in promoting free discussion without the need for heavy-handed intervention by management.[32]Looking back in a 2007 interview with Rolling Stone, Stewart Brand said, "Communes failed, drugs went nowhere, free love led pretty directly to AIDS. ... But the counterculture approach to computers – which was of great ingenuity and great enthusiasm, and great disinterest in either corporate or government approaches to their problems – absolutely flourished, and to a large extent created the Internet and the online revolution."[33]
Stewart Brand's original member agreement was "You Own Your Own Words" ("YOYOW"). Gail Ann Williams recalled the phrase had a number of different interpretations: In an era when it was uncertain how laws applied to online content, Brand intended it to place legal responsibility for posts on the people who wrote them, she said. But "a lot of people saw it as being about property, that it was about copyright, and other people saw it as meaning you have to own up to your words, if you say something heinous, it won't go away, you're going to have to live it down."[10] Currently, the agreement notes members have both the rights to their posted words and the responsibility for those words.[2] Members can also delete their posts at any time, but a placeholder indicates the former location and author of a deleted or "scribbled" post, as well as who deleted it.[34]
The WELL's influence online has been significant (see Katie Hafner's 1997 article, "The Epic Saga of the WELL".[5]) Howard Rheingold noted that both Steve Case, one of the founders of AOL, and Craig Newmark, who founded Craigslist, were WELL members before founding their companies.[12] Frequent in-person meetings of WELL members have also been an important facet of The WELL. Monthly WELL Office Parties began in September 1986[35] and continued for many years thereafter, in the Bay Area and elsewhere. Looking back at the early years, journalist Jon Carroll wrote, "Suddenly there were chili cook-offs and outings to ballgames and brunches and evenings of song... ."[36]
The "Berkeley Singthing," a casual gathering to play and sing popular music, is perhaps the longest running of the in-person gatherings of WELL members. Started in 1991, and taking its name from the Berkeley conference in the WELL where it was originally organized, it is one of the many ways that WELL members connect in the physical world.[37]
Sociologist Rebecca Adams noted that “Deadheads were electronic pioneers long before it became fashionable to use the Internet or populate the World Wide Web,” withGrateful Dead-related Usenet forums predating the creation of the first WELL conference for Deadheads on March 1, 1986.[38]MusicianDavid Gans, who was hosting an hour of Grateful Dead music on a San Francisco radio station, launched the conference with Bennett Falk and Mary Eisenhart as co-hosts.[39]The creation of the Grateful Dead conference led to a "growth spurt" in the number of WELL members, and in the early years, Deadheads who used its conferences to make plans, trade audiotapes or discuss lyrics were the largest source of revenue for the WELL. Matthew McClure, part of the WELL's original management team, recalled: “The Deadheads came online and seemed to know instinctively how to use the system to create a community around themselves... Suddenly our future looked assured.”[20]By 1997, Eric F. Wybenga's almanac of Grateful Dead resources said the WELL "is to Deadheads whatAOLis to the average American online."[40]
The WELL was the forum through which Grateful Dead lyricistJohn Perry Barlow,John Gilmore, andMitch Kapor, the founders of theElectronic Frontier Foundation, first met. Barlow wrote that a visit from an FBI agent investigating the theft of some Apple code made him aware how little law enforcement understood the Internet, and even though he was able to persuade the agent he was not involved in the case, he became concerned about the potential for overreach. EFF was formed in 1990 andMike Godwin, also a WELL member, was hired as the first on-staff attorney. Barlow and Kapor hosted the EFF conference on the WELL, which discussed topics related to free speech and internet regulation.[41]Godwin helped publicize flaws in a notorious early study of pornography on the Internet, which had led to calls for legislative censorship.[42]
Craigslist founderCraig Newmarkjoined the WELL shortly after moving to San Francisco in 1993, and was inspired by members’ discussions about internet community, as well as by examples of members offering other members time and professional help without compensation. In 1995, he started sending out an email list of events and job opportunities to friends. Even after this list expanded to a publicLISTSERVand incorporated as a for-profit, Newmark said he viewed it as a community trust and emphasized, "The purpose of the Internet is to connect people to make our lives better."[43]
Salon.comwas founded in the wake of theSan Francisco newspaper strike of 1994by a group of journalists that included WELL members. "The Well [sic] is where a lot of us got our first experience online," Salon co-founderScott Rosenbergwrote. "In Salon's formative days in 1995 we actually used a private conference [on the WELL] to plan our launch."[44]Salon hired WELL management team member Cliff Figallo in 1998 and WELL conference host Mary Elizabeth Williams to direct its online community, Table Talk. After Salon purchased the WELL in 1999, WELL community manager Gail Ann Williams (no relation) became a Salon employee.[10]
In 1995,Tsutomu Shimomuranoticed some of his stolen software had been stored in a WELL account. He worked with WELL management to track and identify hackerKevin Mitnickas the culprit. This effort was described in Shimomura's bookTakedown, which he wrote withJohn Markoff, and in a Wired article excerpted from the book.[45]
The WELL was described in the early 1990s as a "listening post for journalists," with members who were staff writers and editors for the New York Times, Business Week, the San Francisco Chronicle, Time, Rolling Stone, Byte, Harper's, and the Wall Street Journal.[20]This early visibility may have been helped by the early policy of providing free accounts for interested journalists and other select members of the media. Notable journalists who have written about their experiences on the WELL includeJohn Seabrookof the New Yorker,[46]Katie Hafnerof the New York Times,[29]Wendy M. Grossmanof the Guardian,[47]andJon Carrollof the San Francisco Chronicle.[48]
In March 2007, The WELL was noted for refusing membership toKevin Mitnick, and refunding his membership fee.[49]
The WELL also received numerous awards in the 1980s and 1990s, including aWebby Awardfor online community in 1998, and anEFF Pioneer Awardin 1994.
There is often confusion between a virtual community and a social network. They are similar in some respects because both can be used for personal and professional interests. A social network offers an opportunity to connect with people one already knows or is acquainted with. Facebook and Twitter are social networks. Platforms such as LinkedIn and Yammer open up communication channels among coworkers and peers with similar professions in a more relaxed setting. Often social media guidelines are in place for professional usage so that everyone understands what is suitable online behavior.[50][51] Using a social network is an extension of an offline social community. It is helpful in keeping connections among friends and associates as locations change. Each user has their own spider-web structure which is their social network.[52][53]
Virtual communities differ in that users aren't connected through a mutual friend or similar backgrounds. These groups are formed by people who may be complete strangers but have a common interest or ideology.[54][53] Virtual communities connect people who normally wouldn't consider themselves to be in the same group.[citation needed] These groups continue to stay relevant and maintained in the online world because users feel a need to contribute to the community and in return feel empowered when receiving new information from other members. Virtual communities have an elaborate, nested structure because they overlap. Yelp, YouTube, and Wikipedia are all examples of virtual communities. Companies like Kaiser Permanente launched virtual communities for members; such a community gave members the ability to control their health care decisions and improve their overall experience.[citation needed] Members of a virtual community are able to offer opinions and contribute helpful advice. Again, the difference between virtual communities and social networks is how the relationship emerges.
The WELL distinguished itself from the technology of the time by creating a networked community for everyone. Users were responsible for, and owned, the content they posted, a rule created to protect the information from being copyrighted and commoditized.[24]
|
https://en.wikipedia.org/wiki/The_WELL
|
BusyBox is a software suite that provides several Unix utilities in a single executable file. It runs in a variety of POSIX environments such as Linux, Android,[8] and FreeBSD,[9] although many of the tools it provides are designed to work with interfaces provided by the Linux kernel. It was specifically created for embedded operating systems with very limited resources. The authors dubbed it "The Swiss Army knife of Embedded Linux",[10] as the single executable replaces basic functions of more than 300 common commands. It is released as free software under the terms of the GNU General Public License v2,[6] after controversially deciding not to move to version 3.
Originally written by Bruce Perens in 1995 and declared complete for his intended usage in 1996,[11] BusyBox initially aimed to put a complete bootable system on a single floppy disk that would serve both as a rescue disk and as an installer for the Debian distribution. Since that time, it has been extended to become the de facto standard core user space toolset for embedded Linux devices and Linux distribution installers. Since each Linux executable requires several kilobytes of overhead, having the BusyBox program combine over two hundred programs together often saves substantial disk space and system memory.
BusyBox was maintained by Enrique Zanardi and focused on the needs of the Debian boot-floppies installer system until early 1998, when Dave Cinege took it over for the Linux Router Project (LRP). Cinege made several additions, created a modularized build environment, and shifted BusyBox's focus into general high-level embedded systems. As LRP development slowed down in 1999, Erik Andersen, then of Lineo, Inc., took over the project and was the official maintainer between December 1999 and March 2006. During this time the Linux embedded marketplace exploded in growth, and BusyBox matured greatly, expanding both its user base and functionality. Rob Landley was the maintainer from 2005 until late 2006, after which Denys Vlasenko took over as the current maintainer.
In September 2006, after heavy discussions and controversies between project maintainer Rob Landley and Bruce Perens,[12] the BusyBox[13][14] project decided against adopting the GNU General Public License Version 3 (GPLv3); the BusyBox license was clarified as being GPL-2.0-only.[15]
In October 2006, Denys Vlasenko took over maintainership of BusyBox from Rob Landley, who went on to start Toybox, also as a result of the license controversies.[13][16]
In late 2007, BusyBox also came to prominence for actively prosecuting violations of the terms of its license (the GPL) in the United States District Court for the Southern District of New York.[17]
What was claimed to be the first US lawsuit over a GPL violation concerned use of BusyBox in an embedded device. The lawsuit,[17] case 07-CV-8205, was filed on September 20, 2007, by the Software Freedom Law Center (SFLC) on behalf of Andersen and Landley against Monsoon Multimedia Inc., after BusyBox code was discovered in a firmware upgrade and attempts to contact the company had apparently failed. The case was settled with release of the Monsoon version of the source and payment of an undisclosed amount of money to Andersen and Landley.[18]
On November 21, 2007, the SFLC brought two similar lawsuits on behalf of Andersen and Landley against two more companies, Xterasys (case 07-CV-10455) and High-Gain Antennas (case 07-CV-10456).[19][20]The Xterasys case was settled on December 17 for release of source code used and an undisclosed payment,[21]and the High-Gain Antennas case on March 6, 2008, for active license compliance and an undisclosed payment.[22]On December 7, 2007, a case was brought againstVerizon Communicationsover its distribution of firmware for Actiontec routers;[23][24]this case was settled March 17, 2008 on condition of license compliance, appointment of an officer to oversee future compliance with free software licenses, and payment of an undisclosed sum.[25]Further suits were brought on June 9, 2008, against Bell Microproducts (case 08-CV-5270) andSuperMicro(case 08-CV-5269),[26]the Super Micro case being settled on July 23, 2008.[27]BusyBox and Bell Microproducts also settled out of court on October 17.[28]
On December 14, 2009, a new lawsuit was filed naming fourteen defendants includingBest Buy,JVC,Samsungand others.[29][30][31]In February 2010Samsungreleased its LN52A650 TV firmware under GPLv2,[32]which was used later as a reference by theSamyGOcommunity project.[33]
On about August 3, 2010, BusyBox won from Westinghouse a default judgement oftriple damagesof $90,000 and lawyers' costs and fees of $47,865, and possession of "presumably a lot of high-def TVs" as infringing equipment in the lawsuitSoftware Freedom Conservancyv. Best Buy, et al., the GPL infringement case noted in the paragraph above.[34]
No other developers, including original author Bruce Perens and maintainer Dave Cinege, were represented in these actions or party to the settlements. On December 15, 2009, Perens released a statement expressing his unhappiness with some aspects of the legal situation, and in particular alleged that the current BusyBox developers "appear to have removed some of the copyright statements of other BusyBox developers, and appear to have altered license statements".[12]
BusyBox can be customized to provide a subset of over two hundred utilities. It can provide most of the utilities specified in theSingle Unix Specification(SUS) plus many others that a user would expect to see on a Linux system. BusyBox uses theAlmquist shell, also known as A Shell, ash and sh.[35]An alternative for customization is the smaller 'hush' shell. "Msh" and "lash" used to be available.[36]
As a complete bootstrap system, it can further replace the init daemon and udev (or the latter-day systemd), being invoked as init on startup and as mdev at hotplug time.
The BusyBox website provides a full list of the utilities implemented.[37]
Typical computer programs have a separate binary (executable) file for each application. BusyBox is a single binary, which is a conglomerate of many applications, each of which can be accessed by calling the single BusyBox binary with various names (supported by having a symbolic link or hard link for each different name)[38] in a specific manner with appropriate arguments.
BusyBox benefits from the single binary approach, as it reduces the overhead introduced by the executable file format (typically ELF), and it allows code to be shared between multiple applications without requiring a library. This technique is similar to what is provided by the crunchgen[39] command in FreeBSD, the difference being that BusyBox provides simplified versions of the utilities (for example, an ls command without file-sorting ability), while a crunchgen-generated combination of all the utilities would offer the fully functional versions.
Sharing of the common code, along with routines written with size-optimization in mind, can make a BusyBox system use much less storage space than a system built with the corresponding full versions of the utilities replaced by BusyBox. Research[40]that comparedGNU, BusyBox,asmutilsandPerlimplementations of the standard Unix commands showed that in some situations BusyBox may perform faster than other implementations, but not always.
The official BusyBox documentation lists an overview of the available commands and their command-line options.
List of BusyBox commands[41]
Programs included in BusyBox can be run simply by adding their name as an argument to the BusyBox executable:[42]
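Assuming BusyBox is installed as /bin/busybox, a directory listing can be produced with:
  /bin/busybox ls -l /etc
Here ls is the applet name and the remaining words are passed to it as ordinary arguments.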
More commonly, the desired command names are linked (using hard or symbolic links) to the BusyBox executable; BusyBox reads argv[0] to find the name by which it is called, and runs the appropriate command. For example, after /bin/ls is linked to /bin/busybox,[42] running just ls works because the first argument passed to a program is the name used for the program call, in this case "/bin/ls". BusyBox sees that its "name" is "ls" and acts like the "ls" program.
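A minimal sketch of this linked setup (paths assume a typical installation, and creating links in /bin normally requires administrative rights):
  ln -s /bin/busybox /bin/ls
  ls /etc
The first command creates the symbolic link; the second invocation is then served by the BusyBox ls applet, because the name it was called by is "ls".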
BusyBox is used by several operating systems running on embedded systems and is an essential component of distributions such as OpenWrt, OpenEmbedded (including the Yocto Project) and Buildroot. The Sharp Zaurus utilizes BusyBox extensively for ordinary Unix-like tasks performed on the system's shell.[43]
BusyBox is also an essential component of VMware ESXi, Tiny Core Linux, SliTaz 5 (Rolling), and Alpine Linux, none of which are embedded distributions.
It is necessary for several root applications on Android and is also preinstalled with some "1 Tap Root" solutions such as Kingo Root.
Toybox was started in early 2006 under the GPL-2.0-only license by former BusyBox maintainer Rob Landley as a result of the controversies around the GPLv3/GPLv2 discussions. At the end of 2011[44] it was relicensed under the BSD-2-Clause license after the project went dormant.[45] In March 2013, it was relicensed again under the 0BSD license.[46] On January 11, 2012, Tim Bird, a Sony employee, suggested creating an alternative to BusyBox which would not be under the GNU General Public License. He suggested it be based on the dormant Toybox.[47] In January 2012 the proposal of creating a BSD-licensed alternative to the GPL-licensed BusyBox project drew harsh criticism from Matthew Garrett for taking away the only relevant tool for copyright enforcement of the Software Freedom Conservancy group.[48] The starter of the BusyBox-based lawsuits, Rob Landley, responded that this was intentional, as he had concluded that the lawsuits did not produce the hoped-for positive outcomes and he wanted to stop them "in whatever way I see fit".[49][50]
|
https://en.wikipedia.org/wiki/BusyBox
|
This article lists commands provided by MS-DOS-compatible operating systems, especially as used on IBM PC compatibles. Many unrelated disk operating systems use the DOS acronym and are not part of the scope of this list.
Some commands are implemented as built-ins to the command interpreter while others are external applications. Over multiple generations, commands were added for additional functions. In Windows, the legacy shell Command Prompt provides many of these commands.
The command interpreter for DOS runs when no application programs are running. When an application exits, if the transient portion of the command interpreter in memory was overwritten, DOS will reload it from disk. Some commands are internal—built into COMMAND.COM; others are external commands stored on disk. When the user types a line of text at the operating system command prompt, COMMAND.COM will parse the line and attempt to match a command name to a built-in command or to the name of an executable program file orbatch fileon disk. If no match is found, an error message is printed, and the command prompt is refreshed.
External commands were too large to keep in the command processor, or were less frequently used. Such utility programs would be stored on disk and loaded just like regular application programs but were distributed with the operating system. Copies of these utility command programs had to be on an accessible disk, either on the current drive or on the commandpathset in the command interpreter.
In the list below, commands that can accept more than one file name, or a filename including wildcards (* and ?), are said to accept a filespec (file specification) parameter. Commands that can accept only a single file name are said to accept a filename parameter. Additionally, command-line switches, or other parameter strings, can be supplied on the command line. Spaces and symbols such as a "/" or a "-" may be used to allow the command processor to parse the command line into filenames, file specifications, and other options.
The command interpreter preserves the case of whatever parameters are passed to commands, but the command names themselves and file names are case-insensitive.
Many commands are the same across many DOS systems, but some differ in command syntax or name.
A partial list of the most common commands forMS-DOSandIBM PC DOSfollows below.
Sets the path to be searched for data files or displays the current search path.
The APPEND command is similar to the PATH command that tells DOS where to search for program files (files with a .COM, .EXE, or .BAT file name extension).
The command is available in MS-DOS versions 3.2 and later.[1]
The command redirects requests for disk operations on one drive to a different drive. It can also display drive assignments or reset all drive letters to their original assignments.
The command is available in MS-DOS versions 3 through 5 and IBM PC DOS releases 2 through 5.[1]
Lists connections and addresses seen by WindowsATMcall manager.
Attrib changes or views the attributes of one or more files. By default, it displays the attributes of all files in the current directory. The file attributes available include read-only, archive, system, and hidden attributes. The command can also process all files in whole folders and their subfolders.
The command is available in MS-DOS versions 3 and later.[1]
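A brief illustration (the file name is a placeholder): making a file read-only and hidden, then clearing those attributes again:
  ATTRIB +R +H NOTES.TXT
  ATTRIB -R -H NOTES.TXT
In versions that support it, adding the /S switch applies the change to matching files in subdirectories as well.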
These are commands to back up and restore files from an external disk. They appeared in version 2, and continued to PC DOS 5 and MS-DOS 6 (PC DOS 7 had a deversioned check). In DOS 6, they were replaced by commercial programs (CPBACKUP, MSBACKUP), which allowed files to be restored to different locations.[1]
An implementation of theBASICprogramming language for PCs. Implementing BASIC in this way was very common in operating systems on 8- and 16-bit machines made in the 1980s.
IBM computers had BASIC 1.1 in ROM, and IBM's versions of BASIC used code in this ROM-BASIC, which allowed for extra memory in the code area. BASICA last appeared in IBM PC DOS 5.02, and in OS/2 (2.0 and later) the ROM-BASIC code was moved into the program itself.
Microsoft releasedGW-BASICfor machines with no ROM-BASIC. Some OEM releases had basic.com and basica.com as loaders for GW-BASIC.EXE.
BASIC was dropped after MS-DOS 4 and PC DOS 5.02. OS/2 (which uses PC DOS 5) has it, while MS-DOS 5 does not.
This command is used to instruct DOS to check whether theCtrlandBreakkeys have been pressed before carrying out a program request.
The command is available in MS-DOS versions 2 and later.[1]
Starts a batch file from within another batch file and returns when that one ends.
The command is available in MS-DOS versions 3.3 and later.[1]
The CHDIR (or the alternative name CD) command either displays or changes the current workingdirectory.
The command is available in MS-DOS versions 2 and later.[1]
The command either displays or changes the active code page used to display character glyphs in a console window. Similar functionality can be achieved with MODE CON: CP SELECT=yyy.
The command is available in MS-DOS versions 3.3 and later.[1]
CHKDSK verifies a storagevolume(for example, ahard disk,disk partitionorfloppy disk) for file system integrity. The command has the ability to fix errors on a volume and recover information from defectivedisk sectorsof a volume.
The command is available in MS-DOS versions 1 and later.[1]
The CHOICE command is used in batch files to prompt the user to select one item from a set of single-characterchoices. Choice was introduced as an external command with MS-DOS 6.0;[1][2]Novell DOS7[3]and PC DOS 7.0. Earlier versions ofDR-DOSsupported this function with the built-inswitchcommand (for numeric choices) or by beginning a command with a question mark.[3]This command was formerly called ync (yes-no-cancel).
The CLS or CLRSCR command clears theterminal screen.
The command is available in MS-DOS versions 2 and later.[1]
Start a new instance of the command interpreter.
The command is available in MS-DOS versions 1 and later.[1]
Show differences between any two files, or any two sets of files.
The command is available in MS-DOS versions 3.3 through 5 and IBM PC DOS releases 1 through 5.[1]
Makes copies of existing files.
The command is available in MS-DOS versions 1 and later.[1]
Defines theterminaldevice (for example, COM1) to use for input and output.[4]
The command is available in MS-DOS versions 2 and later.[1]
Displays thesystem dateand prompts the user to enter a new date. Complements theTIMEcommand.
The command is available in MS-DOS versions 1 and later.[1]
(Not a command: This is a batch file added to DOS 6.X Supplemental Disks to help create DoubleSpace boot floppies.[5])
Adisk compressionutility supplied with MS-DOS version 6.0 (released in 1993) and version 6.2.[1]
A very primitive assembler and disassembler.
The command has the ability to analyze the file fragmentation on a disk drive or todefragmenta drive. This command is called DEFRAG in MS-DOS/PC DOS anddiskoptinDR-DOS.
The command is available in MS-DOS versions 6 and later.[1]
DEL (or the alternative form ERASE) is used to delete one or more files.
The command is available in MS-DOS versions 1 and later.[1]
Deletes a directory along with all of the files and subdirectories that it contains. Normally, it will ask for confirmation of the potentially dangerous action. Since the RD (RMDIR) command can not delete a directory if the directory is not empty (except in Windows NT & 10), the DELTREE command can be used to delete the whole directory.
The deltree command is included in certain versions of Microsoft Windows and MS-DOS operating systems. It is specifically available only in versions of MS-DOS 6.0 and higher,[1] and in Microsoft Windows 9x. In Windows NT, the functionality provided exists but is handled by the command rd or rmdir, which has slightly different syntax. This command is not present in Windows 7 and 8. In Windows 10, the command switch is RD /S or RMDIR /S.
The DIR command displays the contents of a directory. The contents comprise the disk's volume label and serial number; one directory or filename per line, including the filename extension, the file size in bytes, and the date and time the file was last modified; and the total number of files listed, their cumulative size, and the free space (in bytes) remaining on the disk. The command is one of the few commands that exist from the first versions of DOS.[1]The command can display files in subdirectories. The resulting directory listing can be sorted by various criteria and filenames can be displayed in a chosen format.
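A brief illustration (switch availability varies by DOS version):
  DIR *.TXT /P /O:N
lists the .TXT files in the current directory one screen at a time, sorted by name; adding /S would include matching files in subdirectories.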
A command for comparing the complete contents of afloppy diskto another one.
The command is available in MS-DOS versions 3.2 and later and IBM PC DOS releases 1 and later.[1]
A command for copying the complete contents of a diskette to another diskette.
The command is available in MS-DOS versions 2 and later.[1]
A command that addscommand history,macrofunctionality, and improved editing features to the command-line interpreter.
The command is available in MS-DOS versions 5 and later.[1]
Displays how much memory various DOS components occupy.[6]
Adisk compressionutility supplied with MS-DOS version 6.22.[1]
The ECHO command prints its own arguments back out to the DOS equivalent of the standard output stream (hence the name, ECHO). Usually this means directly to the screen, but the output of echo can be redirected, like that of any other command, to files or devices. It is often used in batch files to print text out to the user.
Another important use of the echo command is to toggle the echoing of commands on and off in batch files. Traditionally batch files begin with the @echo off statement. This tells the interpreter that echoing of commands should be off during the whole execution of the batch file, resulting in a "tidier" output (the @ symbol declares that this particular command, echo off, should also be executed without echo).
The command is available in MS-DOS versions 2 and later.[1]
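A minimal batch-file sketch of both uses:
  @ECHO OFF
  ECHO Starting the nightly backup...
The @ suppresses echoing of the ECHO OFF line itself; the second line simply prints its argument to the screen.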
EDIT is a full-screen text editor, included with MS-DOS versions 5 and 6,[1] OS/2 and Windows NT to 4.0. The corresponding program in Windows 95 and later, and Windows 2000 and later, is Edit v2.0. PC DOS 6 and later use the DOS E Editor, and DR-DOS used editor up to version 7.
DOS line editor. It can be used with a script file, like debug; this makes it of some use even today. The absence of a console editor in MS-DOS/PC DOS 1–4 created an after-market for third-party editors.
In DOS 5, an extra command "?" was added to give the user much-needed help.
DOS 6 was the last version to contain EDLIN; for MS-DOS 6, it's on the supplemental disks,[1]while PC DOS 6 had it in the base install. Windows NT 32-bit, and OS/2 have Edlin.
The EMM386 command enables or disables EMM386 expanded-memory support on a computer with an80386or higher processor.
The command is available in MS-DOS versions 5 and later.[1]
See:DEL and ERASE
Converts anexecutable(.exe) file into abinary filewith theextension.com, which is a memory image of the program.
The size of the residentcodeanddata sectionscombined in the input .exe file must be less than 64 KB. The file must also have nostack segment.
The command is available in MS-DOS versions 1 through 5. It is available separately for version 6 on the Supplemental Disk.[1]
Exits the current command processor. If the exit is used at the primary command, it has no effect unless in a DOS window under Microsoft Windows, in which case the window is closed and the user returns to the desktop.
The command is available in MS-DOS versions 2 and later.[1]
The Microsoft File Expansion Utility is used to uncompress one or more compressedcabinet files(.CAB). The command dates back to 1990 and was supplied on floppy disc for MS-DOS versions 5 and later.[7][1]
FAKEMOUS is an IBM PS/2 mouse utility used withAccessDOS. It is included on the MS-DOS 6 Supplemental Disk.[8][9]AccessDOS assists persons with disabilities.
Provides information for MS-DOS commands.
A command that provides accelerated access to frequently-usedfiles and directories.
The command is available in MS-DOS versions 3.3 and later.[1]
Show differences between any two files, or any two sets of files.
The command is available in MS-DOS versions 2 and later – primarily non-IBM releases.[1]
The FDISK command manipulates hard disk partition tables. The name derives from IBM's habit of calling hard drives fixed disks. FDISK has the ability to display information about, create, and delete DOS partitions or logical DOS drives. It can also install a standard master boot record on the hard drive.
The command is available in MS-DOS versions 3.2 and later and IBM PC DOS 2.0 releases and later.[1]
The FIND command is a filter to find lines in the input data stream that contain or don't contain a specified string and send these to the output data stream. It may also be used in a pipe.
The command is available in MS-DOS versions 2 and later.[1]
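A brief illustration (file names are placeholders):
  FIND "ERROR" REPORT.LOG
  TYPE REPORT.LOG | FIND /V "DEBUG"
The first command prints the lines of REPORT.LOG containing ERROR; the second, using the /V switch, prints the lines that do not contain DEBUG.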
The FINDSTR command is a GREP-orientedFIND-like utility. Among its uses is the logical-OR lacking in FIND.
Iteration: repeats a command for each out of a specified set of files.
The FOR loop can be used toparsea file or the output of a command.
The command is available in MS-DOS versions 2 and later.[1]
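A short sketch: at the command prompt a single percent sign introduces the loop variable, while inside a batch file it is doubled:
  FOR %F IN (*.TXT) DO TYPE %F
  FOR %%F IN (*.TXT) DO COPY %%F A:\
The first line types every .TXT file in the current directory; the second, placed in a batch file, copies each one to drive A:.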
Deletes theFATentries and theroot directoryof the drive/partition, and reformats it for MS-DOS. In most cases, this should only be used on floppy drives or otherremovable media. This command can potentially erase everything on a computer's drive.
The command is available in MS-DOS versions 1 and later.[1]
TheGotocommand transfers execution to a specified label. Labels are specified at the beginning of a line, with a colon (:likethis).
The command is available in MS-DOS versions 2 and later.[1]
Used inBatch files.
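A minimal batch-file sketch:
  @ECHO OFF
  GOTO SKIP
  ECHO This line is never reached
  :SKIP
  ECHO Done
Execution jumps from the GOTO line directly to the :SKIP label.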
The GRAFTABL command enables the display of an extended character set in graphics mode.[10]
The command is available in MS-DOS versions 3 through 5.[1]
A TSR program to enable the sending of graphical screen dump to printer by pressing <Print Screen>.
The command is available in MS-DOS versions 3.2 and later and IBM PC DOS releases 2 and later.[1]
Gives help about DOS commands.
The command is available from MS-DOS version 5 through Windows XP. Full-screen command help is available in MS-DOS versions 6 and later.[1] Beginning with Windows XP, the command processor offers built-in help for commands by using /? (e.g. COPY /?).
IF is a conditional statement, that allows branching of the program execution. It evaluates the specified condition, and only if it is true, then it executes the remainder of the command line. Otherwise, it skips the remainder of the line and continues with next command line.
Used inBatch files.
The command is available in MS-DOS versions 2 and later.[1]
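A few batch-file examples of the common forms (file names are placeholders):
  IF EXIST REPORT.TXT TYPE REPORT.TXT
  IF ERRORLEVEL 1 ECHO The previous command reported an error
  IF "%1"=="" ECHO No parameter was supplied
IF EXIST tests for a file, IF ERRORLEVEL tests the exit code of the previous program (true for that value or higher), and string comparison is often used to check batch parameters.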
In MS-DOS;filelinkin DR-DOS.
Network PCs using anull modemcable orLapLink cable. The server-side version of InterLnk, it also immobilizes the machine it's running on as it is an active app (As opposed to aterminate-and-stay-resident program) which must be running for any transfer to take place. DR-DOS'filelinkis executed on both the client and server.
New in PC DOS 5.02, MS-DOS 6.0.[11][1]
The JOIN command attaches a drive letter to a specified directory on another drive.[11]The opposite can be achieved via theSUBSTcommand.
The command is available in MS-DOS versions 3 through 5. It is available separately for versions 6.2 and later on the Supplemental Disk.[1]
The KEYB command is used to select a keyboard layout.
The command is available in MS-DOS versions 3.3 and later.[1]
From DOS 3.0 through 3.21, there are instead per-country commands, namely KEYBFR, KEYBGR, KEYBIT, KEYBSP and KEYBUK.
Changes the label on a logical drive, such as a hard disk partition or a floppy disk.
The command is available in MS-DOS versions 3.1 and later and IBM PC DOS releases 3 and later.[1]
Used in the CONFIG.SYS file to set the maximum number of drives that can be accessed.
The command is available in MS-DOS versions 3.0 and later.[12]
Microsoft 8086 Object Linker[13]
Loads a program above the first 64K of memory, and runs the program. The command is available in MS-DOS versions 5 and later.[1]It is included only in MS-DOS/PC DOS. DR-DOS usedmemmax, which opened or closed lower, upper, and video memory access, to block the lower 64K of memory.[14]
A command that loads a program into the upper memory area.
The command is available in MS-DOS versions 5 and later.[1]
It is calledhiloadin DR-DOS.
Makes a newdirectory. The parent of the directory specified will be created if it does not already exist.
The command is available in MS-DOS versions 2 and later.[1]
Displays memory usage. It is capable of displaying program size and status, memory in use, and internal drivers. It is an external command.
The command is available in MS-DOS versions 4 and later and DR DOS releases 5.0 and later.[1]
On earlier DOS versions the memory usage could be shown by runningCHKDSK. In DR DOS the parameter/Acould be used to only show the memory usage.
Starting with version 6,[1]MS-DOS included the external program MemMaker which was used to free system memory (especiallyConventional memory) by automatically reconfiguring theAUTOEXEC.BATandCONFIG.SYSfiles. This was usually done by moving TSR programs anddevice driversto theupper memory. The whole process required two system restarts. Before the first restart the user was asked whether to enableEMS Memory, since use of expanded memory required a reserved 64KiB region in upper memory. The first restart inserted the SIZER.EXE program which gauged the memory needed by each TSR or Driver. MemMaker would then calculate the optimal Driver and TSR placement in upper memory and modify the AUTOEXEC.BAT and CONFIG.SYS accordingly, and reboot the second time.[15]
MEMMAKER.EXE and SIZER.EXE were developed for Microsoft byHelix Software Companyand were eliminated starting inMS-DOS 7(Windows 95); however, they could be obtained from Microsoft's FTP server as part of the OLDDOS.EXE package, alongside other tools.
PC DOS uses another program called RamBoost to optimize memory, working either with PC DOS'sHIMEM/EMM386or a third-party memory manager. RamBoost was licensed to IBM byCentral Point Software.
The MIRROR command saves disk storage information that can be used to recover accidentally erased files.
The command is available in MS-DOS version 5. It is available separately for versions 6.2 and later on Supplemental Disk.[1]
Configures system devices. Changes graphics modes, adjusts keyboard settings, preparescode pages, and sets up port redirection.[16]
The command is available in MS-DOS versions 3.2 and later and IBM PC DOS releases 1 and later.[1]
The MORE commandpaginatestext, so that one can view files containing more than one screen of text.Moremay also be used as afilter. While viewing MORE text, the return key displays the next line, the space bar displays the next page.
The command is available in MS-DOS versions 2 and later.[1]
Moves files or renames directories.
The command is available in MS-DOS versions 6 and later.[1]
DR-DOS used a separate command for renaming directories,rendir.
A command that scans the computer for known viruses.[17][18]
The command is available in MS-DOS versions 6 and later.[1]
The MSBACKUP command is used to backup or restore one or more files from one disk to another.
The New York Times said that MSBACKUP "is much better and faster than the old BACKUP command used in earlier versions of DOS, but it does lack some of the advanced features found in backup software packages that are sold separately."[19] There is another offering, named MWBACKUP, that is GUI-oriented. It was introduced for Windows for Workgroups (3.11).[20]
The MSBACKUP command is available in MS-DOS versions 6 and later.[1]
MSCDEX is a driver executable which allowsDOSprograms to recognize, read, and controlCD-ROMs.
The command is available in MS-DOS versions 6 and later.[1]
The MSD command provides detailed technical information about the computer's hardware and software. MSD was new in MS-DOS 6;[1][21]the PC DOS version of this command is QCONFIG.[22]The command appeared first in Word2, and then in Windows 3.10.
The MSHERC.COM (also QBHERC.COM) was a TSR graphics driver supplied with Microsoft QuickC, QuickBASIC, and the C Compiler, to allow use of the Hercules adapter high-resolution graphics capability (720 x 348, 2 colors).[23]
Loads extended nationalization and localization support from COUNTRY.SYS, and changes the code page of drivers and system modules resident in RAM.[citation needed]
In later versions of DR-DOS 6, NLSFUNC relocated itself into the HiMem area, thereby freeing a portion of the scarce lower 640 KiB that constituted the "conventional" memory available to software.[citation needed]
The command is available in MS-DOS versions 3.3 and later.[1]
Displays or sets a searchpathfor executable files.
The command is available in MS-DOS versions 2 and later.[1]
Suspends processing of a batch program and displays the messagePress any key to continue. . ., if not given other text to display.
The command is available in MS-DOS versions 1 and later.[1]
Allows the user to test the availability of a network connection to a specified host. Hostnames are usually resolved to IP addresses.[24]
It is not included in many DOS versions; typically ones with network stacks will have it as a diagnostic tool.
The POWER command is used to turn power management on and off, report the status of power management, and set levels of power conservation. It is an external command implemented as POWER.EXE.[25]
The command is available in MS-DOS versions 6 and later.[1]
The PRINT command adds or removes files in theprint queue. This command was introduced in MS-DOS version 2.[1]Before that there was no built-in support for background printing files. The user would usually use the copy command to copy files to LPT1.
ThePROMPTcommand allows the user to change the prompt in the command screen. The default prompt is$p(i.e.PROMPT $p), which displays the drive and current path as the prompt, but can be changed to anything.PROMPT $d, displays the current system date as the prompt. TypePROMPT /?in the cmd screen for help on this function.
The command is available in MS-DOS versions 2 and later and IBM PC DOS releases 2.1 and later.[1]
A utility inspired by the UNIX/XENIXpscommand. It also provides a full-screen mode, similar to thetoputility on UNIX systems.[6]
Anintegrated development environmentandBASICinterpreter.
The command is available in MS-DOS versions 5 and later.[1]
Remove a directory (delete a directory); by default the directories must be empty of files for the command to succeed.
The command is available in MS-DOS versions 2 and later.[1]
Thedeltreecommand in some versions of MS-DOS and all versions ofWindows 9xremoves non-empty directories.
A primitivefilesystemerror recovery utility included in MS-DOS / IBM PC DOS.
The command is available in MS-DOS versions 2 through 5.[1]
Remark (comment) command, normally used within abatch file, and for DR-DOS, PC/MS-DOS 6 and above, in CONFIG.SYS. This command is processed by the command processor. Thus, its output can be redirected to create a zero-byte file. REM is useful in logged sessions or screen-captures. One might add comments by way of labels, usually starting with double-colon (::). These are not processed by the command processor.
The REN command renames a file. Unlike the move command, this command cannot be used to rename subdirectories, or rename files across drives. Mass renames can be accomplished by the use of the wildcard characters asterisk (*) and question mark (?).[26]
The command is available in MS-DOS versions 1 and later.[1]
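A brief illustration of a single rename and a wildcard rename (file names are placeholders):
  REN BUDGET.TXT BUDGET.OLD
  REN *.TXT *.BAK
The second command renames every .TXT file in the current directory to the same base name with a .BAK extension.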
A command that is used to replace one or more existingcomputer filesor add new files to a targetdirectory.
The command is available in MS-DOS versions 3.2 and later.[1]
See:BACKUP and RESTORE
Disk diagnostic utility. ScanDisk was a replacement for the chkdsk utility, starting with MS-DOS version 6.2 and later.[1] Its primary advantages over chkdsk are that it is more reliable and has the ability to run a surface scan which finds and marks bad clusters on the disk. It also provided a mouse point-and-click TUI, allowing for an interactive session to complement command-line batch runs. chkdsk had surface scan and bad cluster detection functionality included, and was used again on Windows NT-based operating systems.
The SELECT command formats a disk and installs country-specific information and keyboard codes.
It was initially only available with IBM PC DOS. The version included with PC DOS 3.0 and 3.1 is hard-coded to transfer the operating system from A: to B:, while from PC DOS 3.2 onward the source and destination can be specified, allowing the command to be used to install DOS to the hard disk.
The version included with MS-DOS 4 and PC DOS 4 is no longer a simple command-line utility, but a full-fledged installer.
The command is available in MS-DOS versions 3.3 and 4 and IBM PC DOS releases 3 through 4.[1]
This command is no longer included in DOS Version 5 and later, where it has been replaced by SETUP.
Setsenvironment variables.
The command is available in MS-DOS versions 2 and later.[1]
cmd.exein Windows NT 2000, 4DOS, 4OS2, 4NT, and a number of third-party solutions allow direct entry of environment variables from the command prompt. From at least Windows 2000, thesetcommand allows for the evaluation of strings into variables, thus providinginter aliaa means of performing integer arithmetic.[27]
The command is available in MS-DOS versions 5 and later.[1] It runs the operating system's setup program; the Windows 95 and Windows 98 installers, for example, were started by running SETUP.
SetVer is a TSR program designed to return a different value to the version of DOS that is running. This allows programs that look for a specific version of DOS to run under a different DOS.
The command is available in MS-DOS versions 5 and later.[1]
Installs support for file sharing and locking capabilities.
The command is available in MS-DOS versions 3 and later.[1]
The SHIFT command increases the number of replaceable parameters to more than the standard ten for use in batch files.
This is done by changing the position of the replaceable parameters: each parameter is replaced with the subsequent one (e.g. %0 with %1, %1 with %2, etc.).
The command is available in MS-DOS versions 2 and later.[1]
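A minimal batch-file sketch that echoes every parameter passed to it, however many there are:
  :NEXT
  IF "%1"=="" GOTO DONE
  ECHO %1
  SHIFT
  GOTO NEXT
  :DONE
Each pass through the loop prints %1 and then shifts, so the next parameter moves into %1.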
The external command SIZER.EXE is not intended to be started directly from the command prompt. It is used by MemMaker during the memory-optimization process.
The command is available in MS-DOS versions 6 and later.[1]
A filter to sort lines in the input data stream and send them to the output data stream. Similar to the Unix command sort. It handles files up to 64 KB and always sorts case-insensitively.[28]
The command is available in MS-DOS versions 2 and later.[1]
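The filter concept behind SORT (read the input stream, transform it, write the result to the output stream) can be illustrated with a small C sketch. This is only a toy line sorter under simplifying assumptions (fixed buffer limits, POSIX strcasecmp for the case-insensitive comparison), not the actual DOS implementation:

    #include <stdio.h>
    #include <stdlib.h>
    #include <strings.h>   /* strcasecmp (POSIX) */

    #define MAX_LINES 4096
    #define MAX_LEN   1024

    /* Case-insensitive comparison, mirroring SORT's behaviour. */
    static int cmp(const void *a, const void *b)
    {
        return strcasecmp(*(char *const *)a, *(char *const *)b);
    }

    int main(void)
    {
        static char buf[MAX_LINES][MAX_LEN];
        char *lines[MAX_LINES];
        int n = 0;

        /* Read the input data stream line by line. */
        while (n < MAX_LINES && fgets(buf[n], MAX_LEN, stdin) != NULL) {
            lines[n] = buf[n];
            n++;
        }

        qsort(lines, (size_t)n, sizeof lines[0], cmp);

        /* Send the sorted lines to the output data stream. */
        for (int i = 0; i < n; i++)
            fputs(lines[i], stdout);

        return 0;
    }

Built this way, the program can sit in a pipeline exactly as SORT does, between another command's output and a pager such as more.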
A utility to map a subdirectory to a drive letter.[11]The opposite can be achieved via theJOINcommand.
The command is available in MS-DOS versions 3.1 and later.[1]
A utility to make a volume bootable. Sys rewrites the Volume Boot Code (the first sector of the partition that SYS is acting on) so that the code, when executed, will look for IO.SYS. SYS also copies the core DOS system files IO.SYS, MSDOS.SYS, and COMMAND.COM to the volume. SYS does not rewrite the Master Boot Record, contrary to widely held belief.
The command is available in MS-DOS versions 1 and later.[1]
The Telnet Client is a tool for developers and administrators to help manage and test network connectivity.[29]
Displays the system time and waits for the user to enter a new time. Complements the DATE command.
The command is available in MS-DOS versions 1 and later.[1]
Enables a user to change the title of their MS-DOS window.
An external command that graphically displays the path of each directory and subdirectory on the specified drive.
The command is available in MS-DOS versions 3.2 and later and IBM PC DOS releases 2 and later.[1]
Internal command that expands the name of a file, directory, or drive, and displays its absolute pathname as the result. It will expand relative pathnames, SUBST drives, and JOIN directories to find the actual directory.
For example, in DOS 7.1, if the current directory isC:\WINDOWS\SYSTEM, then
The argument does not need to refer to an existing file or directory: TRUENAME will output the absolute pathname as if it did. Also, TRUENAME does not search in the PATH. For example, in DOS 5, if the current directory is C:\TEMP, then TRUENAME command.com will display C:\TEMP\COMMAND.COM (which does not exist), not C:\DOS\COMMAND.COM (which does and is in the PATH).
This command displays the UNC pathnames of mapped network or local CD drives. This command is an undocumented DOS command. The help switch "/?" defines it as a "Reserved command name". It is available in MS-DOS version 5.00 and later, including the DOS 7 and 8 in Windows 95/98/ME. The C library function realpath performs this function. The Microsoft Windows NT command processors do not support this command, including the versions of command.com for NT.
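As a rough illustration of what such path canonicalization looks like through the C library interface mentioned above, the following sketch calls realpath. The path "data/../notes.txt" is just an example argument, and unlike TRUENAME, realpath generally requires the path components to exist:

    #include <limits.h>   /* PATH_MAX */
    #include <stdio.h>
    #include <stdlib.h>   /* realpath (POSIX) */

    int main(void)
    {
        char resolved[PATH_MAX];

        /* Expand "." and ".." components and symbolic links into an
           absolute pathname, much as TRUENAME expands SUBST and JOIN. */
        if (realpath("data/../notes.txt", resolved) != NULL)
            printf("%s\n", resolved);
        else
            perror("realpath");   /* e.g. a component does not exist */

        return 0;
    }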
Displays a file. The more command is frequently used in conjunction with this command, e.g. type long-text-file | more. TYPE can be used to concatenate files (type file1 file2 > file3); however, this may not work for large files, so the copy command can be used instead.
The command is available in MS-DOS versions 1 and later.[1]
Restores files previously deleted with del. By default all recoverable files in the working directory are restored; options are used to change this behavior. If the MS-DOS mirror TSR program is used, then deletion tracking files are created and can be used by undelete.
The command is available in MS-DOS versions 5 and later.[1]
MS-DOS version 5 introduced the quick format option (Format /Q) which removes the disk's file table without deleting any of the data. The same version also introduced the UNFORMAT command to undo the effects of a quick format, restoring the file table and making all the files accessible again.
UNFORMAT only works if invoked before any further changes have overwritten the drive's contents.[1]
An internal DOS command that reports the DOS version presently running and, since MS-DOS 5, whether DOS is loaded high.
The command is available in MS-DOS versions 2 and later.[1]
Enables or disables verification that files have been correctly written to disk. Typing "verify on" at the command prompt enables the feature, and "verify off" disables it; typing VERIFY without a parameter displays the current setting.[30]
The command is available in MS-DOS versions 2 and later.[1]
An internal command that displays the disk volume label and serial number.
The command is available in MS-DOS versions 2 and later.[1]
A TSR program that continuously monitors the computer for viruses.
The command is available in MS-DOS versions 6 and later.[1]
Copies entire directory trees. Xcopy is an extended version of the copy command that can copy files and directories from one location to another.
XCOPY usage and attributes can be obtained by typingXCOPY /?in the DOS Command line.
The command is available in MS-DOS versions 3.2 and later.[1]
There are several guides to DOS commands available that are licensed under theGNU Free Documentation License:
|
https://en.wikipedia.org/wiki/Internal_DOS_command
|
iOS jailbreaking is the use of a privilege escalation exploit to remove software restrictions imposed by Apple on devices running iOS and iOS-based[a] operating systems. It is typically done through a series of kernel patches. A jailbroken device typically permits root access within the operating system and provides the right to install software unavailable through the App Store. Different devices and versions are exploited with a variety of tools. Apple views jailbreaking as a violation of the end-user license agreement and strongly cautions device owners not to try to achieve root access through the exploitation of vulnerabilities.[1]
While sometimes compared torootinganAndroid device, jailbreaking bypasses several types of Apple prohibitions for the end-user. Since it includes modifying the operating system (enforced by a "lockedbootloader"), installing non-officially approved (not available on the App Store) applications viasideloading, and granting the user elevated administration-level privileges (rooting), the concepts of iOS jailbreaking are therefore technically different from Android device rooting.
Expanding the feature set that Apple and its App Store have restricted is one of the motivations for jailbreaking.[2]Apple checks apps for compliance with its iOS Developer Program License Agreement[3]before accepting them for distribution in the App Store. However, the reasons for Apple to ban apps are not limited to safety and security and may be regarded as arbitrary and capricious.[4]In one case, Apple mistakenly banned an app by a Pulitzer Prize-winning cartoonist because it violated its developer license agreement, which specifically bans apps that "contain content that ridicules public figures."[5]To access banned apps,[6]users rely on jailbreaking to circumvent Apple's censorship of content and features. Jailbreaking permits the downloading of programs not approved by Apple,[7]such as user interface customization and tweaks.
Software programs that are available throughAPTorInstaller.app(legacy) are not required to adhere to App Store guidelines. Most of them are not typical self-contained apps, but instead are extensions and customizations for iOS or other apps (commonly called tweaks).[8]Users can install these programs for purposes including personalization and customization of the interface using tweaks developed by developers and designers,[8]adding desired features such as access to the root file system and fixing annoyances,[9]and making development work on the device easier by providing access to the file system and command-line tools.[10][11]Many Chinese iOS device owners also jailbreak their phones to install third-partyChinesecharacterinput systemsbecause they are easier to use than Apple's.[12]
In some cases, jailbreak features are adopted by Apple and used as inspiration for features that are incorporated into iOS andiPadOS.[13][14]
Jailbreaking also opens the possibility of using software to unofficially unlock carrier-locked iPhones so they can be used with other carriers.[19] Software-based unlocks have been available since September 2007,[20] with each tool applying to a specific iPhone model and baseband version (or multiple models and versions).[21] This includes the iPhone 4S, iPhone 4, iPhone 3GS, and iPhone 3G models. An example of unlocking an iPhone through a jailbreak utility is redsn0w, with which iPhone users can create a custom IPSW and unlock their device; during the unlocking process, there is also an option to install the iPad baseband on the iPhone.
Cybercriminals may jailbreak an iPhone to install malware or target jailbroken iPhones on which malware can be installed more easily. The Italian cybersecurity companyHacking Team, which used to sell hacking software to law enforcement agencies, advised police to jailbreak iPhones to allow tracking software to be installed on them.[22][23]
On iOS devices, the installation of consumer software is generally restricted to installation through the App Store. Jailbreaking, therefore, allows the installation of pirated applications.[24] It has been suggested that a major motivation for Apple to prevent jailbreaking is to protect the income of its App Store, including that of third-party developers, and to allow the buildup of a sustainable market for third-party software.[25] However, the installation of pirated applications is also possible without jailbreaking, taking advantage of enterprise certificates to facilitate the distribution of modified or pirated releases of popular applications.[26]
Apackage manageror package-management system is a collection of software tools that automates the process of installing, upgrading, configuring, and removing computer programs. For jailbreaks, this is essential for the installation of third-party content. There are a few package managers specifically for jailbroken iOS devices, of which the most popular areCydia, Sileo, Zebra andInstaller 5.
Depending on the type of the jailbreak (i.e. 'rootless' or 'rootful'), different security structures may be compromised to various degrees. As jailbreaking grants freedom over running software that isn't confined to a sandbox typical to that of anApp Storeapplication, as well as modifications to system files, it ultimately allows for the threat ofmalware.
Users of a jailbroken device are also often forced to stay on an older iOS version that is no longer supported by Apple, commonly due to the unavailability of jailbreak on the newer versions. While using older versions of iOS is considered safe in most circumstances, the device may be vulnerable to publicly known security flaws.
In March 2021, jailbreak developer GeoSn0w[27]released a tweak called iSecureOS which can alert the users of security issues found on their devices. The application works akin to antivirus software, in that it scans the files on the user's device and checks them against a database of known malware or unsafe repos.
In June 2021, ESET Research confirmed that malware did exist on one of the piracy repositories in the jailbreak community. The malware actively targeted iSecureOS to try to bypass the detection,[28]but updates to the security app were quickly released and have mitigated the malware.
Jailbreaking of iOS devices has sometimes been compared to "rooting" ofAndroiddevices. Although both concepts involve privilege escalation, they do differ in scope.
WhereAndroid rootingand jailbreaking are similar is that both are used to grant the owner of the devicesuperusersystem-level privileges, which may be transferred to one or more apps. However, unlike iOS phones and tablets, nearly all Android devices already offer an option to allow the user tosideload3rd-partyappsonto the device without having to install from an official source such as theGoogle Play store.[29]Many Android devices also provide owners the capability to modify or even replace the full operating system after unlocking thebootloader, although doing this requires afactory reset.[30][31][32]
In contrast, iOS devices are engineered with restrictions including a "locked bootloader" which can not be unlocked by the owner to modify the operating system without violating Apple's end-user license agreement. And on iOS, until 2015, while corporations could install private applications onto corporate phones, sideloading unsanctioned, 3rd-party apps onto iOS devices from sources other than theApp Storewas prohibited for most individual users without a purchased developer membership.[33]After 2015, the ability to install 3rd-party apps became free for all users; however, doing so requires a basic understanding ofXcodeand compiling iOS apps.
Jailbreaking an iOS device to defeat all these security restrictions presents a significant technical challenge.[34]Similar to Android, alternative iOS app stores utilizing enterprise certificates are available, offering modified or pirated releases of popular applications and video games, some of which were either previously released through Cydia or are unavailable on the App Store due to these apps not complying with Apple developer guidelines.
Many different types of jailbreaks have been developed over the years, differing in how and when the exploit is applied.
When a jailbroken device is booting, it loads Apple's own boot software initially. The device is then exploited and the kernel is patched every time it is turned on. An untethered jailbreak is a jailbreak that does not require any assistance when it reboots. The kernel will be patched without the help of a computer or an application.
A tethered jailbreak is the opposite of an untethered jailbreak, in the sense that a computer is required to boot the device. Without a computer running the jailbreaking software, the iOS device will not be able to boot at all. While using a tethered jailbreak, the user will still be able to restart/kill the device'sSpringBoardprocess without needing to reboot. Many early jailbreaks were offered initially as tethered jailbreaks.
This type of jailbreak allows a user to reboot their phone normally, but upon doing so, the jailbreak and any modified code will be effectively disabled, as it will have an unpatched kernel. Any functionality independent of the jailbreak will still run as normal, such as making a phone call, texting, or using App Store applications. To be able to have a patched kernel and run modified code again, the device must be booted using a computer.
This type of jailbreak is like a semi-tethered jailbreak in that when the device reboots, it no longer has a patched kernel, but the key difference is that the kernel can be patched without using a computer. The kernel is usually patched using an application installed on the device itself. This type of jailbreak has become increasingly popular, with most recent jailbreaks classified as semi-untethered.
A few days after the original iPhone became available in July 2007, developers released the first jailbreaking tool for it,[35]and soon a jailbreak-only game app became available.[36]In October 2007,JailbreakMe1.0 (also called "AppSnapp") allowed people to jailbreak iPhone OS 1.1.1 on both the iPhone and iPod Touch,[37][38]and it included Installer.app as a way to get software for the jailbroken device.[39]
In February 2008, Zibri released ZiPhone, a tool for jailbreaking iPhone OS 1.1.3 and iPhone OS 1.1.4.[40]
The iPhone Dev Team, which is not affiliated with Apple, has released a series of free desktop-based jailbreaking tools. In July 2008 it released a version of PwnageTool to jailbreak the then new iPhone 3G on iPhone OS 2.0 as well as the iPod Touch,[41][42]newly including Cydia as the primary third-party installer for jailbroken software.[43]PwnageTool continues to be updated for untethered jailbreaks of newer iOS versions.[44][45]
In November 2008 the iPhone Dev Team released QuickPwn to jailbreak iPhone OS 2.2 on iPhone and iPod Touch, with options to enable past functionality that Apple had disabled on certain devices.[46]
After Apple released iPhone OS 3.0 in June 2009, the Dev Team published redsn0w as a simple jailbreaking tool for Mac and Windows, and also updated PwnageTool primarily intended for expert users making custom firmware, and only for Mac.[47]It continues to maintain redsn0w for jailbreaking most versions of iOS 4 and iOS 5 on most devices.[48]
George Hotzdeveloped the first iPhone unlock, which was a hardware-based solution. Later, in 2009, he released a jailbreaking tool for theiPhone 3GandiPhone 3GSon iPhone OS 3.0 called purplera1n,[49]andblackra1nfor iPhone OS version 3.1.2 on the 3rd generation iPod Touch and other devices.[50]
In October 2010, George Hotz released limera1n, a low-level exploit ofboot ROMcode that permanently works to jailbreak the iPhone 4 and is used as a part of tools including redsn0w.[51]
Nicholas Allegra (better known as "comex") released a program called Spirit in May 2010.[52]Spirit jailbreaks devices including iPhones running iPhone OS 3.1.2, 3.1.3, and iPad running iPhone OS 3.2.[52]In August 2010, comex released JailbreakMe 2.0, the first web-based tool to jailbreak the iPhone 4 (on iOS 4.0.1).[53][54]In July 2011, he released JailbreakMe 3.0,[55]a web-based tool for jailbreaking all devices on certain versions of iOS 4.3, including the iPad 2 for the first time (on iOS 4.3.3).[56]It used a flaw inPDFfile rendering in mobileSafari.[57][58]
Chronic Dev Team initially releasedGreenpois0nin October 2010, a desktop-based tool for untethered jailbreaking iOS 4.1[59]and later iOS 4.2.1[60]on most devices including the Apple TV,[61]as well as iOS 4.2.6 on CDMA (Verizon) iPhones.[62]
As of December 2011, redsn0w included the "Corona" untether by pod2g for iOS 5.0.1 for iPhone 3GS, iPhone 4, iPad (1st generation), and iPod Touch (3rd and 4th generation).[45]As of June 2012, redsn0w also includes the "Rocky Racoon" untether by pod2g for iOS 5.1.1 on all iPhone, iPad, and iPod Touch models that support iOS 5.1.1.[63]
The iPhone Dev Team, Chronic Dev Team, and pod2g collaborated to releaseAbsinthein January 2012, a desktop-based tool to jailbreak the iPhone 4S for the first time and theiPad 2for the second time, on iOS 5.0.1 for both devices and also iOS 5.0 for iPhone 4S.[64][65][66][67]In May 2012 it released Absinthe 2.0, which can jailbreak iOS 5.1.1 untethered on all iPhone, iPad, and iPod Touch models that support iOS 5.1.1, including jailbreaking thethird-generation iPadfor the first time.[68]
An iOS 6.X untethered jailbreak tool called "evasi0n" was released for Linux, OS X, and Windows on February 4, 2013.[69]Due to the high volume of interest in downloading the jailbreak utility, the site initially gave anticipating users download errors. When Apple upgraded its software to iOS 6.1.3 it permanently patched out the evasi0n jailbreak.[70]
On November 29, 2014, TaiG team released their untethered jailbreak tool called "TaiG" for devices running iOS 8.0–8.1.1. On December 10, 2014, the app was updated to include support for iOS 8.1.2.[71]On July 3, 2015, TaiG 2.3.0 was released, which includes support for iOS 8.0–8.4.[72]
On October 14, 2015, Pangu Team released Pangu9, their untethered jailbreak tool for iOS 9.0 through 9.0.2. On March 11, 2016, Pangu Team updated their tool to support iOS 9.1 for 64-bit devices.[73][74]
Apple has released various updates to iOS that patch exploits used by jailbreak utilities; this includes a patch released in iOS 6.1.3 to software exploits used by the originalevasi0niOS 6–6.1.2 jailbreak, in iOS 7.1 patching the Evasi0n 7 jailbreak for iOS 7–7.0.6-7.1 beta 3. Boot ROM exploits (exploits found in the hardware of the device) cannot be patched by Apple system updates but can be fixed in hardware revisions such as new chips or new hardware in its entirety, as occurred with the iPhone 3GS in 2009.[121]
On July 15, 2011, Apple released a new iOS version that closed the exploit used inJailbreakMe3.0. The GermanFederal Office for Information Securityhad reported that JailbreakMe uncovered the "critical weakness" that information could be stolen ormalwareunwillingly downloaded by iOS users clicking on maliciously craftedPDFfiles.[122]
On August 13, 2015, Apple updated iOS to 8.4.1, patching the TaiG exploit. The Pangu and TaiG teams both said they were working on exploiting iOS 8.4.1, and Pangu demonstrated this at WWDC 2015.[123]
On September 16, 2015, iOS 9 was announced and made available; it was released with a new "Rootless" security system, dubbed a "heavy blow" to the jailbreaking community.[124]
On October 21, 2015, seven days after the Pangu iOS 9.0–9.0.2 Jailbreak release, Apple pushed the iOS 9.1 update, which contained a patch that rendered it nonfunctional.[125]
On January 23, 2017, Apple released iOS 10.2.1 to patch jailbreak exploits released by Google for the Yalu iOS 10 jailbreak created by Luca Todesco.[126]
On December 10, 2019, Apple usedDMCAtakedown requests to remove posts from Twitter. The tweet contained an encryption key that could potentially be used to reverse engineer the iPhone's Secure Enclave. Apple later retracted the claim, and the tweet was reinstated.[127]
On June 1, 2020, Apple released the 13.5.1 update, patching the zero-day exploit used by the Unc0ver jailbreak.[128]
On September 20, 2021, Apple releasediOS/iPadOS 15, which introduced signed system volume security to iOS/iPadOS, meaning that any changes to the root file system would revert to the latest snapshot on a reboot, and changes to the snapshot would make the device unbootable.[129]As a result, jailbreak development slowed considerably, and for the first time in jailbreaking history, the latest iPhone did not get a jailbreak before a new model was released.
On September 12, 2022, Apple released iOS 16, which introduced a new firmware component known as Cryptex1. New Cryptex1 versions are almost never compatible with old iOS versions, making downgrading impossible except within patch versions (i.e. 16.3 and 16.3.1).[citation needed]
The legal status of jailbreaking is affected by laws regarding circumvention of digital locks, such as laws protectingdigital rights management(DRM) mechanisms. Many countries do not have such laws, and some countries have laws including exceptions for jailbreaking.
International treaties have influenced the development of laws affecting jailbreaking. The 1996World Intellectual Property Organization (WIPO) Copyright Treatyrequires nations party to the treaties to enact laws against DRM circumvention. The American implementation is theDigital Millennium Copyright Act(DMCA), which includes a process for establishing exemptions for non-copyright-infringing purposes such as jailbreaking. The 2001European Copyright Directiveimplemented the treaty in Europe, requiring member states of theEuropean Unionto implement legal protections for technological protection measures. The Copyright Directive includes exceptions to allow breaking those measures for non-copyright-infringing purposes, such as jailbreaking to run alternative software,[130]but member states vary on the implementation of the directive.
While Apple technically does not support jailbreaking as a violation of its EULA, jailbreaking communities have generally not been legally threatened by Apple. At least two prominent jailbreakers have been given positions at Apple, albeit in at least one case a temporary one.[131][132]Apple has also regularly credited jailbreak developers with detecting security holes in iOS release notes.[133]
Apple's support article concerning jailbreaking claims that they "may deny service for an iPhone, iPad, or iPod Touch that has installed any unauthorized software," which includes jailbreaking.[134]
In 2010,Electronic Frontiers Australiasaid that it is unclear whether jailbreaking is legal in Australia, and that anti-circumvention laws may apply.[135]These laws had been strengthened by theCopyright Amendment Act 2006.
In November 2012, Canadaamended its Copyright Actwith new provisions prohibiting tampering with DRM protection, with exceptions including software interoperability.[136]Jailbreaking a device to run alternative software is a form of circumventing digital locks for the purpose of software interoperability.
There had been several efforts from 2008–2011 to amend the Copyright Act (Bill C-60,Bill C-61, andBill C-32) to prohibit tampering with digital locks, along with initial proposals for C-11 that were more restrictive,[137]but those bills were set aside. In 2011,Michael Geist, a Canadian copyright scholar, cited iPhone jailbreaking as a non-copyright-related activity that overly-broad Copyright Act amendments could prohibit.[138]
India's copyright lawpermits circumventing DRM for non-copyright-infringing purposes.[139][140]Parliament introduced a bill including this DRM provision in 2010 and passed it in 2012 as Copyright (Amendment) Bill 2012.[141]India is not a signatory to the WIPO Copyright Treaty that requires laws against DRM circumvention, but being listed on the USSpecial 301 Report"Priority Watch List" applied pressure to develop stricter copyright laws in line with the WIPO treaty.[139][140]
New Zealand's copyright lawallows the use of technological protection measure (TPM) circumvention methods as long as the use is for legal, non-copyright-infringing purposes.[142][143]This law was added to theCopyright Act 1994as part of theCopyright (New Technologies) Amendment Act 2008.
Jailbreaking might be legal in Singapore if done to provide interoperability and not circumvent copyright, but that has not been tested in court.[144]
The lawCopyright and Related Rights Regulations 2003makes circumventing DRM protection measures legal for the purpose of interoperability but not copyright infringement. Jailbreaking may be a form of circumvention covered by that law, but this has not been tested in court.[130][145]Competition laws may also be relevant.[146]
The main law that affects the legality of iOS jailbreaking in the United States is the 1998Digital Millennium Copyright Act(DMCA), which says "no person shall circumvent atechnological measurethat effectively controls access to a work protected under" the DMCA, since this may apply to jailbreaking.[147]Every three years, the law allows the public to propose exemptions for legitimate reasons for circumvention, which last three years if approved. In 2010 and 2012, the U.S. Copyright Office approved exemptions that allowed smartphone users to jailbreak their devices legally,[148]and in 2015 the Copyright Office approved an expanded exemption that also covers other all-purpose mobile computing devices, such as tablets.[149]It is still possible Apple may employ technical countermeasures to prevent jailbreaking or prevent jailbroken phones from functioning.[150]It is unclear whether it is legal to traffic in the tools used to make jailbreaking easy.[150]
In 2010, Apple announced that jailbreaking "can violate the warranty".[151]
In 2007,Tim Wu, a professor atColumbia Law School, argued that jailbreaking "Apple's superphone is legal, ethical, and just plain fun."[152]Wu cited an explicit exemption issued by theLibrary of Congressin 2006 for personal carrier unlocking, which notes that locks "are used by wireless carriers to limit the ability of subscribers to switch to other carriers, a business decision that has nothing whatsoever to do with the interests protected by copyright" and thus do not implicate the DMCA.[153]Wu did not claim that this exemption applies to those who help others unlock a device or "traffic" in software to do so.[152]
In 2010, in response to a request by theElectronic Frontier Foundation, theU.S. Copyright Officeexplicitly recognized an exemption to the DMCA to permit jailbreaking in order to allow iPhone owners to use their phones with applications that are not available from Apple's store, and to unlock their iPhones for use with unapproved carriers.[154][155]Applehad previously filed comments opposing this exemption and indicated that it had considered jailbreaking to be a violation of copyright (and by implication prosecutable under the DMCA). Apple's request to define copyright law to include jailbreaking as a violation was denied as part of the 2009 DMCA rulemaking. In their ruling, the Library of Congress affirmed on July 26, 2010, that jailbreaking is exempt from DMCA rules with respect to circumventing digital locks. DMCA exemptions must be reviewed and renewed every three years or else they expire.
On October 28, 2012, the US Copyright Office released a new exemption ruling. The jailbreaking of smartphones continued to be legal "where circumvention is accomplished for the sole purpose of enabling interoperability of [lawfully obtained software] applications with computer programs on the telephone handset." However, the U.S. Copyright office refused to extend this exemption to tablets, such as iPads, arguing that the term "tablets" is broad and ill-defined, and an exemption to this class of devices could have unintended side effects.[156][157][158]The Copyright Office also renewed the 2010 exemption for unofficially unlocking phones to use them on unapproved carriers, but restricted this exemption to phones purchased before January 26, 2013.[157]In 2015, these exemptions were extended to include other devices, including tablets.[159]
The firstiPhoneworm,iKee, appeared in early November 2009, created by a 21-year-old Australian student in the town ofWollongong. He told Australian media that he created the worm to raise awareness of security issues: jailbreaking allows users to install anSSHservice, which those users can leave in the default insecure state.[160]In the same month,F-Securereported on a new malicious worm compromising bank transactions from jailbroken phones in theNetherlands, similarly affecting devices where the owner had installed SSH without changing the default password.[161][162]
Restoring a device with iTunes removes a jailbreak.[163][164][165]However, doing so generally updates the device to the latest, and possibly non-jailbreakable, version, due to Apple's use ofSHSH blobs. There are many applications that aim to prevent this, by restoring the devices to the same version they are currently running whilst removing the jailbreaks. Examples are, Succession, Semi-Restore and Cydia Eraser.
In 2012, Forbes staff analyzed a UCSB study on 1,407 free programs available from Apple and a third-party source. Of the 1,407 free apps investigated, 825 were downloaded from Apple's App Store using the website App Tracker, and 526 from BigBoss (Cydia's default repository). 21% of official apps tested leaked device ID and 4% leaked location. Unofficial apps leaked 4% and 0.2% respectively. 0.2% of apps from Cydia leaked photos and browsing history, while the App Store leaked none. Unauthorized apps tended to respect privacy better than official ones.[166] Also, a program available in Cydia called PrivaCy allows users to control the upload of usage statistics to remote servers.[166]
In August 2015, theKeyRaidermalware was discovered, affecting only jailbroken iPhones.[167]
In recent years, due to the technical complexity and often rarity of legitimate jailbreaking software (especially untethered jailbreaks) there has been an increase in websites offering fake iOS jailbreaks. These websites often ask for payment or make heavy use of advertising, but have no actual jailbreak to offer. Others install a fake, lookalike version of theCydiapackage manager.[168]In some cases, users have been asked to downloadfree-to-playapps or fill out surveys to complete a (non-existent) jailbreak.
|
https://en.wikipedia.org/wiki/Jailbreaking_(iOS)
|
In manyUnixvariants, "nobody" is the conventional name of auser identifierwhich owns no files, is in no privileged groups, and has no abilities except those which every other user has. It is normally not enabled as auser account, i.e. has nohome directoryor logincredentialsassigned. Some systems also define an equivalent group "nogroup".
|
https://en.wikipedia.org/wiki/Nobody_(username)
|
TheName Service Switch(NSS) is a feature found in the standard C library of variousUnix-likeoperating systems that connects a computer with a variety of sources of common configuration databases and name resolution mechanisms.[1]These sources include local operating system files (such as/etc/passwd,/etc/group, and/etc/hosts), theDomain Name System(DNS), theNetwork Information Service(NIS, NIS+), andLDAP.
Asystem administratorusually configures the operating system's name services using the file/etc/nsswitch.conf. This file lists databases (such aspasswd,shadowandgroup), and one or more sources for obtaining that information. Examples for sources arefilesfor local files,ldapfor theLightweight Directory Access Protocol,nisfor theNetwork Information Service,nisplusforNIS+,dnsfor theDomain Name System(DNS), andwinsforWindows Internet Name Service.
The nsswitch.conf file has line entries for each service consisting of a database name in the first field, terminated by a colon, and a list of possible source databases in the second field.
A typical file might look like:
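(The sample below is an illustrative reconstruction of a common default configuration, not the file from any particular system.)

    # /etc/nsswitch.conf (illustrative)
    passwd:     files ldap
    shadow:     files
    group:      files ldap
    hosts:      files dns
    networks:   files
    protocols:  files
    services:   files
    ethers:     files
    rpc:        files
    netgroup:   files ldap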
The order of the source databases determines the order the NSS will attempt to look up those sources to resolve queries for the specified service. A bracketed list of criteria may be specified following each source name to govern the conditions under which the NSS will proceed to querying the next source based on the preceding source's response.
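Applications do not parse nsswitch.conf themselves; they call ordinary C library lookup routines, and the library works through the configured sources in order. A minimal sketch, where the names "root" and "localhost" are merely example lookups:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>    /* getaddrinfo, freeaddrinfo */
    #include <pwd.h>      /* getpwnam */
    #include <stdio.h>

    int main(void)
    {
        /* "passwd" database: resolved through the sources listed on the
           passwd: line (e.g. files, then ldap). */
        struct passwd *pw = getpwnam("root");
        if (pw != NULL)
            printf("uid of %s is %d\n", pw->pw_name, (int)pw->pw_uid);

        /* "hosts" database: resolved through the sources listed on the
           hosts: line (e.g. files, then dns). */
        struct addrinfo *res = NULL;
        if (getaddrinfo("localhost", NULL, NULL, &res) == 0) {
            puts("localhost resolved via the hosts database");
            freeaddrinfo(res);
        }
        return 0;
    }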
Earlier Unix-like systems either accessed only local files or had hard-coded rules for accessing files or network-stored databases. Ultrix was a notable exception: its /etc/svc.conf configuration file provided nearly identical functionality to the NSS configuration file.
Sun Microsystemsfirst developed the NSS for theirSolarisoperating system.
Solaris' compliance with SVR4, which Sun Microsystems andAT&TUnix System Laboratories jointly developed by mergingUNIX System V,BSDandXenix, required that third parties be able to plug in name service implementations for thetransport layerof their choosing (OSIorIP) without rewriting SVR4-compliant Transport-IndependentRPC(TI-RPC) applications or rebuilding the operating system. Sun introduced theNIS+directory service in Solaris to supersedeNIS, which required co-existence of the two directory services within an enterprise to ease migration.
Sun engineersThomas MaslenandSanjay Daniwere the first to design and implement the Name Service Switch. They fulfilled Solaris requirements with the nsswitch.conf file specification and the implementation choice to load database access modules asdynamically loaded libraries, which Sun was also the first to introduce.
Sun engineers' original design of the configuration file and runtime loading of name service back-end libraries has withstood the test of time as operating systems have evolved and new name services are introduced. Over the years, programmers ported the NSS configuration file with nearly identical implementations to many other operating systems includingFreeBSD,NetBSD,Linux,HP-UX,IRIXandAIX[citation needed]. More than two decades after the NSS was invented,GNU libcimplements it almost identically.
|
https://en.wikipedia.org/wiki/Name_Service_Switch
|
Apower useris auserof computers, software and other electronic devices who uses advanced features of computer hardware,[1][2][3]operating systems,[4]programs, or websites[5]which are not used by the average user. A power user might not have extensive technical knowledge of the systems they use[6]but is rather characterized by competence or desire to make the most intensive use of computer programs or systems.
The term came into use in the 1980s, as advocates for computing developed special skills for working with or customizing existing hardware and software. Power users knew the best ways to perform common tasks and find advanced information before the arrival of the commercial Internet. On PC platforms, power users read magazines likeByteorPC Magazine, and knew enough about operating systems to create and edit batch files, write short programs inBASIC, and adjust system settings. They tended to customize or “supercharge” existing systems, rather than create new software.[7]
Inenterprise softwaresystems, "Power User" may be a formal role given to an individual who is not aprogrammerbut a specialist inbusiness software. Often these people retain their normal user job role but also function in testing, training, and first-tier support of the enterprise software.[6][8]
Some software applications are regarded as particularly suited for power users and may be designed as such. Examples includeVLC media player, amultimedia framework, a player, and a server, which includes complex features not found in other media player suites.[9][10]
User testing for software often focuses on new or regular users.[11] Power users can require different user interface elements compared to regular and minimal users, as they may need less help and fewer cues. A power user might use a program full-time, compared to a casual or occasional user. Thus a program which caters to power users will typically include features that make the interface easier for experts to use, even if these features might be mystifying to beginners.
A typical example is extensive keybindings, like Ctrl+F or Alt+Enter; having keyboard bindings andshortcutsfor many functions is a hallmark ofpower-user centric software design, as it enables users who put forth more effort to learn the shortcuts to operate the program quickly without removing their hands from the keyboard.
Power users typically want to operate the software with little interaction, or as fast as possible, and to be able to perform tasks in a precise, exactly reproducible way, whereas casual users may be happy if the program can be intuitively made to do approximately what they wanted. To aid in the automation of repetitive tasks during heavy usage, power-user-centric interfaces often provide the ability to compose macros, and program functions may be designed from the outset with the intention that they will be used programmatically in scripting.
Interface design may have to make trade-offs between confusing beginners andminimalistsversus the elaborate needs of experts and power users. These concerns may overlap partially with theblinking twelve problem, in which a complex user interface causes users to avoid certain features. It may be extremely difficult to both appeal to new users, who want user interfaces to be intuitive, and experts, who want power and flexibility.
However, there are solutions for these problems.
Users may alsoerroneously label themselvesas power users when they are less than fully competent,[12]further complicating the requirements of designing software which caters to the desires and needs of those users.
A simple intuitive interface often increases the technical complexity of a program and impedes its efficient use, while a well-designed but complex-seeming interface may increase efficiency by making many advanced features quickly accessible to experts. For example, a program with many advanced keyboard shortcuts may seem to be needlessly complex, but experienced power users may find it easier and quicker to avoid long sequences of mouse clicks to navigate menus and popups. Such menus and popups may exist to intuitively guide new users along a desired course of action, but they are often overly-simplistic by design so that novices might easily grasp the required steps. Providing both interfaces simultaneously is an option but requires greatly extended development time, so trade-offs are often made.
SAP and Oracle are enterprise systems that require a complex set of training to gain professional certification. Because of this, and also to encourage engagement with the systems, many companies have created a "Super User Model" (also called Power User or Champion) to take regular users and raise them to a level of leadership within the system. Doing this accomplishes three objectives.[6][8]
Extensive research has been done with the Super User Model in SAP, specifically in regard to the role they take in training and supporting end users. Currently, more than 70% of SAP companies utilize a form of the Super User Model.
In Microsoft Windows 2000, Windows XP Professional, and Windows Server 2003, there is a "Power Users" group on the system that gives more permissions than a normal restricted user, but stops short of Administrator permissions. If a user is a member of the Power Users group, they have a greater chance of exposing the system to malware than a normal user and can promote their account to an Administrator by purposely installing malware.[13] Thus, the Power Users group should only contain trustworthy and knowledgeable users; it is not suitable for untrustworthy users. The Power Users group was made obsolete in Windows Vista as part of the consolidation of privilege elevation features with the introduction of User Account Control.[14] In Windows Vista Business and higher, a "power user" can still be created via Local Users and Groups, but there is no difference from a standard user because the group's ACL entries have been removed from the file system.
Software that power users may employ to optimize their workflows include the following:
|
https://en.wikipedia.org/wiki/Power_user
|
Arootkitis a collection ofcomputer software, typically malicious, designed to enable access to acomputeror an area of itssoftwarethat is not otherwise allowed (for example, to an unauthorized user) and often masks its existence or the existence of other software.[1]The termrootkitis acompoundof "root" (the traditional name of theprivileged accountonUnix-likeoperating systems) and the word "kit" (which refers to the software components that implement the tool).[2]The term "rootkit" has negative connotations through its association withmalware.[1]
Rootkit installation can be automated, or anattackercan install it after having obtained root or administrator access.[3]Obtaining this access is a result of direct attack on a system, i.e. exploiting a vulnerability (such asprivilege escalation) or apassword(obtained bycrackingorsocial engineeringtactics like "phishing"). Once installed, it becomes possible to hide the intrusion as well as to maintain privileged access. Full control over a system means that existing software can be modified, including software that might otherwise be used to detect or circumvent it.
Rootkit detection is difficult because a rootkit may be able to subvert the software that is intended to find it. Detection methods include using an alternative and trustedoperating system, behavior-based methods, signature scanning, difference scanning, andmemory dumpanalysis. Removal can be complicated or practically impossible, especially in cases where the rootkit resides in thekernel; reinstallation of the operating system may be the only available solution to the problem. When dealing withfirmwarerootkits, removal may requirehardwarereplacement, or specialized equipment.
The termrootkit,rkit, orroot kitoriginally referred to a maliciously modified set of administrative tools for aUnix-likeoperating systemthat granted "root" access.[4]If an intruder could replace the standard administrative tools on a system with a rootkit, the intruder could obtain root access over the system whilst simultaneously concealing these activities from the legitimatesystem administrator. These first-generation rootkits were trivial to detect by using tools such asTripwirethat had not been compromised to access the same information.[5][6]Lane Davis and Steven Dake wrote the earliest known rootkit in 1990 forSun Microsystems'SunOSUNIX operating system.[7]In the lecture he gave upon receiving theTuring Awardin 1983,Ken ThompsonofBell Labs, one of the creators ofUnix, theorized about subverting theC compilerin a Unix distribution and discussed the exploit. The modified compiler would detect attempts to compile the Unixlogincommand and generate altered code that would accept not only the user's correct password, but an additional "backdoor" password known to the attacker. Additionally, the compiler would detect attempts to compile a new version of the compiler, and would insert the same exploits into the new compiler. A review of the source code for thelogincommand or the updated compiler would not reveal any malicious code.[8]This exploit was equivalent to a rootkit.
The first documentedcomputer virusto target thepersonal computer, discovered in 1986, usedcloakingtechniques to hide itself: theBrain virusintercepted attempts to read theboot sector, and redirected these to elsewhere on the disk, where a copy of the original boot sector was kept.[1]Over time,DOS-virus cloaking methods became more sophisticated. Advanced techniques includedhookinglow-level diskINT 13HBIOSinterruptcalls to hide unauthorized modifications to files.[1]
The first malicious rootkit for theWindows NToperating system appeared in 1999: a trojan calledNTRootkitcreated byGreg Hoglund.[9]It was followed byHackerDefenderin 2003.[1]The first rootkit targetingMac OS Xappeared in 2009,[10]while theStuxnetworm was the first to targetprogrammable logic controllers(PLC).[11]
In 2005,Sony BMGpublishedCDswithcopy protectionanddigital rights managementsoftware calledExtended Copy Protection, created by software company First 4 Internet. The software included a music player but silently installed a rootkit which limited the user's ability to access the CD.[12]Software engineerMark Russinovich, who created the rootkit detection toolRootkitRevealer, discovered the rootkit on one of his computers.[1]The ensuing scandal raised the public's awareness of rootkits.[13]To cloak itself, the rootkit hid any file starting with "$sys$" from the user. Soon after Russinovich's report, malware appeared which took advantage of the existing rootkit on affected systems.[1]OneBBCanalyst called it a "public relationsnightmare."[14]Sony BMG releasedpatchestouninstallthe rootkit, but it exposed users to an even more serious vulnerability.[15]The company eventually recalled the CDs. In the United States, aclass-action lawsuitwas brought against Sony BMG.[16]
TheGreek wiretapping case 2004–05, also referred to as Greek Watergate,[17]involved the illegaltelephone tappingof more than 100mobile phoneson theVodafone Greecenetwork belonging mostly to members of theGreekgovernment and top-ranking civil servants. The taps began sometime near the beginning of August 2004 and were removed in March 2005 without discovering the identity of the perpetrators. The intruders installed a rootkit targeting Ericsson'sAXE telephone exchange. According toIEEE Spectrum, this was "the first time a rootkit has been observed on a special-purpose system, in this case an Ericsson telephone switch."[18]The rootkit was designed to patch the memory of the exchange while it was running, enablewiretappingwhile disabling audit logs, patch the commands that list active processes and active data blocks, and modify the data blockchecksumverification command. A "backdoor" allowed an operator withsysadminstatus to deactivate the exchange's transaction log, alarms and access commands related to the surveillance capability.[18]The rootkit was discovered after the intruders installed a faulty update, which causedSMStexts to be undelivered, leading to an automated failure report being generated. Ericsson engineers were called in to investigate the fault and discovered the hidden data blocks containing the list of phone numbers being monitored, along with the rootkit and illicit monitoring software.
Modern rootkits do not elevate access,[4]but rather are used to make another software payload undetectable by adding stealth capabilities.[9]Most rootkits are classified asmalware, because the payloads they are bundled with are malicious. For example, a payload might covertly steal userpasswords,credit cardinformation, computing resources, or conduct other unauthorized activities. A small number of rootkits may be considered utility applications by their users: for example, a rootkit might cloak aCD-ROM-emulation driver, allowingvideo gameusers to defeatanti-piracymeasures that require insertion of the original installation media into a physical optical drive to verify that the software was legitimately purchased.
Rootkits and their payloads have many uses:
In some instances, rootkits provide desired functionality, and may be installed intentionally on behalf of the computer user:
There are at least five types of rootkit, ranging from those at the lowest level in firmware (with the highest privileges), through to the least privileged user-based variants that operate inRing 3. Hybrid combinations of these may occur spanning, for example, user mode and kernel mode.[26]
User-mode rootkits run inRing 3, along with other applications as user, rather than low-level system processes.[27]They have a number of possible installation vectors to intercept and modify the standard behavior of application programming interfaces (APIs). Some inject adynamically linkedlibrary (such as a.DLLfile on Windows, or a .dylib file onMac OS X) into other processes, and are thereby able to execute inside any target process to spoof it; others with sufficient privileges simply overwrite the memory of a target application. Injection mechanisms include:[27]
...since user mode applications all run in their own memory space, the rootkit needs to perform this patching in the memory space of every running application. In addition, the rootkit needs to monitor the system for any new applications that execute and patch those programs' memory space before they fully execute.
Kernel-mode rootkits run with the highest operating system privileges (Ring 0) by adding code or replacing portions of the core operating system, including both thekerneland associateddevice drivers.[citation needed]Most operating systems support kernel-mode device drivers, which execute with the same privileges as the operating system itself. As such, many kernel-mode rootkits are developed as device drivers or loadable modules, such asloadable kernel modulesinLinuxordevice driversinMicrosoft Windows. This class of rootkit has unrestricted security access, but is more difficult to write.[29]The complexity makes bugs common, and any bugs in code operating at the kernel level may seriously impact system stability, leading to discovery of the rootkit.[29]One of the first widely known kernel rootkits was developed forWindows NT 4.0and released inPhrackmagazine in 1999 byGreg Hoglund.[30][31]Kernel rootkits can be especially difficult to detect and remove because they operate at the samesecurity levelas the operating system itself, and are thus able to intercept or subvert the most trusted operating system operations. Any software, such asantivirus software, running on the compromised system is equally vulnerable.[32]In this situation, no part of the system can be trusted.
A rootkit can modify data structures in the Windows kernel using a method known asdirect kernel object manipulation(DKOM).[33]This method can be used to hide processes. A kernel mode rootkit can also hook theSystem Service Descriptor Table(SSDT), or modify the gates between user mode and kernel mode, in order to cloak itself.[4]Similarly for theLinuxoperating system, a rootkit can modify thesystem call tableto subvert kernel functionality.[34][35]It is common that a rootkit creates a hidden, encrypted filesystem in which it can hide other malware or original copies of files it has infected.[36]Operating systems are evolving to counter the threat of kernel-mode rootkits. For example, 64-bit editions of Microsoft Windows now implement mandatory signing of all kernel-level drivers in order to make it more difficult for untrusted code to execute with the highest privileges in a system.[37]
A kernel-mode rootkit variant called abootkitcan infect startup code like theMaster Boot Record(MBR),Volume Boot Record(VBR), orboot sector, and in this way can be used to attackfull disk encryptionsystems.[38]An example of such an attack on disk encryption is the "evil maid attack", in which an attacker installs a bootkit on an unattended computer. The envisioned scenario is a maid sneaking into the hotel room where the victims left their hardware.[39]The bootkit replaces the legitimateboot loaderwith one under their control. Typically the malware loader persists through the transition toprotected modewhen the kernel has loaded, and is thus able to subvert the kernel.[40][41][42]For example, the "Stoned Bootkit" subverts the system by using a compromisedboot loaderto intercept encryption keys and passwords.[43][self-published source?]In 2010, the Alureon rootkit has successfully subverted the requirement for 64-bit kernel-mode driver signing inWindows 7, by modifying themaster boot record.[44]Although not malware in the sense of doing something the user doesn't want, certain "Vista Loader" or "Windows Loader" software work in a similar way by injecting anACPISLIC (System Licensed Internal Code) table in the RAM-cached version of the BIOS during boot, in order to defeat theWindows Vista and Windows 7 activation process.[citation needed]This vector of attack was rendered useless in the (non-server) versions ofWindows 8, which use a unique, machine-specific key for each system, that can only be used by that one machine.[45]Many antivirus companies provide free utilities and programs to remove bootkits.
Rootkits have been created as Type IIHypervisorsin academia as proofs of concept. By exploiting hardware virtualization features such asIntel VTorAMD-V, this type of rootkit runs in Ring -1 and hosts the target operating system as avirtual machine, thereby enabling the rootkit to intercept hardware calls made by the original operating system.[6]Unlike normal hypervisors, they do not have to load before the operating system, but can load into an operating system before promoting it into a virtual machine.[6]A hypervisor rootkit does not have to make any modifications to the kernel of the target to subvert it; however, that does not mean that it cannot be detected by the guest operating system. For example, timing differences may be detectable inCPUinstructions.[6]The "SubVirt" laboratory rootkit, developed jointly byMicrosoftandUniversity of Michiganresearchers, is an academic example of a virtual-machine–based rootkit (VMBR),[46]whileBlue Pillsoftware is another. In 2009, researchers from Microsoft andNorth Carolina State Universitydemonstrated a hypervisor-layer anti-rootkit calledHooksafe, which provides generic protection against kernel-mode rootkits.[47]Windows 10introduced a new feature called "Device Guard", that takes advantage of virtualization to provide independent external protection of an operating system against rootkit-type malware.[48]
Afirmwarerootkit uses device or platform firmware to create a persistent malware image in hardware, such as arouter,network card,[49]hard drive, or the systemBIOS.[27][50]The rootkit hides in firmware, because firmware is not usually inspected forcode integrity. John Heasman demonstrated the viability of firmware rootkits in bothACPIfirmware routines[51]and in aPCIexpansion cardROM.[52]In October 2008, criminals tampered with Europeancredit-card-reading machines before they were installed. The devices intercepted and transmitted credit card details via a mobile phone network.[53]In March 2009, researchers Alfredo Ortega andAnibal Saccopublished details of aBIOS-level Windows rootkit that was able to survive disk replacement and operating system re-installation.[54][55][56]A few months later they learned that some laptops are sold with a legitimate rootkit, known as AbsoluteCompuTraceor AbsoluteLoJack for Laptops, preinstalled in many BIOS images. This is an anti-thefttechnology system that researchers showed can be turned to malicious purposes.[24]
Intel Active Management Technology, part ofIntel vPro, implementsout-of-band management, giving administratorsremote administration,remote management, andremote controlof PCs with no involvement of the host processor or BIOS, even when the system is powered off. Remote administration includes remote power-up and power-down, remote reset, redirected boot, console redirection, pre-boot access to BIOS settings, programmable filtering for inbound and outbound network traffic, agent presence checking, out-of-band policy-based alerting, access to system information, such as hardware asset information, persistent event logs, and other information that is stored in dedicated memory (not on the hard drive) where it is accessible even if the OS is down or the PC is powered off. Some of these functions require the deepest level of rootkit, a second non-removable spy computer built around the main computer. Sandy Bridge and future chipsets have "the ability to remotely kill and restore a lost or stolen PC via 3G". Hardware rootkits built into thechipsetcan help recover stolen computers, remove data, or render them useless, but they also present privacy and security concerns of undetectable spying and redirection by management or hackers who might gain control.
Rootkits employ a variety of techniques to gain control of a system; the type of rootkit influences the choice of attack vector. The most common technique leveragessecurity vulnerabilitiesto achieve surreptitiousprivilege escalation. Another approach is to use aTrojan horse, deceiving a computer user into trusting the rootkit's installation program as benign—in this case,social engineeringconvinces a user that the rootkit is beneficial.[29]The installation task is made easier if theprinciple of least privilegeis not applied, since the rootkit then does not have to explicitly request elevated (administrator-level) privileges. Other classes of rootkits can be installed only by someone with physical access to the target system. Some rootkits may also be installed intentionally by the owner of the system or somebody authorized by the owner, e.g. for the purpose ofemployee monitoring, rendering such subversive techniques unnecessary.[57]Some malicious rootkit installations are commercially driven, with a pay-per-install (PPI) compensation method typical for distribution.[58][59]
Once installed, a rootkit takes active measures to obscure its presence within the host system through subversion or evasion of standard operating system security tools and application programming interfaces (APIs) used for diagnosis, scanning, and monitoring.[60] Rootkits achieve this by modifying the behavior of core parts of an operating system, either by loading code into other processes or by installing or modifying drivers or kernel modules. Obfuscation techniques include concealing running processes from system-monitoring mechanisms and hiding system files and other configuration data.[61] It is not uncommon for a rootkit to disable the event-logging capability of an operating system in an attempt to hide evidence of an attack. Rootkits can, in theory, subvert any operating system activity.[62] The "perfect rootkit" can be thought of as similar to a "perfect crime": one that nobody realizes has taken place. Beyond commonly installing into Ring 0 (kernel mode), where they have complete access to the system, rootkits take a number of measures to survive detection and "cleaning" by antivirus software. These include polymorphism (changing so that their "signature" is hard to detect), stealth techniques, regeneration, disabling or turning off anti-malware software,[63] and not installing on virtual machines, where researchers may find it easier to discover and analyze them.
The fundamental problem with rootkit detection is that if the operating system has been subverted, particularly by a kernel-level rootkit, it cannot be trusted to find unauthorized modifications to itself or its components.[62]Actions such as requesting a list of running processes, or a list of files in a directory, cannot be trusted to behave as expected. In other words, rootkit detectors that work while running on infected systems are only effective against rootkits that have some defect in their camouflage, or that run with lower user-mode privileges than the detection software in the kernel.[29]As withcomputer viruses, the detection and elimination of rootkits is an ongoing struggle between both sides of this conflict.[62]Detection can take a number of different approaches, including looking for virus "signatures" (e.g. antivirus software), integrity checking (e.g.digital signatures), difference-based detection (comparison of expected vs. actual results), and behavioral detection (e.g. monitoring CPU usage or network traffic).
For kernel-mode rootkits, detection is considerably more complex, requiring careful scrutiny of the System Call Table to look forhooked functionswhere the malware may be subverting system behavior,[64]as well asforensicscanning of memory for patterns that indicate hidden processes. Unix rootkit detection offerings include Zeppoo,[65]chkrootkit,rkhunterandOSSEC. For Windows, detection tools include Microsoft SysinternalsRootkitRevealer,[66]Avast Antivirus,[67]SophosAnti-Rootkit,[68]F-Secure,[69]Radix,[70]GMER,[71]andWindowsSCOPE. Any rootkit detectors that prove effective ultimately contribute to their own ineffectiveness, as malware authors adapt and test their code to escape detection by well-used tools.[Notes 1]Detection by examining storage while the suspect operating system is not operational can miss rootkits not recognised by the checking software, as the rootkit is not active and suspicious behavior is suppressed; conventional anti-malware software running with the rootkit operational may fail if the rootkit hides itself effectively.
The best and most reliable method for operating-system-level rootkit detection is to shut down the computer suspected of infection, and then to check itsstoragebybootingfrom an alternative trusted medium (e.g. a "rescue"CD-ROMorUSB flash drive).[72]The technique is effective because a rootkit cannot actively hide its presence if it is not running.
The behavioral-based approach to detecting rootkits attempts to infer the presence of a rootkit by looking for rootkit-like behavior. For example, byprofilinga system, differences in the timing and frequency of API calls or in overall CPU utilization can be attributed to a rootkit. The method is complex and is hampered by a high incidence offalse positives. Defective rootkits can sometimes introduce very obvious changes to a system: theAlureonrootkit crashed Windows systems after a security update exposed a design flaw in its code.[73][74]Logs from apacket analyzer,firewall, orintrusion prevention systemmay present evidence of rootkit behaviour in a networked environment.[26]
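The following sketch illustrates the timing idea in a simplified form; it is not drawn from any particular detection product, and the baseline value, threshold, and sample count are hypothetical placeholders. It times a burst of cheap system calls and flags a large slowdown relative to a figure recorded on a known-clean system, on the theory that a hook inserted into the call path adds measurable latency.

# Simplified behavioral check: time a burst of cheap system calls and compare
# the average cost against a baseline recorded on a known-clean machine.
import os
import time

CLEAN_BASELINE_NS = 700     # hypothetical per-call cost measured on a clean system
SUSPICION_FACTOR = 5.0      # flag if calls run this many times slower than baseline
SAMPLES = 100_000

def average_syscall_ns(samples=SAMPLES):
    start = time.perf_counter_ns()
    for _ in range(samples):
        os.getpid()         # a cheap system call; a hook in its path adds latency
    return (time.perf_counter_ns() - start) / samples

if __name__ == "__main__":
    observed = average_syscall_ns()
    print(f"average getpid() cost: {observed:.0f} ns")
    if observed > CLEAN_BASELINE_NS * SUSPICION_FACTOR:
        print("WARNING: system-call latency far above baseline; investigate further")

In practice, load, CPU frequency scaling, and virtualization all shift such measurements, which is one reason this approach suffers from the false positives mentioned above.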
Antivirus products rarely catch all viruses in public tests (depending on what is used and to what extent), even though security software vendors incorporate rootkit detection into their products. Should a rootkit attempt to hide during an antivirus scan, a stealth detector may notice; if the rootkit attempts to temporarily unload itself from the system, signature detection (or "fingerprinting") can still find it.[75] This combined approach forces attackers to implement counterattack mechanisms, or "retro" routines, that attempt to terminate antivirus programs. Signature-based detection methods can be effective against well-published rootkits, but less so against specially crafted, custom-built rootkits.[62]
Another method that can detect rootkits compares "trusted" raw data with "tainted" content returned by anAPI. For example,binariespresent on disk can be compared with their copies withinoperating memory(in some operating systems, the in-memory image should be identical to the on-disk image), or the results returned fromfile systemorWindows RegistryAPIs can be checked against raw structures on the underlying physical disks[62][76]—however, in the case of the former, some valid differences can be introduced by operating system mechanisms like memory relocation orshimming. A rootkit may detect the presence of such a difference-based scanner orvirtual machine(the latter being commonly used to perform forensic analysis), and adjust its behaviour so that no differences can be detected. Difference-based detection was used byRussinovich'sRootkitRevealertool to find the Sony DRM rootkit.[1]
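A minimal cross-view sketch on Linux, assuming the high-level view comes from the ps utility and the "raw" view from enumerating /proc directly; a rootkit that filters one view but not the other leaves a visible discrepancy. Because ps itself ultimately reads /proc, this particular pairing mainly catches user-space hooks (for example LD_PRELOAD-based rootkits); real difference-based tools compare against on-disk or in-kernel structures instead.

# Cross-view comparison: PIDs reported by ps versus PIDs visible as numeric
# directories under /proc. A mismatch may indicate a hidden process, or simply
# a process that started or exited between the two snapshots.
import os
import subprocess

def pids_from_ps():
    out = subprocess.run(["ps", "-eo", "pid="], capture_output=True, text=True, check=True)
    return {int(line) for line in out.stdout.split()}

def pids_from_proc():
    return {int(name) for name in os.listdir("/proc") if name.isdigit()}

if __name__ == "__main__":
    hidden = pids_from_proc() - pids_from_ps()
    for pid in sorted(hidden):
        print(f"PID {pid} visible in /proc but not reported by ps")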
Code signing uses public-key infrastructure to check if a file has been modified since being digitally signed by its publisher. Alternatively, a system owner or administrator can use a cryptographic hash function to compute a "fingerprint" at installation time that can help to detect subsequent unauthorized changes to on-disk code libraries.[77] However, unsophisticated schemes check only whether the code has been modified since installation time; subversion prior to that time is not detectable. The fingerprint must be re-established each time changes are made to the system: for example, after installing security updates or a service pack. The hash function creates a message digest, a relatively short code calculated from every bit in the file by an algorithm that produces large changes in the digest for even small changes to the original file. By recalculating the message digests of the installed files at regular intervals and comparing them against a trusted list of digests, changes in the system can be detected and monitored, as long as the original baseline was created before the malware was added.
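A minimal sketch of the fingerprinting idea described above, assuming the baseline is stored as a small JSON file that is kept on trusted, read-only media; the file name and invocation are placeholders. It records SHA-256 digests of the given files and later reports any file whose digest no longer matches.

# Baseline-and-verify fingerprinting with a cryptographic hash (SHA-256).
# The baseline must be created while the system is still trusted and stored
# where a rootkit cannot alter it (for example, offline read-only media).
import hashlib
import json
import sys

def sha256_of(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def make_baseline(paths, out_file="baseline.json"):
    with open(out_file, "w") as f:
        json.dump({p: sha256_of(p) for p in paths}, f, indent=2)

def verify(baseline_file="baseline.json"):
    with open(baseline_file) as f:
        expected = json.load(f)
    for path, digest in expected.items():
        if sha256_of(path) != digest:
            print(f"MODIFIED: {path}")

if __name__ == "__main__":
    if sys.argv[1:2] == ["baseline"]:
        make_baseline(sys.argv[2:])
    else:
        verify()

As the paragraph above notes, this detects only changes made after the baseline was taken, and the verifier itself must run from a trusted environment.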
More-sophisticated rootkits are able to subvert the verification process by presenting an unmodified copy of the file for inspection, or by making code modifications only in memory or in configuration registers, which are later compared against a whitelist of expected values.[78] The code that performs the hash, compare, or extend operations must also be protected; in this context, the notion of an immutable root of trust holds that the very first code to measure the security properties of a system must itself be trusted, to ensure that a rootkit or bootkit does not compromise the system at its most fundamental level.[79]
Forcing a complete dump ofvirtual memorywill capture an active rootkit (or akernel dumpin the case of a kernel-mode rootkit), allowing offlineforensic analysisto be performed with adebuggeragainst the resultingdump file, without the rootkit being able to take any measures to cloak itself. This technique is highly specialized, and may require access to non-publicsource codeordebugging symbols. Memory dumps initiated by the operating system cannot always be used to detect a hypervisor-based rootkit, which is able to intercept and subvert the lowest-level attempts to read memory[6]—a hardware device, such as one that implements anon-maskable interrupt, may be required to dump memory in this scenario.[80][81]Virtual machinesalso make it easier to analyze the memory of a compromised machine from the underlying hypervisor, so some rootkits will avoid infecting virtual machines for this reason.
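As a toy illustration of offline analysis against a dump file, the sketch below scans a raw memory image for a known byte pattern; the dump file name and the signature bytes are hypothetical placeholders, and real memory-forensics frameworks reconstruct kernel data structures (process lists, loaded modules) from the image rather than merely pattern-matching.

# Naive offline scan of a raw memory dump for a known byte signature.
# Working on a dump sidesteps the live system, which may be lying about
# its own state while the rootkit is active.
import mmap

SIGNATURE = bytes.fromhex("deadbeefcafebabe")   # hypothetical rootkit signature
DUMP_FILE = "memory.dmp"                        # hypothetical dump file name

def scan(dump_path=DUMP_FILE, sig=SIGNATURE):
    with open(dump_path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mem:
        pos = mem.find(sig)
        while pos != -1:
            print(f"signature found at offset {pos:#x}")
            pos = mem.find(sig, pos + 1)

if __name__ == "__main__":
    scan()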
Manual removal of a rootkit is often extremely difficult for a typical computer user,[27] but a number of security-software vendors offer tools to automatically detect and remove some rootkits, typically as part of an antivirus suite. As of 2005, Microsoft's monthly Windows Malicious Software Removal Tool is able to detect and remove some classes of rootkits.[82][83] Also, Windows Defender Offline can remove rootkits, as it runs from a trusted environment before the operating system starts.[84] Some antivirus scanners can bypass file system APIs, which are vulnerable to manipulation by a rootkit; instead, they access raw file system structures directly and use this information to validate the results from the system APIs, identifying any differences that may be caused by a rootkit.[Notes 2][85][86][87][88] Some experts believe that the only reliable way to remove a rootkit is to re-install the operating system from trusted media,[89][90] because antivirus and malware removal tools running on an untrusted system may be ineffective against well-written kernel-mode rootkits. Booting an alternative operating system from trusted media can allow an infected system volume to be mounted and potentially safely cleaned, and critical data to be copied off or, alternatively, a forensic examination performed.[26] Lightweight operating systems such as Windows PE, Windows Recovery Console, Windows Recovery Environment, BartPE, or Live Distros can be used for this purpose, allowing the system to be "cleaned". Even if the type and nature of a rootkit is known, manual repair may be impractical, while re-installing the operating system and applications is safer, simpler and quicker.[89]
System hardening represents one of the first layers of defence against a rootkit, to prevent it from being able to install.[91] Applying security patches, implementing the principle of least privilege, reducing the attack surface and installing antivirus software are standard security best practices that are effective against all classes of malware.[92] Newer firmware specifications such as UEFI, with its Secure Boot feature, have been designed to address the threat of bootkits, but even these are vulnerable if the security features they offer are not utilized.[50] For server systems, remote attestation using technologies such as Intel Trusted Execution Technology (TXT) provides a way of verifying that servers remain in a known good state. For example, Microsoft BitLocker's encryption of data at rest verifies that servers are in a known "good state" on bootup. PrivateCore vCage is a software offering that secures data in use (memory) to avoid bootkits and rootkits by verifying that servers are in a known "good" state on bootup. The PrivateCore implementation works in concert with Intel TXT and locks down server system interfaces to avoid potential bootkits and rootkits.
Another defense mechanism, the Virtual Wall (VTW) approach, serves as a lightweight hypervisor with rootkit detection and event-tracing capabilities. In normal operation (guest mode), Linux runs natively; when a loaded kernel module (LKM) violates security policies, the system switches to host mode, where the VTW detects, traces, and classifies rootkit events using memory access control and event-injection mechanisms. Its authors report timely detection of and defense against kernel rootkits with minimal CPU overhead (less than 2%), and compare the VTW favorably to other defense schemes, emphasizing its simplicity of implementation and potential performance gains on Linux servers.[93]
|
https://en.wikipedia.org/wiki/Rootkit
|
InUnixoperating systems, the termwheelrefers to auser accountwith awheel bit, a system setting that provides additional specialsystem privilegesthat empower a user to execute restrictedcommandsthat ordinary user accounts cannot access.[1][2]
The termwheelwas first applied to computer user privilege levels after the introduction of theTENEXoperating system, later distributed under the nameTOPS-20in the 1960s and early 1970s.[2][3]The term was derived from the slang phrasebig wheel, referring to a person with great power or influence.[1]
In the 1980s, the term was imported intoUnixculture due to the migration of operating system developers and users from TENEX/TOPS-20 to Unix.[2]
Modern Unix systems generally use user groups as a security protocol to control access privileges. The wheel group is a special user group used on some Unix systems, mostly BSD systems, to control access to the su[4][5] or sudo command, which allows a user to masquerade as another user (usually the super user).[1][2][6] Debian and its derivatives create a group called sudo with a purpose similar to that of a wheel group.[7]
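As a brief illustration (the user name is a placeholder, and exact paths and defaults vary by system), configurations that gate sudo and su on wheel membership typically look roughly like this:

# /etc/sudoers (edited with visudo): allow members of group wheel to run any command
%wheel  ALL=(ALL)  ALL

# /etc/pam.d/su on many Linux systems: only wheel members may use su
auth  required  pam_wheel.so use_uid

# add an existing user to the wheel group
usermod -aG wheel alice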
The phrasewheel war, which originated atStanford University,[8]is a term used incomputer culture, first documented in the 1983 version ofThe Jargon File. A 'wheel war' was a user conflict in amulti-user(see also:multiseat) computer system, in which students withadministrative privilegeswould attempt to lock each other out of a university's computer system, sometimes causing unintentional harm to other users.[9]
|
https://en.wikipedia.org/wiki/Wheel_(computing)
|
Incomputer software, anoperating environmentorintegrated applications environmentis theenvironmentin which users runapplication software. The environment consists of auser interfaceprovided by anapplications managerand usually anapplication programming interface(API) to the applications manager.
An operating environment is not a full operating system, but is a form of middleware that rests between the OS and the application. For example, the first version of Microsoft Windows, Windows 1.0, was not a full operating system, but a GUI laid over DOS, albeit with an API of its own. Similarly, the IBM U2 system operates on both Unix/Linux and Windows NT. Usually the user interface is text-based or graphical, rather than a command-line interface (e.g., DOS or the Unix shell), which is often the interface of the underlying operating system.
In the mid-1980s, text-based and graphical user interface operating environments such as IBM TopView, Microsoft Windows, Digital Research's GEM Desktop, GEOS and Quarterdeck Office Systems's DESQview surrounded DOS operating systems with a shell that turned the user's display into a menu-oriented "desktop" for selecting and running PC applications. These programs were more than simple menu systems; as alternate operating environments they were substitutes for integrated programs such as Framework and Symphony, which allowed switching, windowing, and cut-and-paste operations among dedicated applications. These operating environment systems gave users much of the convenience of integrated software without locking them into a single package. Alternative operating environments made terminate-and-stay-resident pop-up utilities such as Borland Sidekick redundant. Windows provided its own version of these utilities, and placing them under central control could eliminate memory conflicts that RAM-resident utilities create.[1] In later versions, Windows evolved from an operating environment into a complete operating system that used DOS as a bootloader (Windows 9x), while a separate complete operating system, Windows NT, was developed at the same time. All versions after Windows ME have been based on the Windows NT kernel.
|
https://en.wikipedia.org/wiki/Operating_environment
|
This article compares a variety of different X window managers. For an introduction to the topic, see X Window System.
|
https://en.wikipedia.org/wiki/Comparison_of_X_window_managers
|
In computing, direct or immediate mode[1][2] in an interactive programming system is the immediate execution of commands, statements, or expressions. In many interactive systems, most of these can either be included in programs or executed directly in a read–eval–print loop (REPL).
Most interactive systems also offer the possibility of defining programs in the REPL, either with explicit declarations, such asPython'sdef, or by labelling them withline numbers. Programs can then be run by calling a named or numbered procedure or by running a main program.
Many programming systems, fromLispandJOSStoPythonandPerlhave interactiveREPLswhich also allow defining programs. Mostintegrated development environmentsoffer a direct mode where, duringdebuggingand while the program execution is suspended, commands can be executed directly in the current scope and the result is displayed.
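For example, in a Python read–eval–print loop, an expression typed at the prompt is executed immediately, while a def statement stores a procedure that can then be called on demand (the values below are merely illustrative):

>>> 2 ** 10                  # direct (immediate) mode: evaluated and printed at once
1024
>>> import math
>>> math.sqrt(2)
1.4142135623730951
>>> def area(r):             # defining a program from within the same REPL
...     return math.pi * r * r
...
>>> area(3)                  # the stored definition is then run on request
28.274333882308138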
|
https://en.wikipedia.org/wiki/Direct_mode
|
This list includes notable command-line interpreters – programs that interactively interpret commands entered by the user at the command line.
Mostoperating systemsare accessible via ashell– a command line interpreter. In some cases multiple shells are available.
This category somewhat overlaps with the general programming section since an operating system shell supports programming, and the line between operating system access and general programming is sometimes less than clear. For example, some versions of BASIC served as a shell, and BASIC is also a general-purpose language.
|
https://en.wikipedia.org/wiki/List_of_command-line_interpreters
|
Aconsole applicationorcommand-line programis acomputer program(applicationsorutilities) designed to be used via atext-onlyuser interface, such as atext terminal, thecommand-line interfaceof someoperating systems(Unix,DOS,[1]etc.) or the text-based interface included with mostgraphical user interface(GUI) operating systems, such as theWindows ConsoleinMicrosoft Windows,[2]theTerminalinmacOS, andxtermin Unix.
A user typically interacts with a console application using only akeyboardanddisplay screen, as opposed to GUI applications, which normally require the use of amouseor otherpointing device. Many console applications such ascommand line interpretersarecommand linetools, but numeroustext-based user interface(TUI) programs also exist.
As the speed and ease of use of GUI applications have improved over time, the use of console applications has greatly diminished, but not disappeared. Some users simply prefer console-based applications, while some organizations still rely on existing console applications to handle key data processing tasks.
The ability to create console applications is kept as a feature of modernprogramming environmentssuch asVisual Studioand the.NET Frameworkon Microsoft Windows.[3]It simplifies the learning process of a new programming language by removing the complexity of a graphical user interface (see an example in theC#article).
For data processing tasks and computer administration, these programming environments represent the next level of operating system or data processing control afterscripting. If an application is only going to be run by the original programmer and/or a few colleagues, there may be no need for a pretty graphical user interface, leaving the application leaner, faster and easier to maintain.
Multiple libraries are available to assist with the development of text-based user interfaces. On Unix systems, such libraries include ncurses and curses. On Microsoft Windows, conio.h is an example of such a library.
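As a brief illustration, a minimal text-mode program using the curses binding that ships with Python on Unix-like systems might look like the following sketch (the displayed strings are arbitrary):

# Minimal text-mode user interface using the curses library (Unix terminals).
# curses.wrapper() initializes the terminal, calls main(), and restores the
# terminal state on exit, even if an exception is raised.
import curses

def main(stdscr):
    stdscr.clear()
    stdscr.addstr(0, 0, "A console (text-mode) application")
    stdscr.addstr(2, 0, "Press any key to exit.")
    stdscr.refresh()
    stdscr.getkey()          # block until a keypress

if __name__ == "__main__":
    curses.wrapper(main)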
Console-based applications includeAlpine(ane-mail client), cmus (anaudio player),Irssi(anIRC client),Lynx(aweb browser),Midnight Commander(afile manager),Music on Console(anaudio player),Mutt(an e-mail client),nano(atext editor),ne(a text editor),newsbeuter(anRSS reader), andranger(afile manager).
|
https://en.wikipedia.org/wiki/Console_application
|