Microdata is a WHATWG HTML specification used to nest metadata within existing content on web pages.[1] Search engines, web crawlers, and browsers can extract and process Microdata from a web page and use it to provide a richer browsing experience for users. Search engines benefit greatly from direct access to Microdata because it allows them to understand the information on web pages and provide more relevant results to users.[2][3] Microdata uses a supporting vocabulary to describe an item and name–value pairs to assign values to its properties.[4] Microdata is an attempt to provide a simpler way of annotating HTML elements with machine-readable tags than the similar approaches of using RDFa and microformats.

In 2013, because the W3C HTML Working Group failed to find someone to serve as an editor for the Microdata HTML specification, its development was terminated with a 'Note'.[5][6] However, since that time, two new editors were selected, and five newer versions of the working draft have been published,[7][8][9][10] the most recent being the Working Draft of 26 April 2018.[10]

Microdata vocabularies provide the semantics, or meaning, of an item.[11] Web developers can design a custom vocabulary or use vocabularies available on the web. A collection of commonly used markup vocabularies is provided by the Schema.org schemas, which include Person, Place, Event, Organization, Product, Review, Review-aggregate, Breadcrumb, Offer, and Offer-aggregate. The website schema.org was established by search engine operators such as Google, Microsoft, Yahoo!, and Yandex, which use microdata markup to improve search results.[12]: 85

For some purposes, an ad-hoc vocabulary is adequate. For others, a vocabulary will need to be designed. Where possible, authors are encouraged to re-use existing vocabularies, as this makes content re-use easier.[1]

In some cases, search engines covering specific regions may provide locally specific extensions of microdata. For example, Yandex, a major search engine in Russia, supports microformats such as hCard (company contact information), hRecipe (food recipes), hReview (market reviews) and hProduct (product data), and provides its own format for the definition of terms and encyclopedic articles. This extension was made in order to solve transliteration problems between the Cyrillic and Latin alphabets. After the implementation of additional parameters from Schema.org's vocabulary,[13] indexation of information in Russian-language web pages became more successful.

The kind of HTML5 markup found on a typical "About" page containing information about a person can be annotated with Schema.org[14][15][16] Microdata (a hedged sketch of both the plain and the annotated versions appears below). As that example shows, Microdata items can be nested; in this case, an item of type http://schema.org/PostalAddress is nested inside an item of type http://schema.org/Person. Google parses the Microdata from such markup, and developers can test pages containing Microdata using Google's Rich Snippet Testing Tool.[17]

The same machine-readable terms can be used not only in HTML Microdata, but also in other annotations such as RDFa or JSON-LD in the markup, or in an external RDF file in a serialization such as RDF/XML, Notation3, or Turtle.
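The following is a minimal sketch of the markup described above: a plain HTML5 "About" snippet followed by the same snippet annotated with Schema.org Person and PostalAddress Microdata. The name, address, and URL are invented placeholders, not the article's original example.

```html
<!-- Plain HTML5 "About" snippet (hypothetical data) -->
<section>
  My name is Jane Doe, and you can visit my site at
  <a href="https://www.example.com/">example.com</a>.
  I live at 123 Main St., Springfield.
</section>

<!-- The same snippet with Schema.org Microdata added -->
<section itemscope itemtype="http://schema.org/Person">
  My name is <span itemprop="name">Jane Doe</span>, and you can visit my site at
  <a itemprop="url" href="https://www.example.com/">example.com</a>.
  I live at
  <span itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
    <span itemprop="streetAddress">123 Main St.</span>,
    <span itemprop="addressLocality">Springfield</span>.
  </span>
</section>
```

Note how the PostalAddress item is nested inside the Person item by placing a second itemscope/itemtype pair on an element that is itself a property (itemprop="address") of the outer item.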
https://en.wikipedia.org/wiki/Microdata_(HTML)
Microformats (μF)[note 1] are predefined HTML markup (like HTML classes) created to serve as descriptive and consistent metadata about elements, designating them as representing a certain type of data (such as contact information, geographic coordinates, events, products, recipes, etc.).[1] They allow software to process the information reliably by having set classes refer to a specific type of data rather than being arbitrary. Microformats emerged around 2005 and were predominantly designed for use by search engines, web syndication and aggregators such as RSS.[2] Google confirmed in 2020 that it still parses microformats for use in content indexing.[3] Microformats are referenced in several W3C social web specifications, including IndieAuth[4] and Webmention.[5]

Although the content of web pages has been capable of some "automated processing" since the inception of the web, such processing is difficult because the markup elements used to display information on the web do not describe what the information means.[6] Microformats can bridge this gap by attaching semantics, thereby obviating other, more complicated methods of automated processing, such as natural language processing or screen scraping. The use, adoption and processing of microformats enables data items to be indexed, searched for, saved or cross-referenced, so that information can be reused or combined.[6] As of 2013, microformats allow the encoding and extraction of event details, contact information, social relationships and similar information.

Microformats2, abbreviated as mf2, is the updated version of microformats. Mf2 provides an easier way of interpreting HTML structured syntax and vocabularies than the earlier approaches that made use of RDFa and microdata.[7]

Microformats emerged around 2005[note 2] as part of a grassroots movement to make recognizable data items (such as events, contact details or geographical locations) capable of automated processing by software, as well as directly readable by end-users.[6][note 3] Link-based microformats emerged first. These include vote links that express opinions of the linked page, which search engines can tally into instant polls.[8]

CommerceNet, a nonprofit organization that promotes e-commerce on the Internet, has helped sponsor and promote the technology and support the microformats community in various ways.[8] CommerceNet also helped co-found the Microformats.org community site.[8] Neither CommerceNet nor Microformats.org operates as a standards body. The microformats community functions through an open wiki, a mailing list, and an Internet Relay Chat (IRC) channel.[8] Most of the existing microformats originated at the Microformats.org wiki and the associated mailing list[citation needed] by a process of gathering examples of web-publishing behaviour and then codifying them. Some other microformats (such as rel=nofollow and unAPI) have been proposed, or developed, elsewhere.

XHTML and HTML standards allow for the embedding and encoding of semantics within the attributes of markup elements. Microformats take advantage of these standards by indicating the presence of metadata using existing attributes such as class and rel. For example, in the text "The birds roosted at 52.48, -1.89", the pair of numbers may be understood, from their context, to be a set of geographic coordinates.
When the values are wrapped in spans (or other HTML elements) with specific class names (in this case geo, latitude and longitude, all part of the geo microformat specification), software agents can recognize exactly what each value represents and can then perform a variety of tasks such as indexing it, locating it on a map and exporting it to a GPS device. A hedged sketch of this markup, together with the hCard markup discussed next, follows this passage.

In the hCard case, contact information that would otherwise be plain text is marked up so that the formatted name (fn), organisation (org), telephone number (tel) and web address (url) are identified using specific class names, and the whole thing is wrapped in class="vcard", which indicates that the other classes form an hCard (short for "HTML vCard") and are not merely coincidentally named. Other, optional, hCard classes also exist. Software, such as browser plug-ins, can then extract the information and transfer it to other applications, such as an address book.

For annotated examples of microformats on live pages, see HCard#Live example and Geo (microformat)#Usage.

Several microformats have been developed to enable semantic markup of particular types of information. However, only hCard and hCalendar have been ratified; the others remain drafts.

Using microformats within HTML code provides additional formatting and semantic data that applications can use. For example, applications such as web crawlers can collect data about online resources, and desktop applications such as e-mail clients or scheduling software can compile details. The use of microformats can also facilitate "mash-ups", such as exporting all of the geographical locations on a web page into (for example) Google Maps to visualize them spatially.

Several browser extensions, such as Operator for Firefox and Oomph for Internet Explorer, provide the ability to detect microformats within an HTML document. When hCard or hCalendar are involved, such browser extensions allow microformats to be exported into formats compatible with contact management and calendar utilities, such as Microsoft Outlook. When dealing with geographical coordinates, they allow the location to be sent to applications such as Google Maps. Yahoo! Query Language can be used to extract microformats from web pages.[16] On 12 May 2009, Google announced that it would be parsing the hCard, hReview and hProduct microformats, and using them to populate search-result pages.[17] It subsequently extended this in 2010 to use hCalendar for events and hRecipe for cookery recipes.[18] Similarly, microformats are also processed by Bing[19] and Yahoo!.[20] As of late 2010, these were the world's top three search engines.[21]

Microsoft said in 2006 that it needed to incorporate microformats into upcoming projects,[22] as did other software companies. Alex Faaborg summarizes the arguments for putting the responsibility for microformat user interfaces in the web browser rather than making more complicated HTML.[23]

Various commentators have offered review and discussion on the design principles and practical aspects of microformats.
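Below is a minimal sketch of the two kinds of markup just described. The coordinates come from the article's own sentence; the contact details (name, phone number, URL) are invented placeholders rather than the article's original example.

```html
<!-- geo microformat: the coordinates from the sentence above -->
<p>The birds roosted at
  <span class="geo">
    <span class="latitude">52.48</span>,
    <span class="longitude">-1.89</span>
  </span>
</p>

<!-- hCard microformat: hypothetical contact details -->
<div class="vcard">
  <span class="fn org">Example Corp</span>
  <span class="tel">+1-555-0100</span>
  <a class="url" href="https://example.com/">https://example.com/</a>
</div>
```

The class names carry the semantics: geo/latitude/longitude mark the coordinate pair, while vcard wraps the fn, org, tel and url properties into a single hCard.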
Microformats have been compared to other approaches that seek to serve the same or similar purpose.[24] As of 2007, there had been some criticism of one, or all, microformats.[24] The spread and use of microformats was being advocated as of 2007.[25][26] Opera Software CTO and CSS creator Håkon Wium Lie said in 2005, "We will also see a bunch of microformats being developed, and that's how the semantic web will be built, I believe."[27] However, in August 2008 Toby Inkster, author of the "Swignition" (formerly "Cognition") microformat parsing service, pointed out that no new microformat specifications had been published since 2005.[28]

Computer scientist and entrepreneur Rohit Khare stated that reduce, reuse, and recycle is "shorthand for several design principles" that motivated the development and practices behind microformats.[8]: 71–72

Because some microformats make use of the title attribute of HTML's <abbr> element to conceal machine-readable data (particularly date-times and geographical coordinates) in the "abbr design pattern", the plain-text content of the element is inaccessible to screen readers that expand abbreviations.[29] In June 2008 the BBC announced that it would be dropping use of microformats using the abbr design pattern because of accessibility concerns.[30]

Microformats are not the only solution for providing "more intelligent data" on the web; alternative approaches are used and are under development. For example, the use of XML markup and standards of the Semantic Web are cited as alternative approaches.[8] Some contrast these with microformats in that they do not necessarily coincide with the design principles of "reduce, reuse, and recycle", at least not to the same extent.[8] One advocate of microformats, Tantek Çelik, characterized a problem with alternative approaches: "Here's a new language we want you to learn, and now you need to output these additional files on your server. It's a hassle. (Microformats) lower the barrier to entry."[6]

For some applications the use of other approaches may be valid. If the type of data to be described does not map to an existing microformat, RDFa can embed arbitrary vocabularies into HTML, for example domain-specific scientific data such as zoological or chemical data for which there is no microformat. Standards such as the W3C's GRDDL allow microformats to be converted into data compatible with the Semantic Web.[31] Another advocate of microformats, Ryan King, put the compatibility of microformats with other approaches this way: "Microformats provide an easy way for many people to contribute semantic data to the web. With GRDDL all of that data is made available for RDF Semantic Web tools. Microformats and GRDDL can work together to build a better web."[31]

Microformats2 was proposed and discussed during FOOEast, 2010-05-02.[32] Microformats2 was intended to make it easier for authors to publish microformats and for developers to consume them, while remaining backwards compatible.[33] Using microformats2, the geo and hCard examples above would be marked up with h-geo and h-card class names (a hedged sketch follows this passage).
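Below is a minimal, hedged sketch of what the earlier geo and hCard examples might look like in microformats2 syntax, which uses h- prefixed root classes and p-/u- prefixed property classes; the contact details remain invented placeholders.

```html
<!-- microformats2 h-geo -->
<p>The birds roosted at
  <span class="h-geo">
    <span class="p-latitude">52.48</span>,
    <span class="p-longitude">-1.89</span>
  </span>
</p>

<!-- microformats2 h-card (hypothetical details) -->
<div class="h-card">
  <span class="p-name p-org">Example Corp</span>
  <span class="p-tel">+1-555-0100</span>
  <a class="u-url" href="https://example.com/">https://example.com/</a>
</div>
```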
https://en.wikipedia.org/wiki/Microformat
The Facebook Platform is the set of services, tools, and products provided by the social networking service Facebook for third-party developers to create their own applications and services that access data in Facebook.[1]

The current Facebook Platform was launched in 2010.[2] The platform offers a set of programming interfaces and tools which enable developers to integrate with the open "social graph" of personal relations and other things like songs, places, and Facebook pages. Applications on facebook.com, external websites, and devices are all allowed to access the graph.

Facebook launched the Facebook Platform on May 24, 2007, providing a framework for software developers to create applications that interact with core Facebook features.[1][2] A markup language called Facebook Markup Language was introduced simultaneously; it is used to customize the "look and feel" of applications that developers create.

Prior to the Facebook Platform, Facebook had built many applications itself within the Facebook website, including Gifts, allowing users to send virtual gifts to each other; Marketplace, allowing users to post free classified ads; Facebook Events, giving users a method of informing their friends about upcoming events; Video, letting users share homemade videos with one another;[3][4] and social network games, where users can use their connections to friends to help them advance in the games they are playing. The Facebook Platform made it possible for outside partners to build similar applications.[1][2] Many of the popular early social network games would combine capabilities. For instance, one of the early games to reach the top application spot, (Lil) Green Patch, combined virtual gifts with event notifications to friends and contributions to charities through Causes. Third-party companies provide application metrics, and several blogs arose in response to the clamor for Facebook applications.
On July 4, 2007, Altura Ventures announced the "Altura 1 Facebook Investment Fund," becoming the world's first Facebook-only venture capital firm.[5]

On August 29, 2007, Facebook changed the way in which the popularity of applications is measured, to give attention to the more engaging applications, following criticism that ranking applications only by the number of people who had installed them was giving an advantage to highly viral, yet useless, applications.[6] The tech blog Valleywag has criticized Facebook applications, labeling them a "cornucopia of uselessness."[7] Others have called for limiting third-party applications so the Facebook user experience is not degraded.[8][9]

Applications created on the Platform include games such as chess, which allow users to play against their friends.[10] In such games, a user's moves are saved on the website, allowing the next move to be made at any time rather than immediately after the previous move.[11]

By November 3, 2007, seven thousand applications had been developed on the Facebook Platform, with another hundred created every day.[12] By the second annual f8 developers conference on July 23, 2008, the number of applications had grown to 33,000,[13] and the number of registered developers had exceeded 400,000.[14]

Within a few months of launching the Facebook Platform, issues arose regarding "application spam", in which Facebook applications "spam" users with requests to install them.[15]

Facebook integration was announced for the Xbox 360 and Nintendo DSi on June 1, 2009 at E3.[16] On November 18, 2009, Sony announced an integration with Facebook to deliver the first phase of a variety of new features to further connect and enhance the online social experiences of PlayStation 3.[17] On February 2, 2010, Facebook announced the release of HipHop for PHP as an open-source project.[18]

Mark Zuckerberg said that his team at Facebook was developing a Facebook search engine.[19] "Facebook is pretty well placed to respond to people's questions. At some point, we will. We have a team that is working on it," said Zuckerberg. For him, the traditional search engines return too many results that do not necessarily respond to questions: "The search engines really need to evolve a set of answers: 'I have a specific question, answer this question for me.'"

On June 10, 2014, Facebook announced Haxl, a Haskell library that simplifies access to remote data, such as databases or web-based services.[20]

Starting in 2007, Facebook formed data-sharing partnerships with at least 60 handset manufacturers, including Apple, Amazon, BlackBerry, Microsoft and Samsung.[21] Those manufacturers were provided with Facebook user data without the users' consent.[21] Most of the partnerships remained in place as of 2018, when they were first publicly reported.[21]

The Graph API is the core of the Facebook Platform, enabling developers to read data from and write data into Facebook.
The Graph API presents a simple, consistent view of the Facebook social graph, uniformly representing objects in the graph (e.g., people, photos, events, and pages) and the connections between them (e.g., friend relationships, shared content, and photo tags).[22] On April 30, 2015, Facebook shut down the friends' data API prior to the v2.0 release.[23]

Facebook authentication enables developers' applications to interact with the Graph API on behalf of Facebook users, and it provides a single sign-on mechanism across web, mobile, and desktop apps.[24]

Facebook Connect,[25] also called Log in with Facebook, like OpenID, is a set of authentication APIs from Facebook that developers can use to help their users connect and share with those users' Facebook friends (on and off Facebook) and increase engagement for their website or application. When so used, Facebook members can log on to third-party websites, applications, mobile devices and gaming systems with their Facebook identity and, while logged in, can connect with friends via these media and post information and updates to their Facebook profile. Originally unveiled during Facebook's developer conference, F8, in July 2008, Log in with Facebook became generally available in December 2008. According to an article from The New York Times, "Some say the services are representative of surprising new thinking in Silicon Valley. Instead of trying to hoard information about their users, the Internet companies (including Facebook, Google, MySpace and Twitter) all share at least some of that data so people do not have to enter the same identifying information again and again on different sites."[26] Log in with Facebook cannot be used by users in locations that cannot access Facebook, even if the third-party site is otherwise accessible from that location.[27] According to Facebook, users who logged into The Huffington Post with Facebook spent more time on the site than the average user.[28]

Social plugins – including the Like button, Recommendations, and Activity Feed – enable developers to provide social experiences to their users with just a few lines of HTML. All social plugins are extensions of Facebook and are designed so that no user data is shared with the sites on which they appear.[29] On the other hand, the social plugins let Facebook track its users' browsing habits on any sites that feature the plugins.

The Open Graph protocol enables developers to integrate their pages into Facebook's global mapping/tracking tool, the social graph. These pages gain the functionality of other graph objects, including profile links and stream updates for connected users.[30] Open Graph tags in HTML5 might look like the hedged sketch shown after this section.

Facebook uses iframes to allow third-party developers to create applications that are hosted separately from Facebook but operate within a Facebook session and are accessed through a user's profile. Since iframes essentially nest independent websites within a Facebook session, their content is distinct from Facebook formatting.

Facebook originally used Facebook Markup Language (FBML) to allow Facebook application developers to customize the "look and feel" of their applications, to a limited extent. FBML is a specification of how to encode content so that Facebook's servers can read and publish it, which is needed in the Facebook-specific feed so that Facebook's system can properly parse content and publish it as specified.[31] FBML set by any application is cached by Facebook until a subsequent API call replaces it.
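Below is a minimal sketch of the kind of Open Graph meta tags the passage refers to, placed in a page's head element; the title, type, URL, image and description values are invented placeholders.

```html
<head prefix="og: https://ogp.me/ns#">
  <meta property="og:title"       content="Example Article" />
  <meta property="og:type"        content="article" />
  <meta property="og:url"         content="https://www.example.com/article" />
  <meta property="og:image"       content="https://www.example.com/article.jpg" />
  <meta property="og:description" content="A short summary of the page." />
</head>
```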
Facebook also offers a specialized Facebook JavaScript (FBJS) library.[32] Facebook stopped accepting new FBML applications on March 18, 2011,[33] but continued to support existing FBML tabs and applications. Since January 1, 2012, FBML was no longer supported, and FBML no longer functioned as of June 1, 2012.[citation needed]

In February 2011, Facebook began to use the hCalendar microformat to mark up events, and hCard for the events' venues, enabling the extraction of details to users' own calendar or mapping applications.[34]

The UI framework for the mobile website is based on XHP, the Javelin JavaScript library, and WURFL.[35] The mobile platform has grown dramatically in popularity since its launch. In December 2012, the number of users signing into the site from mobile devices exceeded web-based logins for the first time.[36]

Many Facebook application developers have attempted to create viral applications. Stanford University even offered a class in the fall of 2007, Computer Science (CS) 377W: "Create Engaging Web Applications Using Metrics and Learning on Facebook". Numerous applications created by the class were highly successful and ranked amongst the top Facebook applications, with some achieving over 3.5 million users in a month.[37]

In 2011, The Guardian expressed concerns that users publishing content through a third-party provider are exposed to losing their web positioning if that service is removed, and that the Open Graph could force connecting a web presence to Facebook social services even for people using their own publishing channels.[38] In June 2018, The New York Times criticized Facebook's partnerships with device manufacturers, writing that the data available to these manufacturers "raise concerns about the company's privacy protections and compliance with a 2011 consent decree with the Federal Trade Commission."[21]

The Facebook Platform is relatively unknown to the general public, with no notable incidents relating to it, and its privacy policy and terms and conditions are regularly updated.[39]
https://en.wikipedia.org/wiki/Open_Graph_protocol
Hypertext Application Language (HAL) is a convention for defining hypermedia, such as links to external resources, within JSON or XML code. It is documented in an Internet Draft (a "work in progress"); the latest version, draft 11, was published on 10 October 2023. The standard was initially proposed in June 2012, specifically for use with JSON,[1] and has since become available in two variations, JSON and XML. The two associated MIME types are application/hal+xml and application/hal+json.[2]

HAL was created to be simple to use and easily applicable across different domains by avoiding the need to impose any requirements on how a project is structured. By maintaining this minimal-impact approach, HAL has enabled developers to create general-purpose libraries which can be incorporated into any API that uses HAL.[citation needed] APIs that adopt HAL simplify the use of open-source libraries and make it possible to interact with the API using JSON or XML. The alternative would be having to develop a proprietary format, which in turn forces developers to learn how to use yet another foreign format.[3]

HAL is structured around two concepts: resources and links. Resources consist of URI links, embedded resources, standard data (be it JSON or XML), and non-URI links. Links have a target URI and a name (referred to as 'rel'), as well as optional properties designed to be mindful of deprecation and content negotiation.[3] A hedged sketch of a general resource, embedded resources, and a collection follows.
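Below is a minimal sketch of HAL in its JSON variation, assuming a hypothetical /orders API: a resource with a _links section (including a templated link), a collection of embedded order resources under _embedded, and ordinary state data alongside them. The URIs and field names are illustrative, not taken from the specification.

```json
{
  "_links": {
    "self": { "href": "/orders" },
    "next": { "href": "/orders?page=2" },
    "find": { "href": "/orders{?id}", "templated": true }
  },
  "currentlyProcessing": 14,
  "_embedded": {
    "orders": [
      {
        "_links": { "self": { "href": "/orders/123" } },
        "total": 30.00,
        "currency": "USD"
      },
      {
        "_links": { "self": { "href": "/orders/124" } },
        "total": 20.00,
        "currency": "USD"
      }
    ]
  }
}
```

The reserved _links property names each link by its rel ("self", "next", "find"), while _embedded carries full representations of related resources so a client can avoid extra round trips.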
https://en.wikipedia.org/wiki/Hypertext_Application_Language
N-Triples is a format for storing and transmitting data. It is a line-based, plain-text serialisation format for RDF (Resource Description Framework) graphs, and a subset of the Turtle (Terse RDF Triple Language) format.[1][2][3] N-Triples should not be confused with Notation3, which is a superset of Turtle. N-Triples was primarily developed by Dave Beckett at the University of Bristol and Art Barstow at the World Wide Web Consortium (W3C).[4]

N-Triples was designed to be a simpler format than Notation3 and Turtle, and therefore easier for software to parse and generate. However, because it lacks some of the shortcuts provided by other RDF serialisations (such as CURIEs and nested resources, which are provided by both RDF/XML and Turtle), it can be onerous to type out large amounts of data by hand, and difficult to read. There is very little variation in how an RDF graph can be represented in N-Triples. This makes it a very convenient format to provide "model answers" for RDF test suites.[3] As N-Triples is a subset of Turtle and Notation3, by definition all tools which support input in either of those formats will support N-Triples. In addition, some tools, like Cwm, have specific support for N-Triples.

Each line of the file has either the form of a comment or of a statement. A statement consists of four parts, separated by whitespace: a subject, a predicate, an object, and a full stop terminating the statement. Subjects may take the form of a URI or a blank node; predicates must be a URI; objects may be a URI, a blank node or a literal. URIs are delimited with less-than and greater-than signs used as angle brackets. Blank nodes are represented by an alphanumeric string prefixed with an underscore and colon (_:). Literals are represented as printable ASCII strings (with backslash escapes),[5] delimited with double-quote characters, and optionally suffixed with a language or datatype indicator. Language indicators are an at sign followed by an RFC 3066 language tag; datatype indicators are a double caret followed by a URI. Comments consist of a line beginning with a hash sign.

A hedged sketch of a few N-Triples statements follows this passage. (In printed examples, a symbol such as ↵ is sometimes used to indicate a place where a line has been wrapped for legibility; N-Triples does not allow lines to be wrapped arbitrarily, as the line ending indicates the end of a statement.)

The related N-Quads superset extends N-Triples with an optional context value at the fourth position.[6][7][8]
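Below is a minimal sketch of N-Triples statements describing a hypothetical resource (the subject URI, title, author name and page count are invented placeholders); each line is one complete triple terminated by a full stop, and the lines illustrate a language-tagged literal, a blank node, and a datatyped literal.

```ntriples
# A comment line begins with a hash sign.
<http://example.org/book/book1> <http://purl.org/dc/elements/1.1/title> "An Example Book"@en .
<http://example.org/book/book1> <http://purl.org/dc/elements/1.1/creator> _:author1 .
_:author1 <http://xmlns.com/foaf/0.1/name> "Jane Doe" .
<http://example.org/book/book1> <http://example.org/terms/pages> "224"^^<http://www.w3.org/2001/XMLSchema#integer> .
```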
https://en.wikipedia.org/wiki/N-Triples
In computing, Terse RDF Triple Language (Turtle) is a syntax and file format for expressing data in the Resource Description Framework (RDF) data model. Turtle syntax is similar to that of SPARQL, an RDF query language. It is a common data format for storing RDF data, along with N-Triples, JSON-LD and RDF/XML.

RDF represents information using semantic triples, which comprise a subject, predicate, and object. Each item in the triple is expressed as a Web URI. Turtle provides a way to group three URIs to make a triple, and provides ways to abbreviate such information, for example by factoring out common portions of URIs. Information about Huckleberry Finn, for instance, can be expressed as a triple linking the book's URI to its creator (a sketch appears below).

Turtle was defined by Dave Beckett as a subset of Tim Berners-Lee and Dan Connolly's Notation3 (N3) language, and a superset of the minimal N-Triples format. Unlike full N3, which has an expressive power that goes much beyond RDF, Turtle can only serialize valid RDF graphs. Turtle is an alternative to RDF/XML, the original syntax and standard for writing RDF. As opposed to RDF/XML, Turtle does not rely on XML and is generally recognized as being more readable and easier to edit manually than its XML counterpart. SPARQL, the query language for RDF, uses a syntax similar to Turtle for expressing query patterns.

In 2011, a working group of the World Wide Web Consortium (W3C) started working on an updated version of RDF, with the intention of publishing it along with a standardised version of Turtle. This Turtle specification was published as a W3C Recommendation on 25 February 2014.[1] A significant proportion of RDF toolkits include Turtle parsing and serializing capability. Some examples of such toolkits are Redland, RDF4J, Jena, Python's RDFLib and JavaScript's N3.js.

A typical introductory example (sketched after this section) defines three prefixes ("rdf", "dc", and "ex") and uses them to express a statement about the editorship of the RDF/XML document; Turtle examples are also valid Notation3. That example encodes an RDF graph made of four triples, which can also be made explicit in N-Triples notation.

The MIME type of Turtle is text/turtle. The character encoding of Turtle content is always UTF-8.[2] The TriG RDF syntax extends Turtle with support for named graphs.
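Below is a hedged sketch consistent with the description above, modelled on the well-known example from the Turtle specification: a short triple about Huckleberry Finn, then a graph that defines the rdf, dc and ex prefixes and states the title and editor of the RDF/XML specification document, followed by the same four triples spelled out in N-Triples. Treat the exact prefixes and URIs as illustrative.

```turtle
@prefix dcterms: <http://purl.org/dc/terms/> .
<https://en.wikipedia.org/wiki/Adventures_of_Huckleberry_Finn> dcterms:creator "Mark Twain" .

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dc:  <http://purl.org/dc/elements/1.1/> .
@prefix ex:  <http://example.org/stuff/1.0/> .

<http://www.w3.org/TR/rdf-syntax-grammar>
    dc:title "RDF/XML Syntax Specification (Revised)" ;
    ex:editor [
        ex:fullname "Dave Beckett" ;
        ex:homePage <http://purl.org/net/dajobe/>
    ] .
```

The same four triples in N-Triples notation, with the blank node made explicit:

```ntriples
<http://www.w3.org/TR/rdf-syntax-grammar> <http://purl.org/dc/elements/1.1/title> "RDF/XML Syntax Specification (Revised)" .
<http://www.w3.org/TR/rdf-syntax-grammar> <http://example.org/stuff/1.0/editor> _:b1 .
_:b1 <http://example.org/stuff/1.0/fullname> "Dave Beckett" .
_:b1 <http://example.org/stuff/1.0/homePage> <http://purl.org/net/dajobe/> .
```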
https://en.wikipedia.org/wiki/Turtle_(syntax)
A relational database (RDB[1]) is a database based on the relational model of data, as proposed by E. F. Codd in 1970.[2] A relational database management system (RDBMS) is a type of database management system that stores data in a structured format using rows and columns. Many relational database systems are equipped with the option of using SQL (Structured Query Language) for querying and updating the database.[3]

The concept of the relational database was defined by E. F. Codd at IBM in 1970. Codd introduced the term relational in his research paper "A Relational Model of Data for Large Shared Data Banks".[2] In this paper and later papers, he defined what he meant by relation. One well-known definition of what constitutes a relational database system is composed of Codd's 12 rules. However, no commercial implementations of the relational model conform to all of Codd's rules,[4] so the term has gradually come to describe a broader class of database systems, which at a minimum present the data to the user as relations (tables of rows and columns) and provide relational operators to manipulate the data in tabular form.

In 1974, IBM began developing System R, a research project to develop a prototype RDBMS.[5][6] The first system sold as an RDBMS was Multics Relational Data Store (June 1976).[7][8][citation needed] Oracle was released in 1979 by Relational Software, now Oracle Corporation.[9] Ingres and IBM BS12 followed. Other examples of an RDBMS include IBM Db2, SAP Sybase ASE, and Informix. In 1984, the first RDBMS for the Macintosh began being developed, code-named Silver Surfer; it was released in 1987 as 4th Dimension and is known today as 4D.[10] The first systems that were relatively faithful implementations of the relational model came from university and industrial research projects.

The most common definition of an RDBMS is a product that presents a view of data as a collection of rows and columns, even if it is not based strictly upon relational theory. By this definition, RDBMS products typically implement some but not all of Codd's 12 rules. A second school of thought argues that if a database does not implement all of Codd's rules (or the current understanding of the relational model, as expressed by Christopher J. Date, Hugh Darwen and others), it is not relational. This view, shared by many theorists and other strict adherents to Codd's principles, would disqualify most DBMSs as not relational. For clarification, they often refer to some RDBMSs as truly-relational database management systems (TRDBMS), naming others pseudo-relational database management systems (PRDBMS).[citation needed]

As of 2009, most commercial relational DBMSs employ SQL as their query language.[15] Alternative query languages have been proposed and implemented, notably the pre-1996 implementation of Ingres QUEL.

A relational model organizes data into one or more tables (or "relations") of columns and rows, with a unique key identifying each row. Rows are also called records or tuples,[16] and columns are also called attributes. Generally, each table/relation represents one "entity type" (such as customer or product). The rows represent instances of that type of entity (such as "Lee" or "chair") and the columns represent values attributed to that instance (such as address or price). For example, each row of a class table corresponds to a class, and a class corresponds to multiple students, so the relationship between the class table and the student table is "one to many".[17] Each row in a table has its own unique key. Rows in a table can be linked to rows in other tables by adding a column for the unique key of the linked row (such columns are known as foreign keys); a SQL sketch of two such tables follows this passage.
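Below is a minimal SQL sketch of the linkage just described, using hypothetical class and student tables in which student.class_id is a foreign key referencing the class table's primary key; the table and column names are invented for illustration.

```sql
-- One class has many students: student.class_id refers to class.class_id.
CREATE TABLE class (
    class_id INTEGER PRIMARY KEY,
    title    VARCHAR(100) NOT NULL
);

CREATE TABLE student (
    student_id INTEGER PRIMARY KEY,
    name       VARCHAR(100) NOT NULL,
    class_id   INTEGER REFERENCES class (class_id)  -- foreign key to the linked row
);
```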
Codd showed that data relationships of arbitrary complexity can be represented by a simple set of concepts.[2] Part of this processing involves consistently being able to select or modify one and only one row in a table. Therefore, most physical implementations have a unique primary key (PK) for each row in a table. When a new row is written to the table, a new unique value for the primary key is generated; this is the key that the system uses primarily for accessing the table. System performance is optimized for PKs. Other, more natural keys may also be identified and defined as alternate keys (AK). Often several columns are needed to form an AK (this is one reason why a single integer column is usually made the PK). Both PKs and AKs have the ability to uniquely identify a row within a table. Additional technology may be applied to ensure a unique ID across the world, a globally unique identifier, when there are broader system requirements.

The primary keys within a database are used to define the relationships among the tables. When a PK migrates to another table, it becomes a foreign key (FK) in the other table. When each cell can contain only one value and the PK migrates into a regular entity table, this design pattern can represent either a one-to-one or one-to-many relationship. Most relational database designs resolve many-to-many relationships by creating an additional table that contains the PKs from both of the other entity tables – the relationship becomes an entity; the resolution table is then named appropriately and the two FKs are combined to form a PK (a SQL sketch of such a resolution table follows this passage). The migration of PKs to other tables is the second major reason why system-assigned integers are normally used as PKs; there is usually neither efficiency nor clarity in migrating a bunch of other types of columns.

Relationships are a logical connection between different tables (entities), established on the basis of interaction among these tables. These relationships can be modelled as an entity–relationship model. In order for a database management system (DBMS) to operate efficiently and accurately, it must use ACID transactions.[18][19][20]

Part of the programming within an RDBMS is accomplished using stored procedures (SPs). Often procedures can be used to greatly reduce the amount of information transferred within and outside of a system. For increased security, the system design may grant access only to the stored procedures and not directly to the tables. Fundamental stored procedures contain the logic needed to insert new and update existing data. More complex procedures may be written to implement additional rules and logic related to processing or selecting the data.

The relational database was first defined in June 1970 by Edgar Codd, of IBM's San Jose Research Laboratory.[2] Codd's view of what qualifies as an RDBMS is summarized in Codd's 12 rules. The relational database has become the predominant type of database. Other models besides the relational model include the hierarchical database model and the network model. Some of the most important relational database terms correspond directly to SQL terms: a relation corresponds to a table, a tuple to a row, and an attribute to a column.

In a relational database, a relation is a set of tuples that have the same attributes. A tuple usually represents an object and information about that object. Objects are typically physical objects or concepts. A relation is usually described as a table, which is organized into rows and columns. All the data referenced by an attribute are in the same domain and conform to the same constraints.
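Below is a minimal SQL sketch of the many-to-many resolution pattern described above, using a hypothetical course table and the student table from the earlier sketch, joined through an enrollment table whose primary key combines the two foreign keys; the names are invented for illustration.

```sql
CREATE TABLE course (
    course_id INTEGER PRIMARY KEY,
    title     VARCHAR(100) NOT NULL
);

-- Resolution (junction) table for the many-to-many relationship
-- between student (from the earlier sketch) and course.
CREATE TABLE enrollment (
    student_id INTEGER NOT NULL REFERENCES student (student_id),
    course_id  INTEGER NOT NULL REFERENCES course (course_id),
    PRIMARY KEY (student_id, course_id)   -- the two FKs together form the PK
);
```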
The relational model specifies that the tuples of a relation have no specific order and that the tuples, in turn, impose no order on the attributes. Applications access data by specifying queries, which use operations such as select to identify tuples, project to identify attributes, and join to combine relations. Relations can be modified using the insert, delete, and update operators. New tuples can supply explicit values or be derived from a query. Similarly, queries identify tuples for updating or deleting.

Tuples are by definition unique. If a tuple contains a candidate or primary key then obviously it is unique; however, a primary key need not be defined for a row or record to be a tuple. The definition of a tuple requires that it be unique, but does not require a primary key to be defined. Because a tuple is unique, its attributes by definition constitute a superkey.

All data are stored and accessed via relations. Relations that store data are called "base relations", and in implementations are called "tables". Other relations do not store data, but are computed by applying relational operations to other relations. These relations are sometimes called "derived relations"; in implementations they are called "views" or "queries". Derived relations are convenient in that they act as a single relation, even though they may grab information from several relations. Also, derived relations can be used as an abstraction layer.

A domain describes the set of possible values for a given attribute, and can be considered a constraint on the value of the attribute. Mathematically, attaching a domain to an attribute means that any value for the attribute must be an element of the specified set. The character string "ABC", for instance, is not in the integer domain, but the integer value 123 is. Another example of a domain describes the possible values for the field "CoinFace" as ("Heads", "Tails"). So, the field "CoinFace" will not accept input values like (0, 1) or (H, T).

Constraints are often used to make it possible to further restrict the domain of an attribute. For instance, a constraint can restrict a given integer attribute to values between 1 and 10. Constraints provide one method of implementing business rules in the database and support subsequent data use within the application layer. SQL implements constraint functionality in the form of check constraints (a sketch follows this passage). Constraints restrict the data that can be stored in relations. They are usually defined using expressions that result in a Boolean value, indicating whether or not the data satisfies the constraint. Constraints can apply to single attributes, to a tuple (restricting combinations of attributes) or to an entire relation. Since every attribute has an associated domain, there are constraints (domain constraints). The two principal rules for the relational model are known as entity integrity and referential integrity.

Every relation/table has a primary key, this being a consequence of a relation being a set.[21] A primary key uniquely specifies a tuple within a table. While natural attributes (attributes used to describe the data being entered) are sometimes good primary keys, surrogate keys are often used instead. A surrogate key is an artificial attribute assigned to an object which uniquely identifies it (for instance, in a table of information about students at a school they might all be assigned a student ID in order to differentiate them). The surrogate key has no intrinsic (inherent) meaning, but rather is useful through its ability to uniquely identify a tuple.
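Below is a minimal SQL sketch of check constraints implementing the two restrictions mentioned above: an integer attribute limited to values between 1 and 10, and a CoinFace column limited to the domain ('Heads', 'Tails'); the table and column names are invented for illustration.

```sql
CREATE TABLE coin_toss (
    toss_id   INTEGER PRIMARY KEY,
    round_no  INTEGER CHECK (round_no BETWEEN 1 AND 10),          -- integer restricted to 1..10
    coin_face VARCHAR(5) CHECK (coin_face IN ('Heads', 'Tails'))  -- allowed domain of values
);
```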
Another common occurrence, especially in regard to N:M cardinality, is the composite key. A composite key is a key made up of two or more attributes within a table that (together) uniquely identify a record.[22]

A foreign key is a field in a relational table that matches the primary key column of another table; it relates the two keys. Foreign keys need not have unique values in the referencing relation. A foreign key can be used to cross-reference tables, and it effectively uses the values of attributes in the referenced relation to restrict the domain of one or more attributes in the referencing relation. The concept is described formally as: "For all tuples in the referencing relation projected over the referencing attributes, there must exist a tuple in the referenced relation projected over those same attributes such that the values in each of the referencing attributes match the corresponding values in the referenced attributes."

A stored procedure is executable code that is associated with, and generally stored in, the database. Stored procedures usually collect and customize common operations, like inserting a tuple into a relation, gathering statistical information about usage patterns, or encapsulating complex business logic and calculations. Frequently they are used as an application programming interface (API) for security or simplicity. Implementations of stored procedures on SQL RDBMSs often allow developers to take advantage of procedural extensions (often vendor-specific) to the standard declarative SQL syntax. Stored procedures are not part of the relational database model, but all commercial implementations include them.

An index is one way of providing quicker access to data. Indices can be created on any combination of attributes of a relation. Queries that filter using those attributes can find matching tuples directly using the index (similar to a hash table lookup), without having to check each tuple in turn. This is analogous to using the index of a book to go directly to the page on which the information you are looking for is found, so that you do not have to read the entire book to find what you are looking for. Relational databases typically supply multiple indexing techniques, each of which is optimal for some combination of data distribution, relation size, and typical access pattern. Indices are usually implemented via B+ trees, R-trees, and bitmaps. Indices are usually not considered part of the database, as they are considered an implementation detail, though they are usually maintained by the same group that maintains the other parts of the database. The use of efficient indexes on both primary and foreign keys can dramatically improve query performance, because B-tree indexes give query times proportional to log(n), where n is the number of rows in a table, and hash indexes give constant-time lookups (with no size dependency, as long as the relevant part of the index fits into memory). A short SQL sketch of index creation follows this passage.

Queries made against the relational database, and the derived relvars in the database, are expressed in a relational calculus or a relational algebra. In his original relational algebra, Codd introduced eight relational operators, in two groups of four operators each.
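Below is a minimal SQL sketch of index creation on the hypothetical student table from the earlier sketches: an index on a foreign-key column so that filters and joins on that column need not scan every row.

```sql
-- Index the foreign-key column used for joins and lookups.
CREATE INDEX idx_student_class_id ON student (class_id);

-- A query that can use the index instead of scanning the whole table.
SELECT name
FROM student
WHERE class_id = 42;
```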
The first four operators were based on the traditional mathematical set operations (union, intersection, difference, and Cartesian product), while the remaining operators proposed by Codd involve special operations specific to relational databases (selection, projection, join, and relational division). Other operators have been introduced or proposed since Codd's introduction of the original eight, including relational comparison operators and extensions that offer support for nesting and hierarchical data, among others. A SQL sketch of how a few of these operators surface in queries follows this passage.

Normalization was first proposed by Codd as an integral part of the relational model. It encompasses a set of procedures designed to eliminate non-simple domains (non-atomic values) and the redundancy (duplication) of data, which in turn prevents data manipulation anomalies and loss of data integrity. The most common forms of normalization applied to databases are called the normal forms.

Connolly and Begg define a database management system (DBMS) as a "software system that enables users to define, create, maintain and control access to the database".[23] RDBMS is an extension of that initialism that is sometimes used when the underlying database is relational. An alternative definition for a relational database management system is a database management system (DBMS) based on the relational model. Most databases in widespread use today are based on this model.[24]

RDBMSs have been a common option for the storage of information in databases used for financial records, manufacturing and logistical information, personnel data, and other applications since the 1980s. Relational databases have often replaced legacy hierarchical databases and network databases, because RDBMSs were easier to implement and administer. Nonetheless, relationally stored data received continued, unsuccessful challenges from object database management systems in the 1980s and 1990s (which were introduced in an attempt to address the so-called object–relational impedance mismatch between relational databases and object-oriented application programs), as well as from XML database management systems in the 1990s.[25] However, due to the expansion of technologies such as horizontal scaling of computer clusters, NoSQL databases have recently become popular as an alternative to RDBMS databases.[26]

Distributed Relational Database Architecture (DRDA) was designed by a workgroup within IBM in the period 1988 to 1994. DRDA enables network-connected relational databases to cooperate to fulfill SQL requests.[27][28] The messages, protocols, and structural components of DRDA are defined by the Distributed Data Management Architecture.

According to DB-Engines, in December 2024 the most popular systems on the db-engines.com web site were:[29]

According to research company Gartner, in 2011 the five leading proprietary software relational database vendors by revenue were Oracle (48.8%), IBM (20.2%), Microsoft (17.0%), SAP including Sybase (4.6%), and Teradata (3.7%).[30]
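Below is a minimal SQL sketch of how a few of these operators appear in SQL, using the hypothetical tables from the earlier sketches: UNION corresponds to set union, the WHERE clause to selection, the column list to projection, and JOIN to the relational join.

```sql
-- Set union: students in class 1 or class 2 (duplicates removed, as in set union).
SELECT name FROM student WHERE class_id = 1
UNION
SELECT name FROM student WHERE class_id = 2;

-- Selection (WHERE), projection (the column list) and join:
-- names of students enrolled in course 42.
SELECT s.name
FROM student AS s
JOIN enrollment AS e ON e.student_id = s.student_id
WHERE e.course_id = 42;
```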
https://en.wikipedia.org/wiki/Relational_database_management_system
This is a comparison of object–relational database management systems (ORDBMSs). Each system has at least some features of an object–relational database; they vary widely in their completeness and the approaches taken. The following tables compare general and technical information; please see the individual products' articles for further information. Unless otherwise specified in footnotes, comparisons are based on the stable versions, without any add-ons, extensions or external programs. One table covers which fundamental ORDBMS features are implemented natively, and another covers which data types are implemented natively.
https://en.wikipedia.org/wiki/Comparison_of_object%E2%80%93relational_database_management_systems
The following tables compare general and technical information for a number of available database administration tools. Please see individual product articles for further information. This article is neither all-inclusive nor necessarily up to date. Systems listed on a light purple background are no longer in active development.
https://en.wikipedia.org/wiki/Comparison_of_database_administration_tools
Business System 12, or simply BS12, was one of the first fully relational database management systems, designed and implemented by IBM's Bureau Service subsidiary at the company's international development centre in Uithoorn, Netherlands. Programming started in 1978 and the first version was delivered in 1982. It was never widely used and essentially disappeared soon after the division was shut down in 1985, possibly because IBM and other companies settled on SQL as the standard.

BS12's lasting contribution to history was the use of a new query language based on ISBL, created at IBM's UK Scientific Centre. Developers of the famous System R project, under way in the US at the same time, were also consulted on certain matters concerning the engine, but the BS12 team rejected SQL unequivocally, being convinced that this apparently unsound and difficult-to-use language (which at that time was also relationally incomplete) would never catch on.

BS12 included a number of interesting features that have yet to appear on most SQL-based systems, some a consequence of following the ISBL precedent, others due to deliberate design. For instance, a view could be parameterised, and parameters could be of type TABLE. Thus, a view could in effect be a new relational operator defined in terms of the existing operators. Codd's DIVIDE operator was in fact implemented that way. Another feature that could easily have been included in SQL systems was the support for update operations on the catalog tables (system tables describing the structure of the database, as in SQL). A new table could be created by inserting a row into the TABLES catalog, and columns could then be added to it by inserting into COLUMNS. In addition, BS12 was ahead of SQL in supporting user-defined functions and procedures, using a Turing-complete sublanguage, triggers, and a simple "call" interface for use by application programs, all in its very first release in 1982.

The standard sample query determines which departments are over their salary budgets,[1] using a "natural join" on the common column, DEPTNUM. Although some SQL dialects support natural joins, for familiarity the equivalent SQL is usually shown with a "traditional" join; a hedged SQL sketch of that comparison query follows this section.
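Since the original BS12 listing is not reproduced here, the following is only a hedged SQL sketch of the query the article describes: it assumes hypothetical DEPT (DEPTNUM, BUDGET) and EMP (DEPTNUM, SALARY) tables and reports the departments whose total salaries exceed their budget, joining on the shared DEPTNUM column.

```sql
SELECT d.DEPTNUM
FROM   DEPT AS d
JOIN   EMP  AS e ON e.DEPTNUM = d.DEPTNUM   -- "traditional" join on the common column
GROUP  BY d.DEPTNUM, d.BUDGET
HAVING SUM(e.SALARY) > d.BUDGET;            -- departments over their salary budget
```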
https://en.wikipedia.org/wiki/IBM_Business_System_12
Arelational database(RDB[1]) is adatabasebased on therelational modelof data, as proposed byE. F. Coddin 1970.[2] A RelationalDatabase Management System(RDBMS) is a type of database management system that stores data in a structuredformatusingrowsandcolumns. Many relational database systems are equipped with the option of usingSQL(Structured Query Language) for querying and updating the database.[3] The concept of relational database was defined byE. F. CoddatIBMin 1970. Codd introduced the termrelationalin his research paper "A Relational Model of Data for Large Shared Data Banks".[2]In this paper and later papers, he defined what he meant byrelation. One well-known definition of what constitutes a relational database system is composed ofCodd's 12 rules. However, no commercial implementations of the relational model conform to all of Codd's rules,[4]so the term has gradually come to describe a broader class of database systems, which at a minimum: In 1974, IBM began developingSystem R, a research project to develop a prototype RDBMS.[5][6]The first system sold as an RDBMS wasMultics Relational Data Store(June 1976).[7][8][citation needed]Oraclewas released in 1979 by Relational Software, nowOracle Corporation.[9]IngresandIBM BS12followed. Other examples of an RDBMS includeIBM Db2,SAP Sybase ASE, andInformix. In 1984, the first RDBMS forMacintoshbegan being developed, code-named Silver Surfer, and was released in 1987 as4th Dimensionand known today as 4D.[10] The first systems that were relatively faithful implementations of the relational model were from: The most common definition of an RDBMS is a product that presents a view of data as a collection of rows and columns, even if it is not based strictly uponrelational theory. By this definition, RDBMS products typically implement some but not all of Codd's 12 rules. A second school of thought argues that if a database does not implement all of Codd's rules (or the current understanding on the relational model, as expressed byChristopher J. Date,Hugh Darwenand others), it is not relational. This view, shared by many theorists and other strict adherents to Codd's principles, would disqualify most DBMSs as not relational. For clarification, they often refer to some RDBMSs astruly-relational database management systems(TRDBMS), naming otherspseudo-relational database management systems(PRDBMS).[citation needed] As of 2009, most commercial relational DBMSs employSQLas theirquery language.[15] Alternative query languages have been proposed and implemented, notably the pre-1996 implementation ofIngres QUEL. A relational model organizes data into one or moretables(or "relations") ofcolumnsandrows, with aunique keyidentifying each row. Rows are also calledrecordsortuples.[16]Columns are also called attributes. Generally, each table/relation represents one "entity type" (such as customer or product). The rows represent instances of that type ofentity(such as "Lee" or "chair") and the columns represent values attributed to that instance (such as address or price). For example, each row of a class table corresponds to a class, and a class corresponds to multiple students, so the relationship between the class table and the student table is "one to many"[17] Each row in a table has its own unique key. Rows in a table can be linked to rows in other tables by adding a column for the unique key of the linked row (such columns are known asforeign keys). 
Codd showed that data relationships of arbitrary complexity can be represented by a simple set of concepts.[2] Part of this processing involves consistently being able to select or modify one and only one row in a table. Therefore, most physical implementations have a uniqueprimary key(PK) for each row in a table. When a new row is written to the table, a new unique value for the primary key is generated; this is the key that the system uses primarily for accessing the table. System performance is optimized for PKs. Other, morenatural keysmay also be identified and defined asalternate keys(AK). Often several columns are needed to form an AK (this is one reason why a single integer column is usually made the PK). Both PKs and AKs have the ability to uniquely identify a row within a table. Additional technology may be applied to ensure a unique ID across the world, aglobally unique identifier, when there are broader system requirements. The primary keys within a database are used to define the relationships among the tables. When a PK migrates to another table, it becomes a foreign key (FK) in the other table. When each cell can contain only one value and the PK migrates into a regular entity table, this design pattern can represent either aone-to-oneorone-to-manyrelationship. Most relational database designs resolvemany-to-manyrelationships by creating an additional table that contains the PKs from both of the other entity tables – the relationship becomes an entity; the resolution table is then named appropriately and the two FKs are combined to form a PK. The migration of PKs to other tables is the second major reason why system-assigned integers are used normally as PKs; there is usually neither efficiency nor clarity in migrating a bunch of other types of columns. Relationships are a logical connection between different tables (entities), established on the basis of interaction among these tables. These relationships can be modelled as anentity-relationship model. In order for a database management system (DBMS) to operate efficiently and accurately, it must useACID transactions.[18][19][20] Part of the programming within a RDBMS is accomplished usingstored procedures(SPs). Often procedures can be used to greatly reduce the amount of information transferred within and outside of a system. For increased security, the system design may grant access to only the stored procedures and not directly to the tables. Fundamental stored procedures contain the logic needed to insert new and update existing data. More complex procedures may be written to implement additional rules and logic related to processing or selecting the data. The relational database was first defined in June 1970 byEdgar Codd, of IBM'sSan Jose Research Laboratory.[2]Codd's view of what qualifies as an RDBMS is summarized inCodd's 12 rules. A relational database has become the predominant type of database. Other models besides therelational modelinclude thehierarchical database modeland thenetwork model. The table below summarizes some of the most important relational database terms and the correspondingSQLterm: In a relational database, arelationis a set oftuplesthat have the sameattributes. A tuple usually represents an object and information about that object. Objects are typically physical objects or concepts. A relation is usually described as atable, which is organized intorowsandcolumns. All the data referenced by an attribute are in the samedomainand conform to the same constraints. 
The relational model specifies that the tuples of a relation have no specific order and that the tuples, in turn, impose no order on the attributes. Applications access data by specifying queries, which use operations such asselectto identify tuples,projectto identify attributes, andjointo combine relations. Relations can be modified using theinsert,delete, andupdateoperators. New tuples can supply explicit values or be derived from a query. Similarly, queries identify tuples for updating or deleting. Tuples by definition are unique. If the tuple contains acandidateor primary key then obviously it is unique; however, a primary key need not be defined for a row or record to be a tuple. The definition of a tuple requires that it be unique, but does not require a primary key to be defined. Because a tuple is unique, its attributes by definition constitute asuperkey. All data are stored and accessed viarelations. Relations that store data are called "base relations", and in implementations are called "tables". Other relations do not store data, but are computed by applying relational operations to other relations. These relations are sometimes called "derived relations". In implementations these are called "views" or "queries". Derived relations are convenient in that they act as a single relation, even though they may grab information from several relations. Also, derived relations can be used as anabstraction layer. A domain describes the set of possible values for a given attribute, and can be considered a constraint on the value of the attribute. Mathematically, attaching a domain to an attribute means that any value for the attribute must be an element of the specified set. The character string"ABC", for instance, is not in the integer domain, but the integer value123is. Another example of domain describes the possible values for the field "CoinFace" as ("Heads","Tails"). So, the field "CoinFace" will not accept input values like (0,1) or (H,T). Constraints are often used to make it possible to further restrict the domain of an attribute. For instance, a constraint can restrict a given integer attribute to values between 1 and 10. Constraints provide one method of implementingbusiness rulesin the database and support subsequent data use within the application layer. SQL implements constraint functionality in the form ofcheck constraints. Constraints restrict the data that can be stored inrelations. These are usually defined using expressions that result in aBooleanvalue, indicating whether or not the data satisfies the constraint. Constraints can apply to single attributes, to a tuple (restricting combinations of attributes) or to an entire relation. Since every attribute has an associated domain, there are constraints (domain constraints). The two principal rules for the relational model are known asentity integrityandreferential integrity. Everyrelation/table has a primary key, this being a consequence of a relation being aset.[21]A primary key uniquely specifies a tuple within a table. While natural attributes (attributes used to describe the data being entered) are sometimes good primary keys,surrogate keysare often used instead. A surrogate key is an artificial attribute assigned to an object which uniquely identifies it (for instance, in a table of information about students at a school they might all be assigned a student ID in order to differentiate them). The surrogate key has no intrinsic (inherent) meaning, but rather is useful through its ability to uniquely identify a tuple. 
Another common occurrence, especially in regard to N:M cardinality is thecomposite key. A composite key is a key made up of two or more attributes within a table that (together) uniquely identify a record.[22] Foreign key refers to a field in a relational table that matches the primary key column of another table. It relates the two keys. Foreign keys need not have unique values in the referencing relation. A foreign key can be used tocross-referencetables, and it effectively uses the values of attributes in the referenced relation to restrict the domain of one or more attributes in the referencing relation. The concept is described formally as: "For all tuples in the referencing relation projected over the referencing attributes, there must exist a tuple in the referenced relation projected over those same attributes such that the values in each of the referencing attributes match the corresponding values in the referenced attributes." A stored procedure is executable code that is associated with, and generally stored in, the database. Stored procedures usually collect and customize common operations, like inserting atupleinto arelation, gathering statistical information about usage patterns, or encapsulating complexbusiness logicand calculations. Frequently they are used as anapplication programming interface(API) for security or simplicity. Implementations of stored procedures on SQL RDBMS's often allow developers to take advantage ofproceduralextensions (often vendor-specific) to the standarddeclarativeSQL syntax. Stored procedures are not part of the relational database model, but all commercial implementations include them. An index is one way of providing quicker access to data. Indices can be created on any combination of attributes on arelation. Queries that filter using those attributes can find matching tuples directly using the index (similar toHash tablelookup), without having to check each tuple in turn. This is analogous to using theindex of a bookto go directly to the page on which the information you are looking for is found, so that you do not have to read the entire book to find what you are looking for. Relational databases typically supply multiple indexing techniques, each of which is optimal for some combination of data distribution, relation size, and typical access pattern. Indices are usually implemented viaB+ trees,R-trees, andbitmaps. Indices are usually not considered part of the database, as they are considered an implementation detail, though indices are usually maintained by the same group that maintains the other parts of the database. The use of efficient indexes on both primary and foreign keys can dramatically improve query performance. This is because B-tree indexes result in query times proportional to log(n) where n is the number of rows in a table and hash indexes result in constant time queries (no size dependency as long as the relevant part of the index fits into memory). Queries made against the relational database, and the derivedrelvarsin the database are expressed in arelational calculusor arelational algebra. In his original relational algebra, Codd introduced eight relational operators in two groups of four operators each. 
The first four operators were based on the traditional mathematical set operations: union, intersection, difference, and Cartesian product. The remaining operators proposed by Codd involve special operations specific to relational databases: selection (restriction), projection, join, and division. Other operators have been introduced or proposed since Codd's introduction of the original eight, including relational comparison operators and extensions that offer support for nesting and hierarchical data, among others.

Normalization was first proposed by Codd as an integral part of the relational model. It encompasses a set of procedures designed to eliminate non-simple domains (non-atomic values) and the redundancy (duplication) of data, which in turn prevents data manipulation anomalies and loss of data integrity. The most common forms of normalization applied to databases are called the normal forms.

Connolly and Begg define a database management system (DBMS) as a "software system that enables users to define, create, maintain and control access to the database".[23] RDBMS is an extension of that initialism that is sometimes used when the underlying database is relational. An alternative definition of a relational database management system is a database management system (DBMS) based on the relational model. Most databases in widespread use today are based on this model.[24]

RDBMSs have been a common option for the storage of information in databases used for financial records, manufacturing and logistical information, personnel data, and other applications since the 1980s. Relational databases have often replaced legacy hierarchical databases and network databases, because RDBMSs were easier to implement and administer. Nonetheless, relationally stored data faced continued, ultimately unsuccessful, challenges from object database management systems in the 1980s and 1990s (which were introduced in an attempt to address the so-called object–relational impedance mismatch between relational databases and object-oriented application programs), as well as from XML database management systems in the 1990s.[25] However, with the spread of technologies such as horizontal scaling across computer clusters, NoSQL databases have more recently become popular as an alternative to RDBMSs.[26]

Distributed Relational Database Architecture (DRDA) was designed by a workgroup within IBM in the period 1988 to 1994. DRDA enables network-connected relational databases to cooperate to fulfill SQL requests.[27][28] The messages, protocols, and structural components of DRDA are defined by the Distributed Data Management Architecture.

DB-Engines publishes a ranking of the most popular database systems on the db-engines.com web site; the ranking cited here is from December 2024.[29] According to research company Gartner, in 2011 the five leading proprietary software relational database vendors by revenue were Oracle (48.8%), IBM (20.2%), Microsoft (17.0%), SAP including Sybase (4.6%), and Teradata (3.7%).[30]
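Bringing the pieces of this section together, the sketch below shows how a composite key, foreign keys, an index, and a few of Codd's operators typically appear in SQL. It reuses the hypothetical Student table from the earlier sketch; all other names are likewise invented, and relational division has no single SQL keyword (it is usually rewritten with nested NOT EXISTS subqueries).

    CREATE TABLE Course (
        course_id  INTEGER PRIMARY KEY,
        title      VARCHAR(200) NOT NULL
    );

    -- An N:M relationship between Student and Course, resolved by an association table.
    CREATE TABLE Enrolment (
        student_id  INTEGER NOT NULL REFERENCES Student (student_id),  -- foreign key
        course_id   INTEGER NOT NULL REFERENCES Course  (course_id),   -- foreign key
        enrolled_on DATE    NOT NULL,
        PRIMARY KEY (student_id, course_id)      -- composite key: the pair must be unique
    );

    -- An index on the foreign key speeds up queries that filter or join on course_id.
    CREATE INDEX idx_enrolment_course ON Enrolment (course_id);

    -- Set operations (operands must have compatible headings):
    SELECT student_id FROM Student
    UNION                                        -- likewise INTERSECT and EXCEPT
    SELECT student_id FROM Enrolment;

    -- Selection (restriction), projection and join expressed as a query:
    SELECT s.family_name, c.title                        -- project
    FROM   Student   s
    JOIN   Enrolment e ON e.student_id = s.student_id    -- join
    JOIN   Course    c ON c.course_id  = e.course_id
    WHERE  s.coin_face = 'Heads';                        -- select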
https://en.wikipedia.org/wiki/RDBMS
In metadata, the term data element refers to an atomic unit of data that has precise meaning or precise semantics. A data element typically has an identifying name, a clear definition, and one or more representation terms. Data element usage can be discovered by inspection of software applications or application data files through a process of manual or automated Application Discovery and Understanding. Once data elements are discovered they can be registered in a metadata registry.

In telecommunications, the term data element is commonly defined as a named unit of data that, in some contexts, is considered indivisible and that, in other contexts, may consist of data items.

In the areas of databases and data systems more generally, a data element is a concept forming part of a data model. As an element of data representation, a collection of data elements forms a data structure.[1]

In practice, data elements (fields, columns, attributes, etc.) are sometimes "overloaded", meaning a given data element will have multiple potential meanings. While a known bad practice, overloading is nevertheless a very real factor or barrier to understanding what a system is doing.
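As a rough SQL illustration of the points above (all table and column names are invented, and the COMMENT syntax shown is the PostgreSQL form; other systems record definitions in a separate data dictionary or metadata registry):

    -- A well-defined data element: one name, one type, one documented meaning.
    CREATE TABLE Person (
        person_id      INTEGER PRIMARY KEY,
        date_of_birth  DATE NOT NULL
    );
    COMMENT ON COLUMN Person.date_of_birth IS
        'Calendar date of birth (Gregorian), recorded to the day';

    -- An "overloaded" data element (bad practice): the meaning of ref_code
    -- changes depending on the value of ref_type.
    CREATE TABLE Reference (
        ref_type  VARCHAR(10) NOT NULL,   -- e.g. 'ORDER', 'INVOICE', 'TICKET'
        ref_code  VARCHAR(30) NOT NULL    -- interpretation depends on ref_type
    );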
https://en.wikipedia.org/wiki/Data_element
Incomputinganddata management,data mappingis the process of creatingdata elementmappingsbetween two distinctdata models. Data mapping is used as a first step for a wide variety ofdata integrationtasks, including:[1] For example, a company that would like to transmit and receive purchases and invoices with other companies might use data mapping to create data maps from a company's data to standardizedANSI ASC X12messages for items such as purchase orders and invoices. X12 standards are genericElectronic Data Interchange(EDI) standards designed to allow acompanyto exchangedatawith any other company, regardless of industry. The standards are maintained by the Accredited Standards Committee X12 (ASC X12), with theAmerican National Standards Institute(ANSI) accredited to set standards for EDI. The X12 standards are often calledANSI ASC X12standards. TheW3CintroducedR2RMLas a standard for mapping data in arelational databaseto data expressed in terms of theResource Description Framework(RDF). In the future, tools based onsemantic weblanguages such as RDF, theWeb Ontology Language(OWL) and standardizedmetadata registrywill make data mapping a more automatic process. This process will be accelerated if each application performedmetadata publishing. Full automated data mapping is a very difficult problem (seesemantic translation). Data mappings can be done in a variety of ways using procedural code, creatingXSLTtransforms or by using graphical mapping tools that automatically generate executable transformation programs. These are graphical tools that allow a user to "draw" lines from fields in one set of data to fields in another. Some graphical data mapping tools allow users to "auto-connect" a source and a destination. This feature is dependent on the source and destinationdata element namebeing the same. Transformation programs are automatically created in SQL, XSLT,Java, orC++. These kinds of graphical tools are found in mostETL(extract, transform, and load) tools as the primary means of entering data maps to support data movement. Examples include SAP BODS and Informatica PowerCenter. This is the newest approach in data mapping and involves simultaneously evaluating actual data values in two data sources using heuristics and statistics to automatically discover complex mappings between two data sets. This approach is used to find transformations between two data sets, discovering substrings, concatenations,arithmetic, case statements as well as other kinds of transformation logic. This approach also discovers data exceptions that do not follow the discovered transformation logic. Semantic mappingis similar to the auto-connect feature of data mappers with the exception that ametadata registrycan be consulted to look up data element synonyms. For example, if the source system listsFirstNamebut the destination listsPersonGivenName, the mappings will still be made if these data elements are listed assynonymsin the metadata registry. Semantic mapping is only able to discover exact matches between columns of data and will not discover any transformation logic or exceptions between columns. Data lineage is a track of the life cycle of each piece of data as it is ingested, processed, and output by the analytics system. This provides visibility into the analytics pipeline and simplifies tracing errors back to their sources. It also enables replaying specific portions or inputs of the data flow for step-wise debugging or regenerating lost output. 
Database systems have, in fact, long used such information, known as data provenance, to address similar validation and debugging challenges.[2]
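Returning to the mapping itself, a data map is ultimately executed as a transformation, and when the generated code is SQL it often takes the shape below. The source and target tables are hypothetical; the FirstName and PersonGivenName columns echo the synonym example given earlier.

    -- Source system layout.
    CREATE TABLE src_customer (
        FirstName  VARCHAR(100),
        LastName   VARCHAR(100),
        BirthDate  VARCHAR(10)            -- stored as text, e.g. '1990-07-01'
    );

    -- Target system layout, with different element names and types.
    CREATE TABLE tgt_person (
        PersonGivenName   VARCHAR(100),
        PersonFamilyName  VARCHAR(100),
        PersonBirthDate   DATE
    );

    -- The data map, expressed as an executable SQL transformation:
    INSERT INTO tgt_person (PersonGivenName, PersonFamilyName, PersonBirthDate)
    SELECT FirstName,                     -- FirstName -> PersonGivenName
           LastName,                      -- LastName  -> PersonFamilyName
           CAST(BirthDate AS DATE)        -- type conversion as transformation logic
    FROM   src_customer;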
https://en.wikipedia.org/wiki/Data_mapping
Adata modelis anabstract modelthat organizes elements ofdataandstandardizeshow they relate to one another and to the properties of real-worldentities.[2][3]For instance, a data model may specify that the data element representing a car be composed of a number of other elements which, in turn, represent the color and size of the car and define its owner. The corresponding professional activity is called generallydata modelingor, more specifically,database design. Data models are typically specified by a data expert, data specialist, data scientist, data librarian, or a data scholar. A datamodeling languageand notation are often represented in graphical form as diagrams.[4] A data model can sometimes be referred to as adata structure, especially in the context ofprogramming languages. Data models are often complemented byfunction models, especially in the context ofenterprise models. A data model explicitly determines thestructure of data; conversely,structured datais data organized according to an explicit data model or data structure. Structured data is in contrast tounstructured dataandsemi-structured data. The termdata modelcan refer to two distinct but closely related concepts. Sometimes it refers to an abstract formalization of theobjectsand relationships found in a particular application domain: for example the customers, products, and orders found in a manufacturing organization. At other times it refers to the set of concepts used in defining such formalizations: for example concepts such as entities, attributes, relations, or tables. So the "data model" of a banking application may be defined using the entity–relationship "data model". This article uses the term in both senses. Managing large quantities of structured andunstructured datais a primary function ofinformation systems. Data models describe the structure, manipulation, and integrity aspects of the data stored in data management systems such as relational databases. They may also describe data with a looser structure, such asword processingdocuments,email messages, pictures, digital audio, and video:XDM, for example, provides a data model forXMLdocuments. The main aim of data models is to support the development ofinformation systemsby providing the definition and format of data. According to West and Fowler (1999) "if this is done consistently across systems then compatibility of data can be achieved. If the same data structures are used to store and access data then different applications can share data. The results of this are indicated above. However, systems and interfaces often cost more than they should, to build, operate, and maintain. They may also constrain the business rather than support it. A major cause is that the quality of the data models implemented in systems and interfaces is poor".[5] The reason for these problems is a lack of standards that will ensure that data models will both meet business needs and be consistent.[5] A data model explicitly determines the structure of data. Typical applications of data models include database models, design of information systems, and enabling exchange of data. Usually, data models are specified in a data modeling language.[3] A data modelinstancemay be one of three kinds according toANSIin 1975:[6] The significance of this approach, according to ANSI, is that it allows the three perspectives to be relatively independent of each other. Storage technology can change without affecting either the logical or the conceptual model. 
The table/column structure can change without (necessarily) affecting the conceptual model. In each case, of course, the structures must remain consistent with the other model. The table/column structure may be different from a direct translation of the entity classes and attributes, but it must ultimately carry out the objectives of the conceptual entity class structure. Early phases of many software development projects emphasize the design of aconceptual data model. Such a design can be detailed into alogical data model. In later stages, this model may be translated intophysical data model. However, it is also possible to implement a conceptual model directly. One of the earliest pioneering works in modeling information systems was done by Young and Kent (1958),[7][8]who argued for "a precise and abstract way of specifying the informational and time characteristics of adata processingproblem". They wanted to create "a notation that should enable theanalystto organize the problem around any piece ofhardware". Their work was the first effort to create an abstract specification and invariant basis for designing different alternative implementations using different hardware components. The next step in IS modeling was taken byCODASYL, an IT industry consortium formed in 1959, who essentially aimed at the same thing as Young and Kent: the development of "a proper structure for machine-independent problem definition language, at the system level of data processing". This led to the development of a specific ISinformation algebra.[8] In the 1960s data modeling gained more significance with the initiation of themanagement information system(MIS) concept. According to Leondes (2002), "during that time, the information system provided the data and information for management purposes. The first generationdatabase system, calledIntegrated Data Store(IDS), was designed byCharles Bachmanat General Electric. Two famous database models, thenetwork data modeland thehierarchical data model, were proposed during this period of time".[9]Towards the end of the 1960s,Edgar F. Coddworked out his theories of data arrangement, and proposed therelational modelfor database management based onfirst-order predicate logic.[10] In the 1970sentity–relationship modelingemerged as a new type of conceptual data modeling, originally formalized in 1976 byPeter Chen. Entity–relationship models were being used in the first stage ofinformation systemdesign during therequirements analysisto describe information needs or the type ofinformationthat is to be stored in adatabase. This technique can describe anyontology, i.e., an overview and classification of concepts and their relationships, for a certainarea of interest. In the 1970sG.M. Nijssendeveloped "Natural Language Information Analysis Method" (NIAM) method, and developed this in the 1980s in cooperation withTerry HalpinintoObject–Role Modeling(ORM). However, it was Terry Halpin's 1989 PhD thesis that created the formal foundation on which Object–Role Modeling is based. Bill Kent, in his 1978 bookData and Reality,[11]compared a data model to a map of a territory, emphasizing that in the real world, "highways are not painted red, rivers don't have county lines running down the middle, and you can't see contour lines on a mountain". 
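Taking the car example from the beginning of this article, the conceptual statement that a car has a color, a size and an owner might be detailed into a logical model and finally into a physical one. The SQL below is one possible physical rendering rather than a prescribed one; all names are invented for the sketch.

    CREATE TABLE Owner (
        owner_id   INTEGER PRIMARY KEY,
        full_name  VARCHAR(200) NOT NULL
    );

    CREATE TABLE Car (
        car_id      INTEGER PRIMARY KEY,
        color       VARCHAR(30) NOT NULL,
        size_class  VARCHAR(20) NOT NULL,   -- e.g. 'compact', 'mid-size'
        owner_id    INTEGER NOT NULL REFERENCES Owner (owner_id)
    );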
In contrast to other researchers who tried to create models that were mathematically clean and elegant, Kent emphasized the essential messiness of the real world, and the task of the data modeler to create order out of chaos without excessively distorting the truth. In the 1980s, according to Jan L. Harrington (2000), "the development of theobject-orientedparadigm brought about a fundamental change in the way we look at data and the procedures that operate on data. Traditionally, data and procedures have been stored separately: the data and their relationship in a database, the procedures in an application program. Object orientation, however, combined an entity's procedure with its data."[12] During the early 1990s, three Dutch mathematicians Guido Bakema, Harm van der Lek, and JanPieter Zwart, continued the development on the work ofG.M. Nijssen. They focused more on the communication part of the semantics. In 1997 they formalized the method Fully Communication Oriented Information ModelingFCO-IM. A database model is a specification describing how a database is structured and used. Several such models have been suggested. Common models include: A data structure diagram (DSD) is adiagramand data model used to describeconceptual data modelsby providing graphical notations which documententitiesand theirrelationships, and theconstraintsthat bind them. The basic graphic elements of DSDs areboxes, representing entities, andarrows, representing relationships. Data structure diagrams are most useful for documenting complex data entities. Data structure diagrams are an extension of theentity–relationship model(ER model). In DSDs,attributesare specified inside the entity boxes rather than outside of them, while relationships are drawn as boxes composed of attributes which specify the constraints that bind entities together. DSDs differ from the ER model in that the ER model focuses on the relationships between different entities, whereas DSDs focus on the relationships of the elements within an entity and enable users to fully see the links and relationships between each entity. There are several styles for representing data structure diagrams, with the notable difference in the manner of definingcardinality. The choices are between arrow heads, inverted arrow heads (crow's feet), or numerical representation of the cardinality. An entity–relationship model (ERM), sometimes referred to as an entity–relationship diagram (ERD), could be used to represent an abstractconceptual data model(orsemantic data modelor physical data model) used insoftware engineeringto represent structured data. There are several notations used for ERMs. Like DSD's,attributesare specified inside the entity boxes rather than outside of them, while relationships are drawn as lines, with the relationship constraints as descriptions on the line. The E-R model, while robust, can become visually cumbersome when representing entities with several attributes. There are several styles for representing data structure diagrams, with a notable difference in the manner of defining cardinality. The choices are between arrow heads, inverted arrow heads (crow's feet), or numerical representation of the cardinality. A data model inGeographic information systemsis a mathematical construct for representing geographic objects or surfaces as data. For example, Generic data models are generalizations of conventional data models. 
They define standardized general relation types, together with the kinds of things that may be related by such a relation type. Generic data models are developed as an approach to solving some shortcomings of conventional data models. For example, different modelers usually produce different conventional data models of the same domain. This can lead to difficulty in bringing the models of different people together and is an obstacle for data exchange and data integration. Invariably, however, this difference is attributable to different levels of abstraction in the models and differences in the kinds of facts that can be instantiated (the semantic expression capabilities of the models). The modelers need to communicate and agree on certain elements that are to be rendered more concretely, in order to make the differences less significant. A semantic data model in software engineering is a technique to define the meaning of data within the context of its interrelationships with other data. A semantic data model is an abstraction that defines how the stored symbols relate to the real world.[13]A semantic data model is sometimes called aconceptual data model. The logical data structure of adatabase management system(DBMS), whetherhierarchical,network, orrelational, cannot totally satisfy therequirementsfor a conceptual definition of data because it is limited in scope and biased toward the implementation strategy employed by the DBMS. Therefore, the need to define data from aconceptual viewhas led to the development of semantic data modeling techniques. That is, techniques to define the meaning of data within the context of its interrelationships with other data. As illustrated in the figure. The real world, in terms of resources, ideas, events, etc., are symbolically defined within physical data stores. A semantic data model is an abstraction that defines how the stored symbols relate to the real world. Thus, the model must be a true representation of the real world.[13] Data architecture is the design of data for use in defining the target state and the subsequent planning needed to hit the target state. It is usually one of severalarchitecture domainsthat form the pillars of anenterprise architectureorsolution architecture. A data architecture describes the data structures used by a business and/or its applications. There are descriptions of data in storage and data in motion; descriptions of data stores, data groups, and data items; and mappings of those data artifacts to data qualities, applications, locations, etc. Essential to realizing the target state, Data architecture describes how data is processed, stored, and utilized in a given system. It provides criteria for data processing operations that make it possible to design data flows and also control the flow of data in the system. Data modeling insoftware engineeringis the process of creating a data model by applying formal data model descriptions using data modeling techniques. Data modeling is a technique for defining businessrequirementsfor a database. It is sometimes calleddatabase modelingbecause a data model is eventually implemented in a database.[16] The figure illustrates the way data models are developed and used today. Aconceptual data modelis developed based on the datarequirementsfor the application that is being developed, perhaps in the context of anactivity model. The data model will normally consist of entity types, attributes, relationships, integrity rules, and the definitions of those objects. 
This is then used as the start point for interface ordatabase design.[5] Some important properties of data for which requirements need to be met are: Another kind of data model describes how to organize data using adatabase management systemor other data management technology. It describes, for example, relational tables and columns or object-oriented classes and attributes. Such a data model is sometimes referred to as thephysical data model, but in the original ANSI three schema architecture, it is called "logical". In that architecture, the physical model describes the storage media (cylinders, tracks, and tablespaces). Ideally, this model is derived from the more conceptual data model described above. It may differ, however, to account for constraints like processing capacity and usage patterns. Whiledata analysisis a common term for data modeling, the activity actually has more in common with the ideas and methods ofsynthesis(inferring general concepts from particular instances) than it does withanalysis(identifying component concepts from more general ones). {Presumably we call ourselvessystems analystsbecause no one can saysystems synthesists.} Data modeling strives to bring the data structures of interest together into a cohesive, inseparable, whole by eliminating unnecessary data redundancies and by relating data structures withrelationships. A different approach is to useadaptive systemssuch asartificial neural networksthat can autonomously create implicit models of data. A data structure is a way of storing data in a computer so that it can be used efficiently. It is an organization of mathematical and logical concepts of data. Often a carefully chosen data structure will allow the mostefficientalgorithmto be used. The choice of the data structure often begins from the choice of anabstract data type. A data model describes the structure of the data within a given domain and, by implication, the underlying structure of that domain itself. This means that a data model in fact specifies a dedicatedgrammarfor a dedicated artificial language for that domain. A data model represents classes of entities (kinds of things) about which a company wishes to hold information, the attributes of that information, and relationships among those entities and (often implicit) relationships among those attributes. The model describes the organization of the data to some extent irrespective of how data might be represented in a computer system. The entities represented by a data model can be the tangible entities, but models that include such concrete entity classes tend to change over time. Robust data models often identifyabstractionsof such entities. For example, a data model might include an entity class called "Person", representing all the people who interact with an organization. Such anabstract entityclass is typically more appropriate than ones called "Vendor" or "Employee", which identify specific roles played by those people. The term data model can have two meanings:[17] A data model theory has three main components:[17] For example, in therelational model, the structural part is based on a modified concept of themathematical relation; the integrity part is expressed infirst-order logicand the manipulation part is expressed using therelational algebra,tuple calculusanddomain calculus. A data model instance is created by applying a data model theory. This is typically done to solve some business enterprise requirement. 
Business requirements are normally captured by a semanticlogical data model. This is transformed into a physical data model instance from which is generated a physical database. For example, a data modeler may use a data modeling tool to create anentity–relationship modelof the corporate data repository of some business enterprise. This model is transformed into arelational model, which in turn generates arelational database. Patterns[18]are common data modeling structures that occur in many data models. A data-flow diagram (DFD) is a graphical representation of the "flow" of data through aninformation system. It differs from theflowchartas it shows thedataflow instead of thecontrolflow of the program. A data-flow diagram can also be used for thevisualizationofdata processing(structured design). Data-flow diagrams were invented byLarry Constantine, the original developer of structured design,[20]based on Martin and Estrin's "data-flow graph" model of computation. It is common practice to draw acontext-level data-flow diagramfirst which shows the interaction between the system and outside entities. TheDFDis designed to show how a system is divided into smaller portions and to highlight the flow of data between those parts. This context-level data-flow diagram is then "exploded" to show more detail of the system being modeled An Information model is not a type of data model, but more or less an alternative model. Within the field of software engineering, both a data model and an information model can be abstract, formal representations of entity types that include their properties, relationships and the operations that can be performed on them. The entity types in the model may be kinds of real-world objects, such as devices in a network, or they may themselves be abstract, such as for the entities used in a billing system. Typically, they are used to model a constrained domain that can be described by a closed set of entity types, properties, relationships and operations. According to Lee (1999)[21]an information model is a representation of concepts, relationships, constraints, rules, andoperationsto specifydata semanticsfor a chosen domain of discourse. It can provide sharable, stable, and organized structure of information requirements for the domain context.[21]More in general the terminformation modelis used for models of individual things, such as facilities, buildings, process plants, etc. In those cases the concept is specialised toFacility Information Model,Building Information Model, Plant Information Model, etc. Such an information model is an integration of a model of the facility with the data and documents about the facility. An information model provides formalism to the description of a problem domain without constraining how that description is mapped to an actual implementation in software. There may be many mappings of the information model. Such mappings are called data models, irrespective of whether they areobject models(e.g. usingUML),entity–relationship modelsorXML schemas. An object model in computer science is a collection of objects or classes through which a program can examine and manipulate some specific parts of its world. In other words, the object-oriented interface to some service or system. Such an interface is said to be theobject model ofthe represented service or system. For example, theDocument Object Model (DOM)[1]is a collection of objects that represent apagein aweb browser, used byscriptprograms to examine and dynamically change the page. 
There is a Microsoft Excel object model[22] for controlling Microsoft Excel from another program, and the ASCOM Telescope Driver[23] is an object model for controlling an astronomical telescope. In computing, the term object model has a distinct second meaning: the general properties of objects in a specific computer programming language, technology, notation or methodology that uses them; examples include the Java object model, the COM object model, and the object model of OMT. Such object models are usually defined using concepts such as class, message, inheritance, polymorphism, and encapsulation. There is an extensive literature on formalized object models as a subset of the formal semantics of programming languages.

Object–Role Modeling (ORM) is a method for conceptual modeling, and can be used as a tool for information and rules analysis.[25] Object–Role Modeling is a fact-oriented method for performing systems analysis at the conceptual level. The quality of a database application depends critically on its design. To help ensure correctness, clarity, adaptability and productivity, information systems are best specified first at the conceptual level, using concepts and language that people can readily understand. The conceptual design may include data, process and behavioral perspectives, and the actual DBMS used to implement the design might be based on one of many logical data models (relational, hierarchic, network, object-oriented, etc.).[26]

The Unified Modeling Language (UML) is a standardized general-purpose modeling language in the field of software engineering. It is a graphical language for visualizing, specifying, constructing, and documenting the artifacts of a software-intensive system. The Unified Modeling Language offers a standard way to write a system's blueprints, covering conceptual elements such as business processes and system functions as well as concrete elements such as programming language statements, database schemas, and reusable software components.[27] UML offers a mix of functional models, data models, and database models.
https://en.wikipedia.org/wiki/Data_model
Database designis the organization of data according to adatabase model. The designer determines what data must be stored and how the data elements interrelate. With this information, they can begin to fit the data to the database model.[1]A database management system manages the data accordingly. Database design is a process that consists of several steps. The first step of database design involves classifying data and identifying interrelationships. The theoretical representation of data is called anontologyor aconceptual data model. In a majority of cases, the person designing a database is a person with expertise in database design, rather than expertise in the domain from which the data to be stored is drawn e.g. financial information, biological information etc. Therefore, the data to be stored in a particular database must be determined in cooperation with a person who does have expertise in that domain, and who is aware of the meaning of the data to be stored within the system. This process is one which is generally considered part ofrequirements analysis, and requires skill on the part of the database designer to elicit the needed information from those with thedomain knowledge. This is because those with the necessary domain knowledge often cannot clearly express the system requirements for the database as they are unaccustomed to thinking in terms of the discrete data elements which must be stored. Data to be stored can be determined by Requirement Specification.[2] Once a database designer is aware of the data which is to be stored within the database, they must then determine where dependency is within the data. Sometimes when data is changed you can be changing other data that is not visible. For example, in a list of names and addresses, assuming a situation where multiple people can have the same address, but one person cannot have more than one address, the address is dependent upon the name. When provided a name and the list the address can be uniquely determined; however, the inverse does not hold – when given an address and the list, a name cannot be uniquely determined because multiple people can reside at an address. Because an address is determined by a name, an address is considered dependent on a name. (NOTE: A common misconception is that therelational modelis so called because of the stating of relationships between data elements therein. This is not true. The relational model is so named because it is based upon the mathematical structures known asrelations.) The information obtained can be formalized in a diagram or schema. At this stage, it is aconceptual schema. One of the most common types of conceptual schemas is the ER (entity–relationship model) diagrams. Attributes in ER diagrams are usually modeled as an oval with the name of the attribute, linked to the entity or relationship that contains the attribute. ER models are commonly used in information system design; for example, they are used to describe information requirements and / or the types of information to be stored in the database during the conceptual structure design phase.[3] Once the relationships and dependencies amongst the various pieces of information have been determined, it is possible to arrange the data into a logical structure which can then be mapped into the storage objects supported by thedatabase management system. In the case ofrelational databasesthe storage objects aretableswhich store data in rows and columns. 
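For example, the name/address dependency discussed above could be mapped to a single table in which the name acts as the key, since the name uniquely determines the address but not the other way round. This is only a sketch with invented names; in practice a surrogate key would usually stand in for the person's name.

    -- full_name functionally determines address (one address per person),
    -- but an address may be shared by many people, so it is not a key.
    CREATE TABLE PersonAddress (
        full_name  VARCHAR(200) PRIMARY KEY,
        address    VARCHAR(400) NOT NULL
    );

    -- Given a name, the address is uniquely determined:
    SELECT address FROM PersonAddress WHERE full_name = 'Mary Jones';

    -- Given an address, several names may be returned:
    SELECT full_name FROM PersonAddress WHERE address = '12 High Street';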
In anObject databasethe storage objects correspond directly to the objects used by theObject-oriented programming languageused to write the applications that will manage and access the data. The relationships may be defined as attributes of the object classes involved or as methods that operate on the object classes. The way this mapping is generally performed is such that each set of related data which depends upon a single object, whether real or abstract, is placed in a table. Relationships between these dependent objects are then stored as links between the various objects. Each table may represent an implementation of either a logical object or a relationship joining one or more instances of one or more logical objects. Relationships between tables may then be stored as links connecting child tables with parents. Since complex logical relationships are themselves tables they will probably have links to more than one parent. In the field ofrelational databasedesign,normalizationis a systematic way of ensuring that a database structure is suitable for general-purpose querying and free of certain undesirable characteristics—insertion, update, and deletion anomalies that could lead to loss ofdata integrity. A standard piece of database design guidance is that the designer should create a fully normalized design; selectivedenormalizationcan subsequently be performed, but only forperformancereasons. The trade-off is storage space vs performance. The more normalized the design is, the less data redundancy there is (and therefore, it takes up less space to store), however, common data retrieval patterns may now need complex joins, merges, and sorts to occur – which takes up more data read, and compute cycles. Some modeling disciplines, such as thedimensional modelingapproach todata warehousedesign, explicitly recommend non-normalized designs, i.e. designs that in large part do not adhere to3NF. Normalization consists of normal forms that are1NF,2NF, 3NF,Boyce-Codd NF (3.5NF),4NF,5NFand6NF. Document databases take a different approach. A document that is stored in such a database, typically would contain more than one normalized data unit and often the relationships between the units as well. If all the data units and the relationships in question are often retrieved together, then this approach optimizes the number of retrieves. It also simplifies how data gets replicated, because now there is a clearly identifiable unit of data whose consistency is self-contained. Another consideration is that reading and writing a single document in such databases will require a single transaction – which can be an important consideration in aMicroservicesarchitecture. In such situations, often, portions of the document are retrieved from other services via an API and stored locally for efficiency reasons. If the data units were to be split out across the services, then a read (or write) to support a service consumer might require more than one service calls, and this could result in management of multiple transactions, which may not be preferred. The physical design of the database specifies the physical configuration of the database on the storage media. This includes detailed specification ofdata elementsanddata types. This step involves specifying theindexingoptions and other parameters residing in the DBMSdata dictionary. It is the detailed design of a system that includes modules & the database's hardware & software specifications of the system. 
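As an illustration of the normalization trade-off described above (all names invented): a single wide table that repeats customer details on every order carries redundancy and update anomalies, while the normalized version stores each fact once at the cost of a join at query time.

    -- Denormalized: customer details repeated on every order row.
    CREATE TABLE OrderFlat (
        order_id       INTEGER PRIMARY KEY,
        order_date     DATE NOT NULL,
        customer_name  VARCHAR(200) NOT NULL,
        customer_city  VARCHAR(100) NOT NULL    -- depends on the customer, not the order
    );

    -- Normalized: each fact stored once, related through a foreign key.
    CREATE TABLE Customer (
        customer_id    INTEGER PRIMARY KEY,
        customer_name  VARCHAR(200) NOT NULL,
        customer_city  VARCHAR(100) NOT NULL
    );

    CREATE TABLE CustomerOrder (
        order_id     INTEGER PRIMARY KEY,
        order_date   DATE NOT NULL,
        customer_id  INTEGER NOT NULL REFERENCES Customer (customer_id)
    );

    -- The cost of normalization: retrieving an order with its customer needs a join.
    SELECT o.order_id, o.order_date, c.customer_name, c.customer_city
    FROM   CustomerOrder o
    JOIN   Customer      c ON c.customer_id = o.customer_id;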
A number of storage- and access-related aspects, such as indexing and partitioning, are addressed at the physical layer. At the application level, other aspects of the physical design can include the need to define stored procedures, materialized query views, OLAP cubes, etc.
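Below is a sketch of two such application-level artifacts, reusing the hypothetical Customer and CustomerOrder tables from the previous sketch. The syntax shown is PostgreSQL-flavoured; materialized views and procedural extensions differ noticeably between products.

    -- A materialized query view: the aggregated join result is precomputed and stored.
    CREATE MATERIALIZED VIEW order_summary AS
    SELECT c.customer_name, COUNT(*) AS order_count
    FROM   CustomerOrder o
    JOIN   Customer      c ON c.customer_id = o.customer_id
    GROUP  BY c.customer_name;

    -- A stored procedure encapsulating a common operation.
    CREATE PROCEDURE add_customer(p_id INTEGER, p_name TEXT, p_city TEXT)
    LANGUAGE SQL
    AS $$
        INSERT INTO Customer (customer_id, customer_name, customer_city)
        VALUES (p_id, p_name, p_city);
    $$;

    -- Invocation:
    CALL add_customer(42, 'Acme Ltd', 'Springfield');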
https://en.wikipedia.org/wiki/Database_design
Anentity–relationship model(orER model) describes interrelated things of interest in a specific domain of knowledge. A basic ER model is composed of entity types (which classify the things of interest) and specifies relationships that can exist betweenentities(instances of those entity types). Insoftware engineering, an ER model is commonly formed to represent things a business needs to remember in order to performbusiness processes. Consequently, the ER model becomes an abstractdata model,[1]that defines a data or information structure that can be implemented in adatabase, typically arelational database. Entity–relationship modeling was developed for database and design byPeter Chenand published in a 1976 paper,[2]with variants of the idea existing previously.[3]Today it is commonly used for teaching students the basics of database structure. Some ER models show super and subtype entities connected by generalization-specialization relationships,[4]and an ER model can also be used to specify domain-specificontologies. An ER model usually results from systematic analysis to define and describe the data created and needed by processes in a business area. Typically, it represents records of entities and events monitored and directed by business processes, rather than the processes themselves. It is usually drawn in a graphical form as boxes (entities) that are connected by lines (relationships) which express the associations and dependencies between entities. It can also be expressed in a verbal form, for example:one building may be divided into zero or more apartments, but one apartment can only be located in one building. Entities may be defined not only by relationships, but also by additional properties (attributes), which include identifiers called "primary keys". Diagrams created to represent attributes as well as entities and relationships may be called entity-attribute-relationship diagrams, rather than entity–relationship models. An ER model is typically implemented as adatabase. In a simple relational database implementation, each row of a table represents one instance of an entity type, and each field in a table represents an attribute type. In arelational databasea relationship between entities is implemented by storing the primary key of one entity as a pointer or "foreign key" in the table of another entity. There is a tradition for ER/data models to be built at two or three levels of abstraction. The conceptual-logical-physical hierarchy below is used in other kinds of specification, and is different from thethree schema approachtosoftware engineering. The first stage ofinformation systemdesign uses these models during therequirements analysisto describe information needs or the type ofinformationthat is to be stored in adatabase. Thedata modelingtechnique can be used to describe anyontology(i.e. an overview and classifications of used terms and their relationships) for a certainarea of interest. In the case of the design of an information system that is based on a database, theconceptual data modelis, at a later stage (usually called logical design), mapped to alogical data model, such as therelational model. This in turn is mapped to a physical model during physical design. Sometimes, both of these phases are referred to as "physical design." Anentitymay be defined as a thing that is capable of an independent existence that can be uniquely identified, and is capable of storing data.[5]An entity is an abstraction from the complexities of a domain. 
When we speak of an entity, we normally speak of some aspect of the real world that can be distinguished from other aspects of the real world.[6] An entity is a thing that exists either physically or logically. An entity may be a physical object such as a house or a car (they exist physically), an event such as a house sale or a car service, or a concept such as a customer transaction or order (they exist logically—as a concept). Although the term entity is the one most commonly used, following Chen, entities and entity-types should be distinguished. An entity-type is a category. An entity, strictly speaking, is an instance of a given entity-type. There are usually many instances of an entity-type. Because the term entity-type is somewhat cumbersome, most people tend to use the term entity as a synonym. Entities can be thought of asnouns.[7]Examples include a computer, an employee, a song, or a mathematical theorem. Arelationshipcaptures how entities are related to one another. Relationships can be thought of asverbs, linking two or more nouns.[7]Examples include anownsrelationship between a company and a computer, asupervisesrelationship between an employee and a department, aperformsrelationship between an artist and a song, and aprovesrelationship between a mathematician and a conjecture. The model's linguistic aspect described above is used in thedeclarativedatabasequery languageERROL, which mimicsnatural languageconstructs. ERROL'ssemanticsand implementation are based on reshaped relational algebra (RRA), arelational algebrathat is adapted to the entity–relationship model and captures its linguistic aspect. Entities and relationships can both have attributes. For example, anemployeeentity might have aSocial Security Number(SSN) attribute, while aprovedrelationship may have adateattribute. All entities exceptweak entitiesmust have a minimal set of uniquely identifying attributes that may be used as aunique/primarykey. Entity-relationship diagrams (ERDs) do not show single entities or single instances of relations. Rather, they show entity sets (all entities of the same entity type) and relationship sets (all relationships of the same relationship type). For example, a particularsongis an entity, the collection of all songs in a database is an entity set, theeatenrelationship between a child and his lunch is a single relationship, and the set of all such child-lunch relationships in a database is a relationship set. In other words, a relationship set corresponds to arelation in mathematics, while a relationship corresponds to a member of the relation. Certaincardinality constraintson relationship sets may be indicated as well. Physical views show how data is actually stored. Chen's original paper gives an example of a relationship and its roles. He describes a relationship "marriage" and its two roles, "husband" and "wife". A person plays the role of husband in a marriage (relationship) and another person plays the role of wife in the (same) marriage. These words are nouns. Chen's terminology has also been applied to earlier ideas. The lines, arrows, and crow's feet of some diagrams owes more to the earlierBachman diagramsthan to Chen's relationship diagrams. Another common extension to Chen's model is to "name" relationships and roles as verbs or phrases. It has also become prevalent to name roles with phrases such asis the owner ofandis owned by. Correct nouns in this case areownerandpossession. 
Thus,person plays the role of ownerandcar plays the role of possessionrather thanperson plays the role of,is the owner of, etc. Using nouns has direct benefit when generating physical implementations from semantic models. When apersonhas two relationships withcarit is possible to generate names such asowner_personanddriver_person, which are immediately meaningful.[9] Modifications to the original specification can be beneficial. Chen describedlook-across cardinalities. As an aside, theBarker–Ellisnotation, used in Oracle Designer, uses same-side for minimum cardinality (analogous to optionality) and role, but look-across for maximum cardinality (the crow's foot).[clarification needed] Research byMerise, Elmasri & Navathe and others has shown there is a preference for same-side for roles and both minimum and maximum cardinalities,[10][11][12]and researchers (Feinerer, Dullea et al.) have shown that this is more coherent when applied to n-ary relationships of order greater than 2.[13][14] Dullea et al. states: "A 'look across' notation such as used in theUMLdoes not effectively represent the semantics of participation constraints imposed on relationships where the degree is higher than binary." Feinerer says: "Problems arise if we operate under the look-across semantics as used for UML associations. Hartmann[15]investigates this situation and shows how and why different transformations fail."(Although the "reduction" mentioned is spurious as the two diagrams 3.4 and 3.5 are in fact the same)and also "As we will see on the next few pages, the look-across interpretation introduces several difficulties that prevent the extension of simple mechanisms from binary to n-ary associations." Chen's notation for entity–relationship modeling uses rectangles to represent entity sets, and diamonds to represent relationships appropriate forfirst-class objects: they can have attributes and relationships of their own. If an entity set participates in a relationship set, they are connected with a line. Attributes are drawn as ovals and connected with a line to exactly one entity or relationship set. Cardinality constraints are expressed as follows: Attributes are often omitted as they can clutter up a diagram. Other diagram techniques often list entity attributes within the rectangles drawn for entity sets. Crow's foot notation, the beginning of which dates back to an article by Gordon Everest (1976),[16]is used inBarker's notation,Structured Systems Analysis and Design Method(SSADM), andinformation technology engineering. Crow's foot diagrams represent entities as boxes, and relationships as lines between the boxes. Different shapes at the ends of these lines represent the relative cardinality of the relationship. Crow's foot notation was in use inICLin 1978,[17]and was used in the consultancy practiceCACI. Many of the consultants at CACI (including Richard Barker) came from ICL and subsequently moved toOracleUK, where they developed the early versions of Oracle'sCASEtools, introducing the notation to a wider audience. With this notation, relationships cannot have attributes. Where necessary, relationships are promoted to entities in their own right: for example, if it is necessary to capture where and when an artist performed a song, a new entity "performance" is introduced (with attributes reflecting the time and place), and the relationship of an artist to a song becomes an indirect relationship via the performance (artist-performs-performance, performance-features-song). 
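In a relational implementation, the performance example just described, in which a relationship with attributes is promoted to an entity in its own right, might look as follows (a sketch; all names are invented):

    CREATE TABLE Artist (
        artist_id  INTEGER PRIMARY KEY,
        name       VARCHAR(200) NOT NULL
    );

    CREATE TABLE Song (
        song_id  INTEGER PRIMARY KEY,
        title    VARCHAR(200) NOT NULL
    );

    -- The promoted relationship: artist-performs-performance, performance-features-song.
    CREATE TABLE Performance (
        performance_id  INTEGER PRIMARY KEY,
        artist_id       INTEGER NOT NULL REFERENCES Artist (artist_id),
        song_id         INTEGER NOT NULL REFERENCES Song (song_id),
        performed_at    TIMESTAMP NOT NULL,      -- when the performance took place
        venue           VARCHAR(200) NOT NULL    -- where it took place
    );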
Three symbols are used to represent cardinality: These symbols are used in pairs to represent the four types of cardinality that an entity may have in a relationship. The inner component of the notation represents the minimum, and the outer component represents the maximum. Users of a modeled database can encounter two well-known issues where the returned results differ from what the query author assumed. These are known as thefan trapand thechasm trap, and they can lead to inaccurate query results if not properly handled during the design of the Entity-Relationship Model (ER Model). Both the fan trap and chasm trap underscore the importance of ensuring that ER models are not only technically correct but also fully and accurately reflect the real-world relationships they are designed to represent. Identifying and resolving these traps early in the design process helps avoid significant issues later, especially in complex databases intended forbusiness intelligenceor decision support. The first issue is the fan trap. It occurs when a (master) table links to multiple tables in a one-to-many relationship. The issue derives its name from the visual appearance of the model when it is drawn in an entity–relationship diagram, as the linked tables 'fan out' from the master table. This type of model resembles astar schema, which is a common design in data warehouses. When attempting to calculate sums over aggregates using standard SQL queries based on the master table, the results can be unexpected and often incorrect due to the way relationships are structured. The miscalculation happens becauseSQLtreats each relationship individually, which may result in double-counting or other inaccuracies. This issue is particularly common in decision support systems. To mitigate this, either the data model or theSQL queryitself must be adjusted. Some database querying software designed for decision support includes built-in methods to detect and address fan traps. The second issue is the chasm trap. A chasm trap occurs when a model suggests the existence of a relationship between entity types, but the pathway between these entities is incomplete or missing in certain instances. For example, imagine a database where a Building has one or more Rooms, and these Rooms hold zero or more Computers. One might expect to query the model to list all Computers in a Building. However, if a Computer is temporarily not assigned to a Room (perhaps under repair or stored elsewhere), it won't be included in the query results. The query would only return Computers currently assigned to Rooms, not all Computers in the Building. This reflects a flaw in the model, as it fails to account for Computers that are in the Building but not in a Room. To resolve this, an additional relationship directly linking the Building and Computers would be required. A semantic model is a model of concepts and is sometimes called a "platform independent model". It is an intensional model. At least sinceCarnap, it is well known that:[18] An extensional model is one that maps to the elements of a particular methodology or technology, and is thus a "platform specific model". 
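The chasm trap described above can be made concrete with a small SQL sketch (the schema and all names are hypothetical): a query that reaches Computer only through Room silently drops computers that are not currently assigned to a room, whereas a direct Building–Computer relationship closes the gap.

    CREATE TABLE Building (
        building_id  INTEGER PRIMARY KEY,
        address      VARCHAR(300) NOT NULL
    );

    CREATE TABLE Room (
        room_id      INTEGER PRIMARY KEY,
        building_id  INTEGER NOT NULL REFERENCES Building (building_id)
    );

    CREATE TABLE Computer (
        computer_id  INTEGER PRIMARY KEY,
        room_id      INTEGER REFERENCES Room (room_id),                   -- NULL while unassigned
        building_id  INTEGER NOT NULL REFERENCES Building (building_id)   -- the added direct link
    );

    -- Chasm trap: this query misses computers whose room_id is NULL.
    SELECT c.computer_id
    FROM   Computer c
    JOIN   Room     r ON r.room_id = c.room_id
    WHERE  r.building_id = 1;

    -- With the direct relationship, every computer in the building is returned.
    SELECT computer_id
    FROM   Computer
    WHERE  building_id = 1;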
The UML specification explicitly states that associations in class models are extensional, and this is in fact self-evident from the extensive array of additional "adornments" the specification provides over and above those offered by any of the prior candidate "semantic modelling languages" (see "UML as a Data Modeling Notation, Part 2"). In his seminal 1976 article, Peter Chen, the father of ER modeling, explicitly contrasted entity–relationship diagrams with record modelling techniques, and several other authors support Chen's program.[19][20][21][22][23] Chen is in accord with philosophical traditions dating from the Ancient Greek philosophers Plato and Aristotle.[24] Plato himself associates knowledge with the apprehension of unchanging Forms (namely, archetypes or abstract representations of the many types of things, and properties) and their relationships to one another.
https://en.wikipedia.org/wiki/Entity%E2%80%93relationship_model
Object–role modeling(ORM) is used to model thesemanticsof auniverse of discourse. ORM is often used fordata modelingandsoftware engineering. An object–role model uses graphical symbols that are based onfirst order predicate logicand set theory to enable the modeler to create an unambiguous definition of an arbitrary universe of discourse. Attribute free, the predicates of an ORM Model lend themselves to the analysis and design ofgraph databasemodels in as much as ORM was originally conceived to benefit relational database design. The term "object–role model" was coined in the 1970s and ORM based tools have been used for more than 30 years – principally fordata modeling. More recently ORM has been used to modelbusiness rules, XML-Schemas,data warehouses, requirements engineering and web forms.[1] The roots of ORM can be traced to research into semantic modeling for information systems in Europe during the 1970s. There were many pioneers and this short summary does not by any means mention them all. An early contribution came in 1973 when Michael Senko wrote about "data structuring" in the IBM Systems Journal. In 1974 Jean-Raymond Abrial contributed an article about "Data Semantics". In June 1975,Eckhard Falkenberg's doctoral thesis was published and in 1976 one of Falkenberg's papers mentions the term "object–role model". G.M. Nijssenmade fundamental contributions by introducing the "circle-box" notation for object types and roles, and by formulating the first version of the conceptual schema design procedure. Robert Meersman extended the approach by adding subtyping, and introducing the first truly conceptual query language. Object role modeling also evolved from theNatural language Information Analysis Method, a methodology that was initially developed by the academic researcher,G.M. Nijssenin the Netherlands (Europe) in the mid-1970s and his research team at the Control Data Corporation Research Laboratory in Belgium, and later at the University of Queensland, Australia in the 1980s. The acronymNIAMoriginally stood for "Nijssen's Information Analysis Methodology", and later generalised to "Natural language Information Analysis Methodology" andBinary Relationship Modelingsince G. M. Nijssen was only one of many people involved in the development of the method.[2] In 1989,Terry Halpincompleted his PhD thesis on ORM, providing the first full formalization of the approach and incorporating several extensions. Also in 1989,Terry HalpinandG.M. Nijssenco-authored the book "Conceptual Schema and Relational Database Design" and several joint papers, providing the first formalization of object–role modeling. A graphical NIAM design tool which included the ability to generate database-creation scripts for Oracle, DB2 and DBQ was developed in the early 1990s in Paris. It was originally named Genesys and was marketed successfully in France and later Canada. It could also handle ER diagram design. It was ported to SCO Unix, SunOs, DEC 3151's and Windows 3.0 platforms, and was later migrated to succeedingMicrosoftoperating systems, utilising XVT for cross operating system graphical portability. The tool was renamed OORIANE and is currently being used for large data warehouse and SOA projects. Also evolving from NIAM is "Fully Communication Oriented Information Modeling"FCO-IM(1992). It distinguishes itself from traditional ORM in that it takes a strict communication-oriented perspective. 
Rather than attempting to model the domain and its essential concepts, it models the communication in this domain (universe of discourse). Another important difference is that it does this on instance level, deriving type level and object/fact level during analysis. Another recent development is the use of ORM in combination with standardised relation types with associated roles and a standardmachine-readable dictionaryandtaxonomyof concepts as are provided in theGellish Englishdictionary. Standardisation of relation types (fact types), roles and concepts enables increased possibilities for model integration and model reuse. Object–role models are based on elementary facts, and expressed indiagramsthat can be verbalised into natural language. A fact is apropositionsuch as "John Smith was hired on 5 January 1995" or "Mary Jones was hired on 3 March 2010". With ORM,propositionssuch as these, are abstracted into "fact types" for example "Person was hired on Date" and the individual propositions are regarded as sample data. The difference between a "fact" and an "elementary fact" is that an elementary fact cannot be simplified without loss of meaning. This "fact-based" approach facilitates modeling, transforming, and querying information from any domain.[4] ORM is attribute-free: unlike models in theentity–relationship(ER) andUnified Modeling Language(UML) methods, ORM treats all elementary facts as relationships and so treats decisions for grouping facts into structures (e.g. attribute-based entity types, classes, relation schemes, XML schemas) as implementation concerns irrelevant to semantics. By avoiding attributes, ORM improves semantic stability and enables verbalization into natural language. Fact-based modelingincludes procedures for mapping facts to attribute-based structures, such as those of ER or UML.[4] Fact-based textual representations are based on formal subsets of native languages. ORM proponents argue that ORM models are easier to understand by people without a technical education. For example, proponents argue that object–role models are easier to understand than declarative languages such asObject Constraint Language(OCL) and other graphical languages such asUMLclass models.[4]Fact-based graphical notations are more expressive than those of ER andUML. An object–role model can be automatically mapped to relational and deductive databases (such asdatalog).[5] ORM2 is the latest generation of object–role modeling. The main objectives for the ORM 2 graphical notation are:[6] System development typically involves several stages such as: feasibility study; requirements analysis; conceptual design of data and operations; logical design; external design; prototyping; internal design and implementation; testing and validation; and maintenance. The seven steps of the conceptual schema design procedure are:[7] ORM's conceptual schema design procedure (CSDP) focuses on the analysis and design of data.
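Because ORM treats an elementary fact such as "Person was hired on Date" as an attribute-free relationship over object types, the idea can be sketched in a few lines of Python: a fact type holds a verbalisation template and a set of role-player tuples, and its population can be read back as natural-language sentences. The class and example data below are purely illustrative and not taken from any ORM tool.

    # Minimal sketch (not from any ORM tool): an elementary fact type as an
    # attribute-free relation whose roles are played by object types.
    from dataclasses import dataclass, field

    @dataclass
    class FactType:
        template: str              # verbalisation, e.g. "{0} was hired on {1}"
        roles: tuple               # object types playing each role, e.g. ("Person", "Date")
        population: set = field(default_factory=set)

        def assert_fact(self, *players):
            # An elementary fact is just a tuple of role players; no attributes involved.
            assert len(players) == len(self.roles), "arity mismatch"
            self.population.add(players)

        def verbalize(self):
            # The population can be read back as sentences, as ORM intends.
            return [self.template.format(*fact) for fact in sorted(self.population)]

    hired_on = FactType("{0} was hired on {1}", ("Person", "Date"))
    hired_on.assert_fact("John Smith", "5 January 1995")
    hired_on.assert_fact("Mary Jones", "3 March 2010")
    print(hired_on.verbalize())
    # ['John Smith was hired on 5 January 1995', 'Mary Jones was hired on 3 March 2010']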
https://en.wikipedia.org/wiki/Object-role_modeling
The theory ofologsis an attempt to provide a rigorous mathematical framework for knowledge representation, construction of scientific models and data storage usingcategory theory, linguistic and graphical tools. Ologs were introduced in 2012 byDavid Spivakand Robert Kent.[1] The term "olog" is short for "ontologylog". "Ontology" derives fromonto-, from theGreekὤν, ὄντος"being; that which is", present participle of the verbεἰμί"be", and-λογία,-logia:science,study,theory. An ologC{\displaystyle {\mathcal {C}}}for a given domain is acategorywhoseobjectsare boxes labeled with phrases (more specifically, singular indefinite noun phrases) relevant to the domain, and whosemorphismsare directed arrows between the boxes, labeled with verb phrases also relevant to the domain. These noun and verb phrases combine to form sentences that express relationships between objects in the domain. In every olog, the objects exist within atarget category. Unless otherwise specified, the target category is taken to beSet{\displaystyle {\textbf {Set}}}, thecategory of sets and functions. The boxes in the above diagram represent objects ofSet{\displaystyle {\textbf {Set}}}. For example, the box containing the phrase "an amino acid" represents the set of all amino acids, and the box containing the phrase "a side chain" represents the set of all side chains. The arrow labeled "has" that points from "an amino acid" to "a side chain" represents the function that maps each amino acid to its unique side chain. Another target category that can be used is theKleisli categoryCP{\displaystyle {\mathcal {C}}_{\mathbb {P} }}of thepower set monad. Given anA∈Ob(Set){\displaystyle A\in Ob({\textbf {Set}})},P(A){\displaystyle \mathbb {P} (A)}is then the power set of A. Thenatural transformationη{\displaystyle \eta }mapsa∈A{\displaystyle a\in A}to thesingleton{a}{\displaystyle \{a\}}, and the natural transformationμ{\displaystyle \mu }maps a set of sets to its union. TheKleisli categoryCP{\displaystyle {\mathcal {C}}_{\mathbb {P} }}is the category with the objects matching those inP{\displaystyle \mathbb {P} }, and morphisms that establishbinary relations. Given a morphismf:A→B{\displaystyle f:A\to B}, and givena∈A{\displaystyle a\in A}andb∈B{\displaystyle b\in B}, we define the morphismR{\displaystyle R}by saying that(a,b)∈R{\displaystyle (a,b)\in R}wheneverb∈f(a){\displaystyle b\in f(a)}. The verb phrases used with this target category would need to make sense with objects that are subsets: for example, "is related to" or "is greater than". Another possible target category is the Kleisli category of probability distributions, called the Giry monad.[2]This provides a generalization ofMarkov decision processes. An ologC{\displaystyle {\mathcal {C}}}can also be viewed as adatabase schema. Every box (object ofC{\displaystyle {\mathcal {C}}}) in the olog is atableT{\displaystyle T}and the arrows (morphisms) emanating from the box are columns inC{\displaystyle {\mathcal {C}}}. The assignment of a particular instance to an object ofC{\displaystyle {\mathcal {C}}}is done through afunctorI:C→Set{\displaystyle I:{\mathcal {C}}\to {\textbf {Set}}}. In the example above, the box "an amino acid" will be represented as a table whose number of rows is equal to the number of types of amino acids and whose number of columns is three, one column for each arrow emanating from that box. "Communication" between different ologs which in practice can be communication between different models or world-views is done usingfunctors. 
Spivak coins the notions of a 'meaningful' and 'strongly meaningful' functors.[1]LetC{\displaystyle {\mathcal {C}}}andD{\displaystyle {\mathcal {D}}}be two ologs,I:C→Set{\displaystyle I:{\mathcal {C}}\to {\textbf {Set}}},J:D→Set{\displaystyle J:{\mathcal {D}}\to {\textbf {Set}}}functors (see the section on ologs and databases) andF:C→D{\displaystyle F:{\mathcal {C}}\to {\mathcal {D}}}a functor.F{\displaystyle F}is called aschema mapping. We say that aF{\displaystyle F}ismeaningfulif there exists a natural transformationm:I→F∗J{\displaystyle m:I\to F^{*}J}(thepullbackof J by F). Taking as an exampleC{\displaystyle {\mathcal {C}}}andD{\displaystyle {\mathcal {D}}}as two different scientific models, the functorF{\displaystyle F}is meaningful if "predictions", which are objects inSet{\displaystyle {\textbf {Set}}}, made by the first modelC{\displaystyle {\mathcal {C}}}can be translated to the second modelD{\displaystyle {\mathcal {D}}}. We say thatF{\displaystyle F}isstrongly meaningfulif given an objectX∈C{\displaystyle X\in {\mathcal {C}}}we haveI(X)=J(F(X)){\displaystyle I(X)=J(F(X))}. This equality is equivalent to requiringm{\displaystyle m}to be a natural isomorphism. Sometimes it will be hard to find a meaningful functorF{\displaystyle F}fromC{\displaystyle {\mathcal {C}}}toD{\displaystyle {\mathcal {D}}}. In such a case we may try to define a new ologB{\displaystyle {\mathcal {B}}}which represents the common ground ofC{\displaystyle {\mathcal {C}}}andD{\displaystyle {\mathcal {D}}}and find meaningful functorsFC:B→C{\displaystyle F_{\mathcal {C}}:{\mathcal {B}}\to {\mathcal {C}}}andFD:B→D{\displaystyle F_{\mathcal {D}}:{\mathcal {B}}\to {\mathcal {D}}}. If communication between ologs is limited to a two-way communication as described above then we may think of a collection of ologs as nodes of agraphand of the edges as functors connecting the ologs. If a simultaneous communication between more than two ologs is allowed then the graph becomes a symmetricsimplicial complex. Spivak provides some rules of good practice for writing an olog whose morphisms have a functional nature (see the first example in the section Mathematical formalism).[1]The text in a box should adhere to the following rules: The first three rules ensure that the objects (the boxes) defined by the olog's author are well-defined sets. The fourth rule improves the labeling of arrows in an olog. This concept was used in a paper published in the December 2011 issue ofBioNanoScienceby David Spivak and others to establish a scientific analogy between spider silk and musical composition.[3]
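The reading of an olog as a database schema with a Set-valued instance functor can be sketched very roughly in Python: boxes become sets, arrows become functions between them, and functoriality amounts to declared composite arrows agreeing with actual composition on every instance. Only the amino acid and side chain boxes follow the article's example; the "an electric charge" box and all data values below are invented for illustration.

    # Illustrative sketch only: an olog's boxes as sets, arrows as functions,
    # and an instance assignment checked for compatibility with composition.

    boxes = {
        "an amino acid": {"glycine", "alanine"},
        "a side chain": {"H", "CH3"},
        "an electric charge": {"neutral"},
    }

    # Arrows of the olog: (source box, verb phrase, target box) -> function as a dict.
    arrows = {
        ("an amino acid", "has", "a side chain"): {"glycine": "H", "alanine": "CH3"},
        ("a side chain", "has", "an electric charge"): {"H": "neutral", "CH3": "neutral"},
        # Composite arrow, declared to commute with the two above:
        ("an amino acid", "has a side chain which has", "an electric charge"):
            {"glycine": "neutral", "alanine": "neutral"},
    }

    def check_commutes(f_key, g_key, gf_key):
        # Verify that the declared composite equals g after f on every instance,
        # as a functor into Set must.
        f, g, gf = arrows[f_key], arrows[g_key], arrows[gf_key]
        return all(g[f[x]] == gf[x] for x in boxes[f_key[0]])

    print(check_commutes(
        ("an amino acid", "has", "a side chain"),
        ("a side chain", "has", "an electric charge"),
        ("an amino acid", "has a side chain which has", "an electric charge"),
    ))  # True: the instance assignment respects composition.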
https://en.wikipedia.org/wiki/Olog
Thethree-schema approach, orthree-schema concept, insoftware engineeringis an approach to buildinginformation systemsand systemsinformation managementthat originated in the 1970s. It proposes three differentviewsin systems development, withconceptual modellingbeing considered the key to achievingdata integration.[2] The three-schema approach provides for three types of schemas with schema techniques based on formal language descriptions:[3] At the center, the conceptual schema defines theontologyof theconceptsas theusersthink of them and talk about them. The physical schema according to Sowa (2004) "describes the internal formats of thedatastored in thedatabase, and the external schema defines the view of the data presented to theapplication programs."[4]The framework attempted to permit multiple data models to be used for external schemata.[5] Over the years, the skill and interest in building information systems has grown tremendously. However, for the most part, the traditional approach to building systems has only focused on definingdatafrom two distinct views, the "user view" and the "computer view". From the user view, which will be referred to as the “external schema,” the definition of data is in the context of reports and screens designed to aid individuals in doing their specific jobs. The required structure of data from a usage view changes with the business environment and the individual preferences of the user. From the computer view, which will be referred to as the "internal schema", data is defined in terms of file structures for storage and retrieval. The required structure of data forcomputer storagedepends upon the specific computer technology employed and the need for efficient processing of data.[6] These two traditional views of data have been defined by analysts over the years on an application by application basis as specific business needs were addressed, see Figure 1. Typically, the internal schema defined for an initial application cannot be readily used for subsequent applications, resulting in the creation of redundant and often inconsistent definition of the same data. Data was defined by the layout of physical records and processed sequentially in early information systems. The need for flexibility, however, led to the introduction ofDatabase Management Systems(DBMSs), which allow for random access of logically connected pieces of data. The logical data structures within a DBMS are typically defined as either hierarchies, networks or relations. Although DBMSs have greatly improved the shareability of data, the use of a DBMS alone does not guarantee a consistent definition of data. Furthermore, most large companies have had to develop multiple databases which are often under the control of different DBMSs and still have the problems of redundancy and inconsistency.[6] The recognition of this problem led the ANSI/X3/SPARC Study Group on Database Management Systems to conclude that in an ideal data management environment a third view of data is needed. This view, referred to as a "conceptual schema" is a single integrated definition of the data within an enterprise which is unbiased toward any single application of data and is independent of how the data is physically stored or accessed, see Figure 2. 
The primary objective of this conceptual schema is to provide a consistent definition of the meanings and interrelationship of data which can be used to integrate, share, and manage the integrity of data.[6] The notion of a three-schema model consisting of aconceptual model, an external model, and an internal or physical model was first introduced by theANSI/X3/SPARCStandards Planning and Requirements Committee directed byCharles Bachmanin 1975. The ANSI/X3/SPARC Report characterized DBMSs as having a two-schema organization. That is, DBMSs utilize an internal schema, which represents the structure of the data as viewed by the DBMS, and an external schema, which represents various structures of the data as viewed by the end user. The concept of a third schema (conceptual) was introduced in the report. The conceptual schema represents the basic underlying structure of data as viewed by the enterprise as a whole.[2] The ANSI/SPARC report was intended as a basis for interoperable computer systems. All database vendors adopted the three-schema terminology, but they implemented it in incompatible ways. Over the next twenty years, various groups attempted to define standards for the conceptual schema and its mappings to databases and programming languages. Unfortunately, none of the vendors had a strong incentive to make their formats compatible with their competitors'. A few reports were produced, but no standards.[4] As the practice of data administration and graphical techniques have evolved, the term "schema" has given way to the term "model". The conceptual model represents the view of data that is negotiated between end users and database administrators covering those entities about which it is important to keep data, the meaning of the data, and the relationships of the data to each other.[2] One further development is theIDEF1Xinformation modeling methodology, which is based on the three-schema concept[citation needed]. Another is theZachman Framework, proposed by John Zachman in 1987 and developed ever since in the field ofEnterprise Architecture. In this framework, the three-schema model has evolved into a layer of six perspectives. In otherEnterprise Architecture frameworkssome kind ofview modelis incorporated. This article incorporatespublic domain materialfrom theNational Institute of Standards and Technology
https://en.wikipedia.org/wiki/Three-schema_approach
Answer set programming(ASP) is a form ofdeclarative programmingoriented towards difficult (primarilyNP-hard)search problems. It is based on thestable model(answer set) semantics oflogic programming. In ASP, search problems are reduced to computing stable models, andanswer set solvers—programs for generating stable models—are used to perform search. The computational process employed in the design of many answer set solvers is an enhancement of theDPLL algorithmand, in principle, it always terminates (unlikePrologquery evaluation, which may lead to aninfinite loop). In a more general sense, ASP includes all applications of answer sets toknowledge representation and reasoning[1][2]and the use of Prolog-style query evaluation for solving problems arising in these applications. An early example of answer set programming was theplanningmethod proposed in 1997 by Dimopoulos, Nebel and Köhler.[3][4]Their approach is based on the relationship between plans and stable models.[5]In 1998 Soininen and Niemelä[6]applied what is now known as answer set programming to the problem ofproduct configuration.[4]In 1999, the term "answer set programming" appeared for the first time in a bookThe Logic Programming Paradigmas the title of a collection of two papers.[4]The first of these papers identified the use of answer set solvers for search as a newprogramming paradigm.[7]That same year Niemelä also proposed "logic programs with stable model semantics" as a new paradigm.[8] Lparseis the name of the program that was originally created as agroundingtool (front-end) for the answer set solversmodels. The language that Lparse accepts is now commonly called AnsProlog,[9]short forAnswer Set Programming in Logic.[10]It is now used in the same way in many other answer set solvers, includingassat,clasp,cmodels,gNt,nomore++andpbmodels. (dlvis an exception; the syntax of ASP programs written for dlv is somewhat different.) An AnsProlog program consists of rules of the form The symbol:-("if") is dropped if<body>is empty; such rules are calledfacts. The simplest kind of Lparse rules arerules with constraints. One other useful construct included in this language ischoice. For instance, the choice rule says: choose arbitrarily which of the atomsp,q,r{\displaystyle p,q,r}to include in the stable model. The Lparse program that contains this choice rule and no other rules has 8 stable models—arbitrary subsets of{p,q,r}{\displaystyle \{p,q,r\}}. The definition of a stable model was generalized to programs with choice rules.[11]Choice rules can be treated also as abbreviations forpropositional formulas under the stable model semantics.[12]For instance, the choice rule above can be viewed as shorthand for the conjunction of three "excluded middle" formulas: The language of Lparse allows us also to write "constrained" choice rules, such as This rule says: choose at least 1 of the atomsp,q,r{\displaystyle p,q,r}, but not more than 2. The meaning of this rule under the stable model semantics is represented by thepropositional formula Cardinality bounds can be used in the body of a rule as well, for instance: Adding this constraint to an Lparse program eliminates the stable models that contain at least 2 of the atomsp,q,r{\displaystyle p,q,r}. The meaning of this rule can be represented by the propositional formula Variables (capitalized, as inProlog) are used in Lparse to abbreviate collections of rules that follow the same pattern, and also to abbreviate collections of atoms within the same rule. 
For instance, the Lparse program has the same meaning as The program is shorthand for Arangeis of the form: where start and end are constant-valued arithmetic expressions. A range is a notational shortcut that is mainly used to define numerical domains in a compatible way. For example, the fact is a shortcut for Ranges can also be used in rule bodies with the same semantics. Aconditional literalis of the form: If the extension ofqis{q(a1), q(a2), ..., q(aN)}, the above condition is semantically equivalent to writing{p(a1), p(a2), ..., p(aN)}in the place of the condition. For example, is a shorthand for To find a stable model of the Lparse program stored in file${filename}we use the command Option 0 instructs smodels to findallstable models of the program. For instance, if filetestcontains the rules then the command produces the output Ann{\displaystyle n}-coloringof agraphG=⟨V,E⟩{\displaystyle G=\left\langle V,E\right\rangle }is a functioncolor:V→{1,…,n}{\displaystyle \mathrm {color} :V\to \{1,\dots ,n\}}such thatcolor(x)≠color(y){\displaystyle \mathrm {color} (x)\neq \mathrm {color} (y)}for every pair of adjacent vertices(x,y)∈E{\displaystyle (x,y)\in E}. We would like to use ASP to find ann{\displaystyle n}-coloring of a given graph (or determine that it does not exist). This can be accomplished using the following Lparse program: Line 1 defines the numbers1,…,n{\displaystyle 1,\dots ,n}to be colors. According to the choice rule in Line 2, a unique colori{\displaystyle i}should be assigned to each vertexx{\displaystyle x}. The constraint in Line 3 prohibits assigning the same color to verticesx{\displaystyle x}andy{\displaystyle y}if there is an edge connecting them. If we combine this file with a definition ofG{\displaystyle G}, such as and run smodels on it, with the numeric value ofn{\displaystyle n}specified on the command line, then the atoms of the formcolor(…,…){\displaystyle \mathrm {color} (\dots ,\dots )}in the output of smodels will represent ann{\displaystyle n}-coloring ofG{\displaystyle G}. The program in this example illustrates the "generate-and-test" organization that is often found in simple ASP programs. The choice rule describes a set of "potential solutions"—a simple superset of the set of solutions to the given search problem. It is followed by a constraint, which eliminates all potential solutions that are not acceptable. However, the search process employed by smodels and other answer set solvers is not based ontrial and error. Acliquein a graph is a set of pairwise adjacent vertices. The following Lparse program finds a clique of size≥n{\displaystyle \geq n}in a given directed graph, or determines that it does not exist: This is another example of the generate-and-test organization. The choice rule in Line 1 "generates" all sets consisting of≥n{\displaystyle \geq n}vertices. The constraint in Line 2 "weeds out" the sets that are not cliques. AHamiltonian cyclein adirected graphis acyclethat passes through each vertex of the graph exactly once. The following Lparse program can be used to find a Hamiltonian cycle in a given directed graph if it exists; we assume that 0 is one of the vertices. The choice rule in Line 1 "generates" all subsets of the set of edges. The three constraints "weed out" the subsets that are not Hamiltonian cycles. The last of them uses the auxiliary predicater(x){\displaystyle r(x)}("x{\displaystyle x}is reachable from 0") to prohibit the vertices that do not satisfy this condition. This predicate is defined recursively in Lines 6 and 7. 
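A rough Python analogue of the generate-and-test organisation described above is sketched here: it enumerates candidate colour assignments ("generate") and discards those that violate the edge constraint ("test"). The graph and the value of n are invented for illustration, and a real answer set solver does not search by trial and error in this way.

    # Generate-and-test analogue of the n-coloring ASP program (illustration only).
    from itertools import product

    n = 3                                                  # number of colors
    vertices = [1, 2, 3, 4]
    edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]       # example graph

    colorings = (
        dict(zip(vertices, assignment))                    # "generate" candidate assignments
        for assignment in product(range(1, n + 1), repeat=len(vertices))
    )

    solutions = [
        c for c in colorings
        if all(c[x] != c[y] for (x, y) in edges)           # "test": edge constraint
    ]

    print(solutions[0] if solutions else "no n-coloring exists")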
This program is an example of the more general "generate, define and test" organization: it includes the definition of an auxiliary predicate that helps us eliminate all "bad" potential solutions. Innatural language processing,dependency-based parsingcan be formulated as an ASP problem.[13]The following code parses the Latin sentence "Puella pulchra in villa linguam latinam discit", "the pretty girl is learning Latin in the villa". The syntax tree is expressed by thearcpredicates which represent the dependencies between the words of the sentence. The computed structure is a linearly ordered rooted tree. The ASP standardization working group produced a standard language specification, called ASP-Core-2,[14]towards which recent ASP systems are converging. ASP-Core-2 is the reference language for the Answer Set Programming Competition, in which ASP solvers are periodically benchmarked over a number of reference problems. Early systems, such as smodels, usedbacktrackingto find solutions. As the theory and practice ofBoolean SAT solversevolved, a number of ASP solvers were built on top of SAT solvers, including ASSAT and Cmodels. These converted ASP formula into SAT propositions, applied the SAT solver, and then converted the solutions back to ASP form. More recent systems, such as Clasp, use a hybrid approach, using conflict-driven algorithms inspired by SAT, without fully converting into a Boolean-logic form. These approaches allow for significant improvements of performance, often by an order of magnitude, over earlier backtracking algorithms. ThePotasscoproject acts as an umbrella for many of the systems below, includingclasp, grounding systems (gringo), incremental systems (iclingo), constraint solvers (clingcon),action languageto ASP compilers (coala), distributedMessage Passing Interfaceimplementations (claspar), and many others. Most systems support variables, but only indirectly, by forcing grounding, by using a grounding system such asLparseorgringoas a front end. The need for grounding can cause a combinatorial explosion of clauses; thus, systems that perform on-the-fly grounding might have an advantage.[15] Query-driven implementations of answer set programming, such as the Galliwasp system[16]and s(CASP)[17]avoid grounding altogether by using a combination ofresolutionandcoinduction.
https://en.wikipedia.org/wiki/Answer_set_programming
Indatabase theory, aconjunctive queryis a restricted form offirst-orderqueries using thelogical conjunctionoperator. Many first-order queries can be written as conjunctive queries. In particular, a large part of queries issued onrelational databasescan be expressed in this way. Conjunctive queries also have a number of desirable theoretical properties that larger classes of queries (e.g., therelational algebraqueries) do not share. The conjunctive queries are the fragment of (domain independent)first-order logicgiven by the set of formulae that can be constructed fromatomic formulaeusingconjunction∧ andexistential quantification∃, but not usingdisjunction∨,negation¬, oruniversal quantification∀. Each such formula can be rewritten (efficiently) into an equivalent formula inprenex normal form, thus this form is usually simply assumed. Thus conjunctive queries are of the following general form: with thefree variablesx1,…,xk{\displaystyle x_{1},\ldots ,x_{k}}being called distinguished variables, and thebound variablesxk+1,…,xm{\displaystyle x_{k+1},\ldots ,x_{m}}being called undistinguished variables.A1,…,Ar{\displaystyle A_{1},\ldots ,A_{r}}areatomic formulae. As an example of why the restriction to domain independent first-order logic is important, considerx1.∃x2.R(x2){\displaystyle x_{1}.\exists x_{2}.R(x_{2})}, which is not domain independent; seeCodd's theorem. This formula cannot be implemented in the select-project-join fragment of relational algebra, and hence should not be considered a conjunctive query. Conjunctive queries can express a large proportion of queries that are frequently issued onrelational databases. To give an example, imagine a relational database for storing information about students, their address, the courses they take and their gender. Finding all male students and their addresses who attend a course that is also attended by a female student is expressed by the following conjunctive query: Note that since the only entity of interest is the male student and his address, these are the only distinguished variables, while the variablescourse,student2are onlyexistentially quantified, i.e. undistinguished. Conjunctive queries without distinguished variables are calledboolean conjunctive queries. Conjunctive queries where all variables are distinguished (and no variables are bound) are calledequi-join queries,[1]because they are the equivalent, in therelational calculus, of theequi-joinqueries in therelational algebra(when selecting all columns of the result). Conjunctive queries also correspond to select-project-join queries inrelational algebra(i.e., relational algebra queries that do not use the operations union or difference) and to select-from-where queries inSQLin which the where-condition uses exclusively conjunctions of atomic equality conditions, i.e. conditions constructed from column names and constants using no comparison operators other than "=", combined using "and". Notably, this excludes the use of aggregation and subqueries. For example, the above query can be written as an SQL query of the conjunctive query fragment as Besides their logical notation, conjunctive queries can also be written asDatalogrules. Many authors in fact prefer the following Datalog notation for the query above: Although there are no quantifiers in this notation, variables appearing in the head of the rule are still implicitlyuniversally quantified, while variables only appearing in the body of the rule are still implicitly existentially quantified. 
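As a concrete illustration of the students example, a conjunctive query can be evaluated over small in-memory relations by nested loops with equality conditions on shared variables, which is exactly the select-project-join reading. The relation contents below are invented.

    # Illustrative in-memory evaluation of the conjunctive query
    # "male students and their addresses attending a course also attended by a female student".

    attends = {("john", "db"), ("mary", "db"), ("ann", "ai"), ("bob", "ai")}
    gender  = {("john", "male"), ("bob", "male"), ("mary", "female"), ("ann", "female")}
    lives   = {("john", "12 Main St"), ("bob", "7 Oak Ave"),
               ("mary", "3 Elm St"), ("ann", "9 Pine Rd")}

    answers = {
        (student, address)
        for (student, course) in attends
        if (student, "male") in gender                       # distinguished student is male
        for (student2, course2) in attends
        if course2 == course and (student2, "female") in gender   # existential witness
        for (person, address) in lives
        if person == student
    }
    print(answers)   # {('john', '12 Main St'), ('bob', '7 Oak Ave')} in some order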
While any conjunctive query can be written as a Datalog rule, not every Datalog program can be written as a conjunctive query. In fact, only single rules over extensional predicate symbols can be easily rewritten as an equivalent conjunctive query. The problem of deciding whether for a given Datalog program there is an equivalentnonrecursive program(corresponding to a positive relational algebra query, or, equivalently, a formula of positive existentialfirst-order logic, or, as a special case, a conjunctive query) is known as theDatalog boundednessproblem and is undecidable.[2] Extensions of conjunctive queries capturing moreexpressive powerinclude: The formal study of all of these extensions is justified by their application inrelational databasesand is in the realm ofdatabase theory. For the study of thecomputational complexityof evaluating conjunctive queries, two problems have to be distinguished. The first is the problem of evaluating a conjunctive query on arelational databasewhere both the query and the database are considered part of the input. The complexity of this problem is usually referred to ascombined complexity, while the complexity of the problem of evaluating a query on a relational database, where the query is assumed fixed, is calleddata complexity.[3] Conjunctive queries areNP-completewith respect tocombined complexity,[4]while the data complexity of conjunctive queries is very low, in the parallel complexity classAC0, which is contained inLOGSPACEand thus inpolynomial time. TheNP-hardnessof conjunctive queries may appear surprising, sincerelational algebraandSQLstrictly subsume the conjunctive queries and are thus at least as hard (in fact, relational algebra isPSPACE-complete with respect to combined complexity and is therefore even harder under widely held complexity-theoretic assumptions). However, in the usual application scenario, databases are large, while queries are very small, and the data complexity model may be appropriate for studying and describing their difficulty. The problem of listing all answers to a non-Boolean conjunctive query has been studied in the context ofenumeration algorithms, with a characterization (under somecomputational hardness assumptions) of the queries for which enumeration can be performed withlinear timepreprocessing andconstantdelay between each solution. Specifically, these are the acyclic conjunctive queries which also satisfy afree-connexcondition.[5] Conjunctive queries are one of the great success stories ofdatabase theoryin that many interesting problems that are computationally hard orundecidablefor larger classes of queries are feasible for conjunctive queries.[6]For example, consider thequery containment problem. We writeR⊆S{\displaystyle R\subseteq S}for twodatabase relationsR,S{\displaystyle R,S}of the sameschemaif and only if each tuple occurring inR{\displaystyle R}also occurs inS{\displaystyle S}. Given a queryQ{\displaystyle Q}and arelational databaseinstanceI{\displaystyle I}, we write the result relation of evaluating the query on the instance simply asQ(I){\displaystyle Q(I)}. Given two queriesQ1{\displaystyle Q_{1}}andQ2{\displaystyle Q_{2}}and adatabase schema, the query containment problem is the problem of deciding whether for all possible database instancesI{\displaystyle I}over the input database schema,Q1(I)⊆Q2(I){\displaystyle Q_{1}(I)\subseteq Q_{2}(I)}. 
The main application of query containment is in query optimization: Deciding whether two queries are equivalent is possible by simply checking mutual containment. The query containment problem is undecidable for relational algebra and SQL but is decidable and NP-complete for conjunctive queries. In fact, it turns out that the query containment problem for conjunctive queries is exactly the same problem as the query evaluation problem.[6] Since queries tend to be small, NP-completeness here is usually considered acceptable. The query containment problem for conjunctive queries is also equivalent to the constraint satisfaction problem.[7] An important class of conjunctive queries that have polynomial-time combined complexity are the acyclic conjunctive queries.[8] The query evaluation, and thus query containment, is LOGCFL-complete and thus in polynomial time.[9] Acyclicity of conjunctive queries is a structural property of queries that is defined with respect to the query's hypergraph:[6] a conjunctive query is acyclic if and only if it has hypertree-width 1. For the special case of conjunctive queries in which all relations used are binary, this notion corresponds to the treewidth of the dependency graph of the variables in the query (i.e., the graph having the variables of the query as nodes and an undirected edge {x, y} between two variables if and only if there is an atomic formula R(x, y) or R(y, x) in the query) and the conjunctive query is acyclic if and only if its dependency graph is acyclic. An important generalization of acyclicity is the notion of bounded hypertree-width, which is a measure of how close to acyclic a hypergraph is, analogous to bounded treewidth in graphs. Conjunctive queries of bounded tree-width have LOGCFL combined complexity.[10] Unrestricted conjunctive queries over tree data (i.e., a relational database consisting of a binary child relation of a tree as well as unary relations for labeling the tree nodes) have polynomial time combined complexity.[11]
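Since containment of conjunctive queries reduces to query evaluation, it can be tested by "freezing" the body of Q1 into a canonical database (each variable becomes a constant) and checking whether evaluating Q2 over that database returns the frozen head of Q1. The following Python sketch does this by brute force for small queries; the query shapes are invented for illustration and all terms are assumed to be variables.

    # Sketch of the canonical-database test for containment Q1 <= Q2 of conjunctive
    # queries; a query is a (head_variables, body_atoms) pair.
    from itertools import product

    def evaluate(query, database, constants):
        head, body = query
        variables = sorted({t for (_, args) in body for t in args})   # all terms are variables here
        for values in product(constants, repeat=len(variables)):      # try all assignments
            theta = dict(zip(variables, values))
            if all((rel, tuple(theta[t] for t in args)) in database for (rel, args) in body):
                yield tuple(theta[v] for v in head)

    def contained_in(q1, q2):
        # Q1 <= Q2 iff the frozen head of Q1 is among Q2's answers on Q1's canonical database.
        head1, body1 = q1
        canonical_db = {(rel, tuple(args)) for (rel, args) in body1}   # freeze variables as constants
        constants = {t for (_, args) in body1 for t in args}
        return tuple(head1) in set(evaluate(q2, canonical_db, constants))

    # Q1(x) :- R(x, y), R(y, x).      Q2(x) :- R(x, y).
    q1 = (["x"], [("R", ("x", "y")), ("R", ("y", "x"))])
    q2 = (["x"], [("R", ("x", "y"))])
    print(contained_in(q1, q2))  # True: every answer of Q1 is an answer of Q2
    print(contained_in(q2, q1))  # False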
https://en.wikipedia.org/wiki/Conjunctive_query
DatalogZ (stylized as Datalogℤ) is an extension of Datalog with integer arithmetic and comparisons. The decision problem of whether or not a given ground atom (fact) is entailed by a DatalogZ program is RE-complete (hence, undecidable), which can be shown by a reduction to Diophantine equations.[1] The syntax of DatalogZ extends that of Datalog with numeric terms, which are integer constants, integer variables, or terms built up from these with addition, subtraction, and multiplication. Furthermore, DatalogZ allows comparison atoms, which are atoms of the form t < s or t <= s for numeric terms t, s.[2] The semantics of DatalogZ are based on the model-theoretic (Herbrand) semantics of Datalog.[2] The undecidability of entailment of DatalogZ motivates the definition of limit DatalogZ. Limit DatalogZ restricts predicates to a single numeric position, which is marked maximal or minimal. The semantics are based on the model-theoretic (Herbrand) semantics of Datalog, and require that Herbrand interpretations be limit-closed to qualify as models, in the following sense: given a ground atom a = r(c_1, …, c_n) of a limit predicate r whose last position is a max (resp. min) position, if a is in a Herbrand interpretation I, then the ground atoms r(c_1, …, c_{n−1}, k) for k > c_n (resp. k < c_n) must also be in I for I to be limit-closed.[3] Given a constant w, a binary relation edge that represents the edges of a graph, and a binary relation sp whose last position is minimal, a short limit DatalogZ program can compute the relation sp, which represents the length of the shortest path from w to any other node in the graph.
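To make the shortest-path example concrete, the Python sketch below (not a DatalogZ engine) iterates rule applications to a fixpoint while keeping only the smallest distance per node, which is the reading that the minimal limit position gives to sp. The example graph is invented, and the rule shown in the comment is only an approximation of what such a limit DatalogZ program would express.

    # Illustrative fixpoint computation of sp (shortest path length from w), mirroring
    # what a limit DatalogZ program with a minimal last position would express.

    edges = {("w", "a"), ("a", "b"), ("w", "b"), ("b", "c")}   # example edge relation
    sp = {"w": 0}                                              # sp(w, 0)

    changed = True
    while changed:                       # naive fixpoint iteration
        changed = False
        for (x, y) in edges:
            if x in sp:
                candidate = sp[x] + 1    # roughly: sp(y, m + 1) :- sp(x, m), edge(x, y)
                # Keeping only the smallest value per node is the "min limit" reading.
                if y not in sp or candidate < sp[y]:
                    sp[y] = candidate
                    changed = True

    print(sp)   # {'w': 0, 'a': 1, 'b': 1, 'c': 2}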
https://en.wikipedia.org/wiki/DatalogZ
Disjunctive Datalog is an extension of the logic programming language Datalog that allows disjunctions in the heads of rules. This extension enables disjunctive Datalog to express several NP-hard problems that are not known to be expressible in plain Datalog. Disjunctive Datalog has been applied in the context of reasoning about ontologies in the semantic web.[1] DLV is an implementation of disjunctive Datalog. A disjunctive Datalog program is a collection of rules. A rule is a clause of the form[2] a_1 ∨ … ∨ a_n :- b_1, …, b_m, where b_1, …, b_m may be negated, and may include (in)equality constraints. There are at least three ways to define the semantics of disjunctive Datalog.[3] Disjunctive Datalog can express several NP-complete and NP-hard problems, including the travelling salesman problem, graph coloring, the maximum clique problem, and minimal vertex cover.[3] These problems are only expressible in Datalog if the polynomial hierarchy collapses. The DLV (DataLog with Disjunction, where the logical disjunction symbol V is used) system implements the disjunctive stable model semantics.[4]
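Minimal-model reasoning over a disjunctive program can be illustrated by brute force: enumerate interpretations, keep those that satisfy every rule, and retain only the subset-minimal ones. The tiny positive program below (a disjunctive fact plus two ordinary rules, with invented predicate names) is only meant to show what minimal models look like, not how DLV computes them.

    # Brute-force illustration of minimal models for a tiny disjunctive Datalog program:
    #   male(alex) v female(alex).
    #   person(X) :- male(X).      person(X) :- female(X).
    from itertools import combinations

    atoms = ["male(alex)", "female(alex)", "person(alex)"]

    def satisfies(m):
        head_fact = "male(alex)" in m or "female(alex)" in m            # disjunctive head
        rule1 = "person(alex)" in m if "male(alex)" in m else True
        rule2 = "person(alex)" in m if "female(alex)" in m else True
        return head_fact and rule1 and rule2

    models = [set(s) for k in range(len(atoms) + 1)
              for s in combinations(atoms, k) if satisfies(set(s))]
    minimal = [m for m in models if not any(other < m for other in models)]
    print(sorted(map(sorted, minimal)))
    # [['female(alex)', 'person(alex)'], ['male(alex)', 'person(alex)']]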
https://en.wikipedia.org/wiki/Disjunctive_Datalog
Flixis afunctional,imperative, andlogicprogramming languagedeveloped atAarhus University, with funding from theIndependent Research Fund Denmark,[2]and by a community ofopen sourcecontributors.[3]The Flix language supportsalgebraic data types,pattern matching,parametric polymorphism,currying,higher-order functions,extensible records,[4]channel and process-based concurrency, andtail call elimination. Two notable features of Flix are its type and effect system[5]and its support for first-class Datalog constraints.[6] The Flix type and effect system supportsHindley-Milner-styletype inference. The system separates pure and impure code: if an expression is typed as pure then it cannot produce an effect at run-time. Higher-order functions can enforce that they are given pure (or impure) function arguments. The type and effect system supportseffect polymorphism[7][8]which means that the effect of a higher-order function may depend on the effect(s) of its argument(s). Flix supportsDatalogprograms asfirst-classvalues. A Datalog program value, i.e. a collection of Datalog facts and rules, can be passed to and returned from functions, stored in data structures, and composed with other Datalog program values. Theminimal modelof a Datalog program value can be computed and is itself a Datalog program value. In this way, Flix can be viewed as ameta programminglanguage for Datalog. Flix supportsstratified negationand the Flix compiler ensures stratification at compile-time.[9]Flix also supports an enriched form of Datalog constraints where predicates are givenlatticesemantics.[10][11][12][13] Flix is aprogramming languagein theML-family of languages. Its type and effect system is based onHindley-Milnerwith several extensions, includingrow polymorphismandBoolean unification. The syntax of Flix is inspired byScalaand uses shortkeywordsandcurly braces. Flix supportsuniform function call syntaxwhich allows a function callf(x, y, z)to be written asx.f(y, z). The concurrency model of Flix is inspired byGoand based onchannels and processes. A process is a light-weight thread that does not share (mutable) memory with another process. Processes communicate over channels which are bounded or unbounded queues of immutable messages. While many programming languages support a mixture of functional and imperative programming, the Flix type and effect system tracks the purity of every expression making it possible to write parts of a Flix program in apurely functional stylewith purity enforced by the effect system. Flix programs compile toJVM bytecodeand are executable on theJava Virtual Machine(JVM).[14]The Flix compiler performswhole program compilation, eliminates polymorphism viamonomorphization,[15]and usestree shakingto removeunreachable code. Monomorphization avoidsboxingof primitive values at the cost of longer compilation times and larger executable binaries. Flix has some support for interoperability with programs written inJava.[16] Flix supportstail call eliminationwhich ensures that function calls in tail position never consume stack space and hence cannot cause the call stack to overflow.[17]Since theJVM instruction setlacks explicit support for tail calls, such calls are emulated using a form of reusable stack frames.[18]Support for tail call elimination is important since all iteration in Flix is expressed throughrecursion. 
The Flix compiler disallows most forms of unused or redundant code, including: unused local variables, unused functions, unused formal parameters, unused type parameters, and unused type declarations, such unused constructs are reported as compiler errors.[19]Variable shadowingis also disallowed. The stated rationale is that unused or redundant code is often correlated with erroneous code[20] AVisual Studio Codeextension for Flix is available.[21]The extension is based on theLanguage Server Protocol, a common interface betweenIDEsandcompilersbeing developed byMicrosoft. Flix isopen source softwareavailable under theApache 2.0 License. The following program prints "Hello World!" when compiled and executed: The type and effect signature of themainfunction specifies that it has no parameters, returns a value of typeUnit, and that the function has the IO effect, i.e. is impure. Themainfunction is impure because it invokesprintLinewhich is impure. The following program fragment declares analgebraic data type(ADT) namedShape: The ADT has three constructors:Circle,Square, andRectangle. The following program fragment usespattern matchingto destruct aShapevalue: The following program fragment defines ahigher-order functionnamedtwicethat when given a functionffromInttoIntreturns a function that appliesfto its input two times: We can use the functiontwiceas follows: Here the call totwice(x -> x + 1)returns a function that will increment its argument two times. Thus the result of the whole expression is0 + 1 + 1 = 2. The following program fragment illustrates apolymorphic functionthat maps a functionf: a -> bover a list of elements of typeareturning a list of elements of typeb: Themapfunction recursively traverses the listland appliesfto each element constructing a new list. Flix supports type parameter elision hence it is not required that the type parametersaandbare explicitly introduced. The following program fragment shows how to construct arecordwith two fieldsxandy: Flix usesrow polymorphismto type records. Thesumfunction below takes a record that hasxandyfields (and possibly other fields) and returns the sum of the two fields: The following are all valid calls to thesumfunction: The Flix type and effect system separates pure and impure expressions.[5][22][23]A pure expression is guaranteed to bereferentially transparent. A pure function always returns the same value when given the same argument(s) and cannot have any (observable) side-effects. For example, the following expression is of typeInt32and has the empty effect set{}, i.e. it is pure: whereas the following expression has theIOeffect, i.e. is impure: A higher-order function can specify that a function argument must be pure, impure, or that it is effect polymorphic. For example, the definition ofSet.existsrequires that its function argumentfis pure: The requirement thatfmust be pure ensures that implementation details do notleak. For example, sincefis pure it cannot be used to determine in what order the elements of the set are traversed. Iffwas impure such details could leak, e.g. by passing a function that also prints the current element, revealing the internal element order inside the set. A higher-order function can also require that a function is impure. For example, the definition ofList.foreachrequires that its function argumentfis impure: The requirement thatfmust be impure ensures that the code makes sense: It would be meaningless to callList.foreachwith a pure function since it always returnsUnit. 
The type and effect is sound, but not complete. That is, if a function is pure then itcannotcause an effect, whereas if a function is impure then itmay, but not necessarily, cause an effect. For example, the following expression is impure even though it cannot produce an effect at run-time: A higher-order function can also be effect polymorphic: its effect(s) can depend on its argument(s). For example, the standard library definition ofList.mapis effect polymorphic:[24] TheList.mapfunction takes a functionffrom elements of typeatobwith effecte. The effect of the map function is itselfe. Consequently, ifList.mapis invoked with a pure function then the entire expression is pure whereas if it is invoked with an impure function then the entire expression is impure. It is effect polymorphic. A higher-order function that takes multiple function arguments may combine their effects. For example, the standard library definition of forwardfunction composition>>is pure if both its function arguments are pure:[25] The type and effect signature can be understood as follows: The>>function takes two function arguments:fwith effecte1andgwith effecte2. The effect of>>is effect polymorphic in theconjunctionofe1ande2. If both are pure then the overall expression is pure. The type and effect system allows arbitrary set expressions to control the purity of function arguments. For example, it is possible to express a higher-order functionhthat accepts two function argumentsfandgwhere the effects offare disjoint from those ofg: Ifhis called with a function argumentfwhich has theIOeffect thengcannot have theIOeffect. The type and effect system can be used to ensure that statement expressions are useful, i.e. that if an expression or function is evaluated and its result is discarded then it must have a side-effect. For example, compiling the program fragment below: causes a compiler error: because it is non-sensical to evaluate the pure expressionList.map(x -> 2 * x, 1 :: 2 :: Nil)and then to discard its result. Most likely the programmer wanted to use the result (or alternatively the expression is redundant and could be deleted). Consequently, Flix rejects such programs. Flix supportsDatalogprograms as first-class values.[6][9][26]A Datalog program is a logic program that consists of a collection of unorderedfactsandrules. Together, the facts and rules imply aminimal model, a unique solution to any Datalog program. In Flix, Datalog program values can be passed to and returned from functions, stored in data structures, composed with other Datalog program values, and solved. The solution to a Datalog program (the minimal model) is itself a Datalog program. Thus, it is possible to construct pipelines of Datalog programs where the solution, i.e. "output", of one Datalog program becomes the "input" to another Datalog program. The following edge facts define a graph: The following Datalog rules compute thetransitive closureof the edge relation: The minimal model of the facts and rules is: In Flix, Datalog programs are values. The above program can beembeddedin Flix as follows: The local variablefholds a Datalog program value that consists of the edge facts. Similarly, the local variablepis a Datalog program value that consists of the two rules. Thef <+> pexpression computes the composition (i.e. union) of the two Datalog programsfandp. Thesolveexpression computes the minimal model of the combined Datalog program, returning the edge and path facts shown above. 
Since Datalog programs are first-class values, the above program can be refactored into several functions, and the un-directed closure of the graph can be computed by adding a rule for the reversed edges. The closure function can likewise be modified to take a Boolean argument that determines whether the directed or the un-directed closure is computed. The Flix type system ensures that Datalog program values are well-typed: a program fragment in which one value p1 gives the Edge predicate the type Edge(Int32, Int32) while another value p2 gives it the type Edge(String, String) does not type check, and the Flix compiler rejects such programs as ill-typed. The Flix compiler also ensures that every Datalog program value constructed at run-time is stratified. Stratification is important because it guarantees the existence of a unique minimal model in the presence of negation. Intuitively, a Datalog program is stratified if there is no recursion through negation,[27] i.e. a predicate cannot depend negatively on itself. Given a Datalog program, a cycle detection algorithm can be used to determine whether it is stratified. For example, an expression that constructs a Datalog program value whose precedence graph contains a negative cycle (say, a Bachelor predicate that negatively depends on a Husband predicate which in turn positively depends on the Bachelor predicate) cannot be stratified. The Flix compiler computes the precedence graph for every Datalog-program-valued expression and determines its stratification at compile-time. If an expression is not stratified, the program is rejected by the compiler. The stratification is sound, but conservative, so some programs are unfairly rejected: when a Datalog program value is built in the two branches of an if expression, the type system conservatively assumes that both branches can be taken and may infer a negative cycle between the predicates A and B, and the program is rejected even though at run-time the main function always returns a stratified Datalog program value. Flix is designed around a collection of stated principles.[28] The principles also list several programming language features that have been deliberately omitted from the language.
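The stratification test sketched below (in Python, independent of the actual Flix compiler) builds a precedence graph from head/body dependencies and rejects the program if some predicate can reach itself along a path containing a negative edge. The Bachelor/Husband dependencies mirror the example above; the edge-list representation is invented.

    # Illustrative stratification check: a Datalog program is stratified iff its
    # precedence graph has no cycle that contains a negative edge.

    # Edges: (head_predicate, body_predicate, is_negative)
    dependencies = [
        ("Husband",  "Male",     False),
        ("Husband",  "Bachelor", False),
        ("Bachelor", "Male",     False),
        ("Bachelor", "Husband",  True),    # Bachelor depends negatively on Husband
    ]

    def stratified(deps):
        preds = {p for (h, b, _) in deps for p in (h, b)}
        # Floyd-Warshall-style closure; the stored flag records whether some
        # path between the two predicates uses a negative edge.
        reach = {(h, b): neg for (h, b, neg) in deps}
        for k in preds:
            for i in preds:
                for j in preds:
                    if (i, k) in reach and (k, j) in reach:
                        neg = reach[(i, k)] or reach[(k, j)]
                        reach[(i, j)] = reach.get((i, j), False) or neg
        # A negative cycle exists if some predicate reaches itself via a negative edge.
        return not any(reach.get((p, p), False) for p in preds)

    print(stratified(dependencies))   # False: Bachelor -> Husband -> Bachelor uses negation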
https://en.wikipedia.org/wiki/Flix_(programming_language)
The Semantic Web Rule Language (SWRL) is a proposed language for the Semantic Web that can be used to express rules as well as logic, combining OWL DL or OWL Lite with a subset of the Rule Markup Language (itself a subset of Datalog).[1] The specification was submitted in May 2004 to the W3C by the National Research Council of Canada, Network Inference (since acquired by webMethods), and Stanford University in association with the Joint US/EU ad hoc Agent Markup Language Committee. The specification was based on an earlier proposal for an OWL rules language.[2][3] SWRL has the full power of OWL DL, but at the price of decidability and practical implementations.[4] However, decidability can be regained by restricting the form of admissible rules, typically by imposing a suitable safety condition.[5] Rules are of the form of an implication between an antecedent (body) and a consequent (head). The intended meaning can be read as: whenever the conditions specified in the antecedent hold, then the conditions specified in the consequent must also hold. The XML Concrete Syntax is a combination of the OWL Web Ontology Language XML Presentation Syntax with the RuleML XML syntax. It is straightforward to provide such an RDF concrete syntax for rules, but the presence of variables in rules goes beyond the RDF Semantics.[6] Translation from the XML Concrete Syntax to RDF/XML could be easily accomplished by extending the XSLT transformation for the OWL XML Presentation syntax. Caveat: reasoners do not support the full specification because the reasoning becomes undecidable; three types of approach can be distinguished. Description Logic Programs (DLPs) are another proposal for integrating rules and OWL.[7] Compared with Description Logic Programs, SWRL takes a diametrically opposed integration approach. DLP is the intersection of Horn logic and OWL, whereas SWRL is (roughly) the union of them.[4] In DLP, the resultant language is a very peculiar looking description logic and a rather inexpressive language overall.[4] As the Semantic Web continues to evolve, the role of SWRL in enabling automated reasoning and decision-making processes will likely expand. While current implementations, such as those found in Protégé and Pellet, provide significant capabilities, ongoing advancements in artificial intelligence and knowledge representation may lead to even more sophisticated reasoning engines that better handle the computational complexities introduced by SWRL. Furthermore, as data integration across diverse domains becomes increasingly critical, SWRL could play a pivotal role in enhancing interoperability between systems that utilize OWL ontologies. The combination of rules with ontologies, as facilitated by SWRL, remains a powerful mechanism for drawing inferences and uncovering relationships in large, distributed datasets, offering broad applicability in fields such as healthcare, finance, and semantic data analytics.[8]
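A SWRL rule such as the commonly cited family example hasParent(?x,?y) ∧ hasBrother(?y,?z) ⇒ hasUncle(?x,?z) can be read operationally as forward chaining over property assertions. The Python sketch below applies one such rule to invented individuals; actual SWRL reasoners built on OWL engines work quite differently.

    # Toy forward-chaining of a SWRL-style rule over property assertions
    # (illustration only; not how OWL/SWRL reasoners are implemented).

    has_parent  = {("alice", "bob")}            # alice hasParent bob
    has_brother = {("bob", "charlie")}          # bob hasBrother charlie
    has_uncle   = set()

    # Rule: hasParent(?x, ?y) AND hasBrother(?y, ?z) => hasUncle(?x, ?z)
    changed = True
    while changed:
        changed = False
        for (x, y) in has_parent:
            for (y2, z) in has_brother:
                if y == y2 and (x, z) not in has_uncle:
                    has_uncle.add((x, z))       # the consequent must hold when the antecedent holds
                    changed = True

    print(has_uncle)   # {('alice', 'charlie')}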
https://en.wikipedia.org/wiki/Semantic_Web_Rule_Language
In relational database theory, a tuple-generating dependency (TGD) is a certain kind of constraint on a relational database. It is a subclass of the class of embedded dependencies (EDs). An algorithm known as the chase takes as input an instance that may or may not satisfy a set of TGDs (or more generally EDs) and, if it terminates (which is a priori undecidable), outputs an instance that does satisfy the TGDs. A tuple-generating dependency is a sentence in first-order logic of the form[1] ∀x ∀y (ϕ(x, y) → ∃z ψ(x, z)), where ϕ is a possibly empty and ψ a non-empty conjunction of relational atoms. A relational atom has the form R(w_1, …, w_h), where each of the terms w_1, …, w_h is a variable or a constant. Several fragments of TGDs have been defined. For instance, full TGDs are TGDs which do not use the existential quantifier. Full TGDs can equivalently be seen as programs in the Datalog query language. There are also some fragments of TGDs that can be expressed in guarded logic.[2][3][4] The expressive power of these fragments and TGDs has been studied in depth. For example, Heng Zhang et al.,[3] as well as Marco Console and Phokion G. Kolaitis,[4] have developed a series of model-theoretic characterizations for these languages. In addition, Heng Zhang and Guifei Jiang have provided characterizations of the program expressive power of TGDs, several of their extensions, and linear TGDs, specifically in the context of query answering.[6] In SQL, inclusion dependencies are typically expressed by means of a stronger constraint called a foreign key, which forces the frontier variables to be a candidate key in the table corresponding to the relational atom of ψ.
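A single chase step for a TGD finds a match of the body ϕ whose required ψ-atom is missing and adds that atom, inventing a fresh labelled null for each existentially quantified variable. The Python sketch below does this for one invented TGD, Employee(x) → ∃z WorksIn(x, z), and omits the termination issues mentioned above.

    # Minimal chase step for one TGD:  Employee(x) -> EXISTS z. WorksIn(x, z)
    # (illustrative sketch; a real chase handles arbitrary conjunctions and termination).
    from itertools import count

    fresh_nulls = (f"_N{i}" for i in count(1))     # labelled nulls for existential variables

    instance = {
        "Employee": {("alice",), ("bob",)},
        "WorksIn":  {("alice", "sales")},          # alice already satisfies the TGD
    }

    def chase_step(db):
        changed = False
        for (x,) in sorted(db["Employee"]):
            # If no witness z with WorksIn(x, z) exists, add one with a fresh null.
            if not any(emp == x for (emp, _) in db["WorksIn"]):
                db["WorksIn"].add((x, next(fresh_nulls)))
                changed = True
        return changed

    while chase_step(instance):
        pass

    print(instance["WorksIn"])   # {('alice', 'sales'), ('bob', '_N1')}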
https://en.wikipedia.org/wiki/Tuple-generating_dependency
Data integrityis the maintenance of, and the assurance of, data accuracy and consistency over its entirelife-cycle.[1]It is a critical aspect to the design, implementation, and usage of any system that stores, processes, or retrieves data. The term is broad in scope and may have widely different meanings depending on the specific context even under the same general umbrella ofcomputing. It is at times used as a proxy term fordata quality,[2]whiledata validationis a prerequisite for data integrity.[3] Data integrity is the opposite ofdata corruption.[4]The overall intent of any data integrity technique is the same: ensure data is recorded exactly as intended (such as a database correctly rejecting mutually exclusive possibilities). Moreover, upon laterretrieval, ensure the data is the same as when it was originally recorded. In short, data integrity aims to prevent unintentional changes to information. Data integrity is not to be confused withdata security, the discipline of protecting data from unauthorized parties. Any unintended changes to data as the result of a storage, retrieval or processing operation, including malicious intent, unexpected hardware failure, andhuman error, is failure of data integrity. If the changes are the result of unauthorized access, it may also be a failure of data security. Depending on the data involved this could manifest itself as benign as a single pixel in an image appearing a different color than was originally recorded, to the loss of vacation pictures or a business-critical database, to even catastrophic loss of human life in alife-critical system. Physical integrity deals with challenges which are associated with correctly storing and fetching the data itself. Challenges with physical integrity may includeelectromechanicalfaults, design flaws, materialfatigue,corrosion,power outages, natural disasters, and other special environmental hazards such asionizing radiation, extreme temperatures, pressures andg-forces. Ensuring physical integrity includes methods such asredundanthardware, anuninterruptible power supply, certain types ofRAIDarrays,radiation hardenedchips,error-correcting memory, use of aclustered file system, using file systems that employ block levelchecksumssuch asZFS, storage arrays that compute parity calculations such asexclusive oror use acryptographic hash functionand even having awatchdog timeron critical subsystems. Physical integrity often makes extensive use of error detecting algorithms known aserror-correcting codes. Human-induced data integrity errors are often detected through the use of simpler checks and algorithms, such as theDamm algorithmorLuhn algorithm. These are used to maintain data integrity after manual transcription from one computer system to another by a human intermediary (e.g. credit card or bank routing numbers). Computer-induced transcription errors can be detected throughhash functions. In production systems, these techniques are used together to ensure various degrees of data integrity. For example, a computerfile systemmay be configured on a fault-tolerant RAID array, but might not provide block-level checksums to detect and preventsilent data corruption. As another example, a database management system might be compliant with theACIDproperties, but the RAID controller or hard disk drive's internal write cache might not be. This type of integrity is concerned with thecorrectnessorrationalityof a piece of data, given a particular context. 
This includes topics such asreferential integrityandentity integrityin arelational databaseor correctly ignoring impossible sensor data in robotic systems. These concerns involve ensuring that the data "makes sense" given its environment. Challenges includesoftware bugs, design flaws, and human errors. Common methods of ensuring logical integrity include things such ascheck constraints,foreign key constraints, programassertions, and other run-time sanity checks. Physical and logical integrity often share many challenges such as human errors and design flaws, and both must appropriately deal with concurrent requests to record and retrieve data, the latter of which is entirely a subject on its own. If a data sector only has a logical error, it can be reused by overwriting it with new data. In case of a physical error, the affected data sector is permanently unusable. Data integrity contains guidelines fordata retention, specifying or guaranteeing the length of time data can be retained in a particular database (typically arelational database). To achieve data integrity, these rules are consistently and routinely applied to all data entering the system, and any relaxation of enforcement could cause errors in the data. Implementing checks on the data as close as possible to the source of input (such as human data entry), causes less erroneous data to enter the system. Strict enforcement of data integrity rules results in lower error rates, and time saved troubleshooting and tracing erroneous data and the errors it causes to algorithms. Data integrity also includes rules defining the relations a piece of data can have to other pieces of data, such as aCustomerrecord being allowed to link to purchasedProducts, but not to unrelated data such asCorporate Assets. Data integrity often includes checks and correction for invalid data, based on a fixedschemaor a predefined set of rules. An example being textual data entered where a date-time value is required. Rules for data derivation are also applicable, specifying how a data value is derived based on algorithm, contributors and conditions. It also specifies the conditions on how the data value could be re-derived. Data integrity is normally enforced in adatabase systemby a series of integrity constraints or rules. Three types of integrity constraints are an inherent part of therelational data model: entity integrity, referential integrity and domain integrity. If a database supports these features, it is the responsibility of the database to ensure data integrity as well as theconsistency modelfor the data storage and retrieval. If a database does not support these features, it is the responsibility of the applications to ensure data integrity while the database supports theconsistency modelfor the data storage and retrieval. Having a single, well-controlled, and well-defined data-integrity system increases: Moderndatabasessupport these features (seeComparison of relational database management systems), and it has become the de facto responsibility of the database to ensure data integrity. Companies, and indeed many database systems, offer products and services to migrate legacy systems to modern databases. An example of a data-integrity mechanism is the parent-and-child relationship of related records. 
If a parent record owns one or more related child records all of the referential integrity processes are handled by the database itself, which automatically ensures the accuracy and integrity of the data so that no child record can exist without a parent (also called being orphaned) and that no parent loses their child records. It also ensures that no parent record can be deleted while the parent record owns any child records. All of this is handled at the database level and does not require coding integrity checks into each application. Various research results show that neither widespreadfilesystems(includingUFS,Ext,XFS,JFSandNTFS) norhardware RAIDsolutions provide sufficient protection against data integrity problems.[5][6][7][8][9] Some filesystems (includingBtrfsandZFS) provide internal data andmetadatachecksumming that is used for detectingsilent data corruptionand improving data integrity. If a corruption is detected that way and internal RAID mechanisms provided by those filesystems are also used, such filesystems can additionally reconstruct corrupted data in a transparent way.[10]This approach allows improved data integrity protection covering the entire data paths, which is usually known asend-to-end data protection.[11]
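The parent-and-child mechanism described above can be demonstrated with a few lines of Python using the standard-library sqlite3 module; the customer/purchase tables are illustrative, not taken from the article. The database itself rejects both an orphaned child row and the deletion of a parent that still owns children.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces foreign keys only when asked

conn.executescript("""
CREATE TABLE customer (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE purchase (                         -- child table
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(id),
    item        TEXT NOT NULL
);
""")

conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Alice')")
conn.execute("INSERT INTO purchase (customer_id, item) VALUES (1, 'Widget')")

# 1) No child may be orphaned: a purchase for a non-existent customer is rejected.
try:
    conn.execute("INSERT INTO purchase (customer_id, item) VALUES (99, 'Gadget')")
except sqlite3.IntegrityError as e:
    print("rejected orphan child:", e)

# 2) No parent may be deleted while it still owns child records.
try:
    conn.execute("DELETE FROM customer WHERE id = 1")
except sqlite3.IntegrityError as e:
    print("rejected parent delete:", e)
```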
https://en.wikipedia.org/wiki/Integrity_constraint
Business intelligence software is a type of application software designed to retrieve, analyze, transform and report data for business intelligence (BI). The applications generally read data that has been previously stored, often, though not necessarily, in a data warehouse or data mart. The first comprehensive business intelligence systems were developed by IBM and Siebel (later acquired by Oracle) between 1970 and 1990.[1][2] During the same period, small developer teams emerged with innovative ideas and released some of the products companies still use today.[3] In 1988, specialists and vendors organized a Multiway Data Analysis Consortium in Rome, where they considered how to make data management and analytics more efficient and, above all, available to smaller and financially constrained businesses. By 2000, there were many professional reporting systems and analytic programs, some owned by top-performing software producers in the United States.[4] In the years after 2000, business intelligence software producers became interested in building universally applicable BI systems that do not require expensive installation and could therefore be adopted by smaller and midmarket businesses unable to afford on-premises maintenance. These aspirations emerged in parallel with the cloud-hosting trend, which led most vendors to develop independent systems with unrestricted access to information.[5] From 2006 onwards, the benefits of cloud-stored information and data management extended to mobile use, chiefly to the advantage of decentralized and remote teams that wanted to edit data or gain full visibility over it outside the office. Following the success of browser-optimized versions, vendors have since released mobile-specific applications for both Android and iOS users.[6] Cloud-hosted data analytics has made it possible for companies to categorize and process large volumes of data, enabling rich visualization and better-informed decision making. The key general categories of business intelligence applications include spreadsheets, reporting and querying software, online analytical processing (OLAP) tools, digital dashboards, and data mining tools. Except for spreadsheets, these tools are provided as standalone applications, suites of applications, components of enterprise resource planning systems, application programming interfaces, or as components of software targeted to a specific industry. The tools are sometimes packaged into data warehouse appliances.
https://en.wikipedia.org/wiki/Business_intelligence_software
Adata lakeis a system orrepository of datastored in its natural/raw format,[1]usually objectblobsor files. A data lake is usually a single store of data including raw copies of source system data, sensor data, social data etc.,[2]and transformed data used for tasks such asreporting,visualization,advanced analytics, andmachine learning. A data lake can includestructured datafromrelational databases(rows and columns), semi-structured data (CSV, logs,XML,JSON),unstructured data(emails, documents,PDFs), andbinary data(images,audio, video).[3]A data lake can be establishedon premises(within an organization's data centers) orin the cloud(usingcloud services). James Dixon, then chief technology officer atPentaho, coined the term by 2011[4]to contrast it withdata mart, which is a smaller repository of interesting attributes derived from raw data.[5]In promoting data lakes, he argued that data marts have several inherent problems, such asinformation siloing.PricewaterhouseCoopers(PwC) said that data lakes could "put an end to data silos".[6]In their study on data lakes they noted that enterprises were "starting to extract and place data for analytics into a single,Hadoop-based repository." Many companies usecloud storage servicessuch asGoogle Cloud StorageandAmazon S3or a distributed file system such asApache Hadoopdistributed file system (HDFS).[7]There is a gradual academic interest in the concept of data lakes. For example, Personal DataLake atCardiff Universityis a new type of data lake which aims at managingbig dataof individual users by providing a single point of collecting, organizing, and sharing personal data.[8] Early data lakes, such as Hadoop 1.0, had limited capabilities because it only supported batch-oriented processing (Map Reduce). Interacting with it required expertise in Java, map reduce and higher-level tools likeApache Pig,Apache SparkandApache Hive(which were also originally batch-oriented). Poorly managed data lakes have been facetiously called data swamps.[9] In June 2015, David Needle characterized "so-called data lakes" as "one of the more controversial ways to managebig data".[10]PwCwas also careful to note in their research that not all data lake initiatives are successful. They quote Sean Martin, CTO ofCambridge Semantics: We see customers creating big data graveyards, dumping everything intoHadoop distributed file system(HDFS) and hoping to do something with it down the road. But then they just lose track of what’s there. The main challenge is not creating a data lake, but taking advantage of the opportunities it presents.[6] They describe companies that build successful data lakes as gradually maturing their lake as they figure out which data andmetadataare important to the organization. Another criticism is that the termdata lakeis not useful because it is used in so many different ways.[11]It may be used to refer to, for example: any tools or data management practices that are notdata warehouses; a particular technology for implementation; a raw data reservoir; a hub forETLoffload; or a central hub for self-service analytics. While critiques of data lakes are warranted, in many cases they apply to other data projects as well.[12]For example, the definition ofdata warehouseis also changeable, and not all data warehouse efforts have been successful. In response to various critiques, McKinsey noted[13]that the data lake should be viewed as a service model for delivering business value within the enterprise, not a technology outcome. 
Data lakehousesare a hybrid approach that can ingest a variety of raw data formats like a data lake, yet provideACIDtransactions and enforce data quality like adata warehouse.[14][15]A data lakehouse architecture attempts to address several criticisms of data lakes by adding data warehouse capabilities such as transaction support, schema enforcement, governance, and support for diverse workloads. According to Oracle, data lakehouses combine the "flexible storage of unstructured data from a data lake and the management features and tools from data warehouses".[16]
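The store-raw-now, structure-later pattern described above can be sketched with plain files; a real data lake would typically sit on an object store or HDFS as noted earlier, and the directory layout, file names and "curated" zone below are illustrative assumptions.

```python
import csv
import json
from pathlib import Path

# A local directory stands in for an object store (a real lake would typically
# live on S3, Google Cloud Storage or HDFS, as mentioned above).
lake = Path("lake/raw/orders/2024-05-01")
lake.mkdir(parents=True, exist_ok=True)

# Ingest: land the source data exactly as received, without imposing a schema.
(lake / "orders.json").write_text(json.dumps(
    [{"id": 1, "item": "widget", "qty": 2}, {"id": 2, "item": "gadget", "qty": 1}]))
(lake / "clickstream.log").write_text("2024-05-01T10:00 view /widget\n")

# Transform on read: a consumer applies structure only when it needs it
# ("schema on read"), here flattening the JSON orders into a CSV extract.
orders = json.loads((lake / "orders.json").read_text())
curated = Path("lake/curated")
curated.mkdir(parents=True, exist_ok=True)
with open(curated / "orders.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "item", "qty"])
    writer.writeheader()
    writer.writerows(orders)
```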
https://en.wikipedia.org/wiki/Data_lake
Data meshis asociotechnicalapproach to building a decentralized data architecture by leveraging a domain-oriented, self-serve design (in a software development perspective), and borrows Eric Evans’ theory ofdomain-driven design[1]and Manuel Pais’ and Matthew Skelton’s theory of team topologies.[2]Data mesh mainly concerns itself with the data itself, taking thedata lakeand the pipelines as a secondary concern.[3]The main proposition is scaling analytical data by domain-oriented decentralization.[4]With data mesh, the responsibility for analytical data is shifted from the central data team to the domain teams, supported by adata platformteam that provides a domain-agnostic data platform.[5]This enables a decrease in data disorder or the existence of isolateddata silos, due to the presence of a centralized system that ensures the consistent sharing of fundamental principles across various nodes within the data mesh and allows for the sharing of data across different areas.[6] The termdata meshwas first defined by Zhamak Dehghani in 2019[7]while she was working as a principal consultant at the technology companyThoughtworks.[8][9]Dehghani introduced the term in 2019 and then provided greater detail on its principles and logical architecture throughout 2020. The process was predicted to be a “big contender” for companies in 2022.[10][11]Data meshes have been implemented by companies such asZalando,[12]Netflix,[13]Intuit,[14]VistaPrint,PayPal[15]and others. In 2022, Dehghani leftThoughtworksto found Nextdata Technologies to focus on decentralized data.[16] Data mesh is based on four core principles:[17] In addition to these principles, Dehghani writes that the data products created by each domain team should be discoverable, addressable, trustworthy, possess self-describing semantics and syntax, be interoperable, secure, and governed by global standards and access controls.[18]In other words, the data should be treated as a product that is ready to use and reliable.[19][20] After its introduction in 2019[7]multiple companies started to implement a data mesh[12][14][15]and share their experiences. Challenges (C) and best practices (BP) for practitioners, include: Scott Hirleman has started a data mesh community that contains over 7,500 people in their Slack channel.[25]
https://en.wikipedia.org/wiki/Data_mesh
This is acomparison of notable object database management systems, showing what fundamentalobject databasefeatures are implemented natively.
https://en.wikipedia.org/wiki/Comparison_of_object_database_management_systems
A component-oriented database (CODB) is a way of administering data and programming DBMSs using the paradigm of component orientation.[citation needed] The paradigm of component orientation (CO) is a development of object orientation (OO) in programming and data modeling that pushes the possibilities of reuse to their limit.[1] In this type of model, classes are aggregated into units called components,[citation needed] which play a role similar to that of the function in structured programming,[2] a way of processing information contemporary with the relational database model.[3] Component orientation thus mixes features of its predecessor models. It is easiest to understand through the idea of a visual component: an application[4] that is not deployed as a standalone executable or bytecode but is instead linked, as an icon, inside another application; clicking the icon carries out certain tasks.[5] The concept can then be extended to non-visual components.[6] In database work, a component, visual or not, is an aggregate of classes, in the OO sense, that can be linked to other components by adapters.[7] Because, following the OO model, data and program code are mixed into a single cohesive body,[8] it can be difficult to say where CODB ends and CO programming begins. Although this question matters conceptually, it has little practical importance in data processing, because mappings to widely used software, such as the ORDBMS and CRDB (component-relational database) mappings, keep the separation of data and code well defined.[9] In programming, CO is usually practiced with widely used OO languages (such as C++ and Java) through mapping adaptations. In design, the paradigm is supported by UML. In data modeling, data administration and database administration, the mapping adaptation is similar to the ORDBMS paradigm; the paradigm adapted to component-based models is known as the component-relational database (CRDB).[10] The main advantage of component-oriented thinking is that it optimizes the reusability of work. The CO paradigm allows ready-to-use applications to be used as modules in new and larger projects.[5] The OO notions of encapsulation, inheritance and polymorphism do not by themselves lead to reusing whole applications as modules of new works.[citation needed] CO thinking also ensures that components are fully tested, as real applications, so the model takes reuse to its extreme,[11] and it offers understandability to end users as a corollary of building IT systems by turning applications into components.[clarification needed] Even when the same software as in the OO paradigm is used, there are many specific consequences in the world of data-oriented activities. Analogously, whole models composed of classes can be treated as a part (component) of a new, more comprehensive model.[citation needed]
https://en.wikipedia.org/wiki/Component-oriented_database
AnEDA databaseis adatabasespecialized for the purpose ofelectronic design automation. These application specific databases are required because general purpose databases have historically not provided enough performance for EDA applications. In examining EDA design databases, it is useful to look at EDA tool architecture, to determine which parts are to be considered part of the design database, and which parts are the application levels. In addition to the database itself, many other components are needed for a useful EDA application. Associated with a database are one or more language systems (which, although not directly part of the database, are used by EDA applications such asparameterized cellsand user scripts). On top of the database are built the algorithmic engines within the tool (such astiming,placement,routing, orsimulation engines), and the highest level represents the applications built from these component blocks, such asfloorplanning. The scope of the design database includes the actual design, library information, technology information, and the set of translators to and from external formats such asVerilogandGDSII. Many instances of mature design databases exist in the EDA industry, both as a basis for commercial EDA tools as well as proprietary EDA tools developed by the CAD groups of major electronics companies.IBM,Hewlett-Packard, SDA Systems and ECAD (nowCadence Design Systems), High Level Design Systems, and many other companies developed EDA specific databases over the last 20 years, and these continue to be the basis of IC-design systems today. Many of these systems took ideas from university research and successfully productized them. Most of the mature design databases have evolved to the point where they can represent netlist data, layout data, and the ties between the two. They are hierarchical to allow for reuse and smaller designs. They can support styles of layout from digital through pure analog and many styles of mixed-signal design. Given the importance of a common design database in the EDA industry, theOpenAccessCoalition has been formed to develop, deploy, and support an open-sourced EDA design database with shared control. The data model presented in the OA DB provides a unified model that currently extends from structuralRTLthroughGDSII-level mask data, and now into thereticleand wafer space. It provides a rich enough capability to support digital, analog, and mixed-signal design data. It provides technology data that can express foundry process design rules through at least 20 nm, contains the definitions of the layers and purposes used in the design, definitions of VIAs and routing rules, definitions of operating points used for analysis, and so on. OA makes extensive use of IC-specific data compression techniques to reduce thememory footprint, to address the size, capacity, and performance problems of previous DBs. Despite what its name could imply, this file format has no publicly accessible implementation or specification. Those are exclusive to the members of the OpenAccess Coalition. The Milkyway database was originally developed by Avanti Corporation, which has since been acquired bySynopsys. It was first released in 1997. Milkyway is the database underlying most of Synopsys' physical design tools: Milkyway stores topological, parasitic and timing data. Having been used to design thousands of chips, Milkyway is very stable and production worthy. Milkyway is known to be written in C. 
Its internal implementation is not available outside Synopsys, so no comments may be made about the implementation. At the request of large customers such asTexas Instruments, Avanti released the MDX C-API in 1998. This enables the customers' CAD developers to createpluginsthat add custom functionality to Milkyway tools (chiefly Astro). MDX allows fairly complete access to topological data in Milkyway, but does not support timing or RC parasitic data. In early 2003, Synopsys (which acquired Avanti) opened Milkyway through theMilkyway Access Program (MAP-In). Any EDA company may become a MAP-in member for free (Synopsys customers must use MDX). Members are provided the means to interface their software to Milkyway using C,Tcl, orScheme. The Scheme interface is deprecated in favor of TCL. IC Compiler supports only TCL. The MAP-in C-API enables a non-Synopsys application to read and write Milkyway databases. Unlike MDX, MAP-in does not permit the creation of a plugin that can be used from within Synopsys Milkyway tools. MAP-in does not support access to timing or RC parasitic data. MAP-in also lacks direct support of certain geometric objects. MAP-in includes Milkyway Development Environment (MDE). MDE is a GUI application used to develop TCL and Scheme interfaces and diagnose problems. Its major features include: Another significant design database isFalcon, fromMentor Graphics. This database was one of the first in the industry written in C++. Like Milkyway is for Synopsys, Falcon seems to be a stable and mature platform for Mentor’s IC products. Again, the implementation is not publicly available, so little can be said about its features or performance relative to other industry standards. Magma Design Automation’s database is not just a disk format with an API, but is an entire system built around their DB as a central data structure. Again, since the details of the system are not publicly available, a direct comparison of features or performance is not possible. Looking at the capabilities of the Magma tools would indicate that this DB has a similar functionality to OpenAccess, and may be capable of representing behavioral (synthesis input) information. An EDA specific database is expected to provide many basic constructs and services. Here is a brief and incomplete list of what is needed:
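The list of constructs referred to above did not carry over, but the description earlier in the article (hierarchical cells for reuse, netlist connectivity, layout geometry on technology layers, and export to flat mask data) suggests its general shape. The following Python sketch is purely illustrative and does not correspond to OpenAccess, Milkyway or any real EDA API; the class names and the inverter/buffer example are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Shape:                     # a rectangle of layout geometry on a named layer
    layer: str
    bbox: tuple                  # (x1, y1, x2, y2) in database units

@dataclass
class Instance:                  # a placement of a master cell inside another cell
    master: "Cell"
    origin: tuple                # (x, y) offset of the placement

@dataclass
class Net:                       # netlist connectivity: pins of instances tied together
    name: str
    pins: list = field(default_factory=list)   # e.g. [("I0", "Z"), ("I1", "A")]

@dataclass
class Cell:                      # the unit of hierarchy and reuse
    name: str
    shapes: list = field(default_factory=list)
    instances: dict = field(default_factory=dict)
    nets: dict = field(default_factory=dict)

# A tiny two-level hierarchy: an inverter master reused twice inside a buffer.
inv = Cell("INV")
inv.shapes.append(Shape("poly", (0, 0, 10, 50)))

buf = Cell("BUF")
buf.instances["I0"] = Instance(inv, (0, 0))
buf.instances["I1"] = Instance(inv, (60, 0))
buf.nets["mid"] = Net("mid", pins=[("I0", "Z"), ("I1", "A")])

def flatten(cell, origin=(0, 0)):
    """Recursively expand the hierarchy into flat shapes, as a mask-data export would."""
    ox, oy = origin
    for s in cell.shapes:
        x1, y1, x2, y2 = s.bbox
        yield Shape(s.layer, (x1 + ox, y1 + oy, x2 + ox, y2 + oy))
    for inst in cell.instances.values():
        yield from flatten(inst.master, (ox + inst.origin[0], oy + inst.origin[1]))

print(list(flatten(buf)))   # two poly rectangles, one per INV placement
```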
https://en.wikipedia.org/wiki/EDA_database
TheEnterprise Objects Framework, or simplyEOF, was introduced byNeXTin 1994 as a pioneeringobject-relational mappingproduct for itsNeXTSTEPandOpenStepdevelopment platforms. EOF abstracts the process of interacting with arelational databaseby mapping database rows toJavaorObjective-Cobjects. This largely relieves developers from writing low-levelSQLcode.[1] EOF enjoyed some niche success in the mid-1990s among financial institutions who were attracted to the rapid application development advantages of NeXT's object-oriented platform. SinceApple Inc's merger with NeXT in 1996, EOF has evolved into a fully integrated part ofWebObjects, an application server also originally from NeXT. Many of the core concepts of EOF re-emerged as part ofCore Data, which further abstracts the underlying data formats to allow it to be based on non-SQL stores. In the early 1990sNeXTComputer recognized that connecting to databases was essential to most businesses and yet also potentially complex. Every data source has a different data-access language (orAPI), driving up the costs to learn and use each vendor's product. The NeXT engineers wanted to apply the advantages ofobject-oriented programming, by getting objects to "talk" to relational databases. As the two technologies are very different, the solution was to create an abstraction layer, insulating developers from writing the low-level procedural code (SQL) specific to each data source. The first attempt came in 1992 with the release of Database Kit (DBKit), which wrapped an object-oriented framework around any database. Unfortunately,NEXTSTEPat the time was not powerful enough and DBKit had serious design flaws. NeXT's second attempt came in 1994 with the Enterprise Objects Framework (EOF) version 1, acomplete rewritethat was far more modular andOpenStepcompatible. EOF 1.0 was the first product released byNeXTusing the Foundation Kit and introduced autoreleased objects to the developer community. The development team at the time was only four people: Jack Greenfield, Rich Williamson, Linus Upson and Dan Willhite. EOF 2.0, released in late 1995, further refined the architecture, introducing the editing context. At that point, the development team consisted of Dan Willhite,Craig Federighi, Eric Noyau and Charly Kleissner. EOF achieved a modest level of popularity in the financial programming community in the mid-1990s, but it would come into its own with the emergence of theWorld Wide Weband the concept ofweb applications. It was clear that EOF could help companies plug their legacy databases into the Web without any rewriting of that data. With the addition of frameworks to do state management, load balancing and dynamic HTML generation, NeXT was able to launch the first object-oriented Web application server,WebObjects, in 1996, with EOF at its core. In 2000, Apple Inc. (which had merged with NeXT) officially dropped EOF as a standalone product, meaning that developers would be unable to use it to create desktop applications for the forthcomingMac OS X. It would, however, continue to be an integral part of a major new release of WebObjects. WebObjects 5, released in 2001, was significant for the fact that its frameworks had been ported from their nativeObjective-Cprogramming language to theJavalanguage. Critics of this change argue that most of the power of EOF was a side effect of its Objective-C roots, and that EOF lost the beauty or simplicity it once had. 
Third-party tools, such asEOGenerator, help fill the deficiencies introduced by Java (mainly due to the loss ofcategories). The Objective-C code base was re-introduced with some modifications to desktop application developers asCore Data, part of Apple'sCocoa API, with the release ofMac OS X Tigerin April 2005. Enterprise Objects provides tools and frameworks for object-relational mapping. The technology specializes in providing mechanisms to retrieve data from various data sources, such as relational databases viaJDBCandJNDIdirectories, and mechanisms to commit data back to those data sources. These mechanisms are designed in a layered, abstract approach that allows developers to think about data retrieval and commitment at a higher level than a specific data source or data source vendor. Central to this mapping is a model file (an "EOModel") that you build with a visual tool — either EOModeler, or the EOModeler plug-in toXcode. The mapping works as follows: You can build data models based on existing data sources or you can build data models from scratch, which you then use to create data structures (tables, columns, joins) in a data source. The result is that database records can be transposed into Java objects. The advantage of using data models is that applications are isolated from the idiosyncrasies of the data sources they access. This separation of an application's business logic from database logic allows developers to change the database an application accesses without needing to change the application. EOF provides a level of database transparency not seen in other tools and allows the same model to be used to access different vendor databases and even allows relationships across different vendor databases without changing source code. Its power comes from exposing the underlying data sources as managed graphs of persistent objects. In simple terms, this means that it organizes the application's model layer into a set of defined in-memory data objects. It then tracks changes to these objects and can reverse those changes on demand, such as when a user performs an undo command. Then, when it is time to save changes to the application's data, it archives the objects to the underlying data sources. In designing Enterprise Objects developers can leverage the object-oriented feature known asinheritance. A Customer object and an Employee object, for example, might both inherit certain characteristics from a more generic Person object, such as name, address, and phone number. While this kind of thinking is inherent in object-oriented design, relational databases have no explicit support for inheritance. However, using Enterprise Objects, you can build data models that reflect object hierarchies. That is, you can design database tables to support inheritance by also designing enterprise objects that map to multiple tables or particular views of a database table. An Enterprise Object is analogous to what is often known in object-oriented programming as abusiness object— a class which models a physical orconceptual objectin the business domain (e.g. a customer, an order, an item, etc.). What makes an EO different from other objects is that its instance data maps to a data store. Typically, an enterprise object contains key-value pairs that represent a row in a relational database. The key is basically the column name, and the value is what was in that row in the database. So it can be said that an EO's properties persist beyond the life of any particular running application. 
More precisely, an Enterprise Object is an instance of a class that implements the com.webobjects.eocontrol.EOEnterpriseObject interface. An Enterprise Object has a corresponding model (called an EOModel) that defines the mapping between the class's object model and the database schema. However, an enterprise object doesn't explicitly know about its model. This level of abstraction means that database vendors can be switched without it affecting the developer's code. This gives Enterprise Objects a high degree of reusability. Despite their common origins, the two technologies diverged, with each technology retaining a subset of the features of the original Objective-C code base, while adding some new features. EOF supports custom SQL; shared editing contexts; nested editing contexts; and pre-fetching and batch faulting of relationships, all features of the original Objective-C implementation not supported by Core Data. Core Data also does not provide the equivalent of an EOModelGroup—the NSManagedObjectModel class provides methods for merging models from existing models, and for retrieving merged models from bundles. Core Data supports fetched properties; multiple configurations within a managed object model; local stores; and store aggregation (the data for a given entity may be spread across multiple stores); customization and localization of property names and validation warnings; and the use of predicates for property validation. These features of the original Objective-C implementation are not supported by the Java implementation.
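The row-to-object mapping described above, in which a model ties a class and its attributes to a table and its columns so that application code never touches SQL directly, can be sketched generically. This is not EOF's actual API; the Model and Customer names and the sqlite3 backing store are assumptions made for the example.

```python
import sqlite3

class Model:
    """A miniature stand-in for an EOModel: it records how one class maps to one table."""
    def __init__(self, cls, table, columns):
        self.cls, self.table, self.columns = cls, table, columns   # {attribute: column}

class Customer:
    def __init__(self, **attrs):
        self.__dict__.update(attrs)
    def __repr__(self):
        return f"Customer({self.__dict__})"

customer_model = Model(Customer, "customer", {"id": "id", "name": "name"})

def fetch_all(conn, model):
    """Turn each database row into an object of the mapped class (key-value pairs
    keyed by attribute name), so callers work with objects rather than SQL."""
    cols = ", ".join(model.columns.values())
    rows = conn.execute(f"SELECT {cols} FROM {model.table}")
    return [model.cls(**dict(zip(model.columns.keys(), row))) for row in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customer (name) VALUES ('Alice')")
print(fetch_all(conn, customer_model))   # [Customer({'id': 1, 'name': 'Alice'})]
```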
https://en.wikipedia.org/wiki/Enterprise_Objects_Framework
TheObject Data Management Group(ODMG) was conceived in the summer of 1991 at a breakfast withobject databasevendors that was organized by Rick Cattell ofSun Microsystems. In 1998, the ODMG changed its name from the Object Database Management Group to reflect the expansion of its efforts to include specifications for both object database andobject–relational mappingproducts. The primary goal of the ODMG was to put forward a set of specifications that allowed a developer to writeportableapplications for object database and object–relational mapping products. In order to do that, the data schema, programminglanguage bindings, and data manipulation andquery languagesneeded to be portable. Between 1993 and 2001, the ODMG published five revisions to its specification. The last revision was ODMG version 3.0, after which the group disbanded. ODMG 3.0 was published in book form in 2000.[1]By 2001, most of the major object database and object-relational mapping vendors claimed conformance to the ODMG Java Language Binding. Compliance to the other components of the specification was mixed.[2]In 2001, the ODMG Java Language Binding was submitted to theJava Community Processas a basis for theJava Data Objectsspecification. The ODMG member companies then decided to concentrate their efforts on the Java Data Objects specification. As a result, the ODMG disbanded in 2001. In 2004, theObject Management Group(OMG) was granted the right to revise the ODMG 3.0 specification as an OMG specification by the copyright holder, Morgan Kaufmann Publishers. In February 2006, the OMG announced the formation of the Object Database Technology Working Group (ODBT WG) and plans to work on the4th generation of an object database standard.
https://en.wikipedia.org/wiki/Object_Data_Management_Group
Anobject–relational database(ORD), orobject–relational database management system(ORDBMS), is adatabase management system(DBMS) similar to arelational database, but with anobject-oriented databasemodel:objects,classesandinheritanceare directly supported indatabase schemasand in thequery language. Also, as with pure relational systems, it supports extension of thedata modelwith customdata typesandmethods. An object–relational database can be said to provide a middle ground between relational databases andobject-oriented databases. In object–relational databases, the approach is essentially that of relational databases: the data resides in the database and is manipulated collectively with queries in a query language; at the other extreme are OODBMSes in which the database is essentially a persistent object store for software written in anobject-oriented programminglanguage, with an application programming interfaceAPIfor storing and retrieving objects, and little or no specific support for querying. The basic need of object–relational database arises from the fact that both Relational and Object database have their individual advantages and drawbacks. The isomorphism of the relational database system with a mathematical relation allows it to exploit many useful techniques and theorems from set theory. But these types of databases are not optimal for certain kinds of applications. An object oriented database model allows containers like sets and lists, arbitrary user-defined datatypes as well as nested objects. This brings commonality between the application type systems and database type systems which removes any issue of impedance mismatch. But object databases, unlike relational do not provide any mathematical base for their deep analysis.[2][3] The basic goal for the object–relational database is to bridge the gap between relational databases and theobject-oriented modelingtechniques used in programming languages such asJava,C++,Visual Basic (.NET)orC#. However, a more popular alternative for achieving such a bridge is to use a standard relational database systems with some form ofobject–relational mapping(ORM) software. Whereas traditionalRDBMSor SQL-DBMS products focused on the efficient management of data drawn from a limited set of data-types (defined by the relevant language standards), an object–relational DBMS allows software developers to integrate their own types and the methods that apply to them into the DBMS. The ORDBMS (likeODBMSorOODBMS) is integrated with anobject-oriented programminglanguage. The characteristic properties of ORDBMS are 1) complex data, 2) type inheritance, and 3) object behavior.Complex datacreation in most SQL ORDBMSs is based on preliminary schema definition via theuser-defined type(UDT). Hierarchy within structured complex data offers an added property,type inheritance. That is, a structured type can have subtypes that reuse all of its attributes and contain additional attributes specific to the subtype. Another advantage, theobject behavior, is related with access to the program objects. Such program objects must be storable and transportable for database processing, therefore they usually are named aspersistent objects. Inside a database, all the relations with a persistent program object are relations with itsobject identifier(OID). 
All of these points can be addressed in a proper relational system, although the SQL standard and its implementations impose arbitrary restrictions and additional complexity[4][page needed] Inobject-oriented programming(OOP), object behavior is described through the methods (object functions). The methods denoted by one name are distinguished by the type of their parameters and type of objects for which they attached (method signature). The OOP languages call this thepolymorphismprinciple, which briefly is defined as "one interface, many implementations". Other OOP principles,inheritanceandencapsulation, are related both to methods and attributes. Method inheritance is included in type inheritance. Encapsulation in OOP is a visibility degree declared, for example, through thepublic,privateandprotectedaccess modifiers. Object–relational database management systems grew out of research that occurred in the early 1990s. That research extended existing relational database concepts by addingobjectconcepts. The researchers aimed to retain a declarative query-language based onpredicate calculusas a central component of the architecture. Probably the most notable research project, Postgres (UC Berkeley), spawned two products tracing their lineage to that research:IllustraandPostgreSQL. In the mid-1990s, early commercial products appeared. These included Illustra[5](Illustra Information Systems, acquired byInformix Software, which was in turn acquired by International Business Machines (IBM), Omniscience (Omniscience Corporation, acquired byOracle Corporationand became the original Oracle Lite), and UniSQL (UniSQL, Inc., acquired byKCOM Group). Ukrainian developer Ruslan Zasukhin, founder of Paradigma Software, Inc., developed and shipped the first version of Valentina database in the mid-1990s as aC++software development kit(SDK). By the next decade, PostgreSQL had become a commercially viable database, and is the basis for several current products that maintain its ORDBMS features. Computer scientists came to refer to these products as "object–relational database management systems" or ORDBMSs.[6] Many of the ideas of early object–relational database efforts have largely become incorporated intoSQL:1999viastructured types. In fact, any product that adheres to the object-oriented aspects of SQL:1999 could be described as an object–relational database management product. For example,IBM Db2,Oracle Database, andMicrosoft SQL Server, make claims to support this technology and do so with varying degrees of success. An RDBMS might commonly involveSQLstatements such as these: Most current[update]SQL databases allow the crafting of customfunctions, which would allow the query to appear as: In an object–relational database, one might see something like this, with user-defined data-types and expressions such asBirthDay(): The object–relational model can offer another advantage in that the database can make use of the relationships between data to easily collect related records. In anaddress bookapplication, an additional table would be added to the ones above to hold zero or more addresses for each customer. Using a traditional RDBMS, collecting information for both the user and their address requires a "join": The same query in an object–relational database appears more simply:
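The inline SQL snippets this passage refers to (the plain RDBMS query, the custom-function version with an expression such as BirthDay(), and the simpler object–relational join) were not carried over from the original article. The following Python sketch reconstructs only the spirit of the custom-function contrast, using sqlite3's user-defined functions; the customer table, the date logic and the BIRTHDAY_TODAY name are illustrative assumptions rather than the article's original example.

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, dob TEXT)")
conn.execute("INSERT INTO customer (name, dob) VALUES ('Alice', '1990-06-15')")

# Plain RDBMS style: the query spells out the low-level date manipulation itself.
rows = conn.execute(
    "SELECT name FROM customer "
    "WHERE strftime('%m-%d', dob) = strftime('%m-%d', 'now')"
).fetchall()

# With a user-defined function, the same intent reads at a higher level,
# which is the kind of extensibility the custom-function example alludes to.
def birthday_today(dob):
    return date.fromisoformat(dob).strftime("%m-%d") == date.today().strftime("%m-%d")

conn.create_function("BIRTHDAY_TODAY", 1, birthday_today)
rows = conn.execute("SELECT name FROM customer WHERE BIRTHDAY_TODAY(dob)").fetchall()
print(rows)
```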
https://en.wikipedia.org/wiki/Object%E2%80%93relational_database
Incomputer science,persistencerefers to the characteristic ofstateof a system that outlives (persists for longer than) theprocessthat created it. This is achieved in practice by storing the state as data incomputer data storage. Programs have to transfer data to and from storage devices and have to provide mappings from the nativeprogramming-languagedata structuresto the storage device data structures.[1][2] Picture editing programs orword processors, for example, achievestatepersistence by saving their documents tofiles. Persistence is said to be "orthogonal" or "transparent" when it is implemented as an intrinsic property of the execution environment of a program. An orthogonal persistence environment does not require any specific actions by programs running in it to retrieve or save theirstate. Non-orthogonal persistence requires data to be written and read to and from storage using specific instructions in a program, resulting in the use ofpersistas a transitive verb:On completion, the program persists the data. The advantage of orthogonal persistence environments is simpler and less error-prone programs.[citation needed] The term "persistent" was first introduced by Atkinson and Morrison[1]in the sense of orthogonal persistence: they used an adjective rather than a verb to emphasize persistence as a property of the data, as distinct from an imperative action performed by a program. The use of the transitive verb "persist" (describing an action performed by a program) is a back-formation. Orthogonal persistence is widely adopted in operating systems forhibernationand inplatform virtualizationsystems such asVMwareandVirtualBoxfor state saving. Research prototype languages such asPS-algol,Napier88, Fibonacci and pJama, successfully demonstrated the concepts along with the advantages to programmers. Usingsystem imagesis the simplest persistence strategy. Notebookhibernationis an example of orthogonal persistence using a system image because it does not require any actions by the programs running on the machine. An example of non-orthogonal persistence using a system image is a simple text editing program executing specific instructions to save an entire document to a file. Shortcomings: Requires enough RAM to hold the entire system state. State changes made to a system after its last image was saved are lost in the case of a system failure or shutdown. Saving an image for every single change would be too time-consuming for most systems, so images are not used as the single persistence technique for critical systems. Using journals is the second simplest persistence technique. Journaling is the process of storing events in a log before each one is applied to a system. Such logs are called journals. On startup, the journal is read and each event is reapplied to the system, avoiding data loss in the case of system failure or shutdown. The entire "Undo/Redo" history of user commands in a picture editing program, for example, when written to a file, constitutes a journal capable of recovering the state of an edited picture at any point in time. Journals are used byjournaling file systems,prevalent systemsanddatabase management systemswhere they are also called "transaction logs" or "redo logs". Shortcomings: When journals are used exclusively, the entire (potentially large) history of all system events must be reapplied on every system startup. As a result, journals are often combined with other persistence techniques. 
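The journaling technique just described can be sketched in a few lines of Python: each event is appended (and fsynced) to a log before it is applied, and on startup the state is rebuilt by replaying the log. The file name, event format and Counter state are illustrative assumptions.

```python
import json
import os

JOURNAL = "journal.log"

class Counter:
    """Toy system state: a dictionary of named counters."""
    def __init__(self):
        self.values = {}
    def apply(self, event):                      # reapplying an event must be deterministic
        self.values[event["key"]] = self.values.get(event["key"], 0) + event["delta"]

def record(event):
    """Append the event to the journal *before* applying it to the in-memory state."""
    with open(JOURNAL, "a") as f:
        f.write(json.dumps(event) + "\n")
        f.flush()
        os.fsync(f.fileno())                     # make sure the entry survives a crash

def recover():
    """On startup, rebuild the state by replaying every journaled event."""
    state = Counter()
    if os.path.exists(JOURNAL):
        with open(JOURNAL) as f:
            for line in f:
                state.apply(json.loads(line))
    return state

state = recover()
event = {"key": "page_views", "delta": 1}
record(event)
state.apply(event)
print(state.values)
```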
Dirty writes are the writing to storage of only those portions of system state that have been modified (are dirty) since their last write. Sophisticated document-editing applications, for example, will use dirty writes to save only those portions of a document that were actually changed since the last save. Shortcomings: This technique requires state changes to be intercepted within a program. This is achieved in a non-transparent way by requiring specific storage-API calls, or in a transparent way with automatic program transformation. This results in code that is slower than native code and more complicated to debug. Any software layer that makes it easier for a program to persist its state is generically called a persistence layer. Most persistence layers will not achieve persistence directly but will use an underlying database management system. System prevalence is a technique that combines system images and transaction journals, mentioned above, to overcome their limitations. Shortcomings: A prevalent system must have enough RAM to hold the entire system state. DBMSs use a combination of the dirty-write and transaction-journaling techniques mentioned above. They provide not only persistence but also other services such as queries, auditing and access control. Persistent operating systems are operating systems that remain persistent even after a crash or unexpected shutdown; several operating systems employ this ability.
https://en.wikipedia.org/wiki/Persistence_(computer_science)
Therelational model(RM) is an approach to managingdatausing astructureand language consistent withfirst-order predicate logic, first described in 1969 by English computer scientistEdgar F. Codd,[1][2]where all data are represented in terms oftuples, grouped intorelations. A database organized in terms of the relational model is arelational database. The purpose of the relational model is to provide adeclarativemethod for specifying data and queries: users directly state what information the database contains and what information they want from it, and let thedatabase management system softwaretake care of describing data structures for storing the data and retrieval procedures for answering queries. Most relational databases use theSQLdata definition and query language; these systems implement what can be regarded as an engineering approximation to the relational model. Atablein a SQLdatabase schemacorresponds to a predicate variable; the contents of a table to a relation; key constraints, other constraints, and SQL queries correspond to predicates. However, SQL databasesdeviate from the relational model in many details, and Codd fiercely argued against deviations that compromise the original principles.[3] The relational model was developed byEdgar F. Coddas a general model of data, and subsequently promoted byChris DateandHugh Darwenamong others. In their 1995The Third Manifesto, Date and Darwen try to demonstrate how the relational model can accommodate certain "desired"object-orientedfeatures.[4] Some years after publication of his 1970 model, Codd proposed athree-valued logic(True, False, Missing/NULL) version of it to deal with missing information, and in hisThe Relational Model for Database Management Version 2(1990) he went a step further with a four-valued logic (True, False, Missing but Applicable, Missing but Inapplicable) version.[5] Arelationconsists of aheadingand abody. The heading defines asetofattributes, each with anameanddata type(sometimes called adomain). The number of attributes in this set is the relation'sdegreeorarity. The body is a set oftuples. A tuple is a collection ofnvalues, wherenis the relation's degree, and each value in the tuple corresponds to a unique attribute.[6]The number of tuples in this set is the relation'scardinality.[7]: 17–22 Relations are represented byrelationalvariablesorrelvars, which can be reassigned.[7]: 22–24Adatabaseis a collection of relvars.[7]: 112–113 In this model, databases follow theInformation Principle: At any given time, allinformationin the database is represented solely by values within tuples, corresponding to attributes, in relations identified by relvars.[7]: 111 A database may define arbitraryboolean expressionsasconstraints. If all constraints evaluate astrue, the database isconsistent; otherwise, it isinconsistent. If a change to a database's relvars would leave the database in an inconsistent state, that change is illegal and must not succeed.[7]: 91 In general, constraints are expressed using relational comparison operators, of which just one, "is subset of" (⊆), is theoretically sufficient.[citation needed] Two special cases of constraints are expressed askeysandforeign keys: Acandidate key, or simply akey, is the smallestsubsetof attributes guaranteed to uniquely differentiate each tuple in a relation. Since each tuple in a relation must be unique, every relation necessarily has a key, which may be its complete set of attributes. 
A relation may have multiple keys, as there may be multiple ways to uniquely differentiate each tuple.[7]: 31–33 An attribute may be unique across tuples without being a key. For example, a relation describing a company's employees may have two attributes: ID and Name. Even if no employees currently share a name, if it is possible to eventually hire a new employee with the same name as a current employee, the attribute subset {Name} is not a key. Conversely, if the subset {ID} is a key, this means not only that no employeescurrentlyshare an ID, but that no employeeswill evershare an ID.[7]: 31–33 Aforeign keyis a subset of attributesAin a relationR1that corresponds with a key of another relationR2, with the property that theprojectionofR1onAis a subset of the projection ofR2onA. In other words, if a tuple inR1contains values for a foreign key, there must be a corresponding tuple inR2containing the same values for the corresponding key.[7]: 34 Users (or programs) request data from a relational database by sending it aquery. In response to a query, the database returns a result set. Often, data from multiple tables are combined into one, by doing ajoin. Conceptually, this is done by taking all possible combinations of rows (theCartesian product), and then filtering out everything except the answer. There are a number of relational operations in addition to join. These include project (the process of eliminating some of the columns), restrict (the process of eliminating some of the rows), union (a way of combining two tables with similar structures), difference (that lists the rows in one table that are not found in the other), intersect (that lists the rows found in both tables), and product (mentioned above, which combines each row of one table with each row of the other). Depending on which other sources you consult, there are a number of other operators – many of which can be defined in terms of those listed above. These include semi-join, outer operators such as outer join and outer union, and various forms of division. Then there are operators to rename columns, and summarizing or aggregating operators, and if you permitrelationvalues as attributes (relation-valued attribute), then operators such as group and ungroup. The flexibility of relational databases allows programmers to write queries that were not anticipated by the database designers. As a result, relational databases can be used by multiple applications in ways the original designers did not foresee, which is especially important for databases that might be used for a long time (perhaps several decades). This has made the idea and implementation of relational databases very popular with businesses. Relationsare classified based upon the types of anomalies to which they're vulnerable. A database that is in thefirst normal formis vulnerable to all types of anomalies, while a database that is in the domain/key normal form has no modification anomalies. Normal forms are hierarchical in nature. That is, the lowest level is the first normal form, and the database cannot meet the requirements for higher level normal forms without first having met all the requirements of the lesser normal forms.[8] The relational model is aformal system. A relation's attributes define a set oflogicalpropositions. Each proposition can be expressed as a tuple. The body of a relation is a subset of these tuples, representing which propositions are true. 
Constraints represent additional propositions which must also be true.Relational algebrais a set of logical rules that canvalidlyinferconclusions from these propositions.[7]: 95–101 The definition of atupleallows for a unique empty tuple with no values, corresponding to theempty setof attributes. If a relation has a degree of 0 (i.e. its heading contains no attributes), it may have either a cardinality of 0 (a body containing no tuples) or a cardinality of 1 (a body containing the single empty tuple). These relations representBooleantruth values. The relation with degree 0 and cardinality 0 isFalse, while the relation with degree 0 and cardinality 1 isTrue.[7]: 221–223 If a relation of Employees contains the attributes{Name, ID}, then the tuple{Alice, 1}represents the proposition: "There exists an employee namedAlicewith ID1". This proposition may be true or false. If this tuple exists in the relation's body, the proposition is true (there is such an employee). If this tuple is not in the relation's body, the proposition is false (there is no such employee).[7]: 96–97 Furthermore, if{ID}is a key, then a relation containing the tuples{Alice, 1}and{Bob, 1}would represent the followingcontradiction: Under theprinciple of explosion, this contradiction would allow the system to prove that any arbitrary proposition is true. The database must enforce the key constraint to prevent this.[7]: 104 An idealized, very simple example of a description of somerelvars(relationvariables) and their attributes: In thisdesignwe have three relvars: Customer, Order, and Invoice. The bold, underlined attributes arecandidate keys. The non-bold, underlined attributes areforeign keys. Usually onecandidate keyis chosen to be called theprimary keyand used inpreferenceover the other candidate keys, which are then calledalternate keys. Acandidate keyis a uniqueidentifierenforcing that notuplewill be duplicated; this would make therelationinto something else, namely abag, by violating the basic definition of aset. Both foreign keys and superkeys (that includes candidate keys) can be composite, that is, can be composed of several attributes. Below is a tabular depiction of a relation of our example Customer relvar; a relation can be thought of as a value that can be attributed to a relvar. If we attempted toinserta new customer with the ID123, this would violate the design of the relvar sinceCustomer IDis aprimary keyand we already have a customer123. TheDBMSmust reject atransactionsuch as this that would render thedatabaseinconsistent by a violation of anintegrity constraint. However, it is possible to insert another customer namedAlice, as long as this new customer has a unique ID, since the Name field is not part of the primary key. Foreign keysareintegrity constraintsenforcing that thevalueof theattribute setis drawn from acandidate keyin anotherrelation. For example, in the Order relation the attributeCustomer IDis a foreign key. Ajoinis theoperationthat draws oninformationfrom several relations at once. By joining relvars from the example above we couldquerythe database for all of the Customers, Orders, and Invoices. If we only wanted the tuples for a specific customer, we would specify this using arestriction condition. If we wanted to retrieve all of the Orders for Customer123, we couldquerythe database to return every row in the Order table withCustomer ID123. There is a flaw in ourdatabase designabove. The Invoice relvar contains an Order ID attribute. 
So, each tuple in the Invoice relvar will have one Order ID, which implies that there is precisely one Order for each Invoice. But in reality an invoice can be created against many orders, or indeed for no particular order. Additionally the Order relvar contains an Invoice ID attribute, implying that each Order has a corresponding Invoice. But again this is not always true in the real world. An order is sometimes paid through several invoices, and sometimes paid without an invoice. In other words, there can be many Invoices per Order and many Orders per Invoice. This is amany-to-manyrelationship between Order and Invoice (also called anon-specific relationship). To represent this relationship in the database a new relvar should be introduced whoseroleis to specify the correspondence between Orders and Invoices: Now, the Order relvar has aone-to-many relationshipto the OrderInvoice table, as does the Invoice relvar. If we want to retrieve every Invoice for a particular Order, we can query for all orders whereOrder IDin the Order relation equals theOrder IDin OrderInvoice, and whereInvoice IDin OrderInvoice equals theInvoice IDin Invoice. Adata typein a relational database might be the set of integers, the set of character strings, the set of dates, etc. The relational model does not dictate what types are to be supported. Attributesare commonly represented ascolumns,tuplesasrows, andrelationsastables. A table is specified as a list of column definitions, each of which specifies a unique column name and the type of the values that are permitted for that column. Anattributevalueis the entry in a specific column and row. A databaserelvar(relation variable) is commonly known as abase table. The heading of its assigned value at any time is as specified in the table declaration and its body is that most recently assigned to it by anupdate operator(typically, INSERT, UPDATE, or DELETE). The heading and body of the table resulting from evaluating a query are determined by the definitions of the operators used in that query. SQL, initially pushed as thestandardlanguage forrelational databases, deviates from the relational model in several places. The currentISOSQL standard doesn't mention the relational model or use relational terms or concepts.[citation needed] According to the relational model, a Relation's attributes and tuples aremathematical sets, meaning they are unordered and unique. In a SQL table, neither rows nor columns are proper sets. A table may contain both duplicate rows and duplicate columns, and a table's columns are explicitly ordered. SQL uses aNullvalue to indicate missing data, which has no analog in the relational model. Because a row can represent unknown information, SQL does not adhere to the relational model'sInformation Principle.[7]: 153–155, 162 Basic notions in the relational model arerelationnamesandattribute names. We will represent these as strings such as "Person" and "name" and we will usually use the variablesr,s,t,…{\displaystyle r,s,t,\ldots }anda,b,c{\displaystyle a,b,c}to range over them. Another basic notion is the set ofatomic valuesthat contains values such as numbers and strings. Our first definition concerns the notion oftuple, which formalizes the notion of row or record in a table: The next definition definesrelationthat formalizes the contents of a table as it is defined in the relational model. 
Such a relation closely corresponds to what is usually called the extension of a predicate in first-order logic, except that here we identify the places in the predicate with attribute names. Usually in the relational model a database schema is said to consist of a set of relation names, the headers that are associated with these names, and the constraints that should hold for every instance of the database schema. One of the simplest and most important types of relation constraints is the key constraint. It tells us that in every instance of a certain relational schema the tuples can be identified by their values for certain attributes. A superkey is a set of column headers for which the concatenated values of those columns are unique across all rows. Formally, a set of attributes K is a superkey of a relation if, for any two tuples in its body, equality of their values on K implies equality on every attribute. A candidate key is a superkey that cannot be further subdivided to form another superkey. Functional dependency is the property that a value in a tuple may be derived from another value in that tuple. Other models include the hierarchical model and the network model. Some systems using these older architectures are still in use today in data centers with high data-volume needs, or where existing systems are so complex and abstract that it would be cost-prohibitive to migrate to systems employing the relational model. Also of note are newer object-oriented databases[9] and Datalog.[10] Datalog is a database definition language which combines a relational view of data, as in the relational model, with a logical view, as in logic programming. Whereas relational databases use a relational calculus or relational algebra, with relational operations such as union, intersection, set difference and cartesian product, to specify queries, Datalog uses logical connectives such as if, or, and and not to define relations as part of the database itself. In contrast with the relational model, which cannot express recursive queries without introducing a least-fixed-point operator,[11] recursive relations can be defined in Datalog without introducing any new logical connectives or operators.
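The least-fixed-point capability mentioned above was later added to SQL itself as recursive common table expressions (SQL:1999). A minimal sketch, assuming a hypothetical Reports_To(employee, manager) relation, computes the transitive closure that the Datalog rules boss(X,Y) :- reports_to(X,Y). and boss(X,Z) :- reports_to(X,Y), boss(Y,Z). would express:

-- Transitive closure of a "reports to" relation: all direct and indirect managers.
WITH RECURSIVE Boss (employee, manager) AS (
    SELECT employee, manager
    FROM   Reports_To                         -- base case: direct managers
    UNION
    SELECT r.employee, b.manager
    FROM   Reports_To r
    JOIN   Boss b ON b.employee = r.manager   -- recursive step
)
SELECT * FROM Boss;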
https://en.wikipedia.org/wiki/Relational_model
The following tables compare general and technical information for a number of online analytical processing (OLAP) servers; please see the individual products' articles for further information. The comparison covers the APIs and query languages the OLAP servers support, the OLAP features that are not supported by all vendors (all vendors support features such as parent-child, multilevel hierarchies and drilldown), and the operating systems the OLAP servers can run on. Note (1): The server availability depends on the Java Virtual Machine, not on the operating system.
https://en.wikipedia.org/wiki/Comparison_of_OLAP_servers
Thefunctional database modelis used to support analytics applications such asfinancial planningandperformance management. The functional database model, or the functional model for short, is different from but complementary to therelational model. The functional model is also distinct from other similarly named concepts, including the DAPLEX functional database model[1]and functional language databases. The functional model is part of theonline analytical processing (OLAP)category since it comprises multidimensional hierarchical consolidation. But it goes beyond OLAP by requiring aspreadsheet-like cell orientation, where cells can be input or calculated as functions of other cells. Also as in spreadsheets, it supports interactive calculations where the values of all dependent cells are automatically up to date whenever the value of a cell is changed. Analytics, especially forward looking or prospective analytics requires interactive modeling, "what if", and experimentation of the kind that most business analysts do with spreadsheets. This interaction with the data is enabled by the spreadsheet's cell orientation and its ability to let users define cells calculated as a function of other cells. The relational database model has no such concepts and is thus very limited in the business performance modeling and interactivity it can support. Accordingly, relational-based analytics is almost exclusively restricted to historical data, which is static. This misses most of the strategic benefits of analytics, which come from interactively constructing views of the future. The functional model is based on multidimensional arrays, or "cubes", of cells that, as in a spreadsheet, can be either externally input, or calculated in terms of other cells. Such cubes are constructed using dimensions which correspond to hierarchically organized sets of real entities such as products, geographies, time, etc. A cube can be seen as afunctionover thecartesian productof the dimensions. I.e., it assigns a value to each cell, which is identified by an n-tuple of dimension elements; thus the name "functional". The model retains the flexibility and potential for interactivity of spreadsheets, as well as the multidimensional hierarchical consolidations of relational-based OLAP tools. At the same time, the functional model overcomes the limitations of both the relational database model and classical spreadsheets. Products that implement the principles of the functional model to varying degrees have been in existence for some time, including products such asEssbase,TM1,Jedox, Alea, Microsoft Analysis Services, etc.[2][3][4][5][6] The management system of an enterprise generally consists of a series of interconnected control loops. Each loop starts by developing a plan, the plan is then executed, and the results are reviewed and compared against the plan. Based on those results, and a new assessment of what the future holds, a new plan is developed and the process is repeated. The three components of the control loop, planning, execution and assessment, have different time perspectives. Planning looks at the future, execution looks at the present and review looks at the past. Information Technology (IT) plays now a central role in making management control loops more efficient and effective. Operational computer systems are concerned with execution while analytic computer systems, or simply Analytics, are used to improve planning and assessment. The information needs of each component are different. 
Operational systems are typically concerned with recording transactions and keeping track of the current state of the business – inventory, work in progress etc. Analytics has two principal components: forward-looking or prospective analytics, which applies to planning, and backward looking or retrospective analytics, which applies to assessment. In retrospective analytics, transactions resulting from operations are boiled down and accumulated into arrays of cells. These cells are identified by as many dimensions as are relevant to the business: time, product, customer, account, region, etc. The cells are typically arrayed in cubes that form the basis for retrospective analyses such as comparing actual performance to plan. This is the main realm of OLAP systems. Prospective analytics develops similar cubes of data but for future time periods. The development of prospective data is typically the result of human input or mathematical models that are driven and controlled through user interaction. The application of IT to the tree components of the management control loop evolved over time as new technologies were developed. Recording of operational transactions was one of the first needs to be automated through the use of 80 column punch cards. As electronics progressed, the records were moved, first to magnetic tape, then to disk. Software technology progressed as well and gave rise to database management systems that centralized the access and control of the data. Databases made it then possible to develop languages that made it easy to produce reports for retrospective analytics. At about the same time, languages and systems were developed to handle multidimensional data and to automate mathematical techniques for forecasting and optimization as part of prospective analytics. Unfortunately, this technology required a high level of expertise and was not comprehensible to most end users. Thus its user acceptance was limited, and so were the benefits derived from it. No wide-use tool was available for prospective analytics until the introduction of the electronic spreadsheet. For the first time end users had a tool that they could understand and control, and use it to model their business as they understood it. They could interact, experiment, adapt to changing situations, and derive insights and value very quickly. As a result, spreadsheets were adopted broadly and ultimately became pervasive. To this day, spreadsheets remain an indispensable tool for anyone doing planning. Spreadsheets have a key set of characteristics that facilitate modeling and analysis. Data from multiple sources can be brought together in one worksheet. Cells can be defined by means of calculation formulas in terms of other cells, so facts from different sources can be logically interlinked to calculate derived values. Calculated cells are updated automatically whenever any of the input cells on which they depend changes. When users have a "what if" question, they simply change some data cells, and automatically all dependent cells are brought up to date. Also, cells are organized in rectangular grids and juxtaposed so that significant differences can be spotted at a glance or through associated graphic displays. Spreadsheet grids normally also contain consolidation calculations along rows and or columns. This permits discovering trends in the aggregate that may not be evident at a detailed level. But spreadsheets suffer from a number ofshortcomings. 
Cells are identified by row and column position, not the business concepts they represent. Spreadsheets are two dimensional, and multiple pages provide the semblance of three dimensions, but business data often has more dimensions. If users want to perform another analysis on the same set of data, the data needs to be duplicated. Spreadsheet links can sometimes be used, but most often are not practical. The combined effect of these limitations is that there is a limit on the complexity of spreadsheets that can be built and managed. While the functional model retains the key features of the spreadsheet, it also overcomes its main limitations. With the functional model, data is arranged in a grid of cells, but cells are identified by business concept instead of just row or column. Rather than worksheets, the objects of the functional model are dimensions and cubes. Rather than two or three dimensions: row, column and sheet, the functional model supports as many dimensions as are necessary. Another advantage of the functional model is that it is a database with features such as data independence, concurrent multiuser access, integrity, scalability, security, audit trail, backup/recovery, and data integration. Data independence is of particularly high value for analytics. Data need no longer reside in spreadsheets. Instead the functional database acts as a central information resource. The spreadsheet acts as a user interface to the database, so the same data can be shared by multiple spreadsheets and multiple users. Updates submitted by multiple users are available to all users subject to security rules. Accordingly, there is always a single consistent shared version of the data. A functional database consists of a set of dimensions which are used to construct a set of cubes. A dimension is a finite set of elements, or members, that identify business data, e.g., time periods, products, areas or regions, line items, etc. Cubes are built using any number of dimensions. A cube is a collection of cells, each of which is identified by a tuple of elements, one from each dimension of the cube. Each cell in a cube contains a value. A cube is effectively a function that assigns a value to each n-tuple of the cartesian product of the dimensions. The value of a cell may be assigned externally (input), or the result of a calculation that uses other cells in the same cube or other cubes. The definition of a cube includes the formulas that specify the calculation of such cells. Cells may also be empty and deemed to have a zero value for purposes of consolidation. As with spreadsheets, users need not worry about executing recalculation. When the value of a cell is requested, the value that is returned is up to date with respect to the values of all of the cells that go into its calculation i.e. the cells on which it depends. Dimensions typically contain consolidation hierarchies where some elements are defined as parents of other elements, and a parent is interpreted as the sum of its children. Cells that are identified by a consolidated element in one or more dimensions are automatically calculated by the functional model as sums of cells having child elements in those dimensions. When the value of a consolidated cell is requested, the value that is returned is always up to date with respect to the values of all of the cells that it consolidates. 
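The functional model is not relational, but the cube-as-function idea and the consolidation hierarchy just described can be illustrated in the SQL vocabulary used elsewhere in this document: a cube becomes a table whose primary key is the tuple of dimension elements, and a parent cell is the sum of its children's cells. All table and column names below are illustrative, and the roll-up query climbs only one level of the hierarchy.

-- One row per dimension member; parent_id expresses the consolidation hierarchy.
CREATE TABLE RegionDim (
    region_id INTEGER PRIMARY KEY,
    name      VARCHAR(50) NOT NULL,
    parent_id INTEGER REFERENCES RegionDim (region_id)   -- NULL for the top element, e.g. "All Regions"
);

-- The Sales "cube": each cell is identified by an n-tuple of dimension elements.
CREATE TABLE SalesCube (
    region_id  INTEGER NOT NULL REFERENCES RegionDim (region_id),
    product_id INTEGER NOT NULL,
    period_id  INTEGER NOT NULL,
    value      DECIMAL(15,2),
    PRIMARY KEY (region_id, product_id, period_id)        -- the cube assigns a value to this tuple
);

-- A consolidated cell: the parent region's value is the sum of its children's cells.
SELECT d.parent_id AS region_id, s.product_id, s.period_id, SUM(s.value) AS value
FROM   SalesCube s
JOIN   RegionDim d ON d.region_id = s.region_id
WHERE  d.parent_id IS NOT NULL
GROUP  BY d.parent_id, s.product_id, s.period_id;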
The cubes and their dimensions (in parentheses) are as follows: The cubes in the model are interconnected through formulas: The P&L cube picks up the dollar costs from the payroll cube through a formula of the form: P&L( "Payroll", "Dollars") = Payroll ("All Employees") Note: The expression syntax used is for illustration purposes and may not reflect syntax used in the formal model or in particular products that implement the functional model. The dimensions that are omitted from the expression are assumed to range over all the leaf elements of those dimensions. Thus this expression is equivalent to: P&L( xRegion, "Payroll", "Dollars", xTime) = Payroll (xRegion, "All Employees", xTime), for all leaves xRegion in Region and all leaves xTime in Time. Similarly, P&L also picks up sales revenue from the Sales cube through: P&L( "Sales", "Dollars") = Sales("All Products") Overhead accounts are allocated by region on the basis of sales: P&L("Region", "Dollars") = Ovhd() * Sales("Region") / Sales("All Regions") Finally, other currencies are derived from the dollar exchange rate: P&L() = P&L("Dollars") * Fx() The historical portion of the cubes is also populated from the data warehouse. In this simplified example, the calculations just discussed may be done in the data warehouse for the historical portion of the cubes, but generally, the functional model supports the calculation of other functions, such as ratios and percentages. While the history is static, the future portion is typically dynamic and developed interactively by business analysts in various organizations and various backgrounds. Sales forecasts should be developed by experts from each region. They could use forecasting models and parameters that incorporate their knowledge and experience of that region, or they could simply enter them through a spreadsheet. Each region can use a different method with different assumptions. The payroll forecast could be developed by HR experts in each region. The overhead cube would be populated by people in headquarters finance, and so would the exchange rate forecasts. The forecasts developed by regional experts are first reviewed and recycled within the region and then reviewed and recycled with headquarters. The model can be expanded to include a Version dimension that varies based on, for example, various economic climate scenarios. As time progresses, each planning cycle can be stored in a different version, and those versions compared to actual and to one another. At any time the data in all the cubes, subject to security constraints, is available to all interested parties. Users can bring slices of cubes dynamically into spreadsheets to do further analyses, but with a guarantee that the data is the same as what other users are seeing. A functional database brings together data from multiple disparate sources and ties the disparate data sets into coherent consumable models. It also brings data scattered over multiple spreadsheets under control. This lets users see a summary picture that combines multiple components, e.g., to roll manpower planning into a complete financial picture automatically. It gives them a single point of entry to develop global insights based on various sources. A functional database, like spreadsheets, also lets users change input values while all dependent values are up to date. This facilitates what-if experimentation and creating and comparing multiple scenarios. Users can then see the scenarios side by side and choose the most appropriate. 
When planning, users can converge on a most advantageous course of action by repeatedly recycling and interacting with results. Actionable insights come from this intimate interaction with data that users normally do with spreadsheets A functional database does not only provide a common interactive data store. It also brings together models developed by analysts with knowledge of a particular area of the business that can be shared by all users. To facilitate this, a functional database retains the spreadsheet's interactive cell-based modelling capability. This makes possible models that more closely reflect the complexities of business reality. Perhaps a functional database's largest single contribution to analytics comes from promoting collaboration. It lets multiple individuals and organizations not only share a single version of the truth, but a truth that is dynamic and constantly changing. Its automatic calculations quickly consolidate and reconcile inputs from multiple sources. This promotes interaction of various departments, facilitates multiple iterations of thought processes and makes it possible for differing viewpoints to converge and be reconciled. Also, since each portion of the model is developed by the people that are more experts in their particular area, it is able to leverage experience and insights that exist up and down the organization.
https://en.wikipedia.org/wiki/Functional_database_model
This list includes SQL reserved words – also known as SQL reserved keywords[1][2] – as specified by SQL:2023 and as added by some RDBMSs. A dash (-) means that the keyword is not reserved.
https://en.wikipedia.org/wiki/List_of_SQL_reserved_words
Thesyntaxof theSQLprogramming languageis defined and maintained byISO/IEC SC 32as part ofISO/IEC 9075. This standard is not freely available. Despite the existence of the standard, SQL code is not completely portable among differentdatabase systemswithout adjustments. The SQL language is subdivided into several language elements, including: Other operators have at times been suggested or implemented, such as theskyline operator(for finding only those rows that are not 'worse' than any others). SQL has thecaseexpression, which was introduced inSQL-92. In its most general form, which is called a "searched case" in the SQL standard: SQL testsWHENconditions in the order they appear in the source. If the source does not specify anELSEexpression, SQL defaults toELSE NULL. An abbreviated syntax called "simple case" can also be used: This syntax uses implicit equality comparisons, withthe usual caveats for comparing with NULL. There are two short forms for specialCASEexpressions:COALESCEandNULLIF. TheCOALESCEexpression returns the value of the first non-NULL operand, found by working from left to right, or NULL if all the operands equal NULL. is equivalent to: TheNULLIFexpression has two operands and returns NULL if the operands have the same value, otherwise it has the value of the first operand. is equivalent to Standard SQL allows two formats forcomments:-- comment, which is ended by the firstnewline, and/* comment */, which can span multiple lines. The most common operation in SQL, the query, makes use of the declarativeSELECTstatement.SELECTretrieves data from one or moretables, or expressions. StandardSELECTstatements have no persistent effects on the database. Some non-standard implementations ofSELECTcan have persistent effects, such as theSELECT INTOsyntax provided in some databases.[2] Queries allow the user to describe desired data, leaving thedatabase management system (DBMS)to carry outplanning,optimizing, and performing the physical operations necessary to produce that result as it chooses. A query includes a list of columns to include in the final result, normally immediately following theSELECTkeyword. An asterisk ("*") can be used to specify that the query should return all columns of the queried tables.SELECTis the most complex statement in SQL, with optional keywords and clauses that include: The clauses of a query have a particular order of execution,[5]which is denoted by the number on the right hand side. It is as follows: The following example of aSELECTquery returns a list of expensive books. The query retrieves all rows from theBooktable in which thepricecolumn contains a value greater than 100.00. The result is sorted in ascending order bytitle. The asterisk (*) in theselect listindicates that all columns of theBooktable should be included in the result set. The example below demonstrates a query of multiple tables, grouping, and aggregation, by returning a list of books and the number of authors associated with each book. Example output might resemble the following: Under the precondition thatisbnis the only common column name of the two tables and that a column namedtitleonly exists in theBooktable, one could re-write the query above in the following form: However, many[quantify]vendors either do not support this approach, or require certain column-naming conventions for natural joins to work effectively. SQL includes operators and functions for calculating values on stored values. 
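Sketches of the queries described above follow. The Book table with its title, price and isbn columns is taken from the prose; the Book_author table and the subtitle column are added here purely for illustration.

-- All books costing more than 100.00, sorted in ascending order by title.
SELECT *
FROM   Book
WHERE  price > 100.00
ORDER  BY title;

-- Books and the number of authors associated with each book,
-- assuming one Book_author row per book/author pair.
SELECT b.title, COUNT(*) AS authors
FROM   Book b
JOIN   Book_author ba ON ba.isbn = b.isbn
GROUP  BY b.title;

-- Operators and functions calculate values on stored values;
-- COALESCE and the searched CASE expression are typical examples.
SELECT title,
       COALESCE(subtitle, '(none)') AS subtitle,
       CASE WHEN price > 100.00 THEN 'expensive' ELSE 'ordinary' END AS price_band
FROM   Book;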
SQL allows the use of expressions in theselect listto project data, as in the following example, which returns a list of books that cost more than 100.00 with an additionalsales_taxcolumn containing a sales tax figure calculated at 6% of theprice. Queries can be nested so that the results of one query can be used in another query via arelational operatoror aggregation function. A nested query is also known as asubquery. While joins and other table operations provide computationally superior (i.e. faster) alternatives in many cases, the use of subqueries introduces a hierarchy in execution that can be useful or necessary. In the following example, the aggregation functionAVGreceives as input the result of a subquery: A subquery can use values from the outer query, in which case it is known as acorrelated subquery. Since 1999 the SQL standard allowsWITHclauses for subqueries, i.e. named subqueries, usually calledcommon table expressions(also calledsubquery factoring). CTEs can also berecursiveby referring to themselves;the resulting mechanismallows tree or graph traversals (when represented as relations), and more generallyfixpointcomputations. Aderived tableis the use of referencing an SQL subquery in a FROM clause. Essentially, the derived table is a subquery that can be selected from or joined to. The derived table functionality allows the user to reference the subquery as a table. The derived table is sometimes referred to as aninline viewor asubselect. In the following example, the SQL statement involves a join from the initial "Book" table to the derived table "sales". This derived table captures associated book sales information using the ISBN to join to the "Book" table. As a result, the derived table provides the result set with additional columns (the number of items sold and the company that sold the books): The concept ofNullallows SQL to deal with missing information in the relational model. The wordNULLis a reserved keyword in SQL, used to identify the Null special marker. Comparisons with Null, for instance equality (=) in WHERE clauses, results in an Unknown truth value. In SELECT statements SQL returns only results for which the WHERE clause returns a value of True; i.e., it excludes results with values of False and also excludes those whose value is Unknown. Along with True and False, the Unknown resulting from direct comparisons with Null thus brings a fragment ofthree-valued logicto SQL. The truth tables SQL uses for AND, OR, and NOT correspond to a common fragment of the Kleene and Lukasiewicz three-valued logic (which differ in their definition of implication, however SQL defines no such operation).[6] There are however disputes about the semantic interpretation of Nulls in SQL because of its treatment outside direct comparisons. As seen in the table above, direct equality comparisons between two NULLs in SQL (e.g.NULL = NULL) return a truth value of Unknown. This is in line with the interpretation that Null does not have a value (and is not a member of any data domain) but is rather a placeholder or "mark" for missing information. However, the principle that two Nulls aren't equal to each other is effectively violated in the SQL specification for theUNIONandINTERSECToperators, which do identify nulls with each other.[7]Consequently, theseset operations in SQLmay produce results not representing sure information, unlike operations involving explicit comparisons with NULL (e.g. those in aWHEREclause discussed above). 
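A small sketch of the asymmetry just described; the Book table from the earlier examples is reused, and the query without a FROM clause relies on a widespread (though not strictly standard) dialect shortcut.

-- NULL = NULL evaluates to Unknown, and WHERE keeps only rows evaluating to True,
-- so this query returns no rows at all.
SELECT title FROM Book WHERE NULL = NULL;

-- UNION removes duplicates and, for that purpose, treats Nulls as equal to each other,
-- so this query returns a single row containing NULL.
SELECT NULL AS value
UNION
SELECT NULL AS value;

-- The Null-specific predicate avoids the Unknown truth value entirely.
SELECT title FROM Book WHERE price IS NULL;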
In Codd's 1979 proposal (which was basically adopted by SQL92) this semantic inconsistency is rationalized by arguing that removal of duplicates in set operations happens "at a lower level of detail than equality testing in the evaluation of retrieval operations".[6]However, computer-science professor Ron van der Meyden concluded that "The inconsistencies in the SQL standard mean that it is not possible to ascribe any intuitive logical semantics to the treatment of nulls in SQL."[7] Additionally, because SQL operators return Unknown when comparing anything with Null directly, SQL provides two Null-specific comparison predicates:IS NULLandIS NOT NULLtest whether data is or is not Null.[8]SQL does not explicitly supportuniversal quantification, and must work it out as a negatedexistential quantification.[9][10][11]There is also the<row value expression> IS DISTINCT FROM <row value expression>infixed comparison operator, which returns TRUE unless both operands are equal or both are NULL. Likewise, IS NOT DISTINCT FROM is defined asNOT (<row value expression> IS DISTINCT FROM <row value expression>).SQL:1999also introducedBOOLEANtype variables, which according to the standard can also hold Unknown values if it is nullable. In practice, a number of systems (e.g.PostgreSQL) implement the BOOLEAN Unknown as a BOOLEAN NULL, which the standard says that the NULL BOOLEAN and UNKNOWN "may be used interchangeably to mean exactly the same thing".[12][13] TheData Manipulation Language(DML) is the subset of SQL used to add, update and delete data: Transactions, if available, wrap DML operations: COMMITandROLLBACKterminate the current transaction and release data locks. In the absence of aSTART TRANSACTIONor similar statement, the semantics of SQL are implementation-dependent. The following example shows a classic transfer of funds transaction, where money is removed from one account and added to another. If either the removal or the addition fails, the entire transaction is rolled back. TheData Definition Language(DDL) manages table and index structure. The most basic items of DDL are theCREATE,ALTER,RENAME,DROPandTRUNCATEstatements: Each column in an SQL table declares the type(s) that column may contain. ANSI SQL includes the following data types.[14] For theCHARACTER LARGE OBJECTandNATIONAL CHARACTER LARGE OBJECTdata types, the multipliersK(1 024),M(1 048 576),G(1 073 741 824) andT(1 099 511 627 776) can be optionally used when specifying the length. For theBINARY LARGE OBJECTdata type, the multipliersK(1 024),M(1 048 576),G(1 073 741 824) andT(1 099 511 627 776) can be optionally used when specifying the length. TheBOOLEANdata type can store the valuesTRUEandFALSE. For example, the number 123.45 has a precision of 5 and a scale of 2. Theprecisionis a positive integer that determines the number of significant digits in a particular radix (binary or decimal). Thescaleis a non-negative integer. A scale of 0 indicates that the number is an integer. For a decimal number with scale S, the exact numeric value is the integer value of the significant digits divided by 10S. SQL provides the functionsCEILINGandFLOORto round numerical values. (Popular vendor specific functions areTRUNC(Informix, DB2, PostgreSQL, Oracle and MySQL) andROUND(Informix, SQLite, Sybase, Oracle, PostgreSQL, Microsoft SQL Server and Mimer SQL.)) The SQL functionEXTRACTcan be used for extracting a single field (seconds, for instance) of a datetime or interval value. 
The current system date/time of the database server can be obtained using functions such as CURRENT_DATE, CURRENT_TIMESTAMP, LOCALTIME, or LOCALTIMESTAMP. (Popular vendor-specific functions are TO_DATE, TO_TIME, TO_TIMESTAMP, YEAR, MONTH, DAY, HOUR, MINUTE, SECOND, DAYOFYEAR, DAYOFMONTH and DAYOFWEEK.) The Data Control Language (DCL) authorizes users to access and manipulate data. Its two main statements are GRANT, which gives one or more users the right to perform specified operations on specified objects, and REVOKE, which removes such rights.
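A sketch of the transfer-of-funds transaction described earlier and of the two DCL statements; the Account table, the account numbers, and the user name clerk are illustrative.

-- Classic transfer of funds: both updates succeed, or neither does.
START TRANSACTION;

UPDATE Account SET balance = balance - 100.00 WHERE account_id = 1;
UPDATE Account SET balance = balance + 100.00 WHERE account_id = 2;

COMMIT;        -- make both changes permanent and release data locks
-- ROLLBACK;   -- issued instead of COMMIT to undo both changes if anything failed

-- DCL: granting and revoking privileges on a table.
GRANT  SELECT, UPDATE ON Account TO clerk;
REVOKE UPDATE          ON Account FROM clerk;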
https://en.wikipedia.org/wiki/SQL_syntax
PL/SQL(Procedural Language for SQL) isOracle Corporation'sproceduralextensionforSQLand theOracle relational database. PL/SQL is available in Oracle Database (since version 6 - stored PL/SQL procedures/functions/packages/triggers since version 7),TimesTen in-memory database(since version 11.2.1), andIBM Db2(since version 9.7).[1]Oracle Corporation usually extends PL/SQL functionality with each successive release of the Oracle Database. PL/SQL includes procedural language elements such asconditionsandloops, and can handleexceptions(run-time errors). It allows the declaration of constants andvariables, procedures, functions, packages, types and variables of those types, and triggers.Arraysare supported involving the use of PL/SQL collections. Implementations from version 8 of Oracle Database onwards have included features associated withobject-orientation. One can create PL/SQL units such as procedures, functions, packages, types, and triggers, which are stored in the database for reuse by applications that use any of the Oracle Database programmatic interfaces. The first public version of the PL/SQL definition[2]was in 1995. It implements the ISOSQL/PSMstandard.[3] The main feature of SQL (non-procedural) is also its drawback: control statements (decision-makingoriterative control) cannot be used if only SQL is to be used. PL/SQL provides the functionality of other procedural programming languages, such as decision making, iteration etc. A PL/SQL program unit is one of the following: PL/SQL anonymous block,procedure,function,packagespecification, package body, trigger, type specification, type body, library. Program units are the PL/SQL source code that is developed, compiled, and ultimately executed on the database.[4] The basic unit of a PL/SQL source program is the block, which groups together related declarations and statements. A PL/SQL block is defined by the keywords DECLARE, BEGIN, EXCEPTION, and END. These keywords divide the block into a declarative part, an executable part, and an exception-handling part. The declaration section is optional and may be used to define and initialize constants and variables. If a variable is not initialized then it defaults toNULLvalue. The optional exception-handling part is used to handle run-time errors. Only the executable part is required. A block can have a label.[5] For example: The symbol:=functions as anassignment operatorto store a value in a variable. Blocks can be nested – i.e., because a block is an executable statement, it can appear in another block wherever an executable statement is allowed. A block can be submitted to an interactive tool (such as SQL*Plus) or embedded within an Oracle Precompiler orOCIprogram. The interactive tool or program runs the block once. The block is not stored in the database, and for that reason, it is called an anonymous block (even if it has a label). The purpose of a PL/SQL function is generally used to compute and return a single value. This returned value may be a single scalar value (such as a number, date or character string) or a single collection (such as a nested table or array).User-defined functionssupplement the built-in functions provided by Oracle Corporation.[6] The PL/SQL function has the form: Pipe-lined table functions return collections[7]and take the form: A function should only use the default IN type of parameter. The only out value from the function should be the value it returns. Procedures resemble functions in that they are named program units that can be invoked repeatedly. 
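Before turning to the differences between procedures and functions, here is a minimal sketch of the anonymous block and stored function forms described above; the block label, variable names, greeting text and the 10% bonus rule are illustrative.

-- Anonymous block: declarative part, executable part, exception-handling part.
<<greeting_block>>                            -- optional block label
DECLARE
    v_name     VARCHAR2(30) := 'World';       -- := is the assignment operator
    v_attempts NUMBER;                        -- not initialized, so it defaults to NULL
BEGIN
    DBMS_OUTPUT.PUT_LINE('Hello, ' || v_name);
EXCEPTION
    WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('Something went wrong: ' || SQLERRM);
END greeting_block;

-- Stored function: computes and returns a single scalar value, using only IN parameters.
CREATE OR REPLACE FUNCTION bonus (p_salary IN NUMBER) RETURN NUMBER IS
BEGIN
    RETURN p_salary * 0.10;
END bonus;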
The primary difference is thatfunctions can be used in a SQL statement whereas procedures cannot. Another difference is that the procedure can return multiple values whereas a function should only return a single value.[8] The procedure begins with a mandatory heading part to hold the procedure name and optionally the procedure parameter list. Next come the declarative, executable and exception-handling parts, as in the PL/SQL Anonymous Block. A simple procedure might look like this: The example above shows a standalone procedure - this type of procedure is created and stored in a database schema using the CREATE PROCEDURE statement. A procedure may also be created in a PL/SQL package - this is called a Package Procedure. A procedure created in a PL/SQL anonymous block is called a nested procedure. The standalone or package procedures, stored in the database, are referred to as "stored procedures". Procedures can have three types of parameters: IN, OUT and IN OUT. PL/SQL also supports external procedures via the Oracle database's standardext-procprocess.[9] Packages are groups of conceptually linked functions, procedures, variables, PL/SQL table and record TYPE statements, constants, cursors, etc. The use of packages promotes re-use of code. Packages are composed of the package specification and an optional package body. The specification is the interface to the application; it declares the types, variables, constants, exceptions, cursors, and subprograms available. The body fully defines cursors and subprograms, and so implements the specification. Two advantages of packages are:[10] Adatabase triggeris like a stored procedure that Oracle Database invokes automatically whenever a specified event occurs. It is a named PL/SQL unit that is stored in the database and can be invoked repeatedly. Unlike a stored procedure, you can enable and disable a trigger, but you cannot explicitly invoke it. While a trigger is enabled, the database automatically invokes it—that is, the trigger fires—whenever its triggering event occurs. While a trigger is disabled, it does not fire. You create a trigger with the CREATE TRIGGER statement. You specify the triggering event in terms of triggering statements, and the item they act on. The trigger is said to be created on or defined on the item—which is either a table, aview, a schema, or the database. You also specify the timing point, which determines whether the trigger fires before or after the triggering statement runs and whether it fires for each row that the triggering statement affects. If the trigger is created on a table or view, then the triggering event is composed of DML statements, and the trigger is called a DML trigger. If the trigger is created on a schema or the database, then the triggering event is composed of either DDL or database operation statements, and the trigger is called a system trigger. An INSTEAD OF trigger is either: A DML trigger created on a view or a system trigger defined on a CREATE statement. The database fires the INSTEAD OF trigger instead of running the triggering statement. Triggers can be written for the following purposes: The majordatatypesin PL/SQL include NUMBER, CHAR, VARCHAR2, DATE and TIMESTAMP. To define a numeric variable, the programmer appends the variable typeNUMBERto the name definition. To specify the (optional) precision (P) and the (optional) scale (S), one can further append these in round brackets, separated by a comma. 
("Precision" in this context refers to the number of digits the variable can hold, and "scale" refers to the number of digits that can follow the decimal point.) A selection of other data-types for numeric variables would include: binary_float, binary_double, dec, decimal, double precision, float, integer, int, numeric, real, small-int, binary_integer. To define a character variable, the programmer normally appends the variable type VARCHAR2 to the name definition. There follows in brackets the maximum number of characters the variable can store. Other datatypes for character variables include: varchar, char, long, raw, long raw, nchar, nchar2, clob, blob, and bfile. Date variables can contain date and time. The time may be left out, but there is no way to define a variable that only contains the time. There is no DATETIME type. And there is a TIME type. But there is no TIMESTAMP type that can contain fine-grained timestamp up to millisecond or nanosecond. TheTO_DATEfunction can be used to convert strings to date values. The function converts the first quoted string into a date, using as a definition the second quoted string, for example: or To convert the dates to strings one uses the functionTO_CHAR (date_string, format_string). PL/SQL also supports the use of ANSI date and interval literals.[11]The following clause gives an 18-month range: Exceptions—errors during code execution—are of two types: user-defined and predefined. User-definedexceptions are always raised explicitly by the programmers, using theRAISEorRAISE_APPLICATION_ERRORcommands, in any situation where they determine it is impossible for normal execution to continue. TheRAISEcommand has the syntax: Oracle Corporation haspredefinedseveral exceptions likeNO_DATA_FOUND,TOO_MANY_ROWS,etc.Each exception has an SQL error number and SQL error message associated with it. Programmers can access these by using theSQLCODEandSQLERRMfunctions. This syntax defines a variable of the type of the referenced column on the referenced tables. Programmers specify user-defined datatypes with the syntax: For example: This sample program defines its own datatype, calledt_address, which contains the fieldsname, street, street_numberandpostcode. So according to the example, we are able to copy the data from the database to the fields in the program. Using this datatype the programmer has defined a variable calledv_addressand loaded it with data from the ADDRESS table. Programmers can address individual attributes in such a structure by means of the dot-notation, thus: The following code segment shows the IF-THEN-ELSIF-ELSE construct. The ELSIF and ELSE parts are optional so it is possible to create simpler IF-THEN or, IF-THEN-ELSE constructs. The CASE statement simplifies some large IF-THEN-ELSIF-ELSE structures. CASE statement can be used with predefined selector: PL/SQL refers toarraysas "collections". The language offers three types of collections: Programmers must specify an upper limit for varrays, but need not for index-by tables or for nested tables. The language includes several collectionmethodsused to manipulate collection elements: for example FIRST, LAST, NEXT, PRIOR, EXTEND, TRIM, DELETE, etc. Index-by tables can be used to simulate associative arrays, as in thisexample of a memo function for Ackermann's function in PL/SQL. With index-by tables, the array can be indexed by numbers or strings. It parallels aJavamap, which comprises key-value pairs. There is only one dimension and it is unbounded. 
Withnested tablesthe programmer needs to understand what is nested. Here, a new type is created that may be composed of a number of components. That type can then be used to make a column in a table, and nested within that column are those components. With Varrays you need to understand that the word "variable" in the phrase "variable-size arrays" doesn't apply to the size of the array in the way you might think that it would. The size the array is declared with is in fact fixed. The number of elements in the array is variable up to the declared size. Arguably then, variable-sized arrays aren't that variable in size. Acursoris a pointer to a private SQL area that stores information coming from a SELECT or data manipulation language (DML) statement (INSERT, UPDATE, DELETE, or MERGE). Acursorholds the rows (one or more) returned by a SQL statement. The set of rows thecursorholds is referred to as the active set.[12] Acursorcan be explicit or implicit. In a FOR loop, an explicit cursor shall be used if the query will be reused, otherwise an implicit cursor is preferred. If using a cursor inside a loop, use a FETCH is recommended when needing to bulk collect or when needing dynamic SQL. As a procedural language by definition, PL/SQL provides severaliterationconstructs, including basic LOOP statements,WHILE loops,FOR loops, and Cursor FOR loops. Since Oracle 7.3 the REF CURSOR type was introduced to allow recordsets to be returned from stored procedures and functions. Oracle 9i introduced the predefined SYS_REFCURSOR type, meaning we no longer have to define our own REF CURSOR types. [13] Loops can be terminated by using theEXITkeyword, or by raising anexception. Output: Cursor-for loops automatically open acursor, read in their data and close the cursor again. As an alternative, the PL/SQL programmer can pre-define the cursor's SELECT-statement in advance to (for example) allow re-use or make the code more understandable (especially useful in the case of long or complex queries). The concept of the person_code within the FOR-loop gets expressed with dot-notation ("."): While programmers can readily embedData Manipulation Language(DML) statements directly into PL/SQL code using straightforward SQL statements,Data Definition Language(DDL) requires more complex "Dynamic SQL" statements in the PL/SQL code. However, DML statements underpin the majority of PL/SQL code in typical software applications. In the case of PL/SQL dynamic SQL, early versions of the Oracle Database required the use of a complicated OracleDBMS_SQLpackage library. More recent versions have however introduced a simpler "Native Dynamic SQL", along with an associatedEXECUTE IMMEDIATEsyntax. PL/SQL works analogously to the embedded procedural languages associated with otherrelational databases. For example,SybaseASEandMicrosoftSQL ServerhaveTransact-SQL,PostgreSQLhasPL/pgSQL(which emulates PL/SQL to an extent),MariaDBincludes a PL/SQL compatibility parser,[14]andIBM Db2includes SQL Procedural Language,[15]which conforms to theISO SQL’sSQL/PSMstandard. The designers of PL/SQL modeled its syntax on that ofAda. Both Ada and PL/SQL havePascalas a common ancestor, and so PL/SQL also resembles Pascal in most aspects. However, the structure of a PL/SQL package does not resemble the basicObject Pascalprogram structure as implemented by aBorland DelphiorFree Pascalunit. 
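A sketch of a package of the kind just contrasted with an Object Pascal unit: the specification declares what is public, while the body adds a private helper function and uses a cursor FOR loop with dot-notation as described above. The employee table and all names are illustrative.

CREATE OR REPLACE PACKAGE payroll_pkg IS
    g_currency CONSTANT VARCHAR2(3) := 'USD';         -- public constant
    PROCEDURE print_raises(p_percent IN NUMBER);      -- public procedure
END payroll_pkg;

CREATE OR REPLACE PACKAGE BODY payroll_pkg IS

    FUNCTION raised(p_salary IN NUMBER, p_percent IN NUMBER) RETURN NUMBER IS
    BEGIN                                              -- private: not declared in the specification
        RETURN p_salary * (1 + p_percent / 100);
    END raised;

    PROCEDURE print_raises(p_percent IN NUMBER) IS
    BEGIN
        -- Cursor FOR loop: opens the cursor, fetches each row, and closes it automatically.
        FOR emp IN (SELECT name, salary FROM employee ORDER BY name) LOOP
            DBMS_OUTPUT.PUT_LINE(emp.name || ': ' ||
                                 raised(emp.salary, p_percent) || ' ' || g_currency);
        END LOOP;
    END print_raises;

END payroll_pkg;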
Programmers can define public and private global data-types, constants, and static variables in a PL/SQL package.[16] PL/SQL also allows for the definition of classes and instantiating these as objects in PL/SQL code. This resembles usage inobject-oriented programminglanguages likeObject Pascal,C++andJava. PL/SQL refers to a class as an "Abstract Data Type" (ADT) or "User Defined Type" (UDT), and defines it as anOracleSQL data-type as opposed to a PL/SQL user-defined type, allowing its use in both theOracleSQL Engine and theOraclePL/SQL engine. The constructor and methods of an Abstract Data Type are written in PL/SQL. The resulting Abstract Data Type can operate as an object class in PL/SQL. Such objects can also persist as column values in Oracle database tables. PL/SQL is fundamentally distinct fromTransact-SQL, despite superficial similarities. Porting code from one to the other usually involves non-trivial work, not only due to the differences in the feature sets of the two languages,[17]but also due to the very significant differences in the way Oracle and SQL Server deal withconcurrencyandlocking. TheStepSqliteproduct is a PL/SQL compiler for the popular small databaseSQLitewhich supports a subset of PL/SQL syntax. Oracle'sBerkeley DB11g R2 release added support forSQLbased on the popular SQLite API by including a version of SQLite in Berkeley DB.[18]Consequently, StepSqlite can also be used as a third-party tool to run PL/SQL code on Berkeley DB.[19]
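A sketch of an Abstract Data Type as described above: the type specification declares its attributes and a member method, the type body implements the method in PL/SQL, and the default constructor takes the attributes in declaration order. The type name, attributes and values are illustrative.

CREATE OR REPLACE TYPE address_typ AS OBJECT (
    street        VARCHAR2(100),
    street_number NUMBER,
    postcode      VARCHAR2(10),
    MEMBER FUNCTION one_line RETURN VARCHAR2
);

CREATE OR REPLACE TYPE BODY address_typ AS
    MEMBER FUNCTION one_line RETURN VARCHAR2 IS
    BEGIN
        RETURN street_number || ' ' || street || ', ' || postcode;
    END one_line;
END;

-- The type can be used as a column datatype in a table, or instantiated in PL/SQL:
DECLARE
    v_addr address_typ := address_typ('Main Street', 42, '02114');
BEGIN
    DBMS_OUTPUT.PUT_LINE(v_addr.one_line());
END;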
https://en.wikipedia.org/wiki/PL/SQL
Transact-SQL(T-SQL) isMicrosoft's andSybase's proprietary extension to theSQL(Structured Query Language) used to interact withrelational databases. T-SQL expands on the SQL standard to includeprocedural programming,local variables, various support functions for string processing, date processing, mathematics, etc. and changes to theDELETEandUPDATEstatements. Transact-SQL is central to usingMicrosoft SQL Server. All applications that communicate with an instance of SQL Server do so by sending Transact-SQL statements to the server, regardless of the user interface of the application. Stored proceduresin SQL Server are executable server-side routines. The advantage of stored procedures is the ability to pass parameters. Transact-SQL provides the following statements to declare and set local variables:DECLARE,SETandSELECT. Keywords for flow control in Transact-SQL includeBEGINandEND,BREAK,CONTINUE,GOTO,IFandELSE,RETURN,WAITFOR, andWHILE. IFandELSEallow conditional execution. This batch statement will print "It is the weekend" if the current date is a weekend day, or "It is a weekday" if the current date is a weekday. (Note: This code assumes that Sunday is configured as the first day of the week in the@@DATEFIRSTsetting.) BEGINandENDmark ablock of statements. If more than one statement is to be controlled by the conditional in the example above, we can useBEGINandENDlike this: WAITFORwill wait for a given amount of time, or until a particular time of day. The statement can be used for delays or to block execution until the set time. RETURNis used to immediately return from astored procedureor function. BREAKends the enclosingWHILEloop, whileCONTINUEcauses the next iteration of the loop to execute. An example of aWHILEloop is given below. In Transact-SQL, both theDELETEandUPDATEstatements are enhanced to enable data from another table to be used in the operation, without needing a subquery: This example deletes alluserswho have been flagged in theuser_flagstable with the 'idle' flag. BULKis a Transact-SQL statement that implements a bulk data-loading process, inserting multiple rows into a table, reading data from an external sequential file. Use ofBULK INSERTresults in better performance than processes that issue individualINSERTstatements for each row to be added. Additional details are availablein MSDN. Beginning with SQL Server 2005,[1]Microsoft introduced additionalTRY CATCHlogic to support exception type behaviour. This behaviour enables developers to simplify their code and leave out@@ERRORchecking after each SQL execution statement.
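Sketches of the flow-control constructs, the enhanced DELETE, and TRY/CATCH described above; the table, column and variable names are illustrative, and the weekend test assumes @@DATEFIRST is set so that Sunday is day 1 and Saturday is day 7.

-- IF / ELSE with BEGIN ... END blocks
IF DATEPART(dw, GETDATE()) IN (1, 7)
BEGIN
    PRINT 'It is the weekend';
END
ELSE
BEGIN
    PRINT 'It is a weekday';
END

-- WHILE loop with a local variable, CONTINUE and BREAK
DECLARE @i INT;
SET @i = 0;
WHILE @i < 5
BEGIN
    SET @i = @i + 1;
    IF @i = 3 CONTINUE;       -- skip the rest of this iteration
    IF @i > 4 BREAK;          -- leave the loop early
    PRINT @i;
END

-- DELETE drawing on another table, without a subquery
DELETE u
FROM   users AS u
JOIN   user_flags AS f ON f.user_id = u.user_id
WHERE  f.flag = 'idle';

-- TRY ... CATCH instead of checking @@ERROR after every statement
BEGIN TRY
    UPDATE users SET last_seen = GETDATE() WHERE user_id = 1;
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();
END CATCH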
https://en.wikipedia.org/wiki/Transact-SQL
Online transaction processing(OLTP) is a type ofdatabasesystem used in transaction-oriented applications, such as many operational systems. "Online" refers to the fact that such systems are expected to respond to user requests and process them in real-time (process transactions). The term is contrasted withonline analytical processing(OLAP) which instead focuses on data analysis (for exampleplanningandmanagement systems). The term "transaction" can have two different meanings, both of which might apply: in the realm of computers ordatabase transactionsit denotes an atomic change of state, whereas in the realm of business or finance, the term typically denotes an exchange of economic entities (as used by, e.g.,Transaction Processing Performance Councilorcommercial transactions.[1]): 50OLTP may use transactions of the first type to record transactions of the second type. OLTP is typically contrasted toonline analytical processing(OLAP), which is generally characterized by much more complex queries, in a smaller volume, for the purpose of business intelligence or reporting rather than to process transactions. Whereas OLTP systems process all kinds of queries (read, insert, update and delete), OLAP is generally optimized for read only and might not even support other kinds of queries. OLTP also operates differently frombatch processingandgrid computing.[1]: 15 In addition, OLTP is often contrasted toonline event processing(OLEP), which is based on distributedevent logsto offer strong consistency in large-scale heterogeneous systems.[2]Whereas OLTP is associated with short atomic transactions, OLEP allows for more flexible distribution patterns and higher scalability, but with increased latency and without guaranteed upper bound to the processing time. OLTP has also been used to refer to processing in which the system responds immediately to user requests. Anautomated teller machine(ATM) for a bank is an example of a commercial transaction processing application.[3]Online transaction processing applications have high throughput and are insert- or update-intensive in database management. These applications are used concurrently by hundreds of users. The key goals of OLTP applications are availability, speed, concurrency and recoverability (durability).[4]Reduced paper trails and the faster, more accurate forecast for revenues and expenses are both examples of how OLTP makes things simpler for businesses. However, like many modern online information technology solutions, some systems require offline maintenance, which further affects the cost-benefit analysis of an online transaction processing system. An OLTP system is an accessible data processing system in today's enterprises. Some examples of OLTP systems include order entry, retail sales, and financial transaction systems.[5]Online transaction processing systems increasingly require support for transactions that span a network and may include more than one company. For this reason, modern online transaction processing software uses client or server processing and brokering software that allows transactions to run on different computer platforms in a network. In large applications, efficient OLTP may depend on sophisticated transaction management software (such as IBMCICS) and/ordatabaseoptimization tactics to facilitate the processing of large numbers of concurrent updates to an OLTP-oriented database. 
For even more demanding decentralized database systems, OLTP brokering programs can distribute transaction processing among multiple computers on a network. OLTP is often integrated into service-oriented architecture (SOA) and Web services. Online transaction processing involves gathering input information, processing the data and updating existing data to reflect the collected and processed information. Today, most organizations use a database management system to support OLTP, and OLTP is typically carried out in a client-server system. Online transaction processing is chiefly concerned with concurrency and atomicity. Concurrency controls guarantee that two users accessing the same data in the database system cannot change that data simultaneously; one user must wait until the other has finished processing before changing that piece of data. Atomicity controls guarantee that all the steps in a transaction are completed successfully as a group: if any step in the transaction fails, all other steps must fail also.[6] To build an OLTP system, a designer must ensure that a large number of concurrent users does not interfere with the system's performance. To increase the performance of an OLTP system, a designer must avoid excessive use of indexes and clusters. A number of elements are crucial for the performance of OLTP systems.[4]
https://en.wikipedia.org/wiki/Online_transaction_processing
Arelational data stream management system (RDSMS)is a distributed, in-memorydata stream management system(DSMS) that is designed to use standards-compliantSQLqueries to process unstructured and structured data streams in real-time. Unlike SQL queries executed in a traditionalRDBMS, which return a result and exit, SQL queries executed in a RDSMS do not exit, generating results continuously as new data become available. Continuous SQL queries in a RDSMS use the SQL Window function to analyze, join and aggregate data streams over fixed or sliding windows. Windows can be specified as time-based or row-based. Continuous SQL queries in a RDSMS conform to theANSISQL standards. The most common RDSMS SQL query is performed with the declarativeSELECTstatement. A continuous SQLSELECToperates on data across one or more data streams, with optional keywords and clauses that includeFROMwith an optionalJOINsubclause to specify the rules for joining multiple data streams, theWHEREclause and comparison predicate to restrict the records returned by the query,GROUP BYto project streams with common values into a smaller set,HAVINGto filter records resulting from aGROUP BY, andORDER BYto sort the results. The following is an example of a continuous data stream aggregation using aSELECTquery that aggregates a sensor stream from a weather monitoring station. TheSELECTquery aggregates the minimum, maximum and average temperature values over a one-second time period, returning a continuous stream of aggregated results at one second intervals. RDSMS SQL queries also operate on data streams over time or row-based windows. The following example shows a second continuous SQL query using theWINDOWclause with a one-second duration. TheWINDOWclause changes the behavior of the query, to output a result for each new record as it arrives. Hence the output is a stream of incrementally updated results with zero result latency.
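A sketch of the two continuous queries described above, loosely following the streaming-SQL dialects used by products in this space; the stream name WeatherStream, its columns, and the STREAM, ROWTIME and FLOOR(... TO SECOND) constructs are assumptions, since exact syntax varies between RDSMS products.

-- Tumbling one-second aggregation: one result row per second.
SELECT STREAM
       FLOOR(s.ROWTIME TO SECOND) AS second_start,
       MIN(s.temperature) AS min_temp,
       MAX(s.temperature) AS max_temp,
       AVG(s.temperature) AS avg_temp
FROM   WeatherStream AS s
GROUP BY FLOOR(s.ROWTIME TO SECOND);

-- WINDOW clause: emit an updated aggregate for every arriving row,
-- computed over the preceding one second of the stream.
SELECT STREAM
       s.ROWTIME,
       MIN(s.temperature) OVER last_second AS min_temp,
       MAX(s.temperature) OVER last_second AS max_temp,
       AVG(s.temperature) OVER last_second AS avg_temp
FROM   WeatherStream AS s
WINDOW last_second AS (RANGE INTERVAL '1' SECOND PRECEDING);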
https://en.wikipedia.org/wiki/Relational_data_stream_management_system
MUMPS("Massachusetts General Hospital Utility Multi-Programming System"), orM, is an imperative, high-levelprogramming languagewith an integrated transaction processingkey–value database. It was originally developed atMassachusetts General Hospitalfor managing patient medical records and hospital laboratory information systems. MUMPS technology has since expanded as the predominant database for health information systems andelectronic health recordsin the United States. MUMPS-based information systems, such asEpic Systems', provide health information services for over 78% of patients across the U.S.[1] A unique feature of the MUMPS technology is its integrateddatabase language, allowing direct, high-speed read-write access to permanent disk storage.[2] MUMPS was developed byNeil Pappalardo,Robert A. Greenes, and Curt Marble in Dr. Octo Barnett's lab at theMassachusetts General Hospital(MGH) inBostonduring 1966 and 1967.[3]It grew out of frustration, during a National Institutes of Health (NIH)-support hospital information systems project at the MGH, with the development in assembly language on a time-shared PDP-1 by primary contractor Bolt Beranek & Newman, Inc. (BBN). MUMPS came out of an internal "skunkworks" project at MGH by Pappalardo, Greenes, and Marble to create an alternative development environment. As a result of initial demonstration of capabilities, Dr. Barnett's proposal to NIH in 1967 for renewal of the hospital computer project grant took the bold step of proposing that the system be built in MUMPS going forward, rather than relying on the BBN approach. The project was funded, and serious implementation of the system in MUMPS began. The original MUMPS system was, likeUnixa few years later, built on aDECPDP-7. Octo Barnett and Neil Pappalardo obtained abackward compatiblePDP-9, and began using MUMPS in the admissions cycle and laboratory test reporting. MUMPS was then aninterpreted language, yet even then, incorporated ahierarchical databasefile system to standardize interaction with the data and abstract disk operations so they were only done by the MUMPS language itself. MUMPS was also used in its earliest days in an experimental clinical progress note entry system[4]and a radiology report entry system.[5] Some aspects of MUMPS can be traced fromRAND Corporation'sJOSSthroughBBN'sTELCOMPandSTRINGCOMP. The MUMPS team chose to include portability between machines as a design goal. An advanced feature of the MUMPS language not widely supported inoperating systemsor incomputer hardwareof the era wasmultitasking. Althoughtime-sharingonmainframe computerswas increasingly common in systems such asMultics, most mini-computers did not run parallel programs and threading was not available at all. Even on mainframes, the variant of batch processing where a program was run to completion was the most common implementation for an operating system of multi-programming. It was a few years until Unix was developed. The lack of memory management hardware also meant that all multi-processing was fraught with the possibility that a memory pointer could change some other process. MUMPS programs do not have a standard way to refer to memory directly at all, in contrast toC language, so since the multitasking was enforced by the language, not by any program written in the language it was impossible to have the risk that existed for other systems. Dan Brevik's DEC MUMPS-15 system was adapted to a DECPDP-15, where it lived for some time. 
It was first installed at Health Data Management Systems of Denver in May 1971.[6]The portability proved to be useful and MUMPS was awarded a government research grant, and so MUMPS was released to the public domain which was a requirement for grants. MUMPS was soon ported to a number of other systems including the popular DECPDP-8, theData General Novaand on DECPDP-11and theArtronixPC12 minicomputer. Word about MUMPS spread mostly through the medical community, and was in widespread use, often being locally modified for their own needs. Versions of the MUMPS system were rewritten by technical leaders Dennis "Dan" Brevik and Paul Stylos[6]ofDECin 1970 and 1971. By the early 1970s, there were many and varied implementations of MUMPS on a range of hardware platforms. Another noteworthy platform was Paul Stylos'[6]DEC MUMPS-11 on the PDP-11, andMEDITECH'sMIIS. In the Fall of 1972, many MUMPS users attended a conference in Boston which standardized the then-fractured language, and created theMUMPS Users GroupandMUMPS Development Committee(MDC) to do so. These efforts proved successful; a standard was complete by 1974, and was approved, on September 15, 1977, asANSIstandard, X11.1-1977. At about the same time DEC launched DSM-11 (Digital Standard MUMPS) for the PDP-11. This quickly dominated the market, and became the reference implementation of the time. Also,InterSystemssold ISM-11 for the PDP-11 (which was identical to DSM-11). During the early 1980s several vendors brought MUMPS-based platforms that met the ANSI standard to market. The most significant were: This period also saw considerable MDC activity. The second revision of the ANSI standard for MUMPS (X11.1-1984) was approved on November 15, 1984. The chief executive of InterSystems disliked the name MUMPS and felt that it represented a serious marketing obstacle. Thus, favoring M to some extent became identified as alignment with InterSystems. The 1990 ANSI Standard was open to both M and MUMPS and after a "world-wide" discussion in 1992 the Mumps User Groups officially changed the name to M. The dispute also reflected rivalry between organizations (the M Technology Association, the MUMPS Development Committee, the ANSI and ISO Standards Committees) as to who determines the "official" name of the language.[citation needed] As of 2020, the ISO still mentions both M and MUMPS as officially accepted names.[16] Massachusetts General Hospitalregistered "MUMPS" as a trademark with the USPTO on November 28, 1971, and renewed it on November 16, 1992, but let it expire on August 30, 2003.[17] MUMPS is a language intended for and designed to build database applications. Secondary language features were included to help programmers make applications using minimal computing resources. The original implementations wereinterpreted, though modern implementations may be fully or partiallycompiled. Individual "programs" run in memory"partitions". Early MUMPS memory partitions were limited to 2048 bytes so aggressive abbreviation greatly aided multi-programming on severely resource limited hardware, because more than one MUMPS job could fit into the very small memories extant in hardware at the time. The ability to provide multi-user systems was another language design feature. The word "Multi-Programming" in the acronym points to this. Even the earliest machines running MUMPS supported multiple jobs running at the same time. 
With the change from mini-computers to micro-computers a few years later, even a "single user PC" with a single 8-bit CPU and 16K or 64K of memory could support multiple users, who could connect to it from (non-graphical)video display terminals. Since memory was tight originally, the language design for MUMPS valued very terse code. Thus, every MUMPS command or function name could be abbreviated from one to three letters in length, e.g.Quit(exit program) asQ,$P=$Piecefunction,R=Readcommand,$TR=$Translatefunction. Spaces and end-of-line markers are significant in MUMPS because line scope promoted the same terse language design. Thus, a single line of program code could express, with few characters, an idea for which other programming languages could require 5 to 10 times as many characters. Abbreviation was a common feature of languages designed in this period (e.g.,FOCAL-69, early BASICs such asTiny BASIC, etc.). An unfortunate side effect of this, coupled with the early need to write minimalist code, was that MUMPS programmers routinely did not comment code and used extensive abbreviations. This meant that even an expert MUMPS programmer could not just skim through a page of code to see its function but would have to analyze it line by line. Database interaction is transparently built into the language. The MUMPS language provides ahierarchical databasemade up ofpersistentsparse arrays, which is implicitly "opened" for every MUMPS application. All variable names prefixed with the caret character (^) use permanent (instead of RAM) storage, will maintain their values after the application exits, and will be visible to (and modifiable by) other running applications. Variables using this shared and permanent storage are calledGlobalsin MUMPS, because the scoping of these variables is "globally available" to all jobs on the system. The more recent and more common use of the name "global variables" in other languages is a more limited scoping of names, coming from the fact thatunscoped variablesare "globally" available to any programs running in the same process, but not shared among multiple processes. The MUMPS Storage mode (i.e. globals stored as persistent sparse arrays), gives the MUMPS database the characteristics of adocument-oriented database.[18] All variable names which are not prefixed with caret character (^) are temporary and private. Like global variables, they also have a hierarchical storage model, but are only "locally available" to a single job, thus they are called "locals". Both "globals" and "locals" can have child nodes (calledsubscriptsin MUMPS terminology). Subscripts are not limited to numerals—anyASCIIcharacter or group of characters can be a subscript identifier. While this is not uncommon for modern languages such as Perl or JavaScript, it was a highly unusual feature in the late 1970s. This capability was not universally implemented in MUMPS systems before the 1984 ANSI standard, as only canonically numeric subscripts were required by the standard to be allowed.[19]Thus, the variable named 'Car' can have subscripts "Door", "Steering Wheel", and "Engine", each of which can contain a value and have subscripts of their own. The variable^Car("Door")could have a nested variable subscript of "Color" for example. Thus, you could say to modify a nested child node of^Car. In MUMPS terms, "Color" is the 2nd subscript of the variable^Car(both the names of the child-nodes and the child-nodes themselves are likewise called subscripts). 
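The global structure described above can be pictured with a short sketch in a general-purpose language. The following Python fragment is only an approximation (the class and variable names are invented, and real MUMPS persists these nodes to disk and keeps subscripts in canonical order), but it shows the idea of a hierarchical sparse array such as ^Car addressed by arbitrary string subscripts:

```python
# A minimal Python sketch (not MUMPS itself) emulating a MUMPS global:
# a hierarchical sparse array addressed by arbitrary string or numeric
# subscripts.  Real MUMPS stores these nodes on disk transparently; here
# a nested dict stands in for that storage.

class GlobalNode:
    """One node of a hierarchical sparse array: a value plus child subscripts."""
    def __init__(self):
        self.value = None
        self.children = {}          # subscript -> GlobalNode

    def set(self, subscripts, value):
        node = self
        for s in subscripts:        # descend, creating nodes only as needed (sparse)
            node = node.children.setdefault(s, GlobalNode())
        node.value = value

    def get(self, subscripts):
        node = self
        for s in subscripts:
            node = node.children.get(s)
            if node is None:
                return None         # undefined node, like an unset MUMPS subscript
        return node.value

# ^Car("Door","Color")="BLUE" would correspond roughly to:
car = GlobalNode()
car.set(("Door", "Color"), "BLUE")
car.set(("Engine",), "V8")
print(car.get(("Door", "Color")))    # BLUE
print(car.get(("Steering Wheel",)))  # None -- never set, so it occupies no storage
```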
Hierarchical variables are similar to objects with properties in manyobject-orientedlanguages. Additionally, the MUMPS language design requires that all subscripts of variables are automatically kept in sorted order. Numeric subscripts (including floating-point numbers) are stored from lowest to highest. All non-numeric subscripts are stored in alphabetical order following the numbers. In MUMPS terminology, this iscanonical order. By using only non-negative integer subscripts, the MUMPS programmer can emulate thearraysdata type from other languages. Although MUMPS does not natively offer a full set ofDBMSfeatures such as mandatory schemas, several DBMS systems have been built on top of it that provide application developers with flat-file, relational, andnetwork databasefeatures. Additionally, there are built-in operators which treat a delimited string (e.g.,comma-separated values) as an array. Early MUMPS programmers would often store a structure of related information as a delimited string, parsing it after it was read in; this saved disk access time and offered considerable speed advantages on some hardware. MUMPS has no data types. Numbers can be treated as strings of digits, or strings can be treated as numbers by numeric operators (coerced, in MUMPS terminology). Coercion can have some odd side effects, however. For example, when a string is coerced, the parser turns as much of the string (starting from the left) into a number as it can, then discards the rest. Thus the statementIF 20<"30 DUCKS"is evaluated asTRUEin MUMPS. Other features of the language are intended to help MUMPS applications interact with each other in a multi-user environment. Database locks, process identifiers, andatomicityof database update transactions are all required of standard MUMPS implementations. In contrast to languages in the C orWirthtraditions, some space characters between MUMPS statements are significant. A single space separates a command from its argument, and a space, or newline, separates each argument from the next MUMPS token. Commands which take no arguments (e.g.,ELSE) require two following spaces. The concept is that one space separates the command from the (nonexistent) argument, the next separates the "argument" from the next command. Newlines are also significant; anIF,ELSEorFORcommand processes (or skips) everything else till the end-of-line. To make those statements control multiple lines, you must use theDOcommand to create a code block. A simple"Hello, World!" programin MUMPS might be: and would be run with the commanddo ^helloafter it has been saved to disk. For direct execution of the code a kind of "label" (any alphanumeric string) on the first position of the program line is needed to tell the mumps interpreter where to start execution. Since MUMPS allows commands to be strung together on the same line, and since commands can be abbreviated to a single letter, this routine could be made more compact: The ',!' after the text generates a newline. This code would return to the prompt. ANSI X11.1-1995 gives a complete, formal description of the language; an annotated version of this standard is available online.[20] Language features include: MUMPS supports multiple simultaneous users and processes even when the underlying operating system does not (e.g.,MS-DOS). Additionally, there is the ability to specify an environment for a variable, such as by specifying a machine name in a variable (as inSET ^|"DENVER"|A(1000)="Foo"), which can allow you to access data on remote machines. 
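Two of the behaviours described above, numeric coercion of strings and canonical subscript ordering, can be approximated with a small Python sketch. The function names and the regular expressions are illustrative simplifications, not the full rules of the MUMPS standard:

```python
# Illustrative Python sketch of two MUMPS behaviours described above:
# (1) numeric coercion takes the longest leading number of a string and
#     discards the rest, so 20 < "30 DUCKS" is true;
# (2) subscripts collate in canonical order: canonically numeric subscripts
#     first, lowest to highest, then all other strings in character order.
import re

def mumps_number(s):
    """Coerce a value to a number roughly the way MUMPS does: parse the
    longest leading numeric prefix; anything without one coerces to 0."""
    m = re.match(r'[+-]?\d*\.?\d+', str(s))
    return float(m.group()) if m else 0.0

print(mumps_number("30 DUCKS") > 20)         # True, mirroring IF 20<"30 DUCKS"

def canonical_key(subscript):
    """Sort key approximating MUMPS canonical subscript order."""
    s = str(subscript)
    m = re.fullmatch(r'-?(0|[1-9]\d*)(\.\d*[1-9])?', s)
    return (0, float(s), "") if m else (1, 0.0, s)

subs = ["10", "2", "Door", "-1", "Engine", "3.5"]
print(sorted(subs, key=canonical_key))       # ['-1', '2', '3.5', '10', 'Door', 'Engine']
```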
Some aspects of MUMPS syntax differ strongly from that of more modern languages, which can cause confusion, although those aspects vary between different versions of the language. On some versions, whitespace is not allowed within expressions, as it ends a statement:2 + 3is an error, and must be written2+3. All operators have the same precedence and areleft-associative(2+3*10evaluates to 50). The operators for "less than or equal to" and "greater than or equal to" are'>and'<(that is, the Boolean negation operator'plus a strict comparison operator in the opposite direction), although some versions allow the use of the more standard<=and>=respectively. Periods (.) are used to indent the lines in a DO block, not whitespace. The ELSE command does not need a corresponding IF, as it operates by inspecting the value in the built-in system variable$test. MUMPSscopingrules are more permissive than other modern languages. Declared local variables are scoped using the stack. A routine can normally see all declared locals of the routines below it on the call stack, and routines cannot prevent routines they call from modifying their declared locals, unless the caller manually creates a new stack level (do) and aliases each of the variables they wish to protect (. new x,y) before calling any child routines. By contrast, undeclared variables (variables created by using them, rather than declaration) are in scope for all routines running in the same process, and remain in scope until the program exits. Because MUMPS database references differ from internal variable references only in the caret prefix, it is dangerously easy to unintentionally edit the database, or even to delete a database "table".[21] The US Department of Veterans Affairs (formerly the Veterans Administration) was one of the earliest major adopters of the MUMPS language. Their development work (and subsequent contributions to the free MUMPS application codebase) was an influence on many medical users worldwide. In 1995, the Veterans Affairs' patient Admission/Tracking/Discharge system,Decentralized Hospital Computer Program(DHCP) was the recipient of the ComputerworldSmithsonian Awardfor best use of Information Technology in Medicine. In July 2006, the Department of Veterans Affairs (VA) /Veterans Health Administration(VHA) was the recipient of the Innovations in American Government Award presented by the Ash Institute of theJohn F. Kennedy School of GovernmentatHarvard Universityfor its extension of DHCP into the Veterans Health Information Systems and Technology Architecture (VistA). Nearly the entire VA hospital system in the United States, theIndian Health Service, and major parts of theDepartment of DefenseCHCShospital system use MUMPS databases for clinical data tracking. Other healthcare IT companies using MUMPS include: Many reference laboratories, such as DASA,Quest Diagnostics,[23]and Dynacare, use MUMPS software written by or based on Antrim Corporation code. Antrim was purchased by Misys Healthcare (nowSunquest Information Systems) in 2001.[24] MUMPS is also widely used in financial applications. MUMPS gained an early following in the financial sector and is in use at many banks and credit unions. It is used by theBank of EnglandandBarclays Bank.[25][26][27] Since 2005, the most popular implementations of MUMPS have been Greystone Technology MUMPS (GT.M) from Fidelity National Information Services, and Caché, from Intersystems Corporation. 
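The uniform operator precedence mentioned above can be illustrated with a minimal sketch of strict left-to-right evaluation. This is plain Python with invented helper names, not an implementation of the MUMPS expression grammar:

```python
# A small sketch of strict left-to-right evaluation, in which all binary
# operators share one precedence level, so 2+3*10 yields 50 rather than 32.
import re

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,
}

def eval_left_to_right(expr):
    tokens = re.findall(r'\d+(?:\.\d+)?|[+\-*/]', expr)
    result = float(tokens[0])
    for op, operand in zip(tokens[1::2], tokens[2::2]):
        result = OPS[op](result, float(operand))   # no precedence: apply as met
    return result

print(eval_left_to_right("2+3*10"))   # 50.0  (MUMPS-style, left-associative)
print(2 + 3 * 10)                     # 32    (conventional precedence)
```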
The European Space Agency announced on May 13, 2010, that it would use the InterSystems Caché database to support the Gaia mission, which aims to map the Milky Way with unprecedented precision.[28] InterSystems is in the process of phasing out Caché in favor of Iris.[29] Other current implementations include:
https://en.wikipedia.org/wiki/MUMPS
A fact constellation schema, also referred to as a galaxy schema, is a model using multiple fact tables and multiple dimension tables.[1] These schemas are implemented for complex data warehouses.[1] The fact constellation is used in online analytical processing and can be seen as an extension of the star schema. A fact constellation schema has many fact tables that share some common dimension tables; it is widely used and is more complex than star schemas and snowflake schemas. It is possible to create a fact constellation schema by splitting the original star schema into more star schemas.
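As a rough structural illustration (all table and column names are invented), the following Python sketch describes two fact tables that share the same dimension tables, which is the defining feature of a fact constellation:

```python
# Hypothetical galaxy schema: the "sales" and "shipping" fact tables share
# the same dimension tables, turning a single star into a constellation.
dimension_tables = {
    "date":  ["date_key", "day", "month", "year"],
    "store": ["store_key", "city", "region"],
    "item":  ["item_key", "brand", "category"],
}

fact_tables = {
    "sales":    {"measures": ["units_sold", "revenue"],
                 "dimension_keys": ["date_key", "store_key", "item_key"]},
    "shipping": {"measures": ["units_shipped", "cost"],
                 "dimension_keys": ["date_key", "store_key", "item_key"]},
}

# Dimensions referenced by more than one fact table are the shared ones.
shared = set(fact_tables["sales"]["dimension_keys"]) & \
         set(fact_tables["shipping"]["dimension_keys"])
print(sorted(shared))   # ['date_key', 'item_key', 'store_key']
```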
https://en.wikipedia.org/wiki/Fact_constellation
Thereverse star schemais aschemaoptimized for fastretrievalof large quantities of descriptivedata. The design was derived from a warehousestar schema,[1]and its adaptation for descriptive data required that certain key characteristics of the classic star schema be "reversed". The relation of the central table to those indimension tablesis one-to-many, or in some cases many-to-many rather than many-to-one; the primary keys of the central table are theforeign keysin dimension tables, and the main tables are, in general, smaller than the dimension tables. Main table columns are typically the source ofqueryconstraints, as opposed to dimension tables in the classical star schema. By starting queries with the smaller table, many results are filtered out early in the querying process, thereby streamlining the entiresearch path. To add further flexibility, more than one main table is allowed, with main and submain tables having a one-to-many relation. Each main table can have its own dimension tables. To provide furtherquery optimization, a data set can be partitioned into separate physical schemas on either the samedatabase serveror different database servers.
https://en.wikipedia.org/wiki/Reverse_star_schema
DOAP (Description Of A Project) is an RDF Schema and XML vocabulary to describe software projects, in particular free and open source software. It was created and initially developed by Edd Wilder-James (Edd Dumbill) to convey semantic information associated with open source software projects.[1][2] There are currently generators, validators, viewers, and converters that enable more projects to be included in the semantic web. In 2007, Freecode listed 43,000 projects as published with DOAP.[3] It was used in the Python Package Index but is no longer supported there. In 2025, it is normal practice for DOAP files to be included with GNOME source code.[4] Major properties include: homepage, developer, programming-language, os; an example in RDF/XML is sketched below. Other properties include Implements specification, anonymous root, platform, browse, mailing list, category, description, helper, tester, short description, audience, screenshots, translator, module, documenter, wiki, repository, name, repository location, language, service endpoint, created, download mirror, vendor, old homepage, revision, download page, license, bug database, maintainer, blog, file-release and release.[5]
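The sketch below shows what a minimal DOAP description in RDF/XML might look like, using the major properties listed above. The project name, homepage URL, and developer are invented, and the namespace URIs are the ones conventionally used for RDF, DOAP, and FOAF; the Python wrapper only checks that the document is well-formed XML and lists the DOAP properties used:

```python
# A hypothetical DOAP description embedded as a string, checked with the
# Python standard library's XML parser.
import xml.etree.ElementTree as ET

doap_example = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:doap="http://usefulinc.com/ns/doap#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <doap:Project>
    <doap:name>Example Project</doap:name>
    <doap:homepage rdf:resource="https://example.org/project"/>
    <doap:programming-language>Python</doap:programming-language>
    <doap:os>Linux</doap:os>
    <doap:developer>
      <foaf:Person>
        <foaf:name>Jane Developer</foaf:name>
      </foaf:Person>
    </doap:developer>
  </doap:Project>
</rdf:RDF>"""

root = ET.fromstring(doap_example)
DOAP = "{http://usefulinc.com/ns/doap#}"
project = root.find(DOAP + "Project")
# List the DOAP properties used, stripping the namespace for readability.
print([child.tag.replace(DOAP, "doap:") for child in project])
# ['doap:name', 'doap:homepage', 'doap:programming-language', 'doap:os', 'doap:developer']
```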
https://en.wikipedia.org/wiki/DOAP
Semantically Interlinked Online Communities Project(SIOC (/ʃɒk/SHOK[1])) is aSemantic Webtechnology. SIOC provides methods for interconnecting discussion methods such asblogs, forums and mailing lists to each other. It consists of the SIOContology, an open-standard machine-readable format for expressing the information contained both explicitly and implicitly inInternetdiscussion methods, of SIOCmetadataproducers for a number of popular blogging platforms andcontent management systems, and of storage and browsing/searching systems for leveraging this SIOC data. The SIOC vocabulary is based onRDFand is defined usingRDFS. SIOC documents may use other existing ontologies to enrich the information described. Additional information about the creator of the post can be described usingFOAFVocabulary and thefoaf:makerproperty. Rich content of the post (e.g., anHTMLrepresentation) can be described using theAtomOWLorRSS1.0 Content module. The SIOC project was started in 2004 byJohn BreslinandUldis BojarsatDERI,NUI Galway. In 2007, SIOC became aW3CMember Submission.[2]
https://en.wikipedia.org/wiki/Semantically-Interlinked_Online_Communities
hCardis amicroformatfor publishing the contact details (which might be no more than the name) of people, companies, organizations, and places, inHTML,Atom,RSS, or arbitraryXML.[1]The hCard microformat does this using a 1:1 representation ofvCard(RFC 2426) properties and values, identified using HTML classes andrelattributes. It allows parsing tools (for example other websites, orFirefox'sOperator extension) to extract the details, and display them, using some other websites ormappingtools, index or search them, or to load them into an address-book program. In May 2009,Googleannounced that they would be parsing the hCard andhReviewandhProductmicroformats, and using them to populate search-result pages.[2]In September 2010Googleannounced their intention to surface hCard,hReviewinformation in their local search results.[3]In February 2011,Facebookbegan using hCard to mark up event venues.[4] Consider the HTML: With microformat markup, that becomes: A profile may optionally be included in the page header: Here the propertiesfn,[5]nickname,org(organization),tel(telephone number) andurl(web address) have been identified using specific class names; and the whole thing is wrapped inclass="vcard"which indicates that the other classes form an hcard, and are not just coincidentally named. If the hCard is for an organization or venue, thefnandorgclasses are used on the same element, as in<span class="fn org">Wikipedia</span>or<span class="fn org">Wembley Stadium</span>. Other, optional hCard classes also exist. It is now possible for software, for example browser plug-ins, to extract the information, and transfer it to other applications, such as an address book. TheGeo microformatis a part of the hCard specification, and is often used to include the coordinates of a location within an hCard. Theadrpart of hCard can also be used as a stand-alone microformat. Here are theWikimedia Foundation's contact details as of February 2023[update], as a live hCard: The mark-up (wrapped for clarity) used is: In this example, thefnandorgproperties are combined on one element, indicating that this is the hCard for an organization, not a person. Other commonly used hCard attributes include
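A minimal example of the kind of markup described above, together with a deliberately naive extractor, is sketched below in Python. The contact details are invented, and a real hCard consumer would handle nesting, multiple values, and the full property set:

```python
# Hypothetical hCard markup and a naive property extractor built on the
# standard library's HTMLParser.
from html.parser import HTMLParser

hcard_html = """
<div class="vcard">
  <a class="fn org url" href="https://example.org/">Example Organisation</a>
  <span class="tel">+1-555-0100</span>
</div>
"""

class HCardExtractor(HTMLParser):
    PROPERTIES = {"fn", "org", "tel", "url", "nickname"}

    def __init__(self):
        super().__init__()
        self._pending = []          # hCard classes of the element being read
        self.found = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        classes = (attrs.get("class") or "").split()
        self._pending = [c for c in classes if c in self.PROPERTIES and c != "url"]
        if "url" in classes and "href" in attrs:
            self.found.setdefault("url", attrs["href"])   # url comes from href

    def handle_data(self, data):
        if data.strip():
            for prop in self._pending:
                self.found.setdefault(prop, data.strip())

    def handle_endtag(self, tag):
        self._pending = []

extractor = HCardExtractor()
extractor.feed(hcard_html)
print(extractor.found)
# {'url': 'https://example.org/', 'fn': 'Example Organisation',
#  'org': 'Example Organisation', 'tel': '+1-555-0100'}
```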
https://en.wikipedia.org/wiki/HCard
vCard, also known asVCF("Virtual Contact File"), is afile formatstandard for electronicbusiness cards. vCards can be attached toe-mailmessages, sent viaMultimedia Messaging Service(MMS), on theWorld Wide Web,instant messaging,NFCor throughQR code. They can containnameandaddressinformation,phone numbers,e-mailaddresses,URLs,logos,photographs, and audio clips. vCard is used as adata interchangeformat insmartphonecontacts,personal digital assistants(PDAs),personal information managers(PIMs) andcustomer relationship management systems(CRMs). To accomplish these data interchange applications, other "vCard variants" have been used and proposed as "variant standards", each for its specific niche:XMLrepresentation,JSONrepresentation, orweb pages. The standard Internet media type (MIMEtype) for a vCard has varied with each version of the specification.[1] vCards can be embedded inweb pages. RDFawith the vCard Ontology can be used in HTML and various XML-family languages, e.g. SVG, MathML. jCard, "TheJSONFormat for vCard" is a standard proposal of 2014 inRFC7095. RFC 7095 describes a lossless method of representing vCard instances in JSON, using arrays of sequence-dependent tag–value pairs. jCard has been incorporated into several other protocols, includingRDAP, the Protocol to AccessWhite SpaceDatabases (PAWS, described inRFC7545), andSIP, which (viaRFC8688) uses it to provide contact information for the operator of an intermediary which has rejected a call. hCardis a microformat that allows a vCard to be embedded inside an HTML page. It makes use ofCSSclass names to identify each vCard property. Normal HTML markup and CSS styling can be used alongside the hCard class names without affecting the webpage's ability to be parsed by a hCard parser. h-card is the microformats2 update to hCard. MeCardis a variation of vCard made byNTT DoCoMofor smartphones usingQR codes. It uses a very similar syntax, but in a more consolidated way as the storage space on QR codes is limited. It's also limited in the amount of data that can be stored, not just by the standard but the size of QR codes. An example of a simple vCard (from RFC 6350 of August, 2011, abbreviated): This is the vCard for "Simon Perreault" (the author of RFC 6350), with his birthday (omitting the year), email address and gender. vCard defines the following property types. All vCards begin withBEGIN:VCARDand end withEND:VCARD. All vCards must contain theVERSIONproperty, which specifies the vCard version.VERSIONmust come immediately afterBEGIN, except in the vCard 2.1 and 3.0 standards, which allows it to be anywhere in the vCard. Otherwise, properties can be defined in any order. This property was introduced in a separate RFC when the latest vCard version was 3.0. Therefore, 3.0 vCards might use this property without otherwise declaring it. Not supported in version 4.0. Instead, this information is stored in theLABELparameter of theADRproperty. Example:ADR;TYPE=home;LABEL="123 Main St\nNew York, NY 12345":;;123 Main St;New York;NY;12345;USA Not supported in version 4.0. Instead, this information is stored in theSORT-ASparameter of theNand/orORGproperties.
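A minimal sketch of building a vCard 4.0 and a corresponding MeCard string is shown below. The person and contact details are invented, and real vCards require proper escaping and line folding as specified in RFC 6350:

```python
# Hypothetical helpers that assemble a minimal vCard 4.0 and a MeCard string.
def make_vcard(full_name, email):
    lines = [
        "BEGIN:VCARD",
        "VERSION:4.0",            # VERSION must follow BEGIN in vCard 4.0
        f"FN:{full_name}",
        f"EMAIL:{email}",
        "END:VCARD",
    ]
    return "\r\n".join(lines) + "\r\n"   # vCard content lines are CRLF-terminated

def make_mecard(name_last_first, phone):
    # MeCard packs similar data into a single compact string for QR codes.
    return f"MECARD:N:{name_last_first};TEL:{phone};;"

print(make_vcard("Jane Example", "jane@example.org"))
print(make_mecard("Example,Jane", "+15550100"))
```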
https://en.wikipedia.org/wiki/VCard
XHTML Friends Network (XFN) is an HTML microformat developed by the Global Multimedia Protocols Group that provides a simple way to represent human relationships using links. XFN enables web authors to indicate relationships to the people in their blogrolls by adding one or more keywords as the rel attribute to their links.[1][2][3] XFN was the first microformat, introduced in December 2003.[1][failed verification] A friend of Jimmy Example could indicate that relationship by publishing a link on their site with rel="friend"; multiple values may be used, so if that friend has also met Jimmy, the two terms can be combined, as in the sketch below.
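The following sketch shows such links as string literals; the URL and name are placeholders, and the final lines simply show that a consumer can recover the individual XFN terms by splitting the rel value on whitespace:

```python
# Hypothetical XFN links: the relationship is carried entirely by the
# rel attribute of an ordinary anchor element.
friend_link = '<a href="https://jimmy.example.com/" rel="friend">Jimmy Example</a>'

# Multiple space-separated values may be combined, e.g. a friend who has
# also been met in person:
friend_met_link = '<a href="https://jimmy.example.com/" rel="friend met">Jimmy Example</a>'

# A consumer recovers the XFN terms by splitting the rel value:
rel_value = "friend met"
print(rel_value.split())   # ['friend', 'met']
```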
https://en.wikipedia.org/wiki/XHTML_Friends_Network
Circumscriptionis anon-monotonic logiccreated byJohn McCarthyto formalize thecommon senseassumption that things are as expected unless otherwise specified.[1][2]Circumscription was later used by McCarthy in an attempt to solve theframe problem. To implement circumscription in its initial formulation, McCarthy augmentedfirst-order logicto allow the minimization of theextensionof some predicates, where the extension of a predicate is the set of tuples of values the predicate is true on. This minimization is similar to theclosed-world assumptionthat what is not known to be true is false.[3] The original problem considered by McCarthy was that ofmissionaries and cannibals: there are three missionaries and three cannibals on one bank of a river; they have to cross the river using a boat that can only take two, with the additional constraint that cannibals must never outnumber the missionaries on either bank (as otherwise the missionaries would be killed and, presumably, eaten). The problem considered by McCarthy was not that of finding a sequence of steps to reach the goal (the article on themissionaries and cannibals problemcontains one such solution), but rather that of excluding conditions that are not explicitly stated. For example, the solution "go half a mile south and cross the river on the bridge" is intuitively not valid because the statement of the problem does not mention such a bridge. On the other hand, the existence of this bridge is not excluded by the statement of the problem either. That the bridge does not exist is a consequence of the implicit assumption that the statement of the problem contains everything that is relevant to its solution. Explicitly stating that a bridge does not exist is not a solution to this problem, as there are many other exceptional conditions that should be excluded (such as the presence of a rope for fastening the cannibals, the presence of a larger boat nearby, etc.) Circumscription was later used by McCarthy to formalize the implicit assumption ofinertia: things do not change unless otherwise specified. Circumscription seemed to be useful to avoid specifying that conditions are not changed by all actions except those explicitly known to change them; this is known as theframe problem. However, the solution proposed by McCarthy was later shown to lead to wrong results in some cases, as in theYale shooting problemscenario. Other solutions to the frame problem that correctly formalize the Yale shooting problem exist; some use circumscription but in a different way. While circumscription was initially defined in the first-order logic case, the particularization to the propositional case is easier to define.[4]Given apropositional formulaT{\displaystyle T}, its circumscription is the formula having only themodelsofT{\displaystyle T}that do not assign a variable to true unless necessary. Formally, propositional models can be represented by sets ofpropositional variables; namely, each model is represented by the set of propositional variables it assigns to true. For example, the model assigning true toa{\displaystyle a}, false tob{\displaystyle b}, and true toc{\displaystyle c}is represented by the set{a,c}{\displaystyle \{a,c\}}, becausea{\displaystyle a}andc{\displaystyle c}are exactly the variables that are assigned to true by this model. 
Given two modelsM{\displaystyle M}andN{\displaystyle N}represented this way, the conditionN⊆M{\displaystyle N\subseteq M}is equivalent toM{\displaystyle M}setting to true every variable thatN{\displaystyle N}sets to true. In other words,⊆{\displaystyle \subseteq }models the relation of "setting to true less variables".N⊂M{\displaystyle N\subset M}means thatN⊆M{\displaystyle N\subseteq M}but these two models do not coincide. This lets us define models that do not assign variables to true unless necessary. A modelM{\displaystyle M}of atheoryT{\displaystyle T}is calledminimal, if and only if there is no modelN{\displaystyle N}ofT{\displaystyle T}for whichN⊂M{\displaystyle N\subset M}. Circumscription is expressed by selecting only the minimal models. It is defined as follows: Alternatively, one can defineCIRC(T){\displaystyle CIRC(T)}as a formula having exactly the above set of models; furthermore, one can also avoid giving a definition ofCIRC{\displaystyle CIRC}and only define minimal inference asT⊨MQ{\displaystyle T\models _{M}Q}if and only if every minimal model ofT{\displaystyle T}is also a model ofQ{\displaystyle Q}. As an example, the formulaT=a∧(b∨c){\displaystyle T=a\land (b\lor c)}has three models: The first model is not minimal in the set of variables it assigns to true. Indeed, the second model makes the same assignments except forc{\displaystyle c}, which is assigned to false and not to true. Therefore, the first model is not minimal. The second and third models are incomparable: while the second assigns true tob{\displaystyle b}, the third assigns true toc{\displaystyle c}instead. Therefore, the models circumscribingT{\displaystyle T}are the second and third models of the list. A propositional formula having exactly these two models is the following one: Intuitively, in circumscription a variable is assigned to true only if this is necessary. Dually, if a variablecanbe false, itmustbe false. For example, at least one ofb{\displaystyle b}andc{\displaystyle c}must be assigned to true according toT{\displaystyle T}; in the circumscription exactly one of the two variables must be true. The variablea{\displaystyle a}cannot be false in any model ofT{\displaystyle T}and neither of the circumscription. The extension of circumscription with fixed and varying predicates is due toVladimir Lifschitz.[5]The idea is that some conditions are not to be minimized. In propositional logic terms, some variables are not to be falsified if possible. In particular, two kind of variables can be considered: The difference is that the value of the varying conditions are simply assumed not to matter. The fixed conditions instead characterize a possible situation, so that comparing two situations where these conditions have different value makes no sense. Formally, the extension of circumscription that incorporate varying and fixed variables is as follows, whereP{\displaystyle P}is the set of variables to minimize,Z{\displaystyle Z}the fixed variables, and the varying variables are those not inP∪Z{\displaystyle P\cup Z}: In words, minimization of the variables assigned to true is only done for the variables inP{\displaystyle P}; moreover, models are only compared if they assign the same values to the variables ofZ{\displaystyle Z}. All other variables are not taken into account while comparing models. The solution to the frame problem proposed by McCarthy is based on circumscription with no fixed conditions. 
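The propositional example above can be checked mechanically. The following brute-force Python sketch enumerates all truth assignments over {a, b, c}, keeps the models of T = a ∧ (b ∨ c), and selects those that are minimal under set inclusion; it is illustrative only and would not scale beyond toy formulas:

```python
# Brute-force propositional circumscription of T = a and (b or c):
# keep only the models whose set of true variables is minimal.
from itertools import product

VARS = ("a", "b", "c")

def satisfies(model):                      # model: set of variables assigned true
    return "a" in model and ("b" in model or "c" in model)

models = [frozenset(v for v, bit in zip(VARS, bits) if bit)
          for bits in product((0, 1), repeat=len(VARS))]
models = [m for m in models if satisfies(m)]

minimal = [m for m in models
           if not any(n < m for n in models)]   # no model true on strictly fewer vars

print(sorted(sorted(m) for m in minimal))   # [['a', 'b'], ['a', 'c']]
```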
In the propositional case, this solution can be described as follows: in addition to the formulae directly encoding what is known, one also define new variables representing changes in the values of the conditions; these new variables are then minimized. For example, of the domain in which there is a door that is closed at time 0 and in which the action of opening the door is executed at time 2, what is explicitly known is represented by the two formulae: The frame problem shows in this example as the problem that¬open1{\displaystyle \neg open_{1}}is not a consequence of the above formulae, while the door is supposed to stay closed until the action of opening it is performed. Circumscription can be used to this aim by defining new variableschange_opent{\displaystyle change\_open_{t}}to model changes and then minimizing them: As shown by theYale shooting problem, this kind of solution does not work. For example,¬open1{\displaystyle \neg {\text{open}}_{1}}is not yet entailed by the circumscription of the formulae above: the model in whichchange open0{\displaystyle {\text{change open}}_{0}}is true andchange open1{\displaystyle {\text{change open}}_{1}}is false is incomparable with the model with the opposite values. Therefore, the situation in which the door becomes open at time 1 and then remains open as a consequence of the action is not excluded by circumscription. Several other formalizations of dynamical domains not suffering from such problems have been developed (seeframe problemfor an overview). Many use circumscription but in a different way. The original definition of circumscription proposed by McCarthy is about first-order logic. The role of variables in propositional logic (something that can be true or false) is played in first-order logic by predicates. Namely, a propositional formula can be expressed in first-order logic by replacing each propositional variable with a predicate of zero arity (i.e., a predicate with no arguments). Therefore, minimization is done on predicates in the first-order logic version of circumscription: the circumscription of a formula is obtained forcing predicates to be false whenever possible.[6] Given a first-order logic formulaT{\displaystyle T}containing apredicateP{\displaystyle P}, circumscribing this predicate amounts to selecting only the models ofT{\displaystyle T}in whichP{\displaystyle P}is assigned to true on a minimal set of tuples of values. Formally, theextensionof a predicate in a first-order model is the set of tuples of values this predicate assign to true in the model. First-order models indeed includes the evaluation of each predicate symbol; such an evaluation tells whether the predicate is true or false for any possible value of its arguments.[7]Since each argument of a predicate must be a term, and each term evaluates to a value, the models tells whetherP(v1,…,vn){\displaystyle P(v_{1},\ldots ,v_{n})}is true for any possible tuple of values⟨v1,…,vn⟩{\displaystyle \langle v_{1},\ldots ,v_{n}\rangle }. The extension ofP{\displaystyle P}in a model is the set of tuples of terms such thatP(v1,…,vn){\displaystyle P(v_{1},\ldots ,v_{n})}is true in the model. The circumscription of a predicateP{\displaystyle P}in a formulaT{\displaystyle T}is obtained by selecting only the models ofT{\displaystyle T}with a minimal extension ofP{\displaystyle P}. 
For example, if a formula has only two models, differing only becauseP(v1,…,vn){\displaystyle P(v_{1},\ldots ,v_{n})}is true in one and false in the second, then only the second model is selected. This is because⟨v1,…,vn⟩{\displaystyle \langle v_{1},\ldots ,v_{n}\rangle }is in the extension ofP{\displaystyle P}in the first model but not in the second. The original definition by McCarthy was syntactical rather than semantical. Given a formulaT{\displaystyle T}and a predicateP{\displaystyle P}, circumscribingP{\displaystyle P}inT{\displaystyle T}is the following second-order formula: In this formulap{\displaystyle p}is a predicate of the same arity asP{\displaystyle P}. This is a second-order formula because it contains a quantification over a predicate. The subformulap<P{\displaystyle p<P}is a shorthand for: In this formula,x{\displaystyle x}is a n-tuple of terms, where n is the arity ofP{\displaystyle P}. This formula states that extension minimization has to be done: in order for a truth evaluation onP{\displaystyle P}of a model being considered, it must be the case that no other predicatep{\displaystyle p}can assign to false every tuple thatP{\displaystyle P}assigns to false and yet being different fromP{\displaystyle P}. This definition only allows circumscribing a single predicate. While the extension to more than one predicate is trivial, minimizing the extension of a single predicate has an important application: capturing the idea that things are usually as expected. This idea can be formalized by minimizing a single predicate expressing the abnormality of situations. In particular, every known fact is expressed in logic with the addition of a literal¬Abnormal(...){\displaystyle \neg Abnormal(...)}stating that the fact holds only in normal situations. Minimizing the extension of this predicate allows for reasoning under the implicit assumption that things are as expected (that is, they are not abnormal), and that this assumption is made only if possible (abnormality can be assumed false only if this is consistent with the facts.) Pointwise circumscriptionis a variant of first-order circumscription that has been introduced byVladimir Lifschitz.[8]The rationale of pointwise circumscription is that it minimizes the value of a predicate for each tuple of values separately, rather than minimizing the extension of the predicate. For example, there are two models ofP(a)≡P(b){\displaystyle P(a)\equiv P(b)}with domain{a,b}{\displaystyle \{a,b\}}, one settingP(a)=P(b)=false{\displaystyle P(a)=P(b)=false}and the other settingP(a)=P(b)=true{\displaystyle P(a)=P(b)=true}. Since the extension ofP{\displaystyle P}in the first model is∅{\displaystyle \emptyset }while the extension for the second is{a,b}{\displaystyle \{a,b\}}, circumscription only selects the first model. In the propositional case, pointwise and predicate circumscription coincide. In pointwise circumscription, each tuple of values is considered separately. For example, in the formulaP(a)≡P(b){\displaystyle P(a)\equiv P(b)}one would consider the value ofP(a){\displaystyle P(a)}separately fromP(b){\displaystyle P(b)}. A model is minimal only if it is not possible to turn any such value from true to false while still satisfying the formula. As a result, the model in whichP(a)=P(b)=true{\displaystyle P(a)=P(b)=true}is selected by pointwise circumscription because turning onlyP(a){\displaystyle P(a)}into false does not satisfy the formula, and the same happens forP(b){\displaystyle P(b)}. 
An earlier formulation of circumscription by McCarthy is based on minimizing thedomainof first-order models, rather than the extension of predicates. Namely, a model is considered less than another if it has a smaller domain and the two models coincide on the evaluation of the common tuples of values. This version of circumscription can be reduced to predicate circumscription. Formula circumscription was a later formalism introduced by McCarthy. This is a generalization of circumscription in which the extension of a formula is minimized, rather than the extension of a predicate. In other words, a formula can be specified so that the set of tuples of values of the domain that satisfy the formula is made as small as possible. Circumscription does not always correctly handle disjunctive information.Ray Reiterprovided the following example: a coin is tossed over a checkboard, and the result is that the coin is either on a black area, or on a white area, or both. However, there are a large number of other possible places where the coin is not supposed to be on; for example, it is implicit that the coin is not on the floor, or on the refrigerator, or on the surface of the Moon. Circumscription can therefore be used to minimize the extension ofOn{\displaystyle On}predicate, so thatOn(coin,moon){\displaystyle On({\text{coin}},{\text{moon}})}is false even if this is not explicitly stated. On the other hand, the minimization of theOn{\displaystyle On}predicate leads to the wrong result that the coin is either on a black area or on a white area,but not both. This is because the models in whichOn{\displaystyle On}is true only on(coin,white area){\displaystyle ({\text{coin}},{\text{white area}})}and only on(coin,black area){\displaystyle ({\text{coin}},{\text{black area}})}have a minimal extension ofOn{\displaystyle On}, while the model in which the extension ofOn{\displaystyle On}is composed of both pairs is not minimal. Theory curbing is a solution proposed byThomas Eiter,Georg Gottlob, andYuri Gurevich.[9]The idea is that the model that circumscription fails to select, the one in which bothOn(coin,white area){\displaystyle On({\text{coin}},{\text{white area}})}andOn(coin,black area){\displaystyle On({\text{coin}},{\text{black area}})}are true, is a model of the formula that is greater (w.r.t. the extension ofOn{\displaystyle On}) than both the two models that are selected. More specifically, among the models of the formula, the excluded model is the least upper bound of the two selected models. Theory curbing selects such least upper bounds models in addition to the ones selected by circumscription. This inclusion is done until the set of models is closed, in the sense that it includes all least upper bounds of all sets of models it contains.
https://en.wikipedia.org/wiki/Circumscription_(logic)
Default logicis anon-monotonic logicproposed byRaymond Reiterto formalize reasoning with default assumptions. Default logic can express facts like “by default, something is true”; by contrast, standard logic can only express that something is true or that something is false. This is a problem because reasoning often involves facts that are true in the majority of cases but not always. A classical example is: “birds typically fly”. This rule can be expressed in standard logic either by “all birds fly”, which is inconsistent with the fact that penguins do not fly, or by “all birds that are not penguins and not ostriches and ... fly”, which requires all exceptions to the rule to be specified. Default logic aims at formalizing inference rules like this one without explicitly mentioning all their exceptions. A default theory is a pair⟨W,D⟩{\displaystyle \langle W,D\rangle }.Wis a set of logical formulas, calledthe background theory, that formalize the facts that are known for sure.Dis a set ofdefault rules, each one being of the form: According to this default, if we believe thatPrerequisiteis true, and eachJustificationi{\displaystyle \mathrm {Justification} _{i}}fori=1,…,n{\displaystyle i=1,\dots ,n}is consistent with our current beliefs, we are led to believe thatConclusionis true. The logical formulae inWand all formulae in a default were originally assumed to befirst-order logicformulae, but they can potentially be formulae in an arbitrary formal logic. The case in which they are formulae inpropositional logicis one of the most studied. The default rule “birds typically fly” is formalized by the following default: This rule means that, "ifXis a bird, and it can be assumed that it flies, then we can conclude that it flies". A background theory containing some facts about birds is the following one: According to this default rule, a condor flies because the preconditionBird(Condor)is true and the justificationFlies(Condor)is not inconsistent with what is currently known. On the contrary,Bird(Penguin)does not allow concludingFlies(Penguin): even if the precondition of the defaultBird(Penguin)is true, the justificationFlies(Penguin)is inconsistent with what is known. From this background theory and this default,Bird(Bee)cannot be concluded because the default rule only allows derivingFlies(X)fromBird(X), but not vice versa. Deriving the antecedents of an inference rule from the consequences is a form of explanation of the consequences, and is the aim ofabductive reasoning. A common default assumption is that what is not known to be true is believed to be false. This is known as theClosed-World Assumption, and is formalized in default logic using a default like the following one for every factF. For example, the computer languageProloguses a sort of default assumption when dealing with negation: if a negative atom cannot be proved to be true, then it is assumed to be false. Note, however, that Prolog uses the so-callednegation as failure: when the interpreter has to evaluate the atom¬F{\displaystyle \neg F}, it tries to prove thatFis true, and conclude that¬F{\displaystyle \neg F}is true if it fails. In default logic, instead, a default having¬F{\displaystyle \neg F}as a justification can only be applied if¬F{\displaystyle \neg F}is consistent with the current knowledge. A default is categorical or prerequisite-free if it has no prerequisite (or, equivalently, its prerequisite istautological). A default is normal if it has a single justification that is equivalent to its conclusion. 
A default is supernormal if it is both categorical and normal. A default is seminormal if all its justifications entail its conclusion. A default theory is called categorical, normal, supernormal, or seminormal if all defaults it contains are categorical, normal, supernormal, or seminormal, respectively. A default rule can be applied to a theory if its precondition is entailed by the theory and its justifications are allconsistent withthe theory. The application of a default rule leads to the addition of its consequence to the theory. Other default rules may then be applied to the resulting theory.When the theory is such that no other default can be applied, the theory is called an extension of the default theory.The default rules may be applied in different order, and this may lead to different extensions. TheNixon diamondexample is a default theory with two extensions: SinceNixonis both aRepublicanand aQuaker, both defaults can be applied. However, applying the first default leads to the conclusion that Nixon is not a pacifist, which makes the second default not applicable. In the same way, applying the second default we obtain that Nixon is a pacifist, thus making the first default not applicable. This particular default theory has therefore two extensions, one in whichPacifist(Nixon)is true, and one in whichPacifist(Nixon)is false. The original semantics of default logic was based on thefixed pointof a function. The following is an equivalent algorithmic definition. If a default contains formulae with free variables, it is considered to represent the set of all defaults obtained by giving a value to all these variables. A defaultα:β1,…,βnγ{\displaystyle {\frac {\alpha :\beta _{1},\ldots ,\beta _{n}}{\gamma }}}is applicable to a propositional theoryTifT⊨α{\displaystyle T\models \alpha }and all theoriesT∪{βi}{\displaystyle T\cup \{\beta _{i}\}}are consistent. The application of this default toTleads to the theoryT∪{γ}{\displaystyle T\cup \{\gamma \}}. An extension can be generated by applying the following algorithm: This algorithm isnon-deterministic, as several defaults can alternatively be applied to a given theoryT. In the Nixon diamond example, the application of the first default leads to a theory to which the second default cannot be applied and vice versa. As a result, two extensions are generated: one in which Nixon is a pacifist and one in which Nixon is not a pacifist. The final check of consistency of the justifications of all defaults that have been applied implies that some theories do not have any extensions. In particular, this happens whenever this check fails for every possible sequence of applicable defaults. The following default theory has no extension: SinceA(b){\displaystyle A(b)}is consistent with the background theory, the default can be applied, thus leading to the conclusion thatA(b){\displaystyle A(b)}is false. This result however undermines the assumption that has been made for applying the first default. Consequently, this theory has no extensions. In a normal default theory, all defaults are normal: each default has the formϕ:ψψ{\displaystyle {\frac {\phi :\psi }{\psi }}}. A normal default theory is guaranteed to have at least one extension. Furthermore, the extensions of a normal default theory are mutually inconsistent, i.e., inconsistent with each other. 
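The extension-generation procedure described above can be sketched for a tiny propositional case. The following Python fragment is a deliberate simplification: theories are just sets of literals, so "entails" is membership and "consistent with" is absence of the complementary literal, which is enough to reproduce the two extensions of the Nixon diamond:

```python
# Sketch of extension computation for a literal-only propositional default
# theory (Nixon diamond).  Defaults are (prerequisite, justifications, conclusion).
from itertools import permutations

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

W = {"republican", "quaker"}
defaults = [
    ("republican", ["-pacifist"], "-pacifist"),   # Republicans are typically not pacifists
    ("quaker",     ["pacifist"],  "pacifist"),    # Quakers are typically pacifists
]

def extensions(W, defaults):
    found = set()
    for order in permutations(range(len(defaults))):     # try every application order
        theory, applied = set(W), []
        changed = True
        while changed:
            changed = False
            for i in order:
                pre, justs, concl = defaults[i]
                applicable = (pre in theory
                              and all(neg(j) not in theory for j in justs)
                              and concl not in theory)
                if applicable:
                    theory.add(concl)
                    applied.append(i)
                    changed = True
        # final check: justifications of applied defaults are still consistent
        if all(neg(j) not in theory for i in applied for j in defaults[i][1]):
            found.add(frozenset(theory))
    return found

for ext in extensions(W, defaults):
    print(sorted(ext))
# Two extensions: one containing 'pacifist', one containing '-pacifist'.
```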
A default theory can have zero, one, or more extensions. Entailment of a formula from a default theory can be defined in two ways: Thus, the Nixon diamond example theory has two extensions, one in which Nixon is a pacifist and one in which he is not a pacifist. Consequently, neither Pacifist(Nixon) nor ¬Pacifist(Nixon) is skeptically entailed, while both of them are credulously entailed. As this example shows, the credulous consequences of a default theory may be inconsistent with each other. The following alternative inference rules for default logic are all based on the same syntax as the original system. The justified and constrained versions of the inference rule assign at least one extension to every default theory. The following variants of default logic differ from the original one in both syntax and semantics. Default theories can be translated into theories in other logics and vice versa. The following conditions on translations have been considered: Translations are typically required to be faithful or at least consequence-preserving, while the conditions of modularity and same alphabet are sometimes ignored. The translatability between propositional default logic and the following logics has been studied: Whether translations exist depends on which conditions are imposed. Translations from propositional default logic to classical propositional logic cannot always generate a polynomially sized propositional theory, unless the polynomial hierarchy collapses. Translations to autoepistemic logic exist or not depending on whether modularity or the use of the same alphabet is required. The computational complexity of the following problems about default logic is known: Four systems implementing default logics are DeReS, XRay, GADeL, and Catala.
https://en.wikipedia.org/wiki/Default_logic
Negation as failure(NAF, for short) is anon-monotonicinference rule inlogic programming, used to derivenotp{\displaystyle \mathrm {not} ~p}(i.e. thatp{\displaystyle p}is assumed not to hold) from failure to derivep{\displaystyle p}. Note thatnotp{\displaystyle \mathrm {not} ~p}can be different from the statement¬p{\displaystyle \neg p}of thelogical negationofp{\displaystyle p}, depending on thecompletenessof the inference algorithm and thus also on the formal logic system. Negation as failure has been an important feature of logic programming since the earliest days of bothPlannerandProlog. In Prolog, it is usually implemented using Prolog's extralogical constructs. More generally, this kind of negation is known asweak negation,[1][2]in contrast with the strong (i.e. explicit, provable) negation. In Planner, negation as failure could be implemented as follows: which says that if an exhaustive search to provepfails, then assert¬p.[3]This states that propositionpshall be assumed as "not true" in any subsequent processing. However, Planner not being based on a logical model, a logical interpretation of the preceding remains obscure. In pure Prolog, NAF literals of the formnotp{\displaystyle \mathrm {not} ~p}can occur in the body of clauses and can be used to derive other NAF literals. For example, given only the four clauses NAF derivesnots{\displaystyle \mathrm {not} ~s},notr{\displaystyle \mathrm {not} ~r}andp{\displaystyle p}as well ast{\displaystyle t}andq{\displaystyle q}. The semantics of NAF remained an open issue until 1978, whenKeith Clarkshowed that it is correct with respect to the completion of the logic program, where, loosely speaking, "only" and←{\displaystyle \leftarrow }are interpreted as "if and only if", written as "iff" or "≡{\displaystyle \equiv }". For example, the completion of the four clauses above is The NAF inference rule simulates reasoning explicitly with the completion, where both sides of the equivalence are negated and negation on the right-hand side is distributed down toatomic formulae. For example, to shownotp{\displaystyle \mathrm {not} ~p}, NAF simulates reasoning with the equivalences In the non-propositional case, the completion needs to be augmented with equality axioms, to formalize the assumption that individuals with distinct names are distinct. NAF simulates this by failure of unification. For example, given only the two clauses NAF derivesnotp(c){\displaystyle \mathrm {not} ~p(c)}. The completion of the program is augmented with unique names axioms and domain closure axioms. The completion semantics is closely related both tocircumscriptionand to theclosed world assumption. The completion semantics justifies interpreting the resultnotp{\displaystyle \mathrm {not} ~p}of a NAF inference as the classical negation¬p{\displaystyle \neg p}ofp{\displaystyle p}. However, in 1987,Michael Gelfondshowed that it is also possible to interpretnotp{\displaystyle \mathrm {not} ~p}literally as "p{\displaystyle p}can not be shown", "p{\displaystyle p}is not known" or "p{\displaystyle p}is not believed", as inautoepistemic logic. The autoepistemic interpretation was developed further by Gelfond andLifschitzin 1988, and is the basis ofanswer set programming. 
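The query behaviour described above can be sketched with a naive top-down evaluator for a small propositional program. The clauses below are chosen for illustration so that the results match the derivations mentioned above (p, q and t succeed; r and s fail); real SLDNF resolution also needs loop handling that this sketch omits:

```python
# Naive top-down negation as failure for a propositional program:
# prove(p) succeeds if some rule for p has a provable body, and "not q"
# in a body succeeds exactly when prove(q) fails.
program = {
    # head: list of alternative bodies; each body is a list of literals,
    # where ("not", x) marks a negation-as-failure literal.
    "q": [[]],                               # q.
    "t": [[]],                               # t.
    "p": [["q", ("not", "r")]],              # p :- q, not r.
    "r": [["s"]],                            # r :- s.
}

def prove(atom):
    for body in program.get(atom, []):       # no rule for the atom => failure
        if all(not prove(lit[1]) if isinstance(lit, tuple) else prove(lit)
               for lit in body):
            return True
    return False

for a in ("p", "q", "r", "s", "t"):
    print(a, prove(a))
# p True, q True, r False, s False, t True -- so 'not r' and 'not s' are derived.
```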
The autoepistemic semantics of a pure Prolog program P with NAF literals is obtained by "expanding" P with a set of ground (variable-free) NAF literals Δ that isstablein the sense that In other words, a set of assumptions Δ about what can not be shown isstableif and only if Δ is the set of all sentences that truly can not be shown from the program P expanded by Δ. Here, because of the simple syntax of pure Prolog programs, "implied by" can be understood very simply as derivability using modus ponens and universal instantiation alone. A program can have zero, one or more stable expansions. For example, has no stable expansions. has exactly one stable expansionΔ = {notq} has exactly two stable expansionsΔ1= {notp}andΔ2= {notq}. The autoepistemic interpretation of NAF can be combined with classical negation, as in extended logic programming andanswer set programming. Combining the two negations, it is possible to express, for example
https://en.wikipedia.org/wiki/Negation_as_failure
Operational design domain(ODD) is a term for a particular operating context for an automated system, often used in the field ofautonomous vehicles. The context is defined by a set of conditions, including environmental, geographical, time of day, and other conditions. For vehicles, traffic and roadway characteristics are included. Manufacturers use ODD to indicate where/how their product operates safely. A given system may operate differently according to the immediate ODD.[1] The concept presumes that automated systems have limitations.[2]Relating system function to the ODD it supports is important for developers and regulators to establish and communicate safe operating conditions. Systems should operate within those limitations. Some systems recognize the ODD and modify their behavior accordingly. For example, an autonomous car might recognize that traffic is heavy and disable its automated lane change feature.[2] ODD is used forcars, forships,[3]trains,[4]agricultural robots,[5]and other robots. Various regulators have offered definitions of related terms: Physical infrastructure includes roadway types, surfaces, edges and geometry. Operational constraints include speed limits and traffic conditions. Environmental conditions include weather, illumination, etc. Zones include regions, states, school areas, and construction sites. In 2022,Mercedes-Benzannounced a product with an ODD ofLevel 3 autonomous drivingat 130 km/h.[17]
https://en.wikipedia.org/wiki/Operational_design_domain
The concept of astable model, oranswer set, is used to define a declarativesemanticsforlogic programswithnegation as failure. This is one of several standard approaches to the meaning ofnegationin logic programming, along withprogram completionand thewell-founded semantics. The stable model semantics is the basis ofanswer set programming. Research on the declarative semantics of negation in logic programming was motivated by the fact that the behavior ofSLDNFresolution—the generalization ofSLD resolutionused byPrologin the presence of negation in the bodies of rules—does not fully match thetruth tablesfamiliar from classicalpropositional logic. Consider, for instance, the program Given this program, the querypwill succeed, because the program includespas a fact; the queryqwill fail, because it does not occur in the head of any of the rules. The queryrwill fail also, because the only rule withrin the head contains the subgoalqin its body; as we have seen, that subgoal fails. Finally, the queryssucceeds, because each of the subgoalsp,not⁡q{\displaystyle \operatorname {not} q}succeeds. (The latter succeeds because the corresponding positive goalqfails.) To sum up, the behavior of SLDNF resolution on the given program can be represented by the following truth assignment: On the other hand, the rules of the given program can be viewed aspropositional formulasif we identify the comma with conjunction∧{\displaystyle \land }, the symbolnot{\displaystyle \operatorname {not} }with negation¬{\displaystyle \neg }, and agree to treatF←G{\displaystyle F\leftarrow G}as the implicationG→F{\displaystyle G\rightarrow F}written backwards. For instance, the last rule of the given program is, from this point of view, alternative notation for the propositional formula If we calculate thetruth valuesof the rules of the program for the truth assignment shown above then we will see that each rule gets the valueT. In other words, that assignment is amodelof the program. But this program has also other models, for instance Thus one of the models of the given program is special in the sense that it correctly represents the behavior of SLDNF resolution. What are the mathematical properties of that model that make it special? An answer to this question is provided by the definition of a stable model. The meaning of negation in logic programs is closely related to two theories ofnonmonotonic reasoning—autoepistemic logicanddefault logic. The discovery of these relationships was a key step towards the invention of the stable model semantics. The syntax of autoepistemic logic uses amodal operatorthat allows us to distinguish between what is true and what is known.Michael Gelfond[1987] proposed to readnot⁡p{\displaystyle \operatorname {not} p}in the body of a rule as "p{\displaystyle p}is not known", and to understand a rule with negation as the corresponding formula of autoepistemic logic. The stable model semantics, in its basic form, can be viewed as a reformulation of this idea that avoids explicit references to autoepistemic logic. In default logic, a default is similar to aninference rule, except that it includes, besides its premises and conclusion, a list of formulas called justifications. A default can be used to derive its conclusion under the assumption that its justifications are consistent with what is currently known. Nicole Bidoit and Christine Froidevaux [1987] proposed to treat negated atoms in the bodies of rules as justifications. 
For instance, the rule can be understood as the default that allows us to derives{\displaystyle s}fromp{\displaystyle p}assuming that¬q{\displaystyle \neg q}is consistent. The stable model semantics uses the same idea, but it does not explicitly refer to default logic. The definition of a stable model below, reproduced from [Gelfond and Lifschitz, 1988], uses two conventions. First, a truth assignment is identified with the set of atoms that get the valueT. For instance, the truth assignment is identified with the set{p,s}{\displaystyle \{p,s\}}. This convention allows us to use theset inclusionrelation to compare truth assignments with each other. The smallest of all truth assignments∅{\displaystyle \emptyset }is the one that makes every atom false; the largest truth assignment makes every atom true. Second, a logic program with variables is viewed as shorthand for the set of allgroundinstances of its rules, that is, for the result of substituting variable-free terms for variables in the rules of the program in all possible ways. For instance, the logic programming definition of even numbers is understood as the result of replacingXin this program by the ground terms in all possible ways. The result is the infinite ground program LetPbe a set of rules of the form whereA,B1,…,Bm,C1,…,Cn{\displaystyle A,B_{1},\dots ,B_{m},C_{1},\dots ,C_{n}}are ground atoms. IfPdoes not contain negation (n=0{\displaystyle n=0}in every rule of the program) then, by definition, the only stable model ofPis its model that is minimal relative to set inclusion.[1](Any program without negation has exactly one minimal model.) To extend this definition to the case of programs with negation, we need the auxiliary concept of the reduct, defined as follows. For any setIof ground atoms, thereductofPrelative toIis the set of rules without negation obtained fromPby first dropping every rule such that at least one of the atoms⁠Ci{\displaystyle C_{i}}⁠in its body belongs toI, and then dropping the partsnot⁡C1,…,not⁡Cn{\displaystyle \operatorname {not} C_{1},\dots ,\operatorname {not} C_{n}}from the bodies of all remaining rules. We say thatIis astable modelofPifIis the stable model of the reduct ofPrelative toI. (Since the reduct does not contain negation, its stable model has been already defined.) As the term "stable model" suggests, every stable model ofPis a model ofP. To illustrate these definitions, let us check that{p,s}{\displaystyle \{p,s\}}is a stable model of the program The reduct of this program relative to{p,s}{\displaystyle \{p,s\}}is (Indeed, sinceq∉{p,s}{\displaystyle q\not \in \{p,s\}}, the reduct is obtained from the program by dropping the partnot⁡q.{\displaystyle \operatorname {not} q.}) The stable model of the reduct is{p,s}{\displaystyle \{p,s\}}. (Indeed, this set of atoms satisfies every rule of the reduct, and it has no proper subsets with the same property.) Thus after computing the stable model of the reduct we arrived at the same set{p,s}{\displaystyle \{p,s\}}that we started with. Consequently, that set is a stable model. Checking in the same way the other 15 sets consisting of the atomsp,q,r,s{\displaystyle p,q,r,s}shows that this program has no other stable models. For instance, the reduct of the program relative to{p,q,r}{\displaystyle \{p,q,r\}}is The stable model of the reduct is{p}{\displaystyle \{p\}}, which is different from the set{p,q,r}{\displaystyle \{p,q,r\}}that we started with. A program with negation may have many stable models or no stable models. 
For instance, the program has two stable models{p}{\displaystyle \{p\}},{q}{\displaystyle \{q\}}. The one-rule program has no stable models. If we think of the stable model semantics as a description of the behavior ofPrologin the presence of negation then programs without a unique stable model can be judged unsatisfactory: they do not provide an unambiguous specification for Prolog-style query answering. For instance, the two programs above are not reasonable as Prolog programs—SLDNF resolution does not terminate on them. But the use of stable models inanswer set programmingprovides a different perspective on such programs. In that programming paradigm, a given search problem is represented by a logic program so that the stable models of the program correspond to solutions. Then programs with many stable models correspond to problems with many solutions, and programs without stable models correspond to unsolvable problems. For instance, theeight queens puzzlehas 92 solutions; to solve it using answer set programming, we encode it by a logic program with 92 stable models. From this point of view, logic programs with exactly one stable model are rather special in answer set programming, like polynomials with exactly one root in algebra. In this section, as in thedefinition of a stable modelabove, by a logic program we mean a set of rules of the form whereA,B1,…,Bm,C1,…,Cn{\displaystyle A,B_{1},\dots ,B_{m},C_{1},\dots ,C_{n}}are ground atoms. Any stable model of a finite ground program is not only a model of the program itself, but also a model of itscompletion[Marek and Subrahmanian, 1989]. The converse, however, is not true. For instance, the completion of the one-rule program is thetautologyp↔p{\displaystyle p\leftrightarrow p}. The model∅{\displaystyle \emptyset }of this tautology is a stable model ofp←p{\displaystyle p\leftarrow p}, but its other model{p}{\displaystyle \{p\}}is not. François Fages [1994] found a syntactic condition on logic programs that eliminates such counterexamples and guarantees the stability of every model of the program's completion. The programs that satisfy his condition are calledtight. Fangzhen Lin and Yuting Zhao [2004] showed how to make the completion of a nontight program stronger so that all its nonstable models will be eliminated. The additional formulas that they add to the completion are calledloop formulas. Thewell-founded modelof a logic program partitions all ground atoms into three sets: true, false and unknown. If an atom is true in the well-founded model ofP{\displaystyle P}then it belongs to every stable model ofP{\displaystyle P}. The converse, generally, does not hold. For instance, the program has two stable models,{p,r}{\displaystyle \{p,r\}}and{q,r}{\displaystyle \{q,r\}}. Even thoughr{\displaystyle r}belongs to both of them, its value in the well-founded model isunknown. Furthermore, if an atom is false in the well-founded model of a program then it does not belong to any of its stable models. Thus the well-founded model of a logic program provides a lower bound on the intersection of its stable models and an upper bound on their union. From the perspective ofknowledge representation, a set of ground atoms can be thought of as a description of a complete state of knowledge: the atoms that belong to the set are known to be true, and the atoms that do not belong to the set are known to be false. 
A possiblyincompletestate of knowledge can be described using a consistent but possibly incomplete set of literals; if an atomp{\displaystyle p}does not belong to the set and its negation does not belong to the set either then it is not known whetherp{\displaystyle p}is true or false. In the context of logic programming, this idea leads to the need to distinguish between two kinds of negation—negation as failure, discussed above, andstrong negation, which is denoted here by∼{\displaystyle \sim }.[2]The following example, illustrating the difference between the two kinds of negation, belongs toJohn McCarthy. A school bus may cross railway tracks under the condition that there is no approaching train. If we do not necessarily know whether a train is approaching then the rule using negation as failure is not an adequate representation of this idea: it says that it's okay to crossin the absence of informationabout an approaching train. The weaker rule, that uses strong negation in the body, is preferable: It says that it's okay to cross if weknowthat no train is approaching. To incorporate strong negation in the theory of stable models, Gelfond and Lifschitz [1991] allowed each of the expressionsA{\displaystyle A},Bi{\displaystyle B_{i}},Ci{\displaystyle C_{i}}in a rule to be either an atom or an atom prefixed with the strong negation symbol. Instead of stable models, this generalization usesanswer sets, which may include both atoms and atoms prefixed with strong negation. An alternative approach [Ferraris and Lifschitz, 2005] treats strong negation as a part of an atom, and it does not require any changes in the definition of a stable model. In this theory of strong negation, we distinguish between atoms of two kinds,positiveandnegative, and assume that each negative atom is an expression of the form∼A{\displaystyle {\sim }A}, whereA{\displaystyle A}is a positive atom. A set of atoms is calledcoherentif it does not contain "complementary" pairs of atomsA,∼A{\displaystyle A,{\sim }A}. Coherent stable models of a program are identical to its consistent answer sets in the sense of [Gelfond and Lifschitz, 1991]. For instance, the program has two stable models,{p,r}{\displaystyle \{p,r\}}and{q,r,∼r}{\displaystyle \{q,r,{\sim }r\}}. The first model is coherent; the second is not, because it contains both the atomr{\displaystyle r}and the atom∼r{\displaystyle {\sim }r}. According to [Gelfond and Lifschitz, 1991], theclosed world assumptionfor a predicatep{\displaystyle p}can be expressed by the rule (the relationp{\displaystyle p}does not hold for a tupleX1,…,Xn{\displaystyle X_{1},\dots ,X_{n}}if there is no evidence that it does). For instance, the stable model of the program consists of 2 positive atoms and 14 negative atoms i.e., the strong negations of all other positive ground atoms formed fromp,a,b,c,d{\displaystyle p,a,b,c,d}. A logic program with strong negation can include the closed world assumption rules for some of its predicates and leave the other predicates in the realm of theopen world assumption. The stable model semantics has been generalized to many kinds of logic programs other than collections of "traditional" rules discussed above—rules of the form whereA,B1,…,Bm,C1,…,Cn{\displaystyle A,B_{1},\dots ,B_{m},C_{1},\dots ,C_{n}}are atoms. 
One simple extension allows programs to containconstraints—rules with the empty head: Recall that a traditional rule can be viewed as alternative notation for a propositional formula if we identify the comma with conjunction∧{\displaystyle \land }, the symbolnot{\displaystyle \operatorname {not} }with negation¬{\displaystyle \neg }, and agree to treatF←G{\displaystyle F\leftarrow G}as the implicationG→F{\displaystyle G\rightarrow F}written backwards. To extend this convention to constraints, we identify a constraint with the negation of the formula corresponding to its body: We can now extend the definition of a stable model to programs with constraints. As in the case of traditional programs, to define stable models, we begin with programs that do not contain negation. Such a program may be inconsistent; then we say that it has no stable models. If such a programP{\displaystyle P}is consistent thenP{\displaystyle P}has a unique minimal model, and that model is considered the only stable model ofP{\displaystyle P}. Next, stable models of arbitrary programs with constraints are defined using reducts, formed in the same way as in the case of traditional programs (see thedefinition of a stable modelabove). A setI{\displaystyle I}of atoms is astable modelof a programP{\displaystyle P}with constraints if the reduct ofP{\displaystyle P}relative toI{\displaystyle I}has a stable model, and that stable model equalsI{\displaystyle I}. Theproperties of the stable model semanticsstated above for traditional programs hold in the presence of constraints as well. Constraints play an important role inanswer set programmingbecause adding a constraint to a logic programP{\displaystyle P}affects the collection of stable models ofP{\displaystyle P}in a very simple way: it eliminates the stable models that violate the constraint. In other words, for any programP{\displaystyle P}with constraints and any constraintC{\displaystyle C}, the stable models ofP∪{C}{\displaystyle P\cup \{C\}}can be characterized as the stable models ofP{\displaystyle P}that satisfyC{\displaystyle C}. In adisjunctive rule, the head may be the disjunction of several atoms: (the semicolon is viewed as alternative notation for disjunction∨{\displaystyle \lor }). Traditional rules correspond tok=1{\displaystyle k=1}, andconstraintstok=0{\displaystyle k=0}. To extend the stable model semantics to disjunctive programs [Gelfond and Lifschitz, 1991], we first define that in the absence of negation (n=0{\displaystyle n=0}in each rule) the stable models of a program are its minimal models. The definition of the reduct for disjunctive programs remainsthe same as before. A setI{\displaystyle I}of atoms is astable modelofP{\displaystyle P}ifI{\displaystyle I}is a stable model of the reduct ofP{\displaystyle P}relative toI{\displaystyle I}. For example, the set{p,r}{\displaystyle \{p,r\}}is a stable model of the disjunctive program because it is one of two minimal models of the reduct The program above has one more stable model,{q}{\displaystyle \{q\}}. As in the case of traditional programs, each element of any stable model of a disjunctive programP{\displaystyle P}is a head atom ofP{\displaystyle P}, in the sense that it occurs in the head of one of the rules ofP{\displaystyle P}. As in the traditional case, the stable models of a disjunctive program are minimal and form an antichain. Testing whether a finite disjunctive program has a stable model isΣ2P{\displaystyle \Sigma _{2}^{\rm {P}}}-complete[Eiter and Gottlob, 1993]. 
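The reduct construction and the minimality test described above can be prototyped directly, if only by brute force over candidate sets of atoms. The following Python sketch is an illustration of the definitions rather than an answer set solver; the encoding of rules as (head, positive body, negative body) triples is an assumption of this sketch, with an empty head standing for a constraint and several head atoms for a disjunctive rule.

from itertools import combinations

# A rule is a triple (head, pos, neg): head is a tuple of atoms (empty for a
# constraint, more than one for a disjunctive rule), pos the atoms B1..Bm of the
# positive body, neg the atoms C1..Cn that appear under negation as failure.

def reduct(program, i):
    # Gelfond-Lifschitz reduct relative to the set of atoms i: drop rules whose
    # negative body intersects i, delete the negative body from the remaining rules.
    return [(head, pos, ()) for head, pos, neg in program
            if not any(c in i for c in neg)]

def is_model(j, program):
    # j satisfies a negation-free rule if its body is not contained in j
    # or some head atom is in j (an empty head therefore acts as a constraint).
    return all(any(a in j for a in head) or not all(b in j for b in pos)
               for head, pos, _neg in program)

def subsets(atoms):
    atoms = tuple(atoms)
    return (set(c) for r in range(len(atoms) + 1) for c in combinations(atoms, r))

def is_stable(program, i):
    # i is stable iff it is a minimal model of the reduct of the program relative to i.
    r = reduct(program, i)
    return is_model(i, r) and not any(is_model(j, r) for j in subsets(i) if j < i)

def stable_models(program, atoms):
    return [i for i in subsets(atoms) if is_stable(program, i)]

# p.   r :- p, q.   s :- p, not q.          -> single stable model {p, s}
p1 = [(("p",), (), ()), (("r",), ("p", "q"), ()), (("s",), ("p",), ("q",))]
print(stable_models(p1, {"p", "q", "r", "s"}))        # [{'p', 's'}]

# p :- not q.   q :- not p.                 -> two stable models, {p} and {q}
p2 = [(("p",), (), ("q",)), (("q",), (), ("p",))]
print(stable_models(p2, {"p", "q"}))                  # [{'p'}, {'q'}] in some order

# p :- not p.                               -> no stable models
print(stable_models([(("p",), (), ("p",))], {"p"}))   # []

# Adding the constraint  :- q.  eliminates the stable model {q}.
print(stable_models(p2 + [((), ("q",), ())], {"p", "q"}))   # [{'p'}]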
Rules, and evendisjunctive rules, have a rather special syntactic form, in comparison with arbitrarypropositional formulas. Each disjunctive rule is essentially an implication such that itsantecedent(the body of the rule) is a conjunction ofliterals, and itsconsequent(head) is a disjunction of atoms. David Pearce [1997] and Paolo Ferraris [2005] showed how to extend the definition of a stable model to sets of arbitrary propositional formulas. This generalization has applications toanswer set programming. Pearce's formulation looks very different from theoriginal definition of a stable model. Instead of reducts, it refers toequilibrium logic—a system ofnonmonotonic logicbased onKripke models. Ferraris's formulation, on the other hand, is based on reducts, although the process of constructing the reduct that it uses differs from the onedescribed above. The two approaches to defining stable models for sets of propositional formulas are equivalent to each other. According to [Ferraris, 2005], thereductof a propositional formulaF{\displaystyle F}relative to a setI{\displaystyle I}of atoms is the formula obtained fromF{\displaystyle F}by replacing each maximal subformula that is not satisfied byI{\displaystyle I}with the logical constant⊥{\displaystyle \bot }(false). Thereductof a setP{\displaystyle P}of propositional formulas relative toI{\displaystyle I}consists of the reducts of all formulas fromP{\displaystyle P}relative toI{\displaystyle I}. As in the case of disjunctive programs, we say that a setI{\displaystyle I}of atoms is astable modelofP{\displaystyle P}ifI{\displaystyle I}is minimal (with respect to set inclusion) among the models of the reduct ofP{\displaystyle P}relative toI{\displaystyle I}. For instance, the reduct of the set relative to{p,s}{\displaystyle \{p,s\}}is Since{p,s}{\displaystyle \{p,s\}}is a model of the reduct, and the proper subsets of that set are not models of the reduct,{p,s}{\displaystyle \{p,s\}}is a stable model of the given set of formulas. Wehave seenthat{p,s}{\displaystyle \{p,s\}}is also a stable model of the same formula, written in logic programming notation, in the sense of theoriginal definition. This is an instance of a general fact: in application to a set of (formulas corresponding to) traditional rules, the definition of a stable model according to Ferraris is equivalent to the original definition. The same is true, more generally, forprograms with constraintsand fordisjunctive programs. The theorem asserting that all elements of any stable model of a programP{\displaystyle P}are head atoms ofP{\displaystyle P}can be extended to sets of propositional formulas, if we define head atoms as follows. An atomA{\displaystyle A}is ahead atomof a setP{\displaystyle P}of propositional formulas if at least one occurrence ofA{\displaystyle A}in a formula fromP{\displaystyle P}is neither in the scope of a negation nor in the antecedent of an implication. (We assume here that equivalence is treated as an abbreviation, not a primitive connective.) Theminimality and the antichain property of stable modelsof a traditional program do not hold in the general case. For instance, (the singleton set consisting of) the formula has two stable models,∅{\displaystyle \emptyset }and{p}{\displaystyle \{p\}}. The latter is not minimal, and it is a proper superset of the former. Testing whether a finite set of propositional formulas has a stable model isΣ2P{\displaystyle \Sigma _{2}^{\rm {P}}}-complete, as in the case ofdisjunctive programs.
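Ferraris's reduct for arbitrary propositional formulas can be prototyped in the same brute-force style. In the sketch below, the tuple encoding of formulas and the function names are assumptions of the example; satisfaction is evaluated classically, and a set of atoms counts as stable when it is minimal among the models of the reduct of the theory relative to it.

from itertools import combinations

# A formula is an atom (a string), the constant "bot" (false), or a tuple
# ("and", F, G), ("or", F, G), ("imp", F, G), ("not", F); not F abbreviates F -> bot.

def holds(i, f):
    # Classical satisfaction of formula f by the set of atoms i.
    if f == "bot":
        return False
    if isinstance(f, str):
        return f in i
    op, *args = f
    if op == "not":
        return not holds(i, args[0])
    if op == "and":
        return holds(i, args[0]) and holds(i, args[1])
    if op == "or":
        return holds(i, args[0]) or holds(i, args[1])
    return (not holds(i, args[0])) or holds(i, args[1])   # "imp"

def ferraris_reduct(f, i):
    # Replace every maximal subformula not satisfied by i with "bot".
    if not holds(i, f):
        return "bot"
    if isinstance(f, str):
        return f
    return (f[0],) + tuple(ferraris_reduct(g, i) for g in f[1:])

def subsets(atoms):
    atoms = tuple(atoms)
    return (set(c) for r in range(len(atoms) + 1) for c in combinations(atoms, r))

def is_stable(theory, i):
    red = [ferraris_reduct(f, i) for f in theory]
    if not all(holds(i, f) for f in red):
        return False
    return not any(all(holds(j, f) for f in red) for j in subsets(i) if j < i)

# The rules  p.  and  s :- p, not q.  written as formulas:
theory = ["p", ("imp", ("and", "p", ("not", "q")), "s")]
print([i for i in subsets({"p", "q", "s"}) if is_stable(theory, i)])   # [{'p', 's'}]

# p or not p  has the stable models {} and {p}; the second is not minimal,
# illustrating that stable models of arbitrary formulas need not form an antichain.
print([i for i in subsets({"p"}) if is_stable([("or", "p", ("not", "p"))], i)])
# [set(), {'p'}]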
https://en.wikipedia.org/wiki/Stable_model_semantics
The unique name assumption is a simplifying assumption made in some ontology languages and description logics. In logics with the unique name assumption, different names always refer to different entities in the world.[1] It was included in Ray Reiter's discussion of the closed-world assumption that is often tacitly made in database management systems (e.g. SQL), in his 1984 article "Towards a logical reconstruction of relational database theory" (in M. L. Brodie, J. Mylopoulos, J. W. Schmidt (editors), Data Modelling in Artificial Intelligence, Database and Programming Languages, Springer, 1984, pages 191–233). The standard ontology language OWL does not make this assumption, but provides explicit constructs to express whether two names denote the same or distinct entities.[2][3]
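How OWL states identity explicitly, in the absence of a unique name assumption, can be illustrated with a short sketch using the rdflib Python library (assumed to be available); the example.org IRIs and the individuals are hypothetical.

from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, FOAF

EX = Namespace("http://example.org/")
g = Graph()

# Two names that may or may not denote the same individual.
g.add((EX.MarkTwain, RDF.type, FOAF.Person))
g.add((EX.SamuelClemens, RDF.type, FOAF.Person))

# Without a unique name assumption, distinct IRIs say nothing about identity,
# so OWL lets us assert it explicitly either way:
g.add((EX.MarkTwain, OWL.sameAs, EX.SamuelClemens))       # same entity
g.add((EX.MarkTwain, OWL.differentFrom, EX.JaneAusten))   # distinct entities

print(g.serialize(format="turtle"))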
https://en.wikipedia.org/wiki/Unique_name_assumption
AnXML schemais a description of a type ofXMLdocument, typically expressed in terms of constraints on the structure and content of documents of that type, above and beyond the basic syntactical constraints imposed by XML itself. These constraints are generally expressed using some combination of grammatical rules governing the order of elements,Boolean predicatesthat the content must satisfy, data types governing the content of elements and attributes, and more specialized rules such asuniquenessandreferential integrityconstraints. There are languages developed specifically to express XML schemas. Thedocument type definition(DTD) language, which is native to the XML specification, is a schema language that is of relatively limited capability, but that also has other uses in XML aside from the expression of schemas. Two more expressive XML schema languages in widespread use areXML Schema(with a capitalS) andRELAX NG. The mechanism for associating an XML document with a schema varies according to the schema language. The association may be achieved via markup within the XML document itself, or via some external means. The XML Schema Definition is commonly referred to as XSD. The process of checking to see if a XML document conforms to a schema is calledvalidation, which is separate from XML's core concept of syntacticwell-formedness. All XML documents must be well-formed, but it is not required that a document be valid unless the XML parser is "validating", in which case the document is also checked for conformance with its associated schema. DTD-validatingparsersare most common, but some support XML Schema or RELAX NG as well. Validation of an instance document against a schema can be regarded as a conceptually separate operation from XML parsing. In practice, however, many schema validators are integrated with an XML parser. There are several different languages available for specifying an XML schema. Each language has its strengths and weaknesses. The primary purpose of a schema language is to specify what the structure of an XML document can be. This means which elements can reside in which other elements, which attributes are and are not legal to have on a particular element, and so forth. A schema is analogous to agrammarfor a language; a schema defines what the vocabulary for the language may be and what a valid "sentence" is. There are historic and current XML schema languages: The main ones (see also theISO 19757's endorsed languages) are described below. Though there are a number of schema languages available, the primary three languages areDocument Type Definitions,W3C XML Schema, andRELAX NG. Each language has its own advantages and disadvantages. DTDs are perhaps the most widely supported schema language for XML. Because DTDs are one of the earliest schema languages for XML, defined before XML even had namespace support, they are widely supported. Internal DTDs are often supported in XML processors; external DTDs are less often supported, but only slightly. Most large XML parsers, ones that support multiple XML technologies, will provide support for DTDs as well. Features available in XSD that are missing from DTDs include: XSD schemas are conventionally written as XML documents, so familiar editing and transformation tools can be used. As well as validation, XSD allows XML instances to be annotated with type information (thePost-Schema-Validation Infoset (PSVI)) which is designed to make manipulation of the XML instance easier in application programs. 
This may be by mapping the XSD-defined types to types in a programming language such as Java ("data binding") or by enriching the type system of XML processing languages such as XSLT and XQuery (known as "schema-awareness"). RELAX NG and W3C XML Schema allow for similar mechanisms of specificity. Both allow for a degree of modularity in their languages, including, for example, splitting the schema into multiple files. And both of them are, or can be, defined in[clarification needed]an XML language. RELAX NG does not have any analog toPSVI. Unlike W3C XML Schema, RELAX NG was designed so that validation and augmentation (adding type information and default values) are separate. W3C XML Schema has a formal mechanism for attaching a schema to an XML document, while RELAX NG intentionally avoids such mechanisms for security and interoperability reasons. RELAX NG has no ability to apply default attribute data to an element's list of attributes (i.e., changing the XML info set), while W3C XML Schema does. Again, this design is intentional and is to separate validation and augmentation.[8] W3C XML Schema has a rich "simple type" system built-in (xs:number, xs:date, etc., plus derivation of custom types), while RELAX NG has an extremely simplistic one because it is meant to use type libraries developed independently of RELAX NG, rather than grow its own. This is seen by some as a disadvantage. In practice it is common for a RELAX NG schema to use the predefined "simple types" and "restrictions" (pattern, maxLength, etc.) of W3C XML Schema. In W3C XML Schema a specific number or range of repetitions of patterns can be expressed whereas it is practically not possible to specify at all in RELAX NG (<oneOrMore> or <zeroOrMore>). W3C XML Schema is complex and hard to learn, although that is partially because it tries to do more than mere validation (seePSVI). Although being written in XML is an advantage, it is also a disadvantage in some ways. The W3C XML Schema language, in particular, can be quite verbose, while a DTD can be terse and relatively easily editable. Likewise, WXS's formal mechanism for associating a document with a schema can pose a potential security problem. For WXS validators that will follow aURIto an arbitrary online location, there is the potential for reading something malicious from the other side of the stream.[9] W3C XML Schema does not implement most of the DTD ability to provide data elements to a document. Although W3C XML Schema's ability to add default attributes to elements is an advantage, it is a disadvantage in some ways as well. It means that an XML file may not be usable in the absence of its schema, even if the document would validate against that schema. In effect, all users of such an XML document must also implement the W3C XML Schema specification, thus ruling out minimalist or older XML parsers. It can also slow down the processing of the document, as the processor must potentially download and process a second XML file (the schema); however, a schema would normally then be cached, so the cost comes only on the first use. WXS support exists in a number of large XML parsing packages.Xercesand the.NET Framework'sBase Class Libraryboth provide support for WXS validation. RELAX NG provides for most of the advantages that W3C XML Schema does over DTDs. While the language of RELAX NG can be written in XML, it also has an equivalent form that is much more like a DTD, but with greater specifying power. This form is known as the compact syntax. 
Tools can easily convert between these forms with no loss of features or even commenting. Even arbitrary elements specified between RELAX NG XML elements can be converted into the compact form. RELAX NG provides very strong support for unordered content. That is, it allows the schema to state that a sequence of patterns may appear in any order. RELAX NG also allows for non-deterministic content models. What this means is that RELAX NG allows the specification of a sequence like the following: When the validator encounters something that matches the "odd" pattern, it is unknown whether this is the optional last "odd" reference or simply one in the zeroOrMore sequence without looking ahead at the data. RELAX NG allows this kind of specification. W3C XML Schema requires all of its sequences to be fully deterministic, so mechanisms like the above must be either specified in a different way or omitted altogether. RELAX NG allows attributes to be treated as elements in content models. In particular, this means that one can provide the following: This block states that the element "some_element" must have an attribute named "has_name". This attribute can only take true or false as values, and if it is true, the first child element of the element must be "name", which stores text. If "name" did not need to be the first element, then the choice could be wrapped in an "interleave" element along with other elements. The order of the specification of attributes in RELAX NG has no meaning, so this block need not be the first block in the element definition. W3C XML Schema cannot specify such a dependency between the content of an attribute and child elements. RELAX NG's specification only lists two built-in types (string and token), but it allows for the definition of many more. In theory, the lack of a specific list allows a processor to support data types that are very problem-domain specific. Most RELAX NG schemas can be algorithmically converted into W3C XML Schemas and even DTDs (except when using RELAX NG features not supported by those languages, as above). The reverse is not true. As such, RELAX NG can be used as a normative version of the schema, and the user can convert it to other forms for tools that do not support RELAX NG. Most of RELAX NG's disadvantages are covered under the section on W3C XML Schema's advantages over RELAX NG. Though RELAX NG's ability to support user-defined data types is useful, it comes at the disadvantage of only having two data types that the user can rely upon. Which, in theory, means that using a RELAX NG schema across multiple validators requires either providing those user-defined data types to that validator or using only the two basic types. In practice, however, most RELAX NG processors support the W3C XML Schema set of data types. Schematron is a fairly unusual schema language. Unlike the main three, it defines an XML file's syntax as a list ofXPath-based rules. If the document passes these rules, then it is valid. Because of its rule-based nature, Schematron's specificity is very strong. It can require that the content of an element be controlled by one of its siblings. It can also request or require that the root element, regardless of what element that happens to be, have specific attributes. It can even specify required relationships between multiple XML files. While Schematron is good at relational constructs, its ability to specify the basic structure of a document, that is, which elements can go where, results in a very verbose schema. 
The typical way to solve this is to combine Schematron with RELAX NG or W3C XML Schema. There are several schema processors available for both languages that support this combined form. This allows Schematron rules to specify additional constraints to the structure defined by W3C XML Schema or RELAX NG. Schematron's reference implementation is actually anXSLTtransformation that transforms the Schematron document into an XSLT that validates the XML file. As such, Schematron's potential toolset is any XSLT processor, thoughlibxml2provides an implementation that does not require XSLT.Sun Microsystems's Multiple Schema Validator forJavahas an add-on that allows it to validate RELAX NG schemas that have embedded Schematron rules. This is not technically a schema language. Its sole purpose is to direct parts of documents to individual schemas based on the namespace of the encountered elements. An NRL is merely a list ofXML namespacesand a path to a schema that each corresponds to. This allows each schema to be concerned with only its own language definition, and the NRL file routes the schema validator to the correct schema file based on the namespace of that element. This XML format is schema-language agnostic and works for just about any schema language. Capitalization in theschemaword: there is some confusion as to when to use the capitalized spelling "Schema" and when to use the lowercase spelling. The lowercase form is a generic term and may refer to any type of schema, including DTD, XML Schema (aka XSD), RELAX NG, or others, and should always be written using lowercase except when appearing at the start of a sentence. The form "Schema" (capitalized) in common use in the XML community always refers toW3C XML Schema. The focus of theschemadefinition is structure and some semantics of documents. However, schema design, just like design of databases, computer program, and other formal constructs, also involve many considerations of style, convention, and readability. Extensive discussions of schema design issues can be found in (for example) Maler (1995)[10]and DeRose (1997).[11] Languages:
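The validation workflow that runs through this comparison — parse an instance document, then check it against a DTD, a W3C XML Schema, or a RELAX NG schema — can be sketched with the lxml Python bindings for libxml2. The file names below are placeholders, and Schematron support additionally depends on how the underlying libxml2 library was built.

from lxml import etree

doc = etree.parse("invoice.xml")                  # the instance document (placeholder name)

# DTD validation (external DTD loaded from a file).
dtd = etree.DTD(open("invoice.dtd", "rb"))
print("DTD valid:", dtd.validate(doc))

# W3C XML Schema (XSD) validation, with access to the error log.
xsd = etree.XMLSchema(etree.parse("invoice.xsd"))
if not xsd.validate(doc):
    for error in xsd.error_log:
        print(error.line, error.message)

# RELAX NG validation (XML syntax; a compact-syntax schema would first be
# converted with a tool such as trang).
rng = etree.RelaxNG(etree.parse("invoice.rng"))
print("RELAX NG valid:", rng.validate(doc))

# Schematron rules can be applied in the same way via etree.Schematron(...)
# where the underlying libxml2 build supports it.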
https://en.wikipedia.org/wiki/XML_Schema_Language_Comparison
An XML transformation language is a programming language designed specifically to transform an input XML document into an output document which satisfies some specific goal. There are two special cases of transformation: XML-to-XML and XML-to-data. Because an XML-to-XML transformation outputs another XML document, chains of XML-to-XML transformations form XML pipelines. The XML-to-data case covers several important targets; the most notable is XML to HTML (HyperText Markup Language), since an HTML document is not an XML document. The earliest transformation languages predate the advent of XML as an SGML profile, and thus accept input in arbitrary SGML rather than specifically XML. These include the SGML-to-SGML link process definition (LPD) format defined as part of the SGML standard itself; in SGML (but not XML), the LPD file can be referenced from the document itself by a LINKTYPE declaration, similarly to the DOCTYPE declaration used for a DTD.[1] Other such transformation languages, addressing some of the deficiencies of LPDs, include the Document Style Semantics and Specification Language (DSSSL) and OmniMark.[2] Newer transformation languages tend to target XML specifically, and thus accept only XML, not arbitrary SGML.
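An XML-to-XML or XML-to-HTML transformation of the kind described above is today most commonly expressed in XSLT. The following Python sketch applies a stylesheet with the lxml library; the file names and the stylesheet parameter are placeholders.

from lxml import etree

# A transformation compiled from an XSLT stylesheet (placeholder file name).
transform = etree.XSLT(etree.parse("to-html.xsl"))

source = etree.parse("report.xml")               # input XML document
result = transform(source)                       # output tree; HTML if the stylesheet produces it

# Stylesheet parameters can be passed as strings, e.g. a document title:
result = transform(source, title=etree.XSLT.strparam("Quarterly report"))

print(str(result))                               # serialize the output document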
https://en.wikipedia.org/wiki/XML_transformation_language
In software, an XML pipeline is formed when XML (Extensible Markup Language) processes, especially XML transformations and XML validations, are connected. For instance, given two transformations T1 and T2, the two can be connected so that an input XML document is transformed by T1 and then the output of T1 is fed as the input document to T2. Simple pipelines like the one described above are called linear; a single input document always goes through the same sequence of transformations to produce a single output document. Linear operations fall into at least two kinds: those that operate at the inner document level, and those that take the input document as a whole; the latter were mainly introduced in XProc and help to handle a sequence of documents as a whole. Pipelines may also include non-linear operations. Some standards also categorize transformations as macro (changes impacting an entire file) or micro (impacting only an element or attribute). XML pipeline languages are used to define pipelines. A program written in an XML pipeline language is implemented by software known as an XML pipeline engine, which creates processes, connects them together and finally executes the pipeline. Several XML pipeline languages exist, and different implementations support different granularities of flow. Until May 2010, there was no widely used standard for XML pipeline languages; however, with the introduction of the W3C XProc standard as a W3C Recommendation in May 2010,[6] widespread adoption can be expected.
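A linear pipeline of the T1/T2 kind described above can be sketched in a few lines of Python with the lxml library: the output tree of one XSLT step becomes the input of the next, with an optional validation step in between. The file names and the intermediate schema are placeholders, and a real pipeline engine such as an XProc processor would add error handling and non-linear steps.

from lxml import etree

# Two transformation steps of a linear pipeline (placeholder stylesheets).
t1 = etree.XSLT(etree.parse("normalize.xsl"))
t2 = etree.XSLT(etree.parse("publish.xsl"))

# An optional validation step between the transformations (placeholder schema).
schema = etree.XMLSchema(etree.parse("intermediate.xsd"))

def pipeline(path):
    doc = etree.parse(path)          # input document
    stage1 = t1(doc)                 # output of T1 ...
    schema.assertValid(stage1)       # ... validated before being fed to T2
    return t2(stage1)                # output of T2 is the pipeline result

result = pipeline("input.xml")
print(str(result))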
https://en.wikipedia.org/wiki/XML_pipeline
An XML log, or XML logging, is used by many computer programs to record the program's operations. An XML logfile records a description of the operations performed by a program during its session. The log normally includes a timestamp, the program's settings during the operation, what was completed during the session, the files or directories used, and any errors that may have occurred. In computing, a logfile records events that occur in an operating system or in other running software; it may also log messages exchanged between different users of a communication program. The XML standard is maintained by the World Wide Web Consortium and serves as the basis for many other data standards; see the List of XML markup languages. XML is short for eXtensible Markup Language.[1][2][3]
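What such a log might look like can be sketched with Python's standard xml.etree.ElementTree module. The element and attribute names used below (session, entry, timestamp, level, and so on) are illustrative only and are not part of any standard.

from datetime import datetime, timezone
from xml.etree import ElementTree as ET

log = ET.Element("session", program="backup-tool", version="1.4")

def log_entry(level, message, **details):
    """Append one operation record with a timestamp and optional details."""
    entry = ET.SubElement(log, "entry", level=level,
                          timestamp=datetime.now(timezone.utc).isoformat())
    ET.SubElement(entry, "message").text = message
    for name, value in details.items():
        ET.SubElement(entry, name).text = str(value)

log_entry("info", "session started", settings="incremental")
log_entry("info", "directory copied", path="/home/alice/docs", files=42)
log_entry("error", "file skipped", path="/home/alice/locked.db", reason="permission denied")

ET.indent(log)                                   # pretty-print (Python 3.9+)
ET.ElementTree(log).write("backup.log.xml", encoding="unicode", xml_declaration=True)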
https://en.wikipedia.org/wiki/XML_log
Christophe de Dinechin is a French computer scientist, with contributions in video games, programming languages and operating systems. De Dinechin contributed to C++, notably a high-performance exception handling implementation[1] that became a de facto standard in the industry.[2] He was one of the proponents of a portable C++ ABI, initially developed for Itanium but now widely used across platforms.[3] De Dinechin is the designer of the XL programming language and the associated concept programming methodology.[4] "XL" is named for "eXtensible Language". XL features programmer-reconfigurable syntax and semantics: compiler plug-ins can be used to add new features to the language. A base set of plug-ins implements a relatively standard imperative language, and programmers can write their own plug-ins to implement application-specific notations, such as symbolic differentiation, which can then be used as readily as built-in language features. Other projects exploit similar ideas to create code with a higher level of abstraction. As the initial developer of Alpha Waves, a "groundbreaking" Atari ST game (listed in the Guinness World Records as the first 3D platform game[5]), de Dinechin heavily influenced Frederick Raynal, the main developer of Alone in the Dark.[6] De Dinechin also wrote a few viral games for HP-48 calculators,[7][8] and was the first person to take advantage of hardware scrolling on these machines.[9] In the early 2000s, he worked as a software architect for HP-UX,[10] and was the initial designer of HP's virtualisation platform for Itanium servers, HP Integrity Virtual Machines; he was awarded 10 US patents for this work.[11] Since 2022, he has also been the initiator and maintainer of DB48X, a new implementation of RPL.[12][13] Christophe de Dinechin did the initial port of Emacs to the Aqua user interface.[14] He wrote a variety of open-source drivers for the HP DE200C Digital Entertainment Center,[15] turning it from a web-connected CD player into a true digital video recorder. Between 2010 and 2017, he was the CEO of Taodyne, a company developing a 3D animation tool that uses a derivative of his XL programming language, called Tao3D, to describe dynamic documents.[16] De Dinechin has published three books.
https://en.wikipedia.org/wiki/Concept_programming
Aprogramming languageis a system of notation for writingcomputer programs.[1]Programming languages are described in terms of theirsyntax(form) andsemantics(meaning), usually defined by aformal language. Languages usually provide features such as atype system,variables, and mechanisms forerror handling. Animplementationof a programming language is required in order toexecuteprograms, namely aninterpreteror acompiler. An interpreter directly executes the source code, while acompilerproduces anexecutableprogram. Computer architecturehas strongly influenced the design of programming languages, with the most common type (imperative languages—which implement operations in a specified order) developed to perform well on the popularvon Neumann architecture. While early programming languages were closely tied to thehardware, over time they have developed moreabstractionto hide implementation details for greater simplicity. Thousands of programming languages—often classified as imperative,functional,logic, orobject-oriented—have been developed for a wide variety of uses. Many aspects of programming language design involve tradeoffs—for example,exception handlingsimplifies error handling, but at a performance cost.Programming language theoryis the subfield ofcomputer sciencethat studies the design, implementation, analysis, characterization, and classification of programming languages. Programming languages differ fromnatural languagesin that natural languages are used for interaction between people, while programming languages are designed to allow humans to communicate instructions to machines.[citation needed] The termcomputer languageis sometimes used interchangeably with "programming language".[2]However, usage of these terms varies among authors. In one usage, programming languages are described as a subset of computer languages.[3]Similarly, the term "computer language" may be used in contrast to the term "programming language" to describe languages used in computing but not considered programming languages.[citation needed]Most practical programming languages are Turing complete,[4]and as such are equivalent in what programs they can compute. Another usage regards programming languages as theoretical constructs for programmingabstract machinesand computer languages as the subset thereof that runs on physical computers, which have finite hardware resources.[5]John C. Reynoldsemphasizes thatformal specificationlanguages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, despite the fact they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats.[6] The first programmable computers were invented at the end of the 1940s, and with them, the first programming languages.[7]The earliest computers were programmed infirst-generation programming languages(1GLs),machine language(simple instructions that could be directly executed by the processor). 
This code was very difficult to debug and was notportablebetween different computer systems.[8]In order to improve the ease of programming,assembly languages(orsecond-generation programming languages—2GLs) were invented, diverging from the machine language to make programs easier to understand for humans, although they did not increase portability.[9] Initially, hardware resources were scarce and expensive, whilehuman resourceswere cheaper. Therefore, cumbersome languages that were time-consuming to use, but were closer to the hardware for higher efficiency were favored.[10]The introduction ofhigh-level programming languages(third-generation programming languages—3GLs)—revolutionized programming. These languagesabstractedaway the details of the hardware, instead being designed to express algorithms that could be understood more easily by humans. For example, arithmetic expressions could now be written in symbolic notation and later translated into machine code that the hardware could execute.[9]In 1957,Fortran(FORmula TRANslation) was invented. Often considered the firstcompiledhigh-level programming language,[9][11]Fortran has remained in use into the twenty-first century.[12] Around 1960, the firstmainframes—general purpose computers—were developed, although they could only be operated by professionals and the cost was extreme. The data and instructions were input bypunch cards, meaning that no input could be added while the program was running. The languages developed at this time therefore are designed for minimal interaction.[14]After the invention of themicroprocessor, computers in the 1970s became dramatically cheaper.[15]New computers also allowed more user interaction, which was supported by newer programming languages.[16] Lisp, implemented in 1958, was the firstfunctional programminglanguage.[17]Unlike Fortran, it supportedrecursionandconditional expressions,[18]and it also introduceddynamic memory managementon aheapand automaticgarbage collection.[19]For the next decades, Lisp dominatedartificial intelligenceapplications.[20]In 1978, another functional language,ML, introducedinferred typesand polymorphicparameters.[16][21] AfterALGOL(ALGOrithmic Language) was released in 1958 and 1960,[22]it became the standard in computing literature for describingalgorithms. Although its commercial success was limited, most popular imperative languages—includingC,Pascal,Ada,C++,Java, andC#—are directly or indirectly descended from ALGOL 60.[23][12]Among its innovations adopted by later programming languages included greater portability and the first use ofcontext-free,BNFgrammar.[24]Simula, the first language to supportobject-oriented programming(includingsubtypes,dynamic dispatch, andinheritance), also descends from ALGOL and achieved commercial success.[25]C, another ALGOL descendant, has sustained popularity into the twenty-first century. C allows access to lower-level machine operations more than other contemporary languages. 
Its power and efficiency, generated in part with flexiblepointeroperations, comes at the cost of making it more difficult to write correct code.[16] Prolog, designed in 1972, was the firstlogic programminglanguage, communicating with a computer using formal logic notation.[26][27]With logic programming, the programmer specifies a desired result and allows theinterpreterto decide how to achieve it.[28][27] During the 1980s, the invention of thepersonal computertransformed the roles for which programming languages were used.[29]New languages introduced in the 1980s included C++, asupersetof C that can compile C programs but also supportsclassesandinheritance.[30]Adaand other new languages introduced support forconcurrency.[31]The Japanese government invested heavily into the so-calledfifth-generation languagesthat added support for concurrency to logic programming constructs, but these languages were outperformed by other concurrency-supporting languages.[32][33] Due to the rapid growth of theInternetand theWorld Wide Webin the 1990s, new programming languages were introduced to supportWeb pagesandnetworking.[34]Java, based on C++ and designed for increased portability across systems and security, enjoyed large-scale success because these features are essential for many Internet applications.[35][36]Another development was that ofdynamically typedscripting languages—Python,JavaScript,PHP, andRuby—designed to quickly produce small programs that coordinate existingapplications. Due to their integration withHTML, they have also been used for building web pages hosted onservers.[37][38] During the 2000s, there was a slowdown in the development of new programming languages that achieved widespread popularity.[39]One innovation wasservice-oriented programming, designed to exploitdistributed systemswhose components are connected by a network. Services are similar to objects in object-oriented programming, but run on a separate process.[40]C#andF#cross-pollinated ideas between imperative and functional programming.[41]After 2010, several new languages—Rust,Go,Swift,ZigandCarbon—competed for the performance-critical software for which C had historically been used.[42]Most of the new programming languages usestatic typingwhile a few numbers of new languages usedynamic typinglikeRingandJulia.[43][44] Some of the new programming languages are classified asvisual programming languageslikeScratch,LabVIEWandPWCT. Also, some of these languages mix between textual and visual programming usage likeBallerina.[45][46][47][48]Also, this trend lead to developing projects that help in developing new VPLs likeBlocklybyGoogle.[49]Many game engines likeUnrealandUnityadded support for visual scripting too.[50][51] Every programming language includes fundamental elements for describing data and the operations or transformations applied to them, such as adding two numbers or selecting an item from a collection. These elements are governed by syntactic and semantic rules that define their structure and meaning, respectively. A programming language's surface form is known as itssyntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, some programming languages aregraphical, using visual relationships between symbols to specify a program. The syntax of a language describes the possible combinations of symbols that form a syntactically correct program. 
The meaning given to a combination of symbols is handled by semantics (eitherformalor hard-coded in areference implementation). Since most languages are textual, this article discusses textual syntax. The programming language syntax is usually defined using a combination ofregular expressions(forlexicalstructure) andBackus–Naur form(forgrammaticalstructure). Below is a simple grammar, based onLisp: This grammar specifies the following: The following are examples of well-formed token sequences in this grammar:12345,()and(a b c232 (1)). Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless ill-formed, per the language's rules; and may (depending on the language specification and the soundness of the implementation) result in an error on translation or execution. In some cases, such programs may exhibitundefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it. Usingnatural languageas an example, it may not be possible to assign a meaning to a grammatically correct sentence or the sentence may be false: The followingC languagefragment is syntactically correct, but performs operations that are not semantically defined (the operation*p >> 4has no meaning for a value having a complex type andp->imis not defined because the value ofpis thenull pointer): If thetype declarationon the first line were omitted, the program would trigger an error on the undefined variablepduring compilation. However, the program would still be syntactically correct since type declarations provide only semantic information. The grammar needed to specify a programming language can be classified by its position in theChomsky hierarchy. The syntax of most programming languages can be specified using a Type-2 grammar, i.e., they arecontext-free grammars.[52]Some languages, including Perl and Lisp, contain constructs that allow execution during the parsing phase. Languages that have constructs that allow the programmer to alter the behavior of the parser make syntax analysis anundecidable problem, and generally blur the distinction between parsing and execution.[53]In contrast toLisp's macro systemand Perl'sBEGINblocks, which may contain general computations, C macros are merely string replacements and do not require code execution.[54] The termsemanticsrefers to the meaning of languages, as opposed to their form (syntax). Static semantics defines restrictions on the structure of valid texts that are hard or impossible to express in standard syntactic formalisms.[1][failed verification]For compiled languages, static semantics essentially include those semantic rules that can be checked at compile time. Examples include checking that everyidentifieris declared before it is used (in languages that require such declarations) or that the labels on the arms of acase statementare distinct.[55]Many important restrictions of this type, like checking that identifiers are used in the appropriate context (e.g. not adding an integer to a function name), or thatsubroutinecalls have the appropriate number and type of arguments, can be enforced by defining them as rules in alogiccalled atype system. Other forms ofstatic analyseslikedata flow analysismay also be part of static semantics. 
Programming languages such asJavaandC#havedefinite assignment analysis, a form of data flow analysis, as part of their respective static semantics.[56] Once data has been specified, the machine must be instructed to perform operations on the data. For example, the semantics may define thestrategyby which expressions are evaluated to values, or the manner in whichcontrol structuresconditionally executestatements. Thedynamic semantics(also known asexecution semantics) of a language defines how and when the various constructs of a language should produce a program behavior. There are many ways of defining execution semantics. Natural language is often used to specify the execution semantics of languages commonly used in practice. A significant amount of academic research goes intoformal semantics of programming languages, which allows execution semantics to be specified in a formal manner. Results from this field of research have seen limited application to programming language design and implementation outside academia.[56] Adata typeis a set of allowable values and operations that can be performed on these values.[57]Each programming language'stype systemdefines which data types exist, the type of anexpression, and howtype equivalenceandtype compatibilityfunction in the language.[58] According totype theory, a language is fully typed if the specification of every operation defines types of data to which the operation is applicable.[59]In contrast, an untyped language, such as mostassembly languages, allows any operation to be performed on any data, generally sequences of bits of various lengths.[59]In practice, while few languages are fully typed, most offer a degree of typing.[59] Because different types (such asintegersandfloats) represent values differently, unexpected results will occur if one type is used when another is expected.Type checkingwill flag this error, usually atcompile time(runtime type checking is more costly).[60]Withstrong typing,type errorscan always be detected unless variables are explicitlycastto a different type.Weak typingoccurs when languages allow implicit casting—for example, to enable operations between variables of different types without the programmer making an explicit type conversion. The more cases in which thistype coercionis allowed, the fewer type errors can be detected.[61] Early programming languages often supported only built-in, numeric types such as theinteger(signed and unsigned) andfloating point(to support operations onreal numbersthat are not integers). Most programming languages support multiple sizes of floats (often calledfloatanddouble) and integers depending on the size and precision required by the programmer. Storing an integer in a type that is too small to represent it leads tointeger overflow. The most common way of representing negative numbers with signed types istwos complement, althoughones complementis also used.[62]Other common types includeBoolean—which is either true or false—andcharacter—traditionally onebyte, sufficient to represent allASCIIcharacters.[63] Arraysare a data type whose elements, in many languages, must consist of a single type of fixed length. 
Other languages define arrays as references to data stored elsewhere and support elements of varying types.[64]Depending on the programming language, sequences of multiple characters, calledstrings, may be supported as arrays of characters or their ownprimitive type.[65]Strings may be of fixed or variable length, which enables greater flexibility at the cost of increased storage space and more complexity.[66]Other data types that may be supported includelists,[67]associative (unordered) arraysaccessed via keys,[68]recordsin which data is mapped to names in an ordered structure,[69]andtuples—similar to records but without names for data fields.[70]Pointersstore memory addresses, typically referencing locations on theheapwhere other data is stored.[71] The simplestuser-defined typeis anordinal type, often called anenumeration, whose values can be mapped onto the set of positive integers.[72]Since the mid-1980s, most programming languages also supportabstract data types, in which the representation of the data and operations arehidden from the user, who can only access aninterface.[73]The benefits ofdata abstractioncan include increased reliability, reduced complexity, less potential forname collision, and allowing the underlyingdata structureto be changed without the client needing to alter its code.[74] Instatic typing, all expressions have their types determined before a program executes, typically at compile-time.[59]Most widely used, statically typed programming languages require the types of variables to be specified explicitly. In some languages, types are implicit; one form of this is when the compiler caninfertypes based on context. The downside ofimplicit typingis the potential for errors to go undetected.[75]Complete type inference has traditionally been associated with functional languages such asHaskellandML.[76] With dynamic typing, the type is not attached to the variable but only the value encoded in it. A single variable can be reused for a value of a different type. Although this provides more flexibility to the programmer, it is at the cost of lower reliability and less ability for the programming language to check for errors.[77]Some languages allow variables of aunion typeto which any type of value can be assigned, in an exception to their usual static typing rules.[78] In computing, multiple instructions can be executed simultaneously. 
Many programming languages support instruction-level and subprogram-level concurrency.[79]By the twenty-first century, additional processing power on computers was increasingly coming from the use of additional processors, which requires programmers to design software that makes use of multiple processors simultaneously to achieve improved performance.[80]Interpreted languagessuch asPythonandRubydo not support the concurrent use of multiple processors.[81]Other programming languages do support managing data shared between different threads by controlling the order of execution of key instructions via the use ofsemaphores, controlling access to shared data viamonitor, or enablingmessage passingbetween threads.[82] Many programming languages include exception handlers, a section of code triggered byruntime errorsthat can deal with them in two main ways:[83] Some programming languages support dedicating a block of code to run regardless of whether an exception occurs before the code is reached; this is called finalization.[84] There is a tradeoff between increased ability to handle exceptions and reduced performance.[85]For example, even though array index errors are common[86]C does not check them for performance reasons.[85]Although programmers can write code to catch user-defined exceptions, this can clutter a program. Standard libraries in some languages, such as C, use their return values to indicate an exception.[87]Some languages and their compilers have the option of turning on and off error handling capability, either temporarily or permanently.[88] One of the most important influences on programming language design has beencomputer architecture.Imperative languages, the most commonly used type, were designed to perform well onvon Neumann architecture, the most common computer architecture.[89]In von Neumann architecture, thememorystores both data and instructions, while theCPUthat performs instructions on data is separate, and data must be piped back and forth to the CPU. The central elements in these languages are variables,assignment, anditeration, which is more efficient thanrecursionon these machines.[90] Many programming languages have been designed from scratch, altered to meet new needs, and combined with other languages. Many have eventually fallen into disuse.[citation needed]The birth of programming languages in the 1950s was stimulated by the desire to make a universal programming language suitable for all machines and uses, avoiding the need to write code for different computers.[91]By the early 1960s, the idea of a universal language was rejected due to the differing requirements of the variety of purposes for which code was written.[92] Desirable qualities of programming languages include readability, writability, and reliability.[93]These features can reduce the cost of training programmers in a language, the amount of time needed to write and maintain programs in the language, the cost of compiling the code, and increase runtime performance.[94] Programming language design often involves tradeoffs.[104]For example, features to improve reliability typically come at the cost of performance.[105]Increased expressivity due to a large number of operators makes writing code easier but comes at the cost of readability.[105] Natural-language programminghas been proposed as a way to eliminate the need for a specialized language for programming. However, this goal remains distant and its benefits are open to debate.Edsger W. 
Dijkstra took the position that the use of a formal language is essential to prevent the introduction of meaningless constructs.[106] Alan Perlis was similarly dismissive of the idea.[107] The specification of a programming language is an artifact that the language users and the implementors can use to agree upon whether a piece of source code is a valid program in that language, and if so what its behavior shall be. A programming language specification can take several forms, including a definition of the language's syntax and semantics, a description of the behavior of a translator for the language, or a reference (model) implementation. An implementation of a programming language is the conversion of a program into machine code that can be executed by the hardware. The machine code can then be executed with the help of the operating system.[111] The most common implementation method for production code is a compiler, which translates the source code via an intermediate-level language into machine code, known as an executable. Once the program is compiled, it will run more quickly than with other implementation methods.[112] Some compilers are able to provide further optimization to reduce memory or computation usage when the executable runs, at the cost of increased compilation time.[113] Another implementation method is to run the program with an interpreter, which translates each line of software into machine code just before it executes. Although it can make debugging easier, the downside of interpretation is that it runs 10 to 100 times slower than a compiled executable.[114] Hybrid interpretation methods provide some of the benefits of compilation and some of the benefits of interpretation via partial compilation. One form this takes is just-in-time compilation, in which the software is compiled ahead of time into an intermediate language, and then into machine code immediately before execution.[115] Although most of the most widely used programming languages have fully open specifications and implementations, many programming languages exist only as proprietary programming languages with the implementation available only from a single vendor, which may claim that such a proprietary language is their intellectual property. Proprietary programming languages are commonly domain-specific languages or internal scripting languages for a single product; some proprietary languages are used only internally within a vendor, while others are available to external users.[citation needed] Some programming languages exist on the border between proprietary and open; for example, Oracle Corporation asserts proprietary rights to some aspects of the Java programming language,[116] and Microsoft's C# programming language, which has open implementations of most parts of the system, also has Common Language Runtime (CLR) as a closed environment.[117] Many proprietary languages are widely used, in spite of their proprietary nature; examples include MATLAB, VBScript, and Wolfram Language. Some languages may make the transition from closed to open; for example, Erlang was originally Ericsson's internal programming language.[118] Open source programming languages are particularly helpful for open science applications, enhancing the capacity for replication and code sharing.[119] Thousands of different programming languages have been created, mainly in the computing field.[120] Individual software projects commonly use five programming languages or more.[121] Programming languages differ from most other forms of human expression in that they require a greater degree of precision and completeness.
When using a natural language to communicate with other people, human authors and speakers can be ambiguous and make small errors, and still expect their intent to be understood. However, figuratively speaking, computers "do exactly what they are told to do", and cannot "understand" what code the programmer intended to write. The combination of the language definition, a program, and the program's inputs must fully specify the external behavior that occurs when the program is executed, within the domain of control of that program. On the other hand, ideas about an algorithm can be communicated to humans without the precision required for execution by using pseudocode, which interleaves natural language with code written in a programming language. A programming language provides a structured mechanism for defining pieces of data, and the operations or transformations that may be carried out automatically on that data. A programmer uses the abstractions present in the language to represent the concepts involved in a computation. These concepts are represented as a collection of the simplest elements available (called primitives).[122] Programming is the process by which programmers combine these primitives to compose new programs, or adapt existing ones to new uses or a changing environment. Programs for a computer might be executed in a batch process without any human interaction, or a user might type commands in an interactive session of an interpreter. In this case the "commands" are simply programs, whose execution is chained together. When a language can run its commands through an interpreter (such as a Unix shell or other command-line interface), without compiling, it is called a scripting language.[123] Determining which is the most widely used programming language is difficult since the definition of usage varies by context. One language may occupy the greater number of programmer hours, a different one may have more lines of code, and a third may consume the most CPU time. Some languages are very popular for particular kinds of applications. For example, COBOL is still strong in the corporate data center, often on large mainframes;[124][125] Fortran in scientific and engineering applications; Ada in aerospace, transportation, military, real-time, and embedded applications; and C in embedded applications and operating systems. Other languages are regularly used to write many different kinds of applications. Various methods of measuring language popularity, each subject to a different bias over what is measured, have been proposed, such as counting the number of job advertisements that mention the language, the number of books sold that teach the language, or estimates of the number of existing lines of code written in the language. Combining and averaging information from various internet sites, stackify.com reported the ten most popular programming languages (in descending order by overall popularity): Java, C, C++, Python, C#, JavaScript, VB .NET, R, PHP, and MATLAB.[129] As of June 2024, the top five programming languages as measured by the TIOBE index are Python, C++, C, Java, and C#. TIOBE provides a list of the top 100 programming languages according to popularity and updates this list every month.[130] A dialect of a programming language or a data exchange language is a (relatively small) variation or extension of the language that does not change its intrinsic nature. With languages such as Scheme and Forth, standards may be considered insufficient, inadequate, or illegitimate by implementors, so often they will deviate from the standard, making a new dialect. In other cases, a dialect is created for use in a domain-specific language, often a subset.
In theLispworld, most languages that use basicS-expressionsyntax and Lisp-like semantics are considered Lisp dialects, although they vary wildly as do, say,RacketandClojure. As it is common for one language to have several dialects, it can become quite difficult for an inexperienced programmer to find the right documentation. TheBASIClanguage hasmany dialects. Programming languages are often placed into four main categories:imperative,functional,logic, andobject oriented.[131] Althoughmarkup languagesare not programming languages, some have extensions that support limited programming. Additionally, there are special-purpose languages that are not easily compared to other programming languages.[135]
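As a brief illustration of the first two categories above, the same computation can be written in an imperative style (explicit variables, assignment, and iteration) or a functional style (composition of functions). A minimal sketch in Python:

    # Imperative style: explicit state mutated by assignment inside a loop.
    total = 0
    for n in range(1, 6):
        total += n * n
    print(total)  # 55, the sum of the squares 1..5

    # Functional style: the same result expressed without mutable state.
    from functools import reduce
    print(reduce(lambda acc, n: acc + n * n, range(1, 6), 0))  # 55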
https://en.wikipedia.org/wiki/Dialecting
Grammar-oriented programming (GOP) and Grammar-oriented Object Design (GOOD) are approaches to designing and creating a domain-specific programming language (DSL) for a specific business domain. GOOD can be used to drive the execution of the application, or it can be used to embed the declarative processing logic of a context-aware component (CAC) or context-aware service (CAS). GOOD is a method for creating and maintaining dynamically reconfigurable software architectures driven by business-process architectures. An associated business compiler has been used to capture business processes within real-time workshops for various lines of business and to create an executable simulation of the processes used. Instead of using one DSL for the entire programming activity, GOOD suggests combining the definition of domain-specific behavioral semantics with the use of more traditional, general-purpose programming languages.
https://en.wikipedia.org/wiki/Grammar-oriented_programming
The following comparison covers general and technical information for a number of document markup languages; please see the individual markup languages' articles for further information. It includes basic general information about the markup languages (creator, version, etc.) as well as some of their characteristics. Note: while Rich Text Format (RTF) is human-readable, it is not considered to be a markup language and is thus excluded from the comparison.
https://en.wikipedia.org/wiki/Comparison_of_document_markup_languages
BSON (/ˈbiːsən/; Binary JSON)[2] is a computer data interchange format extending JSON. It is a binary form for representing simple or complex data structures including associative arrays (also known as name-value pairs), integer-indexed arrays, and a suite of fundamental scalar types. BSON originated in 2009 at MongoDB. Several scalar data types are of specific interest to MongoDB, and the format is used both as a data storage and network transfer format for the MongoDB database, but it can be used independently outside of MongoDB. Implementations are available in a variety of languages such as C, C++, C#, D, Delphi, Erlang, Go, Haskell, Java, JavaScript, Julia, Lua, OCaml, Perl, PHP, Python, Ruby, Rust, Scala, Smalltalk, and Swift.[3] BSON has a published specification.[4][5] The topmost element in the structure must be of type BSON object and contains one or more elements, where an element consists of a field name, a type, and a value. Field names are strings. Types include primitive scalar types (such as strings, integers, floating-point numbers, dates, and binary data) as well as embedded BSON objects and arrays. An important differentiator from JSON is that BSON contains types not present in JSON (e.g. datetime, byte array, and proper IEEE 754 floats) and offers type-strict handling for several numeric types instead of a universal "number" type. For situations where these additional types need to be represented in a textual way, MongoDB's Extended JSON format[7] can be used. Compared to JSON, BSON is designed to be efficient both in storage space and scan-speed. Large elements in a BSON document are prefixed with a length field to facilitate scanning. In some cases, BSON will use more space than JSON due to the length prefixes and explicit array indices.[2] A document such as {"hello": "world"} will be stored as the 22-byte sequence \x16\x00\x00\x00\x02hello\x00\x06\x00\x00\x00world\x00\x00: a 4-byte little-endian document length, a type byte (\x02 for string), the null-terminated field name, the length-prefixed and null-terminated string value, and a final null byte terminating the document.
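A minimal sketch of this encoding, assuming the bson module that ships with the PyMongo distribution is installed:

    # Encode and decode the example document with PyMongo's bson module.
    import bson

    doc = {"hello": "world"}
    data = bson.encode(doc)   # the 22-byte BSON document described above
    print(data)               # b'\x16\x00\x00\x00\x02hello\x00\x06\x00\x00\x00world\x00\x00'
    print(bson.decode(data))  # {'hello': 'world'}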
https://en.wikipedia.org/wiki/BSON
MessagePack is a computer data interchange format. It is a binary form for representing simple data structures like arrays and associative arrays. MessagePack aims to be as compact and simple as possible. Implementations are available in a variety of languages, some as official libraries and others community-created, such as C, C++, C#, D, Erlang, Go, Haskell, Java, JavaScript (NodeJS), Lua, OCaml, Perl, PHP, Python, Ruby, Rust, Scala, Smalltalk, and Swift.[1] Data structures processed by MessagePack loosely correspond to those used in the JSON format. They consist of element types such as nil, booleans, integers, floating-point numbers, strings, binary data, arrays, maps, and extension types. MessagePack is more compact than JSON, but imposes limitations on array and integer sizes. On the other hand, it allows binary data and non-UTF-8 encoded strings. In JSON, map keys have to be strings, but in MessagePack there is no such limitation and any type can be a map key, including types like maps and arrays, and, like YAML, numbers. Compared to BSON, MessagePack is more space-efficient. BSON is designed for fast in-memory manipulation, whereas MessagePack is designed for efficient transmission over the wire. For example, BSON requires null terminators at the end of all strings and inserts string indexes for list elements, while MessagePack doesn't. BSON represents both arrays and maps internally as documents, which are maps, where an array is a map with keys as decimal strings counting up from 0. MessagePack, on the other hand, represents both maps and arrays as arrays, where each map key-value pair is contiguous, making odd items keys and even items values. The Protocol Buffers format provides a significantly more compact transmission format than MessagePack because it doesn't transmit field names. However, while JSON and MessagePack aim to serialize arbitrary data structures with type tags, Protocol Buffers requires a schema to define the data types. The Protocol Buffers compiler creates boilerplate code in the target language to facilitate integration of serialization into the application code; MessagePack returns only a dynamically typed data structure and provides no automatic structure checks. MessagePack is referenced in RFC 7049, which defines CBOR.
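A minimal sketch of the size difference relative to JSON, assuming the msgpack package for Python is installed (the example object is illustrative):

    # Pack a small map and compare its size against the equivalent JSON text.
    import json
    import msgpack

    obj = {"compact": True, "schema": 0}
    packed = msgpack.packb(obj)
    print(len(json.dumps(obj, separators=(",", ":"))))  # 27 bytes when encoded as compact JSON text
    print(len(packed))                                  # 18 bytes of MessagePack
    print(msgpack.unpackb(packed))                      # {'compact': True, 'schema': 0}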
https://en.wikipedia.org/wiki/MessagePack
Concise Binary Object Representation (CBOR) is a binary data serialization format, loosely based on JSON, authored by Carsten Bormann and Paul Hoffman.[a] Like JSON it allows the transmission of data objects that contain name–value pairs, but in a more concise manner. This increases processing and transfer speeds at the cost of human readability. It is defined in IETF RFC 8949.[2] Amongst other uses, it is the recommended data serialization layer for the CoAP Internet of Things protocol suite[3][failed verification] and the data format on which COSE messages are based. It is also used in the Client-to-Authenticator Protocol (CTAP) within the scope of the FIDO2 project.[4] CBOR was inspired by MessagePack, which was developed and promoted by Sadayuki Furuhashi. CBOR extended MessagePack, particularly by allowing text strings to be distinguished from byte strings, a distinction that was implemented in MessagePack in 2013.[5][6] CBOR-encoded data is seen as a stream of data items. Each data item consists of a header byte containing a 3-bit type and a 5-bit short count. This is followed by an optional extended count (if the short count is in the range 24–27), and an optional payload. For types 0, 1, and 7, there is no payload; the count is the value. For types 2 (byte string) and 3 (text string), the count is the length of the payload. For types 4 (array) and 5 (map), the count is the number of items (pairs) in the payload. For type 6 (tag), the payload is a single item and the count is a numeric tag number which describes the enclosed item. Each data item's behaviour is defined by the major type and count. The major type is used for selecting the main behaviour or type of each data item. The 5-bit short count field encodes counts 0–23 directly. Short counts of 24–27 indicate that the count value is in a following 8-, 16-, 32- or 64-bit extended count field. Values 28–30 are not assigned and must not be used. Types are divided into "atomic" types 0–1 and 6–7, for which the count field encodes the value directly, and non-atomic types 2–5, for which the count field encodes the size of the following payload field. A short count of 31 is used with non-atomic types 2–5 to indicate an indefinite length; the payload is the following items until a "break" marker byte of 255 (type = 7, short count = 31). A short count of 31 is not permitted with the other atomic types 0, 1 or 6. Type 6 (tag) is unusual in that its count field encodes a value directly, but also has a payload field (which always consists of a single item). Extended counts, and all multi-byte values, are encoded in network (big-endian) byte order. For integers, the count field is the value; there is no payload. Type 0 encodes positive or unsigned integers, with values up to 2⁶⁴ − 1. Type 1 encodes negative integers, with a value of −1 − count, for values from −2⁶⁴ to −1. Types 2 and 3 have a count field which encodes the length in bytes of the payload. Type 2 is an unstructured byte string. Type 3 is a UTF-8 text string. A short count of 31 indicates an indefinite-length string. This is followed by zero or more definite-length strings of the same type, terminated by a "break" marker byte. The value of the item is the concatenation of the values of the enclosed items. Items of a different type, or nested indefinite-length strings, are not permitted. Text strings must be individually well-formed; UTF-8 characters may not be split across items. Type 4 has a count field encoding the number of following items, followed by that many items.
The items need not all be the same type; some programming languages call this a "tuple" rather than an "array". Alternatively, an indefinite-length encoding with a short count of 31 may be used. This continues until a "break" marker byte of 255. Because nested items may also use the indefinite encoding, the parser must pair the break markers with the corresponding indefinite-length header bytes. Type 5 is similar but encodes a map (also called a dictionary, or associative array) of key/value pairs. In this case, the count encodes the number of pairs of items. If the indefinite-length encoding is used, there must be an even number of items before the "break" marker byte. A semantic tag (type 6) is another atomic type for which the count is the value, but it also has a payload (a single following item), and the two are considered one item in, e.g., an array or a map. The tag number provides additional type information for the following item, beyond what the 3-bit major type can provide. For example, a tag of 1 indicates that the following number is a Unix time value. A tag of 2 indicates that the following byte string encodes an unsigned bignum. A tag of 32 indicates that the following text string is a URI as defined in RFC 3986. RFC 8746 defines tags 64–87 to encode homogeneous arrays of fixed-size integer or floating-point values as byte strings. The tag 55799 is allocated to mean "CBOR data follows". This is a semantic no-op, but allows the corresponding tag bytes d9 d9 f7 to be prepended to a CBOR file without affecting its meaning. These bytes may be used as a "magic number" to distinguish the beginning of CBOR data. The all-ones tag values 0xffff, 0xffffffff and 0xffffffffffffffff are reserved to indicate the absence of a tag in a CBOR decoding library; they should never appear in a data stream. The break marker pseudo-item may not be the payload of a tag. Major type 7 is used to encode various special values that do not fit into the other categories. It follows the same encoding-size rules as the other atomic types (0, 1, and 6), but the count field is interpreted differently. The values 20–23 are used to encode the special values false, true, null, and undefined. Values 0–19 are not currently defined. A short count of 24 indicates that a 1-byte extended count follows, which can be used in future to encode additional special values. To simplify decoding, the values 0–31 may not be encoded in this form. None of the values 32–255 are currently defined. Short counts of 25, 26 or 27 indicate that a following extended count field is to be interpreted as a (big-endian) 16-, 32- or 64-bit IEEE floating-point value. These are the same sizes as an extended count, but are interpreted differently. In particular, for all other major types, a 2-byte extended count of 0x1234 and a 4-byte extended count of 0x00001234 are exactly equivalent. This is not the case for floating-point values. Short counts 28–30 are reserved, as for all other major types. A short count of 31 encodes the special "break" marker which terminates an indefinite-length encoding. This is related to, but different from, the use with other major types where a short count of 31 begins an indefinite-length encoding. This is not an item, and may not appear in a defined-length payload. IANA has created the CBOR tags registry, located at https://www.iana.org/assignments/cbor-tags/cbor-tags.xhtml. Registrations must follow a registration template,[7] and the supporting URL in a registration can point to an Internet-Draft or a web page.
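A minimal sketch of splitting a header byte into its major type and short count, as described above, with the third-party cbor2 package assumed for the encoding step:

    # Encode a one-entry map and inspect the header byte of the resulting data item.
    import cbor2

    data = cbor2.dumps({"hello": "world"})
    print(data)                     # b'\xa1ehelloeworld' (13 bytes)
    header = data[0]
    major_type = header >> 5        # top 3 bits: 5 means "map"
    short_count = header & 0x1f     # low 5 bits: 1 key/value pair
    print(major_type, short_count)  # 5 1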
https://en.wikipedia.org/wiki/CBOR
A Canonical S-expression (or csexp) is a binary encoding form of a subset of general S-expression (or sexp). It was designed for use in SPKI to retain the power of S-expressions and ensure canonical form for applications such as digital signatures while achieving the compactness of a binary form and maximizing the speed of parsing. The particular subset of general S-expressions applicable here is composed of atoms, which are byte strings, and parentheses used to delimit lists or sub-lists. These S-expressions are fully recursive. While S-expressions are typically encoded as text, with spaces delimiting atoms and quotation marks used to surround atoms that contain spaces, when using the canonical encoding each atom is encoded as a length-prefixed byte string. No whitespace separating adjacent elements in a list is permitted. The length of an atom is expressed as an ASCII decimal number followed by a ":". For example, the sexp (this "Canonical S-expression" has 5 atoms) becomes the csexp (4:this22:Canonical S-expression3:has1:55:atoms). No quotation marks are required to escape the space character internal to the atom "Canonical S-expression", because the length prefix clearly points to the end of the atom. There is no whitespace separating an atom from the next element in the list. While csexps generally permit empty lists, empty atoms, and so forth, certain uses of csexps impose additional restrictions. For example, csexps as used in SPKI have one limitation compared to csexps in general: every list must start with an atom, and therefore there can be no empty lists. Typically, a list's first atom is treated as one treats an element name in XML. There are other encodings in common use as well, such as a base64 "transport" representation of the canonical form and a more human-readable "advanced" form. Generally, csexp has a parser one or two decimal orders of magnitude smaller than that of either XML or ASN.1.[citation needed] This small size and corresponding speed[citation needed] give csexp its main advantage. In addition to the parsing advantage, there are other differences. csexp and XML differ in that csexp is a data-representation format, while XML includes a data-representation format and also a schema mechanism. Thus, XML can be "configured" for particular kinds of data, which conform to some grammar (say, HTML, ATOM, SVG, MathML, or new ones as needed). It has languages for defining document grammars: DTD is defined by the XML standard itself, while XSD, RelaxNG, and Schematron are commonly used with XML for additional features, and XML can also work with no schema. csexp data can of course be operated on by schemas implemented at a higher level, but provides no such mechanism itself. In terms of characters and bytes, a csexp "string" may have any byte sequence whatsoever (because of the length prefix on each atom), while XML (like regular Lisp S-expressions, JSON, and literals in programming languages) requires alternate representations for a few characters (such as "<" and most control characters). This, however, has no effect on the range of structures and semantics that can be represented. XML also provides mechanisms to specify how a given byte sequence is intended to be interpreted: say, as a Unicode UTF-8 string, a JPEG file, or an integer; csexp leaves such distinctions to external mechanisms. At the most basic level, both csexp and XML represent trees (as do most other external representations). This is not surprising, since XML can be described as a differently-punctuated form for LISP-like S-expressions, or vice versa.[1] However, XML includes additional semantics, which are commonly achieved in csexp by various conventions rather than as part of the language.
First, every XML element has a name (csexp applications commonly use the first child of each expression for this). Second, XML provides data typing, firstly via the schema grammar. A schema can also, however, distinguish integers, strings, data objects with types (e.g. JPEG) and (especially with XSD) other types. An XML element may also have attributes, a construct that csexp does not share. To represent XML data in csexp, one must choose a representation for such attributes; an obvious one is to reserve the second item in each S-expression for a list of (name value) pairs, analogous to the LISP association list. The XML ID and IDREF attributes have no equivalent in csexp, but can be easily implemented by a csexp application program. Finally, an XML element may contain comments and/or processing instructions. csexp has no specific equivalents, but they are trivial to represent, merely by reserving a name for each, for example "*COM" and "*PI" (the "*" prevents any collision with XML element type names). Both csexp and XML are fully recursive. The first atom in a csexp list, by convention, roughly corresponds to an XML element type name in identifying the "type" of the list. However, in csexp this can be any atom in any encoding (e.g., a JPEG, a Unicode string, a WAV file, …), while XML element names are identifiers, constrained to certain characters, like programming-language identifiers. csexp's method is obviously more general; on the other hand, identifying what encoding such an item is in, and thus how to interpret it, is determined only by a particular user's conventions, meaning that a csexp application must build such conventions for itself, in code, documentation, and so forth. Similarly, csexp atoms are binary (consisting of a length prefix followed by totally arbitrary bytes), while XML is designed to be human-readable (while arguably less so than JSON or YAML) – so arbitrary bytes in XML must be encoded somehow (for example, a bitmapped image can be included using base64). This means that storing large amounts of non-readable information in uncompressed XML takes more space; on the other hand, it will survive translation between alternate character sets (including transmission through network hosts that may apply differing character sets, line-end conventions, etc.). It has been suggested that XML "merges" a sequence of strings within one element into a single string, while csexp allows a sequence of atoms within a list and those atoms remain separate from one another; but this is incorrect.[2] Exactly like S-expressions and csexp, XML has a notion of a "sequence of strings" only if the "strings" are separated somehow. ASN.1 is a popular binary encoding form. However, it expresses only syntax (data types), not semantics. Two different structures – each a SEQUENCE of two INTEGERS – have identical representations on the wire (barring special tag choices to distinguish them). To parse an ASN.1 structure, one must tell the parser what set of structures one is expecting, and the parser must match the data type being parsed against the structure options. This adds to the complexity of an ASN.1 parser. A csexp structure carries some indication of its own semantics (encoded in element names), and the parser for a csexp structure does not care what structure is being parsed. Once a wire-format expression has been parsed into an internal tree form (similar to XML's DOM), the consumer of that structure can examine it for conformance to what was expected.
An XML document without a schema works just like csexp in this respect, while an XML document with a schema can work more like ASN.1.
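A minimal sketch of a csexp encoder for the subset described above (byte-string atoms and nested lists); the function name and sample expression are illustrative:

    # Encode a nested S-expression into canonical (length-prefixed) form.
    def encode_csexp(node):
        if isinstance(node, bytes):  # an atom: decimal length, ":", then the raw bytes
            return str(len(node)).encode() + b":" + node
        # a list: encode each child with no separators, wrapped in parentheses
        return b"(" + b"".join(encode_csexp(child) for child in node) + b")"

    sexp = [b"this", b"Canonical S-expression", b"has", b"5", b"atoms"]
    print(encode_csexp(sexp).decode())
    # (4:this22:Canonical S-expression3:has1:55:atoms)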
https://en.wikipedia.org/wiki/Canonical_S-expressions
Matroska (styled Matroška) is a project to create a container format that can hold an unlimited number of video, audio, picture, or subtitle tracks in one file.[4] The Matroska Multimedia Container is similar in concept to other containers like AVI, MP4, or Advanced Systems Format (ASF), but is an open standard. Matroska file extensions are .mkv for video (which may include subtitles or audio), .mk3d for stereoscopic video, .mka for audio-only files (which may include subtitles), and .mks for subtitles only.[5] The project was announced on 6 December 2002[6] as a fork of the Multimedia Container Format (MCF), after disagreements between MCF lead developer Lasse Kärkkäinen and soon-to-be Matroska founder Steve Lhomme about the use of the Extensible Binary Meta Language (EBML) instead of a binary format.[7] This coincided with a 6-month coding break by the MCF's lead developer for his military service, during which most of the community quickly migrated to the new project.[citation needed] In 2010, it was announced that the WebM audio/video format would be based on a profile of the Matroska container format together with VP8 video and Vorbis audio.[8] On 31 October 2014, Microsoft confirmed that Windows 10 would support HEVC and Matroska out of the box, according to a statement from Gabriel Aul, the leader of Microsoft Operating Systems Group's Data and Fundamentals Team.[9][10] Windows 10 Technical Preview Build 9860 added platform-level support for HEVC and Matroska.[11][12] "Matroska" is derived from matryoshka (Russian: матрёшка [mɐˈtrʲɵʂkə]), the Russian name for the hollow wooden dolls, better known in English as Russian nesting dolls, which open to expose a smaller doll that in turn opens to expose another doll, and so on. The logo writes it as "Matroška"; the letter š, an "s" with a caron over it, represents the "sh" sound (/ʂ/) in various languages.[13] The use of EBML allows extension for future format changes. The Matroska team has expressed some of its long-term goals on the Doom9.org and Hydrogenaudio forums; these are stated as "goals", not necessarily existing features, of Matroska.[14] Matroska is supported by a non-profit organization (association loi 1901) in France,[17] and the specifications are open to everyone. It is a royalty-free open standard that is free to use, and its technical specifications are available for private and commercial use. The Matroska development team licenses its libraries under the LGPL, with parsing and playback libraries available under BSD licenses.[14] Software supporting Matroska includes all ffmpeg/libav-based programs,[18] notably mplayer, mpv, VLC, Foobar2000, Media Player Classic-HC, BS.player, Google Chrome, Mozilla Firefox, Blender, Kdenlive, Handbrake, and MKVToolNix, as well as YouTube (which uses WebM extensively)[19] and OBS Studio.[20] Outside of ffmpeg, Windows 10 supports Matroska natively as well.[21] Earlier versions relied on codec packs (like K-Lite Codec Pack or Combined Community Codec Pack) to integrate ffmpeg (via ffdshow) and other additions into Windows' native DirectShow. Apple's native QuickTime player for macOS notably lacks support.
https://en.wikipedia.org/wiki/Matroska
WebMis an audiovisual media file format.[5]It is primarily intended to offer aroyalty-freealternative to use in theHTML videoand theHTML audioelements. It has a sister project,WebP, for images. The development of the format is sponsored byGoogle, and the corresponding software is distributed under aBSD license. The WebMcontaineris based on aprofileofMatroska.[3][6][7]WebM initially supportedVP8video andVorbisaudio streams. In 2013, it was updated to accommodateVP9video andOpusaudio.[8]It also supports theAV1codec.[9] Native WebM support byMozilla Firefox,[10][11]Opera,[12][13]andGoogle Chrome[14]was announced at the 2010Google I/Oconference.Internet Explorer 9requires third-party WebM software.[15]In 2021,ApplereleasedSafari14.1 for macOS, which added native WebM support to the browser.[16]As of 2019[update], QuickTime does not natively support WebM,[17][18]but does with a suitable third-party plug-in.[19]In 2011, the Google WebM Project Team released plugins for Internet Explorer and Safari to allow playback of WebM files through the standard HTML5<video>tag.[20]As of 9 June 2012[update], Internet Explorer 9 and later supported the plugin for Windows Vista and later.[21] VLC media player,[22]MPlayer,K-Multimedia PlayerandJRiver Media Centerhave native support for playing WebM files.[23]FFmpegcan encode and decode VP8 videos when built with support forlibvpx, the VP8/VP9 codec library of the WebM project, as well asmux/demuxWebM-compliant files.[24]On July 23, 2010 Fiona Glaser, Ronald Bultje, and David Conrad of the FFmpeg team announced the ffvp8 decoder. Their testing found that ffvp8 was faster than Google's own libvpx decoder.[25][26]MKVToolNix, the popularMatroskacreation tools, implemented support for multiplexing/demultiplexing WebM-compliant files out of the box.[27]Haali Media Splitter also announced support for muxing/demuxing of WebM.[27]Since version 1.4.9, theLiVESvideo editor has support for realtime decoding and for encoding to WebM format using ffmpeg libraries. 
MPC-HCsince build SVN 2071 supports WebM playback with internal VP8 decoder based onFFmpeg's code.[25][28]The full decoding support for WebM is available in MPC-HC since version 1.4.2499.0.[29] Androidis WebM-enabled since version2.3 Gingerbread,[30]which was first made available via theNexus Ssmartphone and streamable since Android4.0 Ice Cream Sandwich.[31] The Microsoft Edge browser supports WebM since April 2016.[32] On July 30, 2019,Blender 2.80was released with WebM support.[33] iOSdid not natively play WebM originally,[34]but support for WebM was added in Safari 15 as part ofiOS 15.[35] The SonyPlayStation 5supports capturing 1080p and 2160p footage in WebM format.[36] ChromeOSscreen recordings are saved as WebM files.[37] WebM Project licenses VP8 hardware accelerators (RTL IP) to semiconductor companies for 1080p encoding and decoding at zero cost.[38]AMD,ARMandBroadcomhave announced support forhardware accelerationof the WebM format.[39][40]Intelis also considering hardware-based acceleration for WebM in itsAtom-basedTV chips if the format gains popularity.[41]QualcommandTexas Instrumentshave announced support,[42][43]with native support coming to the TIOMAPprocessor.[44]Chips&Mediahave announced a fully hardware decoder for VP8 that can decodefull HDresolution (1080p) VP8 streams at 60 frames per second.[45] Nvidiais supporting VP8 and provides both hardware decoding and encoding in theTegra 4andTegra 4iSoCs.[46]Nvidiaannounced3Dvideo support for WebM throughHTML5and theirNvidia 3D Visiontechnology.[47][48][49] On January 7, 2011,Rockchipreleased the world's first chip to host a full hardware implementation of 1080p VP8 decoding. The video acceleration in the RK29xx chip is handled by the WebM Project's G-Series 1 hardware decoder IP.[50] In June 2011,ZiiLABSdemonstrated their 1080p VP8 decoder implementation running on the ZMS-20 processor. The chip's programmable media processing array is used to provide the VP8 acceleration.[51] ST-EricssonandHuaweialso had hardware implementations in their computer chips.[52] The original WebM license terminated both patent grants and copyright redistribution terms if a patent infringement lawsuit was filed, causing concerns around GPL compatibility. 
In response to those concerns, the WebM Project decoupled the patent grant from the copyright grant, offering the code under a standardBSD licenseand patents under a separate grant.[53]TheFree Software Foundation, which maintainsThe Free Software Definition, has given its endorsement for WebM and VP8[54]and considers the software's license to be compatible with theGNU General Public License.[55][56]On January 19, 2011, the Free Software Foundation announced its official support for the WebM project.[57]In February 2011,Microsoft's Vice President of Internet Explorer called upon Google to provide indemnification against patent suits.[58] Although Google has irrevocably released all of its patents on VP8 as a royalty-free format,[59]theMPEG LA, licensors of theH.264patent pool, have expressed interest in creating apatent poolfor VP8.[60][61]Conversely, other researchers cite evidence thatOn2made a particular effort to avoid any MPEG LA patents.[62]As a result of the threat, theUnited States Department of Justice(DOJ) started an investigation in March 2011 into the MPEG LA for its role in possibly attempting to stifle competition.[63][64]In March 2013, MPEG LA announced that it had reached an agreement with Google to license patents that "may be essential" for the implementation of the VP8 codec, and give Google the right to sub-license these patents to any third-party user of VP8 orVP9.[65][66] In March 2013,Nokiafiled an objection to theInternet Engineering Task Forceconcerning Google's proposal for the VP8 codec to be a core part of WebM, saying it holds essential patents to VP8's implementation.[67]Nokia listed 64 patents and 22 pending applications, adding it was not prepared to license any of them for VP8.[68]On August 5, 2013, a court in Mannheim, Germany, ruled that VP8 does not infringe a patent owned and asserted by Nokia.[69]
https://en.wikipedia.org/wiki/WebM
Interchange File Format(IFF) is a genericdigital container file formatoriginally introduced byElectronic Arts(in cooperation withCommodore) in 1985 to facilitate transfer of data between software produced by different companies. IFF files do not have any standardfilename extension. On many systems that generate IFF files, file extensions are not important because theoperating systemstoresfile formatmetadataseparately from thefile name. The.ifffilename extension is commonly used for theILBMimage file format, which uses the IFF container format. Resource Interchange File Formatis a format developed byMicrosoftandIBMin 1991 that is based on IFF, except thebyte orderhas been changed tolittle-endianto match thex86microprocessorarchitecture.Apple'sAudio Interchange File Format(AIFF) is abig-endianaudio file formatdeveloped from IFF. TheTIFFimage file format is not related to IFF. An IFF file is built up fromchunks. Each chunk begins with what the specification calls a "Type ID" (what theMacintoshcalled anOSType, andWindowsdevelopers might call aFourCC). This is followed by a 32-bit signedinteger(all integers in IFF file structure arebig-endian) specifying the size of the following data (the chunk content) in bytes.[1]Because the specification includes explicit lengths for each chunk, it is possible for a parser to skip over chunks that it either can't or doesn't care to process. This structure is closely related to thetype–length–value(TLV) representation. There are predefinedgroupchunks, with type IDsFORM,LISTandCAT.[NB 1]AFORMchunk is like a record structure, containing a type ID (indicating the record type) followed by nested chunks specifying the record fields. ALISTis a factoring structure containing a series ofPROP(property) chunks plus nested group chunks to which those properties apply. ACATis just a collection of nested chunks with no special semantics. Group chunks can contain other group chunks, depending on the needs of the application. Group chunks, like their simpler counterparts, contain a length element. Skipping over a group can thus be done with a simple relativeseek operation. Chunks must begin on even file offsets, as befits the origins of IFF on the Motorola68000processor, which couldn't address quantities larger than a byte on odd addresses. Thus chunks with odd lengths will be "padded" to an even byte boundary by adding a so-called "pad byte" after their regular end. The top-level structure of an IFF file consists of exactly one of the group chunks:FORM,LISTorCAT, whereFORMis by far the most common one. Each type of chunk typically has a different internal structure, which could be numerical data, text, or raw data. It is also possible to include other IFF files as if they are chunks (note that they have the same structure: four letters followed with length), and some formats use this. There are standard chunks that could be present in any IFF file, such asAUTH(containing text with information about author of the file),ANNO(containing text with annotation, usually name of the program that created the file),NAME(containing text with name of the work in the file),VERS(containing file version),(c)(containing text with copyright information). There are also chunks that are common among a number of formats, such asCMAP, which holds color palette inILBM,ANIMandDR2Dfiles (pictures, animations and vector pictures). There are chunks that have a common name but hold different data such asBODY, which could store an image in anILBMfile and sound in an8SVXfile. 
And finally, there are chunks unique to their file type. Some programs that create IFF files add chunks to them with their internal data; these same files can later be read by other programs without any disruption (because their parsers could skip uninteresting chunks), which is a great advantage of IFF and similar formats.
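A minimal sketch of walking the top-level chunks of an IFF file, as described above; the file name is illustrative, and group chunks (FORM, LIST, CAT) are treated as opaque rather than descended into:

    # Iterate over top-level IFF chunks: 4-byte type ID, big-endian 32-bit length, data, optional pad byte.
    import struct

    def read_chunks(path):
        with open(path, "rb") as f:
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                type_id, size = struct.unpack(">4si", header)  # big-endian ID and signed length
                data = f.read(size)
                if size % 2:        # chunks are padded to even offsets with a single pad byte
                    f.read(1)
                yield type_id, data

    for type_id, data in read_chunks("example.iff"):  # hypothetical file name
        print(type_id, len(data))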
https://en.wikipedia.org/wiki/Interchange_File_Format
Extensible Binary Meta Language (EBML) is a generalized file format for any kind of data, aiming to be a binary equivalent to XML. It provides a basic framework for storing data in XML-like tags. It was originally designed as the framework language for the Matroska audio/video container format.[1][2][3] EBML is not extensible in the same way that XML is, as the schema (analogous to an XML DTD) must be known in advance.
https://en.wikipedia.org/wiki/Extensible_Binary_Meta_Language
In networking for mobile devices, WMLC is a format for the efficient transmission of WML web pages over Wireless Application Protocol (WAP). Its primary purpose is to compress (or, more precisely, tokenise) a WML page for transport over low-bandwidth internet connections such as GPRS/2G. WMLC is essentially WML encoded using Wireless Application Protocol Binary XML (WBXML). WMLC is most efficient for pages that contain frequently repeated strings of characters. Commonly used phrases such as "www." and "http://www." are tokenised and replaced with a single byte just before transmission and then re-inserted at the destination. WMLC has the added advantage that the data can be progressively decoded, unlike some compression algorithms that require all of the data to be available before decompression begins. As soon as the first few bytes of WMLC data are available, the WAP browser can start rendering the page, which means the user can see the page being constructed as it is downloaded. The content type is application/vnd.wap.wmlc.[1]
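The tokenisation idea can be illustrated with a toy sketch; the token table here is hypothetical and is not the real WBXML token assignment:

    # Toy string tokeniser: replace common phrases with single reserved bytes.
    TOKENS = {b"http://www.": b"\x8c", b"www.": b"\x8d"}  # hypothetical token values

    def tokenise(page: bytes) -> bytes:
        for phrase, token in TOKENS.items():  # longer phrase is replaced first (insertion order)
            page = page.replace(phrase, token)
        return page

    def detokenise(data: bytes) -> bytes:
        for phrase, token in TOKENS.items():
            data = data.replace(token, phrase)
        return data

    wml = b'<go href="http://www.example.com/index.wml"/>'
    print(len(wml), len(tokenise(wml)))      # the tokenised form is shorter
    print(detokenise(tokenise(wml)) == wml)  # True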
https://en.wikipedia.org/wiki/Compiled_Wireless_Markup_Language
Efficient XML Interchange (EXI) is a binary XML format for exchange of data on a computer network. It was developed by the W3C's Efficient Extensible Interchange Working Group and is one of the most prominent efforts to encode XML documents in a binary data format, rather than plain text. Using the EXI format reduces the verbosity of XML documents as well as the cost of parsing. Improvements in the performance of writing (generating) content depend on the speed of the medium being written to and on the methods and quality of actual implementations. EXI is useful for resource-constrained devices and for applications where bandwidth, storage, or processing capacity is limited. The World Wide Web Consortium (W3C) formed a working group to standardize on a format in March 2006. EXI was chosen as W3C's Binary XML format after an evaluation of various proposals that included Fast Infoset.[1] The EXI format is derived from the AgileDelta Efficient XML format.[2][3] EXI was adopted as a W3C recommendation by the W3C on 10 March 2011. A second edition was published in February 2014.[4] In November 2016, the working group was renamed to "Efficient Extensible Interchange (EXI)" from "Efficient XML Interchange (EXI)" to reflect the broader scope of EXI applicability beyond XML to other data-description languages.[5] An advantage of EXI over Fast Infoset is that EXI (optionally) uses more constraints from the XML schema. This can make the EXI data more compact;[6] for example, if the XML schema specifies that elements named 'bar' may only exist within elements named 'foo', EXI can assign a shorter token to the 'bar' element, knowing that it doesn't have to share the same token space as elements that occur elsewhere in the document. The main disadvantage to utilizing such "schema-informed" compression is that, not only does the document require a schema, but the decoder needs a copy of the same schema that the encoder used. A variety of EXI-capable applications are available.[7] A variety of EXI implementations are available that enable the integration of EXI capabilities in other tools.[8] EXI is also being adapted for non-XML data formats. EXI was recommended for use in the US Department of Defense Global Information Grid.[10] Multiple experimental initiatives continue to be pursued by the EXI Working Group.
https://en.wikipedia.org/wiki/Efficient_XML_Interchange
The Internet Engineering Task Force (IETF) is a standards organization for the Internet and is responsible for the technical standards that make up the Internet protocol suite (TCP/IP).[3] It has no formal membership roster or requirements and all its participants are volunteers. Their work is usually funded by employers or other sponsors. The IETF was initially supported by the federal government of the United States but since 1993 has operated under the auspices of the Internet Society, a non-profit organization with local chapters around the world. There is no membership in the IETF. Anyone can participate by signing up to a working group mailing list, or registering for an IETF meeting.[4] The IETF operates in a bottom-up task creation mode, largely driven by working groups.[2] Each working group normally has two appointed co-chairs (occasionally three) and a charter that describes its focus, what it is expected to produce, and when. It is open to all who want to participate and holds discussions on an open mailing list. Working groups hold open sessions at IETF meetings, where the onsite registration fee in 2024 was between US$875 (early registration) and $1200 per person for the week.[5] Significant discounts are available for students and remote participants. As working groups do not make decisions at IETF meetings, with all decisions taken later on the working group mailing list, meeting attendance is not required for contributors. Rough consensus is the primary basis for decision making. There are no formal voting procedures. Each working group is intended to complete work on its topic and then disband. In some cases, the working group will instead have its charter updated to take on new tasks as appropriate.[2] The working groups are grouped into areas by subject matter (see § Steering group, below). Each area is overseen by an area director (AD), with most areas having two ADs. The ADs are responsible for appointing working group chairs. The area directors, together with the IETF Chair, form the Internet Engineering Steering Group (IESG), which is responsible for the overall operation of the IETF.[citation needed] The Internet Architecture Board (IAB) oversees the IETF's external relationships.[6] The IAB provides long-range technical direction for Internet development.
The IAB also manages the Internet Research Task Force (IRTF), with which the IETF has a number of cross-group relations.[7] A nominating committee (NomCom) of ten randomly chosen volunteers who participate regularly at meetings, a non-voting chair, and four to five liaisons is vested with the power to appoint, reappoint, and remove members of the IESG, IAB, IETF Trust and the IETF LLC.[8] To date, no one has been removed by a NomCom, although several people have resigned their positions, requiring replacements.[9] In 1993 the IETF changed from an activity supported by the US federal government to an independent, international activity associated with the Internet Society, a US-based 501(c)(3) organization.[10] In 2018 the Internet Society created a subsidiary, the IETF Administration LLC, to be the corporate, legal and financial home for the IETF.[11] IETF activities are funded by meeting fees, meeting sponsors and by the Internet Society via its organizational membership and the proceeds of the Public Interest Registry.[12] In December 2005, the IETF Trust was established to manage the copyrighted materials produced by the IETF.[13] The Internet Engineering Steering Group (IESG) is a body composed of the Internet Engineering Task Force (IETF) chair and area directors. It provides the final technical review of Internet standards and is responsible for day-to-day management of the IETF. It receives appeals of the decisions of the working groups, and the IESG makes the decision to progress documents in the standards track.[14] The chair of the IESG is the area director of the general area, who also serves as the overall IETF chair. Members of the IESG include the two directors, sometimes three, of each of the IETF areas,[15] along with liaison and ex officio members.[citation needed] The Gateway Algorithms and Data Structures (GADS) Task Force was the precursor to the IETF. Its chairman was David L. Mills of the University of Delaware.[16] In January 1986, the Internet Activities Board (IAB; now called the Internet Architecture Board) decided to divide GADS into two entities: an Internet Architecture (INARC) Task Force chaired by Mills to pursue research goals, and the IETF to handle nearer-term engineering and technology transfer issues.[16] The first IETF chair was Mike Corrigan, who was then the technical program manager for the Defense Data Network (DDN).[16] Also in 1986, after leaving DARPA, Robert E. Kahn founded the Corporation for National Research Initiatives (CNRI), which began providing administrative support to the IETF. In 1987, Corrigan was succeeded as IETF chair by Phill Gross.[17] Effective March 1, 1989, but providing support dating back to late 1988, CNRI and NSF entered into a cooperative agreement, No. NCR-8820945, wherein CNRI agreed to create and provide a "secretariat" for the "overall coordination, management and support of the work of the IAB, its various task forces and, particularly, the IETF".[18] In 1992, CNRI supported the formation and early funding of the Internet Society, which took on the IETF as a fiscally sponsored project, along with the IAB, the IRTF, and the organization of annual INET meetings. Gross continued to serve as IETF chair throughout this transition.
Cerf, Kahn, and Lyman Chapin announced the formation of ISOC as "a professional society to facilitate, support, and promote the evolution and growth of the Internet as a global research communications infrastructure".[19] At the first board meeting of the Internet Society, Cerf, representing CNRI, offered, "In the event a deficit occurs, CNRI has agreed to contribute up to USD$102,000 to offset it."[20] In 1993, Cerf continued to support the formation of ISOC while working for CNRI,[21] and the role of ISOC in "the official procedures for creating and documenting Internet Standards" was codified in the IETF's RFC 1602.[22] In 1995, the IETF's RFC 2031 described ISOC's role in the IETF as being purely administrative, and ISOC as having "no influence whatsoever on the Internet Standards process, the Internet Standards or their technical content".[23] In 1998, CNRI established Foretec Seminars, Inc. (Foretec), a for-profit subsidiary, to take over providing secretariat services to the IETF.[18] Foretec provided these services until at least 2004.[18] By 2013, Foretec was dissolved.[24] In 2003, the IETF's RFC 3677 described the IETF's role in appointing three board members to the ISOC's board of directors.[25] In 2018, ISOC established the IETF Administration LLC, a separate LLC to handle the administration of the IETF.[26] In 2019, the LLC issued a call for proposals to provide secretariat services to the IETF.[27] The first IETF meeting was attended by 21 US federal government-funded researchers on 16 January 1986. It was a continuation of the work of the earlier GADS Task Force. Representatives from non-governmental entities (such as gateway vendors)[28] were invited to attend starting with the fourth IETF meeting in October 1986. Since that time all IETF meetings have been open to the public.[2] Initially, the IETF met quarterly, but from 1991, it has been meeting three times a year. The initial meetings were very small, with fewer than 35 people in attendance at each of the first five meetings. The maximum attendance during the first 13 meetings was only 120 attendees. This occurred at the twelfth meeting, held during January 1989. These meetings have grown in both participation and scope a great deal since the early 1990s; attendance reached a maximum of 2810 at the December 2000 IETF held in San Diego, California. Attendance declined with industry restructuring during the early 2000s, and is currently around 1200.[29][2] The locations for IETF meetings vary greatly. A list of past and future meeting locations is on the IETF meetings page.[30] The IETF strives to hold its meetings near where most of the IETF volunteers are located. IETF meetings are held three times a year, with one meeting each in Asia, Europe and North America. An occasional exploratory meeting is held outside of those regions in place of one of the other regions.[31] The IETF also organizes hackathons during the IETF meetings. The focus is on implementing code that will improve standards in terms of quality and interoperability.[32] Recent changes in United States administration policy that deny entry to foreign free-speech supporters and could affect transgender people have raised concerns about holding meetings in the US.
In response, there is a movement asking the IETF to hold its meetings outside of the United States, in a safe country, instead.[33] The details of IETF operations have changed considerably as the organization has grown, but the basic mechanism remains publication of proposed specifications, development based on the proposals, review and independent testing by participants, and republication as a revised proposal, a draft proposal, or eventually as an Internet Standard. IETF standards are developed in an open, all-inclusive process in which any interested individual can participate. All IETF documents are freely available over the Internet and can be reproduced at will. Multiple, working, useful, interoperable implementations are the chief requirement before an IETF proposed specification can become a standard.[2] Most specifications are focused on single protocols rather than tightly interlocked systems. This has allowed the protocols to be used in many different systems, and its standards are routinely re-used by bodies which create full-fledged architectures (e.g. 3GPP IMS).[citation needed] Because it relies on volunteers and uses "rough consensus and running code" as its touchstone, results can be slow whenever the number of volunteers is either too small to make progress, or so large as to make consensus difficult, or when volunteers lack the necessary expertise. For protocols like SMTP, which is used to transport e-mail for a user community in the many hundreds of millions, there is also considerable resistance to any change that is not fully backward compatible, except for IPv6. Work within the IETF on ways to improve the speed of the standards-making process is ongoing but, because the number of volunteers with opinions on it is very great, consensus on improvements has been slow to develop.[citation needed] The IETF cooperates with the W3C, ISO/IEC, ITU, and other standards bodies.[10] Statistics are available that show who the top contributors by RFC publication are.[34] While the IETF only allows for participation by individuals, and not by corporations or governments, sponsorship information is available from these statistics.[citation needed] The IETF chairperson is selected by the NomCom process for a two-year renewable term.[35] Before 1993, the IETF Chair was selected by the IAB.[36] The IETF works on a broad range of networking technologies which provide a foundation for the Internet's growth and evolution.[38] In the area of network management, it aims to improve efficiency as networks grow in size and complexity, and it is also standardizing protocols for autonomic networking that enable networks to be self-managing.[39] The Internet of things (IoT) is a network of physical objects or "things" that are embedded with electronics, sensors, and software, enabling objects to exchange data with their operator, manufacturer, and other connected devices; several IETF working groups are developing protocols that are directly relevant to IoT.[40] Work on transport technology provides internet applications with the ability to send data over the Internet, and well-established transport protocols such as TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are continuously being extended and refined to meet the needs of the global Internet.[41]
https://en.wikipedia.org/wiki/Internet_Engineering_Task_Force
The head–body pattern is a common XML design pattern, used for example in the SOAP protocol. This pattern is useful when a message, or parcel of data, requires considerable metadata. While the metadata could be mixed in with the data, doing so makes the whole message confusing. In this pattern the metadata (or meta-information) is structured as the header, sometimes known as the envelope, while the ordinary data or information is structured as the body, sometimes known as the payload. XML is employed for both head and body (see also XML Protocol).[1][2] The pattern can be illustrated as a root element containing a head element that holds the metadata, followed by a body element that holds the payload.
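A minimal sketch of the pattern built with Python's ElementTree; the element names and values are illustrative rather than taken from any particular protocol:

    # Build a head-body message: metadata in <head>, payload in <body>.
    import xml.etree.ElementTree as ET

    message = ET.Element("message")
    head = ET.SubElement(message, "head")                            # metadata / envelope
    ET.SubElement(head, "timestamp").text = "2024-01-01T00:00:00Z"
    body = ET.SubElement(message, "body")                            # ordinary data / payload
    ET.SubElement(body, "greeting").text = "Hello, world"

    print(ET.tostring(message, encoding="unicode"))
    # one line of XML: <message><head>...</head><body>...</body></message>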
https://en.wikipedia.org/wiki/Head%E2%80%93body_pattern
A reverse dictionary is a dictionary alphabetized by the reversal of each entry. Before computers, reverse dictionaries were tedious to produce. The first computer-produced reverse dictionary was Stahl and Scavnicky's A Reverse Dictionary of the Spanish Language, in 1974.[1] The first computer-produced reverse dictionary for a single text was Wisbey, R., Vollständige Verskonkordanz zur Wiener Genesis. Mit einem rückläufigen Wörterbuch zum Formenbestand, Berlin, E. Schmidt, 1967. In a reverse word dictionary, the entries are alphabetized by the last letter first, then the next to last, and so on.[1][2] In them, words with the same suffix appear together. This can be useful for linguists and poets looking for words ending with a particular suffix, or for an epigrapher or forensics specialist examining a damaged text (e.g. a stone inscription, or a burned document) in which only the final portion of a word survives. Reverse dictionaries of this type have been published for most major alphabetical languages. Reverse word dictionaries are straightforward to construct, by simply sorting based on reversed words. This was labor-intensive and tedious before computers, but is now straightforward. By the same token, reverse dictionaries have become less important since online word lists can be searched dynamically.
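As a sketch of the construction just described, the following Python snippet sorts an arbitrary sample word list on its reversed spellings, so that words sharing a suffix become adjacent.

# Build a reverse dictionary by sorting words on their reversed spelling,
# so that words sharing a suffix end up next to one another.
words = ["painter", "winter", "printer", "singing", "ringing", "cat", "hat"]

reverse_sorted = sorted(words, key=lambda w: w[::-1])

for w in reverse_sorted:
    print(w)
# Words ending in "-ing" group together, as do those ending in "-ter" and "-at".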
https://en.wikipedia.org/wiki/Reverse_dictionary
Digital footprintordigital shadowrefers to one's unique set oftraceabledigital activities, actions, contributions, and communications manifested on theInternetordigital devices.[1][2][3][4]Digital footprints can be classified as either passive or active. Passive footprints consist of a user's web-browsing activity and information stored ascookies. Active footprints are intentionally created by users to share information on websites orsocial media.[5]While the term usually applies to a person, a digital footprint can also refer to a business, organization or corporation.[6] The use of a digital footprint has both positive and negative consequences. On one side, it is the subject of manyprivacy issues.[7]For example, without an individual's authorization, strangers can piece together information about that individual by only usingsearch engines. Socialinequalitiesare exacerbated by the limited access afforded tomarginalized communities.[8]Corporations are also able to produce customized ads based on browsing history. On the other hand, others can reap the benefits by profiting off their digital footprint as social mediainfluencers. Furthermore, employers use a candidate's digital footprint foronline vettingand assessing fit due to its reduced cost and accessibility.[citation needed]Between two equal candidates, a candidate with a positive digital footprint may have an advantage. As technology usage becomes more widespread, even children generate larger digital footprints with potential positive and negative consequences such as college admissions. Media and information literacy frameworks and educational efforts promote awareness of digital footprints as part of a citizen's digital privacy.[9]Since it is hard not to have a digital footprint, it is in one's best interest to create a positive one. Passive digital footprints are a data trail that an individual involuntarily leaves online.[10][11]They can be stored in various ways depending on the situation. A footprint may be stored in an online database as a "hit" in an online environment. The footprint may track the user'sIP address, when it was created, where it came from, and the footprint later being analyzed. In anofflineenvironment,administratorscan access and view the machine's actions without seeing who performed them. Examples of passive digital footprints are apps that usegeolocations, websites that download cookies onto your appliance, orbrowser history. Although passive digital footprints are inevitable, they can be lessened by deleting old accounts, usingprivacy settings(public or private accounts), and occasionally online searching yourself to see the information left behind.[12] Active digital footprints are deliberate, as they are posted or shared information willingly. They can also be stored in a variety of ways depending on the situation. A digital footprint can be stored when a userlogsinto a site and makes apostor change; the registered name is connected to the edit in an online environment. Examples of active digital footprints include social media posts, video or image uploads, or changes to various websites.[11] Digital footprints are not adigital identityorpassport, but the content andmetadatacollected impactsinternet privacy,trust,security, digitalreputation, andrecommendation. As the digital world expands and integrates with more aspects of life, ownership and rights concerning data become increasingly important. 
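As a rough illustration of how a passive footprint accumulates on the server side, the sketch below parses a single web-server access-log line in the widely used "combined" log format into a record of IP address, timestamp, request, referrer, and user agent. The sample line, field names, and the decision to treat one request as one footprint "hit" are assumptions made for illustration only.

# Sketch: extracting a passive-footprint record from a hypothetical access-log line.
import re

LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) \S+ '
    r'"(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

line = ('203.0.113.7 - - [10/Oct/2023:13:55:36 +0000] "GET /about.html HTTP/1.1" '
        '200 2326 "https://example.com/start" "Mozilla/5.0"')

match = LOG_PATTERN.match(line)
if match:
    footprint = match.groupdict()   # one passive-footprint "hit" for this visitor
    print(footprint["ip"], footprint["timestamp"], footprint["request"])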
Digital footprints are controversial in that privacy and openness compete.[13]Scott McNealy, CEO ofSun Microsystems, said in 1999Get Over Itwhen referring to privacy on the Internet.[14]The quote later became a commonly used phrase in discussing private data and what companies do with it.[15]Digital footprints are a privacy concern because they are a set of traceable actions, contributions, and ideas shared by users. It can be tracked and can allow internet users to learn about human actions.[16] Interested parties use Internet footprints for several reasons; includingcyber-vetting,[17]where interviewers could research applicants based on their online activities. Internet footprints are also used by law enforcement agencies to provide information unavailable otherwise due to a lack ofprobable cause.[18]Also, digital footprints are used by marketers to find what products a user is interested in or to inspire ones' interest in a particular product based on similar interests.[19] Social networking systemsmay record the activities of individuals, with data becoming alife stream. Suchsocial mediausage and roaming services allow digital tracing data to include individual interests, social groups, behaviors, and location. Such data is gathered from sensors within devices and collected and analyzed without user awareness.[20]When many users choose to share personal information about themselves through social media platforms, including places they visited, timelines and their connections, they are unaware of the privacy setting choices and the security consequences associated with them.[21]Many social media sites, likeFacebook, collect an extensive amount of information that can be used to piece together a user's personality. Information gathered from social media, such as the number of friends a user has, can predict whether or not the user has an introvert or extrovert personality. Moreover, a survey of SNS users revealed that 87% identified their work or education level, 84% identified their full date of birth, 78% identified their location, and 23% listed their phone numbers.[21] While one's digital footprint may infer personal information, such as demographic traits, sexual orientation, race, religious and political views, personality, or intelligence[22]without individuals' knowledge, it also exposes individuals' private psychological spheres into the social sphere.[23]Lifeloggingis an example of an indiscriminate collection of information concerning an individual's life and behavior.[24]There are actions to take to make a digital footprint challenging to track.[25]An example of the usage or interpretation of data trails is through Facebook-influenced creditworthiness ratings,[26]the judicial investigations around German social scientist Andrej Holm,[27]advertisement-junk mails by the American companyOfficeMax[28]or the border incident of Canadian citizen Ellen Richardson.[29] An increasing number of employers are evaluating applicants by their digital footprint through their interaction on social media due to its reduced cost and easy accessibility[30]during the hiring process. 
By using such resources, employers can gain more insight on candidates beyond their well-scripted interview responses and perfected resumes.[31]Candidates who display poor communication skills, use inappropriate language, or use drugs or alcohol are rated lower.[32]Conversely, a candidate with a professional or family-oriented social media presence receives higher ratings.[33]Employers also assess a candidate through their digital footprint to determine if a candidate is a good cultural fit[34]for their organization.[35]Suppose a candidate upholds an organization's values or shows existing passion for its mission. In that case, the candidate is more likely to integrate within the organization and could accomplish more than the average person. Although these assessments are known not to be accurate predictors of performance or turnover rates,[36]employers still use digital footprints to evaluate their applicants. Thus, job seekers prefer to create a social media presence that would be viewed positively from a professional point of view. In some professions, maintaining a digital footprint is essential. People will search the internet for specific doctors and their reviews. Half of the search results for a particular physician link to third-party rating websites.[37]For this reason, prospective patients may unknowingly choose their physicians based on their digital footprint in addition to online reviews. Furthermore, a generation relies on social media for livelihood asinfluencersby using their digital footprint. These influencers have dedicated fan bases that may be eager to follow recommendations. As a result, marketers pay influencers to promote their products among their followers, since this medium may yield better returns than traditional advertising.[38][39]Consequently, one's career may be reliant on their digital footprint. Generation Alphawill not be the first generation born into the internet world. As such, a child's digital footprint is becoming more significant than ever before and their consequences may be unclear. As a result of parenting enthusiasm, an increasing amount of parents will create social media accounts for their children at a young age, sometimes even before they are born.[40]Parents may post up to 13,000 photos of a child on social media in their celebratory state before their teen years of everyday life or birthday celebrations.[41]Furthermore, these children are predicted to post 70,000 times online on their own by 18.[41]The advent of posting on social media creates many opportunities to gather data from minors. Since an identity's basic components contain a name, birth date, and address, these children are susceptible toidentity theft.[42]While parents may assume that privacy settings may prevent children's photos and data from being exposed, they also have to trust that their followers will not be compromised. Outsiders may take the images to pose as these children's parents or post the content publicly.[43]For example, during theFacebook-Cambridge Analytica data scandal, friends of friends leaked data to data miners. Due to the child's presence on social media, their privacy may be at risk. 
Some professionals argue that young people entering the workforce should consider the effect of their digital footprint on their marketability and professionalism.[44] Having a positive digital footprint may be very good for students, as college admissions staff and potential employers may research prospective students' and employees' online profiles, which can have an enormous impact on the students' futures.[44] Teens will be set up for more success if they consider the kind of impression they are making and how it can affect their future, whereas someone who is apathetic about the impression they make online may struggle if they one day choose to attend college or enter the workforce.[45] Teens who plan to receive a higher education will have their digital footprint reviewed and assessed as part of the application process.[46] Moreover, teens who intend to finance higher education with financial aid need to consider that their digital footprint will also be evaluated when they apply for scholarships.[47] Digital footprints may reinforce existing social inequalities. In a conceptual overview of this topic, researchers argue that both actively and passively generated digital footprints represent a new dimension of digital inequality, with marginalized groups systematically disadvantaged in terms of online visibility and opportunity.[48] Corporations and governments increasingly rely on algorithms that use digital footprints to automate decisions across areas like employment, credit, and public services, amplifying existing social inequalities.[48] Because marginalized groups often have less extensive or lower-quality digital footprints, they are at greater risk of being misrepresented, excluded, or disadvantaged by these algorithmic processes.[48] Examples of low-quality digital footprints include a lack of data in online databases that track credit scores, legal history, or medical history.[48] People from higher socio-economic backgrounds are more likely to leave favorable or carefully curated digital footprints that enable accelerated access to critical services, financial assistance, and jobs.[48] An example of digital inequality is access to essential e-government services. In the United Kingdom, individuals lacking a sufficient digital footprint face challenges in verifying their identities.[49] This creates new barriers to services such as public housing and healthcare, producing a "double disadvantage".[49] The double disadvantage compounds existing issues in digital access: those excluded from digital life lack both access and the digital reputation required to navigate public systems.[49] Other communities, with private or open access to technology and digital education from an early age, have greater access to government e-services.[49] The United Nations International Children's Emergency Fund's (UNICEF) State of the World's Children 2017 report highlights how digital footprints are linked to broader issues of equity, inclusion, and safety, emphasizing that marginalized communities experience greater risks in digital environments.[50] Media and information literacy (MIL) encompasses the knowledge and skills necessary to access, evaluate, and create information across different media platforms.[51] Understanding and managing one's digital footprint is increasingly recognized as a core component of MIL.
Scholars suggest that digital footprint literacy falls under privacy literacy, which refers to the ability to critically manage and protect personal information in online environments.[52]Studies indicate that disparities in MIL access across countries and socio-demographic groups contribute to uneven abilities to manage digital footprints safely.[51] Organizations likeUNESCOand UNICEF advocate for integrating MIL frameworks into formal education systems as a way to mitigate digital inequalities.[51][53]However, there remains a notable lack of standardized MIL curricula globally, particularly concerning privacy literacy and digital footprint management. In response to these gaps, researchers in 2022 developed the "5Ds of Privacy Literacy" educational framework, which emphasizes teaching students to "define, describe, discern, determine, and decide" appropriate information flows based on context.[9]Grounded insocioculturallearning theory, the 5Ds encourage students to make privacy decisions thoughtfully, rather than simply adhering to universal rules.[9]Sociocultural learning theory means that students learn privacy skills not just by memorizing rules, but by actively engaging with real-world social situations, discussing them with others, and practicing decisions in authentic, contextualized settings. This framework highlights that part of digital footprint literacy includes awareness about how our behaviors are tracked online. Companies can infer demographic attributes such as age, gender, and political orientation without explicit disclosure.[54]This is often done without users' awareness.[54]Educating students about these practices aims to promote critical thinking about personal data trails. Another part of digital footprint literacy is being able to critically assess your own digital footprint. Initiatives likeAustralia's "Best Footprint Forward" program have implemented digital footprint education using real-world examples to teach critical self-assessment of online presence.[55]Similarly, theConnecticut State Department of Educationrecommends incorporatingdigital citizenship,internet safety, and media literacy into K–12 education standards.[56]
https://en.wikipedia.org/wiki/Digital_traces
Incomputing,rebootingis the process by which a runningcomputer systemis restarted, either intentionally or unintentionally. Reboots can be either acold reboot(alternatively known as ahard reboot) in which the power to the system is physically turned off and back on again (causing aninitial bootof the machine); or awarm reboot(orsoft reboot) in which the system restarts while still powered up. The termrestart(as a system command) is used to refer to a reboot when theoperating systemcloses all programs and finalizes all pending input and output operations before initiating a soft reboot. Early electronic computers (like theIBM 1401) had no operating system and little internal memory. The input was often a stack ofpunch cardsor via aswitch register. On systems with cards, the computer was initiated by pressing a start button that performed a single command - "read a card". This first card then instructed the machine to read more cards that eventually loaded a user program. This process was likened to an old saying, "picking yourself up by the bootstraps", referring to a horseman who lifts himself off the ground by pulling on the straps of his boots. This set of initiating punch cards was called "bootstrap cards". Thus a cold start was calledbootingthe computer up. If the computercrashed, it was rebooted. The boot reference carried over to all subsequent types of computers. ForIBM PC compatiblecomputers, acold bootis a boot process in which the computer starts from a powerless state, in which the system performs a completepower-on self-test(POST).[1][2][3][4]Both the operating system and third-party software can initiate a cold boot; the restart command inWindows 9xinitiates a cold reboot, unless Shift key is held.[1]: 509 Awarm bootis initiated by theBIOS, either as a result of theControl-Alt-Deletekey combination[1][2][3][4]or directly through BIOS interruptINT19h.[5]It may not perform a complete POST - for example, it may skip the memory test - and may not perform a POST at all.[1][2][4]Malwaremay prevent or subvert a warm boot by intercepting the Ctrl + Alt + Delete key combination and prevent it from reaching BIOS.[6]TheWindows NTfamily of operating systems also does the same and reserves the key combination for its own use.[7][8] Operating systems based onLinuxsupport an alternative to warm boot; the Linux kernel has optional support forkexec, asystem callwhich transfers execution to a new kernel and skips hardware or firmware reset. The entire process occurs independently of the system firmware. The kernel being executed does not have to be a Linux kernel.[citation needed] Outside the domain of IBM PC compatible computers, the types of boot may not be as clear. According to Sue Loh ofWindows CEBase Team, Windows CE devices support three types of boots: Warm, cold and clean.[9]A warm boot discards program memory. A cold boot additionally discards storage memory (also known as the "object store"), while a clean boot erasesallforms of memory storage from the device. However, since these areas do not exist on all Windows CE devices, users are only concerned with two forms of reboot: one that resets the volatile memory and one that wipes the device clean and restores factory settings. For example, for aWindows Mobile 5.0device, the former is a cold boot and the latter is a clean boot.[9] A hard reboot means that the system is not shut down in an orderly manner, skipping file system synchronisation and other activities that would occur on an orderly shutdown. 
This can be achieved by either applying areset, bycycling power, by issuing thehalt-qcommand in mostUnix-likesystems, or by triggering akernel panic. Hard reboots are used in thecold boot attack. The term "restart" is used by theMicrosoft WindowsandLinuxfamilies of operating systems to denote an operating system-assisted reboot. In a restart, the operating system ensures that all pending I/O operations are gracefully ended before commencing a reboot. Users may deliberately initiate a reboot. Rationale for such action may include: The means of performing a deliberate reboot also vary and may include: Unexpected loss of power for any reason (includingpower outage,power supplyfailure or depletion ofbatteryon a mobile device) forces the system user to perform a cold boot once the power is restored. SomeBIOSeshave an option to automatically boot the system after a power failure.[23][24]Anuninterruptible power supply(UPS), backup battery or redundant power supply can prevent such circumstances. "Random reboot" is a non-technical term referring to an unintended (and often undesired) reboot following asystem crash, whose root cause may not immediately be evident to the user. Such crashes may occur due to a multitude of software and hardware problems, such astriple faults. They are generally symptomatic of an error inring 0that is not trapped by anerror handlerin an operating system or a hardware-triggerednon-maskable interrupt. Systems may be configured to reboot automatically after a power failure, or afatal system errororkernel panic. The method by which this is done varies depending on whether the reboot can be handled via software or must be handled at the firmware or hardware level. Operating systems in theWindows NTfamily (fromWindows NT 3.1throughWindows 7) have an option to modify the behavior of the error handler so that a computer immediately restarts rather than displaying aBlue Screen of Death(BSOD) error message. This option is enabled by default in some editions. The introduction ofadvanced power managementallowed operating systems greater control of hardware power management features. WithAdvanced Configuration and Power Interface(ACPI), newer operating systems are able to manage different power states and thereby sleep and/orhibernate. While hibernation also involves turning a system off then subsequently back on again, the operating system does not start from scratch, thereby differentiating this process from rebooting. A reboot may be simulated by software running on an operating system. For example: the Sysinternals BlueScreen utility, which is used for pranking; or some modes of thebsodXScreenSaver"hack", for entertainment (albeit possibly concerning at first glance). Malware may also simulate a reboot, and thereby deceive a computer user for some nefarious purpose.[6] Microsoft App-Vsequencing tool captures all the file system operations of an installer in order to create a virtualized software package for users. As part of the sequencing process, it will detect when an installer requires a reboot, interrupt the triggered reboot, and instead simulate the required reboot by restarting services and loading/unloading libraries.[25] Windows 8&10enable (by default) ahibernation-like "Fast Startup" (a.k.a. "Fast Boot") which can cause problems (including confusion) for users accustomed to turning off computers to (cold) reboot them.[26][27][28]
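Relating to the note above that systems may be configured to reboot automatically after a fatal system error or kernel panic, the following is a minimal sketch assuming a Linux host with root privileges; it sets the kernel.panic sysctl so the kernel reboots a fixed number of seconds after a panic. The 10-second value is an arbitrary choice.

# Sketch: configure a Linux system to reboot automatically 10 seconds after a
# kernel panic by writing to the kernel.panic sysctl (requires root).
from pathlib import Path

PANIC_TIMEOUT_SECONDS = 10          # 0 (the usual default) means no automatic reboot
panic_sysctl = Path("/proc/sys/kernel/panic")

def enable_reboot_on_panic(timeout: int = PANIC_TIMEOUT_SECONDS) -> None:
    """Make the kernel reboot `timeout` seconds after a panic."""
    panic_sysctl.write_text(f"{timeout}\n")

def current_setting() -> int:
    return int(panic_sysctl.read_text().strip())

if __name__ == "__main__":
    enable_reboot_on_panic()
    print("kernel.panic =", current_setting())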
https://en.wikipedia.org/wiki/Reboot#Cold
Discourse is a generalization of the notion of a conversation to any form of communication.[1] Discourse is a major topic in social theory, with work spanning fields such as sociology, anthropology, continental philosophy, and discourse analysis. Following work by Michel Foucault, these fields view discourse as a system of thought, knowledge, or communication that constructs our experience of the world. Since control of discourse amounts to control of how the world is perceived, social theory often studies discourse as a window into power. Within theoretical linguistics, discourse is understood more narrowly as linguistic information exchange and was one of the major motivations for the framework of dynamic semantics, in which denotations are equated with their ability to update a discourse context. In the humanities and social sciences, discourse describes a formal way of thinking that can be expressed through language. Discourse is a social boundary that defines what statements can be said about a topic. Many definitions of discourse are primarily derived from the work of the French philosopher Michel Foucault. In sociology, discourse is defined as "any practice (found in a wide range of forms) by which individuals imbue reality with meaning".[2] Political science sees discourse as closely linked to politics[3][4] and policy making.[5] Likewise, different theories across various disciplines understand discourse as linked to power and the state, insofar as the control of discourses is understood as a hold on reality itself (e.g. if a state controls the media, it controls the "truth"). In essence, discourse is inescapable, since any use of language will have an effect on individual perspectives; the chosen discourse provides the vocabulary, expressions, or style needed to communicate. For example, two notably distinct discourses can be used about various guerrilla movements, describing them either as "freedom fighters" or as "terrorists". In psychology, discourses are embedded in different rhetorical genres and meta-genres that constrain and enable them: language talking about language.
This is exemplified in theAPA'sDiagnostic and Statistical Manual of Mental Disorders, which tells of the terms that have to be used in speaking about mental health, thereby mediating meanings and dictating practices of professionals in psychology and psychiatry.[6] Modernist theoristsfocused on achieving progress and believed in natural and social laws that could be used universally to develop knowledge and, thus, a better understanding of society.[7]Such theorists would be preoccupied with obtaining the "truth" and "reality", seeking to develop theories which contained certainty and predictability.[8]Modernist theorists therefore understood discourse to be functional.[9]Discourse and language transformations are ascribed to progress or the need to develop new or more "accurate" words to describe discoveries, understandings, or areas of interest.[9]In modernist theory, language and discourse are dissociated from power and ideology and instead conceptualized as "natural" products of common sense usage or progress.[9]Modernismfurther gave rise to the liberal discourses of rights, equality, freedom, and justice; however, this rhetoric masked substantive inequality and failed to account for differences, according to Regnier.[10] Structuralisttheorists, such asFerdinand de SaussureandJacques Lacan, argue that all human actions and social formations are related tolanguageand can be understood as systems of related elements.[11]This means that the "individual elements of a system only have significance when considered about the structure as a whole, and that structures are to be understood as self-contained, self-regulated, and self-transforming entities".[11]: 17In other words, it is the structure itself that determines the significance, meaning, and function of the individual elements of a system. Structuralism has contributed to our understanding of language and social systems.[12]Saussure's theory of languagehighlights the decisive role of meaning and signification in structuring human life more generally.[11] Following the perceived limitations of the modern era, emergedpostmoderntheory.[7]Postmodern theorists rejected modernist claims that there was one theoretical approach that explained all aspects of society.[8]Rather, postmodernist theorists were interested in examining the variety of experiences of individuals and groups and emphasized differences over similarities and shared experiences.[9] In contrast to modernist theory, postmodern theory is pessimistic regarding universal truths and realities. Hence, it has attempted to be fluid, allowing for individual differences as it rejects the notion of social laws. Postmodern theorists shifted away from truth-seeking and sought answers to how truths are produced and sustained. Postmodernists contended that truth and knowledge are plural, contextual, and historically produced through discourses. Postmodern researchers, therefore, embarked on analyzing discourses such as texts, language, policies, and practices.[9] In the works of the philosopherMichel Foucault, adiscourseis "an entity of sequences, of signs, in that they are enouncements (énoncés)."[13]The enouncement (l’énoncé, "the statement") is a linguistic construct that allows the writer and the speaker to assign meaning to words and to communicate repeatable semantic relations to, between, and among the statements, objects, or subjects of the discourse.[13]Internal ties exist between the signs (semiotic sequences). 
The termdiscursive formationidentifies and describes written and spoken statements with semantic relations that produce discourses. As a researcher, Foucault applied the discursive formation to analyses of large bodies of knowledge, e.g.political economyandnatural history.[14] InThe Archaeology of Knowledge(1969), a treatise about themethodologyandhistoriographyof systems of thought ("epistemes") and knowledge ("discursive formations"),Michel Foucaultdeveloped the concepts of discourse. The sociologist Iara Lessa summarizes Foucault's definition of discourse as "systems of thoughts composed of ideas, attitudes, courses of action, beliefs, and practices that systematically construct the subjects and the worlds of which they speak."[15]Foucault traces the role of discourse in thelegitimationof society'spowerto construct contemporary truths, to maintain said truths, and to determine what relations of power exist among the constructed truths; therefore discourse is a communications medium through which power relations produce men and women who can speak.[9] The interrelation between power and knowledge renders every human relationship into a power negotiation,[16]Because power is always present and so produces and constrains the truth.[9]Power is exercised through rules of exclusion (discourses) that determine what subjects people can discuss; when, where, and how a person may speak; and determines which persons are allowed to speak.[13]That knowledge is both thecreatorof power and thecreationof power, Foucault coined "power/knowledge"to show that it is "an abstract force which determines what will be known, rather than assuming that individual thinkers develop ideas and knowledge."[17][18] Interdiscoursestudies the external semantic relations among discourses,[19]as discourses exists in relation to other discourses.[14] There is more than one type ofdiscourse analysis, and the definition of "discourse" shifts slightly between types. Generally speaking, discourse analyses can be divided into those concerned with "little d" discourse and "big D" Discourse. The former ("little d") refers to language-in-use, such as spoken communication; the latter ("big D") refers to sociopolitical discourses (language plus social and cultural contexts).[20] Common forms of discourse analysis include: Informal semanticsandpragmatics, discourse is often viewed as the process of refining the information in acommon ground. In some theories of semantics, such asdiscourse representation theory, sentences'denotationsthemselves are equated with functions that update acommon ground.[21][22][23][24]
https://en.wikipedia.org/wiki/Discourse
Grounding in communicationis a concept proposed byHerbert H. Clarkand Susan E. Brennan. It comprises the collection of "mutual knowledge, mutual beliefs, and mutual assumptions" that is essential forcommunicationbetween two people.[1]Successful grounding in communication requires parties "to coordinate both the content and process". The concept is also common inphilosophy of language. Grounding in communication theory has described conversation as a form of collaborative action.[2]While grounding in communication theory has been applied to mediated communication, the theory primarily addresses face-to-faceconversation. Groups working together will ground their conversations by coming up with common ground ormutual knowledge. The members will utilize this knowledge in order to contribute to a more efficient dialogue.[3]Grounding criterion is the mutual belief between conversational partners that everyone involved has a clear enough understanding of the concept to move forward.[4] Clark and Schaefer (1989) found that, to reach this state of grounding criterion, groups use three methods of reaching an understanding that they can move forward.[5] The parties engaging in grounding exchange information over what they do or do not understand over the course of a communication and they will continue to clarify concepts until they have agreed on grounding criterion. There are generally two phases in grounding. According to this theory, mereutterancequalifies as presentation in conversation whereas contribution to conversation demands both utterance and assurance of mutual understanding.[6] The presentation phase can become complex when meanings are embedded or repairs are made to utterances. An example of a repair is "Do you and your husband have a car," but rather the messier, "now, – um do you and your husband have a j-car".[6] The acceptance phase often clarifies any ambiguities with grounding. For example:[6] The acceptance phase is completed once Barbara indicates that the answer is "no" and Alan accepts it as a valid answer.[6] Grounding theory identifies three common types of evidence in conversation: 'acknowledgements, relevant next turn, and continued attention.[7] Acknowledgementsrefer toback channelmodes of communication that affirm and validate the messages being communicated. Some examples of these include, "uh huh," "yeah," "really," and head nods that act as continuers. They are used to signal that a phrase has been understood and that the conversation can move on. Relevant next turnrefers to the initiation or invitation to respond between speakers, including verbal and nonverbal prompts forturn-takingin conversation. Questions and answers act asadjacency pairs,[8]the first part of the conversation is relevant to the second part. Meaning that a relevant utterance needs to be made in response to the question in order for it to be accepted. For example:[6] Chico is revealing that he did not understand Miss Dimple's first question. She then corrects her phrase after realizing Chico's utterance wasn't an appropriate response and they continue to communicate with adjacent pairs. Continued attentionis the "mutual belief that addressees have correctly identified areferent." Partners involved in a conversation usually demonstrate this through eye gaze. One can capture their partner's gaze and attention by beginning an utterance. Attention that is undisturbed and not interrupted is an example of positive evidence of understanding. 
However, if one of the partners turns away or looks puzzled, these are signs that indicate there is no longer continued attention. More evidence for grounding comes from a study done in 2014, in which dialogue between humans and robots was studied. The complexity of human-robot dialogue arises from the difference between the human's idea of what the robot has internalized versus the robot's actual internal representation of the real world. By going through the grounding process, this study concluded that human-robot grounding can be strengthened by the robot providing information to its partner about how it has internalized the information it has received.[9] There are three main factors that allow speakers to anticipate what a partner knows.[10] Shared visual information also aids anticipation of what a partner knows. For example, when responding to an instruction, performing the correct action without any verbal communication provides an indication of understanding, while performing the wrong action, or even failing to act, can signal misunderstanding.[11]Findings from the paper (Using Visual Information for Grounding and Awareness in Collaborative Tasks),[12]supports previous experiments and show evidence that collaborative pairs perform quicker and more accurately when they share a common view of a workspace.[13]The results from the experiment showed that the pairs completed the task 30–40% faster when they were given shared visual information. The value of this information, however, depended on the features of the task. Its value increased when the task objects were linguistically complex and not part of the pairs‟ shared lexicon. However, even a small delay to the transmission of the visual information severely disrupted its value. Also, the ones accepting the instructions were seen to increase their spoken contribution when those giving the instructions do not have shared visual information. This increase in activity is due to the fact that it is easier for the former to produce the information rather than for the ones giving the instruction to continuously ask questions to anticipate their partners' understanding. Such a phenomenon is predicted by the grounding theory, where it is said that since communication costs are distributed among the partners, the result should shift to the method that would be the most efficient for the pair. The theory of least collaborative effort asserts that participants in a contribution try to minimize the total effort spent on that contribution – in both the presentation and acceptance phases.[14]In exact, every participant in a conversation tries to minimize the total effort spent in that interactional encounter.[15]The ideal utterances are informative and brief.[15] Participants in conversation refashion referring expressions and decrease conversation length. When interactants are trying to pick out difficult to describe shapes from a set of similar items, they produce and agree on an expression which is understood and accepted by both and this process is termed refashioning.[15]The following is an example from Clark & Wilkes-Gibbs,[16] A offers a conceptualisation which is refashioned slightly by the B before it is agreed on by both. In later repetitions of the task, the expression employed to re-use the agreed conceptualisation progressively became shorter. 
For example, "the next one looks like a person who's ice skating, except they're sticking out two arms in front" (trial 1) was gradually shortened to "The next one's the ice skater" (trial 4) and eventually became just "The ice skater" in trial 6.[16] Clark & Wilkes-Gibbs argue that there are two indicators of least collaborative effort in the above example. First, the process of refashioning itself involves less work than A having to produce a 'perfect' referring expression first time, because of the degree of effort which would be needed to achieve that. Second, the decrease in length of the referring expressions and the concomitant reduction in conversation length over the trials showed that the participants were exploiting their increased common ground to decrease the amount of talk needed, and thus their collaborative effort.[15] Time pressures, errors, and ignorance are problems that are best remedied by mutual understanding, thus the theory of grounding in communication dispels the theory of least collaborative effort in instances where grounding is the solution to a communication problem.[17] The lack of one of these characteristics generally forces participants to use alternative grounding techniques, because the costs associated with grounding change. There is often a trade-off between the costs- one cost will increase as another decreases. There is also often a correlation between the costs. The following table highlights several of the costs that can change as the medium of communication changes. Clark and Brennan's theory acknowledges the impact of medium choice on successful grounding. According to the theory,computer mediated communicationpresents potential barriers to establishing mutual understanding.[18]Grounding occurs by acknowledgement of understanding through verbal, nonverbal, formal, and informal acknowledgments, thus computer mediated communications reduce the number of channels through which parties can establish grounding.[19] Clark and Brennan identify eight constraints mediated communication places on communicating parties. Situation awareness theory[21]holds that visual information helps pairs assess the current state of the task and plan future actions.[22]An example would be when a friend is solving a problem that you know the solution to, you could intervene and provide hints or instructions when you see that your friend is stuck and needs help. Similarly, the grounding theory maintains that visual information can support the conversations through evidence of common ground or mutual understanding. Using the same example, you could provide clearer instruction to the problem when you see that your friend is stuck. Therefore, an extension to both theories would mean that when groups have timely visual information, they would be able to monitor the situation and clarify instructions more efficiently.[22] Common groundis a communication technique based on mutual knowledge as well as awareness of mutual knowledge. According to Barr, common ground and common knowledge are kinds of mutual knowledge.[23]Common ground is negotiated to close the gap between differences in perspective and this in turn would enable different perspectives and knowledge to be shared.[24]PsycholinguistHerbert H. Clarkuses the example of a day at the beach with his son. They share their experiences at the beach and are aware of the mutual knowledge. 
If one were to propose the painting of a room a certain shade of pink, they could describe it by comparing it to a conch shell they saw at the beach. They can make the comparison because of their mutual knowledge of the pink on the shell as well as awareness of the mutual knowledge of the pink. This communication technique is often found innegotiation.[25] Common ground in communication has been critical in mitigating misunderstandings and negotiations. For example, common ground can be seen during the first Moon landing betweenApollo 11andmission controlsince mission control had to provide assistance and instructions to the crew in Apollo 11, and the crew had to be able to provide their situation and context for mission control. That was particularly difficult given the strict conditions in which the radio system needed to function. The success of the mission was dependent on the ability to provide situation information and instructions clearly. The transcripts show how often both parties checked to ensure that the other party had clearly heard what they had to say. Both parties needed to provide verbal feedback after they had listened because of the constraints of their situation.[26] The difficulties of establishing common ground, especially in using telecommunications technology, can give rise to dispositional rather than situational attribution. This tendency is known as the "actor-observer effect". What this means is that people often attribute their own behavior to situational causes, while observers attribute the actor's behavior to the personality or disposition of the actor. For example, an actor's common reason to be late is due to the situational reason, traffic. Observers' lack of contextual knowledge about the traffic, i.e. common ground, leads to them attributing the lateness due to ignorance or laziness on the actor's part. This tendency towards dispositional attribution is especially magnified when the stakes are higher and the situation is more complex. When observers are relatively calm, the tendency towards dispositional attribution is less strong.[25] Another consequence of a lack of mutual understanding is disappointment. When communicating partners fail to highlight the important points of their message to their partner or know the important points of the partner's message, then both parties can never satisfy their partner's expectations. This lack of common ground damages interpersonal trust, especially when partners do not have the contextual information of why the other party behaves the way they did.[25] People base their decisions and contribution based on their own point of view. When there is a lack of common ground in the points of views of individuals within a team, misunderstandings occur. Sometimes these misunderstandings remain undetected, which means that decisions would be made based on ignorant or misinformed point of views, which in turn lead to multiple ignorances. The team may not be able to find the right solution because it does not have a correct representation of the problem.[24] Critiques of the approaches used to explore common ground suggest that the creation of a common set of mutual knowledge is an unobservable event which is hardly accessible to empirical research. 
It would require an omniscient point of view in order to look into the participants' heads.[27]By modeling the common ground from one communication partner's perspective is a model used to overcome this ambiguity.[28]Even so, it is difficult to distinguish between the concepts of grounding and situation awareness.[22] Distinguishing between situation awareness and grounding in communication can provide insights about how these concepts affect collaboration and furthering research in this area.[22]Despite revealing evidence of how these theories exist independently, recognizing these concepts in conversation can prove to be difficult. Often both of these mechanisms are present in the same task. For example, in a study where Helpers had a small field of view and were able to see pieces being manipulated demonstrates grounding in communication. However, situation awareness is also present because there is no shared view of the pieces.[22] Another criticism of common ground is the inaccurate connotation of the term. The name appears to relate to a specific place where a record of things can be stored. However, it does not account for how those involved in conversation effortlessly understand the communication. There have been suggestions that the term common ground be revised to better reflect how people actually come to understand each other.[27] Grounding in communication has also been described as a mechanistic style of dialogue which can be used to make many predictions about basic language processing.[29]Pickering and Garrod conducted many studies that reveal, when engaging in dialogue, production and comprehension become closely related. This process greatly simplifies language processing in communication. In Pickering and Garrod's paper Toward a Mechanistic Psychology of Dialogue, they discuss three points that exemplify the mechanistic quality of language processing: Another component that is essential to this criticism on Grounding in Communication is that successful dialogue is coupled with how well those engaged in the conversation adapt to different linguistic levels. This process allows for the development of communication routines that allow for the process of comprehending language to be more efficient.[29]
https://en.wikipedia.org/wiki/Grounding_in_communication
Active usersis a softwareperformance metricthat is commonly used to measure the level of engagement for a particularsoftwareproduct or object, by quantifying the number of active interactions fromusersor visitors within a relevant range of time (daily, weekly and monthly). The metric has many uses insoftware managementsuch as insocial networking services,online games, ormobile apps, inweb analyticssuch as inweb apps, incommercesuch as inonline bankingand inacademia, such as in user behavior analytics and predictive analytics. Although having extensive uses in digital behavioural learning, prediction and reporting, it also has impacts on the privacy andsecurity, and ethical factors should be considered thoroughly. It measures how many users visit or interact with the product or service over a given interval or period.[1]However, there is no standard definition of this term, so comparison of the reporting between different providers of this metric is problematic. Also, most providers have the interest to show this number as high as possible, therefore defining even the most minimal interaction as "active".[2]Still the number is a relevant metric to evaluate development of user interaction of a given provider. This metric is commonly assessed per month asmonthly active users(MAU),[3]per week asweekly active users(WAU),[4]per day asdaily active users(DAU)[5]andpeak concurrent users(PCU).[6] Active users on any time scale offers a rough overview of the amount of returning customers a product maintains, and comparing the changes in this number can be used to predict growth or decline in consumer numbers. In a commercial context, the success of asocial-networking-siteis generally associated with a growing network of active users (greater volume of site visits), social relationships amongst those users andgenerated contents. Active Users can be used as a keyperformance indicator(KPI),managingandpredictingfuture success, in measuring the growth and current volume of users visiting and consuming the site. The ratio of DAU and MAU offers a rudimentary method to estimatecustomer engagementand retention rate over time.[7]A higher ratio represents a larger retention probability, which often indicates success of a product. Ratios of 0.15 and above are believed to be a tipping point for growth while sustained ratios of 0.2 and above mark lasting success.[8] Chen, Lu, Chau, and Gupta (2014)[9]argues that greater numbers of users (early adopters) will lead to greateruser-generated content, such as posts of photos and videos, that "promotes and propagates" social media acceptance, contributing to social-networking-site growth. The growth of social media use, characterised as increase of active users in a pre-determined timeframe, may increase an individual'ssocial presence.Social presencecan be defined as the degree to which a social-networking communications medium allows an individual to feel present with others.[10][11] Moon and Kim's (2001)[12]research results found that individual'senjoymentof web systems have positive impacts on theirperceptionson the system, and thus would form "high behaviour intention to use it". Munnukka (2007)[13]have found strongcorrelationsbetween positive previous experience of related types ofcommunicationsandadoptionof new mobilesite communication services. However, there are also cases where active users and revenue seemed to have a negativecorrelation. 
For instance, Snap Inc.'s gains in daily active users (DAU) havestabilisedor decreased during theCOVID-19 Pandemic, revenue still exceeded estimates, with strong similar strong trends in the current period.[14] Greater number active users boost the number of visits on particular sites. With more traffic, moreadvertiserswill be attracted, contributing torevenue generation.[15]In 2014, 88% ofcorporation's purpose of social media usage isadvertising.[16]Active Users increase allowssocial-networking sitesto build and follow more customer profiles, that is based on customer's needs and consumption patterns.[17]Active user data can be used to determine high traffic periods and create behavior models of users to be used for targeted advertising. The increase of customer profiles, due to increase of active users, ensures a more relevantpersonalised and customisedadvertisements. Bleier and Eisenbeiss (2015)[18]found that morepersonalisedandrelevantadvertisements increase "view-throughresponses" and strengthen theeffectivenessof "the advertisedbanner" significantly. DeZoysa (2002)[19]found that consumers are more likely to open and responsive on personalised advertisements that are relevant to them. TheFinancial Accounting Standard Boarddefines that objective of financial reporting is provide relevant and material financial information to financial statement users to allow for decision making and ensure an efficient economic |resource allocation.[20]All reporting entities, primarilypublicly listed companiesand largeprivate companiesare required by law to adhere to disclosure and accounting standards requirements. For example, in Australia, companies are required to comply withaccounting standardsset by theAustralian Accounting Standards Board, which is part of theCorporations Act 2001. In social media company's context, there is also reporting of non-financial information, such as the number of users (active users). Examples may include: Alternative methods of reporting thesemetricsare through social networks and the web, which have become important part of firm's "information environment" to report financial and non-financial information, according to Frankel (2004),[22]whereby firm relevant information is being spread and disseminated in short spans of time between networks of investors, journalists, and other intermediaries and stakeholders.[23]Investment blogs aggregator, likeSeeking Alpha, has become significant for professionalfinancial analysts,[24]who giverecommendationson buying and selling stocks. Studies by Frieder and Zittrain (2007)[25]have raised new concerns about how digitalcommunications technologiesinformation reportinghave the ability to affectmarket participants. Admiraal (2009)[26]emphasised that nonfinancial metrics reported bysocial mediacompanies, including active users, may give not desirable assurance in success measurements, as the guidance, and reportingregulationsthat safeguards thereliabilityandqualityof the information are too few and have not yet beenstandardized. Cohen et al. (2012)[27]research on a set of economic performance indicators found that there is a lack of extensivedisclosuresand a materialvariabilitybetween disclosure practices based on industries and sizes. In 2008, the U.S. 
Securities and Exchange Commission took a cautious approach in revising their public disclosureguidancefor social media companies and claim the information to be "supplementalrather thansufficientby themselves".[28]Alexander, Raquel, Gendry and James (2014)[29]recommended that executives and managers should take a morestrategicapproach in managinginvestor relationsandcorporate communications, ensuring investor's andanalyst'sneeds are jointly met. The active user metric can be particularly useful inbehavioural analyticsandpredictive analytics. The active user metric in the context ofpredictive analyticscan be applied in a variety of fields includingactuarial science,marketing,finance services,healthcare,online-gaming, andsocial networking. Lewis, Wyatt, and Jeremy (2015),[30]for example, have used this metric conducted a research in the fields ofhealthcareto study quality and impacts of a mobile application and predicted usage limits of these applications. Active users can also be used in studies that addresses the issue ofmental health problemsthat could cost theglobal economy$16 Trillion U.S. Dollars by 2030, if there is a lack of resource allocated formental health.[31]Through web-behavioural analysis, Chuenphitthayavut, Zihuang, and Zhu (2020)[32]discovered that the promotion of informational, social andemotional supportthat represents media and public perception has positive effects on their research participants behavioural intention to use online mental health intervention. Online psychological educational program, a type of online mental health interventions are found to promote well-being, and decreased suicidal conception.[33] In the fields of online-gaming, active users is quite useful in behaviour prediction andchurn ratesof online games. For example, active user's features such "active Duration" and "play count" can have inversecorrelationswith churn rates, with "shorter play times and lower play count" associated with higher churn rates.[34]Jia et Al. (2015)[35]showed that there are social structures that transpire or emerge and centred around highly active players, withstructuralsimilarity betweenmultiplayer online-games, such asStarCraft IIandDota. The Active Users metric can be used to predict one'spersonality traits, which can be classified and grouped into categories. These categories have accuracy that ranges from 84%–92%.[36]Based on the number of user's in a particular group, the internet object associated with it, can be deemed as "trending", and as an "area of interest". With the internet'sevolutioninto a tool used for communications andsocialisation, ethical considerations have also shifted from data-driven to "human-centered", further complicating the ethical issues relating with concepts of public and private on online domains, whereby researchers and subjects do not fully understand theterms and conditions[37]Ethical considerations need to be considered in terms of participative consent,data confidentiality-privacy-integrity, and disciplinary-industry-professionalnormsand acceptedstandardsincloud computingandbig dataresearch. Boehlefeld (1996)[38]noted that researchers usually refer to ethical principals in their respective disciplines, as they seek guidance and recommended the guidelines by theAssociation for Computing Machineryto assist researchers of their responsibilities in their research studies in technological orcyberspace. 
Informed consent refers to a situation in which a participant voluntarily takes part in research with full knowledge of the research methods and the associated risks and rewards. With the rise of the internet as a social networking tool, researchers studying active users may face unique challenges in obtaining informed consent. Ethical considerations include the degree of knowledge given to participants and age appropriateness, the ways in which researchers inform them and how practical those are, and when it is appropriate to waive consent.[39] Crawford and Schultz (2014)[40] noted that the data uses to which participants consent are "innumerable" and "yet-to-be-determined" before the research is conducted. Grady et al. (2017)[41] pointed out that technological advancements can assist in obtaining consent without an in-person meeting between investigators (researchers) and research participants. A large amount of research is based on individualised data that encompasses users' online identity (their clicks, reading and movements) and the content they consume, with data analytics producing inferences about their preferences, social relationships, and movement or work habits. In some cases individuals may greatly benefit; in others they can be harmed. Afolabi and García-Basteiro (2017)[42] argued that informed consent to research studies goes beyond "clicking blocks or supplying signatures", as participants may feel pressured into joining the research without the researcher being aware of the situation. There is not yet a universally accepted set of industry standards and norms for data privacy, confidentiality and integrity, a critical ethical consideration, but there have been attempts to design processes to oversee research activities and data collection so as to better meet community and end-user expectations.[43] There are also policy debates around the ethical issues of integrating edtech (education technology) into K-12 education environments, as minor children are perceived to be the most vulnerable segment of the population.[44] Many social media companies differ in their definitions and calculation methods for the active users metric, and these differences often mean that the metric measures different underlying variables. Wyatt (2008)[45] argues that there is evidence that some metrics reported by social media companies do not appear to be reliable, since they require categorical judgements, yet they remain value-relevant to financial statement users. Luft (2009)[46] observed that non-financial metrics like active users present challenges of measurement accuracy and of appropriate weighting when coupled with accounting reporting measures. There has been increasing attention from the business press and academia to corporate conventions for disclosing this information.[47] Active users are calculated using the internal data of the specific company. Data is collected on unique users performing specific actions which data collectors deem to be a sign of activity. These actions include visiting the home or splash page of a website, logging in, commenting, uploading content, or similar actions that make use of the product. A person subscribed to a service may also be counted as an active user for the duration of the subscription. Each company has its own method of determining its number of active users, and many companies do not share specific details about how they calculate it. Some companies change their calculation method over time. 
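As an illustration of how such a count might be derived from raw internal data, the following sketch computes daily and monthly active users from a hypothetical event log. The choice of which actions qualify as "activity" is exactly the policy decision described above; both the events and the qualifying-action set here are made up.

```python
from datetime import date, timedelta

# Hypothetical raw event log: (user_id, action, date).
events = [
    ("u1", "login",   date(2024, 5, 1)),
    ("u1", "comment", date(2024, 5, 1)),
    ("u2", "login",   date(2024, 5, 1)),
    ("u3", "upload",  date(2024, 5, 3)),
    ("u2", "login",   date(2024, 5, 20)),
]

QUALIFYING_ACTIONS = {"login", "comment", "upload"}  # illustrative policy choice

def active_users(events, start, end, qualifying=QUALIFYING_ACTIONS):
    """Unique users with at least one qualifying action between start and end (inclusive)."""
    return {uid for uid, action, day in events
            if action in qualifying and start <= day <= end}

day = date(2024, 5, 1)
dau = len(active_users(events, day, day))
mau = len(active_users(events, day, day + timedelta(days=29)))
print(f"DAU on {day}: {dau}, MAU over the following 30 days: {mau}")
```

Changing the qualifying-action set (for example, counting only uploads and comments rather than logins) changes the reported figures, which is why definitional differences between companies matter.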
The specific action used to flag users as active greatly affects the quality of the data: if it does not accurately reflect engagement with the product, the resulting data will be misleading.[48] Basic actions such as logging into the product may not accurately represent customer engagement and can inflate the number of active users, while uploading content or commenting may be too specific for a product and under-represent user activity. Weitz, Henry and Rosenthal (2014)[21] suggested that factors affecting the accuracy of metrics like active users include issues of definition and calculation, circumstances of deceptive inflation, uncertainty of specification, and shared, duplicate or fake accounts. The authors describe Facebook's monthly active user criterion as registered users who, in the past 30 days, have used the messenger and have taken action to share content and activity; this differs from LinkedIn, which uses registered members, page visits and views. For example, a customer who uses Facebook once, to "comment" or "share content", may also be counted as an "active user".[49] A potential cause of these measurement inaccuracies is the implementation of pay-for-performance systems that encourage desired behaviours, including high-performance work systems.[50] In social media companies, active users is one of the crucial metrics for measuring the success of the product. Trueman, Wong, and Zhang (2000)[51] found that in most cases unique visitors and pageviews, as measures of web usage, account for changes in the stock prices and net income of internet companies. Lazer, Lev and Livnat (2001)[52], in their analysis of internet companies' traffic data split into above- and below-median traffic, found that more popular websites generated greater stock returns. Portfolios yielding higher returns may sway investors to vote for a more favourable bonus package for executive management. Kang, Lee and Na's (2010)[53] research on the 2007–2008 global financial crisis highlights the importance of preventing the "expropriation incentives" of investors, which has prominent implications for corporate governance, especially during an economic shock. The active user metric is limited in its ability to examine users' pre-adoption and post-adoption behaviours. Users' commitment to a particular online product may also depend on trust and on the quality of alternatives.[54] The effect of pre-adoption behaviour on post-adoption behaviour, as predicted by past research,[55] has been found to be associated with factors such as habit, gender and other socio-cultural demographics.[56] Buchanan and Gillies (1990)[57] and Reichheld and Schefter (2000)[58] argue that post-adoption behaviour and continuous usage are "relatively more important than first-time or initial usage" because they show "the degree of consumer loyalty" and ultimately produce long-term product value.
https://en.wikipedia.org/wiki/Active_users
Marketing is the act of satisfying and retaining customers.[3] It is one of the primary components of business management and commerce.[4] Marketing is usually conducted by the seller, typically a retailer or manufacturer. Products can be marketed to other businesses (B2B) or directly to consumers (B2C).[5] Sometimes tasks are contracted to dedicated marketing firms, such as a media, market research, or advertising agency. Sometimes, a trade association or government agency (such as the Agricultural Marketing Service) advertises on behalf of an entire industry or locality, often a specific type of food (e.g. Got Milk?), food from a specific area, or a city or region as a tourism destination. Market orientations are philosophies concerning the factors that should go into market planning.[6] The marketing mix, which outlines the specifics of the product and how it will be sold, including the channels that will be used to advertise the product,[7][8] is affected by the environment surrounding the product,[9] the results of marketing research and market research,[10][11] and the characteristics of the product's target market.[12] Once these factors are determined, marketers must then decide what methods to use to promote the product,[5] including the use of coupons and other price inducements.[13] Marketing is currently defined by the American Marketing Association (AMA) as "the activity, set of institutions, and processes for creating, communicating, delivering, and exchanging offerings that have value for customers, clients, partners, and society at large".[14] However, the definition of marketing has evolved over the years. The AMA reviews this definition and its definition for "marketing research" every three years.[14] The interests of "society at large" were added into the definition in 2008.[15] The development of the definition may be seen by comparing the 2008 definition with the AMA's 1935 version: "Marketing is the performance of business activities that direct the flow of goods, and services from producers to consumers".[16] The newer definition highlights the increased prominence of other stakeholders in the new conception of marketing. Recent definitions of marketing place more emphasis on the consumer relationship, as opposed to a pure exchange process. For instance, the prolific marketing author and educator Philip Kotler has evolved his definition of marketing: in 1980, he defined marketing as "satisfying needs and wants through an exchange process",[18] and in 2018 he defined it as "the process by which companies engage customers, build strong customer relationships, and create customer value in order to capture value from customers in return".[19] A related definition, from the sales process engineering perspective, defines marketing as "a set of processes that are interconnected and interdependent with other functions of a business aimed at achieving customer interest and satisfaction".[20] Some definitions of marketing highlight marketing's ability to produce value to shareholders of the firm as well. 
In this context, marketing can be defined as "the management process that seeks to maximise returns to shareholders by developing relationships with valued customers and creating a competitive advantage".[21]For instance, theChartered Institute of Marketingdefines marketing from a customer-centric perspective, focusing on "the management process responsible for identifying, anticipating and satisfying customer requirements profitably".[22] In the past, marketing practice tended to be seen as a creative industry, which includedadvertising,distributionandselling, and even today many parts of the marketing process (e.g.product design,art director,brand management, advertising, inbound marketing,copywritingetc.) involve the use of the creative arts.[23]However, because marketing makes extensive use ofsocial sciences,psychology,sociology,mathematics,economics,anthropologyandneuroscience, the profession is now widely recognized as a science.[24]Marketing science has developed a concrete process that can be followed to create amarketing plan.[25] The "marketing concept" proposes that to complete its organizational objectives, an organization should anticipate the needs and wants of potential consumers and satisfy them more effectively than its competitors. This concept originated fromAdam Smith's bookThe Wealth of Nationsbut would not become widely used until nearly 200 years later.[26]Marketing and Marketing Concepts are directly related. Given the centrality of customer needs, and wants in marketing, a rich understanding of these concepts is essential:[27] Marketing research, conducted for the purpose of new product development or product improvement, is often concerned with identifying the consumer'sunmet needs.[28]Customer needs are central to market segmentation which is concerned with dividing markets into distinct groups of buyers on the basis of "distinct needs, characteristics, or behaviors who might require separate products or marketing mixes."[29]Needs-based segmentation (also known asbenefit segmentation) "places the customers' desires at the forefront of how a company designs and markets products or services."[30]Although needs-based segmentation is difficult to do in practice, it has been proved to be one of the most effective ways to segment a market.[31][28]In addition, a great deal of advertising and promotion is designed to show how a given product's benefits meet the customer's needs, wants or expectations in a unique way.[32] The two major segments of marketing are business-to-business (B2B) marketing and business-to-consumer (B2C) marketing.[5] B2B (business-to-business) marketing refers to any marketing strategy or content that is geared towards a business or organization.[33]Any company that sells products or services to other businesses or organizations (vs. consumers) typically uses B2B marketing strategies. The 7 P's of B2B marketing are: product, price, place, promotion, people, process, and physical evidence.[33]Some of the trends in B2B marketing include content such as podcasts, videos, and social media marketing campaigns.[33] Examples of products sold through B2B marketing include: The four major categories of B2B product purchasers are: Business-to-consumer marketing, or B2C marketing, refers to the tactics and strategies in which a company promotes its products and services to individual people. Traditionally, this could refer to individuals shopping for personal products in a broad sense. 
More recently the term B2C refers to the online selling of consumer products. Consumer-to-business marketing or C2B marketing is a business model where the end consumers create products and services which are consumed by businesses and organizations. It is diametrically opposed to the popular concept of B2C or business-to-consumer where the companies make goods and services available to the end consumers. In this type of business model, businesses profit from consumers' willingness to name their own price or contribute data or marketing to the company, while consumers benefit from flexibility, direct payment, or free or reduced-price products and services. One of the major benefit of this type of business model is that it offers a company a competitive advantage in the market.[34] Customer to customermarketing or C2C marketing represents a market environment where one customer purchases goods from another customer using a third-party business or platform to facilitate the transaction. C2C companies are a new type of model that has emerged with e-commerce technology and the sharing economy.[35] The different goals of B2B and B2C marketing lead to differences in the B2B and B2C markets. The main differences in these markets are demand, purchasing volume, number of customers, customer concentration, distribution, buying nature, buying influences, negotiations, reciprocity, leasing and promotional methods.[5] A marketing orientation has been defined as a "philosophy of business management."[6]or "a corporate state of mind"[36]or as an "organizational culture."[37]Although scholars continue to debate the precise nature of specific concepts that inform marketing practice, the most commonly cited orientations are as follows:[38] A marketing mix is a foundational tool used to guide decision making in marketing. The marketing mix represents the basic tools that marketers can use to bring their products or services to the market. They are the foundation of managerial marketing and themarketing plantypically devotes a section to the marketing mix. The 4Ps refers to four broad categories of marketing decisions, namely:product,price,promotion, andplace.[7][49]The origins of the 4 Ps can be traced to the late 1940s.[50][51]The first known mention has been attributed to a Professor of Marketing at Harvard University, James Culliton.[52] The 4 Ps, in its modern form, was first proposed in 1960 by E. Jerome McCarthy; who presented them within a managerial approach that coveredanalysis,consumer behavior,market research,market segmentation, andplanning.[53][54]Phillip Kotler, popularised this approach and helped spread the 4 Ps model.[55][56]McCarthy's 4 Ps have been widely adopted by both marketing academics and practitioners.[57][58][59] One of the limitations of the 4Ps approach is its emphasis on an inside-out view.[63]Aninside-outapproach is the traditional planning approach where the organization identifies its desired goals and objectives, which are often based around what has always been done. Marketing's task then becomes one of "selling" the organization's products and messages to the "outside" or external stakeholders.[60]In contrast, anoutside-inapproach first seeks to understand the needs and wants of the consumer.[64] From a model-building perspective, the 4 Ps has attracted a number of criticisms. Well-designed models should exhibit clearly defined categories that are mutually exclusive, with no overlap. Yet, the 4 Ps model has extensive overlapping problems. 
Several authors stress the hybrid nature of the fourth P, mentioning the presence of two important dimensions, "communication" (general and informative communications such as public relations and corporate communications) and "promotion" (persuasive communications such as advertising and direct selling). Certain marketing activities, such as personal selling, may be classified as eitherpromotionor as part of the place (i.e., distribution) element.[65]Some pricing tactics, such as promotional pricing, can be classified as price variables or promotional variables and, therefore, also exhibit some overlap. Other important criticisms include that the marketing mix lacks a strategic framework and is, therefore, unfit to be a planning instrument, particularly when uncontrollable, external elements are an important aspect of the marketing environment.[66] To overcome the deficiencies of the 4P model, some authors have suggested extensions or modifications to the original model. Extensions of the four P's are often included in cases such as services marketing where unique characteristics (i.e. intangibility, perishability, heterogeneity and the inseparability of production and consumption) warrant additional consideration factors. Other extensions include "people", "process", and "physical evidence" and are often applied in the case ofservices marketing.[67]Other extensions have been found necessary in retail marketing, industrial marketing and internet marketing. In response to environmental and technological changes in marketing, as well as criticisms towards the 4Ps approach, the 4Cs has emerged as a modern marketing mix model. Robert F. Lauterborn proposed a 4 Cs classification in 1990.[68]His classification is a more consumer-orientated version of the 4 Ps[69][70]that attempts to better fit the movement frommass marketingtoniche marketing.[68][71][72] Consumer (or client) The consumer refers to the person or group that will acquire the product. This aspect of the model focuses on fulfilling the wants or needs of the consumer.[8] Cost Cost refers to what is exchanged in return for the product. Cost mainly consists of the monetary value of the product. Cost also refers to anything else the consumer must sacrifice to attain the product, such as time or money spent on transportation to acquire the product.[8] Convenience Like "Place" in the 4Ps model, convenience refers to where the product will be sold. This, however, not only refers to physical stores but also whether the product is available in person or online. The convenience aspect emphasizes making it as easy as possible for the consumer to attain the product, thus making them more likely to do so.[8] Communication Like "Promotion" in the 4Ps model, communication refers to how consumers find out about a product. Unlike promotion, communication not only refers to the one-way communication of advertising, but also the two-way communication available through social media.[8] The term "marketing environment" relates to all of the factors (whether internal, external, direct or indirect) that affect a firm's marketing decision-making/planning. A firm's marketing environment consists of three main areas, which are: Marketing research is a systematic process of analyzing data that involves conducting research to support marketing activities and the statistical interpretation of data into information. 
This information is then used by managers to plan marketing activities, gauge the nature of a firm's marketing environment and obtain information from suppliers. A distinction should be made between marketing research and market research. Market research involves gathering information about a particular target market. As an example, a firm may conduct research in a target market after selecting a suitable market segment. In contrast, marketing research relates to all research conducted within marketing. Market research is a subset of marketing research.[10] (Avoiding the word consumer, which shows up in both,[73] market research is about distribution, while marketing research encompasses distribution, advertising effectiveness, and salesforce effectiveness).[74] The research process proceeds through a series of defined stages. Well-known academic journals in the field of marketing (those with the best rating in VHB-Jourqual and the Academic Journal Guide, an impact factor of more than 5 in the Social Sciences Citation Index, and an h-index of more than 130 in the SCImago Journal Rank) are also designated as Premier AMA Journals by the American Marketing Association. Market segmentation consists of taking the total heterogeneous market for a product and dividing it into several sub-markets or segments, each of which tends to be homogeneous in all significant aspects.[12] The process is conducted for two main purposes: better allocation of a firm's finite resources and better service of the more diversified tastes of contemporary consumers. A firm only possesses a certain amount of resources. Thus, it must make choices (and appreciate the related costs) in servicing specific groups of consumers. Moreover, with more diversity in the tastes of modern consumers, firms are noting the benefit of servicing a multiplicity of new markets. Market segmentation can be defined in terms of the STP acronym, meaning Segmentation, Targeting, and Positioning. Segmentation involves the initial splitting up of consumers into persons of like needs/wants/tastes, on the basis of several commonly used criteria. Once a segment has been identified to target, a firm must ascertain whether the segment is beneficial for it to service. The DAMP acronym is used as a set of criteria to gauge the viability of a target market. The next step in the targeting process is the level of differentiation involved in serving a segment. Three modes of differentiation exist and are commonly applied by firms. Positioning concerns how to position a product in the minds of consumers and to inform consumers of the attributes that differentiate it from competitors' products. A firm often performs this by producing a perceptual map, which denotes similar products produced in the same industry according to how consumers perceive their price and quality. From a product's placing on the map, a firm would tailor its marketing communications to meld with the product's perception among consumers and its position among competitors' offerings.[76] The promotional mix outlines how a company will market its product. It consists of five tools: personal selling, sales promotion, public relations, advertising and social media. The area of marketing planning involves forging a plan for a firm's marketing activities. A marketing plan can also pertain to a specific product, the introduction of a new product, the revision of current marketing strategies for existing products, as well as an organisation's overall marketing strategy. 
Theplanis created to accomplish specific marketing objectives, outlining a company'sadvertisingand marketing efforts for a given period, describing the current marketing position of a business, and discussing the target market andmarketing mixto be used to achieve marketing goals. An organization's marketing planning process is derived from its overall business strategy. Marketing plans start by identifying customer needs through market research and how the business can satisfy these needs. The marketing plan also shows what actions will be taken and what resources will be used to achieve the planned objectives. Marketing objectives are typically broad-based in nature, and pertain to the general vision of the firm in the short, medium or long-term. As an example, if one pictures a group of companies (or aconglomerate), the objective might be to increase the group's sales by 25% over a ten-year period. Theproduct life cycle(PLC) is a tool used by marketing managers to gauge the progress of a product, especially relating to sales or revenue accrued over time. The PLC is based on a few key assumptions, including: In theintroductionstage, a product is launched onto the market. To stimulate the growth of sales/revenue, use of advertising may be high, in order to heighten awareness of the product in question. During thegrowthstage, the product's sales/revenue is increasing, which may stimulate more marketing communications to sustain sales. More entrants enter into the market, to reap the apparent high profits that the industry is producing. When the product hitsmaturity, its starts to level off, and an increasing number of entrants to a market produce price falls for the product. Firms may use sales promotions to raise sales. Duringdecline, demand for a good begins to taper off, and the firm may opt to discontinue the manufacture of the product. This is so, if revenue for the product comes from efficiency savings in production, over actual sales of a good/service. However, if a product services a niche market, or is complementary to another product, it may continue the manufacture of the product, despite a low level of sales/revenue being accrued.[5]
https://en.wikipedia.org/wiki/Marketing
Decidimdescribes itself as a "technopolitical network for participatory democracy".[2]It combines afree and open-source software(FOSS) software package together with aparticipatorypolitical project and an organising community, "Metadecidim".[3]Decidim participants describe the software, political and organising components as "technical", "political" and "technopolitical" levels, respectively.[4]Decidim's aims can be seen as promoting theright to the city, as proposed byHenri Lefebvre.[5]As of 2023[update], Decidim instances were actively in use for participatorydecision-makingin municipal and regional governments and by citizens' associations in Spain, Switzerland and elsewhere.[3][4]Studies of the use of Decidim found that it was effective in some cases,[3][4][5]while in one case implementedtop-downinLucerne, it strengthened thedigital divide.[5] A server called "Decidim" was created by the15M anti-austerity movement in Spainin 2016, running a fork of the "Consul" software,[4]when a political party derived from the protest movement obtained political power.[3]In early 2017, the server was switched to a similarly inspired, but new software project, Decidim, completely rewritten, aiming to be more modular and convenient for development by a wide community.[2]: 3, 104 The name "Decidim" comes from aCatalanword meaning "let's decide" or "we decide".[2]: 1 Decidim usesRuby on Rails. As of 2022[update], the software defines two structures: "participatory spaces" and "participatory components".[4]The participatory spaces (six as of early 2024[2]: 3) include "processes" (such as a participatory budget), "assemblies" (such as a citizens' association website), "conferences/meetings", "initiatives", and "consultations (voting/elections)".[5][4]The participatory components (twelve as of early 2024[2]: 3) range from "comments", "proposals", "amendments", "votes" through to "accountability". Together these allow a wide flexibility in creating specific space–component combinations.[4]The "accountability" component is used to monitor whether and how a project is executed.[2]: 4 As of 2022[update], three user levels are defined: general visitors with view-only access; registered users who have several participation rights; and verified users who can participate in decision-making. Users may be individuals or represent associations or working groups within an organisation. Users with special privileges are called "administrators", "moderators" and "collaborators".[4] As of 2022[update], four versions of Decidim had been released.[3] The Decidim software development strategy is intended to be modular and scalable. 
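The space–component combination described above can be pictured with a small, purely illustrative data model. This is not Decidim's actual Ruby on Rails schema; only the space and component names mentioned in this section are used.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    kind: str          # e.g. "proposals", "comments", "votes", "accountability"

@dataclass
class ParticipatorySpace:
    kind: str          # e.g. "process", "assembly", "initiative", "consultation"
    title: str
    components: list[Component] = field(default_factory=list)

# A participatory-budget process combining several components:
budget = ParticipatorySpace(
    kind="process",
    title="Neighbourhood participatory budget",
    components=[Component("proposals"), Component("votes"), Component("accountability")],
)
print([c.kind for c in budget.components])
```

Treating components as units that can be attached to any space is what gives a single "process" space the flexibility to combine proposals, votes and accountability tracking.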
AsFOSS, the software is intended to encourage both citizen and government interaction with each other and with decision-making power over the software itself, aiming at high levels of traceability and transparency.[4] Decidim software provides anapplication programming interface(API) for command line access.[6] In the spirit of the Decidim software beingfree and open-source software(FOSS), a community of software developers, social activists, software consultancies, researchers, and administrative staff from municipal governments called Metadecidim was created for discussing and analysing Decidim experience and development.[3]Metadecidim is seen as an intermediary component between the political level of Decidim, implemented on servers such as Barcelona Decidim, and the technical level of hosting the software source code and bug reporting structures.[4]As of June 2023[update], Metadecidim had about 5000 registered participants.[2]: 90 The Decidim community has a text called the Decidim Social Contract (DSC) that defines six guidelines. The DSC defines thefree softwarelicences that may be used for Decidim software; it defines requirements of transparency, traceability and integrity of content hosted by Decidim software; a goal of equal access to all users and democratic quality parameters to measure progress towards equality;[7]data privacy; and it requires inter-institutional cooperation of institutions implementing instances of the software, in order to encourage further development.[5]The free software licensing is theGNU Affero General Public License(AGPL) version 3 for code; theCC BY-SA licenceis used for content; and "data" is published under theOpen Database License.[2]: 4–5[7] Philosophically, the aims of Decidim can be seen as promoting theright to the city, as proposed byHenri Lefebvre.[5]Metadecidim's self-description as "technopolitical" is seen as implying that the political implications of designs and choices of software are seen as significant, in opposition to the view that software is "value neutral and objective".[2]: viiiMetadecidim sees Decidim as a "recursively democratic infrastructure", in the sense that the software, political and server infrastructure is "both used and democratised by its community, the Metadecidim community".[2]: 2 Decidim proponents see the combination of online and offline participation as fundamental: "From its very conception until today, a distinguishing feature of Decidim over other kinds of participatory democracy software ... was that of connecting digital processes directly with public meetings and vice versa."[2]: 74 Organisationally, the community formally established Decidim Association in 2019 andCity Council of Barcelonagave control of the Decidim trademark and code base to Decidim Association. 
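The API mentioned above can be queried over HTTP; in recent Decidim versions it is a GraphQL endpoint, conventionally served at /api. The sketch below assumes such an endpoint on a hypothetical instance; the endpoint URL is a placeholder and the query fields are illustrative and may differ between Decidim versions.

```python
import json
import urllib.request

ENDPOINT = "https://example-decidim-instance.org/api"  # placeholder instance URL

# Illustrative GraphQL query listing participatory processes and their titles.
query = """
{
  participatoryProcesses {
    id
    title { translation(locale: "en") }
  }
}
"""

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps({"query": query}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.load(response))
```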
The effect was to combine public funds with citizens' association control of decision-making.[2]: 3 In 2022, Borge and colleagues estimated that there were 311 instances running Decidim in Spain and in 19 other countries;[3]while Borges and colleagues estimated that there were Decidim instances run by 80 local and regional governments and 40 citizens' associations in Spain and elsewhere.[4]In 2023, Suter and colleagues cited Decidim's own estimate of 400 city and regional governments and civil society institutions using Decidim.[5]TheOpen University of Catalonia, theUniversity of Bordeauxand theUniversity of Caen Normandyran Decidim instances.[2]: 27–28 A Decidim server was run by theCity Council of Barcelonafor a two-month trial prior to 2017, in which 40,000 citizens discussed their own proposals and proposals made by the council. The Decidim software allowedthreadeddiscussion, labelling whether the initial comment on a proposal was negative, neutral or positive, andnotificationto participants.[6] The two-month trial included both online and face-to-face participation. According to Decidim, about 40% of the 39,000 individual participants did so face-to-face, and about 85% of the organisational participants did so face-to-face. There were about 11,000 proposals made on the Decidim server, of which about 8000 were accepted. The execution of the proposals was monitored during the following four years, spending about 90% of the Barcelona City Council's budget for 2016–2019.[2]: 24–25 In Switzerland, urban development has legal requirements in relation to citizen participation. Use of Decidim inZurichandLucernein 2021 and 2022 was studied by Suter and colleagues, based on documentary evidence, interviews with 15 people in Zurich and 17 in Lucerne ranging from municipality employees through to representatives of neighbourhood associations, and "participatory observations" (informal participatory events observed by the researchers). The researchers found that the effectiveness of Decidim varied significantly between the different cases, and argued that the "full potential" of Decidim had not yet been achieved in Switzerland.[5] InWipkingenin Zurich, two local citizens' associations used a server running Decidim to run aparticipatory budgetto spendCHF40,000. The project, named "Quartieridee", had 99 submissions of proposals and awarded funding to eight proposals. The researchers found that overall implementation was dependent on significant financial resources and citizens' voluntary work; and had difficulties due to the municipality lacking legal procedures for implementing the citizens' chosen projects.[5] The project was scaled up to the Zurich city level the following year with the name "Stadtidee" and a participatory budget ofCHF540,000. Among the successful projects was a confrontation between a citizens' association, "Linkes Seeufer für Alle" opposed to a Kibag AG in relation to a plot of land owned by Kibag next to Lake Zurich. An effect of the Decidim networking was that citizens legally occupied the plot of land for several days.[5] In 2021, the LuzernNord area of Lucerne was an area with many migrants and people with low incomes, at risk ofgentrification. A top-down use of a Decidim server by the local administration, in which citizens' associations were encouraged to participate, was found by the researchers to strengthen thedigital dividerather than overcoming it. 
Limitation of the language to German and lack of confidence in being able to participate effectively were found to be specific effects opposing the effectiveness of the project.[5] Based on nine in-depth interviews with officials responsible for Decidim, conducted in 2018 in some of the initial municipalities that used Decidim, online interviews in March 2019 with officials from 34 municipalities using Decidim, and data from the Decidim servers, the effectiveness of Decidim in terms of transparency, participation indecision-making, and deliberation (discussion of proposals) was studied by Rosa Borge and colleagues. It was found that the officials saw Decidim's role as primarily promoting transparency and the collecting of citizens' proposals, while having only a modest role in transferring decision-making to citizens and a minor role in encouraging online citizen debate.[3] Several municipalities' use of Decidim provided their first use ofparticipatory budgeting.[3] The Borge et al. study also found, consistently with other research, that the participatory aspect of citizens making proposals and participating in decisions was obstructed in some cases by local civil society associations, since direct citizen participation was seen to be in competition with the associations' roles. Several municipal governments worked on the implementation of Decidim together with local associations, adding features to the software such as different weightings for proposals by individuals versus those by associations.[3] The use of Decidim and participatory processes was found to depend on electoral results in some cases: these ceased inBadalonaafterDolors Sabaterlost power as Mayor in June 2018.[3] In 2023, the Decidim software was recognised as satisfying the criteria of theDigital Public Goods Allianceas a digital public good that contributes to theUnited Nations'Sustainable Development Goals.[8]
https://en.wikipedia.org/wiki/Decidim
Proxy voting is a form of voting whereby a member of a decision-making body may delegate their voting power to a representative, to enable a vote in absence. The representative may be another member of the same body, or external. A person so designated is called a "proxy" and the person designating them is called a "principal".[1]: 3 Proxy appointments can be used to form a voting bloc that can exercise greater influence in deliberations or negotiations. Proxy voting is a particularly important practice with respect to corporations; in the United States, investment advisers often vote proxies on behalf of their client accounts.[2] A related topic is liquid democracy, a family of electoral systems where votes are transferable and grouped by voters, candidates or a combination of both to create proportional representation and delegated democracy. Another related topic is the so-called Proxy Plan, or interactive representation electoral system, whereby elected representatives would wield as many votes as they received in the previous election. Oregon held a referendum on adopting such an electoral system in 1912.[3] The United States parliamentary manual Riddick's Rules of Procedure notes that, under proxy voting, voting for officers should be done by ballot, due to the difficulties involved in authentication if a member simply calls out, "I cast 17 votes for Mr. X."[4] Proxy voting is also an important feature in corporate governance in the United States through the proxy statement. Companies use proxy solicitation agencies to secure proxy votes. The rules of some assemblies presently forbid proxy voting. There is a plan to forbid proxy voting in the United States House of Representatives; a recent vote showed 53 Democrats and 26 Republicans voting by proxy.[5] Forbidding proxy voting can result, however, in the absence of a quorum and the need to compel attendance by a sufficient number of missing members to get a quorum (see call of the house). It is possible for automatic proxy voting to be used in legislatures, by way of direct representation (this idea is essentially a form of weighted voting). For example, it has been proposed that instead of electing members from single-member districts (which may have been gerrymandered), members be elected at large, but that when seated each member cast the number of votes he or she received in the last election. Thus, if, for example, a state were allocated 32 members in the U.S. House of Representatives, the 32 candidates who received the most votes in the at-large election would be seated, but each would cast a different number of votes on the floor and in committee. This proposal would allow for representation of minority views in legislative deliberations, as it does in deliberations at shareholder meetings of corporations. Such a concept was proposed in a submission to the 2007 Ontario Citizens' Assembly process.[6] Another example is Evaluative Proportional Representation (EPR), which elects all the members of a legislative body. Each citizen grades the fitness for office of as many of the candidates as they wish as either Excellent (ideal), Very Good, Good, Acceptable, Poor, or Reject. Multiple candidates may be given the same grade by a voter. Each citizen elects their representative at-large for a city council. 
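The weighted-voting arithmetic behind the direct-representation proposal above can be shown with a minimal sketch; the member names, vote counts and positions are invented for illustration.

```python
# Each seated member wields the number of votes they received at the election.
members = {
    "Member A": 120_000,
    "Member B": 95_000,
    "Member C": 40_000,
}
positions = {"Member A": "yes", "Member B": "no", "Member C": "yes"}

yes = sum(weight for name, weight in members.items() if positions[name] == "yes")
no = sum(weight for name, weight in members.items() if positions[name] == "no")
print(f"yes: {yes}, no: {no}, motion {'passes' if yes > no else 'fails'}")
```

Under EPR the weights would instead be the counts of highest available grades awarded to each winner, but the tallying of a motion works the same way.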
For a large and diverse state legislature, each citizen chooses to vote through any of the districts or official electoral associations in the country. Each grades any number of candidates in the whole country. Each elected representative has a different voting power (a different number of weighted votes) in the legislative body. This number is equal to the total number of highest available grades counted for them from all the voters – no citizen's vote is "wasted".[7]Each voter is represented equally. Two real-life examples of weighted voting include theCouncil of Ministers of the European Unionand theUS Electoral College.[8] TheParliament of New Zealandallows proxy voting. Sections 155-156 of the Standing Orders of theNew Zealand House of Representativesspecify the procedures for doing so. A member can designate another member or a party to cast his or her vote. However, a party may not exercise proxies for more than 25% of its members (rounded upwards).[9]TheNew Zealand Listenernotes a controversial occurrence of proxy voting. TheLabour Partywas allowed to cast votes on behalf ofTaito Phillip Field, who was frequently absent. Theoretically, this was to be allowed only if a legislator was absent on parliamentary business, public business or pressing private business, such as illness or bereavement.[10] Until theRepublicanreforms of 1995 banished the practice, proxy voting was also used inU.S. House of Representativescommittees. Often members would delegate their vote to the ranking member of their party in the committee. Republicans opposed proxy voting on the grounds that it allowed an indolent Democratic majority to move legislation through committee with antimajoritarian procedures. According to this criticism, on days when Democratic committee members were absent, the Democratic leader in the committee would successfully oppose the sitting Republican majority by wielding the proxies of absent Democrats.[11]Democratic House SpeakerNancy Pelositemporary reinstated proxy voting in 2020 for members who were unable to be physically present in the chamber due to the ongoingCOVID-19 pandemic.[12] During the COVID-19 pandemic emergency, proxy voting was temporarily introduced in theUK House of Commons. Deputy Chief WhipStuart Andrewheld a large number of proxy votes for other Conservative MPs, and at one stage in 2021 personally controlled a majority of votes in the whole house.[13]He did not always cast these proxy votes the same way, instead following the instructions of individual MPs.[14] Thomas E. Mann and Norman J. Ornstein write, "In a large and fragmented institution in which every member has five or six places to be at any given moment, proxy voting is a necessary evil".[15] Proxy voting is sometimes described as "the frequency with which spouses, union workers, and friends of friends are in effect sent off to the polls with an assignment to complete." The potential for proxy voting exists in roughly one voter out of five, and it is about twice as high at the middle levels of the sophistication continuum. According to W. Russell Neuman, the net effect of the cues provided by friends and associates is not likely to be as significant as those of the political parties.[16] The possibility of expanded use of proxy voting has been the subject of much speculation. Terry F. Buss et al. 
write thatinternet votingwould result in de facto approval of proxy voting, since passwords could be shared with others: "Obviously, cost-benefit calculations around the act of voting could also change substantially as organizations attempt to identify and provide inducements to control proxy votes without violating vote-buying prohibitions in the law."[17] One of the criticisms of proxy voting is that it carries a risk of fraud or intimidation.[18]Another criticism is that it violates the concept of a secret ballot, in that paperwork may be filed, for instance, designating a party worker as one's proxy.[19] It has been proposed that proxy voting be combined withinitiative and referendumto form a hybrid ofdirect democracyandrepresentative democracy.[20][21][unreliable source?]James C. Miller III,Ronald Reagan's budget director, suggested scrapping representative democracy and instead implementing a "program for direct and proxy voting in the legislative process."[22]It has been suggested by Joseph Francis Zimmerman that proxy voting be allowed inNew England town meetings.[23] Proxy voting can eliminate some of the problems associated with thepublic choicedilemma ofbundling. According to Arch Puddington et al., in Albanian Muslim areas, many women have been effectively disenfranchised through proxy voting by male relatives.[24] In Algeria, restrictions on proxy voting were institutedc.1991in order to undermine theIslamic Salvation Front.[25] In Canada, the province of Nova Scotia allows citizens to vote by proxy if they expect to be absent. The territories of Yukon, Northwest Territories, and Nunavut also allow for proxy voting.[26]Canadian prisoners of war in enemy camps were allowed to vote through proxy voting.[27]David Stewart and Keith Archer opine that proxy voting can result in leadership selection processes to become leader-dominated.[28]Proxy voting had only been available to military personnel since World War II, but was extended in 1970 and 1977 to include voters in special circumstances such as northern camp operators, fishermen, and prospectors. TheAlberta Liberal Partyran into some difficulties, in that an unknown number of proxy ballots that were counted may have been invalid.[29]Those who, through proxy voting or assistance of invalids, become knowledgeable of the principal's choice are bound to secrecy.[30] SomeChinese provincesallow village residents to designate someone to vote on their behalf. Lily L. Tsai notes that "In practice, one family member often casts votes for everyone in the family even if they are present for the election."[31]In 1997, aCarter Centerdelegation recommended abolishing the proxy voting that allowed one person to vote for three; theInternational Republican Institutehad made a similar recommendation.[32]Proxy voting also became an issue in relation to many of theWenzhoupeople doing business outside.[clarification needed]Most election disputes revolved around proxy votes, including the issues of who could represent them to vote and what kinds of evidence were acceptable for proxy voting. Intense competition made the proxy voting process more and more formal and transparent. 
Some villages required a notary to validate faxed proxy votes; some villages asked for faxed signatures; more often, villages publicized the proxy votes so that villagers could directly monitor them. The Taicang government reported a 99.4% voter turnout in its 1997 election, but a study showed that, after removing proxy votes, only 48% of the eligible voters in the sample reported that they actually went to the central polling station to vote.[33] In France, voters are allowed to temporarily give power of attorney to another registered voter (online or by paper form) for the purpose of voting in an election, provided that the voter making the request visits a national police station or gendarmerie with proof of identity. Applying voters then receive an e-mail receipt indicating the validation or invalidation of their request.[34] This method is allowed as an alternative to early or mail voting. Proxy voting was used intensely in both rounds of the 2024 snap legislative election, when many voters were travelling or scheduled to travel on holiday when the election was called. The election resulted in historically high turnout for a legislative election. According to Mim Kelber, "in Central Africa, all it takes for a man to cast a proxy vote for his wife is to produce an unwitnessed letter mentioning the name of the person to whom the voting power is delegated." The Gabon respondent to an Inter-Parliamentary Union letter commented, "It has been observed that this possibility was exploited to a far greater extent by men than by women, for reasons not always noble."[35] Proxy voting played an important role in Guyana politics in the 1960s. Prior to and during the 1961 elections, proxies had been severely restricted. Some restrictions were lifted, and proxy votes cast rose from 300 in 1961 to 6,635 in 1964. After that election, the Commonwealth Team of Observers voiced concern about proxy votes being liable to fraud. The proxy voting rules were relaxed further, and in 1969, official figures recorded 19,287 votes cast by proxy, about 7% of the total votes cast (an increase from 2.5% in 1964 to 1968).[36] Amidst allegations of fraud, more restrictions were placed on proxy voting in 1973; in that year, about 10,000 votes were cast by proxy.[37] In 2003, India's People's Representative Act was amended to allow armed forces personnel to appoint a proxy to vote on their behalf.[38] In Iraq, the Electoral Laws of 1924 and 1946 ruled out the possibility of proxy voting, except for illiterates, who could appoint someone to write for them.[39] Some instances of proxy voting (usually by family members) in the Russian parliamentary elections of 1995 were noted by observers from the Organization for Security and Cooperation in Europe.[40] The provision for proxy voting in the UK dates back to James I. Long before women's suffrage, women sometimes voted as proxies for absent male family heads. Under British electoral law, ballot papers could not be sent overseas.[19] British emigrants had no right to vote until the mid-1980s. They can now vote by proxy in general elections if they have been on a British electoral register at some point in the past 15 years.[41] They can also vote by post.[42] In the United Kingdom, electors may appoint a proxy. An elector can only act as a proxy for two people to whom they are not directly related. However, they can be a proxy for any number of electors if they are directly related to those electors. 
The voter can change his mind and vote in the election personally as long as his proxy has not already voted on his behalf or applied tovote by mail.[43] Voters must provide a reason for using a proxy, such as being away on vacation. A narrower subset of reasons is permissible if the proxy is to be for more than one election. Except in cases of blindness, the validity of all proxies must be certified by someone such as an employer or doctor.[44] In 2004, twoLiberal Democratcouncillors were found guilty of submitting 55 fraudulent proxy votes and sentenced to 18 months imprisonment.[45] TheElectoral Reform Societyhas proposed the abolition of proxy voting in the UK except in special circumstances such as when the voter is abroad.[46] In 1635–36, Massachusetts granted to the frontier towns "liberty to stay soe many of their freemen at home for the safety of their towne as they judge needful, and that the said freemen that are appoyncted by the towne to stay at home shall have liberty for this court to send their voices by proxy." According to Charles Seymour and Donald Paige Frary, had not proxy voting been implemented, the inhabitants of the frontier towns would have lost their franchises, and the government would have represented only the freemen in the vicinity of Boston. The roads were poor; the drawing of all a village's men at once would have exposed it to Indian attacks; and at election time, the emigrants' labor was needed to get the spring planting into the ground. As late as 1680, and probably even after the charter was revoked in 1684, the Freeman might give his vote for Magistrates in person or proxy at the Court of Elections.[47] Proxy voting was also adopted in colonies adjacent to Massachusetts.[48]Indeed, traces of the practice of proxy voting remained in Connecticut's election laws until the final supersedure of her charter in 1819.[49] In Maryland, theprimary assembliesallowed proxy voting. After the assembly of 1638, protests were sent to the proprietor in England. It was said that the Governor and his friends were able to exercise too much influence through the proxies they had obtained. Proxy voting was also used in South Carolina; the proprietors in September 1683 complained to the governor about this system. Proxy voting was used inLong Island, New York as well, at that time. Phraseology was sometimes designed to hide the fact that a proxy system was in use and that the majority of voters did not actually attend the elections. In Rhode Island, the system described as a "proxy" system, from 1664 onward, was actually simply the sending of written ballots from voters who did not attend the election, rather than a true proxy system, as in the assembly of 1647.[50] In Alabama, the Perry County Civic League's members' assisting illiterate voters by marking a ballot on their behalf was deemed "proxy voting" and "voting more than once" and thus held to be illegal.[51] During theAmerican Civil War, some northern soldiers used proxy voting.[52]AfterIra Eastman's near-victory in New Hampshire, Republicans supported a bill to allow soldiers to vote by proxy, but it was ruled unconstitutional by the state supreme court.[53] In theProgressive Era, proxy voting was used in Republican Party state conventions in New Hampshire. TheBoston and Maine Railroad, the Republican Party's ally, maintained control over the Party by means of these conventions. 
"At the 1906 state convention, for instance, party delegates were quite willing to trade, sell, or exchange their voting power in return for various forms of remuneration from the party machine. Public outcry led to the end of such 'proxy' voting".[54] Proxy voting was used in some American U.S. presidential nominating caucuses. In one case,Eugene McCarthysupporters were in the majority of those present but were outvoted when the presiding party official cast 492 proxy votes – three times the number present – for his own slate of delegates.[55]After the nomination ofHubert Humphrey, theNew Politicsmovement charged that Humphrey and party bosses had circumvented the will of Democratic Party members by manipulating the rules to Humphrey's advantage. In response, the Commission on Party Structure and Delegate Selection, also known as theMcGovern–Fraser Commission, was created to rework the rules in time for the1972 Democratic National Convention. State parties were required to ban proxy voting in order to have their delegates seated at the national convention.[54]It was said that these rules had been used in "highly selective" ways.[56] Several attempts have been made to place proxy voting-related initiatives on the California ballot, but all have failed.[57] Proxy is defined by supreme courts as "anauthorityor power todoa certain thing."[58]A person can confer on his proxy any power which he himself possesses. He may also give him secret instructions as to voting upon particular questions.[59]But a proxy is ineffectual when it is contrary to law or public policy.[60]Where the proxy is duly appointed and he acts within the scope of the proxy, the person authorizing the proxy is bound by his appointee's acts, including his errors or mistakes.[61]When the appointer sends his appointee to a meeting, the proxy may do anything at that meeting necessary to a full and complete exercise of the appointer's right to vote at such meeting. This includes the right to vote to take the vote by ballot, or to adjourn (and, hence, he may also vote on other ordinary parliamentary motions, such as to refer, postpone, reconsider, etc., when necessary or when deemed appropriate and advantageous to the overall object or purpose of the proxy).[62] A proxy can vote only in the principal's absence, not when the principal is present and voting.[63]Where the authority conferred upon a proxy is limited to a designated or special purpose, a vote for another and different purpose is ineffective.[64]A proxy in the usual, ordinary form confers authority to act only at the meeting then in contemplation, and in any adjourned-meetings of the same; hence, it may not be voted at another or different meeting held under anew call.[65]A proxy's unauthorized acts may beratifiedby his appointer, and such ratification is equivalent to previous authority.[66]According to the weight of authority, a proxy only to vote stock may be revoked at any time, notwithstanding any agreement that it shall be irrevocable.[67]The sale in the meantime by a stockholder of his shares in a corporation or company automatically revokes any proxies made or given to vote in respect of such shares.[68]And a proxy is also revoked where the party giving it attends the election in person, or gives subsequent proxy.[69]Hence, a proxy cannot vote when the owner of the stock arrives late or is present and votes.[70] In Vietnam, proxy voting was used to increase turnout. Presently, proxy voting is illegal, but it has nonetheless been occurring since before 1989. 
It is estimated to contribute about 20% to voter turnout, and has been described as "a convenient way to fulfil one's duty, avoid possible risks, and avoid having to participate directly in the act of voting". It is essentially a compromise between the party-state, which wants to have high turnouts as proof of public support, and voters who do not want to go to the polling stations. In the Soviet Union, proxy voting was also illegal but done in order to increase turnout figures.[71] Proxy voting is automatically prohibited in organizations that have adoptedRobert's Rules of Order Newly Revised(RONR) orThe Standard Code of Parliamentary Procedure(TSC) as their parliamentary authority, unless it is provided for in its bylaws or charter or required by the laws of its state of incorporation.[72][73]Robert's Rules says, "If the law under which an organization is incorporated allows proxy voting to be prohibited by a provision of the bylaws, the adoption of this book as parliamentary authority by prescription in the bylaws should be treated as sufficient provision to accomplish that result".[74]Demetersays the same thing, but also states that "if these laws donotprohibit voting by proxy, the body can pass a law permitting proxy voting for any purpose desired."[75]RONR opines, "Ordinarily it should neither be allowed nor required, because proxy voting is incompatible with the essential characteristics of a deliberative assembly in which membership is individual, personal, and nontransferable. In a stock corporation, on the other hand, where the ownership is transferable, the voice and vote of the member also is transferable, by use of a proxy."[76]While Riddick opines that "proxy voting properly belongs in incorporate organizations that deal with stocks or real estate, and in certain political organizations," it also states, "If a state empowers an incorporated organization to use proxy voting, that right cannot be denied in the bylaws." Riddick further opines, "Proxy voting is not recommended for ordinary use. It can discourage attendance, and transfers an inalienable right to another without positive assurance that the vote has not been manipulated."[4] Parliamentary Lawexpounds on this point:[77] It is used only in stock corporations where the control is in the majority of the stock, not in the majority of the stockholders. If one person gets control of fifty-one per cent of the stock he can control the corporation, electing such directors as he pleases in defiance of the hundreds or thousands of holders of the remaining stock. The laws for stock corporations are nearly always made on the theory that the object of the organization is to make money by carrying on a certain business, using capital supplied by a large number of persons whose control of the business should be in proportion to the capital they have put into the concern. The people who have furnished the majority of the capital should control the organization, and yet they may live in different parts of the country, or be traveling at the time of the annual meeting. By the system of proxy voting they can control the election of directors without attending the meetings. Nonetheless, it is common practice in conventions for a delegate to have an alternate, who is basically the same as a proxy.Demeter's Manualnotes that the alternate has all the privileges of voting, debate and participation in the proceedings to which the delegate is entitled.[75]Moreover, "if voting has for years ... been conducted ... by proxy ... 
such voting by long and continuous custom has the force of law, and the proceedings are valid".[78] Thomas E. Arend notes that U.S. laws allow proxy votes to be conducted electronically in certain situations: "The use of electronic media may be permissible for proxy voting, but such voting is generally limited to members. Given the fiduciary duties that are personal to each director, and the need for directors to deliberate to ensure properly considered decisions, proxy voting by directors is usually prohibited by statute. In contrast, a number of state nonprofit corporate statutes allow for member proxy voting and may further allow members to use electronic media to grant a proxy right to another party for member voting purposes."[79]Sturgis agrees, "Directors or board members cannot vote by proxy in their meetings, since this would mean the delegation of a discretionary legislative duty which they cannot delegate."[73] Proxy voting, even if allowed, may be limited to infrequent use if the rules governing a body specify minimum attendance requirements. For instance, bylaws may prescribe that a member can be dropped for missing three consecutive meetings.[80] The Journal of Mental Science noted the arguments raised against adopting proxy voting for the Association. These included the possibility that it would diminish attendance at meetings. The rejoinder was that people did not go there to vote; they attended the meetings for the sake of the meeting, the discussion, and the good fellowship.[81] In 2005, theLibertarian Party of Colorado, following intense debate, enacted rules allowing proxy voting.[82]A motion to limit proxies to 5 per person was defeated.[83]Some people favored requiring members attending the convention to bring a certain number of proxies, in order to encourage them to politick.[84]In 2006, the party repealed those bylaw provisions due to concerns that a small group of individuals could use it to take control of the organization.[85] Under thecommon law, shareholders had no right to cast votes by proxy in corporate meetings without special authorization. InWalker v. Johnson,[86]theCourt of Appeals for the District of Columbiaexplained that the reason was that early corporations were of a municipal, religious or charitable nature, in which the shareholder had no pecuniary interest. The normal mode of conferring corporate rights was by an issue of a charter from the crown, essentially establishing the corporation as a part of the government. Given the personal trust placed in these voters by the king, it was inappropriate for them to delegate to others. In the Pennsylvania case ofCommonwealth ex rel. Verree v. Bringhurst,[87]the court held that members of a corporation had no right to vote by proxy at a corporate election unless such right was expressly conferred by the charter or by a bylaw. The attorneys for the plaintiff argued that the common law rules had no application to trading or moneyed corporations where the relation was not personal. The court found, "The fact that it is a business corporation in no wise dispenses with the obligation of all members to assemble together, unless otherwise provided, for the exercise of a right to participate in the election of their officers." 
At least as early as the 18th century, however, clauses permitting voting by proxy were being inserted in corporate charters in England.[88] Proxy voting is commonly used in corporations for voting by members or shareholders, because it allows members who have confidence in the judgment of other members to vote for them and allows the assembly to have a quorum of votes when it is difficult for all members to attend, or there are too many members for all of them to conveniently meet and deliberate.Proxy firmscommonly advise institutional shareholders on how they should vote. Proxy solicitation firms assist in helping corral votes for a certain resolution.[89] Domini notes that in the corporate world, "Proxy ballots typically contain proposals from company management on issues of corporate governance, including capital structure, auditing, board composition, and executive compensation."[90] Proxies are essentially the corporate law equivalent ofabsentee balloting.[91]: 10–11Shareholders send in a card (called a proxy card) on which they mark their vote. The card authorizes a proxy agent to vote the shareholder's stock as directed on the card.[91]: 10–11The proxy card may specify how shares are to be voted or may simply give the proxy agent discretion to decide how the shares are to be voted.[91]: 10–11The Securities Exchange Act of 1934 transferred responsibility for regulating proxy solicitation from the Federal Trade Commission (FTC) to the SEC and gave the SEC the power to regulate the solicitation of proxies, though some of the rules the SEC has since proposed (like the universal proxy) have been controversial.[1]: 4UnderSecurities and Exchange CommissionRule 14a-3, the incumbent board of directors' first step in soliciting proxies must be the distribution to shareholders of the firm's annual report. An insurgent may independently prepare proxy cards and proxy statements, which are sent to the shareholders.[92]In 2009, the SEC proposed a new rule allowing shareholders meeting certain criteria to add nominees to the proxy statement, though this rule has been the subject of intense debate.[93]: 1 Associations ofinstitutional investorssometimes attempt to effect social change. For instance, several hundred faith-based institutional investors, such as denominations, pensions, etc. belong to the Interfaith Center on Corporate Responsibility. These organizations commonly exercise influence throughshareholder resolutions, which may spur management to action and lead to the resolutions' withdrawal before an actual vote on the resolution is taken.[94] Fiduciaries for ERISA and other pension plans are generally expected to vote proxies on behalf of these plans in a manner that maximizes the economic value for plan participants. 
In these regards, for ERISA plans, fiduciaries and advisers are very limited in the extent to which they can take social or other goals into account.[95] In the absence of his principal from the annual meeting of a business corporation, the proxy has the right to vote in all instances, but he has not the right to debate or otherwise participate in the proceedings unless he is a stockholder in that same corporation.[75] TheSecurities and Exchange Commission(SEC) has ruled that an investment adviser who exercises voting authority over his clients' proxies has a fiduciary responsibility to adopt policies and procedures reasonably designed to ensure that the adviser votes proxies in the best interests of clients, to disclose to clients information about those policies and procedures, to disclose to clients how they may obtain information on how the adviser has voted their proxies, and to keep certain records related to proxy voting.[96]This ruling has been criticized on many grounds, including the contention that it places unnecessary burdens on investment advisers and would not have prevented the majoraccounting scandalsof the early 2000s.[97]Mutual funds must report their proxy votes periodically on Form N-PX.[98] It is possible forovervotesandundervotesto occur in corporate proxy situations.[99] Even in corporate settings, proxy voting's use is generally limited to voting at the annual meeting for directors, for the ratification of acts of the directors, for enlargement or diminution of capital, and for other vital changes in the policy of the organization. These proposed changes are summarized in the circular sent to shareholders prior to the annual meeting. The stock-transfer book is closed at least ten days before the annual meeting to enable the secretary to prepare a list of stockholders and the number of shares held by each. Stock is voted as shown by the stock book when posted. All proxies are checked against this list.[77] It is possible to designate two or more persons to act as proxy by using language appointing, for instance, "A, B, C, D, and E, F, or any of them, attorneys and agents for me, irrevocable, with full power by the affirmative vote of a majority of said attorneys and agents to appoint a substitute or substitutes for and in the name and stead of me."[77] Proxy voting is said to have some anti-deliberative consequences, in that proxy holders often lack discretion about how to cast votes due to the instructions given by their principal. Thus, they cannot alter their decision based on the deliberative process of testing the strength of arguments and counter-arguments.[100] In Germany, corporate proxy voting is done through banks.[101]Proxy voting by banks has been a key feature of the connection of banks to corporate ownership in Germany since the industrialization period.[102] Indelegated voting, the proxy istransitiveand the transfer recursive. Put simply, the vote may be further delegated to the proxy's proxy, and so on. This is also called transitive proxy or delegate cascade.[103]An early proposal of delegate voting was that ofLewis Carrollin 1884.[104][105] Delegate voting is used by the Swedish local political partyDemoex. Demoex won its first seat in the city council of Vallentuna,Sweden, in 2002. The first years of activity in the party have been evaluated byMitthögskolan Universityin a paper by Karin Ottesen in 2003.[106]In Demoex, a voter can also vote directly, even if he has delegated his vote to a proxy; the direct vote overrules the proxy vote. 
It is also possible to change the proxy at any time. In 2005, in a pilot study in Pakistan,Structural Deep Democracy, SD2[107][108]was used for leadership selection in a sustainable agriculture group called Contact Youth. SD2 usesPageRankfor the processing of the transitive proxy votes, with the additional constraints of mandating at least two initial proxies per voter, and all voters are proxy candidates. More complex variants can be built on top of SD2, such as adding specialist proxies and direct votes for specific issues, but SD2, as the underlying umbrella system, mandates that generalist proxies should always be used. Delegated voting is also used in the World Parliament Experiment, and in implementations ofliquid democracy.
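The delegate-cascade mechanism described above, in which a delegation chain is followed until a direct vote is found and a direct vote overrides any delegation, can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the algorithm used by Demoex or SD2: each voter names at most one proxy, unresolved delegation cycles count as abstentions, and all names and votes are invented.

# Minimal sketch of transitive proxy ("delegate cascade") vote resolution.
# Assumptions (not from any cited system): each voter may name at most one
# proxy; a voter's own direct vote, if cast, overrides the delegation; and
# delegation cycles that never reach a direct vote are treated as abstentions.
from collections import Counter

def resolve_votes(direct_votes, delegations):
    """direct_votes: voter -> option; delegations: voter -> proxy."""
    tally = Counter()
    for voter in set(direct_votes) | set(delegations):
        current, seen = voter, set()
        # Follow the delegation chain until a direct vote or a cycle is found.
        while current not in direct_votes and current in delegations:
            if current in seen:          # delegation cycle: nobody voted
                current = None
                break
            seen.add(current)
            current = delegations[current]
        if current is not None and current in direct_votes:
            tally[direct_votes[current]] += 1
    return tally

# Example: Bob delegates to Alice; Carol delegates to Bob but also votes
# directly, so her direct vote overrules the delegation.
direct_votes = {"alice": "yes", "carol": "no", "dave": "yes"}
delegations = {"bob": "alice", "carol": "bob"}
print(resolve_votes(direct_votes, delegations))   # Counter({'yes': 3, 'no': 1})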
https://en.wikipedia.org/wiki/Proxy_voting#Delegated_voting
Thedelegate model of representationis a model of arepresentative democracy. In this model,constituentselect their representatives as delegates for theirconstituency. These delegates act only as a mouthpiece for the wishes of their constituency/state and have noautonomyfrom the constituency, only the autonomy to vote for the actual representatives of the state. This model does not afford representatives the luxury of acting on their own conscience; they are bound by animperative mandate. Essentially, the representative acts as the voice of those who are (literally) not present. Irish philosopherEdmund Burke(1729–1797) contested this model and supported the alternativetrustee model of representation.[1] The delegate model of representation is made use of in various forms ofcouncil democracyand commune democracy. Models of democratic rule making extensive use of the delegate model of representation are often labeled "delegative democracy".[2][3]However, the merging of these two terms is criticized as misleading.[4]
https://en.wikipedia.org/wiki/Delegate_model_of_representation
Thetrustee model of representationis a model of arepresentative democracy, frequently contrasted with thedelegate model of representation.[1]In this model,constituentselect their representatives as 'trustees' for theirconstituency. These 'trustees' haveautonomyto deliberate and act as they see fit, in their own conscience even if it means going against the explicit desires of their constituents. By contrast, in the delegate model, the representative is expected to act strictly in accordance with the beliefs of their constituents.[2][3] This model was formulated byEdmund Burke[2](1729–1797), an Irish MP and philosopher, who opposed the delegate model of representation. In the trustee model, Burke argued that his behavior inParliamentshould be informed by his knowledge and experience, allowing him to serve thepublic interest. Essentially, a trustee considers an issue and, after hearing all sides of the debate, exercises their own judgment in making decisions about what should be done. His unbiased opinion, his mature judgment, his enlightened conscience, he ought not to sacrifice to you, to any man, or to any set of men living. ... Your representative owes you, not his industry only, but his judgment; and he betrays, instead of serving you, if he sacrifices it to your opinion. You choose a member, indeed; but when you have chosen him, he is not a member of Bristol, but he is a member of Parliament. (Burke, 1774) He made these statements immediately after being elected, and after his colleague had spoken in favour of coercive instructions being given to representatives.[citation needed]Burkeconscientiously objectedto slavery,[4]: 6but needed to balance this againsthis electors' slave trade business.[4]: 8–9This played a minor role in decreasing support for reelecting him in Bristol, forcing him to run in Malton,[4]: 9which did not benefit from the slave trade.[4]: 19 John Stuart Millpreferred intelligent representatives.[3]: 1He stated that while all individuals have a right to be represented, not all political opinions are of equal value. He suggested a model where constituents would receive votes that increase based on each level of education past simple literacy and math.[5]
https://en.wikipedia.org/wiki/Trustee_model_of_representation
Electronic governanceore-governanceis the use ofinformation technologyto providegovernment services,information exchange, communication transactions, and integration of different stand-alone systems between government to citizen (G2C), government to business (G2B), government to government (G2G), government to employees (G2E), andback-officeprocesses and interactions within the entire governance framework.[1]Through e-governance, citizens can access government services using information technology. The government, citizens, and businesses/interest groups are the three primary target groups that can be identified in governance concepts. The goal of government-to-citizen (G2C) e-governance is to offer a variety of ICT services to citizens in an efficient and economical manner and to strengthen the relationship between government and citizens using technology. There are several methods of G2C e-governance.Two-way communicationallows citizens to instant message directly with public administrators, and cast remote electronic votes (electronic voting) and instant opinion voting. These are examples ofe-Participation. Other examples include the payment of taxes and services that can be completed online or over the phone. Mundane services such as name or address changes, applying for services or grants, or transferring existing services are more convenient and no longer have to be completed face to face.[2] G2C e-governance is unbalanced across the globe as not everyone has Internet access and computing skills, but theUnited States,European Union, andAsiaare ranked the top three in development. TheFederal Government of the United Stateshas a broad framework of G2C technology to enhance citizen access to Government information and services.benefits.govis an official US government website that informs citizens of benefits they are eligible for and provides information on how to apply for assistance. US State Governments also engage in G2C interaction through theDepartment of Transportation,Department of Public Safety,United States Department of Health and Human Services,United States Department of Education, and others.[3]As with e-governance on the global level, G2C services vary from state to state. The Digital States Survey ranks states on social measures,digital democracy,e-commerce, taxation, and revenue. The 2012 report showsMichiganandUtahin the lead andFloridaandIdahowith the lowest scores.[3]Municipal governments in the United States also use government-to-customer technology to complete transactions and inform the public. Much like states, cities are awarded for innovative technology. Government Technology's "Best of the Web 2012" named Louisville, KY, Arvada, CO, Raleigh, NC, Riverside, CA, and Austin, TX the top five G2C city portals.[4] European countries were ranked second among all geographic regions. The Single Point of Access for Citizens of Europe supports travel within Europe, and eEurope is a 1999 initiative supporting online government. Main focuses are to provide public information, allow customers to have access to basicpublic services, simplify online procedures, and promoteelectronic signatures.[3]Estoniais the first and the only country[5]in the world with e-residency which enables anyone in the world outside Estonia to access Estonian online services. One caveat of the Estonia e-residency program is that it does not give e-residents physical rights to the country. This means that unless the e-resident buys land they do not get to participate in the democratic processes. 
The benefit to e-residents is the opportunity to develop business in the digital European Union market. NeighboringLithuanialaunched a similare-Residency program. Asia is ranked third in comparison, and there are diverse G2C programs between countries.Singapore's eCitizen Portal is an organized single access point to government information and services.South Korea's Home Tax Service (HTS) provides citizens with 24/7 online services such as tax declaration.Taiwanhas top ranking G2C technology including an online motor vehicle services system, which provides 21 applications and payment services to citizens.[3]India's e-governance programs have found success in regional areas. This is likely due to the ability to meet the language and literacy differences among their constituents.India's UPI(Unified Payments Interface) has become the world's largest digital payments network; it allows users to combine multiple bank accounts into a single app and to send and receive money without sharing account numbers or other details. Government-to-Citizen is the communication link between a government and private individuals or residents. Such G2C communication most often refers to that which takes place throughInformation and Communication Technologies(ICTs), but can also includedirect mailand media campaigns. G2C can take place at the federal, state, and local levels. G2C stands in contrast to G2B, orGovernment-to-Businessnetworks. One such Federal G2C network isUSA.gov, the United States' official web portal, though there are many other examples from governments around the world.[6] A full switch to government-to-citizen e-governance will cost a large amount of money in development and implementation.[2]In addition, government agencies do not always engage citizens in the development of their e-gov services or accept feedback. Customers identified the following barriers to government-to-customer e-governance: not everyone has Internet access, especially in rural or low-income areas; G2C technology can be problematic for citizens who lack computing skills; some G2C sites have technology requirements (such as browser requirements and plug-ins) that won't allow access to certain services; language barriers; the necessity for an e-mail address to access certain services; and a lack of privacy.[7] The e-governance-to-employee partnership (G2E) is one of the four primary interactions in the delivery model of e-governance. It is the relationship between online tools, sources, and articles that help employees maintain communication with the government and their own companies. The e-governance relationship with employees brings new learning technology together in one simple place: the computer. Documents can now be stored and shared with other colleagues online.[8] E-governance makes it possible for employees to become paperless and makes it easy for employees to send important documents back and forth to colleagues all over the world instead of having to print out these records or fax them.[9]G2Eservices also include software for maintaining personal information and records of employees. G2E expansion offers a number of benefits. Government-to-employees (abbreviated G2E) is the online interaction through instantaneous communication tools betweengovernment unitsand their employees. 
G2E is one out of the four primary delivery models ofe-Government.[11][12][13] G2E is an effective way to providee-learningto employees, bring them together and promote knowledge sharing among them.[14]It also gives employees the possibility of accessing information in regard to compensation and benefits policies, training and learning opportunities and civil rights laws.[11][14][15]G2E services also include software for maintaining personal information and records of employees.[15] G2E is adopted in many countries including the United States, Hong Kong, and New Zealand.[16] Since the rise of e-commerce and e-products at the start of the 1990s, there has been rampant integration of electronic forms into government processes. Governments have tried to use the efficiencies of these techniques to cut down on waste. E-government is a fairly broad subject, but all of its forms relate to how services and representation are delivered and how they are implemented. Many governments around the world have gradually turned to information technologies (IT) in an effort to keep up with today's demands. Historically, many governments in this sphere have only been reactive, but recently there has been a more proactive approach to developing comparable services such ase-commerceande-business.[17] Previously, the structure emulated private-sector business techniques; recently that has changed as e-government begins to make its own plan. Not only does e-government introduce a new form of record keeping, but it also continues to become more interactive to better the process of delivering services and promoting constituency participation. The framework of such an organization is now expected to increase more than ever by becoming efficient and reducing the time it takes to complete an objective. Some examples include paying utilities, tickets, and applying for permits. So far, the biggest concern is accessibility to Internet technologies for the average citizen. In an effort to help, administrations are now trying to aid those who do not have the skills to fully participate in this new medium of governance, especially as e-government progresses toward e-governance. An overhaul of the structure is now required as every pre-existing sub-entity must now merge under one concept of e-government. As a result, public policy has also seen changes due to the emergence of constituent participation and the Internet. Many governments, such as Canada's, have begun to invest in developing new mediums of communication of issues and information through virtual communication and participation. In practice, this has led to several responses and adaptations by interest groups, activists, and lobbying groups. This new medium has changed the way the polis interacts with government. The purpose of including e-governance in government is to make operations more efficient in various respects, whether by reducing costs through less paper clutter and lower staffing costs or by easing communication with private citizens and other public bodies. E-government brings many advantages into play, such as facilitating information delivery, application processing and renewal for both businesses and private citizens, and participation by the constituency. There are both internal and external advantages to the emergence of IT in government, though not all municipalities are alike in size and participation. 
In theory, there are currently four major levels of e-government in municipal governments, along with five degrees of technical integration and interaction of users.[18] The adoption of e-government in municipalities evokes greater innovation in e-governance by being specialized and localized. The level of success and feedback depends greatly on the city size and government type. Acouncil-manager governmentmunicipality typically works best with this method, as opposed tomayor-council governmentpositions, which tend to be more political and therefore face greater barriers to its application. Council-manager governments are also more inclined to be effective here by bringing innovation and reinvention of governance to e-governance. The International City/County Management Association and Public Technology Inc. have conducted surveys on the effectiveness of this method. The results indicate that most governments are still in the primary stages (stage 1 or 2), which revolve around public service requests. Though integration is now accelerating, there has been little research on its progression from e-government to e-governance; we can only theorize that it is still in its early stages. Government-to-Government (abbreviated G2G) is the online non-commercial interaction between Government organizations, departments, and authorities and other Government organizations, departments, and authorities. Its use is common in theUK, along withG2C, the online non-commercial interaction of local and central Government and private individuals, andG2B, the online non-commercial interaction of local and central Government and the commercial business sector. G2G systems generally come in one of two types. The strategic objective of e-governance, or in this case G2G, is to support and simplify governance for government, citizens, and businesses. The use of ICT can connect all parties and support processes and activities. Other objectives are to make government administration more transparent, speedy and accountable while addressing the society's needs and expectations through efficient public services and effective interaction between the people, businesses, and government.[19] Within each of those interaction domains, four sorts of activities take place:[20][21] pushing data over the internet (e.g. regulatory services, general holidays, public hearing schedules, issue briefs, and notifications); two-way communication between one governmental department and another, in which users can engage in dialogue with agencies and post issues, comments, or requests to the agency; conducting transactions (e.g. lodging tax returns, applying for services and grants); and governance, e.g. enabling the national transition from passive information access to individual participation. In the field of networking, the Government Secure Intranet (GSi) puts in place a secure link between central government departments. It is an IP-basedvirtual private networkbased on broadband technology introduced in April 1998 and further upgraded in February 2004. Among other things, it offers a variety of advanced services including file transfer and search facilities, directory services, email exchange facilities (both between network members and over the Internet) as well as voice and video services. 
An additional network is currently also under development: the Public Sector Network (PSN) will be the network to interconnect public authorities (including departments and agencies in England; devolved administrations and local governments) and facilitate in particular sharing of information and services among each other.[22] The objective of G2B is to reduce difficulties for business, provide immediate information and enable digital communication by e-business (XML). In addition, the government should re-use data already submitted in reports and take advantage of commercial electronic transaction protocols.[23]Government services are concentrated on the following groups: human services; community services; judicial services; transport services; land resources; business services; financial services and others.[24]Each of these groups forms a cluster of related services to the enterprise. E-government reduces costs and lowers the barriers for companies interacting with the government. The interaction between the government and businesses reduces the time required for businesses to conduct a transaction. For instance, there is no need to commute to a government agency's office, and transactions may be conducted online instantly with the click of a mouse. This significantly reduces transaction time for the government and businesses alike. E-government provides businesses with a greater amount of information, and it also makes that information clearer. A key factor in business success is the ability to plan and forecast using data. Governments collect a large amount of economic, demographic and other trend data. E-government makes this data more accessible to companies, which may increase their chances of economic prosperity. In addition, e-government can help businesses navigate government regulations by providing an intuitive site organization with a wealth of useful applications. The electronic filing of applications for environmental permits gives an example of this. Companies often do not know how, when, and for what they must apply. Failure to comply with environmental regulations reaches up to 70%, a staggering figure[25]most likely due to confusion about the requirements, rather than the product of willful disregard of the law.[26] The government should be concerned that not all people are able to access the internet to use online government services. Network reliability, as well as information on government bodies, can influence public opinion and prejudice hidden agendas. There are many considerations in designing and implementing e-government, including the potential impact of disintermediation on government and citizens, the impact on economic, social and political factors, vulnerability to cyber attacks, and disturbances to the status quo in these areas.[27] G2B strengthens the connection between government and businesses. As e-government develops and becomes more sophisticated, people will be required to interact with it in more areas. This may result in a lack of privacy for businesses as the government obtains more and more of their information. In the worst case, with so much information transferred electronically between government and business, a system resembling atotalitarianone could be developed. 
As the government can access more information, the loss of privacy could be a cost.[28][29] Government sites do not always consider the "potential to reach many users including those who live in remote areas, are homebound, have low literacy levels, exist on poverty line incomes."[30] The main goal of government-to-business is to increase productivity by giving businesses more access to information in a more organized manner while lowering the cost of doing business, cutting "red tape", saving time, reducing operational costs and creating a more transparent business environment when dealing with government. In conclusion, the overall benefit of e-governance when dealing with business is that it enables businesses to perform more efficiently. E-governance faces numerous challenges the world over. The traditional approach to introducing e-governance is not sufficient, owing to the complexity of the wide variety of application architectures, from both legacy and modern systems, that need to be brought into the purview of e-governance.[35]These challenges arise from administrative, legal, institutional and technological factors.[36]They include security drawbacks such as spoofing, tampering, repudiation, disclosure, elevation of privilege, denial of service and other cyber crimes. Other problems relate to implementation, such as funding, management of change, privacy, authentication, delivery of services, standardization, technology issues and the use of local languages.
https://en.wikipedia.org/wiki/E-governance
Anindirect electionorhierarchical voting,[1]is anelectionin which voters do not choose directly amongcandidatesor parties for an office (direct voting system), but elect people who in turn choose candidates or parties. It is one of the oldest forms of elections and is used by many countries forheads of state(such aspresidents),cabinets,heads of government(such asprime ministers), and/orupper houses. It is also used for somesupranational legislatures. Positions that are indirectly elected may be chosen by a permanent body (such as aparliament) or by a special body convened solely for that purpose (such as anelectoral college). In nearly all cases the body thatcontrolsthe federalexecutive branch(such as acabinet) is elected indirectly.[citation needed]This includes the cabinets of mostparliamentary systems; members of the public elect the parliamentarians, who then elect the cabinet. Upper houses, especially in federal republics, are often indirectly elected, either by the correspondinglower houseor cabinet. An election can be partially indirect, for example in the case ofindirect single transferable voting, where only eliminated candidates select other candidates to transfer their vote share to. Similarly, supranational legislatures can be indirectly elected by constituent countries' legislatures orexecutive governments. A head of state is the official leader and representative of a country.[2]The head of state position can vary from ceremonialfigureheadwith limited power to powerful leader depending on the government structure and historical legacy of the country.[3]For instance, in some cases heads of state inherit the position through amonarchywhereas others are indirectly or directly elected such as presidents.[4]Several examples are included below. ThePresident of the United Statesis elected indirectly. In aUS presidential election, eligible members of the public vote for theelectorsof anElectoral College, who have previously pledged publicly to support a particular presidential candidate.[5]When the Electoral College sits, soon after the election, it formally elects the candidate that has won a majority of the members of the Electoral College. Members of the federal cabinet, including the vice president, are in practice nominated by the president, and are thus elected indirectly.[6]The Electoral College is a controversial issue in U.S. 
politics, especially following presidential elections when voting is polarized geographically in such a way that the electoral collegeelects a candidate who did not win an absolute majority of the popular vote.[7]TheNational Popular Vote Interstate Compact, if enacted, would effectively replace the indirect election via the Electoral College with ade factoplurality-based direct election.[8] TheConstitution of the People's Republic of Chinaspecifies a system of indirect democracy.[9]TheNational People's Congresselects thepresident, also known as the state chairman, who serves asstate representative.[10]The power of the presidency is largely ceremonial and has no real power in China'spolitical system, the vast majority of power stems from the president's position asGeneral Secretary of the Chinese Communist Partyandcommander-in-chiefof themilitary.[11] The president of theEuropean Commissionis nominated by theEuropean Counciland confirmed or denied by the directly electedEuropean Parliament(seeElections to the European Parliament).[12] Republics withparliamentary systemsusually elect theirhead of stateindirectly (e.g.Germany,Italy,Estonia,Latvia,Malta,Hungary,India,Israel,Bangladesh).[13]Several parliamentary republics, such asIreland,Austria,Croatia,Bulgariaand theCzech Republic, operate using a semi-presidential system with a directly elected president distinct from the prime minister.[14] A head of government is in charge of the daily business of government and overseeing central government institutions. In presidential systems the president is the head of government and head of state. In parliamentary systems the head of government is usually the leader of the party with the most seats in the legislature.[15]Several examples of heads of government who are chosen through indirect elections are summarized below. The most prominent position in parliamentary democracies is the prime ministership.[16] Under theWestminster system, named after and typified by theparliament of the United Kingdom, a prime minister (or first minister, premier, or chief minister) is the person that can command the largest coalition of supporters in parliament. In almost all cases, the prime minister is the leader of a political party (orcoalition) that has a majority in the parliament, or thelower house(such as theHouse of Commons), or in the situation that no one party has a majority then the largest party or a coalition of smaller parties may attempt to form a minority government. The prime minister is thus indirectly elected as political parties elect their own leader through internal democratic process, while the general public choose from amongst the local candidates of the various political parties or independents.[17] The Westminster model continues to be used in a number ofCommonwealthcountries includingAustralia,Canada,New Zealand,Singaporeand theUnited Kingdom.[18]Additionally many nations colonized by the British Empire inherited the Westminster model following their independence.[19] InSpain, theCongress of Deputiesvotes on amotion of confidenceof theking'snominee (customarilythe party leader whose party controls the Congress) and the nominee'spolitical manifesto, an example of an indirect election of theprime minister of Spain.[20] InGermany, thefederal chancellor- the most powerful position on the federal level - is elected indirectly by theBundestag, which in turn is elected by the population.[21]Thefederal president, the head of state, proposes candidates for the chancellor's office. 
Although this has never happened, the Bundestag may in theory also choose to elect another person into office, which the president has to accept.[22] Some countries have nonpartisan heads of government who are appointed by the president, such as thePrime Minister of Singapore.[23] Members of theGerman Bundesratare appointed (delegated) by theLandtagof the variousstates. InFrance, election to the upper house of Parliament, theSénat, is indirect. Electors (called "Grands électeurs") are locally elected representatives. Members of the IndianRajya Sabha(upper house of parliament) are largely elected directly by theVidhan Sabha(legislative assembly) of the variousstates and Union territories; some are appointed by thepresident. Indirect single transferable votingis used to elect some members of theSenate of Pakistan.[24] TheNational People's Congressof China is elected by lower level of thesystem of people's congress.[25] Some examples of indirectly electedsupranational legislaturesinclude: the parliamentary assemblies of theCouncil of Europe,OSCEandNATO– in all of these cases, voters elect national parliamentarians, who in turn elect some of their own members to the assembly. The same applies to bodies formed by representatives chosen by a national government, e.g. theUnited Nations General Assembly– assuming the national governments in question aredemocratically electedin the first place. TheControl YuanofChina, formerly a parliamentary chamber, was elected by its respective legislatures across the country: five from each province, two from each directly administered municipality, eight from Mongolia (by 1948 only the Inner Mongolian provinces were represented), eight from Tibet and eight from the overseas Chinese communities. As originally envisioned both the President and Vice President of the Control Yuan were to be elected by and from the members like the speaker of many other parliamentary bodies worldwide. The Control Yuan became a sole auditory body in Taiwan in 1993 afterdemocratization. Members of theUnited States Senatewere elected by theLegislatureof the variousstatesuntil ratification of theSeventeenth Amendment to the United States Constitutionin 1913. Since that time they have been elected by direct popular vote. Indirect elections can have a lower politicalaccountabilityandresponsivenesscompared todirect elections.[26]
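As a worked illustration of the two-stage mechanism described above, the sketch below tallies a hypothetical indirect election. It assumes, purely for illustration, winner-take-all slates of pledged electors in every state and no faithless electors; the state names, elector counts, and vote totals are invented and do not describe any real election.

# Minimal sketch of a two-stage (indirect) election, loosely modelled on the
# description above. Assumptions for illustration only: winner-take-all in
# every state and no faithless electors; all figures are made up.
from collections import Counter

def indirect_election(state_popular_votes, electors_per_state):
    """state_popular_votes: state -> {candidate: popular votes};
       electors_per_state: state -> number of electoral votes."""
    electoral = Counter()
    for state, votes in state_popular_votes.items():
        slate_winner = max(votes, key=votes.get)              # stage 1: voters choose a pledged slate of electors
        electoral[slate_winner] += electors_per_state[state]  # stage 2: that slate casts the state's electoral votes
    return electoral

popular = {"StateA": {"X": 60, "Y": 40},
           "StateB": {"X": 45, "Y": 55},
           "StateC": {"X": 51, "Y": 49}}
electors = {"StateA": 3, "StateB": 10, "StateC": 4}
print(indirect_election(popular, electors))
# Counter({'Y': 10, 'X': 7}) -- Y wins the second stage even though X leads
# the nationwide popular vote 156 to 144, illustrating the controversy noted above.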
https://en.wikipedia.org/wiki/Indirect_election
Social bookmarkingis an online service which allows users to add, annotate, edit, and sharebookmarksofweb documents.[1][2]Many online bookmark management services have launched since 1996;Delicious, founded in 2003, popularized the terms "social bookmarking" and "tagging". Tagging is a significant feature of social bookmarking systems, allowing users to organize their bookmarks and develop shared vocabularies known asfolksonomies. Unlikefile sharing, social bookmarking does not save theresourcesthemselves, merely bookmarks thatreferencethem, i.e. a link to the bookmarked page. Descriptions may be added to these bookmarks in the form ofmetadata, so users may understand the content of the resource without first needing to download it for themselves. Such descriptions may be free text comments, votes in favor of or against its quality, ortagsthat collectively or collaboratively become afolksonomy. Folksonomy is also calledsocial tagging, "the process by which many users add metadata in the form of keywords to shared content".[3] In a social bookmarking system, users save links toweb pagesthat they want to remember and/or share. These bookmarks are usually public, and can be saved privately, shared only with specified people or groups, shared only inside certainnetworks, or another combination of public and private domains. The allowed people can usually view these bookmarks chronologically, by category or tags, or via a search engine. Most social bookmark services encourage users to organize their bookmarks with informaltagsinstead of the traditional browser-based system of folders, although some services feature categories/folders or a combination of folders and tags. They also enable viewing bookmarks associated with a chosen tag, and include information about the number of users who have bookmarked them. Some social bookmarking services also draw inferences from the relationship of tags to create clusters of tags or bookmarks. Many social bookmarking services provideweb feedsfor their lists of bookmarks, including lists organized by tags. This allows subscribers to become aware of new bookmarks as they are saved, shared, and tagged by other users. Social bookmarking can also help to promote a site through networking with other social bookmarkers and collaboration with them. As these services have matured and grown more popular, they have added extra features such as ratings and comments on bookmarks, the ability to import and export bookmarks from browsers, emailing of bookmarks,web annotation, and groups or othersocial networkfeatures.[4] The concept of shared online bookmarks is believed to have originated around April 1996 with the launch of itList,[5]the features of which included public and private bookmarks.[6]Another system known as WebTagger, developed by a team at the Computational Sciences Division atNASA, was presented at the Sixth International WWW Conference held in Santa Clara on April 7–11, 1997. 
WebTagger included several advanced social bookmarking features including the ability to collaboratively share and organize bookmarks using a web-based interface, provide comments and organize them according to categories.[7]Within the next three years, online bookmark services became competitive, with venture-backed companies such as Backflip, Blink, Clip2, ClickMarks, HotLinks, and others entering the market.[8][9]They provided folders for organizing bookmarks, and some services automatically sorted bookmarks into folders (with varying degrees of accuracy).[10]Blink included browser buttons for saving bookmarks;[11]Backflip enabled users to email their bookmarks to others[12]and displayed "Backflip this page" buttons on partner websites.[13]Lacking viable revenue models, this early generation of social bookmarking companies failed as thedot-com bubbleburst—Backflip closed citing "economic woes at the start of the 21st century".[14]In 2005, the founder of Blink said, "I don't think it was that we were 'too early' or that we got killed when the bubble burst. I believe it all came down to product design, and to some very slight differences in approach."[15] Founded in 2003,Delicious(then called del.icio.us) pioneeredtagging[16]and coined the termsocial bookmarking. Frassle, a blogging system released in November 2003, included social bookmarking elements.[17]In 2004, as Delicious began to take off, similar servicesFurl,Simpy, Spurl.net, and unalog were released,[17]along withCiteULikeandConnotea(sometimes calledsocial citationservices) and the related recommendation systemStumbleupon. Also in 2004, the social photo sharing websiteFlickrwas released, and inspired by Delicious it soon added a tagging feature.[18]In 2006,Ma.gnolia(later renamed toGnolia),Blue Dot(later renamed toFaves),Mister Wong, andDiigoentered the bookmarking field, andConnectbeamincluded a social bookmarking and tagging service aimed at businesses and enterprises. In 2007,IBMreleased itsLotus Connectionsproduct.[19]In 2009,Pinboardlaunched as a bookmarking service with paid accounts.[20]As of 2012, Furl, Simpy, Spurl.net, Gnolia, Faves, and Connectbeam are no longer active services. Diggwas founded in 2004[21]with a related system for sharing and rankingsocial news, followed by competitorsRedditin 2005[22]andNewsvinein 2006.[23]As of January 20, 2016, Reddit is now the 32nd highest ranking in the world and Digg is no longer a social bookmarking platform and has dropped out of the top 1000. A simple form of shared vocabularies does emerge in social bookmarking systems (folksonomy). Collaborative tagging exhibits a form ofcomplex systems(orself-organizing) dynamics.[24]Although there is no central controlled vocabulary to constrain the actions of individual users, the distributions of tags that describe different resources have been shown to converge over time to stablepower lawdistributions.[24]Once such stable distributions form, thecorrelationsbetween different tags can be examined to construct simple folksonomy graphs, which can be efficiently partitioned to obtain a form of community or shared vocabularies.[25]While such vocabularies suffer from some of the informality problems described below, they can be seen as emerging from the decentralized actions of many users, as a form ofcrowdsourcing. 
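As a rough illustration of how tag correlations can be turned into a simple folksonomy graph, the sketch below counts tag frequencies and tag co-occurrence across a handful of bookmarks. It is a minimal sketch only: the bookmark data is invented, and real systems would additionally normalise tags and weight or prune edges before partitioning the graph into shared vocabularies.

# Minimal sketch of building a tag co-occurrence ("folksonomy") graph from
# a set of bookmarks. The bookmark data is invented for illustration.
from collections import Counter
from itertools import combinations

bookmarks = [
    {"url": "https://example.org/a", "tags": {"cheese", "recipe", "food"}},
    {"url": "https://example.org/b", "tags": {"cheddar", "cheese", "food"}},
    {"url": "https://example.org/c", "tags": {"recipe", "food"}},
]

tag_counts = Counter()      # how often each tag is used overall
cooccurrence = Counter()    # edge weights of the folksonomy graph

for bm in bookmarks:
    tag_counts.update(bm["tags"])
    # every pair of tags on the same bookmark adds weight to one graph edge
    for a, b in combinations(sorted(bm["tags"]), 2):
        cooccurrence[(a, b)] += 1

print(tag_counts.most_common(3))         # e.g. [('food', 3), ('cheese', 2), ('recipe', 2)]
print(cooccurrence[("cheese", "food")])  # 2 -- 'cheese' and 'food' appear together twice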
From the point of view of search data, there are drawbacks to such tag-based systems: no standard set of keywords (i.e., afolksonomyinstead of acontrolled vocabulary), no standard for the structure of such tags (e.g., singular vs. plural, capitalization), mistagging due to spelling errors, tags that can have more than one meaning, unclear tags due tosynonym/antonymconfusion, unorthodox and personalized tag schemata from some users, and no mechanism for users to indicatehierarchicalrelationships between tags (e.g., a site might be labeled as bothcheeseandcheddar, with no mechanism that might indicate thatcheddaris a refinement or sub-class ofcheeses). For individual users, social bookmarking can be useful as a way to access a consolidated set of bookmarks from various computers, organize large numbers of bookmarks, and share bookmarks with contacts. Institutions, including businesses, libraries, and universities have used social bookmarking as a way to increase information sharing among members. Social bookmarking has also been used to improve web search.[26][27] Unlike social bookmarking, where individuals can bookmark their favorite web resources to share those with the public on the internet,enterprise bookmarkingis for knowledge management and sharing it within a specific network of an organization.Enterprise bookmarkingis used by the users of an organization to tag, manage and share bookmarks on the web as well as the knowledge base of the organization's databases andfile servers. Libraries have found social bookmarking to be useful as an easy way to provide lists of informative links to patrons.[28]The University of Pennsylvania (UP) was one of the first library adopters with its PennTags.[29][30] Social bookmarking tools are an emerging educational technology that has been drawing more of educators' attention over the last several years. This technology offers knowledge sharing solutions and a social platform for interactions and discussions. These tools enable users to collaboratively underline, highlight, and annotate an electronic text, in addition to providing a mechanism to write additional comments on the margins of the electronic document.[31]For example, Delicious could be used in a course to provide an inexpensive answer to the question of rising course materials costs.[32]RISAL is another repository used to support teaching and learning at the university level.[33] Social bookmarking tools have several purposes in an academic setting including: organizing and categorizing web pages for efficient retrieval; keeping tagged pages accessible from any networked computer; sharing needed or desired resources with other users; accessing tagged pages with RSS feeds, cell phones and PDAs for increased mobility; and giving students another way to collaborate with each other and make collective discoveries.[34] One requirement unique to education is that resources often have one URL that describes the resource, with another for the actual learning content. XtLearn.net[35]allows bookmarking of both in one step,[36]the relevant URL being delivered to either tutors or learners, depending on the delivery context. 
It also demonstrates integration with traditionallearning content repositories, such asJorum,NLN,IntuteandTES.[37] In comparison to search engines, a social bookmarking system has several advantages over traditional automated resource location and classification software, such assearch enginespiders. All tag-based classification of Internet resources (such as web sites) is done by human beings, who understand the content of the resource, as opposed to software, which algorithmically attempts to determine the meaning and quality of a resource. Also, people can find andbookmark web pagesthat have not yet been noticed or indexed by web spiders.[38]Additionally, a social bookmarking system can rank a resource based on how many times it has been bookmarked by users, which may be a more usefulmetricforend-usersthan systems that rank resources based on the number of external links pointing to it. However, both types of ranking are vulnerable to fraud (seeGaming the system), and both need technical countermeasures to try to deal with this. Social bookmarking is susceptible to corruption and collusion.[17]Due to its popularity, some have begun to use it as a tool alongsidesearch engine optimizationto make their website more visible. The more often a web page is submitted and tagged, the better chance it has of being found.Spammershave started bookmarking the same web page multiple times and/or tagging each page of their web site using a lot of popular tags, obligating developers to constantly adjust their security system to overcome abuses.[39][40]Furthermore, since social bookmarking generatesbacklinks, social bookmark link generating services are used by somewebmastersin an attempt to improve their websites' rankings insearch engine results pages.[41][42]
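Some of the tagging inconsistencies listed earlier (capitalization, singular versus plural forms) can be partially mitigated by normalising tags before they are indexed. The sketch below is a crude, hypothetical heuristic rather than the behaviour of any particular bookmarking service; synonym and hierarchy problems would still require a curated vocabulary.

# Minimal sketch of tag normalisation aimed at the inconsistencies noted
# above (capitalisation, singular vs plural). The plural handling is a crude
# heuristic for illustration only.
def normalise_tag(tag: str) -> str:
    tag = tag.strip().lower()
    # naive singularisation: strip a trailing "s" ("cheeses" -> "cheese");
    # this obviously mishandles words like "glass", hence only a sketch
    if tag.endswith("s") and len(tag) > 3:
        tag = tag[:-1]
    return tag

print(normalise_tag("Cheeses"))   # "cheese"
print(normalise_tag("Cheddar"))   # "cheddar"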
https://en.wikipedia.org/wiki/Social_bookmarking
Knowledge management(KM) is the set of procedures for producing, disseminating, utilizing, and overseeing an organization's knowledge and data. It alludes to a multidisciplinary strategy that maximizes knowledge utilization to accomplish organizational goals. Courses in business administration, information systems, management, libraries, and information science are all part of knowledge management, a discipline that has been around since 1991. Information and media, computer science, public health, and public policy are some of the other disciplines that may contribute to KM research. Numerous academic institutions provide master's degrees specifically focused on knowledge management. As a component of their IT, human resource management, or business strategy departments, many large corporations, government agencies, and nonprofit organizations have resources devoted to internal knowledge management initiatives. These organizations receive KM guidance from a number of consulting firms. Organizational goals including enhanced performance, competitive advantage, innovation, sharing of lessons learned, integration, and ongoing organizational improvement are usually the focus of knowledge management initiatives. These initiatives are similar to organizational learning, but they can be differentiated by their increased emphasis on knowledge management as a strategic asset and information sharing. Organizational learning is facilitated by knowledge management. The setting ofsupply chainmay be the most challenging situation for knowledge management since it involves several businesses without a hierarchy or ownership tie; some authors refer to this type of knowledge as transorganizational or interorganizational knowledge.industry 4.0(or4th industrial revolution) anddigital transformationalso add to that complexity, as new issues arise from the volume and speed of information flows and knowledge generation. Knowledge management efforts have a long history, including on-the-job discussions, formalapprenticeship,discussion forums, corporate libraries, professional training, and mentoring programs.[1][2]With increased use of computers in the second half of the 20th century, specific adaptations of technologies such asknowledge bases,expert systems,information repositories, groupdecision support systems,intranets, andcomputer-supported cooperative workhave been introduced to further enhance such efforts.[1] In 1999, the termpersonal knowledge managementwas introduced; it refers to the management of knowledge at the individual level.[3] In the enterprise, early collections of case studies recognised the importance of knowledge management dimensions of strategy,processandmeasurement.[4][5]Key lessons learned include people and the cultural norms which influence their behaviors are the most critical resources for successful knowledge creation, dissemination and application; cognitive, social and organisational learning processes are essential to the success of a knowledge management strategy; and measurement,benchmarkingand incentives are essential to accelerate the learning process and to drive cultural change.[5]In short, knowledge management programs can yield impressive benefits to individuals and organisations if they are purposeful, concrete and action-orientated. 
TheISO 9001:2015 quality management standardreleased in September 2015 introduced a specification for 'organizational knowledge' as a complementary aspect of quality management within an organisation.[6] KM emerged as a scientific discipline in the early 1990s.[7]It was initially supported by individual practitioners, whenSkandiahired Leif Edvinsson of Sweden as the world's firstchief knowledge officer(CKO).[8]Hubert Saint-Onge (formerly ofCIBC, Canada) started investigating KM long before that.[1]The objective of CKOs is to manage and maximise the intangible assets of their organizations.[1]Gradually, CKOs became interested in practical and theoretical aspects of KM, and the new research field was formed.[9]The KM idea has been taken up by academics, such asIkujiro Nonaka(Hitotsubashi University), Hirotaka Takeuchi (Hitotsubashi University),Thomas H. Davenport(Babson College) and Baruch Lev (New York University).[10][11] In 2001,Thomas A. Stewart, former editor atFortunemagazine and subsequently the editor ofHarvard Business Review, published a cover story highlighting the importance of intellectual capital in organizations.[12]The KM discipline has been gradually moving towards academic maturity.[1]First, there is a trend toward higher cooperation among academics; single-author publications are less common. Second, the role of practitioners has changed.[9]Their contribution to academic research declined from 30% of overall contributions up to 2002, to only 10% by 2009.[13]Third, the number of academic knowledge management journals has been steadily growing, currently reaching 27 outlets.[14][15] Multiple KM disciplines exist; approaches vary by author and school.[9][16]As the discipline matured, academic debates increased regardingtheoryand practice. Regardless of theschool of thought, core components of KM roughly include people/culture, processes/structure and technology. The details depend on theperspective.[22]The practical relevance of academic research in KM has been questioned,[29]withaction researchsuggested as having more relevance[30]and a need to translate the findings presented in academic journals into practice.[4] Differentframeworksfor distinguishing between different 'types of' knowledge exist.[2]One proposed framework for categorising the dimensions of knowledge distinguishestacit knowledgeandexplicit knowledge.[26]Tacit knowledge represents internalised knowledge that an individual may not be consciously aware of, such as how to accomplish particular tasks. 
At the opposite end of the spectrum, explicit knowledge represents knowledge that the individual holds consciously in mental focus, in a form that can easily be communicated to others.[9][31] Ikujiro Nonaka proposed a model (SECI, for Socialisation, Externalisation, Combination, Internalisation) which considers a spiraling interaction between explicit knowledge and tacit knowledge.[32] In this model, knowledge follows a cycle in which implicit knowledge is 'extracted' to become explicit knowledge, and explicit knowledge is 're-internalised' into implicit knowledge.[32] Hayes and Walsham (2003) describe two different perspectives on knowledge and knowledge management: a content perspective and a relational perspective.[33] The content perspective suggests that knowledge is easily stored because it may be codified, while the relational perspective recognises the contextual and relational aspects of knowledge which can make knowledge difficult to share outside the specific context in which it is developed.[33] Early research suggested that KM needs to convert internalised tacit knowledge into explicit knowledge in order to share it, and that the same effort must permit individuals to internalise and make personally meaningful any codified knowledge retrieved from the KM effort.[19][34] Subsequent research suggested that a distinction between tacit knowledge and explicit knowledge represented an oversimplification and that the notion of explicit knowledge is self-contradictory.[3] Specifically, for knowledge to be made explicit, it must be translated into information (i.e., symbols outside our heads).[3][35] More recently, together with Georg von Krogh and Sven Voelpel, Nonaka returned to his earlier work in an attempt to move the debate about knowledge conversion forward.[36][37] A second proposed framework for categorising knowledge dimensions distinguishes embedded knowledge of a system outside a human individual (e.g., an information system may have knowledge embedded into its design) from embodied knowledge representing a learned capability of a human body's nervous and endocrine systems.[38] A third proposed framework distinguishes between the exploratory creation of "new knowledge" (i.e., innovation) vs.
the transfer or exploitation of "established knowledge" within a group, organisation, or community.[33][39] Collaborative environments such as communities of practice or the use of social computing tools can be used for both knowledge creation and transfer.[39] Knowledge may be accessed at three stages: before, during, or after KM-related activities.[25] Organisations have tried knowledge capture incentives, including making content submission mandatory and incorporating rewards into performance measurement plans.[40] Considerable controversy exists over whether such incentives work, and no consensus has emerged.[41] One strategy for KM involves actively managing knowledge (push strategy).[41][42] In such an instance, individuals strive to explicitly encode their knowledge into a shared knowledge repository, such as a database, as well as retrieving knowledge they need that other individuals have provided (codification).[42] (A minimal illustrative sketch of such a repository appears at the end of this extract.) Another strategy involves individuals making knowledge requests of experts associated with a particular subject on an ad hoc basis (pull strategy).[41][42] In such an instance, the expert individual(s) provide insights to the requester (personalisation).[26] In strategic knowledge management, the form of the knowledge and the activities used to share it determine the choice between codification and personalisation.[43] The form of the knowledge means that it is either tacit or explicit: data and information can be considered explicit, while know-how can be considered tacit.[44] Hansen et al. defined the two strategies (codification and personalisation).[45] Codification is a system-oriented KM strategy for managing explicit knowledge in line with organizational objectives.[46] The codification strategy is document-centred: knowledge is mainly codified using a "people-to-document" method. Codification relies on an information infrastructure in which explicit knowledge is carefully codified and stored.[45] Codification focuses on collecting and storing codified knowledge in electronic databases to make it accessible.[47] Codification can therefore refer to both tacit and explicit knowledge.[48] In contrast, personalisation encourages individuals to share their knowledge directly.[47] Personalisation is a human-oriented KM strategy whose aim is to improve flows of tacit knowledge through networking and integration, supporting knowledge sharing and creation.[46] Information technology plays a less important role here, as it only facilitates communication and knowledge sharing. Generic knowledge strategies include the knowledge acquisition strategy, knowledge exploitation strategy, knowledge exploration strategy, and knowledge sharing strategy. These strategies aim to help organisations increase their knowledge and competitive advantage.[49] A range of other knowledge management strategies and instruments is also available to companies.[41][20][26] Multiple motivations lead organisations to undertake KM,[31] and typical considerations overlap with the organizational goals noted earlier, such as improved performance, competitive advantage, and innovation.[26][53] Knowledge management (KM) technology can be categorised into several groups, and these categories overlap. Workflow, for example, is a significant aspect of content or document management systems, most of which have tools for developing enterprise portals.[41][55] Proprietary KM technology products such as HCL Notes (previously Lotus Notes) defined proprietary formats for email, documents, forms, etc.
The Internet drove most vendors to adopt Internet formats. Open-source and freeware tools for the creation of blogs and wikis now enable capabilities that used to require expensive commercial tools.[30][56] KM is driving the adoption of tools that enable organisations to work at the semantic level,[57] as part of the Semantic Web.[58] Some commentators have argued that after many years the Semantic Web has failed to see widespread adoption,[59][60][61] while other commentators have argued that it has been a success.[62] Just like knowledge transfer and knowledge sharing, the term "knowledge barriers" is not uniformly defined and differs in its meaning depending on the author.[63] Knowledge barriers can be associated with high costs for both companies and individuals.[64][65][66] The term appears to have been used from at least three different perspectives in the literature:[63] 1) missing knowledge about something as a result of barriers to the sharing or transfer of knowledge; 2) insufficient knowledge resulting from limited education in a certain field or on a certain issue; 3) the perceptual system of an individual or group lacks adequate contact points or cannot fit incoming information in order to use it and transform it into knowledge. Knowledge retention is part of knowledge management; it helps convert tacit knowledge into an explicit form. It is a complex process which aims to reduce knowledge loss in the organization.[67] Knowledge retention is needed when expert knowledge workers leave the organization after a long career.[68] Retaining knowledge prevents the loss of intellectual capital.[69] According to DeLong (2004),[70] knowledge retention strategies can be divided into four main categories. Knowledge retention projects are usually introduced in three stages: decision making, planning and implementation. Researchers differ on the terms used for the stages: for example, Dalkir talks about knowledge capture, sharing and acquisition, while Doan et al. introduce initiation, implementation and evaluation.[71][72] Furthermore, Levy introduces three steps (scope, transfer, integration) but also recognizes a "zero stage" for initiation of the project.[68] A knowledge audit is a comprehensive assessment of an organization's knowledge assets, including its explicit and tacit knowledge, intellectual capital, expertise, and skills. The goal of a knowledge audit is to identify the organization's knowledge strengths and gaps, and to develop strategies for leveraging knowledge to improve performance and competitiveness. A knowledge audit helps ensure that an organization's knowledge management activities are heading in the right direction, and it reduces the risk of making incorrect decisions. The term knowledge audit is often used interchangeably with information audit, although an information audit is slightly narrower in scope.[73][74] The requirement and significance of a knowledge audit can vary widely among different industries and companies. For instance, within the software development industry, knowledge audits can play a pivotal role due to the inherently knowledge-intensive nature of the work. This contrasts with sectors like manufacturing, where physical assets often play a more important role.
The difference arises from the fact that in software development companies the skills, expertise, and intellectual capital often overshadow the value of physical assets.[75] Knowledge audits provide opportunities for organizations to improve their management of knowledge assets, with the goal of enhancing organizational effectiveness and efficiency. By conducting a knowledge audit, organizations can raise awareness of knowledge assets as primary factors of production and as critical capital assets in today's knowledge economy. The process of a knowledge audit allows organizations to gain a deeper understanding of their knowledge assets. This includes identifying and defining these assets, understanding their behavior and properties, and describing how, when, why, and where they are used in business processes.[75] Knowledge protection refers to behaviors and actions taken to protect knowledge from unwanted opportunistic behavior, for example the appropriation or imitation of that knowledge.[76] Knowledge protection is used to prevent knowledge from unintentionally becoming available or useful to competitors. It can take the form of, for example, a patent, copyright, trademark, lead time or secrecy held by a company or an individual.[77] There are various methods for knowledge protection, and those methods are often divided into two categories by their formality: formal protection and informal protection.[78][79][80][81] Occasionally a third category is introduced, semi-formal protection, which includes contracts and trade secrets;[80][81][82] these semi-formal methods are also usually placed under formal methods. Organizations often use a combination of formal and informal knowledge protection methods to achieve comprehensive protection of their knowledge assets.[81] Formal and informal knowledge protection mechanisms differ in nature, and each has its benefits and drawbacks. In many organizations, the challenge is to find a good mix of measures that works for the organization.[79] Formal knowledge protection practices can take various forms, such as legal instruments or formal procedures and structures, to control which knowledge is shared and which is protected.[78] Formal knowledge protection methods include, for example, patents, trademarks, copyrights and licensing.[78][80][83] Technical solutions to protect knowledge also fall under the category of formal knowledge protection; from a technical viewpoint, formal protection includes technical access constraints and the protection of communication channels, systems, and storage.[79] While knowledge may eventually become public in some form or another, formal protection mechanisms are necessary to prevent competitors from directly utilizing it for their own gain.[79] Formal protection methods are particularly effective in protecting established knowledge that can be codified and embodied in final products or services.[83] Informal knowledge protection methods refer to the use of informal mechanisms such as human resource management practices or secrecy to protect knowledge assets.
There is a notable amount of knowledge that cannot be protected by formal methods and for which more informal protection might be the most efficient option.[84] Informal knowledge protection methods can take various forms, such as secrecy, social norms and values, complexity, lead time and human resource management.[78][83][85][84] Informal knowledge protection methods protect knowledge assets, for example, by making it difficult for outsiders to access and understand the knowledge within the boundaries of the organization.[85] Informal protection methods are more effective for protecting knowledge that is complex or difficult to express, articulate, or codify.[85][84] The balance between knowledge sharing and knowledge protection is a critical dilemma faced by organizations today.[86][79] While sharing knowledge can lead to innovation, collaboration, and competitive advantage, protecting knowledge can prevent it from being misused, misappropriated, or lost.[86][79][87] Thus, the need for organizational learning must be balanced with the need to protect organisations' intellectual property, especially whilst cooperating with external partners.[86][88] The role of information security is crucial in helping organisations protect their assets whilst still enabling the benefits of information sharing.[79][87] By implementing effective knowledge management strategies, organizations can protect valuable intellectual property while also encouraging the sharing of relevant knowledge across teams and departments.[86] This active balancing act requires careful consideration of factors such as the level of openness, the identification of core knowledge areas, and the establishment of appropriate mechanisms for knowledge transfer and collaboration.[86] Finding the right balance between knowledge sharing and knowledge protection is a complex issue that requires a nuanced understanding of the trade-offs involved and the context in which knowledge is shared or protected.[86][88] In conclusion, protecting knowledge is crucial to promote innovation and creativity, but it is not without its risks: overprotection, misappropriation, infringement claims, and inadequate protection are all risks associated with knowledge protection. Individuals and organizations should take steps to protect their intellectual property while also considering the potential risks and benefits of such protection.
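The codification ("push") strategy described earlier centres on a shared repository into which individuals encode explicit knowledge and from which others retrieve it. The following is a minimal, purely illustrative Python sketch of such a repository; it is not drawn from the article or from any particular KM product, and the names used (KnowledgeRepository, Entry, contribute, retrieve) are hypothetical. A real system would add access control, workflow, versioning and richer search.

# Minimal illustrative sketch of a codification-style ("push") knowledge repository.
# Hypothetical names; real KM systems add access control, workflow, versioning, ranking.
from dataclasses import dataclass, field


@dataclass
class Entry:
    author: str
    title: str
    body: str
    tags: set = field(default_factory=set)


class KnowledgeRepository:
    """Shared store for codified (explicit) knowledge."""

    def __init__(self):
        self._entries = []

    def contribute(self, entry):
        # "Push": an individual explicitly encodes knowledge into the shared repository.
        self._entries.append(entry)

    def retrieve(self, keyword):
        # Simple retrieval: match the keyword against tags, title, or body text.
        kw = keyword.lower()
        return [
            e for e in self._entries
            if kw in {t.lower() for t in e.tags}
            or kw in e.title.lower()
            or kw in e.body.lower()
        ]


if __name__ == "__main__":
    repo = KnowledgeRepository()
    repo.contribute(Entry(
        author="alice",
        title="Lessons learned: supplier onboarding",
        body="Checklist and pitfalls observed during the 2023 onboarding round.",
        tags={"supply-chain", "lessons-learned"},
    ))
    for hit in repo.retrieve("supply-chain"):
        print(hit.title, "-", hit.author)

Keeping retrieval document- and keyword-based keeps the sketch close to the "people-to-document" idea of codification: knowledge travels as stored documents rather than through direct person-to-person contact, which is what the personalisation strategy emphasises instead.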
https://en.wikipedia.org/wiki/Knowledge_management
The following tables compare enterprise bookmarking platforms. The platforms listed are applications installed on a web server (usually requiring MySQL or another database and PHP, Perl, Python, or some other language for web apps). One table lists the types of data that can be tagged; tags and metadata can be used to enrich the previously described types of data and content. Another table lists the default capabilities each platform provides. Enterprise bookmarking tools differ from social bookmarking tools in that they often have to meet taxonomy constraints. Tag management capabilities are the uphill (e.g. faceted classification, predefined tags) and downhill gardening (e.g. tag renaming, moving, merging) abilities that can be put in place to manage the folksonomy generated from user tagging. Security abilities are compared at both the platform level and the application level. Regarding operating systems, in the case of web applications the comparison describes the server OS; for centrally-hosted proprietary websites this is not applicable, and any client OS can connect to a web service unless stated otherwise in a footnote.
https://en.wikipedia.org/wiki/Comparison_of_enterprise_bookmarking_platforms
A bookmark manager is any software program or feature designed to store, organize, and display web bookmarks. The bookmarks feature included in each major web browser is a rudimentary bookmark manager. More capable bookmark managers are available online as web apps, mobile apps, or browser extensions, and may display bookmarks as text links or graphical tiles (often depicting icons). Social bookmarking websites are bookmark managers. Start page browser extensions, new tab page browser extensions, and some browser start pages also have bookmark presentation and organization features, which are typically tile-based. Some more general programs, such as certain note taking apps, have bookmark management functionality built in.
https://en.wikipedia.org/wiki/Bookmark_manager
A social bookmarking website is a centralized online service that allows users to store and share Internet bookmarks. Such a website typically offers a blend of social and organizational tools, such as annotation, categorization, folksonomy-based tagging, social cataloging and commenting. The website may also interface with other kinds of services, such as citation management software and social networking sites.[1]
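Folksonomy-based tagging, as mentioned above, emerges from aggregating the free-form tags that many independent users attach to the same bookmark. The short Python sketch below is illustrative only; the data and names are hypothetical and not taken from any particular social bookmarking service.

# Illustrative sketch: how a folksonomy emerges from per-user tags on a shared URL.
# Hypothetical data; real services add user accounts, privacy, ranking and spam control.
from collections import Counter, defaultdict

# Each user tags URLs independently with free-form labels.
user_tags = {
    "alice": {"https://example.org/paper": {"semantics", "metadata"}},
    "bob":   {"https://example.org/paper": {"metadata", "to-read"}},
    "carol": {"https://example.org/paper": {"metadata", "semantics"}},
}

# Aggregate tag counts per URL: the emergent, bottom-up vocabulary (the folksonomy).
folksonomy = defaultdict(Counter)
for bookmarks in user_tags.values():
    for url, tags in bookmarks.items():
        folksonomy[url].update(tags)

# The most common tags for a URL act as its community-generated description.
print(folksonomy["https://example.org/paper"].most_common(2))
# expected: [('metadata', 3), ('semantics', 2)]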
https://en.wikipedia.org/wiki/List_of_social_bookmarking_websites
This is a list of notablesocial software: selected examples ofsocial softwareproducts and services that facilitate a variety of forms of social human contact.
https://en.wikipedia.org/wiki/List_of_social_software
A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities along with a variety of theories explaining the patterns observed in these structures.[1] The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine dynamics of networks. For instance, social network analysis has been used in studying the spread of misinformation on social media platforms or analyzing the influence of key figures in social networks. Social networks and the analysis of them form an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the "web of group affiliations".[2] Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s.[1][3] Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science.[4][5] The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units, see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored,[6] although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics. In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups.
Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and belief (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society").[7]Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors.[8]Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction and examined the likelihood of interaction in loosely knit networks rather than groups.[9] Major developments in the field can be seen in the 1930s by several groups in psychology, anthropology, and mathematics working independently.[6][10][11]Inpsychology, in the 1930s,Jacob L. Morenobegan systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (seesociometry). Inanthropology, the foundation for social network theory is the theoretical andethnographicwork ofBronislaw Malinowski,[12]Alfred Radcliffe-Brown,[13][14]andClaude Lévi-Strauss.[15]A group of social anthropologists associated withMax Gluckmanand theManchester School, includingJohn A. Barnes,[16]J. Clyde MitchellandElizabeth Bott Spillius,[17][18]often are credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom.[6]Concomitantly, British anthropologistS. F. Nadelcodified a theory of social structure that was influential in later network analysis.[19]Insociology, the early (1930s) work ofTalcott Parsonsset the stage for taking a relational approach to understanding social structure.[20][21]Later, drawing upon Parsons' theory, the work of sociologistPeter Blauprovides a strong impetus for analyzing the relational ties of social units with his work onsocial exchange theory.[22][23][24] By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologistHarrison Whiteand his students at theHarvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time wereCharles Tilly, who focused on networks in political and community sociology and social movements, andStanley Milgram, who developed the "six degrees of separation" thesis.[25]Mark Granovetter[26]andBarry Wellman[27]are among the former students of White who elaborated and championed the analysis of social networks.[26][28][29][30] Beginning in the late 1990s, social network analysis experienced work by sociologists, political scientists, and physicists such asDuncan J. Watts,Albert-László Barabási,Peter Bearman,Nicholas A. Christakis,James H. Fowler, and others, developing and applying new models and methods to emerging data available about online social networks, as well as "digital traces" regarding face-to-face networks. In general, social networks areself-organizing,emergent, andcomplex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system.[32][33]These patterns become more apparent as network size increases. 
However, a global network analysis[34]of, for example, allinterpersonal relationshipsin the world is not feasible and is likely to contain so muchinformationas to be uninformative. Practical limitations of computing power, ethics and participant recruitment and payment also limit the scope of a social network analysis.[35][36]The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Althoughlevels of analysisare not necessarilymutually exclusive, there are three general levels into which networks may fall:micro-level,meso-level, andmacro-level. At the micro-level, social network research typically begins with an individual,snowballingas social relationships are traced, or may begin with a small group of individuals in a particular social context. Dyadic level: Adyadis a social relationship between two individuals. Network research on dyads may concentrate onstructureof the relationship (e.g. multiplexity, strength),social equality, and tendencies towardreciprocity/mutuality. Triadic level: Add one individual to a dyad, and you have atriad. Research at this level may concentrate on factors such asbalanceandtransitivity, as well associal equalityand tendencies towardreciprocity/mutuality.[35]In thebalance theoryofFritz Heiderthe triad is the key to social dynamics. The discord in a rivalrouslove triangleis an example of an unbalanced triad, likely to change to a balanced triad by a change in one of the relations. The dynamics of social friendships in society has been modeled by balancing triads. The study is carried forward with the theory ofsigned graphs. Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego." Egonetwork analysis focuses on network characteristics, such as size, relationship strength, density,centrality,prestigeand roles such asisolates, liaisons, andbridges.[37]Such analyses, are most commonly used in the fields ofpsychologyorsocial psychology,ethnographickinshipanalysis or othergenealogicalstudies of relationships between individuals. Subset level:Subsetlevels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus ondistanceand reachability,cliques,cohesivesubgroups, or othergroup actionsorbehavior.[38] In general, meso-level theories begin with apopulationsize that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks.[39] Organizations: Formalorganizationsaresocial groupsthat distribute tasks for a collectivegoal.[40]Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms offormalorinformalrelationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. 
In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures.[40] Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups.[41] Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s. This framework has the capacity to represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects as well as reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior.[42] Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups.[43] Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; in general, however, scale-free networks have some common characteristics. One notable characteristic of a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases; this distribution also follows a power law.[44] The Barabási model of network evolution is an example of a scale-free network. Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population. Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level." It is primarily used in the social and behavioral sciences and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping). Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical system and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure.
In the case ofagency-directednetworks these features also includereciprocity, triad significance profile (TSP, seenetwork motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such aslatticesandrandom graphs, do not show these features.[45] Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these areGraph theory,Balance theory, Social comparison theory, and more recently, theSocial identity approach.[46] Few complete theories have been produced from social network analysis. Two that have arestructural role theoryandheterophily theory. The basis of Heterophily Theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, as cliques have a tendency to have more homogeneous opinions as well as share many common traits. This homophilic tendency was the reason for the members of the cliques to be attracted together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique will have to look beyond the clique to its other friends and acquaintances. This is what Granovetter called "the strength of weak ties".[47] In the context of networks,social capitalexists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections.[48]Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters.[49]When two separate clusters possess non-redundant information, there is said to be a structural hole between them.[49]Thus, a network that bridgesstructural holeswill provide network benefits that are in some degree additive, rather than overlapping. An ideal network structure has a vine and cluster structure, providing access to many different clusters and structural holes.[49] Networks rich in structural holes are a form of social capital in that they offerinformationbenefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters.[49]For example, inbusiness networks, this is beneficial to an individual's career because he is more likely to hear of job openings and opportunities if his network spans a wide range of contacts in different industries/sectors. This concept is similar to Mark Granovetter's theory ofweak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction.[50] Research has used network analysis to examine networks created when artists are exhibited together in museum exhibition. 
Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for individual accomplishments of the artist.[51][52]Other work examines how network grouping of artists can affect an individual artist's auction performance.[53]An artist's status has been shown to increase when associated with higher status networks, though this association has diminishing returns over an artist's career. In J.A. Barnes' day, a "community" referred to a specific geographic location and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed throughtelecommunicationsdevices andsocial network services. Such devices and services require extensive and ongoing maintenance and analysis, often usingnetwork sciencemethods.Community developmentstudies, today, also make extensive use of such methods. Complex networksrequire methods specific to modelling and interpretingsocial complexityandcomplex adaptive systems, including techniques ofdynamic network analysis. Mechanisms such asDual-phase evolutionexplain how temporal changes in connectivity contribute to the formation of structure in social networks. The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants incollective actionssuch asprotests; promotion of peaceful behavior,social norms, andpublic goodswithincommunitiesthrough networks of informal governance; the role of social networks in both intrastate conflict and interstate conflict; and social networking among politicians, constituents, and bureaucrats.[54] Incriminologyandurban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength.[55] Diffusion of ideas and innovationsstudies focus on the spread and use of ideas from one actor to another or onecultureand another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovation such as neologisms. Experiments and large-scale field trials (e.g., byNicholas Christakisand collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduras villages,[56][57]Indian slums,[58]or in the lab.[59]Still other experiments have documented the experimental induction of social contagion of voting behavior,[60]emotions,[61]risk perception,[62]and commercial products.[63] Indemography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users.) For example, respondent driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents.[64][65] The field ofsociologyfocuses almost entirely on networks of outcomes of social interactions. 
More narrowly,economic sociologyconsiders behavioral interactions of individuals and groups throughsocial capitaland social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy.[66] Analysis of social networks is increasingly incorporated intohealth care analytics, not only inepidemiologicalstudies but also in models ofpatient communicationand education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations andsystems.[67] Human ecologyis aninterdisciplinaryandtransdisciplinarystudy of the relationship betweenhumansand theirnatural,social, andbuilt environments. The scientific philosophy of human ecology has a diffuse history with connections togeography,sociology,psychology,anthropology,zoology, and naturalecology.[68][69] In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo,[70]De Nooy,[71]Senekal,[72]andLotker,[73]to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings ofEven-Zohar, can be integrated with network theory and the relationships between different actors in the literary network, e.g. writers, critics, publishers, literary histories, etc., can be mapped usingvisualizationfrom SNA. Research studies offormalorinformal organizationrelationships,organizational communication,economics,economic sociology, and otherresourcetransfers. Social networks have also been used to examine how organizations interact with each other, characterizing the manyinformal connectionsthat link executives together, as well as associations and connections between individual employees at different organizations.[74]Many organizational social network studies focus onteams.[75]Withinteamnetwork studies, research assesses, for example, the predictors and outcomes ofcentralityand power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affectorganizational commitment,[76]organizational identification,[37]interpersonal citizenship behaviour.[77] Social capitalis a form ofeconomicandcultural capitalin which social networks are central,transactionsare marked byreciprocity,trust, andcooperation, andmarketagentsproducegoods and servicesnot mainly for themselves, but for acommon good.Social capitalis split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations.[78]This dimension is highly connected to the relational dimension which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. 
The relational dimension explains the nature of these ties which is mainly illustrated by the level of trust accorded to the network of organizations.[78]The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions.[78] Social capitalis a sociological concept about the value ofsocial relationsand the role of cooperation and confidence to achieve positive outcomes. The term refers to the value one can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use.[79][80][81]In a dynamic framework, higher activity in a network feeds into higher social capital which itself encourages more activity.[79][82] This particular cluster focuses on brand-image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand-image. This is gauged through techniques such as sentiment analysis which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications as the main goal of any study is to understandconsumer behaviourand drive sales. In manyorganizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities.[48]Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economistJohn Stuart Mill, writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress."[83]Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits of being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking.[84]In the case of consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big three consulting firm consultants and mid-size industry firms.[85]By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts. There has been research that both substantiates and refutes the benefits of information brokerage. A study of high tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations.[86]However, this study only analyzed Chinese firms, which tend to have strong communal sharing values. 
Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted.[48] Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career. Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by context, direction, and strength. The content of a relation refers to the resource that is exchanged. In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world.[87] Social network analysis methods have become essential to examining these types of computer-mediated communication. In addition, the sheer size and the volatile nature of social media have given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data.[88] Based on the pattern of homophily, ties between people are most likely to occur between nodes that are most similar to each other; similarly, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Therefore, social networks can be used as a tool to measure the degree of segregation or homophily within a social network. Social networks can be used both to simulate the process of homophily and to measure the level of exposure of different groups to each other within a current social network of individuals in a certain area.[89]
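Several of the quantitative notions mentioned above, such as heavy-tailed degree distributions, the clustering coefficient, degree centrality, and homophily or assortative mixing as a rough segregation measure, can be computed directly from a graph. The sketch below uses the Python networkx library on a synthetic Barabási-Albert graph with a randomly assigned binary attribute; the choice of library and the example data are assumptions made for illustration, not part of the article, and the output is not an analysis of any real social network.

# Illustrative sketch (not from the article): a few network measures computed with networkx.
import random
import networkx as nx

random.seed(42)

# Barabási-Albert preferential attachment: heavy-tailed ("scale-free"-like) degree
# distribution with a few high-degree hubs.
G = nx.barabasi_albert_graph(n=500, m=3, seed=42)

degrees = [d for _, d in G.degree()]
print("max degree:", max(degrees), "mean degree:", sum(degrees) / len(degrees))

# Clustering coefficient: tendency of a node's neighbours to be connected to each other.
print("average clustering:", round(nx.average_clustering(G), 3))

# Actor-level measure: the top hubs by degree centrality.
centrality = nx.degree_centrality(G)
hubs = sorted(centrality, key=centrality.get, reverse=True)[:5]
print("top hubs:", hubs)

# Toy binary attribute (e.g. a group label) assigned at random, then attribute
# assortativity as a simple homophily/segregation measure: values near +1 mean ties
# mostly join same-group nodes, values near 0 mean no such tendency.
groups = {node: random.choice(["A", "B"]) for node in G.nodes}
nx.set_node_attributes(G, groups, "group")
print("group assortativity:", round(nx.attribute_assortativity_coefficient(G, "group"), 3))

On real data the attribute would be an observed characteristic such as neighbourhood, language, or affiliation, and a clearly positive coefficient would indicate homophily or segregation in the sense described above.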
https://en.wikipedia.org/wiki/Social_networking