In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of an iterative method. Arnoldi finds an approximation to the eigenvalues and eigenvectors of general (possibly non-Hermitian) matrices by constructing an orthonormal basis of the Krylov subspace, which makes it particularly useful when dealing with large sparse matrices.
The Arnoldi method belongs to a class of linear algebra algorithms that give a partial result after a small number of iterations, in contrast to so-called direct methods, which must complete to give any useful result (see, for example, Householder transformation). The partial result in this case is the first few vectors of the basis the algorithm is building.
When applied to Hermitian matrices it reduces to the Lanczos algorithm. The Arnoldi iteration was invented by W. E. Arnoldi in 1951.[1]
An intuitive method for finding the largest (in absolute value) eigenvalue of a given $m \times m$ matrix $A$ is the power iteration: starting with an arbitrary initial vector $b$, calculate $Ab, A^2b, A^3b, \dots$, normalizing the result after every application of the matrix $A$.
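A minimal NumPy sketch of this power iteration (the function name, iteration count, and the Rayleigh-quotient eigenvalue estimate are illustrative choices, not part of the original text):

```python
import numpy as np

def power_iteration(A, b, num_iters=100):
    """Approximate the dominant eigenvector of A by repeatedly applying A and normalizing."""
    q = b / np.linalg.norm(b)
    for _ in range(num_iters):
        q = A @ q                      # apply the matrix
        q = q / np.linalg.norm(q)      # normalize after every application
    eigenvalue_estimate = q.conj() @ A @ q   # Rayleigh quotient
    return q, eigenvalue_estimate
```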
This sequence converges to the eigenvector corresponding to the eigenvalue with the largest absolute value, $\lambda_1$. However, much potentially useful computation is wasted by using only the final result, $A^{n-1}b$. This suggests that instead, we form the so-called Krylov matrix $K_n = \begin{bmatrix} b & Ab & A^2b & \cdots & A^{n-1}b \end{bmatrix}$.
The columns of this matrix are not in general orthogonal, but we can extract an orthogonal basis via a method such as Gram–Schmidt orthogonalization. The resulting set of vectors is thus an orthogonal basis of the Krylov subspace $\mathcal{K}_n$. We may expect the vectors of this basis to span good approximations of the eigenvectors corresponding to the $n$ largest eigenvalues, for the same reason that $A^{n-1}b$ approximates the dominant eigenvector.
The Arnoldi iteration uses the modified Gram–Schmidt process to produce a sequence of orthonormal vectors, $q_1, q_2, q_3, \dots$, called the Arnoldi vectors, such that for every $n$, the vectors $q_1, \dots, q_n$ span the Krylov subspace $\mathcal{K}_n$. Explicitly, the algorithm is as follows: start with an arbitrary vector $q_1$ of norm 1; then, for $k = 2, 3, \dots$, set $q_k \leftarrow Aq_{k-1}$, project out the earlier directions by computing $h_{j,k-1} \leftarrow q_j^{*}q_k$ and updating $q_k \leftarrow q_k - h_{j,k-1}q_j$ for $j = 1, \dots, k-1$, and finally normalize via $h_{k,k-1} \leftarrow \lVert q_k \rVert$ and $q_k \leftarrow q_k / h_{k,k-1}$.
The $j$-loop projects out the component of $q_k$ in the directions of $q_1, \dots, q_{k-1}$. This ensures the orthogonality of all the generated vectors.
The algorithm breaks down when $q_k$ is the zero vector. This happens when the minimal polynomial of $A$ is of degree $k$. In most applications of the Arnoldi iteration, including the eigenvalue algorithm below and GMRES, the algorithm has converged at this point.
Every step of the $k$-loop takes one matrix–vector product and approximately $4mk$ floating point operations.
In the programming language Python with support of the NumPy library:
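A minimal sketch of such an implementation, following the recurrence described above (the function name, breakdown tolerance, and use of complex arithmetic are illustrative choices, not mandated by the article):

```python
import numpy as np

def arnoldi_iteration(A, b, n):
    """Build an orthonormal basis of the Krylov subspace spanned by b, Ab, ..., A^n b.

    Returns Q, an m-by-(n+1) matrix whose columns are the Arnoldi vectors, and
    h, an (n+1)-by-n upper Hessenberg matrix satisfying A @ Q[:, :n] = Q @ h.
    If the iteration breaks down early, truncated matrices are returned.
    """
    m = A.shape[0]
    h = np.zeros((n + 1, n), dtype=complex)
    Q = np.zeros((m, n + 1), dtype=complex)
    Q[:, 0] = b / np.linalg.norm(b)               # first Arnoldi vector q_1
    for k in range(1, n + 1):
        v = A @ Q[:, k - 1]                       # candidate for the next basis vector
        for j in range(k):                        # the "j-loop": modified Gram-Schmidt
            h[j, k - 1] = np.vdot(Q[:, j], v)
            v = v - h[j, k - 1] * Q[:, j]
        h[k, k - 1] = np.linalg.norm(v)
        if h[k, k - 1] < 1e-12:                   # breakdown: the Krylov subspace is invariant
            return Q[:, :k], h[:k, :k]
        Q[:, k] = v / h[k, k - 1]
    return Q, h
```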
Let $Q_n$ denote the $m$-by-$n$ matrix formed by the first $n$ Arnoldi vectors $q_1, q_2, \dots, q_n$, and let $H_n$ be the (upper Hessenberg) matrix formed by the numbers $h_{j,k}$ computed by the algorithm:
$H_n = \begin{bmatrix} h_{1,1} & h_{1,2} & h_{1,3} & \cdots & h_{1,n} \\ h_{2,1} & h_{2,2} & h_{2,3} & \cdots & h_{2,n} \\ 0 & h_{3,2} & h_{3,3} & \cdots & h_{3,n} \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & h_{n,n-1} & h_{n,n} \end{bmatrix}.$
The orthogonalization method has to be specifically chosen such that the lower Arnoldi/Krylov components are removed from higher Krylov vectors. As $Aq_i$ can be expressed in terms of $q_1, \dots, q_{i+1}$ by construction, it is orthogonal to $q_{i+2}, \dots, q_n$.
We then have $H_n = Q_n^{*}AQ_n$.
The matrix $H_n$ can be viewed as $A$ in the subspace $\mathcal{K}_n$ with the Arnoldi vectors as an orthogonal basis; $A$ is orthogonally projected onto $\mathcal{K}_n$. The matrix $H_n$ can be characterized by the following optimality condition. The characteristic polynomial of $H_n$ minimizes $\lVert p(A)q_1 \rVert_2$ among all monic polynomials of degree $n$. This optimality problem has a unique solution if and only if the Arnoldi iteration does not break down.
The relation between the $Q$ matrices in subsequent iterations is given by $AQ_n = Q_{n+1}\tilde{H}_n$, where $\tilde{H}_n$ is the $(n+1)$-by-$n$ matrix formed by adding the extra row $\begin{bmatrix} 0 & \cdots & 0 & h_{n+1,n} \end{bmatrix}$ to $H_n$.
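Using the arnoldi_iteration sketch above, both relations can be checked numerically on a random test matrix (a usage example under the same assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)
n = 10

Q, h = arnoldi_iteration(A, b, n)     # Q is m-by-(n+1), h is (n+1)-by-n
# A Q_n = Q_{n+1} H~_n, up to rounding error
print(np.allclose(A @ Q[:, :n], Q @ h))
# H_n = Q_n* A Q_n: the orthogonal projection of A onto the Krylov subspace
print(np.allclose(h[:n, :], Q[:, :n].conj().T @ A @ Q[:, :n]))
```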
The idea of the Arnoldi iteration as an eigenvalue algorithm is to compute the eigenvalues of the projection of $A$ onto the Krylov subspace. The eigenvalues of $H_n$ are called the Ritz eigenvalues. Since $H_n$ is a Hessenberg matrix of modest size, its eigenvalues can be computed efficiently, for instance with the QR algorithm or the closely related Francis' algorithm. Francis' algorithm itself can be considered to be related to power iterations operating on a nested Krylov subspace. In fact, the most basic form of Francis' algorithm amounts to choosing $b$ equal to $Ae_1$ and extending $n$ to the full dimension of $A$. Improved versions include one or more shifts, and higher powers of $A$ may be applied in a single step.[2]
This is an example of the Rayleigh–Ritz method.
It is often observed in practice that some of the Ritz eigenvalues converge to eigenvalues of $A$. Since $H_n$ is $n$-by-$n$, it has at most $n$ eigenvalues, and not all eigenvalues of $A$ can be approximated. Typically, the Ritz eigenvalues converge to the largest eigenvalues of $A$. To get the smallest eigenvalues of $A$, the inverse (operation) of $A$ should be used instead. This can be related to the characterization of $H_n$ as the matrix whose characteristic polynomial minimizes $\lVert p(A)q_1 \rVert$ in the following way. A good way to get $p(A)$ small is to choose the polynomial $p$ such that $p(x)$ is small whenever $x$ is an eigenvalue of $A$. Hence, the zeros of $p$ (and thus the Ritz eigenvalues) will be close to the eigenvalues of $A$.
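For example, the Ritz eigenvalues can be read off the Hessenberg matrix produced by the sketch above (continuing the same assumed setup):

```python
import numpy as np

Q, h = arnoldi_iteration(A, b, n)
H_n = h[:n, :]                              # the n-by-n upper Hessenberg matrix
ritz_values = np.linalg.eigvals(H_n)        # Ritz eigenvalues
# The Ritz values of largest modulus typically approximate the largest eigenvalues of A.
print(sorted(ritz_values, key=abs, reverse=True)[:3])
```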
However, the details are not fully understood yet. This is in contrast to the case where $A$ is Hermitian. In that situation, the Arnoldi iteration becomes the Lanczos iteration, for which the theory is more complete.
Due to practical storage considerations, common implementations of Arnoldi methods typically restart after a fixed number of iterations. One approach is the Implicitly Restarted Arnoldi Method (IRAM)[3] by Lehoucq and Sorensen, which was popularized in the free and open-source software package ARPACK.[4] Another approach is the Krylov–Schur algorithm by G. W. Stewart, which is more stable and simpler to implement than IRAM.[5]
The generalized minimal residual method (GMRES) is a method for solving $Ax = b$ based on the Arnoldi iteration.
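A minimal GMRES sketch built on the relation $AQ_n = Q_{n+1}\tilde{H}_n$: the residual $\lVert b - Ax \rVert$ is minimized over the Krylov subspace by solving a small least-squares problem with $\tilde{H}_n$ (illustrative only; production code would typically use an established routine such as scipy.sparse.linalg.gmres):

```python
import numpy as np

def gmres_sketch(A, b, n):
    """Approximate the solution of A x = b over the n-dimensional Krylov subspace."""
    Q, h = arnoldi_iteration(A, b, n)       # the Arnoldi sketch from above
    beta = np.linalg.norm(b)
    rhs = np.zeros(h.shape[0])
    rhs[0] = beta                           # beta * e_1, since b = beta * q_1
    # Minimize || beta*e1 - H~_n y ||, then map back to the full space: x = Q_n y
    y, *_ = np.linalg.lstsq(h, rhs, rcond=None)
    return Q[:, :h.shape[1]] @ y
```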
|
https://en.wikipedia.org/wiki/Arnoldi_iteration
|
RankBrain is a machine learning-based search engine algorithm, the use of which was confirmed by Google on 26 October 2015.[1] It helps Google to process search results and provide more relevant search results for users.[2] In a 2015 interview, Google commented that RankBrain was the third most important factor in the ranking algorithm, after links and content,[2][3] out of about 200 ranking factors[4] whose exact functions are not fully disclosed. As of 2015, "RankBrain was used for less than 15% of queries."[5] The results show that RankBrain guesses what the other parts of the Google search algorithm will pick as the top result 80% of the time, compared to 70% for human search engineers.[2]
If RankBrain sees a word or phrase it is not familiar with, the program can make a guess as to what words or phrases might have a similar meaning and filter the result accordingly, making it more effective at handling never-before-seen search queries or keywords. Search queries are sorted into word vectors, also known as "distributed representations," which are close to each other in terms of linguistic similarity. RankBrain attempts to map this query into words (entities) or clusters of words that have the best chance of matching it. Therefore, RankBrain attempts to guess what people mean and records the results, which adapts the results to provide better user satisfaction.[6]
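RankBrain's internal models are not public; purely to illustrate the general idea of matching a query's distributed representation against known clusters, here is a toy cosine-similarity sketch (all vectors and cluster names are made up):

```python
import numpy as np

# Hypothetical pre-computed "distributed representations" for known query clusters.
cluster_vectors = {
    "footwear": np.array([0.9, 0.1, 0.0]),
    "car storage": np.array([0.1, 0.8, 0.3]),
}

def nearest_cluster(query_vector):
    """Map an unseen query vector to the most similar known cluster by cosine similarity."""
    def cosine(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(cluster_vectors, key=lambda name: cosine(query_vector, cluster_vectors[name]))

print(nearest_cluster(np.array([0.85, 0.2, 0.05])))   # -> "footwear"
```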
RankBrain is trained offline with batches of past searches. Studies showed how RankBrain better interpreted the relationships between words. This can include the use of stop words in a search query ("the," "and," "without," etc.) – words that were historically ignored by Google, but are sometimes of major importance to fully understanding the meaning or intent behind a person's search query. It is also able to parse patterns between searches that are seemingly unconnected, to understand how those searches are similar to each other.[7] Once RankBrain's results are verified by Google's team, the system is updated and goes live again.[8]
Google has stated that it uses tensor processing unit (TPU) ASICs for processing RankBrain requests.[9]
RankBrain has allowed Google to speed up the algorithmic testing it does for keyword categories to attempt to choose the best content for any particular keyword search. This means that old methods of gaming the rankings with false signals are becoming less and less effective, and the highest-quality content from a human perspective is being ranked higher in Google.[10]
RankBrain has helped Google Hummingbird (the 2013 version of the ranking algorithm) provide more accurate results because it can learn words and phrases it may not know. It also learns them specifically for the country, as well as the language, in which a query is made. So, if one looks up a query with the word boot in it within the United States, one will get information on footwear. However, if the query comes through the UK, then the information could also be in regard to storage spaces in cars.[11]
|
https://en.wikipedia.org/wiki/RankBrain
|
Mobilegeddon is a name for Google's search engine algorithm update of April 21, 2015.[1] The term was coined by Chuck Price in a post written for Search Engine Watch on March 9, 2015. The term was then adopted by webmasters and web developers.
The main effect of this update was to give priority to websites that display well on smartphones and other mobile devices. The change did not affect searches made from a desktop computer or a laptop.[2]
Google announced its intention to make the change in February 2015.[3] In addition to their announcement, Google published an article, "Mobile Friendly Sites," on their Google Developers page to help webmasters with the transition.[4] Google claims the transition to mobile-friendly sites was to improve user experience, stating "the desktop version of a site might be difficult to view and use on a mobile device."[4]
The protologism is a blend word of "mobile" and "Armageddon" because the change "could cause massive disruption to page rankings."[5] But, writing for Forbes, Robert Hof says that concerns about the change were "overblown" in part because "Google is providing a test to see if sites look good on smartphones".[6]
Search engine results pages on smartphones now show URLs in "breadcrumb" format, as opposed to the previous explicit format.[7]
Based on their data set, the software company Searchmetrics found that non-mobile-friendly sites lost an average of 0.21 ranking positions.[8] Content marketing company BrightEdge has tracked over 20,000 URLs since the update, and is reporting a 21% decrease in non-mobile-friendly URLs on the first 3 pages of search results.[9] According to Peter J. Meyers, it was "nothing to write home about."[10]
|
https://en.wikipedia.org/wiki/Mobilegeddon
|
The domain authority (also referred to as thought leadership) of a website describes its relevance for a specific subject area or industry; Domain Authority is also the name of a search engine ranking score developed by Moz.[1] This relevance has a direct impact on a website's ranking by search engines, which try to assess domain authority through automated analytic algorithms. The relevance of domain authority to website listings in the search engine results pages (SERPs) led to the birth of a whole industry of black-hat SEO providers who try to feign an increased level of domain authority.[2] The ranking by major search engines, e.g., Google's PageRank, is agnostic of specific industry or subject areas and assesses a website in the context of the totality of websites on the Internet.[3] The results on the SERP page set the PageRank in the context of a specific keyword. In a less competitive subject area, even websites with a low PageRank can achieve high visibility in search engines, as the highest-ranked sites that match specific search words are positioned on the first positions in the SERPs.[4]
Domain authority can be described through four dimensions: the prestige of a website and its authors, the quality of the information it presents, its centrality in the link graph, and the competitiveness of its subject area.
The weight of these factors varies as a function of the ranking body. When individuals judge domain authority, decisive factors can include the prestige of a website, the prestige of the contributing authors in a specific domain, the quality and relevance of the information on a website, the novelty of the content, but also the competitive situation around the discussed subject area or the quality of the outgoing links.[5] Several search engines (e.g., Bing, Google, Yahoo) have developed automated analyses and ranking algorithms for domain authority. Lacking the "human reasoning" which would allow them to judge quality directly, they make use of complementary parameters such as information or website prestige and centrality from a graph-theoretical perspective, manifested in the quantity and quality of inbound links.[6] The software-as-a-service company Moz.org has developed an algorithm and weighted level metric, branded as "Domain Authority", which gives predictions on a website's performance in search engine rankings with a discriminating range from 0 to 100.[7][8]
Prestige identifies the prominent actors in a qualitative and quantitative manner on the basis of graph theory. A website is considered a node. Its prestige is defined by the quantity of nodes that have directed edges pointing at the website and the quality of those nodes. The nodes' quality is in turn defined through their own prestige. This definition ensures that a prestigious website is not only pointed at by many other websites, but that those pointing websites are prestigious themselves.[9] Similar to the prestige of a website, the contributing authors' prestige is taken into consideration[10] in those cases where the authors are named and identified (e.g., with their Twitter or Google Plus profile). In this case, prestige is measured with the prestige of the authors who quote them or refer to them and the quantity of referrals which these authors receive.[5] Search engines use additional factors to scrutinize a website's prestige. To do so, Google's PageRank looks at factors like link diversification and link dynamics: when too many links come from the same domain or webmaster, there is a risk of black-hat SEO; when backlinks grow rapidly, this nourishes suspicion of spam or black-hat SEO as the origin.[11] In addition, Google looks at factors like the public availability of the WHOIS information of the domain owner, the use of global top-level domains, domain age and volatility of ownership to assess their apparent prestige. Lastly, search engines look at the traffic and the amount of organic searches for a site, as the amount of traffic should be congruent with the level of prestige that a website has in a certain domain.[5]
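A toy sketch of this recursive notion of prestige, in the spirit of PageRank-style scoring on a link graph (the graph, damping factor, and iteration count are made up for illustration; this is not any search engine's actual algorithm):

```python
import numpy as np

# Hypothetical link graph: adjacency[i, j] = 1 if site i links to site j.
adjacency = np.array([
    [0, 1, 1],
    [0, 0, 1],
    [1, 0, 0],
], dtype=float)

def prestige_scores(adjacency, damping=0.85, iters=100):
    """Each site's prestige depends on the number and the prestige of the sites linking to it."""
    n = adjacency.shape[0]
    transition = adjacency / adjacency.sum(axis=1, keepdims=True)  # row-stochastic link matrix
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - damping) / n + damping * (transition.T @ scores)
    return scores

print(prestige_scores(adjacency))   # site 2, linked to by both other sites, scores highest
```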
Information quality describes the value which information provides to the reader. Wang and Strong categorize assessable dimensions of information into intrinsic (accuracy, objectivity, believability, reputation), contextual (relevancy, value-added/authenticity, timeliness, completeness, quantity), representational (interpretability, format, coherence, compatibility) and accessibility-related (accessibility and access security).[12] Humans can base their judgments of quality on experience in judging content, style and grammatical correctness. Information systems like search engines need indirect means of drawing conclusions about the quality of information. In 2015, Google's PageRank algorithm took approximately 200 ranking factors, combined in a learning algorithm, to assess information quality.[13]
Prominent actors have extensive and ongoing relationships with other prominent actors. This increases their visibility and makes the content more relevant, interconnected, and useful.[9] Centrality, from a graph-theoretical perspective, describes relationships without distinguishing between receiving and sending information. In this context, it includes the inbound links considered in the definition of prestige, complemented by outgoing links. Another difference between prestige and centrality is that the measure of prestige applies to a complete website or author, whereas centrality can be considered at a more granular level, such as an individual blog post. Search engines evaluate various factors to assess the quality of outgoing links, including link centrality, which describes the quality, quantity, and relevance of outgoing links as well as the prestige of their destination. They also consider the frequency of new content publication ("freshness of information") to ensure that the website remains an active participant in the community.[5]
The domain authority that a website attains is not the only factor which defines its positioning in the SERPs of search engines. The second important factor is the competitiveness of a specific sector. Subjects like SEO are very competitive. A website needs to outperform the prestige of competing websites to attain domain authority. This prestige, relative to other websites, can be defined as "relative domain authority."
|
https://en.wikipedia.org/wiki/Domain_Authority
|
In the context of the World Wide Web, deep linking is the use of a hyperlink that links to a specific, generally searchable or indexed, piece of web content on a website (e.g. "https://example.com/path/page"), rather than the website's home page (e.g., "https://example.com"). The URL contains all the information needed to point to a particular item. Deep linking is different from mobile deep linking, which refers to directly linking to in-app content using a non-HTTP URI.
The technology behind the World Wide Web, the Hypertext Transfer Protocol (HTTP), does not actually make any distinction between "deep" links and any other links—all links are functionally equal. This is intentional; one of the design purposes of the Web is to allow authors to link to any published document on another site. The possibility of so-called "deep" linking is therefore built into the Web technology of HTTP and URLs by default—while a site can attempt to restrict deep links, to do so requires extra effort. According to the World Wide Web Consortium Technical Architecture Group, "any attempt to forbid the practice of deep linking is based on a misunderstanding of the technology, and threatens to undermine the functioning of the Web as a whole".[1]
Some commercial websites object to other sites making deep links into their content either because it bypasses advertising on their main pages, passes off their content as that of the linker or, like The Wall Street Journal, they charge users for permanently valid links.
Sometimes, deep linking has led to legal action, such as in the 1997 case of Ticketmaster versus Microsoft, where Microsoft deep-linked to Ticketmaster's site from its Sidewalk service. This case was settled when Microsoft and Ticketmaster arranged a licensing agreement.
Ticketmaster later filed a similar case against Tickets.com, and the judge in this case ruled that such linking was legal as long as it was clear to whom the linked pages belonged.[2] The court also concluded that URLs themselves were not copyrightable, writing: "A URL is simply an address, open to the public, like the street address of a building, which, if known, can enable the user to reach the building. There is nothing sufficiently original to make the URL a copyrightable item, especially the way it is used. There appear to be no cases holding the URLs to be subject to copyright. On principle, they should not be."
Websites built on technologies such as Adobe Flash and AJAX often do not support deep linking. This can cause usability problems for visitors to those sites. For example, they may be unable to save bookmarks to individual pages or states of the site, use the web browser forward and back buttons—and clicking the browser refresh button may return the user to the initial page.
However, this is not a fundamental limitation of these technologies. Well-known techniques, and libraries such as SWFAddress[3] and unFocus History Keeper,[4] now exist that website creators using Flash or AJAX can use to provide deep linking to pages within their sites.[5][6][7]
Probably the earliest legal case arising out of deep linking was the 1996 Scottish case of The Shetland Times vs. The Shetland News, in which the Times accused the News of appropriating stories on the Times' website as its own.[8][9]
At the beginning of 2006, in a case between the search engine Bixee.com and job site Naukri.com, the Delhi High Court in India prohibited Bixee.com from deep linking to Naukri.com.[10]
The most important and widely cited U.S. opinions on deep linking are the Ninth Circuit's rulings in Kelly v. Arriba Soft Corp.[11] and Perfect 10, Inc. v. Amazon.com, Inc.[12] In both cases, the court exonerated the use of deep linking. In the second of these cases, the court explained (speaking of defendant Google, whom Perfect 10 had also sued) why linking is not a copyright infringement under US law:
Google does not…display a copy of full-size infringing photographic images for purposes of the Copyright Act when Google frames in-line linked images that appear on a user's computer screen. Because Google's computers do not store the photographic images, Google does not have a copy of the images for purposes of the Copyright Act. In other words, Google does not have any "material objects…in which a work is fixed…and from which the work can be perceived, reproduced, or otherwise communicated" and thus cannot communicate a copy. Instead of communicating a copy of the image, Google provides HTML instructions that direct a user's browser to a website publisher's computer that stores the full-size photographic image. Providing these HTML instructions is not equivalent to showing a copy. First, the HTML instructions are lines of text, not a photographic image. Second, HTML instructions do not themselves cause infringing images to appear on the user's computer screen. The HTML merely gives the address of the image to the user's browser. The browser then interacts with the computer that stores the infringing image. It is this interaction that causes an infringing image to appear on the user's computer screen. Google may facilitate the user's access to infringing images. However, such assistance raised only contributory liability issues and does not constitute direct infringement of the copyright owner's display rights. …While in-line linking and framing may cause some computer users to believe they are viewing a single Google webpage, the Copyright Act, unlike the Trademark Act, does not protect a copyright holder against acts that cause consumer confusion.
In December 2006, a Texas court ruled that linking by a motocross website to videos on a Texas-based motocross video production website did not constitute fair use. The court subsequently issued an injunction.[13] This case, SFX Motor Sports Inc. v. Davis, was not published in official reports, but is available at 2006 WL 3616983.
In a February 2006 ruling, the Danish Maritime and Commercial Court (Copenhagen) found systematic crawling, indexing and deep linking by the portal site ofir.dk of real estate site Home.dk not to conflict with Danish law or the database directive of the European Union. The Court stated that search engines are desirable for the functioning of the Internet, and that, when publishing information on the Internet, one must assume—and accept—that search engines deep-link to individual pages of one's website.[14]
Web site owners who do not want search engines to deep link, or want them only to index specific pages, can request so using the Robots Exclusion Standard (robots.txt file). People who favor deep linking often feel that content owners who do not provide a robots.txt file are implying by default that they do not object to deep linking either by search engines or others. People against deep linking often claim that content owners may be unaware of the Robots Exclusion Standard or may not use robots.txt for other reasons. Sites other than search engines can also deep link to content on other sites, so some question the relevance of the Robots Exclusion Standard to controversies about deep linking.[15] The Robots Exclusion Standard does not programmatically enforce its directives, so it does not prevent search engines and others who do not follow polite conventions from deep linking.[16]
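For illustration, a crawler or other client can consult a site's robots.txt before following a deep link; a minimal sketch using Python's standard urllib.robotparser (the URLs are the example addresses used earlier in this article):

```python
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()                    # fetch and parse the site's robots.txt

# May a generic user agent fetch (deep link to) this indexed page?
print(parser.can_fetch("*", "https://example.com/path/page"))
```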
|
https://en.wikipedia.org/wiki/Deep_linking
|
Inline linking (also known as hotlinking, piggy-backing, direct linking, offsite image grabs, bandwidth theft,[1] or leeching) is the practice of using or embedding a linked object—often an image—from one website onto a webpage of another website. In this process, the second site does not host the object itself but instead loads it directly from the original source, creating an inline link to the hosting site.
The Hypertext Transfer Protocol (HTTP), the technology behind the World Wide Web, does not differentiate between different types of links—all links are functionally equal. As a result, resources can be linked from any server and loaded onto a web page regardless of their original location.
When a website is visited, the browser first downloads the HTML document containing the web page's textual content. This document may reference additional resources, including other HTML files, images, scripts, or stylesheets. Within the HTML, <img> tags specify the URLs of images to be displayed on the page. If the <img> tag does not specify a server, the web browser assumes that the image is hosted on the same server as the parent page (e.g., <img src="picture.jpg" />). If the <img> tag contains an absolute URL, the browser retrieves the image from an external server (e.g., <img src="http://www.example.com/picture.jpg" />).
When a browser downloads an HTML page containing such an image, the browser will contact the remote server to request the image content.
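How a browser resolves these two cases can be illustrated with Python's standard urllib.parse (a sketch; the parent-page URL is made up, and the absolute image URL is the example above):

```python
from urllib.parse import urljoin

page_url = "https://www.host.example/articles/page.html"   # hypothetical parent page

# Relative src: the image is assumed to be hosted on the same server as the page.
print(urljoin(page_url, "picture.jpg"))
# -> https://www.host.example/articles/picture.jpg

# Absolute src: the browser requests the image from the external server (an inline link).
print(urljoin(page_url, "http://www.example.com/picture.jpg"))
# -> http://www.example.com/picture.jpg
```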
The ability to display content from one site within another is part of the original design of the Web's hypertext medium. Common uses include:
The blurring of boundaries between sites can lead to other problems when the site violates users' expectations. Other times, inline linking can be done for malicious purposes.
Most web browsers will blindly follow the URL for inline links, even though it is a frequent security complaint.[3] Embedded images may be used as a web bug to track users or to relay information to a third party. Many ad-filtering browser tools will restrict this behavior to varying degrees.
Some servers are programmed to use the HTTP Referer header to detect hotlinking and return a condemnatory message, commonly in the same format, in place of the expected image or media clip. Most servers can be configured to partially protect hosted media from inline linking, usually by not serving the media or by serving a different file.[1][4]
URL rewriting is often used (e.g., mod_rewrite with the Apache HTTP Server) to reject or redirect attempted hotlinks to images and media to an alternative resource. Most types of electronic media can be redirected this way, including video files, music files, and animations (such as Flash).
Other solutions usually combine URL rewriting with some custom, complex server-side scripting to allow hotlinking for a short time, or in more complex setups, to allow the hotlinking but return an alternative image with reduced quality and size and thus reduce the bandwidth load when requested from a remote server. All hotlink prevention measures risk deteriorating the user experience on the third-party website.[5]
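A minimal sketch of the Referer-based decision such measures rely on (a generic illustration rather than any particular server's configuration; the host names are made up):

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"example.com", "www.example.com"}   # hosts allowed to embed the media

def choose_response(requested_file, referer_header):
    """Serve the real file to on-site requests and a placeholder to hotlink requests."""
    if referer_header:
        referring_host = urlparse(referer_header).hostname
        if referring_host not in ALLOWED_HOSTS:
            return "placeholder.png"          # reduced-quality substitute (or a refusal)
    # Requests with no Referer header are usually allowed, since many clients omit it.
    return requested_file

print(choose_response("photos/cat.jpg", "https://other-site.example/gallery.html"))
# -> placeholder.png
```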
The most significant legal fact about inline linking, relative to copyright law considerations, is that the inline linker does not place a copy of the image file on its own Internet server. Rather, the inline linker places a pointer on its Internet server that points to the server on which the proprietor of the image has placed the image file. This pointer causes a user's browser to jump to the proprietor's server and fetch the image file to the user's computer. US courts have considered this a decisive fact in copyright analysis. Thus, in Perfect 10, Inc. v. Amazon.com, Inc.,[6] the United States Court of Appeals for the Ninth Circuit explained why inline linking did not violate US copyright law:
Google does not...display a copy of full-size infringing photographic images for purposes of the Copyright Act when Google frames in-line linked images that appear on a user's computer screen. Because Google's computers do not store the photographic images, Google does not have a copy of the images for purposes of the Copyright Act. In other words, Google does not have any "material objects...in which a work is fixed...and from which the work can be perceived, reproduced, or otherwise communicated" and thus cannot communicate a copy. Instead of communicating a copy of the image, Google provides HTML instructions that direct a user's browser to a website publisher's computer that stores the full-size photographic image. Providing these HTML instructions is not equivalent to showing a copy. First, the HTML instructions are lines of text, not a photographic image. Second, HTML instructions do not themselves cause infringing images to appear on the user's computer screen. The HTML merely gives the address of the image to the user's browser. The browser then interacts with the computer that stores the infringing image. It is this interaction that causes an infringing image to appear on the user's computer screen. Google may facilitate the user's access to infringing images. However, such assistance raised only contributory liability issues and does not constitute direct infringement of the copyright owner's display rights. ...While in-line linking and framing may cause some computer users to believe they are viewing a single Google webpage, the Copyright Act...does not protect a copyright holder against [such] acts....
|
https://en.wikipedia.org/wiki/Inline_linking
|
An internal link is a type of hyperlink on a web page to another page or resource, such as an image or document, on the same website or domain.[1][2] It is the opposite of an external link, a link that directs a user to content that is outside its domain.
Hyperlinks are considered either "external" or "internal" depending on their target or destination. Generally, a link to a page outside the same domain or website is considered external, whereas one that points at another section of the same web page or to another page of the same website or domain is considered internal. Both internal and external links allow users of the website to navigate to another web page or resource.[3][4] These definitions become clouded, however, when the same organization operates multiple domains functioning as a single web experience, e.g. when a secure commerce website is used for purchasing things displayed on a non-secure website. In these cases, links that are "external" by the above definition can conceivably be classified as "internal" for some purposes. Ultimately, an internal link points to a web page or resource in the same root directory.
Similarly, seemingly "internal" links are in fact "external" for many purposes, for example in the case of linking among subdomains of a main domain which are not operated by the same person(s). For example, a blogging platform, such as WordPress, Blogger or Tumblr, hosts thousands of different blogs on subdomains, which are entirely unrelated and whose authors are generally unknown to each other, with no control[5] over the links outside their own subdomain or blog. In these contexts one might view a link as "internal" only if it linked within the same blog, not to other blogs within the same domain.
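A simple sketch of the basic host-based classification, using Python's standard urllib.parse (the subdomain and multi-domain edge cases discussed above are deliberately ignored):

```python
from urllib.parse import urljoin, urlparse

def is_internal(link, page_url):
    """Treat a link as internal when it resolves to the same host as the current page."""
    target = urljoin(page_url, link)      # resolve relative links against the page URL
    return urlparse(target).hostname == urlparse(page_url).hostname

page = "https://example.com/blog/post.html"
print(is_internal("/about", page))                   # True: same domain
print(is_internal("https://other.example/", page))   # False: external link
```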
Internal links are used in websites in order to navigate to multiple pages under a domain, making them a requirement if a website is to have more than one page (unless that page is meant to be inaccessible to the average visitor). Internal links are also commonly used by web crawlers (e.g. for indexing a website for a search engine).[6]
|
https://en.wikipedia.org/wiki/Internal_link
|
Overlinking in a webpage or other hyperlinked text is having too many hyperlinks (links).[1][2]
It is characterized by:
The opposites of overlinking are "null linking" and "underlinking", phenomena in which hyperlinks are reduced to such a degree as to remove all pointers to a likely-needed context of an unusual term in the text area where the term occurs.[2] This results in reader frustration. Underlinking occurs whenever a reader encounters an odd term in an article (perhaps not even for the first time) and wants to briefly browse more deeply at that point, but finds they cannot; instead, they must conduct an extensive search far up near the beginning of the article to locate the only instance of the word or term being linked, or perhaps even find that it has not been linked at all.
|
https://en.wikipedia.org/wiki/Overlinking
|
Competitor backlinking is a search engine optimization strategy that involves analyzing the backlinks of competing websites within a vertical search. The outcome of this activity is designed to increase organic search engine rankings and to gain an understanding of the link building strategies used by business competitors.[1]
Competitor backlinking can provide insights into the types of links that may contribute to higher search engine rankings. The number and quality of backlinks may be influential, and they are part of a broader array of factors that search engines use to rank websites. These factors include content quality, keyword relevance, user engagement, and overall site architecture. Obtaining a similar number or quality of backlinks as a competitor does not guarantee similar search engine rankings.[2]
Another possible outcome of competitive backlinking is the identification of the type of websites that are inclined to link to a specific type of website.
|
https://en.wikipedia.org/wiki/Competitor_backlinking
|
Search engines covered here include web search engines, selection-based search engines, metasearch engines, desktop search tools, and web portals and vertical market websites that have a search facility for online databases.
The list groups search engines by type: general web search engines (with a note marking those whose main website is a portal), search engines for academic materials only, search engines dedicated to a specific kind of information, search engines that work across the BitTorrent protocol, and desktop search tools (some of which are no longer in active development).
|
https://en.wikipedia.org/wiki/List_of_search_engines
|
Search engine marketing (SEM) is a form of Internet marketing that involves the promotion of websites by increasing their visibility in search engine results pages (SERPs), primarily through paid advertising.[1] SEM may incorporate search engine optimization (SEO), which adjusts or rewrites website content and site architecture to achieve a higher ranking in search engine results pages to enhance pay-per-click (PPC) listings and increase the call to action (CTA) on the website.[2]
In 2007, U.S. advertisers spent US$24.6 billion on search engine marketing.[3] In Q2 2015, Google (73.7%) and the Yahoo/Bing (26.3%) partnership accounted for almost 100% of U.S. search engine spend.[4] As of 2006, SEM was growing much faster than traditional advertising and even other channels of online marketing.[5] Managing search campaigns is either done directly with the SEM vendor or through an SEM tool provider. It may also be self-serve or through an advertising agency.
Search engine marketing is also a method of business analytics, which is mainly aimed at providing useful information for organizations to find business opportunities and generate profits. SEM can help organizations optimize their marketing, reach a larger audience, and attract more customers.[6]
As of October 2016, Google leads the global search engine market with a market share of 89.3%. Bing comes second with a market share of 4.36%, Yahoo comes third with a market share of 3.3%, and Chinese search engine Baidu is fourth globally with a share of about 0.68%.[7]
In August 2024, Google's search engine was declared by a court to be a monopoly over the market.[8] During the trial, the US Department of Justice argued that "Google hasn't just illegally cornered the market in search — it's squeezed online publishers and advertisers with a 'trifecta' of monopolies that have harmed virtually the entire World Wide Web".[9]
As the number of sites on the Web increased in the mid-to-late 1990s, search engines started appearing to help people find information quickly. Search engines developed business models to finance their services, such as pay-per-click programs offered by Open Text[10] in 1996 and then Goto.com[11] in 1998. Goto.com later changed its name[12] to Overture in 2001, was purchased by Yahoo! in 2003, and now offers paid search opportunities for advertisers through Yahoo! Search Marketing. Google also began to offer advertisements on search results pages in 2000 through the Google AdWords program. By 2007, pay-per-click programs proved to be primary moneymakers[13] for search engines. In a market dominated by Google, in 2009 Yahoo! and Microsoft announced the intention to forge an alliance. The Yahoo! & Microsoft Search Alliance eventually received approval from regulators in the US and Europe in February 2010.[14]
Search engine optimization consultants expanded their offerings to help businesses learn about and use the advertising opportunities offered by search engines, and new agencies focusing primarily upon marketing and advertising through search engines emerged. The term "search engine marketing" was popularized by Danny Sullivan in 2001[15] to cover the spectrum of activities involved in performing SEO, managing paid listings at the search engines, submitting sites to directories, and developing online marketing strategies for businesses, organizations, and individuals.
Search engine marketing uses at least five methods and metrics to optimize websites.[16]
Search engine marketing is a way to create and edit a website so that search engines rank it higher than other pages. It should also be focused on keyword marketing or pay-per-click (PPC) advertising. The technology enables advertisers to bid on specific keywords or phrases and ensures that ads appear with the results of search engines.
With the development of this system, the price is growing under a high level of competition. Many advertisers prefer to expand their activities, including advertising on more search engines and adding more keywords. The more advertisers are willing to pay for clicks, the higher the ranking for advertising, which leads to higher traffic.[18] PPC comes at a cost. For example, the top position for a given keyword might cost $5 per click and the third position $4.50; the third advertiser thus pays about 10% less than the top advertiser but receives roughly 50% less traffic.[18]
Investors must consider their return on investment (ROI) when engaging in PPC campaigns. Buying traffic via PPC delivers a positive ROI when the total cost of the clicks needed to produce a single conversion remains below the profit margin on that conversion; that way, the amount of money spent to generate revenue is below the actual revenue generated.
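A small worked sketch of that break-even arithmetic (the cost, conversion rate, and margin figures are made up for illustration):

```python
# Hypothetical campaign figures
cost_per_click = 4.50          # dollars paid per click
conversion_rate = 0.02         # 2% of clicks lead to a sale
profit_per_conversion = 300.0  # profit margin on one sale, in dollars

cost_per_conversion = cost_per_click / conversion_rate            # $225 of clicks per sale
roi = (profit_per_conversion - cost_per_conversion) / cost_per_conversion

print(f"cost per conversion: ${cost_per_conversion:.2f}")   # $225.00
print(f"ROI: {roi:.0%}")   # about 33%, positive because $225 < $300 profit per conversion
```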
There are many reasons explaining why advertisers choose the SEM strategy. First, creating an SEM account is easy and can build traffic quickly based on the degree of competition. A shopper who uses a search engine to find information tends to trust and focus on the links shown in the results pages. However, a large number of online sellers do not buy search engine optimization to obtain higher-ranking lists of search results but prefer paid links. A growing number of online publishers are allowing search engines such as Google to crawl content on their pages and place relevant ads on it.[19] From an online seller's point of view, this is an extension of the payment settlement and an additional incentive to invest in paid advertising projects. Therefore, it is virtually impossible for advertisers with limited budgets to maintain the highest rankings in the increasingly competitive search market.
Google's search engine marketing is one of the Western world's marketing leaders, and search engine marketing is Google's biggest source of profit.[20] Google's search advertising network is clearly ahead of the Yahoo and Bing networks. The display of organic (unpaid) search results is free, while advertisers are willing to pay for each click of an ad in the sponsored search results.
Paid inclusion involves a search engine company charging fees for the inclusion of a website in their results pages. Also known as sponsored listings, paid inclusion products are provided by most search engine companies either in the main results area or as a separately identified advertising area.
The fee structure is both a filter against superfluous submissions and a revenue generator. Typically, the fee covers an annual subscription for one webpage, which will automatically be catalogued on a regular basis. However, some companies are experimenting with non-subscription-based fee structures where purchased listings are displayed permanently. A per-click fee may also apply. Each search engine is different. Some sites allow only paid inclusion, although these have had little success. More frequently, many search engines, like Yahoo!,[21] mix paid inclusion (per-page and per-click fee) with results from web crawling. Others, like Google (and as of 2006, Ask.com[22][23]), do not let webmasters pay to be in their search engine listing (advertisements are shown separately and labeled as such).
Some detractors of paid inclusion allege that it causes searches to return results based more on the economic standing of the interests of a web site, and less on the relevancy of that site to end-users.
Often the line between pay-per-click advertising and paid inclusion is debatable. Some have lobbied for any paid listings to be labeled as an advertisement, while defenders insist they are not actually ads since the webmasters do not control the content of the listing, its ranking, or even whether it is shown to any users. Another advantage of paid inclusion is that it allows site owners to specify particular schedules for crawling pages. In the general case, one has no control as to when one's page will be crawled or added to a search engine index. Paid inclusion proves to be particularly useful for cases where pages are dynamically generated and frequently modified.
Paid inclusion is a search engine marketing method in itself, but also a tool of search engine optimization, since experts and firms can test out different approaches to improving ranking and see the results often within a couple of days, instead of waiting weeks or months. Knowledge gained this way can be used to optimize other web pages without paying the search engine company.
SEM is the wider discipline that incorporates SEO. SEM includes both paid search results (using tools like Google AdWords or Bing Ads, formerly known as Microsoft adCenter) and organic search results (SEO). SEM uses paid advertising with AdWords or Bing Ads and pay per click (particularly beneficial for local providers as it enables potential consumers to contact a company directly with one click), article submissions, advertising, and making sure SEO has been done. A keyword analysis is performed for both SEO and SEM, but not necessarily at the same time. SEM and SEO both need to be monitored and updated frequently to reflect evolving best practices.
In some contexts, the term SEM is used exclusively to mean pay-per-click advertising,[2] particularly in the commercial advertising and marketing communities which have a vested interest in this narrow definition. Such usage excludes the wider search marketing community that is engaged in other forms of SEM such as search engine optimization and search retargeting.
Creating the link between SEO and PPC represents an integral part of the SEM concept. Sometimes, especially when separate teams work on SEO and PPC and the efforts are not synced, the positive results of aligning their strategies can be lost. The aim of both SEO and PPC is to maximize visibility in search, and thus their actions to achieve it should be centrally coordinated. Both teams can benefit from setting shared goals and combined metrics, evaluating data together to determine future strategy, or discussing which of the tools works better to get the traffic for selected keywords in the national and local search results. Thanks to this, search visibility can be increased along with optimizing both conversions and costs.[24]
Another part of SEM is social media marketing (SMM). SMM is a type of marketing that involves using social media to persuade consumers that one company's products and/or services are valuable.[25] Some of the latest theoretical advances include search engine marketing management (SEMM). SEMM relates to activities including SEO but focuses on return on investment (ROI) management instead of relevant traffic building (as is the case with mainstream SEO). SEMM also integrates organic SEO, trying to achieve top ranking without using paid means, and pay-per-click SEO. For example, some of the attention is placed on the web page layout design and how content and information is displayed to the website visitor. SEO and SEM are two pillars of one marketing job, and they both run side by side to produce much better results than focusing on only one pillar.
Paid search advertising has not been without controversy, and the issue of how search engines present advertising on their search result pages has been the target of a series of studies and reports[26][27][28] by Consumer Reports WebWatch. The Federal Trade Commission (FTC) also issued a letter[29] in 2002 about the importance of disclosure of paid advertising on search engines, in response to a complaint from Commercial Alert, a consumer advocacy group with ties to Ralph Nader.
Another ethical controversy associated with search marketing has been the issue of trademark infringement. The debate as to whether third parties should have the right to bid on their competitors' brand names has been underway for years. In 2009 Google changed its policy, which formerly prohibited these tactics, allowing third parties to bid on branded terms as long as their landing page in fact provides information on the trademarked term.[30] Though the policy has been changed, this continues to be a source of heated debate.[31]
On April 24, 2012, many started to see that Google had begun to penalize companies that buy links for the purpose of passing rank. The Google update was called Penguin. Since then, there have been several different Penguin/Panda updates rolled out by Google. SEM has, however, nothing to do with link buying and focuses on organic SEO and PPC management. As of October 20, 2014, Google had released three official revisions of its Penguin update.
In 2013, the Tenth Circuit Court of Appeals held in Lens.com, Inc. v. 1-800 Contacts, Inc. that online contact lens seller Lens.com did not commit trademark infringement when it purchased search advertisements using competitor 1-800 Contacts' federally registered 1800 CONTACTS trademark as a keyword. In August 2016, the Federal Trade Commission filed an administrative complaint against 1-800 Contacts alleging, among other things, that its trademark enforcement practices in the search engine marketing space have unreasonably restrained competition in violation of the FTC Act. 1-800 Contacts has denied all wrongdoing and appeared before an FTC administrative law judge in April 2017.[32]
Google Ads is recognized as a web-based advertising tool, since it adopts keywords that can deliver adverts explicitly to web users looking for information about a certain product or service. It is flexible and provides customizable options like ad extensions, access to non-search sites, and leveraging the display network to help increase brand awareness. The project hinges on cost-per-click (CPC) pricing, where the maximum cost per day for the campaign can be chosen; thus payment for the service only applies if the advert has been clicked. SEM companies have embarked on Google Ads projects as a way to publicize their SEM and SEO services. One of the most successful approaches to the strategy of this project was to focus on making sure that PPC advertising funds were prudently invested. Moreover, SEM companies have described Google Ads as a practical tool for increasing a consumer's investment earnings on Internet advertising. The use of conversion tracking and Google Analytics tools was deemed to be practical for presenting to clients the performance of their campaigns from click to conversion. Google Ads projects have enabled SEM companies to train their clients on the tool and deliver better campaign performance. The assistance of Google Ads campaigns could contribute to the growth of web traffic for a number of its consumers' websites, by as much as 250% in only nine months.[33]
Another way search engine marketing is managed is by contextual advertising. Here marketers place ads on other sites or portals that carry information relevant to their products, so that the ads come into the view of users who are seeking information from those sites. A successful SEM plan is an approach to capture the relationships among information searchers, businesses, and search engines. Search engines were not important to some industries in the past, but over the past years the use of search engines for accessing information has become vital to increase business opportunities.[34] The use of SEM strategic tools for businesses such as tourism can attract potential consumers to view their products, but it can also pose various challenges.[35] These challenges could be the competition that companies face within their industry and other sources of information that could draw the attention of online consumers.[34] To help address these challenges, the main objective for businesses applying SEM is to improve and maintain their ranking as high as possible on SERPs so that they can gain visibility. Therefore, search engines are adjusting and developing algorithms and shifting the criteria by which web pages are ranked, in order to combat search engine misuse and spamming and to supply the most relevant information to searchers.[34] This could enhance the relationship among information searchers, businesses, and search engines by understanding the strategies of marketing to attract business.
|
https://en.wikipedia.org/wiki/Search_engine_marketing
|
Search neutrality is a principle that search engines should have no editorial policies other than that their results be comprehensive, impartial and based solely on relevance.[1] This means that when a user types in a search engine query, the engine should return the most relevant results found in the provider's domain (those sites which the engine has knowledge of), without manipulating the order of the results (except to rank them by relevance), excluding results, or in any other way manipulating the results to a certain bias.
Search neutrality is related to network neutrality in that they both aim to keep any one organization from limiting or altering a user's access to services on the Internet. Search neutrality aims to keep the organic search results (results returned because of their relevance to the search terms, as opposed to results sponsored by advertising) of a search engine free from any manipulation, while network neutrality aims to keep those who provide and govern access to the Internet from limiting the availability of resources to access any given content.
The term "search neutrality" in context of the internet appears as early as March 2009 in an academic paper by the Polish-American mathematicianAndrew Odlyzkotitled, "Network Neutrality, Search Neutrality, and the Never-ending Conflict between Efficiency and Fairness in Markets".[2]In this paper, Odlykzo predicts that if net neutrality were to be accepted as a legal or regulatory principle, then the questions surrounding search neutrality would be the next controversies. Indeed, in December 2009 the New York Times published an opinion letter by Foundem co-founder and lead complainant in an anti-trust complaint against Google, Adam Raff, which likely brought the term to the broader public. According to Raff in his opinion letter, search neutrality ought to be "theprinciplethatsearch enginesshould have no editorial policies other than that their results be comprehensive,impartialand based solely onrelevance".[1]On October 11, 2009, Adam and his wife Shivaun launched SearchNeutrality.org, an initiative dedicated to promoting investigations against Google's search engine practices.[3]There, the Raffs note that they chose to frame their issue with Google as "search neutrality" in order to benefit from the focus and interest on net neutrality.[3]
In contrast to net neutrality, answers to such questions as "what is search neutrality?" or "what are appropriate legislative or regulatory principles to protect search neutrality?" appear to have less consensus. The idea that neutrality means equal treatment, regardless of the content, comes from debates on net neutrality.[4] Neutrality in search is complicated by the fact that search engines, by design and in implementation, are not intended to be neutral or impartial. Rather, search engines and other information retrieval applications are designed to collect and store information (indexing), receive a query from a user, search for and filter relevant information based on that query (searching/filtering), and then present the user with only a subset of those results, which are ranked from most relevant to least relevant (ranking). "Relevance" is a form of bias used to favor some results and rank those favored results. Relevance is defined in the search engine so that a user is satisfied with the results and is therefore subject to the user's preferences. Because relevance is so subjective, putting search neutrality into practice is contentious.
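A deliberately naive sketch of that index-filter-rank pipeline, scoring documents by term overlap with the query (the documents are made up, and real engines use far richer relevance signals):

```python
# Toy document index (made-up content).
documents = {
    "doc1": "climate change and global temperature data",
    "doc2": "temperature records for baking bread",
    "doc3": "global climate policy debate",
}

def rank(query):
    """Filter documents sharing terms with the query and rank them by overlap count."""
    query_terms = set(query.lower().split())
    scores = {
        name: len(query_terms & set(text.lower().split()))
        for name, text in documents.items()
    }
    relevant = {name: score for name, score in scores.items() if score > 0}  # filtering
    return sorted(relevant, key=relevant.get, reverse=True)                  # ranking

print(rank("global climate data"))   # ['doc1', 'doc3']: doc1 matches more query terms
```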
Search neutrality became a concern after search engines, most notably Google, were accused of search bias by other companies.[5] Competitors and companies claim search engines systematically favor some sites (and some kinds of sites) over others in their lists of results, disrupting the objective results users believe they are getting.[6]
The call for search neutrality goes beyond traditional search engines. Sites like Amazon.com and Facebook are also accused of skewing results.[7] Amazon's search results are influenced by companies that pay to rank higher in its search results, while Facebook filters its news feed to conduct social experiments.[7]
In order to find information on the Web, most users make use of search engines, which crawl the web, index it and show a list of results ordered by relevance. The use of search engines to access information through the web has become a key factor for online businesses, which depend on the flow of users visiting their pages.[8] One of these companies is Foundem. Foundem provides a "vertical search" service to compare products available on online markets for the U.K. Many people see these "vertical search" sites as spam.[9] Beginning in 2006 and for three and a half years following, Foundem's traffic and business dropped significantly due to what they assert to be a penalty deliberately applied by Google.[10] It is unclear, however, whether their claim of a penalty was self-imposed via their use of iframe HTML tags to embed the content from other websites. At the time at which Foundem claims the penalties were imposed, it was unclear whether web crawlers crawled beyond the main page of a website using iframe tags without some extra modifications. The former SEO director of OMD UK, Jaamit Durrani, among others, offered this alternative explanation, stating that "Two of the major issues that Foundem had in summer was content in iFrames and content requiring javascript to load – both of which I looked at in August, and they were definitely in place. Both are huge barriers to search visibility in my book. They have been fixed somewhere between then and the lifting of the supposed 'penalty'. I don't think that's a coincidence."[11]
Most of Foundem's accusations claim that Google deliberately applies penalties to other vertical search engines because they represent competition.[12] Foundem is backed by a Microsoft proxy group, the 'Initiative for Competitive Online Marketplace'.[13]
Foundem's website gives a detailed chronology of these events.[14]
Google's large market share (85%) has made it a target for search neutrality litigation via antitrust laws.[15] In February 2010, Google released an article on the Google Public Policy blog expressing its concern for fair competition, when other companies in the UK joined Foundem's cause (eJustice.fr, and Microsoft's Ciao! from Bing), also claiming to have been unfairly penalized by Google.[12]
After two years of looking into claims that Google “manipulated its search algorithms to harm vertical websites and unfairly promote its own competing vertical properties,” the Federal Trade Commission (FTC) voted unanimously to end the antitrust portion of its investigation without filing a formal complaint against Google.[16] The FTC concluded that Google's “practice of favoring its own content in the presentation of search results” did not violate U.S. antitrust laws.[5] The FTC further determined that even though competitors might be negatively affected by Google's changing algorithms, Google did not change its algorithms to hurt competitors but rather to improve its product for the benefit of consumers.[5]
There are a number of arguments for and against search neutrality.
According to the Net Neutrality Institute, as of 2018, Google's "Universal Search" system[21] uses by far the least neutral search engine practices, and following the implementation of Universal Search, websites such as MapQuest experienced a massive decline in web traffic. This decline has been attributed to Google linking to its own services rather than the services offered by external websites.[22][23] Despite these claims, Microsoft's Bing displays Microsoft content in first place more than twice as often as Google shows Google content in first place, indicating that, insofar as there is any 'bias', Google is less biased than its principal competitor.[24]
|
https://en.wikipedia.org/wiki/Search_neutrality
|
User intent, also known asquery intentorsearch intent, is the identification and categorization of what a user online intended or wanted to find when they typed theirsearch termsinto an onlineweb search enginefor the purpose ofsearch engine optimisationorconversion rate optimisation.[1]Examples of user intent arefact-checking, comparison shopping or navigating to other websites.
To increase ranking onsearch engines, marketers need to create content that best satisfies queries entered by users on their smartphones or desktops. Creating content with user intent in mind helps increase the value of the information being showcased.[2]Keyword researchcan help determine user intent. The search terms a user enters into a web search engine to find content, services, or products are the words that should be used on the webpage to optimize for user intent.[3]
Google can show SERP features such as featured snippets, knowledge cards or knowledge panels for queries where the search intent is clear. SEO practitioners take this into account because Google can often satisfy the user intent without the user ever leaving the Google SERP. The better Google gets at figuring out user intent, the fewer users will click through to search results. As of 2019, less than half of Google searches result in clicks.[citation needed]
Though there are various ways of classifying the categories of user intent, overall, they tend to follow the same clusters. Until 2017, there were three broad categories: informational, transactional, and navigational.[4] However, after the rise[5] of mobile search, other categories have appeared or have segmented into more specific categorisation.[6][7] For example, as mobile users may want to find directions or information about a specific physical location, some marketers have proposed categories such as "local intent," as in searches like "XY near me." Additionally, there is commercial search intent, which is when someone searches for a product or service to know more about it or compare other alternatives before finalizing their purchase.[citation needed]
Some notable types with examples below include:
Informational Intent: Donald Trump, Who is Maradona?, How to lose weight?
Navigational Intent: Facebook login, Wikipedia contribution page
Transactional Intent:Latest iPhone,Amazoncoupons, cheap dell laptop, fence installers
Commercial Intent: top headphones, best marketing agency, x protein powder review
Local Search Intent: restaurants near me, nearest gas station
Many search queries also have mixed search intent. For example, a search for "Best iPhone repair shop near me" carries both transactional and local search intent. Mixed search intent often arises with homonyms, and such SERPs tend to be volatile because user signals differ.[citation needed]
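A minimal sketch of how such categories might be applied programmatically is shown below, in Python. The trigger phrases, category names, and the fall-back to informational intent are illustrative assumptions for this example only; real systems infer intent from trained models and behavioral signals rather than keyword lists.

```python
# Minimal rule-based intent classifier sketch. The keyword lists and category
# names are illustrative assumptions, not an established taxonomy.
RULES = {
    "navigational": ["login", "homepage", "official site"],
    "transactional": ["buy", "coupon", "cheap", "price", "installers"],
    "commercial": ["best", "top", "review", "vs"],
    "local": ["near me", "nearest", "open now"],
}

def classify_intent(query: str) -> list[str]:
    """Return every intent category whose trigger phrases appear in the query."""
    q = query.lower()
    matched = [intent for intent, cues in RULES.items()
               if any(cue in q for cue in cues)]
    return matched or ["informational"]  # fall back to informational intent

if __name__ == "__main__":
    for q in ["Best iPhone repair shop near me", "Facebook login", "Who is Maradona?"]:
        print(q, "->", classify_intent(q))
```

Note that the mixed-intent query above matches two categories at once, which is exactly the volatility problem described in the preceding paragraph.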
User intent is often misinterpreted, and assuming that there are just a few user intent types does not give a complete picture of user behavior.
The term also describes what type of activity, business or service users are searching for (not only the user behavior after the search).
For example, when you type 'Spanish games' into a search engine (with your browser settings in English), you get results about methods for learning Spanish, not actual games of Spanish origin. In this example, the user intent is to learn the Spanish language, not to play games. Google and the other search engines reflect this intent and strive to display their SERP results based on the user's interest.
|
https://en.wikipedia.org/wiki/User_intent
|
Website promotionis a process used bywebmasterstoimprove contentand increase exposure of awebsiteto bring more visitors.[1]: 210Many techniques such assearch engine optimizationandsearch engine submissionare used to increase a site's traffic once content is developed.[1]: 314
With the rise in popularity of social media platforms, many webmasters have moved to platforms likeFacebook,Twitter,LinkedInandInstagramforviral marketing. By sharing interesting content, webmasters hope that some of the audience will visit the website.
Examples ofviral contentareinfographicsandmemes.
Webmasters often hireoutsourcedoroffshorefirms to perform website promotion for them, many of whom provide "low-quality, assembly-linelink building".[2]
|
https://en.wikipedia.org/wiki/Website_promotion
|
Asearch engine results page(SERP) is awebpagethat is displayed by asearch enginein response to a query by a user. The main component of a SERP is the listing of results that are returned by the search engine in response to akeywordquery.[1]
The results are of two general types:
The results are normally ranked byrelevanceto the query. Each result displayed on theSERPnormally includes a title, a link that points to the actual page on theWeb, and a shortdescription, known as a snippet, showing where thekeywordshave matched content within the page for organic results. For sponsored results, the advertiser chooses what to display.[citation needed]
A single search query can yield many pages of results. However, in order to avoid overwhelming users, search engines and personal preferences often limit the number of results displayed per page. As a result, subsequent pages may not be as relevant or ranked as highly as the first. Just as in traditional print media and its advertising, this enables competitive pricing for page real estate, but the picture is complicated by the dynamics of consumer expectations and intent: unlike static print media, where the content and the advertising on every page are the same all of the time for all viewers (even if such hard copy is localized to some degree, usually geographically, by state, metro area, city, or neighbourhood), search engine results can vary based on individual factors such as browsing habits.[citation needed]
The organic search results, queries, and advertisements are the three main components of the SERP. However, the SERP of major search engines, like Google, Yahoo!, Bing, Brave Search, and Sogou, may include many different types of enhanced results (organic and sponsored), such as rich snippets, images, maps, definitions, answer boxes, videos or suggested search refinements. A study revealed that 97% of queries in Google returned at least one rich feature.[2] Another study on the evolution of SERP interfaces from 2000 to 2020 shows that SERPs are becoming more diverse in terms of elements, aggregating content from different verticals and including more features that provide direct answers.[3][4]
Also known as 'user search string', this is the word or set of words that are typed by the user in the search bar of the search engine. The search box is located on all major search engines likeGoogle,Yahoo,Bing,Brave Search, andSogou. Users indicate the topic desired based on the keywords they enter into thesearch boxin the search engine.[citation needed]
Organic SERP listings are the natural listings generated by search engines; they list webpages matching the query. The pages are sorted by a relevance score computed from a series of metrics, generally based on factors such as the quality and relevance of the content, the expertise, authoritativeness and trustworthiness of the website and author on a given topic, good user experience, and backlinks.[5]
Each of the matching web pages is presented as a visual element composed of attribution, a title link, and a snippet of the matching webpage showing how the query matched on the page.[6]
Search results pages typically contain numerous organic results, and users tend to view only the first results on the first page.[7]
Several major search engines offer "sponsored results" to companies, who may pay the search engine to have their products or services appear above other search hits. This is often done in the form of bidding between companies, where the highest bidder gets the top result. A 2018 report from the European Commission showed that consumers generally avoid these top results, as there is an expectation that the topmost results on a search engine page will be sponsored, and thus less relevant.[8]
Rich snippets are displayed byGooglein the search results pages when a website contains content in structured data markup. Structured data markup helps theGoogle algorithmto index and understand the content better. Google supports rich snippets for various data types, including products, recipes, reviews, events, news articles, and job postings.[9]
A featured snippet is a summary of an answer to a user's query. This snippet appears at the top of the list of search hits. Google supports the following types of featured snippets: Paragraph Featured Snippet, Numbered List Featured Snippet, Bulleted List Featured Snippet, Table Featured Snippet,YouTubeFeatured Snippet, Carousel Snippet, Double Featured Snippet, and Two-for-One Featured Snippet.[10]
Search engines like Google, Bing and Sogou have started to expand their data into encyclopedias and other rich sources of information.
Google, for example, calls this sort of information the "Google Knowledge Graph"; if a search query matches, it will display an additional sub-window on the right-hand side with information from its sources.[11][12] Such panels may offer the user a zero-click result to their query.
Google Discover, formerly known as Google Feed, is a way of delivering topics and news to users on the homepage below the search box.[13]
Major search engines like Google, Yahoo!, Bing and Sogou primarily use content contained within the page, and fall back to the metadata tags of a web page, to generate the content that makes up a search snippet.[14] Generally, the HTML title tag will be used as the title of the snippet, while the most relevant or useful contents of the web page (description tag or page copy) will be used for the description.
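The following Python sketch illustrates this fallback logic: take the HTML title tag for the snippet title and the meta description (or, failing that, the opening page copy) for the description. It uses the third-party beautifulsoup4 package, and the truncation length is an arbitrary assumption; actual search engines use far more sophisticated, query-dependent snippet generation.

```python
# Rough sketch of assembling a snippet from a page's HTML, using the title tag
# and the meta description with page copy as a fallback. Requires beautifulsoup4.
from bs4 import BeautifulSoup

def build_snippet(html: str, max_len: int = 160) -> dict:
    soup = BeautifulSoup(html, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else ""
    meta = soup.find("meta", attrs={"name": "description"})
    description = meta["content"] if meta and meta.get("content") else ""
    if not description:
        # Fall back to the first chunk of visible page copy.
        description = soup.get_text(" ", strip=True)
    return {"title": title, "description": description[:max_len]}

print(build_snippet("<html><head><title>Example</title>"
                    "<meta name='description' content='A short demo page.'>"
                    "</head><body>Body text.</body></html>"))
```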
Search engine result pages are protected from automated access by a range of defensive mechanisms and terms of service.[15] These result pages are the primary data source for search engine optimization, the placement of websites for competitive keywords, which has become an important field of business and interest.
The process of harvesting search engine result pages data is usually called "search engine scraping" or in a general form "web crawling" and generates the data SEO-related companies need to evaluate website competitive organic and sponsored rankings. This data can be used to track the position of websites and show the effectiveness of SEO as well as keywords that may need more SEO investment to rank higher.
There is no evidence of Google making any public announcement that the practice of scraping is in breach of its terms of service. Any such 'warnings' could not, by their nature, apply universally to its users, or to users in countries where Google does not operate, nor would they apply to a private individual in the same way as they do to one of Google's Ad Partners. Furthermore, crawling itself remains one of the core elements of Google's search functionality and tools. Purported 'warnings' against scraping previously attributed to Google were in reality posts on third-party platforms such as Twitter by individuals not necessarily associated with or employed by Google, and in any event made in a personal capacity rather than in a Google-endorsed formal capacity.[16]
|
https://en.wikipedia.org/wiki/Search_engine_results_page
|
Search engine scrapingis the process of harvestingURLs, descriptions, or other information fromsearch engines. This is a specific form ofscreen scrapingorweb scrapingdedicated to search engines only.
Most commonly largersearch engine optimization(SEO) providers depend on regularly scraping keywords from search engines to monitor the competitive position of their customers' websites for relevant keywords or theirindexingstatus.
The process of entering a website and extracting data in an automated fashion is also often called "crawling". Search engines get almost all their data from automated crawling bots.
Google is by far the largest search engine, with the most users and the most advertising revenue, which makes it the most important search engine to scrape for SEO-related companies.[1]
Although Google does not take legal action against scraping, it uses a range of defensive methods that make scraping its results a challenging task, even when the scraping tool realistically spoofs a normal web browser:
When a search engine's defenses suspect that an access might be automated, the search engine can react in several ways.
The first layer of defense is a captcha page[4]where the user is prompted to verify they are a real person and not a bot or tool. Solving thecaptchawill create acookiethat permits access to the search engine again for a while. After about one day, the captcha page is displayed again.
The second layer of defense is a similar error page, but without a captcha; in this case the user is completely blocked from using the search engine until the temporary block is lifted or the user changes their IP address.
The third layer of defense is a long-term block of the entire network segment. Google has blocked large network blocks for months. This sort of block is likely triggered by an administrator and only happens if a scraping tool is sending a very high number of requests.
All these forms of detection may also affect a normal user, especially users sharing the same IP address or network class (IPv4 as well as IPv6 ranges).
To scrape a search engine successfully, the two major factors are time and amount.
The more keywords a user needs to scrape and the smaller the time for the job, the more difficult scraping will be and the more developed a scraping script or tool needs to be.
Scraping scripts need to overcome a few technical challenges:[citation needed]
When developing a scraper for a search engine, almost any programming language can be used, although, depending on performance requirements, some languages will be preferable.
PHP is a commonly used language for writing scraping scripts for websites or backend services, since it has powerful capabilities built in (DOM parsers, libcURL); however, its memory usage is typically around ten times that of similar C/C++ code. Ruby on Rails as well as Python are also frequently used to automate scraping jobs.
Additionally,bash scriptingcan be used together with cURL as a command line tool to scrape a search engine.
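As a rough illustration of the time-and-amount trade-off discussed above, the Python sketch below spaces out requests with random delays and rotates the User-Agent header. The URL, parameters, and delay values are placeholders, not working settings for any particular search engine, and any real use must respect the target site's terms of service.

```python
# Minimal throttled-request sketch: queries are spread out over time with
# random delays and a rotating User-Agent header. The base URL, parameters,
# and delay values are placeholder assumptions. Requires the requests package.
import random
import time
import requests

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
]

def fetch_results(keywords, base_url="https://example.com/search",
                  min_delay=20, max_delay=60):
    pages = {}
    for kw in keywords:
        headers = {"User-Agent": random.choice(USER_AGENTS)}
        resp = requests.get(base_url, params={"q": kw}, headers=headers, timeout=10)
        pages[kw] = resp.text if resp.ok else None
        time.sleep(random.uniform(min_delay, max_delay))  # spread requests over time
    return pages
```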
When scraping websites and services, legality is often a major concern for companies. For web scraping, it depends greatly on the country the scraping user or company is based in, as well as on which data or website is being scraped, and court rulings differ around the world.[5][6]
However, when it comes to scraping search engines the situation is different: search engine results usually do not constitute intellectual property of their own, as they merely repeat or summarize information scraped from other websites.
The largest publicly known incident of a search engine being scraped happened in 2011, when Microsoft was caught scraping unknown keywords from Google for its own, then relatively new, Bing service,[7] but even this incident did not result in a court case.
|
https://en.wikipedia.org/wiki/Search_engine_scraping
|
Collocation extractionis the task of using a computer to extractcollocationsautomatically from acorpus.
The traditional method of performing collocation extraction is to find a formula, based on the statistical quantities of the words involved, that calculates a score associated with each word pair. Proposed formulas include mutual information, t-test, z test, chi-squared test and likelihood ratio.[1]
Within the area of corpus linguistics, collocation is defined as a sequence of words or terms which co-occur more often than would be expected by chance. 'Crystal clear', 'middle management', 'nuclear family', and 'cosmetic surgery' are examples of collocated pairs of words. Some words are often found together because they make up a compound noun, for example 'riding boots', 'motor cyclist' or 'collocation extraction' itself.
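As an illustration of such association scores, the following Python sketch computes unsmoothed pointwise mutual information for adjacent word pairs in a toy corpus. The corpus and the bigram-only window are simplifying assumptions; practical extraction pipelines use large corpora, frequency cut-offs, and often several of the measures listed above.

```python
# Minimal sketch of scoring candidate collocations with pointwise mutual
# information. The toy corpus and the unsmoothed formula are illustrative.
import math
from collections import Counter

def pmi_scores(tokens):
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scores = {}
    for (w1, w2), count in bigrams.items():
        p_pair = count / (n - 1)
        p_w1, p_w2 = unigrams[w1] / n, unigrams[w2] / n
        scores[(w1, w2)] = math.log2(p_pair / (p_w1 * p_w2))
    return scores

corpus = "crystal clear water and crystal clear sky".split()
for pair, score in sorted(pmi_scores(corpus).items(), key=lambda x: -x[1]):
    print(pair, round(score, 2))
```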
|
https://en.wikipedia.org/wiki/Collocation_extraction
|
Process miningis a family of techniques for analyzing event data to understand and improve operational processes. Part of the fields ofdata scienceandprocess management, process mining is generally built onlogsthat contain case id, a unique identifier for a particular process instance; an activity, a description of the event that is occurring; a timestamp; and sometimes other information such as resources, costs, and so on.[1][2]
There are three main classes of process mining techniques:process discovery,conformance checking, andprocess enhancement. In the past, terms likeworkflow miningandautomated business process discovery(ABPD)[3]were used.
Process mining techniques are often used when no formal description of the process can be obtained by other approaches, or when the quality of existing documentation is questionable.[4]For example, application of process mining methodology to the audit trails of aworkflow management system, the transaction logs of anenterprise resource planningsystem, or theelectronic patient recordsin a hospital can result in models describing processes of organizations.[5]Event log analysis can also be used to compare event logs withpriormodel(s) to understand whether the observations conform to a prescriptive or descriptive model. It is required that the event logs data be linked to a case ID, activities, and timestamps.[6][7]
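A first step shared by many process discovery techniques is to group events by case ID, order them by timestamp, and count which activity directly follows which. The Python sketch below shows this step on a made-up event log; the field names and the log itself are assumptions for illustration, not any particular tool's format.

```python
# Hedged sketch of an early process-discovery step: group an event log by
# case id, order events by timestamp, and count directly-follows relations.
from collections import defaultdict

event_log = [
    {"case": "1", "activity": "register", "timestamp": 1},
    {"case": "1", "activity": "check",    "timestamp": 2},
    {"case": "1", "activity": "approve",  "timestamp": 3},
    {"case": "2", "activity": "register", "timestamp": 1},
    {"case": "2", "activity": "check",    "timestamp": 2},
    {"case": "2", "activity": "reject",   "timestamp": 3},
]

def directly_follows(log):
    traces = defaultdict(list)
    for event in sorted(log, key=lambda e: (e["case"], e["timestamp"])):
        traces[event["case"]].append(event["activity"])
    counts = defaultdict(int)
    for trace in traces.values():
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
    return dict(counts)

print(directly_follows(event_log))
# {('register', 'check'): 2, ('check', 'approve'): 1, ('check', 'reject'): 1}
```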
Contemporary management trends such as BAM (business activity monitoring), BOM (business operations management), and BPI (business process intelligence) illustrate the interest in supporting diagnosis functionality in the context ofbusiness process managementtechnology (e.g.,workflow management systemsand otherprocess-awareinformation systems). Process mining is different from mainstreammachine learning,data mining, andartificial intelligencetechniques. For example, process discovery techniques in the field of process mining try to discover end-to-end process models that are able to describe sequential, choice relation, concurrent and loop behavior. Conformance checking techniques are closer tooptimizationthan to traditional learning approaches. However, process mining can be used to generatemachine learning,data mining, andartificial intelligenceproblems. After discovering a process model and aligning the event log, it is possible to create basic supervised and unsupervised learning problems. For example, to predict the remaining processing time of a running case or to identify the root causes of compliance problems.
The IEEE Task Force on Process Mining was established in October 2009 as part of the IEEE Computational Intelligence Society.[8] This vendor-neutral organization aims to promote the research, development, education and understanding of process mining; make end-users, developers, consultants, and researchers aware of the state of the art in process mining; promote the use of process mining techniques and tools and stimulate new applications; play a role in standardization efforts for logging event data (e.g., XES); organize tutorials, special sessions, workshops, competitions, and panels; and develop material (papers, books, online courses, movies, etc.) to inform and guide people new to the field. The IEEE Task Force on Process Mining established the International Process Mining Conference (ICPM) series,[9] led the development of the IEEE XES standard for storing and exchanging event data,[10][11] and wrote the Process Mining Manifesto,[12] which was translated into 16 languages.
The term "process mining" was coined in a research proposal written by the Dutch computer scientistWil van der Aalst.[13]By 1999, this new field of research emerged under the umbrella of techniques related to data science and process science atEindhoven University. In the early days, process mining techniques were often studied with techniques used forworkflow management. In 2000, the first practical algorithm for process discovery, "Alpha miner"was developed. The next year, research papers introduced "Heuristic miner" a much similar algorithm based on heuristics. More powerful algorithms such asinductive minerwere developed for process discovery. 2004 saw the development of "Token-based replay" forconformance checking. Process mining branched out "performance analysis", "decision mining" and "organizational mining" in 2005 and 2006. In 2007, the first commercial process mining company "Futura Pi" was established. In 2009, theIEEE task force on PMgoverning body was formed to oversee the norms and standards related to process mining. Further techniques for conformance checking led in 2010 toalignment-based conformance checking". In 2011, the first process mining book was published. About 30 commercially available process mining tools were available in 2018[citation needed].
There are three categories of process mining techniques.
Process mining software helps organizations analyze and visualize their business processes based on data extracted from various sources, such as transaction logs or event data. This software can identify patterns, bottlenecks, and inefficiencies within a process, enabling organizations to improve their operational efficiency, reduce costs, and enhance their customer experience. In 2025,Gartnerlisted 40 tools in its process mining platform review category.[21]
|
https://en.wikipedia.org/wiki/Process_mining
|
In social sciences,sequence analysis (SA)is concerned with the analysis of sets of categorical sequences that typically describelongitudinal data. Analyzed sequences are encoded representations of, for example, individual life trajectories such as family formation, school to work transitions, working careers, but they may also describe daily or weekly time use or represent the evolution of observed or self-reported health, of political behaviors, or the development stages of organizations. Such sequences are chronologically ordered unlike words or DNA sequences for example.
SA is a longitudinal analysis approach that is holistic in the sense that it considers each sequence as a whole. SA is essentially exploratory. Broadly, SA provides a comprehensible overall picture of sets of sequences with the objective of characterizing the structure of the set of sequences, finding the salient characteristics of groups, identifying typical paths, comparing groups, and more generally studying how the sequences are related to covariates such as sex, birth cohort, or social origin.
Introduced in the social sciences in the 1980s by Andrew Abbott,[1][2] SA gained much popularity after the release of dedicated software such as the SQ[3] and SADI[4] add-ons for Stata and the TraMineR R package[5] with its companions TraMineRextras[6] and WeightedCluster.[7]
Despite some connections, the aims and methods of SA in social sciences strongly differ from those ofsequence analysis in bioinformatics.
Sequence analysis methods were first imported into the social sciences from the information and biological sciences (seeSequence alignment) by theUniversity of ChicagosociologistAndrew Abbottin the 1980s, and they have since developed in ways that are unique to the social sciences.[8]Scholars inpsychology,economics,anthropology,demography,communication,political science,learning sciences, organizational studies, and especiallysociologyhave been using sequence methods ever since.
In sociology, sequence techniques are most commonly employed in studies of patterns of life-course development, cycles, and life histories.[9][10][11][12]There has been a great deal of work on the sequential development of careers,[13][14][15]and there is increasing interest in how career trajectories intertwine with life-course sequences.[16][17]Many scholars have used sequence techniques to model how work and family activities are linked in household divisions of labor and the problem of schedule synchronization within families.[18][19][20]The study of interaction patterns is increasingly centered on sequential concepts, such as turn-taking, the predominance of reciprocal utterances, and the strategic solicitation of preferred types of responses (seeConversation Analysis). Social network analysts (seeSocial network analysis) have begun to turn to sequence methods and concepts to understand how social contacts and activities are enacted in real time,[21][22]and to model and depict how whole networks evolve.[23]Social network epidemiologists have begun to examine social contact sequencing to better understand the spread of disease.[24]Psychologists have used those methods to study how the order of information affects learning, and to identify structure in interactions between individuals (seeSequence learning).
Many of the methodological developments in sequence analysis came on the heels of a special section devoted to the topic in a 2000 issue[10]ofSociological Methods & Research, which hosted a debate over the use of theoptimal matching(OM) edit distance for comparing sequences. In particular, sociologists objected to the descriptive and data-reducing orientation ofoptimal matching, as well as to a lack of fit between bioinformatic sequence methods and uniquely social phenomena.[25][26]The debate has given rise to several methodological innovations (seePairwise dissimilaritiesbelow) that address limitations of early sequence comparison methods developed in the 20th century. In 2006,David Starkand Balazs Vedres[23]proposed the term "social sequence analysis" to distinguish the approach from bioinformaticsequence analysis. However, if we except the nice book byBenjamin Cornwell,[27]the term was seldom used, probably because the context prevents any confusion in the SA literature.Sociological Methods & Researchorganized a special issue on sequence analysis in 2010, leading to what Aisenbrey and Fasang[28]referred to as the "second wave of sequence analysis", which mainly extended optimal matching and introduced other techniques to compare sequences. Alongside sequence comparison, recent advances in SA concerned among others the visualization of sets of sequence data,[5][29]the measure and analysis of the discrepancy of sequences,[30]the identification ofrepresentative sequences,[31]and the development of summary indicators of individual sequences.[32]Raab and Struffolino[33]have conceived more recent advances as the third wave of sequence analysis. This wave is largely characterized by the effort of bringing together the stochastic and the algorithmic modeling culture[34]by jointly applying SA with more established methods such asanalysis of variance,event history analysis,Markovian modeling,social networkanalysis, orcausal analysisandstatistical modelingin general.[35][36][37][27][30][38][39]
The analysis of sequence patterns has foundations in sociological theories that emerged in the middle of the 20th century.[27]Structural theorists argued that society is a system that is characterized by regular patterns. Even seemingly trivial social phenomena are ordered in highly predictable ways.[40]This idea serves as an implicit motivation behind social sequence analysts' use of optimal matching, clustering, and related methods to identify common "classes" of sequences at all levels of social organization, a form of pattern search. This focus on regularized patterns of social action has become an increasingly influential framework for understanding microsocial interaction and contact sequences, or "microsequences."[41]This is closely related toAnthony Giddens's theory ofstructuration, which holds that social actors' behaviors are predominantly structured by routines, and which in turn provides predictability and a sense of stability in an otherwise chaotic and rapidly moving social world.[42]This idea is also echoed inPierre Bourdieu'sconcept ofhabitus, which emphasizes the emergence and influence of stable worldviews in guiding everyday action and thus produce predictable, orderly sequences of behavior.[43]The resulting influence of routine as a structuring influence on social phenomena was first illustrated empirically byPitirim Sorokin, who led a 1939 study that found that daily life is so routinized that a given person is able to predict with about 75% accuracy how much time they will spend doing certain things the following day.[44]Talcott Parsons's argument[40]that all social actors are mutually oriented to their larger social systems (for example, their family and larger community) throughsocial rolesalso underlies social sequence analysts' interest in the linkages that exist between different social actors' schedules and ordered experiences, which has given rise to a considerable body of work onsynchronizationbetween social actors and their social contacts and larger communities.[19][18][45]All of these theoretical orientations together warrant critiques of thegeneral linear modelof social reality, which as applied in most work implies that society is either static or that it is highly stochastic in a manner that conforms toMarkovprocesses[1][46]This concern inspired the initial framing of social sequence analysis as an antidote to general linear models. It has also motivated recent attempts to model sequences of activities or events in terms as elements that link social actors in non-linear network structures[47][48]This work, in turn, is rooted inGeorg Simmel'stheory that experiencing similar activities, experiences, and statuses serves as a link between social actors.[49][50]
In demography and historical demography, the rapid appropriation of the life course perspective and methods from the 1980s was part of a substantive paradigmatic change that implied a stronger embedding of demographic processes into social science dynamics. After a first phase focused on the occurrence and timing of demographic events, studied separately from each other with a hypothetico-deductive approach, from the early 2000s[34][51] the need to consider the structure of life courses and to do justice to their complexity led to a growing use of sequence analysis in pursuit of a holistic approach. At an inter-individual level, pairwise dissimilarities and clustering appeared as the appropriate tools for revealing the heterogeneity in human development. For example, the meta-narrations contrasting individualized Western societies with collectivist societies in the South (especially in Asia) were challenged by comparative studies revealing the diversity of pathways to legitimate reproduction.[52] At an intra-individual level, sequence analysis integrates the basic life course principle that individuals interpret and make decisions about their lives according to their past experiences and their perception of contingencies.[34] Interest in this perspective was also promoted by the changes in individuals' life courses for cohorts born between the beginning and the end of the 20th century. These changes have been described as de-standardization, de-synchronization, and de-institutionalization.[53] Among the drivers of these dynamics, the transition to adulthood is key:[54] for more recent birth cohorts this crucial phase of the individual life course involved a larger number of events and longer state spells. For example, many postponed leaving the parental home and the transition to parenthood, in some contexts cohabitation replaced marriage as a long-lasting living arrangement, and the birth of the first child occurs more frequently while parents cohabit rather than within wedlock.[55] Such complexity needed to be measured so that quantitative indicators could be compared across birth cohorts[11][56] (see[57] for an extension of this questioning to populations of low- and middle-income countries). Demography's old ambition to develop a 'family demography' has found in sequence analysis a powerful tool for addressing research questions at the crossroads with other disciplines: for example, multichannel techniques[58] represent valuable opportunities to deal with the issue of compatibility between working and family lives.[59][37] Similarly, more recent combinations of sequence analysis and event history analysis have been developed (see[36] for a review) and can be applied, for instance, to understanding the link between demographic transitions and health.
The analysis of temporal processes in political science[60] concerns how institutions, that is, systems and organizations (regimes, governments, parties, courts, etc.) that crystallize political interactions, formalize legal constraints and impose a degree of stability or inertia. Special importance is given, first, to the role of contexts, which confer meaning on trends and events, while shared contexts offer shared meanings; second, to changes over time in power relationships and, subsequently, asymmetries, hierarchies, contention, or conflict; and, finally, to historical events that are able to shape trajectories, such as elections, accidents, inaugural speeches, treaties, revolutions, or ceasefires. Empirically, the unit of analysis of political sequences can be individuals, organizations, movements, or institutional processes. Depending on the unit of analysis, the sample may be limited to a few cases (e.g., regions in a country when considering the turnover of local political parties over time) or include a few hundred (e.g., individuals' voting patterns). Three broad kinds of political sequences may be distinguished. The first and most common is careers, that is, formal, mostly hierarchical positions along which individuals progress in institutional environments, such as parliaments, cabinets, administrations, parties, unions or business organizations.[61][62][63] We may call trajectories those political sequences that develop in more informal and fluid contexts, such as activists evolving across various causes and social movements,[64][65] or voters navigating a political and ideological landscape across successive polls.[66] Finally, processes relate to non-individual entities, such as public policies developing through successive policy stages across distinct arenas;[67] sequences of symbolic or concrete interactions between national and international actors in diplomatic and military contexts;[68][69] and the development of organizations or institutions, such as pathways of countries towards democracy (Wilson 2014).[70]
A sequence s is an ordered list of elements (s1, s2, ..., sl) taken from a finite alphabet A. For a set S of sequences, three sizes matter: the number n of sequences, the size a = |A| of the alphabet, and the length l of the sequences (which may differ between sequences). In social sciences, n is generally somewhere between a few hundred and a few thousand, the alphabet size remains limited (most often fewer than 20 states), while sequence length rarely exceeds 100.
We may distinguish between state sequences and event sequences,[71] where states last over time while events occur at a single time point; events do not last but contribute, possibly together with other events, to state changes. For instance, the joint occurrence of the two events leaving home and starting a union provokes a state change from 'living at home with parents' to 'living with a partner'.
When a state sequence is represented as the list of states observed at the successive time points, the position of each element in the sequence conveys this time information and the distance between positions reflects duration. An alternative, more compact representation of a sequence is the list of successive spells stamped with their durations, where a spell (also called an episode) is a substring in a same state. For example, in aabbbc, bbb is a spell of length 3 in state b, and the whole sequence can be represented as (a,2)-(b,3)-(c,1).[71]
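The following minimal Python sketch converts a position-wise state sequence into the compact list of (state, duration) spells described above; the single-character state coding is just an illustrative convention.

```python
# Convert a state sequence given as successive observations into its
# spell (state, duration) representation via run-length encoding.
from itertools import groupby

def to_spells(sequence):
    return [(state, len(list(run))) for state, run in groupby(sequence)]

print(to_spells("aabbbc"))  # [('a', 2), ('b', 3), ('c', 1)]
```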
A crucial point when looking at state sequences is the timing scheme used to time align the sequences. This could be the historical calendar time, or a process time such as age, i.e. time since birth.
In event sequences, positions do not convey any time information. Therefore event occurrence time must be explicitly provided (as a timestamp) when it matters.
SA is essentially concerned with state sequences.
Conventional SA consists essentially in building a typology of the observed trajectories. Abbott and Tsay (2000)[10] describe this typical SA as a three-step program: 1. coding individual narratives as sequences of states; 2. measuring pairwise dissimilarities between sequences; and 3. clustering the sequences based on the pairwise dissimilarities. However, SA is much more than this (see e.g.[35][8]) and also encompasses, among other things, the description and visual rendering of sets of sequences, ANOVA-like analysis and regression trees for sequences, the identification of representative sequences, the study of the relationship between linked sequences (e.g. dyadic, linked-lives, or different life dimensions such as occupation, family, and health), and sequence networks.
Given an alignment rule, a set of sequences can be represented in tabular form with sequences in rows and columns corresponding to the positions in the sequences.
To describe such data, we may look at the columns and consider the cross-sectional state distributions at the successive positions.
Thechronogramordensity plotof a set of sequences renders these successive cross-sectional distributions.
For each (column) distribution we can compute characteristics such as entropy or modal state and look at how these values evolve over the positions (see[5]pp 18–21).
Alternatively, we can look at the rows. The index plot,[73] in which each sequence is represented as a horizontal stacked bar or line, is the basic plot for rendering individual sequences.
We can compute characteristics of the individual sequences and examine the cross-sectional distribution of these characteristics.
Main indicators of individual sequences[32]
State sequences can nicely be rendered graphically and such plots prove useful for interpretation purposes. As shown above, the two basic plots are the index plot that renders individual sequences and the chronogram that renders the evolution of the cross-sectional state distribution along the timeframe. Chronograms (also known as status proportion plot or state distribution plot) completely overlook the diversity of the sequences, while index plots are often too scattered to be readable. Relative frequency plots and plots of representative sequences attempt to increase the readability of index plots without falling in the oversimplification of a chronogram. In addition, there are many plots that focus on specific characteristics of the sequences. Below is a list of plots that have been proposed in the literature for rendering large sets of sequences. For each plot, we give examples of software (details in sectionSoftware) that produce it.
Pairwise dissimilarities between sequences serve to compare sequences and many advanced SA methods are based on these dissimilarities. The most popular dissimilarity measure isoptimal matching(OM), i.e. the minimal cost of transforming one sequence into the other by means of indel (insert or delete) and substitution operations with possibly costs of these elementary operations depending on the states involved. SA is so intimately linked with OM that it is sometimes named optimal matching analysis (OMA).
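To make the idea concrete, the following Python sketch computes an OM distance by dynamic programming with a constant indel cost and a constant substitution cost. Real analyses typically use state-dependent substitution costs (and dedicated packages such as TraMineR); the cost values here are illustrative assumptions.

```python
# Minimal dynamic-programming sketch of an optimal-matching (OM) distance with
# constant indel and substitution costs; the costs are illustrative only.
def om_distance(s1, s2, indel=1.0, sub=2.0):
    n, m = len(s1), len(s2)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * indel
    for j in range(1, m + 1):
        d[0][j] = j * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if s1[i - 1] == s2[j - 1] else sub
            d[i][j] = min(d[i - 1][j] + indel,      # delete
                          d[i][j - 1] + indel,      # insert
                          d[i - 1][j - 1] + cost)   # substitute (or match)
    return d[n][m]

print(om_distance("aabbbc", "aabccc"))  # 4.0 with these costs
```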
There are roughly three categories of dissimilarity measures:[86]
Pairwise dissimilarities between sequences give access to a series of techniques to discover holistic structuring characteristics of the sequence data. In particular, dissimilarities between sequences can serve as input to cluster algorithms and multidimensional scaling, but also allow to identify medoids or other representative sequences, define neighborhoods, measure the discrepancy of a set of sequences, proceed to ANOVA-like analyses, and grow regression trees.
Although dissimilarity-based methods play a central role in social SA, essentially because of their ability to preserve the holistic perspective, several other approaches also prove useful for analyzing sequence data.
Some recent advances can be conceived as thethird wave of SA.[33]This wave is largely characterized by the effort of bringing together the stochastic and the algorithmic modeling culture by jointly applying SA with more established methods such as analysis of variance, event history, network analysis, or causal analysis and statistical modeling in general. Some examples are given below; see also "Other methods of analysis".
Although SA witnesses a steady inflow of methodological contributions that address the issues raised two decades ago,[28]some pressing open issues remain.[36]Among the most challenging, we can mention:
Up-to-date information on advances, methodological discussions, and recent relevant publications can be found on the Sequence Analysis Associationwebpage.
These techniques have proved valuable in a variety of contexts. In life-course research, for example, research has shown that retirement plans are affected not just by the last year or two of one's life, but instead how one's work and family careers unfolded over a period of several decades. People who followed an "orderly" career path (characterized by consistent employment and gradual ladder-climbing within a single organization) retired earlier than others, including people who had intermittent careers, those who entered the labor force late, as well as those who enjoyed regular employment but who made numerous lateral moves across organizations throughout their careers.[12]In the field ofeconomic sociology, research has shown that firm performance depends not just on a firm's current or recent social network connectedness, but also the durability or stability of their connections to other firms. Firms that have more "durably cohesive" ownership network structures attract more foreign investment than less stable or poorly connected structures.[23]Research has also used data on everyday work activity sequences to identify classes of work schedules, finding that the timing of work during the day significantly affects workers' abilities to maintain connections with the broader community, such as through community events.[19]More recently, social sequence analysis has been proposed as a meaningful approach to study trajectories in the domain of creative enterprise, allowing the comparison among the idiosyncrasies of unique creative careers.[131]While other methods for constructing and analyzing whole sequence structure have been developed during the past three decades, including event structure analysis,[118][119]OM and other sequence comparison methods form the backbone of research on whole sequence structures.
Some examples of application include:
Sociology
Demography and historical demography
Political sciences
Education and learning sciences
Psychology
Medical research
Survey methodology
Geography
Two main statistical computing environments offer tools to conduct a sequence analysis in the form of user-written packages: Stata and R.
The first international conference dedicated to social-scientific research that uses sequence analysis methods – the Lausanne Conference on Sequence Analysis, orLaCOSA– was held in Lausanne, Switzerland in June 2012.[159]A second conference (LaCOSA II) was held in Lausanne in June 2016.[160][161]TheSequence Analysis Association(SAA) was founded at the International Symposium on Sequence Analysis and Related Methods, in October 2018 at Monte Verità, TI, Switzerland. The SAA is an international organization whose goal is to organize events such as symposia and training courses and related events, and to facilitate scholars' access to sequence analysis resources.
|
https://en.wikipedia.org/wiki/Sequence_analysis_in_social_sciences
|
Inmachine learning,sequence labelingis a type ofpattern recognitiontask that involves the algorithmic assignment of acategoricallabel to each member of a sequence of observed values. A common example of a sequence labeling task ispart of speech tagging, which seeks to assign apart of speechto each word in an input sentence or document. Sequence labeling can be treated as a set of independentclassificationtasks, one per member of the sequence. However, accuracy is generally improved by making the optimal label for a given element dependent on the choices of nearby elements, using special algorithms to choose thegloballybest set of labels for the entire sequence at once.
As an example of why finding the globally best label sequence might produce better results than labeling one item at a time, consider the part-of-speech tagging task just described. Frequently, many words are members of multiple parts of speech, and the correct label of such a word can often be deduced from the correct label of the word to the immediate left or right. For example, the word "sets" can be either a noun or verb. In a phrase like "he sets the books down", the word "he" is unambiguously a pronoun, and "the" unambiguously adeterminer, and using either of these labels, "sets" can be deduced to be a verb, since nouns very rarely follow pronouns and are less likely to precede determiners than verbs are. But in other cases, only one of the adjacent words is similarly helpful. In "he sets and then knocks over the table", only the word "he" to the left is helpful (cf. "...picks up the sets and then knocks over..."). Conversely, in "... and also sets the table" only the word "the" to the right is helpful (cf. "... and also sets of books were ..."). An algorithm that proceeds from left to right, labeling one word at a time, can only use the tags of left-adjacent words and might fail in the second example above; vice versa for an algorithm that proceeds from right to left.
Most sequence labeling algorithms areprobabilisticin nature, relying onstatistical inferenceto find the best sequence. The most common statistical models in use for sequence labeling make a Markov assumption, i.e. that the choice of label for a particular word is directly dependent only on the immediately adjacent labels; hence the set of labels forms aMarkov chain. This leads naturally to thehidden Markov model(HMM), one of the most common statistical models used for sequence labeling. Other common models in use are themaximum entropy Markov modelandconditional random field.
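As a concrete illustration of decoding under such a Markov assumption, the Python sketch below runs the Viterbi algorithm on the part-of-speech example above. The tag set and the start, transition, and emission probabilities are invented for illustration, not estimated from data.

```python
# Compact Viterbi sketch for decoding the most likely label sequence under a
# first-order HMM. All probabilities below are made-up toy values.
def viterbi(observations, states, start_p, trans_p, emit_p):
    # best[t][s] = probability of the best path ending in state s at time t
    best = [{s: start_p[s] * emit_p[s].get(observations[0], 1e-6) for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        best.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (best[t - 1][p] * trans_p[p][s] * emit_p[s].get(observations[t], 1e-6), p)
                for p in states)
            best[t][s], back[t][s] = prob, prev
    # Trace the best final state back to the start.
    last = max(best[-1], key=best[-1].get)
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path

states = ("PRON", "VERB", "DET", "NOUN")
start_p = {"PRON": 0.6, "VERB": 0.1, "DET": 0.2, "NOUN": 0.1}
trans_p = {
    "PRON": {"PRON": 0.05, "VERB": 0.8, "DET": 0.1, "NOUN": 0.05},
    "VERB": {"PRON": 0.1, "VERB": 0.05, "DET": 0.6, "NOUN": 0.25},
    "DET":  {"PRON": 0.05, "VERB": 0.05, "DET": 0.05, "NOUN": 0.85},
    "NOUN": {"PRON": 0.1, "VERB": 0.4, "DET": 0.2, "NOUN": 0.3},
}
emit_p = {
    "PRON": {"he": 0.9},
    "VERB": {"sets": 0.6},
    "DET":  {"the": 0.9},
    "NOUN": {"sets": 0.3, "books": 0.7},
}
print(viterbi(["he", "sets", "the", "books"], states, start_p, trans_p, emit_p))
# -> ['PRON', 'VERB', 'DET', 'NOUN']
```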
|
https://en.wikipedia.org/wiki/Sequence_labeling
|
Action selectionis a way of characterizing the most basic problem of intelligent systems: what to do next. Inartificial intelligenceand computationalcognitive science, "the action selection problem" is typically associated withintelligent agentsandanimats—artificial systems that exhibit complex behavior in anagent environment. The term is also sometimes used inethologyor animal behavior.
One problem for understanding action selection is determining the level of abstraction used for specifying an "act". At the most basic level of abstraction, an atomic act could be anything fromcontracting a muscle celltoprovoking a war. Typically for any one action-selection mechanism, the set of possible actions is predefined and fixed.
Most researchers working in this field place high demands on their agents:
For these reasons, action selection is not trivial and attracts a good deal of research.
The main problem for action selection iscomplexity. Since allcomputationtakes both time and space (in memory), agents cannot possibly consider every option available to them at every instant in time. Consequently, they must bebiased, and constrain their search in some way. For AI, the question of action selection iswhat is the best way to constrain this search? For biology and ethology, the question ishow do various types of animals constrain their search? Do all animals use the same approaches? Why do they use the ones they do?
One fundamental question about action selection is whether it is really a problem at all for an agent, or whether it is just a description of anemergentproperty of an intelligent agent's behavior. However, if we consider how we are going to build an intelligent agent, then it becomes apparent there must besomemechanism for action selection. This mechanism may be highly distributed (as in the case of distributed organisms such associal insectcolonies orslime mold) or it may be a special-purpose module.
The action selection mechanism (ASM) determines not only the agent's actions in terms of impact on the world, but also directs its perceptualattention, and updates itsmemory. Theseegocentricsorts of actions may in turn result in modifying the agent's basic behavioral capacities, particularly in that updating memory implies some form ofmachine learningis possible. Ideally, action selection itself should also be able to learn and adapt, but there are many problems ofcombinatorial complexityand computationaltractabilitythat may require restricting the search space for learning.
In AI, an ASM is also sometimes either referred to as anagent architectureor thought of as a substantial part of one.
Generally, artificial action selection mechanisms can be divided into several categories:symbol-based systemssometimes known as classical planning,distributed solutions, and reactive ordynamic planning. Some approaches do not fall neatly into any one of these categories. Others are really more about providingscientific modelsthan practical AI control; these last are described further in the next section.
Early in the history of artificial intelligence, it was assumed that the best way for an agent to choose what to do next would be to compute a probably optimal plan, and then execute that plan. This led to the physical symbol system hypothesis, that a physical agent that can manipulate symbols is necessary and sufficient for intelligence. Many software agents still use this approach for action selection. It normally requires describing all sensor readings, the world, all of one's actions and all of one's goals in some form of predicate logic. Critics of this approach complain that it is too slow for real-time planning and that, despite the proofs, it is still unlikely to produce optimal plans because reducing descriptions of reality to logic is a process prone to errors.
Satisficingis a decision-making strategy that attempts to meet criteria for adequacy, rather than identify an optimal solution. A satisficing strategy may often, in fact, be (near) optimal if the costs of the decision-making process itself, such as the cost of obtaining complete information, are considered in the outcome calculus.
Goal driven architectures– In thesesymbolicarchitectures, the agent's behavior is typically described by a set of goals. Each goal can be achieved by a process or an activity, which is described by a prescripted plan. The agent must just decide which process to carry on to accomplish a given goal. The plan can expand to subgoals, which makes the process slightly recursive. Technically, more or less, the plans exploit condition-rules. These architectures arereactiveor hybrid. Classical examples of goal-driven architectures are implementable refinements ofbelief-desire-intentionarchitecture likeJAMorIVE.
In contrast to the symbolic approach, distributed systems of action selection actually have no one "box" in the agent that decides the next action. At least in their idealized form, distributed systems have manymodulesrunning in parallel and determining the best action based on local expertise. In these idealized systems, overall coherence is expected to emerge somehow, possibly through careful design of the interacting components. This approach is often inspired byartificial neural networksresearch. In practice, there is almost alwayssomecentralized system determining which module is "the most active" or has the most salience. There is evidence real biological brains also have suchexecutive decision systemswhich evaluate which of the competing systems deserves the mostattention, or more properly, has its desired actionsdisinhibited.
Because purely distributed systems are difficult to construct, many researchers have turned to using explicit hard-coded plans to determine the priorities of their system.
Dynamic or reactive planning methods compute just one next action in every instant based on the current context and pre-scripted plans. In contrast to classical planning methods, reactive or dynamic approaches do not suffercombinatorial explosion. On the other hand, they are sometimes seen as too rigid to be consideredstrong AI, since the plans are coded in advance. At the same time, natural intelligence can be rigid in some contexts although it is fluid and able to adapt in others.
Example dynamic planning mechanisms include:
Sometimes to attempt to address the perceived inflexibility of dynamic planning, hybrid techniques are used. In these, a more conventional AI planning system searches for new plans when the agent has spare time, and updates the dynamic plan library when it finds good solutions. The important aspect of any such system is that when the agent needs to select an action, some solution exists that can be used immediately (see furtheranytime algorithm).
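The following Python sketch is a deliberately simple illustration of the reactive idea: a fixed, priority-ordered list of condition-action rules is scanned each cycle and the first matching rule fires. The rules, context keys, and default action are invented for this example and do not correspond to any particular published architecture.

```python
# Minimal reactive action-selection sketch: condition-action rules are checked
# in priority order and the first rule whose condition holds in the current
# context fires. Rules and context keys are illustrative assumptions.
RULES = [  # ordered from highest to lowest priority
    (lambda ctx: ctx["obstacle_ahead"], "turn_away"),
    (lambda ctx: ctx["battery"] < 0.2, "return_to_charger"),
    (lambda ctx: ctx["goal_visible"], "move_toward_goal"),
]

def select_action(context, default="explore"):
    for condition, action in RULES:
        if condition(context):
            return action
    return default

print(select_action({"obstacle_ahead": False, "battery": 0.15, "goal_visible": True}))
# -> 'return_to_charger'
```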
Many dynamic models of artificial action selection were originally inspired by research inethology. In particular,Konrad LorenzandNikolaas Tinbergenprovided the idea of aninnate releasing mechanismto explain instinctive behaviors (fixed action patterns). Influenced by the ideas ofWilliam McDougall, Lorenz developed this into a "psychohydraulic" model of themotivationof behavior. In ethology, these ideas were influential in the 1960s, but they are now regarded as outdated because of their use of anenergy flowmetaphor; thenervous systemand the control of behavior are now normally treated as involving information transmission rather than energy flow. Dynamic plans and neural networks are more similar to information transmission while spreading activation is more similar to the diffuse control of emotional or hormonal systems.
Stan Franklin has proposed that action selection is the right perspective to take in understanding the role and evolution of mind. See his page on the action selection paradigm.
Some researchers create elaborate models of neural action selection. See for example:
Thelocus coeruleus(LC) is one of the primary sources ofnoradrenalinein the brain and has been associated with selection ofcognitive processing, such as attention and behavioral tasks.[3][4][5][6]Thesubstantia nigra pars compacta(SNc) is one of the primary sources ofdopaminein the brain, and has been associated with action selection, primarily as part of thebasal ganglia.[7][8][9][10][11]CNET is a hypothesized neural signaling mechanism in the SNc and LC (which are catecholaminergic neurons), that could assist with action selection by routing energy between neurons in each group as part of action selection, to help one or more neurons in each group to reachaction potential.[12][13]It was first proposed in 2018, and is based on a number of physical parameters of those neurons, which can be broken down into three major components:
1)Ferritinandneuromelaninare present in high concentrations in those neurons, but it was unknown in 2018 whether they formed structures that would be capable of transmitting electrons over relatively long distances on the scale of microns between the largest of those neurons, which had not been previously proposed or observed.[14]Those structures would also need to provide a routing or switching function, which had also not previously been proposed or observed. Evidence of the presence of ferritin and neuromelanin structures in those neurons and their ability to both conduct electrons by sequentialtunnelingand to route/switch the path of the neurons was subsequently obtained.[15][16][17]
2) The axons of large SNc neurons were known to have extensive arbors, but it was unknown whether post-synaptic activity at the synapses of those axons would raise themembrane potentialof those neurons sufficiently to cause the electrons to be routed to the neuron or neurons with the most post-synaptic activity for the purpose of action selection. At the time, prevailing explanations of the purpose of those neurons were that they did not mediate action selection and were only modulatory and non-specific.[18]Prof. Pascal Kaeser of Harvard Medical School subsequently obtained evidence that large SNc neurons can be temporally and spatially specific and mediate action selection.[19]Other evidence indicates that the large LC axons have similar behavior.[20][21]
3) Several sources of electrons or excitons to provide the energy for the mechanism were hypothesized in 2018 but had not been observed at that time. Dioxetane cleavage (which can occur during somatic dopamine metabolism by quinone degradation of melanin) was contemporaneously proposed to generate high energy triplet state electrons by Prof. Doug Brash at Yale, which could provide a source for electrons for the CNET mechanism.[22][23][24]
While evidence of a number of physical predictions of the CNET hypothesis has thus been obtained, evidence of whether the hypothesis itself is correct has not been sought. One way to try to determine whether the CNET mechanism is present in these neurons would be to use quantum dot fluorophores and optical probes to determine whether electron tunneling associated with ferritin in the neurons is occurring in association with specific actions.[6][25][26]
|
https://en.wikipedia.org/wiki/Action_selection_mechanism
|
Inartificial intelligence(AI), anexpert systemis a computer system emulating the decision-making ability of a humanexpert.[1]Expert systems are designed to solve complex problems byreasoningthrough bodies of knowledge, represented mainly asif–then rulesrather than through conventionalprocedural programmingcode.[2]Expert systems were among the first truly successful forms of AI software.[3][4][5][6][7]They were created in the 1970s and then proliferated in the 1980s,[8]being then widely regarded as the future of AI — before the advent of successfulartificial neural networks.[9]An expert system is divided into two subsystems: 1) aknowledge base, which represents facts and rules; and 2) aninference engine, which applies the rules to the known facts to deduce new facts, and can include explaining and debugging abilities.
Soon after the dawn of modern computers in the late 1940s and early 1950s, researchers started realizing the immense potential these machines had for modern society. One of the first challenges was to make such machines able to “think” like humans – in particular, making these machines able to make important decisions the way humans do. The medical–healthcare field presented the tantalizing challenge of enabling these machines to make medical diagnostic decisions.[10]
Thus, in the late 1950s, right after the information age had fully arrived, researchers started experimenting with the prospect of using computer technology to emulate human decision making. For example, biomedical researchers started creating computer-aided systems for diagnostic applications in medicine and biology. These early diagnostic systems used patients’ symptoms and laboratory test results as inputs to generate a diagnostic outcome.[11][12]These systems were often described as the early forms of expert systems. However, researchers realized that there were significant limits when using traditional methods such as flow charts,[13][14]statistical pattern matching,[15]or probability theory.[16][17]
This situation gradually led to the development of expert systems, which used knowledge-based approaches. Early expert systems in medicine included theMYCINexpert system,[18]theInternist-Iexpert system[19]and, later in the mid-1980s,CADUCEUS.[20]
Expert systems were formally introduced around 1965 by theStanfordHeuristic Programming Project led byEdward Feigenbaum, who is sometimes termed the "father of expert systems";[21]other key early contributors were Bruce Buchanan and Randall Davis. The Stanford researchers tried to identify domains where expertise was highly valued and complex, such as diagnosing infectious diseases (Mycin) and identifying unknown organic molecules (Dendral).[22]The idea that "intelligent systems derive their power from the knowledge they possess rather than from the specific formalisms and inference schemes they use"[23]– as Feigenbaum said – was at the time a significant step forward, since the past research had been focused on heuristic computational methods, culminating in attempts to develop very general-purpose problem solvers (most notably the joint work ofAllen NewellandHerbert Simon).[24]Expert systems became some of the first truly successful forms ofartificial intelligence(AI) software.[3][4][5][6][7]
Research on expert systems was also active in Europe. In the US, the focus tended to be on the use ofproduction rule systems, first on systems hard coded on top ofLispprogramming environments and then on expert system shells developed by vendors such asIntellicorp. In Europe, research focused more on systems and expert systems shells developed inProlog. The advantage of Prolog systems was that they employed a form ofrule-based programmingthat was based onformal logic.[25][26]
One such early expert system shell based on Prolog was APES.[27]One of the first use cases ofPrologand APES was in the legal area, namely the encoding of a large portion of the British Nationality Act. Lance Elliot wrote: "The British Nationality Act was passed in 1981 and shortly thereafter was used as a means of showcasing the efficacy of using Artificial Intelligence (AI) techniques and technologies, doing so to explore how the at-the-time newly enacted statutory law might be encoded into a computerized logic-based formalization. A now oft-cited research paper entitled “The British Nationality Act as a Logic Program” was published in 1986 and subsequently became a hallmark for subsequent work in AI and the law."[28][29]
In the 1980s, expert systems proliferated. Universities offered expert system courses and two-thirds of theFortune 500companies applied the technology in daily business activities.[8][30]Interest was international with theFifth Generation Computer Systems projectin Japan and increased research funding in Europe.
In 1981, the firstIBM PC, with thePC DOSoperating system, was introduced.[31]The imbalance between the high affordability of the relatively powerful chips in the PC, compared to the much more expensive cost of processing power in the mainframes that dominated the corporate IT world at the time, created a new type of architecture for corporate computing, termed theclient–server model.[32]Calculations and reasoning could be performed at a fraction of the price of a mainframe using a PC. This model also enabled business units to bypass corporate IT departments and directly build their own applications. As a result, client-server had a tremendous impact on the expert systems market. Expert systems were already outliers in much of the business world, requiring new skills that many IT departments did not have and were not eager to develop. They were a natural fit for new PC-based shells that promised to put application development into the hands of end users and experts. Until then, the main development environment for expert systems had been high endLisp machinesfromXerox,Symbolics, andTexas Instruments. With the rise of the PC and client-server computing, vendors such as Intellicorp and Inference Corporation shifted their priorities to developing PC-based tools. Also, new vendors, often financed byventure capital(such as Aion Corporation,Neuron Data, Exsys,VP-Expert, and many others[33][34]), started appearing regularly.
The first expert system to be used in a design capacity for a large-scale product was the Synthesis of Integral Design (SID) software program, developed in 1982. Written inLisp, SID generated 93% of theVAX 9000CPU logic gates.[35]Input to the software was a set of rules created by several expert logic designers. SID expanded the rules and generated softwarelogic synthesisroutines many times the size of the rules themselves. Surprisingly, the combination of these rules resulted in an overall design that exceeded the capabilities of the experts themselves, and in many cases out-performed the human counterparts. While some rules contradicted others, top-level control parameters for speed and area provided the tie-breaker. The program was highly controversial but used nevertheless due to project budget constraints. It was terminated by logic designers after the VAX 9000 project completion.
Before the mid-1970s, expectations of what expert systems could accomplish in many fields tended to be extremely optimistic. At the start of these early studies, researchers were hoping to develop entirely automatic (i.e., completely computerized) expert systems, and people's expectations of what computers could do were frequently too idealistic. This situation changed radically afterRichard M. Karppublished his breakthrough paper “Reducibility among Combinatorial Problems” in the early 1970s.[36]Thanks to the work of Karp and other scholars, such as Hubert L. Dreyfus,[37]it became clear that there are inherent limits on what computer algorithms can and cannot do, and that many of the computational problems underlying this type of expert system have practical limits. These findings laid the groundwork for the next developments in the field.[10]
In the 1990s and beyond, the termexpert systemand the idea of a standalone AI system mostly dropped from the IT lexicon. There are two interpretations of this. One is that "expert systems failed": the IT world moved on because expert systems did not deliver on their overhyped promise.[38][39]The other is the mirror opposite, that expert systems were simply victims of their success: as IT professionals grasped concepts such as rule engines, such tools migrated from being standalone tools for developing special purposeexpertsystems, to being one of many standard tools.[40]Other researchers suggest that Expert Systems caused inter-company power struggles when the IT organization lost its exclusivity in software modifications to users or Knowledge Engineers.[41]
In the first decade of the 2000s, there was a "resurrection" for the technology, while using the termrule-based systems, with significant success stories and adoption.[42]Many of the leading major business application suite vendors (such asSAP,Siebel, andOracle) integrated expert system abilities into their suite of products as a way to specify business logic. Rule engines are no longer simply for defining the rules an expert would use but for any type of complex, volatile, and critical business logic; they often go hand in hand with business process automation and integration environments.[43][44][45]
The limits of earlier types of expert systems prompted researchers to develop new approaches: more efficient, flexible, and powerful methods to simulate the human decision-making process. Some of these approaches are based on new methods of artificial intelligence (AI), in particular onmachine learninganddata miningapproaches with a feedback mechanism.[46][failed verification]Recurrent neural networksoften take advantage of such mechanisms. See also the related discussion in the disadvantages section.
Modern systems can incorporate new knowledge more easily, and thus update themselves more easily. Such systems can generalize better from existing knowledge and deal with vast amounts of complex data; this relates to the subject ofbig data. Sometimes these types of expert systems are called "intelligent systems."[10]
More recently, it can be argued that expert systems have moved into the area ofbusiness rulesandbusiness rules management systems.
An expert system is an example of aknowledge-based system. Expert systems were the first commercial systems to use a knowledge-based architecture. In general, an expert system includes the following components: aknowledge base, aninference engine, an explanation facility, a knowledge acquisition facility, and a user interface.[48][49]
The knowledge base represents facts about the world. In early expert systems such as Mycin and Dendral, these facts were represented mainly as flat assertions about variables. In later expert systems developed with commercial shells, the knowledge base took on more structure and used concepts fromobject-oriented programming. The world was represented asclasses, subclasses, andinstancesandassertionswere replaced by values of object instances. The rules worked by querying and asserting values of the objects.
The inference engine is anautomated reasoning systemthat evaluates the current state of the knowledge-base, applies relevant rules, and then asserts new knowledge into the knowledge base. The inference engine may also include abilities for explanation, so that it can explain to a user the chain of reasoning used to arrive at a particular conclusion by tracing back over the firing of rules that resulted in the assertion.[50]
There are mainly two modes for an inference engine:forward chainingandbackward chaining. The different approaches are dictated by whether the inference engine is being driven by the antecedent (left hand side) or the consequent (right hand side) of the rule. In forward chaining an antecedent fires and asserts the consequent. For example, consider the following rule:
R1:Man(x)⟹Mortal(x){\displaystyle R1:{\mathit {Man}}(x)\implies {\mathit {Mortal}}(x)}
A simple example of forward chaining would be to assert Man(Socrates) to the system and then trigger the inference engine. It would match R1 and assert Mortal(Socrates) into the knowledge base.
Backward chaining is a bit less straightforward. In backward chaining the system looks at possible conclusions and works backward to see if they might be true. So if the system was trying to determine if Mortal(Socrates) is true it would find R1 and query the knowledge base to see if Man(Socrates) is true. One of the early innovations of expert systems shells was to integrate inference engines with a user interface. This could be especially powerful with backward chaining. If the system needs to know a particular fact but does not, then it can simply generate an input screen and ask the user if the information is known. So in this example, it could use R1 to ask the user if Socrates was a Man and then use that new information accordingly.
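The two modes can be illustrated with a minimal Python sketch (purely illustrative; the rule and fact representations below are not taken from any particular expert system shell):

```python
# Minimal sketch of forward and backward chaining for the rule
# R1: Man(x) => Mortal(x). Names and data layout are illustrative.

facts = {("Man", "Socrates")}
rules = [("Man", "Mortal")]  # each rule: (antecedent predicate, consequent predicate)

def forward_chain(facts, rules):
    """Apply every rule whose antecedent matches a known fact until nothing new is asserted."""
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            for pred, arg in list(facts):
                if pred == antecedent and (consequent, arg) not in facts:
                    facts.add((consequent, arg))
                    changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Try to prove a goal such as ("Mortal", "Socrates") by working back through the rules."""
    if goal in facts:
        return True
    pred, arg = goal
    return any(consequent == pred and backward_chain((antecedent, arg), facts, rules)
               for antecedent, consequent in rules)

print(forward_chain(set(facts), rules))                       # {('Man', 'Socrates'), ('Mortal', 'Socrates')}
print(backward_chain(("Mortal", "Socrates"), facts, rules))   # True
```

Real shells add pattern matching over multiple antecedents, conflict resolution, and user interaction, but the basic control flow is essentially this.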
The use of rules to explicitly represent knowledge also enabled explanation abilities. In the simple example above if the system had used R1 to assert that Socrates was Mortal and a user wished to understand why Socrates was mortal they could query the system and the system would look back at the rules which fired to cause the assertion and present those rules to the user as an explanation. In English, if the user asked "Why is Socrates Mortal?" the system would reply "Because all men are mortal and Socrates is a man". A significant area for research was the generation of explanations from the knowledge base in natural English rather than simply by showing the more formal but less intuitive rules.[51]
As expert systems evolved, many new techniques were incorporated into various types of inference engines.[52]Some of the most important of these were:
The goal of knowledge-based systems is to make the critical information required for the system to work explicit rather than implicit.[55]In a traditional computer program, the logic is embedded in code that can typically only be reviewed by an IT specialist. With an expert system, the goal was to specify the rules in a format that was intuitive and easily understood, reviewed, and even edited by domain experts rather than IT experts. The benefits of this explicitknowledge representationwere rapid development and ease of maintenance.
Ease of maintenance is the most obvious benefit. This was achieved in two ways. First, by removing the need to write conventional code, many of the normal problems that can be caused by even small changes to a system could be avoided with expert systems. Essentially, the logical flow of the program (at least at the highest level) was simply a given for the system: simply invoke the inference engine. This was also a reason for the second benefit:rapid prototyping. With an expert system shell it was possible to enter a few rules and have a prototype developed in days rather than the months or years typically associated with complex IT projects.
A claim for expert system shells that was often made was that they removed the need for trained programmers and that experts could develop systems themselves. In reality, this was seldom if ever true. While the rules for an expert system were more comprehensible than typical computer code, they still had a formal syntax where a misplaced comma or other character could cause havoc as with any other computer language. Also, as expert systems moved from prototypes in the lab to deployment in the business world, issues of integration and maintenance became far more critical. Inevitably demands to integrate with, and take advantage of, large legacy databases and systems arose. To accomplish this, integration required the same skills as any other type of system.[56]
Summing up the benefits of using expert systems, the following can be highlighted:[48]
The most common disadvantage cited for expert systems in the academic literature is theknowledge acquisitionproblem. Obtaining the time of domain experts for any software application is always difficult, but for expert systems it was especially difficult because the experts were by definition highly valued and in constant demand by the organization. As a result of this problem, a great deal of research in the later years of expert systems was focused on tools for knowledge acquisition, to help automate the process of designing, debugging, and maintaining rules defined by experts. However, when looking at the life-cycle of expert systems in actual use, other problems – essentially the same problems as those of any other large system – seem at least as critical as knowledge acquisition: integration, access to large databases, and performance.[57][58]
Performance could be especially problematic because early expert systems were built using tools (such as earlier Lisp versions) that interpreted code expressions without first compiling them. This provided a powerful development environment, but with the drawback that it was virtually impossible to match the efficiency of the fastest compiled languages (such asC). System and database integration were difficult for early expert systems because the tools were mostly in languages and platforms that were neither familiar to nor welcome in most corporate IT environments – programming languages such as Lisp and Prolog, and hardware platforms such asLisp machinesand personal computers. As a result, much effort in the later stages of expert system tool development was focused on integrating with legacy environments such asCOBOLand large database systems, and on porting to more standard platforms. These issues were resolved mainly by the client–server paradigm shift, as PCs were gradually accepted in the IT environment as a legitimate platform for serious business system development and as affordableminicomputerservers provided the processing power needed for AI applications.[56]
Another major challenge of expert systems emerges when the size of the knowledge base increases. This causes the processing complexity to increase. For instance, when an expert system with 100 million rules was envisioned as the ultimate expert system, it became obvious that such a system would be too complex and would face too many computational problems.[59]An inference engine would have to be able to process huge numbers of rules to reach a decision.
Verifying that decision rules are consistent with each other is also a challenge when there are many rules. Usually such a problem leads to asatisfiability(SAT) formulation,[60]the well-known NP-completeBoolean satisfiability problem. If we assume onlybinary variables, say n of them, then the corresponding search space is of size 2n{\displaystyle 2^{n}}; thus, the search space grows exponentially.
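As a purely illustrative sketch of this growth, the following Python snippet enumerates every assignment of n binary variables and confirms that the count is 2^n:

```python
from itertools import product

# Illustrative only: enumerate every assignment of n binary rule conditions.
# The number of assignments is 2**n, which quickly becomes intractable.
for n in (4, 10, 20):
    assignments = product((False, True), repeat=n)
    count = sum(1 for _ in assignments)
    assert count == 2 ** n
    print(f"n = {n:2d}: {count} candidate assignments")
```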
There are also questions on how to prioritize the use of the rules to operate more efficiently, or how to resolve ambiguities (for instance, if there are too many else-if sub-structures within one rule) and so on.[61]
Other problems are related to theoverfittingandovergeneralizationeffects when using known facts and trying to generalize to other cases not described explicitly in the knowledge base. Such problems exist with methods that employ machine learning approaches too.[62][63]
Another problem related to the knowledge base is how to make updates of its knowledge quickly and effectively.[64][65][66]Also how to add a new piece of knowledge (i.e., where to add it among many rules) is challenging. Modern approaches that rely on machine learning methods are easier in this regard.[citation needed]
Because of the above challenges, it became clear that new approaches to AI were required instead of rule-based technologies. These new approaches are based on the use of machine learning techniques, along with the use of feedback mechanisms.[10]
The key challenges that expert systems in medicine (if one considers computer-aided diagnostic systems as modern expert systems), and perhaps in other application domains, include issues related to aspects such as: big data, existing regulations, healthcare practice, various algorithmic issues, and system assessment.[67]
Finally, the following disadvantages of using expert systems can be summarized:[48]
Hayes-Roth divides expert systems applications into 10 categories illustrated in the following table. The example applications were not in the original Hayes-Roth table, and some of them arose well afterward. Any application that is not footnoted is described in the Hayes-Roth book.[50]Also, while these categories provide an intuitive framework to describe the space of expert systems applications, they are not rigid categories, and in some cases an application may show traits of more than one category.
Hearsay was an early attempt at solvingvoice recognitionthrough an expert systems approach. For the most part this category of expert systems was not all that successful. Hearsay and all interpretation systems are essentially pattern recognition systems—looking for patterns in noisy data. In the case of Hearsay recognizing phonemes in an audio stream. Other early examples were analyzing sonar data to detect Russian submarines. These kinds of systems proved much more amenable to aneural networkAI solution than a rule-based approach.
CADUCEUS andMYCINwere medical diagnosis systems. The user describes their symptoms to the computer as they would to a doctor and the computer returns a medical diagnosis.
Dendral was a tool to study hypothesis formation in the identification of organic molecules. The general problem it solved—designing a solution given a set of constraints—was one of the most successful areas for early expert systems applied to business domains such as salespeople configuringDigital Equipment Corporation(DEC)VAXcomputers and mortgage loan application development.
SMH.PAL is an expert system for the assessment of students with multiple disabilities.[77]
GARVAN-ES1 was a medical expert system, developed at theGarvan Institute of Medical Research, that provided automated clinical diagnostic comments on endocrine reports from a pathology laboratory. It was one of the first medical expert systems to go into routine clinical use internationally[73]and the first expert system to be used for diagnosis daily in Australia.[83]The system was written in "C" and ran on a PDP-11 in 64K of memory. It had 661 rules that were compiled, not interpreted.
Mistral[69]is an expert system to monitor dam safety, developed in the 1990s by Ismes (Italy). It gets data from an automatic monitoring system and performs a diagnosis of the state of the dam. Its first copy, installed in 1992 on theRidracoliDam (Italy), is still operational 24/7/365. It has been installed on several dams in Italy and abroad (e.g.,Itaipu Damin Brazil), and on landslide sites under the name of Eydenet,[70]and on monuments under the name of Kaleidos.[71]Mistral is a registered trade mark ofCESI.
|
https://en.wikipedia.org/wiki/Expert_system
|
In the field ofartificial intelligence, aninference engineis asoftware componentof an intelligent system that applies logical rules to theknowledge baseto deduce new information. The first inference engines were components ofexpert systems. The typical expert system consisted of a knowledge base and an inference engine. The knowledge base stored facts about the world. The inference engine applied logical rules to the knowledge base and deduced new knowledge. This process would iterate as each new fact in the knowledge base could trigger additional rules in the inference engine. Inference engines work primarily in one of two modes:forward chainingandbackward chaining. Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals, and works backward to determine what facts must be asserted so that the goals can be achieved.[1]
Additionally, the concept of 'inference' has expanded to include the process through which trainedneural networksgenerate predictions or decisions. In this context, an 'inference engine' could refer to the specific part of the system, or even the hardware, that executes these operations. This type of inference plays a crucial role in various applications, including (but not limited to)image recognition,natural language processing, andautonomous vehicles. The inference phase in these applications is typically characterized by a high volume of data inputs and real-time processing requirements.
The logic that an inference engine uses is typically represented as IF-THEN rules. The general format of such rules is IF <logical expression> THEN <logical expression>. Prior to the development of expert systems and inference engines, artificial intelligence researchers focused on more powerfultheorem proverenvironments that offered much fuller implementations offirst-order logic, for example general statements that includeduniversal quantification(for all X some statement is true) andexistential quantification(there exists some X such that some statement is true). What researchers discovered is that the power of these theorem-proving environments was also their drawback. Back in 1965, it was far too easy to create logical expressions that could take an indeterminate or even infinite time to terminate. For example, it is common in universal quantification to make statements over an infinite set such as the set of all natural numbers. Such statements are perfectly reasonable and even required in mathematical proofs but when included in an automated theorem prover executing on a computer may cause the computer to fall into an infinite loop. Focusing on IF-THEN statements (what logicians callmodus ponens) still gave developers a very powerful general mechanism to represent logic, but one that could be used efficiently with computational resources. What is more, there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge.[2]
A simple example ofmodus ponensoften used in introductory logic books is "If you are human then you are mortal". This can be represented inpseudocodeas:
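A plausible rendering of this rule (the names follow the surrounding text; the Python pair is just one convenient encoding, not the notation of any particular engine):

```python
# Plausible reconstruction of the rule, shown as a comment in pseudocode form:
#   IF Human(x) THEN Mortal(x)
RULE_HUMAN_MORTAL = ("Human", "Mortal")   # (antecedent predicate, consequent predicate)
```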
A trivial example of how this rule would be used in an inference engine is as follows. Inforward chaining, the inference engine would find any facts in the knowledge base that matched Human(x) and for each fact it found would add the new information Mortal(x) to the knowledge base. So if it found an object called Socrates that was human it would deduce that Socrates was mortal. Inbackward chaining, the system would be given a goal, e.g. answer the question is Socrates mortal? It would search through the knowledge base and determine if Socrates was human and, if so, would assert he is also mortal. However, in backward chaining a common technique was to integrate the inference engine with a user interface. In that way, rather than simply being automated the system could now be interactive. In this trivial example, if the system was given the goal to answer the question if Socrates was mortal and it didn't yet know if he was human, it would generate a window to ask the user the question "Is Socrates human?" and would then use that information accordingly.
This innovation of integrating the inference engine with a user interface led to the second early advancement of expert systems: explanation capabilities. The explicit representation of knowledge as rules rather than code made it possible to generate explanations to users: both explanations in real time and after the fact. So if the system asked the user "Is Socrates human?", the user may wonder why she was being asked that question and the system would use the chain of rules to explain why it was currently trying to ascertain that bit of knowledge: that is, it needs to determine if Socrates is mortal and to do that needs to determine if he is human. At first these explanations were not much different than the standard debugging information that developers deal with when debugging any system. However, an active area of research was utilizing natural language technology to ask, understand, and generate questions and explanations using natural languages rather than computer formalisms.[3]
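A minimal Python sketch of this interactive style of backward chaining is shown below (illustrative only: the rule table, fact store, and question format are assumptions, not the design of any particular shell). The `why` chain plays the role of the explanation the system gives when asked why a question is being posed.

```python
# Minimal sketch of a backward chainer that asks the user for unknown facts
# and explains why it is asking. Names and data layout are illustrative.

RULES = {"Mortal": "Human"}   # consequent -> antecedent, i.e. IF Human(x) THEN Mortal(x)
KNOWN = {}                    # cache of answered facts, e.g. {("Human", "Socrates"): True}

def prove(pred, subject, why=(), ask=input):
    """Try to establish pred(subject); ask the user when no rule or fact applies."""
    goal = f"{pred}({subject})"
    why = (*why, goal)
    if (pred, subject) in KNOWN:
        return KNOWN[(pred, subject)]
    if pred in RULES:                                   # back-chain through the rule
        return prove(RULES[pred], subject, why, ask)
    # No rule concludes this predicate, so turn it into a question for the user.
    print("Asked in order to establish: " + " <- ".join(why))
    answer = ask(f"Is {subject} {pred.lower()}? (y/n) ").strip().lower() == "y"
    KNOWN[(pred, subject)] = answer
    return answer

# Non-interactive demonstration; pass ask=input for a genuinely interactive session.
print(prove("Mortal", "Socrates", ask=lambda q: "y"))   # True
```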
An inference engine cycles through three sequential steps:match rules,select rules, andexecute rules. The execution of the rules will often result in new facts or goals being added to the knowledge base, which will trigger the cycle to repeat. This cycle continues until no new rules can be matched.
In the first step,match rules, the inference engine finds all of the rules that are triggered by the current contents of the knowledge base. In forward chaining, the engine looks for rules where the antecedent (left hand side) matches some fact in the knowledge base. In backward chaining, the engine looks for antecedents that can satisfy one of the current goals.
In the second step,select rules, the inference engine prioritizes the various rules that were matched to determine the order to execute them. In the final step,execute rules, the engine executes each matched rule in the order determined in step two and then iterates back to step one again. The cycle continues until no new rules are matched.[4]
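The cycle can be sketched in a few lines of Python (illustrative; the rules, facts, and salience-based selection strategy are assumptions rather than features of any particular engine):

```python
# Sketch of the match -> select -> execute cycle described above.

facts = {("temperature", "high")}

rules = [
    # (name, salience, condition over the fact set, action returning new facts)
    ("open-vent",  10, lambda f: ("temperature", "high") in f, lambda f: {("vent", "open")}),
    ("log-state",   1, lambda f: ("vent", "open") in f,        lambda f: {("logged", "yes")}),
]

fired = set()
while True:
    # 1. Match: find rules whose condition holds and that have not fired yet.
    agenda = [r for r in rules if r[0] not in fired and r[2](facts)]
    if not agenda:
        break
    # 2. Select: order the matched rules, here simply by descending salience.
    agenda.sort(key=lambda r: -r[1])
    name, _, _, action = agenda[0]
    # 3. Execute: run the action, assert any new facts, and repeat the cycle.
    facts |= action(facts)
    fired.add(name)

print(facts)   # includes ('vent', 'open') and ('logged', 'yes')
```

Here the `fired` set provides a crude form of refraction so that each rule fires at most once; real engines track rule instances rather than rule names.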
Early inference engines focused primarily on forward chaining. These systems were usually implemented in theLispprogramming language. Lisp was a frequent platform for early AI research due to its strong capability to do symbolic manipulation. Also, as aninterpreted languageit offered productive development environments appropriate todebuggingcomplex programs. A necessary consequence of these benefits was that Lisp programs tended to be slower and less robust than compiled languages of the time such asC. A common approach in these early days was to take an expert system application and repackage the inference engine used for that system as a re-usable tool other researchers could use for the development of other expert systems. For example,MYCINwas an early expert system for medical diagnosis and EMYCIN was an inference engine extrapolated from MYCIN and made available for other researchers.[1]
As expert systems moved from research prototypes to deployed systems there was more focus on issues such as speed and robustness. One of the first and most popular forward chaining engines wasOPS5, which used theRete algorithmto optimize the efficiency of rule firing. Another very popular technology that was developed was theProloglogic programming language. Prolog focused primarily on backward chaining and also featured various commercial versions and optimizations for efficiency and robustness.[5]
As expert systems prompted significant interest from the business world, various companies, many of them started or guided by prominent AI researchers created productized versions of inference engines. For example,Intellicorpwas initially guided byEdward Feigenbaum. These inference engine products were also often developed in Lisp at first. However, demands for more affordable and commercially viable platforms eventually madepersonal computerplatforms very popular.
ClipsRulesandRefPerSys(inspired byCAIA[6]and the work ofJacques Pitrat). TheFrama-Cstatic source code analyzer also uses some inference engine techniques.
|
https://en.wikipedia.org/wiki/Inference_engine
|
OPS5is arule-basedorproduction systemcomputer language, notable as the first such language to be used in a successfulexpert system, theR1/XCONsystem used to configureVAXcomputers.
The OPS (said to be short for "Official Production System") family was developed in the late 1970s byCharles Forgywhile atCarnegie Mellon University.Allen Newell's research group inartificial intelligencehad been working on production systems for some time, but Forgy's implementation, based on hisRete algorithm, was especially efficient, sufficiently so that it was possible to scale up to larger problems involving hundreds or thousands of rules.
OPS5 uses aforward chaininginference engine; programs execute by scanning "working memory elements" (which are vaguely object-like, with classes and attributes) looking for matches with the rules in "production memory". Rules have actions that may modify or remove the matched element, create new ones, perform side effects such as output, and so forth. Execution continues until no more matches can be found.
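A rough Python analogue of this execution model (not OPS5 syntax; the element classes, attributes, and the single production below are illustrative assumptions) might look like:

```python
# Illustrative Python analogue of OPS5-style working memory elements: each element
# has a class name and named attributes, and productions modify or remove matches.

working_memory = [
    {"class": "order", "status": "new",  "id": 1},
    {"class": "order", "status": "paid", "id": 2},
]

def process_new_orders(wm):
    """Production: for every (order ^status new), modify the element to 'processed'."""
    changed = False
    for wme in wm:
        if wme["class"] == "order" and wme["status"] == "new":
            wme["status"] = "processed"     # modify the matched element
            changed = True
    return changed

# Execution continues until no production finds a match.
while process_new_orders(working_memory):
    pass

print(working_memory)
```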
In this sense, OPS5 is an execution engine for aPetri netextended with inhibitor arcs.
The OPS5 forward chaining process makes it extremely parallelizable during the matching phase, and several automatic parallelizing compilers were created.
OPS4was an early version, whileOPS83came later. The development of OPS4 was sponsored byARPAOrder No. 3597, and monitored by theAir Force Avionics Laboratoryunder Contract F33615-78-C-1151.[1]
The first implementation of OPS5 was written inLisp, and later rewritten inBLISSfor speed.
DEC OPS5is an extended implementation of the OPS5 language definition, developed for use with theOpenVMS, RISC ULTRIX, and DEC OSF/1 operating systems.
|
https://en.wikipedia.org/wiki/OPS5
|
TheProduction Rule Representation(PRR) is a proposed standard of theObject Management Group(OMG) that aims to define a vendor-neutral model for representing production rules within the Unified Modeling Language (UML), specifically for use in forward-chaining rule engines.
The OMG set up a Business Rules Working Group in 2002 as the first standards body to recognize the importance of the "Business Rules Approach". It issued 2 mainRFPsin 2003 – a standard for modeling production rules (PRR), and a standard for modeling business rules as business documentation (BSBR, nowSBVR).
PRR was mostly defined by and for vendors of Business Rule Engines (BREs), sometimes termedBusiness Rules Engines. Contributors have included all the major BRE vendors, members ofRuleML, and leading UML vendors.
PRR is currently at version 1.0.
|
https://en.wikipedia.org/wiki/Production_Rule_Representation
|
TheRete algorithm(/ˈriːtiː/REE-tee,/ˈreɪtiː/RAY-tee, rarely/ˈriːt/REET,/rɛˈteɪ/reh-TAY) is apattern matchingalgorithmfor implementingrule-based systems. The algorithm was developed to efficiently apply manyrulesor patterns to many objects, orfacts, in aknowledge base. It is used to determine which of the system's rules should fire based on its data store, its facts. The Rete algorithm was designed byCharles L. ForgyofCarnegie Mellon University, first published in a working paper in 1974, and later elaborated in his 1979 Ph.D. thesis and a 1982 paper.[1]
Anaive implementationof anexpert systemmight check eachruleagainst knownfactsin aknowledge base, firing that rule if necessary, then moving on to the next rule (and looping back to the first rule when finished). For even moderately sized knowledge bases of rules and facts, this naive approach performs far too slowly. The Rete algorithm provides the basis for a more efficient implementation. A Rete-based expert system builds a network ofnodes, where each node (except the root) corresponds to a pattern occurring in the left-hand-side (the condition part) of a rule. The path from theroot nodeto aleaf nodedefines a complete rule left-hand-side. Each node has a memory of facts that satisfy that pattern. This structure is essentially a generalizedtrie. As new facts are asserted or modified, they propagate along the network, causing nodes to be annotated when that fact matches that pattern. When a fact or combination of facts causes all of the patterns for a given rule to be satisfied, a leaf node is reached and the corresponding rule is triggered.
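To make concrete what a Rete network avoids recomputing, the following Python sketch shows the naive strategy, re-testing every rule against every fact on each cycle (the rule and fact formats are illustrative assumptions):

```python
# Naive matching: on every cycle, re-test every rule against every fact.
# Rete avoids this re-testing by remembering partial matches between cycles.

facts = [("parent", "alice", "bob"), ("parent", "bob", "carol")]

def grandparent_rule(facts):
    """IF parent(x, y) AND parent(y, z) THEN grandparent(x, z)."""
    new = []
    for (p1, a, b) in facts:
        for (p2, c, d) in facts:
            if p1 == "parent" and p2 == "parent" and b == c:
                if ("grandparent", a, d) not in facts:
                    new.append(("grandparent", a, d))
    return new

rules = [grandparent_rule]

changed = True
while changed:
    changed = False
    for rule in rules:                 # loop over every rule...
        for fact in rule(facts):       # ...re-testing all facts from scratch
            facts.append(fact)
            changed = True

print(facts)   # now also contains ('grandparent', 'alice', 'carol')
```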
Rete was first used as the core engine of theOPS5production system language, which was used to build early systems including R1 for Digital Equipment Corporation. Rete has become the basis for many popular rule engines and expert system shells, includingCLIPS,Jess,Drools,IBM Operational Decision Management,BizTalkRules Engine,Soar, andEvrete. The word 'Rete' is Latin for 'net' or 'comb'. The same word is used in modern Italian to mean 'network'. Charles Forgy has reportedly stated that he adopted the term 'Rete' because of its use in anatomy to describe a network of blood vessels and nerve fibers.[2]
The Rete algorithm is designed to sacrificememoryfor increased speed. In most cases, the speed increase over naïve implementations is several orders of magnitude (because Rete performance is theoretically independent of the number of rules in the system). In very large expert systems, however, the original Rete algorithm tends to run into memory and server consumption problems. Other algorithms, both novel and Rete-based, have since been designed that require less memory (e.g. Rete*[3]or Collection Oriented Match[4]).
The Rete algorithm provides a generalized logical description of an implementation of functionality responsible for matching datatuples("facts") against productions ("rules") in a pattern-matching production system (a category ofrule engine). A production consists of one or more conditions and a set of actions that may be undertaken for each complete set of facts that match the conditions. Conditions test factattributes, including fact type specifiers/identifiers. The Rete algorithm exhibits the following major characteristics:
The Rete algorithm is widely used to implement matching functionality within pattern-matching engines that exploit a match-resolve-act cycle to supportforward chainingandinferencing.
Retes aredirected acyclic graphsthat represent higher-level rule sets. They are generally represented at run-time using a network of in-memory objects. These networks match rule conditions (patterns) to facts (relational data tuples). Rete networks act as a type of relational query processor, performingprojections,selectionsand joins conditionally on arbitrary numbers of data tuples.
Productions (rules) are typically captured and defined byanalystsanddevelopersusing some high-level rules language. They are collected into rule sets that are then translated, often at run time, into an executable Rete.
When facts are "asserted" to working memory, the engine createsworking memory elements(WMEs) for each fact. Facts aretuples, and may therefore contain an arbitrary number of data items. Each WME may hold an entire tuple, or, alternatively, each fact may be represented by a set of WMEs where each WME contains a fixed-length tuple. In this case, tuples are typically triplets (3-tuples).
Each WME enters the Rete network at a single root node. The root node passes each WME on to its child nodes, and each WME may then be propagated through the network, possibly being stored in intermediate memories, until it arrives at a terminal node.
The "left" (alpha) side of the node graph forms a discrimination network responsible for selecting individual WMEs based on simple conditional tests that match WME attributes against constant values. Nodes in the discrimination network may also perform tests that compare two or more attributes of the same WME. If a WME is successfully matched against the conditions represented by one node, it is passed to the next node. In most engines, the immediate child nodes of the root node are used to test the entity identifier or fact type of each WME. Hence, all the WMEs that represent the sameentitytype typically traverse a given branch of nodes in the discrimination network.
Within the discrimination network, each branch of alpha nodes (also called 1-input nodes) terminates at a memory, called analpha memory. These memories store collections of WMEs that match each condition in each node in a given node branch. WMEs that fail to match at least one condition in a branch are not materialised within the corresponding alpha memory. Alpha node branches may fork in order to minimise condition redundancy.
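A minimal Python sketch of this discrimination stage, assuming WMEs represented as 3-tuples and two illustrative constant-test alpha nodes, might look like:

```python
# Sketch of the alpha (discrimination) network: constant tests route 3-tuple WMEs
# into alpha memories. Node tests and fact layout are illustrative assumptions.

wmes = [
    ("block-1", "color", "red"),
    ("block-2", "color", "blue"),
    ("block-1", "size",  "large"),
]

# Each alpha node is a simple constant test over one WME.
alpha_nodes = {
    "red-things":   lambda w: w[1] == "color" and w[2] == "red",
    "large-things": lambda w: w[1] == "size"  and w[2] == "large",
}

# Each alpha memory stores the WMEs that passed its node's test.
alpha_memories = {name: [] for name in alpha_nodes}

for wme in wmes:                       # WMEs enter at the root and are tested
    for name, test in alpha_nodes.items():
        if test(wme):
            alpha_memories[name].append(wme)

print(alpha_memories)
```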
The "right" (beta) side of the graph chiefly performs joins between different WMEs. It is optional, and is only included if required. It consists of 2-input nodes where each node has a "left" and a "right" input. Each beta node sends its output to abeta memory.
In descriptions of Rete, it is common to refer to token passing within the beta network. In this article, however, we will describe data propagation in terms of WME lists, rather than tokens, in recognition of different implementation options and the underlying purpose and use of tokens. As any one WME list passes through the beta network, new WMEs may be added to it, and the list may be stored in beta memories. A WME list in a beta memory represents a partial match for the conditions in a given production.
WME lists that reach the end of a branch of beta nodes represent a complete match for a single production, and are passed to terminal nodes. These nodes are sometimes calledp-nodes, where "p" stands forproduction. Each terminal node represents a single production, and each WME list that arrives at a terminal node represents a complete set of matching WMEs for the conditions in that production. For each WME list it receives, a production node will "activate" a new production instance on the "agenda". Agendas are typically implemented asprioritised queues.
Beta nodes typically perform joins between WME lists stored in beta memories and individual WMEs stored in alpha memories. Each beta node is associated with two input memories. An alpha memory holds WMEs and performs "right" activations on the beta node each time it stores a new WME. A beta memory holds WME lists and performs "left" activations on the beta node each time it stores a new WME list. When a join node is right-activated, it compares one or more attributes of the newly stored WME from its input alpha memory against given attributes of specific WMEs in each WME list contained in the input beta memory. When a join node is left-activated it traverses a single newly stored WME list in the beta memory, retrieving specific attribute values of given WMEs. It compares these values with attribute values of each WME in the alpha memory.
Each beta node outputs WME lists that are either stored in a beta memory or sent directly to a terminal node. WME lists are stored in beta memories whenever the engine will perform additional left activations on subsequent beta nodes.
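The join behaviour can be sketched as follows (illustrative Python; the memories and the join test on the entity identifier are assumptions, not a prescribed Rete data layout):

```python
# Sketch of a beta join node: it pairs partial matches (WME lists) from its left
# (beta) input with WMEs from its right (alpha) input when a shared value agrees.

beta_memory = [                      # partial matches: lists of WMEs already joined
    [("block-1", "color", "red")],
    [("block-2", "color", "blue")],
]
alpha_memory = [                     # WMEs that passed a "size large" alpha test
    ("block-1", "size", "large"),
]

def join(beta_memory, alpha_memory):
    """Left/right inputs are joined when both refer to the same entity (index 0)."""
    out = []
    for wme_list in beta_memory:         # left activation side
        for wme in alpha_memory:         # right activation side
            if wme_list[-1][0] == wme[0]:
                out.append(wme_list + [wme])   # extend the partial match
    return out

output_memory = join(beta_memory, alpha_memory)
print(output_memory)   # [[('block-1', 'color', 'red'), ('block-1', 'size', 'large')]]
```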
Logically, a beta node at the head of a branch of beta nodes is a special case because it takes no input from any beta memory higher in the network. Different engines handle this issue in different ways. Some engines use specialised adapter nodes to connect alpha memories to the left input of beta nodes. Other engines allow beta nodes to take input directly from two alpha memories, treating one as a "left" input and the other as a "right" input. In both cases, "head" beta nodes take their input from two alpha memories.
In order to eliminate node redundancies, any one alpha or beta memory may be used to perform activations on multiple beta nodes. As well as join nodes, the beta network may contain additional node types, some of which are described below. If a Rete contains no beta network, alpha nodes feed tokens, each containing a single WME, directly to p-nodes. In this case, there may be no need to store WMEs in alpha memories.
During any one match-resolve-act cycle, the engine will find all possible matches for the facts currently asserted to working memory. Once all the current matches have been found, and corresponding production instances have been activated on the agenda, the engine determines an order in which the production instances may be "fired". This is termedconflict resolution, and the list of activated production instances is termed theconflict set. The order may be based on rule priority (salience), rule order, the time at which facts contained in each instance were asserted to the working memory, the complexity of each production, or some other criteria. Many engines allow rule developers to select between different conflict resolution strategies or to chain a selection of multiple strategies.
Conflict resolution is not defined as part of the Rete algorithm, but is used alongside the algorithm. Some specialised production systems do not perform conflict resolution.
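One possible conflict-resolution ordering, combining salience with recency, can be sketched as follows (illustrative only; neither the strategy nor the data layout is mandated by Rete):

```python
# Sketch of one possible conflict-resolution ordering: higher salience first, and
# among equal salience, the most recently activated instance first.

# Each activation: (production name, salience, activation time, matched WME list)
agenda = [
    ("log-reading",   0, 3, ["wme-7"]),
    ("raise-alarm",  10, 1, ["wme-3", "wme-5"]),
    ("close-valve",  10, 2, ["wme-3", "wme-6"]),
]

conflict_set = sorted(agenda, key=lambda a: (-a[1], -a[2]))

for name, *_ in conflict_set:
    print("fire:", name)
# fire: close-valve
# fire: raise-alarm
# fire: log-reading
```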
Having performed conflict resolution, the engine now "fires" the first production instance, executing a list of actions associated with the production. The actions act on the data represented by the production instance's WME list.
By default, the engine will continue to fire each production instance in order until all production instances have been fired. Each production instance will fire only once, at most, during any one match-resolve-act cycle. This characteristic is termedrefraction. However, the sequence of production instance firings may be interrupted at any stage by performing changes to the working memory. Rule actions can contain instructions to assert or retract WMEs from the working memory of the engine. Each time any single production instance performs one or more such changes, the engine immediately enters a new match-resolve-act cycle. This includes "updates" to WMEs currently in the working memory. Updates are represented by retracting and then re-asserting the WME. The engine undertakes matching of the changed data which, in turn, may result in changes to the list of production instances on the agenda. Hence, after the actions for any one specific production instance have been executed, previously activated instances may have been de-activated and removed from the agenda, and new instances may have been activated.
As part of the new match-resolve-act cycle, the engine performs conflict resolution on the agenda and then executes the current first instance. The engine continues to fire production instances, and to enter new match-resolve-act cycles, until no further production instances exist on the agenda. At this point the rule engine is deemed to have completed its work, and halts.
Some engines support advanced refraction strategies in which certain production instances executed in a previous cycle are not re-executed in the new cycle, even though they may still exist on the agenda.
It is possible for the engine to enter into never-ending loops in which the agenda never reaches the empty state. For this reason, most engines support explicit "halt" verbs that can be invoked from production action lists. They may also provide automaticloop detectionin which never-ending loops are automatically halted after a given number of iterations. Some engines support a model in which, instead of halting when the agenda is empty, the engine enters a wait state until new facts are asserted externally.
As for conflict resolution, the firing of activated production instances is not a feature of the Rete algorithm. However, it is a central feature of engines that use Rete networks. Some of the optimisations offered by Rete networks are only useful in scenarios where the engine performs multiple match-resolve-act cycles.
Conditional tests are most commonly used to perform selections and joins on individual tuples. However, by implementing additional beta node types, it is possible for Rete networks to performquantifications.Existential quantificationinvolves testing for the existence of at least one set of matching WMEs in working memory.Universal quantificationinvolves testing that an entire set of WMEs in working memory meets a given condition. A variation of universal quantification might test that a given number of WMEs, drawn from a set of WMEs, meets given criteria. This might be in terms of testing for either an exact number or a minimum number of matches.
Quantification is not universally implemented in Rete engines, and, where it is supported, several variations exist. A variant of existential quantification referred to asnegationis widely, though not universally, supported, and is described in seminal documents. Existentially negated conditions and conjunctions involve the use of specialised beta nodes that test for non-existence of matching WMEs or sets of WMEs. These nodes propagate WME lists only when no match is found. The exact implementation of negation varies. In one approach, the node maintains a simple count on each WME list it receives from its left input. The count specifies the number of matches found with WMEs received from the right input. The node only propagates WME lists whose count is zero. In another approach, the node maintains an additional memory on each WME list received from the left input. These memories are a form of beta memory, and store WME lists for each match with WMEs received on the right input. If a WME list does not have any WME lists in its memory, it is propagated down the network. In this approach, negation nodes generally activate further beta nodes directly, rather than storing their output in an additional beta memory. Negation nodes provide a form of 'negation as failure'.
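The count-based approach can be sketched as follows (illustrative Python; the match test and data layout are assumptions):

```python
# Sketch of the count-based negation node described above: each left-input WME list
# carries a count of matching right-input WMEs and is propagated only when that
# count is zero.

left_wme_lists = [
    [("order-1", "status", "new")],
    [("order-2", "status", "new")],
]
right_wmes = [("order-1", "flag", "on-hold")]          # "blocking" facts

def matches(wme_list, wme):
    return wme_list[-1][0] == wme[0]                    # same entity identifier

counts = {i: sum(matches(wl, w) for w in right_wmes)
          for i, wl in enumerate(left_wme_lists)}

propagated = [wl for i, wl in enumerate(left_wme_lists) if counts[i] == 0]
print(propagated)   # only the order-2 partial match survives the negation
```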
When changes are made to working memory, a WME list that previously matched no WMEs may now match newly asserted WMEs. In this case, the propagated WME list and all its extended copies need to be retracted from beta memories further down the network. The second approach described above is often used to support efficient mechanisms for removal of WME lists. When WME lists are removed, any corresponding production instances are de-activated and removed from the agenda.
Existential quantification can be performed by combining two negation beta nodes. This represents the semantics ofdouble negation(e.g., "If NOT NOT any matching WMEs, then..."). This is a common approach taken by several production systems.
The Rete algorithm does not mandate any specific approach to indexing the working memory. However, most modern production systems provide indexing mechanisms. In some cases, only beta memories are indexed, whilst in others, indexing is used for both alpha and beta memories. A good indexing strategy is a major factor in deciding the overall performance of a production system, especially when executing rule sets that result in highly combinatorial pattern matching (i.e., intensive use of beta join nodes), or, for some engines, when executing rules sets that perform a significant number of WME retractions during multiple match-resolve-act cycles. Memories are often implemented using combinations of hash tables, and hash values are used to perform conditional joins on subsets of WME lists and WMEs, rather than on the entire contents of memories. This, in turn, often significantly reduces the number of evaluations performed by the Rete network.
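A minimal sketch of such indexing, assuming WMEs as 3-tuples and a join on the entity identifier, might look like:

```python
# Sketch of hash indexing a memory on the join attribute: a join then only looks at
# the bucket whose key equals the value being joined on, rather than scanning
# the whole memory. Keys and data are illustrative.

from collections import defaultdict

alpha_memory_index = defaultdict(list)     # join value -> WMEs with that value
for wme in [("block-1", "size", "large"), ("block-2", "size", "small")]:
    alpha_memory_index[wme[0]].append(wme) # index on the entity identifier

def join_indexed(wme_list, index):
    key = wme_list[-1][0]                  # value the beta side wants to join on
    return [wme_list + [wme] for wme in index.get(key, [])]

print(join_indexed([("block-1", "color", "red")], alpha_memory_index))
```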
When a WME is retracted from working memory, it must be removed from every alpha memory in which it is stored. In addition, WME lists that contain the WME must be removed from beta memories, and activated production instances for these WME lists must be de-activated and removed from the agenda. Several implementation variations exist, including tree-based and rematch-based removal. Memory indexing may be used in some cases to optimise removal.
When defining productions in a rule set, it is common to allow conditions to be grouped using an ORconnective. In many production systems, this is handled by interpreting a single production containing multiple ORed patterns as the equivalent of multiple productions. The resulting Rete network contains sets of terminal nodes which, together, represent single productions. This approach disallows any form of short-circuiting of the ORed conditions. It can also, in some cases, lead to duplicate production instances being activated on the agenda where the same set of WMEs match multiple internal productions. Some engines provide agenda de-duplication in order to handle this issue.
The following diagram illustrates the basic Rete topology, and shows the associations between different node types and memories.
For a more detailed and complete description of the Rete algorithm, see chapter 2 of Production Matching for Large Learning Systems by Robert Doorenbos (see link below).
A possible variation is to introduce additional memories for each intermediate node in the discrimination network. This increases the overhead of the Rete, but may have advantages in situations where rules are dynamically added to or removed from the Rete, making it easier to vary the topology of the discrimination network dynamically.
An alternative implementation is described by Doorenbos.[5]In this case, the discrimination network is replaced by a set of memories and an index. The index may be implemented using ahash table. Each memory holds WMEs that match a single conditional pattern, and the index is used to reference memories by their pattern. This approach is only practical when WMEs represent fixed-length tuples, and the length of each tuple is short (e.g., 3-tuples). In addition, the approach only applies to conditional patterns that performequalitytests againstconstantvalues. When a WME enters the Rete, the index is used to locate a set of memories whose conditional pattern matches the WME attributes, and the WME is then added directly to each of these memories. In itself, this implementation contains no 1-input nodes. However, in order to implement non-equality tests, the Rete may contain additional 1-input node networks through which WMEs are passed before being placed in a memory. Alternatively, non-equality tests may be performed in the beta network described below.
A common variation is to buildlinked listsof tokens where each token holds a single WME. In this case, lists of WMEs for a partial match are represented by the linked list of tokens. This approach may be better because it eliminates the need to copy lists of WMEs from one token to another. Instead, a beta node needs only to create a new token to hold a WME it wishes to join to the partial match list, and then link the new token to a parent token stored in the input beta memory. The new token now forms the head of the token list, and is stored in the output beta memory.
Beta nodes process tokens. A token is a unit of storage within a memory and also a unit of exchange between memories and nodes. In many implementations, tokens are introduced within alpha memories where they are used to hold single WMEs. These tokens are then passed to the beta network.
Each beta node performs its work and, as a result, may create new tokens to hold a list of WMEs representing a partial match. These extended tokens are then stored in beta memories, and passed to subsequent beta nodes. In this case, the beta nodes typically pass lists of WMEs through the beta network by copying existing WME lists from each received token into new tokens and then adding further WMEs to the lists as a result of performing a join or some other action. The new tokens are then stored in the output memory.
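A minimal Python sketch of this token representation (illustrative; field names are assumptions) might look like:

```python
# Sketch of the linked-list token variation: each token holds one WME and a pointer
# to its parent token, so extending a partial match creates one new token instead of
# copying the whole WME list.

class Token:
    def __init__(self, wme, parent=None):
        self.wme = wme
        self.parent = parent

    def wme_list(self):
        """Walk the parent chain to recover the full partial match."""
        out, node = [], self
        while node is not None:
            out.append(node.wme)
            node = node.parent
        return list(reversed(out))

t1 = Token(("block-1", "color", "red"))              # token created in an alpha memory
t2 = Token(("block-1", "size", "large"), parent=t1)  # beta node extends the match

print(t2.wme_list())   # [('block-1', 'color', 'red'), ('block-1', 'size', 'large')]
```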
Although not defined by the Rete algorithm, some engines provide extended functionality to support greater control oftruth maintenance. For example, when a match is found for one production, this may result in the assertion of new WMEs which, in turn, match the conditions for another production. If a subsequent change to working memory causes the first match to become invalid, it may be that this implies that the second match is also invalid. The Rete algorithm does not define any mechanism to define and handle theselogical truthdependencies automatically. Some engines, however, support additional functionality in which truth dependencies can be automatically maintained. In this case, the retraction of one WME may lead to the automatic retraction of additional WMEs in order to maintain logical truth assertions.
The Rete algorithm does not define any approach to justification. Justification refers to mechanisms commonly required in expert and decision systems in which, at its simplest, the system reports each of the inner decisions used to reach some final conclusion. For example, an expert system might justify a conclusion that an animal is an elephant by reporting that it is large, grey, has big ears, a trunk and tusks. Some engines provide built-in justification systems in conjunction with their implementation of the Rete algorithm.
This article does not provide an exhaustive description of every possible variation or extension of the Rete algorithm. Other considerations and innovations exist. For example, engines may provide specialised support within the Rete network in order to apply pattern-matching rule processing to specificdata typesand sources such asprogrammatic objects,XMLdata orrelational data tables. Another example concerns additional time-stamping facilities provided by many engines for each WME entering a Rete network, and the use of these time-stamps in conjunction with conflict resolution strategies. Engines exhibit significant variation in the way they allow programmatic access to the engine and its working memory, and may extend the basic Rete model to support forms of parallel and distributed processing.
Several optimizations for Rete have been identified and described in academic literature. Several of these, however, apply only in very specific scenarios, and therefore often have little or no application in a general-purpose rules engine. In addition, alternative algorithms such as TREAT (developed by Daniel P. Miranker),[6] LEAPS, and Design Time Inferencing (DeTI) have been formulated that may provide additional performance improvements.
The Rete algorithm is suited to scenarios where forward chaining and "inferencing" is used to calculate new facts from existing facts, or to filter and discard facts in order to arrive at some conclusion. It is also exploited as a reasonably efficient mechanism for performing highly combinatorial evaluations of facts where large numbers of joins must be performed between fact tuples. Other approaches to performing rule evaluation, such as the use ofdecision trees, or the implementation of sequential engines, may be more appropriate for simple scenarios, and should be considered as possible alternatives.
Performance of Rete is also largely a matter of implementation choices (independent of the network topology), one of which (the use of hash tables) leads to major improvements.
Most of the performance benchmarks and comparisons available on the web are biased in some way or another. Two frequent sources of bias and unfair comparison are:
1) the use of toy problems such as the Manners and Waltz examples; such examples are useful to estimate specific properties of the implementation, but they may not reflect real performance on complex applications;
2) the use of an old implementation; for instance, the references in the following two sections (Rete II and Rete-NT) compare some commercial products to totally outdated versions of CLIPS, claiming that the commercial products may be orders of magnitude faster than CLIPS. This overlooks the fact that CLIPS 6.30 (which introduced hash tables, as in Rete II) is itself orders of magnitude faster than the version used for the comparisons (CLIPS 6.04).
In the 1980s, Charles Forgy developed a successor to the Rete algorithm named Rete II.[7] Unlike the original Rete (which is public domain), this algorithm was not disclosed. Rete II claims better performance for more complex problems (even orders of magnitude[8]), and is officially implemented in CLIPS/R2 (a C/C++ implementation) and in OPSJ (a Java implementation released in 1998). Rete II gives about a 100-to-1 performance improvement in more complex problems, as shown by KnowledgeBased Systems Corporation[9] benchmarks.
Rete II can be characterized by two areas of improvement: specific optimizations relating to the general performance of the Rete network (including the use of hashed memories in order to increase performance with larger sets of data), and the inclusion of a backward chaining algorithm tailored to run on top of the Rete network. Backward chaining alone can account for the most extreme changes in benchmarks relating to Rete vs. Rete II. Rete II is implemented in the commercial product Advisor from FICO, formerly called Fair Isaac.[10]
Jess (at least versions 5.0 and later) also adds a commercial backward chaining algorithm on top of the Rete network, but it cannot be said to fully implement Rete II, in part due to the fact that no full specification is publicly available.
In the early 2000s, the Rete III engine was developed by Charles Forgy in cooperation with FICO engineers. The Rete III algorithm, which is not Rete-NT, is the FICO trademark for Rete II and is implemented as part of the FICO Advisor engine. It is basically the Rete II engine with an API that allows access to the Advisor engine because the Advisor engine can access other FICO products.[11]
In 2010, Forgy developed a new generation of the Rete algorithm. In an InfoWorld benchmark, the algorithm was deemed 500 times faster than the original Rete algorithm and 10 times faster than its predecessor, Rete II.[12]This algorithm is now licensed to Sparkling Logic, the company that Forgy joined as investor and strategic advisor,[13][14]as the inference engine of the SMARTS product.
Considering that Rete aims to support first-order logic (basically if-then-else statements), Rete-OO[15] aims to provide a rule-based system that supports uncertainty (where the information needed to make a decision is missing or inaccurate). According to the author's proposal, the rule "if Danger then Alarm" would be improved to something such as "given the probability of Danger, there will be a certain probability of hearing an Alarm" or even "the greater the Danger, the louder the Alarm should be". To this end, it extends the Drools language (which already implements the Rete algorithm) to support probabilistic logic, like fuzzy logic and Bayesian networks.
|
https://en.wikipedia.org/wiki/Rete_algorithm
|
Inmathematics,computer science, andlogic,rewritingcovers a wide range of methods of replacing subterms of aformulawith other terms. Such methods may be achieved byrewriting systems(also known asrewrite systems,rewrite engines,[1][2]orreduction systems). In their most basic form, they consist of a set of objects, plusrelationson how to transform those objects.
Rewriting can benon-deterministic. One rule to rewrite a term could be applied in many different ways to that term, or more than one rule could be applicable. Rewriting systems then do not provide analgorithmfor changing one term to another, but a set of possible rule applications. When combined with an appropriate algorithm, however, rewrite systems can be viewed ascomputer programs, and severaltheorem provers[3]anddeclarative programming languagesare based on term rewriting.[4][5]
Inlogic, the procedure for obtaining theconjunctive normal form(CNF) of a formula can be implemented as a rewriting system.[6]For example, the rules of such a system would be:
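A typical rule set for this purpose, consisting of double negation elimination, De Morgan's laws, and distribution of disjunction over conjunction, is:

$$\begin{aligned}
\neg\neg x &\to x\\
\neg(x \lor y) &\to \neg x \land \neg y\\
\neg(x \land y) &\to \neg x \lor \neg y\\
(x \land y) \lor z &\to (x \lor z) \land (y \lor z)\\
x \lor (y \land z) &\to (x \lor y) \land (x \lor z)
\end{aligned}$$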
For each rule, eachvariabledenotes a subexpression, and the symbol (→{\displaystyle \to }) indicates that an expression matching the left hand side of it can be rewritten to one matching the right hand side of it. In such a system, each rule is alogical equivalence, so performing a rewrite on an expression by these rules does not change the truth value of it. Other useful rewriting systems in logic may not preserve truth values, see e.g.equisatisfiability.
Term rewriting systems can be employed to compute arithmetic operations onnatural numbers.
To this end, each such number has to be encoded as aterm.
The simplest encoding is the one used in thePeano axioms, based on the constant 0 (zero) and thesuccessor functionS. For example, the numbers 0, 1, 2, and 3 are represented by the terms 0, S(0), S(S(0)), and S(S(S(0))), respectively.
The following term rewriting system can then be used to compute sum and product of given natural numbers.[7]
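One standard choice of rules, writing A for addition and M for multiplication, is:

$$\begin{aligned}
A(0, y) &\to y &&(A_1)\\
A(S(x), y) &\to S(A(x, y)) &&(A_2)\\
M(0, y) &\to 0 &&(M_1)\\
M(S(x), y) &\to A(M(x, y), y) &&(M_2)
\end{aligned}$$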
For example, the computation of 2+2 to result in 4 can be duplicated by term rewriting as follows:
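With the rules above, and 2 encoded as S(S(0)), one possible derivation is:

$$A(S(S(0)), S(S(0))) \;\xrightarrow{A_2}\; S(A(S(0), S(S(0)))) \;\xrightarrow{A_2}\; S(S(A(0, S(S(0))))) \;\xrightarrow{A_1}\; S(S(S(S(0))))$$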
where the notation above each arrow indicates the rule used for each rewrite.
As another example, the computation of 2⋅2 looks like:
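With the rules above, one possible reduction is:

$$M(S(S(0)), S(S(0))) \;\xrightarrow{M_2}\; A(M(S(0), S(S(0))), S(S(0))) \;\xrightarrow{M_2}\; A(A(M(0, S(S(0))), S(S(0))), S(S(0))) \;\xrightarrow{M_1}\; A(A(0, S(S(0))), S(S(0))) \;\xrightarrow{A_1}\; A(S(S(0)), S(S(0))) \;\xrightarrow{*}\; S(S(S(S(0))))$$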
where the last step comprises the previous example computation.
Inlinguistics,phrase structure rules, also calledrewrite rules, are used in some systems ofgenerative grammar,[8]as a means of generating the grammatically correct sentences of a language. Such a rule typically takes the formA→X{\displaystyle {\rm {A\rightarrow X}}}, where A is asyntactic categorylabel, such asnoun phraseorsentence, and X is a sequence of such labels ormorphemes, expressing the fact that A can be replaced by X in generating the constituent structure of a sentence. For example, the ruleS→NPVP{\displaystyle {\rm {S\rightarrow NP\ VP}}}means that a sentence can consist of a noun phrase (NP) followed by averb phrase(VP); further rules will specify what sub-constituents a noun phrase and a verb phrase can consist of, and so on.
From the above examples, it is clear that we can think of rewriting systems in an abstract manner. We need to specify a set of objects and the rules that can be applied to transform them. The most general (unidimensional) setting of this notion is called anabstract reduction system[9]orabstract rewriting system(abbreviatedARS).[10]An ARS is simply a setAof objects, together with abinary relation→ onAcalled thereduction relation,rewrite relation[11]or justreduction.[9]
Many notions and notations can be defined in the general setting of an ARS.→∗{\displaystyle {\overset {*}{\rightarrow }}}is thereflexive transitive closureof→{\displaystyle \rightarrow }.↔{\displaystyle \leftrightarrow }is thesymmetric closureof→{\displaystyle \rightarrow }.↔∗{\displaystyle {\overset {*}{\leftrightarrow }}}is thereflexive transitive symmetric closureof→{\displaystyle \rightarrow }. Theword problemfor an ARS is determining, givenxandy, whetherx↔∗y{\displaystyle x{\overset {*}{\leftrightarrow }}y}. An objectxinAis calledreducibleif there exists some otheryinAsuch thatx→y{\displaystyle x\rightarrow y}; otherwise it is calledirreducibleor anormal form. An objectyis called a "normal form ofx" ifx→∗y{\displaystyle x{\stackrel {*}{\rightarrow }}y}, andyis irreducible. If the normal form ofxis unique, then this is usually denoted withx↓{\displaystyle x{\downarrow }}. If every object has at least one normal form, the ARS is callednormalizing.x↓y{\displaystyle x\downarrow y}orxandyare said to bejoinableif there exists somezwith the property thatx→∗z←∗y{\displaystyle x{\overset {*}{\rightarrow }}z{\overset {*}{\leftarrow }}y}. An ARS is said to possess theChurch–Rosser propertyifx↔∗y{\displaystyle x{\overset {*}{\leftrightarrow }}y}impliesx↓y{\displaystyle x\downarrow y}. An ARS isconfluentif for allw,x, andyinA,x←∗w→∗y{\displaystyle x{\overset {*}{\leftarrow }}w{\overset {*}{\rightarrow }}y}impliesx↓y{\displaystyle x\downarrow y}. An ARS islocally confluentif and only if for allw,x, andyinA,x←w→y{\displaystyle x\leftarrow w\rightarrow y}impliesx↓y{\displaystyle x{\mathbin {\downarrow }}y}. An ARS is said to beterminatingornoetherianif there is no infinite chainx0→x1→x2→⋯{\displaystyle x_{0}\rightarrow x_{1}\rightarrow x_{2}\rightarrow \cdots }. A confluent and terminating ARS is calledconvergentorcanonical.
Important theorems for abstract rewriting systems are that an ARS isconfluentiffit has the Church–Rosser property,Newman's lemma(a terminating ARS is confluent if and only if it is locally confluent), and that theword problemfor an ARS isundecidablein general.
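These notions are straightforward to explore programmatically for a finite ARS. The following sketch (names are illustrative) represents the reduction relation as a dictionary mapping each object to its one-step successors:

```python
from collections import deque

def reachable(ars, x):
    """All y with x ->* y (reflexive transitive closure starting from x)."""
    seen, queue = {x}, deque([x])
    while queue:
        for nxt in ars.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def normal_forms(ars, x):
    """Normal forms of x: reachable objects that are irreducible."""
    return {y for y in reachable(ars, x) if not ars.get(y)}

def joinable(ars, x, y):
    """True if some z satisfies x ->* z <-* y."""
    return bool(reachable(ars, x) & reachable(ars, y))

# Example ARS: a -> b, a -> c, b -> d, c -> d (terminating and confluent)
ars = {"a": {"b", "c"}, "b": {"d"}, "c": {"d"}}
print(normal_forms(ars, "a"))      # {'d'}
print(joinable(ars, "b", "c"))     # True
```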
Astring rewriting system(SRS), also known assemi-Thue system, exploits thefree monoidstructure of thestrings(words) over analphabetto extend a rewriting relation,R{\displaystyle R}, toallstrings in the alphabet that contain left- and respectively right-hand sides of some rules assubstrings. Formally a semi-Thue system is atuple(Σ,R){\displaystyle (\Sigma ,R)}whereΣ{\displaystyle \Sigma }is a (usually finite) alphabet, andR{\displaystyle R}is a binary relation between some (fixed) strings in the alphabet, called the set ofrewrite rules. Theone-step rewriting relation→R{\displaystyle {\underset {R}{\rightarrow }}}induced byR{\displaystyle R}onΣ∗{\displaystyle \Sigma ^{*}}is defined as: ifs,t∈Σ∗{\displaystyle s,t\in \Sigma ^{*}}are any strings, thens→Rt{\displaystyle s{\underset {R}{\rightarrow }}t}if there existx,y,u,v∈Σ∗{\displaystyle x,y,u,v\in \Sigma ^{*}}such thats=xuy{\displaystyle s=xuy},t=xvy{\displaystyle t=xvy}, anduRv{\displaystyle uRv}. Since→R{\displaystyle {\underset {R}{\rightarrow }}}is a relation onΣ∗{\displaystyle \Sigma ^{*}}, the pair(Σ∗,→R){\displaystyle (\Sigma ^{*},{\underset {R}{\rightarrow }})}fits the definition of an abstract rewriting system. Since the empty string is inΣ∗{\displaystyle \Sigma ^{*}},R{\displaystyle R}is a subset of→R{\displaystyle {\underset {R}{\rightarrow }}}. If the relationR{\displaystyle R}issymmetric, then the system is called aThue system.
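As a small illustration of the one-step relation, the following sketch computes every string reachable from s in a single rewrite, representing the rule set as a list of (u, v) pairs:

```python
def one_step_rewrites(s, rules):
    """All t with s ->_R t, i.e. s = x u y and t = x v y for some rule u -> v."""
    results = set()
    for u, v in rules:
        start = s.find(u)
        while start != -1:                    # every occurrence of u as a substring
            results.add(s[:start] + v + s[start + len(u):])
            start = s.find(u, start + 1)
    return results

# Rules ab -> empty string and ba -> empty string over the alphabet {a, b}
# (the presentation of the free group on one generator mentioned further below):
print(one_step_rewrites("abba", [("ab", ""), ("ba", "")]))   # {'ba', 'ab'}
```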
In a SRS, the reduction relation→R∗{\displaystyle {\overset {*}{\underset {R}{\rightarrow }}}}is compatible with the monoid operation, meaning thatx→R∗y{\displaystyle x{\overset {*}{\underset {R}{\rightarrow }}}y}impliesuxv→R∗uyv{\displaystyle uxv{\overset {*}{\underset {R}{\rightarrow }}}uyv}for all stringsx,y,u,v∈Σ∗{\displaystyle x,y,u,v\in \Sigma ^{*}}. Similarly, the reflexive transitive symmetric closure of→R{\displaystyle {\underset {R}{\rightarrow }}}, denoted↔R∗{\displaystyle {\overset {*}{\underset {R}{\leftrightarrow }}}}, is acongruence, meaning it is anequivalence relation(by definition) and it is also compatible with string concatenation. The relation↔R∗{\displaystyle {\overset {*}{\underset {R}{\leftrightarrow }}}}is called theThue congruencegenerated byR{\displaystyle R}. In a Thue system, i.e. ifR{\displaystyle R}is symmetric, the rewrite relation→R∗{\displaystyle {\overset {*}{\underset {R}{\rightarrow }}}}coincides with the Thue congruence↔R∗{\displaystyle {\overset {*}{\underset {R}{\leftrightarrow }}}}.
The notion of a semi-Thue system essentially coincides with thepresentation of a monoid. Since↔R∗{\displaystyle {\overset {*}{\underset {R}{\leftrightarrow }}}}is a congruence, we can define thefactor monoidMR=Σ∗/↔R∗{\displaystyle {\mathcal {M}}_{R}=\Sigma ^{*}/{\overset {*}{\underset {R}{\leftrightarrow }}}}of the free monoidΣ∗{\displaystyle \Sigma ^{*}}by the Thue congruence. If a monoidM{\displaystyle {\mathcal {M}}}isisomorphicwithMR{\displaystyle {\mathcal {M}}_{R}}, then the semi-Thue system(Σ,R){\displaystyle (\Sigma ,R)}is called amonoid presentationofM{\displaystyle {\mathcal {M}}}.
We immediately get some very useful connections with other areas of algebra. For example, the alphabet{a,b}{\displaystyle \{a,b\}}with the rules{ab→ε,ba→ε}{\displaystyle \{ab\rightarrow \varepsilon ,ba\rightarrow \varepsilon \}}, whereε{\displaystyle \varepsilon }is theempty string, is a presentation of thefree groupon one generator. If instead the rules are just{ab→ε}{\displaystyle \{ab\rightarrow \varepsilon \}}, then we obtain a presentation of thebicyclic monoid. Thus semi-Thue systems constitute a natural framework for solving theword problemfor monoids and groups. In fact, every monoid has a presentation of the form(Σ,R){\displaystyle (\Sigma ,R)}, i.e. it may always be presented by a semi-Thue system, possibly over an infinite alphabet.
The word problem for a semi-Thue system is undecidable in general; this result is sometimes known as thePost–Markov theorem.[12]
Aterm rewriting system(TRS) is a rewriting system whose objects areterms, which are expressions with nested sub-expressions. For example, the system shown under§ Logicabove is a term rewriting system. The terms in this system are composed of binary operators(∨){\displaystyle (\vee )}and(∧){\displaystyle (\wedge )}and the unary operator(¬){\displaystyle (\neg )}. Also present in the rules are variables, which represent any possible term (though a single variable always represents the same term throughout a single rule).
In contrast to string rewriting systems, whose objects are sequences of symbols, the objects of a term rewriting system form aterm algebra. A term can be visualized as a tree of symbols, the set of admitted symbols being fixed by a givensignature. As a formalism, term rewriting systems have the full power ofTuring machines, that is, everycomputable functioncan be defined by a term rewriting system.[13]
Some programming languages are based on term rewriting. One such example is Pure, a functional programming language for mathematical applications.[14][15]
Arewrite ruleis a pair ofterms, commonly written asl→r{\displaystyle l\rightarrow r}, to indicate that the left-hand sidelcan be replaced by the right-hand sider. Aterm rewriting systemis a setRof such rules. A rulel→r{\displaystyle l\rightarrow r}can beappliedto a termsif the left termlmatchessomesubtermofs, that is, if there is somesubstitutionσ{\displaystyle \sigma }such that the subterm ofs{\displaystyle s}rooted at somepositionpis the result of applying the substitutionσ{\displaystyle \sigma }to the terml. The subterm matching the left hand side of the rule is called aredexorreducible expression.[16]The result termtof this rule application is then the result ofreplacing the subtermat positionpinsby the termr{\displaystyle r}with the substitutionσ{\displaystyle \sigma }applied, see picture 1. In this case,s{\displaystyle s}is said to berewritten in one step, orrewritten directly, tot{\displaystyle t}by the systemR{\displaystyle R}, formally denoted ass→Rt{\displaystyle s\rightarrow _{R}t},s→Rt{\displaystyle s{\underset {R}{\rightarrow }}t}, or ass→Rt{\displaystyle s{\overset {R}{\rightarrow }}t}by some authors.
If a termt1{\displaystyle t_{1}}can be rewritten in several steps into a termtn{\displaystyle t_{n}}, that is, ift1→Rt2→R⋯→Rtn{\displaystyle t_{1}{\underset {R}{\rightarrow }}t_{2}{\underset {R}{\rightarrow }}\cdots {\underset {R}{\rightarrow }}t_{n}}, the termt1{\displaystyle t_{1}}is said to berewrittentotn{\displaystyle t_{n}}, formally denoted ast1→R+tn{\displaystyle t_{1}{\overset {+}{\underset {R}{\rightarrow }}}t_{n}}. In other words, the relation→R+{\displaystyle {\overset {+}{\underset {R}{\rightarrow }}}}is thetransitive closureof the relation→R{\displaystyle {\underset {R}{\rightarrow }}}; often, also the notation→R∗{\displaystyle {\overset {*}{\underset {R}{\rightarrow }}}}is used to denote thereflexive-transitive closureof→R{\displaystyle {\underset {R}{\rightarrow }}}, that is,s→R∗t{\displaystyle s{\overset {*}{\underset {R}{\rightarrow }}}t}ifs=t{\displaystyle s=t}ors→R+t{\displaystyle s{\overset {+}{\underset {R}{\rightarrow }}}t}.[17]A term rewriting given by a setR{\displaystyle R}of rules can be viewed as an abstract rewriting system as definedabove, with terms as its objects and→R{\displaystyle {\underset {R}{\rightarrow }}}as its rewrite relation.
For example,x∗(y∗z)→(x∗y)∗z{\displaystyle x*(y*z)\rightarrow (x*y)*z}is a rewrite rule, commonly used to establish a normal form with respect to the associativity of∗{\displaystyle *}.
That rule can be applied at the numerator in the terma∗((a+1)∗(a+2))1∗(2∗3){\displaystyle {\frac {a*((a+1)*(a+2))}{1*(2*3)}}}with the matching substitution{x↦a,y↦a+1,z↦a+2}{\displaystyle \{x\mapsto a,\;y\mapsto a+1,\;z\mapsto a+2\}}, see picture 2.[note 2]Applying that substitution to the rule's right-hand side yields the term(a∗(a+1))∗(a+2){\displaystyle (a*(a+1))*(a+2)}, and replacing the numerator by that term yields(a∗(a+1))∗(a+2)1∗(2∗3){\displaystyle {\frac {(a*(a+1))*(a+2)}{1*(2*3)}}}, which is the result term of applying the rewrite rule. Altogether, applying the rewrite rule has achieved what is called "applying the associativity law for∗{\displaystyle *}toa∗((a+1)∗(a+2))1∗(2∗3){\displaystyle {\frac {a*((a+1)*(a+2))}{1*(2*3)}}}" in elementary algebra. Alternately, the rule could have been applied to the denominator of the original term, yieldinga∗((a+1)∗(a+2))(1∗2)∗3{\displaystyle {\frac {a*((a+1)*(a+2))}{(1*2)*3}}}.
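The matching and replacement steps in this example can be sketched in a few lines, assuming terms are represented as nested tuples whose first element is the operator, and treating bare strings in a pattern as variables (the representation and names are illustrative):

```python
def match(pattern, term, subst=None):
    """Return a substitution making `pattern` equal to `term`, or None if none exists."""
    subst = dict(subst or {})
    if isinstance(pattern, str):                      # a variable: bind it consistently
        if pattern in subst and subst[pattern] != term:
            return None
        subst[pattern] = term
        return subst
    if not isinstance(term, tuple) or len(term) != len(pattern) or term[0] != pattern[0]:
        return None
    for p, t in zip(pattern[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def substitute(term, subst):
    """Apply a substitution to a term."""
    if isinstance(term, str):
        return subst.get(term, term)
    return (term[0],) + tuple(substitute(t, subst) for t in term[1:])

# The associativity rule  x*(y*z) -> (x*y)*z
lhs = ('*', 'x', ('*', 'y', 'z'))
rhs = ('*', ('*', 'x', 'y'), 'z')

# The redex a*((a+1)*(a+2)) from the example, with the '+'-terms as subterms
redex = ('*', 'a', ('*', ('+', 'a', '1'), ('+', 'a', '2')))
sigma = match(lhs, redex)        # {'x': 'a', 'y': ('+','a','1'), 'z': ('+','a','2')}
print(substitute(rhs, sigma))    # ('*', ('*', 'a', ('+', 'a', '1')), ('+', 'a', '2'))
```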
Termination issues of rewrite systems in general are handled inAbstract rewriting system#Termination and convergence. For term rewriting systems in particular, the following additional subtleties are to be considered.
Termination even of a system consisting of one rule with alinearleft-hand side is undecidable.[18][19]Termination is also undecidable for systems using only unary function symbols; however, it is decidable for finitegroundsystems.[20]
The following term rewrite system is normalizing,[note 3]but not terminating,[note 4]and not confluent:[21]f(x,x)→g(x),f(x,g(x))→b,h(c,x)→f(h(x,c),h(x,x)).{\displaystyle {\begin{aligned}f(x,x)&\rightarrow g(x),\\f(x,g(x))&\rightarrow b,\\h(c,x)&\rightarrow f(h(x,c),h(x,x)).\\\end{aligned}}}
The following two examples of terminating term rewrite systems are due to Toyama:[22]
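The first system, R1, consists of the single rule

$$f(0, 1, x) \to f(x, x, x)$$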
and
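the second system, R2, of the two rules

$$g(x, y) \to x, \qquad g(x, y) \to y.$$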
Their union is a non-terminating system, since
f(g(0,1),g(0,1),g(0,1))→f(0,g(0,1),g(0,1))→f(0,1,g(0,1))→f(g(0,1),g(0,1),g(0,1))→⋯{\displaystyle {\begin{aligned}&f(g(0,1),g(0,1),g(0,1))\\\rightarrow &f(0,g(0,1),g(0,1))\\\rightarrow &f(0,1,g(0,1))\\\rightarrow &f(g(0,1),g(0,1),g(0,1))\\\rightarrow &\cdots \end{aligned}}}This result disproves a conjecture ofDershowitz,[23]who claimed that the union of two terminating term rewrite systemsR1{\displaystyle R_{1}}andR2{\displaystyle R_{2}}is again terminating if all left-hand sides ofR1{\displaystyle R_{1}}and right-hand sides ofR2{\displaystyle R_{2}}arelinear, and there are no "overlaps" between left-hand sides ofR1{\displaystyle R_{1}}and right-hand sides ofR2{\displaystyle R_{2}}. All these properties are satisfied by Toyama's examples.
SeeRewrite orderandPath ordering (term rewriting)for ordering relations used in termination proofs for term rewriting systems.
Higher-order rewriting systems are a generalization of first-order term rewriting systems tolambda terms, allowing higher order functions and bound variables.[24]Various results about first-order TRSs can be reformulated for HRSs as well.[25]
Graph rewrite systemsare another generalization of term rewrite systems, operating ongraphsinstead of (ground-)terms/ their correspondingtreerepresentation.
Trace theoryprovides a means for discussing multiprocessing in more formal terms, such as via thetrace monoidand thehistory monoid. Rewriting can be performed in trace systems as well.
|
https://en.wikipedia.org/wiki/Term_rewriting
|
Artificial immune systems(AIS) are a class ofrule-based machine learningsystems inspired by the principles and processes of the vertebrateimmune system. The algorithms are typically modeled after the immune system's characteristics oflearningandmemoryforproblem-solving, specifically for the computational techniques calledEvolutionary ComputationandAmorphous Computation.
The field of artificial immune systems (AIS) is concerned with abstracting the structure and function of theimmune systemto computational systems, and investigating the application of these systems towards solving computational problems from fields like mathematics, engineering, and information technology. AIS is a sub-field ofbiologically inspired computing, andnatural computation, with interests inmachine learningand belonging to the broader field ofArtificial Intelligence, such asArtificial General Intelligence.
Artificial immune systems (AIS) are adaptive systems, inspired by theoretical immunology and observed immune functions, principles and models, which are applied to problem solving.[1]
AIS is distinct fromcomputational immunologyandtheoretical biologythat are concerned with simulating immunology using computational and mathematical models towards better understanding the immune system, although such models initiated the field of AIS and continue to provide a fertile ground for inspiration. Finally, the field of AIS is not concerned with the investigation of the immune system as a substrate for computation, unlike other fields such asDNA computing.
AIS emerged in the mid-1980s with articles authored by Farmer, Packard and Perelson (1986) and Bersini and Varela (1990) on immune networks. However, it was only in the mid-1990s that AIS became a field in its own right. Forrestet al.(onnegative selection) and Kephartet al.[2]published their first papers on AIS in 1994, and Dasgupta conducted extensive studies on Negative Selection Algorithms. Hunt and Cooke started the works on Immune Network models in 1995; Timmis and Neal continued this work and made some improvements. De Castro & Von Zuben's and Nicosia & Cutello's work (onclonal selection) became notable in 2002. The first book on Artificial Immune Systems was edited by Dasgupta in 1999.
Currently, new ideas along AIS lines, such as danger theory and algorithms inspired by the innate immune system, are also being explored. Some believe that these new ideas do not yet offer any truly 'new' abstraction over and above existing AIS algorithms; this, however, is hotly debated, and the debate provides one of the main driving forces for AIS development at the moment. Other recent developments involve the exploration of degeneracy in AIS models,[3][4] which is motivated by its hypothesized role in open ended learning and evolution.[5][6]
Originally AIS set out to find efficient abstractions of processes found in theimmune systembut, more recently, it is becoming interested in modelling the biological processes and in applying immune algorithms to bioinformatics problems.
In 2008, Dasgupta and Nino[7]published a textbook onimmunological computationwhich presents a compendium of up-to-date work related to immunity-based techniques and describes a wide variety of applications.
The common techniques are inspired by specific immunological theories that explain the function and behavior of themammalianadaptive immune system.
|
https://en.wikipedia.org/wiki/Artificial_immune_system
|
Anassociative classifier(AC) is a kind ofsupervised learningmodel that usesassociation rulesto assign a target value. The term associative classification was coined byBing Liuet al.,[1]in which the authors defined a model made of rules "whose right-hand side are restricted to the classification class attribute".
The model generated by an AC and used to label new records consists ofassociation rules, where the consequent corresponds to the class label. As such, they can also be seen as a list of "if-then" clauses: if the record matches some criteria (expressed in the left side of the rule, also called antecedent), it is then labeled accordingly to the class on the right side of the rule (or consequent).
Most ACs read the list of rules in order, and apply the first matching rule to label the new record.[2]
The rules of an AC inherit some of the metrics of association rules, like the support or the confidence.[3]Metrics can be used to order or filter the rules in the model[4]and to evaluate their quality.
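A minimal sketch of this first-match strategy, assuming each rule is a triple of antecedent item set, class label and confidence (the structure and names are illustrative, not any particular AC implementation):

```python
def classify(record_items, rules, default_label=None):
    """Label a record with the class of the first matching rule, by decreasing confidence."""
    for antecedent, label, _confidence in sorted(rules, key=lambda r: -r[2]):
        if antecedent <= record_items:      # the record satisfies the rule's antecedent
            return label                    # ... so it is labeled with the consequent
    return default_label

rules = [
    (frozenset({"outlook=sunny", "humidity=high"}), "no",  0.92),
    (frozenset({"outlook=overcast"}),               "yes", 0.88),
]
record = frozenset({"outlook=sunny", "humidity=high", "wind=weak"})
print(classify(record, rules, default_label="yes"))   # no
```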
The first proposal of a classification model made of association rules was FBM. The approach was popularized by CBA,[1]although other authors had also previously proposed the mining of association rules for classification.[5]Other authors have since then proposed multiple changes to the initial model, like the addition of a redundant rule pruning phase[6]or the exploitation of Emerging Patterns.[7]
Notable implementations include:
|
https://en.wikipedia.org/wiki/Associative_classifier
|
Rule inductionis an area ofmachine learningin which formal rules are extracted from a set of observations. The rules extracted may represent a fullscientific modelof the data, or merely represent localpatternsin the data.
Data mining in general and rule induction in particular aim to create algorithms without explicit human programming by analyzing existing data structures.[1]: 415- In the simplest case, a rule is expressed as an "if-then statement" and is created, for example, with the ID3 algorithm for decision tree learning.[2]: 7[1]: 348 Rule learning algorithms take training data as input and create rules by partitioning the table with cluster analysis.[2]: 7 A possible alternative to the ID3 algorithm is genetic programming, which evolves a program until it fits the data.[3]: 2
Different algorithms can be created and tested against input data using the WEKA software.[3]: 125 Additional tools are machine learning libraries for Python, such as scikit-learn.
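For instance, assuming scikit-learn is installed, a decision tree can be induced from data and its branches printed as nested if-then rules:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
# Induce a small tree from the observations; each root-to-leaf path is a rule.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)
print(export_text(tree, feature_names=list(iris.feature_names)))
```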
Some major rule induction paradigms are:
Some rule induction algorithms are:
|
https://en.wikipedia.org/wiki/Rule_induction
|
Rule-based machine translation(RBMT) is a classical approach ofmachine translationsystems based onlinguisticinformation about source and target languages. Such information is retrieved from (unilingual, bilingual or multilingual) dictionaries and grammars covering the main semantic, morphological, and syntactic regularities of each language. Having input sentences, an RBMT system generates output sentences on the basis of analysis of both the source and the target languages involved. RBMT has been progressively superseded by more efficient methods, particularlyneural machine translation.[1]
The first RBMT systems were developed in the early 1970s. The most important steps of this evolution were the emergence of the following RBMT systems:
Today, other common RBMT systems include:
There are three different types of rule-based machine translation systems:
RBMT systems can also be characterized as the opposite of example-based machine translation systems, whereas hybrid machine translation systems make use of many principles derived from RBMT.
The main approach of RBMT systems is based on linking the structure of the given input sentence with the structure of the demanded output sentence, necessarily preserving their unique meaning. The following example can illustrate the general frame of RBMT:
Minimally, to get a German translation of this English sentence one needs:
And finally, we need rules according to which one can relate these two structures together.
Accordingly, we can state the followingstages of translation:
Often only partial parsing is sufficient to get to the syntactic structure of the source sentence and to map it onto the structure of the target sentence.
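The following toy sketch (an invented mini-lexicon and a deliberately naive "partial parse", not a real RBMT system) illustrates the analysis, transfer, and generation flow for a single English-to-German sentence pattern:

```python
LEXICON = {"a": "ein", "girl": "Mädchen", "eats": "isst", "an": "einen", "apple": "Apfel"}

def analyze(sentence):
    """Naive partial parse: subject = first two words, then the verb, then the object."""
    words = sentence.lower().rstrip(".").split()
    return {"subj": words[:2], "verb": words[2], "obj": words[3:]}

def transfer_and_generate(parse):
    """Structural transfer (trivially SVO -> SVO here) followed by word-for-word generation."""
    order = parse["subj"] + [parse["verb"]] + parse["obj"]
    out = " ".join(LEXICON.get(w, w) for w in order)
    return out[0].upper() + out[1:] + "."

print(transfer_and_generate(analyze("A girl eats an apple.")))  # Ein Mädchen isst einen Apfel.
```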
Anontologyis a formal representation of knowledge that includes the concepts (such as objects, processes etc.) in a domain and some relations between them. If the stored information is of linguistic nature, one can speak of a lexicon.[6]InNLP, ontologies can be used as a source of knowledge for machine translation systems. With access to a large knowledge base, rule-based systems can be enabled to resolve many (especially lexical) ambiguities on their own. In the following classic examples, as humans, we are able to interpret theprepositional phraseaccording to the context because we use our world knowledge, stored in our lexicons:
I saw a man/star/molecule with a microscope/telescope/binoculars.[6]
Since the syntax does not change, a traditional rule-based machine translation system may not be able to differentiate between the meanings. With a large enough ontology as a source of knowledge however, the possible interpretations of ambiguous words in a specific context can be reduced.[6]
The ontology generated for the PANGLOSS knowledge-based machine translation system in 1993 may serve as an example of how an ontology forNLPpurposes can be compiled:[7][8]
The RBMT system contains:
The RBMT system makes use of the following:
|
https://en.wikipedia.org/wiki/Rule-based_machine_translation
|
Incomputer science, arule-based systemis a computer system in which domain-specificknowledgeis represented in the form of rules and general-purposereasoningis used to solve problems in the domain.
Two different kinds of rule-based systems emerged within the field ofartificial intelligencein the 1970s:
The differences and relationships between these two kinds of rule-based system have been a major source of misunderstanding and confusion.
Both kinds of rule-based systems use eitherforwardorbackward chaining, in contrast withimperative programs, which execute commands listed sequentially. However, logic programming systems have a logical interpretation, whereas production systems do not.
A classic example of a production rule-based system is the domain-specificexpert systemthat uses rules to make deductions or choices.[1]For example, an expert system might help a doctor choose the correct diagnosis based on a cluster of symptoms, or select tactical moves to play a game.
Rule-based systems can be used to performlexical analysistocompileorinterpretcomputer programs, or innatural language processing.[2]
Rule-based programming attempts to derive execution instructions from a starting set of data and rules. This is a more indirect method than that employed by animperative programming language, which lists execution steps sequentially.
A typical rule-based system has four basic components:[3]
Whereas the matching phase of the inference engine has a logical interpretation, the conflict resolution and action phases do not. Instead, "their semantics is usually described as a series of applications of various state-changing operators, which often gets quite involved (depending on the choices made in deciding which ECA rules fire, when, and so forth), and they can hardly be regarded as declarative".[5]
The logic programming family of computer systems includes the programming language Prolog, the database language Datalog and the knowledge representation and problem-solving language Answer Set Programming (ASP). In all of these languages, rules are written in the form of clauses:

A :- B1, ..., Bn.
and are read as declarative sentences in logical form:

A if B1 and ... and Bn.
In the simplest case ofHorn clauses(or "definite" clauses), which are a subset offirst-order logic, all of the A, B1, ..., Bnareatomic formulae.
Although Horn clause logic programs areTuring complete,[6][7]for many practical applications, it is useful to extend Horn clause programs by allowing negative conditions, implemented bynegation as failure. Such extended logic programs have the knowledge representation capabilities of anon-monotonic logic.
The most obvious difference between the two kinds of systems is that production rules are typically written in the forward direction,if A then B, and logic programming rules are typically written in the backward direction,B if A. In the case of logic programming rules, this difference is superficial and purely syntactic. It does not affect the semantics of the rules. Nor does it affect whether the rules are used to reason backwards, Prolog style, to reduce the goalBto the subgoalsA, or whether they are used, Datalog style, to deriveBfromA.
In the case of production rules, the forward direction of the syntax reflects the stimulus-response character of most production rules, with the stimulusAcoming before the responseB. Moreover, even in cases when the response is simply to draw a conclusionBfrom an assumptionA, as inmodus ponens, the match-resolve-act cycle is restricted to reasoning forwards fromAtoB. Reasoning backwards in a production system would require the use of an entirely different kind of inference engine.
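A minimal sketch of such a forward-reasoning match-act loop over production rules of the form "if all conditions hold, assert the conclusion" (the representation is illustrative, with conflict resolution trivially firing every applicable rule):

```python
def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions are all satisfied, adding their conclusions."""
    facts = set(facts)
    changed = True
    while changed:                              # the recognize-act cycle
        changed = False
        for conditions, conclusion in rules:    # match
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)           # act: assert a new fact
                changed = True
    return facts

rules = [({"large", "grey", "has_trunk"}, "elephant"),
         ({"elephant"}, "mammal")]
print(forward_chain({"large", "grey", "has_trunk"}, rules))
# {'large', 'grey', 'has_trunk', 'elephant', 'mammal'}
```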
In his Introduction to Cognitive Science,[8]Paul Thagardincludes logic and rules as alternative approaches to modelling human thinking. He does not consider logic programs in general, but he considers Prolog to be, not a rule-based system, but "a programming language that uses logic representations and deductive techniques" (page 40).
He argues that rules, which have the formIF condition THEN action, are "very similar" to logical conditionals, but they are simpler and have greater psychological plausibility (page 51). Among other differences between logic and rules, he argues that logic uses deduction, but rules use search (page 45) and can be used to reason either forward or backward (page 47). Sentences in logic "have to be interpreted asuniversally true", but rules can bedefaults, which admit exceptions (page 44). He does not observe that all of these features of rules apply to logic programming systems.
|
https://en.wikipedia.org/wiki/Rule-based_system
|
RuleML is a global initiative, led by the non-profit organization RuleML Inc., that is devoted to advancing research and industry standards design activities in the technical area of rules that are semantic and highly interoperable. The standards design takes the form primarily of a markup language, also known as RuleML. The research activities include an annual research conference, the RuleML Symposium, also known as RuleML for short. Founded in fall 2000 by Harold Boley, Benjamin Grosof, and Said Tabet, RuleML was originally devoted purely to standards design, but then quickly branched out into the related activities of coordinating research and organizing an annual research conference starting in 2002. The M in RuleML is sometimes interpreted as standing for Markup and Modeling. The markup language was developed to express both forward (bottom-up) and backward (top-down) rules in XML for deduction, rewriting, and further inferential-transformational tasks. It is defined by the Rule Markup Initiative, an open network of individuals and groups from both industry and academia[1] that was formed to develop a canonical Web language for rules using XML markup and transformations from and to other rule standards/systems.
Markup standards and initiatives related to RuleML include:
|
https://en.wikipedia.org/wiki/RuleML
|
Abusiness rules engineis asoftware systemthat executes one or morebusiness rulesin a runtimeproduction environment. The rules might come from legalregulation("An employee can be fired for any reason or no reason but not for an illegal reason"), company policy ("All customers that spend more than $100 at one time will receive a 10% discount"), or other sources. A business rule system enables these company policies and other operational decisions to be defined, tested, executed and maintained separately fromapplication code.
Rule engines typically support rules, facts, priority (score), mutual exclusion, preconditions, and other functions.
Rule engine software is commonly provided as a component of a business rule management system which, among other functions, provides the ability to: register, define, classify, and manage all the rules, verify consistency of rule definitions ("Gold-level customers are eligible for free shipping when order quantity > 10" and "maximum order quantity for Silver-level customers = 15"), define the relationships between different rules, and relate some of these rules to IT applications that are affected or need to enforce one or more of the rules.
In anyITapplication, business rules can change more frequently than other parts of the application code. Rules engines orinference enginesserve as pluggablesoftware componentswhich execute business rules that abusiness rules approachhas externalized or separated from application code. This externalization or separation allows business users to modify the rules without the need for ITintervention. The system as a whole becomes more easily adaptable with such external business rules, but this does not preclude the usual requirements ofQAand other testing.
An article inComputerworldtraces rules engines to the early 1990s and to products from the likes ofPegasystems,Fair IsaacCorp,ILOG[1]and eMerge[2]fromSapiens.
Many organizations' rules efforts combine aspects of what is generally consideredworkflowdesign with traditional rule design. This failure to separate the two approaches can lead to problems with the ability to re-use and control both business rules and workflows. Design approaches that avoid this quandary separate the role of business rules and workflows as follows:[3]
Concretely, that means that a business rule may do things like detect that a business situation has occurred and raise a business event (typically carried via a messaging infrastructure) or create higher level business knowledge (e.g., evaluating the series of organizational, product, and regulatory-based rules concerning whether or not a loan meets underwriting criteria). On the other hand, a workflow would respond to an event that indicated something such as the overloading of a routing point by initiating a series of activities.
This separation is important because the same business judgment (mortgage meets underwriting criteria) or business event (router is overloaded) can be reacted to by many different workflows. Embedding the work done in response to rule-driven knowledge creation into the rule itself greatly reduces the ability of business rules to be reused across an organization because it makes them work-flow specific.
To create an architecture that employs a business rules engine it is essential to establish the integration between aBPM(Business Process Management) and aBRM(Business Rules Management) platform that is based upon processes responding to events or examining business judgments that are defined by business rules. There are some products in the marketplace that provide this integration natively. In other situations this type of abstraction and integration will have to be developed within a particular project or organization.
Most Java-based rules engines provide a technical call-level interface, based on theJSR-94application programming interface(API) standard, in order to allow for integration with different applications, and many rule engines allow forservice-orientedintegrations through Web-based standards such asWSDLandSOAP.
Most rule engines provide the ability to develop adata abstractionthat represents thebusiness entitiesand relationships that rules should be written against. Thisbusiness entity modelcan typically be populated from a variety of sources includingXML,POJOs,flat files, etc. There is no standard language for writing the rules themselves. Many engines use aJava-like syntax, while some allow the definition of custom business-friendly languages.
Most rules engines function as a callable library. However, it is becoming more popular for them to run as a generic process akin to the way thatRDBMSsbehave. Most engines treat rules as a configuration to be loaded into their process instance, although some are actually code generators for the whole rule execution instance and others allow the user to choose.
There are a number of different types of rule engines. These types (generally) differ in how Rules are scheduled for execution.
Most rules engines used by businesses are forward chaining, which can be further divided into two classes: production (or inference) rule engines, and reaction (event condition action) rule engines.
The biggest difference between these types is that production rule engines execute when a user or application invokes them, usually in a stateless manner. A reactive rule engine reacts automatically when events occur, usually in a stateful manner. Many (and indeed most) popular commercial rule engines have both production and reaction rule capabilities, although they might emphasize one class over another. For example, most business rules engines are primarily production rules engines, whereascomplex event processingrules engines emphasize reaction rules.
In addition, some rules engines supportbackward chaining. In this case a rules engine seeks to resolve the facts to fit a particular goal. It is often referred to as beinggoal drivenbecause it tries to determine if something exists based on existing information.
Another kind of rule engine automatically switches between back- and forward-chaining several times during a reasoning run, e.g. the Internet Business Logic system, which can be found by searching the web.
A fourth class of rules engine might be called a deterministic engine. These rules engines may forgo both forward chaining and backward chaining, and instead utilizedomain-specific languageapproaches to better describe policy. This approach is often easier to implement and maintain, and provides performance advantages over forward or backward chaining systems.
There are some circumstances where Fuzzy Logic based inference may be more appropriate, where heuristics are used in rule processing rather than Boolean rules. Examples might include customer classification, missing data inference, and customer value calculations. The DARL language[4] and the associated inference engine and editors are an example of this approach.
One common use case for rules engines is standardized access control to applications.OASISdefines a rules engine architecture and standard dedicated to access control calledXACML(eXtensible Access Control Markup Language).
One key difference between a XACML rule engine and a business rule engine is the fact that a XACML rule engine is stateless and cannot change the state of any data.
The XACML rule engine, called aPolicy Decision Point(PDP), expects a binary Yes/No question e.g. "Can Alice view document D?" and returns a decision e.g. Permit / deny.
|
https://en.wikipedia.org/wiki/Business_rule_engine
|
ABRMSorbusiness rule management systemis asoftwaresystem used to define, deploy, execute, monitor and maintain the variety and complexity of decision logic that is used by operational systems within an organization or enterprise. This logic, also referred to asbusiness rules, includes policies, requirements, and conditional statements that are used to determine the tactical actions that take place in applications and systems.
A BRMS includes, at minimum:
The top benefits of a BRMS include:
Some disadvantages of the BRMS include:[1]
Most BRMS vendors have evolved fromrule enginevendors to provide business-usablesoftware development lifecyclesolutions, based on declarative definitions of business rules executed in their own rule engine. BRMSs are increasingly evolving into broader digital decisioning platforms that also incorporate decision intelligence andmachine learningcapabilities.[2]
However, some vendors come from a different approach (for example, they map decision trees or graphs to executable code). Rules in the repository are generally mapped to decision services that are naturally fully compliant with the latestSOA,Web Services, or other software architecture trends.
In a BRMS, a representation of business rules maps to a software system for execution. A BRMS therefore relates tomodel-driven engineering, such as themodel-driven architecture(MDA) of theObject Management Group(OMG). It is no coincidence that many of the related standards come under the OMG banner.
A BRMS is a critical component forEnterprise Decision Managementas it allows for the transparent and agile management of the decision-making logic required in systems developed using this approach.
The OMG Decision Model and Notation standard is designed to standardize elements of business rules development, especially decision table representations. There is also a standard for a Java Runtime API for rule engines, JSR-94.
Many standards, such asdomain-specific languages, define their own representation of rules, requiring translations to generic rule engines or their own custom engines.
Other domains, such asPMML, also define rules.
|
https://en.wikipedia.org/wiki/Business_rule_management_system
|
Inmachine learning(ML),boostingis anensemblemetaheuristicfor primarily reducingbias (as opposed to variance).[1]It can also improve thestabilityand accuracy of MLclassificationandregressionalgorithms. Hence, it is prevalent insupervised learningfor converting weak learners to strong learners.[2]
The concept of boosting is based on the question posed byKearnsandValiant(1988, 1989):[3][4]"Can a set of weak learners create a single strong learner?" A weak learner is defined as aclassifierthat is only slightly correlated with the true classification. A strong learner is a classifier that is arbitrarily well-correlated with the true classification.Robert Schapireanswered the question in the affirmative in a paper published in 1990.[5]This has had significant ramifications in machine learning andstatistics, most notably leading to the development of boosting.[6]
Initially, thehypothesis boosting problemsimply referred to the process of turning a weak learner into a strong learner.[3]Algorithms that achieve this quickly became known as "boosting".Freundand Schapire's arcing (Adapt[at]ive Resampling and Combining),[7]as a general technique, is more or less synonymous with boosting.[8]
While boosting is not algorithmically constrained, most boosting algorithms consist of iteratively learning weak classifiers with respect to a distribution and adding them to a final strong classifier. When they are added, they are weighted in a way that is related to the weak learners' accuracy. After a weak learner is added, the data weights are readjusted, known as "re-weighting". Misclassified input data gain a higher weight and examples that are classified correctly lose weight.[note 1]Thus, future weak learners focus more on the examples that previous weak learners misclassified.
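The following sketch illustrates this re-weighting scheme with a discrete AdaBoost-style loop over exhaustively searched decision stumps (the stump search and names are illustrative, not a specific library's API):

```python
import numpy as np

def adaboost(X, y, rounds=10):
    """Discrete AdaBoost with decision stumps; labels y must be in {-1, +1}."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    n = len(y)
    w = np.full(n, 1.0 / n)                          # example weights, initially uniform
    ensemble = []                                    # (alpha, feature, threshold, polarity)
    for _ in range(rounds):
        best = None
        for j in range(X.shape[1]):                  # choose the stump of least weighted error
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol)
        err, j, thr, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)        # weight of this weak learner
        pred = pol * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)               # misclassified examples gain weight
        w /= w.sum()                                 # re-normalise ("re-weighting")
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    X = np.asarray(X, dtype=float)
    score = sum(a * p * np.where(X[:, j] > t, 1, -1) for a, j, t, p in ensemble)
    return np.sign(score)
```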
There are many boosting algorithms. The original ones, proposed byRobert Schapire(arecursivemajority gate formulation),[5]andYoav Freund(boost by majority),[9]were notadaptiveand could not take full advantage of the weak learners. Schapire and Freund then developedAdaBoost, an adaptive boosting algorithm that won the prestigiousGödel Prize.
Only algorithms that are provable boosting algorithms in theprobably approximately correct learningformulation can accurately be calledboosting algorithms. Other algorithms that are similar in spirit[clarification needed]to boosting algorithms are sometimes called "leveraging algorithms", although they are also sometimes incorrectly called boosting algorithms.[9]
The main variation between many boosting algorithms is their method ofweightingtraining datapoints andhypotheses.AdaBoostis very popular and the most significant historically as it was the first algorithm that could adapt to the weak learners. It is often the basis of introductory coverage of boosting in university machine learning courses.[10]There are many more recent algorithms such asLPBoost, TotalBoost,BrownBoost,xgboost, MadaBoost,LogitBoost, and others. Many boosting algorithms fit into the AnyBoost framework,[9]which shows that boosting performsgradient descentin afunction spaceusing aconvexcost function.
Given images containing various known objects in the world, a classifier can be learned from them to automaticallyclassifythe objects in future images. Simple classifiers built based on someimage featureof the object tend to be weak in categorization performance. Using boosting methods for object categorization is a way to unify the weak classifiers in a special way to boost the overall ability of categorization.[citation needed]
Object categorizationis a typical task ofcomputer visionthat involves determining whether or not an image contains some specific category of object. The idea is closely related with recognition, identification, and detection. Appearance based object categorization typically containsfeature extraction,learningaclassifier, and applying the classifier to new examples. There are many ways to represent a category of objects, e.g. fromshape analysis,bag of words models, or local descriptors such asSIFT, etc. Examples ofsupervised classifiersareNaive Bayes classifiers,support vector machines,mixtures of Gaussians, andneural networks. However, research[which?]has shown that object categories and their locations in images can be discovered in anunsupervised manneras well.[11]
The recognition of object categories in images is a challenging problem incomputer vision, especially when the number of categories is large. This is due to high intra class variability and the need for generalization across variations of objects within the same category. Objects within one category may look quite different. Even the same object may appear unalike under different viewpoint,scale, andillumination. Background clutter and partial occlusion add difficulties to recognition as well.[12]Humans are able to recognize thousands of object types, whereas most of the existingobject recognitionsystems are trained to recognize only a few,[quantify]e.g.human faces,cars, simple objects, etc.[13][needs update?]Research has been very active on dealing with more categories and enabling incremental additions of new categories, and although the general problem remains unsolved, several multi-category objects detectors (for up to hundreds or thousands of categories[14]) have been developed. One means is byfeaturesharing and boosting.
AdaBoost can be used for face detection as an example of binary categorization. The two categories are faces versus background. The general algorithm is as follows: form a large set of simple features; initialize the weights of the training images; then, for T rounds, normalize the weights, train a single-feature classifier for each available feature and evaluate its weighted training error, choose the classifier with the lowest error, and update the weights of the training images (increasing them for images that this classifier misclassified and decreasing them for images it classified correctly); finally, form the strong classifier as a linear combination of the T selected classifiers, with larger coefficients for classifiers with smaller training error.
After boosting, a classifier constructed from 200 features could yield a 95% detection rate under a10−5{\displaystyle 10^{-5}}false positive rate.[15]
Another application of boosting for binary categorization is a system that detects pedestrians usingpatternsof motion and appearance.[16]This work is the first to combine both motion information and appearance information as features to detect a walking person. It takes a similar approach to theViola-Jones object detection framework.
Compared with binary categorization, multi-class categorization looks for common features that can be shared across the categories at the same time. These turn out to be more generic, edge-like features. During learning, the detectors for each category can be trained jointly. Compared with training them separately, joint training generalizes better, needs less training data, and requires fewer features to achieve the same performance.
The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error must be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories are encouraged). This can be done by converting multi-class classification into a binary one (a set of categories versus the rest),[17] or by introducing a penalty error from the categories that do not have the feature of the classifier.[18]
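A minimal sketch of the "set of categories versus the rest" reduction mentioned above, assuming a generic binary boosting routine with the same signature as the AdaBoost sketch earlier; the helper name is illustrative only.

import numpy as np

def one_vs_rest_boost(X, labels, classes, binary_booster, rounds=10):
    # Train one binary boosted detector per category: "this category" versus "the rest".
    detectors = {}
    for c in classes:
        y = np.where(labels == c, 1, -1)
        detectors[c] = binary_booster(X, y, rounds)
    return detectors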
In the paper "Sharing visual features for multiclass and multiview object detection", A. Torralba et al. used GentleBoost for boosting and showed that when training data is limited, learning via sharing features does a much better job than no sharing, given the same number of boosting rounds. Also, for a given performance level, the total number of features required (and therefore the run time cost of the classifier) for the feature-sharing detectors is observed to scale approximately logarithmically with the number of classes, i.e., slower than linear growth in the non-sharing case. Similar results are shown in the paper "Incremental learning of object detectors using a visual shape alphabet", though the authors used AdaBoost for boosting.
Boosting algorithms can be based onconvexor non-convex optimization algorithms. Convex algorithms, such asAdaBoostandLogitBoost, can be "defeated" by random noise such that they can't learn basic and learnable combinations of weak hypotheses.[19][20]This limitation was pointed out by Long & Servedio in 2008. However, by 2009, multiple authors demonstrated that boosting algorithms based on non-convex optimization, such asBrownBoost, can learn from noisy datasets and can specifically learn the underlying classifier of the Long–Servedio dataset.
|
https://en.wikipedia.org/wiki/Boosting_(meta-algorithm)
|
Inpredictive analytics,data science,machine learningand related fields,concept driftordriftis an evolution of data that invalidates thedata model. It happens when the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions become less accurate as time passes.Drift detectionanddrift adaptationare of paramount importance in the fields that involve dynamically changing data and data models.
In machine learning and predictive analytics this drift phenomenon is called concept drift. In machine learning, a common element of a data model is the set of statistical properties of the actual data, such as its probability distribution. If these deviate from the statistical properties of the training data set, then the learned predictions may become invalid if the drift is not addressed.[1][2][3][4]
Another important area issoftware engineering, where three types of data drift affectingdata fidelitymay be recognized. Changes in the software environment ("infrastructure drift") may invalidate software infrastructure configuration. "Structural drift" happens when the dataschemachanges, which may invalidate databases. "Semantic drift" is changes in the meaning of data while the structure does not change. In many cases this may happen in complicated applications when many independent developers introduce changes without proper awareness of the effects of their changes in other areas of the software system.[5][6]
For many application systems, the nature of the data on which they operate is subject to change for various reasons, e.g., due to changes in the business model, system updates, or switching the platform on which the system operates.[6]
In the case ofcloud computing, infrastructure drift that may affect the applications running on cloud may be caused by the updates of cloud software.[5]
There are several types of detrimental effects of data drift on data fidelity. Data corrosion is passing the drifted data into the system undetected. Data loss happens when valid data are ignored due to non-conformance with the applied schema. Squandering is the phenomenon when new data fields are introduced upstream in the data processing pipeline, but somewhere downstream these data fields are absent.[6]
"Data drift" may refer to the phenomenon when database records fail to match the real-world data due to changes in the latter over time. This is a common problem with databases involving people, such as customers, employees, citizens, residents, etc. Human data drift may be caused by unrecorded changes in personal data, such as place of residence or name, as well as by errors during data input.[7]
"Data drift" may also refer to inconsistency of data elements between several replicas of a database. The reasons can be difficult to identify. A simple drift detection method is to run checksums regularly; however, the remedy may not be so easy.[8]
The behavior of the customers in an online shop may change over time. For example, suppose weekly merchandise sales are to be predicted, and a predictive model has been developed that initially works satisfactorily. The model may use inputs such as the amount of money spent on advertising, promotions being run, and other metrics that may affect sales. The model is likely to become less and less accurate over time – this is concept drift. In the merchandise sales application, one reason for concept drift may be seasonality, which means that shopping behavior changes seasonally. Perhaps there will be higher sales in the winter holiday season than during the summer, for example. Concept drift generally occurs when the covariates that comprise the data set begin to explain the variation of the target set less accurately; there may be some confounding variables that have emerged, which one simply cannot account for and which cause the model's accuracy to progressively decrease with time. Generally, it is advised to perform health checks as part of the post-production analysis and to re-train the model with new assumptions upon signs of concept drift.
To prevent deterioration inpredictionaccuracy because of concept drift,reactiveandtrackingsolutions can be adopted. Reactive solutions retrain the model in reaction to a triggering mechanism, such as a change-detection test,[9][10]to explicitly detect concept drift as a change in the statistics of the data-generating process. When concept drift is detected, the current model is no longer up-to-date and must be replaced by a new one to restore prediction accuracy.[11][12]A shortcoming of reactive approaches is that performance may decay until the change is detected. Tracking solutions seek to track the changes in the concept by continually updating the model. Methods for achieving this includeonline machine learning, frequent retraining on the most recently observed samples,[13]and maintaining an ensemble of classifiers where one new classifier is trained on the most recent batch of examples and replaces the oldest classifier in the ensemble.[14]
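A minimal sketch of the reactive approach, assuming a model with scikit-learn-style fit and predict methods and a stream of labeled batches; the error threshold, window size, and the policy of retraining only on the most recent batch are simplifying assumptions for illustration.

import numpy as np

def monitor_and_retrain(model, batches, threshold=0.1, window=5):
    # Reactive drift handling: track the recent batch error and retrain when it jumps.
    errors = []
    for X_batch, y_batch in batches:
        err = np.mean(model.predict(X_batch) != y_batch)
        errors.append(err)
        baseline = np.mean(errors[:-1][-window:]) if len(errors) > 1 else err
        if err > baseline + threshold:             # crude change-detection test
            model.fit(X_batch, y_batch)            # replace the stale model
            errors = [np.mean(model.predict(X_batch) != y_batch)]
    return model

A tracking solution would instead update the model on every batch (for example with online learning), trading explicit detection logic for continual adaptation.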
Contextual information, when available, can be used to better explain the causes of the concept drift: for instance, in the sales prediction application, concept drift might be compensated for by adding information about the season to the model. By providing information about the time of the year, the rate of deterioration of the model is likely to decrease, but concept drift is unlikely to be eliminated altogether. This is because actual shopping behavior does not follow any static, finite model. New factors may arise at any time that influence shopping behavior, and the influence of the known factors or their interactions may change.
Concept drift cannot be avoided for complex phenomena that are not governed by fixedlaws of nature. All processes that arise from human activity, such associoeconomicprocesses, andbiological processesare likely to experience concept drift. Therefore, periodic retraining, also known as refreshing, of any model is necessary.
Many papers have been published describing algorithms for concept drift detection; only reviews, surveys, and overviews of the field are cited here.
|
https://en.wikipedia.org/wiki/Concept_drift
|
Inmachine learning, a common task is the study and construction ofalgorithmsthat can learn from and make predictions ondata.[1]Such algorithms function by making data-driven predictions or decisions,[2]through building amathematical modelfrom input data. These input data used to build the model are usually divided into multipledata sets. In particular, three data sets are commonly used in different stages of the creation of the model: training, validation, and test sets.
The model is initially fit on atraining data set,[3]which is a set of examples used to fit the parameters (e.g. weights of connections between neurons inartificial neural networks) of the model.[4]The model (e.g. anaive Bayes classifier) is trained on the training data set using asupervised learningmethod, for example using optimization methods such asgradient descentorstochastic gradient descent. In practice, the training data set often consists of pairs of an inputvector(or scalar) and the corresponding output vector (or scalar), where the answer key is commonly denoted as thetarget(orlabel). The current model is run with the training data set and produces a result, which is then compared with thetarget, for each input vector in the training data set. Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted. The model fitting can include bothvariable selectionand parameterestimation.
Successively, the fitted model is used to predict the responses for the observations in a second data set called thevalidation data set.[3]The validation data set provides an unbiased evaluation of a model fit on the training data set while tuning the model'shyperparameters[5](e.g. the number of hidden units—layers and layer widths—in a neural network[4]). Validation data sets can be used forregularizationbyearly stopping(stopping training when the error on the validation data set increases, as this is a sign ofover-fittingto the training data set).[6]This simple procedure is complicated in practice by the fact that the validation data set's error may fluctuate during training, producing multiple local minima. This complication has led to the creation of many ad-hoc rules for deciding when over-fitting has truly begun.[6]
Finally, thetest data setis a data set used to provide an unbiased evaluation of afinalmodel fit on the training data set.[5]If the data in the test data set has never been used in training (for example incross-validation), the test data set is also called aholdout data set. The term "validation set" is sometimes used instead of "test set" in some literature (e.g., if the original data set was partitioned into only two subsets, the test set might be referred to as the validation set).[5]
Deciding the sizes and strategies for data set division in training, test and validation sets is very dependent on the problem and data available.[7]
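A minimal sketch of one common strategy (a 60/20/20 split of NumPy arrays after shuffling); the proportions are an assumption for illustration, not a rule.

import numpy as np

def split_dataset(X, y, val_frac=0.2, test_frac=0.2, seed=0):
    # Shuffle once, then carve off the test and validation portions.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_test = int(len(y) * test_frac)
    n_val = int(len(y) * val_frac)
    test_idx = idx[:n_test]
    val_idx = idx[n_test:n_test + n_val]
    train_idx = idx[n_test + n_val:]
    return (X[train_idx], y[train_idx]), (X[val_idx], y[val_idx]), (X[test_idx], y[test_idx])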
A training data set is adata setof examples used during the learning process and is used to fit the parameters (e.g., weights) of, for example, aclassifier.[9][10]
For classification tasks, a supervised learning algorithm looks at the training data set to determine, or learn, the optimal combinations of variables that will generate a goodpredictive model.[11]The goal is to produce a trained (fitted) model that generalizes well to new, unknown data.[12]The fitted model is evaluated using “new” examples from the held-out data sets (validation and test data sets) to estimate the model’s accuracy in classifying new data.[5]To reduce the risk of issues such as over-fitting, the examples in the validation and test data sets should not be used to train the model.[5]
Most approaches that search through training data for empirical relationships tend tooverfitthe data, meaning that they can identify and exploit apparent relationships in the training data that do not hold in general.
When a training set is continuously expanded with new data, then this isincremental learning.
A validation data set is adata setof examples used to tune thehyperparameters(i.e. the architecture) of a model. It is sometimes also called the development set or the "dev set".[13]An example of a hyperparameter forartificial neural networksincludes the number of hidden units in each layer.[9][10]It, as well as the testing set (as mentioned below), should follow the same probability distribution as the training data set.
In order to avoid overfitting, when anyclassificationparameter needs to be adjusted, it is necessary to have a validation data set in addition to the training and test data sets. For example, if the most suitable classifier for the problem is sought, the training data set is used to train the different candidate classifiers, the validation data set is used to compare their performances and decide which one to take and, finally, the test data set is used to obtain the performance characteristics such asaccuracy,sensitivity,specificity,F-measure, and so on. The validation data set functions as a hybrid: it is training data used for testing, but neither as part of the low-level training nor as part of the final testing.
The basic process of using a validation data set formodel selection(as part of training data set, validation data set, and test data set) is:[10][14]
Since our goal is to find the network having the best performance on new data, the simplest approach to the comparison of different networks is to evaluate the error function using data which is independent of that used for training. Various networks are trained by minimization of an appropriate error function defined with respect to a training data set. The performance of the networks is then compared by evaluating the error function using an independent validation set, and the network having the smallest error with respect to the validation set is selected. This approach is called thehold outmethod. Since this procedure can itself lead to some overfitting to the validation set, the performance of the selected network should be confirmed by measuring its performance on a third independent set of data called a test set.
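A minimal sketch of the hold-out method described in the passage above, assuming candidate models expose scikit-learn-style fit and predict methods; the error measure is plain misclassification rate.

import numpy as np

def select_by_holdout(candidates, train, val):
    # Fit each candidate on the training set; keep the one with lowest validation error.
    X_tr, y_tr = train
    X_val, y_val = val
    best_model, best_err = None, np.inf
    for model in candidates:
        model.fit(X_tr, y_tr)
        err = np.mean(model.predict(X_val) != y_val)
        if err < best_err:
            best_model, best_err = model, err
    return best_model, best_err

As the passage notes, the selected model's performance should then be confirmed on a third, independent test set.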
An application of this process is inearly stopping, where the candidate models are successive iterations of the same network, and training stops when the error on the validation set grows, choosing the previous model (the one with minimum error).
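A minimal early-stopping sketch, assuming a model that can be trained one epoch at a time; train_one_epoch and validation_error are placeholder callables supplied by the caller, and the patience value is an arbitrary choice.

import copy

def early_stopping(model, train_one_epoch, validation_error, max_epochs=100, patience=3):
    # Stop when the validation error has not improved for `patience` epochs in a row,
    # and return the snapshot that had the lowest validation error.
    best_err, best_model, bad_epochs = float("inf"), copy.deepcopy(model), 0
    for _ in range(max_epochs):
        train_one_epoch(model)
        err = validation_error(model)
        if err < best_err:
            best_err, best_model, bad_epochs = err, copy.deepcopy(model), 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    return best_model, best_err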
A test data set is adata setthat isindependentof the training data set, but that follows the sameprobability distributionas the training data set. If a model fit to the training data set also fits the test data set well, minimaloverfittinghas taken place (see figure below). A better fitting of the training data set as opposed to the test data set usually points to over-fitting.
A test set is therefore a set of examples used only to assess the performance (i.e. generalization) of a fully specified classifier.[9][10]To do this, the final model is used to predict classifications of examples in the test set. Those predictions are compared to the examples' true classifications to assess the model's accuracy.[11]
In a scenario where both validation and test data sets are used, the test data set is typically used to assess the final model that is selected during the validation process. In the case where the original data set is partitioned into two subsets (training and test data sets), the test data set might assess the model only once (e.g., in theholdout method).[15]Note that some sources advise against such a method.[12]However, when using a method such ascross-validation, two partitions can be sufficient and effective since results are averaged after repeated rounds of model training and testing to help reduce bias and variability.[5][12]
Testing is trying something to find out about it ("To put to the proof; to prove the truth, genuineness, or quality of by experiment" according to the Collaborative International Dictionary of English) and to validate is to prove that something is valid ("To confirm; to render valid" Collaborative International Dictionary of English). With this perspective, the most common use of the terms test set and validation set is the one described here. However, in both industry and academia, they are sometimes used interchangeably, by considering that the internal process is testing different models to improve (test set as a development set) and the final model is the one that needs to be validated before real use with unseen data (validation set). "The literature on machine learning often reverses the meaning of 'validation' and 'test' sets. This is the most blatant example of the terminological confusion that pervades artificial intelligence research."[16] Nevertheless, the important concept that must be kept is that the final set, whether called test or validation, should only be used in the final experiment.
In order to get more stable results and use all valuable data for training, a data set can be repeatedly split into several training and validation data sets. This is known as cross-validation. To confirm the model's performance, an additional test data set held out from cross-validation is normally used.
It is possible to use cross-validation on training and validation sets, andwithineach training set have further cross-validation for a test set for hyperparameter tuning. This is known asnested cross-validation.
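A minimal sketch of nested cross-validation, assuming scikit-learn is available; the SVC estimator and the parameter grid are placeholders chosen for the example.

from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

def nested_cv_score(X, y):
    # Inner loop: GridSearchCV tunes hyperparameters within each outer training fold.
    inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=3)
    # Outer loop: each outer test fold is used only to score the tuned model.
    return cross_val_score(inner, X, y, cv=5).mean()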
Omissions in the training of algorithms are a major cause of erroneous outputs.[17]Types of such omissions include:[17]
An example of an omission of particular circumstances is a case where a boy was able to unlock the phone because his mother registered her face under indoor, nighttime lighting, a condition which was not appropriately included in the training of the system.[17][18]
Usage of relatively irrelevant input can include situations where algorithms use the background rather than the object of interest forobject detection, such as being trained by pictures of sheep on grasslands, leading to a risk that a different object will be interpreted as a sheep if located on a grassland.[17]
|
https://en.wikipedia.org/wiki/Training,_validation,_and_test_sets
|
Concurrent validityis a type of evidence that can be gathered to defend the use of a test for predicting other outcomes. It is a parameter used in sociology, psychology, and otherpsychometricor behavioral sciences. Concurrent validity is demonstrated when a test correlates well with a measure that has previously beenvalidated. The two measures may be for the same construct, but more often used for different, but presumably related, constructs.
The two measures in the study are taken at the same time. This is in contrast topredictive validity, where one measure occurs earlier and is meant to predict some later measure.[1]In both cases, the (concurrent) predictive power of the test is analyzed using a simplecorrelationorlinear regression.
Concurrent validity and predictive validity are two types ofcriterion-related validity. The difference between concurrent validity and predictive validity rests solely on the time at which the two measures are administered. Concurrent validity applies to validation studies in which the two measures are administered at approximately the same time. For example, an employment test may be administered to a group of workers and then the test scores can be correlated with the ratings of the workers' supervisors taken on the same day or in the same week. The resulting correlation would be a concurrent validity coefficient. This type of evidence might be used to support the use of the employment test for future selection of employees.
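A minimal sketch of computing such a concurrent validity coefficient as a simple Pearson correlation; the test scores and supervisor ratings below are invented for illustration.

import numpy as np

# Hypothetical employment test scores and same-week supervisor ratings.
test_scores = np.array([72, 85, 90, 65, 78, 88, 95, 70])
supervisor_ratings = np.array([3.1, 4.0, 4.5, 2.8, 3.5, 4.2, 4.8, 3.0])

# The concurrent validity coefficient is the correlation between the two measures.
validity_coefficient = np.corrcoef(test_scores, supervisor_ratings)[0, 1]
print(f"Concurrent validity coefficient: {validity_coefficient:.2f}")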
Concurrent validity may be used as a practical substitute for predictive validity. In the example above, predictive validity would be the best choice for validating an employment test, because using the employment test on existing employees may not be a strong analog for using the tests for selection. Reducedmotivationandrestriction of rangeare just two possible biasing effects for concurrent validity studies.[2][3]
Concurrent validity differs fromconvergent validityin that it focuses on the power of the focal test topredictoutcomes on another test or some outcome variable. Convergent validity refers to the observation of strong correlations between two tests that are assumed to measure the same construct. It is the interpretation of the focal test as apredictorthat differentiates this type of evidence from convergent validity, though both methods rely on simple correlations in the statistical analysis.
|
https://en.wikipedia.org/wiki/Concurrent_validity
|
Face validityis the extent to which a test issubjectivelyviewed as covering the concept it purports to measure. It refers to the transparency or relevance of a test as it appears to test participants.[1][2]In other words, a test can be said to have face validity if it "looks like" it is going to measure what it is supposed to measure.[3]For instance, if a test is prepared to measure whether students can perform multiplication, and the people to whom it is shown all agree that it looks like a good test of multiplication ability, this demonstrates face validity of the test. Face validity is often contrasted withcontent validityandconstruct validity.
Some people use the term face validity to refer only to the validity of a test to observers who are not expert in testing methodologies. For instance, if a test is designed to measure whether children are good spellers, and parents are asked whether the test is a good test, this measures the face validity of the test. If an expert is asked instead, some people would argue that this does not measure face validity.[4]This distinction seems too careful for most applications.[citation needed]Generally, face validity means that the test "looks like" it will work, as opposed to "has been shown to work".
Insimulation, the first goal of the system designer is to construct a system which can support a task to be accomplished, and to record the learner's task performance for any particular trial. The task(s)—and therefore, the task performance—on the simulator should be representative of the real world that they model. Face validity is a subjective measure of the extent to which this selection appears reasonable "on the face of it"—that is, subjectively to an expert after only a superficial examination of the content.
Some assume that it is representative of the realism of the system, according to users and others who are knowledgeable about the real system being simulated.[5]Those would say that if these experts feel the model is adequate, then it has face validity. However, in factface validityrefers to the test, not the system.
|
https://en.wikipedia.org/wiki/Face_validity
|
Internal validityis the extent to which a piece of evidence supports a claim aboutcause and effect, within the context of a particular study. It is one of the most important properties of scientific studies and is an important concept in reasoning aboutevidencemore generally. Internal validity is determined by how well a study can rule out alternative explanations for its findings (usually, sources ofsystematic erroror 'bias'). It contrasts withexternal validity, the extent to which results can justify conclusions about other contexts (that is, the extent to which results can begeneralized). Both internal and external validity can be described using qualitative or quantitative forms ofcausal notation.
Inferences are said to possess internal validity if a causal relationship between twovariablesis properly demonstrated.[1][2]A validcausal inferencemay be made when three criteria are satisfied:
In scientific experimental settings, researchers often change the state of one variable (theindependent variable) to see what effect it has on a second variable (thedependent variable).[3]For example, a researcher might manipulate the dosage of a particular drug between different groups of people to see what effect it has on health. In this example, the researcher wants to make a causal inference, namely, that different doses of the drug may beheld responsiblefor observed changes or differences. When the researcher may confidently attribute the observed changes or differences in the dependent variable to the independent variable (that is, when the researcher observes an association between these variables and can rule out other explanations orrival hypotheses), then the causal inference is said to be internally valid.[4]
In many cases, however, thesize of effectsfound in the dependent variable may not just depend on
Rather, a number of variables or circumstances uncontrolled for (or uncontrollable) may lead to additional or alternative explanations (a) for the effects found and/or (b) for the magnitude of the effects found. Internal validity, therefore, is more a matter of degree than of either-or, and that is exactly why research designs other than true experiments may also yield results with a high degree of internal validity.
In order to allow for inferences with a high degree of internal validity, precautions may be taken during the design of the study. As a rule of thumb, conclusions based on direct manipulation of the independent variable allow for greater internal validity than conclusions based on an association observed without manipulation.
When considering only Internal Validity, highly controlled true experimental designs (i.e. with random selection, random assignment to either the control or experimental groups, reliable instruments, reliable manipulation processes, and safeguards against confounding factors) may be the "gold standard" of scientific research. However, the very methods used to increase internal validity may also limit the generalizability orexternal validityof the findings. For example, studying the behavior of animals in a zoo may make it easier to draw valid causal inferences within that context, but these inferences may not generalize to the behavior of animals in the wild. In general, a typical experiment in a laboratory, studying a particular process, may leave out many variables that normally strongly affect that process in nature.
To recall eight of these threats to internal validity, use themnemonic acronym,THIS MESS,[5]which stands for:
When it is not known which variable changed first, it can be difficult to determine which variable is the cause and which is the effect.
A major threat to the validity of causal inferences isconfounding: Changes in the dependent variable may rather be attributed to variations in a third variable which is related to the manipulated variable. Wherespurious relationshipscannot be ruled out, rival hypotheses to the original causal inference may be developed.
Selection bias refers to the problem that, at pre-test, differences between groups exist that may interact with the independent variable and thus be 'responsible' for the observed outcome. Researchers and participants bring to the experiment a myriad of characteristics, some learned and others inherent. For example, sex, weight, hair, eye, and skin color, personality, mental capabilities, and physical abilities, but also attitudes like motivation or willingness to participate.
During the selection step of the research study, if an unequal number of test subjects have similar subject-related variables there is a threat to the internal validity. For example, a researcher created two test groups, the experimental and the control groups. The subjects in both groups are not alike with regard to the independent variable but similar in one or more of the subject-related variables.
Self-selection also has a negative effect on the interpretive power of the dependent variable. This occurs often in online surveys where individuals of specific demographics opt into the test at higher rates than other demographics.
Events outside of the study/experiment or between repeated measures of the dependent variable may affect participants' responses to experimental procedures. Often, these are large-scale events (natural disaster, political change, etc.) that affect participants' attitudes and behaviors such that it becomes impossible to determine whether any change on the dependent measures is due to the independent variable, or the historical event.
Subjects change during the course of the experiment or even between measurements. For example, young children might mature and their ability to concentrate may change as they grow up. Both permanent changes, such as physical growth and temporary ones like fatigue, provide "natural" alternative explanations; thus, they may change the way a subject would react to the independent variable. So upon completion of the study, the researcher may not be able to determine if the cause of the discrepancy is due to time or the independent variable.
Repeatedly measuring the participants may lead to bias. Participants may remember the correct answers or may be conditioned to know that they are being tested. Repeatedly taking (the same or similar) intelligence tests usually leads to score gains, but instead of concluding that the underlying skills have changed for good, this threat to Internal Validity provides a good rival hypothesis.
The instrument used during the testing process can change the experiment. This also refers to observers being more concentrated or primed, or having unconsciously changed the criteria they use to make judgments. This can also be an issue with self-report measures given at different times. In this case, the impact may be mitigated through the use of retrospective pretesting. If any instrumentation changes occur, the internal validity of the main conclusion is affected, as alternative explanations are readily available.
This type of error occurs when subjects are selected on the basis of extreme scores (one far away from the mean) during a test. For example, when children with the worst reading scores are selected to participate in a reading course, improvements at the end of the course might be due to regression toward the mean and not the course's effectiveness. If the children had been tested again before the course started, they would likely have obtained better scores anyway.
Likewise, extreme outliers on individual scores are more likely to be captured in one instance of testing but will likely evolve into a more normal distribution with repeated testing.
This error occurs if inferences are made on the basis of only those participants that have participated from the start to the end. However, participants may have dropped out of the study before completion, and maybe even due to the study or programme or experiment itself. For example, the percentage of group members having quit smoking at post-test was found much higher in a group having received a quit-smoking training program than in the control group. However, in the experimental group only 60% have completed the program.
If this attrition is systematically related to any feature of the study, the administration of the independent variable, the instrumentation, or if dropping out leads to relevant bias between groups, a whole class of alternative explanations is possible that account for the observed differences.
This occurs when the subject-related variables, color of hair, skin color, etc., and the time-related variables, age, physical size, etc., interact. If a discrepancy between the two groups occurs between the testing, the discrepancy may be due to the age differences in the age categories.
If treatment effects spread from treatment groups to control groups, a lack of differences between experimental and control groups may be observed. This does not mean, however, that the independent variable has no effect or that there is no relationship between dependent and independent variable.
Behavior in the control groups may alter as a result of the study. For example, control group members may work extra hard to see that the expected superiority of the experimental group is not demonstrated. Again, this does not mean that the independent variable produced no effect or that there is no relationship between dependent and independent variable. Conversely, changes in the dependent variable may be due only to a demoralized control group, working less hard or less motivated, rather than to the independent variable.
Experimenter bias occurs when the individuals who are conducting an experiment inadvertently affect the outcome by non-consciously behaving in different ways to members of control and experimental groups. It is possible to eliminate the possibility of experimenter bias through the use ofdouble-blindstudy designs, in which the experimenter is not aware of the condition to which a participant belongs.
Experiments that have high internal validity can produce phenomena and results that have no relevance in real life, resulting in the mutual-internal-validity problem.[6][7]It arises when researchers use experimental results to develop theories and then use those theories to design theory-testing experiments. This mutual feedback between experiments and theories can lead to theories that explain only phenomena and results in artificial laboratory settings but not in real life.
|
https://en.wikipedia.org/wiki/Internal_validity
|
Inpsychometrics,predictive validityis the extent to which ascoreon ascaleortestpredicts scores on some criterion measure.[1][2]
For example, thevalidityof acognitive testfor job performance is the correlation between test scores and, for example, supervisor performance ratings. Such a cognitive test would havepredictive validityif the observed correlation were statistically significant.
Predictive validity shares similarities withconcurrent validityin that both are generally measured as correlations between a test and some criterion measure. In a study of concurrent validity the test is administered at the same time as the criterion is collected. This is a common method of developing validity evidence for employment tests: A test is administered to incumbent employees, then a rating of those employees'job performanceis, or has already been, obtained independently of the test (often, as noted above, in the form of a supervisor rating). Note the possibility for restriction of range both in test scores and performance scores: The incumbent employees are likely to be a more homogeneous and higher performing group than the applicant pool at large.
In a strict study of predictive validity, the test scores are collected first. Then, at some later time, the criterion measure is collected. Thus, for predictive validity, the employment test example is slightly different: tests are administered, perhaps to job applicants, and then after those individuals work in the job for a year, their test scores are correlated with their first-year job performance scores. Another relevant example is SAT scores: these are validated by collecting the scores during the examinee's senior year of high school and then waiting a year (or more) to correlate the scores with their first-year college grade point average. Thus predictive validity provides somewhat more useful data about test validity because it has greater fidelity to the real situation in which the test will be used. After all, most tests are administered to find out something about future behavior.
As with many aspects of social science, the magnitude of the correlations obtained from predictive validity studies is usually not high.[3] A typical predictive validity for an employment test might obtain a correlation in the neighborhood of r = .35. Higher values are occasionally seen and lower values are very common. Nonetheless, the utility (that is, the benefit obtained by making decisions using the test) provided by a test with a correlation of .35 can be quite substantial. More information, and an explanation of the relationship between variance and predictive validity, can be found in the cited reference.[4]
The latestStandards for Educational and Psychological Testing[5]reflectSamuel Messick'smodel of validity[6]and do not use the term "predictive validity." Rather, theStandardsdescribe validity-supporting "Evidence Based on Relationships [between the test scores and] Other Variables."
Predictive validity involves testing a group of subjects for a certain construct, and then comparing them with results obtained at some point in the future.
|
https://en.wikipedia.org/wiki/Predictive_validity
|
Avalidity scale, inpsychological testing, is a scale used in an attempt to measure reliability of responses, for example with the goal of detectingdefensiveness,malingering, or careless or random responding.
For example, theMinnesota Multiphasic Personality Inventoryhasvalidity scalesto measure questions not answered; client "faking good"; client "faking bad" (in first half of test); denial/evasiveness; client "faking bad" (in last half of test); answering similar/opposite question pairs inconsistently; answering questions all true/all false; honesty of test responses/not faking good or bad; "appearing excessively good"; frequency of presentation in clinical setting; and overreporting ofsomatic symptoms. ThePersonality Assessment Inventoryhas validity scales to measure inconsistency (the degree to which respondents answer similar questions in the same way), infrequency (the degree to which respondents rate extremely bizarre or unusual statements as true),positive impression(the degree to which respondents describe themselves in a positive light), andnegative impression(the degree to which respondents describe themselves in a negative light). ThePsychological Inventory of Criminal Thinkinghas two validity scales (Confusion and Defensiveness). TheInwald Personality Inventoryhas one validity scale, the Guardedness Scale, measuringsocial desirability.[1]
The usefulness of the currently-existing validity scales is sometimes questioned. One theory is that subjects in tests of validity scales are given instructions (e.g. to fake the best impression of themselves or to fake an emotionally disturbed person) that virtually guarantee the detection of faking. The tests may not be designed to detectrole faking.[2]
Some commonly used tests do not include validity scales, and are readily faked due to their high face validity.[3]
|
https://en.wikipedia.org/wiki/Validity_scale
|
Validationmay refer to:
|
https://en.wikipedia.org/wiki/Validation_(disambiguation)
|
Apache OFBizis anopen sourceenterprise resource planning(ERP) system. It provides a suite of enterprise applications that integrate and automate many of thebusiness processesof an enterprise.[citation needed]
OFBiz is anApache Software Foundationtop level project.
Apache OFBiz is a framework that provides acommon data modeland a set ofbusiness processes.
All applications are built around a common architecture using common data, logic and process components.
Beyond the framework itself, Apache OFBiz offers functionality including:
All Apache OFBiz functionality is built on a common framework. The functionality can be divided into the following distinct layers:
Apache OFBiz uses the concept of "screens" to represent the Apache OFBiz pages. Each page is normally represented as a screen. A page in Apache OFBiz consists of components. A component can be a header, footer, etc. When the page is rendered, all the components are combined as specified in the screen definition. Components can be JavaServer Pages (JSPs, now deprecated), FTL pages built around the FreeMarker template engine, or form and menu widgets. Widgets are an OFBiz-specific technology.
The business, or application layer defines services provided to the user. The services can be of several types: Java methods, SOAP, simple services, workflow, etc. A service engine is responsible for invocation, transactions and security.
Apache OFBiz uses a set of open source technologies and standards such asJava,Java EE,XMLandSOAP. Although Apache OFBiz is built around the concepts used by Java EE, many of its concepts are implemented in different ways; either because Apache OFBiz was designed prior to many recent improvements in Java EE or because Apache OFBiz authors didn't agree with those implementations.
The data layer is responsible for database access, storage and providing a common data interface to the business layer. Data is accessed not inobject orientedfashion but in arelationalway. Eachentity(represented as a row in the database) is provided to the business layer as a set of generic values. A generic value is not typed, so fields of an entity are accessed by thecolumnname.
Apache Solr is an enterprise search server with a REST-like API. It is highly scalable, adaptable, comprehensive, and capable of processing and handling large amounts of data. The Apache Solr / OFBiz integration not only speeds up searches, but also greatly enhances the search capabilities of OFBiz. Solr also added faceted and hierarchical search capabilities to OFBiz.
REST offers several advantages that make it a preferred choice for building and consuming web services, particularly in a microservices architecture. Its greatest benefit is support for a headless architecture.
Although Gradle is a separate tool outside of OFBiz, its adoption is nonetheless significant because it simplifies the maintenance and upgrade of OFBiz dependencies on external libraries, which makes it easier to keep the system up to date and secure.
The OFBiz project was created by David E. Jones and Andrew Zeneski on April 13, 2001. The project was initially hosted as The Apache Open For Business Project on SourceForge and Open For Business Project (Apache OFBiz) at Open HUB.
Between September 2003 and May 2006, it was hosted as a java.net project, but the project has since been removed from there. It began to be widely used around 2003. After incubating since January 31, 2006, it became a top-level Apache project on December 20, 2006 (see Apache OFBiz Incubation Status).
|
https://en.wikipedia.org/wiki/Apache_OFBiz
|
Acanonical modelis adesign patternused to communicate between different data formats. Essentially: create a data model which is a superset of all the others ("canonical"), and create a "translator" module or layer to/from which all existing modules exchange data with other modules. The canonical model acts as a middleman. Each model now only needs to know how to communicate with the canonical model and doesn't need to know the implementation details of the other modules.
A form of enterprise application integration, it is intended to reduce costs and standardize on agreed data definitions associated with integrating business systems. A canonical model is any model that is canonical in nature, meaning a model that is in the simplest form possible, based on a standard enterprise application integration (EAI) solution. Most organizations also adopt a set of standards for message structure and content (message payload). The desire for a consistent message payload results in the construction of an enterprise or business-domain canonical model: a common view within a given context. The term canonical model is often used interchangeably with integration strategy, and adopting one often entails a move to a message-based integration methodology. A typical migration from point-to-point integration is to a canonical data model, an enterprise design pattern which provides common data naming, definition and values within a generalized data framework. Advantages of using a canonical data model are reducing the number of data translations and reducing the maintenance effort.[1]
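A minimal sketch of the translator idea described above, using two hypothetical record formats and a dictionary-based canonical representation; all field names are invented for the example.

# Each adapter only knows how to translate between its own format and the canonical model.
def legacy_to_canonical(record):
    # Hypothetical legacy format: ("Smith, Jane", "19800105")
    name, dob = record
    last, first = [part.strip() for part in name.split(",")]
    return {"first_name": first, "last_name": last,
            "birth_date": f"{dob[:4]}-{dob[4:6]}-{dob[6:]}"}

def canonical_to_crm(person):
    # Hypothetical CRM format expects a single display name and an ISO date.
    return {"displayName": f"{person['first_name']} {person['last_name']}",
            "dob": person["birth_date"]}

# The two systems never talk to each other directly; the canonical model is the middleman.
crm_record = canonical_to_crm(legacy_to_canonical(("Smith, Jane", "19800105")))

Adding a third format only requires one new pair of adapters to and from the canonical model, instead of translators to every other format.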
Adoption of a comprehensive, enterprise-wide move to message-based integration begins with a decision on the middleware to be used to transport messages between endpoints. Often this decision results in the adoption of an enterprise service bus (ESB) or enterprise application integration (EAI) solution. Most organizations also adopt a set of standards for message structure and content (message payload). The desire for a consistent message payload results in the construction of an enterprise form of XML schema built from the common model objects, thus providing the desired consistency and re-usability while ensuring data integrity.[citation needed]
|
https://en.wikipedia.org/wiki/Canonical_model
|
TheData Reference Model(DRM) is one of the five reference models of theFederal Enterprise Architecture.
The DRM is a framework whose primary purpose is to enable information sharing and reuse across theUnited States federal governmentvia the standard description and discovery of common data and the promotion of uniform data management practices. The DRM describes artifacts which can be generated from the data architectures of federal government agencies. The DRM provides a flexible and standards-based approach to accomplish its purpose. The scope of the DRM is broad, as it may be applied within a single agency, within acommunity of interest, or cross-community of interest.
The DRM provides a standard means by whichdatamay be described, categorized, and shared. These are reflected within each of the DRM's three standardization areas:
The Data Reference Model version 2, released in November 2005, is a 114-page document with detailed architectural diagrams and an extensive glossary of terms.
The DRM also makes many references to ISO standards, specifically the ISO/IEC 11179 metadata registry standard.
Although the DRM is not technically a published interoperability standard such as web services, it is an excellent starting point for data architects within federal and state agencies. Any federal or state agencies that are involved with exchanging information with other agencies or that are involved in data warehousing efforts should use this document as a guide.
|
https://en.wikipedia.org/wiki/Data_Reference_Model
|
Afederal enterprise architecture framework(FEAF) is the U.S. referenceenterprise architectureof afederal government. It provides a common approach for the integration of strategic, business and technology management as part of organization design and performance improvement.[1]
The most familiar federal enterprise architecture is theenterprise architectureof theFederal government of the United States, the U.S. "Federal Enterprise Architecture" (FEA) and the corresponding U.S. "Federal Enterprise Architecture Framework" (FEAF). This lemma will focus on this particular enterprise architecture andenterprise architecture framework.
Enterprise architecture (EA) is a management best practice for aligning business and technology resources to achieve strategic outcomes, improve organizational performance and guide federal agencies to better execute theircore missions. An EA describes the current and future state of the agency, and lays out a plan for transitioning from the current state to the desired future state. A federal enterprise architecture is a work in progress to achieve these goals.[2]
The U.S. Federal Enterprise Architecture (FEA) is an initiative of the U.S.Office of Management and Budget, Office of E-Government and IT, that aims to realize the value of enterprise architecture within the U.S. Federal Government. Enterprise Architecture became a recognized strategic and management best practice in U.S. Federal Government with the passage of theClinger-Cohen Actin 1996.
There are numerous benefits that accrue from implementing and using an enterprise architecture within the U.S. Federal Government. Among them is to provide a common approach for IT acquisition in theUnited States federal government. It is also designed to ease sharing of information and resources across federal agencies, reduce costs, and improve citizen services.
In September 1999, the Federal CIO Council published the "Federal Enterprise Architecture Framework" (FEAF) Version 1.1 for developing an Enterprise Architecture (EA) within any Federal Agency for a system that transcends multiple inter-agency boundaries. It builds on common business practices and designs that cross organizational boundaries, among others the NIST Enterprise Architecture Model. The FEAF provides an enduring standard for developing and documenting architecture descriptions of high-priority areas. It provides guidance in describing architectures for multi-organizational functional segments of the Federal Government.[3] At the time of release, the Government's IT focus on Y2K issues and then the events of September 2001 diverted attention from EA implementation, though its practice in advance of and subsequent to these events may have ameliorated their impact. As part of the President's Management Agenda, in August 2001, the E-Government Task Force project was initiated (unofficially called Project Quicksilver). A key finding in that strategy was that the substantial overlap and redundant agency systems constrained the ability to achieve the Bush administration strategy of making the government "citizen centered". The Task Force recommended the creation of a Federal Enterprise Architecture Project and the creation of the FEA Office at OMB. This was a shift from the FEAF focus on Information Engineering to a J2EE object re-use approach using reference models comprising taxonomies that linked performance outcomes to lines of business, process services components, types of data, and technology components. Interim releases since that time have provided successive increases in definition for the core reference models (see below), as well as a very robust methodology for actually developing an architecture in a series of templates forming the Federal Segment Architecture Methodology (FSAM) and its next-generation replacement, the Collaborative Planning Methodology (CPM), which was designed to be more flexible, more widely applicable, and more inclusive of the larger set of planning disciplines.
These federal architectural segments collectively constitute the federal enterprise architecture. In 2001, the Federal Architecture Working Group (FAWG) was sponsoring the development of Enterprise Architecture products for the trade and grant Federal architecture segments. Methods prescribe a way of approaching a particular problem. As shown in the figure, the FEAF partitions a given architecture into business, data, applications, and technology architectures. The FEAF overall framework created at that time (see image) includes the first three columns of the Zachman Framework and Spewak's Enterprise Architecture Planning methodology.[3]
In May 2012 OMB published a full new guide, the "Common Approach to Federal Enterprise Architecture".[4]Released as part of the federal CIO's policy guidance and management tools for increasing shared approaches to IT service delivery, the guide presents an overall approach to developing and using Enterprise Architecture in the Federal Government. The Common Approach promotes increased levels of mission effectiveness by standardizing the development and use of architectures within and between Federal Agencies. This includes principles for using EA to help agencies eliminate waste and duplication, increase shared services, close performance gaps, and promote engagement among government, industry, and citizens.
On January 29, 2013, the White House released Version 2 of the Federal Enterprise Architecture Framework (FEAF-II), to government agencies, making it public about a year later.[5]The document meets the criteria set forth by Common Approach, emphasizing that strategic goals drive business services, which in turn provide the requirements for enabling technologies. At its core is the Consolidated Reference Model (CRM), which equips OMB and Federal agencies with a common language and framework to describe and analyze investments.
Overall the Federal Enterprise Architecture (FEA) is mandated by a series of federal laws and mandates. These federal laws have been:
Supplementary OMB circulars have been:
The Collaborative Planning Methodology (CPM) is a simple, repeatable process that consists of integrated, multi-disciplinary analysis that results in recommendations formed in collaboration with leaders, stakeholders, planners, and implementers. It is intended as a full planning and implementation lifecycle for use at all levels of scope defined in the Common Approach to Federal Enterprise Architecture: International, National, Federal, Sector, Agency, Segment, System, and Application.[4][5]
The Consolidated Reference Model of the Federal Enterprise Architecture Framework (FEAF) equips OMB and Federal agencies with a common language and framework to describe and analyze investments. It consists of a set of interrelatedreference modelsdesigned to facilitate cross-agency analysis and the identification of duplicative investments, gaps and opportunities for collaboration within and across agencies. Collectively, the reference models comprise a framework for describing important elements of federal agency operations in a common and consistent way. Through the use of the FEAF and its vocabulary, IT portfolios can be better managed and leveraged across the federal government, enhancing collaboration and ultimately transforming the Federal government.
The five reference models in version 1 (see below) have been regrouped and expanded into six in the FEAF-II.
The FEA is built using an assortment ofreference modelsthat develop a commontaxonomyfor describing IT resources. FEA Version 1 reference models (see image) included the following:
It is designed to ease sharing of information and resources across federal agencies, reduce costs, and improve citizen services. It is an initiative of the USOffice of Management and Budgetthat aims to comply with theClinger-Cohen Act.
The PRM is a standardized framework to measure the performance of major IT investments and their contribution to program performance.[1]The PRM has three main purposes:
The PRM uses a number of existing approaches to performance measurement, including theBalanced Scorecard, Baldrige Criteria,[6]value measuring methodology,program logic models, the value chain, and theTheory of Constraints. In addition, the PRM was informed by what agencies are currently measuring through PART assessments, GPRA,enterprise architecture, and Capital Planning and Investment Control. The PRM is currently composed of four measurement areas:
The "FEAbusiness reference model" is a function-driven framework for describing the business operations of the Federal Government independent of the agencies that perform them. This business reference model provides an organized, hierarchical construct for describing the day-to-day business operations of the Federal government using a functionally driven approach. The BRM is the first layer of the Federal Enterprise Architecture and it is the main viewpoint for the analysis of data, service components and technology.[1]
The BRM is broken down into four areas:
The Business Reference Model provides a framework that facilitates a functional (as opposed to organizational) view of the federal government's LoBs, including its internal operations and its services for the citizens, independent of the agencies, bureaus and offices that perform them. By describing the federal government around common business areas instead of by a stovepiped, agency-by-agency view, the BRM promotes agency collaboration and serves as the underlying foundation for the FEA and E-Gov strategies.[1]
While the BRM does provide an improved way of thinking about government operations, it is only a model; its true utility can only be realized when it is effectively used. The functional approach promoted by the BRM will do little to help accomplish the goals of E-Government if it is not incorporated into EA business architectures and the management processes of all Federal agencies and OMB.[1]
The Service Component Reference Model (SRM) is a business and performance-driven, functional framework that classifies Service Components with respect to how they support business and/or performance objectives.[1]The SRM is intended for use to support the discovery of government-wide business and application Service Components in IT investments and assets. The SRM is structured across horizontal and vertical service domains that, independent of the business functions, can provide a leverage-able foundation to support the reuse of applications, application capabilities, components, and business services.
The SRM establishes the following domains:
Each Service Domain is decomposed into Service Types. For example, the three Service Types associated with the Customer Services Domain are: Customer Preferences; Customer Relationship Management; and Customer Initiated Assistance. And each Service Type is decomposed further into components. For example, the four components within the Customer Preferences Service Type include: Personalization; Subscriptions; Alerts and Notifications; and Profile Management.[7]
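As a small illustration of the Domain, Service Type, and Component decomposition just described, the following Python sketch represents the hierarchy as nested mappings. The names are taken from the Customer Services example above; the empty lists are placeholders for brevity, not actual SRM content.

```python
# Sketch of the SRM decomposition: Service Domain -> Service Types -> Components.
# Only the Customer Preferences components are filled in.
srm_customer_services = {
    "Customer Services": {
        "Customer Preferences": [
            "Personalization",
            "Subscriptions",
            "Alerts and Notifications",
            "Profile Management",
        ],
        "Customer Relationship Management": [],  # components omitted here
        "Customer Initiated Assistance": [],     # components omitted here
    }
}

for domain, service_types in srm_customer_services.items():
    for service_type, components in service_types.items():
        print(f"{domain} -> {service_type}: {components}")
```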
TheData Reference Model(DRM) describes, at an aggregate level, the data and information that support government program and business line operations. This model enables agencies to describe the types of interaction and exchanges that occur between the federal government and citizens.[1]The DRM categorizes government information into greater levels of detail. It also establishes a classification for federal data and identifies duplicative data resources. Acommon data modelwill streamline information exchange processes within the federal government and between government and external stakeholders.
Volume One of the DRM provides a high-level overview of the structure, usage, and data-identification constructs. This document:
The DRM is the starting point from which data architects should develop modeling standards and concepts. The combined volumes of the DRM support data classification and enable horizontal and vertical information sharing.
The TRM is a component-driven, technical framework categorizing the standards and technologies to support and enable the delivery of Service Components and capabilities. It also unifies existing agency TRMs and E-Gov guidance by providing a foundation to advance the reuse and standardization of technology and Service Components from a government-wide perspective.[1]
The TRM consists of:
The figure on the right provides a high-level depiction of the TRM.
Aligning agency capital investments to the TRM leverages a common, standardized vocabulary, allowing interagency discovery, collaboration, and interoperability. Agencies and the federal government will benefit from economies of scale by identifying and reusing the best solutions and technologies to support their business functions, mission, and target architecture. Organized in a hierarchy, the TRM categorizes the standards and technologies that collectively support the secure delivery, exchange, and construction of business and application Service Components that may be used and leveraged in a component-based or service-oriented architecture.[1]
In the FEA, enterprise, segment, and solution architectures provide different business perspectives by varying the level of detail and addressing related but distinct concerns. Just as enterprises are themselves hierarchically organized, so are the different views provided by each type of architecture. The Federal Enterprise Architecture Practice Guidance (2006) has defined three types of architecture:[2]
By definition,Enterprise Architecture(EA) is fundamentally concerned with identifying common or shared assets – whether they are strategies, business processes, investments, data, systems, or technologies. EA is driven by strategy; it helps an agency identify whether its resources are properly aligned to the agency mission and strategic goals and objectives. From an investment perspective, EA is used to drive decisions about the IT investment portfolio as a whole. Consequently, the primary stakeholders of the EA are the senior managers and executives tasked with ensuring the agency fulfills its mission as effectively and efficiently as possible.[2]
By contrast, "segment architecture" defines a simple roadmap for a core mission area, business service, or enterprise service. Segment architecture is driven by business management and delivers products that improve the delivery of services to citizens and agency staff. From an investment perspective, segment architecture drives decisions for a business case or group of business cases supporting a core mission area or common or shared service. The primary stakeholders for segment architecture are business owners and managers. Segment architecture is related to EA through three principles:
"Solution architecture" defines agency IT assets such as applications or components used to automate and improve individual agency business functions. The scope of a solution architecture is typically limited to a single project and is used to implement all or part of a system or business solution. The primary stakeholders for solution architecture are system users and developers. Solution architecture is commonly related to segment architecture and enterprise architecture through definitions and constraints. For example, segment architecture provides definitions of data or service interfaces used within a core mission area or service, which are accessed by individual solutions. Equally, a solution may be constrained to specific technologies and standards that are defined at the enterprise level.[2]
Results of the Federal Enterprise Architecture program are considered unsatisfactory.
|
https://en.wikipedia.org/wiki/Federal_enterprise_architecture
|
Adata platformusually refers to a software platform used for collecting and managing data, and acting as a data delivery point for application and reporting software.
Data platformcan also refer to
|
https://en.wikipedia.org/wiki/Data_platform_(disambiguation)
|
RIF/ReqIF (Requirements Interchange Format) is an XML file format that can be used to exchange requirements, along with their associated metadata, between software tools from different vendors. The requirements exchange format also defines a workflow for transmitting the status of requirements between partners. Although developed in the automotive industry, ReqIF is suitable for lossless exchange of requirements in any industry.
In 2004, HIS (Herstellerinitiative Software), a consortium of German automotive manufacturers, defined a generic requirements interchange format called RIF.
The format was handed over in 2008 toProSTEP iViP e.V.for further maintenance. A project group responsible for international standardization further developed the format and handed over a revised version toObject Management Group(OMG) as "Request for Comment" in 2010.[1]
As the acronym RIF had an ambiguous meaning within the OMG, the new name ReqIF was introduced to separate it from theW3C'sRule Interchange Format.
In April 2011, the version 1.0.1 of ReqIF was adopted by OMG as a formal specification (OMG Document Number: formal/2011-04-02).
In October 2013, version 1.1 was published (OMG Document Number: formal/2013-10-01). Changes are restricted to the text of the standard; the XML schema and underlying model have not changed. Therefore, 1.1 and 1.0.1 .reqif files are equivalent.
In July 2016, version 1.2 was published (OMG Document Number: formal/2016-07-01). As with the previous versions, changes are restricted to the text of the standard; the XML schema and underlying model have not changed. Therefore, 1.2, 1.1 and 1.0.1 .reqif files are equivalent.
ReqIF is a file format for exchanging requirements, their attributes, and additional files (e.g. images) across a chain of manufacturers, suppliers, sub-suppliers and the like. A GUID ensures unique identification of content across the process chain.
Requirements are typically elicited during the early phase of product development. This is the primary application of ReqIF, as development across organizations is becoming increasingly common. ReqIF allows requirements to be shared between partners even if different tools are used, and in contrast to formats like Word, Excel or PDF, it allows for a lossless exchange.
ReqIF was pioneered by automotive manufacturers, who started to demand the use of ReqIF in particular for the development of embedded controllers.
ReqIF is also used as the underlying data model for tool implementations. This is particularly true for the ReqIF reference implementation (Eclipse RMF), which is used by an implementer forum[2] that aims to ensure interoperability of the various ReqIF implementations. ReqIF Server[3] is another tool that natively uses ReqIF.
RIF/ReqIF is a standardized meta-model defined by an XML schema. ReqIF files must conform to this schema and contain both the description of the model (the datatypes) and the data itself. Data exchange between different tools succeeds only if all parties agree on a common data model. The previously mentioned implementer forum is working on such a common model and also organizes tests with tools of the participating manufacturers to ensure future interoperability.
An OMG ReqIF file consists of XML with the root elementREQ-IF, containing information regarding the file itself as well as the contained datatypes and requirements.
The containers for requirements in ReqIF are called specification objects (SpecObjects), which have user-defined attributes. Each attribute has a data type, which is one of Boolean, Integer, Real, String, Enumeration (with user-defined values) or XHTML; the XHTML type also carries formatted text and embedded objects, including images. Some datatypes can be constrained further, e.g. to a range of numerical values.
Relationships between objects are represented asSpecRelations, which can also have attributes.
Finally, hierarchical trees, called Specifications, provide a structured view of SpecObjects. Multiple references to the same SpecObject are permitted.
The structure of ReqIF is described in detail in the specification.[4] There is also a free one-page reference of the data model available.[5]
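To make the structure above concrete, the following Python sketch assembles a deliberately simplified ReqIF-like skeleton with xml.etree.ElementTree. The element names echo the concepts described above (datatypes, SpecObjects, SpecRelations, Specifications), but the nesting and attributes are abbreviated for illustration and are not guaranteed to be schema-valid ReqIF.

```python
# Schematic sketch only: simplified element names and nesting,
# not a schema-valid .reqif document.
import xml.etree.ElementTree as ET

root = ET.Element("REQ-IF")                      # root element of a ReqIF file
content = ET.SubElement(root, "CORE-CONTENT")

# A datatype definition, e.g. a string type used by a requirement attribute
datatypes = ET.SubElement(content, "DATATYPES")
ET.SubElement(datatypes, "DATATYPE-DEFINITION-STRING",
              {"IDENTIFIER": "dt-string", "MAX-LENGTH": "1000"})

# A SpecObject: the container for one requirement, identified by a GUID
spec_objects = ET.SubElement(content, "SPEC-OBJECTS")
req = ET.SubElement(spec_objects, "SPEC-OBJECT", {"IDENTIFIER": "guid-0001"})
values = ET.SubElement(req, "VALUES")
ET.SubElement(values, "ATTRIBUTE-VALUE-STRING",
              {"THE-VALUE": "The system shall log every login attempt."})

# Placeholders for relations between requirements and for hierarchical views
ET.SubElement(content, "SPEC-RELATIONS")
ET.SubElement(content, "SPECIFICATIONS")

print(ET.tostring(root, encoding="unicode"))
```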
|
https://en.wikipedia.org/wiki/Requirements_Interchange_Format
|
Generic data modelsare generalizations of conventionaldata models. They define standardised general relation types, together with the kinds of things that may be related by such a relation type.
The definition of a generic data model is similar to the definition of a natural language. For example, a generic data model may define relation types such as a 'classification relation', being a binary relation between an individual thing and a kind of thing (a class), and a 'part-whole relation', being a binary relation between two things, one with the role of part, the other with the role of whole, regardless of the kind of things that are related. Given an extensible list of classes, this allows any individual thing to be classified and part-whole relations to be specified for any individual object. By standardising an extensible list of relation types, a generic data model enables the expression of an unlimited number of kinds of facts and approaches the capabilities of natural languages.
Conventional data models, on the other hand, have a fixed and limited domain scope, because the instantiation (usage) of such a model only allows expressions of kinds of facts that are predefined in the model.
Generic data models are developed as an approach to solve some shortcomings of conventionaldata models. For example, different modelers usually produce different conventional data models of the same domain. This can lead to difficulty in bringing the models of different people together and is an obstacle for data exchange and data integration. Invariably, however, this difference is attributable to different levels of abstraction in the models and differences in the kinds of facts that can be instantiated (the semantic expression capabilities of the models). The modelers need to communicate and agree on certain elements which are to be rendered more concretely, in order to make the differences less significant.
There are generic patterns that can be used to advantage for modeling business. These include entity types for PARTY (with included PERSON and ORGANIZATION), PRODUCT TYPE, PRODUCT INSTANCE, ACTIVITY TYPE, ACTIVITY INSTANCE, CONTRACT, GEOGRAPHIC AREA, and SITE. A model which explicitly includes versions of these entity classes will be both reasonably robust and reasonably easy to understand.
More abstract models are suitable for general purpose tools, and consist of variations on THING and THING TYPE, with all actual data being instances of these. Such abstract models are on one hand more difficult to manage, since they are not very expressive of real world things, but on the other hand they have a much wider applicability, especially if they are accompanied by a standardised dictionary. More concrete and specific data models will risk having to change as the scope or environment changes.
One approach to generic data modeling has the following characteristics:
This way of modeling allows the addition of standard classes and standard relation types as data (instances), which makes the data model flexible and prevents data model changes when the scope of the application changes.
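A minimal Python sketch of this idea follows. It is not taken from the cited sources, and the pump example is invented; the point is that classes ("kinds of things") and relation types are held as ordinary data, so new kinds of facts can be recorded without changing the schema.

```python
# Sketch of a generic data model: classes and relation types are data,
# so the fixed schema stays tiny.
from dataclasses import dataclass

@dataclass
class Thing:
    name: str

@dataclass
class Kind(Thing):      # a class / "kind of thing" is itself a thing
    pass

@dataclass
class Fact:
    relation_type: str  # e.g. "classification" or "part-whole"
    left: Thing
    right: Thing

pump_kind = Kind("centrifugal pump")
impeller_kind = Kind("impeller")
p101 = Thing("P-101")
p101_impeller = Thing("P-101 impeller")

facts = [
    Fact("classification", p101, pump_kind),              # P-101 is a centrifugal pump
    Fact("classification", p101_impeller, impeller_kind),
    Fact("part-whole", p101_impeller, p101),               # the impeller is part of P-101
]

for f in facts:
    print(f"{f.left.name} --{f.relation_type}--> {f.right.name}")
```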
A generic data model obeys the following rules:[2]
Examples of generic data models are
1. David C. Hay (1995). Data Model Patterns: Conventions of Thought. New York: Dorset House.
2. David C. Hay (2011). Enterprise Model Patterns: Describing the World. Bradley Beach, New Jersey: Technics Publications.
3. Matthew West (2011). Developing High Quality Data Models. Morgan Kaufmann.
|
https://en.wikipedia.org/wiki/Generic_data_model
|
With the application of probability sampling in the 1930s, surveys became a standard tool for empirical research in social sciences, marketing, and official statistics.[1] The methods involved in survey data collection are any of a number of ways in which data can be collected for a statistical survey; they are used to collect information from a sample of individuals in a systematic way. First there was the change from traditional paper-and-pencil interviewing (PAPI) to computer-assisted interviewing (CAI); now, face-to-face surveys (CAPI), telephone surveys (CATI), and mail surveys (CASI, CSAQ) are increasingly replaced by web surveys.[2] In addition, remote interviewers may keep the respondent engaged while reducing costs compared to in-person interviewers.[3]
The choice between administration modes is influenced by several factors, including 1) costs, 2) coverage of the target population (including group-specific preferences for certain modes[4]), 3) flexibility of asking questions, 4) respondents’ willingness to participate and 5) response accuracy. Different methods createmode effectsthat change how respondents answer. The most common modes of administration are listed under the following headings.[5]
Mobile data collection or mobile surveys is an increasingly popular method of data collection. Over 50% of surveys today are opened on mobile devices.[6]The survey, form, app or collection tool is on a mobile device such as a smart phone or a tablet. These devices offer innovative ways to gather data, and eliminate the laborious "data entry" (of paper form data into a computer), which delays data analysis and understanding. By eliminating paper, mobile data collection can also dramatically reduce costs: one World Bank study in Guatemala found a 71% decrease in cost while using mobile data collection, compared to the previous paper-based approach.[7]
Apart from the high mobile phone penetration,[8][9]further advantages are quicker response times and the possibility to reach previously hard-to-reach target groups. In this way, mobile technology allows marketers, researchers and employers to create real and meaningful mobile engagement in environments different from the traditional one in front of a desktop computer.[10][11]However, even when using mobile devices to answer the web surveys, most respondents still answer from home.[12][13]
SMS surveys can reach any handset, in any language and in any country. As they are not dependent on internet access and the answers can be sent when it is convenient, they are a suitable mobile survey data collection channel for many situations that require fast, high-volume responses. As a result, SMS surveys can deliver 80% of responses in less than 2 hours,[14] often at much lower cost than face-to-face surveys because travel and personnel costs are eliminated.[15] IM is similar to SMS, except that a mobile number is not required. IM functions are available in standalone software, such as Skype, or embedded in websites such as Facebook and Google.[3]
Online (Internet) surveys are becoming an essential research tool for a variety of research fields, including marketing, social and official statistics research. According to ESOMAR, online survey research accounted for 20% of global data-collection expenditure in 2006.[1] They offer capabilities beyond those available for any other type of self-administered questionnaire.[16] Online consumer panels are also used extensively for carrying out surveys, but the quality is considered inferior because the panelists are regular contributors and tend to be fatigued. However, when estimating measurement quality (defined as the product of reliability and validity) using a multitrait-multimethod approach (MTMM), some studies found quite reasonable quality,[17][18] and even that the quality of a series of questions in an online opt-in panel (Netquest) was very similar to the measurement quality of the same questions asked in the European Social Survey (ESS), which is a face-to-face survey.[19]
Some studies have compared the quality of face-to-face surveys and/or telephone surveys with that of online surveys, for single questions, but also for more complex concepts measured with more than one question (also called Composite Scores or Index).[20][21][22]Focusing only on probability-based surveys (also for the online ones), they found overall that the face-to-face (using show-cards) and web surveys have quite similar levels of measurement quality, whereas the telephone surveys were performing worse. Other studies comparing paper-and-pencil questionnaires with web-based questionnaires showed that employees preferred online survey approaches to the paper-and-pencil format. There are also concerns about what has been called "ballot stuffing" in which employees make repeated responses to the same survey. Some employees are also concerned about privacy. Even if they do not provide their names when responding to a company survey, can they be certain that their anonymity is protected? Such fears prevent some employees from expressing an opinion.[23]
These issues, and potential remedies, are discussed in a number of sources.[26][27]
Telephone surveys use interviewers to encourage the sample persons to respond, which leads to higher response rates.[28] There is some potential for interviewer bias (e.g., some people may be more willing to discuss a sensitive issue with a female interviewer than with a male one). Depending on local call charge structure and coverage, this method can be cost efficient and may be appropriate for large national (or international) sampling frames using traditional phones or computer-assisted telephone interviewing (CATI). Because it is audio-based, this mode cannot be used for non-audio information such as graphics, demonstrations, or taste/smell samples.
Depending on local bulk mail postage, mail surveys may be relatively lower cost compared to other modes. The fielding period tends to be long, often several months, before the surveys are returned and statistical analysis can begin. The questionnaire may be handed to the respondents or mailed to them, but in all cases they are returned to the researcher via mail. Because there is no interviewer presence, the mail mode is not suitable for issues that may require clarification. However, there is no interviewer bias and respondents can answer at their own convenience (allowing them to break up long surveys; also useful if they need to check records to answer a question). To correct nonresponse bias, extrapolation across waves can be used.[29] Response rates can be improved by using mail panels (members of the panel must agree to participate) and prepaid monetary incentives,[30] but response rates are affected by the class of mail through which the survey was sent.[31] Panels can be used in longitudinal designs where the same respondents are surveyed several times.
Visual presentation of survey questions makes a difference in how respondents answer them, with four primary design elements: words (meaning), numbers (sequencing), symbols (e.g. arrows), and graphics (e.g. text boxes).[16] In translated surveys, writing practice (e.g. Spanish words are lengthier and require more printing space) and text orientation (e.g. Arabic is read from right to left) must be considered in questionnaire visual design to minimize missing data.[32][33]
The face-to-face mode is suitable for locations where telephone or mail are not developed. Like the telephone mode, the interviewer presence runs the risk of interviewer bias.
Video interviewing is similar to face-to-face interviewing except that the interviewer and respondent are not physically in the same location, but are communicating via video conferencing such asZoomorTeams.[3]
Virtual-world interviews take place online in a space created for virtual interaction with other users or players, such asSecond Life. Both the respondent and interviewer chooseavatarsto represent themselves and interact by a chat feature or by real voice audio.[3]
Achatbotis used regularly in marketing and sales to gather experience feedback. When used for collecting survey responses, chatbot surveys should be kept short, trained to speak in a friendly human tone, and use easy-to-navigate interface with more advancedartificial intelligence.[34]
Researchers can combine several above methods for the data collection. For example, researchers can invite shoppers at malls, and send willing participants questionnaires by emails. With the introduction of computers to the survey process, survey mode now includes combinations of different approaches or mixed-mode designs. Some of the most common methods are:[35][16]
|
https://en.wikipedia.org/wiki/Survey_data_collection
|
Acase report form(orCRF) is a paper or electronic questionnaire specifically used in clinical trial research.[1]The case report form is the tool used by the sponsor of theclinical trialto collect data from each participating patient. All data on each patient participating in a clinical trial are held and/or documented in the CRF, includingadverse events.
The sponsor of the clinical trial develops the CRF to collect the specific data they need in order totesttheir hypotheses or answer their research questions. The size of a CRF can range from a handwritten one-time 'snapshot' of a patient's physical condition to hundreds of pages of electronically captured data obtained over a period of weeks or months. (It can also include required check-up visits months after the patient's treatment has stopped.)
The sponsor is responsible for designing a CRF that accurately represents the protocol of the clinical trial, as well as managing its production, monitoring the data collection and auditing the content of the filled-in CRFs.
Case report forms contain data obtained during the patient's participation in the clinical trial. Before being sent to the sponsor, this data is usually de-identified (not traceable to the patient) by removing the patient's name, medical record number, etc., and giving the patient a unique study number. The supervising Institutional Review Board (IRB) oversees the release of any personally identifiable data to the sponsor.
From the sponsor's point of view, the main logistic goal of a clinical trial is to obtain accurate CRFs. However, because of human and machine error, the data entered in CRFs is rarely completely accurate or entirely readable. To combat these errors, monitors are usually hired by the sponsor to audit the CRF and make sure it contains the correct data.
When the study administrators or automated mechanisms process the CRFs that were sent to the sponsor by local researchers, they make a note of queries. Queries are non-sensible or questionable data that must be explained. Examples of data that would lead to a query: a male patient being on female birth control medication or having had an abortion, or a 15-year-old participant having had hip replacement surgery. Each query has to be resolved by the individual attention of a member of each local research team, as well as an individual in the study administration. To ensure quality control, these queries are usually addressed and resolved before the CRF data is included by the sponsor in the final clinical study report. Depending on variables relating to the nature of the study (e.g., the health of the study population), the effectiveness of the study administrators in resolving these queries can significantly impact the cost of studies.
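The kind of automated check that can raise such queries is easy to sketch. The following Python example is purely illustrative; the field names and rules are hypothetical and not drawn from any particular CRF standard or data management system.

```python
# Hypothetical edit checks that flag non-sensible CRF data as queries.
def generate_queries(crf_record):
    """Return query texts for data that must be explained or corrected."""
    queries = []
    if crf_record.get("sex") == "male" and \
            "oral contraceptive" in crf_record.get("con_meds", []):
        queries.append("Male patient recorded as taking female birth control medication.")
    age = crf_record.get("age")
    if age is not None and age < 18 and \
            "hip replacement" in crf_record.get("medical_history", []):
        queries.append("Participant under 18 with a recorded hip replacement.")
    return queries

record = {"subject_id": "001-0042", "sex": "male", "age": 15,
          "con_meds": ["oral contraceptive"],
          "medical_history": ["hip replacement"]}

for query in generate_queries(record):
    print(query)   # each query must then be resolved by the local research team
```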
Originally, all case report forms were paper-based, but there is now a growing trend toward conducting clinical studies using an electronic case report form (eCRF).
This way of working has many advantages:
|
https://en.wikipedia.org/wiki/Case_report_form
|
Asafety data sheet(SDS),[1]material safety data sheet(MSDS), orproduct safety data sheet(PSDS) is a document that lists information relating tooccupational safety and healthfor the use of varioussubstancesandproducts. SDSs are a widely used type offact sheetused to catalogue information onchemical speciesincludingchemical compoundsand chemicalmixtures. SDS information may include instructions for the safe use and potentialhazardsassociated with a particularmaterialor product, along with spill-handling procedures. The older MSDS formats could vary from source to source within a country depending on national requirements; however, the newer SDS format is internationally standardized.
An SDS for a substance is not primarily intended for use by the generalconsumer, focusing instead on the hazards of working with the material in an occupational setting. There is also a duty to properlylabelsubstances on the basis of physico-chemical, health, or environmental risk. Labels often include hazard symbols such as theEuropean Union standardsymbols. The same product (e.g.paintssold under identical brand names by the same company) can have different formulations in different countries. Theformulationand hazards of a product using a generic name may vary between manufacturers in the same country.
The Globally Harmonized System of Classification and Labelling of Chemicals contains a standard specification for safety data sheets.[2]The SDS follows a 16 section format which is internationally agreed and for substances especially, the SDS should be followed with an Annex which contains the exposure scenarios of this particular substance.[3]The 16 sections are:[4]
InCanada, the program known as theWorkplace Hazardous Materials Information System(WHMIS) establishes the requirements for SDSs in workplaces and is administered federally byHealth Canadaunder theHazardous Products Act, Part II, and theControlled Products Regulations.
Safety data sheets have been made an integral part of the system of Regulation (EC) No 1907/2006 (REACH).[6]The original requirements of REACH for SDSs have been further adapted to take into account the rules for safety data sheets of the Global Harmonised System (GHS)[7]and the implementation of other elements of the GHS into EU legislation that were introduced by Regulation (EC) No 1272/2008 (CLP)[8]via an update to Annex II of REACH.[9]
The SDS must be supplied in an official language of the Member State(s) where the substance or mixture is placed on the market, unless the Member State(s) concerned provide(s) otherwise (Article 31(5) of REACH).
TheEuropean Chemicals Agency(ECHA) has published a guidance document on the compilation of safety data sheets.
In Germany, safety data sheets must be compiled in accordance with REACH Regulation No. 1907/2006. The requirements concerning national aspects are defined in the Technical Rule for Hazardous Substances (TRGS) 220 "National aspects when compiling safety data sheets".[10] One example of a national measure mentioned in SDS section 15 is the water hazard class (WGK), which is based on the regulations governing systems for handling substances hazardous to waters (AwSV).[11]
Dutch Safety Data Sheets are well known asveiligheidsinformatieblador Chemiekaarten. This is a collection of Safety Data Sheets of the most widely used chemicals. The Chemiekaarten boek is commercially available, but also made available through educational institutes, such as the web site offered by theUniversity of Groningen.[12]
This section contributes to a better understanding of the regulations governing SDS within theSouth Africanframework. As regulations may change, it is the responsibility of the reader to verify the validity of the regulations mentioned in text.
As globalisation increased and countries engaged in cross-border trade, the quantity of hazardous material crossing international borders grew.[13] Realising the detrimental effects of hazardous trade, the United Nations established a committee of experts specialising in the transportation of hazardous goods.[14] The committee provides best practices governing the conveyance of hazardous materials and goods for land transport, including road and railway, as well as air and sea transportation. These best practices are constantly updated to remain current and relevant.
There are various other international bodies who provide greater detail and guidance for specific modes of transportation such as theInternational Maritime Organisation (IMO)by means of the International Maritime Code[15]and theInternational Civil Aviation Organisation(ICAO) via the Technical Instructions for the safe transport of dangerous goods by air[16]as well as theInternational Air Transport Association (IATA)who provides regulations for the transport of dangerous goods.
These guidelines prescribed by the international authorities apply to South African land, sea and air transportation of hazardous materials and goods. In addition to these rules and regulations reflecting international best practice, South Africa has also implemented common law, which is law based on custom and practice. Common law is a vital part of maintaining public order and forms the basis of case law. Case law, using the principles of common law, consists of interpretations and decisions of statutes made by courts. Acts of parliament are determinations and regulations by parliament which form the foundation of statutory law. Statutory laws are published in the government gazette or on the official website. Lastly, subordinate legislation consists of the bylaws issued by local authorities and authorised by parliament.
Statutory law gives effect to the Occupational Health and Safety Act of 1993 and the National Road Traffic Act of 1996. The Occupational Health and Safety Act details the necessary provisions for the safe handling and storage of hazardous materials and goods, whilst the transport act details the necessary provisions for the transportation of hazardous goods.
Relevant South African legislation includes the Hazardous Chemicals Agent regulations of 2021 under the Occupational Health and Safety Act of 1993,[17] the Chemical Substance Act 15 of 1973, the National Road Traffic Act of 1996,[18] and the Standards Act of 2008.[19][20]
There has been selective incorporation of aspects of the Globally Harmonised System (GHS) of Classification and Labelling of Chemicals into South African legislation. At each point of the chemical value chain, there is a responsibility to manage chemicals in a safe and responsible manner. An SDS is therefore required by law.[21] An SDS is included in the requirements of the Occupational Health and Safety Act, 1993 (Act No. 85 of 1993) Regulation 1179 dated 25 August 1995.
The categories of information supplied in the SDS are listed in SANS 11014:2010; dangerous goods standards – Classification and information. SANS 11014:2010 supersedes the first edition SANS 11014-1:1994 and is an identical implementation of ISO 11014:2009. According to SANS 11014:2010:
In theU.K., the Chemicals (Hazard Information and Packaging for Supply) Regulations 2002 - known as CHIP Regulations - impose duties upon suppliers, and importers into the EU, ofhazardous materials.[22]
NOTE: Safety data sheets (SDS) are no longer covered by the CHIP regulations. The laws that require an SDS to be provided have been transferred to the European REACH Regulations.[23]
TheControl of Substances Hazardous to Health(COSHH) Regulations govern the use of hazardous substances in the workplace in the UK and specifically require an assessment of the use of a substance.[24]Regulation 12 requires that an employer provides employees with information, instruction and training for people exposed to hazardous substances. This duty would be very nearly impossible without the data sheet as a starting point. It is important for employers therefore to insist on receiving a data sheet from a supplier of a substance.
The duty to supply information is not confined to informing only business users of products. SDSs for retail products sold by large DIY shops are usually obtainable on those companies' web sites.
Web sites of manufacturers and large suppliers do not always include them, even if the information is obtainable from retailers, but written or telephone requests for paper copies will usually be answered favourably.
TheUnited Nations(UN) defines certain details used in SDSs such as theUN numbersused to identify somehazardous materialsin a standard form while in international transit.
In theU.S., theOccupational Safety and Health Administrationrequires that SDSs be readily available to all employees for potentially harmful substances handled in the workplace under theHazard Communication Standard.[25]The SDS is also required to be made available to local fire departments and local and state emergency planning officials under Section 311 of theEmergency Planning and Community Right-to-Know Act. TheAmerican Chemical Societydefines Chemical Abstracts Service Registry Numbers (CAS numbers) which provide a unique number for each chemical and are also used internationally in SDSs.
Reviews of material safety data sheets by theU.S. Chemical Safety and Hazard Investigation Boardhave detected dangerous deficiencies.
The board's Combustible Dust Hazard Study analyzed 140 data sheets of substances capable of producing combustible dusts.[26]None of the SDSs contained all the information the board said was needed to work with the material safely, and 41 percent failed to even mention that the substance was combustible.
As part of its study of an explosion and fire that destroyed the Barton Solvents facility in Valley Center, Kansas, in 2007, the safety board reviewed 62 material safety data sheets for commonly used nonconductive flammable liquids. As in the combustible dust study, the board found all the data sheets inadequate.[27]
In 2012, the US adopted the 16 section Safety Data Sheet to replace Material Safety Data Sheets. This became effective on 1 December 2013. These new Safety Data Sheets comply with theGlobally Harmonized System of Classification and Labeling of Chemicals(GHS). By 1 June 2015, employers were required to have their workplace labeling and hazard communication programs updated as necessary – including all MSDSs replaced with SDS-formatted documents.[28]
Many companies offer the service of collecting, or writing and revising, data sheets to ensure they are up to date and available for their subscribers or users. Some jurisdictions impose an explicitduty of carethat each SDS be regularly updated, usually every three to five years.[29]However, when new information becomes available, the SDS must be revised without delay.[30]If a full SDS is not feasible, then a reduced workplace label should be authored.[31]
|
https://en.wikipedia.org/wiki/Safety_data_sheet
|
Data hierarchyrefers to the systematic organization of data, often in a hierarchical form. Data organization involves characters, fields, records, files and so on.[1][2]This concept is a starting point when trying to see what makes up data and whether data has a structure. For example, how does a person make sense of data such as 'employee', 'name', 'department', 'Marcy Smith', 'Sales Department' and so on, assuming that they are all related? One way to understand them is to see these terms as smaller or larger components in a hierarchy. One might say that Marcy Smith is one of the employees in the Sales Department, or an example of an employee in that Department. The data we want to capture about all our employees, and not just Marcy, is the name, ID number, address etc.
"Data hierarchy" is a basic concept in data anddatabase theoryand helps to show the relationships between smaller and larger components in a database or data file. It is used to give a better sense of understanding about the components of data and how they are related.
It is particularly important in databases withreferential integrity,third normal form, orperfect key. "Data hierarchy" is the result of proper arrangement of data without redundancy. Avoiding redundancy eventually leads to proper "data hierarchy" representing the relationship between data, and revealing its relational structure.
The components of the data hierarchy are listed below.
Adata fieldholds a single fact or attribute of an entity. Consider a date field, e.g. "19 September 2004". This can be treated as a single date field (e.g. birthdate), or three fields, namely, day of month, month and year.
Arecordis a collection of related fields. An Employee record may contain a name field(s), address fields, birthdate field and so on.
Afileis a collection of related records. If there are 100 employees, then each employee would have a record (e.g. called Employee Personal Details record) and the collection of 100 such records would constitute a file (in this case, called Employee Personal Details file).
Files are integrated into adatabase.[3]This is done using a Database Management System. If there are other facets of employee data that we wish to capture, then other files such as Employee Training History file and Employee Work History file could be created as well.
The following terms, with reference to the employee example above, provide further clarity:
Data field label = Employee Name or EMP_NAME
Data field value = Jeffrey Tan
The above description is a view of data as understood by a user e.g. a person working in Human Resource Department.
The above structure can be seen in thehierarchical model, which is one way to organize data in a database.[2]
In terms of data storage, data fields are made ofbytesand these in turn are made up ofbits.
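The hierarchy described above can be sketched directly in Python. The field names and values below are illustrative (they reuse the Employee example), not a prescription for how such data must be stored.

```python
# Illustrative sketch of the data hierarchy: field -> record -> file -> database.
# A field holds a single fact; a record is a collection of related fields.
employee_record = {
    "EMP_NAME": "Jeffrey Tan",        # data field label -> data field value
    "EMP_ID": "1001",
    "DEPARTMENT": "Sales Department",
    "BIRTHDATE": "2004-09-19",
}

# A file is a collection of related records, one per employee.
employee_personal_details_file = [
    employee_record,
    {"EMP_NAME": "Marcy Smith", "EMP_ID": "1002",
     "DEPARTMENT": "Sales Department", "BIRTHDATE": "1990-03-05"},
]

# Files are integrated into a database (here simply keyed by file name).
database = {
    "Employee Personal Details": employee_personal_details_file,
    "Employee Training History": [],   # further files can be added
    "Employee Work History": [],
}

print(database["Employee Personal Details"][0]["EMP_NAME"])   # -> Jeffrey Tan
```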
|
https://en.wikipedia.org/wiki/Data_hierarchy
|
Adatabase catalogof adatabaseinstance consists ofmetadatain which definitions ofdatabase objectssuch asbase tables,views(virtualtables),synonyms,value ranges,indexes,users, and user groups are stored.[1][2]It is anarchitectureproduct that documents the database's content anddata quality.[3]
TheSQLstandard specifies a uniform means to access the catalog, called theINFORMATION_SCHEMA, but not alldatabasesfollow this, even if they implement other aspects of the SQL standard. For an example of database-specificmetadataaccess methods, seeOracle metadata.
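As a brief illustration, the standard INFORMATION_SCHEMA can be queried like any other set of views. The sketch below assumes a database that implements it (for example PostgreSQL) and the psycopg2 driver; the connection string and the "employee" table are placeholders.

```python
# Querying the catalog via the SQL-standard INFORMATION_SCHEMA views.
import psycopg2

conn = psycopg2.connect("dbname=example user=example")   # placeholder DSN
cur = conn.cursor()

# Base tables and views recorded in the catalog
cur.execute("""
    SELECT table_name, table_type
    FROM information_schema.tables
    WHERE table_schema = 'public'
""")
for table_name, table_type in cur.fetchall():
    print(table_name, table_type)

# Column-level metadata (name, data type, nullability) for one table
cur.execute("""
    SELECT column_name, data_type, is_nullable
    FROM information_schema.columns
    WHERE table_name = %s
""", ("employee",))
print(cur.fetchall())

conn.close()
```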
|
https://en.wikipedia.org/wiki/Database_catalog
|
Ametadata registryis a central location in an organization wheremetadatadefinitions are stored and maintained in a controlled method.
Ametadata repositoryis thedatabasewhere metadata is stored. The registry also adds relationships with related metadata types. Ametadata enginecollects, stores and analyzes information about data and metadata (data about data) in use within a domain.[1]
Metadata registries are used whenever data must be used consistently within an organization or group of organizations. Examples of these situations include:
Central to the charter of any metadata management programme is the process of creating trusting relationships with stakeholders and that definitions and structures have been reviewed and approved by appropriate parties.
A metadata registry typically has the following characteristics:
Because metadata registries are used to store both semantics (the meaning of a data element) and system-specific constraints (for example, the maximum length of a string), it is important to identify which systems impose these constraints and to document them. For example, the maximum length of a string should not change the meaning of a data element.
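One way to keep semantics and system-specific constraints apart is sketched below in Python. The element, system and constraint names are hypothetical; the point is only that each constraint is recorded together with the system that imposes it, while the definition stays unchanged.

```python
# Hypothetical registry entry: semantics kept separate from per-system constraints.
data_element = {
    "name": "Employee Birth Date",
    "definition": "The date on which the employee was born.",
    "data_type": "date",
    "steward": "HR Data Stewardship Team",
    "status": "approved",
    "system_constraints": [
        # each constraint is documented with the system that imposes it
        {"system": "Payroll system", "constraint": "format", "value": "YYYY-MM-DD"},
        {"system": "Legacy HR system", "constraint": "format", "value": "DDMMYYYY"},
    ],
}

# Constraints may differ per system without altering the element's meaning.
for c in data_element["system_constraints"]:
    print(f'{data_element["name"]}: {c["constraint"]}={c["value"]} ({c["system"]})')
```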
The International Organization for Standardization (ISO) has published a standard for a metadata registry, ISO/IEC 11179, as well as ISO 15000-3 and ISO 15000-4, the ebXML registry and repository (regrep) standards.
There are two international standards which are commonly referred to as metadata registry standards: ISO/IEC 11179 and ISO 15000-3. Some believe that ISO/IEC 11179 and ISO 15000-3 are interchangeable or at least in some way similar; for example:
"Of interest is that the ISO 11179 model was one of the inputs to the ebXML RIM (registry information model) and so has much functional equivalence to the "registry" region of the ISO 11179 conceptual model."[1]
This is, however, incorrect. Although the specification ebRIM v2.0 (5 December 2001) says at the beginning, in its Design Objectives, "Leverage as much as possible the work done in the OASIS [OAS] and the ISO 11179 [ISO] Registry models",[2] by the time of ebRIM v3.0 (2 May 2005) all reference to ISO/IEC 11179 is reduced to a mention under informative references on page 76 of 78.[3] It was recognised by some team members that the ebXML RIM data model had no place to store "fine grained artifacts",[4] i.e. the data elements which are at the heart of ISO/IEC 11179, but not until 2009 can an explicit and definitive statement from the team be found.[5]
ISO/IEC 11179 says that it is concerned with "traditional" metadata: "We limit the scope of the term as it is used here in ISO/IEC 11179 to descriptions of data - the more traditional use of the term."
Originally the standard named itself a "data element" registry. It describes data elements: "data elements are the fundamental units of data" and "data elements themselves contain various kinds of data that include characters, images, sound, etc."
It also describes a registry with an analogy: "This is analogous to the registries maintained by governments to keep track of motor vehicles. A description of each motor vehicle is entered in the registry, but not the vehicle itself."
TheebXMLRIM says about its Repository and Registry that it is
It also says that it is
It also describes itself with "...this familiar metaphor. An ebXML Registry is like your local library. The repository is like the bookshelves in the library. The repository items in the repository are like book (sic) on the bookshelves." It goes on to say "The registry is like the card catalog … A RegistryObject is like a card in the card catalog."
What should be immediately apparent is that something which holds catalogue cards is not merely "like" a catalogue; it is a catalogue.
Unfortunately for a number of organisations that have implemented ebXML RIM to satisfy a requirement for an ISO/IEC 11179 registry, ebXML RIM
It is
A metadata registry is frequently set up and administered by an organization's data architect or data modeling team.
Data elements are frequently assigned todata stewardsor data stewardship teams that are responsible for the maintenance of individual data elements through a secure system.
Metadata registries frequently have a formal data element submission, approval and publishing process. Each data element should be accepted and reviewed by a data stewardship team before it is published. After publication, change control processes should be used.
Metadata registries are frequently large and complex structures and require navigation, visualization and searching tools. Use of hierarchical viewing tools is frequently an essential part of a metadata registry system. Metadata publishing consists of making data element definitions and structures available to both people and other systems.
In alphabetical order:
In alphabetical order:
Open Forums on Metadata Registries, in reverse chronological order:
|
https://en.wikipedia.org/wiki/Metadata_registry
|
OneSourceis an evolving[when?]data analysis tool used internally by theAir Combat Command(ACC) Vocabulary Services Team, and made available to general data management community. It is used by the greaterUS Department of Defense(DoD) andNATOcommunity forcontrolled vocabularymanagement and exploration. It provides its users with a consistent view ofsyntactical,lexical, andsemanticdata vocabularies through a community-driven web environment. It was created with the intention of directly supporting the DoDNet-centricData Strategy of visible, understandable, and accessible data assets.
OneSource serves developers, integrators, managers, andcommunity of interest(COI) participants as a focus point for searching, navigating, annotating,semantic matching, and mapping data terms extracted from military standards, COI vocabularies, programs of record, and other schemas and data sources.
OneSource is based upon aUnited States Air Forceresearched and developedtriplestoreknowledge basearchitecture, which allowsXML Schema,Web Ontology Language,relational database,spreadsheet, and even custom data models to be handled and presented in the same manner. Initial capability was released in 2006. Version 2 was released in 2008 with the previously disjoint matching and mapping capabilities fully integrated for use in a web browser.
A brief newsfeed of recent changes in the Namespace dataset is available to the general public.[1]
|
https://en.wikipedia.org/wiki/Vocabulary_OneSource
|
A metadata repository is a database created to store metadata. Metadata is information about the structures that contain the actual data. Metadata is often said to be "data about data", but this is misleading; data profiles are an example of actual "data about data". Metadata adds one layer of abstraction to this definition: it is data about the structures that contain data. Metadata may describe the structure of any data, of any subject, stored in any format.
A well-designed metadata repository typically contains data far beyond simple definitions of the variousdata structures. Typical repositories store dozens to hundreds of separate pieces of information about each data structure.
Comparing the metadata of two data items, one digital and one physical, clarifies what metadata is:
First, digital: For data stored in a database one may have a table called "Patient" with many columns, each containing data which describes a different attribute of each patient. One of these columns may be named "Patient_Last_Name". What is some of the metadata about the column that contains the actual surnames of patients in the database? We have already used two items: the name of the column that contains the data (Patient_Last_Name) and the name of the table that contains the column (Patient). Other metadata might include the maximum length of last name that may be entered, whether or not last name is required (can we have a patient without Patient_Last_Name?), and whether the database converts any surnames entered in lower case to upper case. Metadata of a security nature may show the restrictions which limit who may view these names.
Second, physical: For data stored in a brick-and-mortar library, there are many volumes and various media, including books. Metadata about books would include ISBN, Binding_Type, Page_Count, Author, etc. Within Binding_Type, metadata would include the possible bindings, material, etc.
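The examples above can be written out as a small Python sketch. The attribute names and values are hypothetical, but they mirror the metadata items just discussed for the Patient_Last_Name column and for a book; note that the repository stores these descriptions, not the patients or books themselves.

```python
# Illustrative metadata entries: descriptions of structures, not the data itself.
patient_last_name_metadata = {
    "table_name": "Patient",
    "column_name": "Patient_Last_Name",
    "max_length": 50,                  # maximum length that may be entered
    "required": True,                  # can we have a patient without a surname?
    "uppercase_on_entry": True,        # lower-case input converted to upper case
    "view_restricted_to": ["clinical staff", "registration clerks"],  # security
}

book_metadata = {
    "medium": "book",
    "attributes": ["ISBN", "Binding_Type", "Page_Count", "Author"],
    "Binding_Type_values": ["hardcover", "paperback"],   # possible bindings
}

print(patient_last_name_metadata["column_name"], book_metadata["attributes"])
```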
Such contextual information about business data includes its meaning and content, the policies that govern it, its technical attributes, the specifications that transform it, and the programs that manipulate it.[1]: 171
The metadata repository is responsible for physically storing and cataloging metadata. Data in a metadata repository should be generic, integrated, current, and historical:
As the need to use metadata for business intelligence has grown, so has the scope of the metadata repository. Initially, data dictionaries were the closest point of interaction between technology and business and formed the universe of the metadata repository. As the scope increased, business glossaries and their tags to a variety of status flags emerged on the business side, while the consumption of technical metadata, its lineage and its linkages made the repository a source of valuable reports that bring business and technology together, make data management decisions easier, and help assess the cost of changes.
A metadata repository explores enterprise-wide data governance, data quality and master data management (including master data and reference data) and integrates this wealth of information with integrated metadata across the organization to provide a decision support system for data structures, even though it only reflects the structures consumed from various systems.
A repository has additional functionality compared with a registry. A metadata repository not only stores metadata, as a metadata registry does, but also adds relationships with related metadata types. Metadata related in a flow from its point of entry into the organization up to the deliverables is considered the lineage of that data point; metadata related across other metadata types is called linkage. By providing the relationships among all the metadata points across the organization and maintaining their integrity with an architecture that handles change, the metadata repository provides the basic material for understanding the complete data flow, its definitions and its impact. Another important feature is the maintenance of version control, though this point of contrast is open for discussion; these definitions are still evolving and need refinement.
The purpose of a registry is to define metadata elements and maintain them across the organization; data modelers and other data management teams refer to the registry when following changes. A metadata repository, by contrast, sources metadata from the various metadata systems in the organization and reflects what is upstream. A repository never acts as an upstream source, whereas a registry is used as an upstream source for metadata changes.
A metadata repository brings the structures of all the organization's data containers into one integrated place. This opens a plethora of resourceful information for making calculated business decisions. The tool uses one generic data model to integrate all the models, thus bringing all the applications and programs of the organization into one format, and applying business definitions and business processes on top of it brings business and technology closer, which helps organizations make reliable roadmaps with definite goals. With one-stop information, business has more control over changes and can perform impact analysis with the tool. Usually, business spends much time and money making decisions based on discovery and research into the impact of changing, adding, or removing data structures in the organization's data management. With a structured and well-maintained repository, moving a product from ideation to delivery takes the least amount of time (other variables being constant).
To sum it up:
Each database management system (DBMS) and database tool has its own language for the metadata components within. Database applications already have their own repositories or registries that are expected to provide all of the necessary functionality to access the data stored within. Vendors do not want other companies to be capable of easily migrating data away from their products and into competitors' products, so they are proprietary in the way they handle metadata. CASE tools, DBMS dictionaries, ETL tools, data cleansing tools, OLAP tools, and data mining tools all handle and store metadata differently. Only a metadata repository can be designed to store the metadata components from all of these tools.[3]
Metadata repositories should store metadata in four classifications: ownership, descriptive characteristics, rules and policies, and physical characteristics. Ownership shows the data owner and the application owner. Descriptive characteristics define the names, types and lengths, and the definitions describing business data or business processes. Rules and policies define security, data cleanliness, timelines for data, and relationships. Physical characteristics define the origin or source and the physical location.[1]: 176 Like building a logical data model for creating a database, a logical meta model can help identify the metadata requirements for business data.[1]: 185 The metadata repository may be centralized, decentralized, or distributed. A centralized design means that there is one database for the metadata repository that stores metadata for all applications business-wide. A centralized metadata repository has the same advantages and disadvantages as a centralized database: it is easier to manage because all the data is in one database, but bottlenecks may occur.
A decentralized metadata repository stores metadata in multiple databases, either separated by location and or departments of the business. This makes management of the repository more involved than a centralized metadata repository, but the advantage is that the metadata can be broken down into individual departments.
A distributed metadata repository uses a decentralized method, but unlike a decentralized metadata repository the metadata remains in its original application. AnXMLgateway is created[1]: 246that acts as a directory for accessing the metadata within each different application. The advantages and disadvantages for a distributed metadata repository mirror that of adistributed database.
The design of the information model should include various layers of metadata types, overlapped to create an integrated view of the data. The various metadata types should be linked to related metadata elements in a top-down model tied to the business glossary.
Layers of Metadata:
Metadata repositories can be designed as either anEntity-relationship model, or anObject-oriented design.
|
https://en.wikipedia.org/wiki/Metadata_repository
|
TheOpen Grid Forum(OGF) is a community of users, developers, and vendors for standardization ofgrid computing. It was formed in 2006 in a merger of theGlobal Grid Forumand the Enterprise Grid Alliance.
The OGF models its process on theInternet Engineering Task Force(IETF), and produces documents with many acronyms such asOGSA,OGSI, andJSDL.
The OGF has two principal functions plus an administrative function: being thestandards organizationforgrid computing, and building communities within the overall grid community (including extending it within both academia and industry). Each of these function areas is then divided into groups of three types:working groupswith a generally tightly defined role (usually producing a standard),research groupswith a looser role bringing together people to discuss developments within their field and generate use cases and spawn working groups, andcommunity groups(restricted to community functions).
Three meetings are organized per year, divided (approximately evenly after averaging over a number of years) between North America, Europe and East Asia. Many working groups organize face-to-face meetings in the interim.
The concept of a forum to bring together developers, practitioners, and users of distributed computing (known as grid computing at the time) was discussed at a "Birds of a Feather" session in November 1998 at the SC98 supercomputing conference.[1] Based on the response to the idea during this BOF, Ian Foster and Bill Johnston convened the first Grid Forum meeting at NASA Ames Research Center in June 1999, drawing roughly 100 people, mostly from the US. A group of organizers nominated Charlie Catlett (from Argonne National Laboratory and the University of Chicago) to serve as the initial chair; the nomination was confirmed via a plenary vote held at the second Grid Forum meeting in Chicago in October 1999.[2][3] With advice and assistance from the Internet Engineering Task Force (IETF), OGF established a process based on that of the IETF. OGF is managed by a steering group.
During 1998, groups similar to Grid Forum began to organize in Europe (calledeGrid) and Japan. Discussions among leaders of these groups resulted in combining to form theGlobal Grid Forumwhich met for the first time inAmsterdamin March 2001.GGF-1in Amsterdam followed fiveGrid Forummeetings. Catlett served as GGF Chair for two 3-year terms and was succeeded by Mark Linesch (fromHewlett-Packard) in September 2004.
The Enterprise Grid Alliance (EGA), formed in 2004, was more focused on largedata centerbusinesses such asEMC Corporation,NetApp, andOracle Corporation.[4][5]AtGGF-18(the 23rd gathering of the forum, counting the first five GF meetings) in September 2006, GGF becameOpen Grid Forum (OGF)based on a merger with EGA.[6]In September 2007, Craig Lee of theAerospace Corporationbecame chair.[7]
Some technologies specified by OGF include:
In addition to technical standards, the OGF published community-developed informational and experimental documents.
The first version of the DRMAA API was implemented in Sun's Grid Engine and also in the University of Wisconsin–Madison's Condor cycle-scavenger program. The separate Globus Alliance maintains an implementation of some of these standards through the Globus Toolkit. A release of UNICORE is based on the OGSA architecture and JSDL.
|
https://en.wikipedia.org/wiki/Open_Grid_Forum
|
XSD(XML Schema Definition), a recommendation of the World Wide Web Consortium (W3C), specifies how to formally describe the elements in an Extensible Markup Language (XML) document. It can be used by programmers to verify each piece of item content in a document, to assure it adheres to the description of the element it is placed in.[1]
Like allXML schema languages, XSD can be used to express a set of rules to which an XML document must conform to be considered "valid" according to that schema. However, unlike most other schema languages, XSD was also designed with the intent that determination of a document's validity would produce a collection of information adhering to specificdata types. Such a post-validationinfosetcan be useful in the development of XML document processing software.
XML Schema, published as aW3C recommendationin May 2001,[2]is one of severalXML schema languages. It was the first separate schema language forXMLto achieve Recommendation status by the W3C. Because of confusion between XML Schema as a specific W3C specification, and the use of the same term to describe schema languages in general, some parts of the user community referred to this language asWXS, an initialism for W3C XML Schema, while others referred to it asXSD, an initialism for XML Schema Definition.[3][4]In Version 1.1 the W3C has chosen to adopt XSD as the preferred name, and that is the name used in this article.
In its appendix of references, the XSD specification acknowledges the influence ofDTDsand other early XML schema efforts such asDDML,SOX, XML-Data, andXDR. It has adopted features from each of these proposals but is also a compromise among them. Of those languages, XDR and SOX continued to be used and supported for a while after XML Schema was published. A number ofMicrosoftproducts supported XDR until the release ofMSXML6.0 (which dropped XDR in favor of XML Schema) in December 2006.[5]Commerce One, Inc. supported its SOX schema language until declaring bankruptcy in late 2004.
The most obvious features offered in XSD that are not available in XML's nativeDocument Type Definitions(DTDs) arenamespaceawareness and datatypes, that is, the ability to define element and attribute content as containing values such as integers and dates rather than arbitrary text.
The XSD 1.0 specification was originally published in 2001, with a second edition following in 2004 to correct large numbers of errors. XSD 1.1 became aW3C RecommendationinApril 2012.
Technically, aschemais an abstract collection of metadata, consisting of a set ofschema components: chiefly element and attribute declarations and complex and simple type definitions. These components are usually created by processing a collection ofschema documents, which contain the source language definitions of these components. In popular usage, however, a schema document is often referred to as a schema.
Schema documents are organized by namespace: all the named schema components belong to a target namespace, and the target namespace is a property of the schema document as a whole. A schema document mayincludeother schema documents for the same namespace, and mayimportschema documents for a different namespace.
When an instance document is validated against a schema (a process known asassessment), the schema to be used for validation can either be supplied as a parameter to the validation engine, or it can be referenced directly from the instance document using two special attributes,xsi:schemaLocationandxsi:noNamespaceSchemaLocation. (The latter mechanism requires the client invoking validation to trust the document sufficiently to know that it is being validated against the correct schema. "xsi" is the conventional prefix for the namespace "http://www.w3.org/2001/XMLSchema-instance".)
XML Schema Documents usually have the filename extension ".xsd". A uniqueInternet Media Typeis not yet registered for XSDs, so "application/xml" or "text/xml" should be used, as per RFC 3023.
The main components of a schema are:
Other more specialized components include annotations, assertions, notations, and theschema componentwhich contains information about the schema as a whole.
Simple types (also called data types) constrain the textual values that may appear in an element or attribute. This is one of the more significant ways in which XML Schema differs from DTDs. For example, an attribute might be constrained to hold only a valid date or a decimal number.
XSD provides a set of 19primitive data types(anyURI,base64Binary,boolean,date,dateTime,decimal,double,duration,float,hexBinary,gDay,gMonth,gMonthDay,gYear,gYearMonth,NOTATION,QName,string, andtime). It allows new data types to be constructed from these primitives by three mechanisms:
Twenty-five derived types are defined within the specification itself, and further derived types can be defined by users in their own schemas.
The mechanisms available for restricting data types include the ability to specify minimum and maximum values, regular expressions, constraints on the length of strings, and constraints on the number of digits in decimal values. XSD 1.1 again adds assertions, the ability to specify an arbitrary constraint by means of anXPath 2.0expression.
Complex types describe the permitted content of an element, including its element and text children and its attributes. A complex type definition consists of a set of attribute uses and a content model. Varieties of content model include:
A complex type can be derived from another complex type by restriction (disallowing some elements, attributes, or values that the base type permits) or by extension (allowing additional attributes and elements to appear). In XSD 1.1, a complex type may be constrained by assertions—XPath 2.0expressions evaluated against the content that must evaluate to true.
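As an illustration of these ideas, the following sketch uses the third-party Python library lxml to compile a small schema, containing a complex type and a simple type restricted by a facet, and to validate instance documents against it. The element names, facet values, and documents are invented for the example.

```python
from lxml import etree

# A small schema: a <product> element whose <price> is a decimal restricted to be >= 0.
xsd_source = b"""<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="product">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="name" type="xs:string"/>
        <xs:element name="price">
          <xs:simpleType>
            <xs:restriction base="xs:decimal">
              <xs:minInclusive value="0"/>
            </xs:restriction>
          </xs:simpleType>
        </xs:element>
      </xs:sequence>
      <xs:attribute name="id" type="xs:ID" use="required"/>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

schema = etree.XMLSchema(etree.fromstring(xsd_source))

valid_doc = etree.fromstring(b'<product id="p1"><name>Widget</name><price>9.95</price></product>')
invalid_doc = etree.fromstring(b'<product id="p2"><name>Widget</name><price>-1</price></product>')

print(schema.validate(valid_doc))    # True
print(schema.validate(invalid_doc))  # False: -1 violates the minInclusive facet
```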
After XML Schema-based validation, it is possible to express an XML document's structure and content in terms of thedata modelthat was implicit during validation. The XML Schema data model includes:
This collection of information is called the Post-Schema-Validation Infoset (PSVI). The PSVI gives a valid XML document its "type" and facilitates treating the document as an object, usingobject-oriented programming(OOP) paradigms.
The primary reason for defining an XML schema is to formally describe an XML document; however the resulting schema has a number of other uses that go beyond simple validation.
The schema can be used to generate code, referred to asXML Data Binding. This code allows contents of XML documents to be treated as objects within the programming environment.
The schema can be used to generate human-readable documentation of an XML file structure; this is especially useful where the authors have made use of the annotation elements. No formal standard exists for documentation generation, but a number of tools are available, such as theXs3pstylesheet, that will produce high-quality readable HTML and printed material.
Although XML Schema is successful in that it has been widely adopted and largely achieves what it set out to, it has been the subject of a great deal of severe criticism, perhaps more so than any other W3C Recommendation.
Good summaries of the criticisms are provided by James Clark,[6]Anders Møller and Michael Schwartzbach,[7]Rick Jelliffe[8]and David Webber.[9]
General problems:
Practical limitations of expressibility:
Technical problems:
XSD 1.1 became aW3C RecommendationinApril 2012, which means it is an approved W3C specification.
Significant new features in XSD 1.1 are:
Until the Proposed Recommendation draft, XSD 1.1 also proposed the addition of a new numeric data type, precisionDecimal. This proved controversial, and was therefore dropped from the specification at a late stage of development.
|
https://en.wikipedia.org/wiki/W3C_XML_Schema
|
Thescientific methodis anempiricalmethod for acquiringknowledgethat has been referred to while doingsciencesince at least the 17th century. Historically, it was developed through the centuries from the ancient and medieval world. The scientific method involves carefulobservationcoupled with rigorousskepticism, becausecognitive assumptionscan distort the interpretation of theobservation. Scientific inquiry includes creating a testablehypothesisthroughinductive reasoning, testing it through experiments and statistical analysis, and adjusting or discarding the hypothesis based on the results.[1][2][3]
Although procedures vary across fields, the underlying process is often similar. In more detail: the scientific method involves making conjectures (hypothetical explanations), predicting the logical consequences of the hypothesis, then carrying out experiments or empirical observations based on those predictions.[4] A hypothesis is a conjecture based on knowledge obtained while seeking answers to the question. Hypotheses can be very specific or broad but must be falsifiable, implying that it is possible to identify a possible outcome of an experiment or observation that conflicts with predictions deduced from the hypothesis; otherwise, the hypothesis cannot be meaningfully tested.[5]
While the scientific method is often presented as a fixed sequence of steps, it actually represents a set of general principles. Not all steps take place in everyscientific inquiry(nor to the same degree), and they are not always in the same order.[6][7]Numerous discoveries have not followed the textbook model of the scientific method and chance has played a role, for instance.[8][9][10]
The history of the scientific method considers changes in the methodology of scientific inquiry, not thehistory of scienceitself. The development of rules forscientific reasoninghas not been straightforward; the scientific method has been the subject of intense and recurring debate throughout the history of science, and eminent natural philosophers and scientists have argued for the primacy of various approaches to establishing scientific knowledge.
Different early expressions ofempiricismand the scientific method can be found throughout history, for instance with the ancientStoics,Aristotle,[11]Epicurus,[12]Alhazen,[A][a][B][i]Avicenna,Al-Biruni,[17][18]Roger Bacon[α], andWilliam of Ockham.[21]
In theScientific Revolutionof the 16th and 17th centuries, some of the most important developments were the furthering ofempiricismbyFrancis BaconandRobert Hooke,[22][23]therationalistapproach described byRené Descartes, andinductivism, brought to particular prominence byIsaac Newtonand those who followed him. Experiments were advocated byFrancis Baconand performed byGiambattista della Porta,[24]Johannes Kepler,[25][d]andGalileo Galilei.[β]There was particular development aided by theoretical works by the skepticFrancisco Sanches,[27]by idealists as well as empiricistsJohn Locke,George Berkeley, andDavid Hume.[e]C. S. Peirceformulated thehypothetico-deductive modelin the 20th century, and the model has undergone significant revision since.[30]
The term "scientific method" emerged in the 19th century, as a result of significant institutional development of science, and terminologies establishing clearboundariesbetween science and non-science, such as "scientist" and "pseudoscience".[31]Throughout the 1830s and 1850s, when Baconianism was popular, naturalists like William Whewell, John Herschel, and John Stuart Mill engaged in debates over "induction" and "facts," and were focused on how to generate knowledge.[31]In the late 19th and early 20th centuries, a debate overrealismvs.antirealismwas conducted as powerful scientific theories extended beyond the realm of the observable.[32]
The term "scientific method" came into popular use in the twentieth century;Dewey's 1910 book,How We Think, inspiredpopular guidelines.[33]It appeared in dictionaries and science textbooks, although there was little consensus on its meaning.[31]Although there was growth through the middle of the twentieth century,[f]by the 1960s and 1970s numerous influential philosophers of science such asThomas KuhnandPaul Feyerabendhad questioned the universality of the "scientific method," and largely replaced the notion of science as a homogeneous and universal method with that of it being a heterogeneous and local practice.[31]In particular,Paul Feyerabend, in the 1975 first edition of his bookAgainst Method, argued against there being any universal rules ofscience;[32]Karl Popper,[γ]and Gauch 2003,[6]disagreed with Feyerabend's claim.
Later stances include physicist Lee Smolin's 2013 essay "There Is No Scientific Method",[35] in which he espouses two ethical principles,[δ] and historian of science Daniel Thurs' chapter in the 2015 book Newton's Apple and Other Myths about Science, which concluded that the scientific method is a myth or, at best, an idealization.[36] As myths are beliefs,[37] they are subject to the narrative fallacy, as pointed out by Taleb.[38] Philosophers Robert Nola and Howard Sankey, in their 2007 book Theories of Scientific Method, said that debates over the scientific method continue, and argued that Feyerabend, despite the title of Against Method, accepted certain rules of method and attempted to justify those rules with a meta-methodology.[39] Staddon (2017) argues it is a mistake to try following rules in the absence of an algorithmic scientific method; in that case, "science is best understood through examples".[40][41] But algorithmic methods, such as disproof of existing theory by experiment, have been used since Alhacen (1027) and his Book of Optics,[a] and Galileo (1638) and his Two New Sciences,[26] and The Assayer,[42] which still stand as scientific method.
The scientific method is the process by whichscienceis carried out.[43]As in other areas of inquiry, science (through the scientific method) can build on previous knowledge, and unify understanding of its studied topics over time.[g]Historically, the development of the scientific method was critical to theScientific Revolution.[45]
The overall process involves making conjectures (hypotheses), predicting their logical consequences, then carrying out experiments based on those predictions to determine whether the original conjecture was correct.[4]However, there are difficulties in a formulaic statement of method. Though the scientific method is often presented as a fixed sequence of steps, these actions are more accurately general principles.[46]Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always done in the same order.
There are different ways of outlining the basic method used for scientific inquiry. Thescientific communityandphilosophers of sciencegenerally agree on the following classification of method components. These methodological elements and organization of procedures tend to be more characteristic ofexperimental sciencesthansocial sciences. Nonetheless, the cycle of formulating hypotheses, testing and analyzing the results, and formulating new hypotheses, will resemble the cycle described below.The scientific method is an iterative, cyclical process through which information is continually revised.[47][48]It is generally recognized to develop advances in knowledge through the following elements, in varying combinations or contributions:[49][50]
Each element of the scientific method is subject topeer reviewfor possible mistakes. These activities do not describe all that scientists do butapply mostly to experimental sciences(e.g., physics, chemistry, biology, and psychology). The elements above are often taught inthe educational systemas "the scientific method".[C]
The scientific method is not a single recipe: it requires intelligence, imagination, and creativity.[51]In this sense, it is not a mindless set of standards and procedures to follow but is rather anongoing cycle, constantly developing more useful, accurate, and comprehensive models and methods. For example, when Einstein developed the Special and General Theories of Relativity, he did not in any way refute or discount Newton'sPrincipia. On the contrary, if the astronomically massive, the feather-light, and the extremely fast are removed from Einstein's theories – all phenomena Newton could not have observed – Newton's equations are what remain. Einstein's theories are expansions and refinements of Newton's theories and, thus, increase confidence in Newton's work.
An iterative,[48]pragmatic[16]scheme of the four points above is sometimes offered as a guideline for proceeding:[52]
The iterative cycle inherent in this step-by-step method goes from point 3 to 6 and back to 3 again.
While this schema outlines a typical hypothesis/testing method,[53]many philosophers, historians, and sociologists of science, includingPaul Feyerabend,[h]claim that such descriptions of scientific method have little relation to the ways that science is actually practiced.
The basic elements of the scientific method are illustrated by the following example (which occurred from 1944 to 1953) from the discovery of the structure of DNA; the DNA-related passages are interleaved with the general discussion below.
In 1950, it was known thatgenetic inheritancehad a mathematical description, starting with the studies ofGregor Mendel, and that DNA contained genetic information (Oswald Avery'stransforming principle).[55]But the mechanism of storing genetic information (i.e., genes) in DNA was unclear. Researchers inBragg'slaboratory atCambridge UniversitymadeX-raydiffractionpictures of variousmolecules, starting withcrystalsofsalt, and proceeding to more complicated substances. Using clues painstakingly assembled over decades, beginning with its chemical composition, it was determined that it should be possible to characterize the physical structure of DNA, and the X-ray images would be the vehicle.[56]
The scientific method depends upon increasingly sophisticated characterizations of the subjects of investigation. (The subjects can also be called unsolved problems or the unknowns.)[C] For example, Benjamin Franklin conjectured, correctly, that St. Elmo's fire was electrical in nature, but it has taken a long series of experiments and theoretical changes to establish this. While seeking the pertinent properties of the subjects, careful thought may also entail some definitions and observations; these observations often demand careful measurements and/or counting, which can take the form of expansive empirical research.
Ascientific questioncan refer to the explanation of a specificobservation,[C]as in "Why is the sky blue?" but can also be open-ended, as in "How can Idesign a drugto cure this particular disease?" This stage frequently involves finding and evaluating evidence from previous experiments, personal scientific observations or assertions, as well as the work of other scientists. If the answer is already known, a different question that builds on the evidence can be posed. When applying the scientific method to research, determining a good question can be very difficult and it will affect the outcome of the investigation.[57]
The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference betweenpseudo-sciences, such as alchemy, and science, such as chemistry or biology. Scientific measurements are usually tabulated, graphed, or mapped, and statistical manipulations, such ascorrelationandregression, performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as stars or human populations. The measurements often require specializedscientific instrumentssuch asthermometers,spectroscopes,particle accelerators, orvoltmeters, and the progress of a scientific field is usually intimately tied to their invention and improvement.
I am not accustomed to saying anything with certainty after only one or two observations.
The scientific definition of a term sometimes differs substantially from itsnatural languageusage. For example,massandweightoverlap in meaning in common discourse, but have distinct meanings inmechanics. Scientific quantities are often characterized by theirunits of measurewhich can later be described in terms of conventional physical units when communicating the work.
New theories are sometimes developed after realizing certain terms have not previously been sufficiently clearly defined. For example,Albert Einstein's first paper onrelativitybegins by definingsimultaneityand the means for determininglength. These ideas were skipped over byIsaac Newtonwith, "I do not definetime, space, place andmotion, as being well known to all." Einstein's paper then demonstrates that they (viz., absolute time and length independent of motion) were approximations.Francis Crickcautions us that when characterizing a subject, however, it can be premature to define something when it remains ill-understood.[59]In Crick's study ofconsciousness, he actually found it easier to studyawarenessin thevisual system, rather than to studyfree will, for example. His cautionary example was the gene; the gene was much more poorly understood before Watson and Crick's pioneering discovery of the structure of DNA; it would have been counterproductive to spend much time on the definition of the gene, before them.
Linus Pauling proposed that DNA might be a triple helix.[60][61] This hypothesis was also considered by Francis Crick and James D. Watson but discarded. When Watson and Crick learned of Pauling's hypothesis, they understood from existing data that Pauling was wrong,[62] and that Pauling would soon admit his difficulties with that structure.
Ahypothesisis a suggested explanation of a phenomenon, or alternately a reasoned proposal suggesting a possible correlation between or among a set of phenomena. Normally, hypotheses have the form of amathematical model. Sometimes, but not always, they can also be formulated asexistential statements, stating that some particular instance of the phenomenon being studied has some characteristic and causal explanations, which have the general form ofuniversal statements, stating that every instance of the phenomenon has a particular characteristic.
Scientists are free to use whatever resources they have – their own creativity, ideas from other fields,inductive reasoning,Bayesian inference, and so on – to imagine possible explanations for a phenomenon under study.Albert Einstein once observed that "there is no logical bridge between phenomena and their theoretical principles."[63][i]Charles Sanders Peirce, borrowing a page fromAristotle(Prior Analytics,2.25)[65]described the incipient stages ofinquiry, instigated by the "irritation of doubt" to venture a plausible guess, asabductive reasoning.[66]: II, p.290The history of science is filled with stories of scientists claiming a "flash of inspiration", or a hunch, which then motivated them to look for evidence to support or refute their idea.Michael Polanyimade such creativity the centerpiece of his discussion of methodology.
William Glenobserves that[67]
the success of a hypothesis, or its service to science, lies not simply in its perceived "truth", or power to displace, subsume or reduce a predecessor idea, but perhaps more in its ability to stimulate the research that will illuminate ... bald suppositions and areas of vagueness.
In general, scientists tend to look for theories that are "elegant" or "beautiful". Scientists often use these terms to refer to a theory that follows the known facts but is nevertheless relatively simple and easy to handle. Occam's Razor serves as a rule of thumb for choosing the most desirable amongst a group of equally explanatory hypotheses.
To minimize theconfirmation biasthat results from entertaining a single hypothesis,strong inferenceemphasizes the need for entertaining multiple alternative hypotheses,[68]and avoiding artifacts.[69]
James D. Watson,Francis Crick, and others hypothesized that DNA had a helical structure. This implied that DNA's X-ray diffraction pattern would be 'x shaped'.[70][71]This prediction followed from the work of Cochran, Crick and Vand[72](and independently by Stokes). The Cochran-Crick-Vand-Stokes theorem provided a mathematical explanation for the empirical observation that diffraction from helical structures produces x-shaped patterns.
In their first paper, Watson and Crick also noted that thedouble helixstructure they proposed provided a simple mechanism forDNA replication, writing, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material".[73]
Any useful hypothesis will enablepredictions, byreasoningincludingdeductive reasoning.[j]It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction can also be statistical and deal only with probabilities.
It is essential that the outcome of testing such a prediction be currently unknown. Only in this case does a successful outcome increase the probability that the hypothesis is true. If the outcome is already known, it is called a consequence and should have already been considered whileformulating the hypothesis.
If the predictions are not accessible by observation or experience, the hypothesis is not yettestableand so will remain to that extent unscientific in a strict sense. A new technology or theory might make the necessary experiments feasible. For example, while a hypothesis on the existence of other intelligent species may be convincing with scientifically based speculation, no known experiment can test this hypothesis. Therefore, science itself can have little to say about the possibility. In the future, a new technique may allow for an experimental test and the speculation would then become part of accepted science.
For example, Einstein's theory ofgeneral relativitymakes several specific predictions about the observable structure ofspacetime, such as thatlightbends in agravitational field, and that the amount of bending depends in a precise way on the strength of that gravitational field.Arthur Eddington'sobservations made during a 1919 solar eclipsesupported General Relativity rather than Newtoniangravitation.[74]
Watson and Crick showed an initial (and incorrect) proposal for the structure of DNA to a team fromKing's College London–Rosalind Franklin,Maurice Wilkins, andRaymond Gosling. Franklin immediately spotted the flaws which concerned the water content. Later Watson saw Franklin'sphoto 51, a detailed X-ray diffraction image, which showed an X-shape[75][76]and was able to confirm the structure was helical.[77][78][k]
Once predictions are made, they can be sought by experiments. If the test results contradict the predictions, the hypotheses which entailed them are called into question and become less tenable. Sometimes the experiments are conducted incorrectly or are not very well designed when compared to acrucial experiment. If the experimental results confirm the predictions, then the hypotheses are considered more likely to be correct, but might still be wrong and continue to be subject tofurther testing.Theexperimental controlis a technique for dealing with observational error. This technique uses the contrast between multiple samples, or observations, or populations, under differing conditions, to see what varies or what remains the same. We vary the conditions for the acts of measurement, to help isolate what has changed.Mill's canonscan then help us figure out what the important factor is.[82]Factor analysisis one technique for discovering the important factor in an effect.
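A minimal sketch of the idea of contrast under differing conditions, using simulated data (all numbers invented) and a standard two-sample test from SciPy; it is an illustration of the control concept, not a substitute for proper experimental design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated measurements: the control and treatment groups are identical
# except for the condition under study (here, a shift of 0.5 in the mean).
control = rng.normal(loc=10.0, scale=1.0, size=50)
treatment = rng.normal(loc=10.5, scale=1.0, size=50)

# Contrast the two samples: does the measured outcome vary with the condition?
result = stats.ttest_ind(treatment, control)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")  # a small p suggests the condition matters
```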
Depending on the predictions, the experiments can have different shapes. It could be a classical experiment in a laboratory setting, adouble-blindstudy or an archaeologicalexcavation. Even taking a plane fromNew YorktoParisis an experiment that tests theaerodynamicalhypotheses used for constructing the plane.
These institutions thereby reduce the research function to a cost/benefit,[83]which is expressed as money, and the time and attention of the researchers to be expended,[83]in exchange for a report to their constituents.[84]Current large instruments, such as CERN'sLarge Hadron Collider(LHC),[85]orLIGO,[86]or theNational Ignition Facility(NIF),[87]or theInternational Space Station(ISS),[88]or theJames Webb Space Telescope(JWST),[89][90]entail expected costs of billions of dollars, and timeframes extending over decades. These kinds of institutions affect public policy, on a national or even international basis, and the researchers would require shared access to such machines and theiradjunct infrastructure.[ε][91]
Scientists assume an attitude of openness and accountability on the part of those experimenting. Detailed record-keeping is essential, to aid in recording and reporting on the experimental results, and supports the effectiveness and integrity of the procedure. They will also assist in reproducing the experimental results, likely by others. Traces of this approach can be seen in the work ofHipparchus(190–120 BCE), when determining a value for the precession of the Earth, whilecontrolled experimentscan be seen in the works ofal-Battani(853–929 CE)[92]andAlhazen(965–1039 CE).[93][l][b]
Watson and Crick then produced their model, using this information along with the previously known information about DNA's composition, especially Chargaff's rules of base pairing.[81]After considerable fruitless experimentation, being discouraged by their superior from continuing, and numerous false starts,[95][96][97]Watson and Crick were able to infer the essential structure ofDNAby concretemodelingof the physical shapesof thenucleotideswhich comprise it.[81][98][99]They were guided by the bond lengths which had been deduced byLinus Paulingand byRosalind Franklin's X-ray diffraction images.
The scientific method is iterative. At any stage, it is possible to refine itsaccuracy and precision, so that some consideration will lead the scientist to repeat an earlier part of the process. Failure to develop an interesting hypothesis may lead a scientist to re-define the subject under consideration. Failure of a hypothesis to produce interesting and testable predictions may lead to reconsideration of the hypothesis or of the definition of the subject. Failure of an experiment to produce interesting results may lead a scientist to reconsider the experimental method, the hypothesis, or the definition of the subject.
This manner of iteration can span decades and sometimes centuries.Published paperscan be built upon. For example: By 1027,Alhazen, based on his measurements of therefractionof light, was able to deduce thatouter spacewas less dense thanair, that is: "the body of the heavens is rarer than the body of air".[14]In 1079Ibn Mu'adh'sTreatise On Twilightwas able to infer that Earth's atmosphere was 50 miles thick, based onatmospheric refractionof the sun's rays.[m]
This is why the scientific method is often represented as circular – new information leads to new characterisations, and the cycle of science continues. Measurements collectedcan be archived, passed onwards and used by others.Other scientists may start their own research andenter the processat any stage. They might adopt the characterization and formulate their own hypothesis, or they might adopt the hypothesis and deduce their own predictions. Often the experiment is not done by the person who made the prediction, and the characterization is based on experiments done by someone else. Published results of experiments can also serve as a hypothesis predicting their own reproducibility.
Science is a social enterprise, and scientific work tends to be accepted by the scientific community when it has been confirmed. Crucially, experimental and theoretical results must be reproduced by others within the scientific community. Researchers have given their lives for this vision;Georg Wilhelm Richmannwas killed byball lightning(1753) when attempting to replicate the 1752 kite-flying experiment ofBenjamin Franklin.[101]
If an experiment cannot berepeatedto produce the same results, this implies that the original results might have been in error. As a result, it is common for a single experiment to be performed multiple times, especially when there are uncontrolled variables or other indications ofexperimental error. For significant or surprising results, other scientists may also attempt to replicate the results for themselves, especially if those results would be important to their own work.[102]Replication has become a contentious issue in social and biomedical science where treatments are administered to groups of individuals. Typically anexperimental groupgets the treatment, such as a drug, and thecontrol groupgets a placebo.John Ioannidisin 2005 pointed out that the method being used has led to many findings that cannot be replicated.[103]
The process ofpeer reviewinvolves the evaluation of the experiment by experts, who typically give their opinions anonymously. Some journals request that the experimenter provide lists of possible peer reviewers, especially if the field is highly specialized. Peer review does not certify the correctness of the results, only that, in the opinion of the reviewer, the experiments themselves were sound (based on the description supplied by the experimenter). If the work passes peer review, which occasionally may require new experiments requested by the reviewers, it will be published in a peer-reviewedscientific journal. The specific journal that publishes the results indicates the perceived quality of the work.[n]
Scientists typically are careful in recording their data, a requirement promoted byLudwik Fleck(1896–1961) and others.[104]Though not typically required, they might be requested tosupply this datato other scientists who wish to replicate their original results (or parts of their original results), extending to the sharing of any experimental samples that may be difficult to obtain.[105]To protect against bad science and fraudulent data, government research-granting agencies such as theNational Science Foundation, and science journals, includingNatureandScience, have a policy that researchers must archive their data and methods so that other researchers can test the data and methods and build on the research that has gone before.Scientific data archivingcan be done at several national archives in the U.S. or theWorld Data Center.
The unfettered principles of science are to strive for accuracy and the creed of honesty, with openness already being a matter of degree. Openness is restricted by the general rigour of scepticism, and, of course, by the matter of what is not science.
Smolin, in 2013, espoused ethical principles rather than giving any potentially limited definition of the rules of inquiry.[δ]His ideas stand in the context of the scale of data–driven andbig science, which has seen increased importance of honesty and consequentlyreproducibility. His thought is that science is a community effort by those who have accreditation and are working within thecommunity. He also warns against overzealous parsimony.
Popper previously took ethical principles even further, going as far as to ascribe value to theories only if they were falsifiable. Popper used the falsifiability criterion to demarcate a scientific theory from a theory like astrology: both "explain" observations, but the scientific theory takes the risk of making predictions that decide whether it is right or wrong:[106][107]
"Those among us who are unwilling to expose their ideas to the hazard of refutation do not take part in the game of science."
Science has limits. Those limits are usually deemed to be answers to questions that aren't in science's domain, such as faith. Science has other limits as well, as it seeks to make true statements about reality.[108] The nature of truth and the discussion of how scientific statements relate to reality are best left to the article on the philosophy of science. More immediately topical limitations show themselves in the observation of reality.
It is a natural limitation of scientific inquiry that there is no pure observation: theory is required to interpret empirical data, and observation is therefore influenced by the observer's conceptual framework.[110] As science is an unfinished project, this does lead to difficulties, namely that false conclusions are drawn because of limited information.
An example here is the pair of observations by Kepler and Brahe, used by Hanson to illustrate the concept. Despite observing the same sunrise, the two scientists came to different conclusions; their intersubjectivity led to differing interpretations. Johannes Kepler used Tycho Brahe's method of observation, which was to project the image of the Sun on a piece of paper through a pinhole aperture, instead of looking directly at the Sun. He disagreed with Brahe's conclusion that total eclipses of the Sun were impossible because, contrary to Brahe, he knew that there were historical accounts of total eclipses. Instead, he deduced that the images taken would become more accurate the larger the aperture; this fact is now fundamental for optical system design.[d] Another historic example here is the discovery of Neptune, credited as being found via mathematics because previous observers didn't know what they were looking at.[111]
Scientific endeavour can be characterised as the pursuit of truths about the natural world or as the elimination of doubt about the same. The former is the direct construction of explanations from empirical data and logic, the latter the reduction of potential explanations.[ζ]It was establishedabovehow the interpretation of empirical data is theory-laden, so neither approach is trivial.
The ubiquitous element in the scientific method isempiricism, which holds that knowledge is created by a process involving observation; scientific theories generalize observations. This is in opposition to stringent forms ofrationalism, which holds that knowledge is created by the human intellect; later clarified by Popper to be built on prior theory.[113]The scientific method embodies the position that reason alone cannot solve a particular scientific problem; it unequivocally refutes claims thatrevelation, political or religiousdogma, appeals to tradition, commonly held beliefs, common sense, or currently held theories pose the only possible means of demonstrating truth.[16][80]
In 1877,[49]C. S. Peircecharacterized inquiry in general not as the pursuit of truthper sebut as the struggle to move from irritating, inhibitory doubts born of surprises, disagreements, and the like, and to reach a secure belief, the belief being that on which one is prepared to act. Hispragmaticviews framed scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal or "hyperbolic doubt", which he held to be fruitless.[o]This "hyperbolic doubt" Peirce argues against here is of course just another name forCartesian doubtassociated withRené Descartes. It is a methodological route to certain knowledge by identifying what can't be doubted.
A strong formulation of the scientific method is not always aligned with a form of empiricism in which the empirical data is put forward in the form of experience or other abstracted forms of knowledge; in current scientific practice, the use of scientific modelling and reliance on abstract typologies and theories is normally accepted. In 2010, Hawking suggested that physics' models of reality should simply be accepted where they prove to make useful predictions. He calls the concept model-dependent realism.[116]
Rationality embodies the essence of sound reasoning, a cornerstone not only in philosophical discourse but also in the realms of science and practical decision-making. According to the traditional viewpoint, rationality serves a dual purpose: it governs beliefs, ensuring they align with logical principles, and it steers actions, directing them towards coherent and beneficial outcomes. This understanding underscores the pivotal role of reason in shaping our understanding of the world and in informing our choices and behaviours.[117]The following section will first explore beliefs and biases, and then get to the rational reasoning most associated with the sciences.
Scientific methodology often directs thathypothesesbe tested incontrolledconditions wherever possible. This is frequently possible in certain areas, such as in the biological sciences, and more difficult in other areas, such as in astronomy.
The practice of experimental control and reproducibility can have the effect of diminishing the potentially harmful effects of circumstance, and to a degree, personal bias. For example, pre-existing beliefs can alter the interpretation of results, as inconfirmation bias; this is aheuristicthat leads a person with a particular belief to see things as reinforcing their belief, even if another observer might disagree (in other words, people tend to observe what they expect to observe).[37]
[T]he action of thought is excited by the irritation of doubt, and ceases when belief is attained.
A historical example is the belief that the legs of agallopinghorse are splayed at the point when none of the horse's legs touch the ground, to the point of this image being included in paintings by its supporters. However, the first stop-action pictures of a horse's gallop byEadweard Muybridgeshowed this to be false, and that the legs are instead gathered together.[118]
Another important human bias that plays a role is a preference for new, surprising statements (seeAppeal to novelty), which can result in a search for evidence that the new is true.[119]Poorly attested beliefs can be believed and acted upon via a less rigorous heuristic.[120]
Goldhaber and Nieto published in 2010 the observation that if theoretical structures with "many closely neighboring subjects are described by connecting theoretical concepts, then the theoretical structure acquires a robustness which makes it increasingly hard – though certainly never impossible – to overturn".[121]When a narrative is constructed its elements become easier to believe.[122][38]
Fleck (1979), p. 27 notes "Words and ideas are originally phonetic and mental equivalences of the experiences coinciding with them. ... Such proto-ideas are at first always too broad and insufficiently specialized. ... Once a structurally complete and closed system of opinions consisting of many details and relations has been formed, it offers enduring resistance to anything that contradicts it". Sometimes, these relations have their elements assumeda priori, or contain some other logical or methodological flaw in the process that ultimately produced them.Donald M. MacKayhas analyzed these elements in terms of limits to the accuracy of measurement and has related them to instrumental elements in a category of measurement.[η]
The idea of there being two opposed justifications for truth has shown up throughout the history of scientific method as analysis versus synthesis, non-ampliative versus ampliative reasoning, or even confirmation versus verification. (And there are other kinds of reasoning.) One uses what is observed to build towards fundamental truths; the other derives more specific principles from those fundamental truths.[123]
Deductive reasoning is the building of knowledge based on what has been shown to be true before. It requires the assumption of fact established prior, and, given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. Inductive reasoning builds knowledge not from established truth, but from a body of observations. It requires stringent scepticism regarding observed phenomena, because cognitive assumptions can distort the interpretation of initial perceptions.[124]
An example for how inductive and deductive reasoning works can be found in thehistory of gravitational theory.[p]It took thousands of years of measurements, from theChaldean,Indian,Persian,Greek,Arabic, andEuropeanastronomers, to fully record the motion of planetEarth.[q]Kepler(and others) were then able to build their early theories bygeneralizing the collected data inductively, andNewtonwas able to unify prior theory and measurements into the consequences of hislaws of motionin 1727.[r]
Another common example of inductive reasoning is the observation of a counterexample to current theory inducing the need for new ideas. Le Verrier in 1859 pointed out problems with the perihelion of Mercury that showed Newton's theory to be at least incomplete. The observed difference between Newtonian theory and observation in Mercury's precession was one of the things that occurred to Einstein as a possible early test of his theory of relativity. His relativistic calculations matched observation much more closely than Newtonian theory did.[s] Though today's Standard Model of physics suggests that we still do not know at least some of the concepts surrounding Einstein's theory, it holds to this day and is being built on deductively.
A theory being assumed as true and subsequently built on is a common example of deductive reasoning. Theory building on Einstein's achievement can simply state that 'we have shown that this case fulfils the conditions under which general/special relativity applies, therefore its conclusions apply also'. If it was properly shown that 'this case' fulfils the conditions, the conclusion follows. An extension of this is the assumption of a solution to an open problem. This weaker kind of deductive reasoning will get used in current research, when multiple scientists or even teams of researchers are all gradually solving specific cases in working towards proving a larger theory. This often sees hypotheses being revised again and again as new proof emerges.
This way of presenting inductive and deductive reasoning shows part of why science is often presented as being a cycle of iteration. It is important to keep in mind that that cycle's foundations lie in reasoning, and not wholly in the following of procedure.
Claims of scientific truth can be opposed in three ways: by falsifying them, by questioning their certainty, or by asserting the claim itself to be incoherent.[t]Incoherence, here, means internal errors in logic, like stating opposites to be true; falsification is what Popper would have called the honest work of conjecture and refutation[34]— certainty, perhaps, is where difficulties in telling truths from non-truths arise most easily.
Measurements in scientific work are usually accompanied by estimates of theiruncertainty.[83]The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due todata collectionlimitations. Or counts may represent a sample of desired quantities, with an uncertainty that depends upon thesampling methodused and the number of samples taken.
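For example, the uncertainty of a quantity estimated from repeated measurements is commonly summarized by the standard error of the mean. A minimal Python sketch, with invented measurement values:

```python
import statistics

# Five repeated measurements of the same quantity (illustrative values)
measurements = [9.81, 9.79, 9.83, 9.80, 9.82]

mean = statistics.mean(measurements)
stdev = statistics.stdev(measurements)        # sample standard deviation
std_error = stdev / len(measurements) ** 0.5  # standard error of the mean

print(f"{mean:.3f} ± {std_error:.3f}")  # the quoted value with its estimated uncertainty
```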
In the case of measurement imprecision, there will simply be a 'probable deviation' expressing itself in a study's conclusions. Statistics are different. Inductive statistical generalisation will take sample data and extrapolate more general conclusions, which have to be justified, and scrutinised. It can even be said that statistical models are only ever useful, but never a complete representation of circumstances.
In statistical analysis, expected and unexpected bias is a large factor.[129]Research questions, the collection of data, or the interpretation of results, all are subject to larger amounts of scrutiny than in comfortably logical environments. Statistical models go through aprocess for validation, for which one could even say that awareness of potential biases is more important than the hard logic; errors in logic are easier to find inpeer review, after all.[u]More general, claims to rational knowledge, and especially statistics, have to be put into their appropriate context.[124]Simple statements such as '9 out of 10 doctors recommend' are therefore of unknown quality because they do not justify their methodology.
Lack of familiarity with statistical methodologies can result in erroneous conclusions. Forgoing the easy example,[v] the interaction of multiple probabilities is an area where, for example, medical professionals[131] have shown a lack of proper understanding. Bayes' theorem is the mathematical principle describing how standing probabilities are adjusted given new information. The boy or girl paradox is a common example. In knowledge representation, Bayesian estimation of mutual information between random variables is a way to measure dependence, independence, or interdependence of the information under scrutiny.[132]
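A short sketch of Bayes' theorem applied to a diagnostic test illustrates how standing probabilities are adjusted by new information; the prevalence, sensitivity, and specificity values below are invented for the example.

```python
def posterior(prior, sensitivity, specificity):
    """P(condition | positive test) by Bayes' theorem."""
    p_pos_given_cond = sensitivity
    p_pos_given_no_cond = 1.0 - specificity
    # Total probability of a positive test across both possibilities
    p_pos = p_pos_given_cond * prior + p_pos_given_no_cond * (1.0 - prior)
    return p_pos_given_cond * prior / p_pos

# Invented numbers: 1% prevalence, 95% sensitivity, 95% specificity.
print(posterior(prior=0.01, sensitivity=0.95, specificity=0.95))  # ≈ 0.16, not 0.95
```

The result, roughly a one-in-six chance of actually having the condition after a positive test, is the kind of counter-intuitive outcome that the surrounding text refers to.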
Beyond the survey methodology commonly associated with field research, the concept, together with probabilistic reasoning, is used to advance fields of science where research objects have no definitive states of being, for example in statistical mechanics.
Thehypothetico-deductive model, or hypothesis-testing method, or "traditional" scientific method is, as the name implies, based on the formation ofhypothesesand their testing viadeductive reasoning. A hypothesis stating implications, often calledpredictions, that are falsifiable via experiment is of central importance here, as not the hypothesis but its implications are what is tested.[133]Basically, scientists will look at the hypothetical consequences a (potential)theoryholds and prove or disprove those instead of the theory itself. If anexperimentaltest of those hypothetical consequences shows them to be false, it follows logically that the part of the theory that implied them was false also. If they show as true however, it does not prove the theory definitively.
The logic of this testing is what affords this method of inquiry to be reasoned deductively. The formulated hypothesis is assumed to be 'true', and from that 'true' statement implications are inferred. If the subsequent tests show the implications to be false, it follows that the hypothesis was false also. If the tests show the implications to be true, new insights will be gained. It is important to be aware that a positive test here will at best strongly imply but not definitively prove the tested hypothesis, since the deductive inference (A ⇒ B) does not run in reverse; only the contrapositive (¬B ⇒ ¬A) is valid logic. Positive outcomes however, as Hempel put it, provide "at least some support, some corroboration or confirmation for it".[134] This is why Popper insisted that fielded hypotheses be falsifiable, as successful tests imply very little otherwise. As Gillies put it, "successful theories are those that survive elimination through falsification".[133]
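The two inference patterns involved can be written out explicitly; this is a standard logical presentation, not taken from the cited sources.

```latex
% Valid (modus tollens): a failed prediction refutes the hypothesis
(H \Rightarrow O) \land \lnot O \;\vdash\; \lnot H

% Invalid (affirming the consequent): a successful prediction does not prove the hypothesis
(H \Rightarrow O) \land O \;\nvdash\; H
```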
Deductive reasoning in this mode of inquiry will sometimes be replaced by abductive reasoning, the search for the most plausible explanation via logical inference. This happens, for example, in biology, where general laws are few,[133] since valid deductions rely on solid presuppositions.[124]
Theinductivist approachto deriving scientific truth first rose to prominence withFrancis Baconand particularly withIsaac Newtonand those who followed him.[135]After the establishment of the HD-method, it was often put aside as something of a "fishing expedition" though.[133]It is still valid to some degree, but today's inductive method is often far removed from the historic approach—the scale of the data collected lending new effectiveness to the method. It is most-associated with data-mining projects or large-scale observation projects. In both these cases, it is often not at all clear what the results of proposed experiments will be, and thus knowledge will arise after the collection of data through inductive reasoning.[r]
Where the traditional method of inquiry does both, the inductive approach usually formulates only aresearch question, not a hypothesis. Following the initial question instead, a suitable "high-throughput method" of data-collection is determined, the resulting data processed and 'cleaned up', and conclusions drawn after. "This shift in focus elevates the data to the supreme role of revealing novel insights by themselves".[133]
The advantage the inductive method has over methods formulating a hypothesis is that it is essentially free of "a researcher's preconceived notions" regarding their subject. On the other hand, inductive reasoning is always attached to a measure of certainty, as all inductively reasoned conclusions are.[133] This measure of certainty can reach quite high degrees, though, for example in the determination of large primes, which are used in encryption software.[136]
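Probabilistic primality testing makes the point concrete: a Miller–Rabin test, sketched below in Python, never proves primality deductively, but each passed round raises the degree of certainty. The number of rounds is an arbitrary choice for the example.

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller–Rabin test: False means n is certainly composite; True means probably prime.
    Each passed round reduces the chance of a wrong 'probably prime' answer by at least 4x."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True  # probably prime: inductive certainty, not deductive proof

print(is_probable_prime(2**61 - 1))  # True: a known Mersenne prime
```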
Mathematical modelling, or allochthonous reasoning, typically consists of formulating a hypothesis and then building mathematical constructs that can be tested in place of conducting physical laboratory experiments. This approach has two main factors: simplification/abstraction, and a set of correspondence rules. The correspondence rules lay out how the constructed model relates back to reality, that is, how truth is derived; the simplifying steps taken in the abstraction of the given system reduce factors that do not bear relevance and thereby reduce unexpected errors.[133] These steps can also help the researcher understand the important factors of the system, and how far parsimony can be taken before the system becomes more and more unchangeable and thereby stable. Parsimony and related principles are further explored below.
Once this translation into mathematics is complete, the resulting model, in place of the corresponding system, can be analysed through purely mathematical and computational means. The results of this analysis are of course also purely mathematical in nature and get translated back to the system as it exists in reality via the previously determined correspondence rules, with iteration following review and interpretation of the findings. The way such models are reasoned will often be mathematically deductive, but it does not have to be. An example here is Monte Carlo simulation, which generates empirical data "arbitrarily" and, while it may not be able to reveal universal principles, can nevertheless be useful.[133]
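A classic minimal example is the Monte Carlo estimation of π: points are generated "arbitrarily" in the unit square, and the correspondence rule (the fraction of points inside the quarter circle approximates its area) translates the purely computational result back into a statement about the modelled quantity. The sample count below is arbitrary.

```python
import random

def estimate_pi(samples: int = 1_000_000) -> float:
    """Monte Carlo estimate of pi: fraction of random points in the unit square
    that fall inside the quarter circle, times 4."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi())  # ≈ 3.14, with accuracy improving (slowly) as samples grow
```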
Scientific inquiry generally aims to obtainknowledgein the form oftestable explanations[137][79]that scientists can use topredictthe results of future experiments. This allows scientists to gain a better understanding of the topic under study, and later to use that understanding to intervene in its causal mechanisms (such as to cure disease). The better an explanation is at making predictions, the more useful it frequently can be, and the more likely it will continue to explain a body of evidence better than its alternatives. The most successful explanations – those that explain and make accurate predictions in a wide range of circumstances – are often calledscientific theories.[C]
Most experimental results do not produce large changes in human understanding; improvements in theoretical scientific understanding typically result from a gradual process of development over time, sometimes across different domains of science.[138]Scientific models vary in the extent to which they have been experimentally tested and for how long, and in their acceptance in the scientific community. In general, explanations become accepted over time as evidence accumulates on a given topic, and the explanation in question proves more powerful than its alternatives at explaining the evidence. Often subsequent researchers re-formulate the explanations over time, or combined explanations to produce new explanations.
Scientific knowledge is closely tied toempirical findingsand can remain subject tofalsificationif new experimental observations are incompatible with what is found. That is, no theory can ever be considered final since new problematic evidence might be discovered. If such evidence is found, a new theory may be proposed, or (more commonly) it is found that modifications to the previous theory are sufficient to explain the new evidence. The strength of a theory relates to how long it has persisted without major alteration to its core principles.
Theories can also become subsumed by other theories. For example, Newton's laws explained thousands of years of scientific observations of the planetsalmost perfectly. However, these laws were then determined to be special cases of a more general theory (relativity), which explained both the (previously unexplained) exceptions to Newton's laws and predicted and explained other observations such as the deflection oflightbygravity. Thus, in certain cases independent, unconnected, scientific observations can be connected, unified by principles of increasing explanatory power.[139][121]
Since new theories might be more comprehensive than what preceded them, and thus be able to explain more than previous ones, successor theories might be able to meet a higher standard by explaining a larger body of observations than their predecessors.[139]For example, the theory ofevolutionexplains thediversity of life on Earth, how species adapt to their environments, and many otherpatternsobserved in the natural world;[140][141]its most recent major modification was unification withgeneticsto form themodern evolutionary synthesis. In subsequent modifications, it has also subsumed aspects of many other fields such asbiochemistryandmolecular biology.
During the course of history, one theory has succeeded another, and some have suggested further work while others have seemed content just to explain the phenomena. The reasons why one theory has replaced another are not always obvious or simple. The philosophy of science includes the question: What criteria are satisfied by a 'good' theory? This question has a long history, and many scientists, as well as philosophers, have considered it. The objective is to be able to choose one theory as preferable to another without introducingcognitive bias.[142]Though different thinkers emphasize different aspects,[ι]a good theory:
In trying to look for such theories, scientists will, given a lack of guidance by empirical evidence, try to adhere to:
The goal here is to make the choice between theories less arbitrary. Nonetheless, these criteria contain subjective elements and should be considered heuristics rather than definitive rules.[κ]Also, criteria such as these do not necessarily decide between alternative theories. QuotingBird:[148]
"[Such criteria] cannot determine scientific choice. First, which features of a theory satisfy these criteria may be disputable (e.g.does simplicity concern the ontological commitments of a theory or its mathematical form?). Secondly, these criteria are imprecise, and so there is room for disagreement about the degree to which they hold. Thirdly, there can be disagreement about how they are to be weighted relative to one another, especially when they conflict."
It also is debatable whether existing scientific theories satisfy all these criteria, which may represent goals not yet achieved. For example, explanatory power over all existing observations is satisfied by no one theory at the moment.[149][150]
The desiderata of a "good" theory have been debated for centuries, going back perhaps even earlier thanOccam's razor,[w]which is often taken as an attribute of a good theory. Science tries to be simple. When the gathered data supports multiple explanations, the simplest explanation of the phenomena, or the simplest formulation of a theory, is recommended by the principle of parsimony.[151]Scientists go as far as to call simple proofs of complex statementsbeautiful.
We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.
The concept of parsimony should not be held to imply complete frugality in the pursuit of scientific truth. The general process starts at the opposite end, with a vast number of potential explanations and general disorder. An example can be seen inPaul Krugman's process; he makes it explicit that one should "dare to be silly". He writes that in his work on new theories of international trade hereviewed prior workwith an open frame of mind and broadened his initial viewpoint even in unlikely directions. Once he had a sufficient body of ideas, he would try to simplify and thus find what worked among what did not. Specific to Krugman here was to "question the question". He recognised that prior work had applied erroneous models to already present evidence, commenting that "intelligent commentary was ignored".[152]This touches on the need to bridge the common bias against other circles of thought.[153]
Occam's razor might fall under the heading of "simple elegance", but it is arguable thatparsimonyandelegancepull in different directions. Introducing additional elements could simplify theory formulation, whereas simplifying a theory's ontology might lead to increased syntactical complexity.[147]
Sometimes ad hoc modifications of a failing idea may also be dismissed as lacking "formal elegance". This appeal to what may be called "aesthetics" is hard to characterise, but it is essentially about a sort of familiarity. Arguments based on "elegance" are contentious, however, and over-reliance on familiarity breeds stagnation.[144]
Principles of invariance have been a theme in scientific writing, and especially in physics, since at least the early 20th century.[θ]The basic idea is that good structures to look for are those independent of perspective, an idea that had of course featured earlier, for example inMill's Methodsof difference and agreement, methods that would later be referred back to in the context of contrast and invariance.[154]But, as tends to be the case, there is a difference between something being a basic consideration and something being given weight. Principles of invariance have only been given weight in the wake of Einstein's theories of relativity, which reduced everything to relations and were thereby fundamentally unchangeable, unable to be varied.[155][x]AsDavid Deutschput it in 2009: "the search for hard-to-vary explanations is the origin of all progress".[146]
An example can be found in one ofEinstein's thought experiments: that of a lab suspended in empty space, which illustrates a useful invariant observation. Einstein imagined the absence of gravity and an experimenter free-floating in the lab. If an entity now pulls the lab upwards, accelerating uniformly, the experimenter perceives the resulting force as gravity, whereas the entity feels the work needed to accelerate the lab continuously.[x]Through this experiment Einstein was able to equate gravitational and inertial mass, something unexplained by Newton's laws, and an early but "powerful argument for a generalised postulate of relativity".[156]
The feature, which suggests reality, is always some kind of invariance of a structure independent of the aspect, the projection.
The discussion oninvariancein physics often takes place in the more specific context ofsymmetry.[155]The Einstein example above would, in the parlance of Mill, be an agreement between two values. In the context of invariance, it is a variable that remains unchanged through some kind of transformation or change in perspective. A discussion focused on symmetry would view the two perspectives as systems that share a relevant aspect and are therefore symmetrical.
Related principles here arefalsifiabilityandtestability. The opposite of beinghard-to-varyare theories that resist falsification, a frustration expressed colourfully byWolfgang Paulias their being "not even wrong". The importance of scientific theories being falsifiable is especially emphasised in the philosophy of Karl Popper. The broader view here is testability, since it includes the former and allows for additional practical considerations.[157][158]
Philosophy of science looks atthe underpinning logicof the scientific method, at what separatesscience from non-science, and theethicthat is implicit in science. There are basic assumptions, derived from philosophy by at least one prominent scientist,[D][159]that form the base of the scientific method – namely, that reality is objective and consistent, that humans have the capacity to perceive reality accurately, and that rational explanations exist for elements of the real world.[159]These assumptions frommethodological naturalismform a basis on which science may be grounded.Logical positivist,empiricist,falsificationist, and other theories have criticized these assumptions and given alternative accounts of the logic of science, but each has also itself been criticized.
There are several kinds of modern philosophical conceptualizations of, and attempts at defining, the method of science.[λ]One is attempted by theunificationists, who argue for the existence of a unified definition that is useful (or at least 'works') in every context of science. Thepluralistsargue that the sciences are too fractured for a universal definition of their method to be useful. And there are those who argue that the very attempt at definition is already detrimental to the free flow of ideas.
Additionally, there have been views on the social framework in which science is done, and on the impact of the sciences' social environment on research. There is also 'scientific method' as popularised by Dewey inHow We Think(1910) and Karl Pearson inGrammar of Science(1892), as used in a fairly uncritical manner in education.
Scientific pluralism is a position within thephilosophy of sciencethat rejects various proposedunitiesof scientific method and subject matter. Scientific pluralists hold that science is not unified in one or more of the following ways: themetaphysicsof its subject matter, theepistemologyof scientific knowledge, or theresearch methodsand models that should be used. Some pluralists believe that pluralism is necessary due to the nature of science. Others say that sincescientific disciplinesalready vary in practice, there is no reason to believe this variation is wrong until a specific unification isempiricallyproven. Finally, some hold that pluralism should be allowed fornormativereasons, even if unity were possible in theory.
Unificationism, in science, was a central tenet oflogical positivism.[161][162]Different logical positivists construed this doctrine in several different ways, e.g. as areductionistthesis, that the objects investigated by thespecial sciencesreduce to the objects of a common, putatively more basic domain of science, usually thought to be physics; as the thesis that all theories and results of the various sciences can or ought to be expressed in a common language or "universal slang"; or as the thesis that all the special sciences share a common scientific method.[y]
Development of the idea has been troubled by accelerated advancement in technology that has opened up many new ways to look at the world.
The fact that the standards of scientific success shift with time does not only make the philosophy of science difficult; it also raises problems for the public understanding of science. We do not have a fixed scientific method to rally around and defend.
Paul Feyerabendexamined the history of science, and was led to deny that science is genuinely a methodological process. In his 1975 bookAgainst Methodhe argued that no description of scientific methodcould possibly be broad enoughto include all the approaches and methods used by scientists, and that there are no useful and exception-freemethodological rulesgoverning the progress of science. In essence, he said that for any specific method or norm of science, one can find a historic episode where violating it has contributed to the progress of science. He jokingly suggested that, if believers in the scientific method wish to express a single universally valid rule, it should be 'anything goes'.[164]As has been argued before him, however, this is uneconomic; problem solvers and researchers are to be prudent with their resources during their inquiry.[E]
A more general inference against formalised method has been found through research involving interviews with scientists regarding their conception of method. This research indicated that scientists frequently encounter difficulty in determining whether the available evidence supports their hypotheses. This reveals that there are no straightforward mappings between overarching methodological concepts and precise strategies to direct the conduct of research.[166]
Inscience education, the idea of a general and universal scientific method has been notably influential, and numerous studies (in the US) have shown that this framing of method often forms part of both students' and teachers' conception of science.[167][168]This convention of traditional education has been argued against by scientists, as there is a consensus that education's sequential elements and unified view of scientific method do not reflect how scientists actually work.[169][170][171]Major organizations of scientists such as the American Association for the Advancement of Science (AAAS) consider the sciences to be a part of the liberal arts traditions of learning, and proper understanding of science includes understanding of philosophy and history, not just science in isolation.[172]
How the sciences make knowledge has been taught in the context of "the" scientific method (singular) since the early 20th century. Various systems of education, including but not limited to the US, have taught the method of science as a process or procedure, structured as a definitive series of steps:[176]observation, hypothesis, prediction, experiment.
This version of the method of science has been a long-established standard in primary and secondary education, as well as the biomedical sciences.[178]It has long been held to be an inaccurate idealisation of how some scientific inquiries are structured.[173]
The taught presentation of science has had to contend with demerits such as:[179]
The scientific method no longer features in the 2013 standards for US education (NGSS), which replaced those of 1996 (NRC). These, too, influenced international science education,[179]and the standards measured have since shifted from the singular hypothesis-testing method to a broader conception of scientific methods.[181]These scientific methods, which are rooted in scientific practices and not epistemology, are described as the threedimensionsof scientific and engineering practices, crosscutting concepts (interdisciplinary ideas), and disciplinary core ideas.[179]
The scientific method, as a result of simplified and universal explanations, is often held to have reached a kind of mythological status; as a tool for communication or, at best, an idealisation.[36][170]Education's approach was heavily influenced by John Dewey'sHow We Think(1910).[33]Van der Ploeg (2016) indicated that Dewey's views on education had long been used to further an idea of citizen education removed from "sound education", claiming that references to Dewey in such arguments were undue interpretations (of Dewey).[182]
The sociology of knowledge is a concept in the discussion around scientific method, claiming the underlying method of science to be sociological. King explains that sociology distinguishes here between the system of ideas that govern the sciences through an inner logic, and the social system in which those ideas arise.[μ][i]
A perhaps accessible lead into what is claimed isFleck'sthought, echoed inKuhn'sconcept ofnormal science. According to Fleck, scientists' work is based on a thought-style that cannot be rationally reconstructed. It gets instilled through the experience of learning, and science is then advanced based on a tradition of shared assumptions held by what he calledthought collectives. Fleck also claims this phenomenon to be largely invisible to members of the group.[186]
Comparably, following thefield researchin an academic scientific laboratory byLatourandWoolgar,Karin Knorr Cetinahas conducted a comparative study of two scientific fields (namelyhigh energy physicsandmolecular biology) to conclude that the epistemic practices and reasonings within both scientific communities are different enough to introduce the concept of "epistemic cultures", in contradiction with the idea that a so-called "scientific method" is unique and a unifying concept.[187][z]
On the idea of Fleck'sthought collectives, sociologists built the concept ofsituated cognition(that the perspective of the researcher fundamentally affects their work), as well as more radical views.
Norwood Russell Hanson, alongsideThomas KuhnandPaul Feyerabend, extensively explored the theory-laden nature of observation in science. Hanson introduced the concept in 1958, emphasizing that observation is influenced by theobserver's conceptual framework. He used the concept ofgestaltto show how preconceptions can affect both observation and description, and illustrated this with examples like the initial rejection ofGolgi bodiesas an artefact of staining technique, and the differing interpretations of the same sunrise by Tycho Brahe and Johannes Kepler.Intersubjectivityled to different conclusions.[110][d]
Kuhn and Feyerabend acknowledged Hanson's pioneering work,[191][192]although Feyerabend's views on methodological pluralism were more radical. Criticisms like those from Kuhn and Feyerabend prompted discussions leading to the development of thestrong programme, a sociological approach that seeks to explain scientific knowledge without recourse to the truth or validity of scientific theories. It examines how scientific beliefs are shaped by social factors such as power, ideology, and interests.
Thepostmodernistcritiques of science have themselves been the subject of intense controversy. This ongoing debate, known as thescience wars, is the result of conflicting values and assumptions betweenpostmodernistandrealistperspectives. Postmodernists argue that scientific knowledge is merely a discourse, devoid of any claim to fundamental truth. In contrast, realists within the scientific community maintain that science uncovers real and fundamental truths about reality. Many books have been written by scientists which take on this problem and challenge the assertions of the postmodernists while defending science as a legitimate way of deriving truth.[193]
Somewhere between 33% and 50% of allscientific discoveriesare estimated to have beenstumbled upon, rather than sought out. This may explain why scientists so often express that they were lucky.[9]Scientists themselves in the 19th and 20th century acknowledged the role of fortunate luck or serendipity in discoveries.[10]Louis Pasteuris credited with the famous saying that "Luck favours the prepared mind", but some psychologists have begun to study what it means to be 'prepared for luck' in the scientific context. Research is showing that scientists are taught various heuristics that tend to harness chance and the unexpected.[9][195]This is whatNassim Nicholas Talebcalls "Anti-fragility"; while some systems of investigation are fragile in the face ofhuman error, human bias, and randomness, the scientific method is more than resistant or tough – it actually benefits from such randomness in many ways (it is anti-fragile). Taleb believes that the more anti-fragile the system, the more it will flourish in the real world.[196]
Psychologist Kevin Dunbar says the process of discovery often starts with researchers finding bugs in their experiments. These unexpected results lead researchers to try to fix what theythinkis an error in their method. Eventually, the researcher decides the error is too persistent and systematic to be a coincidence. The highly controlled, cautious, and curious aspects of the scientific method are thus what make it well suited for identifying such persistent systematic errors. At this point, the researcher will begin to think of theoretical explanations for the error, often seeking the help of colleagues across different domains of expertise.[9][195]
When the scientific method employs statistics as a key part of its arsenal, there are mathematical and practical issues that can have a deleterious effect on the reliability of the output of scientific methods. This is described in a popular 2005 scientific paper "Why Most Published Research Findings Are False" byJohn Ioannidis, which is considered foundational to the field ofmetascience.[130]Much research in metascience seeks to identify poor use of statistics and improve its use, an example being themisuse of p-values.[197]
The points raised are both statistical and economical. Statistically, research findings are less likely to be true when studies are small and when there is significant flexibility in study design, definitions, outcomes, and analytical approaches. Economically, the reliability of findings decreases in fields with greater financial interests, biases, and a high level of competition among research teams. As a result, most research findings are considered false across various designs and scientific fields, particularly in modern biomedical research, which often operates in areas with very low pre- and post-study probabilities of yielding true findings. Nevertheless, despite these challenges, most new discoveries will continue to arise from hypothesis-generating research that begins with low or very low pre-study odds. This suggests that expanding the frontiers of knowledge will depend on investigating areas outside the mainstream, where the chances of success may initially appear slim.[130]
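To make the statistical point concrete, the following sketch (assuming NumPy and SciPy are available; the number of studies, the group size, and the 0.05 threshold are illustrative assumptions, not figures taken from the paper) simulates many studies in which the null hypothesis is true and counts how many nevertheless reach "significance":

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_studies, n_per_group, alpha = 1000, 30, 0.05

false_positives = 0
for _ in range(n_studies):
    # Both groups are drawn from the same distribution: the null hypothesis is true.
    a = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    b = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    _, p = ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

# Roughly alpha * n_studies "significant" findings appear despite no real effect.
print(f"{false_positives} of {n_studies} null studies reached p < {alpha}")
```

Around 5% of the null studies come out "significant" by chance alone; combined with small samples, flexible analysis choices, and selective reporting, this is one of the mechanisms behind the unreliability Ioannidis describes.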
Science applied to complex systems can involve elements such astransdisciplinarity,systems theory,control theory, andscientific modelling.
In general, the scientific method may be difficult to apply stringently to diverse, interconnected systems and large data sets. In particular, practices used withinBig data, such aspredictive analytics, may be considered to be at odds with the scientific method,[198]as some of the data may have been stripped of the parameters which might be material in alternative hypotheses for an explanation; thus the stripped data would only serve to support thenull hypothesisin the predictive analytics application. Fleck (1979), pp. 38–50 notes "a scientific discovery remains incomplete without considerations of the social practices that condition it".[199]
Science is the process of gathering, comparing, and evaluating proposed models againstobservables.A model can be a simulation, mathematical or chemical formula, or set of proposed steps. Science is like mathematics in that researchers in both disciplines try to distinguish what isknownfrom what isunknownat each stage of discovery. Models, in both science and mathematics, need to be internally consistent and also ought to befalsifiable(capable of disproof). In mathematics, a statement need not yet be proved; at such a stage, that statement would be called aconjecture.[200]
Mathematical work and scientific work can inspire each other.[42]For example, the technical concept oftimearose inscience, and timelessness was a hallmark of a mathematical topic. But today, thePoincaré conjecturehas been proved using time as a mathematical concept in which objects can flow (seeRicci flow).[201]
Nevertheless, the connection between mathematics and reality (and so science to the extent it describes reality) remains obscure.Eugene Wigner's paper, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", is a very well-known account of the issue from a Nobel Prize-winning physicist. In fact, some observers (including some well-known mathematicians such asGregory Chaitin, and others such asLakoff and Núñez) have suggested that mathematics is the result of practitioner bias and human limitation (including cultural ones), somewhat like the post-modernist view of science.[202]
George Pólya's work onproblem solving,[203]the construction of mathematicalproofs, andheuristic[204][205]show that the mathematical method and the scientific method differ in detail, while nevertheless resembling each other in using iterative or recursive steps.
In Pólya's view,understandinginvolves restating unfamiliar definitions in your own words, resorting to geometrical figures, and questioning what we know and do not know already;analysis, which Pólya takes fromPappus,[206]involves free and heuristic construction of plausible arguments,working backward from the goal, and devising a plan for constructing the proof;synthesisis the strictEuclideanexposition of step-by-step details[207]of the proof;reviewinvolves reconsidering and re-examining the result and the path taken to it.
Building on Pólya's work,Imre Lakatosargued that mathematicians actually use contradiction, criticism, and revision as principles for improving their work.[208][ν]In like manner to science, where truth is sought but certainty is not found, inProofs and RefutationsLakatos tried to establish that no theorem ofinformal mathematicsis final or perfect. This means that, in non-axiomatic mathematics, we should not think that a theorem is ultimately true, only that nocounterexamplehas yet been found. Once a counterexample, i.e. an entity contradicting or not explained by the theorem, is found, we adjust the theorem, possibly extending the domain of its validity. This is a continuous way our knowledge accumulates, through the logic and process of proofs and refutations. (However, if axioms are given for a branch of mathematics, this creates a logical system; Wittgenstein 1921,Tractatus Logico-Philosophicus5.13. Lakatos claimed that proofs from such a system weretautological, i.e.internally logically true, byrewriting forms, as shown by Poincaré, who demonstrated the technique of transforming tautologically true forms (viz. theEuler characteristic) into or out of forms fromhomology,[209]or more abstractly, fromhomological algebra.[210][211][ν])
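The running example inProofs and Refutationsis Euler's polyhedron formula, which may be worth stating for orientation (the formula and the torus counterexample are standard facts, added here only as an illustration and not drawn from the cited sources):

```latex
V - E + F = 2 \qquad \text{e.g. for the cube: } 8 - 12 + 6 = 2
```

A polyhedron with a hole through it (a "picture frame", topologically a torus) instead gives V − E + F = 0; counterexamples of this kind drive the successive refinements of the theorem and of its domain of validity that Lakatos traces.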
Lakatos proposed an account of mathematical knowledge based on Polya's idea ofheuristics. InProofs and Refutations, Lakatos gave several basic rules for finding proofs and counterexamples to conjectures. He thought that mathematical 'thought experiments' are a valid way to discover mathematical conjectures and proofs.[213]
Gauss, when asked how he came about histheorems, once replied "durch planmässiges Tattonieren" (throughsystematic palpable experimentation).[214]
|
https://en.wikipedia.org/wiki/Process_(science)
|
Process architectureis the structural design of general process systems. It applies to fields such as computers (software, hardware, networks, etc.),business processes(enterprise architecture, policy and procedures, logistics, project management, etc.), and any other process system of varying degrees ofcomplexity.[1]
Processes are defined as having inputs, outputs and the energy required to transform inputs to outputs. Use of energy during transformation also implies a passage of time: a process takesreal timeto perform its associated action. A process also requires space for input/output objects and transforming objects to exist: a process uses real space.
A process system is a specializedsystemof processes. Processes are composed of processes: complex processes are made up of several processes that are in turn made up of several smaller processes. This results in an overall structuralhierarchyofabstraction. If the process system is studied hierarchically, it is easier to understand and manage; therefore, process architecture requires the ability to consider process systems hierarchically. Graphical modeling of process architectures is addressed bydualistic Petri nets. Mathematical consideration of process architectures may be found inCCSand theπ-calculus.
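As a loose illustration of this hierarchical view, here is a sketch in Python (the class and process names are invented for the example and are not part of any process-architecture standard):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Process:
    """A process transforms inputs into outputs and may be composed of sub-processes."""
    name: str
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    subprocesses: List["Process"] = field(default_factory=list)

def describe(process: Process, depth: int = 0) -> None:
    """Walk the process hierarchy one level of abstraction at a time."""
    print("  " * depth + f"{process.name}: {process.inputs} -> {process.outputs}")
    for sub in process.subprocesses:
        describe(sub, depth + 1)

# Illustrative example: a top-level process decomposed into two sub-processes.
assembly = Process(
    "assemble product",
    inputs=["parts", "energy"],
    outputs=["product"],
    subprocesses=[
        Process("fit components", ["parts"], ["subassembly"]),
        Process("test subassembly", ["subassembly"], ["product"]),
    ],
)
describe(assembly)
```

Studying the top-level process alone corresponds to the black-box (suprastructure) view, while descending into the sub-processes corresponds to the white-box (infrastructure) view discussed below.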
The structure of a process system, or its architecture, can be viewed as a dualistic relationship of itsinfrastructureand suprastructure.[1][2]The infrastructure describes a process system's component parts and their interactions. The suprastructure considers the super system of which the process system is a part. (Suprastructure should not be confused withsuperstructure, which is actually part of the infrastructure built for (external) support.) As one traverses the process architecture from one level of abstraction to the next, infrastructure becomes the basis for suprastructure and vice versa as one looks within a system or without.
Requirements for a process system are derived at every hierarchical level.[2]Black-box requirements for a system come from its suprastructure. Customer requirements are black-box requirements near, if not at, the top of a process architecture's hierarchy. White-box requirements, such as engineering rules, programmingsyntax, etc., come from the process system's infrastructure.
Process systems are a dualistic phenomenon of change/no-change or form/transform and, as such, are well-suited to being modeled by the bipartitePetri netsmodeling system and in particular by process-classdualistic Petri nets, where processes can be simulated in real time and space and studied hierarchically.
|
https://en.wikipedia.org/wiki/Process_architecture
|
Incomputer science, theprocess calculi(orprocess algebras) are a diverse family of related approaches for formally modellingconcurrent systems. Process calculi provide a tool for the high-level description of interactions, communications, and synchronizations between a collection of independent agents or processes. They also providealgebraiclaws that allow process descriptions to be manipulated and analyzed, and permit formal reasoning about equivalences between processes (e.g., usingbisimulation). Leading examples of process calculi includeCSP,CCS,ACP, andLOTOS.[1]More recent additions to the family include theπ-calculus, theambient calculus,PEPA, thefusion calculusand thejoin-calculus.
While the variety of existing process calculi is very large (including variants that incorporatestochasticbehaviour, timing information, and specializations for studying molecular interactions), there are several features that all process calculi have in common:[2]
To define aprocess calculus, one starts with a set ofnames(orchannels) whose purpose is to provide means of communication. In many implementations, channels have rich internal structure to improve efficiency, but this is abstracted away in most theoretic models. In addition to names, one needs a means to form new processes from old ones. The basic operators, always present in some form or other, allow:[3]
Parallel composition of two processesP{\displaystyle {\mathit {P}}}andQ{\displaystyle {\mathit {Q}}}, usually writtenP|Q{\displaystyle P\vert Q}, is the key primitive distinguishing the process calculi from sequential models of computation. Parallel composition allows computation inP{\displaystyle {\mathit {P}}}andQ{\displaystyle {\mathit {Q}}}to proceed simultaneously and independently. But it also allows interaction, that is synchronisation and flow of information fromP{\displaystyle {\mathit {P}}}toQ{\displaystyle {\mathit {Q}}}(or vice versa) on a channel shared by both. Crucially, an agent or process can be connected to more than one channel at a time.
Channels may be synchronous or asynchronous. In the case of a synchronous channel, the agent sending a message waits until another agent has received the message. Asynchronous channels do not require any such synchronization. In some process calculi (notably theπ-calculus) channels themselves can be sent in messages through (other) channels, allowing the topology of process interconnections to change. Some process calculi also allow channels to becreatedduring the execution of a computation.
Interaction can be (but isn't always) adirectedflow of information. That is, input and output can be distinguished as dual interaction primitives. Process calculi that make such distinctions typically define an input operator (e.g.x(v){\displaystyle x(v)}) and an output operator (e.g.x⟨y⟩{\displaystyle x\langle y\rangle }), both of which name an interaction point (herex{\displaystyle {\mathit {x}}}) that is used to synchronise with a dual interaction primitive.
Should information be exchanged, it will flow from the outputting to the inputting process. The output primitive will specify the data to be sent. Inx⟨y⟩{\displaystyle x\langle y\rangle }, this data isy{\displaystyle y}. Similarly, if an input expects to receive data, one or morebound variableswill act as place-holders to be substituted by data, when it arrives. Inx(v){\displaystyle x(v)},v{\displaystyle v}plays that role. The choice of the kind of data that can be exchanged in an interaction is one of the key features that distinguishes different process calculi.
Sometimes interactions must be temporally ordered. For example, it might be desirable to specify algorithms such as:first receive some data onx{\displaystyle {\mathit {x}}}and then send that data ony{\displaystyle {\mathit {y}}}.Sequential compositioncan be used for such purposes. It is well known from other models of computation. In process calculi, the sequentialisation operator is usually integrated with input or output, or both. For example, the processx(v)⋅P{\displaystyle x(v)\cdot P}will wait for an input onx{\displaystyle {\mathit {x}}}. Only when this input has occurred will the processP{\displaystyle {\mathit {P}}}be activated, with the received data throughx{\displaystyle {\mathit {x}}}substituted for identifierv{\displaystyle {\mathit {v}}}.
The key operational reduction rule, containing the computational essence of process calculi, can be given solely in terms of parallel composition, sequentialization, input, and output. The details of this reduction vary among the calculi, but the essence remains roughly the same. The reduction rule is:
x⟨y⟩⋅P|x(v)⋅Q⟶P|Q[y/v]{\displaystyle x\langle y\rangle \cdot P\;\vert \;x(v)\cdot Q\;\longrightarrow \;P\;\vert \;Q[y/v]}
The interpretation of this reduction rule is that the process x⟨y⟩⋅P sends the datum y along the channel x and then behaves as P, while the process x(v)⋅Q receives that datum on the same channel and then behaves as Q with y substituted for the bound variable v; the two components synchronise, and the parallel composition reduces accordingly.
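The following toy interpreter in Python is an illustrative sketch only, not an implementation of any particular calculus; it ignores name binding, scope, and choice, and the data representation is an assumption made for the example. It mimics the communication rule above for terms built from input and output prefixes:

```python
from dataclasses import dataclass
from typing import List, Optional, Union

# A toy process term is a chain of input/output prefixes ending in the null
# process (represented by None). Alpha-conversion and scoping are ignored.
Process = Union["Output", "Input"]

@dataclass
class Output:
    channel: str
    datum: str
    continuation: Optional[Process] = None   # the P in x<y>.P

@dataclass
class Input:
    channel: str
    variable: str
    continuation: Optional[Process] = None   # the Q in x(v).Q

def substitute(proc: Optional[Process], variable: str, datum: str) -> Optional[Process]:
    """Naively replace occurrences of `variable` by `datum` in a toy term."""
    if proc is None:
        return None
    swap = lambda name: datum if name == variable else name
    if isinstance(proc, Output):
        return Output(swap(proc.channel), swap(proc.datum),
                      substitute(proc.continuation, variable, datum))
    return Input(swap(proc.channel), proc.variable,
                 substitute(proc.continuation, variable, datum))

def reduce_once(parallel: List[Process]) -> List[Process]:
    """Apply the communication rule x<y>.P | x(v).Q -> P | Q[y/v]
    to the first matching output/input pair in the parallel composition."""
    for i, p in enumerate(parallel):
        for j, q in enumerate(parallel):
            if i != j and isinstance(p, Output) and isinstance(q, Input) \
                    and p.channel == q.channel:
                rest = [r for k, r in enumerate(parallel) if k not in (i, j)]
                new = [p.continuation, substitute(q.continuation, q.variable, p.datum)]
                return rest + [r for r in new if r is not None]
    return parallel  # no communication is possible

# x<y>.0 | x(v).v<z>.0  reduces to  y<z>.0
print(reduce_once([Output("x", "y"), Input("x", "v", Output("v", "z"))]))
```

Running the example reduces x⟨y⟩⋅0 | x(v)⋅v⟨z⟩⋅0 to y⟨z⟩⋅0: the received name y has been substituted for the bound variable v, so the receiver now outputs on the channel it was told about.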
The class of processes thatP{\displaystyle {\mathit {P}}}is allowed to range over as the continuation of the output operation substantially influences the properties of the calculus.
Processes do not limit the number of connections that can be made at a given interaction point. But interaction points allow interference (i.e. interaction). For the synthesis of compact, minimal and compositional systems, the ability to restrict interference is crucial.Hidingoperations allow control of the connections made between interaction points when composing agents in parallel. Hiding can be denoted in a variety of ways. For example, in theπ-calculusthe hiding of a name x{\displaystyle {\mathit {x}}} in P{\displaystyle {\mathit {P}}} can be expressed as (νx)P{\displaystyle (\nu \;x)P}, while inCSPit might be written as P∖{x}{\displaystyle P\setminus \{x\}}.
The operations presented so far describe only finite interaction and are consequently insufficient for full computability, which includes non-terminating behaviour.Recursionandreplicationare operations that allow finite descriptions of infinite behaviour. Recursion is well known from the sequential world. Replication !P{\displaystyle !P} can be understood as abbreviating the parallel composition of a countably infinite number of P{\displaystyle {\mathit {P}}} processes: !P=P∣!P{\displaystyle !P=P\mid \;!P}
Process calculi generally also include anull process(variously denoted asnil{\displaystyle {\mathit {nil}}},0{\displaystyle 0},STOP{\displaystyle {\mathit {STOP}}},δ{\displaystyle \delta }, or some other appropriate symbol) which has no interaction points. It is utterly inactive and its sole purpose is to act as the inductive anchor on top of which more interesting processes can be generated.
Process algebra has been studied fordiscrete timeandcontinuous time(real time or dense time).[4]
In the first half of the 20th century, various formalisms were proposed to capture the informal concept of acomputable function, withμ-recursive functions,Turing machinesand thelambda calculuspossibly being the best-known examples today. The surprising fact that they are essentially equivalent, in the sense that they are all encodable into each other, supports theChurch-Turing thesis. Another shared feature is more rarely commented on: they all are most readily understood as models ofsequentialcomputation. The subsequent consolidation of computer science required a more subtle formulation of the notion of computation, in particular explicit representations of concurrency and communication. Models of concurrency such as the process calculi,Petri netsin 1962, and theactor modelin 1973 emerged from this line of inquiry.
Research on process calculi began in earnest withRobin Milner's seminal work on theCalculus of Communicating Systems(CCS) during the period from 1973 to 1980.C.A.R. Hoare'sCommunicating Sequential Processes(CSP) first appeared in 1978, and was subsequently developed into a full-fledged process calculus during the early 1980s. There was much cross-fertilization of ideas between CCS and CSP as they developed. In 1982Jan BergstraandJan Willem Klopbegan work on what came to be known as theAlgebra of Communicating Processes(ACP), and introduced the termprocess algebrato describe their work.[1]CCS, CSP, and ACP constitute the three major branches of the process calculi family: the majority of the other process calculi can trace their roots to one of these three calculi.
Various process calculi have been studied and not all of them fit the paradigm sketched here. The most prominent example may be theambient calculus. This is to be expected as process calculi are an active field of study. Currently research on process calculi focuses on the following problems.
The ideas behind process algebra have given rise to several tools including:
Thehistory monoidis thefree objectthat is generically able to represent the histories of individual communicating processes. A process calculus is then aformal languageimposed on a history monoid in a consistent fashion.[6]That is, a history monoid can only record a sequence of events, with synchronization, but does not specify the allowed state transitions. Thus, a process calculus is to a history monoid what a formal language is to afree monoid(a formal language is a subset of the set of all possible finite-length strings of analphabetgenerated by theKleene star).
The use of channels for communication is one of the features distinguishing the process calculi from other models ofconcurrency, such asPetri netsand theactor model(seeActor model and process calculi). One of the fundamental motivations for including channels in the process calculi was to enable certain algebraic techniques, thereby making it easier to reason about processes algebraically.
|
https://en.wikipedia.org/wiki/Process_calculus
|
Aprocess flow diagram(PFD) is a diagram commonly used inchemicalandprocess engineeringto indicate the general flow of plant processes and equipment. The PFD displays the relationship betweenmajorequipment of a plant facility and does not show minor details such as piping details and designations. Another commonly used term for a PFD isprocessflowsheet. It is the key document in process design.[1]
Typically, process flow diagrams of a singleunit processinclude the following:
Process flow diagrams generally do not include:
Process flow diagrams of multiple process units within a large industrial plant will usually contain less detail and may be calledblock flow diagramsorschematic flow diagrams.
The process flow diagram below depicts a single chemical engineering unit process known as anamine treating plant:
The process flow diagram below is an example of a schematic or block flow diagram and depicts the various unit processes within a typicaloil refinery:
A PFD can be computer generated from process simulators (seeList of Chemical Process Simulators), CAD packages, or flow chart software using a library of chemical engineering symbols. Rules and symbols are available from standardization organizations such asDIN,ISOorANSI. Often PFDs are produced on large sheets of paper.
PFDs of many commercial processes can be found in the literature, specifically in encyclopedias of chemical technology, although some might be outdated. To find recent ones, patent databases such as those available from theUnited States Patent and Trademark Officecan be useful.
|
https://en.wikipedia.org/wiki/Process_flow_diagram
|
Inphilosophy, aprocess ontologyrefers to a universal model of the structure of the world as an ordered wholeness.[1][2]Such ontologies arefundamental ontologies, in contrast to the so-calledapplied ontologies. Fundamental ontologies do not claim to be accessible to anyempiricalproof in itself but to be a structural design pattern, out of which empiricalphenomenacan be explained and put together consistently. Throughout Western history, the dominating fundamental ontology is the so-calledsubstance theory. However, fundamental process ontologies have become more important in recent times, because the progress in the discovery of the foundations of physics has spurred the development of a basic concept able to integrate such boundary notions as "energy," "object", and those of the physical dimensions ofspaceandtime.
Incomputer science, aprocess ontologyis a description of the components and their relationships that make up a process. A formal process ontology is anontologyin the knowledge domain of operations. Often such ontologies take advantage of the benefits of anupper ontology.Planning softwarecan be used to perform plan generation based on the formal description of the process and its constraints. Numerous efforts have been made to define a process/planning ontology.[3]
A process may be defined as a set of transformations of input elements into output elements with specific properties, with the transformations characterized by parameters and constraints, such as in manufacturing or biology. A process may also be defined as theworkflowsandsequence of eventsinherent in processes such as manufacturing, engineering andbusiness processes.
The Process Specification Language (PSL) is a process ontology developed for the formal description and modeling of basic manufacturing, engineering and business processes. This ontology provides a vocabulary of classes and relations for concepts at the ground level of event-instances, object-instances, and timepoints. PSL’s top level is built around the following:[4]
In a process/planning ontology developed for the ontology Cyc, classes and relations above the ground level of PSL allow processes to be described purely at the type-level.[5][6]The ground level of PSL uses the primitives of event-instance, object-instance, and timepoint description. The types above the ground level of PSL have also been expressed in PSL, showing that the type-level and the ground level are relatively independent. The type-levels for the Cyc process ontology above this ground level use the following concepts:
The project SUPER[7](SemanticsUtilised forProcess management within and betweenEnteRprises) has a goal of the definition of ontologies for Semantic Business Process Management (SBPM), but these ontologies can be reused in diverse environments. Part of this project is to define an Upper Process Ontology (UPO) that ties together all other SUPER ontologies. The results of the project SUPER include the UPO and a set of ontologies for processes and organizations.[8][9]Most of the ontologies are written inWSML, and some are also written in OCML.
A candidate model for the UPO was DDPO[10](DOLCE+DnS Plan Ontology), a planning ontology which specifies plans and distinguishes between abstract and executable plans.DOLCE[11][12](Descriptive Ontology for Linguistic and Cognitive Engineering) aims at capturing the ontological categories underlying natural language and human commonsense.DnS(Descriptions and Situations), is a constructivist ontology that allows for context-sensitive redescriptions of the types and relations postulated by other given ontologies (or ground vocabularies). Together in DDPO, DOLCE and DnS are used to build a Plan Ontology that includes physical and non-physical objects (social entities, mental objects and states, conceptualizations, information objects, constraints), events, states, regions, qualities, and constructivist situations. The main target of DDPO is tasks, namely the types of actions, their sequencing, and the controls performed on them.
The ontology oXPDL[13]is a process interchange ontology based on the standardised XML Process Definition Language (XPDL). The purpose of oXPDL is to model the semantics of XPDL process models in standardized Web ontology languages such asOWLandWSML, while incorporating features of existing standard ontologies such asPSL,RosettaNet, andSUMO.
The General Formal Ontology[14][15](GFO) is an ontology integrating processes and objects. GFO includes elaborations of categories like objects, processes, time and space, properties, relations, roles, functions, facts, and situations. GFO allows for different axiomatizations of its categories, such as the existence of atomic time-intervals vs. dense time. Two of the specialties of GFO are its account of persistence and its time model. Regarding persistence, the distinction between endurants (objects) and perdurants (processes) is made explicit within GFO by the introduction of a special category, a persistant [sic]. A persistant is a special category with the intention that its instances "remain identical" over time. With respect to time, time intervals are taken as primitive in GFO, and time-points (called "time boundaries") are derived. Moreover, time-points may coincide, which is convenient for modelling instantaneous changes.
The multi metamodel process ontology[16][17](m3po) combines workflows and choreography descriptions so that it can be used as a process interchange ontology. For internal business processes, Workflow Management Systems are used for process modelling and allow describing and executing business processes.[18]For external business processes, choreography descriptions are used to describe how business partners can cooperate. A choreography can be considered to be a view of an internal business process with the internal logic not visible, similar to public views on private workflows.[19][20][21]The m3po ontology unifies both internal and external business processes, combining reference models and languages from the workflow and choreography domains. The m3po ontology is written inWSML. The related ontology m3pl, written inPSLusing the extension FLOWS (First Order Logic for Web Services), enables the extraction of choreography interfaces from workflow models.[22]
The m3po ontology combines features of the following reference models and languages:
The m3po ontology is organized using five key aspects of workflow specifications and workflow management.[23]Because different workflow models put a different emphasis on the five aspects, the most elaborate reference model for each aspect was used and combined into m3po.
|
https://en.wikipedia.org/wiki/Process_ontology
|
TheProcess Specification Language(PSL) is a set oflogicterms used to describeprocesses. The logic terms arespecifiedin anontologythat provides aformal descriptionof the components and their relationships that make up a process. The ontology was developed at the National Institute of Standards and Technology (NIST), and has been approved as an international standard in the documentISO18629.
The Process Specification Language can be used for the representation ofmanufacturing,engineeringandbusiness processes, including production scheduling, process planning,workflow management,business process reengineering, simulation, process realization, process modelling, andproject management. In the manufacturing domain, PSL's objective is to serve as a common representation for integrating several process-related applications throughout the manufacturing processlife cycle.[1]
The foundation of the ontology of PSL is a set of primitiveconcepts(object, activity, activity_occurrence, timepoint), constants (inf+, inf-), functions (beginof, endof), andrelations(occurrence_of, participates_in, between, before, exists_at, is_occurring_at). This core ontology is then used to describe more complex concepts.[2]The ontology uses theCommon Logic Interchange Format(CLIF) to represent the concepts, constants, functions, and relations.[3]
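For a rough feel of this ground-level vocabulary, here is a sketch in Python rather than CLIF (the class names mirror the PSL primitives listed above, but the particular fields and the participates_in check are simplifications invented for the example, not PSL semantics):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Timepoint:
    value: float          # PSL additionally has the constants inf+ and inf-

@dataclass(frozen=True)
class Object:
    name: str

@dataclass(frozen=True)
class Activity:
    name: str

@dataclass(frozen=True)
class ActivityOccurrence:
    activity: Activity
    begin: Timepoint      # cf. the PSL function beginof
    end: Timepoint        # cf. the PSL function endof

def participates_in(obj: Object, occ: ActivityOccurrence, t: Timepoint) -> bool:
    """Toy stand-in for a participates_in relation: here we only check that the
    timepoint falls within the occurrence's temporal extent."""
    return occ.begin.value <= t.value <= occ.end.value

drill = Activity("drill hole")
occurrence = ActivityOccurrence(drill, Timepoint(0.0), Timepoint(5.0))
workpiece = Object("workpiece 42")
print(participates_in(workpiece, occurrence, Timepoint(2.5)))  # True
```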
This ontology provides a vocabulary of classes and relations for concepts at the ground level of event-instances, object-instances, and timepoints. PSL's top level is built around the following:[4]
|
https://en.wikipedia.org/wiki/Process_Specification_Language
|
Computer-aided detection(CADe), also calledcomputer-aided diagnosis(CADx), are systems that assist doctors in the interpretation ofmedical images. Imaging techniques inX-ray,MRI,endoscopy, andultrasounddiagnostics yield a great deal of information that theradiologistor other medical professional has to analyze and evaluate comprehensively in a short time. CAD systems process digital images or videos for typical appearances and to highlight conspicuous sections, such as possible diseases, in order to offer input to support a decision taken by the professional.
CAD also has potential future applications indigital pathologywith the advent of whole-slide imaging andmachine learningalgorithms. So far its application has been limited to quantifyingimmunostainingbut is also being investigated for the standardH&E stain.[1]
CAD is aninterdisciplinarytechnology combining elements ofartificial intelligenceandcomputer visionwith radiological andpathologyimage processing. A typical application is the detection of a tumor. For instance, some hospitals use CAD to support preventive medical check-ups inmammography(diagnosis of breast cancer), the detection of polyps incolonoscopy, andlung cancer.
Computer-aided detection (CADe) systems are usually confined to marking conspicuous structures and sections. Computer-aided diagnosis (CADx) systems evaluate the conspicuous structures. For example, in mammography CAD highlights microcalcification clusters and hyperdense structures in the soft tissue. This allows the radiologist to draw conclusions about the condition of the pathology. Another application is CADq, which quantifies,e.g., the size of a tumor or the tumor's behavior in contrast medium uptake.Computer-aided simple triage (CAST)is another type of CAD, which performs a fully automatic initial interpretation andtriageof studies into some meaningful categories (e.g.negative and positive). CAST is particularly applicable in emergency diagnostic imaging, where a prompt diagnosis of critical, life-threatening condition is required.
Although CAD has been used in clinical environments for over 40 years, CAD usually does not substitute the doctor or other professional, but rather plays a supporting role. The professional (generally a radiologist) is generally responsible for the final interpretation of a medical image.[2]However, the goal of some CAD systems is to detect earliest signs of abnormality in patients that human professionals cannot, as indiabetic retinopathy, architectural distortion in mammograms,[3][4]ground-glass nodules in thoracic CT,[5][6]and non-polypoid (“flat”) lesions in CT colonography.[7]
In the late 1950s, with the dawn of modern computers researchers in various fields started exploring the possibility of building computer-aided medical diagnostic (CAD) systems.[8]These first CAD systems used flow-charts, statistical pattern-matching, probability theory, or knowledge bases to drive their decision-making process.[9]
In the early 1970s, some of the very early CAD systems in medicine, which were often referred as “expert systems” in medicine, were developed and used mainly for educational purposes. Examples include theMYCINexpert system,[10]theInternist-Iexpert system[11]and theCADUCEUS (expert system).[12]
During these early developments, the researchers were aiming at building entirely automated CAD / expert systems. The expected capability of computers was viewed with unrealistic optimism among these scientists. However, after the breakthrough paper, "Reducibility among Combinatorial Problems" byRichard M. Karp,[13]it became clear that there were limitations, but also potential opportunities, when one develops algorithms to solve groups of important computational problems.[9]
As a result of the new understanding of the various algorithmic limitations that Karp discovered in the early 1970s, researchers started realizing the serious limitations that CAD and expert systems in medicine have.[9]The recognition of these limitations brought the investigators to develop new kinds of CAD systems using more advanced approaches. Thus, by the late 1980s and early 1990s the focus shifted to the use ofdata miningapproaches for the purpose of building more advanced and flexible CAD systems.
In 1998, the first commercial CAD system for mammography, the ImageChecker system, was approved by the US Food and Drug Administration (FDA). In the following years several commercial CAD systems for analyzing mammography, breast MRI, and medical imaging of the lung, colon, and heart also received FDA approval. Currently, CAD systems are used as a diagnostic aid to support physicians in medical decision-making.[14]
CAD is fundamentally based on highly complexpattern recognition. X-ray or other types of images are scanned for suspicious structures. Normally a few thousand images are required to optimize the algorithm. Digital image data are copied to a CAD server in aDICOM-format and are prepared and analyzed in several steps.
1. Preprocessing for
2. Segmentation for
3. Structure/ROI (region of interest) analysis. Every detected region is analyzed individually for special characteristics:
4. Evaluation / classification. After the structure is analyzed, every ROI is evaluated individually (scoring) for the probability of a TP. The following procedures are examples of classification algorithms; one illustrative scoring step is sketched below.
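As one illustrative possibility for the scoring step, here is a sketch assuming NumPy and scikit-learn; the feature vectors, the random forest model, and the 0.5 threshold are assumptions made for the example, not the method of any particular CAD product:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Illustrative ROI feature vectors, e.g. [size_mm, mean_intensity, compactness].
# In a real system these would come from the segmentation and analysis steps above.
X_train = rng.normal(size=(200, 3))
y_train = rng.integers(0, 2, size=200)        # 1 = true positive (lesion), 0 = false hit

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score new ROIs; only those above a chosen threshold are highlighted for the radiologist.
roi_features = rng.normal(size=(5, 3))
scores = model.predict_proba(roi_features)[:, 1]
threshold = 0.5
for i, s in enumerate(scores):
    print(f"ROI {i}: score={s:.2f}", "-> mark" if s >= threshold else "-> ignore")
```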
If the detected structures have reached a certain threshold level, they are highlighted in the image for the radiologist. Depending on the CAD system, these markings can be saved permanently or temporarily. The latter's advantage is that only the markings approved by the radiologist are saved. False hits should not be saved, because they would make an examination at a later date more difficult.
CAD systems seek to highlight suspicious structures. Today's CAD systems cannot detect 100% of pathological changes. The hit rate (sensitivity) can be up to 90% depending on system and application.[24]A correct hit is termed a true positive (TP), while the incorrect marking of healthy sections constitutes a false positive (FP). The fewer FPs indicated, the higher thespecificityis. A low specificity reduces the acceptance of the CAD system because the user has to identify all of these wrong hits. The FP rate in lung overview examinations (CAD Chest) could be reduced to 2 per examination. In other segments (e.g.CT lung examinations) the FP rate could be 25 or more. InCASTsystems the FP rate must be extremely low (less than 1 per examination) to allow a meaningful studytriage.
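For concreteness, the two measures can be computed from the confusion counts as follows (a small sketch; the counts are invented for the example):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of actual pathological findings that the system marks (hit rate)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of healthy sections that the system correctly leaves unmarked."""
    return tn / (tn + fp)

# Illustrative counts from a hypothetical evaluation set:
tp, fp, fn, tn = 90, 25, 10, 875
print(f"sensitivity = {sensitivity(tp, fn):.2f}")   # 0.90
print(f"specificity = {specificity(tn, fp):.2f}")   # about 0.97
```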
The absolute detection rate of a radiologist is an alternative metric to sensitivity and specificity. Overall, results of clinical trials about sensitivity, specificity, and the absolute detection rate can vary markedly. Each study result depends on its basic conditions and has to be evaluated on those terms. The following facts have a strong influence:
Despite the many developments that CAD has achieved since the dawn of computers, there are still certain challenges that CAD systems face today.[25]
Some challenges are related to various algorithmic limitations in the procedures of a CAD system, including input data collection, preprocessing, processing and system assessment. Algorithms are generally designed to select a single likely diagnosis, thus providing suboptimal results for patients with multiple, concurrent disorders.[26]Today, input data for CAD mostly come fromelectronic health records(EHR). Effective design, implementation and analysis of EHR data is a major necessity for any CAD system.[25]
Due to the massive availability of data and the need to analyze it,big datais also one of the biggest challenges that CAD systems face today. The increasingly vast amount of patient data is a serious problem. Patient data are often complex and can be semi-structured orunstructured data, requiring highly developed approaches to store, retrieve and analyze them in reasonable time.[25]
During the preprocessing stage, input data must be normalized. Thenormalizationof input data includesnoise reductionand filtering.
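A minimal sketch of such a preprocessing step, assuming grayscale input and using standard SciPy filters; the filter choices and parameter values are illustrative, not prescribed by any particular CAD system.

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def preprocess(image: np.ndarray) -> np.ndarray:
    """Denoise a grayscale image and rescale its intensities to [0, 1]."""
    denoised = median_filter(image, size=3)          # suppress salt-and-pepper noise
    smoothed = gaussian_filter(denoised, sigma=1.0)  # suppress high-frequency noise
    lo, hi = smoothed.min(), smoothed.max()
    return (smoothed - lo) / (hi - lo + 1e-8)        # min-max normalization
```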
Processing may contain a few sub-steps depending on the application. The three basic sub-steps in medical imaging are segmentation,feature extraction/ selection, and classification. These sub-steps require advanced techniques to analyze input data in less computational time. Although much effort has been devoted to creating innovative techniques for these procedures of CAD systems, no single best algorithm has emerged for any individual step. Ongoing research into innovative algorithms for all aspects of CAD systems is essential.[25]
There is also a lack of standardized assessment measures for CAD systems.[25]This can make it difficult to obtain approval for commercial use from governing bodies such as theFDA. Moreover, while many positive developments of CAD systems have been demonstrated, studies validating their algorithms for clinical practice are still lacking.[27]
Other challenges relate to the difficulty healthcare providers face in adopting new CAD systems in clinical practice. Some negative studies may discourage the use of CAD. In addition, a lack of training of health professionals in the use of CAD sometimes leads to incorrect interpretation of system outcomes.[a]
CAD is used in the diagnosis ofbreast cancer,lung cancer,colon cancer,prostate cancer,bone metastases,coronary artery disease,congenital heart defect, pathological brain detection, fracture detection,Alzheimer's disease, anddiabetic retinopathy.
CAD is used in screeningmammography(X-ray examination of the female breast). Screening mammography is used for the early detection of breast cancer. CAD systems are often utilized to help classify a tumor as malignant (cancerous) or benign (non-cancerous). CAD is especially established in the US and the Netherlands and is used in addition to human evaluation, usually by a radiologist.
The first CAD system for mammography was developed in a research project at theUniversity of Chicago. Today it is commercially offered by iCAD andHologic. However, while achieving high sensitivities, CAD systems tend to have very low specificity and the benefits of using CAD remain uncertain. A 2008 systematic review on computer-aided detection in screening mammography concluded that CAD does not have a significant effect on cancer detection rate, but does undesirably increase recall rate (i.e.the rate of false positives). However, it noted considerable heterogeneity in the impact on recall rate across studies.[28]
Recent advances inmachine learning,deep-learningandartificial intelligencetechnology have enabled the development of CAD systems that are clinically proven to assistradiologistsin addressing the challenges of readingmammographicimages by improving cancer detection rates and reducing false positives and unnecessary patient recalls, while significantly decreasing reading times.[29]
Procedures to evaluate mammography based onmagnetic resonance imaging(MRI) exist too.
In the diagnosis of lung cancer,computed tomographywith special three-dimensional CAD systems is established and considered an appropriate second opinion.[30]For this, a volumetric dataset with up to 3,000 single images is prepared and analyzed. Round lesions (lung cancer, metastases and benign changes) from 1 mm in size are detectable. Today all well-known vendors of medical systems offer corresponding solutions.
Early detection of lung cancer is valuable. However, the random detection of lung cancer in the early stage (stage 1) in the X-ray image is difficult. Round lesions that vary from 5–10 mm are easily overlooked.[31]The routine application of CAD Chest Systems may help to detect small changes without initial suspicion. A number of researchers developed CAD systems for detection of lung nodules (round lesions less than 30 mm) in chest radiography[32][33][34]and CT,[35][36]and CAD systems for diagnosis (e.g., distinction between malignant and benign) of lung nodules in CT. Virtual dual-energy imaging[37][38][39][40]improved the performance of CAD systems in chest radiography.[41]
CAD is available for detection ofcolorectal polypsin thecolonin CT colonography.[42][43]Polyps are small growths that arise from the inner lining of the colon. CAD detects the polyps by identifying their characteristic "bump-like" shape. To avoid excessive false positives, CAD ignores the normal colon wall, including thehaustralfolds.
State-of-the-art methods in cardiovascular computing, cardiovascular informatics, and mathematical andcomputational modelingcan provide valuable tools in clinical decision-making.[44]CAD systems with novel image-analysis-based markers as input can aid vascular physicians to decide with higher confidence on best suitable treatment forcardiovascular diseasepatients.
Reliable early detection and risk stratification ofcarotid atherosclerosisis of the utmost importance for predictingstrokesin asymptomatic patients.[45]To this end, various noninvasive and low-cost markers have been proposed, usingultrasound-image-based features.[46]These combineechogenicity, texture, andmotion[47][48][49][50]characteristics to assist clinical decision-making towards improved prediction, assessment and management of cardiovascular risk.[51]
CAD is available for the automatic detection of significant (causing more than 50%stenosis)coronary artery diseasein coronary CT angiography (CCTA) studies.[52]
Early detection of pathology can be the difference between life and death. CADe can be done byauscultationwith a digital stethoscope and specialized software, also known ascomputer-aided auscultation. Murmurs, irregular heart sounds, caused by blood flowing through a defective heart, can be detected with high sensitivity and specificity. Computer-aided auscultation is sensitive to external noise and bodily sounds and requires an almost silent environment to function accurately.
Chaplot et al. were the first to useDiscrete Wavelet Transform(DWT) coefficients to detect pathological brains.[53]Maitra and Chatterjee employed the Slantlet transform, which is an improved version of the DWT. Their feature vector for each image is created by considering the magnitudes of Slantlet transform outputs corresponding to six spatial positions chosen according to a specific logic.[54]
In 2010, Wang and Wu presented a forward neural network (FNN) based method to classify a given MR brain image as normal or abnormal. The parameters of FNN were optimized via adaptive chaotic particle swarm optimization (ACPSO). Results over 160 images showed that the classification accuracy was 98.75%.[55]
In 2011, Wu and Wang proposed using DWT for feature extraction, PCA for feature reduction, and FNN with scaled chaotic artificial bee colony (SCABC) as classifier.[56]
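A rough Python sketch of this kind of pipeline (DWT feature extraction, PCA reduction, and a neural-network classifier) is given below. It uses the PyWavelets and scikit-learn libraries and standard gradient-based training rather than the SCABC or ACPSO optimizers used in the cited papers, so it should be read only as an illustration of the pipeline structure.

```python
import numpy as np
import pywt                                    # PyWavelets
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def dwt_features(image: np.ndarray, level: int = 3) -> np.ndarray:
    """Use the low-frequency sub-band of a multi-level 2-D DWT as a feature vector."""
    coeffs = pywt.wavedec2(image, wavelet="haar", level=level)
    return coeffs[0].ravel()                   # approximation coefficients

def train_brain_classifier(images: np.ndarray, labels: np.ndarray):
    """images: (n_samples, H, W) MR slices; labels: 0 = normal, 1 = pathological."""
    X = np.stack([dwt_features(img) for img in images])
    model = make_pipeline(PCA(n_components=20),                       # feature reduction
                          MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000))
    return model.fit(X, labels)
```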
In 2013, Saritha et al. were the first to apply wavelet entropy (WE) to detect pathological brains. Saritha also suggested to use spider-web plots.[57]Later, Zhang et al. proved removing spider-web plots did not influence the performance.[58]Genetic pattern search method was applied to identify abnormal brain from normal controls. Its classification accuracy was reported as 95.188%.[59]Das et al. proposed to use Ripplet transform.[60]Zhang et al. proposed to use particle swarm optimization (PSO).[61]Kalbkhani et al. suggested to use GARCH model.[62]
In 2014, El-Dahshan et al. suggested the use of pulse coupled neural network.[63]
In 2015, Zhou et al. suggested application of naiveBayes classifierto detect pathological brains.[64]
CADs can be used to identify subjects with Alzheimer's and mild cognitive impairment from normal elder controls.
In 2014, Padma et al. used combined wavelet statistical texture features to segment and classify benign and malignant AD tumor slices.[57]Zhang et al. found that a kernel support vector machine decision tree had 80% classification accuracy, with an average computation time of 0.022 s per image classification.[65]
In 2019, Signaevsky et al. first reported a trained Fully Convolutional Network (FCN) for detection and quantification ofneurofibrillary tangles(NFT) in Alzheimer's disease and an array of other tauopathies. The trained FCN achieved high precision and recall in naivedigital whole slide image(WSI) semantic segmentation, correctly identifying NFT objects using a SegNet model trained for 200 epochs. The FCN reached near-practical efficiency with an average processing time of 45 min per WSI pergraphics processing unit (GPU), enabling reliable and reproducible large-scale detection of NFTs. The measured performance on test data of eight naive WSIs across various tauopathies resulted in arecall, precision, andF1 scoreof 0.92, 0.72, and 0.81, respectively.[66]
Eigenbrain is a novel brain feature that can help to detect AD, based onprincipal component analysis (PCA)[67]orindependent component analysisdecomposition.[68]Polynomial kernel SVM has been shown to achieve good accuracy, performing better than linear SVM and RBF-kernel SVM.[69]Other approaches with decent results involve the use of texture analysis,[70]morphological features,[71]or high-order statistical features.[72]
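The eigenbrain idea is closely related to eigenfaces: the principal components of the flattened brain images form a basis onto which each scan is projected. A minimal scikit-learn sketch of that idea, paired with a polynomial-kernel SVM as in the cited work; the component count and kernel degree here are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def eigenbrain_classifier(images: np.ndarray, labels: np.ndarray, n_components: int = 30):
    """images: (n_subjects, H, W) MR slices; labels: 1 = AD, 0 = control."""
    X = images.reshape(len(images), -1)              # flatten each slice
    pca = PCA(n_components=n_components).fit(X)      # rows of pca.components_ are "eigenbrains"
    features = pca.transform(X)                      # project each scan onto the eigenbrains
    clf = SVC(kernel="poly", degree=3).fit(features, labels)  # polynomial-kernel SVM
    return pca, clf
```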
CADx is available for nuclear medicine images. Commercial CADx systems for the diagnosis of bone metastases in whole-body bone scans and coronary artery disease in myocardial perfusion images exist.[73]
With high sensitivity and an acceptable false-lesion detection rate, computer-aided automatic lesion detection systems have been demonstrated to be useful and will probably be able to help nuclear medicine physicians identify possible bone lesions in the future.[74]
Diabetic retinopathy is a disease of the retina that is diagnosed predominantly by fundoscopic images. Diabetic patients in industrialised countries generally undergo regular screening for the condition. Imaging is used to recognize early signs of abnormal retinal blood vessels. Manual analysis of these images can be time-consuming and unreliable.[75][76]CAD has been employed to enhance the accuracy, sensitivity, and specificity of automated detection method. The use of some CAD systems to replace human graders can be safe and cost effective.[76]
Image pre-processing, and feature extraction and classification, are the two main stages of these CAD algorithms.[77]
Image normalizationminimizes the variation across the entire image. Intensity variations between the periphery and the central macular region of the eye have been reported to cause inaccuracy of vessel segmentation.[78]Based on the 2014 review, this technique was the most frequently used and appeared in 11 out of 40 recently (since 2011) published primary research articles.[77]
Histogram equalizationis useful in enhancing contrast within an image.[80]This technique is used to increaselocal contrast. At the end of the processing, areas that were dark in the input image are brightened, greatly enhancing the contrast among the features present in the area. On the other hand, brighter areas in the input image remain bright or are reduced in brightness to equalize with the other areas in the image. Besides vessel segmentation, other features related to diabetic retinopathy can be further separated by using this pre-processing technique. Microaneurysms and hemorrhages are red lesions, whereas exudates are yellow spots. Increasing the contrast between these two groups allows better visualization of lesions on images. The 2014 review found that this technique was used in 10 of the 14 recently (since 2011) published primary research articles.[77]
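A small sketch of this step using scikit-image; both global equalization and a contrast-limited adaptive variant (CLAHE) are shown, though the review text above refers only to histogram equalization in general, and the clip-limit value is an illustrative assumption.

```python
import numpy as np
from skimage import exposure

def equalize_global(fundus_gray: np.ndarray) -> np.ndarray:
    """Global histogram equalization; output intensities lie in [0, 1]."""
    return exposure.equalize_hist(fundus_gray)

def equalize_local(fundus_gray: np.ndarray) -> np.ndarray:
    """Contrast-limited adaptive histogram equalization (CLAHE)."""
    return exposure.equalize_adapthist(fundus_gray, clip_limit=0.03)
```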
Green channel filteringis another technique that is useful in differentiating lesions rather than vessels. This method is important because it provides the maximal contrast between diabetic retinopathy-related lesions.[81]Microaneurysms and hemorrhages are red lesions that appear dark after application of green channel filtering. In contrast, exudates, which appear yellow in the normal image, are transformed into bright white spots after green filtering. This technique is the most used according to the 2014 review, appearing in 27 out of 40 articles published in the previous three years.[77]In addition, green channel filtering can be used to detect the center of the optic disc in conjunction with a double-windowing system.[citation needed]
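In code this step is trivial; a sketch assuming an RGB fundus image stored as an (H, W, 3) array:

```python
import numpy as np

def green_channel(fundus_rgb: np.ndarray) -> np.ndarray:
    """Return the green channel of an RGB fundus image.

    Red lesions (microaneurysms, hemorrhages) appear dark in this channel,
    while exudates appear bright, which is why it is favoured for lesion detection.
    """
    return fundus_rgb[:, :, 1]
```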
Non-uniform illumination correctionis a technique that adjusts for non-uniform illumination in fundoscopic image. Non-uniform illumination can be a potential error in automated detection of diabetic retinopathy because of changes in statistical characteristics of image.[77]These changes can affect latter processing such as feature extraction and are not observable by humans. Correction of non-uniform illumination (f') can be achieved by modifying the pixel intensity using known original pixel intensity (f), and average intensities of local (λ) and desired pixels (μ) (see formula below).[82]Walter-Klein transformation is then applied to achieve the uniform illumination.[82]This technique is the least used pre-processing method in the review from 2014.
f′=f+μ−λ{\displaystyle f'=f+\mu -\lambda }
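A possible implementation of this correction, estimating the local average λ with a uniform (box) filter and taking the global mean as the desired intensity μ; the window size is an assumption, and the subsequent Walter-Klein transformation mentioned above is not shown.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def correct_illumination(f: np.ndarray, window: int = 51, mu=None) -> np.ndarray:
    """Apply f' = f + mu - lambda, where lambda is the local mean intensity."""
    f = f.astype(float)
    lam = uniform_filter(f, size=window)   # local average intensity (lambda)
    if mu is None:
        mu = f.mean()                      # desired average intensity (mu)
    return f + mu - lam
```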
Morphological operationsare the second least used pre-processing method in the 2014 review.[77]The main objective of this method is to provide contrast enhancement, especially of darker regions compared to the background.
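The review does not specify which operations are used; one common choice for this kind of contrast enhancement is a top-hat-based scheme, sketched below with scikit-image. The structuring-element size is an illustrative assumption.

```python
import numpy as np
from skimage.morphology import disk, white_tophat, black_tophat

def morphological_enhance(gray: np.ndarray, radius: int = 15) -> np.ndarray:
    """Boost small bright structures and suppress small dark ones relative to background."""
    gray = gray.astype(float)
    selem = disk(radius)
    return gray + white_tophat(gray, selem) - black_tophat(gray, selem)
```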
After pre-processing of funduscopic image, the image will be further analyzed using different computational methods. However, the current literature agreed that some methods are used more often than others during vessel segmentation analyses. These methods are SVM, multi-scale, vessel-tracking, region growing approach, and model-based approaches.
Support vector machines (SVM) are by far the most frequently used classifier in vessel segmentation, up to 90% of cases.[citation needed]SVM is a supervised learning model that belongs to the broader category of pattern recognition techniques. The algorithm works by creating the largest possible gap between distinct samples in the data, in order to minimize the potential error in classification.[83]In order to successfully segregate blood vessel information from the rest of the eye image, the SVM algorithm creates support vectors that separate blood vessel pixels from the rest of the image in a supervised setting. Detecting blood vessels in new images can then be done in a similar manner using the support vectors. Combining SVM with other pre-processing techniques, such as green channel filtering, greatly improves the accuracy of detection of blood vessel abnormalities.[77]SVM has several properties that make it well suited to this task.[83]
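A bare-bones scikit-learn sketch of this kind of pixel classifier; the per-pixel features named in the comments are illustrative assumptions rather than a prescribed set.

```python
import numpy as np
from sklearn.svm import SVC

def train_vessel_svm(X: np.ndarray, y: np.ndarray) -> SVC:
    """X: (n_pixels, n_features) per-pixel features (e.g. green-channel intensity,
    local contrast, filter responses); y: 1 = vessel pixel, 0 = background."""
    return SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

def segment(clf: SVC, pixel_features: np.ndarray) -> np.ndarray:
    """Predict a vessel/background label for every pixel of a new image."""
    return clf.predict(pixel_features)
```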
Multi-scaleapproach is a multiple resolution approach in vessel segmentation. At low resolution, large-diameter vessels can first be extracted. By increasing resolution, smaller branches from the large vessels can be easily recognized. Therefore, one advantage of using this technique is the increased analytical speed.[75]Additionally, this approach can be used with 3D images. The surface representation is a surface normal to the curvature of the vessels, allowing the detection of abnormalities on vessel surface.[citation needed]
Vessel trackingis the ability of the algorithm to detect "centerline" of vessels. These centerlines are maximal peak of vessel curvature. Centers of vessels can be found using directional information that is provided by Gaussian filter.[citation needed]Similar approaches that utilize the concept of centerline are the skeleton-based and differential geometry-based.[75]
Region growingapproach is a method of detecting neighboring pixels with similarities. A seed point is required for such method to start. Two elements are needed for this technique to work: similarity and spatial proximity. A neighboring pixel to the seed pixel with similar intensity is likely to be the same type and will be added to the growing region. One disadvantage of this technique is that it requires manual selection of seed point, which introduces bias and inconsistency in the algorithm.[75]This technique is also being used in optic disc identification.[citation needed]
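A minimal Python sketch of intensity-based region growing from a manually chosen seed; the tolerance value and the use of 4-connectivity are illustrative choices.

```python
import numpy as np
from collections import deque

def region_grow(image: np.ndarray, seed: tuple, tol: float = 10.0) -> np.ndarray:
    """Grow a region from `seed`, adding 4-connected neighbours whose intensity
    differs from the seed intensity by at most `tol`. Returns a boolean mask."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    seed_val = float(image[seed])
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(image[ny, nx]) - seed_val) <= tol:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask
```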
Model-basedapproaches employ representations to extract vessels from images. Three broad categories of model-based approaches are known: deformable, parametric, and template matching.[75]Deformable methods use objects that are deformed to fit the contours of the objects in the image. Parametric methods use geometric parameters such as tubular, cylindrical, or ellipsoidal representations of blood vessels. A classical snake contour in combination with blood vessel topological information can also be used as a model-based approach.[84]Lastly, template matching fits a template to the image through a stochastic deformation process using a hidden Markov model.
Automation of medical diagnosis labor (for example,quantifying red blood cells) has some historical precedent.[85]Thedeep learningrevolution of the 2010s has already produced AI that are more accurate in many areas of visual diagnosis than radiologists and dermatologists, and this gap is expected to grow.
Some experts, including many doctors, are dismissive of the effects that AI will have on medical specialties.
In contrast, many economists and artificial intelligence experts believe that fields such as radiology will be massively disrupted, with unemployment or downward pressure on the wages of radiologists; hospitals will need fewer radiologists overall, and many of the remaining radiologists will require substantial retraining.Geoffrey Hinton, the "Godfather of deep learning", argues that in light of the likely advances expected in the next five or ten years, hospitals should immediately stop training radiologists, as their time-consuming and expensive training in visual diagnosis will soon be mostly obsolete, leading to a glut of traditional radiologists.[86][87]
An op-ed inJAMAargues that pathologists and radiologists should merge into a single "information specialist" role, and states that "To avoid being replaced by computers, radiologists must allow themselves to be displaced by computers." Information specialists would be trained in "Bayesian logic,statistics,data science", and somegenomicsandbiometrics; manual visual pattern recognition would be greatly de-emphasized compared with current onerous radiology training.[88]
|
https://en.wikipedia.org/wiki/Computer-aided_diagnosis
|
Perceptual learningislearningbetterperceptionskills such as differentiating twomusical tonesfrom one another or categorizations of spatial and temporal patterns relevant to real-world expertise. Examples of this may includereading, seeing relations amongchesspieces, and knowing whether or not anX-rayimage shows a tumor.
Sensory modalitiesmay includevisual, auditory, tactile, olfactory, and taste. Perceptual learning forms important foundations of complexcognitiveprocesses (i.e., language) and interacts with other kinds of learning to produce perceptual expertise.[1][2]Underlying perceptual learning are changes in the neural circuitry. The ability for perceptual learning is retained throughout life.[3]
Laboratory studies reported many examples of dramatic improvements in sensitivities from appropriately structured perceptuallearningtasks. In visualVernier acuitytasks, observers judge whether one line is displaced above or below a second line. Untrained observers are often already very good with this task, but after training, observers'thresholdhas been shown to improve as much as 6 fold.[4][5][6]Similar improvements have been found for visual motion discrimination[7]and orientation sensitivity.[8][9]Invisual searchtasks, observers are asked to find a target object hidden among distractors or in noise. Studies of perceptuallearningwith visual search show that experience leads to great gains in sensitivity and speed. In one study by Karni and Sagi,[3]the time it took for subjects to search for an oblique line among a field of horizontal lines was found to improve dramatically, from about 200ms in one session to about 50ms in a later session. With appropriate practice, visual search can become automatic and very efficient, such that observers do not need more time to search when there are more items present on the search field.[10]Tactile perceptual learning has been demonstrated on spatial acuity tasks such as tactile grating orientation discrimination, and on vibrotactile perceptual tasks such as frequency discrimination; tactile learning on these tasks has been found to transfer from trained to untrained fingers.[11][12][13][14]Practice with Braille reading and daily reliance on the sense of touch may underlie the enhancement in tactile spatial acuity of blind compared to sighted individuals.[15]
Perceptual learning is prevalent and occurs continuously in everyday life. "Experience shapes the way people see and hear."[16]Experience provides the sensory input to our perceptions as well as knowledge about identities. When people are less knowledgeable about other races and cultures, they tend to develop stereotypes. Perceptual learning is a more in-depth relationship between experience and perception: different perceptions of the same sensory input may arise in individuals with different experiences or training. This leads to important issues about the ontology of sensory experience and the relationship between cognition and perception.
Money offers an example. We look at coins every day and recognize them, but when asked to pick out the correct coin from among similar coins with slight differences, we may struggle to find the difference, because we see coins every day without ever deliberately looking for differences. Perceptual learning is learning to perceive differences and similarities among stimuli based on exposure to those stimuli. A study conducted by Gibson in 1955 illustrates how exposure to stimuli can affect how well we learn details of different stimuli.
As our perceptual system adapts to the natural world, we become better at discriminating between different stimuli when they belong to different categories than when they belong to the same category. We also tend to become less sensitive to the differences between two instances of the same category.[17]These effects are described as the result ofcategorical perception. Categorical perception effects do not transfer across domains.
By 10 months of age, infants tend to lose sensitivity to differences between speech sounds that belong to the same phonetic category in their native language.[18]They learn to pay attention to salient differences between native phonetic categories, and ignore the less language-relevant ones. In chess, expert chess players encode larger chunks of positions and relations on the board and require fewer exposures to fully recreate a chess board. This is not due to their possessing superior visual skill, but rather to their advanced extraction of structural patterns specific to chess.[19][20]
Shortly after a woman gives birth, she becomes able to decipher differences in her baby's cries because she grows more sensitive to those differences: she can tell which cry means the baby is hungry, which means it needs to be changed, and so on.
Extensive practice reading in English leads to extraction and rapid processing of the structural regularities of English spelling patterns. Theword superiority effectdemonstrates this—people are often much faster at recognizing words than individual letters.[21][22]
In speech phonemes, observers who listen to a continuum of equally spaced consonant-vowel syllables going from /be/ to /de/ are much quicker to indicate that two syllables are different when they belonged to different phonemic categories than when they were two variants of the same phoneme, even when physical differences were equated between each pair of syllables.[23]
Other examples of perceptual learning in the natural world include the ability to distinguish between relative pitches in music,[24]identify tumors in x-rays,[25]sort day-old chicks by gender,[26]taste the subtle differences between beers or wines,[27]identify faces as belonging to different races,[28]detect the features that distinguish familiar faces,[29]discriminate between two bird species ("great blue crown heron" and "chipping sparrow"),[30]and attend selectively to the hue, saturation and brightness values that comprise a color definition.[31]
The prevalent idiom that “practice makes perfect” captures the essence of the ability to reach impressive perceptual expertise. This has been demonstrated for centuries and through extensive amounts of practice in skills such as wine tasting, fabric evaluation, or musical preference. The first documented report, dating to the mid-19th century, is the earliest example of tactile training aimed at decreasing the minimal distance at which individuals can discriminate whether one or two points on their skin have been touched. It was found that this distance (JND, Just Noticeable Difference) decreases dramatically with practice, and that this improvement is at least partially retained on subsequent days. Moreover, this improvement is at least partially specific to the trained skin area. A particularly dramatic improvement was found for skin positions at which initial discrimination was very crude (e.g. on the back), though training could not bring the JND of initially crude areas down to that of initially accurate ones (e.g. finger tips).[32]William Jamesdevoted a section in his Principles of Psychology (1890/1950) to "the improvement in discrimination by practice".[33]He noted examples and emphasized the importance of perceptual learning for expertise. In 1918,Clark L. Hull, a noted learning theorist, trained human participants to learn to categorize deformed Chinese characters into categories. For each category, he used 6 instances that shared some invariant structural property. People learned to associate a sound as the name of each category, and more importantly, they were able to classify novel characters accurately.[34]This ability to extract invariances from instances and apply them to classify new instances marked this study as a perceptual learning experiment. It was not until 1969, however, thatEleanor Gibsonpublished her seminal bookThe Principles of Perceptual learning and Developmentand defined the modern field of perceptual learning. She established the study of perceptual learning as an inquiry into the behavior and mechanism of perceptual change. By the mid-1970s, however, this area was in a state of dormancy due to a shift in focus to perceptual and cognitive development in infancy. Much of the scientific community tended to underestimate the impact of learning compared with innate mechanisms. Thus, most of this research focused on characterizing basic perceptual capacities of young infants rather than on perceptual learning processes.
Since the mid-1980s, there has been a new wave of interest in perceptual learning due to findings of cortical plasticity at the lowest sensory levels of sensory systems. Our increased understanding of the physiology and anatomy of our cortical systems has been used to connect the behavioral improvement to the underlying cortical areas. This trend began with earlier findings ofHubelandWieselthat perceptual representations at sensory areas of the cortex are substantially modified during a short ("critical") period immediately following birth. Merzenich, Kaas and colleagues showed that thoughneuroplasticityis diminished, it is not eliminated when the critical period ends.[35]Thus, when the external pattern of stimulation is substantially modified, neuronal representations in lower-level (e.g.primary) sensory areas are also modified. Research in this period centered on basic sensory discriminations, where remarkable improvements were found on almost any sensory task through discrimination practice. Following training, subjects were tested with novel conditions and learning transfer was assessed. This work departed from earlier work on perceptual learning, which spanned different tasks and levels.
A question still debated today is to what extent improvements from perceptual learning stem from peripheral modifications as opposed to improvements in higher-level readout stages. Early interpretations, such as that suggested byWilliam James, attributed it to higher-level categorization mechanisms whereby initially blurred differences are gradually associated with distinctively different labels. The work focused on basic sensory discrimination, however, suggests that the effects of perceptual learning are specific to changes in low levels of the sensory nervous system (i.e., primary sensory cortices).[36]More recently, research suggests that perceptual learning processes are multilevel and flexible.[37]This cycles back to the earlier Gibsonian view that low-level learning effects are modulated by high-level factors, and suggests that improvement in information extraction may not involve only low-level sensory coding but also the apprehension of relatively abstract structure and relations in time and space.
Within the past decade, researchers have sought a more unified understanding of perceptual learning and worked to apply these principles to improve perceptual learning in applied domains.
Perceptual learning effects can be organized into two broad categories: discovery effects and fluency effects.[1]Discovery effects involve some change in the bases of response such as in selecting new information relevant for the task, amplifying relevant information or suppressing irrelevant information. Experts extract larger "chunks" of information and discover high-order relations and structures in their domains of expertise that are invisible to novices. Fluency effects involve changes in the ease of extraction. Not only can experts process high-order information, they do so with great speed and lowattentional load. Discovery and fluency effects work together so that as the discovery structures becomes more automatic, attentional resources are conserved for discovery of new relations and for high-level thinking and problem-solving.
William James(Principles of Psychology, 1890) asserted that "My experience is what I agree to attend to. Only those items which I notice shape my mind - without selective interest, experience is an utter chaos.".[33]His view was extreme, yet its gist was largely supported by subsequentbehavioralandphysiologicalstudies. Mere exposure does not seem to suffice for acquiring expertise.
Indeed, a relevant signal in a givenbehavioralcondition may be considered noise in another. For example, when presented with two similar stimuli, one might endeavor to study the differences between their representations in order to improve one's ability to discriminate between them, or one may instead concentrate on the similarities to improve one's ability to identify both as belonging to the same category. A specific difference between them could be considered 'signal' in the first case and 'noise' in the second case. Thus, as we adapt to tasks and environments, we pay increasingly more attention to the perceptual features that are relevant and important for the task at hand, and at the same time, less attention to the irrelevant features. This mechanism is called attentional weighting.[37]
However, recent studies suggest that perceptual learning occurs without selective attention.[38]Studies of such task-irrelevant perceptual learning (TIPL) show that the degree of TIPL is similar to that found through direct training procedures.[39]TIPL for a stimulus depends on the relationship between that stimulus and important task events[40]or upon stimulus reward contingencies.[41]It has thus been suggested that learning (of task irrelevant stimuli) is contingent upon spatially diffusive learning signals.[42]Similar effects, but upon a shorter time scale, have been found for memory processes and in some cases is called attentional boosting.[43]Thus, when an important (alerting) event occurs, learning may also affect concurrent, non-attended and non-salient stimuli.[44]
The time course of perceptuallearningvaries from one participant to another.[11]Perceptual learning occurs not only within the first training session but also between sessions.[45]Fast learning (i.e., within-first-session learning) and slow learning (i.e., between-session learning) involves different changes in the human adultbrain. While the fast learning effects can only be retained for a short term of several days, the slowlearningeffects can be preserved for a long term over several months.[46]
Research on basicsensorydiscriminations often show that perceptuallearningeffects are specific to the trained task orstimulus.[47]Many researchers take this to suggest that perceptual learning may work by modifying thereceptive fieldsof the cells (e.g.,V1and V2 cells) that initially encode the stimulus. For example, individual cells could adapt to become more sensitive to important features, effectively recruiting more cells for a particular purpose, making some cells more specifically tuned for the task at hand.[48]Evidence for receptive field change has been found using single-cell recording techniques inprimatesin both tactile and auditory domains.[49]
However, not all perceptuallearningtasks are specific to the trained stimuli or tasks. Sireteanu and Rettenback[50]discussed discrimination learning effects that generalize across eyes, retinal locations and tasks. Ahissar and Hochstein[51]used visual search to show that learning to detect a single line element hidden in an array of differently-oriented line segments could generalize to positions at which the target was never presented. In human vision, not enough receptive field modification has been found in early visual areas to explain perceptual learning.[52]Training that produces large behavioral changes such as improvements in discrimination does not produce changes in receptive fields. In studies where changes have been found, the changes are too small to explain changes in behavior.[53]
The Reverse Hierarchy Theory (RHT), proposed by Ahissar & Hochstein, aims to link between learning dynamics and specificity and the underlying neuronal sites.[54]RHT proposes that naïve performance is based on responses at high-level cortical areas, where crude, categorical level representations of the environment are represented. Hence initial learning stages involve understanding global aspects of the task. Subsequent practice may yield better perceptual resolution as a consequence of accessing lower-level information via the feedback connections going from high to low levels. Accessing the relevant low-level representations requires a backward search during which informative input populations of neurons in the low level are allocated. Hence, subsequent learning and its specificity reflect the resolution of lower levels. RHT thus proposes that initial performance is limited by the high-level resolution whereas post-training performance is limited by the resolution at low levels. Since high-level representations of different individuals differ due to their prior experience, their initial learning patterns may differ. Several imaging studies are in line with this interpretation, finding that initial performance is correlated with average (BOLD) responses at higher-level areas whereas subsequent performance is more correlated with activity at lower-level areas[citation needed]. RHT proposes that modifications at low levels will occur only when the backward search (from high to low levels of processing) is successful. Such success requires that the backward search will "know" which neurons in the lower level are informative. This "knowledge" is gained by training repeatedly on a limited set of stimuli, such that the same lower-level neuronal populations are informative during several trials. Recent studies found that mixing a broad range of stimuli may also yield effective learning if these stimuli are clearly perceived as different, or are explicitly tagged as different. These findings further support the requirement for top-down guidance in order to obtain effective learning.
In some complex perceptual tasks, allhumansare experts. We are all very sophisticated, but not infallible at scene identification, face identification and speechperception. Traditional explanations attribute this expertise to some holistic, somewhat specialized, mechanisms. Perhaps such quick identifications are achieved by more specific and complex perceptual detectors which gradually "chunk" (i.e., unitize) features that tend to concur, making it easier to pull a whole set of information. Whether any concurrence of features can gradually be chunked with practice or chunking can only be obtained with some pre-disposition (e.g. faces, phonological categories) is an open question. Current findings suggest that such expertise is correlated with a significant increase in the cortical volume involved in these processes. Thus, we all have somewhat specialized face areas, which may reveal an innate property, but we also develop somewhat specialized areas for written words as opposed to single letters or strings of letter-like symbols. Moreover, special experts in a given domain have larger cortical areas involved in that domain. Thus, expert musicians have larger auditory areas.[55]These observations are in line with traditional theories of enrichment proposing that improved performance involves an increase in cortical representation. For this expertise, basic categorical identification may be based on enriched and detailed representations, located to some extent in specialized brain areas.Physiologicalevidence suggests that training for refined discrimination along basic dimensions (e.g. frequency in the auditory modality) also increases the representation of the trained parameters, though in these cases the increase may mainly involve lower-level sensory areas.[56]
In 2005, Petrov, Dosher and Lu pointed out that perceptual learning may be explained in terms of the selection of which analyzers best perform the classification, even in simple discrimination tasks. They explain that the parts of the neural system responsible for particular decisions have specificity, while low-level perceptual units do not.[37]In their model, encodings at the lowest level do not change. Rather, the changes that occur in perceptual learning arise from changes in higher-level, abstract representations of the relevant stimuli. Because specificity can come from differentially selecting information, this "selective reweighting theory" allows for the learning of complex, abstract representations. This corresponds to Gibson's earlier account of perceptual learning as the selection and learning of distinguishing features. Selection may be the unifying principle of perceptual learning at all levels.[57]
Ivan Pavlovdiscoveredconditioning. He found that when a stimulus (e.g. sound) is immediately followed by food several times, the mere presentation of this stimulus would subsequently elicit saliva in a dog's mouth. He further found that when he used a differential protocol, by consistently presenting food after one stimulus while not presenting food after another stimulus, dogs were quickly conditioned to selectively salivate in response to the rewarded one. He then asked whether this protocol could be used to increase perceptual discrimination, by differentially rewarding two very similar stimuli (e.g. tones with similar frequency). However, he found that differential conditioning was not effective.
Pavlov's studies were followed by many training studies which found that an effective way to increase perceptual resolution is to begin with a large difference along the required dimension and gradually proceed to small differences along this dimension. This easy-to-difficult transfer was termed "transfer along a continuum".
These studies showed that the dynamics of learning depend on the training protocol, rather than on the total amount of practice. Moreover, it seems that the strategy implicitly chosen for learning is highly sensitive to the choice of the first few trials during which the system tries to identify the relevant cues.
Several studies asked whetherlearningtakes place during practice sessions or in between, for example, during subsequent sleep. The dynamics oflearningare hard to evaluate since the directly measured parameter is performance, which is affected by bothlearning, inducing improvement, and fatigue, which hampers performance. Current studies suggest that sleep contributes to improved and durablelearningeffects, by further strengthening connections in the absence of continued practice.[45][58][59]Bothslow-waveandREM(rapid eye movement) stages of sleep may contribute to this process, via not-yet-understood mechanisms.
Practice with comparison and contrast of instances that belong to the same or different categories allows for the pick-up of the distinguishing features—those that are important for the classification task—and the filtering out of irrelevant features.[60]
Learningeasy examples first may lead to better transfer and betterlearningof more difficult cases.[61]By recording ERPs from human adults, Ding and Colleagues investigated the influence of task difficulty on the brain mechanisms of visual perceptual learning. Results showed that difficult task training affected earlier visual processing stage and broader visual cortical regions than easy task training.[62]
Active classification effort and attention are often necessary to produce perceptual learning effects.[59]However, in some cases, mere exposure to certain stimulus variations can produce improved discriminations.
In many cases, perceptual learning does not require feedback (whether or not the classification is correct).[56]Other studies suggest that block feedback (feedback only after a block of trials) produces more learning effects than no feedback at all.[63]
Despite the marked perceptual learning demonstrated in different sensory systems and under varied training paradigms, it is clear that perceptual learning must face certain unsurpassable limits imposed by the physical characteristics of the sensory system. For instance, in tactile spatial acuity tasks, experiments suggest that the extent of learning is limited by fingertip surface area, which may constrain the underlying density ofmechanoreceptors.[11]
In many domains of expertise in the real world, perceptual learning interacts with other forms of learning.Declarative knowledgetends to occur with perceptual learning. As we learn to distinguish between an array of wine flavors, we also develop a wide range of vocabularies to describe the intricacy of each flavor.
Similarly, perceptual learning also interacts flexibly withprocedural knowledge. For example, the perceptual expertise of a baseball player at bat can detect early in the ball's flight whether the pitcher threw a curveball. However, the perceptual differentiation of the feel of swinging the bat in various ways may also have been involved in learning the motor commands that produce the required swing.[1]
Perceptuallearningis often said to beimplicit, such thatlearningoccurs without awareness. It is not at all clear whether perceptuallearningis always implicit. Changes in sensitivity that arise are often not conscious and do not involve conscious procedures, but perceptual information can be mapped onto various responses.[1]
In complex perceptual learning tasks (e.g., sorting of newborn chicks by sex, playing chess), experts are often unable to explain what stimulus relationships they are using in classification. However, in less complex perceptuallearningtasks, people can point out what information they're using to make classifications.
Perceptual learning is distinguished from category learning. Perceptual learning generally refers to the enhancement of detectability of a perceptual item or the discriminability between two or more items. In contrast, category learning involves labeling or categorizing an item into a particular group or category. However, in some cases, there is an overlap between perceptual learning and category learning. For instance, to discriminate between two items, a categorical difference between them may sometimes be utilized, in which case category learning, rather than perceptual learning, is thought to occur. Although perceptual learning and category learning are distinct forms of learning, they can interact. For example, category learning that groups multiple orientations into different categories can lead perceptual learning of one orientation to transfer across other orientations within the same category as the trained orientation. This is termed "category-induced perceptual learning".
Multiple different category learning systems may mediate the learning of different category structures. "Two systems that have received support are a frontal-based explicit system that uses logical reasoning, depends on working memory and executive attention, and is mediated primarily by the anterior cingulate, the prefrontal cortex and the associative striatum, including the head of the caudate. The second is a basal ganglia-mediated implicit system that uses procedural learning, requires a dopamine reward signal and is mediated primarily by the sensorimotor striatum."[64]Studies have shown significant involvement of the striatum and less involvement of the medial temporal lobes in category learning. In people who have striatal damage, the need to ignore irrelevant information is more predictive of a rule-based category learning deficit, whereas the complexity of the rule is predictive of an information-integration category learning deficit.
An important potential application of perceptuallearningis the acquisition of skill for practical purposes. Thus it is important to understand whether training for increased resolution in lab conditions induces a general upgrade which transfers to other environmental contexts, or results from mechanisms which are context specific. Improving complex skills is typically gained by training under complex simulation conditions rather than one component at a time. Recent lab-based training protocols with complex action computer games have shown that such practice indeed modifiesvisualskills in a general way, which transfers to new visual contexts. In 2010, Achtman, Green, and Bavelier reviewed the research on video games to train visual skills.[65]They cite a previous review by Green & Bavelier (2006)[66]on using video games to enhance perceptual and cognitive abilities. A variety of skills were upgraded in video game players, including "improved hand-eye coordination,[67]increased processing in the periphery,[68]enhanced mental rotation skills,[69]greater divided attention abilities,[70]and faster reaction times,[71]to name a few". An important characteristic is the functional increase in the size of the effective visual field (within which viewers can identify objects), which is trained in action games and transfers to new settings. Whether learning of simple discriminations, which are trained in separation, transfers to new stimulus contexts (e.g. complex stimulus conditions) is still an open question.
Like experimental procedures, other attempts to apply perceptuallearningmethods to basic and complex skills use training situations in which the learner receives many short classification trials. Tallal, Merzenich and their colleagues have successfully adapted auditory discrimination paradigms to address speech and language difficulties.[72][73]They reported improvements in language learning-impaired children using specially enhanced and extended speech signals. The results applied not only to auditory discrimination performance but speech and language comprehension as well.
In educational domains, recent efforts byPhilip Kellmanand colleagues showed that perceptual learning can be systematically produced and accelerated using specific, computer-based technology. Their approach to perceptual learning methods takes the form of perceptual learning modules (PLMs): sets of short, interactive trials that develop, in a particular domain, learners' pattern recognition, classification abilities, and their abilities to map across multiple representations. As a result of practice with mapping across transformations (e.g., algebra, fractions) and across multiple representations (e.g., graphs, equations, and word problems), students show dramatic gains in their structure recognition in fraction learning and algebra. They also demonstrated that when students practice classifying algebraic transformations using PLMs, the results show remarkable improvements in fluency at algebra problem solving.[57][74][75]These results suggest that perceptual learning can offer a needed complement to conceptual and procedural instruction in the classroom.
Similar results have also been replicated in other domains with PLMs, including anatomic recognition in medical and surgical training,[76]reading instrumental flight displays,[77]and apprehending molecular structures in chemistry.[78]
|
https://en.wikipedia.org/wiki/Perceptual_learning
|
Pattern recognitionis a very active field of research intimately bound tomachine learning. Also known as classification orstatistical classification, pattern recognition aims at building aclassifierthat can determine the class of an input pattern. This procedure, known as training, corresponds to learning an unknown decision function based only on a set of input-output pairs(xi,yi){\displaystyle ({\boldsymbol {x}}_{i},y_{i})}that form the training data (or training set). Nonetheless, in real world applications such ascharacter recognition, a certain amount of information on the problem is usually known beforehand. The incorporation of this prior knowledge into the training is the key element that will allow an increase of performance in many applications.
Prior knowledge[1]refers to all information about the problem available in addition to the training data. However, in this most general form, determining amodelfrom a finite set of samples without prior knowledge is anill-posedproblem, in the sense that a unique model may not exist. Many classifiers incorporate the general smoothness assumption that a test pattern similar to one of the training samples tends to be assigned to the same class.
The importance of prior knowledge in machine learning is suggested by its role in search and optimization. Loosely, theno free lunch theoremstates that all search algorithms have the same average performance over all problems, and thus implies that to gain in performance on a certain application one must use a specialized algorithm that includes some prior knowledge about the problem.
The different types of prior knowledge encountered in pattern recognition can be grouped into two main categories: class-invariance and knowledge of the data.
A very common type of prior knowledge in pattern recognition is the invariance of the class (or the output of the classifier) to atransformationof the input pattern. This type of knowledge is referred to astransformation-invariance. The transformations most commonly used in image recognition include translations, rotations, and scalings of the input pattern.
Incorporating the invariance to a transformationTθ:x↦Tθx{\displaystyle T_{\theta }:{\boldsymbol {x}}\mapsto T_{\theta }{\boldsymbol {x}}}parametrized inθ{\displaystyle \theta }into a classifier of outputf(x){\displaystyle f({\boldsymbol {x}})}for an input patternx{\displaystyle {\boldsymbol {x}}}corresponds to enforcing the equality

f(Tθx)=f(x),∀θ{\displaystyle f(T_{\theta }{\boldsymbol {x}})=f({\boldsymbol {x}}),\quad \forall \theta .}
Local invariance can also be considered for a transformation centered atθ=0{\displaystyle \theta =0}, so thatT0x=x{\displaystyle T_{0}{\boldsymbol {x}}={\boldsymbol {x}}}, by using the constraint

∂∂θ|θ=0f(Tθx)=0.{\displaystyle \left.{\frac {\partial }{\partial \theta }}\right|_{\theta =0}f(T_{\theta }{\boldsymbol {x}})=0.}
The functionf{\displaystyle f}in these equations can be either the decision function of the classifier or its real-valued output.
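One common, though not the only, way to encode such transformation-invariance in practice is to augment the training set with transformed copies ("virtual samples") so that the learned f approximately satisfies f(Tθx) = f(x). A hedged sketch for small image rotations, using SciPy; the angle set and interpolation mode are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def augment_with_rotations(X: np.ndarray, y: np.ndarray, angles=(-10, -5, 5, 10)):
    """X: (n_samples, H, W) images; y: class labels. Returns X and y extended with
    rotated copies so a classifier trained on them is approximately rotation-invariant."""
    images, labels = [X], [y]
    for theta in angles:
        rotated = np.stack([rotate(img, theta, reshape=False, mode="nearest") for img in X])
        images.append(rotated)
        labels.append(y)
    return np.concatenate(images), np.concatenate(labels)
```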
Another approach is to consider class-invariance with respect to a "domain of the input space" instead of a transformation. In this case, the problem becomes findingf{\displaystyle f}so that

f(x)=yP,∀x∈P,{\displaystyle f({\boldsymbol {x}})=y_{\mathcal {P}},\quad \forall {\boldsymbol {x}}\in {\mathcal {P}},}
whereyP{\displaystyle y_{\mathcal {P}}}is the membership class of the regionP{\displaystyle {\mathcal {P}}}of the input space.
A different type of class-invariance found in pattern recognition ispermutation-invariance, i.e. invariance of the class to a permutation of elements in a structured input. A typical application of this type of prior knowledge is a classifier invariant to permutations of rows of the matrix inputs.
Other forms of prior knowledge than class-invariance concern the data more specifically and are thus of particular interest for real-world applications. The three particular cases that most often occur when gathering data are the availability of unlabeled samples, an imbalance of the training set between the classes, and variation in the quality of the data.
Prior knowledge of these can enhance the quality of the recognition if included in the learning. Moreover, not taking into account the poor quality of some data or a large imbalance between the classes can mislead the decision of a classifier.
|
https://en.wikipedia.org/wiki/Prior_knowledge_for_pattern_recognition
|
Template matching[1]is a technique indigital image processingfor finding small parts of an image which match a template image. It can be used forquality controlin manufacturing,[2]navigation of mobile robots,[3]oredge detectionin images.[4]
The main challenges in a template matching task are detection of occlusion, when a sought-after object is partly hidden in an image; detection of non-rigid transformations, when an object is distorted or imaged from different angles; sensitivity to illumination and background changes; background clutter; and scale changes.[5]
The feature-based approach to template matching relies on the extraction ofimage features, such as shapes, textures, and colors, that match the target image or frame. This approach is usually achieved usingneural networksanddeep-learningclassifierssuch as VGG,AlexNet, andResNet.[citation needed]Convolutional neural networks(CNNs), which many modern classifiers are based on, process an image by passing it through different hidden layers, producing avectorat each layer with classification information about the image. These vectors are extracted from the network and used as the features of the image.Feature extractionusingdeep neural networks, like CNNs, has proven extremely effective and has become the standard in state-of-the-art template matching algorithms.[6]
This feature-based approach is often more robust than the template-based approach described below. As such, it has become the state-of-the-art method for template matching, as it can match templates with non-rigid and out-of-planetransformations, as well as high background clutter and illumination changes.[7][8][9]
For templates without strongfeatures, or for when the bulk of a template image constitutes the matching image as a whole, a template-based approach may be effective. Since template-based matching may require sampling of a large number of data points, it is often desirable to reduce the number of sampling points by reducing the resolution of search and template images by the same factor before performing the operation on the resultant downsized images. Thispre-processingmethod creates a multi-scale, orpyramid, representation of images, providing a reduced search window of data points within a search image so that the template does not have to be compared with every viable data point. Pyramid representations are a method ofdimensionality reduction, a common aim of machine learning on data sets that suffer thecurse of dimensionality.
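A short sketch of building such a pyramid with scikit-image; matching would start at the coarsest level and use the best position found there to restrict the search window at the next finer level. The number of levels is an illustrative choice.

```python
import numpy as np
from skimage.transform import pyramid_gaussian

def build_pyramids(search_img: np.ndarray, template: np.ndarray, levels: int = 3):
    """Return (search, template) image pairs from coarsest to finest resolution."""
    search_pyr = list(pyramid_gaussian(search_img, max_layer=levels, downscale=2))
    templ_pyr = list(pyramid_gaussian(template, max_layer=levels, downscale=2))
    return list(zip(reversed(search_pyr), reversed(templ_pyr)))  # coarsest first
```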
In instances where the template may not provide a direct match, it may be useful to implementeigenspacesto create templates that detail the matching object under a number of different conditions, such as varying perspectives, illuminations,color contrasts, or objectposes.[10]For example, if an algorithm is looking for a face, its template eigenspaces may consist of images (i.e., templates) of faces in different positions to the camera, in different lighting conditions, or with different expressions (i.e., poses).
It is also possible for a matching image to be obscured or occluded by an object. In these cases, it is unreasonable to provide a multitude of templates to cover each possible occlusion. For example, the search object may be a playing card, and in some of the search images, the card is obscured by the fingers of someone holding the card, or by another card on top of it, or by some other object in front of the camera. In cases where the object is malleable or poseable, motion becomes an additional problem, and problems involving both motion and occlusion become ambiguous.[11]In these cases, one possible solution is to divide the template image into multiple sub-images and perform matching on each subdivision.
Template matching is a central tool incomputational anatomy(CA). In this field, adeformable template modelis used to model the space of human anatomies and theirorbitsunder thegroupofdiffeomorphisms, functions which smoothly deform an object.[12]Template matching arises as an approach to finding the unknown diffeomorphism that acts on a template image to match the target image.
Template matching algorithms in CA have come to be called large deformation diffeomorphic metric mappings (LDDMMs). Currently, there are LDDMM template matching algorithms for matching anatomical landmark points, curves, surfaces, and volumes.
A basic method of template matching, sometimes called "Linear Spatial Filtering", uses an image patch (i.e., the "template image" or "filter mask") tailored to a specific feature of the search image that is to be detected.[citation needed] This technique can be easily performed on grey images or edge images, where the additional variable of color is either not present or not relevant. Cross correlation techniques compare the similarities of the search and template images. Their outputs should be highest at places where the image structure matches the template structure, i.e., where large search image values get multiplied by large template image values.
This method is normally implemented by first picking out a part of a search image to use as a template. LetS(x,y){\displaystyle S(x,y)}represent the value of a search image pixel, where(x,y){\displaystyle (x,y)}represents the coordinates of thepixelin the search image. For simplicity, assume pixel values are scalar, as in agreyscale image. Similarly, letT(xt,yt){\textstyle T(x_{t},y_{t})}represent the value of a template pixel, where(xt,yt){\textstyle (x_{t},y_{t})}represents the coordinates of the pixel in the template image. To apply the filter, simply move the center (or origin) of the template image over each point in the search image and calculate the sum of products, similar to adot product, between the pixel values in the search and template images over the whole area spanned by the template. More formally, if(0,0){\displaystyle (0,0)}is the center (or origin) of the template image, then the cross correlationT⋆S{\displaystyle T\star S}at each point(x,y){\displaystyle (x,y)}in the search image can be computed as:(T⋆S)(x,y)=∑(xt,yt)∈TT(xt,yt)⋅S(xt+x,yt+y){\displaystyle (T\star S)(x,y)=\sum _{(x_{t},y_{t})\in T}T(x_{t},y_{t})\cdot S(x_{t}+x,y_{t}+y)}For convenience,T{\displaystyle T}denotes both the pixel values of the template image as well as itsdomain, the bounds of the template. Note that all possible positions of the template with respect to the search image are considered. Since cross correlation values are greatest when the values of the search and template pixels align, the best matching position(xm,ym){\displaystyle (x_{m},y_{m})}corresponds to the maximum value ofT⋆S{\displaystyle T\star S}overS{\displaystyle S}.
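As an illustration of the cross correlation just defined, the following is a minimal NumPy sketch rather than an optimized implementation; the function name cross_correlation and the top-left indexing of the score map are choices of this sketch, and in practice a normalized variant is usually preferred so that bright image regions do not dominate the score.

import numpy as np

def cross_correlation(search, template):
    # search and template are 2-D arrays of greyscale pixel values.
    # Returns a score map covering every position at which the template
    # fits entirely inside the search image (top-left indexing).
    sh, sw = search.shape
    th, tw = template.shape
    scores = np.empty((sh - th + 1, sw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            window = search[y:y + th, x:x + tw]
            scores[y, x] = np.sum(window * template)   # sum of products
    return scores

The best matching position is then the index of the maximum score, e.g. np.unravel_index(np.argmax(scores), scores.shape).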
Another way to handle translation problems on images using template matching is to compare the intensities of the pixels, using thesum of absolute differences(SAD) measure. To formulate this, letIS(xs,ys){\displaystyle I_{S}(x_{s},y_{s})}andIT(xt,yt){\displaystyle I_{T}(x_{t},y_{t})}denote thelight intensityof pixels in the search and template images with coordinates(xs,ys){\displaystyle (x_{s},y_{s})}and(xt,yt){\displaystyle (x_{t},y_{t})}, respectively. Then by moving the center (or origin) of the template to a point(x,y){\displaystyle (x,y)}in the search image, as before, the sum ofabsolute differencesbetween the template and search pixel intensities at that point is:SAD(x,y)=∑(xt,yt)∈T|IT(xt,yt)−IS(xt+x,yt+y)|{\displaystyle SAD(x,y)=\sum _{(x_{t},y_{t})\in T}\left\vert I_{T}(x_{t},y_{t})-I_{S}(x_{t}+x,y_{t}+y)\right\vert }With this measure, thelowestSAD gives the best position for the template, rather than the greatest as with cross correlation. SAD tends to be relatively simple to implement and understand, but it also tends to be relatively slow to execute. A simpleC++implementation of SAD template matching is given below.
In this simple implementation, it is assumed that the above described method is applied to grey images: this is why Grey is used as the pixel intensity. The final position in this implementation gives the top-left location of where the template image best matches the search image.
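A minimal Python/NumPy sketch of the SAD search described above, under the same assumptions (greyscale input, top-left result), might look as follows; the function and variable names are illustrative and are not taken from the original C++ listing.

import numpy as np

def sad_match(search, template):
    # Exhaustive sum-of-absolute-differences search over a greyscale image.
    sh, sw = search.shape
    th, tw = template.shape
    best_score, best_pos = np.inf, (0, 0)
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            # Cast to float to avoid unsigned-integer wrap-around when subtracting.
            window = search[y:y + th, x:x + tw].astype(np.float64)
            score = np.sum(np.abs(window - template.astype(np.float64)))
            if score < best_score:            # the lowest SAD wins
                best_score, best_pos = score, (y, x)
    return best_pos, best_score               # top-left corner of the best match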
One way to perform template matching on color images is to decompose thepixelsinto their color components and measure the quality of match between the color template and search image using the sum of the SAD computed for each color separately.
In the past, this type of spatial filtering was normally only used in dedicated hardware solutions because of the computational complexity of the operation.[13] However, this complexity can be lessened by filtering in the frequency domain of the image, referred to as 'frequency domain filtering'; this is done through the use of the convolution theorem.
Another way of speeding up the matching process is through the use of an image pyramid. This is a series of images, at different scales, which are formed by repeatedly filtering and subsampling the original image in order to generate a sequence of reduced resolution images.[14]These lower resolution images can then be searched for the template (with a similarly reduced resolution), in order to yield possible start positions for searching at the larger scales. The larger images can then be searched in a small window around the start position to find the best template location.
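A rough sketch of this coarse-to-fine strategy is given below; it reuses the sad_match function from the earlier sketch and assumes a single 2× reduction by block averaging, whereas real systems usually low-pass filter before subsampling and use several pyramid levels.

import numpy as np

def downsample(img, factor=2):
    # Crude block-average reduction; odd trailing rows/columns are dropped.
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def pyramid_match(search, template, factor=2, margin=4):
    # Stage 1: coarse SAD search on reduced copies of both images.
    (cy, cx), _ = sad_match(downsample(search, factor), downsample(template, factor))
    # Stage 2: refine with a full-resolution search in a small window
    # around the scaled-up coarse position.
    y0, x0 = cy * factor, cx * factor
    th, tw = template.shape
    y1, x1 = max(0, y0 - margin), max(0, x0 - margin)
    y2 = min(search.shape[0], y0 + th + margin)
    x2 = min(search.shape[1], x0 + tw + margin)
    (ry, rx), score = sad_match(search[y1:y2, x1:x2], template)
    return (y1 + ry, x1 + rx), score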
Other methods can handle problems such as translation, scale, image rotation and even all affine transformations.[15][16][17]
Improvements can be made to the matching method by using more than one template (eigenspaces); these other templates can have different scales and rotations.
It is also possible to improve the accuracy of the matching method by hybridizing the feature-based and template-based approaches.[18]Naturally, this requires that the search and template images have features that are apparent enough to support feature matching.
Other, similar methods include stereo matching, image registration, and the scale-invariant feature transform.
Template matching has various applications and is used in such fields as face recognition (seefacial recognition system) and medical image processing. Systems have been developed and used in the past to count the number of faces that walk across part of a bridge within a certain amount of time. Other systems include automated calcified nodule detection within digital chest X-rays.[19]Recently, this method was implemented in geostatistical simulation which could provide a fast algorithm.[20]
|
https://en.wikipedia.org/wiki/Template_matching
|
Contextual image classification, a topic ofpattern recognitionincomputer vision, is an approach ofclassificationbased on contextual information in images. "Contextual" means this approach is focusing on the relationship of the nearby pixels, which is also called neighbourhood. The goal of this approach is to classify the images by using the contextual information.
Similar to language processing, a single word may have multiple meanings unless the context is provided, and the patterns within the sentences are the only informative segments we care about. For images, the principle is the same: find the patterns and associate proper meanings with them.
As the image illustrated below shows, if only a small portion of the image is visible, it is very difficult to tell what the image is about.
Even with another portion of the image, it is still difficult to classify it.
However, if we increase the context of the image, it becomes much easier to recognize.
As the full image below shows, almost everyone can classify it easily.
During the procedure of segmentation, methods that do not use contextual information are sensitive to noise and variations, so the result of segmentation will contain many misclassified regions, and often these regions are small (e.g., one pixel).
Compared to other techniques, this approach is robust to noise and substantial variations because it takes the continuity of the segments into account.
Several methods of this approach will be described below.
This approach is very effective against small regions caused by noise, which are usually formed by a few pixels or even a single pixel. The most probable label is assigned to these regions.
However, this method has a drawback: small regions can also be correct regions rather than noise, and in that case the method actually makes the classification worse.
This approach is widely used inremote sensingapplications.
This is a two-stage classification process:
Instead of using single pixels, neighbouring pixels can be merged into homogeneous regions that benefit from contextual information, and these regions are then provided to the classifier.
The original spectral data can be enriched by adding the contextual information carried by neighbouring pixels, or even replaced by it on some occasions. This kind of pre-processing method is widely used in textured image recognition. Typical approaches include mean values, variances, texture descriptions, etc.
The classifier uses the grey level and the pixel neighbourhood (contextual information) to assign labels to pixels. In such cases the information is a combination of spectral and spatial information.
Contextual classification of image data is based on the Bayes minimum error classifier (also known as anaive Bayes classifier).
Present the pixel:
The neighbourhood:
Size of the neighbourhood: there is no limitation on the size, but it is usually kept relatively small for each pixel x0{\displaystyle x_{0}}.
A reasonable neighbourhood size would be 3×3{\displaystyle 3\times 3} with 4-connectivity or 8-connectivity (x0{\displaystyle x_{0}} is marked in red and placed in the centre).
The calculation:
Apply minimum error classification to a pixel x0{\displaystyle x_{0}}: if the probability of a class ωr{\displaystyle \omega _{r}} given the pixel x0{\displaystyle x_{0}} is the highest among all classes, then assign ωr{\displaystyle \omega _{r}} as its class.
The contextual classification rule is described below; it uses the feature vector x1{\displaystyle x_{1}} rather than x0{\displaystyle x_{0}}.
Use the Bayes formula to calculate the posterior probability P(ωs∣ξ){\displaystyle P(\omega _{s}\mid \xi )}.
The number of vectors is the same as the number of pixels in the image, since the classifier uses a vector corresponding to each pixel xi{\displaystyle x_{i}}, and each vector is generated from the pixel's neighbourhood.
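To make this concrete, here is a minimal sketch, not the article's exact formulation, that builds a 3×3-neighbourhood feature vector for every pixel and applies a Gaussian naive Bayes decision rule. The per-class means, variances and priors are assumed to have been estimated beforehand from labelled training pixels, and all names are illustrative.

import numpy as np

def neighbourhood_vectors(img):
    # Flatten each pixel's 3x3 neighbourhood (8-connectivity) into a
    # 9-element feature vector; the border is handled by edge replication.
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    vecs = np.empty((h, w, 9))
    for dy in range(3):
        for dx in range(3):
            vecs[:, :, dy * 3 + dx] = padded[dy:dy + h, dx:dx + w]
    return vecs.reshape(h * w, 9)            # one vector per pixel

def classify(img, class_means, class_vars, priors):
    # Gaussian naive Bayes over the neighbourhood features: assign each
    # pixel the class with the highest posterior probability.
    x = neighbourhood_vectors(img)                       # shape (N, 9)
    log_post = []
    for mu, var, prior in zip(class_means, class_vars, priors):
        ll = -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var), axis=1)
        log_post.append(ll + np.log(prior))
    return np.argmax(np.stack(log_post, axis=1), axis=1).reshape(img.shape)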
The basic steps of contextual image classification:
Template matching is a "brute force" implementation of this approach.[1] The concept is to first create a set of templates and then look for small parts of the image that match a template.
This method is computationally expensive and inefficient. It keeps the entire template list during the whole process, and the number of combinations is extremely high. For an m×n{\displaystyle m\times n} pixel image, there could be a maximum of 2m×n{\displaystyle 2^{m\times n}} combinations, which leads to high computational cost. This method is top-down and is often called table look-up or dictionary look-up.
The Markov chain[2] can also be applied in pattern recognition. The pixels in an image can be regarded as a set of random variables, and a lower-order Markov chain is then used to find the relationships among the pixels. The image is treated as a virtual line, and the method uses conditional probability.
The Hilbert curve runs in a unique pattern through the whole image: it traverses every pixel without visiting any of them twice and forms a continuous curve. It is fast and efficient.
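A sketch of how such a traversal can be generated is shown below. It uses the well-known iterative index-to-coordinate construction for a Hilbert curve on an n×n grid (n a power of two); the resulting 1-D pixel sequence could then feed the lower-order Markov chain described above. The function names are illustrative.

def hilbert_d2xy(n, d):
    # Map a position d along the Hilbert curve to (x, y) on an n x n grid,
    # where n is a power of two (standard iterative construction).
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                  # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(img):
    # Read an n x n image into a 1-D pixel sequence along the Hilbert curve.
    n = len(img)
    return [img[y][x] for x, y in (hilbert_d2xy(n, d) for d in range(n * n))]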
The lower-order Markov chain and the Hilbert space-filling curve mentioned above treat the image as a line structure. Markov meshes, however, take the two-dimensional information into account.
The dependency tree[3] is a method that uses tree-structured dependencies to approximate probability distributions.
|
https://en.wikipedia.org/wiki/Contextual_image_classification
|
Semi-structured data[1]is a form ofstructured datathat does not obey the tabular structure of data models associated withrelational databasesor other forms ofdata tables, but nonetheless containstagsor other markers to separate semantic elements and enforce hierarchies of records and fields within the data. Therefore, it is also known asself-describingstructure.
In semi-structured data, the entities belonging to the same class may have differentattributeseven though they are grouped together, and the attributes' order is not important.
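As a purely illustrative example (the records and field names are invented), the following Python snippet shows two entities of the same class stored side by side with different attribute sets and different attribute order, which a relational table could not hold without a fixed schema.

import json

# Two "person" records in the same collection: they share a class but not a
# fixed attribute set, and the order of attributes carries no meaning.
people = [
    {"name": "Alice", "email": "alice@example.org", "phones": ["555-0100", "555-0101"]},
    {"age": 42, "name": "Bob"},      # different attributes, different order
]
print(json.dumps(people, indent=2))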
Semi-structured data have become increasingly common since the advent of the Internet, where full-text documents and databases are no longer the only forms of data and different applications need a medium for exchanging information. In object-oriented databases, one often finds semi-structured data.
XML,[2]other markup languages,email, andEDIare all forms of semi-structured data.OEM(Object Exchange Model)[3]was created prior to XML as a means of self-describing a data structure. XML has been popularized by web services that are developed utilizingSOAPprinciples.
Some types of data described here as "semi-structured", especially XML, suffer from the impression that they are incapable of structural rigor at the same functional level as Relational Tables and Rows. Indeed, the view of XML as inherently semi-structured (previously, it was referred to as "unstructured") has handicapped its use for a widening range of data-centric applications. Even documents, normally thought of as the epitome of semi-structure, can be designed with virtually the same rigor asdatabase schema, enforced by theXML schemaand processed by both commercial and custom software programs without reducing their usability by human readers.
In view of this fact, XML might be referred to as having "flexible structure" capable of human-centric flow and hierarchy as well as highly rigorous element structure and data typing.
The concept of XML as "human-readable", however, can only be taken so far. Some implementations/dialects of XML, such as the XML representation of the contents of a Microsoft Word document, as implemented in Office 2007 and later versions, utilize dozens or even hundreds of different kinds of tags that reflect a particular problem domain - in Word's case, formatting at the character and paragraph and document level, definitions of styles, inclusion of citations, etc. - which are nested within each other in complex ways. Understanding even a portion of such an XML document by reading it, let alone catching errors in its structure, is impossible without a very deep prior understanding of the specific XML implementation, along with assistance by software that understands the XML schema that has been employed. Such text is not "human-understandable" any more than a book written in Swahili (which uses the Latin alphabet) would be to an American or Western European who does not know a word of that language: the tags are symbols that are meaningless to a person unfamiliar with the domain.
JSON, or JavaScript Object Notation, is an open standard format that uses human-readable text to transmit data objects. JSON has been popularized by web services developed utilizing REST principles.
Databases such asMongoDBandCouchbasestore data natively in JSON format, leveraging the pros of semi-structured data architecture.
Thesemi-structured modelis adatabase modelwhere there is no separation between thedataand theschema, and the amount of structure used depends on the purpose.
The advantages of this model are the following:
The primary trade-off being made in using a semi-structureddatabase modelis that queries cannot be made as efficiently as in a more constrained structure, such as in therelational model. Typically the records in a semi-structured database are stored with unique IDs that are referenced with pointers to their location on disk. This makes navigational or path-based queries quite efficient, but for doing searches over many records (as is typical inSQL), it is not as efficient because it has to seek around the disk following pointers.
TheObject Exchange Model(OEM) is one standard to express semi-structured data, another way isXML.
|
https://en.wikipedia.org/wiki/Semi-structured_model
|
NoSQL(originally meaning "NotonlySQL" or "non-relational")[1]refers to a type ofdatabasedesign that stores and retrieves data differently from the traditional table-based structure ofrelational databases. Unlike relational databases, which organize data into rows and columns like a spreadsheet, NoSQL databases use a single data structure—such askey–value pairs,wide columns,graphs, ordocuments—to hold information. Since this non-relational design does not require a fixedschema, it scales easily to manage large, often unstructured datasets.[2]NoSQL systems are sometimes called"Not only SQL"because they can supportSQL-like query languages or work alongside SQL databases inpolyglot-persistentsetups, where multiple database types are combined.[3][4]Non-relational databases date back to the late 1960s, but the term "NoSQL" emerged in the early 2000s, spurred by the needs ofWeb 2.0companies like social media platforms.[5][6]
NoSQL databases are popular inbig dataandreal-time webapplications due to their simple design, ability to scale acrossclusters of machines(calledhorizontal scaling), and precise control over dataavailability.[7][8]These structures can speed up certain tasks and are often considered more adaptable than fixed database tables.[9]However, many NoSQL systems prioritize speed and availability over strict consistency (per theCAP theorem), usingeventual consistency—where updates reach all nodes eventually, typically within milliseconds, but may cause brief delays in accessing the latest data, known asstale reads.[10]While most lack fullACIDtransaction support, some, likeMongoDB, include it as a key feature.[11]
Barriers to wider NoSQL adoption include their use of low-levelquery languagesinstead of SQL, inability to perform ad hocjoinsacross tables, lack of standardized interfaces, and significant investments already made in relational databases.[12]Some NoSQL systems risklosing datathrough lost writes or other forms, though features likewrite-ahead logging—a method to record changes before they’re applied—can help prevent this.[13][14]Fordistributed transaction processingacross multiple databases, keeping data consistent is a challenge for both NoSQL and relational systems, as relational databases cannot enforce rules linking separate databases, and few systems support bothACIDtransactions andX/Open XAstandards for managing distributed updates.[15][16]Limitations within the interface environment are overcome using semantic virtualization protocols, such that NoSQL services are accessible to mostoperating systems.[17]
The termNoSQLwas used by Carlo Strozzi in 1998 to name his lightweightStrozzi NoSQL open-source relational databasethat did not expose the standardStructured Query Language(SQL) interface, but was still relational.[18]His NoSQLRDBMSis distinct from the around-2009 general concept of NoSQL databases. Strozzi suggests that, because the current NoSQL movement "departs from the relational model altogether, it should therefore have been called more appropriately 'NoREL'",[19]referring to "not relational".
Johan Oskarsson, then a developer atLast.fm, reintroduced the termNoSQLin early 2009 when he organized an event to discuss "open-sourcedistributed, non-relational databases".[20]The name attempted to label the emergence of an increasing number of non-relational, distributed data stores, including open source clones of Google'sBigtable/MapReduceand Amazon'sDynamoDB.
There are various ways to classify NoSQL databases, with different categories and subcategories, some of which overlap. What follows is a non-exhaustive classification by data model, with examples:[21]
Key–value (KV) stores use theassociative array(also called a map or dictionary) as their fundamental data model. In this model, data is represented as a collection of key–value pairs, such that each possible key appears at most once in the collection.[24][25]
The key–value model is one of the simplest non-trivial data models, and richer data models are often implemented as an extension of it. The key–value model can be extended to a discretely ordered model that maintains keys inlexicographic order. This extension is computationally powerful, in that it can efficiently retrieve selective keyranges.[26]
Key–value stores can useconsistency modelsranging fromeventual consistencytoserializability. Some databases support ordering of keys. There are various hardware implementations, and some users store data in memory (RAM), while others onsolid-state drives(SSD) orrotating disks(aka hard disk drive (HDD)).
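The ordered variant can be illustrated with a minimal in-memory sketch; this is not the API of any particular product, just a demonstration of keeping keys in lexicographic order so that selective key ranges can be retrieved efficiently.

import bisect

class OrderedKVStore:
    # Keys are kept sorted so that selective key ranges can be read cheaply.
    def __init__(self):
        self._keys = []
        self._data = {}

    def put(self, key, value):
        if key not in self._data:
            bisect.insort(self._keys, key)   # keep keys in lexicographic order
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def range(self, start, end):
        # All (key, value) pairs with start <= key < end.
        lo = bisect.bisect_left(self._keys, start)
        hi = bisect.bisect_left(self._keys, end)
        return [(k, self._data[k]) for k in self._keys[lo:hi]]

store = OrderedKVStore()
store.put("user:1001", {"name": "Ada"})
store.put("user:1002", {"name": "Lin"})
print(store.range("user:", "user;"))         # every key with the "user:" prefix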
The central concept of a document store is that of a "document". While the details of this definition differ among document-oriented databases, they all assume that documents encapsulate and encode data (or information) in some standard formats or encodings. Encodings in use includeXML,YAML, andJSONandbinaryforms likeBSON. Documents are addressed in the database via a uniquekeythat represents that document. Another defining characteristic of a document-oriented database is anAPIor query language to retrieve documents based on their contents.
Different implementations offer different ways of organizing and/or grouping documents:
Compared to relational databases, collections could be considered analogous to tables and documents analogous to records. But they are different – every record in a table has the same sequence of fields, while documents in a collection may have fields that are completely different.
Graph databases are designed for data whose relations are well represented as agraphconsisting of elements connected by a finite number of relations. Examples of data includesocial relations, public transport links, road maps, network topologies, etc.
The performance of NoSQL databases is usually evaluated using the metric ofthroughput, which is measured as operations per second. Performance evaluation must pay attention to the rightbenchmarkssuch as production configurations, parameters of the databases, anticipated data volume, and concurrent userworkloads.
Ben Scofield rated different categories of NoSQL databases as follows:[28]
Performance and scalability comparisons are most commonly done using theYCSBbenchmark.
Since most NoSQL databases lack the ability to perform joins in queries, the database schema generally needs to be designed differently. There are three main techniques for handling relational data in a NoSQL database. (See table join and ACID support for NoSQL databases that support joins.)
Instead of retrieving all the data with one query, it is common to do several queries to get the desired data. NoSQL queries are often faster than traditional SQL queries, so the cost of additional queries may be acceptable. If an excessive number of queries would be necessary, one of the other two approaches is more appropriate.
Instead of only storing foreign keys, it is common to store actual foreign values along with the model's data. For example, each blog comment might include the username in addition to a user id, thus providing easy access to the username without requiring another lookup. When a username changes, however, this will now need to be changed in many places in the database. Thus this approach works better when reads are much more common than writes.[29]
With document databases like MongoDB it is common to put more data in a smaller number of collections. For example, in a blogging application, one might choose to store comments within the blog post document, so that with a single retrieval one gets all the comments. Thus in this approach a single document contains all the data needed for a specific task.
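A small illustrative comparison of the two layouts (all field names invented): the first form stores only a foreign key and needs a second lookup for the author's name, while the second embeds the comments in the post document, each comment also carrying a denormalized author_name, so a single read serves the whole page.

# Normalized layout: the comment stores only a foreign key, so showing the
# author's name requires a second lookup in the users collection.
users = {"u42": {"name": "Ada"}}
comment_normalized = {"post_id": "p7", "author_id": "u42", "text": "Nice post!"}

# Denormalized, nested layout: the post embeds its comments, and each comment
# also carries the author's name, so one retrieval serves the whole page.
post_document = {
    "_id": "p7",
    "title": "Why denormalize?",
    "comments": [
        {"author_id": "u42", "author_name": "Ada", "text": "Nice post!"},
    ],
}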
A database is marked as supportingACIDproperties (atomicity, consistency, isolation, durability) orjoinoperations if the documentation for the database makes that claim. However, this doesn't necessarily mean that the capability is fully supported in a manner similar to most SQL databases.
Different NoSQL databases, such asDynamoDB,MongoDB,Cassandra,Couchbase, HBase, and Redis, exhibit varying behaviors when querying non-indexed fields. Many perform full-table or collection scans for such queries, applying filtering operations after retrieving data. However, modern NoSQL databases often incorporate advanced features to optimize query performance. For example, MongoDB supports compound indexes and query-optimization strategies, Cassandra offers secondary indexes and materialized views, and Redis employs custom indexing mechanisms tailored to specific use cases. Systems like Elasticsearch use inverted indexes for efficient text-based searches, but they can still require full scans for non-indexed fields. This behavior reflects the design focus of many NoSQL systems on scalability and efficient key-based operations rather than optimized querying for arbitrary fields. Consequently, while these databases excel at basicCRUDoperations and key-based lookups, their suitability for complex queries involving joins or non-indexed filtering varies depending on the database type—document, key–value, wide-column, or graph—and the specific implementation.[33]
|
https://en.wikipedia.org/wiki/NoSQL
|
Inlinguistics,anaphora(/əˈnæfərə/) is the use of an expression whose interpretation depends upon another expression in context (itsantecedent). In a narrower sense, anaphora is the use of an expression that depends specifically upon an antecedent expression and thus is contrasted withcataphora, which is the use of an expression that depends upon a postcedent expression. The anaphoric (referring) term is called ananaphor. For example, in the sentenceSally arrived, but nobody saw her, thepronounheris an anaphor, referring back to the antecedentSally. In the sentenceBefore her arrival, nobody saw Sally, the pronounherrefers forward to the postcedentSally, soheris now acataphor(and an anaphor in the broader, but not the narrower, sense). Usually, an anaphoric expression is apro-formor some other kind ofdeictic(contextually dependent) expression.[1]Both anaphora and cataphora are species ofendophora, referring to something mentioned elsewhere in a dialog or text.
Anaphora is an important concept for different reasons and on different levels: first, anaphora indicates howdiscourseis constructed and maintained; second, anaphora binds differentsyntacticalelements together at the level of the sentence; third, anaphora presents a challenge tonatural language processingincomputational linguistics, since the identification of the reference can be difficult; and fourth, anaphora partially reveals how language is understood and processed, which is relevant to fields of linguistics interested incognitive psychology.[2]
The termanaphorais actually used in two ways.
In a broad sense, it denotes the act of referring. Any time a given expression (e.g. a pro-form) refers to another contextual entity, anaphora is present.
In a second, narrower sense, the termanaphoradenotes the act of referring backwards in a dialog or text, such as referring to the left when an anaphor points to its left toward its antecedent in languages that are written from left to right. Etymologically,anaphoraderives fromAncient Greekἀναφορά (anaphorá, "a carrying back"), from ἀνά (aná, "up") + φέρω (phérō, "I carry"). In this narrow sense, anaphora stands in contrast tocataphora, which sees the act of referring forward in a dialog or text, or pointing to the right in languages that are written from left to right: Ancient Greek καταφορά (kataphorá, "a downward motion"), from κατά (katá, "downwards") + φέρω (phérō, "I carry"). A pro-form is a cataphor when it points to its right toward its postcedent. Both effects together are called either anaphora (broad sense) or less ambiguously, along withself-referencethey comprise the category of endophora.[3]
Examples of anaphora (in the narrow sense) and cataphora are given next. Anaphors and cataphors appear in bold, and their antecedents and postcedents are underlined:
A further distinction is drawn between endophoric andexophoric reference. Exophoric reference occurs when an expression, an exophor, refers to something that is not directly present in the linguistic context, but is rather present in the situational context. Deictic pro-forms are stereotypical exophors, e.g.
Exophors cannot be anaphors as they do not substantially refer within the dialog or text, though there is a question of what portions of a conversation or document are accessed by a listener or reader with regard to whether all references to which a term points within that language stream are noticed (i.e., if you hear only a fragment of what someone says using the pronounher, you might never discover whosheis, though if you heard the rest of what the speaker was saying on the same occasion, you might discover whosheis, either by anaphoric revelation or by exophoric implication because you realize whoshemust be according to what else is said abouthereven ifheridentity is not explicitly mentioned, as in the case ofhomophoric reference).
A listener might, for example, realize through listening to other clauses and sentences that she is a Queen because of some of her attributes or actions mentioned. But which queen? Homophoric reference occurs when a generic phrase obtains a specific meaning through knowledge of its context. For example, the referent of the phrase the Queen (using an emphatic definite article, not the less specific a Queen, but also not the more specific Queen Elizabeth) must be determined by the context of the utterance, which identifies the queen in question. Until it is revealed by additional contextual words, gestures, images or other media, a listener would not even know what monarchy or historical period is being discussed; even after hearing that her name is Elizabeth, and even if a British Queen Elizabeth is indicated, the listener does not know whether this queen means Queen Elizabeth I or Queen Elizabeth II, and must await further clues in additional communications. Similarly, in discussing 'the Mayor' (of a city), the Mayor's identity must be understood broadly through the context of the speech: is a particular person meant, a current, future or past office-holder, the office in a strict legal sense, or the office in a general sense that includes activities a mayor might conduct, or even be expected to conduct, though they may not be explicitly defined for the office?
The termanaphoris used in a special way in thegenerative grammartradition. Here it denotes what would normally be called areflexiveorreciprocalpronoun, such ashimselforeach otherin English, and analogous forms in other languages. The use of the termanaphorin this narrow sense is unique to generative grammar, and in particular, to the traditionalbindingtheory.[4]This theory investigates the syntactic relationship that can or must hold between a given pro-form and its antecedent (or postcedent). In this respect, anaphors (reflexive and reciprocal pronouns) behave very differently from, for instance, personal pronouns.[5]
In some cases, anaphora may refer not to its usual antecedent, but to itscomplementset. In the following example a, the anaphoric pronountheyrefers to the children who are eating the ice-cream. Contrastingly, example b hastheyseeming to refer to the children who are not eating ice-cream:
In its narrower definition, an anaphoric pronoun must refer to some noun (phrase) that has already been introduced into the discourse. In complement anaphora cases, however, the anaphor refers to something that is not yet present in the discourse, since the pronoun's referent has not been formerly introduced, including the case of 'everything but' what has been introduced. The set of ice-cream-eating-children in example b is introduced into the discourse, but then the pronountheyrefers to the set of non-ice-cream-eating-children, a set which has not been explicitly mentioned.[7]
Both semantic and pragmatic considerations attend this phenomenon, which, following discourse representation theory since the early 1980s (such as work by Kamp (1981) and Heim (File Change Semantics, 1982)) and generalized quantifier theory (such as work by Barwise and Cooper (1981)), was studied in a series of psycholinguistic experiments in the early 1990s by Moxey and Sanford (1993) and Sanford et al. (1994).[6][8] In complement anaphora, as with the pronoun in example b, the anaphor refers to some sort of complement set (i.e. only the set of non-ice-cream-eating children), to the maximal set (i.e. all the children, both ice-cream-eating and non-ice-cream-eating), or to some hybrid or variant set, including potentially one of those noted to the right of example b. The various possible referents in complement anaphora are discussed by Corblin (1996), Kibble (1997), and Nouwen (2003).[7] Resolving complement anaphora is of interest in shedding light on brain access to information, calculation, mental modeling, and communication.[9][10]
There are many theories that attempt to prove how anaphors are related and trace back to their antecedents, with centering theory (Grosz, Joshi, and Weinstein 1983) being one of them. Taking the computational theory of mind view of language, centering theory gives a computational analysis of underlying antecedents. In their original theory, Grosz, Joshi, & Weinstein (1983) propose that some discourse entities in utterances are more "central" than others, and this degree of centrality imposes constraints on what can be the antecedent.
In the theory, there are different types of centers: forward facing, backwards facing, and preferred.
A ranked list of discourse entities in an utterance. The ranking is debated, some focusing on theta relations (Yıldırım et al. 2004) and some providing definitive lists.[example needed]
The highest ranked discourse entity in the previous utterance.[example needed]
The highest ranked discourse entity in the previous utterance realised in the current utterance.[example needed]
|
https://en.wikipedia.org/wiki/Anaphora_(linguistics)
|
Ingrammar, anantecedentis one or more words that establish the meaning of apronounor otherpro-form.[1]For example, in the sentence "John arrived late because traffic held him up," the word "John" is the antecedent of the pronoun "him." Pro-forms usually follow their antecedents, but sometimes precede them. In the latter case, the more accurate term would technically bepostcedent, although this term is not commonly distinguished fromantecedentbecause the definition ofantecedentusually encompasses it. The linguistic term that is closely related toantecedentandpro-formisanaphora. Theories of syntax explore the distinction between antecedents and postcedents in terms ofbinding.
Almost anysyntactic categorycan serve as the antecedent to a pro-form. The following examples illustrate a range of proforms and their antecedents. The pro-forms are in bold, and their antecedents are underlined:
This list of proforms and the types of antecedents that they take is by no means exhaustive, but rather it is intended to merely deliver an impression of the breadth of expressions that can function as proforms and antecedents. While the stereotypical proform is a pronoun and the stereotypical antecedent a noun or noun phrase, these examples demonstrate that most any syntactic category can in fact serve as an antecedent to a proform, whereby the proforms themselves are a diverse bunch.[2]The last two examples are particularly interesting, because they show that some proforms can even take discontinuous word combinations as antecedents, i.e. the antecedents arenotconstituents. A particularly frequent type of proform occurs inrelative clauses. Many relative clauses contain a relative pronoun, and these relative pronouns have an antecedent. Sentences d and h above contain relative clauses; the proformswhenandwhichare relative proforms.
In some cases, the wording could have an uncertain antecedent, where the antecedent of a pronoun is not clear because two or more prior nouns or phrases could match the count, gender, or logic as a prior reference.
In such cases, scholars have recommended rewriting the sentence structure to be more specific,[3] or repeating the words of the antecedent rather than using only a pronoun phrase, as techniques to resolve the uncertain antecedent.
For example, consider the sentence, "There was a doll inside the box that was made of clay", where the word "that" could refer to either the box or the doll. To make it clear that the doll is what is made of clay, the sentence could be reworded as one of the following: "Inside the box, there was a doll that was made of clay", "Inside the box, there was a doll made of clay", or "There was a girl doll inside the box, and she was made of clay" (or similar wording).
Antecedents may also be unclear when they occur far from the noun or phrase they refer to.Bryan Garnercalls these "remote relatives" and gives this example from theNew York Times:
"C-130 aircraft packed with radio transmitters flew lazy circles over the Persian Gulf broadcasting messages in Arabic to the Iraqi people that were monitored by reporters near the border."
As Garner points out, “that were…the border” modifies “messages”, which occurs 7 words (3 of which are nouns) before.[4]In context, the phrase could also modify “the Iraqi people”, hence the uncertainty.
Theante-inantecedentmeans 'before; in front of'. Thus, when a pro-form precedes its antecedent, the antecedent is not literally anantecedent, but rather it is apostcedent,post-meaning 'after; behind'. The following examples, wherein the pro-forms are bolded and their postcedents are underlined, illustrate this distinction:
Postcedents are rare compared to antecedents, and in practice, the distinction between antecedents and postcedents is often ignored, with the termantecedentbeing used to denote both. This practice is a source of confusion, and some have therefore denounced using the termantecedentto meanpostcedentbecause of this confusion.[5]
Some pro-forms lack a linguistic antecedent. In such cases, the antecedent is implied in the given discourse environment or from general knowledge of the world. For instance, the first person pronounsI,me,we, andusand the second person pronounyouare pro-forms that usually lack a linguistic antecedent. However, their antecedents are present in the discourse context as the speaker and the listener. Pleonastic pro-forms also lack a linguistic antecedent, e.g.It is raining, where the pronounitis semantically empty and cannot be viewed as referring to anything specific in the discourse world. Definite pro-forms such astheyandyoualso have an indefinite use, which means they denote some person or people in general, e.g.They will get you for that, and therefore cannot be construed as taking a linguistic antecedent.
|
https://en.wikipedia.org/wiki/Antecedent_(grammar)
|
Inlinguistics,cataphora(/kəˈtæfərə/; fromGreek,καταφορά,kataphora, "a downward motion" fromκατά,kata, "downwards" andφέρω,pherō, "I carry") is the use of an expression or word thatco-referswith a later, more specific expression in the discourse.[1]The preceding expression, whose meaning is determined or specified by the later expression, may be called acataphor. Cataphora is a type ofanaphora, although the termsanaphoraandanaphorare sometimes used in a stricter sense, denoting only cases where the order of the expressions is the reverse of that found in cataphora.
An example of cataphora in English is the following sentence:
In this sentence, the pronounhe(the cataphor) appears earlier than the nounJohn(thepostcedent) that it refers to. This is the reverse of the more normal pattern, "strict" anaphora, where areferring expressionsuch asJohn(in the example above) orthe soldier(in the example below) appears before any pronouns that reference it. Both cataphora and anaphora are types ofendophora.
Other examples of the same type of cataphora are:
Cataphora across sentences is often used for rhetorical effect. It can build suspense and provide a description. For example:
The examples of cataphora described so far are strict cataphora, because the anaphor is an actualpronoun. Strict within-sentence cataphora is highly restricted in the sorts of structures it can appear within, generally restricted to a preceding subordinate clause. More generally, however, any fairly generalnoun phrasecan be considered an anaphor when itco-referswith a more specific noun phrase (i.e. both refer to the same entity), and if the more general noun phrase comes first, it can be considered an example of cataphora. Non-strict cataphora of this sort can occur in many contexts, for example:
(The anaphor a little girl co-refers with Jessica.)
(The anaphor the right gadget co-refers with a digital camera.)
Strict cross-sentence cataphora where the antecedent is an entire sentence is fairly common cross-linguistically:
Cataphora of this sort is particularly common in formal contexts, using an anaphoric expression such asthisorthe following. Such expressions are often used in conjunction with acolon.
|
https://en.wikipedia.org/wiki/Cataphora
|
The nearest referent is a grammatical term sometimes used when two or more possible referents of a pronoun, or other part of speech, cause ambiguity in a text. However, "nearness" (proximity) may not be the most meaningful criterion for a decision, particularly where word order, inflection and other aspects of syntax are more relevant.
The concept of nearest referent is found in analysis of various languages, including classical languages Greek,[1]Latin[2]and Arabic.[3][4]It may create or resolve variant views in interpretation of a text.
There are other models than nearest referent for deciding what a pronoun, or other part of speech, refers to, andreference orderdistinguishespronoun-referent structureswhere:
This is also described asanaphoricreference (anaphor, previous referent) andcataphoricreference (cataphor, following referent).[6]
|
https://en.wikipedia.org/wiki/Nearest_referent
|
Inlinguistics,switch-reference(SR) describes any clause-levelmorphemethat signals whether certain prominent arguments in 'adjacent'clausesarecoreferential. In most cases, it marks whether thesubjectof the verb in one clause is coreferent with that of the previous clause, or of a subordinate clause to the matrix (main) clause that is dominating it.
The basic distinction made by a switch-reference system is whether the following clause has the same subject (SS) or a different subject (DS). That is known ascanonical switch-reference. For purposes of switch-reference, subject is defined as it is for languages with a nominative–accusative alignment: a subject is the sole argument of an intransitive clause or the agent of a transitive one. It holds even in languages with a high degree ofergativity.
TheWasho languageof California and Nevada exhibits a switch-reference system. When the subject of one verb is the same as the subject of the following verb, the verb takes no switch-reference marker. However, if the subject of one verb differs from the subject of the following verb, the verb takes the "different subject" marker, -š: as displayed below[1]
yá·saʼ duléʼšugi yá·saʼ gedumbéc̓edášaʼi
again he.is.reaching.toward.him again he.is.going.to.poke.him
"Again he is reaching toward him, again he will poke him" (same subject)
mémluyi-š lémehi
you.eat-DS I.will.drink
"If you eat, I will drink" (different subjects)
TheSeri languageof northwestern Mexico also has a switch-reference system which is similar in most ways to those of other languages except for one very salient fact: the relevant argument in a passive clause is not the superficial subject of the passive verb but rather the always unexpressed underlying subject. In clauses withsubject raising, it is the raised subject that is relevant.[2]
There are four fundamental properties that any switch-reference system, canonical or non-canonical, should satisfy.[3][4] Any system that does not have all of these properties is categorically not switch-reference:
A commonly used definition of canonical switch reference is that "switch-reference is an inflectional category of the verb, which indicates whether or not its subject is identical with the subject of some other verb."[5]There are several formal properties that apply specifically to canonical switch reference systems.[6]They include:
Many languages exhibit non-canonical switch-reference, the co-referents of arguments other than the subject being marked by switch-reference. Here is an example from Kiowa:
Kathryn gʲà kwút gɔ Esther-àl gʲà kwút
Kathryn 'she-it' write.PFV and.SS Esther-too 'she-it' write.PFV
Kathryn wrote a letter and Esther wrote one, too.
In this case, the use of the same-subject markergɔrather than the switch-reference markernɔindicates that the two subjects wrote letters at the same time, to the same person, and with the same subject.[7]
In addition, the nominative subject is not always marked by switch-reference. For instance, many clauses, including those withimpersonalor weather verbs, have no subject at all but can both bear and trigger switch-reference.[8]
Switch-reference markers often carry additional meanings or are at least fused with connectives that carry them. For instance, a switch-reference marker might mark a different subject and sequential events.
Switch-reference markers often appear attached to verbs, but they are not a verbal category. They often appear attached to sentence-initial particles, sentence-initial recapitulative verbs, adverbial conjunctions ('when', 'because', etc.), coordinators ('and' or 'but', though it seems never 'or'), relativizers ('which', 'that'), or sentence complementizers ('that'). They can also appear as free morphemes or as differing agreement paradigms. However, most switch-reference languages are subject–object–verb languages, with verbs as well as complementizers and conjunctions coming at the end of clauses. Therefore, switch-reference often appears attached to verbs, a fact that has led to the common but erroneous claim that switch-reference is a verbal category.
One certain typological fact about switch-reference is that switch-reference markers appear at the 'edges' of clauses. It is found at the edge of either a subordinate clause (referring to the matrix clause) or at the edge of a coordinate clause (referring to the previous clause). It is also very common in clause-chaining languages ofNew Guinea, where it is found at the edge of medial clauses.
Switch-reference is also sensitive to syntactic structure. It can skip a clause that is string-adjacent (spoken one right after another) and refer to a matrix clause.[9][10]For instance, in the configuration [A[B][C]], for which B and C are subordinate clauses to A, any switch-reference-marking on C refers to A, not B.
Switch-reference is accounted for by many different explanations. These are some of the current theories:
Finer’s account of switch-reference is connected to a generalized version of Chomsky’s binding theory that also accounts for Ā-positions (non-argument positions).[11]Switch-reference markers occupy the head of the complementizer phrase (CP), which is an Ā-position. Same subject markers are Ā-anaphors (reflexives and reciprocals) and different subject markers are Ā-pronominals (pronouns that are not reflexives or reciprocals). That is, same subject marking is used when the indices are identical, and different subject marking is used otherwise. Since the switch-reference markers are complementizer heads, their domain (smallest XP with a subject) necessarily includes the subject of the higher clause, which can then be (non-)coreferent with the switch-reference marker.[12]
Déchaine & Wiltshko (2002) propose an explanation of switch-reference based on the DP/ΦP distinction (ΦP is their proposed intermediate projection between NP and DP that should be able to act like either of their distributions).[13]Déchaine & Wiltshko note that the different subject markers are very similar to their corresponding same subject markers with some added morphology such as SS-igvs. DS-igininAmele.[14]
This suggests that same subject markers are bare ΦPs and different subject markers are full DPs containing a ΦP. Since different subject markers are essentially DPs, they are subject to Principle C and so cannot be coreferent with any antecedent. This forces a different-subject reading. Additionally, switch-reference is dependent on tense. Same subject marking occurs, and only subjects act as pivots for switch-reference, because switch-reference is mediated by tense.[15]
The distribution of same subject and different subject markers do not always align with the coreference of the two subjects. Van Gijn (2016) provides a sentence in Central Pomo where the same subject marker-hiis used despite the subjects being distinct (seethematic coherence):[3]
ʔɑ́ mkʰe kʰčé-ʔel dó-č-hi mí-li ma ʔdí-m-ʔkʰe
1A 2A bridge-the make-SML-IDENT that-with 2PAT take.PL-across-FUT
'I will build the bridge for you and on that you'll take them (across)'
Stirling (1993) proposed that switch-reference is about the congruence of "eventualities". Referential continuity is just one aspect of this. She notes six pivots for SR systems:
Same subject markers indicateidentitywhile different subject markers donon-identity, where identity is about agreement between “aspects of eventualities” and non-identity is disagreement in at least one of those parameters.
Keine (2013) also notes the inconsistency in the alignment of same subject and different subject markers with their subjects that may not actually be same or different. For example, in these two Zuni sentences, different subject marking is used despite the subjects being co-referent:[16]
Hoʼ sa-kʼošo-p hoʼ saʼleʼ kʼuhmo-kʼe-nna
1SG.NOM dish-wash-DS 1SG.NOM dish break-CAUS-FUT
'Whenever I wash dishes, I always break a dish'

Teʼči-p antewa-kya
arrive-DS spend.the.night-PST
'He arrived and camped [there] for the night'
Different subject marking is used in Mesa Grande Diegueño (Yuman family) as well. This is unexpected because weather verbs do not project their own subjects, so there are no actual subjects that could be co-referent.[17]
Nya-a:lap-č/-m səcu:r-č apəsi:w
when-be.snowing-SS/-DS be.cold-SS be.very.much
'When it snows, it's very cold.'
If subject reference completely explained the distribution of switch-reference markers, these sentences should not occur. Keine proposes instead that the switch-reference markers are themselves the different modes of spelling out the coordination, and that switch-reference may exist clause-internally due to the coordination of low verbal projections. Clause-internal switch-reference poses no locality problems, since indices and references are not being tracked across whole clauses.
Under Keine’s proposal, if two VPs are conjoined, then there is only onevP and one external argument (i.e. one subject). This subject is then semantically interpreted as the subject of both VPs. The coordination marker used in this contextisthe same subject marker. TwovPs, yielding two external arguments, may also be conjoined. Each one is interpreted as the subject of its respective VP. Morphological differences and semantic properties are just consequences of the tree geometry of the coordination structure.
The Amele sentences below illustrate Keine's coordination height proposal:
Ija hu-m-ig sab j-ig-a
1SG come-SS-1SG food eat-1SG-PAST
'I came and ate the food'

Ija ho-co-min sab ja-g-a
1SG come-DS-1SG food eat-2SG-PAST
'I came and you ate the food'
Arregi & Hanink (2021) propose that the embedded C head agrees with the subject of the embedded clause, as well as the subject of the higher clause in referential index. The same subject and different subject markers are the morphological realization of the embedded C head.[20]If the index values of both subjects differ, or if there is feature conflict, then C is morphologically realized as-š, the different subject marker in Washo. If there is no feature conflict, then C is realized as ∅, the same subject marker in Washo. By extension, for any switch reference system, if the embedded and superordinate subjects have the same reference index, then embedded C is realized as the same subject marker. Likewise, if there is feature conflict instead, C is realized as the different subject marker.[21]
Switch reference is found in hundreds of languages inNorth America,South America,Australia, New Guinea (particularly in theTrans-New Guineaphylum, but not in many Papuan language families of northern New Guinea[22]), and the South Pacific. Typologies exist for North America,[23]Australia,[24]and New Guinea.[25]The distribution of these systems has been determined via surveys andtypological studies.[26]
Switch-reference tends to occur in geographical clusters spread over distinct language families. This system is suspected to spread through language contact, orareal diffusion, which accounts for the fact that the morphological marking varies from one language to the next. For example,Kiowais the only language in theKiowa-Tanoanfamily that uses switch reference, which can be explained by the migration history of theKiowa tribeand their close contact with theCrowandComanchetribes, both of which use switch-reference in their language.[27]Particularly in North America, theUto-Aztecanlanguage family is thought to have been a source of major influence.[28]
Many indigenous languages in Western South America use switch-reference systems such asQuechuan,Uru, andChipayain the Andes, andTacanan,Panoan,Barbacoan,Tucanoan, andJivaronain the Amazon area.[28]Panoan languages are unique in the way they allow different coreference pivots such as transitive and intransitive subjects, as well as objects.[29]
In North America, there are 11 language families and 4 isolate languages that use this system. These native languages that feature switch-reference can be found in regions stretching from the south and south-west of the U.S. to the north-west of Mexico. These include theYuman–Cochimí,Muskogean,Maiduan,Pomoan,Yokutsan,Plateau Penutian,Yukian, Kiowa-Tanoan,Siouan, and theNumicandTakic(subgroups of Uto-Aztecan) language families, and theSeri,Tonkawa,Washo, andZuniisolates.[30]These North American languages are unique in their productive use of this system, using switch-reference in coordinate, relative, and complement clauses, as well as semanticallyunderspecifiedclause chains.[28]
Australian languages that use switch-reference include the Aboriginal language families Pama-Nyungan, Arabana-Wangganguru, Arandic, Wagaya, Garawa-Waanyi, and Djingili.[29] Further, 70% of Papuan languages, referring to languages native to the island of New Guinea, make use of switch-reference systems.[31] While languages in Papua New Guinea are rich in personal pronouns, verbs still require switch-reference and agreement markers for participant tracking.[32]
Switch-reference systems are also present in languages ofVanuatu, parts of Africa, and potentially eastern Siberia.Vanuatu languagesare distinctive in that they mark theanticipatory subject. Although Africa is not typically known to be a region with switch-reference, it is quite prevalent inOmoticlanguages, particularly within theNorth Omoticsubgroup.[31]This influence may have also contributed to the development of switch-reference systems inEast Cushitic languages. Finally, the eastern SiberianYukaghirlanguage family andEven, aTungusiclanguage, may be considered switch-reference languages but there is currently inconclusive evidence.[33]
|
https://en.wikipedia.org/wiki/Switch-reference
|
Ingeographic information systems,toponym resolutionis the process of relating atoponym, i.e. the mention of a place, to an unambiguous spatial footprint of the same place.[1]
The places mentioned in digitized text collections constitute a rich data source for researchers in many disciplines. However, toponyms inlanguage useare ambiguous, and difficult to assign a definite real-worldreferent. Over time, established geographic names may change (as in "Byzantium" > "Constantinople" > "Istanbul"); or they may be reused verbatim ("Boston" in England, UK vs. "Boston" in Massachusetts, USA), or with modifications (as in "York" vs. "New York"). To map a set of place names or toponyms that occur in a document to their correspondinglatitude/longitudecoordinates, a polygon, or any other spatial footprint, a disambiguation step is necessary. A toponym resolution algorithm is an automatic method that performs a mapping from a toponym to a spatial footprint.
Some methods for toponym resolution employ agazetteerof possible mappings between names and spatial footprints.[2]
The "unambiguous spatial footprint of the same place"[1]of definition can be in fact unambiguous, or "not so unambiguous". There are some differentcontexts ofuncertaintywhere the resolution process can occur:
Toponym resolution is sometimes a simple conversion from a name to an abbreviation, especially when the abbreviation is used as a standardgeocode. For example, converting the official country nameAfghanistaninto anISO country code,AF.
In annotating media andmetadata, conversion using amapand geographical evidence (e.g. GPS) is the most common approach to obtaining a toponym, or ageocodethat represents the toponym.
In contrast togeocodingof postal addresses, which are typically stored in structureddatabaserecords, toponym resolution is typically applied to large unstructured text document collections to associate the locations mentioned in them with maps. If some of those text documents are geotagged (e.g. because they are micro-blog posts with latitude and longitude automatically added), they can be used to infer the varying geographical specificity of arbitrary terms, e.g. "cable car" or "high tide".[3]
The process of annotating media (e.g., image, text, video) using spatial footprints is known asGeotagging. In order to automatically geotag a text document, the following steps are usually undertaken:toponym recognition(i.e., spotting textual references to geographic locations) andtoponym resolution(i.e., selecting an appropriate location interpretation for each geographic reference).
Toponym recognitioncan be considered as a special case ofnamed-entity recognitionwhere the objective is to merely derive location entities. However, the result of named-entity recognition can be further improved using hand-crafted rules or statistical rules.[4]
For obtaining location interpretations,resolutionmodels tend to leveragegazetteers(i.e., huge databases of locations) such asGeoNamesandOpenStreetMap. A naive approach to resolve toponyms is to pick the most populated interpretation from the list of candidates. For example, in the following excerpt:
Toronto man living, working in London 'uncertain of future' in U.K. after Brexit
The naive approach seems viable since toponymsTorontoandLondonrefer to their most common interpretation, located in Canada and Britain respectively, whereas in the following piece from a news article:
High-speed rail between Toronto and London by 2025
This approach fails to identify the toponymLondonas the city located inOntario, Canada. Hence, selecting the interpretation with the highest population does not work well for toponyms in a localized context.
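A minimal sketch of this population-based baseline is given below; the tiny gazetteer, its population figures and coordinates are illustrative stand-ins rather than real GeoNames records.

# Toy gazetteer: each toponym maps to candidate interpretations
# (illustrative values only, not real gazetteer data).
GAZETTEER = {
    "London": [
        {"country": "United Kingdom", "population": 8_900_000, "latlon": (51.51, -0.13)},
        {"country": "Canada", "population": 400_000, "latlon": (42.98, -81.25)},
    ],
    "Toronto": [
        {"country": "Canada", "population": 2_900_000, "latlon": (43.65, -79.38)},
    ],
}

def resolve_by_population(toponym):
    """Naive baseline: always return the most populated candidate interpretation."""
    candidates = GAZETTEER.get(toponym, [])
    return max(candidates, key=lambda c: c["population"]) if candidates else None

print(resolve_by_population("London"))  # picks London, UK, even in a Canadian context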
Additionally,toponym resolutiondoes not addressmetonymyin general. Nonetheless, a resolution technique can still disambiguate a metonymy reference as long as it is identified as a toponym in the recognition phase. For instance, in the following excerpt:
Canada is also adjusting its driving laws to account for cannabis DUIs.
Canadaindicates ametonymyand refers to "the government of Canada". However, it can be identified as a location by a generic named-entity recognizer and thus, a toponym resolver is able to disambiguate it.
Toponym resolution methods can be generally divided intosupervisedandunsupervisedmodels. Supervised methods typically cast the problem as a learning task wherein the model first extracts contextual and non-contextual features and then, a classifier is trained on a labelled dataset. Adaptive model[5]is one of the prominent models proposed in resolving toponyms. For each interpretation of a toponym, the model derives context-sensitive features based on geographical proximity and sibling relationships with other interpretations. In addition to context related features, the model benefits from context-free features including population, and audience location. On the other hand, unsupervised models do not warrant annotated data. They are superior to supervised models when the annotated corpus is not sufficiently large, and supervised models may not generalize well.[6]
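As a rough illustration of the supervised setting (a simplified stand-in, not the adaptive model cited above), each candidate interpretation of a toponym can be described by a few context-sensitive and context-free features and scored with an off-the-shelf classifier. The particular features, training rows and values below are assumptions made purely for the example.

from sklearn.linear_model import LogisticRegression

# One row per (toponym, candidate interpretation) pair:
# [log10 population, distance in km to the nearest other toponym in the document,
#  1 if a sibling place name (same administrative parent) also occurs, else 0]
X_train = [
    [6.9, 15.0, 1],    # correct candidate in a localized context
    [7.9, 5500.0, 0],  # incorrect, geographically distant candidate
    [5.6, 20.0, 1],
    [7.2, 3400.0, 0],
]
y_train = [1, 0, 1, 0]  # 1 = correct interpretation, 0 = incorrect

clf = LogisticRegression().fit(X_train, y_train)

# Two candidate interpretations of "London" in the high-speed-rail example
candidates = [
    [6.9, 5500.0, 0],  # London, UK: far from Toronto, no sibling cue
    [5.6, 170.0, 1],   # London, Ontario: close to Toronto
]
scores = clf.predict_proba(candidates)[:, 1]
print(scores.argmax())  # index of the higher-scoring candidate (the Ontario reading in this toy setup)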
Unsupervised models tend to better exploit the interplay of toponyms mentioned in a document. The Context-Hierarchy Fusion[6]model estimates the geographic scope of documents and leverages the connections between nearby place names as evidence to resolve toponyms. By means of mapping the problem to a conflict-freeset cover problem, this model achieves a coherent and robust resolution.
Furthermore, adopting Wikipedia and knowledge bases have been shown effective in toponym resolution. TopoCluster[7]models the geographical senses of words by incorporating Wikipedia pages of locations and disambiguates toponyms using the spatial senses of the words in the text.
Geoparsingis a special toponym resolution process of converting free-text descriptions of places (such as "twenty miles northeast of Jalalabad") into unambiguous geographic identifiers, such asgeographic coordinatesexpressed aslatitude-longitude. One can also geoparse location references from other forms of media, for example audio content in which a speaker mentions a place. With geographic coordinates the features can be mapped and entered intoGeographic information systems. Two primary uses of the geographic coordinates derived from unstructured content are to plot portions of the content on maps and to search the content using a map as a filter.
Geoparsing goes beyondgeocoding. Geocoding analyzes unambiguous structured location references, such as postal addresses and rigorously formatted numerical coordinates. Geoparsing handles ambiguous references in unstructured discourse, such as "Al Hamra," which is the name of several places, including towns in both Syria and Yemen.
Ageoparseris a piece of software or a (web) service that helps in this process. Some examples:
|
https://en.wikipedia.org/wiki/Geoparsing
|
Ininformation extraction, anamed entityis areal-world object, such as a person, location, organization, product, etc., that can be denoted with aproper name. It can be abstract or have a physical existence. Examples of named entities includeBarack Obama,New York City,Volkswagen Golf, or anything else that can be named. Named entities can simply be viewed as entity instances (e.g.,New York Cityis an instance of acity).
From a historical perspective, the termNamed Entitywas coined during theMUC-6 evaluation campaign[1]and comprised ENAMEX (entity name expressions, e.g. persons, locations and organizations) and NUMEX (numerical expressions).
A more formal definition can be derived from therigid designatorbySaul Kripke. In the expression "Named Entity", the word "Named" aims to restrict the possible set of entities to only those for which one or many rigid designators stands for the referent.[2]A designator is rigid when it designates the same thing in every possible world. On the contrary,flaccid designatorsmay designate different things in different possible worlds.
As an example, consider the sentence, "Biden is the president of the United States". Both "Biden" and the "United States" are named entities since they refer to specific objects (Joe BidenandUnited States). However, "president" is not a named entity since it can be used to refer to many different objects in different worlds (in different presidential periods referring to different persons, or even in different countries or organizations referring to different people). Rigid designators usually include proper names as well as certain natural terms like biological species and substances.
There is also a general agreement in theNamed Entity Recognitioncommunity to consider temporal and numerical expressions as named entities, such as amounts of money and other types of units, which may violate the rigid designator perspective.
The task of recognizing named entities in text isNamed Entity Recognitionwhile the task of determining the identity of the named entities mentioned in text is calledNamed Entity Disambiguation. Both tasks require dedicated algorithms and resources to be addressed.[3]
|
https://en.wikipedia.org/wiki/Named_entity
|
Author name disambiguationis the process ofdisambiguationandrecord linkageapplied to the names of individual people. The process could, for example, distinguish individuals with the name "John Smith".
An editor may apply the process to scholarly documents where the goal is to find all mentions of the same author and cluster them together. Authors of scholarly documents often share names which makes it hard to distinguish each author's work. Hence, author name disambiguation aims to find all publications that belong to a given author and distinguish them from publications of other authors who share the same name.
Considerable research has been conducted into name disambiguation.[1][2][3][4][5]Typical approaches for author name disambiguation rely on information to distinguish between authors, including (but not limited to) information about the authors such as: their name representation, affiliations and email addresses, and information about the publication: such as year of publication, co-authors, and the topic of the paper. This information can be used to train amachine learningclassifier to decide whether two author mentions refer to the same author or not.[6]Much research regards name disambiguation as aclusteringproblem, i.e., partitioning documents into clusters, where each represents an author.[2][7][8]Other research treats it as a classification problem.[9]Some works construct a document graph and utilize the graph topology to learn document similarity.[8][10]Recently, several pieces of research[10][11]aim to learn low-dimensional document representations by employing network embedding methods.[12][13]
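A schematic version of the pairwise-classification-plus-clustering pipeline described above is sketched below. The features, the hand-set weights standing in for a trained classifier, and the greedy single-link clustering are all simplifying assumptions for illustration, not a description of any particular published system.

from itertools import combinations

def pair_features(p1, p2):
    """Simple pairwise features for two papers whose author field reads e.g. 'J. Smith'."""
    shared_coauthors = len(set(p1["coauthors"]) & set(p2["coauthors"]))
    same_affiliation = int(p1["affiliation"] == p2["affiliation"])
    year_gap = abs(p1["year"] - p2["year"])
    return [shared_coauthors, same_affiliation, year_gap]

def same_author_score(features, weights=(1.0, 1.0, -0.05)):
    """Stand-in for a trained classifier: a weighted sum of the pairwise features."""
    return sum(w * f for w, f in zip(weights, features))

def cluster_papers(papers, threshold=0.5):
    """Greedy single-link clustering: merge clusters whenever a pair clears the threshold."""
    clusters = [{i} for i in range(len(papers))]
    for i, j in combinations(range(len(papers)), 2):
        if same_author_score(pair_features(papers[i], papers[j])) >= threshold:
            ci = next(c for c in clusters if i in c)
            cj = next(c for c in clusters if j in c)
            if ci is not cj:
                ci |= cj
                clusters.remove(cj)
    return clusters  # each cluster is a set of paper indices attributed to one author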
Author names can be ambiguous for multiple reasons: individuals may publish under multiple names because of differing transliterations, misspellings, name changes due to marriage, or the use of nicknames, middle names, or initials.[14]
Motivations for disambiguating individuals include identifying inventors from patents, and researchers across differing publishers, research institutions and time periods.[15]Name disambiguation is also a cornerstone in author-centric academic search and mining systems, such asAMiner(formerly ArnetMiner).[16]
Author name disambiguation is only one record linkage problem in the scholarly data domain. Closely related, and potentially mutually beneficial problems include: organisation (affiliation) disambiguation,[17]as well as conference or publication venue disambiguation, since data publishers often use different names or aliases for these entities.
Several well-known benchmarks to evaluate author name disambiguation are listed below, each of which provides publications with some ambiguous names and their ground truths.
|
https://en.wikipedia.org/wiki/Author_Name_Disambiguation
|
Inlinguistics,coreference, sometimes writtenco-reference, occurs when two or more expressions refer to the same person or thing; they have the samereferent. For example, inBill said Alice would arrive soon, and she did, the wordsAliceandsherefer to the same person.[1]
Co-reference is often non-trivial to determine. For example, inBill said he would come, the wordhemay or may not refer to Bill. Determining which expressions are coreferential is an important part of analyzing or understanding the meaning, and often requires information from the context and real-world knowledge, such as tendencies of some names to be associated with particular species ("Rover"), kinds of artifacts ("Titanic"), grammatical genders, or other properties.
Linguists commonly use indices to notate coreference, as inBill_i said he_i would come. Such expressions are said to becoindexed, indicating that they should be interpreted as coreferential.
When expressions are coreferential, the first to occur is often a full or descriptive form (for example, an entire personal name, perhaps with a title and role), while later occurrences use shorter forms (for example, just a given name, surname, or pronoun). The earlier occurrence is known as theantecedentand the other is called aproform, anaphor, or reference. However, pronouns can sometimes refer forward, as in "When she arrived home, Alice went to sleep." In such cases, the coreference is calledcataphoricrather than anaphoric.
Coreference is important forbindingphenomena in the field of syntax. The theory of binding explores the syntactic relationship that exists between coreferential expressions in sentences and texts.
When exploring coreference, numerous distinctions can be made, e.g.anaphora,cataphora, split antecedents, coreferring noun phrases, etc.[2]Several of these more specific phenomena are illustrated here:
Semanticists and logicians sometimes draw a distinction between coreference and what is known as abound variable.[3]Bound variables occur when the antecedent to the proform is an indefinite quantified expression, e.g.[4]
Quantified expressionssuch asevery studentandno studentare not considered referential. These expressions are grammatically singular but do not pick out single referents in the discourse or real world. Thus, the antecedents tohisin these examples are not properly referential, and neither ishis. Instead, it is considered avariablethat isboundby its antecedent. Its reference varies based upon which of the students in the discourse world is thought of. The existence of bound variables is perhaps more apparent with the following example:
This sentence is ambiguous. It can mean that Jack likes his grade but everyone else dislikes Jack's grade; or that no one likes theirowngrade except Jack. In the first meaning,hisis coreferential; in the second, it is a bound variable because its reference varies over the set of all students.
Coindex notation is commonly used for both cases. That is, when two or more expressions are coindexed, it does not signal whether one is dealing with coreference or a bound variable (or as in the last example, whether it depends on interpretation).
Incomputational linguistics, coreference resolution is a well-studied problem indiscourse. To derive the correct interpretation of a text, or even to estimate the relative importance of various mentioned subjects, pronouns and otherreferring expressionsmust be connected to the right individuals. Algorithms intended to resolve coreferences commonly look first for the nearest preceding individual that is compatible with the referring expression. For example,shemight attach to a preceding expression such asthe womanorAnne, but not as probably toBill. Pronouns such ashimselfhave much stricter constraints. As with many linguistic tasks, there is a tradeoff betweenprecision and recall.Cluster-quality metrics commonly used to evaluate coreference resolution algorithms include theRand index, theadjusted Rand index, and differentmutual information-based methods.
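The "nearest compatible antecedent" heuristic mentioned above can be sketched in a few lines; the dictionary representation of mentions and their features is an assumption made for the example.

def resolve_pronoun(pronoun, preceding_mentions):
    """Return the nearest preceding mention compatible with the pronoun, or None."""
    # preceding_mentions is ordered from earliest to latest occurrence in the text.
    for mention in reversed(preceding_mentions):
        if (mention["gender"] == pronoun["gender"]
                and mention["number"] == pronoun["number"]):
            return mention
    return None

mentions = [{"text": "Bill", "gender": "m", "number": "sg"},
            {"text": "the woman", "gender": "f", "number": "sg"}]
she = {"text": "she", "gender": "f", "number": "sg"}
print(resolve_pronoun(she, mentions)["text"])  # "the woman", not "Bill"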
A particular problem for coreference resolution in English is the pronounit, which has many uses.Itcan refer much likeheandshe, except that it generally refers to inanimate objects (the rules are actually more complex: animals may be any ofit,he, orshe; ships are traditionallyshe; hurricanes are usuallyitdespite having gendered names).Itcan also refer to abstractions rather than beings, e.g.He was paid minimum wage, but didn't seem to mind it.Finally,italso haspleonasticuses, which do not refer to anything specific:
Pleonastic uses are not considered referential, and so are not part of coreference.[5]
Approaches to coreference resolution can broadly be separated into mention-pair, mention-ranking or entity-based algorithms. Mention-pair algorithms makebinarydecisions about whether a pair of given mentions belongs to the same entity. Entity-wide constraints likegenderare not considered, which leads toerror propagation. For example, the pronounsheorshecan both have a high probability of coreference withthe teacher, but cannot be coreferent with each other. Mention-ranking algorithms expand on this idea but instead stipulate that one mention can only be coreferent with one (previous) mention. As a result, each previous mention must be given a score and the highest scoring mention (or no mention) is linked. Finally, in entity-based methods mentions are linked based on information about the whole coreference chain instead of individual mentions. The representation of a variable-width chain is more complex and computationally expensive than mention-based methods, which leads to these algorithms being mostly based onneural networkarchitectures.
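The contrast between mention-pair and mention-ranking decisions can be made concrete with a small sketch; the score function is a stand-in for whatever trained model a real system would use.

def mention_pair_links(mentions, score, threshold=0.5):
    """Mention-pair: an independent yes/no decision for every earlier mention."""
    links = []
    for i, m in enumerate(mentions):
        for antecedent in mentions[:i]:
            if score(antecedent, m) > threshold:
                links.append((antecedent, m))  # may link incompatible mentions to one entity
    return links

def mention_ranking_links(mentions, score, threshold=0.5):
    """Mention-ranking: each mention links to at most its single highest-scoring antecedent."""
    links = []
    for i, m in enumerate(mentions):
        if i == 0:
            continue
        best = max(mentions[:i], key=lambda antecedent: score(antecedent, m))
        if score(best, m) > threshold:
            links.append((best, m))
    return links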
|
https://en.wikipedia.org/wiki/Coreference
|
Anannotationis extra information associated with a particular point in adocumentor other piece of information. It can be a note that includes a comment or explanation.[1]Annotations are sometimes presentedin the margin of book pages. For annotations of different digital media, seeweb annotationandtext annotation.
Annotation practices include highlighting a phrase or sentence and adding a comment, circling a word that needs defining, posing a question when something is not fully understood, and writing a short summary of a key section.[2]Annotation also invites students to "(re)construct a history through material engagement and exciting DIY (Do-It-Yourself) annotation practices."[3]Annotation practices that are available today offer students a remarkable set of tools for beginning to work in a more collaborative, connected way than was previously possible.[4]
Text and film annotation is a technique that involves adding comments and text within a film. Analyzing videos is an undertaking that is never entirely free of preconceived notions, and the first step for researchers is to find their bearings within the field of possible research approaches and thus reflect on their own basic assumptions.[5]Annotations can be embedded within the video and can be added while the video data is recorded. Annotation is used as a tool in text and film to write one's thoughts and emotions into the markings.[2]At any number of steps of analysis, it can also be supplemented with more annotations. The anthropologist Clifford Geertz calls this a "thick description", which gives a sense of how useful annotation is, especially by adding a description of how it can be implemented in film.[5]
Marginalia refers to writing or decoration in the margins of a manuscript. Medieval marginalia is so well known that amusing or disconcerting instances of it are fodder for viral aggregators such as Buzzfeed and Brainpickings, and the fascination with other readers’ reading is manifest in sites such as Melville's Marginalia Online or Harvard's online exhibit of marginalia from six personal libraries.[4]It can also be a part of other websites such as Pinterest, or even meme generators and GIF tools.
Textual scholarshipis a discipline that often uses the technique of annotation to describe or add additional historical context to texts and physical documents to make it easier to understand.[6]
Students oftenhighlightpassages in books in order to actively engage with the text. Students can use annotations to refer back to key phrases easily, or addmarginaliato aid studying and finding connections between the text and prior knowledge or running themes.[7]
Annotated bibliographiesadd commentary on the relevance or quality of each source, in addition to the usual bibliographic information that merely identifies the source.
Students use annotation not only for academic purposes but also for interpreting their own thoughts, feelings, and emotions.[2]Scalar and Omeka are examples of sites that students use. Annotation is used across multiple genres, such as math, film, linguistics, and literary theory, where students find it most helpful. Most students reported the annotation process as helpful for improving overall writing ability, grammar, and academic vocabulary knowledge.
Mathematical expressions(symbols and formulae) can be annotated with their natural language meaning. This is essential for disambiguation, since symbols may have different meanings (e.g., "E" can be "energy" or "expectation value", etc.).[8][9]The annotation process can be facilitated and accelerated through recommendation, e.g., using the "AnnoMathTeX" system that is hosted by Wikimedia.[10][11][12]
From a cognitive perspective, annotation has an important role in learning and instruction. As part of guided noticing it involves highlighting, naming or labelling and commenting aspects of visual representations to help focus learners' attention on specific visual aspects. In other words, it means the assignment of typological representations (culturally meaningful categories), to topological representations (e.g. images).[13]This is especially important when experts, such as medical doctors, interpret visualizations in detail and explain their interpretations to others, for example by means of digital technology.[14]Here, annotation can be a way to establishcommon groundbetween interactants with different levels of knowledge.[15]The value of annotation has been empirically confirmed, for example, in a study which shows that in computer-based teleconsultations the integration of image annotation and speech leads to significantly improved knowledge exchange compared with the use of images and speech without annotation.[16]
Annotations were removed on January 15, 2019, fromYouTubeafter around a decade of service.[17]They had allowed users to provide information that popped up during videos, but YouTube indicated they did not work well on small mobile screens, and were being abused.
Markup languageslikeXMLandHTMLannotate text in a way that is syntactically distinguishable from that text. They can be used to add information about the desired visual presentation, or machine-readable semantic information, as in thesemantic web.[18]
Tabular data formats include CSV and XLS. The process of assigning semantic annotations fromontologiesto tabular data is referred to as semantic labelling,[19][20][21][22]also called semantic annotation.[23][22]Semantic labelling is often done in a (semi-)automatic fashion. Semantic labelling techniques work on entity columns,[22]numeric columns,[19][21][24][25]coordinates,[26]and more.[26][25]
There are several semantic labelling types which utilises machine learning techniques. These techniques can be categorised following the work of Flach[27][28]as follows: geometric (using lines and planes, such asSupport-vector machine,Linear regression), probabilistic (e.g.,Conditional random field), logical (e.g.,Decision tree learning), and Non-ML techniques (e.g., balancing coverage and specificity[22]). Note that the geometric, probabilistic, and logical machine learning models are not mutually exclusive.[27]
Pham et al.[29]useJaccard indexandTF-IDFsimilarity for textual data andKolmogorov–Smirnov testfor the numeric ones. Alobaid and Corcho[21]usefuzzy clustering(c-means[30][31]) to label numeric columns.
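For example, a Jaccard comparison for textual columns and a two-sample Kolmogorov–Smirnov test for numeric columns, in the spirit of the approaches just cited, can be written as follows (the candidate label values are made up for the example):

from scipy.stats import ks_2samp

def jaccard(column_values, label_values):
    """Jaccard index between a column's cell values and the values known for a candidate label."""
    a, b = set(column_values), set(label_values)
    return len(a & b) / len(a | b) if a | b else 0.0

# Textual column: overlap with the values associated with a candidate label
print(jaccard(["Spain", "France", "Italy"], ["Spain", "Italy", "Greece"]))  # 0.5

# Numeric column: KS statistic against a candidate property's reference sample
# (a smaller statistic means the distributions are more alike)
statistic, p_value = ks_2samp([3.2, 5.1, 4.8], [3.0, 5.0, 4.9, 4.7])
print(statistic)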
Limaye et al.[32]useTF-IDFsimilarity andgraphical models. They also use asupport-vector machineto compute the weights. Venetis et al.[33]construct an isA database which consists of the pairs (instance, class) and then compute maximum likelihood using these pairs. Alobaid and Corcho[34]approximated the q-q plot for predicting the properties of numeric columns.
Syed et al.[35]built Wikitology, which is "a hybrid knowledge base of structured and unstructured information extracted from Wikipedia augmented by RDF data from DBpedia and other Linked Data resources."[35]For the Wikitology index, they usePageRankforEntity linking, which is one of the tasks often used in semantic labelling. Since they were not able to query Google for all Wikipedia articles to get thePageRank, they usedDecision treeto approximate it.[35]
Alobaid and Corcho[22]presented an approach to annotate entity columns. The technique starts by annotating the cells in the entity column with the entities from the reference knowledge graph (e.g.,DBpedia). The classes are then gathered and each one of them is scored based on several formulas they presented taking into account the frequency of each class and their depth according to the subClass hierarchy.[36]
Here are some of the common semantic labelling tasks presented in the literature:
This is the most common task in semantic labelling. Given the text of a cell and a data source, the approach predicts the entity and links it to the one identified in the given data source. For example, if the input to the approach were the text "Richard Feynman" and a URL to the SPARQL endpoint of DBpedia, the approach would return "http://dbpedia.org/resource/Richard_Feynman", which is the entity from DBpedia. Some approaches use exact matching,[22]while others use similarity metrics such asCosine similarity.[32]
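A toy version of the similarity-based variant, using a simple bag-of-words cosine similarity over entity labels (the candidate list and helper functions here are illustrative only):

from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors built from the strings a and b."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def link_cell(cell_text, candidates):
    """Pick the (URI, label) candidate whose label is most similar to the cell text."""
    return max(candidates, key=lambda c: cosine(cell_text, c[1]))

candidates = [("http://dbpedia.org/resource/Richard_Feynman", "Richard Feynman"),
              ("http://dbpedia.org/resource/Richard_Hamming", "Richard Hamming")]
print(link_cell("Richard Feynman", candidates)[0])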
The subject column of a table is the column that contains the main subjects/entities in the table.[19][28][33][37][38]Some approaches expect the subject column as an input,[22]while others, such as TableMiner+, predict the subject column.[38]
Column types are divided differently by different approaches.[28]Some divide them into strings/text and numbers,[21][29][39][25]while others divide them further[28](e.g., Number Typology,[19]Date,[35][33]coordinates[40]).
The relation betweenMadridandSpainis "capitalOf".[41]Such relations can easily be found in ontologies, such asDBpedia. Venetis et al.[33]use TextRunner[42]to extract the relation between two columns. Syed et al.[35]use the relations between the entities of the two columns, and the most frequent relation is selected.
T2D[43]is the most common gold standard for semantic labelling. Two versions of T2D exist: T2Dv1 (sometimes referred to simply as T2D) and T2Dv2.[43]Other known benchmarks are published with the SemTab Challenge.[44]
The "annotate" function (also known as "blame" or "praise") used insource controlsystems such asGit,Team Foundation ServerandSubversiondetermines whocommittedchanges to the source code into the repository. This outputs a copy of the source code where each line is annotated with the name of the last contributor to edit that line (and possibly a revision number). This can help establish blame in the event a change caused a malfunction, or identify the author of brilliant code.
A special case is theJava programming language, where annotations can be used as a special form of syntacticmetadatain the source code.[45]Classes, methods, variables, parameters and packages may be annotated. The annotations can be embedded inclass filesgenerated by the compiler and may be retained by theJava virtual machineand thus influence therun-timebehaviour of an application. It is possible to create meta-annotations out of the existing ones in Java.[46]
Automatic image annotation is used to classify images forimage retrievalsystems.[47]
Since the 1980s,molecular biologyandbioinformaticshave created the need forDNA annotation. DNA annotation or genome annotation is the process of identifying the locations of genes and all of the coding regions in a genome and determining what those genes do. An annotation (irrespective of the context) is a note added by way of explanation or commentary. Once a genome is sequenced, it needs to be annotated to make sense of it.[48]
In thedigital imagingcommunity the term annotation is commonly used for visible metadata superimposed on animagewithout changing the underlying master image, such assticky notes, virtual laser pointers, circles, arrows, and black-outs (cf.redaction).[49]
In themedical imagingcommunity, an annotation is often referred to as aregion of interestand is encoded inDICOMformat.
In the United States, legal publishers such asThomson WestandLexis Nexispublish annotated versions ofstatutes, providing information aboutcourt casesthat have interpreted the statutes. Both the federalUnited States Codeand state statutes are subject to interpretation by thecourts, and the annotated statutes are valuable tools inlegal research.[50]
One purpose of annotation is to transform the data into a form suitable for computer-aided analysis. Prior to annotation, an annotation scheme is defined that typically consists of tags. During tagging, transcriptionists manually add tags into transcripts where required linguistical features are identified in an annotation editor. The annotation scheme ensures that the tags are added consistently across the data set and allows for verification of previously tagged data.[51]Aside from tags, more complex forms of linguistic annotation include the annotation of phrases and relations, e.g., intreebanks. Many different forms of linguistic annotation have been developed, as well as different formats and tools for creating and managing linguistic annotations, as described, for example, in the Linguistic Annotation Wiki.[52]
|
https://en.wikipedia.org/wiki/Annotation
|
There are two conceptualisations of data archaeology, the technical definition and the social science definition.
Data archaeology(alsodata archeology) in the technical sense refers to the art and science of recoveringcomputerdataencodedand/orencryptedin now obsoletemediaorformats. Data archaeology can also refer to recovering information from damagedelectronicformats afternatural disastersor human error.
It entails the rescue and recovery of old data trapped in outdated, archaic or obsolete storage formats such as floppy disks, magnetic tape, and punch cards, and the transformation or transfer of that data to more usable formats.
Data archaeology in the social sciences usually involves an investigation into the source and history of datasets and the construction of these datasets. It involves mapping out the entire lineage of data, its nature and characteristics, its quality and veracity and how these affect the analysis and interpretation of the dataset.
The findings of performing data archaeology affect the level to which the conclusions parsed from data analysis can be trusted.[1]
The term data archaeology originally appeared in 1993 as part of theGlobal Oceanographic Data Archaeology and Rescue Project(GODAR). The original impetus for data archaeology came from the need to recover computerised records of climatic conditions stored on oldcomputer tape, which can provide valuable evidence for testing theories ofclimate change. These approaches allowed the reconstruction of an image of theArcticthat had been captured by theNimbus 2satelliteon September 23, 1966, in higher resolution than ever seen before from this type of data.[2]
NASAalso utilises the services of data archaeologists to recover information stored on 1960s-era vintagecomputer tape, as exemplified by theLunar Orbiter Image Recovery Project(LOIRP).[3]
There is a distinction between data recovery and data intelligibility. One may be able to recover data but not understand it. For data archaeology to be effective, the data must be intelligible.[4]
A term closely related to data archaeology isdata lineage. The first step in performing data archaeology is an investigation into the data lineage. Data lineage entails the history of the data, its source, and any alterations or transformations it has undergone. Data lineage can be found in the metadata of a dataset, the paradata of a dataset, or any accompanying identifiers (methodological guides, etc.). With data archaeology comes methodological transparency, which is the level to which the data user can access the data history. The level of methodological transparency available determines not only how much can be recovered, but also assists in knowing the data. Data lineage investigation covers what instruments were used, what the selection criteria are, the measurement parameters, and the sampling frameworks.[1]
In the socio-political sense, data archaeology involves the analysis of data assemblages to reveal their discursive and material socio-technical elements and apparatuses. This kind of analysis can reveal the politics of the data being analysed and thus that of the producing institution. Archaeology in this sense refers to the provenance of data. It involves mapping the sites, formats and infrastructures through which data flow and are altered or transformed over time. It has an interest in the life of data and the politics that shape the circulation of data. This serves to expose the key actors, practices and praxes at play and their roles. It can be accomplished in two steps. The first is accessing and assessing the technical stack of the data (the infrastructure and material technologies used to build or gather the data) to understand its physical representation. The second is analysing the contextual stack of the data, which shapes how the data are constructed, used and analysed. This can be done via a variety of processes: interviews, analysing technical and policy documents, and investigating the effect of the data on a community or the institutional, financial, legal and material framing. This can be attained by creating a data assemblage.[1]
Data archaeology charts the way data moves across different sites and can sometimes encounterdata friction.[5]
Data archaeologists can also usedata recoveryafter natural disasters such as fires, floods,earthquakes, or evenhurricanes. For example, in 1995 duringHurricane Marilynthe National Media Lab assisted theNational Archives and Records Administrationin recovering data at risk due to damaged equipment. The hardware was damaged from rain, salt water, and sand, yet it was possible to clean some of the disks and refit them with new cases thus saving the data within.[4]
When deciding whether or not to try and recover data, the cost must be taken into account. If there is enough time and money, most data will be able to be recovered. In the case ofmagnetic media, which are the most common type used for data storage, there are various techniques that can be used to recover the data depending on the type of damage.[4]: 17
Humidity can cause tapes to become unusable as they begin to deteriorate and become sticky. In this case, a heat treatment can be applied to fix this problem, by causing the oils and residues to either be reabsorbed into the tape or evaporate off the surface of the tape. However, this should only be done in order to provide access to the data so it can be extracted and copied to a medium that is more stable.[4]: 17–18
Lubrication loss is another source of damage to tapes. This is most commonly caused by heavy use, but can also be a result of improper storage or natural evaporation. As a result of heavy use, some of the lubricant can remain on the read-write heads which then collect dust and particles. This can cause damage to the tape. Loss of lubrication can be addressed by re-lubricating the tapes. This should be done cautiously, as excessive re-lubrication can cause tape slippage, which in turn can lead to media being misread and the loss of data.[4]: 18
Water exposure will damage tapes over time. This often occurs in a disaster situation. If the media is in salty or dirty water, it should be rinsed in fresh water. The process of cleaning, rinsing, and drying wet tapes should be done at room temperature in order to prevent heat damage. Older tapes should be recovered prior to newer tapes, as they are more susceptible to water damage.[4]: 18
The next step (after investigating the data lineage) is to establish what counts as good data and bad data, to ensure that only the 'good' data gets migrated to the new data warehouse or repository. A good example of bad data, in the technical sense, istest data.
To prevent the need of data archaeology, creators and holders of digital documents should take care to employdigital preservation.
Another effective preventive measure is the use of offshore backup facilities that could not be affected should a disaster occur. From these backup servers, copies of the lost data could easily be retrieved. A multi-site and multi-technique data distribution plan is advised for optimal data recovery, especially when dealing withbig data. TheTCP/IPmethod, snapshot recovery, mirror sites and tapes safeguarding data in a private cloud are also all good preventive methods, as is transferring data daily from mirror sites to the emergency servers.[6]
|
https://en.wikipedia.org/wiki/Data_archaeology
|
The study ofancient Greek personal namesis a branch ofonomastics, the study of names,[1]and more specifically ofanthroponomastics, the study of names of persons. There are hundreds of thousands and even millions of individuals whoseGreek namesare on record; they are thus an important resource for any general study of naming, as well as for the study ofancient Greeceitself. The names are found in literary texts, on coins and stampedamphorahandles, on potsherds used inostracisms, and, much more abundantly, in inscriptions and (inEgypt) onpapyri. This article will concentrate on Greek naming from the 8th century BC, when the evidence begins, to the end of the 6th century AD.[2]
Ancient Greeksusually had one name, but another element was often added in semi-official contexts or to aid identification: a father's name (patronym) in thegenitive case, or in some regions as an adjectival formulation. A third element might be added, indicating the individual's membership in a particular kinship or other grouping, or city of origin (when the person in question was away from that city). Thus the oratorDemosthenes, while proposing decrees in theAthenian assembly, was known as "Demosthenes, son ofDemosthenesofPaiania"; Paiania was thedemeor regional sub-unit ofAtticato which he belonged by birth. On some rare occasions, if a person was illegitimate or fathered by a non-citizen, they might use their mother's name (metronym) instead of their father's. Ten days after birth, relatives on both sides were invited to a sacrifice and feast calleddekátē(δεκάτη), "tenth day"; on this occasion the father formally named the child.[3]
Demosthenes was unusual in bearing the same name as his father; it was more common for names to alternate between generations or between lines of a family. Thus it was common to name a first son after his paternal grandfather, and the second after the maternal grandfather, great-uncle, or great-aunt. A speaker in a Greek court case explained that he had named his four children after, respectively, his father, the father of his wife, a relative of his wife, and the father of his mother.[4]Alternatively, family members might adopt variants of the same name, such as "Demippos, son ofDemotimos". The practice of naming children after their grandparents is still widely practiced in Greece today.[5]
This article uses names written in Romanized form. For example, the Greek name Δημόκριτος is written in the formDemokritos(meaning "chosen of the people"), but this same name is normally written asDemocritusin the Latin spelling, which is the standard used in modern literature.
In many contexts, etiquette required that respectable women be spoken of as the wife or daughter of X rather than by their names.[6]On gravestones or dedications, however, they had to be identified by name. Here, the patronymic formula "son of X" used for men might be replaced by "wife of X", or supplemented as "daughter of X, wife of Y".
Many women bore forms of standard masculine names, with a feminine ending substituted for the masculine. Many standard names related to specific masculine achievements had a common feminine equivalent; the counterpart ofKallimachos"noble battle" or "fair in battle" would beKallimachē;likewiseStratonikos"army victory" would correspond toStratonikē.The taste mentioned above for giving family members related names was one motive for the creation of such feminine forms. There were also feminine names with no masculine equivalent, such asGlykera"sweet one";Hedistē"most delightful",Kalliopē, "beautiful-voiced".
Another distinctive way of forming feminine names was the neuter diminutive suffix-ion(-ιον, while the masculine corresponding suffix was -ιων), suggesting the idea of a "little thing": e.g.,Aristionfromaristos"best";Mikrionfrommikros"small". Perhaps by extension of this usage, women's names were sometimes formed from men's by a change to a neuter ending without the diminutive sense:Hilaronfromhilaros, "cheerful".
There were five main personal name types in Greece:[7]compound names, shortened forms of compound names, simple names from ordinary nouns and adjectives, theophoric names derived from gods, and Lallnamen. Each is described in turn below.
Demosthenesis compounded from two ordinary Greek roots (a structure at least as old asProto-Indo-European):[8]demos"people" andsthenos"strength". A vast number of Greek names have this form, being compounded from two recognizable (though sometimes shortened) elements:Nikomachosfromnike"victory" andmache"battle",Sophoklesfromsophos"wise, skilled" andkleos"glory",Polykratesfrompoly"much" andkratos"power". The elements used in these compounds are typically positive and of good omen, stressing such ideas as beauty, strength, bravery, victory, glory, and horsemanship. The order of the elements was often reversible:aristosandkleosgive bothAristoklesandKlearistos.
Such compounds have a more or less clear meaning. It was already noted by Aristotle,[9]that two elements could be brought together in illogical ways. Thus the immensely productivehippos"horse" yielded, among hundreds of compounds, not only meaningful ones such asPhilippos"lover of horses" andHippodamas"horse-tamer", but alsoXenippos"stranger horse" andAndrippos"man horse" andHippias(likely just meaning "horse"). There were, in turn, numerous other names beginning withXen-andAndr-. These "irrational" compounds arose through a combination of common elements.[10]One motive was a tendency for members of the same family to receive names that echoed one another without being identical. Thus we meetDemippos, son ofDemotimos, where the son's name is irrational ("people horse") and the father's name meaningful ("people honour", i.e., honored among the people).
A very common element found in Greek names was also the rootEu-, fromeusmeaning "well" or "good". This is found in names such asEumenes"good mind", andEuphemos"good reputation". Similar looking prefixes includeEury-"wide", as inEurydike"wide justice",and alsoEuthy-"straight" as inEuthymenes, "straight mind". Some of the most famous compound names were also created using the wordAndros"man", such asAnaxandros"lord man", Lysandros"liberating man", Alexandros"defending man". The other forms includeAndrokles"man glory" (or "glorious man"),Andromachos"man battle" (or probably "man of battle") andNikanor"victory man". Another recurring element isLys-, fromlysis"releasing", found in names such asLysimachos"liberating battle", Lysanias"releasing from sorrow", andLysias"release" or "liberation".The rootPhil-, originating from the wordphilos"loving", was also used widely among Greeks, and found in names such asPhiloxenos"lover of strangers" andPhiletairos"lover of companions".
A second major category of names was shortened versions ("hypocoristics," or in GermanKosenamen) of the compounded names. Thus alongside the many names beginning withKall-"beauty" such asKallinikos"of fair victory", there are shortenedKalliasandKallon(masculine) orKallis(feminine). Alongside victory names such asNikostratos"victory army", there areNikiasandNikon(masculine) orNiko(feminine). Such shortenings were variously formed and very numerous: more than 250 shortenings of names inPhil(l)-("love") and related roots have been counted.
Ordinary nouns and adjectives of the most diverse types were used as names, either unadjusted or with the addition of a wide variety of suffixes. For instance, some twenty different names are formed fromaischros"ugly", including that of the poet we know asAeschylus, the Latin spelling ofAischylos. Among the many different categories of nouns and adjectives from which the most common names derive are colors (Xanthos"yellow"), animals (Moschos"heifer", andDorkas"roe deer"), physical characteristics (Simos"snub nose";Strabon"squinty-eyed"), parts of the body (Kephalos, fromkephale"head", and many from various slang terms for genitalia). Few of these simple names are as common as the most common compound names, but they are extraordinarily numerous and varied. Identifying their origins often taxes the knowledge of the outer reaches of Greek vocabulary.[11]Here the quest for dignity seen in the compound names largely disappears. Some, to our ears, sound positively disrespectful:Gastron"pot belly",Batrachos"frog",Kopreus"shitty", but these are probably by origin affectionate nicknames, in many cases applied to small children, and subsequently carried on within families.
Many Greeks bore names derived from those of gods. However, it was not normal before the Roman period for Greeks to bear exactly the same names as gods, but they did use adjectival forms of Divine names. For exampleDionysios"belonging toDionysos" andDemetrios"belonging toDemeter" (feminine formsDionysiaandDemetria) were the most common such names in ancient times.
There were also compound theophoric names, formed with a wide variety of suffixes, of which the most common were-doros"gift of" (e.g.Athenodoros"gift ofAthena") or-dotos"given by" (e.g.Apollodotos"given by Apollo"). Many names were also based on cult titles of gods:Pythodoros"gift of Pythios", i.e.Apollo. Other less common suffixes were-phon"voice of" (e.g.Dionysophon"voice of Dionysos") and-phanes"appearing" (e.g.Diophanes,"Zeusappearing" or "looking like Zeus"). Also common were names formed from the simpletheos"god", such asTheodoros, or the feminine formTheodora. All the major gods except the god of war, Ares, and gods associated with the underworld (Persephone, Hades,Plouton[=Latin Pluto]) generated theophoric names, as did some lesser gods (rivers in particular) and heroes. When new gods rose to prominence (Asklepios) or entered Greece from outside (Isis,Sarapis), they too generated theophoric names formed in the normal ways (e.g.Asklepiodotos,Isidoros,Sarapias).[12]
Lallnamen is the German word used for names derived not from other words but from the sounds made by little children addressing their relatives. Typically, they involve repeated consonants or syllables (like EnglishDada, Nana)—examples areNannaandPapas. They grew hugely in frequency from a low base in the Roman period, probably through the influence of other naming traditions such asPhrygian, in which such names were very common.
Many Greek names used distinctive suffixes that conveyed additional meaning. The suffix-ides(-idasin Doric areas such as Sparta) indicates patrilineal descent, and usually means "descendant of", but in some cases can also mean "son of". For exampleEurycratidesmeans "descendant ofEurycrates",whileLeonidasmeans "descendant ofLeon", as inLeon of Sparta, but literally may also mean "son of a lion", since the nameLeonmeans "lion" in Greek. Greeks often used this suffix when naming their sons after prominent ancestors like grandfathers and so on. The diminutive suffix-ionwas also common, e.g.Hephaestion("little Hephaestus").[13]
The main broad characteristics of Greek name formation listed above are found in otherIndo-European languages(the Indo-Iranian, Germanic, Celtic, and Balto-Slavic subgroups); they look like an ancient inheritance within Greek.[14]The naming practices of the Mycenaeans in the 14th/13th centuries BC, insofar as they can be reconstructed from the early Greek known asLinear B, seem already to display most of the characteristics of the system visible when literacy resumed in the 8th century BC, though non-Greek names were also present (and most of thesepre-Greeknames did not survive into the later epoch).[15]This is true also of the epic poetry of Homer, where many heroes have compound names of familiar types (Alexandros,Alkinoos,Amphimachos). But the names of several of the greatest heroes (e.g.Achilleus,Odysseus,Agamemnon,Priamos) cannot be interpreted in those terms and were seldom borne by mortals again until a taste for "heroic" names developed under the Roman Empire; they have a different, unexplained origin. The system described above underwent few changes before the Roman period, though the rise ofMacedoniato power earned names of that region such asPtolemaios,Berenike, andArsinoenew popularity. Alternative names ("X also known as Y") started to appear in documents in the 2nd century BC but had been occasionally mentioned in literary sources much earlier.
A different phenomenon, that of individuals bearing two names (e.g.,Hermogenes Theodotos), emerged among families of high social standing—particularly in Asia Minor in the Roman imperial period, possibly under the influence of Roman naming patterns. The influence of Rome is certainly visible both in the adoption of Roman names by Greeks and in the drastic transformation of names by Greeks who acquiredRoman citizenship, a status marked by possession of not one butthree names. Such Greeks often took thepraenomenandnomenof the authors or sponsors of their citizenship, but retained their Greek name ascognomento give such forms as Titus Flavius Alkibiades. Various mixed forms also emerged. The Latin suffix–ianus, originally indicating the birth family of a Roman adopted into another family, was taken over to mean initially "son of" (e.g.Asklepiodotianos'son of Asklepiodotos'), then later as a source of independent new names.
Another impulse came with the spread of Christianity, which brought new popularity to names from the New Testament, names of saints and martyrs, and existing Greek names such asTheodosios"gift of god", which could be reinterpreted in Christian terms. But non-Christian names, even theophoric names such asDionysiosorSarapion, continued to be borne by Christians — a reminder that a theophoric name could become a name like any other, its original meaning forgotten. Another phenomenon of late antiquity (5th–6th centuries) was a gradual shift away from the use of the father's name in the genitive as an identifier. A tendency emerged instead to indicate a person's profession or status within the Christian church: carpenter, deacon, etc.[16]Many Greek names have come down by various routes into modern English, some easily recognisable such as Helen or Alexander, some modified such as Denis (from Dionysios).[17]
The FrenchepigraphistLouis Robertdeclared that what is needed in the study of names is not "catalogues of names but the history of names and even history by means of names (l'histoire par les noms)."[18]Names are a neglected but in some areas crucial historical source.[19]Many names are characteristic of particular cities or regions. It is seldom safe to use an individual's name to assign him to a particular place, as the factors that determine individual choices of name are very various. But where a good cluster of names are present, it will usually be possible to identify with much plausibility where the group in question derives from. By such means, the origins of, say, bands of mercenaries or groups of colonists named in inscriptions without indication of their homeland can often be determined. Names are particularly important in situations of cultural contact: they may answer the question whether a particular city is Greek or non-Greek, and document the shifts and complexities in ethnic self-identification even within individual families. They also, through theophoric names, provide crucial evidence for the diffusion of new cults, and later of Christianity.
Two other once-popular ways of exploiting names for social history, by contrast, have fallen out of favor. Certain names and classes of name were often borne by slaves, since their names were given or changed at will by their owners, who may not have liked to allow them dignified names.[20]But no names or very few were so borne exclusively, and many slaves had names indistinguishable from those of the free; one can never identify a slave by name alone.[21]Similar arguments apply to so-called "courtesans’ names".
Jean Antoine Letronne(1851)[22]was the pioneer work stressing the importance of the subject.Papeand Benseler (1863–1870)[23]was for long the central work of reference but has now been replaced.
Bechtel (1917)[24]is still the main work that seeks to explain the formation and meaning of Greek names, although the studies of O. Masson et al. collected inOnomastica Graeca Selecta(1990–2000)[25]have constantly to be consulted.
L. Robert,Noms indigènes dans l’Asie Mineure gréco-romaine(1963),[26]is, despite its title, largely a successful attempt to show that many names attested in Asia Minor and supposed to be indigenous are in fact Greek; it is a dazzling demonstration of the resources of Greek naming.
The fundamental starting point is now the multi-volumeA Lexicon of Greek Personal Names, founded byP.M. Fraserand still being extended with the collaboration of many scholars.[27]This work lists, region by region, not only every name attested in the region but every bearer of that name (thus popularity of the name can be measured). The huge numbers of Greek names attested in Egypt are accessible atTrismegistos People.[28]Several volumes of studies have been published that build on the new foundation created by these comprehensive collections:S. HornblowerandE. Matthews(2000);[29]E. Matthews (2007);[30]R. W. V. Catling and F. Marchand (2010);[31]R. Parker(2013).[32]
|
https://en.wikipedia.org/wiki/Ancient_Greek_personal_names
|
TheGalton–Watson process, also called theBienaymé-Galton-Watson processor theGalton-Watson branching process, is abranchingstochastic processarising fromFrancis Galton's statistical investigation of the extinction offamily names.[1][2]The process models family names aspatrilineal(passed from father to son), while offspring are randomly either male or female, and names become extinct if the family name line dies out (holders of the family name die without male descendants).
Galton's investigation of this process laid the groundwork for the study ofbranching processesas a subfield ofprobability theory, and along with these subsequent processes the Galton-Watson process has found numerous applications across population genetics, computer science, and other fields.[3]
There was concern amongst theVictoriansthataristocraticsurnames were becoming extinct.[4]
In 1869, Galton publishedHereditary Genius, in which he treated the extinction of different social groups.
Galton originally posed a mathematical question regarding the distribution of surnames in an idealized population in an 1873 issue ofThe Educational Times:[5]
A large nation, of whom we will only concern ourselves with adult males,Nin number, and who each bear separate surnames colonise a district. Their law of population is such that, in each generation,a0per cent of the adult males have no male children who reach adult life;a1have one such male child;a2have two; and so on up toa5who have five. Find what proportion of their surnames will have become extinct afterrgenerations; and how many instances there will be of the surname being held bympersons.
The ReverendHenry William Watsonreplied with a solution.[6]Together, they then wrote an 1874 paper titled "On the probability of the extinction of families" in theJournal of the Anthropological Institute of Great Britain and Ireland(now theJournal of the Royal Anthropological Institute).[7]Galton and Watson appear to have derived their process independently of the earlier work byI. J. Bienaymé; see.[8]Their solution is incomplete: according to it,allfamily names go extinct with probability 1.
Bienayméhad previously published the answer to the problem in 1845,[9]with a promise to publish the derivation later; however, there is no known publication of his solution (although Bru (1991)[10]purports to reconstruct the proof). He was inspired byÉmile Littré[11]andLouis-François Benoiston de Châteauneuf(a friend of Bienaymé).[12][13]
Cournotpublished a solution in 1847, in chapter 36 ofDe l'origine et des limites de la correspondance entre l'algèbre et la géométrie.[14]
Ronald A. Fisher in 1922 studied the same problem formulated in terms of genetics. Instead of the extinction of family names, he studied the probability for a mutant gene to eventually disappear in a large population.[15]Haldanesolved the problem in 1927.[16]
Agner Krarup Erlangwas a member of the prominent Krarup family, which was dying out. In 1929, the same problem was published posthumously under his name (his obituary appears beside the problem). Erlang himself died childless.Steffensensolved the problem in 1930.
For a detailed history, see Kendall (1966[17]and 1975[13]), as well as[18]and Section 17 of[19].
Assume, for the sake of the model, that surnames are passed on to all male children by their father. Suppose the number of a man's sons to be arandom variabledistributedon the set { 0, 1, 2, 3, ... }. Further suppose the numbers of different men's sons to beindependentrandom variables, all having the same distribution.
Then the simplest substantial mathematical conclusion is that if the average number of a man's sons is 1 or less, then their surname willalmost surelydie out, and if it is more than 1, then there is more than zero probability that it will survive for any given number of generations.
A corollary of high extinction probabilities is that if a lineagehassurvived, it is likely to have experienced, purely by chance, an unusually high growth rate in its early generations at least when compared to the rest of the population.[citation needed]
A Galton–Watson process is a stochastic process {Xn} which evolves according to the recurrence formulaX0= 1 and

Xn+1=∑j=1Xnξj(n){\displaystyle X_{n+1}=\sum _{j=1}^{X_{n}}\xi _{j}^{(n)}}
where{ξj(n):n,j∈N}{\displaystyle \{\xi _{j}^{(n)}:n,j\in \mathbb {N} \}}is a set ofindependent and identically-distributednatural number-valued random variables.
In the analogy with family names,Xncan be thought of as the number of descendants (along the male line) in thenth generation, andξj(n){\displaystyle \xi _{j}^{(n)}}can be thought of as the number of (male) children of thejth of these descendants. The recurrence relation states that the number of descendants in then+1st generation is the sum, over allnth generation descendants, of the number of children of that descendant.
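As a concrete illustration of the recurrence, the following is a minimal simulation sketch in Python with NumPy; the choice of a Poisson offspring distribution with mean 1.1, the number of generations, and the function names are illustrative assumptions rather than part of the model.

import numpy as np

def simulate_galton_watson(offspring_sampler, generations, rng):
    """Simulate X_0, X_1, ..., X_generations for one Galton-Watson process.

    offspring_sampler(size, rng) must return `size` independent draws from
    the offspring distribution (the xi_j in the recurrence above).
    """
    history = [1]                  # X_0 = 1: a single founding individual
    population = 1
    for _ in range(generations):
        if population == 0:        # extinction is absorbing
            history.append(0)
            continue
        # X_{n+1} is the sum, over the X_n current individuals, of their offspring counts.
        population = int(offspring_sampler(population, rng).sum())
        history.append(population)
    return history

rng = np.random.default_rng(0)
poisson_offspring = lambda size, rng: rng.poisson(1.1, size)   # mean slightly above 1
print(simulate_galton_watson(poisson_offspring, 20, rng))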
Theextinction probability(i.e. the probability of final extinction) is given by

limn→∞Pr(Xn=0).{\displaystyle \lim _{n\to \infty }\Pr(X_{n}=0).}
This is clearly equal to zero if each member of the population has exactly one descendant. Excluding this case (usually called the trivial case), there exists a simple necessary and sufficient condition, which is given in the next section.
In the non-trivial case, the probability of final extinction is equal to 1 ifE{ξ1} ≤ 1 and strictly less than 1 ifE{ξ1} > 1.
The process can be treated analytically using the method ofprobability generating functions.
If the number of childrenξjat each node follows aPoisson distributionwith parameter λ, a particularly simple recurrence can be found for the total extinction probabilityxnfor a process starting with a single individual at timen= 0:

xn+1=eλ(xn−1),{\displaystyle x_{n+1}=e^{\lambda (x_{n}-1)},}

withx0= 0; the extinction probability is the limit of this sequence asngrows.
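A minimal sketch of iterating this recurrence numerically; the parameter values and the fixed iteration count are arbitrary illustrative choices.

import numpy as np

def poisson_extinction_probability(lam, generations=200):
    """Iterate x_{n+1} = exp(lam * (x_n - 1)) starting from x_0 = 0.

    x_n is the probability that a process with a single founder and
    Poisson(lam) offspring has died out by generation n; the limit is
    the total extinction probability."""
    x = 0.0
    for _ in range(generations):
        x = np.exp(lam * (x - 1.0))
    return x

for lam in (0.8, 1.0, 1.5, 2.0):
    print(lam, round(poisson_extinction_probability(lam), 4))

For a subcritical or critical mean (λ ≤ 1) the iteration approaches 1, while for λ > 1 it converges to the smaller root of x = e^{λ(x − 1)}, in line with the condition stated above.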
In the classical family surname Galton–Watson process described above, only men need to be considered, since only males transmit their family name to descendants. This effectively means that reproduction can be modeled as asexual. (Likewise, if mitochondrial transmission is analyzed, only women need to be considered, since only females transmit their mitochondria to descendants.)
A model more closely following actual sexual reproduction is the so-called "bisexual Galton–Watson process", where only couples reproduce.[citation needed](Bisexualin this context refers to the number of sexes involved, notsexual orientation.) In this process, each child is taken to be male or female, independently of the others, with a specified probability, and a so-called "mating function" determines how many couples will form in a given generation. As before, reproduction of different couples is considered to be independent of each other. The analogue of the trivial case now corresponds to each male and female reproducing in exactly one couple, having one male and one female descendant, with the mating function taking the value of the minimum of the number of males and females (which are then the same from the next generation onwards).
Since the total reproduction within a generation depends now strongly on the mating function, there exists in general no simple necessary and sufficient condition for final extinction as is the case in the classical Galton–Watson process.[citation needed]However, excluding the trivial case, the concept of theaveraged reproduction mean(Bruss (1984)) allows for a general sufficient condition for final extinction, treated in the next section.
If in the non-trivial case theaveraged reproduction meanper couple stays bounded over all generations and will not exceed 1 for a sufficiently large population size, then the probability of final extinction is always 1.
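As an illustration of how a mating function enters the dynamics, here is a minimal simulation sketch; the mating function min(females, males), the Poisson offspring distribution per couple, and the sex ratio of one half are illustrative assumptions, not a specific published model.

import numpy as np

def simulate_bisexual_gw(couples, generations, lam, rng,
                         mating=lambda females, males: min(females, males)):
    """Bisexual Galton-Watson sketch: couples reproduce, each child is male or
    female with probability 1/2, and a mating function turns the next
    generation's females and males into couples."""
    history = [couples]
    for _ in range(generations):
        if couples == 0:                               # extinction is absorbing
            history.append(0)
            continue
        children = rng.poisson(lam, couples).sum()     # offspring of all couples
        females = rng.binomial(children, 0.5)          # each child female w.p. 1/2
        males = children - females
        couples = mating(females, males)               # couples formed for the next generation
        history.append(int(couples))
    return history

rng = np.random.default_rng(1)
print(simulate_bisexual_gw(couples=10, generations=15, lam=2.2, rng=rng))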
Citing historical examples of the Galton–Watson process is complicated by the history of family names often deviating significantly from the theoretical model. Notably, new names can be created, existing names can be changed over a person's lifetime, and people historically have often assumed names of unrelated persons, particularly nobility. Thus, a small number of family names at present is notin itselfevidence for names having become extinct over time, or that they did so due to the dying out of family name lines; that would require both that there were more names in the past and that they died out because the line died out, rather than the name changing for other reasons, such as vassals assuming the name of their lord.
Chinese namesare a well-studied example of surname extinction: there are currently only about 3,100 surnames in use in China, compared with close to 12,000 recorded in the past,[20][21]with 22% of the population sharing the namesLi,WangandZhang(numbering close to 300 million people), and the top 200 names (6½%) covering 96% of the population. Names have changed or become extinct for various reasons such as people taking the names of their rulers, orthographic simplifications, andtaboos against using characters from an emperor's name, among others.[21]While family name lines dying out may be a factor in the surname extinction, it is by no means the only or even a significant factor. Indeed, the most significant factor affecting the surname frequency is other ethnic groups identifying asHanand adopting Han names.[21]Further, while new names have arisen for various reasons, this has been outweighed by old names disappearing.[21]
By contrast, some nations have adopted family names only recently. This means both that they have not experienced surname extinction for an extended period, and that the names were adopted when the nation had a relatively large population, rather than the smaller populations of ancient times.[21]Further, these names have often been chosen creatively and are very diverse.
On the other hand, some examples of high concentration of family names are not primarily due to the Galton–Watson process.
Modern applications include the survival probabilities for a newmutantgene, or the initiation of anuclear chain reaction, or the dynamics ofdisease outbreaksin their first generations of spread, or the chances ofextinctionof smallpopulationoforganisms.
In the late 1930s,Leo Szilardindependently reinvented Galton-Watson processes to describe the behavior of free neutrons duringnuclear fission. This work involved generalizing formulas for extinction probabilities, which became essential for calculating the critical mass required for a continuous chain reaction with fissionable materials.[23]
The Galton–Watson model is an accurate[citation needed]description ofY chromosometransmission in genetics, and the model is thus useful for understandinghuman Y-chromosome DNA haplogroups. Likewise, sincemitochondriaare inherited only on the maternal line, the same mathematical formulation describes transmission of mitochondria.[24]
It explains (perhaps closest to Galton's original interest) why only a handful of males in the deep past of humanity now haveanysurviving male-line descendants, reflected in a rather small number of distinctivehuman Y-chromosome DNA haplogroups.[citation needed]
|
https://en.wikipedia.org/wiki/Extinction_of_surnames
|
Ahydronym(fromGreek:ὕδρω,hydrō, "water" andὄνομα,onoma, "name") is a type oftoponymthat designates aproper nameof abody of water. Hydronyms include the proper names of rivers and streams, lakes and ponds, swamps and marshes, seas and oceans. As a subset oftoponymy, a distinctive discipline ofhydronymy(orhydronomastics) studies the proper names of all bodies of water, the origins and meanings of those names, and their development and transmission through history.[1]
Within theonomasticclassification, main types of hydronyms are (in alphabetical order):
Often, a given body of water will have several entirely different names given to it by different peoples living along its shores. For example,Tibetan:རྫ་ཆུ,Wylie:rDza chu,ZYPY:Za qu, andThai:แม่น้ำโขง[mɛ̂ːnáːmkʰǒːŋ]are theTibetanandThainames, respectively, for the same river, theMekonginsoutheast Asia. (The Tibetan name is used forthree other riversas well.)
Hydronyms from various languages may all share a commonetymology. For example, theDanube,Don,Dniester,Dnieper, andDonetsrivers all contain theScythianname for "river" (cf.don, "river, water" in modernOssetic).[7][8]A similar suggestion is that theYarden,Yarkon, andYarmouk(and possibly, with distortion,Yabbokand/orArnon) rivers in theIsrael/Jordanarea contain theEgyptianword for river (itrw, transliterated in theBibleasye'or).
It is also possible for atoponymto become a hydronym: for example, theRiver Liffeytakes its name from the plain on which it stands, calledLipheorLife; the river originally was calledAn Ruirthech.[9][10]An unusual example is theRiver Cam, which originally was called theGranta, but when the town ofGrantebrycgebecameCambridge, the river's name changed to match the toponym. Another unusual example is theRiver Stortwhich is named after the town on the fordBishops Stortfordrather than the town being named after the river.
Compared to most other toponyms, hydronyms are very conservative linguistically, and people who move to an area often retain the existing name of a body of water rather than rename it in their own language.[11]For example, theRhineinGermanybears aCelticname, not aGermanname.[12]TheMississippi Riverin theUnited Statesbears anAnishinaabename, not a French or English one.[13]The names of large rivers are even more conservative than the local names of small streams.
Therefore, hydronomy may be a tool used to reconstruct past cultural interactions, population movements, religious conversions, or older languages.[14]For example, history professorKenneth H. Jacksonidentified a river-name pattern against which to fit the story of theAnglo-Saxoninvasion of Britain and pockets of surviving native British culture.[15]His river map of Britain divided the island into three principal areas of English settlement: the river valleys draining eastward in which surviving British names are limited to the largest rivers and Saxon settlement was early and dense; the highland spine; and a third region whose British hydronyms apply even to the smaller streams.
|
https://en.wikipedia.org/wiki/Hydronym
|
Amononymis a name composed of only one word. An individual who is known and addressed by a mononym is amononymous person.
A mononym may be the person's only name, given to them at birth. This was routine in most ancient societies, and remains common in modern societies such as inAfghanistan,[1]Bhutan, some parts ofIndonesia(especially by olderJavanesepeople),Myanmar,Mongolia,Tibet,[2]andSouth India.
In other cases, a person may select a single name from theirpolynymor adopt a mononym as a chosen name,pen name,stage name, orregnal name. A popularnicknamemay effectively become a mononym, in some cases adopted legally. For some historical figures, a mononym is the only name that is still known today.
The wordmononymcomes from Englishmono-("one", "single") and-onym("name", "word"), ultimately fromGreekmónos(μόνος, "single"), andónoma(ὄνομα, "name").[a][b]
The structure of persons' names has varied across time and geography. In somesocieties, individuals have been mononymous, receiving only a single name.Alulim, first king ofSumer, is one of the earliest names known;Narmer, anancient Egyptianpharaoh, is another. In addition, Biblical names likeAdam,Eve,Moses, orAbraham, were typically mononymous, as were names in the surrounding cultures of theFertile Crescent.[4]
Ancient Greeknames likeHeracles,Homer,Plato,Socrates, andAristotle, also follow the pattern, withepithets(similar to second names) only used subsequently by historians to distinguish between individuals with the same name, as in the case ofZeno the StoicandZeno of Elea; likewise,patronymicsor other biographic details (such ascityof origin, or another place name or occupation the individual was associated with) were used to specify whom one was talking about, but these details were not considered part of the name.[5]
A departure from this custom occurred, for example, among theRomans, who by theRepublicanperiod and throughout theImperialperiodused multiple names: a male citizen's name comprised three parts (this was mostly typical of the upper class, while others would usually have only two names):praenomen(given name),nomen(clan name) andcognomen(family line within the clan) – thenomenandcognomenwere almost always hereditary.[6]Famous ancient Romans who today are usually referred to by mononym includeCicero(Marcus Tullius Cicero) andTerence(Publius Terentius Afer).Roman emperors, for exampleAugustus,Caligula, andNero, are also often referred to in English by mononym.
Mononyms in other ancient cultures includeHannibal, theCelticqueenBoudica, and theNumidiankingJugurtha.
During theearly Middle Ages, mononymity slowly declined, with northern and easternEuropekeeping the tradition longer than the south. TheDutch Renaissancescholar and theologianErasmusis a late example of mononymity; though sometimes referred to as "Desiderius Erasmus" or "Erasmus of Rotterdam", he was christened only as "Erasmus", after themartyrErasmus of Formiae.[7]
Composers in thears novaandars subtiliorstyles of latemedieval musicwere often known mononymously—potentially because their names weresobriquets—such asBorlet,Egardus,Egidius,Grimace,Solage, andTrebor.[8]
Naming practices ofindigenous peoples of the Americasare highly variable, with one individual often bearing more than one name over a lifetime. In European and American histories, prominent Native Americans are usually mononymous, using a name that was frequently garbled and simplified in translation. For example, the Aztec emperor whose name was preserved inNahuatldocuments asMotecuhzoma Xocoyotzinwas called "Montezuma" in subsequent histories. In current histories he is often namedMoctezuma II, using the European custom of assigningregnal numbersto hereditary heads of state.
Native Americans from the 15th through 19th centuries, whose names are often thinly documented in written sources, are still commonly referenced with a mononym. Examples includeAnacaona(Haiti, 1464–1504),Agüeybaná(Puerto Rico, died 1510),Diriangén(Nicaragua, died 1523),Urracá(Panama, died 1531),Guamá(Cuba, died 1532),Atahualpa(Peru, 1497–1533),Lempira(Honduras, died 1537),Lautaro(Chile, 1534–1557),Tamanaco(Venezuela, died 1573),Pocahontas(United States, 1595–1617),Auoindaon(Canada, fl. 1623),Cangapol(Argentina, fl. 1735), andTecumseh(United States, 1768–1813).
Prominent Native Americans having a parent of European descent often received a European-style polynym in addition to a name or names from their indigenous community. The name of the Dutch-Seneca diplomatCornplanteris a translation of aSeneca-languagemononym (Kaintwakon, roughly "corn-planter"). He was also called "John Abeel" after hisDutchfather. His later descendants, includingJesse Cornplanter, used "Cornplanter" as a surname instead of "Abeel".
Some French authors have shown a preference for mononyms. In the 17th century, the dramatist and actor Jean-Baptiste Poquelin (1622–73) took the mononym stage name Molière.[9]
In the 18th century, François-Marie Arouet (1694–1778) adopted the mononymVoltaire, for both literary and personal use, in 1718 after his imprisonment in Paris'Bastille, to mark a break with his past. The new name combined several features. It was ananagramfor aLatinizedversion (where "u" becomes "v", and "j" becomes "i") of his familysurname, "Arouet, l[e] j[eune]" ("Arouet, the young"); it reversed the syllables of the name of the town his father came from, Airvault; and it has implications of speed and daring through similarity to French expressions such asvoltige,volte-faceandvolatile. "Arouet" would not have served the purpose, given that name's associations with "roué" and with an expression that meant "for thrashing".[10]
The 19th-century French authorMarie-Henri Beyle(1783–1842) used manypen names, most famously the mononym Stendhal, adapted from the name of the littlePrussiantown ofStendal, birthplace of the German art historianJohann Joachim Winckelmann, whom Stendhal admired.[11]
Nadar[12](Gaspard-Félix Tournachon, 1820–1910) was an early French photographer.
In the 20th century,Sidonie-Gabrielle Colette(1873–1954, author ofGigi, 1945), used her actual surname as her mononym pen name, Colette.[13]
In the 17th and 18th centuries, most Italian castrato singers used mononyms as stage names (e.g.Caffarelli,Farinelli). The German writer, mining engineer, and philosopher Georg Friedrich Philipp Freiherr von Hardenberg (1772–1801) became famous asNovalis.[14]
The 18th-century Italian painterBernardo Bellotto, who is now ranked as an important and original painter in his own right, traded on the mononymous pseudonym of his uncle and teacher, Antonio Canal (Canaletto), in those countries—Poland and Germany—where his famous uncle was not active, calling himself likewise "Canaletto". Bellotto remains commonly known as "Canaletto" in those countries to this day.[15]
The 19th-century Dutch writer Eduard Douwes Dekker (1820–87), better known by his mononymous pen nameMultatuli[16](from theLatinmulta tuli, "I have suffered [orborne] many things"), became famous for the satirical novel,Max Havelaar(1860), in which he denounced the abuses ofcolonialismin theDutch East Indies(nowIndonesia).
The 20th-century British authorHector Hugh Munro(1870–1916) became known by hispen name, Saki. In 20th-century Poland, thetheater-of-the-absurdplaywright, novelist,painter, photographer, andphilosopherStanisław Ignacy Witkiewicz(1885–1939) after 1925 often used the mononymous pseudonym Witkacy, aconflationof his surname (Witkiewicz) andmiddle name(Ignacy).[17]
Monarchsand otherroyalty, for exampleNapoleon, have traditionally availed themselves of theprivilegeof using a mononym, modified when necessary by anordinalorepithet(e.g., QueenElizabeth IIorCharles the Great). This is not always the case: KingCarl XVI Gustafof Sweden has two names. While many European royals have formally sportedlong chainsof names, in practice they have tended to use only one or two and not to usesurnames.[c]
In Japan, the emperor and his family have no surname, only a given name, such asHirohito, which in practice in Japanese is rarely used: out of respect and as a measure of politeness, Japanese prefer to say "the Emperor" or "the Crown Prince".[19]
Roman Catholicpopeshave traditionally adopted a single,regnal nameupon theirelection.John Paul Ibroke with this tradition – adopting a double name honoring his two predecessors[20]– and his successorJohn Paul IIfollowed suit, butBenedict XVIreverted to the use of a single name.
Surnames were introduced inTurkeyonly afterWorld War I, by the country's first president,Mustafa Kemal Atatürk, as part of his Westernization and modernization programs.[21]
SomeNorth American Indigenouspeople continue their nations' traditional naming practices, which may include the use of single names. InCanada, where government policy often included the imposition of Western-style names, one of the recommendations of theTruth and Reconciliation Commission of Canadawas for all provinces and territories to waive fees to allow Indigenous people to legally assume traditional names, including mononyms.[22]InOntario, for example, it is now legally possible to change to a single name or register one at birth, for members ofIndigenous nationswhich have a tradition of single names.[23]
In modern times, in countries that have long been part of theEast Asian cultural sphere(Japan, the Koreas, Vietnam, and China), mononyms are rare. An exception pertains to theEmperor of Japan.
In the past, mononyms were common inIndonesia, especially inJavanese names.[24]Some younger people may still have them, but the practice is becoming rarer, because mononyms have not been allowed for newborns since 2022 (seeNaming law § Indonesia).[25]
Single names still also occur inTibet.[2]MostAfghansalso have no surname.[26]
InBhutan, most people use either only one name or a combination of two personal names typically given by a Buddhist monk. There are no inherited family names; instead, Bhutanese differentiate themselves with nicknames or prefixes.[27]
In theNear East'sArabworld, the Syrian poet Ali Ahmad Said Esber (born 1930) at age 17 adopted the mononym pseudonym,Adunis, sometimes also spelled "Adonis". A perennial contender for the Nobel Prize in Literature, he has been described as the greatest living poet of the Arab world.[28]
In the West, mononymity, as well as its use by royals in conjunction with titles, has been primarily used or given to famous people such as prominent writers,artists,entertainers, musicians andathletes.[d]
ThecomedianandillusionistTeller, the silent half of the duoPenn & Teller, legally changed his original polynym, Raymond Joseph Teller, to the mononym "Teller" and possesses aUnited States passportissued in that single name.[30][31]Similarly,Kanye Westlegally changed his name to the mononym "Ye".[32]
In Brazil, it is very common for footballers to go by one name for simplicity and as a personal brand. Examples includePelé,RonaldoandKaká. Brazil's PresidentLuiz Inácio Lula da Silvais known as "Lula", a nickname he officially added to his full name. Such mononyms, which take their origin ingiven names,surnamesornicknames, are often used becausePortuguese namestend to be rather long.
In Australia, where nicknames and short names are extremely common, individuals with long names of European origin (such as formerPremier of New South WalesGladys Berejiklian, who is of Armenian descent, and soccer managerAnge Postecoglou, who was born in Greece) will often be referred to by a mononym, even in news headlines. Similarly, Greek basketball playerGiannis Antetokounmpois often referred to outside Greece as just "Giannis" due to the length of his last name.
Western computer systems do not always support mononyms, with most still requiring a given name and a surname. Some companies get around this by entering the mononym as both the given name and the surname.
Mononyms are commonly used by many association footballers.
A large number of Brazilian footballers use mononyms, such asAlisson,Kaká,Neymar,RonaldoandRonaldinho.
Players from other countries where Portuguese is spoken, such as Portugal itself and Lusophone countries in Africa, also occasionally use mononyms, such asBruma,Otávio,Pepe,TotiandVitinhafrom Portugal.
Australian managerAnge Postecoglouand Spanish managerPep Guardiolaare commonly known as "Ange" and "Pep", even in news headlines.
|
https://en.wikipedia.org/wiki/Mononymous_persons
|
Anaming conventionis aconvention(generally agreed scheme) for naming things. Conventions differ in their intents.
Well-chosen naming conventions aid the casual user in navigating and searching larger structures, and they are commonly used in many areas.
|
https://en.wikipedia.org/wiki/Naming_convention
|
Capacity optimizationis a general term for technologies used to improve storage use by shrinking stored data. Primary technologies used for capacity optimization aredata deduplicationanddata compression. These are delivered as software or hardware, integrated with storage systems or delivered as standalone products. Deduplication algorithms look for redundancy in sequences of bytes across comparison windows. Typically usingcryptographic hash functionsas identifiers of unique sequences, they compare each sequence to the history of other such sequences and, where possible, reference the first uniquely stored version of a sequence rather than storing it again. Methods for selecting data windows range from fixed-size blocks (for example, 4 KB) to whole-file comparisons, the latter known assingle-instance storage(SIS).
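A rough sketch of the block-oriented approach described above, assuming fixed 4 KB comparison windows and SHA-256 digests as the identifiers of unique sequences (real products use far more elaborate chunking and indexing):

import hashlib

BLOCK = 4096   # 4 KB comparison window

def deduplicate(data, store):
    """Store each unique 4 KB block once in `store` (a dict keyed by digest)
    and return the list of references that describes `data`."""
    references = []
    for offset in range(0, len(data), BLOCK):
        block = data[offset:offset + BLOCK]
        key = hashlib.sha256(block).hexdigest()
        # The first uniquely stored version of a sequence is referenced
        # rather than stored again.
        store.setdefault(key, block)
        references.append(key)
    return references

store = {}
refs = deduplicate(b"A" * 16384 + b"B" * 4096, store)
print(len(refs), "block references,", len(store), "blocks actually stored")   # 5 and 2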
Capacity optimization generally refers to the use of this kind of technology in a storage system. An example of this kind of system is the Venti file system[1]in the Plan9 open source OS. There are also implementations in networking (especially wide-area networking), where they are sometimes calledbandwidth optimizationorWAN optimization.[2][3]
Commercial implementations of capacity optimization are most often found in backup/recovery storage, where storage of iterating versions of backups day to day creates an opportunity for reduction in space using this approach. The term was first used widely in 2005.[4]
|
https://en.wikipedia.org/wiki/Capacity_optimization
|
Content-addressable storage(CAS), also referred to ascontent-addressed storageorfixed-content storage, is a way to store information so it can be retrieved based on its content, not its name or location. It has been used for high-speed storage andretrievalof fixed content, such as documents stored for compliance with government regulations.[citation needed]Content-addressable storage is similar tocontent-addressable memory.
CAS systems work by passing the content of the file through acryptographic hash functionto generate a unique key, the "content address". Thefile system'sdirectorystores these addresses and a pointer to the physical storage of the content. Because an attempt to store the same file will generate the same key, CAS systems ensure that the files within them are unique, and because changing the file will result in a new key, CAS systems provide assurance that the file is unchanged.
CAS became a significant market during the 2000s, especially after the introduction of the 2002Sarbanes–Oxley Actin the United States which required the storage of enormous numbers of documents for long periods and retrieved only rarely. Ever-increasing performance of traditional file systems and new software systems have eroded the value of legacy CAS systems, which have become increasingly rare after roughly 2018[citation needed]. However, the principles of content addressability continue to be of great interest to computer scientists, and form the core of numerous emerging technologies, such aspeer-to-peer file sharing,cryptocurrencies, anddistributed computing.
Traditionalfile systemsgenerally track files based on theirfilename. On random-access media like afloppy disk, this is accomplished using adirectorythat consists of some sort of list of filenames and pointers to the data. The pointers refer to a physical location on the disk, normally usingdisk sectors. On more modern systems and larger formats likehard drives, the directory is itself split into many subdirectories, each tracking a subset of the overall collection of files. Subdirectories are themselves represented as files in a parent directory, producing a hierarchy or tree-like organization. The series of directories leading to a particular file is known as a "path".[1]
In the context of CAS, these traditional approaches are referred to as "location-addressed", as each file is represented by a list of one or more locations, the path and filename, on the physical storage. In these systems, the same file with two different names will be stored as two files on disk and thus have two addresses. The same is true if the same file, even with the same name, is stored in more than one location in the directory hierarchy. This makes them less than ideal for adigital archive, where any unique information should only be stored once.[2]
As the concept of the hierarchical directory became more common inoperating systemsespecially during the late 1980s, this sort of access pattern began to be used by entirely unrelated systems. For instance, theWorld Wide Webuses a similar pathname/filename-like system known as theURLto point to documents. The same document on anotherweb serverhas a different URL in spite of being identical content. Likewise, if an existing location changes in any way, if the filename changes or the server moves to a newdomain name servicename, the document is no longer accessible. This leads to the common problem oflink rot.[2]
Although location-based storage is widely used in many fields, this was not always the case. Previously, the most common way to retrieve data from a large collection was to use some sort of identifier based on the content of the document. For instance, theISBNsystem is used to generate a unique number for every book. If one performs a web search for "ISBN 0465048994", one will be provided with a list of locations for the bookWhy Information Growson the topic of information storage. Although many locations will be returned, they all refer to the same work, and the user can then pick whichever location is most appropriate. Additionally, if any one of these locations changes or disappears, the content can be found at any of the other locations.[2]
CAS systems attempt to produce ISBN-like results automatically and on any document. They do this by using acryptographic hash functionon the data of the document to produce what is sometimes known as a "key" or "fingerprint". This key is strongly tied to the exact content of the document: adding a single space at the end of the file, for instance, will produce a different key. In a CAS system, the directory does not map filenames onto locations, but uses the keys instead.[2]
This provides several benefits. For one, when a file is sent to the CAS for storage, the hash function will produce a key and then check to see if that key already exists in the directory. If it does, the file is not stored as the one already in storage is identical. This allows CAS systems to easily avoid duplicate data. Additionally, as the key is based on the content of the file, retrieving a document with a given key ensures that the stored file has not been changed. The downside to this approach is that any changes to the document produces a different key, which makes CAS systems unsuitable for files that are often edited. For all of these reasons, CAS systems are normally used for archives of largely static documents,[2]and are sometimes known as "fixed content storage" (FCS).[3]
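The following is a minimal sketch of this behaviour, with duplicate detection on storage and re-verification of the key on retrieval; the class and method names are invented for the illustration.

import hashlib

class ContentAddressableStore:
    """Toy CAS: the content address is the SHA-256 digest of the bytes."""

    def __init__(self):
        self._blobs = {}                      # content address -> stored bytes

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key not in self._blobs:            # identical content is stored only once
            self._blobs[key] = data
        return key

    def get(self, key: str) -> bytes:
        data = self._blobs[key]
        # Retrieval can re-verify that the stored bytes still match the key.
        assert hashlib.sha256(data).hexdigest() == key
        return data

cas = ContentAddressableStore()
k1 = cas.put(b"archived report, final version")
k2 = cas.put(b"archived report, final version")    # duplicate: same key, nothing new stored
k3 = cas.put(b"archived report, final version ")   # one extra space: different key
print(k1 == k2, k1 == k3)                           # True False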
Because the keys are not human-readable, CAS systems implement a second type of directory that storesmetadatathat will help users find a document. These almost always include a filename, allowing the classic name-based retrieval to be used. But the directory will also include fields for common identification systems like ISBN orISSNcodes, user-provided keywords, time and date stamps, andfull-text searchindexes. Users can search these directories and retrieve a key, which can then be used to retrieve the actual document.[2]
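A sketch of such a metadata directory, keeping human-friendly fields alongside the content address (the field names and sample records are invented for the example):

import hashlib

blobs = {}            # the CAS proper: content address -> bytes
metadata_index = []   # searchable records pointing at content addresses

def register(data, filename, keywords):
    key = hashlib.sha256(data).hexdigest()
    blobs.setdefault(key, data)                     # store the content once
    metadata_index.append({"filename": filename,
                           "keywords": set(keywords),
                           "content_address": key})
    return key

def find_by_keyword(word):
    return [entry["content_address"]
            for entry in metadata_index if word in entry["keywords"]]

register(b"quarterly filing 1999", "q4-1999.txt", ["filing", "1999"])
register(b"quarterly filing 2000", "q4-2000.txt", ["filing", "2000"])
print(find_by_keyword("filing"))    # two content addresses, found without knowing the keys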
Using a CAS is very similar to using aweb search engine. The primary difference is that a web search is generally performed on a topic basis using an internal algorithm that finds "related" content and then produces a list of locations. The results may be a list of the identical content in multiple locations. In a CAS, more than one document may be returned for a given search, but each of those documents will be unique and presented only once.
Another advantage to CAS is that the physical location in storage is not part of the lookup system. If, for instance, a library'scard catalogstated a book could be found on "shelf 43, bin 10", if the library is re-arranged the entire catalog has to be updated. In contrast, the ISBN will not change and the book can be found by looking for the shelf with those numbers. In the computer setting, a file in theDOSfilesystem at the path A:\myfiles\textfile.txt points to the physical storage of the file in the myfiles subdirectory. This file disappears if the floppy is moved to the B: drive, and even moving its location within the disk hierarchy requires the user-facing directories to be updated. In CAS, only the internal mapping from key to physical location changes, and this exists in only one place and can be designed for efficient updating. This allows files to be moved among storage devices, and even across media, without requiring any changes to the retrieval.
For data that changes frequently, CAS is not as efficient as location-based addressing. In these cases, the CAS device would need to continually recompute the address of data as it was changed. This would result in multiple copies of the entire, almost-identical document being stored, which is the very problem that CAS attempts to avoid. Additionally, the user-facing directories would have to be continually updated with these "new" files, which would become polluted by many similar documents that would make searching more difficult. In contrast, updating a file in a location-based system is highly optimized: only the internal list of sectors has to be changed, and many years of tuning have been applied to this operation.
Because CAS is used primarily for archiving, file deletion is often tightly controlled or even impossible under user control. In contrast, automatic deletion is a common feature, removing all files older than some legally defined requirement, say ten years.[2]
The simplest way to implement a CAS system is to store all of the files within a typical database to which clients connect to add, query, and retrieve files. However, the unique properties of content addressability mean that the paradigm is well suited for computer systems in which multiple hosts collaboratively manage files with no central authority, such as distributedfile sharingsystems, in which the physical location of a hosted file can change rapidly in response to changes in network topology, while the exact content of the files to be retrieved are of more importance to users than their current physical location. In a distributed system, content hashes are often used for quick network-wide searches for specific files, or to quickly see which data in a given file has been changed and must be propagated to other members of the network with minimalbandwidthusage. In these systems, content addressability allows highly variable network topology to be abstracted away from users who wish to access data, compared to systems like theWorld Wide Web, in which a consistent location of a file or service is key to easy use.
A hardware device called theContent Addressable File Store(CAFS) was developed byInternational Computers Limited(ICL) in the late 1960s and put into use byBritish Telecomin the early 1970s fortelephone directorylookups. The user-accessible search functionality was maintained by thedisk controllerwith a high-levelapplication programming interface(API) so users could send queries into what appeared to be ablack boxthat returned documents. The advantage was that no information had to be exchanged with the host computer while the disk performed the search.
Paul Carpentier and Jan van Riel coined the term CAS while working at a company called FilePool in the late 1990s. FilePool was purchased byEMC Corporationin 2001 and was released the next year as Centera.[4]The timing was perfect; the introduction of theSarbanes–Oxley Actin 2002 required companies to store huge amounts of documentation for extended periods and required them to do so in a fashion that ensured they were not edited after-the-fact.[5]
A number of similar products soon appeared from other large-system vendors. In mid-2004, the industry groupSNIAbegan working with a number of CAS providers to create standard behavior and interoperability guidelines for CAS systems.[6]
In addition to CAS, a number of similar products emerged that added CAS-like capabilities to existing products; notable among these wasIBM Tivoli Storage Manager. The rise ofcloud computingand the associatedelastic cloud storagesystems likeAmazon S3further diluted the value of dedicated CAS systems.Dellpurchased EMC in 2016 and stopped sales of the original Centera in 2018 in favor of their elastic storage product.[7]
CAS was not associated withpeer-to-peerapplications until the 2000s, when rapidly proliferatingInternet accessin homes and businesses led to a large number of computer users who wanted to swap files, originally doing so on centrally managed services likeNapster. However, aninjunction against Napsterprompted the independent development of file-sharing services such asBitTorrent, which could not be centrally shut down. In order to function without a central federating server, these services rely heavily on CAS to enforce the faithful copying and easy querying of unique files. At the same time, the growth of theopen-source softwaremovement in the 2000s led to the rapid proliferation of CAS-based services such asGit, aversion controlsystem that uses numerous cryptographic functions such asMerkle treesto enforce data integrity between users and allow for multiple versions of files with minimal disk and network usage. Around this time, individual users ofpublic-key cryptographyused CAS to store their public keys on systems such askey servers.
The rise ofmobile computingand high capacitymobile broadbandnetworks in the 2010s, coupled with increasing reliance onweb applicationsfor everyday computing tasks, strained the existing location-addressedclient–server modelcommonplace among Internet services, leading to an accelerated pace oflink rotand an increased reliance on centralizedcloud hosting. Furthermore, growing concerns about thecentralizationof computing power in the hands oflarge technology companies, potentialmonopolypower abuses, andprivacyconcerns led to a number of projects created with the goal of creating moredecentralizedsystems.Bitcoinuses CAS andpublic/private key pairsto manage wallet addresses, as do most othercryptocurrencies.IPFSuses CAS to identify and address communally hosted files on its network. Numerous otherpeer-to-peersystems designed to run onsmartphones, which often access the Internet from varying locations, utilize CAS to store and access user data for both convenience and data privacy purposes, such as secureinstant messaging.
The Centera CAS system consists of a series of networked nodes (typically large servers runningLinux), divided between storage nodes and access nodes. The access nodes maintain a synchronized directory of content addresses, and the corresponding storage node where each address can be found. When a new data element, orblob, is added, the device calculates ahashof the content and returns this hash as the blob's content address.[8]As mentioned above, the hash is searched to verify that identical content is not already present. If the content already exists, the device does not need to perform any additional steps; the content address already points to the proper content. Otherwise, the data is passed off to a storage node and written to the physical media.
When a content address is provided to the device, it first queries the directory for the physical location of the specified content address. The information is then retrieved from a storage node, and the actual hash of the data recomputed and verified. Once this is complete, the device can supply the requested data to the client. Within the Centera system, each content address actually represents a number of distinct data blobs, as well as optionalmetadata. Whenever a client adds an additional blob to an existing content block, the system recomputes the content address.
To provide additional data security, the Centera access nodes, when no read or write operation is in progress, constantly communicate with the storage nodes, checking the presence of at least two copies of each blob as well as their integrity. Additionally, they can be configured to exchange data with a different, e.g., off-site, Centera system, thereby strengthening the precautions against accidental data loss.
IBM has another flavor of CAS which can be software-based, Tivoli Storage manager 5.3, or hardware-based, the IBM DR550. The architecture is different in that it is based onhierarchical storage management(HSM) design which provides some additional flexibility such as being able to support not onlyWORMdisk but WORM tape and the migration of data from WORM disk to WORM tape and vice versa. This provides for additional flexibility in disaster recovery situations as well as the ability to reduce storage costs by moving data off the disk to tape.
Another typical implementation is iCAS from iTernity. The concept of iCAS is based on containers. Each container is addressed by its hash value. A container holds different numbers of fixed content documents. The container is not changeable, and the hash value is fixed after the write process.
|
https://en.wikipedia.org/wiki/Content-addressable_storage
|
Incomputing,data deduplicationis a technique for eliminating duplicate copies of repeating data. Successful implementation of the technique can improve storage utilization, which may in turn lowercapital expenditureby reducing the overall amount of storage media required to meet storage capacity needs. It can also be applied to network data transfers to reduce the number of bytes that must be sent.
The deduplication process requires comparison of data 'chunks' (also known as 'byte patterns') which are unique, contiguous blocks of data. These chunks are identified and stored during a process of analysis, and compared to other chunks within existing data. Whenever a match occurs, the redundant chunk is replaced with a small reference that points to the stored chunk. Given that the same byte pattern may occur dozens, hundreds, or even thousands of times (the match frequency is dependent on the chunk size), the amount of data that must be stored or transferred can be greatly reduced.[1][2]
A related technique issingle-instance (data) storage, which replaces multiple copies of content at the whole-file level with a single shared copy. While possible to combine this with other forms of data compression and deduplication, it is distinct from newer approaches to data deduplication (which can operate at the segment or sub-block level).
Deduplication is different from data compression algorithms, such asLZ77 and LZ78. Whereas compression algorithms identify redundant data inside individual files and encode this redundant data more efficiently, the intent of deduplication is to inspect large volumes of data and identify large sections – such as entire files or large sections of files – that are identical, and replace them with a shared copy.
For example, a typical email system might contain 100 instances of the same 1 MB (megabyte) file attachment. Each time theemailplatform is backed up, all 100 instances of the attachment are saved, requiring 100 MB of storage space. With data deduplication, only one instance of the attachment is actually stored; the subsequent instances are referenced back to the saved copy, for a deduplication ratio of roughly 100 to 1. Deduplication is often paired with data compression for additional storage saving: deduplication is first used to eliminate large chunks of repetitive data, and compression is then used to efficiently encode each of the stored chunks.[3]
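A minimal sketch of this pairing, assuming 64 KB chunks, SHA-256 identifiers, and zlib for the compression step; the sizes, names, and sample data are all illustrative.

import hashlib
import zlib

CHUNK = 65536   # 64 KB chunks for the example

def backup(stream, store):
    """Deduplicate first, then compress each newly stored chunk."""
    recipe = []
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        key = hashlib.sha256(chunk).hexdigest()
        if key not in store:
            store[key] = zlib.compress(chunk)   # only unique chunks are kept
        recipe.append(key)                      # the backup is a list of references
    return recipe

def restore(recipe, store):
    return b"".join(zlib.decompress(store[key]) for key in recipe)

attachment = (b"the same attachment " * 4000)[:CHUNK]   # one 64 KB "attachment"
mailbox = attachment * 100                              # 100 identical copies
store = {}
recipe = backup(mailbox, store)
assert restore(recipe, store) == mailbox
print(len(recipe), "references,", len(store), "unique chunk(s) stored")   # 100 and 1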
Incomputer code, deduplication is done by, for example, storing information invariablesso that they don't have to be written out individually but can be changed all at once at a centralreferencedlocation. Examples areCSS classesandnamed referencesinMediaWiki.
Storage-based data deduplication reduces the amount of storage needed for a given set of files. It is most effective in applications where many copies of very similar or even identical data are stored on a single disk. In the case of data backups, which routinely are performed to protect against data loss, most data in a given backup remain unchanged from the previous backup. Common backup systems try to exploit this by omitting (orhard linking) files that haven't changed or storingdifferencesbetween files. Neither approach captures all redundancies, however. Hard-linking does not help with large files that have only changed in small ways, such as an email database; differences only find redundancies in adjacent versions of a single file (consider a section that was deleted and later added in again, or a logo image included in many documents).
In-line network data deduplication is used to reduce the number of bytes that must be transferred between endpoints, which can reduce the amount of bandwidth required. SeeWAN optimizationfor more information.
Virtual servers and virtual desktops benefit from deduplication because it allows nominally separate system files for each virtual machine to be coalesced into a single storage space. At the same time, if a given virtual machine customizes a file, deduplication will not change the files on the other virtual machines—something that alternatives like hard links or shared disks do not offer. Backing up or making duplicate copies of virtual environments is similarly improved.
Deduplication may occur "in-line", as data is flowing, or "post-process" after it has been written.
With post-process deduplication, new data is first stored on the storage device and then a process at a later time will analyze the data looking for duplication. The benefit is that there is no need to wait for thehashcalculations and lookup to be completed before storing the data, thereby ensuring that store performance is not degraded. Implementations offering policy-based operation can give users the ability to defer optimization on "active" files, or to process files based on type and location. One potential drawback is that duplicate data may be unnecessarily stored for a short time, which can be problematic if the system is nearing full capacity.
Alternatively, deduplication hash calculations can be done in-line: synchronized as data enters the target device. If the storage system identifies a block which it has already stored, only a reference to the existing block is stored, rather than the whole new block.
The advantage of in-line deduplication over post-process deduplication is that it requires less storage and network traffic, since duplicate data is never stored or transferred. On the negative side, hash calculations may be computationally expensive, thereby reducing the storage throughput. However, certain vendors with in-line deduplication have demonstrated equipment which performs in-line deduplication at high rates.
Post-process and in-line deduplication methods are often heavily debated.[4][5]
TheSNIA Dictionaryidentifies two methods:[2]
Another way to classify data deduplication methods is according to where they occur. Deduplication occurring close to where data is created, is referred to as "source deduplication". When it occurs near where the data is stored, it is called "target deduplication".
Source deduplication ensures that data on the data source is deduplicated. This generally takes place directly within a file system. The file system periodically scans new files, creating hashes, and compares them to the hashes of existing files. When files with the same hashes are found, the file copy is removed and the new file points to the old file. Unlikehard links, however, duplicated files are considered to be separate entities, and if one of the duplicated files is later modified, then using a system calledcopy-on-writea copy of that changed file or block is created. The deduplication process is transparent to the users and backup applications. Backing up a deduplicated file system will often cause duplication to occur, resulting in backups that are bigger than the source data.[6][7]
Source deduplication can be declared explicitly for copying operations, as no calculation is needed to know that the copied data is in need of deduplication. This leads to a new form of "linking" on file systems called thereflink(Linux) orclonefile(MacOS), where one or moreinodes(file information entries) are made to share some or all of their data. It is named analogously tohard links, which work at the inode level, andsymbolic linksthat work at the filename level.[8]The individual entries have a copy-on-write behavior that is non-aliasing, i.e. changing one copy afterwards will not affect other copies.[9]Microsoft'sReFSalso supports this operation.[10]
Target deduplication is the process of removing duplicates when the data was not generated at that location. An example would be a server connected to a SAN/NAS: the SAN/NAS is a target for the server (target deduplication), while the server, which is the point of data generation, is not aware of any deduplication. A second example is backup, where the target is generally a backup store such as a data repository or avirtual tape library.
One of the most common forms of data deduplication implementations works by comparing chunks of data to detect duplicates. For that to happen, each chunk of data is assigned an identification, calculated by the software, typically using cryptographic hash functions. In many implementations, the assumption is made that if the identification is identical, the data is identical, even though this cannot be true in all cases due to thepigeonhole principle; other implementations do not assume that two blocks of data with the same identifier are identical, but actually verify that data with the same identification is identical.[11]If the software either assumes that a given identification already exists in the deduplication namespace or actually verifies the identity of the two blocks of data, depending on the implementation, then it will replace that duplicate chunk with a link.
Once the data has been deduplicated, upon read back of the file, wherever a link is found, the system simply replaces that link with the referenced data chunk. The deduplication process is intended to be transparent to end users and applications.
Commercial deduplication implementations differ by their chunking methods and architectures.
To date, data deduplication has predominantly been used with secondary storage systems. The reasons for this are two-fold: First, data deduplication requires overhead to discover and remove the duplicate data. In primary storage systems, this overhead may impact performance. The second reason why deduplication is applied to secondary data, is that secondary data tends to have more duplicate data. Backup application in particular commonly generate significant portions of duplicate data over time.
Data deduplication has been deployed successfully with primary storage in some cases where the system design does not require significant overhead, or impact performance.
Single-instance storage(SIS) is a system's ability to take multiple copies of content objects and replace them by a single shared copy. It is a means to eliminate data duplication and to increase efficiency. SIS is frequently implemented infile systems,email serversoftware,databackup, and other storage-related computer software. Single-instance storage is a simple variant of data deduplication. While data deduplication may work at a segment or sub-block level, single instance storage works at the object level, eliminating redundant copies of objects such as entire files or email messages.[12]
Single-instance storage can be used alongside (or layered upon) other data duplication or data compression methods to improve performance in exchange for an increase in complexity and for (in some cases) a minor increase in storage space requirements.
One method for deduplicating data relies on the use ofcryptographic hash functionsto identify duplicate segments of data. If two different pieces of information generate the same hash value, this is known as acollision. The probability of a collision depends mainly on the hash length (seebirthday attack). Thus, the concern arises thatdata corruptioncan occur if ahash collisionoccurs, and additional means of verification are not used to verify whether there is a difference in data, or not. Both in-line and post-process architectures may offer bit-for-bit validation of original data for guaranteed data integrity. The hash functions used include standards such asSHA-1,SHA-256, and others.
The computational resource intensity of the process can be a drawback of data deduplication. To improve performance, some systems utilize both weak and strong hashes. Weak hashes are much faster to calculate, but there is a greater risk of a hash collision. Systems that utilize weak hashes will subsequently calculate a strong hash and use it as the determining factor in whether the data is actually the same or not. Note that the system overhead associated with calculating and looking up hash values is primarily a function of the deduplication workflow. The reconstitution of files does not require this processing, and any incremental performance penalty associated with re-assembly of data chunks is unlikely to impact application performance.
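A sketch of the two-tier check described here, using zlib.adler32 as the fast weak hash and SHA-256 as the strong confirmation; these particular functions and names are illustrative choices.

import hashlib
import zlib

class TwoTierIndex:
    """Weak hash as a cheap prefilter; the strong hash decides identity."""

    def __init__(self):
        self._by_weak = {}   # adler32 value -> list of (sha256 digest, chunk)

    def is_duplicate(self, chunk: bytes) -> bool:
        weak = zlib.adler32(chunk)
        candidates = self._by_weak.get(weak)
        if not candidates:                       # weak miss: certainly new data
            self._by_weak[weak] = [(hashlib.sha256(chunk).digest(), chunk)]
            return False
        strong = hashlib.sha256(chunk).digest()  # weak hit: confirm with the strong hash
        for stored_strong, _ in candidates:
            if stored_strong == strong:
                return True
        candidates.append((strong, chunk))       # weak collision but genuinely new data
        return False

index = TwoTierIndex()
print(index.is_duplicate(b"block one"))   # False: first time seen
print(index.is_duplicate(b"block one"))   # True: weak and strong hashes both match
print(index.is_duplicate(b"block two"))   # False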
Another concern is the interaction of compression and encryption. The goal of encryption is to eliminate any discernible patterns in the data. Thus encrypted data cannot be deduplicated, even though the underlying data may be redundant.
Although not a shortcoming of data deduplication, there have been data breaches when insufficient security and access validation procedures are used with large repositories of deduplicated data. In some systems, as typical with cloud storage,[citation needed]an attacker can retrieve data owned by others by knowing or guessing the hash value of the desired data.[13]
Deduplication is implemented in some filesystems such as inZFSorWrite Anywhere File Layoutand in differentdisk arraysmodels.[citation needed]It is a service available on bothNTFSandReFSon Windows servers.
|
https://en.wikipedia.org/wiki/Data_deduplication
|
Anentity–attribute–value model(EAV) is adata modeloptimized for the space-efficient storage of sparse—orad-hoc—property or data values, intended for situations where runtime usage patterns are arbitrary, subject to user variation, or otherwise unforeseeable using a fixed design. The use-case targets applications which offer a large or rich system of defined property types, which are in turn appropriate to a wide set of entities, but where typically only a small, specific selection of these are instantiated (or persisted) for a given entity. Therefore, this type of data model relates to the mathematical notion of asparse matrix.
EAV is also known asobject–attribute–value model,vertical database model, andopen schema.
This data representation is analogous to space-efficient methods of storing asparse matrix, where only non-empty values are stored. In an EAV data model, each attribute–value pair is a fact describing an entity, and a row in an EAV table stores a single fact. EAV tables are often described as "long and skinny": "long" refers to the number of rows, "skinny" to the few columns.
Data is recorded as three columns: the entity (the item being described), the attribute (or parameter), and the value of the attribute.
Consider how one would try to represent a general-purpose clinical record in a relational database. Clearly creating a table (or a set of tables) with thousands of columns is not feasible, because the vast majority of columns would benull. To complicate things, in a longitudinal medical record that follows the patient over time, there may be multiple values of the same parameter: the height and weight of a child, for example, change as the child grows. Finally, the universe of clinical findings keeps growing: for example, diseases emerge and new lab tests are devised; this would require constant addition of columns, and constant revision of the user interface. The term "attribute volatility" is sometimes used to describe the problems or situations that arise when the list of available attributes or their definitions needs to evolve over time.
The following shows a selection of rows of an EAV table for clinical findings from a visit to a doctor for a fever on the morning of 1998-05-01. The entries shown withinangle bracketsare references to entries in other tables, shown here as text rather than as encoded foreign key values for ease of understanding. In this example, thevaluesare all literal values, but they could also be pre-defined value lists. The latter are particularly useful when the possible values are known to be limited (i.e.,enumerable).
The example below illustrates symptoms findings that might be seen in a patient withpneumonia.
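As a rough sketch of how such findings might be stored, the following uses Python's built-in sqlite3 module; the table name, column names and the encoded entity reference are illustrative only, and all values are coerced to text for simplicity (a simplification discussed further below).

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE clinical_findings (       -- hypothetical EAV table: one fact per row
            entity    TEXT,   -- e.g. a reference to <patient, visit date/time>
            attribute TEXT,   -- in a real system, a foreign key into an attribute-definitions table
            value     TEXT    -- coerced to text here for simplicity
        )
    """)
    conn.executemany(
        "INSERT INTO clinical_findings VALUES (?, ?, ?)",
        [
            ("<patient XYZ, 1998-05-01 AM>", "presenting complaint", "fever"),
            ("<patient XYZ, 1998-05-01 AM>", "temperature (deg C)", "38.9"),
            ("<patient XYZ, 1998-05-01 AM>", "pulse (/min)", "86"),
        ],
    )
    # Retrieving one entity's "record" means selecting all rows for that entity:
    for attr, val in conn.execute(
        "SELECT attribute, value FROM clinical_findings WHERE entity = ?",
        ("<patient XYZ, 1998-05-01 AM>",),
    ):
        print(attr, "=", val)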
The EAV data described above is comparable to the contents of a supermarket sales receipt (which would be reflected in a Sales Line Items table in a database). The receipt lists only details of the items actually purchased, instead of listing every product in the shop that the customer might have purchased but didn't. Like the clinical findings for a given patient, the sales receipt is a compact representation of inherently sparse data.
Row modeling,[clarification needed]where facts about something (in this case, a sales transaction) are recorded as multiplerowsrather than multiplecolumns, is a standard data modeling technique. The differences between row modeling and EAV (which may be considered ageneralizationof row-modeling) are:
In a clinical data repository, row modeling also finds numerous uses; the laboratory test subschema is typically modeled this way, because lab test results are typically numeric, or can be encoded numerically.
The circumstances where you would need to go beyond standard row-modeling to EAV are listed below:
Certain ("hybrid") classes have some attributes that are non-sparse (present in all or most instances), while other attributes are highly variable and sparse. The latter are suitable for EAV modeling. For example, descriptions of products made by a conglomerate corporation depend on the product category, e.g., the attributes necessary to describe a brand of light bulb are quite different from those required to describe a medical imaging device, but both have common attributes such as packaging unit and per-item cost.
In clinical data, the entity is typically a clinical event, as described above. In more general-purpose settings, the entity is a foreign key into an "objects" table that records common information about every "object" (thing) in the database – at the minimum, a preferred name and brief description, as well as the category/class of entity to which it belongs. Every record (object) in this table is assigned a machine-generated object ID.
The "objects table" approach was pioneered by Tom Slezak and colleagues at Lawrence Livermore Laboratories for the Chromosome 19 database, and is now standard in most large bioinformatics databases. The use of an objects table does not mandate the concurrent use of an EAV design: conventional tables can be used to store the category-specific details of each object.
The major benefit to a central objects table is that, by having a supporting table of object synonyms and keywords, one can provide a standard Google-like search mechanism across the entire system where the user can find information about any object of interest without having to first specify the category that it belongs to. (This is important in bioscience systems where a keyword like "acetylcholine" could refer either to the molecule itself, which is a neurotransmitter, or the biological receptor to which it binds.)
In the EAV table itself, this is just an attribute ID, a foreign key into an Attribute Definitions table, as stated above. However, there are usually multiple metadata tables that contain attribute-related information, and these are discussed shortly.
Coercing all values into strings, as in the EAV data example above, results in a simple, but non-scalable, structure: constant data type inter-conversions are required if one wants to do anything with the values, and an index on the value column of an EAV table is essentially useless. Also, it is not convenient to store large binary data, such as images, in Base64-encoded form in the same table as small integers or strings. Therefore, larger systems use separate EAV tables for each data type (including binary large objects, "BLOBS"), with the metadata for a given attribute identifying the EAV table in which its data will be stored. This approach is actually quite efficient because the modest amount of attribute metadata for a given class or form that a user chooses to work with can be cached readily in memory. However, it requires moving data from one table to another if an attribute's data type is changed.
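A minimal sketch of this routing, assuming hypothetical attribute metadata that records each attribute's data type and the datatype-specific EAV table that holds its values:

    # Hypothetical attribute metadata: attribute -> (data type, EAV table holding its values)
    ATTRIBUTE_METADATA = {
        "temperature":          ("float",  "eav_float"),
        "presenting complaint": ("string", "eav_string"),
        "chest x-ray image":    ("blob",   "eav_blob"),
    }

    def target_table(attribute):
        """Return the datatype-specific EAV table in which values of this attribute are stored."""
        data_type, table = ATTRIBUTE_METADATA[attribute]
        return table

    # e.g. an insert routine would direct a temperature reading into the float-valued table:
    assert target_table("temperature") == "eav_float"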
EAV, as a general-purpose means ofknowledge representation, originated with the concept of "association lists" (attribute–value pairs). Commonly used today, these were first introduced in the languageLISP.[1]Attribute–value pairs are widely used for diverse applications, such as configuration files (using a simple syntax likeattribute = value). An example of non-database use of EAV is inUIMA(Unstructured Information Management Architecture), a standard now managed by theApache Foundationand employed in areas such asnatural language processing. Software that analyzes text typically marks up ("annotates") a segment: the example provided in the UIMA tutorial is a program that performsnamed-entity recognition(NER) on a document, annotating the text segment "President Bush" with the annotation–attribute–value triple(Person, Full_Name, "George W. Bush").[2]Such annotations may be stored in a database table.
While EAV does not have a direct connection to AV-pairs, Stead and Hammond appear to be the first to have conceived of their use for persistent storage of arbitrarily complex data.[3]The first medical record systems to employ EAV were the Regenstrief electronic medical record (the effort led by Clement MacDonald),[4]William Stead and Ed Hammond's TMR (The Medical Record) system and the HELP Clinical Data Repository (CDR) created by Homer Warner's group at LDS Hospital, Salt Lake City, Utah.[5][6](The Regenstrief system actually used a Patient-Attribute-Timestamp-Value design: the use of the timestamp supported retrieval of values for a given patient/attribute in chronological order.) All these systems, developed in the 1970s, were released before commercial systems based onE.F. Codd'srelational databasemodel were available, though HELP was much later ported to a relational architecture and commercialized by the 3M corporation. (Note that while Codd's landmark paper was published in 1970, its heavily mathematical tone had the unfortunate effect of diminishing its accessibility among non-computer-science types and consequently delaying the model's acceptance in IT and software-vendor circles. The value of the subsequent contribution ofChristopher J. Date, Codd's colleague at IBM, in translating these ideas into accessible language, accompanied by simple examples that illustrated their power, cannot be overstated.)
A group at the Columbia-Presbyterian Medical Center was the first to use a relationaldatabase engineas the foundation of an EAV system.[7]
The open-sourceTrialDBclinical studydata management systemof Nadkarni et al. was the first to use multiple EAV tables, one for each DBMSdata type.[8]
The EAV/CR framework, designed primarily by Luis Marenco and Prakash Nadkarni, overlaid the principles ofobject orientationonto EAV;[9]it built on Tom Slezak's object table approach (described earlier in the "Entity" section).SenseLab, a publicly accessible neuroscience database, is built with the EAV/CR framework.
The term "EAV database" refers to a database design where a significant proportion of the data is modeled as EAV. However, even in a database described as "EAV-based", some tables in the system are traditional relational tables.
As noted above, EAV modeling makes sense for categories of data, such as clinical findings, where attributes are numerous and sparse. Where these conditions do not hold, standard relational modeling (i.e., one column per attribute) is preferable; using EAV does not mean abandoning common sense or principles of good relational design. In clinical record systems, the subschemas dealing with patient demographics and billing are typically modeled conventionally. (While most vendor database schemas are proprietary,VistA, the system used throughout theUnited States Department of Veterans Affairs(VA) medical system, known as theVeterans Health Administration(VHA),[10]is open-source and its schema is readily inspectable, though it uses aMUMPSdatabase engine rather than a relational database.)
As discussed shortly, an EAV database is essentially unmaintainable without numerous supporting tables that contain supportingmetadata. The metadata tables, which typically outnumber the EAV tables by a factor of at least three or more, are typically standard relational tables.[8][9]An example of a metadata table is the Attribute Definitions table mentioned above.
In a simple EAV design, the values of an attribute are simple orprimitive data typesas far as the database engine is concerned. However, in EAV systems used for the representation of highly diverse data, it is possible that a given object (class instance) may have substructure: that is, some of its attributes may represent other kinds of objects, which in turn may have substructure, to an arbitrary level of complexity. A car, for example, has an engine, a transmission, etc., and the engine has components such as cylinders. (The permissible substructure for a given class is defined within the system's attribute metadata, as discussed later. Thus, for example, the attribute "random-access-memory" could apply to the class "computer" but not to the class "engine".)
To represent substructure, one incorporates a special EAV table where the value column contains references tootherentities in the system (i.e., foreign key values into the objects table). To get all the information on a given object requires a recursive traversal of the metadata, followed by a recursive traversal of the data that stops when every attribute retrieved is simple (atomic). Recursive traversal is necessary whether details of an individual class are represented in conventional or EAV form; such traversal is performed in standard object–relational systems, for example. In practice, the number of levels of recursion tends to be relatively modest for most classes, so the performance penalties due to recursion are modest, especially with indexing of object IDs.
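The traversal can be sketched as follows, assuming a toy in-memory EAV store in which a value may be either atomic or a reference to another entity; the store contents, class structure and field names are illustrative only (the permissible substructure would in reality come from attribute metadata, which is not modelled here).

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class EntityRef:
        object_id: int          # foreign key into the objects table

    # Hypothetical store: entity id -> list of (attribute, value) facts
    EAV_DATA = {
        1: [("make", "Aardvark Motors"), ("engine", EntityRef(2))],   # a car
        2: [("cylinders", 4), ("transmission-type", "manual")],       # its engine
    }

    def fetch(object_id, depth=0):
        """Recursively assemble all facts about an object, descending into sub-objects."""
        for attribute, value in EAV_DATA.get(object_id, []):
            if isinstance(value, EntityRef):
                print("  " * depth + f"{attribute}:")
                fetch(value.object_id, depth + 1)   # recursion stops at atomic values
            else:
                print("  " * depth + f"{attribute} = {value}")

    fetch(1)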
EAV/CR (EAV with Classes and Relationships)[11][12][13]refers to a framework that supports complex substructure. Its name is somewhat of a misnomer: while it was an offshoot of work on EAV systems, in practice many or even most of the classes in such a system may be represented in standard relational form, depending on whether the attributes are sparse or dense. EAV/CR is really characterized by its very detailed metadata, which is rich enough to support the automatic generation of browsing interfaces to individual classes without having to write class-by-class user-interface code. The basis of such browser interfaces is that it is possible to generate a batch of dynamic SQL queries that is independent of the class of the object, by first consulting the class's metadata and using that information to generate a sequence of queries against the data tables; some of these queries may be arbitrarily recursive. This approach works well for object-at-a-time queries, as in Web-based browsing interfaces where clicking on the name of an object brings up all details of the object on a separate page: the metadata associated with that object's class also facilitates the presentation of the object's details, because it includes captions of individual attributes, the order in which they are to be presented, and how they are to be grouped.
One approach to EAV/CR is to allow columns to holdJSONstructures, which thus provide the needed class structure. For example,PostgreSQL, as of version 9.4, offers JSON binary column (JSONB) support, allowing JSON attributes to be queried, indexed and joined.
In the words of Prof. Dr. Daniel Masys (formerly Chair of Vanderbilt University's Medical Informatics Department), the challenges of working with EAV stem from the fact that in an EAV database, the "physical schema" (the way data are stored) is radically different from the "logical schema" – the way users, and many software applications such as statistics packages, regard it, i.e., as conventional rows and columns for individual classes. (Because an EAV table conceptually mixes apples, oranges, grapefruit and chop suey, if you want to do any analysis of the data using standard off-the-shelf software, in most cases you have to convert subsets of it into columnar form.[14]The process of doing this, calledpivoting, is important enough to be discussed separately.)
Metadatahelps perform the sleight of hand that lets users interact with the system in terms of the logical schema rather than the physical: the software continually consults the metadata for various operations such as data presentation, interactive validation, bulk data extraction andad hocquery. The metadata can actually be used to customize the behavior of the system.
EAV systems trade off simplicity in the physical andlogical structureof the data for complexity in their metadata, which, among other things, plays the role thatdatabase constraintsandreferential integritydo in standard database designs. Such a tradeoff is generally worthwhile, because in the typical mixed schema of production systems, the data in conventional relational tables can also benefit from functionality such as automatic interface generation. The structure of the metadata is complex enough that it comprises its own subschema within the database: various foreign keys in the data tables refer to tables within this subschema. This subschema is standard-relational, with features such as constraints and referential integrity being used to the hilt.
The correctness of the metadata contents, in terms of the intended system behavior, is critical and the task of ensuring correctness means that, when creating an EAV system, considerable design efforts must go into building user interfaces for metadata editing that can be used by people on the team who know the problem domain (e.g., clinical medicine) but are not necessarily programmers. (Historically, one of the main reasons why the pre-relational TMR system failed to be adopted at sites other than its home institution was that all metadata was stored in a single file with a non-intuitive structure. Customizing system behavior by altering the contents of this file, without causing the system to break, was such a delicate task that the system's authors only trusted themselves to do it.)
Where an EAV system is implemented throughRDF, theRDF Schemalanguage may conveniently be used to express such metadata. This Schema information may then be used by the EAV database engine to dynamically re-organize its internal table structure for best efficiency.[15]
Some final caveats regarding metadata:
Validation, presentation and grouping metadata make possible the creation of code frameworks that support automatic user interface generation for both data browsing as well as interactive editing. In a production system that is delivered over the Web, the task of validation of EAV data is essentially moved from the back-end/database tier (which is powerless with respect to this task) to the middle /Web server tier. While back-end validation is always ideal, because it is impossible to subvert by attempting direct data entry into a table, middle tier validation through a generic framework is quite workable, though a significant amount of software design effort must go into building the framework first. The availability ofopen-sourceframeworks that can be studied and modified for individual needs can go a long way in avoiding wheel reinvention.[citation needed]
(The first part of this section is aprécisof the Dinu/Nadkarni reference article in Central,[18]to which the reader is directed for more details.)
EAV modeling, under the alternative terms "generic data modeling" or "open schema", has long been a standard tool for advanced data modelers. Like any advanced technique, it can be double-edged, and should be used judiciously.
Also, the employment of EAV does not preclude the employment of traditional relational database modeling approaches within the same database schema. In EMRs that rely on an RDBMS, such asCerner, which use an EAV approach for their clinical-data subschema, the vast majority of tables in the schema are in fact traditionally modeled, with attributes represented as individual columns rather than as rows.
The modeling of the metadata subschema of an EAV system is, in fact, a very good fit for traditional modeling, because of the inter-relationships between the various components of the metadata. In the TrialDB system, for example, the metadata tables in the schema outnumber the data tables by about ten to one. Because the correctness and consistency of metadata is critical to the correct operation of an EAV system, the system designer wants to take full advantage of all of the features that RDBMSs provide, such as referential integrity and programmable constraints, rather than having to reinvent the RDBMS-engine wheel. Consequently, the numerous metadata tables that support EAV designs are typically in third normal form.
Commercial electronic health record systems (EHRs) use row-modeling for classes of data such as diagnoses, surgical procedures performed, and laboratory test results, which are segregated into separate tables. In each table, the "entity" is a composite of the patient ID and the date/time the diagnosis was made (or the surgery or lab test performed); the attribute is a foreign key into a specially designated lookup table that contains a controlled vocabulary - e.g., ICD-10 for diagnoses or Current Procedural Terminology for surgical procedures - with a set of value attributes. (E.g., for laboratory-test results, one may record the value measured, whether it is in the normal, low or high range, the ID of the person responsible for performing the test, the date/time the test was performed, and so on.) As stated earlier, this is not a full-fledged EAV approach because the domain of attributes for a given table is restricted, just as the domain of product IDs in a supermarket's Sales table would be restricted to the domain of Products in a Products table.
However, to capture data on parameters that are not always defined in standard vocabularies, EHRs also provide a "pure" EAV mechanism, where specially designated power-users can define new attributes, their data type, maximum and minimal permissible values (or permissible set of values/codes), and then allow others to capture data based on these attributes. In the Epic (TM) EHR, this mechanism is termed "Flowsheets", and is commonly used to capture inpatient nursing observation data.
The typical case for using the EAV model is for highly sparse, heterogeneous attributes, such as clinical parameters in the electronic medical record (EMR), as stated above. Even here, however, it is accurate to state that the EAV modeling principle is applied to a sub-schema of the database rather than to all of its contents. (Patient demographics, for example, are most naturally modeled in a one-column-per-attribute, traditional relational structure.)
Consequently, the arguments about EAV vs. "relational" design reflect incomplete understanding of the problem: an EAV design should be employed only for that sub-schema of a database where sparse attributes need to be modeled, and even here, the EAV tables need to be supported by third normal form metadata tables. There are relatively few database-design problems where sparse attributes are encountered: this is why the circumstances where EAV design is applicable are relatively rare. Even where they are encountered, a set of EAV tables is not the only way to address sparse data: an XML-based solution (discussed below) is applicable when the maximum number of attributes per entity is relatively modest, and the total volume of sparse data is also similarly modest. An example of this situation is the problem of capturing variable attributes for different product types.
Sparse attributes may also occur in E-commerce situations where an organization is purchasing or selling a vast and highly diverse set of commodities, with the details of individual categories of commodities being highly variable.
Another application of EAV is in modeling classes and attributes that, while not sparse, are dynamic, but where the number of data rows per class will be relatively modest – a couple of hundred rows at most, but typically a few dozen – and the system developer is also required to provide a Web-based end-user interface within a very short turnaround time. "Dynamic" means that new classes and attributes need to be continually defined and altered to represent an evolving data model. This scenario can occur in rapidly evolving scientific fields as well as in ontology development, especially during the prototyping and iterative refinement phases.
While the creation of new tables and columns to represent a new category of data is not especially labor-intensive, the programming of Web-based interfaces that support browsing or basic editing with type- and range-based validation is. In such a case, a more maintainable long-term solution is to create a framework where the class and attribute definitions are stored in metadata, and the software generates a basic user interface from this metadata dynamically.
The EAV/CR framework, mentioned earlier, was created to address this very situation. Note that an EAV data model is not essential here, but the system designer may consider it an acceptable alternative to creating, say, sixty or more tables containing a total of not more than two thousand rows. Here, because the number of rows per class is so few, efficiency considerations are less important; with the standard indexing by class ID/attribute ID, DBMS optimizers can easily cache the data for a small class in memory when running a query involving that class or attribute.
In the dynamic-attribute scenario, it is worth noting thatResource Description Framework(RDF) is being employed as the underpinning of Semantic-Web-related ontology work. RDF, intended to be a general method of representing information, is a form of EAV: an RDF triple comprises an object, a property, and a value.
At the end of Jon Bentley's book "Writing Efficient Programs", the author warns that making code more efficient generally also makes it harder to understand and maintain, and so one should not rush in and tweak code unless one has first determined that there is a performance problem, and measures such as code profiling have pinpointed the exact location of the bottleneck. Once you have done so, you modify only the specific code that needs to run faster. Similar considerations apply to EAV modeling: you apply it only to the sub-system where traditional relational modeling is known a priori to be unwieldy (as in the clinical data domain), or is discovered, during system evolution, to pose significant maintenance challenges. Database guru (and currently a vice-president of Core Technologies at Oracle Corporation) Tom Kyte,[19]for example, correctly points out drawbacks of employing EAV in traditional business scenarios, and makes the point that mere "flexibility" is not a sufficient criterion for employing EAV. (However, he makes the sweeping claim that EAV should be avoided in all circumstances, even though Oracle's Health Sciences division itself employs EAV to model clinical-data attributes in its commercial systems ClinTrial[20]and Oracle Clinical.[21])
The Achilles heel of EAV is the difficulty of working with large volumes of EAV data. It is often necessary to transiently or permanently inter-convert between columnar and row- or EAV-modeled representations of the same data; this can be both error-prone (if done manually) and CPU-intensive. Generic frameworks that utilize attribute and attribute-grouping metadata address the former but not the latter limitation; their use is more or less mandated in the case of mixed schemas that contain a mixture of conventional-relational and EAV data, where the error quotient can be very significant.
The conversion operation is called pivoting. Pivoting is required not only for EAV data but for any form of row-modeled data. (For example, implementations of the Apriori algorithm for Association Analysis, widely used to process supermarket sales data to identify other products that purchasers of a given product are also likely to buy, pivot row-modeled data as a first step.) Many database engines have proprietary SQL extensions to facilitate pivoting, and packages such as Microsoft Excel also support it. The circumstances where pivoting is necessary are considered below.
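As a small illustration of pivoting row-modeled (EAV-style) data into columnar form, the sketch below uses the pandas library (assumed to be available; it is only one of many tools that can do this), with illustrative entity and attribute names.

    import pandas as pd

    # EAV-style ("long and skinny") clinical facts
    long_form = pd.DataFrame({
        "entity":    ["visit-1", "visit-1", "visit-2"],
        "attribute": ["temperature", "pulse", "temperature"],
        "value":     [38.9, 86, 37.0],
    })

    # Pivot to one-column-per-attribute ("wide") form; missing facts become NaN
    wide_form = long_form.pivot(index="entity", columns="attribute", values="value")
    print(wide_form)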
However, the structure of the EAV data model is a good candidate for relational division (see relational algebra). With a good indexing strategy it is possible to get a response time of less than a few hundred milliseconds on a billion-row EAV table. Microsoft SQL Server MVP Peter Larsson has proved this on a laptop and made the solution generally available.[22]
Obviously, no matter what approach is taken, querying EAV will not be as fast as querying standard column-modeled relational data for certain types of query, in much the same way that access to elements in sparse matrices is not as fast as access to elements in non-sparse matrices if the latter fit entirely into main memory. (Sparse matrices, represented using structures such as linked lists, require list traversal to access an element at a given X-Y position, while access to elements in matrices represented as 2-D arrays can be performed using fast CPU register operations.) If, however, the EAV approach was chosen correctly for the problem at hand, this is the price one pays; in this respect, EAV modeling is an example of a space (and schema maintenance) versus CPU-time tradeoff.
Originally postulated by Maier, Ullman and Vardi,[23]the "Universal Data Model" (UDM) seeks to simplify the query of a complex relational schema by naive users, by creating the illusion that everything is stored in a single giant "universal table". It does this by utilizing inter-table relationships, so that the user does not need to be concerned about what table contains what attribute. C.J. Date, however,[24]pointed out that in circumstances where a table is multiply related to another (as in genealogy databases, where an individual's father and mother are also individuals, or in some business databases where all addresses are stored centrally, and an organization can have different office addresses and shipping addresses), there is insufficient metadata within the database schema to specify unambiguous joins. When UDM has been commercialized, as in SAPBusinessObjects, this limitation is worked around through the creation of "Universes", which are relational views with predefined joins between sets of tables: the "Universe" developer disambiguates ambiguous joins by including the multiply-related table in a view multiple times using different aliases.
Apart from the way in which data is explicitly modeled (UDM simply uses relational views to intercede between the user and the database schema), EAV differs from Universal Data Models in that it also applies to transactional systems, not only query-oriented (read-only) systems as in UDM. Also, when used as the basis for clinical-data query systems, EAV implementations do not necessarily shield the user from having to specify the class of an object of interest. In the EAV-based i2b2 clinical data mart,[25]for example, when users search for a term, they have the option of specifying the category of data they are interested in. For example, the phrase "lithium" can refer either to the medication (which is used to treat bipolar disorder), or a laboratory assay for lithium level in the patient's blood. (The blood level of lithium must be monitored carefully: too much of the drug causes severe side effects, while too little is ineffective.)
An Open Schema implementation can use an XML column in a table to capture the variable/sparse information.[26]Similar ideas can be applied to databases that support JSON-valued columns: sparse, hierarchical data can be represented as JSON. If the database has JSON support, such as PostgreSQL and (partially) SQL Server 2016 and later, then attributes can be queried, indexed and joined. This can offer performance improvements of over 1000x over naive EAV implementations,[27]but does not necessarily make the overall database application more robust.
Note that there are two ways in which XML or JSON data can be stored: one way is to store it as a plain string, opaque to the database server; the other way is to use a database server that can "see into" the structure. There are obviously some severe drawbacks to storing opaque strings: these cannot be queried directly, one cannot form an index based on their contents, and it is impossible to perform joins based on the content.
Building an application that has to manage data gets extremely complicated when using EAV models, because of the extent of infrastructure that has to be developed in terms of metadata tables and application-framework code. Using XML solves the problem of server-based data validation (which must be done by middle-tier and browser-based code in EAV-based frameworks), but has the following drawbacks:
All of the above drawbacks are remediable by creating a layer of metadata and application code, but in creating this, the original "advantage" of not having to create a framework has vanished. The fact is that modeling sparse data attributes robustly is a hard database-application-design problem no matter which storage approach is used. Sarka's work,[26]however, proves the viability of using an XML field instead of type-specific relational EAV tables for the data-storage layer, and in situations where the number of attributes per entity is modest (e.g., variable product attributes for different product types) the XML-based solution is more compact than an EAV-table-based one. (XML itself may be regarded as a means of attribute–value data representation, though it is based on structured text rather than on relational tables.)
There exist several other approaches for the representation of tree-structured data, be itXML,JSONor other formats, such as thenested set model, in a relational database. On the other hand, database vendors have begun to include JSON and XML support into their data structures and query features, like inIBM Db2, where XML data is stored as XML separate from the tables, usingXPathqueries as part of SQL statements, or inPostgreSQL, with a JSON data type[28]that can be indexed and queried. These developments accomplish, improve or substitute the EAV model approach.
The uses of JSON and XML are not necessarily the same as the use of an EAV model, though they can overlap. XML is preferable to EAV for arbitrarily hierarchical data that is relatively modest in volume for a single entity: it is not intended to scale up to the multi-gigabyte level with respect to data-manipulation performance.[citation needed]XML is not concerned per-se with the sparse-attribute problem, and when the data model underlying the information to be represented can be decomposed straightforwardly into a relational structure, XML is better suited as a means of data interchange than as a primary storage mechanism. EAV, as stated earlier, is specifically (and only) applicable to the sparse-attribute scenario. When such a scenario holds, the use of datatype-specific attribute–value tables that can be indexed by entity, by attribute, and by value and manipulated through simple SQL statements is vastly more scalable than the use of an XML tree structure.[citation needed]The Google App Engine, mentioned above,[citation needed]uses strongly-typed-value tables for a good reason.[citation needed]
An alternative approach to managing the various problems encountered with EAV-structured data is to employ a graph database. These represent entities as the nodes of a graph or hypergraph, and attributes as links or edges of that graph. The issue of table joins is addressed by providing graph-specific query languages, such as Apache TinkerPop,[29]or the OpenCog atomspace pattern matcher.[30]
Another alternative is to use a SPARQL store.
PostgreSQL version 9.4 includes support forJSONbinary columns (JSONB), which can be queried, indexed and joined. This allows performance improvements by factors of a thousand or more over traditional EAV table designs.[27]
A DB schema based on JSONB always has fewer tables: one may nest attribute–value pairs in JSONB-type fields of the Entity table. That makes the DB schema easy to comprehend and SQL queries concise.[31]The programming code to manipulate the database objects on the abstraction layer turns out to be much shorter.[32]
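The idea can be sketched with Python's built-in sqlite3 module, assuming a SQLite build that includes the JSON1 functions (standing in here for PostgreSQL's JSONB operators); the table and column names are illustrative only.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE entity (
            id    INTEGER PRIMARY KEY,
            name  TEXT,
            attrs TEXT          -- sparse attribute-value pairs stored as a JSON document
        )
    """)
    conn.execute(
        "INSERT INTO entity (name, attrs) VALUES (?, ?)",
        ("60W light bulb", '{"base": "E27", "lumens": 800, "packaging_unit": 4}'),
    )
    # Query on an attribute inside the JSON document rather than on a dedicated column
    row = conn.execute(
        "SELECT name FROM entity WHERE json_extract(attrs, '$.lumens') > 500"
    ).fetchone()
    print(row[0])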
Microsoft SQL Server 2008 offers a (proprietary) alternative to EAV.[33]Columns with an atomic data type (e.g., numeric, varchar or datetime columns) can be designated as sparse simply by including the word SPARSE in the column definition of the CREATE TABLE statement. Sparse columns optimize the storage of NULL values (which then take up no space at all) and are useful when the majority of records in a table will have NULL values for that column. Indexes on sparse columns are also optimized: only those rows with values are indexed. In addition, the contents of all sparse columns in a particular row of a table can be collectively aggregated into a single XML column (a column set), whose contents are of the form[<column-name>column contents </column-name>]*....In fact, if a column set is defined for a table as part of a CREATE TABLE statement, all sparse columns subsequently defined are typically added to it. This has the interesting consequence that the SQL statementSELECT * from <tablename>will not return the individual sparse columns, but will concatenate all of them into a single XML column whose name is that of the column set (which therefore acts as a virtual, computed column). Sparse columns are convenient for business applications such as product information, where the applicable attributes can be highly variable depending on the product type, but where the total number of variable attributes per product type is relatively modest.
However, this approach to modelling sparse attributes has several limitations: rival DBMSs have, notably, chosen not to borrow this idea for their own engines. Limitations include:
Manycloud computingvendors offer data stores based on the EAV model, where an arbitrary number of attributes can be associated with a given entity. Roger Jennings provides an in-depth comparison[34]of these. In Amazon's offering, SimpleDB, the data type is limited to strings, and data that is intrinsically non-string must be coerced to string (e.g., numbers must be padded with leading zeros) if you wish to perform operations such as sorting. Microsoft's offering, Windows Azure Table Storage, offers a limited set of data types: byte[], bool, DateTime, double, Guid, int, long and string[1]. The Google App Engine[2]offers the greatest variety of data types: in addition to dividing numeric data into int, long, or float, it also defines custom data types such as phone number, E-mail address, geocode and hyperlink. Google, but not Amazon or Microsoft, lets you define metadata that would prevent invalid attributes from being associated with a particular class of entity, by letting you create a metadata model.
Google lets you operate on the data using a subset of SQL; Microsoft offers a URL-based querying syntax that is abstracted via a LINQ provider; Amazon offers a more limited syntax. Of concern, built-in support for combining different entities through joins was, as of April 2010, non-existent in all three engines. Such operations have to be performed by application code. This may not be a concern if the application servers are co-located with the data servers at the vendor's data center, but a lot of network traffic would be generated if the two were geographically separated.
An EAV approach is justified only when the attributes that are being modeled are numerous and sparse: if the data being captured does not meet this requirement, the cloud vendors' default EAV approach is often a mismatch for applications that require a true back-end database (as opposed to merely a means of persistent data storage). Retrofitting the vast majority of existing database applications, which use a traditional data-modeling approach, to an EAV-type cloud architecture, would require major surgery. Microsoft discovered, for example, that its database-application-developer base was largely reluctant to invest such effort. In 2010 therefore, Microsoft launched a premium offering, SQL Server Azure, a cloud-accessible, fully-fledged relational engine which allows porting of existing database applications with only modest changes. As of the early 2020s, the service allows standard-tier physical database sizes of up to 8TB,[35]with "hyperscale" and "business-critical" offerings also available.
|
https://en.wikipedia.org/wiki/Entity-attribute-value_model
|
Record linkage(also known asdata matching,data linkage,entity resolution, and many other terms) is the task of findingrecordsin a data set that refer to the sameentityacross different data sources (e.g., data files, books, websites, and databases). Record linkage is necessary whenjoiningdifferent data sets based on entities that may or may not share a common identifier (e.g.,database key,URI,National identification number), which may be due to differences in record shape, storage location, or curator style or preference. A data set that has undergone RL-oriented reconciliation may be referred to as beingcross-linked.
"Record linkage" is the term used by statisticians, epidemiologists, and historians, among others, to describe the process of joining records from one data source with another that describe the same entity. However, many other terms are used for this process. Unfortunately, this profusion of terminology has led to few cross-references between these research communities.[1][2]
Computer scientistsoften refer to it as "data matching" or as the "object identity problem". Commercial mail and database applications refer to it as "merge/purge processing" or "list washing". Other names used to describe the same concept include: "coreference/entity/identity/name/record resolution", "entity disambiguation/linking", "fuzzy matching", "duplicate detection", "deduplication", "record matching", "(reference) reconciliation", "object identification", "data/information integration" and "conflation".[3]
While they share similar names, record linkage andlinked dataare two separate approaches to processing and structuring data. Although both involve identifying matching entities across different data sets, record linkage standardly equates "entities" with human individuals; by contrast, Linked Data is based on the possibility of interlinking anyweb resourceacross data sets, using a correspondingly broader concept of identifier, namely aURI.
The initial idea of record linkage goes back toHalbert L. Dunnin his 1946 article titled "Record Linkage" published in theAmerican Journal of Public Health.[4]
Howard Borden Newcombethen laid the probabilistic foundations of modern record linkage theory in a 1959 article inScience.[5]These were formalized in 1969 byIvan Fellegiand Alan Sunter, in their pioneering work "A Theory For Record Linkage", where they proved that the probabilistic decision rule they described was optimal when the comparison attributes were conditionally independent.[6]In their work they recognized the growing interest in applying advances in computing and automation to large collections ofadministrative data, and theFellegi-Sunter theoryremains the mathematical foundation for many record linkage applications.
Since the late 1990s, variousmachine learningtechniques have been developed that can, under favorable conditions, be used to estimate the conditional probabilities required by the Fellegi-Sunter theory. Several researchers have reported that the conditional independence assumption of the Fellegi-Sunter algorithm is often violated in practice; however, published efforts to explicitly model the conditional dependencies among the comparison attributes have not resulted in an improvement in record linkage quality.[citation needed]On the other hand, machine learning or neural network algorithms that do not rely on these assumptions often provide far higher accuracy, when sufficient labeled training data is available.[7]
Record linkage can be done entirely without the aid of a computer, but the primary reasons computers are often used to complete record linkages are to reduce or eliminate manual review and to make results more easily reproducible. Computer matching has the advantages of allowing central supervision of processing, better quality control, speed, consistency, and better reproducibility of results.[8]
Record linkage is highly sensitive to the quality of the data being linked, so all data sets under consideration (particularly their key identifier fields) should ideally undergo adata quality assessmentprior to record linkage. Many key identifiers for the same entity can be presented quite differently between (and even within) data sets, which can greatly complicate record linkage unless understood ahead of time. For example, key identifiers for a man named William J. Smith might appear in three different data sets as so:
In this example, the different formatting styles lead to records that look different but in fact all refer to the same entity with the same logical identifier values. Most, if not all, record linkage strategies would result in more accurate linkage if these values were firstnormalizedorstandardizedinto a consistent format (e.g., all names are "Surname, Given name", and all dates are "YYYY/MM/DD"). Standardization can be accomplished through simple rule-baseddata transformationsor more complex procedures such as lexicon-basedtokenizationand probabilistic hidden Markov models.[9]Several of the packages listed in theSoftware Implementationssection provide some of these features to simplify the process of data standardization.
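A toy example of rule-based standardization (only the simplest of the approaches mentioned above), with a hypothetical normalize function that reorders names into "Surname, Given name" form and one numeric date format into "YYYY/MM/DD"; a real system would handle far more variation.

    import re

    def normalize(name, dob):
        """Very simple rule-based standardization: 'Surname, Given name' and 'YYYY/MM/DD'."""
        name = name.strip()
        if "," not in name:                          # assume "Given ... Surname" order
            parts = name.split()
            name = parts[-1] + ", " + " ".join(parts[:-1])
        # Reorder numeric M/D/YYYY dates; other formats would need a richer parser.
        m = re.match(r"(\d{1,2})/(\d{1,2})/(\d{4})$", dob.strip())
        if m:
            month, day, year = m.groups()
            dob = f"{year}/{int(month):02d}/{int(day):02d}"
        return name.title(), dob

    print(normalize("William J. Smith", "1/2/1973"))   # ('Smith, William J.', '1973/01/02')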
Entity resolutionis an operationalintelligenceprocess, typically powered by an entity resolution engine ormiddleware, whereby organizations can connect disparate data sources with aviewto understanding possible entity matches and non-obvious relationships across multipledata silos. It analyzes all of theinformationrelating to individuals and/or entities from multiple sources of data, and then applies likelihood and probability scoring to determine which identities are a match and what, if any, non-obvious relationships exist between those identities.
Entity resolution engines are typically used to uncoverrisk,fraud, and conflicts of interest, but are also useful tools for use withincustomer data integration(CDI) andmaster data management(MDM) requirements. Typical uses for entity resolution engines include terrorist screening, insurance fraud detection,USA Patriot Actcompliance,organized retail crimering detection and applicant screening.
For example: Across different data silos – employee records, vendor data, watch lists, etc. – an organization may have several variations of an entity named ABC, which may or may not be the same individual. These entries may, in fact, appear as ABC1, ABC2, or ABC3 within those data sources. By comparing similarities between underlying attributes such asaddress,date of birth, orsocial security number, the user can eliminate some possible matches and confirm others as very likely matches.
Entity resolution engines then apply rules, based on common sense logic, to identify hidden relationships across the data. In the example above, perhaps ABC1 and ABC2 are not the same individual, but rather two distinct people who share common attributes such as address or phone number.
While entity resolution solutions include data matching technology, many data matching offerings do not fit the definition of entity resolution. Here are four factors that distinguish entity resolution from data matching, according to John Talburt, director of theUALRCenter for Advanced Research in Entity Resolution and Information Quality:
In contrast to data quality products, more powerful identity resolution engines also include arules engineand workflow process, which apply business intelligence to the resolved identities and their relationships. These advanced technologies makeautomated decisionsand impact business processes in real time, limiting the need for human intervention.
The simplest kind of record linkage, calleddeterministicorrules-based record linkage, generates links based on the number of individual identifiers that match among the available data sets.[10]Two records are said to match via a deterministic record linkage procedure if all or some identifiers (above a certain threshold) are identical. Deterministic record linkage is a good option when the entities in the data sets are identified by a common identifier, or when there are several representative identifiers (e.g., name, date of birth, and sex when identifying a person) whose quality of data is relatively high.
As an example, consider two standardized data sets, Set A and Set B, that contain different bits of information about patients in a hospital system. The two data sets identify patients using a variety of identifiers:Social Security Number(SSN), name, date of birth (DOB), sex, andZIP code(ZIP). The records in two data sets (identified by the "#" column) are shown below:
The simplest deterministic record linkage strategy would be to pick a single identifier that is assumed to be uniquely identifying, say SSN, and declare that records sharing the same value identify the same person, while records not sharing the same value identify different people. In this example, deterministic linkage based on SSN would create entities based on A1 and A2; A3 and B1; and A4. While A1, A2, and B2 appear to represent the same entity, B2 would not be included in the match because it is missing a value for SSN.
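A minimal sketch of this single-identifier deterministic rule follows; the record structure is hypothetical, and real implementations add many more rules, as described next.

    from collections import defaultdict

    # Hypothetical records from the two standardized data sets; None marks a missing SSN.
    records = [
        ("A1", {"ssn": "000-00-0001", "name": "Smith, William"}),
        ("A2", {"ssn": "000-00-0001", "name": "Smith, Bill"}),
        ("B2", {"ssn": None,          "name": "Smith, Bill"}),
    ]

    # Deterministic rule: records sharing the same SSN are declared the same entity.
    entities = defaultdict(list)
    for record_id, fields in records:
        if fields["ssn"] is not None:
            entities[fields["ssn"]].append(record_id)
        else:
            entities[("unmatched", record_id)].append(record_id)   # cannot be linked by this rule

    print(dict(entities))   # B2 is left unlinked because its SSN is missing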
Handling exceptions such as missing identifiers involves the creation of additional record linkage rules. One such rule in the case of missing SSN might be to compare name, date of birth, sex, and ZIP code with other records in hopes of finding a match. In the above example, this rule would still not match A1/A2 with B2 because the names are still slightly different: standardization put the names into the proper (Surname, Given name) format but could not discern "Bill" as a nickname for "William". Running names through a phonetic algorithm such as Soundex, NYSIIS, or metaphone can help to resolve these types of problems. However, phonetic matching may still stumble over surname changes resulting from marriage or divorce; moreover, under this rule B2 would be matched only with A1, since the ZIP code in A2 is different. Thus, another rule would need to be created to determine which differences in particular identifiers are acceptable (such as ZIP code) and which are not (such as date of birth).
As this example demonstrates, even a small decrease in data quality or small increase in the complexity of the data can result in a very large increase in the number of rules necessary to link records properly. Eventually, these linkage rules will become too numerous and interrelated to build without the aid of specialized software tools. In addition, linkage rules are often specific to the nature of the data sets they are designed to link together. One study was able to link the Social SecurityDeath Master Filewith two hospital registries from theMidwestern United Statesusing SSN, NYSIIS-encoded first name, birth month, and sex, but these rules may not work as well with data sets from other geographic regions or with data collected on younger populations.[11]Thus, continuous maintenance testing of these rules is necessary to ensure they continue to function as expected as new data enter the system and need to be linked. New data that exhibit different characteristics than was initially expected could require a complete rebuilding of the record linkage rule set, which could be a very time-consuming and expensive endeavor.
Probabilistic record linkage, sometimes calledfuzzy matching(alsoprobabilistic mergingorfuzzy mergingin the context of merging of databases), takes a different approach to the record linkage problem by taking into account a wider range of potential identifiers, computing weights for each identifier based on its estimated ability to correctly identify a match or a non-match, and using these weights to calculate the probability that two given records refer to the same entity. Record pairs with probabilities above a certain threshold are considered to be matches, while pairs with probabilities below another threshold are considered to be non-matches; pairs that fall between these two thresholds are considered to be "possible matches" and can be dealt with accordingly (e.g., human reviewed, linked, or not linked, depending on the requirements). Whereas deterministic record linkage requires a series of potentially complex rules to be programmed ahead of time, probabilistic record linkage methods can be "trained" to perform well with much less human intervention.
Many probabilistic record linkage algorithms assign match/non-match weights to identifiers by means of two probabilities called u and m. The u probability is the probability that an identifier in two non-matching records will agree purely by chance. For example, the u probability for birth month (where there are twelve values that are approximately uniformly distributed) is 1/12 ≈ 0.083; identifiers with values that are not uniformly distributed will have different u probabilities for different values (possibly including missing values). The m probability is the probability that an identifier in matching pairs will agree (or be sufficiently similar, such as strings with low Jaro-Winkler or Levenshtein distance). This value would be 1.0 in the case of perfect data, but given that this is rarely (if ever) true, it can instead be estimated. This estimation may be done based on prior knowledge of the data sets, by manually identifying a large number of matching and non-matching pairs to "train" the probabilistic record linkage algorithm, or by iteratively running the algorithm to obtain closer estimations of the m probability. If a value of 0.95 were to be estimated for the m probability, then the weights for the birth month identifier would be a match weight of log2(m/u) = log2(0.95/0.083) ≈ 3.51 and a non-match weight of log2((1 - m)/(1 - u)) = log2(0.05/0.917) ≈ -4.20.
The same calculations would be done for all other identifiers under consideration to find their match/non-match weights. Then, every identifier of one record would be compared with the corresponding identifier of another record to compute the total weight of the pair: thematchweight is added to the running total whenever a pair of identifiers agree, while thenon-matchweight is added (i.e. the running total decreases) whenever the pair of identifiers disagrees. The resulting total weight is then compared to the aforementioned thresholds to determine whether the pair should be linked, non-linked, or set aside for special consideration (e.g. manual validation).[12]
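A small sketch of these weight calculations and the threshold-based decision follows; the identifiers, u/m values and thresholds are illustrative assumptions only.

    import math

    # Illustrative u and m probabilities per identifier
    params = {
        "birth_month": {"u": 1 / 12, "m": 0.95},
        "sex":         {"u": 0.5,    "m": 0.98},
    }

    def field_weight(field, agrees):
        u, m = params[field]["u"], params[field]["m"]
        if agrees:
            return math.log2(m / u)                  # match weight (positive)
        return math.log2((1 - m) / (1 - u))          # non-match weight (negative)

    def pair_weight(comparison):
        """comparison maps each identifier to True (agree) or False (disagree)."""
        return sum(field_weight(f, agrees) for f, agrees in comparison.items())

    total = pair_weight({"birth_month": True, "sex": False})
    UPPER, LOWER = 5.0, -3.0                         # illustrative decision thresholds
    if total > UPPER:
        decision = "link"
    elif total < LOWER:
        decision = "non-link"
    else:
        decision = "possible match (manual review)"
    print(round(total, 2), decision)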
Determining where to set the match/non-match thresholds is a balancing act between obtaining an acceptablesensitivity(orrecall, the proportion of truly matching records that are linked by the algorithm) andpositive predictive value(orprecision, the proportion of records linked by the algorithm that truly do match). Various manual and automated methods are available to predict the best thresholds, and some record linkage software packages have built-in tools to help the user find the most acceptable values. Because this can be a very computationally demanding task, particularly for large data sets, a technique known asblockingis often used to improve efficiency. Blocking attempts to restrict comparisons to just those records for which one or more particularly discriminating identifiers agree, which has the effect of increasing the positive predictive value (precision) at the expense of sensitivity (recall).[12]For example, blocking based on a phonetically coded surname and ZIP code would reduce the total number of comparisons required and would improve the chances that linked records would be correct (since two identifiers already agree), but would potentially miss records referring to the same person whose surname or ZIP code was different (due to marriage or relocation, for instance). Blocking based on birth month, a more stable identifier that would be expected to change only in the case of data error, would provide a more modest gain in positive predictive value and loss in sensitivity, but would create only twelve distinct groups which, for extremely large data sets, may not provide much net improvement in computation speed. Thus, robust record linkage systems often use multiple blocking passes to group data in various ways in order to come up with groups of records that should be compared to each other.
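A sketch of a single blocking pass on a phonetically coded surname plus ZIP code follows; the blocking key and phonetic stub are illustrative, and real systems typically run several passes with different keys.

    from collections import defaultdict
    from itertools import combinations

    def phonetic_stub(name):
        """Placeholder for a real phonetic encoding such as Soundex or NYSIIS."""
        return name[0].upper()          # crude: first letter only, for illustration

    records = [
        {"id": "A1", "surname": "Smith", "zip": "53211"},
        {"id": "A2", "surname": "Smyth", "zip": "53211"},
        {"id": "B1", "surname": "Jones", "zip": "10001"},
    ]

    # Group records by blocking key; detailed comparison happens only within a block.
    blocks = defaultdict(list)
    for r in records:
        blocks[(phonetic_stub(r["surname"]), r["zip"])].append(r["id"])

    candidate_pairs = [pair for ids in blocks.values() for pair in combinations(ids, 2)]
    print(candidate_pairs)              # only ('A1', 'A2') survives blocking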
In recent years, a variety of machine learning techniques have been used in record linkage. It has been recognized[7]that the classic Fellegi-Sunter algorithm for probabilistic record linkage outlined above is equivalent to theNaive Bayesalgorithm in the field of machine learning,[13]and suffers from the same assumption of the independence of its features (an assumption that is typically not true).[14][15]Higher accuracy can often be achieved by using various other machine learning techniques, including a single-layerperceptron,[7]random forest, andSVM.[16]In conjunction with distributed technologies,[17]accuracy and scale for record linkage can be improved further.
High-quality record linkage often requires a human–machine hybrid system to safely manage uncertainty in ever-changing streams of chaotic big data.[18][19]Recognizing that linkage errors propagate into the linked data and its analysis, interactive record linkage systems have been proposed. Interactive record linkage is defined as people iteratively fine-tuning the results from the automated methods and managing the uncertainty and its propagation to subsequent analyses.[20]The main objective of interactive record linkage systems is to manually resolve uncertain linkages and validate the results until they are at acceptable levels for the given application. Variations of interactive record linkage that enhance privacy during the human interaction steps have also been proposed.[21][22]
Record linkage is increasingly required across databases held by different organisations, where the complementary data held by these organisations can, for example, help to identify patients that are susceptible to certain adverse drug reactions (linking hospital, doctor, pharmacy databases). In many such applications, however, the databases to be linked contain sensitive information about people which cannot be shared between the organisations.[23]
Privacy-preserving record linkage (PPRL) methods have been developed with the aim of linking databases without the need to share the original sensitive values between the organisations that participate in the linkage.[24][25]In PPRL, the attribute values of the records to be compared are generally encoded or encrypted in some form. A popular encoding technique is the Bloom filter,[26]which allows approximate similarities to be calculated between encoded values without the need to share the corresponding sensitive plain-text values. At the end of the PPRL process, only limited information about the record pairs classified as matches is revealed to the organisations that participate in the linkage process. The techniques used in PPRL[24]must guarantee that no participating organisation, nor any external adversary, can compromise the privacy of the entities that are represented by records in the databases being linked.[27]
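A simplified sketch of the Bloom-filter encoding idea follows: bigrams of a name are hashed into a small bit set and two encodings are compared with a Dice coefficient. The filter length and hash count are illustrative and far smaller than in practice, and real deployments typically use keyed hashes (e.g., HMACs with a secret shared by the linking parties).

    import hashlib

    FILTER_LEN = 64      # bits; real deployments use much longer filters
    NUM_HASHES = 2

    def bigrams(s):
        s = s.lower()
        return {s[i:i + 2] for i in range(len(s) - 1)}

    def bloom_encode(value):
        bits = set()
        for gram in bigrams(value):
            for k in range(NUM_HASHES):
                digest = hashlib.sha256(f"{k}:{gram}".encode()).hexdigest()
                bits.add(int(digest, 16) % FILTER_LEN)
        return bits

    def dice(a, b):
        """Approximate similarity computed on encodings, not on the plain-text values."""
        return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 1.0

    print(round(dice(bloom_encode("william"), bloom_encode("willian")), 2))   # high similarity
    print(round(dice(bloom_encode("william"), bloom_encode("jones")), 2))     # low similarity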
In an application with two files, A and B, denote the rows (records) byα(a){\displaystyle \alpha (a)}in file A andβ(b){\displaystyle \beta (b)}in file B. AssignK{\displaystyle K}characteristicsto each record. The set of records that represent identical entities is defined by
M={(a,b);a=b;a∈A;b∈B}{\displaystyle M=\left\{(a,b);a=b;a\in A;b\in B\right\}}
and the complement of setM{\displaystyle M}, namely setU{\displaystyle U}representing different entities is defined as
U={(a,b);a≠b;a∈A;b∈B}{\displaystyle U=\{(a,b);a\neq b;a\in A;b\in B\}}.
A vector,γ{\displaystyle \gamma }is defined, that contains the coded agreements and disagreements on each characteristic:
γ[α(a),β(b)]={γ1[α(a),β(b)],...,γK[α(a),β(b)]}{\displaystyle \gamma \left[\alpha (a),\beta (b)\right]=\{\gamma ^{1}\left[\alpha (a),\beta (b)\right],...,\gamma ^{K}\left[\alpha (a),\beta (b)\right]\}}
where the superscripts1,…,K{\displaystyle 1,\dots ,K}index theK{\displaystyle K}characteristics (sex, age, marital status, etc.) in the files. The conditional probabilities of observing a specific vectorγ{\displaystyle \gamma }given(a,b)∈M{\displaystyle (a,b)\in M},(a,b)∈U{\displaystyle (a,b)\in U}are defined as
m(γ)=P{γ[α(a),β(b)]|(a,b)∈M}=∑(a,b)∈MP{γ[α(a),β(b)]}⋅P[(a,b)|M]{\displaystyle m(\gamma )=P\left\{\gamma \left[\alpha (a),\beta (b)\right]|(a,b)\in M\right\}=\sum _{(a,b)\in M}P\left\{\gamma \left[\alpha (a),\beta (b)\right]\right\}\cdot P\left[(a,b)|M\right]}
and
u(γ)=P{γ[α(a),β(b)]|(a,b)∈U}=∑(a,b)∈UP{γ[α(a),β(b)]}⋅P[(a,b)|U],{\displaystyle u(\gamma )=P\left\{\gamma \left[\alpha (a),\beta (b)\right]|(a,b)\in U\right\}=\sum _{(a,b)\in U}P\left\{\gamma \left[\alpha (a),\beta (b)\right]\right\}\cdot P\left[(a,b)|U\right],}respectively.[6]
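A hedged sketch of how these quantities are used in practice (the m- and u-probabilities and thresholds below are invented for illustration): for each characteristic, agreement contributes a weight of log2(m/u) and disagreement contributes log2((1-m)/(1-u)); the summed weight of a record pair is then compared against match/non-match thresholds like those discussed earlier.

```python
from math import log2

# Illustrative m- and u-probabilities per characteristic: the probability of
# agreement given a true match (m) and given a non-match (u).
PARAMS = {
    "surname":    {"m": 0.95, "u": 0.01},
    "birth_year": {"m": 0.99, "u": 0.05},
    "zip":        {"m": 0.90, "u": 0.10},
}

def pair_weight(gamma):
    """Sum the log-likelihood-ratio weights for an agreement vector gamma,
    e.g. gamma = {"surname": 1, "birth_year": 1, "zip": 0}."""
    total = 0.0
    for field, p in PARAMS.items():
        if gamma[field]:
            total += log2(p["m"] / p["u"])
        else:
            total += log2((1 - p["m"]) / (1 - p["u"]))
    return total

def classify(gamma, upper=8.0, lower=0.0):
    """Compare the summed weight with illustrative upper/lower thresholds."""
    w = pair_weight(gamma)
    if w >= upper:
        return "match"
    if w <= lower:
        return "non-match"
    return "possible match (clerical review)"

print(classify({"surname": 1, "birth_year": 1, "zip": 1}))
print(classify({"surname": 0, "birth_year": 1, "zip": 0}))
```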
MostMaster data management(MDM) products use a record linkage process to identify records from different sources representing the same real-world entity. This linkage is used to create a "golden master record" containing the cleaned, reconciled data about the entity. The techniques used in MDM are the same as for record linkage generally. MDM expands this matching not only to create a "golden master record" but also to infer relationships (e.g., if two records have the same or a similar surname and the same or a similar address, this might imply that the people involved share a household).
Record linkage plays a key role indata warehousingandbusiness intelligence. Data warehouses serve to combine data from many different operational source systems into onelogical data model, which can then be subsequently fed into a business intelligence system for reporting and analytics. Each operational source system may have its own method of identifying the same entities used in the logical data model, so record linkage between the different sources becomes necessary to ensure that the information about a particular entity in one source system can be seamlessly compared with information about the same entity from another source system. Data standardization and subsequent record linkage often occur in the "transform" portion of theextract, transform, load(ETL) process.
Record linkage is important to social history research since most data sets, such ascensus recordsand parish registers, were recorded long before the invention ofNational identification numbers. When old sources are digitized, linking of data sets is a prerequisite forlongitudinal study. This process is often further complicated by the lack of standard spelling of names, family names that change according to place of dwelling, changing administrative boundaries, and problems of checking the data against other sources. Record linkage was among the most prominent themes in theHistory and computingfield in the 1980s, but has since received less attention in research.[citation needed]
Record linkage is an important tool in creating data required for examining the health of the public and of the health care system itself. It can be used to improve data holdings, data collection, quality assessment, and the dissemination of information. Data sources can be examined to eliminate duplicate records, to identify under-reporting and missing cases (e.g., census population counts), to create person-oriented health statistics, and to generate disease registries and health surveillance systems. Some cancer registries link various data sources (e.g., hospital admissions, pathology and clinical reports, and death registrations) to generate their registries. Record linkage is also used to create health indicators. For example, fetal and infant mortality is a general indicator of a country's socioeconomic development, public health, and maternal and child services. If infant death records are matched to birth records, it is possible to use birth variables, such as birth weight and gestational age, along with mortality data, such as cause of death, in analyzing the data. Linkages can help in follow-up studies of cohorts or other groups to determine factors such as vital status, residential status, or health outcomes. Tracing is often needed for follow-up of industrial cohorts, clinical trials, and longitudinal surveys to obtain the cause of death and/or cancer. An example of a successful and long-standing record linkage system allowing for population-based medical research is theRochester Epidemiology Projectbased inRochester, Minnesota.[28]
|
https://en.wikipedia.org/wiki/Identity_resolution
|
The termsschema matchingandmappingare often used interchangeably for adatabaseprocess. For this article, we differentiate the two as follows:schemamatching is the process of identifying that two objects aresemanticallyrelated (the scope of this article), while mapping refers to thetransformationsbetween the objects. For example, given the two schemas DB1.Student (Name, SSN, Level, Major, Marks) and DB2.Grad-Student (Name, ID, Major, Grades), possible matches would be DB1.Student ≈ DB2.Grad-Student, DB1.SSN = DB2.ID, etc., and possible transformations or mappings would be DB1.Marks to DB2.Grades (100–90 → A; 90–80 → B; etc.).
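As a small illustration of the distinction (a sketch only; the schema names and grade boundaries are the hypothetical ones from the example above), a match merely records that DB1.Marks corresponds to DB2.Grades, while a mapping is an executable transformation between the two representations:

```python
# The match: DB1.Student.Marks corresponds semantically to DB2.Grad-Student.Grades.
MATCHES = {("DB1.Student", "Marks"): ("DB2.Grad-Student", "Grades")}

def marks_to_grades(marks):
    """The mapping: transform a numeric mark (DB1) into a letter grade (DB2),
    using the illustrative boundaries 100-90 -> A, 90-80 -> B, and so on."""
    boundaries = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]
    for lower, grade in boundaries:
        if marks >= lower:
            return grade
    return "F"

print(marks_to_grades(93))  # 'A'
print(marks_to_grades(85))  # 'B'
```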
Automating these two approaches has been one of the fundamental tasks ofdata integration. In general, it is not possible to fully automatically determine the correspondences between two schemas, primarily because the semantics of the two schemas differ and are often not explicated or documented.
Among others, common challenges to automating matching and mapping have previously been classified in[1](especially for relational DB schemas) and in[2](a fairly comprehensive list of heterogeneities, not limited to the relational model, that distinguishes schematic from semantic differences). Most of these heterogeneities exist because schemas use different representations or definitions to represent the same information (schema conflicts), or because different expressions, units, and precision result in conflicting representations of the same data (data conflicts).[1]Research in schema matching seeks to provide automated support to the process of finding semantic matches between two schemas. This process is made harder by heterogeneities at several levels.[3][4][5][6][7][8]
A generic methodology for the task of schema integration and the activities involved has been discussed in the literature;[5]according to the authors, the integration process can be viewed in terms of these activities.
Approaches to schema integration can be broadly classified as ones that exploit either just schema information or schema and instance level information.[4][5]
Schema-level matchersonly consider schema information, not instance data. The available information includes the usual properties of schema elements, such as name, description, data type, relationship types (part-of, is-a, etc.), constraints, and schema structure. Working at the element (atomic elements like attributes of objects) or structure level (matching combinations of elements that appear together in a structure), these properties are used to identify matching elements in two schemas. Language-based or linguistic matchers use names and text (i.e., words or sentences) to find semantically similar schema elements. Constraint based matchers exploit constraints often contained in schemas. Such constraints are used to define data types and value ranges, uniqueness, optionality, relationship types and cardinalities, etc. Constraints in two input schemas are matched to determine the similarity of the schema elements.
Instance-level matchersuse instance-level data to gather important insight into the contents and meaning of the schema elements. These are typically used in addition to schema-level matchers in order to boost the confidence in match results, especially when the information available at the schema level is insufficient. Matchers at this level use linguistic and constraint-based characterization of instances. For example, using linguistic techniques, it might be possible to look at the Dept, DeptName and EmpName instances to conclude that DeptName is a better match candidate for Dept than EmpName. Constraints such as "ZIP codes must be five digits long" or the expected format of phone numbers may allow matching of such types of instance data.[9]
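The sketch below (hypothetical column names and data; only the Python standard library is used) combines a simple schema-level linguistic matcher based on element-name similarity with an instance-level constraint check, in the spirit of the matchers described above:

```python
import re
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Schema-level linguistic matcher: similarity of the element names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def looks_like_zip(values):
    """Instance-level constraint matcher: do the values fit a 5-digit ZIP pattern?"""
    return all(re.fullmatch(r"\d{5}", str(v)) for v in values)

schema_a = {"SSN": ["123-45-6789"], "Zip": ["10001", "94105"]}
schema_b = {"ID": ["987-65-4321"], "PostalCode": ["60601", "30301"]}

for col_a, vals_a in schema_a.items():
    for col_b, vals_b in schema_b.items():
        score = name_similarity(col_a, col_b)
        # Boost the score if both columns satisfy the same instance-level constraint.
        if looks_like_zip(vals_a) and looks_like_zip(vals_b):
            score += 0.5
        print(f"{col_a} ~ {col_b}: {score:.2f}")
```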
Hybrid matchersdirectly combine several matching approaches to determine match candidates based on multiple criteria or information sources. Most of these techniques also employ additional information such as dictionaries, thesauri, and user-provided match or mismatch information.[10]
Reusing matching informationAnother initiative has been to reuse previous matching information as auxiliary information for future matching tasks. The motivation for this work is that structures or substructures often repeat, for example in schemas in the e-commerce domain. Such reuse of previous matches, however, needs to be applied with care: it may make sense only for some part of a new schema or only in some domains. For example, Salary and Income may be considered identical in a payroll application but not in a tax reporting application. There are several open-ended challenges in such reuse that deserve further work.
Sample PrototypesTypically, the implementation of such matching techniques can be classified as being either rule based or learner based systems. The complementary nature of these different approaches has instigated a number of applications using a combination of techniques depending on the nature of the domain or application under consideration.[4][5]
The relationship types between objects that are identified at the end of a matching process are typically those with set semantics, such as overlap, disjointness, exclusion, equivalence, or subsumption; the logical encodings of these relationships define their meaning. Among others, an early attempt to use description logics for schema integration and for identifying such relationships has been presented.[11]Several state-of-the-art matching tools today[4][7]and those benchmarked in theOntology Alignment Evaluation Initiative[12]are capable of identifying many such simple (1:1 / 1:n / n:1 element-level matches) and complex matches (n:1 / n:m element- or structure-level matches) between objects.
The quality of schema matching is commonly measured byprecision and recall. Precision measures the proportion of matched pairs that are correct matches, while recall measures the proportion of the true matching pairs that have been found.
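A short sketch of these two measures (assuming the true matches are known, for example from a gold standard; the pairs shown are hypothetical):

```python
def precision_recall(predicted_pairs, true_pairs):
    """Precision: fraction of predicted pairs that are truly matching.
    Recall: fraction of the true matching pairs that were predicted."""
    predicted, truth = set(predicted_pairs), set(true_pairs)
    correct = predicted & truth
    precision = len(correct) / len(predicted) if predicted else 1.0
    recall = len(correct) / len(truth) if truth else 1.0
    return precision, recall

predicted = {("DB1.SSN", "DB2.ID"), ("DB1.Name", "DB2.Major")}
truth = {("DB1.SSN", "DB2.ID"), ("DB1.Name", "DB2.Name")}
print(precision_recall(predicted, truth))  # (0.5, 0.5)
```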
|
https://en.wikipedia.org/wiki/Schema_matching
|
Single-instance storage(SIS) is a system's ability to take multiple copies of content and replace them by a single shared copy. It is a means to eliminate data duplication and to increase efficiency. SIS is frequently implemented infile systems,e-mail serversoftware,databackup, and other storage-related computer software. Single-instance storage is a simple variant ofdata deduplication. While data deduplication may work at a segment or sub-block level, single-instance storage works at the whole-file level and eliminates redundant copies of entire files or e-mail messages.[1]
In the case of ane-mail server, single-instance storage means that a single copy of a message is held within itsdatabasewhile individual mailboxes access the content through areference pointer. There is a common misconception that the primary benefit of single-instance storage in mail servers is a reduction in disk space requirements; in practice, its primary benefit is to greatly enhance the delivery efficiency of messages sent to large distribution lists. In a mail server scenario, disk space savings from single-instance storage are transient and drop off very quickly over time.[citation needed]
When used in conjunction with backup software, single-instance storage can reduce the quantity ofarchivemedia required since it avoids storing duplicate copies of the same file. Often identical files are installed on multiple computers, for exampleoperating systemfiles. With single-instance storage, only one copy of a file is written to the backup media therefore reducing space. This becomes more important when the storage is offsite and oncloud storagesuch asAmazon S3. In such cases, it has been reported that deduplication can help reduce the costs of storage, costs of bandwidth and backup windows by up to 10:1.[2]
Novell GroupWisewas built on single-instance storage, which accounts for its large capacity.
ISO CD/DVD image files can be optimized to use SIS to reduce the size of a CD/DVD compilation (if there are enough duplicated files) to make it fit into smaller media.
SIS is related to system-wide file-duplication search and multiple-file-instance detection tools such as the P2P applicationBearShare(versions 5.n and below), but differs in that SIS reduces storage utilization automatically and creates and retains symbolic linkages, whereas BearShare only allows for manual deletion of duplicates and of the associated user-level,Windows Explorer-style icon links.
SIS was introduced with theRemote Installation Servicesfeature ofWindows 2000 Server. A typical server might hold ten or more unique installation configurations (perhaps with differentdevice driversorsoftware suites) but perhaps only 20% of the data may be unique between configurations.[3]Microsoft states that "SIS works by searching a hard disk volume to identify duplicate files. When SIS finds identical files, it saves one copy of the file to a central repository, called the SIS Common Store, and replaces other copies withpointersto the stored versions."[4]Files are compared solely by theirhash functions; files with different names or dates can be consolidated so long as the data itself is identical.[3]Windows Server 2003Standard Edition has SIS capabilities but is limited to OEM OS system installs.[citation needed]
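The following Python sketch illustrates the general idea at the whole-file level (it is not the Windows SIS implementation; the common-store directory and the use of symbolic links are assumptions for illustration): duplicate files are identified by a content hash, one copy is moved into a common store, and the duplicates are replaced by links to the stored version.

```python
import hashlib
import os
import shutil

def file_hash(path, chunk_size=1 << 20):
    """Hash a file's contents; files are treated as identical if hashes match."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def single_instance(volume, store):
    """Move the first copy of each unique file into the common store and
    replace every further copy with a symbolic link to the stored version."""
    os.makedirs(store, exist_ok=True)
    for root, _dirs, files in os.walk(volume):
        for name in files:
            path = os.path.join(root, name)
            if os.path.islink(path):
                continue  # already consolidated
            stored = os.path.join(store, file_hash(path))
            if os.path.exists(stored):
                os.remove(path)            # duplicate: drop it, link instead
            else:
                shutil.move(path, stored)  # first copy: move into the store
            os.symlink(stored, path)

# single_instance("/data/backup", "/data/.sis_store")  # illustrative paths
```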
The file-basedWindows Imaging Formatintroduced inWindows Vistaalso supported single-instance storage. Single-instance storage was a feature ofMicrosoft Exchange Serverfrom version 4.0 and is also present in Microsoft'sWindows Home Server. In Exchange Server 2007 it deduplicates only attachments, and it was dropped completely in Microsoft Exchange Server 2010.[5]Microsoft announced Windows Storage Server 2008 (WSS2008)[6]with Single Instance Storage on June 1, 2009, and states this feature is not available onWindows Server 2008.[6]
The feature has been officially deprecated since Windows Server 2012, when a new, more powerful chunk-based data deduplication mechanism was introduced; it allows files with merely similar content to be deduplicated as long as they contain stretches of identical data.[7]Since Windows Server 2019, this chunk-based data deduplication is fully supported on ReFS.[8]
|
https://en.wikipedia.org/wiki/Single-instance_storage
|
Acceleratorswere a form ofselection-based searchwhich allowed a user to invoke an online service from any other page using only the mouse; they were introduced byMicrosoftinInternet Explorer 8.[1]Actions such as selecting the text or other objects gave users access to the Accelerator services (such as blogging with the selected text, or viewing a map of a selected geographical location), which could then be invoked with the selected object.
According to Microsoft, Accelerators eliminated the need to copy and paste content between web pages.[2]IE 8 specified anXML-based encoding which allowed aweb applicationorweb serviceto be invoked as an Accelerator service. How the service would be invoked and for what categories of content it would show up were specified in the XML file.[3]Similarities have been drawn between Accelerators and the controversialsmart tags, a feature experimented with in theIE 6Beta but withdrawn after criticism (though later included inMS Office).[4]
Support for Accelerators was removed inMicrosoft Edge, the successor to Internet Explorer.[5]
Microsoft introduced Accelerators in Internet Explorer 8 Beta 1 under the name "Activities".[6]The feature was renamed to "Accelerators" in IE 8 Beta 2.[7][8]
Accelerators were included in IE8 by default as a type of add-on.
A map Accelerator, for example, is described in an XML file in the OpenService Format, which specifies the service's display name and icon, the category of content it handles (such as maps), and how the service is invoked with the selected content.
|
https://en.wikipedia.org/wiki/Accelerator_(Internet_Explorer)
|
GRDDL(pronounced "griddle") is a markup format forGleaning Resource Descriptions from Dialects of Languages. It is aW3C Recommendation, and enables users to obtainRDFtriplesout ofXMLdocuments, includingXHTML. The GRDDL specification shows examples usingXSLT, however it was intended to be abstract enough to allow for other implementations as well. It became a Recommendation on September 11, 2007.[1]
A document specifies its associated transformations in one of a number of ways. For instance, an XHTML document may carry this information in itsheadelement: document consumers are informed that GRDDL transformations are available in the page by including the GRDDL profile URI,http://www.w3.org/2003/g/data-view, in theprofileattribute of theheadelement, and the available transformations are then revealed through one or morelinkelements whoserelattribute istransformationand whosehrefpoints to the transformation (typically anXSLTstylesheet) to apply.
This mechanism is specific toXHTML1.x: theprofileattribute has been dropped inHTML5, including its XML serialisation.
If an XHTML page containsMicroformats, there is usually a specific profile.
For instance, a document with hcard information should reference the hcard profile,http://www.w3.org/2006/03/hcard, in itsprofileattribute. When that profile URI is fetched, the profile document in turn contains alinkwithrelset toprofileTransformation, pointing to a transformation for hcard data. The GRDDL-aware agent can then use that profileTransformation to extractallhcard data from pages that reference that profile.
In a similar fashion to XHTML, GRDDL transformations can be attached to XML documents.
Just like a profileTransformation, an XML namespace can have a transformation associated with it.
This allows entire XML dialects (for instance, KML or Atom) to provide meaningful RDF.
An XML document simply declares a namespace such ashttp://example.com/1.0/, and when that namespace URI is fetched, the namespace document points to a namespaceTransformation.
This also allows very large amounts of the existing XML data in the wild to become RDF/XML with minimal effort from the namespace author.
Once a document has been transformed, there is anRDFrepresentation of that data.
This output is generally put into a database and queried viaSPARQL.
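A minimal sketch of what a GRDDL-aware consumer might do, assuming the third-party lxml and rdflib libraries and placeholder file names; the real GRDDL processing rules (profile handling, recursive namespace documents, security considerations) are more involved:

```python
from lxml import etree
from rdflib import Graph

# Apply the transformation referenced by the source document to obtain RDF/XML.
source = etree.parse("page.xhtml")                    # placeholder source document
transform = etree.XSLT(etree.parse("hcard2rdf.xsl"))  # placeholder transformation
rdf_xml = bytes(transform(source))

# Load the resulting triples into a graph and query them with SPARQL.
graph = Graph()
graph.parse(data=rdf_xml, format="xml")
query = """
    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    LIMIT 10
"""
for s, p, o in graph.query(query):
    print(s, p, o)
```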
|
https://en.wikipedia.org/wiki/GRDDL
|