De Graff may refer to:
In the United States:
Source: https://en.wikipedia.org/wiki/De_Graff_(disambiguation)
Graf is a German comital title, which is part of many compound titles.
Graf may also refer to:
Source: https://en.wikipedia.org/wiki/Graf_(disambiguation)
Graph may refer to:
Source: https://en.wikipedia.org/wiki/Graph_(disambiguation)
Grof or Gróf may refer to:
People:
Other:
Source: https://en.wikipedia.org/wiki/Grof_(disambiguation)
Groff may refer to:
Source: https://en.wikipedia.org/wiki/Groff_(disambiguation)
In computer science, graph transformation, or graph rewriting, concerns the technique of creating a new graph out of an original graph algorithmically. It has numerous applications, ranging from software engineering (software construction and also software verification) to layout algorithms and picture generation.
Graph transformations can be used as a computation abstraction. The basic idea is that if the state of a computation can be represented as a graph, further steps in that computation can then be represented as transformation rules on that graph. Such rules consist of an original graph, which is to be matched to a subgraph in the complete state, and a replacing graph, which will replace the matched subgraph.
Formally, a graph rewriting system usually consists of a set of graph rewrite rules of the form L → R, with L being called the pattern graph (or left-hand side) and R the replacement graph (or right-hand side of the rule). A graph rewrite rule is applied to the host graph by searching for an occurrence of the pattern graph (pattern matching, thus solving the subgraph isomorphism problem) and by replacing the found occurrence by an instance of the replacement graph. Rewrite rules can be further regulated in the case of labeled graphs, such as in string-regulated graph grammars.
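To make the rule format concrete, here is a minimal Python sketch of one rewrite step on a small directed graph, assuming a brute-force subgraph match. The function names and the example rule (rewrite an edge x → y into a path x → z → y) are purely illustrative and are not taken from any particular graph-rewriting tool.

```python
from itertools import permutations

# Graphs are modelled as (nodes, edges), with edges as directed pairs.
def find_match(pattern_nodes, pattern_edges, host_nodes, host_edges):
    """Return one injective mapping pattern node -> host node that preserves
    all pattern edges, or None (brute force; fine only for tiny graphs)."""
    for image in permutations(sorted(host_nodes), len(pattern_nodes)):
        m = dict(zip(sorted(pattern_nodes), image))
        if all((m[a], m[b]) in host_edges for a, b in pattern_edges):
            return m
    return None

def apply_rule(lhs, rhs, host):
    """Apply the rule L -> R once: delete matched copies of edges that are in
    L but not in R, then add fresh copies of edges that are in R but not in L.
    Nodes shared by L and R play the role of the preserved interface."""
    (l_nodes, l_edges), (r_nodes, r_edges) = lhs, rhs
    h_nodes, h_edges = set(host[0]), set(host[1])
    m = find_match(l_nodes, l_edges, h_nodes, h_edges)
    if m is None:
        return None                              # rule not applicable
    for a, b in l_edges - r_edges:               # edges deleted by the rule
        h_edges.discard((m[a], m[b]))
    fresh = {x: f"new_{x}" for x in r_nodes - l_nodes}   # nodes created by the rule
    m.update(fresh)
    h_nodes.update(fresh.values())
    for a, b in r_edges - l_edges:               # edges created by the rule
        h_edges.add((m[a], m[b]))
    return h_nodes, h_edges

# Example rule: replace an edge x -> y with a path x -> z -> y.
lhs = ({"x", "y"}, {("x", "y")})
rhs = ({"x", "y", "z"}, {("x", "z"), ("z", "y")})
host = ({"a", "b", "c"}, {("a", "b"), ("b", "c")})
print(apply_rule(lhs, rhs, host))
```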
Sometimes graph grammar is used as a synonym for graph rewriting system, especially in the context of formal languages; the different wording is used to emphasize the goal of constructions, like the enumeration of all graphs from some starting graph, i.e. the generation of a graph language, instead of simply transforming a given state (host graph) into a new state.
The algebraic approach to graph rewriting is based upon category theory. The algebraic approach is further divided into sub-approaches, the most common of which are the double-pushout (DPO) approach and the single-pushout (SPO) approach. Other sub-approaches include the sesqui-pushout and the pullback approach.
From the perspective of the DPO approach, a graph rewriting rule is a pair of morphisms in the category of graphs and graph homomorphisms between them: r = (L ← K → R), also written L ⊇ K ⊆ R, where K → L is injective. The graph K is called invariant or sometimes the gluing graph. A rewriting step or application of a rule r to a host graph G is defined by two pushout diagrams both originating in the same morphism k: K → D, where D is a context graph (this is where the name double-pushout comes from). Another graph morphism m: L → G models an occurrence of L in G and is called a match. The practical understanding of this is that L is a subgraph that is matched in G (see subgraph isomorphism problem), and after a match is found, L is replaced with R in the host graph G, where K serves as an interface containing the nodes and edges that are preserved when applying the rule. The graph K is needed to attach the pattern being matched to its context: if it is empty, the match can only designate a whole connected component of the graph G.
In contrast, a graph rewriting rule of the SPO approach is a single morphism in the category of labeled multigraphs and partial mappings that preserve the multigraph structure: r: L → R. Thus a rewriting step is defined by a single pushout diagram. The practical understanding of this is similar to the DPO approach. The difference is that there is no interface between the host graph G and the graph G' that is the result of the rewriting step.
From the practical perspective, the key distinction between DPO and SPO is how they deal with the deletion of nodes with adjacent edges, in particular, how they avoid that such deletions leave behind "dangling edges". The DPO approach only deletes a node when the rule specifies the deletion of all adjacent edges as well (this dangling condition can be checked for a given match), whereas the SPO approach simply disposes of the adjacent edges, without requiring an explicit specification.
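The contrast can be illustrated with a small, hypothetical helper that computes which host edges would be left dangling by a node deletion; under DPO semantics the match would be rejected, while an SPO-style rewrite would silently drop those edges. This is only an illustrative sketch, not the formal pushout construction.

```python
def dangling_edges(deleted_nodes, deleted_edges, host_edges):
    """Host edges incident to a node scheduled for deletion that the rule
    does not itself delete; these would be left dangling."""
    return {(a, b) for a, b in host_edges - deleted_edges
            if a in deleted_nodes or b in deleted_nodes}

host_edges = {("a", "b"), ("b", "c")}

# Suppose a rule deletes node "b" but specifies no edge deletions at all.
dangling = dangling_edges({"b"}, set(), host_edges)
print(dangling)   # {('a', 'b'), ('b', 'c')}

# DPO: the dangling condition fails, so the rule is not applicable at this match.
# SPO: node "b" is deleted and both incident edges are removed implicitly.
```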
There is also another algebraic-like approach to graph rewriting, based mainly on Boolean algebra and an algebra of matrices, called matrix graph grammars.[1]
Yet another approach to graph rewriting, known as determinate graph rewriting, came out of logic and database theory.[2] In this approach, graphs are treated as database instances, and rewriting operations as a mechanism for defining queries and views; therefore, all rewriting is required to yield unique results (up to isomorphism), and this is achieved by applying any rewriting rule concurrently throughout the graph, wherever it applies, in such a way that the result is indeed uniquely defined.
Another approach to graph rewriting is term graph rewriting, which involves the processing or transformation of term graphs (also known as abstract semantic graphs) by a set of syntactic rewrite rules.
Term graphs are a prominent topic in programming language research since term graph rewriting rules are capable of formally expressing a compiler's operational semantics. Term graphs are also used as abstract machines capable of modelling chemical and biological computations as well as graphical calculi such as concurrency models. Term graphs can perform automated verification and logical programming since they are well suited to representing quantified statements in first-order logic. Symbolic programming software is another application for term graphs, which are capable of representing and performing computation with abstract algebraic structures such as groups, fields and rings.
The TERMGRAPH conference[3] focuses entirely on research into term graph rewriting and its applications.
Graph rewriting systems naturally group into classes according to the kind of representation of graphs that are used and how the rewrites are expressed. The term graph grammar, otherwise equivalent to graph rewriting system or graph replacement system, is most often used in classifications. Some common types are:
Graphs are an expressive, visual and mathematically precise formalism for modelling of objects (entities) linked by relations; objects are represented by nodes and relations between them by edges. Nodes and edges are commonly typed and attributed. Computations are described in this model by changes in the relations between the entities or by attribute changes of the graph elements. They are encoded in graph rewrite/graph transformation rules and executed by graph rewrite systems/graph transformation tools.
Source: https://en.wikipedia.org/wiki/Graph_transformation
A hierarchical database model is a data model in which the data is organized into a tree-like structure. The data are stored as records, each of which is a collection of one or more fields. Each field contains a single value, and the collection of fields in a record defines its type. One type of field is the link, which connects a given record to associated records. Using links, records link to other records, which in turn link to further records, forming a tree. An example is a "customer" record that has links to that customer's "orders", which in turn link to "line_items".
The hierarchical database model mandates that each child record has only one parent, whereas each parent record can have zero or more child records. The network model extends the hierarchical model by allowing multiple parents and children. In order to retrieve data from these databases, the whole tree needs to be traversed starting from the root node. Both models were well suited to data that was normally stored on tape drives, which had to move the tape from end to end in order to retrieve data.
When the relational database model emerged, one criticism of hierarchical database models was their close dependence on application-specific implementation. This limitation, along with the relational model's ease of use, contributed to the popularity of relational databases, despite their initially lower performance in comparison with the existing network and hierarchical models.[1]
The hierarchical structure was developed by IBM in the 1960s and used in early mainframe DBMSs. Records' relationships form a treelike model. This structure is simple but inflexible because the relationship is confined to a one-to-many relationship. The IBM Information Management System (IMS) and RDM Mobile are examples of hierarchical database systems with multiple hierarchies over the same data.
The hierarchical data model lost traction as Codd's relational model became the de facto standard used by virtually all mainstream database management systems. A relational-database implementation of a hierarchical model was first discussed in published form in 1992[2] (see also nested set model). Hierarchical data organization schemes resurfaced with the advent of XML in the late 1990s[3] (see also XML database). The hierarchical structure is used primarily today for storing geographic information and file systems.
Currently, hierarchical databases are still widely used, especially in applications that require very high performance and availability, such as banking, health care, and telecommunications. One of the most widely used commercial hierarchical databases is IMS.[4] Another example of the use of hierarchical databases is the Windows Registry in the Microsoft Windows operating systems.[5]
An organization could store employee information in a table that contains attributes/columns such as employee number, first name, last name, and department number. The organization provides each employee with computer hardware as needed, but computer equipment may only be used by the employee to which it is assigned. The organization could store the computer hardware information in a separate table that includes each part's serial number, type, and the employee that uses it. The tables might look like this:
In this model, the employee data table represents the "parent" part of the hierarchy, while the computer table represents the "child" part of the hierarchy.
In contrast to tree structures usually found in computer software algorithms, in this model the children point to the parents.
As shown, each employee may possess several pieces of computer equipment, but each individual piece of computer equipment may have only one employee owner.
Consider the following structure:
In this structure, the "child" is the same type as the "parent". The hierarchy stating that EmpNo 10 is the boss of 20, and that 30 and 40 each report to 20, is represented by the "ReportsTo" column. In relational database terms, the ReportsTo column is a foreign key referencing the EmpNo column. If the "child" data type were different, it would be in a different table, but there would still be a foreign key referencing the EmpNo column of the employees table.
This simple model is commonly known as the adjacency list model and was introduced by Dr. Edgar F. Codd after initial criticisms surfaced that the relational model could not model hierarchical data. However, the model is only a special case of a general adjacency list for a graph.
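As an illustration of the adjacency list model just described, the following Python sketch walks the EmpNo/ReportsTo hierarchy from the text (the employee names are invented); a relational database would typically do the same with a recursive query over the foreign key.

```python
from collections import defaultdict

# Employees as (EmpNo, Name, ReportsTo); ReportsTo acts as the self-referencing
# foreign key of the adjacency list model (names are invented for illustration).
employees = [
    (10, "Alice", None),   # the boss
    (20, "Bob",   10),     # reports to 10
    (30, "Carol", 20),     # reports to 20
    (40, "Dave",  20),     # reports to 20
]

children = defaultdict(list)           # parent EmpNo -> list of direct reports
for emp_no, name, reports_to in employees:
    children[reports_to].append(emp_no)

def subordinates(emp_no):
    """All direct and indirect reports of emp_no, depth-first."""
    for child in children[emp_no]:
        yield child
        yield from subordinates(child)

print(list(subordinates(10)))   # [20, 30, 40]
```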
Source: https://en.wikipedia.org/wiki/Hierarchical_database_model
Vadalog is a system for performing complex logic reasoning tasks over knowledge graphs. Its language is based on Warded Datalog±, an extension of the rule-based language Datalog.[1]
Vadalog was developed by researchers at the University of Oxford and Technische Universität Wien, as well as employees at the Bank of Italy.
A knowledge graph management system (KGMS) has to manage knowledge graphs, which incorporate large amounts of data in the form of facts and relationships. In general, it can be seen as the union of three components:[2]
From a more technical standpoint, some additional requirements can be identified for defining a proper KGMS:
Other requirements may include more typical DBMS functions and services, such as the ones proposed by Codd.[8]
Vadalog offers a platform that fulfills all the requirements of a KGMS listed above. It is able to perform rule-based reasoning tasks on top of knowledge graphs, and it also supports the data science workflow, such as data visualization and machine learning.[2]
A rule is an expression of the form head :− a1, ..., an, where:
A rule allows new knowledge to be inferred from the variables that are in its body: when all the variables in the body of a rule are successfully assigned, the rule is activated, resulting in the derivation of the head predicate. Given a database D and a set of rules Σ, a reasoning task aims at inferring new knowledge by applying the rules of the set Σ to the database D (the extensional knowledge).
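A minimal sketch of this body-matching and head-derivation cycle is shown below, using naive forward chaining over plain Datalog-style rules (no existential quantifiers); it is meant only to illustrate rule activation, not Vadalog's actual evaluation engine.

```python
# A rule (head, body) with variables written as uppercase strings.
rules = [
    (("ancestor", "X", "Y"), [("parent", "X", "Y")]),
    (("ancestor", "X", "Z"), [("parent", "X", "Y"), ("ancestor", "Y", "Z")]),
]
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def match(atom, fact, binding):
    """Extend binding so that atom equals fact, or return None on mismatch."""
    if atom[0] != fact[0] or len(atom) != len(fact):
        return None
    b = dict(binding)
    for term, value in zip(atom[1:], fact[1:]):
        if term.isupper():                       # a variable
            if b.setdefault(term, value) != value:
                return None
        elif term != value:                      # a constant
            return None
    return b

def bindings(body, binding):
    """All ways of matching the whole rule body against the current facts."""
    if not body:
        yield binding
        return
    first, rest = body[0], body[1:]
    for fact in facts:
        b = match(first, fact, binding)
        if b is not None:
            yield from bindings(rest, b)

# Naive forward chaining: apply every rule until no new facts are derived.
changed = True
while changed:
    changed = False
    for head, body in rules:
        for b in list(bindings(body, {})):
            derived = (head[0],) + tuple(b[t] if t.isupper() else t for t in head[1:])
            if derived not in facts:
                facts.add(derived)
                changed = True

print(sorted(f for f in facts if f[0] == "ancestor"))
```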
The most widespread form of knowledge adopted over the last decades has been in the form of rules, be it in rule-based systems, ontology-based systems or other forms, and it can typically be captured in knowledge graphs.[7] The nature of knowledge graphs also makes the presence of recursion in these rules a particularly important aspect. Recursion means that the same rules might be applied multiple times before the final answer of the reasoning task is obtained; it is particularly powerful as it allows inference based on previously inferred results. This implies that the system must provide a strategy that guarantees termination. More technically, a program is recursive if the dependency graph built by the application of the rules is cyclic. The simplest form of recursion is that in which the head of a rule also appears in its body (self-recursive rules).
The Vadalog language can answer reasoning queries that also include recursion. It is based on Warded Datalog±, which belongs to the Datalog± family of languages that extends Datalog with existential quantifiers in rule heads[9] and at the same time restricts its syntax in order to achieve decidability and tractability.[10][11] Existential rules are also known as tuple-generating dependencies (tgds).[12]
An existential rule has the following form:
∀X ∀Y (φ(X, Y) → ∃Z ψ(X, Z))
or, alternatively, in Datalog syntax, it can be written as follows:
ψ(X, Z) :− φ(X, Y)
where the variables Z that occur only in the head are implicitly existentially quantified.
Variables in Vadalog are like variables in first-order logic, and a variable is local to the rule in which it occurs. This means that occurrences of the same variable name in different rules refer to different variables.
In the case of a set of rules Σ consisting of the following:
the variable Z in the second rule is said to be dangerous, since the first rule will generate a null in the second term of the atom r, and this null will be injected into the second rule to derive the atom p, leading to a propagation of nulls when trying to find an answer to the program. If arbitrary propagation is allowed, reasoning is undecidable and the program will be infinite.[7] Warded Datalog± overcomes this issue by requiring that, for every rule defined in a set Σ, all the dangerous variables of the rule body coexist in a single body atom, called the ward. The concept of wardedness restricts the way dangerous variables can be used inside a program. Although this is a limit in terms of expressive power, with this requirement, and thanks to its architecture and termination algorithms, Warded Datalog± is able to find answers to a program in a finite number of steps. It also exhibits a good trade-off between computational complexity and expressive power, capturing PTIME data complexity while allowing ontological reasoning and the possibility of running programs with recursion.[13]
Vadalog replicates Warded Datalog± in its entirety and extends the language with:
In addition, the system provides a highly engineered architecture to allow efficient computation. This is done in the following two ways.
The Vadalog system is therefore able to perform ontological reasoning tasks, as it belongs to the Datalog family. Reasoning with the logical core of Vadalog captures OWL 2 QL and SPARQL[16] (through the use of existential quantifiers), and graph analytics (through support for recursion and aggregation).
Consider the following set of Vadalog rules:
person(X) → ∃Y ancestor(Y, X)
ancestor(Y, X), parent(X, Z) → ancestor(Y, Z)
The first rule states that for each person X there exists an ancestor Y. The second rule states that, if Y is an ancestor of X and X is a parent of Z, then Y is an ancestor of Z too. Note the existential quantification in the first position of the ancestor predicate in the first rule, which will generate a null νi in the chase procedure. Such a null is then propagated to the head of the second rule. Consider a database D = {person(Alice), person(Bob), parent(Alice,Bob)} with the extensional facts and, as reasoning task, the query of finding all the entailed ancestor facts.
By performing the chase procedure, the fact ancestor(ν1,Alice) is generated by triggering the first rule on person(Alice). Then, ancestor(ν1,Bob) is created by activating the second rule on ancestor(ν1,Alice) and parent(Alice,Bob). Finally, the first rule could be triggered on person(Bob), but the resulting fact ancestor(ν2,Bob) is isomorphic with ancestor(ν1,Bob), thus this fact is not generated and the corresponding portion of the chase graph is not explored.
In conclusion, the answer to the query is the set of facts {ancestor(ν1,Alice), ancestor(ν1,Bob)}.
The integration of Vadalog with data science tools is achieved by means of data binding primitives and functions.[7]
The system also provides an integration with the JupyterLab platform, where Vadalog programs can be written and run and the output can be read, exploiting the functionalities of the platform. It also offers the possibility to evaluate the correctness of the program, run it and analyse the derivation process of output facts by means of tools such as syntax highlighting, code analysis (checking whether the code is correct or there are errors) and explanations of results (how the result has been obtained): all these functionalities are embedded in the notebook and help in writing and analyzing Vadalog code.
The Vadalog system can be employed to address many real-world use cases from distinct research and industry fields.[3] Among the latter, this section presents two relevant and accessible cases belonging to the financial domain.[17][18][19]
A company ownership graph shows entities as nodes and shares as edges. When an entity holds a certain amount of shares in another one (commonly identified with the absolute majority), it is able to exert decision power over that entity; this constitutes company control and, more generally, a group structure. Searching for all control relationships requires investigating different scenarios and very complex group structures, namely direct and indirect control. This query can be translated into the following rules:
These rules can be written in a Vadalog program that will derive all control edges, like the following:
The first rule states that each company controls itself. The second rule defines control of X over Z by summing the shares of Z held by companies Y, over all companies Y controlled by X.
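A rough Python rendering of these two rules, using a toy ownership table and an iterative fixpoint in place of Vadalog's aggregation, might look as follows; the share figures are invented and the 50% threshold reflects the "absolute majority" mentioned above.

```python
# Direct share ownership: own[(X, Y)] = fraction of Y's shares held by X.
own = {
    ("A", "B"): 0.6,     # A holds 60% of B directly
    ("A", "C"): 0.3,     # A holds 30% of C directly ...
    ("B", "C"): 0.25,    # ... and B, controlled by A, holds another 25% of C
}
companies = {c for pair in own for c in pair}

def controls(x):
    """Companies controlled by x: a fixpoint of the two rules in the text."""
    controlled = {x}                       # rule 1: every company controls itself
    changed = True
    while changed:
        changed = False
        for z in companies - controlled:
            # rule 2: sum the shares of z held by companies y already controlled by x
            total = sum(own.get((y, z), 0.0) for y in controlled)
            if total > 0.5:
                controlled.add(z)
                changed = True
    return controlled - {x}

print(controls("A"))   # {'B', 'C'}: 30% directly plus 25% via B exceeds 50%
```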
This scenario consists of determining whether there exists a link between two entities in a company ownership graph. Determining the existence of such links is relevant, for instance, in banking supervision and creditworthiness evaluation, as a company cannot act as guarantor for loans to another company if the two share such a relationship. Formally, two companies X and Y are involved in a close link if:
These rules can be written in a Vadalog program that will derive all close link edges, like the following:
The first rule states that two companies X and Y connected by an ownership edge are possible close links. The second rule states that, if X and Y are possible close links with a share S1 and there exists an ownership edge from Y to a company Z with a share S2, then X and Z are also possible close links with a share S1*S2. The third rule states that, if the sum of all the partial shares S of Y owned directly or indirectly by X is greater than or equal to 20% of the equity of Y, then they are close links according to the first definition. The fourth rule models the second definition of close links, i.e., the third-party case.
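The same logic can be sketched in Python by accumulating the product of shares along ownership paths and comparing the total against the 20% threshold; the data is invented and the third-party rule is omitted for brevity.

```python
# Direct ownership edges with share fractions (toy data).
own = {("A", "B"): 0.15, ("B", "C"): 0.8, ("A", "C"): 0.1}

def integrated_share(x, y, seen=frozenset()):
    """Sum, over all ownership paths from x to y, of the product of the shares
    along the path (rules 1 and 2 in the text), avoiding cycles."""
    total = own.get((x, y), 0.0)
    for (a, b), s in own.items():
        if a == x and b != y and b not in seen:
            total += s * integrated_share(b, y, seen | {x})
    return total

def close_link(x, y):
    """Rule 3: x and y are close links if x owns, directly or indirectly,
    at least 20% of y (the third-party rule 4 is analogous and omitted)."""
    return integrated_share(x, y) >= 0.2

print(close_link("A", "C"))   # True: 10% directly plus 15% * 80% via B = 22%
```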
Source: https://en.wikipedia.org/wiki/Vadalog
A triplestore or RDF store is a purpose-built database for the storage and retrieval of triples[1] through semantic queries. A triple is a data entity composed of subject–predicate–object, like "Bob is 35" (i.e., Bob's age measured in years is 35) or "Bob knows Fred".
Much like a relational database, information in a triplestore is stored and retrieved via a query language. Unlike a relational database, a triplestore is optimized for the storage and retrieval of triples. In addition to queries, triples can usually be imported and exported using the Resource Description Framework (RDF) and other formats.
Some triplestores have been built as database engines from scratch, while others have been built on top of existing commercial relational database engines (such as SQL-based ones)[2] or NoSQL document-oriented database engines.[3] Like the early development of online analytical processing (OLAP) databases, this intermediate approach allowed large and powerful database engines to be constructed for little programming effort in the initial phases of triplestore development. A difficulty with implementing triplestores over SQL is that although "triples" may thus be "stored", implementing efficient querying of a graph-based RDF model (such as mapping from SPARQL) onto SQL queries is difficult.[4]
Adding a name to the triple makes a "quad store" or named graph.
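As a toy illustration of the data model (not of any real triplestore engine), triples can be kept as a set of tuples and queried with a single wildcard pattern, which is roughly what one SPARQL triple pattern expresses:

```python
# A toy triple store: a set of (subject, predicate, object) tuples.
triples = {
    ("Bob", "age", 35),
    ("Bob", "knows", "Fred"),
    ("Fred", "age", 40),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

print(query(predicate="knows"))          # [('Bob', 'knows', 'Fred')]
print(query(subject="Bob", obj=35))      # [('Bob', 'age', 35)]

# A "quad store" simply adds a graph name as a fourth component:
quads = {("Bob", "age", 35, "people-graph")}
```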
A graph database has a more generalized structure than a triplestore, using graph structures with nodes, edges, and properties to represent and store data. Graph databases might provide index-free adjacency, meaning every element contains a direct pointer to its adjacent elements, so that no index lookups are necessary. General graph databases that can store any graph are distinct from specialized graph databases such as triplestores and network databases.
Source: https://en.wikipedia.org/wiki/RDF_Database
NoSQL (originally meaning "Not only SQL" or "non-relational")[1] refers to a type of database design that stores and retrieves data differently from the traditional table-based structure of relational databases. Unlike relational databases, which organize data into rows and columns like a spreadsheet, NoSQL databases use a single data structure, such as key–value pairs, wide columns, graphs, or documents, to hold information. Since this non-relational design does not require a fixed schema, it scales easily to manage large, often unstructured datasets.[2] NoSQL systems are sometimes called "Not only SQL" because they can support SQL-like query languages or work alongside SQL databases in polyglot-persistent setups, where multiple database types are combined.[3][4] Non-relational databases date back to the late 1960s, but the term "NoSQL" emerged in the early 2000s, spurred by the needs of Web 2.0 companies like social media platforms.[5][6]
NoSQL databases are popular in big data and real-time web applications due to their simple design, ability to scale across clusters of machines (called horizontal scaling), and precise control over data availability.[7][8] These structures can speed up certain tasks and are often considered more adaptable than fixed database tables.[9] However, many NoSQL systems prioritize speed and availability over strict consistency (per the CAP theorem), using eventual consistency, where updates reach all nodes eventually, typically within milliseconds, but may cause brief delays in accessing the latest data, known as stale reads.[10] While most lack full ACID transaction support, some, like MongoDB, include it as a key feature.[11]
Barriers to wider NoSQL adoption include their use of low-level query languages instead of SQL, inability to perform ad hoc joins across tables, lack of standardized interfaces, and significant investments already made in relational databases.[12] Some NoSQL systems risk losing data through lost writes or other forms, though features like write-ahead logging (a method to record changes before they are applied) can help prevent this.[13][14] For distributed transaction processing across multiple databases, keeping data consistent is a challenge for both NoSQL and relational systems, as relational databases cannot enforce rules linking separate databases, and few systems support both ACID transactions and X/Open XA standards for managing distributed updates.[15][16] Limitations within the interface environment are overcome using semantic virtualization protocols, such that NoSQL services are accessible to most operating systems.[17]
The term NoSQL was used by Carlo Strozzi in 1998 to name his lightweight Strozzi NoSQL open-source relational database, which did not expose the standard Structured Query Language (SQL) interface but was still relational.[18] His NoSQL RDBMS is distinct from the around-2009 general concept of NoSQL databases. Strozzi suggests that, because the current NoSQL movement "departs from the relational model altogether, it should therefore have been called more appropriately 'NoREL'",[19] referring to "not relational".
Johan Oskarsson, then a developer at Last.fm, reintroduced the term NoSQL in early 2009 when he organized an event to discuss "open-source distributed, non-relational databases".[20] The name attempted to label the emergence of an increasing number of non-relational, distributed data stores, including open-source clones of Google's Bigtable/MapReduce and Amazon's Dynamo.
There are various ways to classify NoSQL databases, with different categories and subcategories, some of which overlap. What follows is a non-exhaustive classification by data model, with examples:[21]
Key–value (KV) stores use the associative array (also called a map or dictionary) as their fundamental data model. In this model, data is represented as a collection of key–value pairs, such that each possible key appears at most once in the collection.[24][25]
The key–value model is one of the simplest non-trivial data models, and richer data models are often implemented as an extension of it. The key–value model can be extended to a discretely ordered model that maintains keys in lexicographic order. This extension is computationally powerful, in that it can efficiently retrieve selective key ranges.[26]
Key–value stores can use consistency models ranging from eventual consistency to serializability. Some databases support ordering of keys. There are various hardware implementations: some users store data in memory (RAM), while others store it on solid-state drives (SSD) or rotating disks (hard disk drives, HDD).
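A minimal sketch of an ordered key–value store, keeping keys sorted so that lexicographic range scans touch only the relevant keys, might look like this (the class and key names are illustrative only):

```python
import bisect

class OrderedKVStore:
    """A toy key-value store that keeps keys in lexicographic order so that
    selective key ranges can be scanned without touching the whole store."""
    def __init__(self):
        self._keys = []          # sorted list of keys
        self._values = {}

    def put(self, key, value):
        if key not in self._values:
            bisect.insort(self._keys, key)
        self._values[key] = value

    def get(self, key):
        return self._values.get(key)

    def range(self, start, end):
        """All (key, value) pairs with start <= key < end, in key order."""
        lo = bisect.bisect_left(self._keys, start)
        hi = bisect.bisect_left(self._keys, end)
        return [(k, self._values[k]) for k in self._keys[lo:hi]]

store = OrderedKVStore()
store.put("user:001", "Alice")
store.put("user:002", "Bob")
store.put("order:17", "pending")
print(store.range("user:", "user;"))   # only the keys with the "user:" prefix
```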
The central concept of a document store is that of a "document". While the details of this definition differ among document-oriented databases, they all assume that documents encapsulate and encode data (or information) in some standard formats or encodings. Encodings in use include XML, YAML, and JSON, as well as binary forms like BSON. Documents are addressed in the database via a unique key that represents that document. Another defining characteristic of a document-oriented database is an API or query language to retrieve documents based on their contents.
Different implementations offer different ways of organizing and/or grouping documents:
Compared to relational databases, collections could be considered analogous to tables and documents analogous to records. But they are different – every record in a table has the same sequence of fields, while documents in a collection may have fields that are completely different.
Graph databases are designed for data whose relations are well represented as a graph consisting of elements connected by a finite number of relations. Examples of such data include social relations, public transport links, road maps, network topologies, etc.
The performance of NoSQL databases is usually evaluated using the metric of throughput, which is measured as operations per second. Performance evaluation must pay attention to the right benchmarks, such as production configurations, parameters of the databases, anticipated data volume, and concurrent user workloads.
Ben Scofield rated different categories of NoSQL databases as follows:[28]
Performance and scalability comparisons are most commonly done using the YCSB benchmark.
Since most NoSQL databases lack the ability to perform joins in queries, the database schema generally needs to be designed differently. There are three main techniques for handling relational data in a NoSQL database. (See table join and ACID support for NoSQL databases that support joins.)
Instead of retrieving all the data with one query, it is common to do several queries to get the desired data. NoSQL queries are often faster than traditional SQL queries, so the cost of additional queries may be acceptable. If an excessive number of queries would be necessary, one of the other two approaches is more appropriate.
Instead of only storing foreign keys, it is common to store actual foreign values along with the model's data. For example, each blog comment might include the username in addition to a user id, thus providing easy access to the username without requiring another lookup. When a username changes, however, this will now need to be changed in many places in the database. Thus this approach works better when reads are much more common than writes.[29]
With document databases like MongoDB it is common to put more data in a smaller number of collections. For example, in a blogging application, one might choose to store comments within the blog post document, so that with a single retrieval one gets all the comments. Thus in this approach a single document contains all the data needed for a specific task.
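The nesting and denormalization techniques above can be sketched with plain Python dictionaries standing in for documents (field names are invented; no particular document database's API is implied):

```python
# Nesting data: one blog-post document carries everything needed to render it,
# including its comments. The commenter's username is also denormalized into
# each comment so no second lookup is needed, at the cost of having to update
# many documents if a username ever changes.
post = {
    "_id": "post-42",
    "title": "Why graphs?",
    "body": "Some text.",
    "comments": [
        {"user_id": "u17", "username": "alice", "text": "Nice post!"},
        {"user_id": "u23", "username": "bob",   "text": "Agreed."},
    ],
}

# A single retrieval of the post document yields all of its comments:
for comment in post["comments"]:
    print(comment["username"], "-", comment["text"])

# The multiple-query alternative would instead store only comment ids on the
# post and fetch each comment (and each username) with further lookups.
```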
A database is marked as supporting ACID properties (atomicity, consistency, isolation, durability) or join operations if the documentation for the database makes that claim. However, this doesn't necessarily mean that the capability is fully supported in a manner similar to most SQL databases.
Different NoSQL databases, such as DynamoDB, MongoDB, Cassandra, Couchbase, HBase, and Redis, exhibit varying behaviors when querying non-indexed fields. Many perform full-table or collection scans for such queries, applying filtering operations after retrieving data. However, modern NoSQL databases often incorporate advanced features to optimize query performance. For example, MongoDB supports compound indexes and query-optimization strategies, Cassandra offers secondary indexes and materialized views, and Redis employs custom indexing mechanisms tailored to specific use cases. Systems like Elasticsearch use inverted indexes for efficient text-based searches, but they can still require full scans for non-indexed fields. This behavior reflects the design focus of many NoSQL systems on scalability and efficient key-based operations rather than optimized querying for arbitrary fields. Consequently, while these databases excel at basic CRUD operations and key-based lookups, their suitability for complex queries involving joins or non-indexed filtering varies depending on the database type (document, key–value, wide-column, or graph) and the specific implementation.[33]
Source: https://en.wikipedia.org/wiki/Structured_storage
In natural language processing (NLP), a text graph is a graph representation of a text item (document, passage or sentence). It is typically created as a preprocessing step to support NLP tasks such as text condensation,[1] term disambiguation,[2] (topic-based) text summarization,[3] relation extraction[4] and textual entailment.[5]
The semantics of what a text graph's nodes and edges represent can vary widely. Nodes, for example, can simply correspond to tokenized words, or to domain-specific terms, or to entities mentioned in the text. The edges, in turn, can connect these text-based tokens or can also link to a knowledge base.
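For example, a very simple co-occurrence text graph, with tokens as nodes and edges between nearby tokens, can be sketched with the networkx library; this is just one of the many possible choices of node and edge semantics described above.

```python
import networkx as nx   # assumes networkx is installed

def text_graph(sentence, window=2):
    """Build a simple co-occurrence graph: nodes are lower-cased tokens and an
    edge links tokens that appear within `window` positions of each other.
    (Real systems may instead use terms, entities or links to a knowledge base.)"""
    tokens = sentence.lower().split()
    g = nx.Graph()
    g.add_nodes_from(tokens)
    for i, tok in enumerate(tokens):
        for other in tokens[i + 1:i + window + 1]:
            if other != tok:
                g.add_edge(tok, other)
    return g

g = text_graph("graph based ranking brings order into text graph models")
# Ranking nodes, e.g. by PageRank, is the core idea behind TextRank-style summarization.
print(sorted(nx.pagerank(g).items(), key=lambda kv: -kv[1])[:3])
```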
The TextGraphs Workshop series[6] is a series of regular academic workshops intended to encourage synergy between the fields of natural language processing (NLP) and graph theory. The mix between the two started small, with graph-theoretical frameworks providing efficient and elegant solutions for NLP applications focused on single documents, such as part-of-speech tagging, word-sense disambiguation and semantic role labelling, and grew progressively larger with ontology learning and information extraction from large text collections.
The 11th edition of the workshop (TextGraphs-11) will be co-located with the Annual Meeting of the Association for Computational Linguistics (ACL 2017) in Vancouver, BC, Canada.
Source: https://en.wikipedia.org/wiki/Text_graph
Wikidata is a collaboratively edited multilingual knowledge graph hosted by the Wikimedia Foundation.[2] It is a common source of open data that Wikimedia projects such as Wikipedia,[3][4] and anyone else, are able to use under the CC0 public domain license. Wikidata is a wiki powered by the software MediaWiki, including its extension for semi-structured data, Wikibase. As of early 2025, Wikidata had 1.65 billion item statements (semantic triples).[5]
Wikidata is a document-oriented database, focusing on items, which represent any kind of topic, concept, or object. Each item is allocated a unique, persistent identifier, a positive integer prefixed with the upper-case letter Q, known as a "QID". Q is the starting letter of the first name of Qamarniso Vrandečić (née Ismoilova), an Uzbek Wikimedian married to the Wikidata co-developer Denny Vrandečić.[6] This enables the basic information required to identify the topic that the item covers to be translated without favouring any language.
Examples of items include 1988 Summer Olympics (Q8470), love (Q316), Johnny Cash (Q42775), Elvis Presley (Q303), and Gorilla (Q36611).
Item labels do not need to be unique. For example, there are two items named "Elvis Presley": Elvis Presley (Q303), which represents the American singer and actor, and Elvis Presley (Q610926), which represents his self-titled album. However, the combination of a label and its description must be unique. To avoid ambiguity, an item's unique identifier (QID) is hence linked to this combination.
Fundamentally, an item consists of:
Statements are how any information known about an item is recorded in Wikidata. Formally, they consist of key–value pairs, which match a property (such as "author" or "publication date") with one or more entity values (such as "Sir Arthur Conan Doyle" or "1902"). For example, the informal English statement "milk is white" would be encoded by a statement pairing the property color (P462) with the value white (Q23444) under the item milk (Q8495).
Statements may map a property to more than one value. For example, the "occupation" property for Marie Curie could be linked with the values "physicist" and "chemist", to reflect the fact that she engaged in both occupations.[7]
Values may take on many types including other Wikidata items, strings, numbers, or media files. Properties prescribe what types of values they may be paired with. For example, the property official website (P856) may only be paired with values of type "URL".[8]
Optionally, qualifiers can be used to refine the meaning of a statement by providing additional information. For example, a "population" statement could be modified with a qualifier such as "point in time (P585): 2011" (as its own key–value pair). Values in the statements may also be annotated with references, pointing to a source backing up the statement's content.[9] As with statements, all qualifiers and references are property–value pairs.
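A loose illustration of this item–statement–qualifier–reference structure as a nested Python dictionary is shown below; it mirrors the general shape of Wikidata's data model but is not the exact JSON format returned by the Wikidata API, and the reference URL is invented.

```python
# A loose, illustrative representation of one item with one statement,
# a qualifier and a reference (not the exact wire format of the Wikidata API).
item = {
    "id": "Q8495",                     # milk
    "labels": {"en": "milk"},
    "claims": {
        "P462": [                      # property: color
            {
                "value": "Q23444",     # white
                "qualifiers": {"P585": ["2011"]},                  # point in time
                "references": [{"P854": "https://example.org/source"}],  # reference URL
            }
        ]
    },
}

def values(item, prop):
    """All values recorded for a property, e.g. both occupations of Marie Curie."""
    return [claim["value"] for claim in item["claims"].get(prop, [])]

print(values(item, "P462"))   # ['Q23444']
```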
Each property has a numeric identifier prefixed with a capital P and a page on Wikidata with optional label, description, aliases, and statements. As such, there are properties with the sole purpose of describing other properties, such as subproperty of (P1647).
Properties may also define more complex rules about their intended usage, termed constraints. For example, the capital (P36) property includes a "single value constraint", reflecting the reality that (typically) territories have only one capital city. Constraints are treated as testing alerts and hints, rather than inviolable rules.[10]
Before a new property is created, it needs to undergo a discussion process.[11][12]
The most used property is cites work (P2860), which is used on more than 290,000,000 item pages as of November 2023.[13]
In linguistics, a lexeme is a unit of lexical meaning representing a group of words that share the same core meaning and grammatical characteristics.[14][15] Similarly, Wikidata's lexemes are items with a structure that makes them more suitable to store lexicographical data. Since 2016, Wikidata has supported lexicographical entries in the form of lexemes.[16]
In Wikidata, lexicographical entries have a different identifier from regular item entries. These entries are prefixed with the letter L, such as in the example entries for book and cow. Lexicographical entries in Wikidata can contain statements, senses, and forms.[17] The use of lexicographical entries in Wikidata allows for the documentation of word usage, the connection between words and items on Wikidata, and word translations, and enables machine-readable lexicographical data.
In 2020, lexicographical entries on Wikidata exceeded 250,000. The language with the most lexicographical entries was Russian, with a total of 101,137 lexemes, followed by English with 38,122 lexemes. There are over 668 languages with lexicographical entries on Wikidata.[18]
In Wikidata, a schema is a data model that outlines the necessary attributes for a data item.[19][20] For instance, a data item that uses the attribute "instance of" with the value "human" would typically include attributes such as "place of birth", "date of birth", "date of death", and "place of death".[21] The entity schema in Wikidata utilizes Shape Expressions (ShEx) to describe the data in Wikidata items in the form of a Resource Description Framework (RDF).[22] The use of entity schemas in Wikidata helps address data inconsistencies and unchecked vandalism.[19]
In January 2019, development started on a new extension for MediaWiki to enable storing ShEx in a separate namespace.[23][24] Entity schemas are stored with different identifiers than those used for items, properties, and lexemes. Entity schemas are stored with an "E" identifier, such as E10 for the entity schema of human data instances and E270 for the entity schema of building data instances. This extension has since been installed on Wikidata[25] and enables contributors to use ShEx for validating and describing Resource Description Framework data in items and lexemes. Any item or lexeme on Wikidata can be validated against an entity schema, and this makes it an important tool for quality assurance.
Wikidata's content collections include data for biographies,[26] medicine,[27] digital humanities,[28] and scholarly metadata through the WikiCite project.[29]
It includes data collections from other open projects, including Freebase.[30]
The creation of the project was funded by donations from the Allen Institute for AI, the Gordon and Betty Moore Foundation, and Google, Inc., totaling €1.3 million.[31][32] The development of the project is mainly driven by Wikimedia Deutschland under the management of Lydia Pintscher, and was originally split into three phases:[33]
Wikidata was launched on 29 October 2012 and was the first new project of the Wikimedia Foundation since 2006.[3][34][35] At this time, only the centralization of language links was available. This enabled items to be created and filled with basic information: a label (a name or title), aliases (alternative terms for the label), a description, and links to articles about the topic in all the various language editions of Wikipedia (interlanguage links).
Historically, a Wikipedia article would include a list of interlanguage links (links to articles on the same topic in other editions of Wikipedia, if they existed). Wikidata was originally a self-contained repository of interlanguage links.[36] Wikipedia language editions were still not able to access Wikidata, so they needed to continue to maintain their own lists of interlanguage links.
On 14 January 2013, the Hungarian Wikipedia became the first to enable the provision of interlanguage links via Wikidata.[37] This functionality was extended to the Hebrew and Italian Wikipedias on 30 January, to the English Wikipedia on 13 February, and to all other Wikipedias on 6 March.[38][39][40][41] After no consensus was reached over a proposal to restrict the removal of language links from the English Wikipedia,[42] they were automatically removed by bots. On 23 September 2013, interlanguage links went live on Wikimedia Commons.[43]
On 4 February 2013, statements were introduced to Wikidata entries. The possible values for properties were initially limited to two data types (items and images on Wikimedia Commons), with more data types (such as coordinates and dates) to follow later. The first new type, string, was deployed on 6 March.[44]
The ability for the various language editions of Wikipedia to access data from Wikidata was rolled out progressively between 27 March and 25 April 2013.[45][46] On 16 September 2015, Wikidata began allowing so-called arbitrary access, or access from a given article of a Wikipedia to the statements on Wikidata items not directly connected to it. For example, it became possible to read data about Germany from the Berlin article, which was not feasible before.[47] On 27 April 2016, arbitrary access was activated on Wikimedia Commons.[48]
According to a 2020 study, a large proportion of the data on Wikidata consists of entries imported en masse from other databases by Internet bots, which helps to "break down the walls" of data silos.[49]
On 7 September 2015, the Wikimedia Foundation announced the release of the Wikidata Query Service,[50] which lets users run queries on the data contained in Wikidata.[51] The service uses SPARQL as the query language. As of November 2018, there are at least 26 different tools that allow querying the data in different ways.[52] It uses Blazegraph as its triplestore and graph database.[53][54]
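A minimal sketch of querying the service with the requests library is shown below; the endpoint URL, P31 ("instance of") and Q146 ("house cat") are real Wikidata identifiers, but the query itself is just a standard introductory example rather than anything prescribed by the service.

```python
import requests   # assumes the requests library is available

ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .                                   # instances of house cat
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "wikidata-example/0.1"},   # the service asks clients to identify themselves
)
for row in response.json()["results"]["bindings"]:
    print(row["item"]["value"], row["itemLabel"]["value"])
```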
In 2021, Wikimedia Deutschland released the Query Builder,[55] "a form-based query builder to allow people who don't know how to use SPARQL" to write a query.
The bars on the logo contain the word "WIKI" encoded in Morse code.[56] It was created by Arun Ganesh and selected through community decision.[57]
In November 2014, Wikidata received the Open Data Publisher Award from the Open Data Institute "for sheer scale, and built-in openness".[58]
In December 2014, Google announced that it would shut down Freebase in favor of Wikidata.[59]
As of November 2018, Wikidata information was used in 58.4% of all English Wikipedia articles, mostly for external identifiers or coordinate locations. In aggregate, data from Wikidata is shown in 64% of all Wikipedias' pages, 93% of all Wikivoyage articles, 34% of all Wikiquotes', 32% of all Wikisources', and 27% of Wikimedia Commons.[60]
As of December 2020, Wikidata's data was visualized by at least 20 other external tools,[61] and over 300 papers have been published about Wikidata.[62]
A systematic literature review of the uses of Wikidata in research was carried out in 2019.[69]
Source: https://en.wikipedia.org/wiki/Wikidata
In computing and telecommunications, a character is the internal representation of a character (symbol) used within a computer or system.
Examples of characters include letters, numerical digits, punctuation marks (such as "." or "-"), and whitespace. The concept also includes control characters, which do not correspond to visible symbols but rather to instructions to format or process the text. Examples of control characters include carriage return and tab as well as other instructions to printers or other devices that display or otherwise process text.
Characters are typically combined into strings.
Historically, the term character was used to denote a specific number of contiguous bits. While a character is most commonly assumed to refer to 8 bits (one byte) today, other options like the 6-bit character code were once popular,[1][2] and the 5-bit Baudot code has been used in the past as well. The term has even been applied to 4 bits,[3] with only 16 possible values. All modern systems use a varying-size sequence of these fixed-sized pieces; for instance, UTF-8 uses a varying number of 8-bit code units to define a "code point", and Unicode uses a varying number of those to define a "character".
Computers and communication equipment represent characters using a character encoding that assigns each character to something (typically an integer quantity represented by a sequence of digits) that can be stored or transmitted through a network. Two common encodings are ASCII and the UTF-8 encoding for Unicode. While most character encodings map characters to numbers and/or bit sequences, Morse code instead represents characters using a series of electrical impulses of varying length.
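A short Python illustration of this mapping from characters to integers and byte sequences:

```python
ch = "€"                                  # a character outside ASCII

print(ord(ch))                            # its Unicode code point: 8364 (U+20AC)
print(ch.encode("utf-8"))                 # three bytes in UTF-8: b'\xe2\x82\xac'
print(ch.encode("utf-16-le"))             # two bytes in UTF-16:  b'\xac '
print("A".encode("ascii"))                # ASCII maps 'A' to the single byte 0x41
```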
The dictionary Merriam-Webster defines a "character", in the relevant sense, as "a symbol (such as a letter or number) that represents information; also: a representation of such a symbol that may be accepted by a computer".[4]
Historically, the term character has been widely used by industry professionals to refer to an encoded character, often as defined by the programming language or API. Likewise, character set has been widely used to refer to a specific repertoire of characters that have been mapped to specific bit sequences or numerical codes. The term glyph is used to describe a particular visual appearance of a character. Many computer fonts consist of glyphs that are indexed by the numerical code of the corresponding character.
With the advent and widespread acceptance of Unicode[5] and bit-agnostic coded character sets, a character is increasingly being seen as a unit of information, independent of any particular visual manifestation. The ISO/IEC 10646 (Unicode) International Standard defines character, or abstract character, as "a member of a set of elements used for the organization, control, or representation of data". Unicode's definition supplements this with explanatory notes that encourage the reader to differentiate between characters, graphemes, and glyphs, among other things. Such differentiation is an instance of the wider theme of the separation of presentation and content.
For example, the Hebrew letter aleph ("א") is often used by mathematicians to denote certain kinds of infinity (ℵ), but it is also used in ordinary Hebrew text. In Unicode, these two uses are considered different characters, and have two different Unicode numerical identifiers ("code points"), though they may be rendered identically. Conversely, the Chinese logogram for water ("水") may have a slightly different appearance in Japanese texts than it does in Chinese texts, and local typefaces may reflect this. But nonetheless in Unicode they are considered the same character, and share the same code point.
The Unicode standard also differentiates between these abstract characters and coded characters or encoded characters that have been paired with numeric codes that facilitate their representation in computers.
The combining character is also addressed by Unicode. For instance, Unicode allocates a code point to each of the letter 'i', the combining diaeresis, and the precomposed character 'ï'.
This makes it possible to code the middle character of the word 'naïve' either as a single character 'ï' or as a combination of the character 'i' with the combining diaeresis (U+0069 LATIN SMALL LETTER I + U+0308 COMBINING DIAERESIS); this is also rendered as 'ï'.
These are considered canonically equivalent by the Unicode standard.
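This canonical equivalence can be checked directly in Python with the standard unicodedata module:

```python
import unicodedata

composed = "\u00ef"            # 'ï' as a single precomposed character
decomposed = "i\u0308"         # 'i' followed by COMBINING DIAERESIS

print(composed == decomposed)                                # False: different code points
print(unicodedata.normalize("NFC", decomposed) == composed)  # True: canonically equivalent
print(len(composed), len(decomposed))                        # 1 code point vs 2
```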
A char in the C programming language is a data type with the size of exactly one byte,[6][7] which in turn is defined to be large enough to contain any member of the "basic execution character set". The exact number of bits can be checked via the CHAR_BIT macro. By far the most common size is 8 bits, and the POSIX standard requires it to be 8 bits.[8] In newer C standards char is required to hold UTF-8 code units,[6][7] which requires a minimum size of 8 bits.
A Unicode code point may require as many as 21 bits.[9] This will not fit in a char on most systems, so more than one is used for some of them, as in the variable-length encoding UTF-8, where each code point takes 1 to 4 bytes. Furthermore, a "character" may require more than one code point (for instance with combining characters), depending on what is meant by the word "character".
The fact that a character was historically stored in a single byte led to the two terms ("char" and "character") being used interchangeably in most documentation. This often makes the documentation confusing or misleading when multibyte encodings such as UTF-8 are used, and has led to inefficient and incorrect implementations of string manipulation functions (such as computing the "length" of a string as a count of code units rather than bytes). Modern POSIX documentation attempts to fix this, defining "character" as a sequence of one or more bytes representing a single graphic symbol or control code, and attempts to use "byte" when referring to char data.[10][11] However, it still contains errors such as defining an array of char as a character array (rather than a byte array).[12]
Unicode can also be stored in strings made up of code units that are larger than char. These are called "wide characters". The original C type was called wchar_t. Due to some platforms defining wchar_t as 16 bits and others defining it as 32 bits, recent versions have added char16_t and char32_t. Even then the objects being stored might not be characters; for instance the variable-length UTF-16 is often stored in arrays of char16_t.
Other languages also have a char type. Some, such as C++, use at least 8 bits like C.[7] Others, such as Java, use 16 bits for char in order to represent UTF-16 values.
Source: https://en.wikipedia.org/wiki/Character_(computing)
Grapheme–color synesthesia or colored grapheme synesthesia is a form of synesthesia in which an individual's perception of numerals and letters is associated with the experience of colors. Like all forms of synesthesia, grapheme–color synesthesia is involuntary, consistent and memorable.[1] Grapheme–color synesthesia is one of the most common forms of synesthesia and, because of the extensive knowledge of the visual system, one of the most studied.[2]
While it is extremely unlikely that any two synesthetes will report the same colors for all letters and numbers, studies of large numbers of synesthetes find that there are some commonalities across letters (e.g., "A" is likely to be red).[3][4] Early studies argued that grapheme–color synesthesia was not due to associative learning.[5] However, one recent study has documented a case of synesthesia in which synesthetic associations could be traced back to colored refrigerator magnets.[6] Despite the existence of this individual case, the majority of synesthetic associations do not seem to be driven by learning of this sort.[4][7] Rather, it seems that more frequent letters are paired with more frequent colors, and some meaning-based rules, such as 'b' being blue, drive most synesthetic associations.
With more recent technology, and as synesthesia has become better known, there has been considerably more research into why and how it occurs. It has been found that grapheme–color synesthetes have more grey matter in their brain. There is evidence of an increased grey matter volume in the left caudal intraparietal sulcus (IPS).[8] An increased grey matter volume was also found in the right fusiform gyrus. These results are consistent with another study on the brain functioning of grapheme–color synesthetes. Grapheme–color synesthetes tend to have an increased thickness, volume and surface area of the fusiform gyrus.[2] Furthermore, the area of the brain where word, letter and color processing are located, V4a, is where the most significant difference in make-up was found. Though not certain, these differences are thought to be part of the reason for the presence of grapheme–color synesthesia.
Synesthetes often report that they were unaware their experiences were unusual until they realized other people did not have them, while others report feeling as if they had been keeping a secret their entire lives. Many synesthetes can vividly remember when they first noticed their synesthetic experiences, or when they first learned that such experiences were unusual.[1] Writer and synesthete Patricia Lynne Duffy remembers one early experience:
"'One day,' I said to my father, 'I realized that to make an 'R' all I had to do was first write a 'P' and then draw a line down from its loop. And I was so surprised that I could turn a yellow letter into an orange letter just by adding a line.'"[9]
As does filmmaker Stephanie Morgenstern:
"A few years ago, I mentioned to a friend that I remembered phone numbers by their colour. He said "So you're a synesthete!" I hadn't heard of synesthesia (which means something close to 'sense-fusion') – I only knew that numbers seemed naturally to have colours: five is blue, two is green, three is red… And music has colours too: the key of C# minor is a sharp, tangy yellow, F major is a warm brown..."[10]
Many synesthetes never realize that their experiences are in any way unusual or exceptional. For example, the Nobel Prize-winning physicist Richard Feynman reports:
"When I see equations, I see the letters in colors – I don't know why. As I'm talking, I see vague pictures of Bessel functions from Jahnke and Emde's book, with light-tan j's, slightly violet-bluish n's, and dark brown x's flying around. And I wonder what the hell it must look like to the students."[11]
While synesthetes sometimes report seeing colors projected in space, they do not confuse their synesthetic colors with real colors in the external world. Rather, they report that they are simultaneously aware of the external color and also the internal, synesthetic color:
As C relates ... "It is difficult to explain...I see what you see. I know the numbers are in black...but as soon as I recognise the form of a 7 it has to be yellow."[12]
Finally, synesthetes are quite precise in the color mappings that they experience, which can lead them to make quite detailed comparisons of their colors:
I came back from college on a semester break, and was sitting with my family around the dinner table, and – I don't know why I said it – but I said, "The number five is yellow." There was a pause, and my father said, "No, it's yellow-ochre." And my mother and my brother looked at us like, 'this is a new game, would you share the rules with us?'"
And I was dumbfounded. So I thought, "Well." At that time in my life I was having trouble deciding whether the number two was green and the number six blue, or just the other way around. And I said to my father, "Is the number two green?" and he said, "Yes, definitely. It's green." And then he took a long look at my mother and my brother and became very quiet.
Thirty years after that, he came to my loft in Manhattan and he said, "you know, the number four *is* red, and the number zero is white. And," he said, "the number nine is green." I said, "Well, I agree with you about the four and the zero, but nine is definitely not green!"[13]
Individuals with grapheme–color synesthesia rarely claim that their sensations are problematic or unwanted. In some cases, individuals report useful effects, such as aid in memory or spelling of difficult words.
I sometimes use my synaesthesia to help me remember difficult proper names. Here's a Thai chef who wrote a terrific vegetarian cookbook [these letters appear in a distinct pattern for Cassidy]:
Unfortunately, this method can backfire too, because I confuse similarly colored names easily [the following names appear very similarly colored to Cassidy]:
This is especially problematic at parties.
These experiences have led to the development of technologies intended to improve the retention and memory of graphemes by individuals without synesthesia. Computers, for instance, could use "artificial synesthesia" to color words and numbers to improve usability.[15] A somewhat related example of "computer-aided synesthesia" is using letter coloring in a web browser to prevent IDN homograph attacks. (Someone with synesthesia can sometimes distinguish between barely different looking characters in a similar way.)
|
https://en.wikipedia.org/wiki/Grapheme%E2%80%93color_synesthesia
|
In semiotics, a sign is anything that communicates a meaning that is not the sign itself to the interpreter of the sign. The meaning can be intentional, as when a word is uttered with a specific meaning, or unintentional, as when a symptom is taken as a sign of a particular medical condition. Signs can communicate through any of the senses: visual, auditory, tactile, olfactory, or gustatory.
Two major theories describe the way signs acquire the ability to transfer information. Both theories understand the defining property of the sign as a relation between a number of elements. In semiology, the tradition of semiotics developed byFerdinand de Saussure(1857–1913), the sign relation is dyadic, consisting only of a form of the sign (the signifier) and its meaning (the signified). Saussure saw this relation as being essentially arbitrary (the principle ofsemiotic arbitrariness), motivated only bysocial convention. Saussure's theory has been particularly influential in the study of linguistic signs. The other majorsemiotic theory, developed byCharles Sanders Peirce(1839–1914), defines the sign as a triadic relation as "something that stands for something, to someone in some capacity".[1]This means that a sign is a relation between the sign vehicle (the specific physical form of the sign), a sign object (the aspect of the world that the sign carries meaning about) and an interpretant (the meaning of the sign as understood by an interpreter). According to Peirce, signs can be divided by the type of relation that holds the sign relation together as eithericons, indices orsymbols. Icons are those signs that signify by means ofsimilaritybetween sign vehicle and sign object (e.g. a portrait or map), indices are those that signify by means of a direct relation of contiguity or causality between sign vehicle and sign object (e.g. a symptom), and symbols are those that signify through a law or arbitrary social convention.
According toFerdinand de Saussure(1857–1913), a sign is composed of thesignifier[2](signifiant), and thesignified(signifié). These cannot be conceptualized as separate entities but rather as a mapping from significant differences in sound to potential (correct) differential denotation. The Saussurean sign exists only at the level of thesynchronicsystem, in which signs are defined by their relative and hierarchical privileges of co-occurrence. It is thus a common misreading of Saussure to take signifiers to be anything one could speak, and signifieds as things in the world. In fact, the relationship of language toparole(or speech-in-context) is and always has been a theoretical problem for linguistics (cf. Roman Jakobson's famous essay "Closing Statement: Linguistics and Poetics" et al.).
A famous thesis by Saussure states that the relationship between a sign and the real-world thing it denotes is an arbitrary one. There is not a natural relationship between a word and the object it refers to, nor is there a causal relationship between the inherent properties of the object and the nature of the sign used to denote it. For example, there is nothing about the physical quality of paper that requires denotation by the phonological sequence 'paper'. There is, however, what Saussure called 'relative motivation': the possibilities of signification of a signifier are constrained by thecompositionalityof elements in the linguistic system (cf.Émile Benveniste's paper on the arbitrariness of the sign in the first volume of his papers on general linguistics). In other words, a word is only available to acquire a new meaning if it is identifiablydifferentfrom all the other words in the language and it has no existing meaning.Structuralismwas later based on this idea that it is only within a given system that one can define the distinction between the levels of system and use, or the semantic "value" of a sign.
Charles Sanders Peirce(1839–1914) proposed a different theory. Unlike Saussure who approached the conceptual question from a study oflinguisticsandphonology, Peirce, considered the father ofPragmaticism, extended the concept of sign to embrace many other forms. He considered "word" to be only one particular kind of sign, and characterized sign as any mediational means tounderstanding. He covered not only artificial, linguistic and symbolic signs, but also all semblances (such as kindred sensible qualities), and all indicators (such as mechanical reactions). He counted as symbols all terms, propositions and arguments whose interpretation is based upon convention or habit, even apart from their expression in particular languages. He held that "all this universe is perfused with signs, if it is not composed exclusively of signs".[3]The setting of Peirce's study of signs is philosophical logic, which he defined as formal semiotic,[4]and characterized as a normative field following esthetics and ethics, as more basic than metaphysics,[5]and as the art of devising methods of research.[6]He argued that, since all thought takes time, all thought is in signs,[7]that all thought has the form of inference (even when not conscious and deliberate),[7]and that, as inference, "logic is rooted in the social principle", since inference depends on a standpoint that, in a sense, is unlimited.[8]The result is a theory not of language in particular, but rather of the production of meaning, and it rejects the idea of a static relationship between a sign and what it represents: itsobject. Peirce believed that signs are meaningful through recursive relationships that arise in sets of three.
Even when a sign represents by a resemblance or factual connection independent of interpretation, the sign is a sign only insofar as it is at least potentially interpretable by a mind and insofar as the sign is a determination of a mind or at least aquasi-mind, that functions as if it were a mind, for example in crystals and the work of bees[9]—the focus here is on sign action in general, not on psychology, linguistics, or social studies (fields Peirce also pursued).
A sign depends on an object in a way that enables (and, in a sense, determines) an interpretation, aninterpretant, to depend on the objectas the sign depends on the object. The interpretant, then, is a further sign of the object, and thus enables and determines still further interpretations, further interpretant signs. The process, calledsemiosis, is irreducibly triadic, Peirce held, and is logically structured to perpetuate itself. It is what defines sign, object and interpretant in general.[10]AsJean-Jacques Nattiezput it, "the process of referring effected by the sign isinfinite." (Peirce used the word "determine" in the sense not of strict determinism, but of effectiveness that can vary like an influence.[11][12])
Peirce further characterized the threesemiotic elementsas follows:[13]
Peirce explained that signs mediate between their objects and their interpretants in semiosis, the triadic process of determination. In semiosis afirstis determined or influenced to be a sign by asecond, as its object. The object determines the sign to determine athirdas an interpretant.Firstnessitself is one of Peirce'sthree categoriesof all phenomena, and is quality of feeling. Firstness is associated with a vague state of mind as feeling and a sense of the possibilities, with neither compulsion nor reflection. In semiosis the mind discerns an appearance or phenomenon, a potential sign.Secondnessis reaction or resistance, a category associated with moving from possibility to determinate actuality. Here, through experience outside of and collateral to the given sign or sign system, one recalls or discovers the object the sign refers to, for example when a sign consists in a chance semblance of an absent but remembered object. It is through one's collateral experience[15]that the object determines the sign to determine an interpretant.Thirdnessis representation or mediation, the category associated with signs, generality, rule, continuity, habit-taking and purpose. Here one forms an interpretant expressing a meaning or ramification of the sign about the object. When a second sign is considered, the initial interpretant may be confirmed, or new possible meanings may be identified. As each new sign is addressed, more interpretants, themselves signs, emerge. It can involve a mind's reading of nature, people, mathematics, anything.
Peirce generalized the communicational idea of utterance and interpretation of a sign, to cover all signs:[16]
Admitting that connected Signs must have a Quasi-mind, it may further be declared that there can be no isolated sign. Moreover, signs require at least two Quasi-minds; aQuasi-uttererand aQuasi-interpreter; and although these two are at one (i.e., are one mind) in the sign itself, they must nevertheless be distinct. In the Sign they are, so to say,welded. Accordingly, it is not merely a fact of human Psychology, but a necessity of Logic, that every logical evolution of thought should be dialogic.
According to Nattiez, writing withJean Molino, the tripartite definition of sign, object and interpretant is based on the "trace" orneutral level, Saussure's "sound-image" (or "signified", thus Peirce's "representamen"). Thus, "a symbolic form...is not some 'intermediary' in a process of 'communication' that transmits the meaning intended by the author to the audience; it is instead the result of a complexprocessof creation (thepoieticprocess) that has to do with the form as well as the content of the work; it is also the point of departure for a complex process of reception (theesthesicprocess thatreconstructsa 'message'").[17]
Molino's and Nattiez's diagram:
Peirce's theory of the sign therefore offered a powerful analysis of the signification system, its codes, and its processes of inference and learning—because the focus was often on natural or cultural context rather than linguistics, which only analyses usage in slow time whereas human semiotic interaction in the real world often has a chaotic blur of language and signal exchange. Nevertheless, the implication that triadic relations are structured to perpetuate themselves leads to a level of complexity not usually experienced in the routine of message creation and interpretation. Hence, different ways of expressing the idea have developed.
By 1903,[18]Peirce came toclassify signsby three universal trichotomies dependent on his three categories (quality, fact, habit). He classified any sign:[19]
Because of those classificatory interdependences, the three trichotomies intersect to form ten (rather than 27) classes of signs. There are also various kinds of meaningful combination. Signs can be attached to one another. A photograph is an index with a meaningfully attached icon. Arguments are composed of dicisigns, and dicisigns are composed of rhemes. In order to be embodied, legisigns (types) need sinsigns (tokens) as their individual replicas or instances. A symbol depends as a sign on how itwillbe interpreted, regardless of resemblance or factual connection to its object; but the symbol's individual embodiment is an index to your experience of the object. A symbol is instanced by a specialized indexical sinsign. A symbol such as a sentence in a language prescribes qualities of appearance for its instances, and is itself a replica of a symbol such as a proposition apart from expression in a particular language. Peirce covered both semantic and syntactical issues in his theoretical grammar, as he sometimes called it. He regarded formal semiotic, as logic, as furthermore encompassing study of arguments (hypothetical,deductiveandinductive) and inquiry's methods includingpragmatism; and as allied to but distinct from logic's pure mathematics.
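The arithmetic behind "ten (rather than 27)" can be made explicit. On the standard reading, a sign's category in each later trichotomy can never outrank its category in the earlier one (a qualisign can only be an icon, and an icon can only be a rheme), so only the non-increasing combinations survive. The sketch below is an illustrative enumeration under that assumption, not Peirce's own notation:

```python
# Enumerate the sign classes that remain when the three trichotomies are combined
# under the constraint that later trichotomies cannot outrank earlier ones.
from itertools import product

TRICHOTOMIES = [
    ("qualisign", "sinsign", "legisign"),  # the sign in itself
    ("icon", "index", "symbol"),           # the sign's relation to its object
    ("rheme", "dicisign", "argument"),     # the sign's relation to its interpretant
]

classes = [
    tuple(tri[i] for tri, i in zip(TRICHOTOMIES, combo))
    for combo in product(range(3), repeat=3)
    if combo[0] >= combo[1] >= combo[2]    # e.g. a symbol must be a legisign
]

for sign_class in classes:
    print(sign_class)                      # ('legisign', 'symbol', 'argument'), etc.
print(len(classes))                        # 10, not 3**3 = 27
```

The ten triples printed range from the rhematic iconic qualisign up to the argument, which is necessarily a symbolic legisign.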
Peirce sometimes referred to thegroundof a sign. The ground is the pure abstraction of a quality.[22]A sign's ground is therespectin which the sign represents its object, e.g. as inliteral and figurative language. For example, an iconpresentsa characteristic or quality attributed to an object, while a symbolimputesto an object a quality either presented by an icon or symbolized so as to evoke a mental icon.
Peirce called an icon apart from a label, legend, or other index attached to it, a "hypoicon", and divided the hypoicon into three classes: (a) theimage, which depends on a simple quality; (b) thediagram, whose internal relations, mainly dyadic or so taken, represent by analogy the relations in something; and (c) themetaphor, which represents the representative character of a sign by representing a parallelism in something else.[23]A diagram can be geometric, or can consist in an array of algebraic expressions, or even in the common form "All __ is ___" which is subjectable, like any diagram, to logical or mathematical transformations. Peirce held that mathematics is done by diagrammatic thinking—observation of, and experimentation on, diagrams. Peirce developed for deductive logic a system of visualexistential graphs, which continue to be researched today.
It is now agreed that the effectiveness of the acts that may convert the message into text (including speaking, writing, drawing, music and physical movements) depends uponthe knowledge of the sender. If the sender is not familiar with the current language, its codes and its culture, then he or she will not be able to say anything at all, whether as a visitor in a different language area or because of a medical condition such asaphasia.
Modern theories deny the Saussurian distinction between signifier and signified, and look for meaning not in the individual signs, but in their context and the framework of potential meanings that could be applied. Such theories assert that language is a collective memory or cultural history of all the different ways in which meaning has been communicated, and may to that extent, constitute all life's experiences (seeLouis Hjelmslev). Hjelmslev did not consider the sign to be the smallestsemioticunit, as he believed it possible to decompose it further; instead, he considered the "internal structure of language" to be a system offigurae, a concept somewhat related to that offigure of speech, which he considered to be the ultimate semiotic unit.[24][25][26]
This position implies that speaking is simply one more form of behaviour and changes the focus of attention from the text as language, to the text as arepresentationof purpose, a functional version ofauthorial intent. But, once the message has been transmitted, the text exists independently.[citation needed]
Hence, although the writers who co-operated to produce this page exist, they can only be represented by the signs actually selected and presented here. The interpretation process in the receiver's mind may attribute meanings completely different from those intended by the senders. But, why might this happen? Neither the sender nor the receiver of a text has a perfect grasp of all language. Each individual's relatively smallstockof knowledge is the product of personal experience and their attitude to learning. When theaudiencereceives the message, there will always be an excess of connotations available to be applied to the particular signs in their context (no matter how relatively complete or incomplete their knowledge, thecognitiveprocess is the same).[citation needed]
The first stage in understanding the message is therefore, to suspend or defer judgement until more information becomes available. At some point, the individual receiver decides which of all possible meanings represents the best possible fit. Sometimes, uncertainty may not be resolved, so meaning is indefinitely deferred, or a provisional or approximate meaning is allocated. More often, the receiver's desire forclosure(seeGestalt psychology) leads to simple meanings being attributed out of prejudices and without reference to the sender's intentions.[citation needed]
Incritical theory, the notion of sign is used variously. AsDaniel Chandlerhas said:
Many postmodernist theorists postulate a complete disconnection of the signifier and the signified. An 'empty' or 'floating signifier' is variously defined as a signifier with a vague, highly variable, unspecifiable or non-existent signified. Such signifiers mean different things to different people: they may stand for many or even any signifieds; they may mean whatever their interpreters want them to mean.[27]
In the semiotic theory ofFélix Guattari, semioticblack holesare the "a-temporal" destruction ofsigns.[28][further explanation needed]
|
https://en.wikipedia.org/wiki/Sign_(semiotics)
|
Graphocentrism or scriptism is a typically unconscious interpretative bias in which writing is privileged over speech.[1][2]
Biases in favor of the written or printed word are closely associated with the ranking of sight above sound, the eye above the ear, which has been called 'ocularcentrism'.[3] It opposes phonocentrism, which is the bias in favor of speech.
|
https://en.wikipedia.org/wiki/Graphocentrism
|
Graphonomics is the interdisciplinary field directed towards the scientific analysis of the handwriting process, product, and other graphic skills.[1][2]
Researchers in handwriting recognition, forensic handwriting examination, kinesiology, psychology, computer science, artificial intelligence, paleography and neuroscience cooperate in order to achieve a better understanding of the human skill of handwriting. Research in graphonomics generally involves handwriting movement analysis[3] in one form or another.
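As a concrete illustration of what handwriting movement analysis involves at its simplest, the sketch below computes the pen tip's tangential velocity from sampled tablet coordinates and counts velocity peaks. The data and feature choices are invented for the example and are not an IGS standard.

```python
# Minimal handwriting-kinematics sketch: speed profile and velocity peaks from
# hypothetical (x, y, t) pen samples; illustrative only.
from math import hypot

def tangential_velocity(samples):
    """Finite-difference speed of the pen tip between consecutive (x, y, t) samples."""
    speeds = []
    for (x0, y0, t0), (x1, y1, t1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt > 0:
            speeds.append(hypot(x1 - x0, y1 - y0) / dt)
    return speeds

def velocity_peaks(speeds):
    """Count local maxima of the speed profile, a rough proxy for sub-movements."""
    return sum(1 for a, b, c in zip(speeds, speeds[1:], speeds[2:]) if b > a and b > c)

if __name__ == "__main__":
    # Synthetic stroke sampled every 10 ms (coordinates in mm, time in seconds).
    trace = [(0.0, 0.0, 0.00), (1.0, 0.5, 0.01), (2.5, 1.5, 0.02),
             (3.5, 2.0, 0.03), (4.0, 2.2, 0.04)]
    v = tangential_velocity(trace)
    print("speeds (mm/s):", [round(s, 1) for s in v], "| peaks:", velocity_peaks(v))
```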
The first international conference relating to graphonomics was held in Nijmegen, The Netherlands, in July 1982.[4]The term 'graphonomics' was used there for the first time.[5]
The second conference was held in July 1985 in Hong Kong[4]and, at that meeting, a decision was taken to form theInternational Graphonomics Society.[6]The IGS became a legal non-profit organization under Netherlands law on January 30, 1987.[6]
Subsequently, an international conference, symposium and/or workshop has been held every two years. Past events have been held in various locations with most events having a specific theme, as follows:[7]
As mentioned above, the IGS was created at the 1985 international conference with the main purpose being to coordinate and assist in the growth and development of the field of graphonomics in all its forms.[6]This has been done through conferences, workshops and publication of proceedings from those events.
As the main academic body for graphonomics, the IGS publishes a biannual bulletin as well as proceedings of the biennial conference. TheBulletin of the International Graphonomics Societyis published by the IGS in March and November each year and it is the primary means of communication among IGS members and the public. A complete list of past BIGS issues is available online.[9]Conference proceedings are published in the form of a peer-reviewed scientific journal or book shortly after each of the conferences.[10]
Some research topics in graphonomics include:
|
https://en.wikipedia.org/wiki/Graphonomics
|
In philosophy, deconstruction is a loosely defined set of approaches to understanding the relationship between text and meaning. The concept of deconstruction was introduced by the philosopher Jacques Derrida, who described it as a turn away from Platonism's ideas of "true" forms and essences, which are valued above appearances.[additional citation(s) needed][1]
Since the 1980s, these proposals that language is fluid rather than ideally static and discernible have inspired a range of studies in the humanities,[2] including the disciplines of law,[3]: 3–76[4][5] anthropology,[6] historiography,[7] linguistics,[8] sociolinguistics,[9] psychoanalysis, LGBT studies, and feminism. Deconstruction also inspired deconstructivism in architecture and remains important within art,[10] music,[11] and literary criticism.[12][13]
Jacques Derrida's1967 bookOf Grammatologyintroduced the majority of ideas influential within deconstruction.[14]: 25Derrida published a number of other works directly relevant to the concept of deconstruction, such asDifférance,Speech and Phenomena, andWriting and Difference.
To Derrida,
That is what deconstruction is made of: not the mixture but the tension between memory,fidelity, the preservation of something that has been given to us, and, at the same time, heterogeneity, something absolutely new, and a break.[15]: 6[dubious–discuss]
According to Derrida, and taking inspiration from the work ofFerdinand de Saussure,[16]language as a system of signs and words only has meaning because of the contrast between these signs.[17][14]: 7, 12AsRichard Rortycontends, "words have meaning only because of contrast-effects with other words ... no word can acquire meaning in the way in which philosophers fromAristotletoBertrand Russellhave hoped it might—by being the unmediated expression of something non-linguistic (e.g., an emotion, a sensed observation, a physical object, an idea, aPlatonic Form)".[17]As a consequence, meaning is never present, but rather is deferred to other signs. Derrida refers to the—in his view, mistaken—belief that there is a self-sufficient, non-deferred meaning asmetaphysics of presence. Rather, according to Derrida, a concept must be understood in the context of its opposite: for example, the wordbeingdoes not have meaning without contrast with the wordnothing.[18]: 220[19]: 26
Further, Derrida contends that "in a classical philosophical opposition we are not dealing with the peaceful coexistence of avis-a-vis, but rather with a violent hierarchy. One of the two terms governs the other (axiologically, logically, etc.), or has the upper hand":signifiedoversignifier; intelligible over sensible; speech over writing; activity over passivity, etc.[further explanation needed]The first task of deconstruction is, according to Derrida, to find and overturn these oppositions inside a text or texts; but the final objective of deconstruction is not to surpass all oppositions, because it is assumed they are structurally necessary to produce sense: the oppositions simply cannot be suspended once and for all, as the hierarchy of dual oppositions always reestablishes itself (because it is necessary for meaning). Deconstruction, Derrida says, only points to the necessity of an unending analysis that can make explicit the decisions and hierarchies intrinsic to all texts.[19]: 41[contradictory]
Derrida further argues that it is not enough to expose and deconstruct the way oppositions work and then stop there in a nihilistic or cynical position, "thereby preventing any means of intervening in the field effectively".[19]: 42To be effective, deconstruction needs to create new terms, not to synthesize the concepts in opposition, but to mark their difference and eternal interplay. This explains why Derrida always proposes new terms in his deconstruction, not as a free play but from the necessity of analysis. Derrida called these undecidables—that is, unities of simulacrum—"false" verbal properties (nominal or semantic) that can no longer be included within philosophical (binary) opposition. Instead, they inhabit philosophical oppositions[further explanation needed]—resisting and organizing them—without ever constituting a third term or leaving room for a solution in the form of aHegelian dialectic(e.g.,différance,archi-writing,pharmakon, supplement, hymen, gram, spacing).[19]: 19[jargon][further explanation needed]
Derrida's theories on deconstruction were themselves influenced by the work of linguists such as Ferdinand de Saussure (whose writings onsemioticsalso became a cornerstone ofstructuralismin the mid-20th century) and literary theorists such asRoland Barthes(whose works were an investigation of the logical ends of structuralist thought). Derrida's views on deconstruction stood in opposition to the theories of structuralists such aspsychoanalytic theoristJacques Lacan, and anthropologistClaude Lévi-Strauss. However, Derridaresisted attemptsto label his work as "post-structuralist".
Derrida's motivation for developing deconstructive criticism, suggesting the fluidity of language over static forms, was largely inspired byFriedrich Nietzsche's philosophy, beginning with his interpretation ofTrophonius. InDaybreak, Nietzsche announces that "All things that live long are gradually so saturated with reason that their origin in unreason thereby becomes improbable. Does not almost every precise history of an origination impress our feelings as paradoxical and wantonly offensive? Does the good historian not, at bottom, constantly contradict?".[20]
Nietzsche's point inDaybreakis that standing at the end of modern history, modern thinkers know too much to continue to be deceived by an illusory grasp of satisfactorily complete reason. Mere proposals of heightened reasoning, logic, philosophizing and science are no longer solely sufficient as the royal roads to truth. Nietzsche disregards Platonism to revisualize the history of the West as the self-perpetuating history of a series of political moves, that is, a manifestation of thewill to power, that at bottom have no greater or lesser claim to truth in any noumenal (absolute) sense. By calling attention to the fact that he has assumed the role of a subterranean Trophonius, in dialectical opposition to Plato, Nietzsche hopes to sensitize readers to the political and cultural context, and the political influences that impact authorship.
Where Nietzsche did not achieve deconstruction, as Derrida sees it, is that he missed the opportunity to further explore the will to power as more than a manifestation of the sociopolitically effective operation of writing that Plato characterized, stepping beyond Nietzsche's penultimate revaluation of all Western values, to the ultimate, which is the emphasis on "the role of writing in the production of knowledge".[21]
Derrida approaches all texts as constructed around elemental oppositions which alldiscoursehas to articulate if it intends to make any sense whatsoever. This is so because identity is viewed innon-essentialistterms as a construct, and because constructs only produce meaning through the interplay ofdifferenceinside a "system of distinct signs". This approach to text is influenced by thesemiologyofFerdinand de Saussure.[22][23]
Saussure is considered one of the fathers ofstructuralismwhen he explained that terms get their meaning in reciprocal determination with other terms inside language:
In language there are only differences. Even more important: a difference generally implies positive terms between which the difference is set up; but in language there are only differences without positive terms. Whether we take the signified or the signifier, language has neither ideas nor sounds that existed before the linguistic system, but only conceptual and phonic differences that have issued from the system. The idea or phonic substance that a sign contains is of less importance than the other signs that surround it. [...] A linguistic system is a series of differences of sound combined with a series of differences of ideas; but the pairing of a certain number of acoustical signs with as many cuts made from the mass thought engenders a system of values.[16]
Saussure explicitly suggested that linguistics was only a branch of a more general semiology, a science of signs in general, human codes being only one part. Nevertheless, in the end, as Derrida pointed out, Saussure made linguistics "the regulatory model", and "for essential, and essentially metaphysical, reasons had to privilege speech, and everything that links the sign to phone".[19]: 21, 46, 101, 156, 164Derrida will prefer to follow the more "fruitful paths (formalization)" of a general semiotics without falling into what he considered "a hierarchizing teleology" privileging linguistics, and to speak of "mark" rather than of language, not as something restricted to mankind, but as prelinguistic, as the pure possibility of language, working everywhere there is a relation to something else.[citation needed]
Derrida's original use of the worddeconstructionwas a translation of the GermanDestruktion, a concept from the work ofMartin Heideggerthat Derrida sought to apply to textual reading. Heidegger's term referred to a process of exploring the categories and concepts that tradition has imposed on a word, and the history behind them.[24]
Derrida's concerns flow from a consideration of several issues:
To this end, Derrida follows a long line of modern philosophers, who look backwards to Plato and his influence on the Western metaphysical tradition.[21][page needed]Like Nietzsche, Derrida suspects Plato of dissimulation in the service of a political project, namely the education, through critical reflections, of a class of citizens more strategically positioned to influence the polis. However, unlike Nietzsche, Derrida is not satisfied with such a merely political interpretation of Plato, because of the particular dilemma in which modern humans find themselves. His Platonic reflections are inseparably part of his critique ofmodernity, hence his attempt to be something beyond the modern, because of his Nietzschean sense that the modern has lost its way and become mired innihilism.
Différanceis the observation that the meanings of words come from theirsynchronywith other words within the language and theirdiachronybetween contemporary and historical definitions of a word. Understanding language, according to Derrida, requires an understanding of both viewpoints of linguistic analysis. The focus on diachrony has led to accusations against Derrida of engaging in theetymological fallacy.[25]
There is one statement by Derrida—in an essay onRousseauinOf Grammatology—which has been of great interest to his opponents.[14]: 158It is the assertion that "there is no outside-text" (il n'y a pas de hors-texte),[14]: 158–59, 163which is often mistranslated as "there is nothing outside of the text". The mistranslation is often used to suggest Derrida believes that nothing exists but words.Michel Foucault, for instance, famously misattributed to Derrida the very different phraseIl n'y a rien en dehors du textefor this purpose.[26]According to Derrida, his statement simply refers to the unavoidability of context that is at the heart ofdifférance.[27]: 133
For example, the word house derives its meaning more as a function of how it differs from shed, mansion, hotel, building, etc. (form of content, which Louis Hjelmslev distinguished from form of expression) than from how the word house may be tied to a certain image of a traditional house (i.e., the relationship between signified and signifier), with each term being established in reciprocal determination with the other terms rather than by an ostensive description or definition: when can one talk about a house or a mansion or a shed? The same can be said about verbs in all languages: when should one stop saying walk and start saying run? The same happens, of course, with adjectives: when must one stop saying yellow and start saying orange, or exchange past for present? Not only are the topological differences between the words relevant here, but the differentials between what is signified are also covered by différance.
Thus, complete meaning is always "differential" andpostponedin language; there is never a moment when meaning is complete and total. A simple example would consist of looking up a given word in a dictionary, then proceeding to look up the words found in that word's definition, etc., also comparing with older dictionaries. Such a process would never end.
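The regress can be made concrete with a toy lexicon. The miniature dictionary below is invented for illustration; the point is only that every definition resolves into further words to be looked up, so the chase ends here only because the code refuses to revisit a word.

```python
# Toy illustration of the dictionary regress: every definition is itself made of
# words that must in turn be looked up. The entries are invented for the example.
TOY_DICTIONARY = {
    "house": ["building", "for", "dwelling"],
    "building": ["structure", "with", "walls"],
    "structure": ["something", "built"],
    "built": ["past", "form", "of", "build"],
    "build": ["to", "construct", "a", "structure"],  # loops back to "structure"
}

def trace_definitions(word, seen=None, depth=0):
    """Follow definitions depth-first, printing each hop; stop only when a word repeats."""
    seen = set() if seen is None else seen
    if word in seen or word not in TOY_DICTIONARY:
        return
    seen.add(word)
    print("  " * depth + f"{word} -> {' '.join(TOY_DICTIONARY[word])}")
    for next_word in TOY_DICTIONARY[word]:
        trace_definitions(next_word, seen, depth + 1)

if __name__ == "__main__":
    trace_definitions("house")
```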
Derrida describes the task of deconstruction as the identification of metaphysics of presence, orlogocentrismin western philosophy. Metaphysics of presence is the desire for immediate access to meaning, the privileging of presence over absence. This means that there is an assumed bias in certain binary oppositions where one side is placed in a position over another, such as good over bad, speech over the written word, male over female. Derrida writes,
Without a doubt, Aristotle thinks of time on the basis ofousiaasparousia, on the basis of the now, the point, etc. And yet an entire reading could be organized that would repeat in Aristotle's text both this limitation and its opposite.[24]: 29–67
To Derrida, the central bias of logocentrism was the now being placed as more important than the future or past. This argument is largely based on the earlier work of Heidegger, who, inBeing and Time, claimed that the theoretical attitude of pure presence is parasitical upon a moreoriginaryinvolvement with the world in concepts such asready-to-handandbeing-with.[28]
In the deconstruction procedure, one of Derrida's main concerns is not to collapse into Hegel's dialectic, in which these oppositions would be reduced to contradictions that a dialectic then resolves into a synthesis.[19]: 43 The presence of Hegelian dialectics was enormous in the intellectual life of France during the second half of the 20th century, with the influence of Kojève and Hyppolite, but also with the impact of dialectics based on contradiction developed by Marxists, including the existentialism of Sartre, etc. This explains Derrida's concern to always distinguish his procedure from Hegel's,[19]: 43 since Hegelianism holds that binary oppositions would produce a synthesis, while Derrida saw binary oppositions as incapable of collapsing into a synthesis free from the original contradiction.
There have been problems defining deconstruction. Derrida claimed that all of his essays were attempts to define what deconstruction is,[29]: 4and that deconstruction is necessarily complicated and difficult to explain since it actively criticises the very language needed to explain it.
Derrida has been more forthcoming with negative (apophatic) than with positive descriptions of deconstruction. When asked byToshihiko Izutsusome preliminary considerations on how to translatedeconstructionin Japanese, in order to at least prevent using a Japanese term contrary todeconstruction's actual meaning, Derrida began his response by saying that such a question amounts to "what deconstruction is not, or ratheroughtnot to be".[29]: 1
Derrida states that deconstruction is not an analysis, a critique, or a method[29]: 3in the traditional sense that philosophy understands these terms. In these negative descriptions of deconstruction, Derrida is seeking to "multiply the cautionary indicators and put aside all the traditional philosophical concepts".[29]: 3This does not mean that deconstruction has absolutely nothing in common with an analysis, a critique, or a method, because while Derrida distances deconstruction from these terms, he reaffirms "the necessity of returning to them, at least under erasure".[29]: 3Derrida's necessity of returning to a termunder erasuremeans that even though these terms are problematic, they must be used until they can be effectively reformulated or replaced. The relevance of the tradition of negative theology to Derrida's preference for negative descriptions of deconstruction is the notion that a positive description of deconstruction would over-determine the idea of deconstruction and would close off the openness that Derrida wishes to preserve for deconstruction. If Derrida were to positively define deconstruction—as, for example, a critique—then this would make the concept of critique immune to itself being deconstructed.[citation needed]Some new philosophy beyond deconstruction would then be required in order to encompass the notion of critique.
Derrida states that "Deconstruction is not a method, and cannot be transformed into one".[29]: 3This is because deconstruction is not a mechanical operation. Derrida warns against considering deconstruction as a mechanical operation, when he states that "It is true that in certain circles (university or cultural, especially in the United States) the technical and methodological "metaphor" that seems necessarily attached to the very word 'deconstruction' has been able to seduce or lead astray".[29]: 3Commentator Richard Beardsworth explains that:
Derrida is careful to avoid this term [method] because it carries connotations of a procedural form of judgement. A thinker with a method has already decidedhowto proceed, is unable to give him or herself up to the matter of thought in hand, is a functionary of the criteria which structure his or her conceptual gestures. For Derrida [...] this is irresponsibility itself. Thus, to talk of a method in relation to deconstruction, especially regarding its ethico-political implications, would appear to go directly against the current of Derrida's philosophical adventure.[30]
Beardsworth here explains that it would be irresponsible to undertake a deconstruction with a complete set of rules that need only be applied as a method to the object of deconstruction, because this understanding would reduce deconstruction to a thesis of the reader that the text is then made to fit. This would be an irresponsible act of reading, because it becomes a prejudicial procedure that only finds what it sets out to find.
Derrida states that deconstruction is not acritiquein theKantiansense.[29]: 3This is becauseKantdefines the termcritiqueas the opposite ofdogmatism. For Derrida, it is not possible to escape the dogmatic baggage of the language used in order to perform a pure critique in the Kantian sense. Language is dogmatic because it is inescapablymetaphysical. Derrida argues that language is inescapably metaphysical because it is made up ofsignifiersthat only refer to that which transcends them—the signified.[citation needed]In addition, Derrida asks rhetorically "Is not the idea of knowledge and of the acquisition of knowledge in itself metaphysical?"[3]: 5By this, Derrida means that all claims to know something necessarily involve an assertion of the metaphysical type that somethingisthe case somewhere. For Derrida the concept of neutrality is suspect and dogmatism is therefore involved in everything to a certain degree. Deconstruction can challenge a particular dogmatism and hence de-sediment dogmatism in general, but it cannot escape all dogmatism all at once.
Derrida states that deconstruction is not ananalysisin the traditional sense.[29]: 3This is because the possibility of analysis is predicated on the possibility of breaking up the text being analysed into elemental component parts. Derrida argues that there are no self-sufficient units of meaning in a text, because individual words or sentences in a text can only be properly understood in terms of how they fit into the larger structure of the text and language itself. For more on Derrida's theory of meaning see the article ondifférance.
Derrida states that his use of the word deconstruction first took place in a context in which "structuralismwas dominant" and deconstruction's meaning is within this context. Derrida states that deconstruction is an "antistructuralist gesture" because "[s]tructures were to be undone, decomposed, desedimented". At the same time, deconstruction is also a "structuralist gesture" because it is concerned with the structure of texts. So, deconstruction involves "a certain attention to structures"[29]: 2and tries to "understand how an 'ensemble' was constituted".[29]: 3As both a structuralist and an antistructuralist gesture, deconstruction is tied up with what Derrida calls the "structural problematic".[29]: 2The structural problematic for Derrida is the tension between genesis, that which is "in the essential mode of creation or movement", and structure: "systems, or complexes, or static configurations".[18]: 194An example of genesis would be thesensoryideasfrom which knowledge is then derived in theempiricalepistemology. An example of structure would be abinary oppositionsuch asgood and evilwhere the meaning of each element is established, at least partly, through its relationship to the other element.
It is for this reason that Derrida distances his use of the term deconstruction frompost-structuralism, a term that would suggest that philosophy could simply go beyond structuralism. Derrida states that "the motif of deconstruction has been associated with 'post-structuralism'", but that this term was "a word unknown in France until its 'return' from the United States".[29]: 3In his deconstruction ofEdmund Husserl, Derrida actually arguesforthe contamination of pure origins by the structures of language and temporality.Manfred Frankhas even referred to Derrida's work as "neostructuralism", identifying a "distaste for the metaphysical concepts of domination and system".[31][32]
The popularity of the term deconstruction, combined with the technical difficulty of Derrida's primary material on deconstruction and his reluctance to elaborate his understanding of the term, has meant that many secondary sources have attempted to give a more straightforward explanation than Derrida himself ever attempted. Secondary definitions are therefore an interpretation of deconstruction by the person offering them rather than a summary of Derrida's actual position.
"to show that things - texts, institutions, traditions, societies, beliefs, and practices of whatever size and sort you need - do not have definable meanings and determinable missions, that they are always more than any mission would impose, that they exceed the boundaries they currently occupy"[35]
"While in a sense itisimpossibly difficult to define, the impossibility has less to do with the adoption of a position or the assertion of a choice on deconstruction's part than with the impossibility of every 'is' as such. Deconstruction begins, as it were, from a refusal of the authority or determining power of every 'is', or simply from a refusal of authority in general. While such refusal may indeed count as a position, it is not the case that deconstruction holds this as a sort of 'preference' ".[36][page needed]
[Deconstruction] signifies a project of critical thought whose task is to locate and 'take apart' those concepts which serve as the axioms or rules for a period of thought, those concepts which command the unfolding of an entire epoch of metaphysics. 'Deconstruction' is somewhat less negative than the Heideggerian or Nietzschean terms 'destruction' or 'reversal'; it suggests that certain foundational concepts of metaphysics will never be entirely eliminated...There is no simple 'overcoming' of metaphysics or the language of metaphysics.
A survey of thesecondary literaturereveals a wide range of heterogeneous arguments. Particularly problematic are the attempts to give neat introductions to deconstruction by people trained in literary criticism who sometimes have little or no expertise in the relevant areas of philosophy in which Derrida is working.[editorializing]These secondary works (e.g.Deconstruction for Beginners[38][page needed]andDeconstructions: A User's Guide)[39][page needed]have attempted to explain deconstruction while being academically criticized for being too far removed from the original texts and Derrida's actual position.[citation needed]
Cambridge Dictionarystates thatdeconstructionis "the act of breaking something down into its separate parts in order to understand its meaning, especially when this is different from how it was previously understood".[40]TheMerriam-Websterdictionary states thatdeconstructionis "the analytic examination of something (such as a theory) often in order to reveal its inadequacy".[41]
Derrida's observations have greatly influenced literary criticism and post-structuralism.
Derrida's method consisted of demonstrating all the forms and varieties of the originary complexity of semiotics, and their multiple consequences in many fields. His way of achieving this was by conducting readings of philosophical and literary texts, with the goal of understanding what in those texts runs counter to their apparent systematicity (structural unity) or intended sense (authorial genesis). By demonstrating the aporias and ellipses of thought, Derrida hoped to show the infinitely subtle ways that this originary complexity, which by definition cannot ever be completely known, works its structuring and destructuring effects.[42]
Deconstruction denotes thepursuing of the meaningof a text to the point of exposing the supposed contradictions and internal oppositions upon which it is founded—supposedly showing that those foundations are irreducibly complex, unstable, or impossible. It is an approach that may be deployed in philosophy, inliterary analysis, and even in the analysis of scientific writings.[43]Deconstruction generally tries to demonstrate that any text is not a discrete whole but contains several irreconcilable and contradictory meanings; that any text therefore has more than one interpretation; that the text itself links these interpretations inextricably; that the incompatibility of these interpretations is irreducible; and thus that an interpretative reading cannot go beyond a certain point. Derrida refers to this point as an "aporia" in the text; thus, deconstructive reading is termed "aporetic".[44]He insists that meaning is made possible by the relations of a word to other words within the network of structures that language is.[45]
Derrida initially resisted granting to his approach the overarching namedeconstruction, on the grounds that it was a precise technical term that could not be used to characterize his work generally. Nevertheless, he eventually accepted that the term had come into common use to refer to his textual approach, and Derrida himself increasingly began to use the term in this more general way.
Derrida's deconstruction strategy is also used bypostmoderniststo locate meaning in a text rather than discover meaning due to the position that it has multiple readings. There is a focus on the deconstruction that denotes the tearing apart of a text to find arbitrary hierarchies and presuppositions for the purpose of tracing contradictions that shadow a text's coherence.[46]Here, the meaning of a text does not reside with the author or the author's intentions because it is dependent on the interaction between reader and text.[46]Even the process oftranslationis also seen as transformative since it "modifies the original even as it modifies the translating language".[47]
Derrida's lecture atJohns Hopkins University, "Structure, Sign, and Play in the Human Sciences", often appears in collections as a manifesto against structuralism. Derrida's essay was one of the earliest to propose some theoretical limitations to structuralism, and to attempt to theorize on terms that were clearly no longer structuralist. Structuralism viewed language as a number of signs, composed of a signified (the meaning) and a signifier (the word itself). Derrida proposed that signs always referred to other signs, existing only in relation to each other, and there was therefore no ultimate foundation or centre. This is the basis ofdifférance.[48]
Between the late 1960s and the early 1980s, many thinkers were influenced by deconstruction, includingPaul de Man,Geoffrey Hartman, andJ. Hillis Miller. This group came to be known as theYale schooland was especially influential inliterary criticism. Derrida and Hillis Miller were subsequently affiliated with theUniversity of California, Irvine.[49]
Miller has described deconstruction this way: "Deconstruction is not a dismantling of the structure of a text, but a demonstration that it has already dismantled itself. Its apparently solid ground is no rock, but thin air."[50]
Arguing that law and politics cannot be separated, the founders of the Critical Legal Studies movement found it necessary to criticize the absence of the recognition of this inseparability at the level of theory. To demonstrate theindeterminacyoflegal doctrine, these scholars often adopt a method, such asstructuralisminlinguistics, or deconstruction inContinental philosophy, to make explicit the deep structure of categories and tensions at work in legal texts and talk. The aim was to deconstruct the tensions and procedures by which they are constructed, expressed, and deployed.
For example,Duncan Kennedy, in explicit reference to semiotics and deconstruction procedures, maintains that various legal doctrines are constructed around the binary pairs of opposed concepts, each of which has a claim upon intuitive and formal forms of reasoning that must be made explicit in their meaning and relative value, and criticized. Self and other, private and public, subjective and objective, freedom and control are examples of such pairs demonstrating the influence of opposing concepts on the development of legal doctrines throughout history.[4]
Deconstructive readings of history and sources have changed the entire discipline of history. InDeconstructing History,Alun Munslowexamines history in what he argues is a postmodern age. He provides an introduction to the debates and issues of postmodernist history. He also surveys the latest research into the relationship between the past, history, and historical practice, as well as articulating his own theoretical challenges.[7]
Jean-Luc Nancy argues, in his 1982 book The Inoperative Community, for an understanding of community and society that is undeconstructable because it is prior to conceptualisation. Nancy's work is an important development of deconstruction because it takes the challenge of deconstruction seriously and attempts to develop an understanding of political terms that is undeconstructable and therefore suitable for a philosophy after Derrida. Nancy's work produced a critique of deconstruction by opening up the possibility of a relation to the other; this relation to the other is called “anastasis” in Nancy's work.[51]
Simon Critchleyargues, in his 1992 bookThe Ethics of Deconstruction,[52]that Derrida's deconstruction is an intrinsically ethical practice. Critchley argues that deconstruction involves an openness to theOtherthat makes it ethical in theLevinasianunderstanding of the term.
Jacques Derrida has had a great influence on contemporarypolitical theoryand political philosophy. Derrida's thinking has inspiredSlavoj Zizek,Richard Rorty,Ernesto Laclau,Judith Butlerand many more contemporary theorists who have developed a deconstructive approach topolitics. Because deconstruction examines the internal logic of any given text or discourse it has helped many authors to analyse the contradictions inherent in all schools of thought; and, as such, it has proved revolutionary in political analysis, particularly ideology critiques.[53][page needed]
Richard Beardsworth, developing from Critchley'sEthics of Deconstruction, argues, in his 1996Derrida and the Political, that deconstruction is an intrinsically political practice. He further argues that the future of deconstruction faces a perhaps undecidable choice between atheologicalapproach and a technological approach, represented first of all by the work ofBernard Stiegler.[54]
The term "deconstructing faith" has been used to describe processes of critically examining one's religious beliefs with the possibility of rejecting them, taking individual responsibility for beliefs acquired from others, or reconstructing more nuanced or mature faith. This use of the term has been particularly prominent in American Evangelical Christianity in the 2020s. AuthorDavid Haywardsaid he "co-opted the term"deconstructionbecause he was reading the work of Derrida at the time his religious beliefs came into question.[55]Others had earlier used the term "faith deconstruction" to describe similar processes, and theologianJames W. Fowlerarticulated a similar concept as part of his faith stages theory.[56][57]
Leading Spanish chef Ferran Adrià coined "deconstruction" as a style of cuisine, which he described as drawing from the creative principles of Spanish modernists like Salvador Dalí and Antoni Gaudí to deconstruct conventional cooking techniques in the modern era. Deconstructed recipes typically preserve the core ingredients and techniques of an established dish, but prepare components of a dish separately while experimenting radically with its flavor, texture, ratios, and assembly to culminate in a stark, minimalist style of presentation with similarly minimal portion sizes.[58][59]
Derrida was involved in a number of high-profile disagreements with prominent philosophers, includingMichel Foucault,John Searle,Willard Van Orman Quine,Peter Kreeft, andJürgen Habermas. Most of the criticisms of deconstruction were first articulated by these philosophers and then repeated elsewhere.
In the early 1970s, Searle had a brief exchange withJacques Derridaregardingspeech-act theory. The exchange was characterized by a degree of mutual hostility between the philosophers, each of whom accused the other of having misunderstood his basic points.[27]: 29[citation needed]Searle was particularly hostile to Derrida's deconstructionist framework and much later refused to let his response to Derrida be printed along with Derrida's papers in the 1988 collectionLimited Inc. Searle did not consider Derrida's approach to be legitimate philosophy, or even intelligible writing, and argued that he did not want to legitimize the deconstructionist point of view by paying any attention to it. Consequently, some critics[who?][60]have considered the exchange to be a series of elaborate misunderstandings rather than a debate, while others[who?][61]have seen either Derrida or Searle gaining the upper hand.
The debate began in 1972, when, in his paper "Signature Event Context", Derrida analyzed J. L. Austin's theory of theillocutionary act. While sympathetic to Austin's departure from a purely denotational account of language to one that includes "force", Derrida was sceptical of the framework of normativity employed by Austin. Derrida argued that Austin had missed the fact that any speech event is framed by a "structure of absence" (the words that are left unsaid due to contextual constraints) and by "iterability" (the constraints on what can be said, imposed by what has been said in the past). Derrida argued that the focus onintentionalityin speech-act theory was misguided because intentionality is restricted to that which is already established as a possible intention. He also took issue with the way Austin had excluded the study of fiction, non-serious, or "parasitic" speech, wondering whether this exclusion was because Austin had considered these speech genres as governed by different structures of meaning, or had not considered them due to a lack of interest. In his brief reply to Derrida, "Reiterating the Differences: A Reply to Derrida", Searle argued that Derrida's critique was unwarranted because it assumed that Austin's theory attempted to give a full account of language and meaning when its aim was much narrower. Searle considered the omission of parasitic discourse forms to be justified by the narrow scope of Austin's inquiry.[62][63]Searle agreed with Derrida's proposal that intentionality presupposes iterability, but did not apply the same concept of intentionality used by Derrida, being unable or unwilling to engage with the continental conceptual apparatus.[61]This, in turn, caused Derrida to criticize Searle for not being sufficiently familiar withphenomenologicalperspectives on intentionality.[64]Some critics[who?][64]have suggested that Searle, by being so grounded in the analytical tradition that he was unable to engage with Derrida's continental phenomenological tradition, was at fault for the unsuccessful nature of the exchange, however Searle also argued that Derrida's disagreement with Austin turned on Derrida's having misunderstood Austin'stype–token distinctionand having failed to understand Austin's concept of failure in relation toperformativity.
Derrida, in his response to Searle ("a b c ..."inLimited Inc), ridiculed Searle's positions. Claiming that a clear sender of Searle's message could not be established, Derrida suggested that Searle had formed with Austin asociété à responsabilité limitée(a "limited liability company") due to the ways in which the ambiguities of authorship within Searle's reply circumvented the very speech act of his reply. Searle did not reply. Later in 1988, Derrida tried to review his position and his critiques of Austin and Searle, reiterating that he found the constant appeal to "normality" in the analytical tradition to be problematic.[27]: 133[61][65][66][67][68][69][70]
InThe Philosophical Discourse of Modernity,Jürgen Habermascriticized what he considered Derrida's opposition torational discourse.[71]Further, in an essay on religion and religious language, Habermas criticized what he saw as Derrida's emphasis onetymologyandphilology[71](seeEtymological fallacy).
The American philosopher Walter A. Davis, in Inwardness and Existence: Subjectivity in/and Hegel, Heidegger, Marx and Freud, argues that both deconstruction and structuralism are prematurely arrested moments of a dialectical movement that issues from Hegelian "unhappy consciousness".[72]
Popular criticism of deconstruction intensified following the Sokal affair, which many people took as an indicator of the quality of deconstruction as a whole, despite the absence of Derrida from Sokal's follow-up book Impostures intellectuelles.[73]
Chip Morningstarholds a view critical of deconstruction, believing it to be "epistemologically challenged". He claims the humanities are subject to isolation and genetic drift due to their unaccountability to the world outside academia. During the Second International Conference on Cyberspace (Santa Cruz, California, 1991), he reportedlyheckleddeconstructionists off the stage.[74]He subsequently presented his views in the article "How to Deconstruct Almost Anything", where he stated, "Contrary to the report given in the 'Hype List' column of issue #1 of Wired ('Po-Mo Gets Tek-No', page 87), we did not shout down thepostmodernists. We made fun of them."[75]
|
https://en.wikipedia.org/wiki/Deconstruction
|
Writing systems are used to record human language, and may be classified according to certain common features.
The usual name of the script is given first; the name of the languages in which the script is written follows (in brackets), particularly in the case where the language name differs from the script name. Other informative or qualifying annotations for the script may also be provided.
Ideographic scripts (in which graphemes are ideograms representing concepts or ideas rather than a specific word in a language) and pictographic scripts (in which the graphemes are iconic pictures) are not thought to be able to express all that can be communicated by language, as argued by the linguists John DeFrancis and J. Marshall Unger. Essentially, they postulate that no true writing system can be completely pictographic or ideographic; it must be able to refer directly to a language in order to have the full expressive capacity of a language. Unger disputes claims made on behalf of Blissymbols in his 2004 book Ideogram.
Although a few pictographic or ideographic scripts exist today, there is no single way to read them because there is no one-to-one correspondence between symbol and language. Hieroglyphs were commonly thought to be ideographic before they were translated, and to this day, Chinese is often erroneously said to be ideographic.[1] In some cases of ideographic scripts, only the author of a text can read it with any certainty, and it may be said that they are interpreted rather than read. Such scripts often work best as mnemonic aids for oral texts or as outlines that will be fleshed out in speech.
There are also symbol systems used to represent things other than language.
In logographic writing systems, glyphs represent words or morphemes (meaningful components of words, as in mean-ing-ful) rather than phonetic elements.
No logographic script is composed solely oflogograms. All contain graphemes that representphonetic(sound-based) elements as well. These phonetic elements may be used on their own (to represent, for example, grammatical inflections or foreign words), or may serve asphonetic complementsto a logogram (used to specify the sound of a logogram that might otherwise represent more than one word). In the case of Chinese, the phonetic element is built into the logogram itself; in Egyptian and Mayan, many glyphs are purely phonetic, whereas others function as either logograms or phonetic elements, depending on context. For this reason, many such scripts may be more properly referred to as logosyllabic or complex scripts; the terminology used is largely a product of custom in the field, and is to an extent arbitrary.
In a syllabary, graphemes represent syllables or moras. (The 19th-century term syllabics usually referred to abugidas rather than true syllabaries.)
In most of these systems, some consonant-vowel combinations are written as syllables, but others are written as consonant plus vowel. In the case of Old Persian, all vowels were written regardless, so it was effectively a true alphabet despite its syllabic component. In Japanese a similar system plays a minor role in foreign borrowings; for example, [tu] is written [to]+[u], and [ti] as [te]+[i]. Paleohispanic semi-syllabaries behaved as a syllabary for the stop consonants and as an alphabet for the rest of consonants and vowels.
The Tartessian or Southwestern script is typologically intermediate between a pure alphabet and the Paleohispanic full semi-syllabaries. Although the letter used to write a stop consonant was determined by the following vowel, as in a full semi-syllabary, the following vowel was also written, as in an alphabet. Some scholars treat Tartessian as a redundant semi-syllabary; others treat it as a redundant alphabet. Other scripts, such as Bopomofo, are semi-syllabic in a different sense: they transcribe half syllables. That is, they have letters for syllable onsets and rimes (kan = "k-an") rather than for consonants and vowels (kan = "k-a-n").
A segmental script has graphemes which represent the phonemes (basic units of sound) of a language.
Note that there need not be (and rarely is) a one-to-one correspondence between the graphemes of the script and the phonemes of a language. A phoneme may be represented only by some combination or string of graphemes, the same phoneme may be represented by more than one distinct grapheme, the same grapheme may stand for more than one phoneme, or some combination of all of the above.
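To make this many-to-many mapping concrete, the following minimal sketch (not drawn from the article; it uses only a handful of well-known English examples) records a few grapheme-to-phoneme correspondences as plain data:

```python
# A toy illustration of the many-to-many mapping between English graphemes
# and phonemes described above (illustrative examples only, far from exhaustive).
grapheme_to_phonemes = {
    "ph": ["f"],        # one phoneme written with a two-letter grapheme ("phone")
    "f":  ["f"],        # the same phoneme written with a different grapheme ("fun")
    "c":  ["k", "s"],   # one grapheme standing for more than one phoneme ("cat", "cell")
    "x":  ["k", "s"],   # one grapheme standing for a string of phonemes ("box" ends in /ks/)
}

for grapheme, phonemes in grapheme_to_phonemes.items():
    print(f"<{grapheme}> can represent /{'/, /'.join(phonemes)}/")
```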
Segmental scripts may be further divided according to the types of phonemes they typically record:
An abjad is a segmental script containing symbols for consonants only, or where vowels are optionally written with diacritics ("pointing") or only written word-initially.
A true alphabet contains separate letters (not diacritic marks) for both consonants and vowels.
Linear alphabets are composed of lines on a surface, such as ink on paper.
A featural script has elements that indicate the components of articulation, such as bilabial consonants, fricatives, or back vowels. Scripts differ in how many features they indicate.
Manual alphabets are frequently found as parts of sign languages. They are not used for writing per se, but for spelling out words while signing.
These are other alphabets composed of something other than lines on a surface.
An abugida, or alphasyllabary, is a segmental script in which vowel sounds are denoted by diacritical marks or other systematic modification of the consonants. Generally, however, if a single letter is understood to have an inherent unwritten vowel, and only vowels other than this are written, then the system is classified as an abugida regardless of whether the vowels look like diacritics or full letters. The vast majority of abugidas are found from India to Southeast Asia and belong historically to the Brāhmī family; the term itself, however, is derived from the first characters of the abugida in Ge'ez: አ (a) ቡ (bu) ጊ (gi) ዳ (da) (compare with alphabet). Unlike abjads, the diacritical marks and systemic modifications of the consonants are not optional.
In at least one abugida, not only the vowel but any syllable-final consonant is written with a diacritic. That is, if [o] is represented with an under-ring and final [k] with an over-cross, [sok] would be written as s̥̽.
In a few abugidas, the vowels are basic, and the consonants secondary. If no consonant is written in Pahawh Hmong, it is understood to be /k/; consonants are written after the vowel they precede in speech. In Japanese Braille, the vowels but not the consonants have independent status, and it is the vowels which are modified when the consonant is y or w.
The following list contains writing systems that are in active use by a population of at least 50,000.
These systems have not been deciphered. In some cases, such as Meroitic, the sound values of the glyphs are known, but the texts still cannot be read because the language is not understood. Several of these systems, such as the Isthmian script and the Indus script, are claimed to have been deciphered, but these claims have not been confirmed by independent researchers. In many cases it is doubtful that they are actually writing. The Vinča symbols appear to be proto-writing, and quipu may have recorded only numerical information. There are doubts that the Indus script is writing, and the Phaistos Disc has so little content or context that its nature is undetermined.
Some comparatively recent manuscripts and other texts are written in undeciphered (and often unidentified) writing systems; some of these may represent ciphers of known languages or hoaxes.
This section lists alphabets used to transcribe phonetic or phonemic sound; not to be confused with spelling alphabets like the ICAO spelling alphabet. Some of these are used for transcription purposes by linguists; others are pedagogical in nature or intended as general orthographic reforms.
Alphabets may exist in forms other than visible symbols on a surface.
|
https://en.wikipedia.org/wiki/List_of_writing_systems
|
Of Grammatology (French: De la grammatologie) is a 1967 book by the French philosopher Jacques Derrida. The book, originating the idea of deconstruction, proposes that throughout continental philosophy, especially as philosophers engaged with linguistic and semiotic ideas, writing has been erroneously considered as derivative from speech, making it a "fall" from the real "full presence" of speech and the independent act of writing.
The work was initially unsuccessfully submitted by Derrida as a Doctorat de spécialité thesis (directed by Maurice de Gandillac) under the full title De la grammatologie : Essai sur la permanence de concepts platonicien, aristotélicien et scolastique de signe écrit[1] (Of Grammatology: Essay on the Permanence of Platonic, Aristotelian and Scholastic Concepts of the Written Sign).
InOf Grammatology, Derrida discusses writers such asClaude Lévi-Strauss,Ferdinand de Saussure,Jean-Jacques Rousseau,Étienne Condillac,Louis Hjelmslev,Emile Benveniste,Martin Heidegger,Edmund Husserl,Roman Jakobson,Gottfried Wilhelm Leibniz,André Leroi-Gourhan, andWilliam Warburton. In the course of the work he deconstructs the philosophies of language and the act of writing given by these authors, identifying what he callsphonocentrism, and showing the myriadaporiasand ellipses to which this leads them. Derrida avoids describing what he is theorizing as acritiqueof the work of these thinkers, but he nevertheless calls for a new science of "grammatology" that would explore the questions that he raises about how to theorize the act of writing.[2]
Of Grammatology introduced many of the concepts which Derrida would employ in later work, especially in relation to linguistics and writing.[3]
The book begins with a reading of Saussure's linguistic structuralism as presented in the Course in General Linguistics, and in particular signs, which for Saussure have the two separate components of sound and meaning. These components are also called the signifier (signifiant) and the signified (signifié).[4]
Derrida quotes Saussure: "Language and writing are two distinct systems of signs; the second exists for the sole purpose of representing the first."[5] Highlighting the imbalanced dynamic between speech and writing that Saussure uses, Derrida instead offers the idea that written symbols are in fact legitimate signifiers on their own, and should not be considered as secondary, or derivative, relative to oral speech.[6]
Much of the second half ofOf Grammatologyconsists of a sustained reading of Jean-Jacques Rousseau, especially hisEssay on the Origin of Languages. Derrida analyzes Rousseau in terms of what he calls a "logic of supplementarity,"[7]according to which "the supplement isexterior, outside of the positivity to which it is super-added, alien to that which, in order to be replaced by it, must be other than it."[8]Derrida shows how Rousseau consistently appeals to the idea that a supplement comes from the outside to contaminate a supposedly pure origin (of language, in this case). This tendency manifests in many different binaries that Rousseau sets up throughout theEssay: writing supplements speech, articulation supplements accent, need supplements passion, north supplements south, etc.[9]Derrida calls these binaries a "system of oppositions that controls the entireEssay."[10]He then argues that Rousseau, without expresslydeclaringit, neverthelessdescribeshow a logic of supplementarity isalways alreadyat work in the origin that it is supposed to corrupt: "This relationship of mutual and incessant supplementarity or substitution is the order of language. It is the origin of language, as it is described without being declared, in theEssay on the Origin of Languages."[11]
Of Grammatology was first published by Les Éditions de Minuit in 1967. The English translation by Gayatri Chakravorty Spivak was first published in 1976. A revised edition of the translation was published in 1997. A further revised edition was published in January 2016.[12]
Of Grammatology is one of three books which Derrida published in 1967, and which served to establish his reputation. The other two were La voix et le phénomène, translated as Speech and Phenomena, and L'écriture et la différence, translated as Writing and Difference. It has been called a foundational text for deconstructive criticism.[13]
The philosopher Iain Hamilton Grant has compared Of Grammatology to the philosopher Gilles Deleuze and the psychoanalyst Félix Guattari's Anti-Oedipus (1972), the philosopher Luce Irigaray's Speculum of the Other Woman (1974), the philosopher Jean-François Lyotard's Libidinal Economy (1974), and the sociologist Jean Baudrillard's Symbolic Exchange and Death (1976), noting that like them it forms part of post-structuralism, a response to the demise of structuralism as a dominant intellectual discourse.[14]
|
https://en.wikipedia.org/wiki/Of_Grammatology
|
Post-structuralism is a philosophical movement that questions the objectivity or stability of the various interpretive structures that are posited by structuralism and considers them to be constituted by broader systems of power.[1] Although different post-structuralists present different critiques of structuralism, common themes include the rejection of the self-sufficiency of structuralism, as well as an interrogation of the binary oppositions that constitute its structures. Accordingly, post-structuralism discards the idea of interpreting media (or the world) within pre-established, socially constructed structures.[2][3][4][5]
Structuralism proposes that human culture can be understood by means of a structure that is modeled on language. As a result, there is concrete reality on the one hand, abstract ideas about reality on the other hand, and a "third order" that mediates between the two.[6]
A post-structuralist response, then, might suggest that in order to build meaning out of such an interpretation, one must (falsely) assume that the definitions of these signs are both valid and fixed, and that the author employing structuralist theory is somehow above and apart from these structures they are describing so as to be able to wholly appreciate them. The rigidity and tendency to categorize intimations of universal truths found in structuralist thinking is a common target of post-structuralist thought, while also building upon structuralist conceptions of reality mediated by the interrelationship between signs.[7]
Writers whose works are often characterised as post-structuralist include Roland Barthes, Jacques Derrida, Michel Foucault, Gilles Deleuze, and Jean Baudrillard, although many theorists who have been called "post-structuralist" have rejected the label.[8]
Post-structuralism emerged in France during the 1960s as a movement critiquing structuralism. According to J. G. Merquior, a love–hate relationship with structuralism developed among many leading French thinkers in the 1960s.[4] The period was marked by the rebellion of students and workers against the state in May 1968.
In a 1966 lecture titled "Structure, Sign, and Play in the Discourse of the Human Sciences", Jacques Derrida presented a thesis on an apparent rupture in intellectual life. Derrida interpreted this event as a "decentering" of the former intellectual cosmos. Instead of progress or divergence from an identified centre, Derrida described this "event" as a kind of "play."
A year later, in 1967, Roland Barthes published "The Death of the Author", in which he announced a metaphorical event: the "death" of the author as an authentic source of meaning for a given text. Barthes argued that any literary text has multiple meanings and that the author was not the prime source of the work's semantic content. The "Death of the Author," Barthes maintained, was the "Birth of the Reader," as the source of the proliferation of meanings of the text.[9]
InElements of Semiology(1967), Barthes advances the concept of themetalanguage, a systematized way of talking about concepts like meaning and grammar beyond the constraints of a traditional (first-order) language; in a metalanguage, symbols replace words and phrases. Insofar as one metalanguage is required for one explanation of the first-order language, another may be required, so metalanguages may actually replace first-order languages. Barthes exposes how this structuralist system is regressive; orders of language rely upon a metalanguage by which it is explained, and thereforedeconstructionitself is in danger of becoming a metalanguage, thus exposing all languages and discourse to scrutiny. Barthes' other works contributed deconstructive theories about texts.
The occasional designation of post-structuralism as a movement can be tied to the fact that mounting criticism of Structuralism became evident at approximately the same time that Structuralism became a topic of interest in universities in the United States. This interest led to a colloquium at Johns Hopkins University in 1966 titled "The Languages of Criticism and the Sciences of Man", to which such French philosophers as Jacques Derrida, Roland Barthes, and Jacques Lacan were invited to speak.
Derrida's lecture at that conference, "Structure, Sign, and Play in the Human Sciences", was one of the earliest to propose some theoretical limitations to Structuralism, and to attempt to theorize on terms that were clearly no longer structuralist.
The element of "play" in the title of Derrida's essay is often erroneously interpreted in a linguistic sense, based on a general tendency towards puns and humour, while social constructionism as developed in the later work of Michel Foucault is said to create play in the sense of strategic agency by laying bare the levers of historical change.
Structuralism, as an intellectual movement in France in the 1950s and 1960s, studied underlying structures in cultural products (such as texts) and used analytical concepts from linguistics, psychology, anthropology, and other fields to interpret those structures. Structuralism posits the concept of binary opposition, in which frequently-used pairs of opposite-but-related words (concepts) are often arranged in a hierarchy; for example: Enlightenment/Romantic, male/female, speech/writing, rational/emotional, signified/signifier, symbolic/imaginary, and east/west.
Post-structuralism rejects the structuralist notion that the dominant word in a pair is dependent on itssubservientcounterpart, and instead argues that founding knowledge on either pure experience (phenomenology) or onsystematicstructures (structuralism) is impossible,[10]because history and culture actually condition the study of underlying structures, and these are subject to biases and misinterpretations.Gilles Deleuzeand others saw this impossibility not as a failure or loss, but rather as a cause for "celebration and liberation."[11]A post-structuralist approach argues that to understand an object (a text, for example), one must study both the object itself and thesystemsof knowledge that produced the object.[12]The uncertain boundaries between structuralism and post-structuralism become further blurred by the fact that scholars rarely label themselves as post-structuralists. Some scholars associated with structuralism, such asRoland BarthesandMichel Foucault, also became noteworthy in post-structuralism.[13]
Some observers from outside of the post-structuralist camp have questioned the rigour and legitimacy of the field. American philosopher John Searle suggested in 1990: "The spread of 'poststructuralist' literary theory is perhaps the best-known example of a silly but non-catastrophic phenomenon."[45][46] Similarly, physicist Alan Sokal in 1997 criticized "the postmodernist/poststructuralist gibberish that is now hegemonic in some sectors of the American academy."[47]
Literature scholar Norman Holland in 1992 saw post-structuralism as flawed due to reliance on Saussure's linguistic model, which was seriously challenged by the 1950s and was soon abandoned by linguists:
Saussure's views are not held, so far as I know, by modern linguists, only by literary critics and the occasional philosopher. [Strict adherence to Saussure] has elicited wrong film and literary theory on a grand scale. One can find dozens of books of literary theory bogged down in signifiers and signifieds, but only a handful that refers to Chomsky.[48]
|
https://en.wikipedia.org/wiki/Post-structuralism
|
Structuralism is an intellectual current and methodological approach, primarily in the social sciences, that interprets elements of human culture by way of their relationship to a broader system.[1] It works to uncover the structural patterns that underlie all the things that humans do, think, perceive, and feel.
Alternatively, as summarized by philosopher Simon Blackburn, structuralism is:[2]
"The belief that phenomena of human life are not intelligible except through their interrelations. These relations constitute a structure, and behind local variations in the surface phenomena there are constant laws of abstract structure."
The term structuralism is ambiguous, referring to different schools of thought in different contexts. As such, the movement in humanities and social sciences called structuralism relates to sociology. Emile Durkheim based his sociological concept on 'structure' and 'function', and from his work emerged the sociological approach of structural functionalism.
Apart from Durkheim's use of the term structure, the semiological concept of Ferdinand de Saussure became fundamental for structuralism. Saussure conceived language and society as a system of relations. His linguistic approach was also a refutation of evolutionary linguistics.
Structuralism in Europe developed in the early 20th century, mainly in France and the Russian Empire, in the structural linguistics of Ferdinand de Saussure and the subsequent Prague, Moscow, and Copenhagen schools of linguistics. As an intellectual movement, structuralism became the heir to existentialism. After World War II, an array of scholars in the humanities borrowed Saussure's concepts for use in their respective fields. French anthropologist Claude Lévi-Strauss was arguably the first such scholar, sparking a widespread interest in structuralism.
Throughout the 1940s and 1950s, existentialism, such as that propounded by Jean-Paul Sartre, was the dominant European intellectual movement. Structuralism rose to prominence in France in the wake of existentialism, particularly in the 1960s. The initial popularity of structuralism in France led to its spread across the globe. By the early 1960s, structuralism as a movement was coming into its own and some believed that it offered a single unified approach to human life that would embrace all disciplines.
By the late 1960s, many of structuralism's basic tenets came under attack from a new wave of predominantly French intellectuals/philosophers such as historian Michel Foucault, Jacques Derrida, Marxist philosopher Louis Althusser, and literary critic Roland Barthes. Though elements of their work necessarily relate to structuralism and are informed by it, these theorists eventually came to be referred to as post-structuralists. Many proponents of structuralism, such as Lacan, continue to influence continental philosophy, and many of the fundamental assumptions of some of structuralism's post-structuralist critics are a continuation of structuralist thinking.
Russian functional linguistRoman Jakobsonwas a pivotal figure in the adaptation of structural analysis to disciplines beyond linguistics, including philosophy, anthropology, and literary theory. Jakobson was a decisive influence on anthropologistClaude Lévi-Strauss, by whose work the termstructuralismfirst appeared in reference tosocial sciences. Lévi-Strauss' work in turn gave rise to the structuralist movement inFrance, also called French structuralism, influencing the thinking of other writers, most of whom disavowed themselves as being a part of this movement. This included such writers asLouis AlthusserandpsychoanalystJacques Lacan, as well as thestructural MarxismofNicos Poulantzas.Roland BarthesandJacques Derridafocused on how structuralism could be applied toliterature.
The origins of structuralism are connected with the work of Ferdinand de Saussure on linguistics, along with the linguistics of the Prague and Moscow schools. In brief, Saussure's structural linguistics propounded three related concepts.[2][3]
Structuralism rejected the concept of human freedom and choice, focusing instead on the way that human experience and behaviour are determined by various structures. The most important initial work on this score was Lévi-Strauss's 1949 volume The Elementary Structures of Kinship. Lévi-Strauss had known Roman Jakobson during their time together at the New School in New York during WWII and was influenced both by Jakobson's structuralism and by the American anthropological tradition.
In Elementary Structures, he examined kinship systems from a structural point of view and demonstrated how apparently different social organizations were different permutations of a few basic kinship structures. In 1958, he published Structural Anthropology, a collection of essays outlining his program for structuralism.
Blending Freud and Saussure, French (post)structuralist Jacques Lacan applied structuralism to psychoanalysis. Similarly, Jean Piaget applied structuralism to the study of psychology, though in a different way. Piaget, who would better define himself as constructivist, considered structuralism as "a method and not a doctrine," because, for him, "there exists no structure without a construction, abstract or genetic."[5]
Proponents of structuralism argue that a specific domain of culture may be understood by means of a structure that is modelled on language and is distinct both from the organizations of reality and those of ideas, or the imagination—the "third order."[6]In Lacan'spsychoanalytictheory, for example, the structural order of "the Symbolic" is distinguished both from "the Real" and "the Imaginary;" similarly, in Althusser'sMarxisttheory, the structural order of thecapitalist mode of productionis distinct both from the actual, real agents involved in its relations and from theideologicalforms in which those relations are understood.
Although French theorist Louis Althusser is often associated with structural social analysis, which helped give rise to "structural Marxism," such association was contested by Althusser himself in the Italian foreword to the second edition of Reading Capital. In this foreword Althusser states the following:
Despite the precautions we took to distinguish ourselves from the 'structuralist' ideology…, despite the decisive intervention of categories foreign to 'structuralism'…, the terminology we employed was too close in many respects to the 'structuralist' terminology not to give rise to an ambiguity. With a very few exceptions…our interpretation of Marx has generally been recognized and judged, in homage to the current fashion, as 'structuralist'.… We believe that despite the terminological ambiguity, the profound tendency of our texts was not attached to the 'structuralist' ideology.[7]
In a later development, feminist theorist Alison Assiter enumerated four ideas common to the various forms of structuralism.[8]
In Ferdinand de Saussure's Course in General Linguistics, the analysis focuses not on the use of language (parole, 'speech'), but rather on the underlying system of language (langue). This approach examines how the elements of language relate to each other in the present, synchronically rather than diachronically. Saussure argued that linguistic signs were composed of two parts: the signifier (the sign's sound pattern) and the signified (the concept the sign expresses).
This differed from previous approaches that focused on the relationship between words and the things in the world that they designate.[9]
Although not fully developed by Saussure, other key notions in structural linguistics can be found in the idea of the paradigm. A paradigm is a class of linguistic units (lexemes, morphemes, or even constructions) that are possible in a certain position in a given syntagm, or linguistic environment (such as a given sentence). The different functional role of each of these members of the paradigm is called 'value' (French: valeur).
In France, Antoine Meillet and Émile Benveniste continued Saussure's project, and members of the Prague school of linguistics such as Roman Jakobson and Nikolai Trubetzkoy conducted influential research. The clearest and most important example of Prague school structuralism lies in phonemics. Rather than simply compiling a list of which sounds occur in a language, the Prague school examined how they were related. They determined that the inventory of sounds in a language could be analysed as a series of contrasts.
Thus, in English, the sounds /p/ and /b/ represent distinct phonemes because there are cases (minimal pairs) where the contrast between the two is the only difference between two distinct words (e.g. 'pat' and 'bat'). Analyzing sounds in terms of contrastive features also opens up comparative scope; for instance, it makes clear that the difficulty Japanese speakers have differentiating /r/ and /l/ in English and other languages arises because these sounds are not contrastive in Japanese. Phonology would become the paradigmatic basis for structuralism in a number of different fields.
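As a rough computational analogue of this kind of contrastive analysis (a sketch of the general idea rather than any method described here; the tiny lexicon and its transcriptions are invented for illustration), one can scan a phonemically transcribed word list for minimal pairs:

```python
from itertools import combinations

# Toy phonemic transcriptions (a hypothetical mini-lexicon for illustration).
lexicon = {
    "pat": ("p", "æ", "t"),
    "bat": ("b", "æ", "t"),
    "pad": ("p", "æ", "d"),
    "bad": ("b", "æ", "d"),
}

def minimal_pairs(lexicon):
    """Yield word pairs whose transcriptions differ in exactly one segment."""
    for (w1, t1), (w2, t2) in combinations(lexicon.items(), 2):
        if len(t1) == len(t2) and sum(a != b for a, b in zip(t1, t2)) == 1:
            yield w1, w2

for w1, w2 in minimal_pairs(lexicon):
    print(w1, "/", w2)   # e.g. "pat / bat" shows that /p/ and /b/ are contrastive
```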
Based on the Prague school concept, André Martinet in France, J. R. Firth in the UK and Louis Hjelmslev in Denmark developed their own versions of structural and functional linguistics.
According to structural theory in anthropology and social anthropology, meaning is produced and reproduced within a culture through various practices, phenomena, and activities that serve as systems of signification.
A structuralist approach may study activities as diverse as food-preparation and serving rituals, religious rites, games, literary and non-literary texts, and other forms of entertainment to discover the deep structures by which meaning is produced and reproduced within the culture. For example, in the 1950s Lévi-Strauss analysed cultural phenomena including mythology, kinship (the alliance theory and the incest taboo), and food preparation. In addition to these studies, he produced more linguistically focused writings in which he applied Saussure's distinction between langue and parole in his search for the fundamental structures of the human mind, arguing that the structures that form the "deep grammar" of society originate in the mind and operate in people unconsciously. Lévi-Strauss took inspiration from mathematics.[10]
Another concept used in structural anthropology came from the Prague school of linguistics, where Roman Jakobson and others analysed sounds based on the presence or absence of certain features (e.g., voiceless vs. voiced). Lévi-Strauss included this in his conceptualization of the universal structures of the mind, which he held to operate based on pairs of binary oppositions such as hot-cold, male-female, culture-nature, cooked-raw, or marriageable vs. tabooed women.
A third influence came from Marcel Mauss (1872–1950), who had written on gift-exchange systems. Based on Mauss, for instance, Lévi-Strauss argued for an alliance theory—that kinship systems are based on the exchange of women between groups—as opposed to the 'descent'-based theory described by Edward Evans-Pritchard and Meyer Fortes. After Lévi-Strauss succeeded Mauss in his chair at the Ecole Pratique des Hautes Etudes, his writings became widely popular in the 1960s and 1970s and gave rise to the term "structuralism" itself.
In Britain, authors such asRodney NeedhamandEdmund Leachwere highly influenced by structuralism. Authors such asMaurice Godelierand Emmanuel Terray combinedMarxismwith structural anthropology in France. In the United States, authors such asMarshall SahlinsandJames Boonbuilt on structuralism to provide their own analysis of human society. Structural anthropology fell out of favour in the early 1980s for a number of reasons. D'Andrade suggests that this was because it made unverifiable assumptions about the universal structures of the human mind. Authors such asEric Wolfargued thatpolitical economyandcolonialismshould be at the forefront of anthropology. More generally, criticisms of structuralism byPierre Bourdieuled to a concern with how cultural and social structures were changed by human agency and practice, a trend whichSherry Ortnerhas referred to as 'practice theory'.
One example is Douglas E. Foley's Learning Capitalist Culture (2010), in which he applied a mixture of structural and Marxist theories to his ethnographic fieldwork among high school students in Texas. Foley analyzed how they reached a shared goal through the lens of social solidarity when he observed "Mexicanos" and "Anglo-Americans" come together on the same football team to defeat the school's rivals.[11]: 36–7 However, he also continually applied a Marxist lens and states that he "wanted to wow peers with a new cultural marxist theory of schooling."[11]: 176
Some anthropological theorists, however, while finding considerable fault with Lévi-Strauss's version of structuralism, did not turn away from a fundamental structural basis for human culture. TheBiogenetic Structuralismgroup for instance argued that some kind of structural foundation for culture must exist because all humans inherit the same system of brain structures. They proposed a kind ofneuroanthropologywhich would lay the foundations for a more complete scientific account of cultural similarity and variation by requiring an integration ofcultural anthropologyandneuroscience—a program that theorists such asVictor Turneralso embraced.
In literary theory, structuralist criticism relates literary texts to a larger structure, which may be a particular genre, a range of intertextual connections (such as patterns of metaphor[12]), a model of a universal narrative structure, or a system of recurrent patterns or motifs.[13][14]
The field of structuralist semiotics argues that there must be a structure in every text, which explains why it is easier for experienced readers than for non-experienced readers to interpret a text.[15] Everything that is written seems to be governed by rules, or "grammar of literature", that one learns in educational institutions and that are to be unmasked.[16]
A potential problem for a structuralist interpretation is that it can be highly reductive; as scholarCatherine Belseyputs it: "the structuralist danger of collapsing all difference."[17]An example of such a reading might be if a student concludes the authors ofWest Side Storydid not write anything "really" new, because their work has the same structure as Shakespeare'sRomeo and Juliet. In both texts a girl and a boy fall in love (a "formula" with a symbolic operator between them would be "Boy+Girl") despite the fact that they belong to two groups that hate each other ("Boy's Group-Girl's Group" or "Opposing forces") and conflict is resolved by their deaths. Structuralist readings focus on how the structures of the single text resolve inherent narrative tensions. If a structuralist reading focuses on multiple texts, there must be some way in which those texts unify themselves into a coherent system. The versatility of structuralism is such that a literary critic could make the same claim about a story of twofriendlyfamilies ("Boy's Family+Girl's Family") that arrange a marriage between their children despite the fact that the children hate each other ("Boy-Girl") and then the children commit suicide to escape the arranged marriage; the justification is that the second story's structure is an 'inversion' of the first story's structure: the relationship between the values of love and the two pairs of parties involved have been reversed.
Structuralist literary criticism argues that the "literary banter of a text" can lie only in new structure, rather than in the specifics of character development and voice in which that structure is expressed. Literary structuralism often follows the lead of Vladimir Propp, Algirdas Julien Greimas, and Claude Lévi-Strauss in seeking out basic deep elements in stories, myths, and more recently, anecdotes, which are combined in various ways to produce the many versions of the ur-story or ur-myth.
There is considerable similarity between structural literary theory and Northrop Frye's archetypal criticism, which is also indebted to the anthropological study of myths. Some critics have also tried to apply the theory to individual works, but the effort to find unique structures in individual literary works runs counter to the structuralist program and has an affinity with New Criticism.
Justin Yifu Lin criticizes early structural economic systems and theories, discussing their failures. He writes:
"The structuralism believes that the failure to develop advanced capital-intensive industries spontaneously in a developing country is due to market failures caused by various structural rigidities..." "According to neoliberalism, the main reason for the failure of developing countries to catch up with developed countries was too much state intervention in the market, causing misallocation of resources, rent seeking and so forth."
Rather, he argues, these failures are centered on the unlikelihood that such advanced industries could develop so quickly within developing countries.[18]
New structural economics is an economic development strategy developed by World Bank Chief Economist Justin Yifu Lin. The strategy combines ideas from both neoclassical economics and structural economics.
NSE studies two parts: the base and the superstructure. A base is a combination of forces and relations of production, consisting of, but not limited to, industry and technology, while the superstructure consists of hard infrastructure and institutions. This results in an explanation of how the base impacts the superstructure, which then determines transaction costs.[19]
Structuralism is less popular today than other approaches, such as post-structuralism and deconstruction. Structuralism has often been criticized for being ahistorical and for favouring deterministic structural forces over the ability of people to act. As the political turbulence of the 1960s and 1970s (particularly the student uprisings of May 1968) began affecting academia, issues of power and political struggle moved to the center of public attention.[20]
In the 1980s, deconstruction—and its emphasis on the fundamental ambiguity of language rather than its logical structure—became popular. By the end of the century, structuralism was seen as a historically important school of thought, but the movements that it spawned, rather than structuralism itself, commanded attention.[21]
Several social theorists and academics have strongly criticized structuralism or even dismissed it. French hermeneutic philosopher Paul Ricœur (1969) criticized Lévi-Strauss for overstepping the limits of validity of the structuralist approach, ending up in what Ricœur described as "a Kantianism without a transcendental subject."[22]
Anthropologist Adam Kuper (1973) argued that:[23]
'Structuralism' came to have something of the momentum of a millennial movement and some of its adherents felt that they formed a secret society of the seeing in a world of the blind. Conversion was not just a matter of accepting a new paradigm. It was, almost, a question of salvation.
Philip Noel Pettit (1975) called for an abandoning of "the positivist dream which Lévi-Strauss dreamed for semiology," arguing that semiology is not to be placed among the natural sciences.[24] Cornelius Castoriadis (1975) criticized structuralism as failing to explain symbolic mediation in the social world;[25] he viewed structuralism as a variation on the "logicist" theme, arguing that, contrary to what structuralists advocate, language—and symbolic systems in general—cannot be reduced to logical organizations on the basis of the binary logic of oppositions.[26]
Critical theorist Jürgen Habermas (1985) accused structuralists like Foucault of being positivists; Foucault, while not an ordinary positivist per se, paradoxically uses the tools of science to criticize science, according to Habermas.[27] (See Performative contradiction and Foucault–Habermas debate.) Sociologist Anthony Giddens (1993) is another notable critic; while Giddens draws on a range of structuralist themes in his theorizing, he dismisses the structuralist view that the reproduction of social systems is merely "a mechanical outcome."[28]
|
https://en.wikipedia.org/wiki/Structuralism
|
A writing system comprises a set of symbols, called a script, as well as the rules by which the script represents a particular language. The earliest writing appeared during the late 4th millennium BC. Throughout history, each independently invented writing system gradually emerged from a system of proto-writing, where a small number of ideographs were used in a manner incapable of fully encoding language, and thus lacking the ability to express a broad range of ideas.
Writing systems are generally classified according to how their symbols, called graphemes, relate to units of language. Phonetic writing systems – which include alphabets and syllabaries – use graphemes that correspond to sounds in the corresponding spoken language. Alphabets use graphemes called letters that generally correspond to spoken phonemes. They are typically divided into three sub-types: pure alphabets use letters to represent both consonant and vowel sounds, abjads generally only use letters representing consonant sounds, and abugidas use letters representing consonant–vowel pairs. Syllabaries use graphemes called syllabograms that represent entire syllables or moras. By contrast, logographic (or morphographic) writing systems use graphemes that represent the units of meaning in a language, such as its words or morphemes. Alphabets typically use fewer than 100 distinct symbols, while syllabaries and logographies may use hundreds or thousands, respectively.
According to most contemporary definitions, writing is a visual and tactile notation representing language. As such, the use of writing by a community presupposes an analysis of the structure of language at some level.[2] The symbols used in writing correspond systematically to functional units of either a spoken or signed language. This definition excludes a broader class of symbolic markings, such as drawings and maps.[a][4] A text is any instance of written material, including transcriptions of spoken material.[5] The act of composing and recording a text is referred to as writing,[6] and the act of viewing and interpreting the text as reading.[7]
The relationship between writing and language more broadly has been the subject of philosophical analysis as early asAristotle(384–322 BC).[8]While the use of language is universal across human societies, writing is not; writing emerged much more recently, and was independently invented in only a handful of locations throughout history. While most spoken languages have not been written, all written languages have been predicated on an existing spoken language.[9]When those with signed languages as their first language read writing associated with a spoken language, this functions as literacy in a second, acquired language.[b][10]A single language (e.g.Hindustani) can be written using multiple writing systems, and a writing system can also represent multiple languages. For example,Chinese charactershave been used to write multiple languages throughout theSinosphere– including theVietnamese languagefrom at least the 13th century, until their replacement with the Latin-basedVietnamese alphabetin the 20th century.[11]
In the first several decades of modern linguistics as a scientific discipline, linguists often characterized writing as merely the technology used to record speech – which was treated as being of paramount importance, for what was seen as the unique potential for its study to further the understanding of human cognition.[12]
While researchers of writing systems generally use some of the same core terminology, precise definitions and interpretations can vary by author, often depending on the theoretical approach being employed.[13]
Agraphemeis the basic functional unit of a writing system. Graphemes are generally defined as minimally significant elements which, when taken together, comprise the set of symbols from which texts may be constructed.[14]All writing systems require a set of defined graphemes, collectively called ascript.[15]The concept of the grapheme is similar to that of thephonemein the study of spoken languages. Likewise, as many sonically distinctphonesmay function as the same phoneme depending on the speaker, dialect, and context, many visually distinctglyphs(orgraphs) may be identified as the same grapheme. These variant glyphs are known as theallographsof a grapheme: For example, the lowercase letter⟨a⟩may be represented by the double-storey|a|and single-storey|ɑ|shapes,[16]or others written in cursive, block, or printed styles.[17]The choice of a particular allograph may be influenced by the medium used, the writing instrument used, the stylistic choice of the writer, the preceding and succeeding graphemes in the text, the time available for writing, the intended audience, and the largely unconscious features of an individual's handwriting.
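The grapheme–allograph distinction can be made concrete with a deliberately tiny, hypothetical sketch; real allography covers handwriting and typeface variation that no fixed table could capture, so the mapping below is illustrative only:

```python
# A toy allograph table: visually distinct glyphs that readers treat as the same grapheme.
ALLOGRAPHS = {
    "a": {"a", "ɑ"},   # double-storey and single-storey lowercase a
    "g": {"g", "ɡ"},   # looptail g and opentail (script) g, U+0261
}

def grapheme_of(glyph):
    """Return the grapheme a glyph realizes, or None if it is not in the toy table."""
    for grapheme, glyphs in ALLOGRAPHS.items():
        if glyph in glyphs:
            return grapheme
    return None

print(grapheme_of("ɑ"))  # -> "a": two different glyphs, one grapheme
```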
Orthography (lit. 'correct writing') refers to the rules and conventions for writing shared by a community, including the ordering of and relationship between graphemes. Particularly for alphabets, orthography includes the concept of spelling. For example, English orthography includes uppercase and lowercase forms for 26 letters of the Latin alphabet (with these graphemes corresponding to various phonemes), punctuation marks (mostly non-phonemic), and other symbols, such as numerals. Writing systems may be regarded as complete if they are able to represent all that may be expressed in the spoken language, while a partial writing system cannot represent the spoken language in its entirety.[18]
In each instance where writing was independently invented, it emerged from systems of proto-writing, though historically most proto-writing systems did not produce writing systems. Proto-writing uses ideographic and mnemonic symbols to communicate, but lacks the capability to fully encode language.
Writing has been invented independently multiple times in human history – first emerging between 3400 and 3200 BC ascuneiform, a system initially used to write theSumerian languagein southern Mesopotamia; it was later adapted to writeAkkadianas its speakers spread throughout the region, with Akkadian writing appearing in significant quantitiesc.2350 BC.[23]Cuneiform was closely followed byEgyptian hieroglyphs. It is generally agreed that the two systems were invented independently from one another; both evolved from proto-writing systems with the earliest coherent texts datedc.2600 BC.Chinese charactersemerged independently in theYellow Rivervalleyc.1200 BC. There is no evidence of contact between China and the literate peoples of the Near East, and the Mesopotamian and Chinese approaches for representing sound and meaning are distinct.[24][25][26]TheMesoamerican writing systems, includingOlmecand theMaya script, were also invented independently.[27]
With each independent invention of writing, the ideographs used in proto-writing were decoupled from the direct representation of ideas, and gradually came to represent words instead. This occurred via application of therebusprinciple, where a symbol was appropriated to represent an additional word that happened to be similar in pronunciation to the word for the idea originally represented by the symbol. This allowed words without concrete visualizations to be represented by symbols for the first time; the gradual shift from ideographic symbols to those wholly representing language took place over centuries, and required the conscious analysis of a given language by those attempting to write it.[28]
Alphabetic writing descends from previous morphographic writing, and first appeared before 2000 BC to write a Semitic language spoken in the Sinai Peninsula. Most of the world's alphabets either descend directly from this Proto-Sinaitic script, or were directly inspired by its design. Descendants include the Phoenician alphabet (c. 1050 BC) and its child, the Greek alphabet (c. 800 BC).[29][30] The Latin alphabet, which descended from the Greek alphabet, is by far the most common script used by writing systems.[31]
Writing systems are most often categorized according to what units of language a system's graphemes correspond to.[32]At the most basic level, writing systems can be either phonographic (lit.'sound writing') when graphemes represent units of sound in a language, or morphographic ('form writing') when graphemes represent units of meaning (such aswordsormorphemes).[33]Depending on the author, the older termlogographic('word writing') is often used, either with the same meaning asmorphographic, or specifically in reference to systems where the basic unit being written is the word. Recent scholarship generally prefersmorphographicoverlogographic, with the latter seen as potentially vague or misleading – in part because systems usually operate on the level of morphemes, not words.[34]
Many classifications define three primary categories, where phonographic systems are subdivided into syllabic and alphabetic (or segmental) systems. Syllabaries use symbols called syllabograms to represent syllables or moras. Alphabets use symbols called letters that correspond to spoken phonemes (or more technically, to diaphonemes). Alphabets are generally classified into three subtypes, with abjads having letters for consonants, pure alphabets having letters for both consonants and vowels, and abugidas having characters that correspond to consonant–vowel pairs.[35] David Diringer proposed a five-fold classification of writing systems, comprising pictographic scripts, ideographic scripts, analytic transitional scripts, phonetic scripts, and alphabetic scripts.[36]
In practice, writing systems are classified according to the primary type of symbols used, and typically include exceptional cases where symbols function differently. For example, logographs found within phonetic systems like English include the ampersand ⟨&⟩ and the numerals ⟨0⟩, ⟨1⟩, etc. – which correspond to specific words (and, zero, one, etc.) and not to the underlying sounds.[32] Most writing systems can be described as mixed systems that feature elements of both phonography and morphography.[37]
A logogram is a character that represents a morpheme within a language. Chinese characters represent the only major logographic writing systems still in use: they have historically been used to write the varieties of Chinese, as well as Japanese, Korean, Vietnamese, and other languages of the Sinosphere. As each character represents a single unit of meaning, thousands are required to write all the words of a language. If the logograms do not adequately represent all meanings and words of a language, written language can be confusing or ambiguous to the reader.[38]
Logograms are sometimes conflated with ideograms, symbols which graphically represent abstract ideas; most linguists now reject this characterization:[39] Chinese characters are often semantic–phonetic compounds, which include a component related to the character's meaning, and a component that gives a hint for its pronunciation.[40]
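A few commonly cited textbook characters can make the semantic–phonetic structure concrete; the sketch below is illustrative only (it records examples as data rather than performing any analysis, and the decompositions shown are the conventional ones):

```python
# Commonly cited semantic-phonetic compounds (illustrative examples only).
# Each entry: character -> (semantic component, phonetic component, pinyin, gloss)
compounds = {
    "妈": ("女", "马", "mā", "mother"),    # 女 'woman' hints at meaning; 马 mǎ hints at sound
    "河": ("氵", "可", "hé", "river"),     # 氵 'water' hints at meaning; 可 kě hints at sound
    "请": ("讠", "青", "qǐng", "to ask"),  # 讠 'speech' hints at meaning; 青 qīng hints at sound
}

for char, (semantic, phonetic, pinyin, gloss) in compounds.items():
    print(f"{char} ({pinyin}, '{gloss}') = {semantic} (meaning) + {phonetic} (sound)")
```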
A syllabary is a set of written symbols (called syllabograms) that represent either syllables or moras – a unit of prosody that is often but not always a syllable in length.[41] Syllabaries are best suited to languages with relatively simple syllable structure, since a different symbol is needed for every syllable. Japanese, for example, contains about 100 moras, which are represented by moraic hiragana. By contrast, English features complex syllable structures with a relatively large inventory of vowels and complex consonant clusters – for a total of 15–16 thousand distinct syllables. Some syllabaries have larger inventories: the Yi script contains 756 different symbols.[42]
An alphabet uses symbols (calledletters) that correspond to the phonemes of a language, e.g. its vowels and consonants. However, these correspondences are rarely uncomplicated, andspellingis often mediated by other factors than just which sounds are used by a speaker.[43]The wordalphabetis derived fromalphaandbeta, the names for the first two letters in theGreek alphabet.[44]Anabjadis an alphabet whose letters only represent the consonantal sounds of a language. They were the first alphabets to develop historically,[45]with most used to writeSemitic languages, and originally deriving from theProto-Sinaitic script. Themorphologyof Semitic languages is particularly suited to this approach, as the denotation of vowels is generally redundant.[46]Optional markings for vowels may be used for some abjads, but are generally limited to applications like education.[47]Many pure alphabets were derived from abjads through the addition of dedicated vowel letters, as with the derivation of the Greek alphabet from the Phoenician alphabetc.800 BC.Abjadis the word for "alphabet" in Arabic, and analogously derives from the traditional order of letters in theArabic alphabet('alif,bā',jīm,dāl).[48]
An abugida is a type of alphabet with symbols corresponding to consonant–vowel pairs, where basic symbols for each consonant are associated with an inherent vowel by default, and other possible vowels for each consonant are indicated via predictable modifications made to the basic symbols.[49] In an abugida, there may be a sign for k with no vowel, but also one for ka (if a is the inherent vowel), and ke is written by modifying the ka sign in a way consistent with how la would be modified to get le. In many abugidas, modification consists of the addition of a vowel sign; other possibilities include rotation of the basic sign, or addition of diacritics.
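As a concrete illustration of the ka/ke pattern just described, the minimal sketch below uses Unicode Devanagari code points (an aside not discussed in the article itself): dependent vowel signs and the virama modify a base consonant letter that otherwise carries its inherent vowel.

```python
# Minimal sketch: abugida composition with Unicode Devanagari code points.
KA           = "\u0915"  # क, consonant letter "ka" (inherent vowel a, nothing written for it)
VOWEL_SIGN_I = "\u093F"  # ि, dependent vowel sign i
VOWEL_SIGN_E = "\u0947"  # े, dependent vowel sign e
VIRAMA       = "\u094D"  # ्, cancels the inherent vowel

print(KA)                  # क  = ka
print(KA + VOWEL_SIGN_I)   # कि = ki (vowel sign modifies the base consonant)
print(KA + VOWEL_SIGN_E)   # के = ke
print(KA + VIRAMA)         # क् = bare k, inherent vowel suppressed
```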
While true syllabaries have one symbol per syllable and no systematic visual similarity, the graphic similarity in most abugidas stems from their origins as abjads – with added symbols comprising markings for different vowels added onto a pre-existing base symbol. The largest single group of abugidas, however, is the Brahmic family of scripts, which includes nearly all the scripts used in India and Southeast Asia. The name abugida was derived by linguist Peter T. Daniels (b. 1951) from the first four characters of an order of the Geʽez script, which is used for certain Nilo-Saharan and Afro-Asiatic languages of Ethiopia and Eritrea.
Originally proposed as a category by Geoffrey Sampson (b. 1944),[51][52] a featural system uses symbols representing sub-phonetic elements – e.g. those traits that can be used to distinguish between and analyse a language's phonemes, such as their voicing or place of articulation. The only prominent example of a featural system is the hangul script used to write Korean, where featural symbols are combined into letters, which are in turn joined into syllabic blocks. Many scholars, including John DeFrancis (1911–2009), reject a characterization of hangul as a featural system – with arguments including that Korean writers do not themselves think in these terms when writing – or question the viability of Sampson's category altogether.[53]
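The syllabic blocks mentioned above are laid out arithmetically in Unicode's precomposed hangul range; the sketch below reproduces that standard composition formula purely as an aside (the featural claim concerns the shapes of the letters themselves, not this encoding):

```python
# Sketch: composing a precomposed hangul syllable block from letter indices,
# using the standard Unicode hangul composition arithmetic.
S_BASE  = 0xAC00  # first precomposed syllable, 가
V_COUNT = 21      # number of vowel letters (jungseong)
T_COUNT = 28      # number of final-consonant slots (jongseong), including "none"

def compose(l_index, v_index, t_index=0):
    """Compose a syllable block from leading consonant, vowel, and optional final indices."""
    return chr(S_BASE + (l_index * V_COUNT + v_index) * T_COUNT + t_index)

# ㅎ is leading index 18, ㅏ is vowel index 0, ㄴ is final index 4 -> 한 ("han")
print(compose(18, 0, 4))  # 한
```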
As hangul was consciously created by literate experts, Daniels characterizes it as a "sophisticatedgrammatogeny"[54]– a writing system intentionally designed for a specific purpose, as opposed to having evolved gradually over time. Other featural grammatogenies includeshorthandsdeveloped by professionals andconstructed scriptscreated by hobbyists and creatives, like theTengwarscript designed byJ. R. R. Tolkiento write the Elven languages he also constructed. Many of these feature advanced graphic designs corresponding to phonological properties. The basic unit of writing in these systems can map to anything from phonemes to words. It has been shown that even the Latin script has sub-character features.[55]
All writing is linear in the broadest sense – i.e., the spatial arrangement of symbols indicates the order in which they should be read.[56]On a more granular level, systems with discontinuous marks likediacriticscan be characterized as less linear than those without.[57]In the initial historical distinction,linearwriting systems (e.g. the Phoenician alphabet) generally form glyphs as a series of connected lines or strokes, while systems that generally use discrete, more pictorial marks (e.g. cuneiform) are sometimes termednon-linear. The historical abstraction of logographs into phonographs is often associated with a linearization of the script.[58]
InBraille, raised bumps on the writingsubstrateare used to encode non-linear symbols. The original system – whichLouis Braille(1809–1852) invented in order to allow people withvisual impairmentsto read and write – used characters that corresponded to the letters of the Latin alphabet.[59]Moreover, that Braille is equivalent to visual writing systems in function demonstrates that the phenomenon of writing is fundamentally spatial in nature, not merely visual.[60]
Writing systems may be characterized by how text is graphically divided into lines, which are to be read in sequence:[61]
In left-to-right scripts (LTR), horizontal rows are sequenced from top to bottom on a page, with each row read from left to right. Right-to-left scripts (RTL), which use the opposite directionality, include theArabic alphabet.[62]
Egyptian hieroglyphs were written either left-to-right or right-to-left, with the animal and human glyphs turned to face the beginning of the line. The early alphabet did not have a fixed direction, and was written both vertically and horizontally; it was most commonly writtenboustrophedonically: starting in one horizontal direction, then turning at the end of the line and reversing direction.[63]
The right-to-left direction of the Phoenician alphabet initially stabilized after c. 800 BC.[64] Left-to-right writing has the advantage that, since most people are right-handed,[65] the writing hand stays to the right of the pen's trail and does not smudge freshly inked text that may not yet have dried. The Greek alphabet and its successors settled on a left-to-right pattern, from the top to the bottom of the page. Other scripts, such as Arabic and Hebrew, came to be written right to left. Scripts that historically incorporate Chinese characters have traditionally been written vertically in columns arranged from right to left, while a horizontal direction from left to right was only widely adopted in the 20th century due to Western influence.[66]
Several scripts used in the Philippines and Indonesia, such asHanunoo, are traditionally written with lines moving away from the writer, from bottom to top, but are read left to right;[67]oghamis written from bottom to top, commonly on the corner of a stone.[68]The ancientLibyco-Berber alphabetwas also written from bottom to top.[69]
|
https://en.wikipedia.org/wiki/Writing_system
|
Awritten languageis the representation of alanguageby means ofwriting. This involves the use of visual symbols, known asgraphemes, to represent linguistic units such asphonemes,syllables,morphemes, orwords. However, written language is not merelyspokenorsigned languagewritten down, though it can approximate that. Instead, it is a separate system with its own norms, structures, and stylistic conventions, and it often evolves differently than its corresponding spoken or signed language.
Written languages serve as crucial tools for communication, enabling the recording, preservation, and transmission of information, ideas, and culture across time and space. Theorthographyof a written language comprises the norms by which it is expected to function, including rules regarding spelling and typography. A society's use of written language generally has a profound impact on its social organization, cultural identity, and technological profile.
Writing,speech, andsigningare three distinct modalities oflanguage; each has unique characteristics and conventions.[2]When discussing properties common to the modes of language, the individual speaking, signing, or writing will be referred to as thesender, and the individual listening, viewing, or reading as thereceiver; senders and receivers together will be collectively termedagents. The spoken, signed, and written modes of language mutually influence one another, with the boundaries between conventions for each being fluid—particularly in informal written contexts like taking quick notes or posting on social media.[3]
Spoken and signed language is typically more immediate, reflecting the local context of the conversation and the emotions of the agents, often via paralinguistic cues like body language. Utterances are typically less premeditated, and are more likely to feature informal vocabulary and shorter sentences.[4] They are also primarily used in dialogue, and as such include elements that facilitate turn-taking; these include prosodic features such as trailing off and fillers that indicate the sender has not yet finished their turn. Errors encountered in spoken and signed language include disfluencies and hesitation.[5]
By contrast, written language is typically more structured and formal. While speech and signing are transient, writing is permanent. It allows for planning, revision, and editing, which can lead to more complex sentences and a more extensive vocabulary. Written language also has to convey meaning without the aid of tone of voice, facial expressions, or body language, which often results in more explicit and detailed descriptions.[6]
While a speaker can typically be identified by the quality of their voice, the author of a written text is often not obvious to a reader only analyzing the text itself. Writers may nevertheless indicate their identity via the graphical characteristics of theirhandwriting.[7]
Written languages generally change more slowly than their spoken or signed counterparts. As a result, the written form of a language may retain archaic features or spellings that no longer reflect contemporary speech.[8]Over time, this divergence may contribute to a dynamic of diglossia.
The grammatical differences between the modes are too numerous to cover exhaustively, but a sample follows. In terms of clause types, written language is predominantly declarative (e.g. It's red.) and typically contains fewer imperatives (e.g. Make it red.), interrogatives (e.g. Is it red?), and exclamatives (e.g. How red it is!) than spoken or signed language. Noun phrases are predominantly third person in all modes, but even more so in written language. Verb phrases in spoken English are more likely to be in simple aspect than in perfect or progressive aspect, and almost all past perfect verbs appear in written fiction.[9]
Information packaging refers to the linear order in which information is presented within a sentence. For example, On the hill, there was a tree has a different informational structure than There was a tree on the hill. While the second structure is more common in English overall, the first is relatively much more common in written language than in spoken language. Another example is that a construction like it was difficult to follow him is relatively more common in written language than in spoken language, compared with the alternative packaging to follow him was difficult.[10] A final example, again from English, is that the passive voice is relatively more common in writing than in speaking.[11]
Written language typically has higherlexical densitythan spoken or signed language, meaning there is a wider range of vocabulary used and individual words are less likely to be repeated. It also includes fewer first and second-person pronouns and fewer interjections. Written English has fewer verbs and more nouns than spoken English, but even accounting for that, verbs likethink,say,know, andguessappear relatively less commonly with a content clause complement (e.g.I thinkthat it's OK.) in written English than in spoken English.[12]
Writing developed independently in a handful of different locations, namelyMesopotamiaand Egypt (c.3200– c.3100 BCE),China(c.1250 BCE), andMesoamerica(c.1 CE).[13]Scholars mark the difference betweenprehistoryandhistorywith the invention of the first written language.[14]The first writing can be dated back to theNeolithicera, with clay tablets being used to keep track of livestock and commodities. The first example of written language can be dated toUruk, at the end of the 4th millennium BCE.[15]An ancient Mesopotamian poem tells a tale about the invention of writing:
Because the messenger's mouth was heavy and he couldn't repeat, the Lord of Kulaba patted some clay and put words on it, like a tablet. Until then, there had been no putting words on clay.
The origins of written language are tied to the development of human civilization. The earliest forms of writing were born out of the necessity to record commerce, historical events, and cultural traditions.[17] The first known true writing systems were developed during the early Bronze Age (late 4th millennium BCE) in ancient Sumer, present-day southern Iraq. This system, known as cuneiform, was pictographic at first, but later evolved into a script of wedge-shaped signs that could also represent language phonemically.[18]
At roughly the same time, the system ofEgyptian hieroglyphswas developing in theNilevalley, also evolving from pictographic proto-writing to include phonemic elements.[19]TheIndus Valley civilizationdeveloped a form of writing known as theIndus scriptc.2600 BCE, although its precise nature remains undeciphered.[20]TheChinese script, one of the oldest continuously used writing systems in the world, originated around the late 2nd millennium BCE, evolving fromoracle bone scriptused fordivinationpurposes.[21]
The development and use of written language has had profound impacts on human societies, influencing everything from social organization and cultural identity to technology and the dissemination of knowledge.[15]Plato(c.427– 348 BCE), through the voice ofSocrates, expressed concerns in the dialogue "Phaedrus" that a reliance on writing would weaken one's ability to memorize and understand, as written words would "create forgetfulness in the learners' souls, because they will not use their memories". He further argued that written words, being unable to answer questions or clarify themselves, are inferior to the living, interactive discourse of oral communication.[22]
Written language facilitates the preservation and transmission of culture, history, and knowledge across time and space, allowing societies to develop complex systems of law, administration, and education.[16][page needed]For example, the invention of writing in ancient Mesopotamia enabled the creation of detailed legal codes, like theCode of Hammurabi.[14][page needed]The advent of digital technology has revolutionized written communication, leading to the emergence of new written genres and conventions, such as interactions viasocial media. This has implications for social relationships, education, and professional communication.[23][page needed]
Literacyis the ability to read and write. From a graphemic perspective, this ability requires the capability of correctly recognizing or reproducing graphemes, the smallest units of written language. Literacy is a key driver ofsocial mobility. Firstly, it underpins success in formal education, where the ability to comprehend textbooks, write essays, and interact with written instructional materials is fundamental. High literacy skills can lead to better academic performance, opening doors to higher education and specialized training opportunities.[24][better source needed]
In the job market, proficiency in written language is often a determinant of employment opportunities. Many professions require a high level of literacy, from drafting reports and proposals to interpreting technical manuals. The ability to effectively use written language can lead to higher paying jobs and upward career progression.[25][better source needed]
Literacy enables additional ways for individuals to participate in civic life, including understanding news articles and political debates to navigating legal documents.[26][better source needed]However, disparities in literacy rates and proficiency with written language can contribute tosocial inequalities. Socio-economic status, race, gender, and geographic location can all influence an individual's access to quality literacy instruction. Addressing these disparities through inclusive and equitable education policies is crucial for promoting social mobility and reducing inequality.[27]
The Canadian philosopherMarshall McLuhan(1911–1980) primarily presented his ideas about written language inThe Gutenberg Galaxy(1962). Therein, McLuhan argued that the invention and spread of theprinting press, and the shift fromoral traditionto written culture that it spurred, fundamentally changed the nature of human society. This change, he suggested, led to the rise ofindividualism,nationalism, and other aspects of modernity.[28]
McLuhan proposed that written language, especially as reproduced in large quantities by the printing press, contributed to a linear and sequential mode of thinking, as opposed to the more holistic and contextual thinking fostered by oral cultures. He associated this linear mode of thought with a shift towards more detached and objective forms of reasoning, which he saw as characteristic of the modern age. Furthermore, he theorized about the effects of different media on human consciousness and society. He famously asserted that "the medium is the message", meaning that the form of a medium embeds itself in any message it would transmit or convey, creating a symbiotic relationship by which the medium influences how the message is perceived.
While McLuhan's ideas are influential, they have also been critiqued and debated. Some scholars argue that he overemphasized the role of the medium (in this case, written language) at the expense of the content of communication.[29]It has also been suggested that his theories are overly deterministic, not sufficiently accounting for the ways in which people can use and interpret media in varied ways.[30]
Diglossia is a sociolinguistic phenomenon where two distinct varieties of a language – often one spoken and one written – are used by a single language community in different social contexts.[31]
The "high variety", often the written language, is used in formal contexts, such as literature, formal education, or official communications. This variety tends to be more standardized and conservative, and may incorporate older or more formal vocabulary and grammar.[32]The "low variety", often the spoken language, is used in everyday conversation and informal contexts. It is typically more dynamic and innovative, and may incorporate regional dialects, slang, and other informal language features.[33]
Diglossic situations are common in many parts of the world, including theArab world, where the highModern Standard Arabicvariety coexists with other, lowvarieties of Arabiclocal to specific regions.[34]Diglossia can have significant implications for language education, literacy, and sociolinguistic dynamics within a language community.[35]
Analogously,digraphiaoccurs when a language may be written in different scripts. For example,Serbianmay be written using either theCyrillicorLatin script, whileHindustanimay be written inDevanagarior theUrdu alphabet.[36]
Writing systems can be broadly classified into several types based on the units of language they correspond with, namely logographic, syllabic, and alphabetic.[37] They are distinct from phonetic transcriptions with technical applications, which are not used as writing as such. For example, notation systems for signed languages like SignWriting have been developed,[38] but it is not universally agreed that these constitute a written form of the sign language in themselves.[39]
Orthography comprises the rules and conventions for writing a given language,[40]including how its graphemes are understood to correspond with speech. In some orthographies, there is a one-to-one correspondence between phonemes and graphemes, as inSerbianandFinnish.[41]These are known asshallow orthographies. In contrast, orthographies like that of English and French are considereddeep orthographiesdue to the complex relationships between sounds and symbols.[42]For instance, in English, the phoneme/f/can be represented by the graphemes⟨f⟩as in⟨fish⟩,⟨ph⟩as in⟨phone⟩, or⟨gh⟩as in⟨enough⟩.[43]
Orthographies also include rules about punctuation, capitalization, word breaks, and emphasis. They may also include specific conventions for representing foreign words and names, and for handling spelling changes to reflect changes in pronunciation or meaning over time.[44]
|
https://en.wikipedia.org/wiki/Written_language
|
Grafikwas a specialist London-based magazine ongraphic designandvisual culture.
Grafik lasted nearly a quarter century as an independently published magazine. It started life as Hot Graphics International in the mid 1980s during the ‘Digital Revolution’. With help from Meta and a new editor, Tim Rich, it was transformed into the monthly Graphics International. In 2001 Caroline Roberts took over the role of editor, and in July 2003 the magazine was radically rebranded and redesigned by the London-based graphic design agency MadeThought and renamed Grafik. In 2009 Caroline Roberts became publisher and editor-in-chief, Angharad Lewis became editor, and the magazine was redesigned[1] by Swedish graphic designer and art director Matilda Saxow.[2]
In June 2010 the company which publishedGrafik, Adventures in Publishing Ltd., went into administration and was eventually liquidated; as a result, the magazine ceased publication for an eight-month period.
Editor-in-chief Caroline Roberts and Editor Angharad Lewis secured the rights to the name and assets ofGrafik, and in October 2010 it was announced that the magazine would relaunch at the beginning of 2011.[3]
The magazine was relaunched in February 2011 with a new publisher, Pyramyd, new designer, Michael Bojkowski and a revised editorial format created by a London-based agency called Woodbridge & Rees, formed by the previous editors. In December 2011, Pyramyd decided to cease publication of the magazine.
Grafikreemerged as a website in March 2014 under the editorial direction of former editors Caroline Roberts and Angharad Lewis, with a new editorial team and published in London by Protein.
In 2018 William Rowe (director of Protein) teamed up with Marcroy Smith (director of People of Print Ltd) to form a new company, Grafik Media Ltd, aimed at bringing the magazine back as a printed publication and building it into a global brand. The first step was creating Grafik Editions, in partnership with The PrintSpace in Shoreditch, London, to produce limited-edition archival prints by leading visual artists and graphic designers. They then moved into the production of Grafik-branded items and launched Grafik Projects, both to showcase exceptional graphic design work across their online publishing platforms and to raise funds towards the resurrection of Grafik magazine. They hope to launch the new magazine in mid-2022.
The magazine focuses on contemporary graphic design and international visual culture. Regular features include reviews of notable design events and exhibitions, showcases of emerging and established talent, critical viewpoints and special reports, often covering a piece ofdesign historywith a particular relevance for today. There are also regular features on logoforms andletterforms, and copious book reviews of both graphic design books and those of a more general interest to the creative community.
|
https://en.wikipedia.org/wiki/Grafik_(magazine)
|
Scholastic Corporationis an American multinational publishing, education, and media company that publishes and distributes books, comics, and educational materials for schools, teachers, parents, children, and other educational institutions. Products are distributed via retail and online sales and through schools viareading clubsand book fairs.Clifford the Big Red Dog, a character created byNorman Bridwellin 1963, is the mascot of Scholastic.
Scholastic was founded in 1920 by Maurice R. Robinson near Pittsburgh, Pennsylvania to be a publisher of youth magazines. The first publication was The Western Pennsylvania Scholastic. It covered high school sports and social activities; the four-page magazine debuted on October 22, 1920, and was distributed in 50 high schools.[3] More magazines followed for Scholastic Magazines.[3][4] In 1948, Scholastic entered the book club business.[5] In the 1960s, Scholastic added international publishing locations in England (1964), New Zealand (1964), and Sydney (1968).[6] Also in the 1960s, Scholastic entered the book publishing business. In the 1970s, Scholastic created its TV entertainment division.[3] From 1975 until his death in 2021, Richard Robinson, son of the corporation's founder, was CEO and president.[7] Scholastic began trading on NASDAQ on May 12, 1987. In 2000, Scholastic purchased Grolier for US$400 million.[8][9] Scholastic became involved in a video collection in 2001. In February 2012, Scholastic bought Weekly Reader Publishing from Reader's Digest Association, and announced in July 2012 that it planned to discontinue separate issues of Weekly Reader magazines after more than a century of publication, co-branding the magazines as Scholastic News/Weekly Reader.[10] Scholastic sold READ 180 to Houghton Mifflin Harcourt in 2015. In December 2015, Scholastic launched the Scholastic Reads Podcasts. On October 22, 2020, Scholastic celebrated its 100th anniversary.
The business has three segments: Children's Book Publishing and Distribution, Education Solutions, and International. Scholastic holds the perpetual US publishing rights to theHarry PotterandHunger Gamesbook series.[13][14]Scholastic is the world's largest publisher and distributor of children's books and print and digital educational materials for pre-K to grade 12.[15]In addition toHarry PotterandThe Hunger Games, Scholastic is known for its school book clubs and book fairs, classroom magazines such asScholastic NewsandScience World, and popular book series:Clifford the Big Red Dog,The Magic School Bus,Goosebumps,Horrible Histories,Captain Underpants,Animorphs,The Baby-Sitters Club, andI Spy. Scholastic also publishes instructional reading and writing programs, and offers professional learning and consultancy services for school improvement.Clifford the Big Red Dogis the official mascot of Scholastic.[16]
Founded in 1923 by Maurice R. Robinson, the Scholastic Art & Writing Awards,[17] administered by the Alliance for Young Artists & Writers, is a competition that recognizes talented young artists and writers from across the United States.[18]
The success and enduring legacy of theScholastic Art & Writing Awardscan be attributed in part to its well-planned and executed marketing initiatives. These efforts have allowed the competition to adapt to the changing times, connect with a wider audience, and continue its mission of nurturing the creative potential of the nation's youth.
In 2005, Scholastic developed FASTT Math with Tom Snyder to help students build proficiency in math skills, specifically multiplication, division, addition, and subtraction, through a series of games and memorization quizzes that gauge the student's progress.[30] In 2013, Scholastic developed System 44 with Houghton Mifflin Harcourt to encourage students' reading skills. In 2011, Scholastic developed READ 180 with Houghton Mifflin Harcourt to help students improve their reading skills. Scholastic Reference publishes reference books.[31][32]
Scholastic Entertainment (formerly Scholastic Productions and Scholastic Media) is a corporate division[33]led byDeborah Fortesince 1995.
It covers "all forms of media and consumer products, and is comprised of four main groups – Productions, Marketing & Consumer Products, Interactive, and Audio." Weston Woods is its production studio, acquired in 1996, as was Soup2Nuts (best known for Dr. Katz, Professional Therapist, Science Court and Home Movies) from 2001 to 2015 before shutting down.[34] Scholastic has produced audiobooks such as the Caldecott/Newbery Collection,[35] and has been involved with several television programs and feature films based on its books. In 1985, Scholastic Productions teamed up with Karl-Lorimar Home Video, a home video unit of Lorimar Productions, to form the Scholastic-Lorimar Home Video line, under which Scholastic produced made-for-video programming that became a best-selling video line for kids. After that pact expired two years later, Scholastic teamed up with Family Home Entertainment, a leading independent family video distributor and a label of International Video Entertainment, to distribute its made-for-video programming for the next three years.[36]
Scholastic Book Fairs began in 1981. Scholastic provides book fair products to schools, which then conduct the book fairs. Schools can elect to receive books, supplies and equipment or a portion of the proceeds from the book fair.[37]
In the United States, during fiscal 2024, revenue from the book fairs channel ($541.6 million) accounted for more than half of the company's revenue in the "Total Children's Book Publishing and Distribution" segment ($955.2 million),[38]and schools earned over $200 million in proceeds in cash and incentive credits.[39]
In October 2023, Scholastic created a separate category for books dealing with "race, LGBTQ and other issues related to diversity", allowing schools to opt out of carrying these types of books. Scholastic defended the move, citing legislation in multiple states seeking toban booksdealing withLGBTQissues orrace.[40]After public backlash from educators, authors, andfree speechadvocacy groups, Scholastic reversed course, saying the new category will be discontinued, writing: "It is unsettling that the current divisive landscape in the U.S. is creating an environment that could deny any child access to books, or that teachers could be penalized for creating access to all stories for their students".[41][42]
Scholastic Book Fairs have been criticized for spurring unnecessary purchases, highlighting economic inequality among students, and disruption of school activities and facilities.[43][44]
Scholasticbook clubsare offered at schools in many countries. Typically, teachers administer the program to the students in their own classes, but in some cases, the program is administered by a central contact for the entire school. Within Scholastic, Reading Clubs is a separate unit (compared to, e.g., Education). Reading clubs are arranged by age/grade.[45]Book club operators receive "Classroom Funds" redeemable only for Scholastic Corporation products.[46][47][48]
In January 2025, claims of a data breach affecting Scholastic came from a group calling themselves Puppygirl Hacker Polycule.[49]The breach affected an estimated 8 million customers consisting of names, email addresses, phone numbers, and home addresses. The breach was provided toHave I Been Pwned?in an effort to inform customers.[50]
|
https://en.wikipedia.org/wiki/Scholastic_Corporation
|
-ism(/-ˌɪzəm/) is asuffixin manyEnglish words, originally derived from theAncient Greeksuffix-ισμός(-ismós), and reachedEnglishthrough theLatin-ismus, and theFrench-isme.[1]It is used to create abstract nouns of action, state, condition, or doctrine, and is often used to describephilosophies,theories,religions,social movements,artistic movements,lifestyles,[2]behaviors,scientific phenomena,[3]ormedical conditions.[4][5]
The concept of an -ism may resemble that of agrand narrative.[6]
Skeptics of any given -isms can quote the dictum attributed toEisenhower: "All -isms are wasms".[7]
The first recorded usage of the suffixismas a separate word in its own right was in 1680. By the nineteenth century it was being used byThomas Carlyleto signify a pre-packagedideology. It was later used in this sense by such writers asJulian HuxleyandGeorge Bernard Shaw. In the United States of the mid-nineteenth century, the phrase "the isms" was used as a collective derogatory term to lump together the radical social reform movements of the day (such asslavery abolitionism,feminism,alcohol prohibitionism,Fourierism,pacifism, Technoism, earlysocialism, etc.) and various spiritual or religious movements considered non-mainstream by the standards of the time (such astranscendentalism,spiritualism,Mormonismetc.). Southerners often prided themselves on the American South being free from all of these pernicious "Isms" (except for alcohol temperance campaigning, which was compatible with a traditional Protestant focus on individual morality). So on September 5 and 9, 1856, theExaminernewspaper ofRichmond, Virginia, ran editorials on "Our Enemies, the Isms and their Purposes", while in 1858Parson Brownlowcalled for a "Missionary Society of the South, for the Conversion of the Freedom Shriekers, Spiritualists, Free-lovers, Fourierites, andInfidelReformers of the North" (seeThe Freedom-of-thought Struggle in the Old SouthbyClement Eaton). In the present day, it appears in the title of a standard survey of political thought,Today's Ismsby William Ebenstein, first published in the 1950s, and now in its 11th edition.
In 2004, theOxford English Dictionaryadded two new draft definitions of -isms to reference their relationship to words that convey injustice:[8]
In December 2015,Merriam-Webster Dictionarydeclared -ism to be the Word of the Year.[9]
For examples of the use of -ism as a suffix:
|
https://en.wikipedia.org/wiki/-ism
|
-logyis asuffixin the English language, used with words originally adapted fromAncient Greekending in-λογία(-logía).[1]The earliest English examples were anglicizations of the French-logie, which was in turn inherited from theLatin-logia.[2]The suffix became productive in English from the 18th century, allowing the formation of new terms with no Latin or Greek precedent.
The English suffix has two separate main senses, reflecting two sources of the-λογίαsuffix in Greek:[3]
Philologyis an exception: while its meaning is closer to the first sense, the etymology of the word is similar to the second sense.[8]
In English names for fields of study, the suffix-logyis most frequently found preceded by the euphonic connective voweloso that the word ends in-ology.[9]In these Greek words, therootis always a noun and-o-is thecombining vowelfor all declensions of Greek nouns. However, when new names for fields of study are coined in modern English, the formations ending in-logyalmost always add an-o-, except when the root word ends in an "l" or a vowel, as in these exceptions:[10]analogy,dekalogy,disanalogy,genealogy,genethlialogy,hexalogy;herbalogy(a variant ofherbology),mammalogy,mineralogy,paralogy,petralogy(a variant ofpetrology);elogy;heptalogy;antilogy,festilogy;trilogy,tetralogy,pentalogy;palillogy,pyroballogy;dyslogy;eulogy; andbrachylogy.[7]Linguists sometimes jokingly refer tohaplologyashaplogy(subjecting the wordhaplologyto the process of haplology itself).
Permetonymy, words ending in-logyare sometimes used to describe a subject rather than the study of it (e.g.,technology). This usage is particularly widespread in medicine; for example,pathologyis often used simply to refer to "the disease" itself (e.g., "We haven't found the pathology yet") rather than "the study of a disease".
Books, journals, and treatises about a subject also often bear the name of this subject (e.g., the scientific journalEcology).
When appended to other English words, the suffix can also be used humorously to createnonce words(e.g.,beerologyas "the study of beer"). As with otherclassical compounds, adding the suffix to an initial word-stem derived from Greek orLatinmay be used to lend grandeur or the impression of scientific rigor to humble pursuits, as incosmetology("the study of beauty treatment") orcynology("the study of dog training").
The -logy or -ology suffix is commonly used to indicate finite series of art works like books or movies. For paintings, the "tych" suffix is more common (e.g.diptych,triptych). Examples include:
Further terms like duology (two, mostly in genre fiction), quadrilogy (four), and octalogy (eight) have been coined but are rarely used; for a series of ten, "decalog" is sometimes used (e.g. in the Virgin Decalog) instead of "decalogy".
|
https://en.wikipedia.org/wiki/-ology
|
Thesuffixologyis commonly used in the English language to denote a field of study. Theologyending is a combination of the letteropluslogyin which the letterois used as aninterconsonantalletter which, forphonologicalreasons, precedes themorphemesuffixlogy.[1]Logyis asuffixin the English language, used with words originally adapted fromAncient Greekending in-λογία(-logia).[2]
English names for fields of study are usually created by taking aroot(the subject of the study) and appending the suffixlogyto it with the interconsonantaloplaced in between (with an exception explained below). For example, the worddermatologycomes from the rootdermatopluslogy.[3]Sometimes, anexcrescence, the addition of a consonant, must be added to avoid poor construction of words.
There are additional uses for the suffix such as to describe a subject rather than the study of it (e.g.technology). The suffix is often humorously appended to other English words to createnonce words. For example,stupidologywould refer to the study of stupidity;beerologywould refer to the study of beer.[1]
Not all scientific studies are suffixed with ology. When the root word ends with the letter "L" or a vowel, exceptions occur. For example, the study of mammals would take the root word mammal and append ology to it, yielding mammalology; but because the root ends in an "L", the result is instead mammalogy. There are exceptions to this exception, too. For example, the word angelology, with the root word angel, ends in an "L" but is not spelled angelogy according to the "L" rule.[4][5]
The terminal-logyis used to denote a discipline. These terms often utilize the suffix-logistor-ologistto describe one who studies the topic. In this case, the suffixologywould be replaced withologist. For example, one who studiesbiologyis called abiologist.
This list contains words that end in ology. It includes words that denote a field of study and those that do not, as well as commonly misspelled words that do not end in ology but are often written as such.
[Alphabetized word list not fully preserved in this extraction; surviving entries include Albanology (an interdisciplinary branch of the humanities that addresses the language, costume, literature, art, culture, and history of Albanians), brachylogy, ecology, lexicology, phycology, physiognomy, symbolology, symptomatology, and typology (the study of types).]
|
https://en.wikipedia.org/wiki/List_of_words_ending_in_ology
|
This is alist of graphical methodswith a mathematical basis.
Included arediagramtechniques,charttechniques,plottechniques, and other forms ofvisualization.
There is also alist of computer graphics and descriptive geometry topics.
|
https://en.wikipedia.org/wiki/List_of_graphical_methods
|
PGF/TikZis a pair of languages for producingvector graphics(e.g., technical illustrations and drawings) from a geometric/algebraic description, with standard features including the drawing of points, lines, arrows, paths, circles, ellipses and polygons. PGF is a lower-level language, while TikZ is a set of higher-level macros that use PGF. The top-level PGF and TikZ commands are invoked asTeXmacros, but in contrast withPSTricks, the PGF/TikZ graphics themselves are described in a language that resemblesMetaPost. Till Tantau is the designer of the PGF and TikZ languages. He is also the main developer of the only known interpreter for PGF and TikZ, which is written in TeX. PGF is an acronym for "Portable Graphics Format". TikZ was introduced in version 0.95 of PGF, and it is arecursive acronymfor "TikZ istkeinZeichenprogramm" (German for "TikZ isnota drawing program").
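For illustration, a minimal TikZ document might look as follows; this is a hypothetical sketch rather than an excerpt from the PGF manual, and the coordinates and labels are arbitrary. The \begin{tikzpicture} environment and the \draw commands are ordinary TeX macros, while the path syntax resembles MetaPost.

% Minimal TikZ sketch (illustrative example, not from the PGF manual)
\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  % coordinate axes with arrow tips and labels
  \draw[->] (0,0) -- (3,0) node[right] {$x$};
  \draw[->] (0,0) -- (0,3) node[above] {$y$};
  % a circle of radius 1 around a labelled point P
  \draw[thick] (1.5,1.5) circle (1);
  \fill (1.5,1.5) circle (1.5pt) node[below right] {$P$};
\end{tikzpicture}
\end{document}

Such a file can be compiled, for example, with pdflatex to obtain PDF output directly, or with latex followed by dvips to obtain PostScript.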
The PGF/TikZ interpreter can be used from the popular LaTeX and ConTeXt macro packages, and also directly from the original TeX.[2]: 116 Since TeX itself is not concerned with graphics, the interpreter supports multiple TeX output backends: dvips, dvipdfm/dvipdfmx/xdvipdfmx, TeX4ht, and pdftex's internal PDF output driver.[2]: 117–120 Unlike PSTricks, PGF can thus directly produce either PostScript or PDF output, but it cannot use some of the more advanced PostScript programming features that PSTricks can, due to the "least common denominator" effect.[3] PGF/TikZ comes with extensive documentation; version 3.1.4a of the manual has over 1300 pages.[2]
The standard LaTeXpictureenvironment can also be used as a front end for PGF by using thepgfpict2epackage.[2]: 27
The project has been under constant development since 2005.[4]Most of the development until 2018 was done by Till Tantau and since then Henri Menke has been the main contributor.[5]Version 3.0.0 was released on 20 December 2013.[6]One of the major new features of this version wasgraph drawingusing thegraphdrawingpackage, which however requiresLuaTeX.[7]This version also added a new data visualization method and support for directSVGoutput via the newdvisvgmdriver.[6]
Several graphical editors can produce output for PGF/TikZ, such as theKDEprogram Cirkuit[8]and the math drawing programGeoGebra.[9]Export to TikZ is also available as extensions forInkscape,[10]Blender,[11]MATLAB,[12]matplotlib,[13]Gnuplot,[14]Julia,[15]andR.[16]The circuit-macros package[17]ofm4 macrosexportscircuit diagramsto TikZ using thedpic -gcommand line option.[18]The dot2tex program can convert files in theDOTgraph description language to PGF/TikZ.[19]
TikZ featureslibrariesfor easy drawing of many kinds of diagrams, such as the following (alphabetized by library name):[2]
The following images were created with TikZ and show some examples of the range of graphic types that can be produced. The link in each caption points to the source code for the image.
|
https://en.wikipedia.org/wiki/PGF/TikZ
|
Data and information visualization(data viz/visorinfo viz/vis)[2]is the practice ofdesigningand creatinggraphicor visualrepresentationsof a large amount[3]of complex quantitative and qualitativedataandinformationwith the help of static, dynamic or interactive visual items. Typically based on data and information collected from a certaindomain of expertise, these visualizations are intended for a broader audience to help them visually explore and discover, quickly understand, interpret and gain important insights into otherwise difficult-to-identify structures, relationships, correlations, local and global patterns, trends, variations, constancy, clusters, outliers and unusual groupings within data (exploratory visualization).[4][5][6]When intended for the general public (mass communication) to convey a concise version of known, specific information in a clear and engaging manner (presentationalorexplanatory visualization),[4]it is typically calledinformation graphics.
Data visualizationis concerned with presenting sets of primarily quantitative raw data in a schematic form, using imagery. The visual formats used in data visualization include charts and graphs (e.g.pie charts,bar charts,line charts,area charts,cone charts,pyramid charts,donut charts,histograms,spectrograms,cohort charts,waterfall charts,funnel charts,bullet graphs, etc.),diagrams,plots(e.g.scatter plots,distribution plots,box-and-whisker plots), geospatialmaps(such asproportional symbol maps,choropleth maps,isopleth mapsandheat maps), figures,correlation matrices, percentagegauges, etc., which sometimes can be combined in adashboard.
Information visualization, on the other hand, deals with multiple, large-scale and complicated datasets which contain quantitative (numerical) data as well as qualitative (non-numerical, i.e. verbal or graphical) and primarily abstract information and its goal is to add value to raw data, improve the viewers' comprehension, reinforce their cognition and help them derive insights and make decisions as they navigate and interact with the computer-supported graphical display. Visual tools used in information visualization includemapsfor location based data;hierarchical[7]organisations of data such astree maps,radial_trees, and othertree_structures; displays that prioritiserelationships(Heer et al. 2010) such asSankey diagrams,network diagrams,venn diagrams,mind maps,semantic networks,entity-relationship diagrams;flow charts,timelines, etc.
Emerging technologieslikevirtual,augmentedandmixed realityhave the potential to make information visualization more immersive, intuitive, interactive and easily manipulable and thus enhance the user'svisual perceptionandcognition.[8]In data and information visualization, the goal is to graphically present and explore abstract, non-physical and non-spatial data collected fromdatabases,information systems,file systems,documents,business data, etc. (presentational and exploratory visualization) which is different from the field ofscientific visualization, where the goal is to render realistic images based on physical andspatialscientific datato confirm or rejecthypotheses(confirmatory visualization).[9]
Effective data visualization is properly sourced, contextualized, simple and uncluttered. The underlying data is accurate and up-to-date to make sure that insights are reliable. Graphical items are well-chosen for the given datasets and aesthetically appealing, with shapes, colors and other visual elements used deliberately in a meaningful and non-distracting manner. The visuals are accompanied by supporting texts (labels and titles). These verbal and graphical components complement each other to ensure clear, quick and memorable understanding. Effective information visualization is aware of the needs and concerns and the level of expertise of the target audience, deliberately guiding them to the intended conclusion.[10][3]Such effective visualization can be used not only for conveying specialized, complex, big data-driven ideas to a wider group of non-technical audience in a visually appealing, engaging and accessible manner, but also to domain experts and executives for making decisions, monitoring performance, generating new ideas and stimulating research.[10][4]In addition, data scientists, data analysts and data mining specialists use data visualization to check the quality of data, find errors, unusual gaps and missing values in data, clean data, explore the structures and features of data and assess outputs of data-driven models.[4]Inbusiness, data and information visualization can constitute a part ofdata storytelling, where they are paired with a coherentnarrativestructure orstorylineto contextualize the analyzed data and communicate the insights gained from analyzing the data clearly and memorably with the goal of convincing the audience into making a decision or taking an action in order to createbusiness value.[3][11]This can be contrasted with the field ofstatistical graphics, where complex statistical data are communicated graphically in an accurate and precise manner among researchers and analysts with statistical expertise to help them performexploratory data analysisor to convey the results of such analyses, where visual appeal, capturing attention to a certain issue and storytelling are not as important.[12]
The field of data and information visualization is of interdisciplinary nature as it incorporates principles found in the disciplines ofdescriptive statistics(as early as the 18th century),[13]visual communication,graphic design,cognitive scienceand, more recently,interactive computer graphicsandhuman-computer interaction.[14]Since effective visualization requires design skills, statistical skills and computing skills, it is argued by authors such as Gershon and Page that it is both an art and a science.[15]The neighboring field ofvisual analyticsmarries statistical data analysis, data and information visualization and human analytical reasoning through interactive visual interfaces to help human users reach conclusions, gain actionable insights and make informed decisions which are otherwise difficult for computers to do.
Research into how people read and misread various types of visualizations is helping to determine what types and features of visualizations are most understandable and effective in conveying information.[16][17]On the other hand, unintentionally poor or intentionally misleading and deceptive visualizations (misinformative visualization) can function as powerful tools which disseminatemisinformation, manipulate public perception and divertpublic opiniontoward a certain agenda.[18]Thus data visualization literacy has become an important component ofdataandinformation literacyin theinformation ageakin to the roles played bytextual,mathematicalandvisual literacyin the past.[19]
The field of data and information visualization has emerged "from research inhuman–computer interaction,computer science,graphics,visual design,psychology,photographyandbusiness methods. It is increasingly applied as a critical component in scientific research,digital libraries,data mining, financial data analysis, market studies, manufacturingproduction control, anddrug discovery".[20]
Data and information visualization presumes that "visual representations and interaction techniques take advantage of the human eye's broad bandwidth pathway into the mind to allow users to see, explore, and understand large amounts of information at once. Information visualization focused on the creation of approaches for conveying abstract information in intuitive ways."[21]
Data analysis is an indispensable part of all applied research and problem solving in industry. The most fundamental data analysis approaches are visualization (histograms, scatter plots, surface plots, tree maps, parallel coordinate plots, etc.),statistics(hypothesis test,regression,PCA, etc.),data mining(association mining, etc.), andmachine learningmethods (clustering,classification,decision trees, etc.). Among these approaches, information visualization, or visual data analysis, is the most reliant on the cognitive skills of human analysts, and allows the discovery of unstructured actionable insights that are limited only by human imagination and creativity. The analyst does not have to learn any sophisticated methods to be able to interpret the visualizations of the data. Information visualization is also a hypothesis generation scheme, which can be, and is typically followed by more analytical or formal analysis, such as statistical hypothesis testing.
To communicate information clearly and efficiently, data visualization usesstatistical graphics,plots,information graphicsand other tools. Numerical data may be encoded using dots, lines, or bars, to visually communicate a quantitative message.[22]Effective visualization helps users analyze and reason about data and evidence.[23]It makes complex data more accessible, understandable, and usable, but can also be reductive.[24]Users may have particular analytical tasks, such as making comparisons or understandingcausality, and the design principle of the graphic (i.e., showing comparisons or showing causality) follows the task. Tables are generally used where users will look up a specific measurement, while charts of various types are used to show patterns or relationships in the data for one or more variables.
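As a concrete illustration of encoding numbers as bars, the following short sketch uses the pgfplots package (which builds on PGF/TikZ, discussed elsewhere in this document) to render three invented category values as a bar chart; the data, axis labels, and styling are illustrative assumptions rather than figures from the cited sources.

% Illustrative bar chart with pgfplots; data values are invented
\documentclass{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.17}
\begin{document}
\begin{tikzpicture}
  % Three category values encoded as bar lengths, so that comparison
  % relies on length rather than on area or angle.
  \begin{axis}[ybar, ymin=0, symbolic x coords={A,B,C}, xtick=data,
               xlabel={Category}, ylabel={Value}]
    \addplot coordinates {(A,4) (B,7) (C,2)};
  \end{axis}
\end{tikzpicture}
\end{document}

A table of the same three numbers would favour exact look-up of individual values, whereas the bars make the comparison between categories immediate.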
Data visualization refers to the techniques used to communicate data or information by encoding it as visual objects (e.g., points, lines, or bars) contained in graphics. The goal is to communicate information clearly and efficiently to users. It is one of the steps indata analysisordata science. According to Vitaly Friedman (2008) the "main goal of data visualization is to communicate information clearly and effectively through graphical means. It doesn't mean that data visualization needs to look boring to be functional or extremely sophisticated to look beautiful. To convey ideas effectively, both aesthetic form and functionality need to go hand in hand, providing insights into a rather sparse and complex data set by communicating its key aspects in a more intuitive way. Yet designers often fail to achieve a balance between form and function, creating gorgeous data visualizations which fail to serve their main purpose — to communicate information".[25]
Indeed,Fernanda ViegasandMartin M. Wattenbergsuggested that an ideal visualization should not only communicate clearly, but stimulate viewer engagement and attention.[26]
Data visualization is closely related toinformation graphics,information visualization,scientific visualization,exploratory data analysisandstatistical graphics. In the new millennium, data visualization has become an active area of research, teaching and development. According to Post et al. (2002), it has united scientific and information visualization.[27]
In the commercial environment data visualization is often referred to asdashboards.Infographicsare another very common form of data visualization.
The greatest value of a picture is when it forces us to notice what we never expected to see.
Edward Tuftehas explained that users of information displays are executing particularanalytical taskssuch as making comparisons. Thedesign principleof the information graphic should support the analytical task.[29]As William Cleveland and Robert McGill show, different graphical elements accomplish this more or less effectively. For example, dot plots and bar charts outperform pie charts.[30]
In his 1983 bookThe Visual Display of Quantitative Information,[31]Edward Tuftedefines 'graphical displays' and principles for effective graphical display in the following passage:
"Excellence in statistical graphics consists of complex ideas communicated with clarity, precision, and efficiency. Graphical displays should:
Graphicsrevealdata. Indeed, graphics can be more precise and revealing than conventional statistical computations."[32]
For example, the Minard diagram shows the losses suffered by Napoleon's army in the 1812–1813 period. Six variables are plotted: the size of the army, its location on a two-dimensional surface (x and y), time, the direction of movement, and temperature. The line width illustrates a comparison (size of the army at points in time), while the temperature axis suggests a cause of the change in army size. This multivariate display on a two-dimensional surface tells a story that can be grasped immediately while identifying the source data to build credibility. Tufte wrote in 1983 that: "It may well be the best statistical graphic ever drawn."[32]
Not applying these principles may result inmisleading graphs, distorting the message, or supporting an erroneous conclusion. According to Tufte,chartjunkrefers to the extraneous interior decoration of the graphic that does not enhance the message or gratuitous three-dimensional or perspective effects. Needlessly separating the explanatory key from the image itself, requiring the eye to travel back and forth from the image to the key, is a form of "administrative debris." The ratio of "data to ink" should be maximized, erasing non-data ink where feasible.[32]
TheCongressional Budget Officesummarized several best practices for graphical displays in a June 2014 presentation. These included: a) Knowing your audience; b) Designing graphics that can stand alone outside the report's context; and c) Designing graphics that communicate the key messages in the report.[33]
Useful criteria for a data or information visualization include:[34]
Readability means that it is possible for a viewer to understand the underlying data, such as by making comparisons between proportionally sized visual elements to compare their respective data values, or by using a legend to decode a map, like identifying coloured regions on a climate map to read the temperature at that location. For greatest efficiency and simplicity of design and user experience, readability is enhanced through the use of a bijective mapping in the design of the image elements, where the mapping between representational elements and data variables is one-to-one.[35]
Kosara (2007)[34]also identifies the need for a visualisation to be "recognisable as a visualisation and not appear to be something else". He also states that recognisability and readability may not always be required in all types of visualisation e.g. "informative art" (which would still meet all three above criteria but might not look like a visualisation) or "artistic visualisation" (which similarly is still based on non-visual data to create an image, but may not be readable or recognisable).
AuthorStephen Fewdescribed eight types of quantitative messages that users may attempt to understand or communicate from a set of data and the associated graphs used to help communicate the message:
Analysts reviewing a set of data may consider whether some or all of the messages and graphic types above are applicable to their task and audience. The process of trial and error to identify meaningful relationships and messages in the data is part ofexploratory data analysis.
A human can distinguish differences in line length, shape, orientation, distances, and color (hue) readily without significant processing effort; these are referred to as "pre-attentive attributes". For example, it may require significant time and effort ("attentive processing") to identify the number of times the digit "5" appears in a series of numbers; but if that digit is different in size, orientation, or color, instances of the digit can be noted quickly through pre-attentive processing.[38]
Compelling graphics take advantage of pre-attentive processing and attributes and the relative strength of these attributes. For example, since humans can more easily process differences in line length than surface area, it may be more effective to use a bar chart (which takes advantage of line length to show comparison) rather than pie charts (which use surface area to show comparison).[38]
Almost all data visualizations are created for human consumption. Knowledge of human perception and cognition is necessary when designing intuitive visualizations.[39]Cognition refers to processes in human beings like perception, attention, learning, memory, thought, concept formation, reading, and problem solving.[40]Human visual processing is efficient in detecting changes and making comparisons between quantities, sizes, shapes and variations in lightness. When properties of symbolic data are mapped to visual properties, humans can browse through large amounts of data efficiently. It is estimated that 2/3 of the brain's neurons can be involved in visual processing. Proper visualization provides a different approach to show potential connections, relationships, etc. which are not as obvious in non-visualized quantitative data. Visualization can become a means ofdata exploration.
Studies have shown that individuals used on average 19% less cognitive resources and were 4.5% better able to recall details when working with data visualization rather than text.[41]
The modern study of visualization started withcomputer graphics, which "has from its beginning been used to study scientific problems. However, in its early days the lack of graphics power often limited its usefulness. The recent emphasis on visualization started in 1987 with the special issue of Computer Graphics on Visualization inScientific Computing. Since then there have been several conferences and workshops, co-sponsored by theIEEE Computer SocietyandACM SIGGRAPH".[42]They have been devoted to the general topics ofdata visualization, information visualization andscientific visualization, and more specific areas such asvolume visualization.
In 1786, William Playfair published the first presentation graphics.
There is no comprehensive 'history' of data visualization. There are no accounts that span the entire development of visual thinking and the visual representation of data, and which collate the contributions of disparate disciplines.[43] Michael Friendly and Daniel J. Denis of York University are engaged in a project that attempts to provide a comprehensive history of visualization. Contrary to general belief, data visualization is not a modern development. Stellar data, or information such as the location of stars, have been visualized on the walls of caves (such as those found in Lascaux Cave in southern France) since the Pleistocene era.[44] Physical artefacts such as Mesopotamian clay tokens (5500 BC), Inca quipus (2600 BC) and Marshall Islands stick charts (n.d.) can also be considered as visualizing quantitative information.[45][46]
The first documented data visualization can be traced back to 1160 BC with the Turin Papyrus Map, which accurately illustrates the distribution of geological resources and provides information about the quarrying of those resources.[47] Such maps can be categorized as thematic cartography, a type of data visualization that presents and communicates specific data and information through a geographical illustration designed to show a particular theme connected with a specific geographic area. The earliest documented forms of data visualization were various thematic maps from different cultures, along with ideograms and hieroglyphs that provided and allowed interpretation of the information illustrated. For example, Linear B tablets of Mycenae provided a visualization of information regarding Late Bronze Age trade in the Mediterranean. The idea of coordinates was used by ancient Egyptian surveyors in laying out towns; earthly and heavenly positions were located by something akin to latitude and longitude at least by 200 BC; and the map projection of a spherical Earth into latitude and longitude by Claudius Ptolemy [c. 85 – c. 165] in Alexandria served as a reference standard until the 14th century.[47]
The invention of paper and parchment allowed further development of visualizations throughout history. One surviving example is a graph from the 10th or possibly 11th century, intended as an illustration of planetary movement and used in an appendix of a textbook in monastery schools.[48] The graph was apparently meant to represent a plot of the inclinations of the planetary orbits as a function of time. For this purpose, the zone of the zodiac was represented on a plane with a horizontal line divided into thirty parts as the time or longitudinal axis, while the vertical axis designates the width of the zodiac. The horizontal scale appears to have been chosen for each planet individually, since the periods cannot be reconciled. The accompanying text refers only to the amplitudes; the curves are apparently not related in time.
By the 16th century, techniques and instruments for precise observation and measurement of physical quantities and of geographic and celestial positions were well developed (for example, a "wall quadrant" constructed by Tycho Brahe [1546–1601], covering an entire wall in his observatory). Particularly important were the development of triangulation and other methods to determine mapping locations accurately.[43] Very early on, the measure of time led scholars to develop innovative ways of visualizing data (e.g. Lorenz Codomann in 1596, Johannes Temporarius in 1596[49]).
The French philosopher and mathematician René Descartes and Pierre de Fermat developed analytic geometry and the two-dimensional coordinate system, which heavily influenced practical methods of displaying and calculating values. Fermat and Blaise Pascal's work on statistics and probability theory laid the groundwork for what we now conceptualize as data.[43] According to the Interaction Design Foundation, these developments allowed and helped William Playfair, who saw potential for graphical communication of quantitative data, to generate and develop graphical methods of statistics.[39]
In the second half of the 20th century,Jacques Bertinused quantitative graphs to represent information "intuitively, clearly, accurately, and efficiently".[39]
John Tukey and Edward Tufte pushed the bounds of data visualization: Tukey with his new statistical approach of exploratory data analysis, and Tufte with his book The Visual Display of Quantitative Information, which paved the way for refining data visualization techniques beyond statisticians. With the progression of technology came the progression of data visualization, starting with hand-drawn visualizations and evolving into more technical applications, including interactive designs leading to software visualization.[50]
Programs like SAS, SOFA, R, Minitab, Cornerstone and more allow for data visualization in the field of statistics. Other data visualization applications, more focused and tailored to individual needs, and programming languages such as D3, Python (through matplotlib and seaborn), JavaScript and Java (through JavaFX) help make the visualization of quantitative data possible. Private schools have also developed programs to meet the demand for learning data visualization and associated programming libraries, including free programs like The Data Incubator or paid programs like General Assembly.[51]
Beginning with the symposium "Data to Discovery" in 2013, ArtCenter College of Design, Caltech and JPL in Pasadena have run an annual program on interactive data visualization.[52]The program asks: How can interactive data visualization help scientists and engineers explore their data more effectively? How can computing, design, and design thinking help maximize research results? What methodologies are most effective for leveraging knowledge from these fields? By encoding relational information with appropriate visual and interactive characteristics to help interrogate, and ultimately gain new insight into data, the program develops new interdisciplinary approaches to complex science problems, combining design thinking and the latest methods from computing, user-centered design, interaction design and 3D graphics.
Data visualization involves specific terminology, some of which is derived from statistics. For example, authorStephen Fewdefines two types of data, which are used in combination to support a meaningful analysis or visualization:
The distinction between quantitative and categorical variables is important because the two types require different methods of visualization.
Two primary types ofinformation displaysare tables and graphs.
Eppler and Lengler have developed the "Periodic Table of Visualization Methods," an interactive chart displaying various data visualization methods. It includes six types of data visualization methods: data, information, concept, strategy, metaphor and compound.[55] In "Visualization Analysis and Design" Tamara Munzner writes "Computer-based visualization systems provide visual representations of datasets designed to help people carry out tasks more effectively." Munzner argues that visualization "is suitable when there is a need to augment human capabilities rather than replace people with computational decision-making methods."[56]
Variable-width ("variwide") bar chart
Orthogonal (orthogonal composite) bar chart
Interactive data visualizationenables direct actions on a graphicalplotto change elements and link between multiple plots.[59]
Interactive data visualization has been a pursuit ofstatisticianssince the late 1960s. Examples of the developments can be found on theAmerican Statistical Associationvideo lending library.[60]
Common interactions include:
There are different approaches on the scope of data visualization. One common focus is on information presentation, such as Friedman (2008). Friendly (2008) presumes two main parts of data visualization: statistical graphics and thematic cartography.[61] In this line, the "Data Visualization: Modern Approaches" (2007) article gives an overview of seven subjects of data visualization:[62]
All these subjects are closely related tographic designand information representation.
On the other hand, from acomputer scienceperspective, Frits H. Post in 2002 categorized the field into sub-fields:[27][63]
In the Harvard Business Review, Scott Berinato developed a framework for approaching data visualisation.[64] To start thinking visually, users must consider two questions: 1) what you have, and 2) what you're doing. The first step is identifying what data you want visualised. It may be data-driven, like profit over the past ten years, or a conceptual idea, like how a specific organisation is structured. Once this question is answered, one can then focus on whether they are trying to communicate information (declarative visualisation) or trying to figure something out (exploratory visualisation). Scott Berinato combines these questions to give four types of visual communication that each have their own goals.[64]
These four types of visual communication are as follows:
Data and information visualization insights are being applied in areas such as:[20]
Notable academic and industry laboratories in the field are:
Conferences in this field, ranked by significance in data visualization research,[66]are:
For further examples, see:Category:Computer graphics organizations
Data presentation architecture(DPA) is a skill-set that seeks to identify, locate, manipulate, format and present data in such a way as to optimally communicate meaning and proper knowledge.
Historically, the termdata presentation architectureis attributed to Kelly Lautt:[a]"Data Presentation Architecture (DPA) is a rarely applied skill set critical for the success and value ofBusiness Intelligence. Data presentation architecture weds the science of numbers, data and statistics indiscovering valuable informationfrom data and making it usable, relevant and actionable with the arts of data visualization, communications,organizational psychologyandchange managementin order to provide business intelligence solutions with the data scope, delivery timing, format and visualizations that will most effectively support and drive operational, tactical and strategic behaviour toward understood business (or organizational) goals. DPA is neither an IT nor a business skill set but exists as a separate field of expertise. Often confused with data visualization, data presentation architecture is a much broader skill set that includes determining what data on what schedule and in what exact format is to be presented, not just the best way to present data that has already been chosen. Data visualization skills are one element of DPA."
DPA has two main objectives:
With the above objectives in mind, the actual work of data presentation architecture consists of:
DPA work shares commonalities with several other fields, including:
|
https://en.wikipedia.org/wiki/Data_Presentation_Architecture
|
Visual inspectionis a common method ofquality control,data acquisition, anddata analysis.
Visual inspection, used in the maintenance of facilities, means inspection of equipment and structures using any or all of the raw human senses, such as vision, hearing, touch and smell, and/or any non-specialized inspection equipment.
Inspections requiring ultrasonic, X-ray, infrared or similar equipment are not typically regarded as visual inspection, as these inspection methodologies require specialized equipment, training and certification.
A study of the visual inspection of smallintegrated circuitsfound that the modal duration of eye fixations of trained inspectors was about 200 ms. The most accurate inspectors made the fewest eye fixations and were the fastest. When the same chip was judged more than once by an individual inspector the consistency of judgment was very high whereas the consistency between inspectors was somewhat less. Variation by a factor of six in inspection speed led to variation of less than a factor of two in inspection accuracy. Visual inspection had afalse positiverate of 2% and afalse negativerate of 23%.[1]
To do an eyeball search is to look for something specific in a mass of code or data with one's own eyes, as opposed to using some sort of pattern matching software like grep or any other automated search tool. Also known as vgrep or ogrep, i.e., "visual/optical grep".[2] See also vdiff.
"Eyeballing" is the most common and readily available method of initial data assessment.[3]This method is effective for identifying patterns or anomalies in complex data but can be time-intensive and error-prone.[4]Although low-cost and adaptable, its efficiency andROIoften fall short compared to automated tools, which offer greater scalability and consistency.[5]However, switching from manual visual inspection to automated methods depends on the task's complexity, scale, and the balance between upfront costs and long-term efficiency.[6]
Experts inpattern recognitionmaintain that the "eyeball" technique is still the most effective procedure for searching arbitrary, possibly unknown structures in data.[7]
In the military, applying this sort of search to real-world terrain is often referred to as using the "Mark I Eyeball" (pronounced as Mark One Eyeball), a device designation the U.S. military adopted in the 1950s.[8] The term is an allusion to military nomenclature, "Mark I" being the first version of a military vehicle or weapon.
|
https://en.wikipedia.org/wiki/Visual_inspection
|
A chart (sometimes known as a graph) is a graphical representation for data visualization, in which "the data is represented by symbols, such as bars in a bar chart, lines in a line chart, or slices in a pie chart".[1] A chart can represent tabular numeric data, functions or some kinds of quality structure, and provides different information.
The term "chart" as a graphical representation ofdatahas multiple meanings:
Charts are often used to ease understanding of large quantities of data and the relationships between parts of the data. Charts can usually be read more quickly than the raw data. They are used in a wide variety of fields, and can be created by hand (often on graph paper) or by computer using a charting application. Certain types of charts are more useful for presenting a given data set than others. For example, data that present percentages in different groups (such as "satisfied, not satisfied, unsure") are often displayed in a pie chart, but may be more easily understood when presented in a horizontal bar chart.[2] On the other hand, data that represent numbers changing over a period of time (such as "annual revenue from 1990 to 2000") might be best shown as a line chart.
A chart can take a large variety of forms. However, there are common features that provide the chart with its ability to extract meaning from data.
Typically the data in a chart is represented graphically since humans can infer meaning from pictures more quickly than from text. Thus, the text is generally used only to annotate the data.
One of the most important uses of text in a graph is thetitle. A graph's title usually appears above the main graphic and provides a succinct description of what the data in the graph refers to.
Dimensions in the data are often displayed onaxes. If a horizontal and a vertical axis are used, they are usually referred to as the x-axis and y-axis. Each axis will have ascale, denoted by periodic graduations and usually accompanied by numerical or categorical indications. Each axis will typically also have a label displayed outside or beside it, briefly describing the dimension represented. If the scale is numerical, the label will often be suffixed with the unit of that scale in parentheses. For example, "Distance traveled (m)" is a typical x-axis label and would mean that the distance traveled, in units of meters, is related to the horizontal position of the data within the chart.
Within the graph, a grid of lines may appear to aid in the visual alignment of data. The grid can be enhanced by visually emphasizing the lines at regular or significant graduations. The emphasized lines are then called major gridlines, and the remainder are minor gridlines.
A chart's data can appear in all manner of formats and may include individual textuallabelsdescribing the datum associated with the indicated position in the chart. The data may appear as dots or shapes, connected or unconnected, and in any combination of colors and patterns. In addition, inferences or points of interest can be overlaid directly on the graph to further aid information extraction.
When the data appearing in a chart contains multiple variables, the chart may include alegend(also known as akey). A legend contains a list of the variables appearing in the chart and an example of their appearance. This information allows the data from each variable to be identified in the chart.
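As a rough illustration of these components, the following minimal sketch uses Python's matplotlib library (the data values, labels and styling are invented purely for demonstration) to produce a chart with a title, labelled axes with units, major gridlines and a legend:

import matplotlib.pyplot as plt

time_s = [0, 1, 2, 3, 4]            # x-axis data (hypothetical)
walk_m = [0, 1.2, 2.5, 3.9, 5.0]    # first variable
run_m = [0, 2.8, 5.5, 8.1, 11.0]    # second variable

fig, ax = plt.subplots()
ax.plot(time_s, walk_m, marker="o", label="Walking")  # data shown as connected dots
ax.plot(time_s, run_m, marker="s", label="Running")
ax.set_title("Distance traveled over time")           # title above the main graphic
ax.set_xlabel("Time (s)")                             # axis label with the unit in parentheses
ax.set_ylabel("Distance traveled (m)")
ax.grid(True, which="major")                          # major gridlines to aid visual alignment
ax.legend(title="Activity")                           # legend (key) identifying each variable
plt.show()

Each call maps to a feature described above: set_title supplies the title, set_xlabel and set_ylabel label the axes with units in parentheses, grid draws the major gridlines, and legend provides the key for the two plotted variables.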
Four of the most common charts are:
This gallery shows:
Other common charts are:
Examples of less common charts are:
This gallery shows:
Some types of charts have specific uses in a certain field
This gallery shows:
Other examples:
Some of the better-known named charts are:
Some specific charts have become well known by effectively explaining a phenomenon or idea.
There are dozens of other types of charts. Here are some of them:
One more example: Bernal chart
While charts can be drawn by hand, computer software is often used to automatically produce a chart based on entered data. For examples of commonly used software tools, seeList of charting software.
|
https://en.wikipedia.org/wiki/Chart
|
There are many different types of software available to produce charts. A number of notable examples (with their own Wikipedia articles) are given below, organized according to the programming language or other context in which they are used.
|
https://en.wikipedia.org/wiki/List_of_charting_software
|
Instatistics, the number ofdegrees of freedomis the number of values in the final calculation of astatisticthat are free to vary.[1]
Estimates ofstatistical parameterscan be based upon different amounts of information or data. The number of independent pieces of information that go into the estimate of a parameter is called the degrees of freedom. In general, the degrees of freedom of an estimate of a parameter are equal to the number of independentscoresthat go into the estimate minus the number of parameters used as intermediate steps in the estimation of the parameter itself. For example, if thevarianceis to be estimated from a random sample ofN{\textstyle N}independent scores, then the degrees of freedom is equal to the number of independent scores (N) minus the number of parameters estimated as intermediate steps (one, namely, the sample mean) and is therefore equal toN−1{\textstyle N-1}.[2]
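As a minimal numerical illustration (the sample values are arbitrary), dividing the sum of squared deviations by N − 1 reproduces the unbiased sample variance that NumPy computes with ddof=1:

import numpy as np

x = np.array([4.0, 7.0, 6.0, 5.0, 8.0])   # N = 5 independent scores (hypothetical)
n = x.size

residuals = x - x.mean()                  # one parameter (the sample mean) is estimated first
print(residuals.sum())                    # ~0: the residuals satisfy one linear constraint

var_unbiased = (residuals ** 2).sum() / (n - 1)   # divide by N - 1 degrees of freedom
print(var_unbiased, np.var(x, ddof=1))            # the two values agree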
Mathematically, degrees of freedom is the number ofdimensionsof the domain of arandom vector, or essentially the number of "free" components (how many components need to be known before the vector is fully determined).
The term is most often used in the context oflinear models(linear regression,analysis of variance), where certain random vectors are constrained to lie inlinear subspaces, and the number of degrees of freedom is the dimension of thesubspace. The degrees of freedom are also commonly associated with the squared lengths (or "sum of squares" of the coordinates) of such vectors, and the parameters ofchi-squaredand other distributions that arise in associated statistical testing problems.
While introductory textbooks may introduce degrees of freedom as distribution parameters or through hypothesis testing, it is the underlying geometry that defines degrees of freedom, and is critical to a proper understanding of the concept.
Although the basic concept of degrees of freedom was recognized as early as 1821 in the work of German astronomer and mathematicianCarl Friedrich Gauss,[3]its modern definition and usage was first elaborated by English statisticianWilliam Sealy Gossetin his 1908Biometrikaarticle "The Probable Error of a Mean", published under the pen name "Student".[4]While Gosset did not actually use the term 'degrees of freedom', he explained the concept in the course of developing what became known asStudent's t-distribution. The term itself was popularized by English statistician and biologistRonald Fisher, beginning with his 1922 work on chi squares.[5]
In equations, the typical symbol for degrees of freedom is ν (the lowercase Greek letter nu). In text and tables, the abbreviation "d.f." is commonly used. R. A. Fisher used n to symbolize degrees of freedom, but modern usage typically reserves n for sample size. When reporting the results of statistical tests, the degrees of freedom are typically noted beside the test statistic as either a subscript or in parentheses.[6]
Geometrically, the degrees of freedom can be interpreted as the dimension of certain vector subspaces. As a starting point, suppose that we have a sample of independent normally distributed observations,
This can be represented as ann-dimensionalrandom vector:
Since this random vector can lie anywhere inn-dimensional space, it hasndegrees of freedom.
Now, letX¯{\displaystyle {\bar {X}}}be thesample mean. The random vector can be decomposed as the sum of the sample mean plus a vector of residuals:
The first vector on the right-hand side is constrained to be a multiple of the vector of 1's, and the only free quantity isX¯{\displaystyle {\bar {X}}}. It therefore has 1 degree of freedom.
The second vector is constrained by the relation∑i=1n(Xi−X¯)=0{\textstyle \sum _{i=1}^{n}(X_{i}-{\bar {X}})=0}. The firstn− 1 components of this vector can be anything. However, once you know the firstn− 1 components, the constraint tells you the value of thenth component. Therefore, this vector hasn− 1 degrees of freedom.
Mathematically, the first vector is the orthogonal projection of the data vector onto the subspace spanned by the vector of 1's. The 1 degree of freedom is the dimension of this subspace. The second residual vector is the least-squares projection onto the (n − 1)-dimensional orthogonal complement of this subspace, and has n − 1 degrees of freedom.
In statistical testing applications, often one is not directly interested in the component vectors, but rather in their squared lengths. In the example above, theresidual sum-of-squaresis
If the data pointsXi{\displaystyle X_{i}}are normally distributed with mean 0 and varianceσ2{\displaystyle \sigma ^{2}}, then the residual sum of squares has a scaledchi-squared distribution(scaled by the factorσ2{\displaystyle \sigma ^{2}}), withn− 1 degrees of freedom. The degrees-of-freedom, here a parameter of the distribution, can still be interpreted as the dimension of an underlying vector subspace.
Likewise, the one-sample t-test statistic,
follows aStudent's tdistribution withn− 1 degrees of freedom when the hypothesized meanμ0{\displaystyle \mu _{0}}is correct. Again, the degrees-of-freedom arises from the residual vector in the denominator.
When the results of structural equation models (SEM) are presented, they generally include one or more indices of overall model fit, the most common of which is aχ2statistic. This forms the basis for other indices that are commonly reported. Although it is these other statistics that are most commonly interpreted, thedegrees of freedomof theχ2are essential to understanding model fit as well as the nature of the model itself.
Degrees of freedom in SEM are computed as a difference between the number of unique pieces of information that are used as input into the analysis, sometimes called knowns, and the number of parameters that are uniquely estimated, sometimes called unknowns. For example, in a one-factor confirmatory factor analysis with 4 items, there are 10 knowns (the six unique covariances among the four items and the four item variances) and 8 unknowns (4 factor loadings and 4 error variances) for 2 degrees of freedom. Degrees of freedom are important to the understanding of model fit if for no other reason than that, all else being equal, the fewer degrees of freedom, the better indices such asχ2will be.
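A rough sketch of this counting rule (the helper function below is written purely for illustration and is not part of any SEM package) reproduces the one-factor, four-item example:

def sem_degrees_of_freedom(n_items: int, n_free_params: int) -> int:
    # knowns: unique variances and covariances among the observed items
    knowns = n_items * (n_items + 1) // 2
    return knowns - n_free_params

# one-factor CFA with 4 items: 4 factor loadings + 4 error variances = 8 unknowns
print(sem_degrees_of_freedom(4, 8))   # 10 knowns - 8 unknowns = 2 degrees of freedom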
It has been shown that degrees of freedom can be used by readers of papers that contain SEMs to determine if the authors of those papers are in fact reporting the correct model fit statistics. In the organizational sciences, for example, nearly half of papers published in top journals report degrees of freedom that are inconsistent with the models described in those papers, leaving the reader to wonder which models were actually tested.[7]
A common way to think of degrees of freedom is as the number of independent pieces of information available to estimate another piece of information. More concretely, the number of degrees of freedom is the number of independent observations in a sample of data that are available to estimate a parameter of the population from which that sample is drawn. For example, if we have two observations, when calculating the mean we have two independent observations; however, when calculating the variance, we have only one independent observation, since the two observations are equally distant from the sample mean.
In fitting statistical models to data, the vectors of residuals are constrained to lie in a space of smaller dimension than the number of components in the vector. That smaller dimension is the number ofdegrees of freedom for error, also calledresidual degrees of freedom.
Perhaps the simplest example is this. Suppose
arerandom variableseach withexpected valueμ, and let
be the "sample mean." Then the quantities
are residuals that may be consideredestimatesof theerrorsXi−μ. The sum of the residuals (unlike the sum of the errors) is necessarily 0. If one knows the values of anyn− 1 of the residuals, one can thus find the last one. That means they are constrained to lie in a space of dimensionn− 1. One says that there aren− 1 degrees of freedom for errors.
An example which is only slightly less simple is that ofleast squaresestimation ofaandbin the model
wherexiis given, but eiand henceYiare random. Leta^{\displaystyle {\widehat {a}}}andb^{\displaystyle {\widehat {b}}}be the least-squares estimates ofaandb. Then the residuals
are constrained to lie within the space defined by the two equations
One says that there aren− 2 degrees of freedom for error.
Notationally, the capital letter Y is used in specifying the model, while the lower-case y appears in the definition of the residuals; that is because the former are hypothesized random variables and the latter are actual data.
We can generalise this to multiple regression involving p parameters and covariates (e.g. p − 1 predictors and one mean, i.e. the intercept in the regression), in which case the cost in degrees of freedom of the fit is p, leaving n − p degrees of freedom for errors.
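A small sketch (with made-up data) of the n − p residual degrees of freedom for an ordinary least-squares fit with an intercept and one predictor:

import numpy as np

rng = np.random.default_rng(0)
n = 20
x = rng.normal(size=n)
y = 1.5 + 2.0 * x + rng.normal(size=n)      # hypothetical linear model with noise

X = np.column_stack([np.ones(n), x])        # design matrix: intercept + 1 predictor, so p = 2
beta, _, rank, _ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

df_error = n - rank                         # n - p residual degrees of freedom
print(df_error)                             # 18
sigma2_hat = residuals @ residuals / df_error   # unbiased estimate of the error variance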
The demonstration of thetand chi-squared distributions for one-sample problems above is the simplest example where degrees-of-freedom arise. However, similar geometry and vector decompositions underlie much of the theory oflinear models, includinglinear regressionandanalysis of variance. An explicit example based on comparison of three means is presented here; the geometry of linear models is discussed in more complete detail by Christensen (2002).[8]
Suppose independent observations are made for three populations,X1,…,Xn{\displaystyle X_{1},\ldots ,X_{n}},Y1,…,Yn{\displaystyle Y_{1},\ldots ,Y_{n}}andZ1,…,Zn{\displaystyle Z_{1},\ldots ,Z_{n}}. The restriction to three groups and equal sample sizes simplifies notation, but the ideas are easily generalized.
The observations can be decomposed as
whereX¯,Y¯,Z¯{\displaystyle {\bar {X}},{\bar {Y}},{\bar {Z}}}are the means of the individual samples, andM¯=(X¯+Y¯+Z¯)/3{\displaystyle {\bar {M}}=({\bar {X}}+{\bar {Y}}+{\bar {Z}})/3}is the mean of all 3nobservations. In vector notation this decomposition can be written as
The observation vector, on the left-hand side, has 3ndegrees of freedom. On the right-hand side, the first vector has one degree of freedom (or dimension) for the overall mean. The second vector depends on three random variables,X¯−M¯{\displaystyle {\bar {X}}-{\bar {M}}},Y¯−M¯{\displaystyle {\bar {Y}}-{\bar {M}}}andZ¯−M¯{\displaystyle {\overline {Z}}-{\overline {M}}}. However, these must sum to 0 and so are constrained; the vector therefore must lie in a 2-dimensional subspace, and has 2 degrees of freedom. The remaining 3n− 3 degrees of freedom are in the residual vector (made up ofn− 1 degrees of freedom within each of the populations).
In statistical testing problems, one usually is not interested in the component vectors themselves, but rather in their squared lengths, or Sum of Squares. The degrees of freedom associated with a sum-of-squares is the degrees-of-freedom of the corresponding component vectors.
The three-population example above is an example ofone-way Analysis of Variance. The model, or treatment, sum-of-squares is the squared length of the second vector,
with 2 degrees of freedom. The residual, or error, sum-of-squares is
with 3(n−1) degrees of freedom. Of course, introductory books on ANOVA usually state formulae without showing the vectors, but it is this underlying geometry that gives rise to SS formulae, and shows how to unambiguously determine the degrees of freedom in any given situation.
Under the null hypothesis of no difference between population means (and assuming that standard ANOVA regularity assumptions are satisfied) the sums of squares have scaled chi-squared distributions, with the corresponding degrees of freedom. The F-test statistic is the ratio, after scaling by the degrees of freedom. If there is no difference between population means this ratio follows anF-distributionwith 2 and 3n− 3 degrees of freedom.
In some complicated settings, such as unbalancedsplit-plotdesigns, the sums-of-squares no longer have scaled chi-squared distributions. Comparison of sum-of-squares with degrees-of-freedom is no longer meaningful, and software may report certain fractional 'degrees of freedom' in these cases. Such numbers have no genuine degrees-of-freedom interpretation, but are simply providing anapproximatechi-squared distribution for the corresponding sum-of-squares. The details of such approximations are beyond the scope of this page.
Several commonly encountered statistical distributions (Student'st,chi-squared,F) have parameters that are commonly referred to asdegrees of freedom. This terminology simply reflects that in many applications where these distributions occur, the parameter corresponds to the degrees of freedom of an underlying random vector, as in the preceding ANOVA example. Another simple example is: ifXi;i=1,…,n{\displaystyle X_{i};i=1,\ldots ,n}are independent normal(μ,σ2){\displaystyle (\mu ,\sigma ^{2})}random variables, the statistic
follows a chi-squared distribution withn− 1 degrees of freedom. Here, the degrees of freedom arises from the residual sum-of-squares in the numerator, and in turn then− 1 degrees of freedom of the underlying residual vector{Xi−X¯}{\displaystyle \{X_{i}-{\bar {X}}\}}.
In the application of these distributions to linear models, the degrees of freedom parameters can take onlyintegervalues. The underlying families of distributions allow fractional values for the degrees-of-freedom parameters, which can arise in more sophisticated uses. One set of examples is problems where chi-squared approximations based oneffective degrees of freedomare used. In other applications, such as modellingheavy-taileddata, a t orF-distribution may be used as an empirical model. In these cases, there is no particulardegrees of freedominterpretation to the distribution parameters, even though the terminology may continue to be used.
Many non-standard regression methods, includingregularized least squares(e.g.,ridge regression),linear smoothers,smoothing splines, andsemiparametric regression, are not based onordinary least squaresprojections, but rather onregularized(generalizedand/or penalized) least-squares, and so degrees of freedom defined in terms of dimensionality is generally not useful for these procedures. However, these procedures are still linear in the observations, and the fitted values of the regression can be expressed in the form
wherey^{\displaystyle {\hat {y}}}is the vector of fitted values at each of the original covariate values from the fitted model,yis the original vector of responses, andHis thehat matrixor, more generally, smoother matrix.
For statistical inference, sums-of-squares can still be formed: the model sum-of-squares is‖Hy‖2{\displaystyle \|Hy\|^{2}}; the residual sum-of-squares is‖y−Hy‖2{\displaystyle \|y-Hy\|^{2}}. However, becauseHdoes not correspond to an ordinary least-squares fit (i.e. is not an orthogonal projection), these sums-of-squares no longer have (scaled, non-central) chi-squared distributions, and dimensionally defined degrees-of-freedom are not useful.
Theeffective degrees of freedomof the fit can be defined in various ways to implementgoodness-of-fit tests,cross-validation, and otherstatistical inferenceprocedures. Here one can distinguish betweenregression effective degrees of freedomandresidual effective degrees of freedom.
For the regression effective degrees of freedom, appropriate definitions can include thetraceof the hat matrix,[9]tr(H), the trace of the quadratic form of the hat matrix, tr(H'H), the form tr(2H–HH'), or theSatterthwaite approximation,tr(H'H)2/tr(H'HH'H).[10]In the case of linear regression, the hat matrixHisX(X'X)−1X ', and all these definitions reduce to the usual degrees of freedom. Notice that
the regression (not residual) degrees of freedom in linear models are "the sum of the sensitivities of the fitted values with respect to the observed response values",[11]i.e. the sum ofleverage scores.
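As a sketch of the trace definition (the data are random and ridge regression is used here only as one concrete example of a regularized least-squares fit), the regression effective degrees of freedom equal p with no penalty and shrink as the penalty grows:

import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 5
X = rng.normal(size=(n, p))

def ridge_hat_matrix(X, lam):
    # H = X (X'X + lam I)^(-1) X'
    k = X.shape[1]
    return X @ np.linalg.solve(X.T @ X + lam * np.eye(k), X.T)

for lam in [0.0, 1.0, 10.0]:
    H = ridge_hat_matrix(X, lam)
    print(lam, np.trace(H))   # effective df: equals p when lam = 0, shrinks as lam grows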
One way to help to conceptualize this is to consider a simple smoothing matrix like aGaussian blur, used to mitigate data noise. In contrast to a simple linear or polynomial fit, computing the effective degrees of freedom of the smoothing function is not straightforward. In these cases, it is important to estimate the Degrees of Freedom permitted by theH{\displaystyle H}matrix so that the residual degrees of freedom can then be used to estimate statistical tests such asχ2{\displaystyle \chi ^{2}}.
There are corresponding definitions of residual effective degrees-of-freedom (redf), withHreplaced byI−H. For example, if the goal is to estimate error variance, the redf would be defined as tr((I−H)'(I−H)), and the unbiased estimate is (withr^=y−Hy{\displaystyle {\hat {r}}=y-Hy}),
or:[12][13][14][15]
The last approximation above[13]reduces the computational cost fromO(n2) to onlyO(n). In general the numerator would be the objective function being minimized; e.g., if the hat matrix includes an observation covariance matrix, Σ, then‖r^‖2{\displaystyle \|{\hat {r}}\|^{2}}becomesr^′Σ−1r^{\displaystyle {\hat {r}}'\Sigma ^{-1}{\hat {r}}}.
Note that unlike in the original case, non-integer degrees of freedom are allowed, though the value must usually still be constrained between 0 andn.[16]
Consider, as an example, the k-nearest neighbour smoother, which is the average of the k nearest measured values to the given point. Then, at each of the n measured points, the weight of the original value on the linear combination that makes up the predicted value is just 1/k, so the trace of the hat matrix is n/k. Thus the smooth costs n/k effective degrees of freedom.
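A quick numerical check of this, sketched with NumPy (the sample points are random, and it is assumed that each point counts among its own k nearest neighbours):

import numpy as np

rng = np.random.default_rng(2)
n, k = 30, 5
x = np.sort(rng.uniform(0, 10, size=n))

# hat matrix of the k-nearest-neighbour smoother: weight 1/k on the k nearest points
H = np.zeros((n, n))
for i in range(n):
    nearest = np.argsort(np.abs(x - x[i]))[:k]
    H[i, nearest] = 1.0 / k

print(np.trace(H), n / k)   # both 6.0: the smoother uses n/k effective degrees of freedom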
As another example, consider the existence of nearly duplicated observations. Naive application of classical formula,n−p, would lead to over-estimation of the residuals degree of freedom, as if each observation were independent. More realistically, though, the hat matrixH=X(X' Σ−1X)−1X 'Σ−1would involve an observation covariance matrix Σ indicating the non-zero correlation among observations.
The more general formulation of effective degree of freedom would result in a more realistic estimate for, e.g., the error variance σ2, which in its turn scales the unknown parameters'a posterioristandard deviation; the degree of freedom will also affect the expansion factor necessary to produce anerror ellipsefor a givenconfidence level.
Similar concepts are theequivalent degrees of freedominnon-parametric regression,[17]thedegree of freedom of signalin atmospheric studies,[18][19]and thenon-integer degree of freedomin geodesy.[20][21]
The residual sum-of-squares‖y−Hy‖2{\displaystyle \|y-Hy\|^{2}}has ageneralized chi-squared distribution, and the theory associated with this distribution[22]provides an alternative route to the answers provided above.[further explanation needed]
|
https://en.wikipedia.org/wiki/Degrees_of_freedom_(statistics)#In_non-standard_regression
|
Instatistics,kernel regressionis anon-parametrictechnique to estimate theconditional expectationof arandom variable. The objective is to find a non-linear relation between a pair of random variablesXandY.
In anynonparametric regression, theconditional expectationof a variableY{\displaystyle Y}relative to a variableX{\displaystyle X}may be written:
wherem{\displaystyle m}is an unknown function.
NadarayaandWatson, both in 1964, proposed to estimatem{\displaystyle m}as a locally weighted average, using akernelas a weighting function.[1][2][3]The Nadaraya–Watson estimator is:
whereKh(t)=1hK(th){\displaystyle K_{h}(t)={\frac {1}{h}}K\left({\frac {t}{h}}\right)}is a kernel with a bandwidthh{\displaystyle h}such thatK(⋅){\displaystyle K(\cdot )}is of order at least 1, that is∫−∞∞uK(u)du=0{\displaystyle \int _{-\infty }^{\infty }uK(u)\,du=0}.
Starting with the definition ofconditional expectation,
we estimate the joint distributionsf(x,y) andf(x) usingkernel density estimationwith a kernelK:
We get:
which is the Nadaraya–Watson estimator.
whereh{\displaystyle h}is the bandwidth (or smoothing parameter).
wheresi=xi−1+xi2.{\displaystyle s_{i}={\frac {x_{i-1}+x_{i}}{2}}.}[4]
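A minimal sketch of the Nadaraya–Watson estimator above, assuming a Gaussian kernel and an arbitrarily chosen bandwidth (a real analysis would select h by cross-validation or a plug-in rule, and the data here are synthetic):

import numpy as np

def nadaraya_watson(x_query, x, y, h):
    # Gaussian kernel weights K_h(x_query - x_i); the 1/h normalization cancels in the ratio
    u = (x_query - x) / h
    w = np.exp(-0.5 * u ** 2)
    return np.sum(w * y) / np.sum(w)    # locally weighted average

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 10, 100))
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)   # hypothetical noisy relation

grid = np.linspace(0, 10, 200)
m_hat = np.array([nadaraya_watson(g, x, y, h=0.5) for g in grid])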
This example is based upon Canadian cross-section wage data consisting of a random sample taken from the 1971 Canadian Census Public Use Tapes for male individuals having common education (grade 13). There are 205 observations in total.[citation needed]
The figure to the right shows the estimated regression function using a second order Gaussian kernel along with asymptotic variability bounds.
The following commands of theR programming languageuse thenpreg()function to deliver optimal smoothing and to create the figure given above. These commands can be entered at the command prompt via cut and paste.
According toDavid Salsburg, the algorithms used in kernel regression were independently developed and used infuzzy systems: "Coming up with almost exactly the same computer algorithm, fuzzy systems and kernel density-based regressions appear to have been developed completely independently of one another."[5]
|
https://en.wikipedia.org/wiki/Kernel_regression
|
Moving least squaresis a method of reconstructingcontinuous functionsfrom asetof unorganized point samples via the calculation of aweighted least squaresmeasurebiased towards the region around the point at which the reconstructed value is requested.
Incomputer graphics, the moving least squares method is useful for reconstructing a surface from a set of points. Often it is used to create a 3D surface from apoint cloudthrough eitherdownsamplingorupsampling.
In numerical analysis to handle contributions of geometry where it is difficult to obtain discretizations, the moving least squares methods have also been used and generalized to solvePDEson curved surfaces and other geometries.[1][2][3]This includes numerical methods developed for curved surfaces for solving scalar parabolic PDEs[1][3]and vector-valued hydrodynamic PDEs.[2]
In machine learning, moving least squares methods have also been used to develop model classes and learning methods. This includes function regression methods[4]and neural network function and operator regression approaches, such as GMLS-Nets.[5]
Consider a functionf:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }and a set of sample pointsS={(xi,fi)|f(xi)=fi}{\displaystyle S=\{(x_{i},f_{i})|f(x_{i})=f_{i}\}}. Then, the moving least square approximation of degreem{\displaystyle m}at the pointx{\displaystyle x}isp~(x){\displaystyle {\tilde {p}}(x)}wherep~{\displaystyle {\tilde {p}}}minimizes the weighted least-square error
over all polynomialsp{\displaystyle p}of degreem{\displaystyle m}inRn{\displaystyle \mathbb {R} ^{n}}.θ(s){\displaystyle \theta (s)}is the weight and it tends to zero ass→∞{\displaystyle s\to \infty }.
In the exampleθ(s)=e−s2{\displaystyle \theta (s)=e^{-s^{2}}}. The smooth interpolator of "order 3" is a quadratic interpolator.
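A rough one-dimensional sketch of a moving least squares evaluation, using the weight θ(s) = e^(−s²) from the example above with an added scale parameter (an assumption, so that the weight decays over a sensible distance) and a local quadratic fit:

import numpy as np

def mls_value(x_query, x, f, degree=2, scale=1.0):
    # weights theta(|x_query - x_i| / scale), with theta(s) = exp(-s^2)
    s = (x_query - x) / scale
    w = np.exp(-s ** 2)
    # weighted least-squares fit of a local polynomial of the given degree
    V = np.vander(x - x_query, degree + 1, increasing=True)  # basis 1, (x - x_q), (x - x_q)^2, ...
    W = np.diag(w)
    coeffs = np.linalg.solve(V.T @ W @ V, V.T @ W @ f)
    return coeffs[0]    # the fitted polynomial evaluated at x_query (its constant term)

x = np.linspace(0, 2 * np.pi, 40)
f = np.sin(x) + 0.1 * np.random.default_rng(4).normal(size=x.size)  # hypothetical samples
print(mls_value(np.pi / 2, x, f))   # reconstructed value near 1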
|
https://en.wikipedia.org/wiki/Moving_least_squares
|
Instatistics, amoving average(rolling averageorrunning averageormoving mean[1]orrolling mean) is a calculation to analyze data points by creating a series ofaveragesof different selections of the full data set. Variations include:simple,cumulative, orweightedforms.
Mathematically, a moving average is a type ofconvolution. Thus insignal processingit is viewed as alow-passfinite impulse responsefilter. Because theboxcar functionoutlines its filter coefficients, it is called aboxcar filter. It is sometimes followed bydownsampling.
Given a series of numbers and a fixed subset size, the first element of the moving average is obtained by taking the average of the initial fixed subset of the number series. Then the subset is modified by "shifting forward"; that is, excluding the first number of the series and including the next value in the series.
A moving average is commonly used withtime seriesdata to smooth out short-term fluctuations and highlight longer-term trends or cycles - in this case the calculation is sometimes called atime average. The threshold between short-term and long-term depends on the application, and the parameters of the moving average will be set accordingly. It is also used ineconomicsto examine gross domestic product, employment or other macroeconomic time series. When used with non-time series data, a moving average filters higher frequency components without any specific connection to time, although typically some kind of ordering is implied. Viewed simplistically it can be regarded as smoothing the data.
In financial applications asimple moving average(SMA) is the unweightedmeanof the previousk{\displaystyle k}data-points. However, in science and engineering, the mean is normally taken from an equal number of data on either side of a central value. This ensures that variations in the mean are aligned with the variations in the data rather than being shifted in time.An example of a simple equally weighted running mean is the mean over the lastk{\displaystyle k}entries of a data-set containingn{\displaystyle n}entries. Let those data-points bep1,p2,…,pn{\displaystyle p_{1},p_{2},\dots ,p_{n}}. This could be closing prices of a stock. The mean over the lastk{\displaystyle k}data-points (days in this example) is denoted asSMAk{\displaystyle {\textit {SMA}}_{k}}and calculated as:SMAk=pn−k+1+pn−k+2+⋯+pnk=1k∑i=n−k+1npi{\displaystyle {\begin{aligned}{\textit {SMA}}_{k}&={\frac {p_{n-k+1}+p_{n-k+2}+\cdots +p_{n}}{k}}\\&={\frac {1}{k}}\sum _{i=n-k+1}^{n}p_{i}\end{aligned}}}
When calculating the next meanSMAk,next{\displaystyle {\textit {SMA}}_{k,{\text{next}}}}with the same sampling widthk{\displaystyle k}the range fromn−k+2{\displaystyle n-k+2}ton+1{\displaystyle n+1}is considered. A new valuepn+1{\displaystyle p_{n+1}}comes into the sum and the oldest valuepn−k+1{\displaystyle p_{n-k+1}}drops out. This simplifies the calculations by reusing the previous meanSMAk,prev{\displaystyle {\textit {SMA}}_{k,{\text{prev}}}}.SMAk,next=1k∑i=n−k+2n+1pi=1k(pn−k+2+pn−k+3+⋯+pn+pn+1⏟∑i=n−k+2n+1pi+pn−k+1−pn−k+1⏟=0)=1k(pn−k+1+pn−k+2+⋯+pn)⏟=SMAk,prev−pn−k+1k+pn+1k=SMAk,prev+1k(pn+1−pn−k+1){\displaystyle {\begin{aligned}{\textit {SMA}}_{k,{\text{next}}}&={\frac {1}{k}}\sum _{i=n-k+2}^{n+1}p_{i}\\&={\frac {1}{k}}{\Big (}\underbrace {p_{n-k+2}+p_{n-k+3}+\dots +p_{n}+p_{n+1}} _{\sum _{i=n-k+2}^{n+1}p_{i}}+\underbrace {p_{n-k+1}-p_{n-k+1}} _{=0}{\Big )}\\&=\underbrace {{\frac {1}{k}}{\Big (}p_{n-k+1}+p_{n-k+2}+\dots +p_{n}{\Big )}} _{={\textit {SMA}}_{k,{\text{prev}}}}-{\frac {p_{n-k+1}}{k}}+{\frac {p_{n+1}}{k}}\\&={\textit {SMA}}_{k,{\text{prev}}}+{\frac {1}{k}}{\Big (}p_{n+1}-p_{n-k+1}{\Big )}\end{aligned}}}This means that the moving average filter can be computed quite cheaply on real time data with a FIFO /circular bufferand only 3 arithmetic steps.
During the initial filling of the FIFO / circular buffer the sampling window is equal to the data-set size thusk=n{\displaystyle k=n}and the average calculation is performed as acumulative moving average.
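A short sketch of the update above, written for illustration (collections.deque stands in for the FIFO / circular buffer):

from collections import deque

def running_sma(stream, k):
    # yield the simple moving average over the last k values of an iterable
    window = deque()
    total = 0.0
    for p in stream:
        window.append(p)
        total += p                      # add the newest value
        if len(window) > k:
            total -= window.popleft()   # drop the oldest value
        yield total / len(window)       # cumulative average until the buffer is full

prices = [22, 24, 23, 26, 28, 27, 30]   # hypothetical closing prices
print(list(running_sma(prices, k=3)))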
The period selected (k{\displaystyle k}) depends on the type of movement of interest, such as short, intermediate, or long-term.
If the data used are not centered around the mean, a simple moving average lags behind the latest datum by half the sample width. An SMA can also be disproportionately influenced by old data dropping out or new data coming in. One characteristic of the SMA is that if the data has a periodic fluctuation, then applying an SMA of that period will eliminate that variation (the average always containing one complete cycle). But a perfectly regular cycle is rarely encountered.[2]
For a number of applications, it is advantageous to avoid the shifting induced by using only "past" data. Hence acentral moving averagecan be computed, using data equally spaced on either side of the point in the series where the mean is calculated.[3]This requires using an odd number of points in the sample window.
A major drawback of the SMA is that it lets through a significant amount of the signal shorter than the window length. This can lead to unexpected artifacts, such as peaks in the smoothed result appearing where there were troughs in the data. It also leads to the result being less smooth than expected since some of the higher frequencies are not properly removed.
Its frequency response is a type of low-pass filter calledsinc-in-frequency.
The continuous moving average is defined with the following integral. Theε{\displaystyle \varepsilon }environment[xo−ε,xo+ε]{\displaystyle [x_{o}-\varepsilon ,x_{o}+\varepsilon ]}aroundxo{\displaystyle x_{o}}defines the intensity of smoothing of the graph of the function.
The continuous moving average of the functionf{\displaystyle f}is defined as:
A largerε>0{\displaystyle \varepsilon >0}smoothes the source graph of the function (blue)f{\displaystyle f}more. The animations below show the moving average as animation in dependency of different values forε>0{\displaystyle \varepsilon >0}. The fraction12⋅ε{\displaystyle {\frac {1}{2\cdot \varepsilon }}}is used, because2⋅ε{\displaystyle 2\cdot \varepsilon }is the interval width for the integral.
In acumulative average(CA), the data arrive in an ordered datum stream, and the user would like to get the average of all of the data up until the current datum. For example, an investor may want the average price of all of the stock transactions for a particular stock up until the current time. As each new transaction occurs, the average price at the time of the transaction can be calculated for all of the transactions up to that point using the cumulative average, typically an equally weightedaverageof the sequence ofnvaluesx1.…,xn{\displaystyle x_{1}.\ldots ,x_{n}}up to the current time:CAn=x1+⋯+xnn.{\displaystyle {\textit {CA}}_{n}={{x_{1}+\cdots +x_{n}} \over n}\,.}
The brute-force method to calculate this would be to store all of the data and calculate the sum and divide by the number of points every time a new datum arrived. However, it is possible to simply update cumulative average as a new value,xn+1{\displaystyle x_{n+1}}becomes available, using the formulaCAn+1=xn+1+n⋅CAnn+1.{\displaystyle {\textit {CA}}_{n+1}={{x_{n+1}+n\cdot {\textit {CA}}_{n}} \over {n+1}}.}
Thus the current cumulative average for a new datum is equal to the previous cumulative average, timesn, plus the latest datum, all divided by the number of points received so far,n+1. When all of the data arrive (n=N), then the cumulative average will equal the final average. It is also possible to store a running total of the data as well as the number of points and dividing the total by the number of points to get the CA each time a new datum arrives.
The derivation of the cumulative average formula is straightforward. Usingx1+⋯+xn=n⋅CAn{\displaystyle x_{1}+\cdots +x_{n}=n\cdot {\textit {CA}}_{n}}and similarly forn+ 1, it is seen thatxn+1=(x1+⋯+xn+1)−(x1+⋯+xn){\displaystyle x_{n+1}=(x_{1}+\cdots +x_{n+1})-(x_{1}+\cdots +x_{n})}xn+1=(n+1)⋅CAn+1−n⋅CAn{\displaystyle x_{n+1}=(n+1)\cdot {\textit {CA}}_{n+1}-n\cdot {\textit {CA}}_{n}}
Solving this equation forCAn+1{\displaystyle {\textit {CA}}_{n+1}}results inCAn+1=xn+1+n⋅CAnn+1=xn+1+(n+1−1)⋅CAnn+1=(n+1)⋅CAn+xn+1−CAnn+1=CAn+xn+1−CAnn+1{\displaystyle {\begin{aligned}{\textit {CA}}_{n+1}&={x_{n+1}+n\cdot {\textit {CA}}_{n} \over {n+1}}\\[6pt]&={x_{n+1}+(n+1-1)\cdot {\textit {CA}}_{n} \over {n+1}}\\[6pt]&={(n+1)\cdot {\textit {CA}}_{n}+x_{n+1}-{\textit {CA}}_{n} \over {n+1}}\\[6pt]&={{\textit {CA}}_{n}}+{{x_{n+1}-{\textit {CA}}_{n}} \over {n+1}}\end{aligned}}}
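A minimal sketch of this incremental update, using made-up values:

def update_cumulative_average(ca_n, x_next, n):
    # CA_{n+1} = CA_n + (x_{n+1} - CA_n) / (n + 1)
    return ca_n + (x_next - ca_n) / (n + 1)

data = [10.0, 12.0, 11.0, 15.0]    # hypothetical datum stream
ca, n = 0.0, 0
for x in data:
    ca = update_cumulative_average(ca, x, n)
    n += 1
print(ca, sum(data) / len(data))   # both 12.0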
A weighted average is an average that has multiplying factors to give different weights to data at different positions in the sample window. Mathematically, the weighted moving average is the convolution of the data with a fixed weighting function. One application is removing pixelization from a digital graphical image. This is also known as anti-aliasing.[citation needed]
In the financial field, and more specifically in the analyses of financial data, aweighted moving average(WMA) has the specific meaning of weights that decrease in arithmetical progression.[4]In ann-day WMA the latest day has weightn, the second latestn−1{\displaystyle n-1}, etc., down to one.
WMAM=npM+(n−1)pM−1+⋯+2p((M−n)+2)+p((M−n)+1)n+(n−1)+⋯+2+1{\displaystyle {\text{WMA}}_{M}={np_{M}+(n-1)p_{M-1}+\cdots +2p_{((M-n)+2)}+p_{((M-n)+1)} \over n+(n-1)+\cdots +2+1}}
The denominator is atriangle numberequal ton(n+1)2.{\textstyle {\frac {n(n+1)}{2}}.}In the more general case the denominator will always be the sum of the individual weights.
When calculating the WMA across successive values, the difference between the numerators ofWMAM+1{\displaystyle {\text{WMA}}_{M+1}}andWMAM{\displaystyle {\text{WMA}}_{M}}isnpM+1−pM−⋯−pM−n+1{\displaystyle np_{M+1}-p_{M}-\dots -p_{M-n+1}}. If we denote the sumpM+⋯+pM−n+1{\displaystyle p_{M}+\dots +p_{M-n+1}}byTotalM{\displaystyle {\text{Total}}_{M}}, then
TotalM+1=TotalM+pM+1−pM−n+1NumeratorM+1=NumeratorM+npM+1−TotalMWMAM+1=NumeratorM+1n+(n−1)+⋯+2+1{\displaystyle {\begin{aligned}{\text{Total}}_{M+1}&={\text{Total}}_{M}+p_{M+1}-p_{M-n+1}\\[3pt]{\text{Numerator}}_{M+1}&={\text{Numerator}}_{M}+np_{M+1}-{\text{Total}}_{M}\\[3pt]{\text{WMA}}_{M+1}&={{\text{Numerator}}_{M+1} \over n+(n-1)+\cdots +2+1}\end{aligned}}}
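A small sketch of an n-day WMA with linearly decreasing weights (the prices are invented, and this brute-force version recomputes the whole window rather than using the incremental Total/Numerator update above):

def weighted_moving_average(prices, n):
    # weights 1 for the oldest value in the window up to n for the newest
    window = prices[-n:]
    weights = range(1, n + 1)
    numerator = sum(w * p for w, p in zip(weights, window))
    denominator = n * (n + 1) // 2      # triangular number
    return numerator / denominator

prices = [22, 24, 23, 26, 28]           # hypothetical closing prices
print(weighted_moving_average(prices, n=3))   # (1*23 + 2*26 + 3*28) / 6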
The graph at the right shows how the weights decrease, from highest weight for the most recent data, down to zero. It can be compared to the weights in the exponential moving average which follows.
Anexponential moving average (EMA), also known as anexponentially weighted moving average (EWMA),[5]is a first-orderinfinite impulse responsefilter that applies weighting factors which decreaseexponentially. The weighting for each olderdatumdecreases exponentially, never reaching zero.
This formulation is according to Hunter (1986).[6]
There is also a multivariate implementation of EWMA, known as MEWMA.[7]
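A rough sketch of the usual recursive form of the EMA (the smoothing factor alpha and the choice of seeding with the first observation are assumptions; Hunter's formulation and other conventions differ in details such as how alpha relates to a period length):

def exponential_moving_average(data, alpha):
    # first-order IIR filter: s_t = alpha * x_t + (1 - alpha) * s_{t-1}
    ema = []
    s = data[0]                 # seed with the first observation (one common choice)
    for x in data:
        s = alpha * x + (1 - alpha) * s
        ema.append(s)
    return ema

print(exponential_moving_average([22, 24, 23, 26, 28], alpha=0.5))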
Other weighting systems are used occasionally – for example, in share trading avolume weightingwill weight each time period in proportion to its trading volume.
A further weighting, used by actuaries, is Spencer's 15-Point Moving Average[8](a central moving average). Its symmetric weight coefficients are [−3, −6, −5, 3, 21, 46, 67, 74, 67, 46, 21, 3, −5, −6, −3], which factors as[1, 1, 1, 1]×[1, 1, 1, 1]×[1, 1, 1, 1, 1]×[−3, 3, 4, 3, −3]/320and leaves samples of any quadratic or cubic polynomial unchanged.[9][10]
Outside the world of finance, weighted running means have many forms and applications. Each weighting function or "kernel" has its own characteristics. In engineering and science the frequency and phase response of the filter is often of primary importance in understanding the desired and undesired distortions that a particular filter will apply to the data.
A mean does not just "smooth" the data. A mean is a form of low-pass filter. The effects of the particular filter used should be understood in order to make an appropriate choice. On this point, the French version of this article discusses the spectral effects of 3 kinds of means (cumulative, exponential, Gaussian).
From a statistical point of view, the moving average, when used to estimate the underlying trend in a time series, is susceptible to rare events such as rapid shocks or other anomalies. A more robust estimate of the trend is thesimple moving medianoverntime points:p~SM=Median(pM,pM−1,…,pM−n+1){\displaystyle {\widetilde {p}}_{\text{SM}}={\text{Median}}(p_{M},p_{M-1},\ldots ,p_{M-n+1})}where themedianis found by, for example, sorting the values inside the brackets and finding the value in the middle. For larger values ofn, the median can be efficiently computed by updating anindexable skiplist.[11]
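A simple sketch of the moving median over a window of n points (this brute-force version re-sorts each window; the indexable-skiplist approach mentioned above avoids that cost for large n):

from statistics import median

def moving_median(data, n):
    # median of each length-n window (brute force, for illustration)
    return [median(data[i - n + 1 : i + 1]) for i in range(n - 1, len(data))]

data = [3, 4, 5, 100, 6, 7, 8]      # one large shock in an otherwise smooth series
print(moving_median(data, n=3))     # [4, 5, 6, 7, 7]: the shock barely moves the medians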
Statistically, the moving average is optimal for recovering the underlying trend of the time series when the fluctuations about the trend arenormally distributed. However, the normal distribution does not place high probability on very large deviations from the trend which explains why such deviations will have a disproportionately large effect on the trend estimate. It can be shown that if the fluctuations are instead assumed to beLaplace distributed, then the moving median is statistically optimal.[12]For a given variance, the Laplace distribution places higher probability on rare events than does the normal, which explains why the moving median tolerates shocks better than the moving mean.
When the simple moving median above is central, the smoothing is identical to the median filter, which has applications in, for example, image signal processing. Because the Laplace distribution places higher probability on rare events than the normal distribution for a given variance, the moving median provides a more reliable and stable estimate of the underlying trend when the time series is affected by large deviations such as rapid shocks or anomalies.
In amoving average regression model, a variable of interest is assumed to be a weighted moving average of unobserved independent error terms; the weights in the moving average are parameters to be estimated.
The two concepts are often confused because of their similar names, but although they share some similarities, they are distinct methods used in very different contexts.
|
https://en.wikipedia.org/wiki/Moving_average
|
Instatistics,multivariate adaptive regression splines(MARS) is a form ofregression analysisintroduced byJerome H. Friedmanin 1991.[1]It is anon-parametric regressiontechnique and can be seen as an extension oflinear modelsthat automatically models nonlinearities and interactions between variables.
The term "MARS" is trademarked and licensed to Salford Systems. In order to avoid trademark infringements, many open-source implementations of MARS are called "Earth".[2][3]
This section introduces MARS using a few examples. We start with a set of data: a matrix of input variablesx, and a vector of the observed responsesy, with a response for each row inx. For example, the data could be:
Here there is only oneindependent variable, so thexmatrix is just a single column. Given these measurements, we would like to build a model which predicts the expectedyfor a givenx.
Alinear modelfor the above data is
The hat on they^{\displaystyle {\widehat {y}}}indicates thaty^{\displaystyle {\widehat {y}}}is estimated from the data. The figure on the right shows a plot of this function:
a line giving the predictedy^{\displaystyle {\widehat {y}}}versusx, with the original values ofyshown as red dots.
The data at the extremes ofxindicates that the relationship betweenyandxmay be non-linear (look at the red dots relative to the regression line at low and high values ofx). We thus turn to MARS to automatically build a model taking into account non-linearities. MARS software constructs a model from the givenxandyas follows
The figure on the right shows a plot of this function: the predictedy^{\displaystyle {\widehat {y}}}versusx, with the original values ofyonce again shown as red dots. The predicted response is now a better fit to the originalyvalues.
MARS has automatically produced a kink in the predictedyto take into account non-linearity. The kink is produced byhinge functions. The hinge functions are the expressions starting withmax{\displaystyle \max }(wheremax(a,b){\displaystyle \max(a,b)}isa{\displaystyle a}ifa>b{\displaystyle a>b}, elseb{\displaystyle b}). Hinge functions are described in more detail below.
In this simple example, we can easily see from the plot thatyhas a non-linear relationship withx(and might perhaps guess that y varies with the square ofx). However, in general there will be multipleindependent variables, and the relationship betweenyand these variables will be unclear and not easily visible by plotting. We can use MARS to discover that non-linear relationship.
An example MARS expression with multiple variables is
This expression models air pollution (the ozone level) as a function of the temperature and a few other variables. Note that the last term in the formula (on the last line) incorporates an interaction betweenwind{\displaystyle \mathrm {wind} }andvis{\displaystyle \mathrm {vis} }.
The figure on the right plots the predictedozone{\displaystyle \mathrm {ozone} }aswind{\displaystyle \mathrm {wind} }andvis{\displaystyle \mathrm {vis} }vary, with the other variables fixed at their median values. The figure shows that wind does not affect the ozone level unless visibility is low. We see that MARS can build quite flexible regression surfaces by combining hinge functions.
To obtain the above expression, the MARS model building procedure automatically selects which variables to use (some variables are important, others not), the positions of the kinks in the hinge functions, and how the hinge functions are combined.
MARS builds models of the form
The model is a weighted sum of basis functionsBi(x){\displaystyle B_{i}(x)}.
Eachci{\displaystyle c_{i}}is a constant coefficient.
For example, each line in the formula for ozone above is one basis function
multiplied by its coefficient.
Eachbasis functionBi(x){\displaystyle B_{i}(x)}takes one of the following three forms:
1) a constant 1. There is just one such term, the intercept.
In the ozone formula above, the intercept term is 5.2.
2) ahingefunction. A hinge function has the formmax(0,x−constant){\displaystyle \max(0,x-{\text{constant}})}ormax(0,constant−x){\displaystyle \max(0,{\text{constant}}-x)}. MARS automatically selects variables and values of those variables for knots of the hinge functions. Examples of such basis functions can be seen in the middle three lines of the ozone formula.
3) a product of two or more hinge functions.
These basis functions can model interaction between two or more variables.
An example is the last line of the ozone formula.
A key part of MARS models arehinge functionstaking the form
or
wherec{\displaystyle c}is a constant, called theknot.
The figure on the right shows a mirrored pair of hinge functions with a knot at 3.1.
A hinge function is zero for part of its range, so can be used to partition the data into disjoint regions, each of which can be treated independently. Thus for example a mirrored pair of hinge functions in the expression
creates thepiecewiselinear graph shown for the simple MARS model in the previous section.
One might assume that only piecewise linear functions can be formed from hinge functions, but hinge functions can be multiplied together to form non-linear functions.
Hinge functions are also calledramp,hockey stick, orrectifierfunctions. Instead of themax{\displaystyle \max }notation used in this article, hinge functions are often represented by[±(xi−c)]+{\displaystyle [\pm (x_{i}-c)]_{+}}where[⋅]+{\displaystyle [\cdot ]_{+}}means take the positive part.
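The following short sketch (illustrative Python, not taken from any particular MARS implementation) shows a mirrored pair of hinge functions with a knot at 3.1 and the effect of multiplying hinges together.

```python
import numpy as np

# Illustrative hinge functions: a mirrored pair with a knot at 3.1,
# and a product of hinges, which is no longer piecewise linear.
def hinge_pos(x, knot):
    return np.maximum(0.0, x - knot)   # max(0, x - c)

def hinge_neg(x, knot):
    return np.maximum(0.0, knot - x)   # max(0, c - x)

x = np.linspace(0, 6, 7)
print(hinge_pos(x, 3.1))                       # zero up to the knot, then rises linearly
print(hinge_neg(x, 3.1))                       # mirror image
print(hinge_pos(x, 3.1) * hinge_pos(x, 1.0))   # product of hinges: piecewise quadratic
```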
MARS builds a model in two phases: the forward and the backward pass. This two-stage approach is the same as that used by recursive partitioning trees.

MARS starts with a model which consists of just the intercept term (which is the mean of the response values).
MARS then repeatedly adds basis functions in pairs to the model. At each step it finds the pair of basis functions that gives the maximum reduction in sum-of-squares residual error (it is a greedy algorithm). The two basis functions in the pair are identical except that a different side of a mirrored hinge function is used for each function. Each new basis function consists of a term already in the model (which could perhaps be the intercept term) multiplied by a new hinge function. A hinge function is defined by a variable and a knot, so to add a new basis function, MARS must search over all combinations of the following:
1) existing terms (calledparent termsin this context)
2) all variables (to select one for the new basis function)
3) all values of each variable (for the knot of the new hinge function).
To calculate the coefficient of each term, MARS applies a linear regression over the terms.
This process of adding terms continues until the change in residual error is too small to continue or until the maximum number of terms is reached. The maximum number of terms is specified by the user before model building starts.
The search at each step is usually done in abrute-forcefashion, but a key aspect of MARS is that because of the nature of hinge functions, the search can be done quickly using a fast least-squares update technique. Brute-force search can be sped up by using aheuristicthat reduces the number of parent terms considered at each step ("Fast MARS"[4]).
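The sketch below illustrates the knot search in a single forward-pass step for the simplified case of one predictor, an intercept-only parent term, and no fast least-squares update: every candidate knot is tried, the mirrored hinge pair is added, coefficients are refit by ordinary least squares, and the knot with the lowest residual sum of squares wins. It is a toy version of the search described above, not the algorithm of any particular MARS package.

```python
import numpy as np

def forward_step(x, y):
    """One greedy step: add the mirrored hinge pair (with the best knot) to an
    intercept-only model, refitting by least squares at every candidate knot."""
    best = None
    for knot in np.unique(x):
        B = np.column_stack([np.ones_like(x),
                             np.maximum(0.0, x - knot),
                             np.maximum(0.0, knot - x)])
        coef = np.linalg.lstsq(B, y, rcond=None)[0]
        rss = np.sum((y - B @ coef) ** 2)
        if best is None or rss < best[0]:
            best = (rss, knot, coef)
    return best

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 100))
y = np.where(x < 4, 1.0, 1.0 + 2.0 * (x - 4)) + rng.normal(0, 0.2, 100)  # true kink at 4
rss, knot, coef = forward_step(x, y)
print(knot, coef)   # the selected knot should land near 4
```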
The forward pass usuallyoverfitsthe model. To build a model with better generalization ability, the backward pass prunes the model, deleting the least effective term at each step until it finds the best submodel. Model subsets are compared using the Generalized cross validation (GCV) criterion described below.
The backward pass has an advantage over the forward pass: at any step it can choose any term to delete, whereas the forward pass at each step can only see the next pair of terms.
The forward pass adds terms in pairs, but the backward pass typically discards one side of the pair and so terms are often not seen in pairs in the final model. A paired hinge can be seen in the equation fory^{\displaystyle {\widehat {y}}}in the first MARS example above; there are no complete pairs retained in the ozone example.
The backward pass compares the performance of different models using Generalized Cross-Validation (GCV), a minor variant on theAkaike information criterionthat approximates theleave-one-out cross-validationscore in the special case where errors are Gaussian, or where the squared error loss function is used. GCV was introduced by Craven andWahbaand extended by Friedman for MARS; lower values of GCV indicate better models. The formula for the GCV is
where RSS is the residual sum-of-squares measured on the training data andNis the number of observations (the number of rows in thexmatrix).
The effective number of parameters is defined as
wherepenaltyis typically 2 (giving results equivalent to theAkaike information criterion) but can be increased by the user if they so desire.
Note that
is the number of hinge-function knots, so the formula penalizes the addition of knots. Thus the GCV formula adjusts (i.e. increases) the training RSS to penalize more complex models. We penalize flexibility because models that are too flexible will model the specific realization of noise in the data instead of just the systematic structure of the data.
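A small sketch of the GCV computation is given below. The exact definitions of the effective number of parameters and the knot count were elided above, so the usual conventions (effective parameters = number of terms + penalty × number of knots, with the number of knots taken as (terms − 1)/2) are assumed here; consult Friedman's paper for the authoritative formulas.

```python
def gcv(rss, n_obs, n_terms, penalty=2.0):
    """Generalized cross-validation criterion, MARS-style (sketch).
    Assumes: effective parameters = n_terms + penalty * n_knots,
    with n_knots = (n_terms - 1) / 2."""
    n_knots = (n_terms - 1) / 2.0
    enp = n_terms + penalty * n_knots          # effective number of parameters
    return rss / (n_obs * (1.0 - enp / n_obs) ** 2)

# Example: a larger model must reduce RSS enough to justify its extra knots.
print(gcv(rss=120.0, n_obs=100, n_terms=5))
print(gcv(rss=110.0, n_obs=100, n_terms=11))
```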
One constraint has already been mentioned: the user can specify the maximum number of terms in the forward pass.

A further constraint can be placed on the forward pass by specifying a maximum allowable degree of interaction. Typically only one or two degrees of interaction are allowed, but higher degrees can be used when the data warrants it. The maximum degree of interaction in the first MARS example above is one (i.e. no interactions or an additive model); in the ozone example it is two.

Other constraints on the forward pass are possible. For example, the user can specify that interactions are allowed only for certain input variables. Such constraints could make sense because of knowledge of the process that generated the data.
No regression modeling technique is best for all situations. The guidelines below are intended to give an idea of the pros and cons of MARS, but there will be exceptions to the guidelines. It is useful to compare MARS to recursive partitioning and this is done below. (Recursive partitioning is also commonly called regression trees, decision trees, or CART; see the recursive partitioning article for details.)
Several free and commercial software packages are available for fitting MARS-type models.
|
https://en.wikipedia.org/wiki/Multivariate_adaptive_regression_splines
|
ASavitzky–Golay filteris adigital filterthat can be applied to a set ofdigital datapoints for the purpose ofsmoothingthe data, that is, to increase the precision of the data without distorting the signal tendency. This is achieved, in a process known asconvolution, by fitting successive sub-sets of adjacent data points with a low-degreepolynomialby the method oflinear least squares. When the data points are equally spaced, ananalytical solutionto the least-squares equations can be found, in the form of a single set of "convolution coefficients" that can be applied to all data sub-sets, to give estimates of the smoothed signal, (or derivatives of the smoothed signal) at the central point of each sub-set. The method, based on established mathematical procedures,[1][2]was popularized byAbraham SavitzkyandMarcel J. E. Golay, who published tables of convolution coefficients for various polynomials and sub-set sizes in 1964.[3][4]Some errors in the tables have been corrected.[5]The method has been extended for the treatment of 2- and 3-dimensional data.
Savitzky and Golay's paper is one of the most widely cited papers in the journalAnalytical Chemistry[6]and is classed by that journal as one of its "10 seminal papers" saying "it can be argued that the dawn of the computer-controlled analytical instrument can be traced to this article".[7]
The data consists of a set of points{xj{\displaystyle \{x_{j}},yj};j=1,...,n{\displaystyle y_{j}\};j=1,...,n}, wherexj{\displaystyle x_{j}}is an independent variable andyj{\displaystyle y_{j}}is an observed value. They are treated with a set ofm{\displaystyle m}convolution coefficients,Ci{\displaystyle C_{i}}, according to the expression
Selected convolution coefficients are shown in thetables, below. For example, for smoothing by a 5-point quadratic polynomial,m=5,i=−2,−1,0,1,2{\displaystyle m=5,i=-2,-1,0,1,2}and thejth{\displaystyle j^{th}}smoothed data point,Yj{\displaystyle Y_{j}}, is given by
where,C−2=−3/35,C−1=12/35{\displaystyle C_{-2}=-3/35,C_{-1}=12/35}, etc. There are numerous applications of smoothing, such as avoiding the propagation of noise through an algorithm chain, or sometimes simply to make the data appear to be less noisy than it really is.
The following are applications of numerical differentiation of data.[8]NoteWhen calculating thenth derivative, an additional scaling factor ofn!hn{\displaystyle {\frac {n!}{h^{n}}}}may be applied to all calculated data points to obtain absolute values (see expressions fordnYdxn{\displaystyle {\frac {d^{n}Y}{dx^{n}}}}, below, for details).
The "moving average filter" is a trivial example of a Savitzky–Golay filter that is commonly used with time series data to smooth out short-term fluctuations and highlight longer-term trends or cycles.
Each subset of the data set is fit with a straight horizontal line as opposed to a higher order polynomial. An unweighted moving average filter is the simplest convolution filter.
The moving average is often used for a quick technical analysis of financial data, like stock prices, returns or trading volumes. It is also used in economics to examine gross domestic product, employment or other macroeconomic time series.
It was not included in some tables of Savitzky–Golay convolution coefficients because all the coefficient values are identical, each equal to 1m{\displaystyle {\frac {1}{m}}}.
When the data points are equally spaced, ananalytical solutionto the least-squares equations can be found.[2]This solution forms the basis of theconvolutionmethod of numerical smoothing and differentiation. Suppose that the data consists of a set ofnpoints (xj,yj) (j= 1, ...,n), wherexjis an independent variable andyjis a datum value. A polynomial will be fitted bylinear least squaresto a set ofm(an odd number) adjacent data points, each separated by an intervalh. Firstly, a change of variable is made
wherex¯{\displaystyle {\bar {x}}}is the value of the central point.ztakes the values1−m2,⋯,0,⋯,m−12{\displaystyle {\tfrac {1-m}{2}},\cdots ,0,\cdots ,{\tfrac {m-1}{2}}}(e.g.m= 5 →z= −2, −1, 0, 1, 2).[note 1]The polynomial, of degreekis defined as
The coefficientsa0,a1etc. are obtained by solving thenormal equations(boldarepresents avector, boldJrepresents amatrix).
whereJ{\displaystyle \mathbf {J} }is aVandermonde matrix, that isi{\displaystyle i}-th row ofJ{\displaystyle \mathbf {J} }has values1,zi,zi2,…{\displaystyle 1,z_{i},z_{i}^{2},\dots }.
For example, for a cubic polynomial fitted to 5 points,z= −2, −1, 0, 1, 2 the normal equations are solved as follows.
Now, the normal equations can be factored into two separate sets of equations, by rearranging rows and columns, with
Expressions for the inverse of each of these matrices can be obtained usingCramer's rule
The normal equations become
and
Multiplying out and removing common factors,
The coefficients ofyin these expressions are known asconvolutioncoefficients. They are elements of the matrix
In general,
In matrix notation this example is written as
Tables of convolution coefficients, calculated in the same way for m up to 25, were published for the Savitzky–Golay smoothing filter in 1964.[3][5] The value of the central point,z= 0, is obtained from a single set of coefficients,a0for smoothing,a1for the 1st derivative, etc. The numerical derivatives are obtained by differentiatingY. This means that the derivatives are calculated for the smoothed data curve. For a cubic polynomial
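The convolution coefficients for any window size and polynomial degree can also be generated directly from the normal equations, C = (JTJ)−1JT, rather than read from the tables. The sketch below (illustrative NumPy, assuming an odd window size) reproduces the 5-point quadratic smoothing coefficients −3/35, 12/35, 17/35, 12/35, −3/35 quoted earlier.

```python
import numpy as np

# Compute Savitzky-Golay convolution coefficients from the normal equations,
# C = (J^T J)^{-1} J^T, with J the Vandermonde matrix in the variable z.
def savgol_coefficients(m, degree, deriv=0):
    z = np.arange(-(m - 1) // 2, (m - 1) // 2 + 1)   # e.g. m = 5 -> z = -2..2 (m odd)
    J = np.vander(z, degree + 1, increasing=True)    # columns 1, z, z^2, ...
    C = np.linalg.pinv(J)                            # rows give a0, a1, a2, ...
    # multiply row `deriv` by deriv!/h^deriv to get the derivative at the centre
    return C[deriv]

print(savgol_coefficients(5, 2))            # smoothing: [-3/35, 12/35, 17/35, 12/35, -3/35]
print(savgol_coefficients(5, 3, deriv=1))   # a1 coefficients of the 5-point cubic fit
```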
In general, polynomials of degree (0 and 1),[note 3](2 and 3), (4 and 5) etc. give the same coefficients for smoothing and even derivatives. Polynomials of degree (1 and 2), (3 and 4) etc. give the same coefficients for odd derivatives.
It is not necessary always to use the Savitzky–Golay tables. The summations in the matrixJTJcan be evaluated inclosed form,
so that algebraic formulae can be derived for the convolution coefficients.[13][note 4]Functions that are suitable for use with a curve that has aninflection pointare:
Simpler expressions that can be used with curves that don't have an inflection point are:
Higher derivatives can be obtained. For example, a fourth derivative can be obtained by performing two passes of a second derivative function.[14]
An alternative to fittingmdata points by a simple polynomial in the subsidiary variable,z, is to useorthogonal polynomials.
whereP0, ...,Pkis a set of mutually orthogonal polynomials of degree 0, ...,k. Full details on how to obtain expressions for the orthogonal polynomials and the relationship between the coefficientsbandaare given by Guest.[2]Expressions for the convolution coefficients are easily obtained because the normal equations matrix,JTJ, is adiagonal matrixas the product of any two orthogonal polynomials is zero by virtue of their mutual orthogonality. Therefore, each non-zero element of its inverse is simply the reciprocal of the corresponding element in the normal equation matrix. The calculation is further simplified by usingrecursionto build orthogonalGram polynomials. The whole calculation can be coded in a few lines ofPASCAL, a computer language well-adapted for calculations involving recursion.[15]
Savitzky–Golay filters are most commonly used to obtain the smoothed or derivative value at the central point,z= 0, using a single set of convolution coefficients. (m− 1)/2 points at the start and end of the series cannot be calculated using this process. Various strategies can be employed to avoid this inconvenience.
It is implicit in the above treatment that the data points are all given equal weight. Technically, theobjective function
being minimized in the least-squares process has unit weights,wi= 1. When weights are not all the same the normal equations become
If the same set of diagonal weights is used for all data subsets,W=diag(w1,w2,...,wm){\displaystyle W={\text{diag}}(w_{1},w_{2},...,w_{m})}, an analytical solution to the normal equations can be written down. For example, with a quadratic polynomial,
An explicit expression for the inverse of this matrix can be obtained usingCramer's rule. A set of convolution coefficients may then be derived as
Alternatively the coefficients,C, could be calculated in a spreadsheet, employing a built-in matrix inversion routine to obtain the inverse of the normal equations matrix. This set of coefficients, once calculated and stored, can be used with all calculations in which the same weighting scheme applies. A different set of coefficients is needed for each different weighting scheme.
It was shown that Savitzky–Golay filter can be improved by introducing weights that decrease at the ends of the fitting interval.[16]
Two-dimensional smoothing and differentiation can also be applied to tables of data values, such as intensity values in a photographic image which is composed of a rectangular grid of pixels.[17][18]Such a grid is referred to as a kernel, and the data points that constitute the kernel are referred to as nodes. The trick is to transform the rectangular kernel into a single row by a simple ordering of the indices of the nodes. Whereas the one-dimensional filter coefficients are found by fitting a polynomial in the subsidiary variablezto a set ofmdata points, the two-dimensional coefficients are found by fitting a polynomial in subsidiary variablesvandwto a set of the values at them×nkernel nodes. The following example, for a bivariate polynomial of total degree 3,m= 7, andn= 5, illustrates the process, which parallels the process for the one-dimensional case, above.[19]
The rectangular kernel of 35 data values,d1−d35
becomes a vector when the rows are placed one after another.
The Jacobian has 10 columns, one for each of the parametersa00−a03, and 35 rows, one for each pair ofvandwvalues. Each row has the form
The convolution coefficients are calculated as
The first row ofCcontains 35 convolution coefficients, which can be multiplied with the 35 data values, respectively, to obtain the polynomial coefficienta00{\displaystyle a_{00}}, which is the smoothed value at the central node of the kernel (i.e. at the 18th node of the above table). Similarly, other rows ofCcan be multiplied with the 35 values to obtain other polynomial coefficients, which, in turn, can be used to obtain smoothed values and different smoothed partial derivatives at different nodes.
Nikitas and Pappa-Louisi showed that depending on the format of the used polynomial, the quality of smoothing may vary significantly.[20]They recommend using the polynomial of the form
because such polynomials can achieve good smoothing both in the central and in the near-boundary regions of a kernel, and therefore they can be confidently used in smoothing both at the internal and at the near-boundary data points of a sampled domain. In order to avoid ill-conditioning when solving the least-squares problem,p<mandq<n. For software that calculates the two-dimensional coefficients and for a database of suchC's, see the section on multi-dimensional convolution coefficients, below.
The idea of two-dimensional convolution coefficients can be extended to the higher spatial dimensions as well, in a straightforward manner,[17][21]by arranging multidimensional distribution of the kernel nodes in a single row. Following the aforementioned finding by Nikitas and Pappa-Louisi[20]in two-dimensional cases, usage of the following form of the polynomial is recommended in multidimensional cases:
whereDis the dimension of the space,a{\displaystyle a}'s are the polynomial coefficients, andu's are the coordinates in the different spatial directions. Algebraic expressions for partial derivatives of any order, be it mixed or otherwise, can be easily derived from the above expression.[21]Note thatCdepends on the manner in which the kernel nodes are arranged in a row and on the manner in which the different terms of the expanded form of the above polynomial is arranged, when preparing the Jacobian.
Accurate computation ofCin multidimensional cases becomes challenging, as the precision of the standard floating-point numbers available in computer programming languages is no longer sufficient. The insufficient precision causes the floating-point truncation errors to become comparable to the magnitudes of someCelements, which, in turn, severely degrades its accuracy and renders it useless. Chandra Shekhar has released two open-source software tools,Advanced Convolution Coefficient Calculator (ACCC)andPrecise Convolution Coefficient Calculator (PCCC), which handle these accuracy issues adequately. ACCC performs the computation iteratively using floating-point numbers.[22]The precision of the floating-point numbers is gradually increased in each iteration, by usingGNU MPFR. Once theC's obtained in two consecutive iterations agree to a pre-specified number of significant digits, convergence is assumed to have been reached. If that number is sufficiently large, the computation yields a highly accurateC. PCCC employs rational-number calculations, using theGNU Multiple Precision Arithmetic Library, and yields a fully accurateC, in therational numberformat.[23]In the end, these rational numbers are converted into floating-point numbers, to a pre-specified number of significant digits.
A database ofC'sthat are calculated by using ACCC, for symmetric kernels and both symmetric and asymmetric polynomials, on unity-spaced kernel nodes, in the 1, 2, 3, and 4 dimensional spaces, is made available.[24]Chandra Shekhar has also laid out a mathematical framework that describes usage ofCcalculated on unity-spaced kernel nodes to perform filtering and partial differentiations (of various orders) on non-uniformly spaced kernel nodes,[21]allowing usage ofCprovided in the aforementioned database. Although this method yields approximate results only, they are acceptable in most engineering applications, provided that non-uniformity of the kernel nodes is weak.
It is inevitable that the signal will be distorted in the convolution process. From property 3 above, when data which has a peak is smoothed the peak height will be reduced and thehalf-widthwill be increased. Both the extent of the distortion and the S/N (signal-to-noise ratio) improvement depend on the degree of the fitting polynomial and on the width of the smoothing function.
For example, if the noise in all data points is uncorrelated and has a constantstandard deviation,σ, the standard deviation of the noise will be decreased by convolution with anm-point smoothing function to[26][note 5]
These functions are shown in the plot at the right. For example, with a 9-point linear function (moving average) two thirds of the noise is removed and with a 9-point quadratic/cubic smoothing function only about half the noise is removed. Most of the noise remaining is low-frequency noise(seeFrequency characteristics of convolution filters, below).
Although the moving average function gives better noise reduction it is unsuitable for smoothing data which has curvature overmpoints. A quadratic filter function is unsuitable for getting a derivative of a data curve with aninflection pointbecause a quadratic polynomial does not have one. The optimal choice of polynomial order and number of convolution coefficients will be a compromise between noise reduction and distortion.[28]
One way to mitigate distortion and improve noise removal is to use a filter of smaller width and perform more than one convolution with it. For two passes of the same filter this is equivalent to one pass of a filter obtained by convolution of the original filter with itself.[29]For example, 2 passes of the filter with coefficients (1/3, 1/3, 1/3) is equivalent to 1 pass of the filter with coefficients
(1/9, 2/9, 3/9, 2/9, 1/9).
The disadvantage of multipassing is that the equivalent filter width forn{\displaystyle n}passes of anm{\displaystyle m}–point function isn(m−1)+1{\displaystyle n(m-1)+1}so multipassing is subject to greater end-effects. Nevertheless, multipassing has been used to great advantage. For instance, some 40–80 passes on data with a signal-to-noise ratio of only 5 gave useful results.[30]The noise reduction formulae given above do not apply becausecorrelationbetween calculated data points increases with each pass.
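The equivalence of two passes of a narrow filter and one pass of its self-convolution follows from the associativity of convolution, as the illustrative sketch below confirms for the 3-point moving average.

```python
import numpy as np

# Two passes of the 3-point moving average equal one pass of its self-convolution.
f = np.array([1, 1, 1]) / 3
print(np.convolve(f, f))            # [1/9, 2/9, 3/9, 2/9, 1/9]

rng = np.random.default_rng(1)
y = rng.normal(size=50)
two_passes = np.convolve(np.convolve(y, f, mode="full"), f, mode="full")
one_pass   = np.convolve(y, np.convolve(f, f), mode="full")
print(np.allclose(two_passes, one_pass))   # True (convolution is associative)
```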
Convolution maps to multiplication in theFourierco-domain. Thediscrete Fourier transformof a convolution filter is areal-valued functionwhich can be represented as
θ runs from 0 to 180degrees, after which the function merely repeats itself. The plot for a 9-point quadratic/cubic smoothing function is typical. At very low angle, the plot is almost flat, meaning that low-frequency components of the data will be virtually unchanged by the smoothing operation. As the angle increases the value decreases so that higher frequency components are more and more attenuated. This shows that the convolution filter can be described as alow-pass filter: the noise that is removed is primarily high-frequency noise and low-frequency noise passes through the filter.[31]Some high-frequency noise components are attenuated more than others, as shown by undulations in the Fourier transform at large angles. This can give rise to small oscillations in the smoothed data[32]and phase reversal, i.e., high-frequency oscillations in the data get inverted by Savitzky–Golay filtering.[33]
Convolution affects the correlation between errors in the data. The effect of convolution can be expressed as a linear transformation.
By the law oferror propagation, thevariance-covariance matrixof the data,Awill be transformed intoBaccording to
To see how this applies in practice, consider the effect of a 3-point moving average on the first three calculated points,Y2−Y4, assuming that the data points have equalvarianceand that there is no correlation between them.Awill be anidentity matrixmultiplied by a constant,σ2, the variance at each point.
In this case thecorrelation coefficients,
between calculated pointsiandjwill be
In general, the calculated values are correlated even when the observed values are not correlated. The correlation extends overm− 1calculated points at a time.[34]
To illustrate the effect of multipassing on the noise and correlation of a set of data, consider the effects of a second pass of a 3-point moving average filter. For the second pass[note 6]
After two passes, the standard deviation of the central point has decreased to1981σ=0.48σ{\displaystyle {\sqrt {\tfrac {19}{81}}}\sigma =0.48\sigma }, compared to 0.58σfor one pass. The noise reduction is a little less than would be obtained with one pass of a 5-point moving average which, under the same conditions, would result in the smoothed points having the smaller standard deviation of 0.45σ.
Correlation now extends over a span of 4 sequential points with correlation coefficients
The advantage obtained by performing two passes with the narrower smoothing function is that it introduces less distortion into the calculated data.
Compared with other smoothing filters, e.g. convolution with aGaussianor multi-passmoving-averagefiltering, Savitzky–Golay filters have an initially flatter response and sharper cutoff in the frequency domain, especially for high orders of the fit polynomial (seefrequency characteristics). For data with limitedsignal bandwidth, this means that Savitzky–Golay filtering can provide bettersignal-to-noise ratiothan many other filters; e.g., peak heights of spectra are better preserved than for other filters with similar noise suppression. Disadvantages of the Savitzky–Golay filters are comparably poor suppression of some high frequencies (poorstopbandsuppression) and artifacts when using polynomial fits for thefirst and last points.[16]
Alternative smoothing methods that share the advantages of Savitzky–Golay filters and mitigate at least some of their disadvantages are Savitzky–Golay filters with properly chosen alternativefitting weights,Whittaker–Henderson smoothingandHodrick–Prescott filter(equivalent methods closely related to smoothingsplines), and convolution with awindowedsinc function.[16]
Consider a set of data points(xj,yj)1≤j≤n{\displaystyle (x_{j},y_{j})_{1\leq j\leq n}}. The Savitzky–Golay tables refer to the case that the stepxj−xj−1{\displaystyle x_{j}-x_{j-1}}is constant,h. Examples of the use of the so-called convolution coefficients, with a cubic polynomial and a window size,m, of 5 points are as follows.
Selected values of the convolution coefficients for polynomials of degree 1, 2, 3, 4 and 5 are given in the following tables.[note 7] The values were calculated using the PASCAL code provided in Gorry.[15]
|
https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter
|
Afinite differenceis a mathematical expression of the formf(x+b) −f(x+a). Finite differences (or the associateddifference quotients) are often used as approximations of derivatives, such as innumerical differentiation.
Thedifference operator, commonly denotedΔ{\displaystyle \Delta }, is theoperatorthat maps a functionfto the functionΔ[f]{\displaystyle \Delta [f]}defined byΔ[f](x)=f(x+1)−f(x).{\displaystyle \Delta [f](x)=f(x+1)-f(x).}Adifference equationis afunctional equationthat involves the finite difference operator in the same way as adifferential equationinvolvesderivatives. There are many similarities between difference equations and differential equations. Certainrecurrence relationscan be written as difference equations by replacing iteration notation with finite differences.
Innumerical analysis, finite differences are widely used forapproximating derivatives, and the term "finite difference" is often used as an abbreviation of "finite difference approximation of derivatives".[1][2][3]
Finite differences were introduced byBrook Taylorin 1715 and have also been studied as abstract self-standing mathematical objects in works byGeorge Boole(1860),L. M. Milne-Thomson(1933), andKároly Jordan[de](1939). Finite differences trace their origins back to one ofJost Bürgi's algorithms (c.1592) and work by others includingIsaac Newton. The formal calculus of finite differences can be viewed as an alternative to thecalculusofinfinitesimals.[4]
Three basic types are commonly considered:forward,backward, andcentralfinite differences.[1][2][3]
Aforward difference, denotedΔh[f],{\displaystyle \Delta _{h}[f],}of afunctionfis a function defined asΔh[f](x)=f(x+h)−f(x).{\displaystyle \Delta _{h}[f](x)=f(x+h)-f(x).}
Depending on the application, the spacinghmay be variable or constant. When omitted,his taken to be 1; that is,Δ[f](x)=Δ1[f](x)=f(x+1)−f(x).{\displaystyle \Delta [f](x)=\Delta _{1}[f](x)=f(x+1)-f(x).}
Abackward differenceuses the function values atxandx−h, instead of the values atx+handx:∇h[f](x)=f(x)−f(x−h)=Δh[f](x−h).{\displaystyle \nabla _{h}[f](x)=f(x)-f(x-h)=\Delta _{h}[f](x-h).}
Finally, thecentral differenceis given byδh[f](x)=f(x+h2)−f(x−h2)=Δh/2[f](x)+∇h/2[f](x).{\displaystyle \delta _{h}[f](x)=f(x+{\tfrac {h}{2}})-f(x-{\tfrac {h}{2}})=\Delta _{h/2}[f](x)+\nabla _{h/2}[f](x).}
The approximation ofderivativesby finite differences plays a central role infinite difference methodsfor thenumericalsolution ofdifferential equations, especiallyboundary value problems.
Thederivativeof a functionfat a pointxis defined by thelimitf′(x)=limh→0f(x+h)−f(x)h.{\displaystyle f'(x)=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}.}
Ifhhas a fixed (non-zero) value instead of approaching zero, then the right-hand side of the above equation would be writtenf(x+h)−f(x)h=Δh[f](x)h.{\displaystyle {\frac {f(x+h)-f(x)}{h}}={\frac {\Delta _{h}[f](x)}{h}}.}
Hence, the forward difference divided byhapproximates the derivative whenhis small. The error in this approximation can be derived fromTaylor's theorem. Assuming thatfis twice differentiable, we haveΔh[f](x)h−f′(x)=o(h)→0ash→0.{\displaystyle {\frac {\Delta _{h}[f](x)}{h}}-f'(x)=o(h)\to 0\quad {\text{as }}h\to 0.}
The same formula holds for the backward difference:∇h[f](x)h−f′(x)=o(h)→0ash→0.{\displaystyle {\frac {\nabla _{h}[f](x)}{h}}-f'(x)=o(h)\to 0\quad {\text{as }}h\to 0.}
However, the central (also called centered) difference yields a more accurate approximation. Iffis three times differentiable,δh[f](x)h−f′(x)=o(h2).{\displaystyle {\frac {\delta _{h}[f](x)}{h}}-f'(x)=o\left(h^{2}\right).}
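The different orders of accuracy are easy to observe numerically; the sketch below compares the three difference quotients for f = sin at x = 1 (an arbitrary test function and point).

```python
import numpy as np

# Forward, backward and central difference quotients for f = sin at x = 1.
f, x = np.sin, 1.0
exact = np.cos(x)
for h in (0.1, 0.01, 0.001):
    forward  = (f(x + h) - f(x)) / h
    backward = (f(x) - f(x - h)) / h
    central  = (f(x + h / 2) - f(x - h / 2)) / h
    print(h, abs(forward - exact), abs(backward - exact), abs(central - exact))
# The forward/backward errors shrink roughly like h, the central error like h^2.
```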
The main problem[citation needed]with the central difference method, however, is that oscillating functions can yield zero derivative. Iff(nh) = 1fornodd, andf(nh) = 2forneven, thenf′(nh) = 0if it is calculated with thecentral difference scheme. This is particularly troublesome if the domain offis discrete. See alsoSymmetric derivative.
Authors for whom finite differences mean finite difference approximations define the forward/backward/central differences as the quotients given in this section (instead of employing the definitions given in the previous section).[1][2][3]
In an analogous way, one can obtain finite difference approximations to higher order derivatives and differential operators. For example, by using the above central difference formula forf′(x+h/2)andf′(x−h/2)and applying a central difference formula for the derivative off′atx, we obtain the central difference approximation of the second derivative off:
Similarly we can apply other differencing formulas in a recursive manner.
More generally, then-th order forward, backward, and centraldifferences are given by, respectively,
These equations use binomial coefficients after the summation sign, shown as(ni){\displaystyle {\tbinom {n}{i}}}. Each row ofPascal's triangleprovides the coefficient for each value ofi.
Note that the central difference will, for oddn, havehmultiplied by non-integers. This is often a problem because it amounts to changing the interval of discretization. The problem may be remedied substituting the average ofδn[f](x−h2){\displaystyle \ \delta ^{n}[f](\ x-{\tfrac {\ h\ }{2}}\ )\ }andδn[f](x+h2).{\displaystyle \ \delta ^{n}[f](\ x+{\tfrac {\ h\ }{2}}\ )~.}
Forward differences applied to asequenceare sometimes called thebinomial transformof the sequence, and have a number of interesting combinatorial properties. Forward differences may be evaluated using theNörlund–Rice integral. The integral representation for these types of series is interesting, because the integral can often be evaluated usingasymptotic expansionorsaddle-pointtechniques; by contrast, the forward difference series can be extremely hard to evaluate numerically, because the binomial coefficients grow rapidly for largen.
The relationship of these higher-order differences with the respective derivatives is straightforward,dnfdxn(x)=Δhn[f](x)hn+o(h)=∇hn[f](x)hn+o(h)=δhn[f](x)hn+o(h2).{\displaystyle {\frac {d^{n}f}{dx^{n}}}(x)={\frac {\Delta _{h}^{n}[f](x)}{h^{n}}}+o(h)={\frac {\nabla _{h}^{n}[f](x)}{h^{n}}}+o(h)={\frac {\delta _{h}^{n}[f](x)}{h^{n}}}+o\left(h^{2}\right).}
Higher-order differences can also be used to construct better approximations. As mentioned above, the first-order difference approximates the first-order derivative up to a term of orderh. However, the combinationΔh[f](x)−12Δh2[f](x)h=−f(x+2h)−4f(x+h)+3f(x)2h{\displaystyle {\frac {\Delta _{h}[f](x)-{\frac {1}{2}}\Delta _{h}^{2}[f](x)}{h}}=-{\frac {f(x+2h)-4f(x+h)+3f(x)}{2h}}}approximatesf′(x)up to a term of orderh2. This can be proven by expanding the above expression inTaylor series, or by using the calculus of finite differences, explained below.
If necessary, the finite difference can be centered about any point by mixing forward, backward, and central differences.
For a givenpolynomialof degreen≥ 1, expressed in the functionP(x), with real numbersa≠ 0andbandlower order terms(if any) marked asl.o.t.:P(x)=axn+bxn−1+l.o.t.{\displaystyle P(x)=ax^{n}+bx^{n-1}+l.o.t.}
Afternpairwise differences, the following result can be achieved, whereh≠ 0is areal numbermarking the arithmetic difference:[5]Δhn[P](x)=ahnn!{\displaystyle \Delta _{h}^{n}[P](x)=ah^{n}n!}
Only the coefficient of the highest-order term remains. As this result is constant with respect tox, any further pairwise differences will have the value0.
LetQ(x)be a polynomial of degree1:Δh[Q](x)=Q(x+h)−Q(x)=[a(x+h)+b]−[ax+b]=ah=ah11!{\displaystyle \Delta _{h}[Q](x)=Q(x+h)-Q(x)=[a(x+h)+b]-[ax+b]=ah=ah^{1}1!}
This proves it for the base case.
LetR(x)be a polynomial of degreem− 1wherem≥ 2and the coefficient of the highest-order term bea≠ 0. Assuming the following holds true for all polynomials of degreem− 1:Δhm−1[R](x)=ahm−1(m−1)!{\displaystyle \Delta _{h}^{m-1}[R](x)=ah^{m-1}(m-1)!}
LetS(x)be a polynomial of degreem. With one pairwise difference:Δh[S](x)=[a(x+h)m+b(x+h)m−1+l.o.t.]−[axm+bxm−1+l.o.t.]=ahmxm−1+l.o.t.=T(x){\displaystyle \Delta _{h}[S](x)=[a(x+h)^{m}+b(x+h)^{m-1}+{\text{l.o.t.}}]-[ax^{m}+bx^{m-1}+{\text{l.o.t.}}]=ahmx^{m-1}+{\text{l.o.t.}}=T(x)}
Asahm≠ 0, this results in a polynomialT(x)of degreem− 1, withahmas the coefficient of the highest-order term. Given the assumption above andm− 1pairwise differences (resulting in a total ofmpairwise differences forS(x)), it can be found that:Δhm−1[T](x)=ahm⋅hm−1(m−1)!=ahmm!{\displaystyle \Delta _{h}^{m-1}[T](x)=ahm\cdot h^{m-1}(m-1)!=ah^{m}m!}
This completes the proof.
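The result can be checked numerically; the sketch below takes three forward differences of a sampled cubic with a = 4 and h = 3 (the same values as in the worked example that follows) and recovers a·h³·3! = 648.

```python
import numpy as np

# Check that n pairwise differences of a degree-n polynomial give a * h^n * n!.
def delta(values):
    return values[1:] - values[:-1]           # forward difference of a sampled sequence

a, h = 4.0, 3.0
x = np.arange(0, 10 * h, h)
P = a * x**3 - 17 * x**2 + 36 * x - 19        # degree 3, leading coefficient a
d = P.copy()
for _ in range(3):
    d = delta(d)
print(d)                                       # every entry equals a * h^3 * 3! = 648
print(a * h**3 * 6)
```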
This identity can be used to find the lowest-degree polynomial that intercepts a number of points(x,y)where the difference on thex-axis from one point to the next is a constanth≠ 0. For example, given the following points:
We can use a differences table, where for all cells to the right of the firsty, the following relation to the cells in the column immediately to the left exists for a cell(a+ 1,b+ 1), with the top-leftmost cell being at coordinate(0, 0):(a+1,b+1)=(a,b+1)−(a,b){\displaystyle (a+1,b+1)=(a,b+1)-(a,b)}
To find the first term, the following table can be used:
This arrives at a constant648. The arithmetic difference ish= 3, as established above. Given the number of pairwise differences needed to reach the constant, it can be surmised this is a polynomial of degree3. Thus, using the identity above:648=a⋅33⋅3!=a⋅27⋅6=a⋅162{\displaystyle 648=a\cdot 3^{3}\cdot 3!=a\cdot 27\cdot 6=a\cdot 162}
Solving fora, it can be found to have the value4. Thus, the first term of the polynomial is4x3.
Then, subtracting out the first term, which lowers the polynomial's degree, and finding the finite difference again:
Here, the constant is achieved after only two pairwise differences, thus the following result:−306=a⋅32⋅2!=a⋅18{\displaystyle -306=a\cdot 3^{2}\cdot 2!=a\cdot 18}
Solving fora, which is−17, the polynomial's second term is−17x2.
Moving on to the next term, by subtracting out the second term:
Thus the constant is achieved after only one pairwise difference:108=a⋅31⋅1!=a⋅3{\displaystyle 108=a\cdot 3^{1}\cdot 1!=a\cdot 3}
It can be found thata= 36and thus the third term of the polynomial is36x. Subtracting out the third term:
Without any pairwise differences, it is found that the 4th and final term of the polynomial is the constant−19. Thus, the lowest-degree polynomial intercepting all the points in the first table is found:4x3−17x2+36x−19{\displaystyle 4x^{3}-17x^{2}+36x-19}
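The whole procedure of the worked example above can be automated; the sketch below repeatedly differences the sampled values until they are constant, reads off one coefficient at a time, and recovers 4x³ − 17x² + 36x − 19. The function name and tolerance are illustrative.

```python
import numpy as np
from math import factorial

# Recover the lowest-degree polynomial from equally spaced samples by repeatedly
# taking differences until they are constant, as in the worked example above.
def recover_polynomial(x, y, tol=1e-9):
    h = x[1] - x[0]
    coeffs = {}                                   # degree -> coefficient
    x, y = np.asarray(x, float), np.asarray(y, float)
    while True:
        d, k = y.copy(), 0
        while not np.allclose(d, d[0], atol=tol): # difference until constant
            d, k = d[1:] - d[:-1], k + 1
        a = d[0] / (h**k * factorial(k))          # leading coefficient of the degree-k term
        coeffs[k] = a
        if k == 0:
            return coeffs
        y = y - a * x**k                          # subtract that term and repeat

x = np.array([1.0, 4.0, 7.0, 10.0, 13.0])
y = 4 * x**3 - 17 * x**2 + 36 * x - 19
print(recover_polynomial(x, y))   # {3: 4.0, 2: -17.0, 1: 36.0, 0: -19.0}
```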
Usinglinear algebraone can construct finite difference approximations which utilize an arbitrary number of points to the left and a (possibly different) number of points to the right of the evaluation point, for any order derivative. This involves solving a linear system such that theTaylor expansionof the sum of those points around the evaluation point best approximates the Taylor expansion of the desired derivative. Such formulas can be represented graphically on a hexagonal or diamond-shaped grid.[6]This is useful for differentiating a function on a grid, where, as one approaches the edge of the grid, one must sample fewer and fewer points on one side.[7]Finite difference approximations for non-standard (and even non-integer) stencils given an arbitrary stencil and a desired derivative order may be constructed.[8]
An important application of finite differences is innumerical analysis, especially innumerical differential equations, which aim at the numerical solution ofordinaryandpartial differential equations. The idea is to replace the derivatives appearing in the differential equation by finite differences that approximate them. The resulting methods are calledfinite difference methods.
Common applications of the finite difference method are in computational science and engineering disciplines, such asthermal engineering,fluid mechanics, etc.
TheNewton seriesconsists of the terms of theNewton forward difference equation, named afterIsaac Newton; in essence, it is theGregory–Newton interpolation formula[9](named afterIsaac NewtonandJames Gregory), first published in hisPrincipia Mathematicain 1687,[10][11]namely the discrete analog of the continuous Taylor expansion,
f(x)=∑k=0∞Δk[f](a)k!(x−a)k=∑k=0∞(x−ak)Δk[f](a),{\displaystyle f(x)=\sum _{k=0}^{\infty }{\frac {\Delta ^{k}[f](a)}{k!}}\,(x-a)_{k}=\sum _{k=0}^{\infty }{\binom {x-a}{k}}\,\Delta ^{k}[f](a),}
which holds for anypolynomialfunctionfand for many (but not all)analytic functions. (It does not hold whenfisexponential typeπ{\displaystyle \pi }. This is easily seen, as the sine function vanishes at integer multiples ofπ{\displaystyle \pi }; the corresponding Newton series is identically zero, as all finite differences are zero in this case. Yet clearly, the sine function is not zero.) Here, the expression(xk)=(x)kk!{\displaystyle {\binom {x}{k}}={\frac {(x)_{k}}{k!}}}is thebinomial coefficient, and(x)k=x(x−1)(x−2)⋯(x−k+1){\displaystyle (x)_{k}=x(x-1)(x-2)\cdots (x-k+1)}is the "falling factorial" or "lower factorial", while theempty product(x)0is defined to be 1. In this particular case, there is an assumption of unit steps for the changes in the values ofx,h= 1of the generalization below.
Note the formal correspondence of this result toTaylor's theorem. Historically, this, as well as theChu–Vandermonde identity,(x+y)n=∑k=0n(nk)(x)n−k(y)k,{\displaystyle (x+y)_{n}=\sum _{k=0}^{n}{\binom {n}{k}}(x)_{n-k}\,(y)_{k},}(following from it, and corresponding to thebinomial theorem), are included in the observations that matured to the system ofumbral calculus.
Newton series expansions can be superior to Taylor series expansions when applied to discrete quantities like quantum spins (seeHolstein–Primakoff transformation),bosonic operator functionsor discrete counting statistics.[12]
To illustrate how one may use Newton's formula in actual practice, consider the first few terms of doubling theFibonacci sequencef= 2, 2, 4, ...One can find apolynomialthat reproduces these values, by first computing a difference table, and then substituting the differences that correspond tox0(underlined) into the formula as follows,xf=Δ0Δ1Δ212_0_222_234f(x)=Δ0⋅1+Δ1⋅(x−x0)11!+Δ2⋅(x−x0)22!(x0=1)=2⋅1+0⋅x−11+2⋅(x−1)(x−2)2=2+(x−1)(x−2){\displaystyle {\begin{matrix}{\begin{array}{|c||c|c|c|}\hline x&f=\Delta ^{0}&\Delta ^{1}&\Delta ^{2}\\\hline 1&{\underline {2}}&&\\&&{\underline {0}}&\\2&2&&{\underline {2}}\\&&2&\\3&4&&\\\hline \end{array}}&\quad {\begin{aligned}f(x)&=\Delta ^{0}\cdot 1+\Delta ^{1}\cdot {\dfrac {(x-x_{0})_{1}}{1!}}+\Delta ^{2}\cdot {\dfrac {(x-x_{0})_{2}}{2!}}\quad (x_{0}=1)\\\\&=2\cdot 1+0\cdot {\dfrac {x-1}{1}}+2\cdot {\dfrac {(x-1)(x-2)}{2}}\\\\&=2+(x-1)(x-2)\\\end{aligned}}\end{matrix}}}
For the case of nonuniform steps in the values ofx, Newton computes thedivided differences,Δj,0=yj,Δj,k=Δj+1,k−1−Δj,k−1xj+k−xj∋{k>0,j≤max(j)−k},Δ0k=Δ0,k{\displaystyle \Delta _{j,0}=y_{j},\qquad \Delta _{j,k}={\frac {\Delta _{j+1,k-1}-\Delta _{j,k-1}}{x_{j+k}-x_{j}}}\quad \ni \quad \left\{k>0,\;j\leq \max \left(j\right)-k\right\},\qquad \Delta 0_{k}=\Delta _{0,k}}the series of products,P0=1,Pk+1=Pk⋅(ξ−xk),{\displaystyle {P_{0}}=1,\quad \quad P_{k+1}=P_{k}\cdot \left(\xi -x_{k}\right),}and the resulting polynomial is thescalar product,[13]f(ξ)=Δ0⋅P(ξ).{\displaystyle f(\xi )=\Delta 0\cdot P\left(\xi \right).}
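A direct implementation of the Gregory–Newton formula with unit steps reproduces the small table above; the sketch below builds the forward differences of f(1) = 2, f(2) = 2, f(3) = 4 and evaluates the series, giving 2 + (x − 1)(x − 2). The generalized binomial coefficient is accumulated as a falling factorial so that non-integer x also works.

```python
# Gregory-Newton forward-difference interpolation with unit steps (h = 1).
def newton_forward(values, x0, x):
    """Evaluate the Newton series sum_k binom(x - x0, k) * Delta^k f(x0)."""
    diffs, d = [values[0]], list(values)
    while len(d) > 1:
        d = [b - a for a, b in zip(d, d[1:])]
        diffs.append(d[0])
    # binom(x - x0, k) built up as a falling factorial divided by k!
    result, falling = 0.0, 1.0
    for k, dk in enumerate(diffs):
        result += dk * falling
        falling *= (x - x0 - k) / (k + 1)
    return result

for x in (1, 2, 3, 4, 2.5):
    print(x, newton_forward([2, 2, 4], x0=1, x=x))   # matches 2 + (x - 1)(x - 2)
```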
In analysis withp-adic numbers,Mahler's theoremstates that the assumption thatfis a polynomial function can be weakened all the way to the assumption thatfis merely continuous.
Carlson's theoremprovides necessary and sufficient conditions for a Newton series to be unique, if it exists. However, a Newton series does not, in general, exist.
The Newton series, together with theStirling seriesand theSelberg series, is a special case of the generaldifference series, all of which are defined in terms of suitably scaled forward differences.
In a compressed and slightly more general form and equidistant nodes the formula readsf(x)=∑k=0(x−ahk)∑j=0k(−1)k−j(kj)f(a+jh).{\displaystyle f(x)=\sum _{k=0}{\binom {\frac {x-a}{h}}{k}}\sum _{j=0}^{k}(-1)^{k-j}{\binom {k}{j}}f(a+jh).}
The forward difference can be considered as anoperator, called thedifference operator, which maps the functionftoΔh[f].[14][15]This operator amounts toΔh=Th−I,{\displaystyle \Delta _{h}=\operatorname {T} _{h}-\operatorname {I} \ ,}whereThis theshift operatorwith steph, defined byTh[f](x) =f(x+h),andIis theidentity operator.
The finite difference of higher orders can be defined in recursive manner asΔnh≡ Δh(Δn− 1h).Another equivalent definition isΔnh≡ [Th− I]n.
The difference operatorΔhis alinear operator, as such it satisfiesΔh[α f+β g](x) =αΔh[f](x) +βΔh[g](x).
It also satisfies a specialLeibniz rule:
Similar Leibniz rules hold for the backward and central differences.
Formally applying theTaylor serieswith respect toh, yields the operator equationΔh=hD+12!h2D2+13!h3D3+⋯=ehD−I,{\displaystyle \operatorname {\Delta } _{h}=h\operatorname {D} +{\frac {1}{2!}}h^{2}\operatorname {D} ^{2}+{\frac {1}{3!}}h^{3}\operatorname {D} ^{3}+\cdots =e^{h\operatorname {D} }-\operatorname {I} \ ,}whereDdenotes the conventional, continuous derivative operator, mappingfto its derivativef′.The expansion is valid when both sides act onanalytic functions, for sufficiently smallh; in the special case that the series of derivatives terminates (when the function operated on is a finitepolynomial) the expression is exact, forallfinite stepsizes,h.ThusTh=ehD,and formally inverting the exponential yieldshD=ln(1+Δh)=Δh−12Δh2+13Δh3−⋯.{\displaystyle h\operatorname {D} =\ln(1+\Delta _{h})=\Delta _{h}-{\tfrac {1}{2}}\,\Delta _{h}^{2}+{\tfrac {1}{3}}\,\Delta _{h}^{3}-\cdots ~.}This formula holds in the sense that both operators give the same result when applied to a polynomial.
Even for analytic functions, the series on the right is not guaranteed to converge; it may be anasymptotic series. However, it can be used to obtain more accurate approximations for the derivative. For instance, retaining the first two terms of the series yields the second-order approximation tof′(x)mentioned at the end of the section§ Higher-order differences.
The analogous formulas for the backward and central difference operators arehD=−ln(1−∇h)andhD=2arsinh(12δh).{\displaystyle h\operatorname {D} =-\ln(1-\nabla _{h})\quad {\text{ and }}\quad h\operatorname {D} =2\operatorname {arsinh} \left({\tfrac {1}{2}}\,\delta _{h}\right)~.}
The calculus of finite differences is related to theumbral calculusof combinatorics. This remarkably systematic correspondence is due to the identity of thecommutatorsof the umbral quantities to their continuum analogs (h→ 0limits),
[Δhh,xTh−1]=[D,x]=I.{\displaystyle \left[{\frac {\Delta _{h}}{h}},x\,\operatorname {T} _{h}^{-1}\right]=[\operatorname {D} ,x]=I.}
A large number of formal differential relations of standard calculus involving functionsf(x){\displaystyle f(x)}thus systematically map to umbral finite-difference analogs involvingf(xTh−1){\displaystyle f(x\operatorname {T} _{h}^{-1})}.
For instance, the umbral analog of a monomialxnis a generalization of the above falling factorial (Pochhammer k-symbol),(x)n≡(xTh−1)n=x(x−h)(x−2h)⋯(x−(n−1)h),{\displaystyle \ (x)_{n}\equiv \left(\ x\ \operatorname {T} _{h}^{-1}\right)^{n}=x\left(x-h\right)\left(x-2h\right)\cdots {\bigl (}x-\left(n-1\right)\ h{\bigr )}\ ,}so thatΔhh(x)n=n(x)n−1,{\displaystyle \ {\frac {\Delta _{h}}{h}}(x)_{n}=n\ (x)_{n-1}\ ,}hence the above Newton interpolation formula (by matching coefficients in the expansion of an arbitrary functionf(x)in such symbols), and so on.
For example, the umbral sine issin(xTh−1)=x−(x)33!+(x)55!−(x)77!+⋯{\displaystyle \ \sin \left(x\ \operatorname {T} _{h}^{-1}\right)=x-{\frac {(x)_{3}}{3!}}+{\frac {(x)_{5}}{5!}}-{\frac {(x)_{7}}{7!}}+\cdots \ }
As in thecontinuum limit, theeigenfunctionofΔh/halso happens to be an exponential,
and henceFourier sums of continuum functions are readily, faithfully mapped to umbral Fourier sums, i.e., involving the same Fourier coefficients multiplying these umbral basis exponentials.[16]This umbral exponential thus amounts to the exponentialgenerating functionof thePochhammer symbols.
Thus, for instance, theDirac delta functionmaps to its umbral correspondent, thecardinal sine functionδ(x)↦sin[π2(1+xh)]π(x+h),{\displaystyle \ \delta (x)\mapsto {\frac {\sin \left[{\frac {\pi }{2}}\left(1+{\frac {x}{h}}\right)\right]}{\pi (x+h)}}\ ,}and so forth.[17]Difference equationscan often be solved with techniques very similar to those for solvingdifferential equations.
The inverse operator of the forward difference operator (the umbral analog of the integral) is theindefinite sumor antidifference operator.
Analogous torules for finding the derivative, we have:
All of the above rules apply equally well to any difference operator as toΔ, includingδand∇.
See references.[18][19][20][21]
Finite differences can be considered in more than one variable. They are analogous topartial derivativesin several variables.
Some partial derivative approximations are:fx(x,y)≈f(x+h,y)−f(x−h,y)2hfy(x,y)≈f(x,y+k)−f(x,y−k)2kfxx(x,y)≈f(x+h,y)−2f(x,y)+f(x−h,y)h2fyy(x,y)≈f(x,y+k)−2f(x,y)+f(x,y−k)k2fxy(x,y)≈f(x+h,y+k)−f(x+h,y−k)−f(x−h,y+k)+f(x−h,y−k)4hk.{\displaystyle {\begin{aligned}f_{x}(x,y)&\approx {\frac {f(x+h,y)-f(x-h,y)}{2h}}\\f_{y}(x,y)&\approx {\frac {f(x,y+k)-f(x,y-k)}{2k}}\\f_{xx}(x,y)&\approx {\frac {f(x+h,y)-2f(x,y)+f(x-h,y)}{h^{2}}}\\f_{yy}(x,y)&\approx {\frac {f(x,y+k)-2f(x,y)+f(x,y-k)}{k^{2}}}\\f_{xy}(x,y)&\approx {\frac {f(x+h,y+k)-f(x+h,y-k)-f(x-h,y+k)+f(x-h,y-k)}{4hk}}.\end{aligned}}}
Alternatively, for applications in which the computation offis the most costly step, and both first and second derivatives must be computed, a more efficient formula for the last case isfxy(x,y)≈f(x+h,y+k)−f(x+h,y)−f(x,y+k)+2f(x,y)−f(x−h,y)−f(x,y−k)+f(x−h,y−k)2hk,{\displaystyle f_{xy}(x,y)\approx {\frac {f(x+h,y+k)-f(x+h,y)-f(x,y+k)+2f(x,y)-f(x-h,y)-f(x,y-k)+f(x-h,y-k)}{2hk}},}since the only values to compute that are not already needed for the previous four equations aref(x+h,y+k)andf(x−h,y−k).
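These approximations are straightforward to evaluate; the sketch below checks a few of them against the exact partial derivatives of f(x, y) = sin(x)·cos(y), an arbitrary smooth test function.

```python
import numpy as np

# Central-difference approximations of partial derivatives of f(x, y) = sin(x) * cos(y).
f = lambda x, y: np.sin(x) * np.cos(y)
x, y, h, k = 0.7, 0.3, 1e-4, 1e-4

fx  = (f(x + h, y) - f(x - h, y)) / (2 * h)
fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
fxy = (f(x + h, y + k) - f(x + h, y - k) - f(x - h, y + k) + f(x - h, y - k)) / (4 * h * k)

print(fx,  np.cos(x) * np.cos(y))    # should agree closely
print(fxx, -np.sin(x) * np.cos(y))
print(fxy, -np.cos(x) * np.sin(y))
```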
|
https://en.wikipedia.org/wiki/Newton_series
|
Smoothing splinesare function estimates,f^(x){\displaystyle {\hat {f}}(x)}, obtained from a set of noisy observationsyi{\displaystyle y_{i}}of the targetf(xi){\displaystyle f(x_{i})}, in order to balance a measure ofgoodness of fitoff^(xi){\displaystyle {\hat {f}}(x_{i})}toyi{\displaystyle y_{i}}with a derivative based measure of the smoothness off^(x){\displaystyle {\hat {f}}(x)}. They provide a means for smoothing noisyxi,yi{\displaystyle x_{i},y_{i}}data. The most familiar example is the cubic smoothing spline, but there are many other possibilities, including for the case wherex{\displaystyle x}is a vector quantity.
Let{xi,Yi:i=1,…,n}{\displaystyle \{x_{i},Y_{i}:i=1,\dots ,n\}}be a set of observations, modeled by the relationYi=f(xi)+ϵi{\displaystyle Y_{i}=f(x_{i})+\epsilon _{i}}where theϵi{\displaystyle \epsilon _{i}}are independent, zero mean random variables. The cubic smoothing spline estimatef^{\displaystyle {\hat {f}}}of the functionf{\displaystyle f}is defined to be the unique minimizer, in theSobolev spaceW22{\displaystyle W_{2}^{2}}on a compact interval, of the penalized sum of squares[1][2]∑i=1n{Yi−f^(xi)}2+λ∫f^″(x)2dx.{\displaystyle \sum _{i=1}^{n}\{Y_{i}-{\hat {f}}(x_{i})\}^{2}+\lambda \int {\hat {f}}''(x)^{2}\,dx.}
Remarks:
It is useful to think of fitting a smoothing spline in two steps:
Now, treat the second step first.
Given the vectorm^=(f^(x1),…,f^(xn))T{\displaystyle {\hat {m}}=({\hat {f}}(x_{1}),\ldots ,{\hat {f}}(x_{n}))^{T}}of fitted values, the sum-of-squares part of the spline criterion is fixed. It remains only to minimize∫f^″(x)2dx{\displaystyle \int {\hat {f}}''(x)^{2}\,dx}, and the minimizer is a natural cubicsplinethat interpolates the points(xi,f^(xi)){\displaystyle (x_{i},{\hat {f}}(x_{i}))}. This interpolating spline is a linear operator, and can be written in the form
wherefi(x){\displaystyle f_{i}(x)}are a set of spline basis functions. As a result, the roughness penalty has the form
where the elements ofAare∫fi″(x)fj″(x)dx{\displaystyle \int f_{i}''(x)f_{j}''(x)dx}. The basis functions, and hence the matrixA, depend on the configuration of the predictor variablesxi{\displaystyle x_{i}}, but not on the responsesYi{\displaystyle Y_{i}}orm^{\displaystyle {\hat {m}}}.
Ais ann×nmatrix given byA=ΔTW−1Δ{\displaystyle A=\Delta ^{T}W^{-1}\Delta }.
Δis an(n-2)×nmatrix of second differences with elements:
Δii=1/hi{\displaystyle \Delta _{ii}=1/h_{i}},Δi,i+1=−1/hi−1/hi+1{\displaystyle \Delta _{i,i+1}=-1/h_{i}-1/h_{i+1}},Δi,i+2=1/hi+1{\displaystyle \Delta _{i,i+2}=1/h_{i+1}}
Wis an(n-2)×(n-2)symmetric tri-diagonal matrix with elements:
Wi−1,i=Wi,i−1=hi/6{\displaystyle W_{i-1,i}=W_{i,i-1}=h_{i}/6},Wii=(hi+hi+1)/3{\displaystyle W_{ii}=(h_{i}+h_{i+1})/3}andhi=ξi+1−ξi{\displaystyle h_{i}=\xi _{i+1}-\xi _{i}}, the distances between successive knots (or x values).
Now back to the first step. The penalized sum-of-squares can be written as
whereY=(Y1,…,Yn)T{\displaystyle Y=(Y_{1},\ldots ,Y_{n})^{T}}.
Minimizing overm^{\displaystyle {\hat {m}}}by differentiating with respect tom^{\displaystyle {\hat {m}}}gives−2{Y−m^}+2λAm^=0{\displaystyle -2\{Y-{\hat {m}}\}+2\lambda A{\hat {m}}=0}[6]and hencem^=(I+λA)−1Y.{\displaystyle {\hat {m}}=(I+\lambda A)^{-1}Y.}
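A direct transcription of these formulas is sketched below (the article's 1-based indices translated to 0-based, so the index conventions are an assumption of this sketch); it builds Δ, W and A and returns the fitted values m̂ = (I + λA)⁻¹Y.

```python
import numpy as np

# Sketch of the cubic smoothing spline fit at the data points, following the
# matrices Delta, W and A = Delta^T W^{-1} Delta defined above.
def smoothing_spline_fit(x, Y, lam):
    n = len(x)
    h = np.diff(x)                                  # h_i = x_{i+1} - x_i
    Delta = np.zeros((n - 2, n))
    W = np.zeros((n - 2, n - 2))
    for i in range(n - 2):
        Delta[i, i]     = 1 / h[i]
        Delta[i, i + 1] = -1 / h[i] - 1 / h[i + 1]
        Delta[i, i + 2] = 1 / h[i + 1]
        W[i, i] = (h[i] + h[i + 1]) / 3
        if i > 0:
            W[i, i - 1] = W[i - 1, i] = h[i] / 6
    A = Delta.T @ np.linalg.inv(W) @ Delta          # roughness penalty matrix
    return np.linalg.solve(np.eye(n) + lam * A, Y)  # m_hat = (I + lam*A)^{-1} Y

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 30)
Y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 30)
print(smoothing_spline_fit(x, Y, lam=1e-3))          # smoothed values at the x_i
```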
De Boor's approach exploits the same idea, of finding a balance between having a smooth curve and being close to the given data.[7]
wherep{\displaystyle p}is a parameter called smooth factor and belongs to the interval[0,1]{\displaystyle [0,1]}, andδi;i=1,…,n{\displaystyle \delta _{i};i=1,\dots ,n}are the quantities controlling the extent of smoothing (they represent the weightδi−2{\displaystyle \delta _{i}^{-2}}of each pointYi{\displaystyle Y_{i}}). In practice, sincecubic splinesare mostly used,m{\displaystyle m}is usually2{\displaystyle 2}. The solution form=2{\displaystyle m=2}was proposed byChristian Reinschin 1967.[8]Form=2{\displaystyle m=2}, whenp{\displaystyle p}approaches1{\displaystyle 1},f^{\displaystyle {\hat {f}}}converges to the "natural" spline interpolant to the given data.[7]Asp{\displaystyle p}approaches0{\displaystyle 0},f^{\displaystyle {\hat {f}}}converges to a straight line (the smoothest curve). Since finding a suitable value ofp{\displaystyle p}is a task of trial and error, a redundant constantS{\displaystyle S}was introduced for convenience.[8]S{\displaystyle S}is used to numerically determine the value ofp{\displaystyle p}so that the functionf^{\displaystyle {\hat {f}}}meets the following condition:
The algorithm described by de Boor starts withp=0{\displaystyle p=0}and increasesp{\displaystyle p}until the condition is met.[7]Ifδi{\displaystyle \delta _{i}}is an estimation of the standard deviation forYi{\displaystyle Y_{i}}, the constantS{\displaystyle S}is recommended to be chosen in the interval[n−2n,n+2n]{\displaystyle \left[n-{\sqrt {2n}},n+{\sqrt {2n}}\right]}. HavingS=0{\displaystyle S=0}means the solution is the "natural" spline interpolant.[8]IncreasingS{\displaystyle S}means we obtain a smoother curve by getting farther from the given data.
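As an illustration (not part of the original algorithm description), FITPACK-style smoothing-spline routines such as scipy.interpolate.splrep expose essentially this S-based scheme: the routine's smoothing condition plays the role of S, and the weights 1/δ_i are passed explicitly. The data, seed and choice of S below are assumptions for the sketch.

import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
sigma = 0.1
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, sigma, x.size)

n = x.size
S = float(n)   # chosen inside the recommended interval [n - sqrt(2n), n + sqrt(2n)]

# FITPACK's condition sum((w_i * (y_i - f(x_i)))**2) <= s plays the role of S here,
# with weights w_i = 1/delta_i (delta_i being the estimated standard deviations)
tck = splrep(x, y, w=np.full(n, 1.0 / sigma), s=S)
y_smooth = splev(x, tck)

Setting S = 0 reproduces the interpolating "natural" spline, while increasing S yields progressively smoother curves, matching the behaviour of S described above.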
There are two main classes of method for generalizing from smoothing with respect to a scalarx{\displaystyle x}to smoothing with respect to a vectorx{\displaystyle x}. The first approach simply generalizes the spline smoothing penalty to the multidimensional setting. For example, if trying to estimatef(x,z){\displaystyle f(x,z)}we might use theThin plate splinepenalty and find thef^(x,z){\displaystyle {\hat {f}}(x,z)}minimizing∑i=1n{Yi−f^(xi,zi)}2+λ∫∫(∂2f^∂x2)2+2(∂2f^∂x∂z)2+(∂2f^∂z2)2dxdz.{\displaystyle \sum _{i=1}^{n}\{Y_{i}-{\hat {f}}(x_{i},z_{i})\}^{2}+\lambda \int \!\!\int \left({\frac {\partial ^{2}{\hat {f}}}{\partial x^{2}}}\right)^{2}+2\left({\frac {\partial ^{2}{\hat {f}}}{\partial x\,\partial z}}\right)^{2}+\left({\frac {\partial ^{2}{\hat {f}}}{\partial z^{2}}}\right)^{2}\,dx\,dz.}
The thin plate spline approach can be generalized to smoothing with respect to more than two dimensions and to other orders of differentiation in the penalty.[1]As the dimension increases there are some restrictions on the smallest order of differential that can be used,[1]but Duchon's original paper[9]gives slightly more complicated penalties that avoid this restriction.
The thin plate splines are isotropic, meaning that if we rotate thex,z{\displaystyle x,z}co-ordinate system the estimate will not change, but also that we are assuming that the same level of smoothing is appropriate in all directions. This is often considered reasonable when smoothing with respect to spatial location, but in many other cases isotropy is not an appropriate assumption and can lead to sensitivity to apparently arbitrary choices of measurement units. For example, when smoothing with respect to distance and time, an isotropic smoother will give different results if distance is measured in metres and time in seconds than if the units are changed to centimetres and hours.
The second class of generalizations to multi-dimensional smoothing deals directly with this scale invariance issue using tensor product spline constructions.[10][11][12]Such splines have smoothing penalties with multiple smoothing parameters, which is the price that must be paid for not assuming that the same degree of smoothness is appropriate in all directions.
Smoothing splines are related to, but distinct from:
Source code for spline smoothing can be found in the examples from Carl de Boor's book A Practical Guide to Splines. The examples are in the Fortran programming language. The updated sources are also available on Carl de Boor's official site[1].
|
https://en.wikipedia.org/wiki/Spline_smoothing
|
Instatistics,Box–Behnken designsareexperimental designsforresponse surface methodology, devised byGeorge E. P. BoxandDonald Behnkenin 1960, to achieve the following goals:
The Box–Behnken design is still considered to be more efficient and more powerful than other designs such as the three-level full factorial design, the central composite design (CCD) and the Doehlert design, despite its poor coverage of the corners of the nonlinear design space.[1]
The design with 7 factors was found first while looking for a design having the desired property concerning estimation variance, and then similar designs were found for other numbers of factors.
Each design can be thought of as a combination of a two-level (full or fractional)factorial designwith anincomplete block design. In each block, a certain number of factors are put through all combinations for the factorial design, while the other factors are kept at the central values. For instance, the Box–Behnken design for 3 factors involves three blocks, in each of which 2 factors are varied through the 4 possible combinations of high and low. It is necessary to include centre points as well (in which all factors are at their central values).
In this table,mrepresents the number of factors which are varied in each of the blocks.
The design for 8 factors was not in the original paper. Taking the 9 factor design, deleting one column and any resulting duplicate rows produces an 81 run design for 8 factors, while giving up some "rotatability" (see above). Designs for other numbers of factors have also been invented (at least up to 21). A design for 16 factors exists having only 256 factorial points. UsingPlackett–Burmansto construct a 16 factor design (see below) requires only 221 points.
Most of these designs can be split into groups (blocks), for each of which the model will have a different constant term, in such a way that the block constants will be uncorrelated with the other coefficients.
These designs can be augmented with positive and negative "axial points", as in central composite designs, but, in this case, to estimate univariate cubic and quartic effects, with length α = min(2, (int(1.5 + K/4))^(1/2)), for K factors, roughly to approximate the original design points' distances from the centre.
Plackett–Burman designs can be used, replacing the fractional factorial and incomplete block designs, to construct smaller or larger Box–Behnkens, in which case axial points of length α = ((K + 1)/2)^(1/2) better approximate the original design points' distances from the centre. Since each column of the basic design has 50% 0s and 25% each +1s and −1s, multiplying each column, j, by σ(X_j)·2^(1/2) and adding μ(X_j) prior to experimentation, under a general linear model hypothesis, produces a "sample" of output Y with correct first and second moments of Y.
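A small sketch of the column scaling just described; the helper name and NumPy usage are assumptions for illustration only.

import numpy as np

def decode_columns(D_coded, mu, sigma):
    # Map coded -1/0/+1 design columns to physical units, X_j = mu_j + sigma_j * 2**0.5 * D_j,
    # i.e. the column scaling described above (mu, sigma are per-column arrays)
    return np.asarray(mu) + np.sqrt(2.0) * np.asarray(sigma) * np.asarray(D_coded, dtype=float)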
|
https://en.wikipedia.org/wiki/Box%E2%80%93Behnken_design
|
Instatistics, acentral composite designis an experimental design, useful inresponse surface methodology, for building a second order (quadratic) model for theresponse variablewithout needing to use a complete three-levelfactorial experiment.
After the designed experiment is performed,linear regressionis used, sometimes iteratively, to obtain results. Coded variables are often used when constructing this design.
The design consists of three distinct sets of experimental runs:
The design matrix for a central composite design experiment involvingkfactors is derived from a matrix,d, containing the following three different parts corresponding to the three types of experimental runs:
Thendis the vertical concatenation:
The design matrixXused in linear regression is the horizontal concatenation of a column of 1s (intercept),d, and all elementwise products of a pair of columns ofd:
whered(i) represents theith column ind.
There are many different methods to select a useful value of α. LetFbe the number of points due to the factorial design andT= 2k+n, the number of additional points, wherenis the number of central points in the design. Common values are as follows (Myers, 1971):
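A hedged sketch of the construction described above, in Python; the default α below is the common "rotatable" choice F^(1/4), which is only one of the options listed by Myers, and the function name, defaults and column ordering are illustrative assumptions.

import numpy as np
from itertools import product

def ccd_design(k, n_center=4, alpha=None):
    # Sketch of a central composite design in coded units
    F = 2 ** k
    if alpha is None:
        alpha = F ** 0.25
    factorial = np.array(list(product([-1.0, 1.0], repeat=k)))  # 2^k corner points
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha                                 # axial ("star") points
        axial[2 * i + 1, i] = alpha
    center = np.zeros((n_center, k))                             # replicated centre points
    d = np.vstack([factorial, axial, center])                    # vertical concatenation
    # Regression matrix X: intercept column, d, and elementwise products of column pairs
    cols = [np.ones(len(d))] + [d[:, j] for j in range(k)]
    for i in range(k):
        for j in range(i, k):                                    # i == j gives the squared terms
            cols.append(d[:, i] * d[:, j])
    X = np.column_stack(cols)
    return d, X

d, X = ccd_design(2)    # 4 factorial + 4 axial + 4 centre runs, 6 columns for the quadratic model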
Statistical approaches such asResponse Surface Methodologycan be employed to maximize the production of a special substance by optimization of operational factors. In contrast to conventional methods, the interaction among process variables can be determined by statistical techniques. For instance, in a study, a central composite design was employed to investigate the effect of critical parameters of organosolv pretreatment of rice straw including temperature, time, and ethanol concentration. The residual solid, lignin recovery, and hydrogen yield were selected as the response variables.[1]
Myers, Raymond H.Response Surface Methodology. Boston: Allyn and Bacon, Inc., 1971
|
https://en.wikipedia.org/wiki/Central_composite_design
|
Gradient-enhanced kriging(GEK) is asurrogate modelingtechnique used in engineering. A surrogate model (alternatively known as ametamodel,response surfaceor emulator) is a prediction of the output of an expensive computer code.[1]This prediction is based on a small number of evaluations of the expensive computer code.
Adjoint solvers are now becoming available in a range ofcomputational fluid dynamics(CFD) solvers, such asFluent,OpenFOAM,SU2and US3D. Originally developed foroptimization, adjoint solvers are now finding more and more use inuncertainty quantification.
An adjoint solver allows one to compute thegradientof the quantity of interest with respect to all design parameters at the cost of one additional solve. This, potentially, leads to alinearspeedup: the computational cost of constructing an accurate surrogate decreases, and the resulting computational speedup s{\displaystyle s} scales linearly with the number d{\displaystyle d} of design parameters.
The reasoning behind this linear speedup is straightforward. Assume we runN{\displaystyle N}primal solves andN{\displaystyle N}adjoint solves, at a total cost of2N{\displaystyle 2N}. This results inN+dN{\displaystyle N+dN}data;N{\displaystyle N}values for the quantity of interest andd{\displaystyle d}partial derivatives in each of theN{\displaystyle N}gradients. Now assume that each partial derivative provides as much information for our surrogate as a single primal solve. Then, the total cost of getting the same amount of information from primal solves only isN+dN{\displaystyle N+dN}. The speedup is the ratio of these costs:[2][3]s=(N+dN)/(2N)=(1+d)/2{\displaystyle s=(N+dN)/(2N)=(1+d)/2}, which grows linearly with the numberd{\displaystyle d}of design parameters.
A linear speedup has been demonstrated for afluid-structure interactionproblem[2]and for atransonicairfoil.[3]
One issue with adjoint-based gradients in CFD is that they can be particularlynoisy.[4][5]When derived in aBayesianframework, GEK allows one to incorporate not only the gradient information, but also theuncertaintyin that gradient information.[6]
When using GEK one takes the following steps:
Once the surrogate has been constructed it can be used in different ways, for example for surrogate-baseduncertainty quantification(UQ) oroptimization.
In aBayesianframework, we useBayes' Theoremto predict theKrigingmean and covariance conditional on the observations. When using GEK, the observations are usually the results of a number of computer simulations. GEK can be interpreted as a form ofGaussian processregression.
Along the lines of,[7]we are interested in the outputX{\displaystyle X}of our computer simulation, for which we assume thenormalprior probability distribution:
with prior meanμ{\displaystyle \mu }and priorcovariance matrixP{\displaystyle P}. The observationsy{\displaystyle y}have the normallikelihood:
withH{\displaystyle H}the observation matrix andR{\displaystyle R}the observation error covariance matrix, which contains theobservation uncertainties. After applyingBayes' Theoremwe obtain a normally distributedposterior probability distribution, with Kriging mean:
and Kriging covariance:
where we have the gain matrix:
In Kriging, the prior covariance matrixP{\displaystyle P}is generated from a covariance function. One example of a covariance function is the Gaussian covariance:
where we sum over the dimensionsk{\displaystyle k}andξ{\displaystyle \xi }are the input parameters. Thehyperparametersμ{\displaystyle \mu },σ{\displaystyle \sigma }andθ{\displaystyle \theta }can be estimated from aMaximum Likelihood Estimate(MLE).[6][8][9]
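The update just described can be written compactly in code. The sketch below assumes the standard linear-Gaussian form of the posterior (gain matrix K = PHᵀ(HPHᵀ + R)⁻¹) and one common convention for the Gaussian covariance; the function names, hyperparameter values and toy data are assumptions for illustration rather than the notation of the cited references.

import numpy as np

def gaussian_cov(xi, sigma=1.0, theta=1.0):
    # Prior covariance P from a Gaussian covariance function (one common convention;
    # sigma and theta would normally come from a maximum likelihood estimate)
    d2 = ((xi[:, None, :] - xi[None, :, :]) ** 2 / theta ** 2).sum(axis=-1)
    return sigma ** 2 * np.exp(-0.5 * d2)

def kriging_update(mu, P, H, R, y):
    # Posterior (Kriging) mean and covariance via the gain matrix
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # gain matrix
    mean = mu + K @ (y - H @ mu)         # Kriging mean
    cov = P - K @ H @ P                  # Kriging covariance
    return mean, cov

# Example: predict 5 outputs from 3 noisy observations of the first 3 of them
xi = np.random.default_rng(0).uniform(0.0, 1.0, (5, 2))
P = gaussian_cov(xi)
H = np.eye(5)[:3]                        # observation matrix: observe outputs 1-3
R = 0.01 * np.eye(3)                     # observation error covariance
mean, cov = kriging_update(np.zeros(5), P, H, R, np.array([0.2, -0.1, 0.4]))

For GEK, y additionally contains gradient observations and P, H and R are augmented accordingly, as described next.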
There are several ways of implementing GEK. The first method, indirect GEK, defines a small but finite stepsizeh{\displaystyle h}, and uses the gradient information to append synthetic data to the observationsy{\displaystyle y}, see for example.[8]Indirect Kriging is sensitive to the choice of the step-sizeh{\displaystyle h}and cannot includeobservation uncertainties.
Direct GEK is a form of co-Kriging, where we add the gradient information as co-variables. This can be done by modifying the prior covarianceP{\displaystyle P}or by modifying the observation matrixH{\displaystyle H}; both approaches lead to the same GEK predictor. When we construct direct GEK through the prior covariance matrix, we append the partial derivatives toy{\displaystyle y}, and modify the prior covariance matrixP{\displaystyle P}such that it also contains the derivatives (and second derivatives) of the covariance function, see for example[10].[6]The main advantages of direct GEK over indirect GEK are: 1) we do not have to choose a step-size, 2) we can includeobservation uncertaintiesfor the gradients inR{\displaystyle R}, and 3) it is less susceptible to poorconditioningof the gain matrixK{\displaystyle K}.[6][8]
Another way of arriving at the same direct GEK predictor is to append the partial derivatives to the observationsy{\displaystyle y}and include partial derivative operators in the observation matrixH{\displaystyle H}, see for example.[11]
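As an illustration of the direct/co-kriging construction, the following 1-D sketch augments the observations with gradients and builds the required cross-covariances from derivatives of a Gaussian covariance function; the kernel convention, fixed hyperparameters and jitter term are assumptions, and in practice the hyperparameters would come from an MLE as noted above.

import numpy as np

def gek_predict_1d(x, y, dy, x_star, mu=0.0, sigma=1.0, theta=1.0):
    # Direct GEK in one dimension: gradient observations dy enter as extra data,
    # with covariances given by derivatives of a Gaussian kernel
    def k(a, b):        # cov(f(a), f(b))
        r = a[:, None] - b[None, :]
        return sigma ** 2 * np.exp(-r ** 2 / (2 * theta ** 2))

    def k_fd(a, b):     # cov(f(a), f'(b)) = d k / d b
        r = a[:, None] - b[None, :]
        return sigma ** 2 * np.exp(-r ** 2 / (2 * theta ** 2)) * r / theta ** 2

    def k_dd(a, b):     # cov(f'(a), f'(b))
        r = a[:, None] - b[None, :]
        return (sigma ** 2 * np.exp(-r ** 2 / (2 * theta ** 2))
                * (1.0 / theta ** 2 - r ** 2 / theta ** 4))

    # Augmented prior covariance of [f(x); f'(x)] and augmented observation vector
    C = np.block([[k(x, x), k_fd(x, x)],
                  [k_fd(x, x).T, k_dd(x, x)]])
    C += 1e-10 * np.eye(len(C))                     # jitter for numerical stability
    z = np.concatenate([y - mu, dy])                # prior mean of the gradient is zero
    c_star = np.concatenate([k(x_star, x), k_fd(x_star, x)], axis=1)
    return mu + c_star @ np.linalg.solve(C, z)

# Example: reconstruct sin(x) from three values and three derivatives
x = np.array([0.0, 1.0, 2.0])
pred = gek_predict_1d(x, np.sin(x), np.cos(x), np.linspace(0.0, 2.0, 9))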
Current gradient-enhanced kriging methods do not scale well with the number of sampling points due to the rapid growth in the size of the correlation matrix, where new information is added for each sampling point in each direction of the design space. Furthermore, they do not scale well with the number of independent variables due to the increase in the number of hyperparameters that need to be estimated.
To address this issue, a gradient-enhanced surrogate model approach was developed that drastically reduces the number of hyperparameters through the use of the partial least squares method while maintaining accuracy. In addition, this method is able to control the size of the correlation matrix by adding only the relevant points identified through the information provided by the partial least squares method. For more details, see.[12]This approach is implemented in the Surrogate Modeling Toolbox (SMT) in Python (https://github.com/SMTorg/SMT), which runs on Linux, macOS, and Windows. SMT is distributed under the New BSD license.
A universal augmented framework is proposed in[9]to append derivatives of any order to the observations. This method can be viewed as a generalization of Direct GEK that takes into account higher-order derivatives. Also, the observations and derivatives are not required to be measured at the same location under this framework.
As an example, consider the flow over atransonicairfoil.[3]The airfoil is operating at aMach numberof 0.8 and anangle of attackof 1.25 degrees. We assume that the shape of the airfoil is uncertain; the top and the bottom of the airfoil might have shifted up or down due to manufacturing tolerances. In other words, the shape of the airfoil that we are using might be slightly different from the airfoil that we designed.
On the right we see the reference results for thedrag coefficientof the airfoil, based on a large number of CFD simulations. Note that the lowest drag, which corresponds to 'optimal' performance, is close to the undeformed 'baseline' design of the airfoil at (0,0).
After designing a sampling plan (indicated by the gray dots) and running the CFD solver at those sample locations, we obtain the Kriging surrogate model. The Kriging surrogate is close to the reference, but perhaps not as close as we would desire.
In the last figure, we have improved the accuracy of this surrogate model by including the adjoint-based gradient information, indicated by the arrows, and applying GEK.
GEK has found the following applications:
|
https://en.wikipedia.org/wiki/Gradient-enhanced_kriging
|
IOSO(IndirectOptimizationon the basis ofSelf-Organization) is amultiobjective, multidimensional nonlinear optimization technology.
IOSO Technology is based on theresponse surface methodologyapproach.
At each IOSO iteration the internally constructed response surface model for the objective is being optimized within the current search region. This step is followed by a direct call to the actual mathematical model of the system for the candidate optimal point obtained from optimizing internal response surface model. During IOSO operation, the information about the system behavior is stored for the points in the neighborhood of the extremum, so that the response surface model becomes more accurate for this search area. The following steps are internally taken while proceeding from one IOSO iteration to another:
IOSO is based on technology that has been under development for more than 20 years by Sigma Technology, which grew out of the IOSO Technology Center in 2001. Sigma Technology is headed by Prof. I. N. Egorov, CEO.
IOSO is the name of the group ofmultidisciplinary design optimizationsoftware that runs onMicrosoft Windowsas well as onUnix/LinuxOS and was developed bySigma Technology. It is used to improve the performance of complex systems and technological processes and to develop new materials based on a search for their optimal parameters. IOSO is easily integrated with almost anycomputer aided engineering(CAE) tool.
IOSO group of software consists of:
IOSO NM is used to maximize or minimize system or object characteristics which can include the performance or cost of or loads on the object in question. The search for optimal values for object or system characteristics is carried out by means of optimal change to design, geometrical or other parameters of the object.
It is often necessary to select or co-ordinate management parameters for the system while it is in operation in order to achieve a certain effect during the operation of the system or to reduce the impact of some factors on the system.
When the design process involves the use of any mathematical models of real-life objects, whether commercial or corporate, there is the problem of co-ordinating the experiment findings and model computation results. All models imply a set of unknown factors or constants. Searching for the optimal values thereof makes it possible to co-ordinate the experiment findings and model computation results.
Practical application of numerical optimization results is difficult because any complex technical system is a stochastic system, and its characteristics are probabilistic in nature. In the context of optimization, the stochastic properties of a technical system mean that its important parameters are subject to random scatter, which normally arises at the production stage despite the current level of manufacturing technology. Random deviations of the system parameters lead to random changes in system efficiency.
An extreme value of efficiency obtained by solving the optimization problem with a traditional (deterministic) approach is simply a maximum attainable value and, from the point of view of practical realization, can be regarded as only a nominal optimum. Thus, one can consider two different types of optimization criteria. One is an ideal efficiency, which can be achieved only if the system parameters under consideration are reproduced in practice with absolute precision. The other optimization criteria are probabilistic in nature, for example: the mathematical expectation of the efficiency, the total probability of satisfying the preset constraints, the variance of the efficiency, and so on.
It is evident that an extreme value of one of these criteria does not guarantee a high level of another; moreover, the criteria may contradict each other. Thus, in this case we have amultiobjective optimizationproblem.
The IOSO concept of robust design optimization and robust optimal control makes it possible to determine an optimal practical solution that can be implemented with high probability at the given technology level of the production plants. Many modern probabilistic approaches either estimate probabilistic efficiency criteria only when analysing a previously obtained deterministic solution, or use significantly simplified assessments of the probabilistic criteria during the optimization process. The distinctive feature of the IOSO approach is that robust design optimization is solved directly in a stochastic formulation, with the probabilistic criteria estimated at each iteration; this procedure reliably produces a fully robust optimal solution. The high efficiency of robust design optimization is provided by the ability of IOSO algorithms to solvestochastic optimizationproblems with a large level of noise.
Application examples
|
https://en.wikipedia.org/wiki/IOSO
|
In thedesign of experiments,optimal experimental designs(oroptimum designs[2]) are a class ofexperimental designsthat areoptimalwith respect to somestatisticalcriterion. The creation of this field of statistics has been credited to Danish statisticianKirstine Smith.[3][4]
In thedesign of experimentsforestimatingstatistical models,optimal designsallow parameters to beestimated without biasand withminimum variance. A non-optimal design requires a greater number ofexperimental runstoestimatetheparameterswith the sameprecisionas an optimal design. In practical terms, optimal experiments can reduce the costs of experimentation.
The optimality of a design depends on thestatistical modeland is assessed with respect to a statistical criterion, which is related to the variance-matrix of the estimator. Specifying an appropriate model and specifying a suitable criterion function both require understanding ofstatistical theoryand practical knowledge withdesigning experiments.
Optimal designs offer three advantages over sub-optimalexperimental designs:[5]
Experimental designs are evaluated using statistical criteria.[6]
It is known that theleast squaresestimator minimizes thevarianceofmean-unbiasedestimators(under the conditions of theGauss–Markov theorem). In theestimationtheory forstatistical modelswith onerealparameter, thereciprocalof the variance of an ("efficient") estimator is called the "Fisher information" for that estimator.[7]Because of this reciprocity,minimizingthevariancecorresponds tomaximizingtheinformation.
When thestatistical modelhas severalparameters, however, themeanof the parameter-estimator is avectorand itsvarianceis amatrix. Theinverse matrixof the variance-matrix is called the "information matrix". Because the variance of the estimator of a parameter vector is a matrix, the problem of "minimizing the variance" is complicated. Usingstatistical theory, statisticians compress the information-matrix using real-valuedsummary statistics; being real-valued functions, these "information criteria" can be maximized.[8]The traditional optimality-criteria areinvariantsof theinformationmatrix; algebraically, the traditional optimality-criteria arefunctionalsof theeigenvaluesof the information matrix.
Other optimality-criteria are concerned with the variance ofpredictions:
In many applications, the statistician is most concerned with a"parameter of interest"rather than with"nuisance parameters". More generally, statisticians considerlinear combinationsof parameters, which are estimated via linear combinations of treatment-means in thedesign of experimentsand in theanalysis of variance; such linear combinations are calledcontrasts. Statisticians can use appropriate optimality-criteria for suchparameters of interestand forcontrasts.[12]
Catalogs of optimal designs occur in books and in software libraries.
In addition, majorstatistical systemslikeSASandRhave procedures for optimizing a design according to a user's specification. The experimenter must specify amodelfor the design and an optimality-criterion before the method can compute an optimal design.[13]
Some advanced topics in optimal design require morestatistical theoryand practical knowledge in designing experiments.
Since the optimality criterion of most optimal designs is based on some function of the information matrix, the 'optimality' of a given design ismodeldependent: While an optimal design is best for thatmodel, its performance may deteriorate on othermodels. On othermodels, anoptimaldesign can be either better or worse than a non-optimal design.[14]Therefore, it is important tobenchmarkthe performance of designs under alternativemodels.[15]
The choice of an appropriate optimality criterion requires some thought, and it is useful to benchmark the performance of designs with respect to several optimality criteria. Cornell writes that
since the [traditional optimality] criteria . . . are variance-minimizing criteria, . . . a design that is optimal for a given model using one of the . . . criteria is usually near-optimal for the same model with respect to the other criteria.
Indeed, there are several classes of designs for which all the traditional optimality-criteria agree, according to the theory of "universal optimality" ofKiefer.[17]The experience of practitioners like Cornell and the "universal optimality" theory of Kiefer suggest that robustness with respect to changes in theoptimality-criterionis much greater than is robustness with respect to changes in themodel.
High-quality statistical software provide a combination of libraries of optimal designs or iterative methods for constructing approximately optimal designs, depending on the model specified and the optimality criterion. Users may use a standard optimality-criterion or may program a custom-made criterion.
All of the traditional optimality-criteria areconvex (or concave) functions, and therefore optimal-designs are amenable to the mathematical theory ofconvex analysisand their computation can use specialized methods ofconvex minimization.[18]The practitioner need not selectexactly onetraditional, optimality-criterion, but can specify a custom criterion. In particular, the practitioner can specify a convex criterion using the maxima of convex optimality-criteria andnonnegative combinationsof optimality criteria (since these operations preserveconvex functions). Forconvexoptimality criteria, theKiefer-Wolfowitzequivalence theoremallows the practitioner to verify that a given design is globally optimal.[19]TheKiefer-Wolfowitzequivalence theoremis related with theLegendre-Fenchelconjugacyforconvex functions.[20]
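A minimal sketch of one standard convex-optimization scheme of this kind, the multiplicative weight-update algorithm for an approximate D-optimal design, here applied to quadratic regression on a grid of candidate points; the grid, iteration count and tolerance are illustrative assumptions.

import numpy as np

# Multiplicative weight-update for an approximate D-optimal design, sketched for
# the quadratic model f(x) = (1, x, x^2) on candidate points in [-1, 1]
x = np.linspace(-1.0, 1.0, 41)
F = np.column_stack([np.ones_like(x), x, x ** 2])    # rows are model vectors f(x_i)
p = F.shape[1]                                       # number of parameters
w = np.full(len(x), 1.0 / len(x))                    # start from the uniform design

for _ in range(500):
    M = F.T @ (w[:, None] * F)                       # information matrix M(w)
    d = np.einsum('ij,jk,ik->i', F, np.linalg.inv(M), F)   # variance function d(x_i, w)
    w = w * d / p                                    # fixed point <=> D-optimality
    w /= w.sum()

support = x[w > 1e-3]                                # weight concentrates on x = -1, 0, 1

At convergence the variance function d(x, w) is at most p everywhere, with equality on the support points; this is exactly the global-optimality check provided by the Kiefer-Wolfowitz equivalence theorem, and for this example the design places weight 1/3 on each of x = -1, 0, 1.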
If an optimality-criterion lacksconvexity, then finding aglobal optimumand verifying its optimality often are difficult.
When scientists wish to test several theories, then a statistician can design an experiment that allows optimal tests between specified models. Such "discrimination experiments" are especially important in thebiostatisticssupportingpharmacokineticsandpharmacodynamics, following the work ofCoxand Atkinson.[21]
When practitioners need to consider multiplemodels, they can specify aprobability-measureon the models and then select any design maximizing theexpected valueof such an experiment. Such probability-based optimal-designs are called optimalBayesiandesigns. SuchBayesian designsare used especially forgeneralized linear models(where the response follows anexponential-familydistribution).[22]
The use of aBayesian designdoes not force statisticians to useBayesian methodsto analyze the data, however. Indeed, the "Bayesian" label for probability-based experimental-designs is disliked by some researchers.[23]Alternative terminology for "Bayesian" optimality includes "on-average" optimality or "population" optimality.
Scientific experimentation is an iterative process, and statisticians have developed several approaches to the optimal design of sequential experiments.
Sequential analysiswas pioneered byAbraham Wald.[24]In 1972,Herman Chernoffwrote an overview of optimal sequential designs,[25]whileadaptive designswere surveyed later by S. Zacks.[26]Of course, much work on the optimal design of experiments is related to the theory ofoptimal decisions, especially thestatistical decision theoryofAbraham Wald.[27]
Optimal designs forresponse-surface modelsare discussed in the textbook by Atkinson, Donev and Tobias, and in the survey of Gaffke and Heiligers and in the mathematical text of Pukelsheim. Theblockingof optimal designs is discussed in the textbook of Atkinson, Donev and Tobias and also in the monograph by Goos.
The earliest optimal designs were developed to estimate the parameters of regression models with continuous variables, for example, byJ. D. Gergonnein 1815 (Stigler). In English, two early contributions were made byCharles S. PeirceandKirstine Smith.
Pioneering designs for multivariateresponse-surfaceswere proposed byGeorge E. P. Box. However, Box's designs have few optimality properties. Indeed, theBox–Behnken designrequires excessive experimental runs when the number of variables exceeds three.[28]Box's"central-composite" designsrequire more experimental runs than do the optimal designs of Kôno.[29]
The optimization of sequential experimentation is also studied instochastic programmingand insystemsandcontrol. Popular methods includestochastic approximationand other methods ofstochastic optimization. Much of this research has been associated with the subdiscipline ofsystem identification.[30]In computationaloptimal control, D. Judin & A. Nemirovskii and Boris Polyak have described methods that are more efficient than the (Armijo-style)step-size rulesintroduced byG. E. P. Boxinresponse-surface methodology.[31]
Adaptive designsare used inclinical trials, and optimaladaptive designsare surveyed in theHandbook of Experimental Designschapter by Shelemyahu Zacks.
There are several methods of finding an optimal design, given ana priorirestriction on the number of experimental runs or replications. Some of these methods are discussed by Atkinson, Donev and Tobias and in the paper by Hardin andSloane. Of course, fixing the number of experimental runsa prioriwould be impractical. Prudent statisticians examine the other optimal designs, whose number of experimental runs differ.
In the mathematical theory on optimal experiments, an optimal design can be aprobability measurethat issupportedon an infinite set of observation-locations. Such optimal probability-measure designs solve a mathematical problem that neglected to specify the cost of observations and experimental runs. Nonetheless, such optimal probability-measure designs can bediscretizedto furnishapproximatelyoptimal designs.[32]
In some cases, a finite set of observation-locations suffices tosupportan optimal design. Such a result was proved by Kôno andKieferin their works onresponse-surface designsfor quadratic models. The Kôno–Kiefer analysis explains why optimal designs for response-surfaces can have discrete supports, which are very similar to those of the less efficient designs that have been traditional inresponse surface methodology.[33]
In 1815, an article on optimal designs forpolynomial regressionwas published byJoseph Diaz Gergonne, according toStigler.
Charles S. Peirceproposed an economic theory of scientific experimentation in 1876, which sought to maximize the precision of the estimates. Peirce's optimal allocation immediately improved the accuracy of gravitational experiments and was used for decades by Peirce and his colleagues. In his 1882 published lecture atJohns Hopkins University, Peirce introduced experimental design with these words:
Logic will not undertake to inform you what kind of experiments you ought to make in order best to determine the acceleration of gravity, or the value of the Ohm; but it will tell you how to proceed to form a plan of experimentation.[....] Unfortunately practice generally precedes theory, and it is the usual fate of mankind to get things done in some boggling way first, and find out afterward how they could have been done much more easily and perfectly.[34]
Kirstine Smithproposed optimal designs for polynomial models in 1918. (Kirstine Smith had been a student of the Danish statisticianThorvald N. Thieleand was working withKarl Pearsonin London.)
The textbook by Atkinson, Donev and Tobias has been used for short courses for industrial practitioners as well as university courses.
Optimalblock designsare discussed by Bailey and by Bapat. The first chapter of Bapat's book reviews thelinear algebraused by Bailey (or the advanced books below). Bailey's exercises and discussion ofrandomizationboth emphasize statistical concepts (rather than algebraic computations).
Optimalblock designsare discussed in the advanced monograph by Shah and Sinha and in the survey-articles by Cheng and by Majumdar.
|
https://en.wikipedia.org/wiki/Optimal_design
|
Plackett–Burman designsareexperimental designspresented in 1946 byRobin L. PlackettandJ. P. Burmanwhile working in the BritishMinistry of Supply.[1]Their goal was to find experimental designs for investigating the dependence of some measured quantity on a number ofindependent variables(factors), each takingLlevels, in such a way as to minimize thevarianceof the estimates of these dependencies using a limited number of experiments. Interactions between the factors were considered negligible. The solution to this problem is to find an experimental design whereeach combinationof levels for anypairof factors appears thesame number of times, throughout all the experimental runs (refer to table). A completefactorial designwould satisfy this criterion, but the idea was to find smaller designs.
For the case of two levels (L= 2), Plackett and Burman used themethodfound in 1933 byRaymond Paleyfor generatingorthogonal matriceswhose elements are all either 1 or −1 (Hadamard matrices). Paley's method could be used to find such matrices of sizeNfor mostNequal to a multiple of 4. In particular, it worked for all suchNup to 100 exceptN= 92. IfNis a power of 2, however, the resulting design is identical to afractional factorial design, so Plackett–Burman designs are mostly used whenNis a multiple of 4 but not a power of 2 (i.e.N= 12, 20, 24, 28, 36 …).[3]If one is trying to estimate less thanNparameters (including the overall average), then one simply uses a subset of the columns of the matrix.
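As an illustration of the two-level construction, the sketch below builds the 12-run design from one commonly published generating row by cyclic shifting plus an added row of −1s, then checks the balance property described above; the specific generator row is taken from standard tables and should be treated here as an assumption.

import numpy as np
from itertools import combinations

# One commonly published generating row for the 12-run, 11-factor design
g = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])

# Cyclic shifts of the generator plus a final row of -1s give the 12 x 11 design
design = np.vstack([np.roll(g, i) for i in range(11)] + [-np.ones(11, dtype=int)])

# Defining property: every pair of columns shows each of the four level
# combinations (++, +-, -+, --) the same number of times (here 12/4 = 3)
for a, b in combinations(range(11), 2):
    counts = [int(np.sum((design[:, a] == s) & (design[:, b] == t)))
              for s in (+1, -1) for t in (+1, -1)]
    assert counts == [3, 3, 3, 3]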
For the case of more than two levels, Plackett and Burman rediscovered designs that had previously been given byRaj Chandra BoseandK. Kishenat theIndian Statistical Institute.[4]Plackett and Burman give specifics for designs having a number of experiments equal to the number of levelsLto some integer power, forL= 3, 4, 5, or 7.
When interactions between factors are not negligible, they are often confounded in Plackett–Burman designs with the main effects, meaning that the designs do not permit one to distinguish between certain main effects and certain interactions. This is calledconfounding.
In 1993, Dennis Lin described a construction method via half-fractions of Plackett–Burman designs, using one column to take half of the rest of the columns.[5]The resulting matrix, minus that column, is a "supersaturated design"[6]for finding significant first order effects, under the assumption that few exist.
Box–Behnkendesigns can be made smaller, or very large ones constructed, by replacing thefractional factorialsandincomplete blockstraditionally used for plan and seed matrices, respectively, with Plackett–Burmans. For example, a quadratic design for 30 variables requires a 30-column PB plan matrix of zeroes and ones; the ones in each line are replaced using PB seed matrices of −1s and +1s (for 15 or 16 variables) wherever a one appears in the plan matrix, creating a 557-run design with values −1, 0, +1 to estimate the 496 parameters of a full quadratic model. Addingaxial pointsallows estimating univariate cubic and quartic effects.
By identifying certain columns with parameters to be estimated, Plackett–Burmans can also be used to construct mixed categorical and numerical designs, with interactions or high-order effects, requiring no more than 4 runs more than the number of model parameters to be estimated. Sort by the a − 1 columns assigned to categorical variable A and following columns, where A = 1 + int(a·i/(max(i) + 0.00001)), i = row number and a = A's number of values. Next sort on columns assigned to any other categorical variables and following columns, repeating as needed. Such designs, if large, may otherwise be incomputable by standard search techniques like D-optimality. For example, 13 variables averaging 3 values each could have well over a million combinations to search. To estimate the 105 parameters in a quadratic model of 13 variables, one must formally exclude from consideration or compute |X'X| for well over 10^6C10^2, i.e. 3^13C105, or roughly 10^484 matrices.
This article incorporatespublic domain materialfrom theNational Institute of Standards and Technology
|
https://en.wikipedia.org/wiki/Plackett%E2%80%93Burman_design
|
Asurrogate modelis an engineering method used when an outcome of interest cannot be easily measured or computed, so an approximatemathematical modelof the outcome is used instead. Most engineering design problems require experiments and/or simulations to evaluate design objective and constraint functions as a function of design variables. For example, in order to find the optimalairfoilshape for an aircraft wing, an engineer simulates the airflow around the wing for different shape variables (e.g., length, curvature, material, etc.). For many real-world problems, however, a single simulation can take many minutes, hours, or even days to complete. As a result, routine tasks such asdesign optimization,design space exploration,sensitivity analysisand "what-if" analysis become impossible since they require thousands or even millions of simulation evaluations.
One way of alleviating this burden is by constructing approximation models, known assurrogate models,metamodelsoremulators, that mimic the behavior of the simulation model as closely as possible while being computationally cheaper to evaluate. Surrogate models are constructed using a data-driven, bottom-up approach. The exact, inner working of the simulation code is not assumed to be known (or even understood), relying solely on the input-output behavior. A model is constructed based on modeling the response of the simulator to a limited number of intelligently chosen data points. This approach is also known as behavioral modeling orblack-boxmodeling, though the terminology is not always consistent. When only a single design variable is involved, the process is known ascurve fitting.
Though using surrogate models in lieu of experiments and simulations in engineering design is more common, surrogate modeling may be used in many other areas of science where there are expensive experiments and/or function evaluations.
The scientific challenge of surrogate modeling is the generation of a surrogate that is as accurate as possible, using as few simulation evaluations as possible. The process comprises three major steps which may be interleaved iteratively:
The accuracy of the surrogate depends on the number and location of samples (expensive experiments or simulations) in the design space. Variousdesign of experiments(DOE) techniques cater to different sources of errors, in particular, errors due to noise in the data or errors due to an improper surrogate model.
Popular surrogate modeling approaches are: polynomialresponse surfaces;kriging; more generalizedBayesianapproaches;[1]gradient-enhanced kriging(GEK);radial basis function;support vector machines;space mapping;[2]artificial neural networksandBayesian networks.[3]Other methods recently explored includeFouriersurrogate modeling[4][5]andrandom forests.[6]
For some problems, the nature of the true function is not knowna priori, and therefore it is not clear which surrogate model will be the most accurate one. In addition, there is no consensus on how to obtain the most reliable estimates of the accuracy of a given surrogate. Many other problems have known physics properties. In these cases, physics-based surrogates such asspace-mappingbased models are commonly used.[2][7]
Recently proposed comparison-based surrogate models (e.g., rankingsupport vector machines) forevolutionary algorithms, such asCMA-ES, allow preservation of some invariance properties of surrogate-assisted optimizers:[8]
An important distinction can be made between two different applications of surrogate models: design optimization and design space approximation (also known as emulation).
In surrogate model-based optimization, an initial surrogate is constructed using some of the available budgets of expensive experiments and/or simulations. The remaining experiments/simulations are run for designs which the surrogate model predicts may have promising performance. The process usually takes the form of the following search/update procedure.
Depending on the type of surrogate used and the complexity of the problem, the process may converge on alocalorglobal optimum, or perhaps none at all.[9]
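A hedged sketch of such a search/update loop, using a radial-basis-function surrogate from SciPy and a stand-in "expensive" function; the objective, evaluation budget and candidate-sampling strategy are illustrative assumptions.

import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_simulation(x):
    # Stand-in for the costly experiment or simulation (illustrative only)
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(1)
dim, budget, n_init = 2, 30, 10

X = rng.uniform(0.0, 1.0, (n_init, dim))          # 1. initial sampling plan
y = expensive_simulation(X)

for _ in range(budget - n_init):
    surrogate = RBFInterpolator(X, y)             # 2. (re)build the cheap surrogate
    cand = rng.uniform(0.0, 1.0, (2000, dim))     # 3. search the surrogate for promising designs
    x_new = cand[np.argmin(surrogate(cand))]
    y_new = expensive_simulation(x_new[None, :])  # 4. run the expensive model there
    X = np.vstack([X, x_new])                     # 5. update the data and repeat
    y = np.concatenate([y, y_new])

x_best = X[np.argmin(y)]

In practice, the candidate-selection step would often use an infill criterion (for example, expected improvement, as in Bayesian optimization) rather than the plain surrogate minimum, to balance exploration and exploitation.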
In design space approximation, one is not interested in finding the optimal parameter vector, but rather in the global behavior of the system. Here the surrogate is tuned to mimic the underlying model as closely as needed over the complete design space. Such surrogates are a useful, cheap way to gain insight into the global behavior of the system. Optimization can still occur as a post-processing step, although with no update procedure (see above), the optimum found cannot be validated.
SAEAs are an advanced class of optimization techniques that integrate evolutionary algorithms (EAs) with surrogate models. In traditional EAs, evaluating the fitness of candidate solutions often requires computationally expensive simulations or experiments. SAEAs address this challenge by building a surrogate model, which is a computationally inexpensive approximation of the objective function or constraint functions.
The surrogate model serves as a substitute for the actual evaluation process during the evolutionary search. It allows the algorithm to quickly estimate the fitness of new candidate solutions, thereby reducing the number of expensive evaluations needed. This significantly speeds up the optimization process, especially in cases where the objective function evaluations are time-consuming or resource-intensive.
SAEAs typically involve three main steps: (1) building the surrogate model using a set of initial sampled data points, (2) performing the evolutionary search using the surrogate model to guide the selection, crossover, and mutation operations, and (3) periodically updating the surrogate model with new data points generated during the evolutionary process to improve its accuracy.
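A compact sketch of these three steps (build, surrogate-guided search, periodic update), with a stand-in fitness function; the population sizes, mutation scheme and pre-screening size are illustrative assumptions rather than any particular published SAEA.

import numpy as np
from scipy.interpolate import RBFInterpolator

def fitness(x):                                   # expensive objective (stand-in)
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(2)
dim, pop_size, n_screen, gens = 3, 20, 5, 10

pop = rng.uniform(-1.0, 1.0, (pop_size, dim))
fit = fitness(pop)                                # step 1: initial expensive evaluations
X_all, y_all = pop.copy(), fit.copy()

for _ in range(gens):
    surrogate = RBFInterpolator(X_all, y_all)     # steps 1/3: (re)build the surrogate
    parents = pop[np.argsort(fit)][: pop_size // 2]
    children = (parents[rng.integers(0, len(parents), pop_size)]
                + 0.1 * rng.normal(size=(pop_size, dim)))             # variation (mutation)
    best_kids = children[np.argsort(surrogate(children))[:n_screen]]  # step 2: cheap pre-screening
    y_kids = fitness(best_kids)                   # only a few truly expensive evaluations
    X_all = np.vstack([X_all, best_kids])         # step 3: update the surrogate data
    y_all = np.concatenate([y_all, y_kids])
    pop = np.vstack([parents, best_kids])         # survivors form the next population
    fit = np.concatenate([np.sort(fit)[: pop_size // 2], y_kids])

best = X_all[np.argmin(y_all)]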
By balancing exploration (searching new areas in the solution space) and exploitation (refining known promising areas), SAEAs can efficiently find high-quality solutions to complex optimization problems. They have been successfully applied in various fields, including engineering design, machine learning, and computational finance, where traditional optimization methods may struggle due to the high computational cost of fitness evaluations.[12][13]
|
https://en.wikipedia.org/wiki/Surrogate_model
|
Bayesian optimizationis asequential designstrategy forglobal optimizationofblack-boxfunctions,[1][2][3]that does not assume any functional forms. It is usually employed to optimize expensive-to-evaluate functions. With the rise ofartificial intelligenceinnovation in the 21st century, Bayesian optimizations have found prominent use inmachine learningproblems foroptimizing hyperparameter values.[4][5]
The term is generally attributed toJonas Mockus[lt]and is coined in his work from a series of publications on global optimization in the 1970s and 1980s.[6][7][1]
The earliest idea of Bayesian optimization[8]sprang in 1964, from a paper by American applied mathematician Harold J. Kushner,[9]“A New Method of Locating the Maximum Point of an Arbitrary Multipeak Curve in the Presence of Noise”. Although not directly proposing Bayesian optimization, in this paper, he first proposed a new method of locating the maximum point of an arbitrary multipeak curve in a noisy environment. This method provided an important theoretical foundation for subsequent Bayesian optimization.
By the 1980s, the framework we now use for Bayesian optimization was explicitly established. In 1978, the Lithuanian scientist Jonas Mockus,[10]in his paper “The Application of Bayesian Methods for Seeking the Extremum”, discussed how to use Bayesian methods to find the extreme value of a function under various uncertain conditions. In this paper, Mockus first proposed the Expected Improvement principle (EI), one of the core sampling strategies of Bayesian optimization; the criterion balances exploration and exploitation by choosing the point that maximizes the expected improvement over the best value found so far. Because of the usefulness and profound impact of this principle, Jonas Mockus is widely regarded as the founder of Bayesian optimization. Although Expected Improvement (EI) is one of the earliest proposed core sampling strategies, it is not the only one; other acquisition functions such as the Probability of Improvement (PI) and the Upper Confidence Bound (UCB)[11]have since been developed.
In the 1990s, Bayesian optimization began to gradually transition from pure theory to real-world applications. In 1998, Donald R. Jones[12]and his coworkers published the paper “Efficient Global Optimization of Expensive Black-Box Functions”.[13]In this paper, they used Gaussian process (GP) surrogate models and elaborated on the Expected Improvement principle (EI) proposed by Jonas Mockus in 1978. Through the efforts of Donald R. Jones and his colleagues, Bayesian optimization began to be widely applied in fields such as computer science and engineering. However, the computational complexity of Bayesian optimization, relative to the computing power available at the time, still limited its development considerably.
In the 21st century, with the gradual rise of artificial intelligence and bionic robots, Bayesian optimization has been widely used in machine learning and deep learning, and has become an important tool for hyperparameter tuning.[14]Companies such as Google, Facebook and OpenAI have added Bayesian optimization to their deep learning frameworks to improve search efficiency. However, Bayesian optimization still faces many challenges; for example, because a Gaussian process[15]is used as the surrogate model, its training becomes very slow and computationally expensive when the amount of data is large. This makes it difficult for the method to scale to more complex problems such as drug development and medical experiments.
Bayesian optimization is used on problems of the formmaxx∈Xf(x){\textstyle \max _{x\in X}f(x)}, withX{\textstyle X}being the set of all possible parametersx{\textstyle x}, typically with less than or equal to 20dimensionsfor optimal usage (X→Rd∣d≤20{\textstyle X\rightarrow \mathbb {R} ^{d}\mid d\leq 20}), and whose membership can easily be evaluated. Bayesian optimization is particularly advantageous for problems wheref(x){\textstyle f(x)}is difficult to evaluate due to its computational cost. The objective function,f{\textstyle f}, is continuous and takes the form of some unknown structure, referred to as a "black box". Upon its evaluation, onlyf(x){\textstyle f(x)}is observed and itsderivativesare not evaluated.[17]
Since the objective function is unknown, the Bayesian strategy is to treat it as a random function and place apriorover it. The prior captures beliefs about the behavior of the function. After gathering the function evaluations, which are treated as data, the prior is updated to form theposterior distributionover the objective function. The posterior distribution, in turn, is used to construct an acquisition function (often also referred to as infill sampling criteria) that determines the next query point.
There are several methods used to define the prior/posterior distribution over the objective function. The most common two methods useGaussian processesin a method calledkriging. Another less expensive method uses theParzen-Tree Estimatorto construct two distributions for 'high' and 'low' points, and then finds the location that maximizes the expected improvement.[18]
Standard Bayesian optimization relies upon eachx∈X{\displaystyle x\in X}being easy to evaluate, and problems that deviate from this assumption are known asexotic Bayesian optimizationproblems. Optimization problems can become exotic if it is known that there is noise, the evaluations are being done in parallel, the quality of evaluations relies upon a tradeoff between difficulty and accuracy, the presence of random environmental conditions, or if the evaluation involves derivatives.[17]
Examples of acquisition functions include the probability of improvement (PI), expected improvement (EI), upper confidence bounds (UCB), Thompson sampling,
and hybrids of these.[19]They all trade-offexploration and exploitationso as to minimize the number of function queries. As such, Bayesian optimization is well suited for functions that are expensive to evaluate.
The maximum of the acquisition function is typically found by resorting to discretization or by means of an auxiliary optimizer. Acquisition functions are maximized using anumerical optimization technique, such asNewton's methodor quasi-Newton methods like theBroyden–Fletcher–Goldfarb–Shanno algorithm.
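A hedged end-to-end sketch of this loop, using a Gaussian-process posterior and the expected-improvement acquisition maximized over a simple discretization; the toy objective, kernel defaults, iteration budget and candidate grid are illustrative assumptions.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    # Expensive black-box function (illustrative stand-in), to be minimized
    return np.sin(3.0 * x) + x ** 2 - 0.7 * x

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, (5, 1))                 # initial evaluations
y = objective(X).ravel()

for _ in range(15):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)  # posterior over the objective
    cand = np.linspace(-2.0, 2.0, 500)[:, None]                # discretized search space
    mu, sigma = gp.predict(cand, return_std=True)
    f_best = y.min()
    z = (f_best - mu) / np.maximum(sigma, 1e-12)
    ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)     # expected improvement
    x_next = cand[np.argmax(ei)]                               # next query point
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).item())

x_best = X[np.argmin(y)]

Replacing the grid search over cand with a quasi-Newton optimizer started from several points gives the auxiliary-optimizer variant mentioned above.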
The approach has been applied to solve a wide range of problems,[20]includinglearning to rank,[21]computer graphicsand visual design,[22][23][24]robotics,[25][26][27][28]sensor networks,[29][30]automatic algorithm configuration,[31][32]automatic machine learningtoolboxes,[33][34][35]reinforcement learning,[36]planning, visual attention, architecture configuration indeep learning, static program analysis, experimentalparticle physics,[37][38]quality-diversity optimization,[39][40][41]chemistry, material design, and drug development.[17][42][43]
Bayesian optimization has been applied in the field of facial recognition.[44]The performance of the Histogram of Oriented Gradients (HOG) algorithm, a popular feature extraction method, heavily relies on its parameter settings. Optimizing these parameters can be challenging but crucial for achieving high accuracy.[44]A novel approach to optimize the HOG algorithm parameters and image size for facial recognition using a Tree-structured Parzen Estimator (TPE) based Bayesian optimization technique has been proposed.[44]This optimized approach has the potential to be adapted for other computer vision applications and contributes to the ongoing development of hand-crafted parameter-based feature extraction algorithms in computer vision.[44]
|
https://en.wikipedia.org/wiki/Bayesian_Optimization
|
Inmathematics, theabscissa(/æbˈsɪs.ə/; pluralabscissaeorabscissas) and theordinateare respectively the first and secondcoordinateof apointin aCartesian coordinate system:[1][2]the abscissa is the first coordinate, conventionally measured along the horizontal axis, and the ordinate is the second coordinate, conventionally measured along the vertical axis.
Together they form anordered pairwhich defines the location of a point in two-dimensionalrectangular space.
More technically, the abscissa of a point is the signed measure of its projection on the primary axis. Itsabsolute valueis the distance between the projection and theoriginof the axis, and itssignis given by the location on the projection relative to the origin (before: negative; after: positive). Similarly, the ordinate of a point is the signed measure of its projection on the secondary axis.In three dimensions, the third direction is sometimes referred to as theapplicate.[3]
Though the word "abscissa" (fromLatinlinea abscissa'a line cut off') has been used at least sinceDe Practica Geometrie(1220) byFibonacci(Leonardo of Pisa), its use in its modern sense may be due to Venetian mathematicianStefano degli Angeliin his workMiscellaneum Hyperbolicum, et Parabolicum(1659).[4]Historically, the term was used in the more general sense of a 'distance'.[5]
In his 1892 workVorlesungen über die Geschichte der Mathematik("Lectures on history of mathematics"), volume 2, Germanhistorian of mathematicsMoritz Cantorwrites:
Gleichwohl ist durch [Stefano degli Angeli] vermuthlich ein Wort in den mathematischen Sprachschatz eingeführt worden, welches gerade in der analytischen Geometrie sich als zukunftsreich bewährt hat. […] Wir kennen keine ältere Benutzung des WortesAbscissein lateinischen Originalschriften. Vielleicht kommt das Wort in Uebersetzungen derApollonischen Kegelschnittevor, wo Buch I Satz 20 vonἀποτεμνομέναιςdie Rede ist, wofür es kaum ein entsprechenderes lateinisches Wort alsabscissageben möchte.[6]
At the same time it was presumably by [Stefano degli Angeli] that a word was introduced into the mathematical vocabulary for which especially in analytic geometry the future proved to have much in store. […] We know of no earlier use of the wordabscissain Latin original texts. Maybe the word appears in translations of theApollonian conics, where [in] Book I, Chapter 20 there is mention ofἀποτεμνομέναις,for which there would hardly be a more appropriate Latin word thanabscissa.
The use of the wordordinateis related to the Latin phraselinea ordinata applicata'line applied parallel'.
In a somewhat obsolete variant usage, the abscissa of a point may also refer to any number that describes the point's location along some path, e.g. the parameter of aparametric equation.[1]Used in this way, the abscissa can be thought of as a coordinate-geometry analog to theindependent variablein amathematical modelor experiment (with any ordinates filling a role analogous todependent variables).
|
https://en.wikipedia.org/wiki/Abscissa_and_ordinate
|
In thestatisticaltheory of thedesign of experiments,blockingis the arranging ofexperimental unitsthat are similar to one another in groups (blocks) based on one or more variables. These variables are chosen carefully to minimize the effect of their variability on the observed outcomes. There are different ways that blocking can be implemented, resulting in different confounding effects. However, the different methods share the same purpose: to control variability introduced by specific factors that could influence the outcome of an experiment. The roots of blocking originated from the statistician,Ronald Fisher, following his development ofANOVA.[1]
The use of blocking in experimental design has an evolving history that spans multiple disciplines. The foundational concepts of blocking date back to the early 20th century with statisticians likeRonald A. Fisher. His work in developinganalysis of variance(ANOVA) set the groundwork for grouping experimental units to control for extraneous variables. Blocking evolved over the years, leading to the formalization of randomized block designs andLatin squaredesigns.[1]Today, blocking still plays a pivotal role in experimental design, and in recent years, advancements in statistical software and computational capabilities have allowed researchers to explore more intricate blocking designs.
We often want to reduce or eliminate the influence of someConfoundingfactor when designing an experiment. We can sometimes do this by "blocking", which involves the separate consideration of blocks of data that have different levels of exposure to that factor.[2]
In the examples listed above, a nuisance variable is a variable that is not the primary focus of the study but can affect the outcomes of the experiment.[3]They are considered potential sources of variability that, if not controlled or accounted for, may confound the interpretation between theindependent and dependent variables.
To address nuisance variables, researchers can employ different methods such as blocking or randomization. Blocking involves grouping experimental units based on levels of the nuisance variable to control for its influence.Randomizationhelps distribute the effects of nuisance variables evenly across treatment groups.
By using one of these methods to account for nuisance variables, researchers can enhance the internal validity of their experiments, ensuring that the effects observed are more likely attributable to the manipulated variables rather than extraneous influences.
In the first example provided above, the sex of the patient would be a nuisance variable. For example, consider if the drug was a diet pill and the researchers wanted to test the effect of the diet pills on weight loss. The explanatory variable is the diet pill and the response variable is the amount of weight loss. Although the sex of the patient is not the main focus of the experiment—the effect of the drug is—it is possible that the sex of the individual will affect the amount of weight lost.
In thestatisticaltheory of thedesign of experiments, blocking is the arranging ofexperimental unitsin groups (blocks) that are similar to one another. Typically, a blocking factor is a source ofvariabilitythat is not of primary interest to the experimenter.[3][4]
When studying probability theory the blocks method consists of splitting a sample into blocks (groups) separated by smaller subblocks so that the blocks can be considered almost independent.[5]The blocks method helps proving limit theorems in the case of dependent random variables.
The blocks method was introduced byS. Bernstein:[6]The method was successfully applied in the theory of sums of dependent random variables and inextreme value theory.[7][8][9]
In our previous diet pills example, a blocking factor could be the sex of a patient. We could put individuals into one of two blocks (male or female). And within each of the two blocks, we can randomly assign the patients to either the diet pill (treatment) or placebo pill (control). By blocking on sex, this source of variability is controlled, therefore, leading to greater interpretation of how the diet pills affect weight loss.
A nuisance factor is used as a blocking factor if every level of the primary factor occurs the same number of times with each level of the nuisance factor.[3]The analysis of the experiment will focus on the effect of varying levels of the primary factor within each block of the experiment.
The general rule is:
Blocking is used to remove the effects of a few of the most important nuisance variables. Randomization is then used to reduce the contaminating effects of the remaining nuisance variables. For important nuisance variables, blocking will yield higher significance in the variables of interest than randomizing.[10]
Implementing blocking inexperimental designinvolves a series of steps to effectively control for extraneous variables and enhance the precision of treatment effect estimates.
Identify potential factors that are not the primary focus of the study but could introduce variability.
Carefully choose blocking factors based on their relevance to the study as well as their potential to confound the primary factors of interest.[11]
There are consequences to partitioning a certain sized experiment into a certain number of blocks as the number of blocks determines the number ofconfoundedeffects.[12]
You may choose to randomly assign experimental units to treatment conditions within each block, which may help ensure that any unaccounted-for variability is spread evenly across treatment groups. However, depending on how you assign treatments to blocks, you may obtain a different number of confounded effects.[4] Because both the number and the identity of the confounded effects can be chosen, deliberately assigning treatments to blocks is superior to random assignment.[4]
By running a different design for eachreplicate, where a different effect gets confounded each time, the interaction effects are partially confounded instead of completely sacrificing one single effect.[4]Replication enhances the reliability of results and allows for a more robust assessment of treatment effects.[12]
One useful way to look at a randomized block experiment is to consider it as a collection ofcompletely randomizedexperiments, each run within one of the blocks of the total experiment.[3]
Suppose engineers at a semiconductor manufacturing facility want to test whether different wafer implant material dosages have a significant effect on resistivity measurements after a diffusion process taking place in a furnace. They have four different dosages they want to try and enough experimental wafers from the same lot to run three wafers at each of the dosages.
The nuisance factor they are concerned with is "furnace run" since it is known that each furnace run differs from the last and impacts many process parameters.
An ideal way to run this experiment would be to run all the 4x3=12 wafers in the same furnace run. That would eliminate the nuisance furnace factor completely. However, regular production wafers have furnace priority, and only a few experimental wafers are allowed into any furnace run at the same time.
A non-blocked way to run this experiment would be to run each of the twelve experimental wafers, in random order, one per furnace run. That would increase the experimental error of each resistivity measurement by the run-to-run furnace variability and make it more difficult to study the effects of the different dosages. The blocked way to run this experiment, assuming you can convince manufacturing to let you put four experimental wafers in a furnace run, would be to put four wafers with different dosages in each of three furnace runs. The only randomization would be choosing which of the three wafers with dosage 1 would go into furnace run 1, and similarly for the wafers with dosages 2, 3 and 4.
LetX1be dosage "level" andX2be the blocking factor furnace run. Then the experiment can be described as follows:
Before randomization, the design trials look like:
An alternate way of summarizing the design trials would be to use a 4x3 matrix whose 4 rows are the levels of the treatmentX1and whose columns are the 3 levels of the blocking variableX2. The cells in the matrix have indices that match theX1,X2combinations above.
By extension, note that the trials for any k-factor randomized block design are simply the cell indices of a k-dimensional matrix.
The model for a randomized block design with one nuisance variable is
Yij = μ + Ti + Bj + εij
where Yij is any observation for which X1 = i and X2 = j, μ is the general location parameter (the overall mean), Ti is the effect of having treatment level i (of factor X1), Bj is the effect of being in block j (of factor X2), and εij is the random error.
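As a rough sketch of how such a model might be fit, assuming the pandas and statsmodels packages are available (the dosage and furnace-run effects below are simulated, not real measurements):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)

# Simulate the wafer experiment: 4 dosages (treatment) x 3 furnace runs (blocks).
dosage_effect = {1: 0.0, 2: 0.5, 3: 1.0, 4: 1.5}   # hypothetical treatment effects T_i
run_effect = {1: -0.3, 2: 0.0, 3: 0.4}             # hypothetical block effects B_j

rows = []
for run, b in run_effect.items():
    for dose, t in dosage_effect.items():
        rows.append({"dose": dose, "run": run,
                     "resistivity": 10 + t + b + rng.normal(scale=0.2)})
df = pd.DataFrame(rows)

# Y_ij = mu + T_i + B_j + error: both factors enter as categorical terms.
fit = smf.ols("resistivity ~ C(dose) + C(run)", data=df).fit()
print(anova_lm(fit, typ=2))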
|
https://en.wikipedia.org/wiki/Blocking_(statistics)
|
Instatistics,latent variables(fromLatin:present participleoflateo'lie hidden'[citation needed]) arevariablesthat can only beinferredindirectly through amathematical modelfrom otherobservable variablesthat can be directlyobservedormeasured.[1]Suchlatent variable modelsare used in many disciplines, includingengineering,medicine,ecology,physics,machine learning/artificial intelligence,natural language processing,bioinformatics,chemometrics,demography,economics,management,political science,psychologyand thesocial sciences.
Latent variables may correspond to aspects of physical reality. These could in principle be measured, but may not be for practical reasons. Among the earliest expressions of this idea isFrancis Bacon'spolemictheNovum Organum, itself a challenge to the more traditional logic expressed inAristotle'sOrganon:
But the latent process of which we speak, is far from being obvious to men’s minds, beset as they now are. For we mean not the measures, symptoms, or degrees of any process which can be exhibited in the bodies themselves, but simply a continued process, which, for the most part, escapes the observation of the senses.
In this situation, the termhidden variablesis commonly used, reflecting the fact that the variables are meaningful, but not observable. Other latent variables correspond to abstract concepts, like categories, behavioral or mental states, or data structures. The termshypothetical variablesorhypothetical constructsmay be used in these situations.
The use of latent variables can serve toreduce the dimensionalityof data. Many observable variables can be aggregated in a model to represent an underlying concept, making it easier to understand the data. In this sense, they serve a function similar to that of scientific theories. At the same time, latent variables link observable "sub-symbolic" data in the real world to symbolic data in the modeled world.
Latent variables, as created by factor analytic methods, generally represent "shared" variance, or the degree to which variables "move" together. Variables that have no correlation cannot result in a latent construct based on the commonfactor model.[4]
Examples of latent variables from the field ofeconomicsincludequality of life, business confidence, morale, happiness and conservatism: these are all variables which cannot be measured directly. However, by linking these latent variables to other, observable variables, the values of the latent variables can be inferred from measurements of the observable variables. Quality of life is a latent variable which cannot be measured directly, so observable variables are used to infer quality of life. Observable variables to measure quality of life include wealth, employment, environment, physical and mental health, education, recreation and leisure time, and social belonging.
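A rough sketch of recovering a single latent factor from several observed indicators, assuming scikit-learn is available (the "quality of life" construct, the indicators, and the loadings are entirely synthetic):

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 500

latent = rng.normal(size=n)                        # unobserved "quality of life"

# Observable indicators, each driven by the latent variable plus noise.
wealth = 0.8 * latent + rng.normal(scale=0.5, size=n)
health = 0.7 * latent + rng.normal(scale=0.5, size=n)
leisure = 0.6 * latent + rng.normal(scale=0.5, size=n)
X = np.column_stack([wealth, health, leisure])

fa = FactorAnalysis(n_components=1, random_state=0)
scores = fa.fit_transform(X)                       # inferred factor scores

# The inferred factor tracks the true latent variable (up to an arbitrary sign).
print(abs(np.corrcoef(scores[:, 0], latent)[0, 1]))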
Latent-variable methodology is used in many branches ofmedicine. A class of problems that naturally lend themselves to latent variables approaches arelongitudinal studieswhere the time scale (e.g. age of participant or time since study baseline) is not synchronized with the trait being studied. For such studies, an unobserved time scale that is synchronized with the trait being studied can be modeled as a transformation of the observed time scale using latent variables. Examples of this includedisease progression modelingandmodeling of growth(see box).
There exists a range of different model classes and methodology that make use of latent variables and allow inference in the presence of latent variables. Models include:
Analysis and inference methods include:
Bayesian statisticsis often used for inferring latent variables.
|
https://en.wikipedia.org/wiki/Latent_and_observable_variables
|
Instatistics, amediationmodel seeks to identify and explain the mechanism or process that underlies an observed relationship between anindependent variableand adependent variablevia the inclusion of a third hypothetical variable, known as amediator variable(also amediating variable,intermediary variable, orintervening variable).[1]Rather than a direct causal relationship between the independent variable and the dependent variable, a mediation model proposes that the independent variable influences the mediator variable, which in turn influences the dependent variable. Thus, the mediator variable serves to clarify the nature of the causal relationship between the independent and dependent variables.[2][3]
Mediation analyses are employed to understand a known relationship by exploring the underlying mechanism or process by which one variable influences another variable through a mediator variable.[4]In particular, mediation analysis can contribute to better understanding the relationship between an independent variable and a dependent variable when these variables do not have an obvious direct connection.
Baron and Kenny (1986) laid out several requirements that must be met to form a true mediation relationship.[5]They are outlined below using a real-world example. See the diagram above for a visual representation of the overall mediating relationship to be explained. The original steps are as follows.
The following example, drawn from Howell (2009),[6]explains each step of Baron and Kenny's requirements to understand further how a mediation effect is characterized. Step 1 and step 2 use simple regression analysis, whereas step 3 usesmultiple regression analysis.
Such findings would lead to the conclusion that your feelings of competence and self-esteem mediate the relationship between how you were parented and how confident you feel about parenting your own children.
If step 1 does not yield a significant result, one may still have grounds to move to step 2. Sometimes there is actually a significant relationship between the independent and dependent variables, but because of small sample sizes or other extraneous factors, there may not be enough power to detect the effect that actually exists.[7]
In the diagram shown above, the indirect effect is the product of path coefficients "A" and "B". The direct effect is the coefficient " C' ".
The direct effect measures the extent to which the dependent variable changes when the independent variable increases by one unit and the mediator variable remains unaltered. In contrast, the indirect effect measures the extent to which the dependent variable changes when the independent variable is held constant and the mediator variable changes by the amount it would have changed had the independent variable increased by one unit.[8][9]
In linear systems, the total effect is equal to the sum of the direct and indirect effects (C' + AB in the model above). In nonlinear models, the total effect is not generally equal to the sum of the direct and indirect effects, but to a modified combination of the two.[9]
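A minimal numerical sketch of these quantities in a linear setting, using only NumPy least squares (the path coefficients below are invented for illustration):

import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Simulated linear mediation: X -> M (path A), M -> Y (path B), plus a direct path C'.
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)               # true A = 0.5
y = 0.4 * m + 0.3 * x + rng.normal(size=n)     # true B = 0.4, true C' = 0.3

def ols(design, target):
    # Ordinary least squares coefficients (variables are centered, so no intercept).
    return np.linalg.lstsq(design, target, rcond=None)[0]

a = ols(x[:, None], m)[0]                      # regress M on X
c_prime, b = ols(np.column_stack([x, m]), y)   # regress Y on X and M
total = ols(x[:, None], y)[0]                  # regress Y on X alone

print("indirect (A*B):", a * b)
print("direct (C'):   ", c_prime)
print("total vs C'+AB:", total, c_prime + a * b)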
A mediator variable can either account for all or some of the observed relationship between two variables.
Maximum evidence for mediation, also called full mediation, would occur if inclusion of the mediation variable drops the relationship between the independent variable and dependent variable (see pathway c′in diagram above) to zero.
Partial mediation maintains that the mediating variable accounts for some, but not all, of the relationship between the independent variable and dependent variable. Partial mediation implies that there is not only a significant relationship between the mediator and the dependent variable, but also some direct relationship between the independent and dependent variable.
In order for either full or partial mediation to be established, the reduction in variance explained by the independent variable must be significant as determined by one of several tests, such as theSobel test.[10]The effect of an independent variable on the dependent variable can become nonsignificant when the mediator is introduced simply because a trivial amount of variance is explained (i.e., not true mediation). Thus, it is imperative to show a significant reduction in variance explained by the independent variable before asserting either full or partial mediation. It is possible to have statistically significant indirect effects in the absence of a total effect.[11]This can be explained by the presence of several mediating paths that cancel each other out, and become noticeable when one of the cancelling mediators is controlled for. This implies that the terms 'partial' and 'full' mediation should always be interpreted relative to the set of variables that are present in the model. In all cases, the operation of "fixing a variable" must be distinguished from that of "controlling for a variable," which has been inappropriately used in the literature.[8][12]The former stands for physically fixing, while the latter stands for conditioning on, adjusting for, or adding to the regression model. The two notions coincide only when all error terms (not shown in the diagram) are statistically uncorrelated. When errors are correlated, adjustments must be made to neutralize those correlations before embarking on mediation analysis (seeBayesian network).
Sobel's test[10]is performed to determine if the relationship between the independent variable and dependent variable has been significantly reduced after inclusion of the mediator variable. In other words, this test assesses whether a mediation effect is significant. It examines the relationship between the independent variable and the dependent variable compared to the relationship between the independent variable and dependent variable including the mediation factor.
The Sobel test is more accurate than the Baron and Kenny steps explained above; however, it does have low statistical power. As such, large sample sizes are required in order to have sufficient power to detect significant effects. This is because the key assumption of Sobel's test is the assumption of normality. Because Sobel's test evaluates a given sample on the normal distribution, small sample sizes and skewness of the sampling distribution can be problematic (seeNormal distributionfor more details). Thus, the rule of thumb as suggested by MacKinnon et al., (2002)[13]is that a sample size of 1000 is required to detect a small effect, a sample size of 100 is sufficient in detecting a medium effect, and a sample size of 50 is required to detect a large effect.
The equation for Sobel's test statistic is:[14]
z = ab / √(b²·SEa² + a²·SEb²)
where a and SEa are the estimate and standard error of the path from the independent variable to the mediator, and b and SEb are the estimate and standard error of the path from the mediator to the dependent variable (controlling for the independent variable).
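A small sketch of this computation in Python (the coefficients and standard errors plugged in below are placeholders, not estimates from any real data):

import math

def sobel_z(a, se_a, b, se_b):
    # Sobel test statistic for the indirect effect a*b.
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

z = sobel_z(a=0.50, se_a=0.10, b=0.40, se_b=0.12)   # placeholder path estimates
p = math.erfc(abs(z) / math.sqrt(2))                # two-sided normal p-value
print(z, p)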
The bootstrapping method provides some advantages over Sobel's test, primarily an increase in power. The Preacher and Hayes bootstrapping method is a non-parametric test and does not impose the assumption of normality. Therefore, if the raw data are available, the bootstrap method is recommended.[14] Bootstrapping involves repeatedly randomly sampling observations with replacement from the data set to compute the desired statistic in each resample. Computing over hundreds, or thousands, of bootstrap resamples provides an approximation of the sampling distribution of the statistic of interest. The Preacher–Hayes method provides point estimates and confidence intervals by which one can assess the significance or nonsignificance of a mediation effect. Point estimates reveal the mean over the number of bootstrapped samples, and if zero does not fall within the resulting confidence interval, one can confidently conclude that there is a significant mediation effect to report.
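A sketch of the resampling idea with NumPy, using a simple simulated linear mediation model rather than any particular published macro (the simulated indirect effect is 0.5 × 0.4 = 0.2):

import numpy as np

rng = np.random.default_rng(3)
n = 300

x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + 0.3 * x + rng.normal(size=n)

def indirect(x, m, y):
    a = np.linalg.lstsq(x[:, None], m, rcond=None)[0][0]               # X -> M
    b = np.linalg.lstsq(np.column_stack([x, m]), y, rcond=None)[0][1]  # M -> Y given X
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)     # resample rows with replacement
    boot.append(indirect(x[idx], m[idx], y[idx]))

low, high = np.percentile(boot, [2.5, 97.5])
print("95% bootstrap CI for the indirect effect:", low, high)

If the interval excludes zero, the indirect effect would be declared significant under this procedure.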
As outlined above, there are a few different options one can choose from to evaluate a mediation model.
Bootstrapping[15][16]is becoming the most popular method of testing mediation because it does not require the normality assumption to be met, and because it can be effectively utilized with smaller sample sizes (N< 25). However, mediation continues to be most frequently determined using the logic of Baron and Kenny[17]or theSobel test. It is becoming increasingly more difficult to publish tests of mediation based purely on the Baron and Kenny method or tests that make distributional assumptions such as the Sobel test. Thus, it is important to consider your options when choosing which test to conduct.[11]
While the concept of mediation as defined within psychology is theoretically appealing, the methods used to study mediation empirically have been challenged by statisticians and epidemiologists[8][12][18]and interpreted formally.[9]
Hayes (2009) critiqued Baron and Kenny's mediation steps approach,[11]and as of 2019,David A. Kennyon his website stated that mediation can exist in the absence of a 'significant' total effect (sometimes referred to as "inconsistent mediation"), and therefore step 1 of the original 1986 approach may not be needed. Later publications by Hayes questioned the concepts of full mediation and partial mediation, and advocated for the abandonment of these terms and of the steps in classical (1986) mediation.
Experimental approaches to mediation must be carried out with caution. First, it is important to have strong theoretical support for the exploratory investigation of a potential mediating variable.
A criticism of a mediation approach rests on the ability to manipulate and measure a mediating variable. Thus, one must be able to manipulate the proposed mediator in an acceptable and ethical fashion. As such, one must be able to measure the intervening process without interfering with the outcome. The mediator must also be able to establish construct validity of manipulation.
One of the most common criticisms of the measurement-of-mediation approach is that it is ultimately a correlational design. Consequently, it is possible that some other third variable, independent from the proposed mediator, could be responsible for the proposed effect. However, researchers have worked hard to provide counter-evidence to this disparagement. Specifically, the following counter-arguments have been put forward:[4]
Mediation can be an extremely useful and powerful statistical test; however, it must be used properly. It is important that the measures used to assess the mediator and the dependent variable are theoretically distinct and that the independent variable and mediator cannot interact. Should there be an interaction between the independent variable and the mediator one would have grounds to investigatemoderation.
Another model that is often tested is one in which competing variables in the model are alternative potential mediators or an unmeasured cause of the dependent variable. An additional variable in acausal modelmay obscure or confound the relationship between the independent and dependent variables. Potentialconfoundersare variables that may have a causal impact on both the independent variable and dependent variable. They include common sources of measurement error (as discussed above) as well as other influences shared by both the independent and dependent variables.
In experimental studies, there is a special concern about aspects of the experimental manipulation or setting that may account for study effects, rather than the motivating theoretical factor. Any of these problems may produce spurious relationships between the independent and dependent variables as measured. Ignoring a confounding variable may bias empirical estimates of the causal effect of the independent variable.
Asuppressor variableincreases the predictive validity of another variable when included in a regression equation. Suppression can occur when a single causal variable is related to an outcome variable through two separate mediator variables, and when one of those mediated effects is positive and one is negative. In such a case, each mediator variable suppresses or conceals the effect that is carried through the other mediator variable. For example, higher intelligence scores (a causal variable,A) may cause an increase in error detection (a mediator variable,B) which in turn may cause a decrease in errors made at work on an assembly line (an outcome variable,X); at the same time, intelligence could also cause an increase in boredom (C), which in turn may cause anincreasein errors (X). Thus, in one causal path intelligence decreases errors, and in the other it increases them. When neither mediator is included in the analysis, intelligence appears to have no effect or a weak effect on errors. However, when boredom is controlled intelligence will appear to decrease errors, and when error detection is controlled intelligence will appear to increase errors. If intelligence could be increased while only boredom was held constant, errors would decrease; if intelligence could be increased while holding only error detection constant, errors would increase.
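The cancelling-paths pattern described here can be reproduced in a small simulation (the effect sizes are arbitrary, chosen only so that the two mediated paths offset one another):

import numpy as np

rng = np.random.default_rng(4)
n = 50_000

intelligence = rng.normal(size=n)                                  # A
detection = 0.7 * intelligence + rng.normal(size=n)                # B: error detection
boredom = 0.7 * intelligence + rng.normal(size=n)                  # C: boredom
errors = -0.5 * detection + 0.5 * boredom + rng.normal(size=n)     # X: errors made

def ols(design, target):
    return np.linalg.lstsq(design, target, rcond=None)[0]

print(ols(intelligence[:, None], errors))                          # marginal effect is near zero
print(ols(np.column_stack([intelligence, boredom]), errors))       # controlling for boredom: negative
print(ols(np.column_stack([intelligence, detection]), errors))     # controlling for detection: positive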
In general, the omission of suppressors or confounders will lead to either an underestimation or an overestimation of the effect ofAonX, thereby either reducing or artificially inflating the magnitude of a relationship between two variables.
Other important third variables aremoderators. Moderators are variables that can make the relationship between two variables either stronger or weaker. Such variables further characterize interactions in regression by affecting the direction and/or strength of the relationship betweenXandY. A moderating relationship can be thought of as aninteraction. It occurs when the relationship between variables A and B depends on the level of C. Seemoderationfor further discussion.
Mediation andmoderationcan co-occur in statistical models. It is possible to mediate moderation and moderate mediation.
Moderated mediationis when the effect of the treatmentAon the mediator and/or the partial effectBon the dependent variable depend in turn on levels of another variable (moderator). Essentially, in moderated mediation, mediation is first established, and then one investigates if the mediation effect that describes the relationship between the independent variable and dependent variable is moderated by different levels of another variable (i.e., a moderator). This definition has been outlined by Muller, Judd, and Yzerbyt (2005)[20]and Preacher, Rucker, and Hayes (2007).[21]
There are five possible models of moderated mediation, as illustrated in the diagrams below.[20]
In addition to the models mentioned above, a new variable can also exist which moderates the relationship between the independent variable and the mediator (the A path) while at the same time moderating the relationship between the independent variable and the dependent variable (the C path).[1]
Mediated moderation is a variant of both moderation and mediation. This is where there is initially overall moderation and the direct effect of the moderator variable on the outcome is mediated. The main difference between mediated moderation and moderated mediation is that for the former there is initial (overall) moderation and this effect is mediated and for the latter there is no moderation but the effect of either the treatment on the mediator (pathA) is moderated or the effect of the mediator on the outcome (pathB) is moderated.[20]
In order to establish mediated moderation, one must first establishmoderation, meaning that the direction and/or the strength of the relationship between the independent and dependent variables (pathC) differs depending on the level of a third variable (the moderator variable). Researchers next look for the presence of mediated moderation when they have a theoretical reason to believe that there is a fourth variable that acts as the mechanism or process that causes the relationship between the independent variable and the moderator (pathA) or between the moderator and the dependent variable (pathC).
The following is a published example of mediated moderation in psychological research.[22]Participants were presented with an initial stimulus (a prime) that made them think of morality or made them think of might. They then participated in thePrisoner's Dilemma Game(PDG), in which participants pretend that they and their partner in crime have been arrested, and they must decide whether to remain loyal to their partner or to compete with their partner and cooperate with the authorities. The researchers found that prosocial individuals were affected by the morality and might primes, whereas proself individuals were not. Thus,social value orientation(proself vs. prosocial) moderated the relationship between the prime (independent variable: morality vs. might) and the behaviour chosen in the PDG (dependent variable: competitive vs. cooperative).
The researchers next looked for the presence of a mediated moderation effect. Regression analyses revealed that the type of prime (morality vs. might) mediated the moderating relationship of participants’social value orientationon PDG behaviour. Prosocial participants who experienced the morality prime expected their partner to cooperate with them, so they chose to cooperate themselves. Prosocial participants who experienced the might prime expected their partner to compete with them, which made them more likely to compete with their partner and cooperate with the authorities. In contrast, participants with a pro-self social value orientation always acted competitively.
Muller, Judd, and Yzerbyt (2005)[20]outline three fundamental models that underlie both moderated mediation and mediated moderation.Morepresents the moderator variable(s),Merepresents the mediator variable(s), andεirepresents the measurement error of each regression equation.
Moderation of the relationship between the independent variable (X) and the dependent variable (Y), also called the overall treatment effect (pathCin the diagram).
Moderation of the relationship between the independent variable and the mediator (pathA).
Moderation of both the relationship between the independent and dependent variables (pathA) and the relationship between the mediator and the dependent variable (pathB).
Mediation analysis quantifies the extent to which a variable participates in the transmittance of change from a cause to its effect. It is inherently a causal notion, hence it cannot be defined in statistical terms. Traditionally, however, the bulk of mediation analysis has been conducted within the confines of linear regression, with statistical terminology masking the causal character of the relationships involved. This led to difficulties, biases, and limitations that have been alleviated by modern methods of causal analysis, based on causal diagrams and counterfactual logic.
The source of these difficulties lies in defining mediation in terms of changes induced by adding a third variable into a regression equation. Such statistical changes are epiphenomena which sometimes accompany mediation but, in general, fail to capture the causal relationships that mediation analysis aims to quantify.
The basic premise of the causal approach is that it is not always appropriate to "control" for the mediator M when we seek to estimate the direct effect of X on Y (see the Figure above). The classical rationale for "controlling" for M is that, if we succeed in preventing M from changing, then whatever changes we measure in Y are attributable solely to variations in X, and we are then justified in proclaiming the observed effect as the "direct effect of X on Y." Unfortunately, "controlling for M" does not physically prevent M from changing; it merely narrows the analyst's attention to cases of equal M values. Moreover, the language of probability theory does not possess the notation to express the idea of "preventing M from changing" or "physically holding M constant". The only operator probability provides is "conditioning", which is what we do when we "control" for M, or add M as a regressor in the equation for Y. The result is that, instead of physically holding M constant (say at M = m) and comparing Y for units under X = 1 to those under X = 0, we allow M to vary but ignore all units except those in which M achieves the value M = m. These two operations are fundamentally different, and yield different results,[23][24] except in the case of no omitted variables. Improper conditioning of mediated effects can be a type of bad control.
To illustrate, assume that the error terms of M and Y are correlated. Under such conditions, the structural coefficients B and A (between M and Y and between Y and X) can no longer be estimated by regressing Y on X and M. In fact, the regression slopes may both be nonzero even when C is zero.[25] This has two consequences. First, new strategies must be devised for estimating the structural coefficients A, B and C. Second, the basic definitions of direct and indirect effects must go beyond regression analysis, and should invoke an operation that mimics "fixing M", rather than "conditioning on M".
Such an operator, denoted do(M = m), was defined in Pearl (1994)[24] and it operates by removing the equation of M and replacing it by a constant m. For example, if the basic mediation model consists of the equations:
X = f(ε1),   M = g(X, ε2),   Y = h(X, M, ε3),
then after applying the operator do(M = m) the model becomes:
X = f(ε1),   M = m,   Y = h(X, m, ε3)
and after applying the operator do(X = x) the model becomes:
X = x,   M = g(x, ε2),   Y = h(x, M, ε3)
where the functions f and g, as well as the distributions of the error terms ε1 and ε3, remain unaltered. If we further rename the variables M and Y resulting from do(X = x) as M(x) and Y(x), respectively, we obtain what came to be known as "potential outcomes"[26] or "structural counterfactuals".[27] These new variables provide convenient notation for defining direct and indirect effects. In particular, four types of effects have been defined for the transition from X = 0 to X = 1:
(a) Total effect – TE = E[Y(1) − Y(0)]
(b) Controlled direct effect – CDE(m) = E[Y(1, m) − Y(0, m)]
(c) Natural direct effect – NDE = E[Y(1, M(0)) − Y(0, M(0))]
(d) Natural indirect effect – NIE = E[Y(0, M(1)) − Y(0, M(0))]
where E[ ] stands for expectation taken over the error terms.
These effects have the following interpretations:
A controlled version of the indirect effect does not exist because there is no way of disabling the direct effect by fixing a variable to a constant.
According to these definitions the total effect can be decomposed as the sum
TE = NDE − NIEr
where NIEr stands for the natural indirect effect under the reverse transition, from X = 1 to X = 0; the decomposition becomes additive in linear systems, where reversal of transitions entails sign reversal.
The power of these definitions lies in their generality; they are applicable to models with arbitrary nonlinear interactions, arbitrary dependencies among the disturbances, and both continuous and categorical variables.
In linear analysis, all effects are determined by sums of products of structural coefficients, giving
Therefore, all effects are estimable whenever the model is identified. In non-linear systems, more stringent conditions are needed for estimating the direct and indirect effects.[9][28][29]For example, if no confounding exists,
(i.e., ε1, ε2, and ε3are mutually independent) the following formulas can be derived:[9]
The last two equations are called Mediation Formulas[30][31][32] and have become the target of estimation in many studies of mediation.[28][29][31][32] They give distribution-free expressions for direct and indirect effects and demonstrate that, despite the arbitrary nature of the error distributions and the functions f, g, and h, mediated effects can nevertheless be estimated from data using regression. The analyses of moderated mediation and mediating moderators fall out as special cases of causal mediation analysis, and the mediation formulas identify how various interaction coefficients contribute to the necessary and sufficient components of mediation.[29][30]
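A rough Monte Carlo sketch of the natural direct and indirect effects under the no-confounding assumption, using arbitrary linear choices for the structural functions g and h (this simulates the counterfactual definitions directly on a hypothetical model; it is not an estimator applied to data):

import random

def g(x, e2):
    return 0.5 * x + e2              # M's structural equation (assumed)

def h(x, m, e3):
    return 0.3 * x + 0.4 * m + e3    # Y's structural equation (assumed)

N = 200_000
nde = nie = 0.0
for _ in range(N):
    e2, e3 = random.gauss(0, 1), random.gauss(0, 1)
    m0, m1 = g(0, e2), g(1, e2)                  # M(0) and M(1) share the same error term
    nde += h(1, m0, e3) - h(0, m0, e3)           # change X, hold M at its X = 0 value
    nie += h(0, m1, e3) - h(0, m0, e3)           # hold X at 0, shift M(0) -> M(1)

print("NDE ~", nde / N)   # expected ~ 0.3 for this model
print("NIE ~", nie / N)   # expected ~ 0.5 * 0.4 = 0.2 for this model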
Assume the model takes the form
M = b0 + b1X + ε2
Y = c0 + c1X + c2M + c3XM + ε3
where the parameterc3{\displaystyle c_{3}}quantifies the degree to whichMmodifies the effect ofXonY. Even when all parameters are estimated from data, it is still not obvious what combinations of parameters measure the direct and indirect effect ofXonY, or, more practically, how to assess the fraction of the total effectTE{\displaystyle TE}that isexplainedby mediation and the fraction ofTE{\displaystyle TE}that isowedto mediation. In linear analysis, the former fraction is captured by the productb1c2/TE{\displaystyle b_{1}c_{2}/TE}, the latter by the difference(TE−c1)/TE{\displaystyle (TE-c_{1})/TE}, and the two quantities coincide. In the presence of interaction, however, each fraction demands a separate analysis, as dictated by the Mediation Formula, which yields:
Thus, the fraction of output response for which mediation would besufficientis
while the fraction for which mediation would benecessaryis
These fractions involve non-obvious combinations of the model's parameters, and can be constructed mechanically with the help of the Mediation Formula. Significantly, due to interaction, a direct effect can be sustained even when the parameterc1{\displaystyle c_{1}}vanishes and, moreover, a total effect can be sustained even when both the direct and indirect effects vanish. This illustrates that estimating parameters in isolation tells us little about the effect of mediation and, more generally, mediation and moderation are intertwined and cannot be assessed separately.
As of 19 June 2014, this article is derived in whole or in part fromCausal Analysis in Theory and Practice. The copyright holder has licensed the content in a manner that permits reuse underCC BY-SA 3.0andGFDL. All relevant terms must be followed.[dead link]
|
https://en.wikipedia.org/wiki/Mediator_variable
|
Inmathematics, theinverse trigonometric functions(occasionally also calledantitrigonometric,[1]cyclometric,[2]orarcusfunctions[3]) are theinverse functionsof thetrigonometric functions, under suitably restricteddomains. Specifically, they are the inverses of thesine,cosine,tangent,cotangent,secant, andcosecantfunctions,[4]and are used to obtain an angle from any of the angle's trigonometric ratios. Inverse trigonometric functions are widely used inengineering,navigation,physics, andgeometry.
Several notations for the inverse trigonometric functions exist. The most common convention is to name inverse trigonometric functions using an arc- prefix:arcsin(x),arccos(x),arctan(x), etc.[1](This convention is used throughout this article.) This notation arises from the following geometric relationships:[citation needed]when measuring in radians, an angle ofθradians will correspond to anarcwhose length isrθ, whereris the radius of the circle. Thus in theunit circle, the cosine of x function is both the arc and the angle, because the arc of a circle of radius 1 is the same as the angle. Or, "the arc whose cosine isx" is the same as "the angle whose cosine isx", because the length of the arc of the circle in radii is the same as the measurement of the angle in radians.[5]In computer programming languages, the inverse trigonometric functions are often called by the abbreviated formsasin,acos,atan.[6]
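For instance, Python's standard math module uses the abbreviated names mentioned above:

import math

print(math.asin(0.5))    # arcsin(0.5) = pi/6 (approx. 0.5236)
print(math.acos(0.5))    # arccos(0.5) = pi/3 (approx. 1.0472)
print(math.atan(1.0))    # arctan(1)   = pi/4 (approx. 0.7854)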
The notationssin−1(x),cos−1(x),tan−1(x), etc., as introduced byJohn Herschelin 1813,[7][8]are often used as well in English-language sources,[1]much more than the alsoestablishedsin[−1](x),cos[−1](x),tan[−1](x)– conventions consistent with the notation of aninverse function, that is useful (for example) to define themultivaluedversion of each inverse trigonometric function:tan−1(x)={arctan(x)+πk∣k∈Z}.{\displaystyle \tan ^{-1}(x)=\{\arctan(x)+\pi k\mid k\in \mathbb {Z} \}~.}However, this might appear to conflict logically with the common semantics for expressions such assin2(x)(although onlysin2x, without parentheses, is the really common use), which refer to numeric power rather than function composition, and therefore may result in confusion between notation for thereciprocal(multiplicative inverse) andinverse function.[9]
The confusion is somewhat mitigated by the fact that each of the reciprocal trigonometric functions has its own name — for example,(cos(x))−1= sec(x). Nevertheless, certain authors advise against using it, since it is ambiguous.[1][10]Another precarious convention used by a small number of authors is to use anuppercasefirst letter, along with a “−1” superscript:Sin−1(x),Cos−1(x),Tan−1(x), etc.[11]Although it is intended to avoid confusion with thereciprocal, which should be represented bysin−1(x),cos−1(x), etc., or, better, bysin−1x,cos−1x, etc., it in turn creates yet another major source of ambiguity, especially since many popular high-level programming languages (e.g.MathematicaandMAGMA) use those very same capitalised representations for the standard trig functions, whereas others (Python,SymPy,NumPy,Matlab,MAPLE, etc.) use lower-case.
Hence, since 2009, theISO 80000-2standard has specified solely the "arc" prefix for the inverse functions.
Since none of the six trigonometric functions areone-to-one, they must be restricted in order to have inverse functions. Therefore, the resultrangesof the inverse functions are proper (i.e. strict)subsetsof the domains of the original functions.
For example, usingfunctionin the sense ofmultivalued functions, just as thesquare rootfunctiony=x{\displaystyle y={\sqrt {x}}}could be defined fromy2=x,{\displaystyle y^{2}=x,}the functiony=arcsin(x){\displaystyle y=\arcsin(x)}is defined so thatsin(y)=x.{\displaystyle \sin(y)=x.}For a given real numberx,{\displaystyle x,}with−1≤x≤1,{\displaystyle -1\leq x\leq 1,}there are multiple (in fact,countably infinitelymany) numbersy{\displaystyle y}such thatsin(y)=x{\displaystyle \sin(y)=x}; for example,sin(0)=0,{\displaystyle \sin(0)=0,}but alsosin(π)=0,{\displaystyle \sin(\pi )=0,}sin(2π)=0,{\displaystyle \sin(2\pi )=0,}etc. When only one value is desired, the function may be restricted to itsprincipal branch. With this restriction, for eachx{\displaystyle x}in the domain, the expressionarcsin(x){\displaystyle \arcsin(x)}will evaluate only to a single value, called itsprincipal value. These properties apply to all the inverse trigonometric functions.
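A small numerical illustration of the principal value versus the full (multivalued) solution set, in plain Python:

import math

x = 0.5
principal = math.asin(x)          # principal value, lies in [-pi/2, pi/2]

# Every real solution of sin(y) = x has one of the two forms below.
solutions = [principal + 2 * math.pi * k for k in (-1, 0, 1)]
solutions += [math.pi - principal + 2 * math.pi * k for k in (-1, 0, 1)]

for y in sorted(solutions):
    print(round(y, 4), round(math.sin(y), 4))   # each y satisfies sin(y) = 0.5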
The principal inverses are listed in the following table.
Note: Some authors define the range of arcsecant to be(0≤y<π2{\textstyle 0\leq y<{\frac {\pi }{2}}}orπ≤y<3π2{\textstyle \pi \leq y<{\frac {3\pi }{2}}}),[12]because the tangent function is nonnegative on this domain. This makes some computations more consistent. For example, using this range,tan(arcsec(x))=x2−1,{\displaystyle \tan(\operatorname {arcsec}(x))={\sqrt {x^{2}-1}},}whereas with the range(0≤y<π2{\textstyle 0\leq y<{\frac {\pi }{2}}}orπ2<y≤π{\textstyle {\frac {\pi }{2}}<y\leq \pi }),we would have to writetan(arcsec(x))=±x2−1,{\displaystyle \tan(\operatorname {arcsec}(x))=\pm {\sqrt {x^{2}-1}},}since tangent is nonnegative on0≤y<π2,{\textstyle 0\leq y<{\frac {\pi }{2}},}but nonpositive onπ2<y≤π.{\textstyle {\frac {\pi }{2}}<y\leq \pi .}For a similar reason, the same authors define the range of arccosecant to be(−π<y≤−π2{\textstyle (-\pi <y\leq -{\frac {\pi }{2}}}or0<y≤π2).{\textstyle 0<y\leq {\frac {\pi }{2}}).}
Ifxis allowed to be acomplex number, then the range ofyapplies only to its real part.
The table below displays names and domains of the inverse trigonometric functions along with therangeof their usualprincipal valuesinradians.
The symbolR=(−∞,∞){\displaystyle \mathbb {R} =(-\infty ,\infty )}denotes the set of allreal numbersandZ={…,−2,−1,0,1,2,…}{\displaystyle \mathbb {Z} =\{\ldots ,\,-2,\,-1,\,0,\,1,\,2,\,\ldots \}}denotes the set of allintegers. The set of all integer multiples ofπ{\displaystyle \pi }is denoted by
πZ:={πn:n∈Z}={…,−2π,−π,0,π,2π,…}.{\displaystyle \pi \mathbb {Z} ~:=~\{\pi n\;:\;n\in \mathbb {Z} \}~=~\{\ldots ,\,-2\pi ,\,-\pi ,\,0,\,\pi ,\,2\pi ,\,\ldots \}.}
The symbol∖{\displaystyle \,\setminus \,}denotesset subtractionso that, for instance,R∖(−1,1)=(−∞,−1]∪[1,∞){\displaystyle \mathbb {R} \setminus (-1,1)=(-\infty ,-1]\cup [1,\infty )}is the set of points inR{\displaystyle \mathbb {R} }(that is, real numbers) that arenotin the interval(−1,1).{\displaystyle (-1,1).}
TheMinkowski sumnotationπZ+(0,π){\textstyle \pi \mathbb {Z} +(0,\pi )}andπZ+(−π2,π2){\displaystyle \pi \mathbb {Z} +{\bigl (}{-{\tfrac {\pi }{2}}},{\tfrac {\pi }{2}}{\bigr )}}that is used above to concisely write the domains ofcot,csc,tan,andsec{\displaystyle \cot ,\csc ,\tan ,{\text{ and }}\sec }is now explained.
Domain of cotangentcot{\displaystyle \cot }and cosecantcsc{\displaystyle \csc }:
The domains ofcot{\displaystyle \,\cot \,}andcsc{\displaystyle \,\csc \,}are the same. They are the set of all anglesθ{\displaystyle \theta }at whichsinθ≠0,{\displaystyle \sin \theta \neq 0,}i.e. all real numbers that arenotof the formπn{\displaystyle \pi n}for some integern,{\displaystyle n,}
πZ+(0,π)=⋯∪(−2π,−π)∪(−π,0)∪(0,π)∪(π,2π)∪⋯=R∖πZ{\displaystyle {\begin{aligned}\pi \mathbb {Z} +(0,\pi )&=\cdots \cup (-2\pi ,-\pi )\cup (-\pi ,0)\cup (0,\pi )\cup (\pi ,2\pi )\cup \cdots \\&=\mathbb {R} \setminus \pi \mathbb {Z} \end{aligned}}}
Domain of tangenttan{\displaystyle \tan }and secantsec{\displaystyle \sec }:
The domains oftan{\displaystyle \,\tan \,}andsec{\displaystyle \,\sec \,}are the same. They are the set of all anglesθ{\displaystyle \theta }at whichcosθ≠0,{\displaystyle \cos \theta \neq 0,}
πZ+(−π2,π2)=⋯∪(−3π2,−π2)∪(−π2,π2)∪(π2,3π2)∪⋯=R∖(π2+πZ){\displaystyle {\begin{aligned}\pi \mathbb {Z} +\left(-{\tfrac {\pi }{2}},{\tfrac {\pi }{2}}\right)&=\cdots \cup {\bigl (}{-{\tfrac {3\pi }{2}}},{-{\tfrac {\pi }{2}}}{\bigr )}\cup {\bigl (}{-{\tfrac {\pi }{2}}},{\tfrac {\pi }{2}}{\bigr )}\cup {\bigl (}{\tfrac {\pi }{2}},{\tfrac {3\pi }{2}}{\bigr )}\cup \cdots \\&=\mathbb {R} \setminus \left({\tfrac {\pi }{2}}+\pi \mathbb {Z} \right)\\\end{aligned}}}
Each of the trigonometric functions is periodic in the real part of its argument, running through all its values twice in each interval of2π:{\displaystyle 2\pi :}
This periodicity is reflected in the general inverses, wherek{\displaystyle k}is some integer.
The following table shows how inverse trigonometric functions may be used to solve equalities involving the six standard trigonometric functions.
It is assumed that the given valuesθ,{\displaystyle \theta ,}r,{\displaystyle r,}s,{\displaystyle s,}x,{\displaystyle x,}andy{\displaystyle y}all lie within appropriate ranges so that the relevant expressions below arewell-defined.
Note that "for somek∈Z{\displaystyle k\in \mathbb {Z} }" is just another way of saying "for someintegerk.{\displaystyle k.}"
The symbol⟺{\displaystyle \,\iff \,}islogical equalityand indicates that if the left hand side is true then so is the right hand side and, conversely, if the right hand side is true then so is the left hand side (see this footnote[note 1]for more details and an example illustrating this concept).
where the first four solutions can be written in expanded form as:
For example, ifcosθ=−1{\displaystyle \cos \theta =-1}thenθ=π+2πk=−π+2π(1+k){\displaystyle \theta =\pi +2\pi k=-\pi +2\pi (1+k)}for somek∈Z.{\displaystyle k\in \mathbb {Z} .}While ifsinθ=±1{\displaystyle \sin \theta =\pm 1}thenθ=π2+πk=−π2+π(k+1){\textstyle \theta ={\frac {\pi }{2}}+\pi k=-{\frac {\pi }{2}}+\pi (k+1)}for somek∈Z,{\displaystyle k\in \mathbb {Z} ,}wherek{\displaystyle k}will be even ifsinθ=1{\displaystyle \sin \theta =1}and it will be odd ifsinθ=−1.{\displaystyle \sin \theta =-1.}The equationssecθ=−1{\displaystyle \sec \theta =-1}andcscθ=±1{\displaystyle \csc \theta =\pm 1}have the same solutions ascosθ=−1{\displaystyle \cos \theta =-1}andsinθ=±1,{\displaystyle \sin \theta =\pm 1,}respectively. In all equations aboveexceptfor those just solved (i.e. except forsin{\displaystyle \sin }/cscθ=±1{\displaystyle \csc \theta =\pm 1}andcos{\displaystyle \cos }/secθ=−1{\displaystyle \sec \theta =-1}), the integerk{\displaystyle k}in the solution's formula is uniquely determined byθ{\displaystyle \theta }(for fixedr,s,x,{\displaystyle r,s,x,}andy{\displaystyle y}).
With the help ofinteger parityParity(h)={0ifhis even1ifhis odd{\displaystyle \operatorname {Parity} (h)={\begin{cases}0&{\text{if }}h{\text{ is even }}\\1&{\text{if }}h{\text{ is odd }}\\\end{cases}}}it is possible to write a solution tocosθ=x{\displaystyle \cos \theta =x}that doesn't involve the "plus or minus"±{\displaystyle \,\pm \,}symbol:
θ = (−1)^h arccos(x) + πh + π Parity(h) for some h ∈ ℤ.
And similarly for the secant function,
θ = (−1)^h arcsec(x) + πh + π Parity(h) for some h ∈ ℤ,
whereπh+πParity(h){\displaystyle \pi h+\pi \operatorname {Parity} (h)}equalsπh{\displaystyle \pi h}when the integerh{\displaystyle h}is even, and equalsπh+π{\displaystyle \pi h+\pi }when it's odd.
The solutions tocosθ=x{\displaystyle \cos \theta =x}andsecθ=x{\displaystyle \sec \theta =x}involve the "plus or minus" symbol±,{\displaystyle \,\pm ,\,}whose meaning is now clarified. Only the solution tocosθ=x{\displaystyle \cos \theta =x}will be discussed since the discussion forsecθ=x{\displaystyle \sec \theta =x}is the same.
We are givenx{\displaystyle x}between−1≤x≤1{\displaystyle -1\leq x\leq 1}and we know that there is an angleθ{\displaystyle \theta }in some interval that satisfiescosθ=x.{\displaystyle \cos \theta =x.}We want to find thisθ.{\displaystyle \theta .}The table above indicates that the solution isθ=±arccosx+2πkfor somek∈Z{\displaystyle \,\theta =\pm \arccos x+2\pi k\,\quad {\text{ for some }}k\in \mathbb {Z} }which is a shorthand way of saying that (at least) one of the following statement is true:
As mentioned above, ifarccosx=π{\displaystyle \,\arccos x=\pi \,}(which by definition only happens whenx=cosπ=−1{\displaystyle x=\cos \pi =-1}) then both statements (1) and (2) hold, although with different values for the integerk{\displaystyle k}: ifK{\displaystyle K}is the integer from statement (1), meaning thatθ=π+2πK{\displaystyle \theta =\pi +2\pi K}holds, then the integerk{\displaystyle k}for statement (2) isK+1{\displaystyle K+1}(becauseθ=−π+2π(1+K){\displaystyle \theta =-\pi +2\pi (1+K)}).
However, ifx≠−1{\displaystyle x\neq -1}then the integerk{\displaystyle k}is unique and completely determined byθ.{\displaystyle \theta .}Ifarccosx=0{\displaystyle \,\arccos x=0\,}(which by definition only happens whenx=cos0=1{\displaystyle x=\cos 0=1}) then±arccosx=0{\displaystyle \,\pm \arccos x=0\,}(because+arccosx=+0=0{\displaystyle \,+\arccos x=+0=0\,}and−arccosx=−0=0{\displaystyle \,-\arccos x=-0=0\,}so in both cases±arccosx{\displaystyle \,\pm \arccos x\,}is equal to0{\displaystyle 0}) and so the statements (1) and (2) happen to be identical in this particular case (and so both hold).
Having considered the casesarccosx=0{\displaystyle \,\arccos x=0\,}andarccosx=π,{\displaystyle \,\arccos x=\pi ,\,}we now focus on the case wherearccosx≠0{\displaystyle \,\arccos x\neq 0\,}andarccosx≠π,{\displaystyle \,\arccos x\neq \pi ,\,}So assume this from now on. The solution tocosθ=x{\displaystyle \cos \theta =x}is stillθ=±arccosx+2πkfor somek∈Z{\displaystyle \,\theta =\pm \arccos x+2\pi k\,\quad {\text{ for some }}k\in \mathbb {Z} }which as before is shorthand for saying that one of statements (1) and (2) is true. However this time, becausearccosx≠0{\displaystyle \,\arccos x\neq 0\,}and0<arccosx<π,{\displaystyle \,0<\arccos x<\pi ,\,}statements (1) and (2) are different and furthermore,exactly oneof the two equalities holds (not both). Additional information aboutθ{\displaystyle \theta }is needed to determine which one holds. For example, suppose thatx=0{\displaystyle x=0}and thatallthat is known aboutθ{\displaystyle \theta }is that−π≤θ≤π{\displaystyle \,-\pi \leq \theta \leq \pi \,}(and nothing more is known). Thenarccosx=arccos0=π2{\displaystyle \arccos x=\arccos 0={\frac {\pi }{2}}}and moreover, in this particular casek=0{\displaystyle k=0}(for both the+{\displaystyle \,+\,}case and the−{\displaystyle \,-\,}case) and so consequently,θ=±arccosx+2πk=±(π2)+2π(0)=±π2.{\displaystyle \theta ~=~\pm \arccos x+2\pi k~=~\pm \left({\frac {\pi }{2}}\right)+2\pi (0)~=~\pm {\frac {\pi }{2}}.}This means thatθ{\displaystyle \theta }could be eitherπ/2{\displaystyle \,\pi /2\,}or−π/2.{\displaystyle \,-\pi /2.}Without additional information it is not possible to determine which of these valuesθ{\displaystyle \theta }has.
An example of some additional information that could determine the value ofθ{\displaystyle \theta }would be knowing that the angle is above thex{\displaystyle x}-axis (in which caseθ=π/2{\displaystyle \theta =\pi /2}) or alternatively, knowing that it is below thex{\displaystyle x}-axis (in which caseθ=−π/2{\displaystyle \theta =-\pi /2}).
The table below shows how two anglesθ{\displaystyle \theta }andφ{\displaystyle \varphi }must be related if their values under a given trigonometric function are equal or negatives of each other.
The vertical double arrow⇕{\displaystyle \Updownarrow }in the last row indicates thatθ{\displaystyle \theta }andφ{\displaystyle \varphi }satisfy|sinθ|=|sinφ|{\displaystyle \left|\sin \theta \right|=\left|\sin \varphi \right|}if and only if they satisfy|cosθ|=|cosφ|.{\displaystyle \left|\cos \theta \right|=\left|\cos \varphi \right|.}
Thus given a single solutionθ{\displaystyle \theta }to an elementary trigonometric equation (sinθ=y{\displaystyle \sin \theta =y}is such an equation, for instance, and becausesin(arcsiny)=y{\displaystyle \sin(\arcsin y)=y}always holds,θ:=arcsiny{\displaystyle \theta :=\arcsin y}is always a solution), the set of all solutions to it are:
The equations above can be transformed by using the reflection and shift identities:[13]
These formulas imply, in particular, that the following hold:
sinθ=−sin(−θ)=−sin(π+θ)=−sin(π−θ)=−cos(π2+θ)=−cos(π2−θ)=−cos(−π2−θ)=−cos(−π2+θ)=−cos(3π2−θ)=−cos(−3π2+θ)cosθ=−cos(−θ)=−cos(π+θ)=−cos(π−θ)=−sin(π2+θ)=−sin(π2−θ)=−sin(−π2−θ)=−sin(−π2+θ)=−sin(3π2−θ)=−sin(−3π2+θ)tanθ=−tan(−θ)=−tan(π+θ)=−tan(π−θ)=−cot(π2+θ)=−cot(π2−θ)=−cot(−π2−θ)=−cot(−π2+θ)=−cot(3π2−θ)=−cot(−3π2+θ){\displaystyle {\begin{aligned}\sin \theta &=-\sin(-\theta )&&=-\sin(\pi +\theta )&&={\phantom {-}}\sin(\pi -\theta )\\&=-\cos \left({\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\cos \left({\frac {\pi }{2}}-\theta \right)&&=-\cos \left(-{\frac {\pi }{2}}-\theta \right)\\&={\phantom {-}}\cos \left(-{\frac {\pi }{2}}+\theta \right)&&=-\cos \left({\frac {3\pi }{2}}-\theta \right)&&=-\cos \left(-{\frac {3\pi }{2}}+\theta \right)\\[0.3ex]\cos \theta &={\phantom {-}}\cos(-\theta )&&=-\cos(\pi +\theta )&&=-\cos(\pi -\theta )\\&={\phantom {-}}\sin \left({\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\sin \left({\frac {\pi }{2}}-\theta \right)&&=-\sin \left(-{\frac {\pi }{2}}-\theta \right)\\&=-\sin \left(-{\frac {\pi }{2}}+\theta \right)&&=-\sin \left({\frac {3\pi }{2}}-\theta \right)&&={\phantom {-}}\sin \left(-{\frac {3\pi }{2}}+\theta \right)\\[0.3ex]\tan \theta &=-\tan(-\theta )&&={\phantom {-}}\tan(\pi +\theta )&&=-\tan(\pi -\theta )\\&=-\cot \left({\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\cot \left({\frac {\pi }{2}}-\theta \right)&&={\phantom {-}}\cot \left(-{\frac {\pi }{2}}-\theta \right)\\&=-\cot \left(-{\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\cot \left({\frac {3\pi }{2}}-\theta \right)&&=-\cot \left(-{\frac {3\pi }{2}}+\theta \right)\\[0.3ex]\end{aligned}}}
where swappingsin↔csc,{\displaystyle \sin \leftrightarrow \csc ,}swappingcos↔sec,{\displaystyle \cos \leftrightarrow \sec ,}and swappingtan↔cot{\displaystyle \tan \leftrightarrow \cot }gives the analogous equations forcsc,sec,andcot,{\displaystyle \csc ,\sec ,{\text{ and }}\cot ,}respectively.
So for example, by using the equalitysin(π2−θ)=cosθ,{\textstyle \sin \left({\frac {\pi }{2}}-\theta \right)=\cos \theta ,}the equationcosθ=x{\displaystyle \cos \theta =x}can be transformed intosin(π2−θ)=x,{\textstyle \sin \left({\frac {\pi }{2}}-\theta \right)=x,}which allows for the solution to the equationsinφ=x{\displaystyle \;\sin \varphi =x\;}(whereφ:=π2−θ{\textstyle \varphi :={\frac {\pi }{2}}-\theta }) to be used; that solution being:φ=(−1)karcsin(x)+πkfor somek∈Z,{\displaystyle \varphi =(-1)^{k}\arcsin(x)+\pi k\;{\text{ for some }}k\in \mathbb {Z} ,}which becomes:π2−θ=(−1)karcsin(x)+πkfor somek∈Z{\displaystyle {\frac {\pi }{2}}-\theta ~=~(-1)^{k}\arcsin(x)+\pi k\quad {\text{ for some }}k\in \mathbb {Z} }where using the fact that(−1)k=(−1)−k{\displaystyle (-1)^{k}=(-1)^{-k}}and substitutingh:=−k{\displaystyle h:=-k}proves that another solution tocosθ=x{\displaystyle \;\cos \theta =x\;}is:θ=(−1)h+1arcsin(x)+πh+π2for someh∈Z.{\displaystyle \theta ~=~(-1)^{h+1}\arcsin(x)+\pi h+{\frac {\pi }{2}}\quad {\text{ for some }}h\in \mathbb {Z} .}The substitutionarcsinx=π2−arccosx{\displaystyle \;\arcsin x={\frac {\pi }{2}}-\arccos x\;}may be used express the right hand side of the above formula in terms ofarccosx{\displaystyle \;\arccos x\;}instead ofarcsinx.{\displaystyle \;\arcsin x.\;}
Trigonometric functions of inverse trigonometric functions are tabulated below. A quick way to derive them is by considering the geometry of a right-angled triangle, with one side of length 1 and another side of lengthx,{\displaystyle x,}then applying thePythagorean theoremand definitions of the trigonometric ratios. It is worth noting that for arcsecant and arccosecant, the diagram assumes thatx{\displaystyle x}is positive, and thus the result has to be corrected through the use ofabsolute valuesand thesignum(sgn) operation.
Complementary angles:
arccos(x) = π/2 − arcsin(x)
arccot(x) = π/2 − arctan(x)
arccsc(x) = π/2 − arcsec(x)
Negative arguments:
arcsin(−x) = −arcsin(x)
arccos(−x) = π − arccos(x)
arctan(−x) = −arctan(x)
arccot(−x) = π − arccot(x)
arcsec(−x) = π − arcsec(x)
arccsc(−x) = −arccsc(x)
Reciprocal arguments:
arccsc(x) = arcsin(1/x) and arcsec(x) = arccos(1/x), for |x| ≥ 1
arccot(x) = arctan(1/x), for x > 0
The identities above can be used with (and derived from) the fact thatsin{\displaystyle \sin }andcsc{\displaystyle \csc }arereciprocals(i.e.csc=1sin{\displaystyle \csc ={\tfrac {1}{\sin }}}), as arecos{\displaystyle \cos }andsec,{\displaystyle \sec ,}andtan{\displaystyle \tan }andcot.{\displaystyle \cot .}
Useful identities if one only has a fragment of a sine table:
Whenever the square root of a complex number is used here, we choose the root with the positive real part (or positive imaginary part if the square was negative real).
A useful form that follows directly from the table above is
arctan(x) = arccos(√(1/(1 + x²))), valid for x ≥ 0.
It is obtained by recognizing thatcos(arctan(x))=11+x2=cos(arccos(11+x2)){\displaystyle \cos \left(\arctan \left(x\right)\right)={\sqrt {\frac {1}{1+x^{2}}}}=\cos \left(\arccos \left({\sqrt {\frac {1}{1+x^{2}}}}\right)\right)}.
From the half-angle formula tan(θ/2) = sin(θ)/(1 + cos(θ)), we get:
arcsin(x) = 2 arctan(x/(1 + √(1 − x²)))
arccos(x) = 2 arctan(√(1 − x²)/(1 + x)), valid for −1 < x ≤ 1
arctan(x) = 2 arctan(x/(1 + √(1 + x²)))
The arctangent addition formula,
arctan(u) + arctan(v) ≡ arctan((u + v)/(1 − uv)) (mod π), valid when uv ≠ 1,
is derived from the tangent addition formula
tan(α + β) = (tan α + tan β)/(1 − tan α tan β)
by letting α = arctan(u) and β = arctan(v).
Thederivativesfor complex values ofzare as follows:
Only for real values ofx:
These formulas can be derived in terms of the derivatives of trigonometric functions. For example, if x = sin θ, then dx/dθ = cos θ = √(1 − x²), so
d(arcsin x)/dx = dθ/dx = 1/(dx/dθ) = 1/√(1 − x²).
Integrating the derivative and fixing the value at one point gives an expression for the inverse trigonometric function as a definite integral:
arcsin(x) = ∫_0^x dt/√(1 − t²), for |x| ≤ 1
arccos(x) = ∫_x^1 dt/√(1 − t²), for |x| ≤ 1
arctan(x) = ∫_0^x dt/(1 + t²)
Whenxequals 1, the integrals with limited domains areimproper integrals, but still well-defined.
Similar to the sine and cosine functions, the inverse trigonometric functions can also be calculated usingpower series, as follows. For arcsine, the series can be derived by expanding its derivative,11−z2{\textstyle {\tfrac {1}{\sqrt {1-z^{2}}}}}, as abinomial series, and integrating term by term (using the integral definition as above). The series for arctangent can similarly be derived by expanding its derivative11+z2{\textstyle {\frac {1}{1+z^{2}}}}in ageometric series, and applying the integral definition above (seeLeibniz series).
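As a quick numerical check of the arctangent series just described, here is a partial sum of the standard Maclaurin form arctan(z) = z − z³/3 + z⁵/5 − …, which converges for |z| ≤ 1:

import math

def arctan_series(z, terms=200):
    # Partial sum of sum_{n>=0} (-1)^n z^(2n+1) / (2n+1).
    return sum((-1) ** n * z ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

z = 0.5
print(arctan_series(z))   # series approximation
print(math.atan(z))       # library reference value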
Series for the other inverse trigonometric functions can be given in terms of these according to the relationships given above. For example,arccos(x)=π/2−arcsin(x){\displaystyle \arccos(x)=\pi /2-\arcsin(x)},arccsc(x)=arcsin(1/x){\displaystyle \operatorname {arccsc}(x)=\arcsin(1/x)}, and so on. Another series is given by:[14]
Leonhard Eulerfound a series for the arctangent that converges more quickly than itsTaylor series:
(The term in the sum forn= 0 is theempty product, so is 1.)
Alternatively, this can be expressed as
Another series for the arctangent function is given by
wherei=−1{\displaystyle i={\sqrt {-1}}}is theimaginary unit.[16]
Two alternatives to the power series for arctangent are thesegeneralized continued fractions:
The second of these is valid in the cut complex plane. There are two cuts, from −ito the point at infinity, going down the imaginary axis, and fromito the point at infinity, going up the same axis. It works best for real numbers running from −1 to 1. The partial denominators are the odd natural numbers, and the partial numerators (after the first) are just (nz)2, with each perfect square appearing once. The first was developed byLeonhard Euler; the second byCarl Friedrich Gaussutilizing theGaussian hypergeometric series.
For real and complex values ofz:
For realx≥ 1:
For all realxnot between -1 and 1:
The absolute value is necessary to compensate for both negative and positive values of the arcsecant and arccosecant functions. The signum function is also necessary due to the absolute values in thederivativesof the two functions, which create two different solutions for positive and negative values of x. These can be further simplified using the logarithmic definitions of theinverse hyperbolic functions:
The absolute value in the argument of the arcosh function creates a negative half of its graph, making it identical to the signum logarithmic function shown above.
All of these antiderivatives can be derived usingintegration by partsand the simple derivative forms shown above.
Using∫udv=uv−∫vdu{\displaystyle \int u\,dv=uv-\int v\,du}(i.e.integration by parts), set
Then
which by the simplesubstitutionw=1−x2,dw=−2xdx{\displaystyle w=1-x^{2},\ dw=-2x\,dx}yields the final result:
Since the inverse trigonometric functions areanalytic functions, they can be extended from the real line to the complex plane. This results in functions with multiple sheets andbranch points. One possible way of defining the extension is:
where the part of the imaginary axis which does not lie strictly between the branch points (−i and +i) is thebranch cutbetween the principal sheet and other sheets. The path of the integral must not cross a branch cut. Forznot on a branch cut, a straight line path from 0 tozis such a path. Forzon a branch cut, the path must approach fromRe[x] > 0for the upper branch cut and fromRe[x] < 0for the lower branch cut.
The arcsine function may then be defined as:
where (the square-root function has its cut along the negative real axis and) the part of the real axis which does not lie strictly between −1 and +1 is the branch cut between the principal sheet of arcsin and other sheets;
which has the same cut as arcsin;
which has the same cut as arctan;
where the part of the real axis between −1 and +1 inclusive is the cut between the principal sheet of arcsec and other sheets;
which has the same cut as arcsec.
These functions may also be expressed usingcomplex logarithms. This extends theirdomainsto thecomplex planein a natural fashion. The following identities for principal values of the functions hold everywhere that they are defined, even on their branch cuts.
Because all of the inverse trigonometric functions output an angle of a right triangle, they can be generalized by usingEuler's formulato form a right triangle in the complex plane. Algebraically, this gives us:
or
wherea{\displaystyle a}is the adjacent side,b{\displaystyle b}is the opposite side, andc{\displaystyle c}is the hypotenuse. From here, we can solve forθ{\displaystyle \theta }.
or
Simply taking the imaginary part works for any real-valueda{\displaystyle a}andb{\displaystyle b}, but ifa{\displaystyle a}orb{\displaystyle b}is complex-valued, we have to use the final equation so that the real part of the result isn't excluded. Since the length of the hypotenuse doesn't change the angle, ignoring the real part ofln(a+bi){\displaystyle \ln(a+bi)}also removesc{\displaystyle c}from the equation. In the final equation, we see that the angle of the triangle in the complex plane can be found by inputting the lengths of each side. By setting one of the three sides equal to 1 and one of the remaining sides equal to our inputz{\displaystyle z}, we obtain a formula for one of the inverse trig functions, for a total of six equations. Because the inverse trig functions require only one input, we must put the final side of the triangle in terms of the other two using thePythagorean Theoremrelation
The table below shows the values of a, b, and c for each of the inverse trig functions and the equivalent expressions forθ{\displaystyle \theta }that result from plugging the values into the equationsθ=−iln(a+ibc){\displaystyle \theta =-i\ln \left({\tfrac {a+ib}{c}}\right)}above and simplifying.
The particular form of the simplified expression can cause the output to differ from theusual principal branchof each of the inverse trig functions. The formulations given will output the usual principal branch when using theIm(lnz)∈(−π,π]{\displaystyle \operatorname {Im} \left(\ln z\right)\in (-\pi ,\pi ]}andRe(z)≥0{\displaystyle \operatorname {Re} \left({\sqrt {z}}\right)\geq 0}principal branch for every function except arccotangent in theθ{\displaystyle \theta }column. Arccotangent in theθ{\displaystyle \theta }column will output on its usual principal branch by using theIm(lnz)∈[0,2π){\displaystyle \operatorname {Im} \left(\ln z\right)\in [0,2\pi )}andIm(z)≥0{\displaystyle \operatorname {Im} \left({\sqrt {z}}\right)\geq 0}convention.
In this sense, all of the inverse trig functions can be thought of as specific cases of the complex-valued log function. Since these definitions work for any complex-valued z{\displaystyle z}, they allow for hyperbolic angles as outputs and can be used to further define the inverse hyperbolic functions. It is possible to prove these relations algebraically by starting with the exponential forms of the trigonometric functions and solving for the inverse function.
Using theexponential definition of sine, and lettingξ=eiϕ,{\displaystyle \xi =e^{i\phi },}
(the positive branch is chosen)
Inverse trigonometric functions are useful when trying to determine the remaining two angles of aright trianglewhen the lengths of the sides of the triangle are known. Recalling the right-triangle definitions of sine and cosine, it follows that
Often, the hypotenuse is unknown and would need to be calculated before using arcsine or arccosine using thePythagorean Theorem:a2+b2=h2{\displaystyle a^{2}+b^{2}=h^{2}}whereh{\displaystyle h}is the length of the hypotenuse. Arctangent comes in handy in this situation, as the length of the hypotenuse is not needed.
For example, suppose a roof drops 8 feet as it runs out 20 feet. The roof makes an angle θ with the horizontal, where θ may be computed as follows:
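A short Python sketch of this computation (the variable names are illustrative; only the ratio of rise to run is needed, not the hypotenuse):

```python
import math

rise = 8.0   # feet the roof drops
run = 20.0   # feet the roof runs out

theta = math.atan(rise / run)   # angle whose tangent is rise/run, in radians
print(math.degrees(theta))      # approximately 21.8 degrees
```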
The two-argumentatan2function computes the arctangent ofy/xgivenyandx, but with a range of(−π, π]. In other words,atan2(y,x)is the angle between the positivex-axis of a plane and the point(x,y)on it, with positive sign for counter-clockwise angles (upper half-plane,y> 0), and negative sign for clockwise angles (lower half-plane,y< 0). It was first introduced in many computer programming languages, but it is now also common in other fields of science and engineering.
In terms of the standardarctanfunction, that is with range of(−π/2, π/2), it can be expressed as follows:
atan2(y,x)={arctan(yx)x>0arctan(yx)+πy≥0,x<0arctan(yx)−πy<0,x<0π2y>0,x=0−π2y<0,x=0undefinedy=0,x=0{\displaystyle \operatorname {atan2} (y,x)={\begin{cases}\arctan \left({\frac {y}{x}}\right)&\quad x>0\\\arctan \left({\frac {y}{x}}\right)+\pi &\quad y\geq 0,\;x<0\\\arctan \left({\frac {y}{x}}\right)-\pi &\quad y<0,\;x<0\\{\frac {\pi }{2}}&\quad y>0,\;x=0\\-{\frac {\pi }{2}}&\quad y<0,\;x=0\\{\text{undefined}}&\quad y=0,\;x=0\end{cases}}}
It also equals theprincipal valueof theargumentof thecomplex numberx+iy.
This limited version of the function above may also be defined using the tangent half-angle formulae as follows: atan2(y,x)=2arctan(yx2+y2+x){\displaystyle \operatorname {atan2} (y,x)=2\arctan \left({\frac {y}{{\sqrt {x^{2}+y^{2}}}+x}}\right)} provided that either x > 0 or y ≠ 0. However, this fails if x ≤ 0 and y = 0, so the expression is unsuitable for computational use.
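A brief Python illustration of this half-angle form and its failure case; math.atan2 is the library implementation of the piecewise definition above:

```python
import math

def atan2_halfangle(y, x):
    # Tangent half-angle form: 2*arctan(y / (sqrt(x^2 + y^2) + x)).
    # Valid only when x > 0 or y != 0; the denominator vanishes otherwise.
    return 2.0 * math.atan(y / (math.hypot(x, y) + x))

print(math.atan2(1.0, -1.0))       # 3*pi/4 (second quadrant)
print(atan2_halfangle(1.0, -1.0))  # agrees with math.atan2
# atan2_halfangle(0.0, -1.0) would divide by zero, whereas math.atan2(0.0, -1.0) == pi
```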
The above argument order (y, x) seems to be the most common, and in particular is used in ISO standards such as the C programming language, but a few authors may use the opposite convention (x, y), so some caution is warranted. (See variations at atan2 § Realizations of the function in common computer languages.)
In many applications[17] the solution y{\displaystyle y} of the equation x=tan(y){\displaystyle x=\tan(y)} is required to come as close as possible to a given value −∞<η<∞{\displaystyle -\infty <\eta <\infty }. The adequate solution is produced by the parameter-modified arctangent function
The function rni{\displaystyle \operatorname {rni} } rounds to the nearest integer.
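The exact formula is not reproduced above; a commonly used form is arctan(x) + π · rni((η − arctan(x))/π), and the following sketch assumes that convention (the function name arctan_near is an illustrative choice):

```python
import math

def arctan_near(x, eta):
    # Assumed form of the parameter-modified arctangent: shift arctan(x)
    # by the multiple of pi that brings the result closest to eta.
    base = math.atan(x)
    k = round((eta - base) / math.pi)   # "rni": round to nearest integer
    return base + k * math.pi

# All solutions of tan(y) = 1 are pi/4 + n*pi; the one nearest eta = 7 is returned.
print(arctan_near(1.0, 7.0))   # approximately 7.0686 (= pi/4 + 2*pi)
```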
For angles near 0 andπ, arccosine isill-conditioned, and similarly with arcsine for angles near −π/2 andπ/2. Computer applications thus need to consider the stability of inputs to these functions and the sensitivity of their calculations, or use alternate methods.[18]
|
https://en.wikipedia.org/wiki/Arcsin
|
Instatistics, thelogit(/ˈloʊdʒɪt/LOH-jit) function is thequantile functionassociated with the standardlogistic distribution. It has many uses indata analysisandmachine learning, especially indata transformations.
Mathematically, the logit is theinverseof thestandard logistic functionσ(x)=1/(1+e−x){\displaystyle \sigma (x)=1/(1+e^{-x})}, so the logit is defined as
Because of this, the logit is also called thelog-oddssince it is equal to thelogarithmof theoddsp1−p{\displaystyle {\frac {p}{1-p}}}wherepis a probability. Thus, the logit is a type of function that maps probability values from(0,1){\displaystyle (0,1)}to real numbers in(−∞,+∞){\displaystyle (-\infty ,+\infty )},[1]akin to theprobit function.
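As a quick numerical illustration (using the natural logarithm):

```python
import math

def logit(p):
    # log-odds: maps a probability in (0, 1) to the whole real line
    return math.log(p / (1.0 - p))

def expit(x):
    # inverse of the logit: the standard logistic function
    return 1.0 / (1.0 + math.exp(-x))

print(logit(0.75))          # log(3) ~= 1.0986
print(expit(logit(0.75)))   # 0.75: the two functions are mutual inverses
```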
Ifpis aprobability, thenp/(1 −p)is the correspondingodds; thelogitof the probability is the logarithm of the odds, i.e.:
The base of thelogarithmfunction used is of little importance in the present article, as long as it is greater than 1, but thenatural logarithmwith baseeis the one most often used. The choice of base corresponds to the choice oflogarithmic unitfor the value: base 2 corresponds to ashannon, baseeto anat, and base 10 to ahartley; these units are particularly used in information-theoretic interpretations. For each choice of base, the logit function takes values between negative and positive infinity.
The “logistic” function of any number α{\displaystyle \alpha } is given by the inverse-logit:
The difference between the logits of two probabilities is the logarithm of the odds ratio (R), thus providing a shorthand for writing the correct combination of odds ratios only by adding and subtracting:
TheTaylor seriesfor the logit function is given by:
Several approaches have been explored to adapt linear regression methods to a domain where the output is a probability value(0,1){\displaystyle (0,1)}, instead of any real number(−∞,+∞){\displaystyle (-\infty ,+\infty )}. In many cases, such efforts have focused on modeling this problem by mapping the range(0,1){\displaystyle (0,1)}to(−∞,+∞){\displaystyle (-\infty ,+\infty )}and then running the linear regression on these transformed values.[2]
In 1934,Chester Ittner Blissused the cumulative normal distribution function to perform this mapping and called his modelprobit, an abbreviation for "probability unit". This is, however, computationally more expensive.[2]
In 1944,Joseph Berksonused log of odds and called this functionlogit, an abbreviation for "logistic unit", following the analogy for probit:
"I use this term [logit] forlnp/q{\displaystyle \ln p/q}following Bliss, who called the analogous function which is linear onx{\displaystyle x}for the normal curve 'probit'."
Log odds was used extensively byCharles Sanders Peirce(late 19th century).[4]G. A. Barnardin 1949 coined the commonly used termlog-odds;[5][6]the log-odds of an event is the logit of the probability of the event.[7]Barnard also coined the termlodsas an abstract form of "log-odds",[8]but suggested that "in practice the term 'odds' should normally be used, since this is more familiar in everyday life".[9]
Closely related to the logit function (and logit model) are the probit function and probit model. The logit and probit are both quantile functions defined on the interval (0, 1) – i.e., inverses of the cumulative distribution function (CDF) of a probability distribution. Specifically, the logit is the quantile function of the logistic distribution, while the probit is the quantile function of the normal distribution. The probit function is denoted Φ−1(x){\displaystyle \Phi ^{-1}(x)}, where Φ(x){\displaystyle \Phi (x)} is the CDF of the standard normal distribution, as just mentioned:
As shown in the graph on the right, thelogitandprobitfunctions are extremely similar when theprobitfunction is scaled, so that its slope aty= 0matches the slope of thelogit. As a result,probit modelsare sometimes used in place oflogit modelsbecause for certain applications (e.g., initem response theory) the implementation is easier.[14]
|
https://en.wikipedia.org/wiki/Logit
|
In statistics,nonlinear regressionis a form ofregression analysisin which observational data are modeled by a function which is a nonlinear combination of the model parameters and depends on one or more independent variables. The data are fitted by a method of successive approximations (iterations).
In nonlinear regression, astatistical modelof the form,
y∼f(x,β){\displaystyle \mathbf {y} \sim f(\mathbf {x} ,{\boldsymbol {\beta }})}
relates a vector of independent variables, x{\displaystyle \mathbf {x} }, and its associated observed dependent variables, y{\displaystyle \mathbf {y} }. The function f{\displaystyle f} is nonlinear in the components of the vector of parameters β{\displaystyle \beta }, but otherwise arbitrary. For example, the Michaelis–Menten model for enzyme kinetics has two parameters and one independent variable, related by f{\displaystyle f}:[a]
f(x,β)=β1xβ2+x{\displaystyle f(x,{\boldsymbol {\beta }})={\frac {\beta _{1}x}{\beta _{2}+x}}}
This function, which is a rectangular hyperbola, isnonlinearbecause it cannot be expressed as alinear combinationof the twoβ{\displaystyle \beta }s.
Systematic errormay be present in the independent variables but its treatment is outside the scope of regression analysis. If the independent variables are not error-free, this is anerrors-in-variables model, also outside this scope.
Other examples of nonlinear functions includeexponential functions,logarithmic functions,trigonometric functions,power functions,Gaussian function, andLorentz distributions. Some functions, such as the exponential or logarithmic functions, can be transformed so that they are linear. When so transformed, standard linear regression can be performed but must be applied with caution. See§ Linearization §§ Transformation, below, for more details.
In general, there is no closed-form expression for the best-fitting parameters, as there is inlinear regression. Usually numericaloptimizationalgorithms are applied to determine the best-fitting parameters. Again in contrast to linear regression, there may be manylocal minimaof the function to be optimized and even the global minimum may produce abiasedestimate. In practice,estimated valuesof the parameters are used, in conjunction with the optimization algorithm, to attempt to find the global minimum of a sum of squares.
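As a concrete sketch of such an iterative fit, SciPy's curve_fit can be applied to the Michaelis–Menten model above (the data here are synthetic and purely illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(x, beta1, beta2):
    # Rectangular hyperbola: nonlinear in the parameters, since it cannot be
    # written as a linear combination of beta1 and beta2.
    return beta1 * x / (beta2 + x)

rng = np.random.default_rng(0)
x = np.linspace(0.1, 5.0, 40)
y = michaelis_menten(x, 2.0, 0.5) + rng.normal(scale=0.05, size=x.size)

# Iterative least-squares fit; p0 is the starting guess that the optimizer refines.
beta_hat, beta_cov = curve_fit(michaelis_menten, x, y, p0=[1.0, 1.0])
print(beta_hat)   # roughly [2.0, 0.5]
```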
For details concerning nonlinear data modeling seeleast squaresandnon-linear least squares.
The assumption underlying this procedure is that the model can be approximated by a linear function, namely a first-orderTaylor series:
f(xi,β)≈f(xi,0)+∑jJijβj{\displaystyle f(x_{i},{\boldsymbol {\beta }})\approx f(x_{i},0)+\sum _{j}J_{ij}\beta _{j}}
whereJij=∂f(xi,β)∂βj{\displaystyle J_{ij}={\frac {\partial f(x_{i},{\boldsymbol {\beta }})}{\partial \beta _{j}}}}are Jacobian matrix elements. It follows from this that the least squares estimators are given by
β^≈(JTJ)−1JTy,{\displaystyle {\hat {\boldsymbol {\beta }}}\approx \mathbf {(J^{T}J)^{-1}J^{T}y} ,}comparegeneralized least squareswith covariance matrix proportional to the unit matrix. The nonlinear regression statistics are computed and used as in linear regression statistics, but usingJin place ofXin the formulas.
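For concreteness, a minimal sketch of one such linearization (Gauss–Newton) step for the Michaelis–Menten model, with the Jacobian written out by hand; damped variants such as Levenberg–Marquardt are usually preferred in practice:

```python
import numpy as np

def model(x, beta):
    return beta[0] * x / (beta[1] + x)

def jacobian(x, beta):
    # Columns are the partial derivatives of the model w.r.t. each parameter.
    J = np.empty((x.size, 2))
    J[:, 0] = x / (beta[1] + x)
    J[:, 1] = -beta[0] * x / (beta[1] + x) ** 2
    return J

def gauss_newton_step(x, y, beta):
    # Solve the linearized least-squares problem J @ delta ~= residuals,
    # i.e. delta = (J^T J)^{-1} J^T r, and update the parameter estimate.
    r = y - model(x, beta)
    J = jacobian(x, beta)
    delta, *_ = np.linalg.lstsq(J, r, rcond=None)
    return beta + delta
```

Iterating gauss_newton_step from a reasonable starting value typically converges to a local minimum of the sum of squares.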
When the functionf(xi,β){\displaystyle f(x_{i},{\boldsymbol {\beta }})}itself is not known analytically, but needs to belinearly approximatedfromn+1{\displaystyle n+1}, or more, known values (wheren{\displaystyle n}is the number of estimators), the best estimator is obtained directly from theLinear Template Fitas[1]β^=((YM~)TΩ−1YM~)−1(YM~)TΩ−1(d−Ym¯){\displaystyle {\hat {\boldsymbol {\beta }}}=((\mathbf {Y{\tilde {M}}} )^{\mathsf {T}}{\boldsymbol {\Omega }}^{-1}\mathbf {Y{\tilde {M}}} )^{-1}(\mathbf {Y{\tilde {M}}} )^{\mathsf {T}}{\boldsymbol {\Omega }}^{-1}(\mathbf {d} -\mathbf {Y{\bar {m}})} }(see alsolinear least squares).
The linear approximation introducesbiasinto the statistics. Therefore, more caution than usual is required in interpreting statistics derived from a nonlinear model.
The best-fit curve is often assumed to be that which minimizes the sum of squaredresiduals. This is theordinary least squares(OLS) approach. However, in cases where the dependent variable does not have constant variance, or there are some outliers, a sum of weighted squared residuals may be minimized; seeweighted least squares. Each weight should ideally be equal to the reciprocal of the variance of the observation, or the reciprocal of the dependent variable to some power in the outlier case,[2]but weights may be recomputed on each iteration, in an iteratively weighted least squares algorithm.
Some nonlinear regression problems can be moved to a linear domain by a suitable transformation of the model formulation.
For example, consider the nonlinear regression problem
y=aebxU{\displaystyle y=ae^{bx}U}
with parametersaandband with multiplicative error termU. If we take the logarithm of both sides, this becomes
ln(y)=ln(a)+bx+u,{\displaystyle \ln {(y)}=\ln {(a)}+bx+u,}
whereu= ln(U), suggesting estimation of the unknown parameters by a linear regression of ln(y) onx, a computation that does not require iterative optimization. However, use of a nonlinear transformation requires caution. The influences of the data values will change, as will the error structure of the model and the interpretation of any inferential results. These may not be desired effects. On the other hand, depending on what the largest source of error is, a nonlinear transformation may distribute the errors in a Gaussian fashion, so the choice to perform a nonlinear transformation must be informed by modeling considerations.
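A sketch of this linearization on synthetic data (whether a multiplicative error term is realistic is exactly the modeling consideration mentioned above):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 4.0, 50)
a_true, b_true = 3.0, 0.7
y = a_true * np.exp(b_true * x) * np.exp(rng.normal(scale=0.1, size=x.size))

# ln(y) = ln(a) + b*x + u is linear in the unknowns, so an ordinary
# least-squares line fit recovers b and ln(a) without any iteration.
b_hat, ln_a_hat = np.polyfit(x, np.log(y), 1)
print(np.exp(ln_a_hat), b_hat)   # approximately 3.0 and 0.7
```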
ForMichaelis–Menten kinetics, the linearLineweaver–Burk plot
1v=1Vmax+KmVmax[S]{\displaystyle {\frac {1}{v}}={\frac {1}{V_{\max }}}+{\frac {K_{m}}{V_{\max }[S]}}}
of 1/vagainst 1/[S] has been much used. However, since it is very sensitive to data error and is strongly biased toward fitting the data in a particular range of the independent variable, [S], its use is strongly discouraged.
For error distributions that belong to theexponential family, a link function may be used to transform the parameters under theGeneralized linear modelframework.
Theindependentorexplanatory variable(say X) can be split up into classes or segments andlinear regressioncan be performed per segment. Segmented regression withconfidence analysismay yield the result that thedependentorresponsevariable(say Y) behaves differently in the various segments.[3]
The figure shows that thesoil salinity(X) initially exerts no influence on thecrop yield(Y) of mustard, until acriticalorthresholdvalue (breakpoint), after which the yield is affected negatively.[4]
|
https://en.wikipedia.org/wiki/Nonlinear_regression#Transformation
|
Instatistics, apower transformis a family of functions applied to create amonotonic transformationof data usingpower functions. It is adata transformationtechnique used tostabilize variance, make the data morenormal distribution-like, improve the validity of measures of association (such as thePearson correlationbetween variables), and for other data stabilization procedures.
Power transforms are used in multiple fields, includingmulti-resolution and wavelet analysis,[1]statistical data analysis, medical research, modeling of physical processes,[2]geochemical data analysis,[3]epidemiology[4]and many other clinical, environmental and social research areas.
The power transformation is defined as a continuous function of power parameterλ, typically given in piece-wise form that makes it continuous at the point of singularity (λ= 0). For data vectors (y1,...,yn) in which eachyi> 0, the power transform is
where
is thegeometric meanof the observationsy1, ...,yn. The case forλ=0{\displaystyle \lambda =0}is the limit asλ{\displaystyle \lambda }approaches 0. To see this, note thatyiλ=exp(λln(yi))=1+λln(yi)+O((λln(yi))2){\displaystyle y_{i}^{\lambda }=\exp({\lambda \ln(y_{i})})=1+\lambda \ln(y_{i})+O((\lambda \ln(y_{i}))^{2})}- usingTaylor series. Thenyiλ−1λ=ln(yi)+O(λ){\displaystyle {\dfrac {y_{i}^{\lambda }-1}{\lambda }}=\ln(y_{i})+O(\lambda )}, and everything butln(yi){\displaystyle \ln(y_{i})}becomes negligible forλ{\displaystyle \lambda }sufficiently small.
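A minimal sketch of the one-parameter form of the transform (without the geometric-mean normalization), showing that small λ already behaves like the logarithm:

```python
import numpy as np

def box_cox(y, lam):
    # One-parameter Box-Cox transform with the continuous lambda -> 0 limit.
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return np.log(y)
    return (y ** lam - 1.0) / lam

y = np.array([0.5, 1.0, 2.0, 4.0])
print(box_cox(y, 0.001))   # already very close to ...
print(box_cox(y, 0))       # ... the log transform, as the limit suggests
```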
The inclusion of the (λ− 1)th power of the geometric mean in the denominator simplifies thescientific interpretation of any equation involvingyi(λ){\displaystyle y_{i}^{(\lambda )}}, because the units of measurement do not change asλchanges.
BoxandCox(1964) introduced the geometric mean into this transformation by first including theJacobianof rescaled power transformation
with the likelihood. This Jacobian is as follows:
This allows the normallog likelihood at its maximumto be written as follows:
From here, absorbingGM(y)2(λ−1){\displaystyle \operatorname {GM} (y)^{2(\lambda -1)}}into the expression forσ^2{\displaystyle {\hat {\sigma }}^{2}}produces an expression that establishes that minimizing the sum of squares ofresidualsfromyi(λ){\displaystyle y_{i}^{(\lambda )}}is equivalent to maximizing the sum of the normallog likelihoodof deviations from(yλ−1)/λ{\displaystyle (y^{\lambda }-1)/\lambda }and the log of the Jacobian of the transformation.
The value atY= 1 for anyλis 0, and thederivativewith respect toYthere is 1 for anyλ. SometimesYis a version of some other variable scaled to giveY= 1 at some sort of average value.
The transformation is apowertransformation, but done in such a way as to make itcontinuouswith the parameterλatλ= 0. It has proved popular inregression analysis, includingeconometrics.
Box and Cox also proposed a more general form of the transformation that incorporates a shift parameter.
which holds ifyi+ α > 0 for alli. If τ(Y, λ, α) follows atruncated normal distribution, thenYis said to follow aBox–Cox distribution.
Bickel and Doksum eliminated the need to use atruncated distributionby extending the range of the transformation to ally, as follows:
where sgn(.) is thesign function. This change in definition has little practical import as long asα{\displaystyle \alpha }is less thanmin(yi){\displaystyle \operatorname {min} (y_{i})}, which it usually is.[5]
Bickel and Doksum also proved that the parameter estimates areconsistentandasymptotically normalunder appropriate regularity conditions, though the standardCramér–Rao lower boundcan substantially underestimate the variance when parameter values are small relative to the noise variance.[5]However, this problem of underestimating the variance may not be a substantive problem in many applications.[6][7]
The one-parameter Box–Cox transformations are defined as
and the two-parameter Box–Cox transformations as
as described in the original article.[8][9]Moreover, the first transformations hold foryi>0{\displaystyle y_{i}>0}, and the second foryi>−λ2{\displaystyle y_{i}>-\lambda _{2}}.[8]
The parameter λ{\displaystyle \lambda } is estimated using the profile likelihood function and using goodness-of-fit tests.[10]
A confidence interval for the Box–Cox transformation can be asymptotically constructed using Wilks's theorem on the profile likelihood function to find all the possible values of λ{\displaystyle \lambda } that fulfill the following restriction:[11]
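In practice, SciPy's boxcox performs this profile-likelihood estimation and, via the alpha argument, returns an approximate confidence interval of this kind (a sketch with synthetic lognormal data, for which λ near 0 is expected):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
y = rng.lognormal(mean=1.0, sigma=0.4, size=200)   # positive, right-skewed data

# lmbda is chosen by maximizing the profile log-likelihood; alpha=0.05 also
# returns an approximate 95% interval based on the chi-squared cutoff.
y_trans, lam_hat, (lam_lo, lam_hi) = stats.boxcox(y, alpha=0.05)
print(lam_hat, (lam_lo, lam_hi))
```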
The BUPA liver data set[12]contains data on liver enzymesALTandγGT. Suppose we are interested in using log(γGT) to predict ALT. A plot of the data appears in panel (a) of the figure. There appears to be non-constant variance, and a Box–Cox transformation might help.
The log-likelihood of the power parameter appears in panel (b). The horizontal reference line is at a distance of χ12(0.95)/2{\displaystyle \chi _{1}^{2}(0.95)/2} (half the 95% quantile of the χ2 distribution with one degree of freedom) from the maximum and can be used to read off an approximate 95% confidence interval for λ. It appears as though a value close to zero would be good, so we take logs.
Possibly, the transformation could be improved by adding a shift parameter to the log transformation. Panel (c) of the figure shows the log-likelihood. In this case, the maximum of the likelihood is close to zero suggesting that a shift parameter is not needed. The final panel shows the transformed data with a superimposed regression line.
Note that although Box–Cox transformations can make big improvements in model fit, there are some issues that the transformation cannot help with. In the current example, the data are rather heavy-tailed so that the assumption of normality is not realistic and arobust regressionapproach leads to a more precise model.
Economists often characterize production relationships by some variant of the Box–Cox transformation.[13]
Consider a common representation of productionQas dependent on services provided by a capital stockKand by labor hoursN:
Solving forQby inverting the Box–Cox transformation we find
which is known as theconstant elasticity of substitution(CES)production function.
The CES production function is ahomogeneous functionof degree one.
Whenλ= 1, this produces the linear production function:
Whenλ→ 0 this produces the famousCobb–Douglasproduction function:
TheSOCRresource pages contain a number of hands-on interactive activities[14]demonstrating the Box–Cox (power) transformation using Java applets and charts. These directly illustrate the effects of this transform onQ–Q plots, X–Yscatterplots,time-seriesplots andhistograms.
The Yeo–Johnson transformation[15] also allows for zero and negative values of y{\displaystyle y}. λ{\displaystyle \lambda } can be any real number, where λ=1{\displaystyle \lambda =1} produces the identity transformation.
The transformation law reads:
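The piecewise law itself is not reproduced above; the following sketch implements the standard Yeo–Johnson definition (scipy.stats.yeojohnson provides an equivalent library implementation):

```python
import numpy as np

def yeo_johnson(y, lam):
    # Standard piecewise Yeo-Johnson transform, defined for all real y.
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    pos = y >= 0
    if lam != 0:
        out[pos] = ((y[pos] + 1.0) ** lam - 1.0) / lam
    else:
        out[pos] = np.log1p(y[pos])
    if lam != 2:
        out[~pos] = -(((-y[~pos] + 1.0) ** (2.0 - lam) - 1.0) / (2.0 - lam))
    else:
        out[~pos] = -np.log1p(-y[~pos])
    return out

print(yeo_johnson(np.array([-2.0, 0.0, 3.0]), lam=1.0))   # lambda = 1 is the identity
```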
The Box-Tidwell transformation is a statistical technique used to assess and correct non-linearity between predictor variables and thelogitin ageneralized linear model, particularly inlogistic regression. This transformation is useful when the relationship between the independent variables and the outcome is non-linear and cannot be adequately captured by the standard model.
The Box-Tidwell transformation was developed byGeorge E. P. Boxand John W. Tidwell in 1962 as an extension ofBox-Cox transformations, which are applied to the dependent variable. However, unlike the Box-Cox transformation, the Box-Tidwell transformation is applied to the independent variables in regression models. It is often used when the assumption of linearity between the predictors and the outcome is violated.
The general idea behind the Box-Tidwell transformation is to apply a power transformation to each independent variable Xi in the regression model:
Xi′=Xiλ{\displaystyle X_{i}'=X_{i}^{\lambda }}
where λ{\displaystyle \lambda } is the parameter estimated from the data. If the estimated λ{\displaystyle \lambda } is significantly different from 1, this indicates a non-linear relationship between Xi and the logit, and the transformation improves the model fit.
The Box-Tidwell test is typically performed by augmenting the regression model with terms likeXilog(Xi){\displaystyle X_{i}\log(X_{i})}and testing the significance of the coefficients. If significant, this suggests that a transformation should be applied to achieve a linear relationship between the predictor and the logit.
The transformation is beneficial inlogistic regressionorproportional hazards modelswhere non-linearity in continuous predictors can distort the relationship with the dependent variable. It is a flexible tool that allows the researcher to fit a more appropriate model to the data without guessing the relationship's functional form in advance.
Inlogistic regression, a key assumption is that continuous independent variables exhibit a linear relationship with the logit of the dependent variable. Violations of this assumption can lead to biased estimates and reduced model performance. The Box-Tidwell transformation is a method used to assess and correct such violations by determining whether a continuous predictor requires transformation to achieve linearity with the logit.
The Box-Tidwell transformation introduces an interaction term between each continuous variableXiand its natural logarithmlog(Xi){\displaystyle \log(X_{i})}:
Xilog(Xi){\displaystyle X_{i}\log(X_{i})}
This term is included in the logistic regression model to test whether the relationship between Xi and the logit is non-linear. A statistically significant coefficient for this interaction term indicates a violation of the linearity assumption, suggesting the need for a transformation of the predictor. In that case, the Box-Tidwell transformation provides an appropriate power transformation to linearize the relationship, thereby improving model accuracy and validity. Conversely, non-significant results support the assumption of linearity.
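A sketch of this augmented-model check using statsmodels (the data-generating step below is synthetic and purely illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(0.5, 5.0, size=500)                   # predictor must be strictly positive
true_logit = -1.0 + 1.5 * np.sqrt(x)                  # deliberately non-linear in x
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

# Augment the predictors with the Box-Tidwell interaction term x*log(x).
X = sm.add_constant(np.column_stack([x, x * np.log(x)]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.pvalues)   # a small p-value on the x*log(x) term flags non-linearity
```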
One limitation of the Box-Tidwell transformation is that it only works for positive values of the independent variables. If the data contain negative values, the transformation cannot be applied directly without modifying the variables (e.g., adding a constant).
|
https://en.wikipedia.org/wiki/Power_transform
|
Inprobability theoryandstatistics, theχ2{\displaystyle \chi ^{2}}-distributionwithk{\displaystyle k}degrees of freedomis the distribution of a sum of the squares ofk{\displaystyle k}independentstandard normalrandom variables.[2]
The chi-squared distributionχk2{\displaystyle \chi _{k}^{2}}is a special case of thegamma distributionand the univariateWishart distribution. Specifically ifX∼χk2{\displaystyle X\sim \chi _{k}^{2}}thenX∼Gamma(α=k2,θ=2){\displaystyle X\sim {\text{Gamma}}(\alpha ={\frac {k}{2}},\theta =2)}(whereα{\displaystyle \alpha }is the shape parameter andθ{\displaystyle \theta }the scale parameter of the gamma distribution) andX∼W1(1,k){\displaystyle X\sim {\text{W}}_{1}(1,k)}.
Thescaled chi-squared distributions2χk2{\displaystyle s^{2}\chi _{k}^{2}}is a reparametrization of thegamma distributionand the univariateWishart distribution. Specifically ifX∼s2χk2{\displaystyle X\sim s^{2}\chi _{k}^{2}}thenX∼Gamma(α=k2,θ=2s2){\displaystyle X\sim {\text{Gamma}}(\alpha ={\frac {k}{2}},\theta =2s^{2})}andX∼W1(s2,k){\displaystyle X\sim {\text{W}}_{1}(s^{2},k)}.
The chi-squared distribution is one of the most widely usedprobability distributionsininferential statistics, notably inhypothesis testingand in construction ofconfidence intervals.[3][4][5][6]This distribution is sometimes called thecentral chi-squared distribution, a special case of the more generalnoncentral chi-squared distribution.[7]
The chi-squared distribution is used in the commonchi-squared testsforgoodness of fitof an observed distribution to a theoretical one, theindependenceof two criteria of classification ofqualitative data, and in finding the confidence interval for estimating the populationstandard deviationof a normal distribution from a sample standard deviation. Many other statistical tests also use this distribution, such asFriedman's analysis of variance by ranks.
IfZ1, ...,Zkareindependent,standard normalrandom variables, then the sum of their squares,
is distributed according to the chi-squared distribution withkdegrees of freedom. This is usually denoted as
The chi-squared distribution has one parameter: a positive integerkthat specifies the number ofdegrees of freedom(the number of random variables being summed,Zis).
The chi-squared distribution is used primarily in hypothesis testing, and to a lesser extent for confidence intervals for population variance when the underlying distribution is normal. Unlike more widely known distributions such as thenormal distributionand theexponential distribution, the chi-squared distribution is not as often applied in the direct modeling of natural phenomena. It arises in the following hypothesis tests, among others:
It is also a component of the definition of thet-distributionand theF-distributionused int-tests, analysis of variance, and regression analysis.
The primary reason for which the chi-squared distribution is extensively used in hypothesis testing is its relationship to the normal distribution. Many hypothesis tests use a test statistic, such as thet-statisticin at-test. For these hypothesis tests, as the sample size,n, increases, thesampling distributionof the test statistic approaches the normal distribution (central limit theorem). Because the test statistic (such ast) is asymptotically normally distributed, provided the sample size is sufficiently large, the distribution used for hypothesis testing may be approximated by a normal distribution. Testing hypotheses using a normal distribution is well understood and relatively easy. The simplest chi-squared distribution is the square of a standard normal distribution. So wherever a normal distribution could be used for a hypothesis test, a chi-squared distribution could be used.
Suppose thatZ{\displaystyle Z}is a random variable sampled from the standard normal distribution, where the mean is0{\displaystyle 0}and the variance is1{\displaystyle 1}:Z∼N(0,1){\displaystyle Z\sim N(0,1)}. Now, consider the random variableX=Z2{\displaystyle X=Z^{2}}. The distribution of the random variableX{\displaystyle X}is an example of a chi-squared distribution:X∼χ12{\displaystyle \ X\ \sim \ \chi _{1}^{2}}. The subscript 1 indicates that this particular chi-squared distribution is constructed from only 1 standard normal distribution. A chi-squared distribution constructed by squaring a single standard normal distribution is said to have 1 degree of freedom. Thus, as the sample size for a hypothesis test increases, the distribution of the test statistic approaches a normal distribution. Just as extreme values of the normal distribution have low probability (and give small p-values), extreme values of the chi-squared distribution have low probability.
An additional reason that the chi-squared distribution is widely used is that it turns up as the large sample distribution of generalizedlikelihood ratio tests(LRT).[8]LRTs have several desirable properties; in particular, simple LRTs commonly provide the highest power to reject the null hypothesis (Neyman–Pearson lemma) and this leads also to optimality properties of generalised LRTs. However, the normal and chi-squared approximations are only valid asymptotically. For this reason, it is preferable to use thetdistribution rather than the normal approximation or the chi-squared approximation for a small sample size. Similarly, in analyses of contingency tables, the chi-squared approximation will be poor for a small sample size, and it is preferable to useFisher's exact test. Ramsey shows that the exactbinomial testis always more powerful than the normal approximation.[9]
Lancaster shows the connections among the binomial, normal, and chi-squared distributions, as follows.[10]De Moivre and Laplace established that a binomial distribution could be approximated by a normal distribution. Specifically they showed the asymptotic normality of the random variable
wherem{\displaystyle m}is the observed number of successes inN{\displaystyle N}trials, where the probability of success isp{\displaystyle p}, andq=1−p{\displaystyle q=1-p}.
Squaring both sides of the equation gives
UsingN=Np+N(1−p){\displaystyle N=Np+N(1-p)},N=m+(N−m){\displaystyle N=m+(N-m)}, andq=1−p{\displaystyle q=1-p}, this equation can be rewritten as
The expression on the right is of the form thatKarl Pearsonwould generalize to the form
where
χ2{\displaystyle \chi ^{2}}= Pearson's cumulative test statistic, which asymptotically approaches aχ2{\displaystyle \chi ^{2}}distribution;Oi{\displaystyle O_{i}}= the number of observations of typei{\displaystyle i};Ei=Npi{\displaystyle E_{i}=Np_{i}}= the expected (theoretical) frequency of typei{\displaystyle i}, asserted by the null hypothesis that the fraction of typei{\displaystyle i}in the population ispi{\displaystyle p_{i}}; andn{\displaystyle n}= the number of cells in the table.[citation needed]
In the case of a binomial outcome (flipping a coin), the binomial distribution may be approximated by a normal distribution (for sufficiently largen{\displaystyle n}). Because the square of a standard normal distribution is the chi-squared distribution with one degree of freedom, the probability of a result such as 1 heads in 10 trials can be approximated either by using the normal distribution directly, or the chi-squared distribution for the normalised, squared difference between observed and expected value. However, many problems involve more than the two possible outcomes of a binomial, and instead require 3 or more categories, which leads to the multinomial distribution. Just as de Moivre and Laplace sought for and found the normal approximation to the binomial, Pearson sought for and found a degenerate multivariate normal approximation to the multinomial distribution (the numbers in each category add up to the total sample size, which is considered fixed). Pearson showed that the chi-squared distribution arose from such a multivariate normal approximation to the multinomial distribution, taking careful account of the statistical dependence (negative correlations) between numbers of observations in different categories.[10]
Theprobability density function(pdf) of the chi-squared distribution is
whereΓ(k/2){\textstyle \Gamma (k/2)}denotes thegamma function, which hasclosed-form values for integerk{\displaystyle k}.
For derivations of the pdf in the cases of one, two andk{\displaystyle k}degrees of freedom, seeProofs related to chi-squared distribution.
Itscumulative distribution functionis:
whereγ(s,t){\displaystyle \gamma (s,t)}is thelower incomplete gamma functionandP(s,t){\textstyle P(s,t)}is theregularized gamma function.
In a special case ofk=2{\displaystyle k=2}this function has the simple form:
which can be easily derived by integratingf(x;2)=12e−x/2{\displaystyle f(x;\,2)={\frac {1}{2}}e^{-x/2}}directly. The integer recurrence of the gamma function makes it easy to computeF(x;k){\displaystyle F(x;\,k)}for other small, evenk{\displaystyle k}.
Tables of the chi-squared cumulative distribution function are widely available and the function is included in manyspreadsheetsand allstatistical packages.
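For example, the simple k = 2 form can be checked directly against a library implementation of the general CDF:

```python
import math
from scipy.stats import chi2

x = 3.0
print(1.0 - math.exp(-x / 2.0))   # closed form of F(x; 2)
print(chi2.cdf(x, df=2))          # general regularized-gamma CDF; same value
```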
Lettingz≡x/k{\displaystyle z\equiv x/k},Chernoff boundson the lower and upper tails of the CDF may be obtained.[11]For the cases when0<z<1{\displaystyle 0<z<1}(which include all of the cases when this CDF is less than half):F(zk;k)≤(ze1−z)k/2.{\displaystyle F(zk;\,k)\leq (ze^{1-z})^{k/2}.}
The tail bound for the cases whenz>1{\displaystyle z>1}, similarly, is
For anotherapproximationfor the CDF modeled after the cube of a Gaussian, see underNoncentral chi-squared distribution.
The following is a special case of Cochran's theorem.
Theorem.IfZ1,...,Zn{\displaystyle Z_{1},...,Z_{n}}areindependentidentically distributed (i.i.d.),standard normalrandom variables, then∑t=1n(Zt−Z¯)2∼χn−12{\displaystyle \sum _{t=1}^{n}(Z_{t}-{\bar {Z}})^{2}\sim \chi _{n-1}^{2}}whereZ¯=1n∑t=1nZt.{\displaystyle {\bar {Z}}={\frac {1}{n}}\sum _{t=1}^{n}Z_{t}.}
Proof.LetZ∼N(0¯,11){\displaystyle Z\sim {\mathcal {N}}({\bar {0}},1\!\!1)}be a vector ofn{\displaystyle n}independent normally distributed random variables,
andZ¯{\displaystyle {\bar {Z}}}their average.
Then∑t=1n(Zt−Z¯)2=∑t=1nZt2−nZ¯2=Z⊤[11−1n1¯1¯⊤]Z=:Z⊤MZ{\displaystyle \sum _{t=1}^{n}(Z_{t}-{\bar {Z}})^{2}~=~\sum _{t=1}^{n}Z_{t}^{2}-n{\bar {Z}}^{2}~=~Z^{\top }[1\!\!1-{\textstyle {\frac {1}{n}}}{\bar {1}}{\bar {1}}^{\top }]Z~=:~Z^{\top }\!MZ}where11{\displaystyle 1\!\!1}is the identity matrix and1¯{\displaystyle {\bar {1}}}the all ones vector.M{\displaystyle M}has one eigenvectorb1:=1n1¯{\displaystyle b_{1}:={\textstyle {\frac {1}{\sqrt {n}}}}{\bar {1}}}with eigenvalue0{\displaystyle 0},
andn−1{\displaystyle n-1}eigenvectorsb2,...,bn{\displaystyle b_{2},...,b_{n}}(all orthogonal tob1{\displaystyle b_{1}}) with eigenvalue1{\displaystyle 1},
which can be chosen so thatQ:=(b1,...,bn){\displaystyle Q:=(b_{1},...,b_{n})}is an orthogonal matrix.
Since alsoX:=Q⊤Z∼N(0¯,Q⊤11Q)=N(0¯,11){\displaystyle X:=Q^{\top }\!Z\sim {\mathcal {N}}({\bar {0}},Q^{\top }\!1\!\!1Q)={\mathcal {N}}({\bar {0}},1\!\!1)},
we have∑t=1n(Zt−Z¯)2=Z⊤MZ=X⊤Q⊤MQX=X22+...+Xn2∼χn−12,{\displaystyle \sum _{t=1}^{n}(Z_{t}-{\bar {Z}})^{2}~=~Z^{\top }\!MZ~=~X^{\top }\!Q^{\top }\!MQX~=~X_{2}^{2}+...+X_{n}^{2}~\sim ~\chi _{n-1}^{2},}which proves the claim.
It follows from the definition of the chi-squared distribution that the sum of independent chi-squared variables is also chi-squared distributed. Specifically, ifXi,i=1,n¯{\displaystyle X_{i},i={\overline {1,n}}}are independent chi-squared variables withki{\displaystyle k_{i}},i=1,n¯{\displaystyle i={\overline {1,n}}}degrees of freedom, respectively, thenY=X1+⋯+Xn{\displaystyle Y=X_{1}+\cdots +X_{n}}is chi-squared distributed withk1+⋯+kn{\displaystyle k_{1}+\cdots +k_{n}}degrees of freedom.
The sample mean ofn{\displaystyle n}i.i.d.chi-squared variables of degreek{\displaystyle k}is distributed according to a gamma distribution with shapeα{\displaystyle \alpha }and scaleθ{\displaystyle \theta }parameters:
Asymptotically, given that for a shape parameterα{\displaystyle \alpha }going to infinity, a Gamma distribution converges towards a normal distribution with expectationμ=α⋅θ{\displaystyle \mu =\alpha \cdot \theta }and varianceσ2=αθ2{\displaystyle \sigma ^{2}=\alpha \,\theta ^{2}}, the sample mean converges towards:
X¯→n→∞N(μ=k,σ2=2k/n){\displaystyle {\overline {X}}\xrightarrow {n\to \infty } N(\mu =k,\sigma ^{2}=2\,k/n)}
Note that we would have obtained the same result invoking instead thecentral limit theorem, noting that for each chi-squared variable of degreek{\displaystyle k}the expectation isk{\displaystyle k}, and its variance2k{\displaystyle 2\,k}(and hence the variance of the sample meanX¯{\displaystyle {\overline {X}}}beingσ2=2kn{\displaystyle \sigma ^{2}={\frac {2k}{n}}}).
Thedifferential entropyis given by
whereψ(x){\displaystyle \psi (x)}is theDigamma function.
The chi-squared distribution is themaximum entropy probability distributionfor a random variateX{\displaystyle X}for whichE(X)=k{\displaystyle \operatorname {E} (X)=k}andE(ln(X))=ψ(k/2)+ln(2){\displaystyle \operatorname {E} (\ln(X))=\psi (k/2)+\ln(2)}are fixed. Since the chi-squared is in the family of gamma distributions, this can be derived by substituting appropriate values in theExpectation of the log moment of gamma. For derivation from more basic principles, see the derivation inmoment-generating function of the sufficient statistic.
The noncentral moments (raw moments) of a chi-squared distribution withk{\displaystyle k}degrees of freedom are given by[12][13]
Thecumulantsare readily obtained by apower seriesexpansion of the logarithm of the characteristic function:
withcumulant generating functionlnE[etX]=−k2ln(1−2t){\displaystyle \ln E[e^{tX}]=-{\frac {k}{2}}\ln(1-2t)}.
The chi-squared distribution exhibits strong concentration around its mean. The standard Laurent-Massart[14]bounds are:
One consequence is that, ifZ∼N(0,1)k{\displaystyle Z\sim N(0,1)^{k}}is a gaussian random vector inRk{\displaystyle \mathbb {R} ^{k}}, then as the dimensionk{\displaystyle k}grows, the squared length of the vector is concentrated tightly aroundk{\displaystyle k}with a widthk1/2+α{\displaystyle k^{1/2+\alpha }}:Pr(‖Z‖2∈[k−2k1/2+α,k+2k1/2+α+2kα])≥1−e−kα{\displaystyle Pr(\|Z\|^{2}\in [k-2k^{1/2+\alpha },k+2k^{1/2+\alpha }+2k^{\alpha }])\geq 1-e^{-k^{\alpha }}}where the exponentα{\displaystyle \alpha }can be chosen as any value inR{\displaystyle \mathbb {R} }.
Since the cumulant generating function forχ2(k){\displaystyle \chi ^{2}(k)}isK(t)=−k2ln(1−2t){\displaystyle K(t)=-{\frac {k}{2}}\ln(1-2t)}, and itsconvex dualisK∗(q)=12(q−k+klnkq){\displaystyle K^{*}(q)={\frac {1}{2}}(q-k+k\ln {\frac {k}{q}})}, the standardChernoff boundyieldslnPr(X≥(1+ϵ)k)≤−k2(ϵ−ln(1+ϵ))lnPr(X≤(1−ϵ)k)≤−k2(−ϵ−ln(1−ϵ)){\displaystyle {\begin{aligned}\ln Pr(X\geq (1+\epsilon )k)&\leq -{\frac {k}{2}}(\epsilon -\ln(1+\epsilon ))\\\ln Pr(X\leq (1-\epsilon )k)&\leq -{\frac {k}{2}}(-\epsilon -\ln(1-\epsilon ))\end{aligned}}}where0<ϵ<1{\displaystyle 0<\epsilon <1}. By the union bound,Pr(X∈(1±ϵ)k)≥1−2e−k2(12ϵ2−13ϵ3){\displaystyle Pr(X\in (1\pm \epsilon )k)\geq 1-2e^{-{\frac {k}{2}}({\frac {1}{2}}\epsilon ^{2}-{\frac {1}{3}}\epsilon ^{3})}}This result is used in proving theJohnson–Lindenstrauss lemma.[15]
By thecentral limit theorem, because the chi-squared distribution is the sum ofk{\displaystyle k}independent random variables with finite mean and variance, it converges to a normal distribution for largek{\displaystyle k}. For many practical purposes, fork>50{\displaystyle k>50}the distribution is sufficiently close to anormal distribution, so the difference is ignorable.[16]Specifically, ifX∼χ2(k){\displaystyle X\sim \chi ^{2}(k)}, then ask{\displaystyle k}tends to infinity, the distribution of(X−k)/2k{\displaystyle (X-k)/{\sqrt {2k}}}tendsto a standard normal distribution. However, convergence is slow as theskewnessis8/k{\displaystyle {\sqrt {8/k}}}and theexcess kurtosisis12/k{\displaystyle 12/k}.
The sampling distribution ofln(χ2){\displaystyle \ln(\chi ^{2})}converges to normality much faster than the sampling distribution ofχ2{\displaystyle \chi ^{2}},[17]as thelogarithmic transformremoves much of the asymmetry.[18]
Other functions of the chi-squared distribution converge more rapidly to a normal distribution. Some examples are:
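One well-known example (the subject of the Wilson–Hilferty transformation) is the cube root of X/k, which is approximately normal with mean 1 − 2/(9k) and variance 2/(9k); a quick numerical check:

```python
import numpy as np
from scipy.stats import chi2, norm

# Wilson-Hilferty: (X/k)^(1/3) is approximately N(1 - 2/(9k), 2/(9k)).
k, x = 7, 12.0
z = ((x / k) ** (1.0 / 3.0) - (1.0 - 2.0 / (9.0 * k))) / np.sqrt(2.0 / (9.0 * k))
print(chi2.cdf(x, df=k))   # exact CDF, about 0.90
print(norm.cdf(z))         # cube-root approximation, close even for k = 7
```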
A chi-squared variable withk{\displaystyle k}degrees of freedom is defined as the sum of the squares ofk{\displaystyle k}independentstandard normalrandom variables.
IfY{\displaystyle Y}is ak{\displaystyle k}-dimensional Gaussian random vector with mean vectorμ{\displaystyle \mu }and rankk{\displaystyle k}covariance matrixC{\displaystyle C}, thenX=(Y−μ)TC−1(Y−μ){\displaystyle X=(Y-\mu )^{T}C^{-1}(Y-\mu )}is chi-squared distributed withk{\displaystyle k}degrees of freedom.
The sum of squares ofstatistically independentunit-variance Gaussian variables which donothave mean zero yields a generalization of the chi-squared distribution called thenoncentral chi-squared distribution.
IfY{\displaystyle Y}is a vector ofk{\displaystyle k}i.i.d.standard normal random variables andA{\displaystyle A}is ak×k{\displaystyle k\times k}symmetric,idempotent matrixwithrankk−n{\displaystyle k-n}, then thequadratic formYTAY{\displaystyle Y^{T}AY}is chi-square distributed withk−n{\displaystyle k-n}degrees of freedom.
IfΣ{\displaystyle \Sigma }is ap×p{\displaystyle p\times p}positive-semidefinite covariance matrix with strictly positive diagonal entries, then forX∼N(0,Σ){\displaystyle X\sim N(0,\Sigma )}andw{\displaystyle w}a randomp{\displaystyle p}-vector independent ofX{\displaystyle X}such thatw1+⋯+wp=1{\displaystyle w_{1}+\cdots +w_{p}=1}andwi≥0,i=1,…,p,{\displaystyle w_{i}\geq 0,i=1,\ldots ,p,}then
The chi-squared distribution is also naturally related to other distributions arising from the Gaussian. In particular,
The chi-squared distribution is obtained as the sum of the squares ofkindependent, zero-mean, unit-variance Gaussian random variables. Generalizations of this distribution can be obtained by summing the squares of other types of Gaussian random variables. Several such distributions are described below.
IfX1,…,Xn{\displaystyle X_{1},\ldots ,X_{n}}are chi square random variables anda1,…,an∈R>0{\displaystyle a_{1},\ldots ,a_{n}\in \mathbb {R} _{>0}}, then the distribution ofX=∑i=1naiXi{\displaystyle X=\sum _{i=1}^{n}a_{i}X_{i}}is a special case of aGeneralized Chi-squared Distribution.
A closed expression for this distribution is not known. It may be, however, approximated efficiently using theproperty of characteristic functionsof chi-square random variables.[21]
The noncentral chi-squared distribution is obtained from the sum of the squares of independent Gaussian random variables having unit variance andnonzeromeans.
The generalized chi-squared distribution is obtained from the quadratic formz'Azwherezis a zero-mean Gaussian vector having an arbitrary covariance matrix, andAis an arbitrary matrix.
The chi-squared distributionX∼χk2{\displaystyle X\sim \chi _{k}^{2}}is a special case of thegamma distribution, in thatX∼Γ(k2,12){\displaystyle X\sim \Gamma \left({\frac {k}{2}},{\frac {1}{2}}\right)}using the rate parameterization of the gamma distribution (orX∼Γ(k2,2){\displaystyle X\sim \Gamma \left({\frac {k}{2}},2\right)}using the scale parameterization of the gamma distribution)
wherekis an integer.
Because theexponential distributionis also a special case of the gamma distribution, we also have that ifX∼χ22{\displaystyle X\sim \chi _{2}^{2}}, thenX∼exp(12){\displaystyle X\sim \operatorname {exp} \left({\frac {1}{2}}\right)}is anexponential distribution.
TheErlang distributionis also a special case of the gamma distribution and thus we also have that ifX∼χk2{\displaystyle X\sim \chi _{k}^{2}}with evenk{\displaystyle k}, thenX{\displaystyle X}is Erlang distributed with shape parameterk/2{\displaystyle k/2}and scale parameter1/2{\displaystyle 1/2}.
The chi-squared distribution has numerous applications in inferentialstatistics, for instance inchi-squared testsand in estimatingvariances. It enters the problem of estimating the mean of a normally distributed population and the problem of estimating the slope of aregressionline via its role inStudent's t-distribution. It enters allanalysis of varianceproblems via its role in theF-distribution, which is the distribution of the ratio of two independent chi-squaredrandom variables, each divided by their respective degrees of freedom.
Following are some of the most common situations in which the chi-squared distribution arises from a Gaussian-distributed sample.
The chi-squared distribution is also often encountered inmagnetic resonance imaging.[22]
Thep{\textstyle p}-valueis the probability of observing a test statisticat leastas extreme in a chi-squared distribution. Accordingly, since thecumulative distribution function(CDF) for the appropriate degrees of freedom(df)gives the probability of having obtained a valueless extremethan this point, subtracting the CDF value from 1 gives thep-value. A lowp-value, below the chosen significance level, indicatesstatistical significance, i.e., sufficient evidence to reject the null hypothesis. A significance level of 0.05 is often used as the cutoff between significant and non-significant results.
The table below gives a number ofp-values matching toχ2{\displaystyle \chi ^{2}}for the first 10 degrees of freedom.
These values can be calculated by evaluating the quantile function (also known as "inverse CDF" or "ICDF") of the chi-squared distribution;[24] e.g., the χ2 ICDF for p = 0.05 and df = 7 yields 2.1673 ≈ 2.17 as in the table above, noting that 1 – p is the p-value from the table.
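In SciPy, for instance:

```python
from scipy.stats import chi2

print(chi2.ppf(0.05, df=7))        # quantile function (ICDF): 2.167...
print(1 - chi2.cdf(2.1673, df=7))  # upper-tail probability, i.e. the p-value 0.95
```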
This distribution was first described by the German geodesist and statisticianFriedrich Robert Helmertin papers of 1875–6,[25][26]where he computed the sampling distribution of the sample variance of a normal population. Thus in German this was traditionally known as theHelmert'sche("Helmertian") or "Helmert distribution".
The distribution was independently rediscovered by the English mathematicianKarl Pearsonin the context ofgoodness of fit, for which he developed hisPearson's chi-squared test, published in 1900, with computed table of values published in (Elderton 1902), collected in (Pearson 1914, pp. xxxi–xxxiii, 26–28, Table XII).
The name "chi-square" ultimately derives from Pearson's shorthand for the exponent in amultivariate normal distributionwith the Greek letterChi, writing−½χ2for what would appear in modern notation as−½xTΣ−1x(Σ being thecovariance matrix).[27]The idea of a family of "chi-squared distributions", however, is not due to Pearson but arose as a further development due to Fisher in the 1920s.[25]
|
https://en.wikipedia.org/wiki/Wilson%E2%80%93Hilferty_transformation
|
Awhitening transformationorsphering transformationis alinear transformationthat transforms a vector ofrandom variableswith a knowncovariance matrixinto a set of new variables whose covariance is theidentity matrix, meaning that they areuncorrelatedand each havevariance1.[1]The transformation is called "whitening" because it changes the input vector into awhite noisevector.
Several other transformations are closely related to whitening:
SupposeX{\displaystyle X}is arandom (column) vectorwith non-singular covariance matrixΣ{\displaystyle \Sigma }and mean0{\displaystyle 0}. Then the transformationY=WX{\displaystyle Y=WX}with
awhitening matrixW{\displaystyle W}satisfying the conditionWTW=Σ−1{\displaystyle W^{\mathrm {T} }W=\Sigma ^{-1}}yields the whitened random vectorY{\displaystyle Y}with unit diagonal covariance.
IfX{\displaystyle X}has non-zero meanμ{\displaystyle \mu }, then whitening can be performed byY=W(X−μ){\displaystyle Y=W(X-\mu )}.
There are infinitely many possible whitening matricesW{\displaystyle W}that all satisfy the above condition. Commonly used choices areW=Σ−1/2{\displaystyle W=\Sigma ^{-1/2}}(Mahalanobis or ZCA whitening),W=LT{\displaystyle W=L^{T}}whereL{\displaystyle L}is theCholesky decompositionofΣ−1{\displaystyle \Sigma ^{-1}}(Cholesky whitening),[3]or the eigen-system ofΣ{\displaystyle \Sigma }(PCA whitening).[4]
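A small NumPy sketch of two of these choices, with the sample covariance standing in for Σ:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.multivariate_normal([0.0, 0.0], [[4.0, 1.5], [1.5, 1.0]], size=5000)
Sigma = np.cov(X, rowvar=False)

# PCA whitening: rotate onto the eigenbasis of Sigma and rescale each axis.
eigval, eigvec = np.linalg.eigh(Sigma)
W_pca = np.diag(eigval ** -0.5) @ eigvec.T

# Cholesky whitening: W = L^T with L L^T = Sigma^{-1}.
L = np.linalg.cholesky(np.linalg.inv(Sigma))
W_chol = L.T

Y = X @ W_pca.T                  # each row is a whitened sample
print(np.cov(Y, rowvar=False))   # approximately the identity matrix
```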
Optimal whitening transforms can be singled out by investigating the cross-covariance and cross-correlation ofX{\displaystyle X}andY{\displaystyle Y}.[3]For example, the unique optimal whitening transformation achieving maximal component-wise correlation between originalX{\displaystyle X}and whitenedY{\displaystyle Y}is produced by the whitening matrixW=P−1/2V−1/2{\displaystyle W=P^{-1/2}V^{-1/2}}whereP{\displaystyle P}is the correlation matrix andV{\displaystyle V}the diagonal variance matrix.
Whitening a data matrix follows the same transformation as for random variables. An empirical whitening transform is obtained byestimating the covariance(e.g. bymaximum likelihood) and subsequently constructing a corresponding estimated whitening matrix (e.g. byCholesky decomposition).
This modality is a generalization of the pre-whitening procedure extended to more general spaces whereX{\displaystyle X}is usually assumed to be a random function or other random objects in aHilbert spaceH{\displaystyle H}. One of the main issues of extending whitening to infinite dimensions is that thecovariance operatorhas an unbounded inverse inH{\displaystyle H}. Nevertheless, if one assumes that Picard condition holds forX{\displaystyle X}in the range space of the covariance operator, whitening becomes possible.[5]A whitening operator can be then defined from the factorization of theMoore–Penrose inverseof the covariance operator, which has effective mapping on Karhunen–Loève type expansions ofX{\displaystyle X}. The advantage of these whitening transformations is that they can be optimized according to the underlying topological properties of the data, thus producing more robust whitening representations. High-dimensional features of the data can be exploited through kernel regressors or basis function systems.[6]
An implementation of several whitening procedures inR, including ZCA-whitening and PCA whitening but alsoCCA whitening, is available in the "whitening" R package[7]published onCRAN. The R package "pfica"[8]allows the computation of high-dimensional whitening representations using basis function systems (B-splines,Fourier basis, etc.).
|
https://en.wikipedia.org/wiki/Whitening_transformation
|
Geometric feature learning is a technique combining machine learning and computer vision to solve visual tasks. The main goal of this method is to find a set of representative features of geometric form to represent an object, by collecting geometric features from images and learning them using efficient machine learning methods. Humans solve visual tasks and can respond quickly to the environment by extracting perceptual information from what they see. Researchers simulate humans' ability to recognize objects in order to solve computer vision problems. For example, M. Mata et al. (2002)[1] applied feature learning techniques to mobile robot navigation tasks in order to avoid obstacles. They used genetic algorithms for learning features and recognizing objects (figures). Geometric feature learning methods can not only solve recognition problems but also predict subsequent actions by analyzing a set of sequential input sensory images, usually some extracted features of the images. Through learning, hypotheses about the next action are generated, and according to the probability of each hypothesis the most probable action is chosen. This technique is widely used in the area of artificial intelligence.
Geometric feature learning methods extract distinctive geometric features from images. Geometric features are features of objects constructed by a set of geometric elements like points, lines, curves or surfaces. These features can be corner features, edge features, Blobs, Ridges, salient points image texture and so on, which can be detected byfeature detectionmethods.
Geometric component feature is a combination of several primitive features and it always consists more than 2 primitive features like edges, corners or blobs. Extracting geometric feature vector at location x can be computed according to the reference point, which is shown below:
Here x denotes the location of the feature,θ{\displaystyle \textstyle \theta }denotes the orientation, andσ{\displaystyle \textstyle \sigma }denotes the intrinsic scale.
A Boolean compound feature consists of two sub-features, which can be primitive features or compound features. There are two types of Boolean features: conjunctive features, whose value is the product of the two sub-features, and disjunctive features, whose value is the maximum of the two sub-features.
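The two combination rules can be written down directly; the following minimal Python sketch (with illustrative sub-feature responses in [0, 1]) is only an interpretation of the definitions above:

```python
def conjunctive(f1, f2):
    # value of a conjunctive Boolean feature: product of the two sub-feature values
    return f1 * f2

def disjunctive(f1, f2):
    # value of a disjunctive Boolean feature: maximum of the two sub-feature values
    return max(f1, f2)

# e.g. two sub-feature responses in [0, 1]
print(conjunctive(0.8, 0.5))  # 0.4 -- both sub-features must respond strongly
print(disjunctive(0.8, 0.5))  # 0.8 -- either sub-feature suffices
```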
Feature spacewas first considered in the computer vision area by Segen,[4]who used a multilevel graph to represent the geometric relations of local features.
There are many learning algorithms which can be applied to learn to finddistinctive featuresof objects in an image. Learning can be incremental, meaning that the object classes can be added at any time.
1.Acquire a new training image "I".
2. Evaluate the result with the recognition algorithm; if the result is correct, new object classes are recognised.
The key point of the recognition algorithm is to find the most distinctive feature among all features of all classes, i.e. to maximise the featurefmax{\displaystyle \textstyle \ f_{max}}
Measure the value of a feature in images,fmax{\displaystyle \textstyle \ f_{max}}andffmax{\displaystyle \textstyle \ f_{f_{max}}}, and localise a feature:
whereff(p)(x){\displaystyle \textstyle f_{f_{(p)}}(x)}is defined asff(p)(I)=max{0,f(p)Tf(x)‖f(p)‖‖f(x)‖}{\displaystyle \textstyle f_{f_{(p)}}(I)=\max \left\{0,{\frac {f(p)^{T}f(x)}{\left\|f(p)\right\|\left\|f(x)\right\|}}\right\}}
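This expression is a normalised correlation (cosine similarity) between a stored feature vector f(p) and a measured feature vector f(x), clipped at zero. The following minimal NumPy sketch evaluates it; the vectors and the function name are hypothetical placeholders:

```python
import numpy as np

def feature_response(f_p, f_x):
    """Clipped normalised correlation between a stored feature vector f(p)
    and a measured feature vector f(x), following the expression above."""
    cos_sim = np.dot(f_p, f_x) / (np.linalg.norm(f_p) * np.linalg.norm(f_x))
    return max(0.0, cos_sim)

f_p = np.array([0.2, 0.9, 0.4])   # hypothetical stored feature vector
f_x = np.array([0.1, 0.8, 0.5])   # hypothetical feature vector measured at location x
print(feature_response(f_p, f_x))
```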
After recognising the features, the results should be evaluated to determine whether the classes can be recognised. There are five evaluation categories of recognition results: correct, wrong, ambiguous, confused and ignorant. When the evaluation is correct, a new training image is added and trained. If the recognition fails, the feature nodes should maximise their distinctive power, which is defined by the Kolmogorov–Smirnov distance (KSD).
3.Feature learning algorithm
After a feature is recognised, it should be applied to aBayesian networkto recognise the image, using the feature learning algorithm for testing.
The probably approximately correct (PAC) model was applied by D. Roth (2002) to solve computer vision problems by developing a distribution-free learning theory based on this model.[5]This theory heavily relied on the development of a feature-efficient learning approach. The goal of this algorithm is to learn an object represented by some geometric features in an image. The input is afeature vectorand the output is 1, meaning the object has been detected successfully, or 0 otherwise. The main point of this learning approach is to collect representative elements which can represent the object through a function, and to test it by recognising an object from an image so as to find the representation with high probability.
The learning algorithm aims to predict whether the learned target conceptfT(X){\displaystyle \textstyle f_{T}(X)}belongs to a class, where X is the instance space consisting of the parameters, and then to test whether the prediction is correct.
After learning features, there should be some evaluation algorithms to evaluate the learning algorithms. D. Roth applied two learning algorithms:
The main purpose of SVM is to find ahyperplaneto separate the set of samples(xi,yi){\displaystyle \textstyle (x_{i},y_{i})}wherexi{\displaystyle \textstyle x_{i}}is an input vector which is a selection of featuresx∈RN{\displaystyle \textstyle x\in R^{N}}andyi{\displaystyle \textstyle y_{i}}is the label ofxi{\displaystyle \textstyle x_{i}}. The hyperplane has the following form:f(x)=sgn(∑i=1lyiαi⋅k(x,xi)+b)={1,positiveinputs−1,negativeinputs{\displaystyle \textstyle f(x)=sgn\left(\sum _{i=1}^{l}y_{i}\alpha _{i}\cdot k(x,x_{i})+b\right)=\left\{{\begin{matrix}1,positive\;inputs\\-1,negative\;inputs\end{matrix}}\right.}
wherek(x,xi)=ϕ(x)⋅ϕ(xi){\displaystyle \textstyle k(x,x_{i})=\phi (x)\cdot \phi (x_{i})}is a kernel function.
Both algorithms separate training data by finding a linear function.
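A decision function of this form can be obtained with an off-the-shelf kernel SVM. The following hedged scikit-learn sketch trains such a classifier on synthetic two-class feature vectors; the data, labels and kernel choice are placeholders rather than part of the method described in the source:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])  # toy feature vectors
y = np.array([-1] * 50 + [1] * 50)                                              # class labels

clf = SVC(kernel="rbf")          # kernel k(x, x_i) replaces the inner product phi(x).phi(x_i)
clf.fit(X, y)
print(clf.predict([[0.0, 0.0], [3.0, 3.0]]))   # sign of the decision function, i.e. -1 or +1
```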
|
https://en.wikipedia.org/wiki/Geometric_feature_learning
|
Incomputer visionandimage processing, afeatureis a piece of information about the content of an image; typically about whether a certain region of the image has certain properties. Features may be specific structures in the image such as points, edges or objects. Features may also be the result of a generalneighborhood operationorfeature detectionapplied to the image. Other examples of features are related to motion in image sequences, or to shapes defined in terms of curves or boundaries between different image regions.
More broadly afeatureis any piece of information that is relevant for solving the computational task related to a certain application. This is the same sense asfeatureinmachine learningandpattern recognitiongenerally, though image processing has a very sophisticated collection of features. The feature concept is very general and the choice of features in a particular computer vision system may be highly dependent on the specific problem at hand.
There is no universal or exact definition of what constitutes a feature, and the exact definition often depends on the problem or the type of application. Nevertheless, a feature is typically defined as an "interesting" part of animage, and features are used as a starting point for many computer vision algorithms.
Since features are used as the starting point and main primitives for subsequent algorithms, the overall algorithm will often only be as good as its feature detector. Consequently, the desirable property for a feature detector isrepeatability: whether or not the same feature will be detected in two or more different images of the same scene.
Feature detection is a low-levelimage processingoperation. That is, it is usually performed as the first operation on an image and examines everypixelto see if there is a feature present at that pixel. If this is part of a larger algorithm, then the algorithm will typically only examine the image in the region of the features. As a built-in pre-requisite to feature detection, the input image is usually smoothed by aGaussiankernel in ascale-space representationand one or several feature images are computed, often expressed in terms of localimage derivativeoperations.
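As a hedged illustration of this pipeline, the following Python sketch smooths an image with a Gaussian kernel and computes a Gaussian gradient-magnitude feature image from which candidate feature points are thresholded; the image array, scale and threshold are placeholders:

```python
import numpy as np
from scipy import ndimage

image = np.random.default_rng(0).random((128, 128))          # stand-in for a grey-level image

sigma = 2.0                                                   # scale parameter of the Gaussian kernel
smoothed = ndimage.gaussian_filter(image, sigma)              # scale-space representation at this scale
gradient_magnitude = ndimage.gaussian_gradient_magnitude(image, sigma)  # feature image of edge strength

candidate_edges = gradient_magnitude > gradient_magnitude.mean() + 2 * gradient_magnitude.std()
print(candidate_edges.sum(), "pixels flagged as candidate edge points")
```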
Occasionally, when feature detection iscomputationally expensiveand there are time constraints, a higher-level algorithm may be used to guide the feature detection stage so that only certain parts of the image are searched for features.
There are many computer vision algorithms that use feature detection as the initial step, so as a result, a very large number of feature detectors have been developed. These vary widely in the kinds of feature detected, the computational complexity and the repeatability.
When features are defined in terms of local neighborhood operations applied to an image, a procedure commonly referred to asfeature extraction, one can distinguish between feature detection approaches that produce local decisions whether there is a feature of a given type at a given image point or not, and those that produce non-binary data as a result. The distinction becomes relevant when the resulting detected features are relatively sparse. Although local decisions are made, the output from a feature detection step does not need to be a binary image. The result is often represented in terms of sets of (connected or unconnected) coordinates of the image points where features have been detected, sometimes with subpixel accuracy.
When feature extraction is done without local decision making, the result is often referred to as afeature image. Consequently, a feature image can be seen as an image in the sense that it is a function of the same spatial (or temporal) variables as the original image, but where the pixel values hold information about image features instead of intensity or color. This means that a feature image can be processed in a similar way as an ordinary image generated by an image sensor. Feature images are also often computed as integrated step in algorithms for feature detection.
In some applications, it is not sufficient to extract only one type of feature to obtain the relevant information from the image data. Instead, two or more different features are extracted, resulting in two or more feature descriptors at each image point. A common practice is to organize the information provided by all these descriptors as the elements of one single vector, commonly referred to as afeature vector. The set of all possible feature vectors constitutes afeature space.[1]
A common example of feature vectors appears when each image point is to be classified as belonging to a specific class. Assuming that each image point has a corresponding feature vector based on a suitable set of features, meaning that each class is well separated in the corresponding feature space, the classification of each image point can be done using standardclassificationmethod.
Another and related example occurs whenneural network-based processing is applied to images. The input data fed to the neural network is often given in terms of a feature vector from each image point, where the vector is constructed from several different features extracted from the image data. During a learning phase, the network can itself find which combinations of different features are useful for solving the problem at hand.
Edges are points where there is a boundary (or an edge) between two image regions. In general, an edge can be of almost arbitrary shape, and may include junctions. In practice, edges are usually defined as sets of points in the image that have a stronggradientmagnitude. Furthermore, some common algorithms will then chain high gradient points together to form a more complete description of an edge. These algorithms usually place some constraints on the properties of an edge, such as shape, smoothness, and gradient value.
Locally, edges have a one-dimensional structure.
The terms corners and interest points are used somewhat interchangeably and refer to point-like features in an image, which have a local two-dimensional structure. The name "Corner" arose since early algorithms first performededge detection, and then analyzed the edges to find rapid changes in direction (corners). These algorithms were then developed so that explicit edge detection was no longer required, for instance by looking for high levels ofcurvaturein theimage gradient. It was then noticed that the so-called corners were also being detected on parts of the image that were not corners in the traditional sense (for instance a small bright spot on a dark background may be detected). These points are frequently known as interest points, but the term "corner" is used by tradition[citation needed].
Blobs provide a complementary description of image structures in terms of regions, as opposed to corners that are more point-like. Nevertheless, blob descriptors may often contain a preferred point (a local maximum of an operator response or a center of gravity) which means that many blob detectors may also be regarded as interest point operators. Blob detectors can detect areas in an image that are too smooth to be detected by a corner detector.
Consider shrinking an image and then performing corner detection. The detector will respond to points that are sharp in the shrunk image, but may be smooth in the original image. It is at this point that the difference between a corner detector and a blob detector becomes somewhat vague. To a large extent, this distinction can be remedied by including an appropriate notion of scale. Nevertheless, due to their response properties to different types of image structures at different scales, the LoG and DoHblob detectorsare also mentioned in the article oncorner detection.
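The scale dependence can be illustrated with a Laplacian-of-Gaussian (LoG) blob response computed at several scales. The following sketch uses SciPy and a synthetic bright square as a stand-in blob; the scale normalisation by the square of sigma follows standard scale-space practice, and the sizes are illustrative:

```python
import numpy as np
from scipy import ndimage

image = np.zeros((64, 64))
image[28:36, 28:36] = 1.0                      # a synthetic bright blob on a dark background

for sigma in (1.0, 2.0, 4.0):
    log = sigma**2 * ndimage.gaussian_laplace(image, sigma)   # scale-normalised LoG response
    print(f"sigma={sigma}: strongest (most negative) response {log.min():.3f}")
```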
For elongated objects, the notion ofridgesis a natural tool. A ridge descriptor computed from a grey-level image can be seen as a generalization of amedial axis. From a practical viewpoint, a ridge can be thought of as a one-dimensional curve that represents an axis of symmetry, and in addition has an attribute of local ridge width associated with each ridge point. Unfortunately, however, it is algorithmically harder to extract ridge features from general classes of grey-level images than edge-, corner- or blob features. Nevertheless, ridge descriptors are frequently used for road extraction in aerial images and for extracting blood vessels in medical images—seeridge detection.
Feature detectionincludes methods for computing abstractions of image information and making local decisions at every image point whether there is an image feature of a given type at that point or not. The resulting features will be subsets of the image domain, often in the form of isolated points, continuous curves or connected regions.
The extraction of features is sometimes performed over several scales. One of these methods is thescale-invariant feature transform(SIFT).
Once features have been detected, a local image patch around the feature can be extracted. This extraction may involve quite considerable amounts of image processing. The result is known as a feature descriptor or feature vector. Among the approaches that are used for feature description, one can mentionN-jetsand local histograms (seescale-invariant feature transformfor one example of a local histogram descriptor). In addition to such attribute information, the feature detection step by itself may also provide complementary attributes, such as the edge orientation and gradient magnitude in edge detection and the polarity and the strength of the blob in blob detection.
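For example, SIFT keypoints and their 128-dimensional descriptors can be computed with OpenCV, assuming a build (version 4.4 or later) in which cv2.SIFT_create is available; the synthetic noise image below only stands in for real data:

```python
import cv2
import numpy as np

# synthetic grey-level image standing in for a real input image
rng = np.random.default_rng(0)
image = (rng.random((256, 256)) * 255).astype(np.uint8)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(image, None)
print(len(keypoints), "keypoints detected")
if descriptors is not None:
    print(descriptors.shape)   # (number of keypoints, 128): one descriptor per keypoint
```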
A specific image feature, defined in terms of a specific structure in the image data, can often be represented in different ways. For example, an edge can be represented as aBoolean variablein each image point that describes whether an edge is present at that point. Alternatively, we can instead use a representation that provides acertainty measureinstead of a Boolean statement of the edge's existence and combine this with information about theorientationof the edge. Similarly, the color of a specific region can either be represented in terms of the average color (three scalars) or acolor histogram(three functions).
When a computer vision system or computer vision algorithm is designed the choice of feature representation can be a critical issue. In some cases, a higher level of detail in the description of a feature may be necessary for solving the problem, but this comes at the cost of having to deal with more data and more demanding processing. Below, some of the factors which are relevant for choosing a suitable representation are discussed. In this discussion, an instance of a feature representation is referred to as afeature descriptor, or simplydescriptor.
Two examples of image features are local edge orientation and local velocity in an image sequence. In the case of orientation, the value of this feature may be more or less undefined if more than one edge is present in the corresponding neighborhood. Local velocity is undefined if the corresponding image region does not contain any spatial variation. As a consequence of this observation, it may be relevant to use a feature representation that includes a measure of certainty or confidence related to the statement about the feature value. Otherwise, it is a typical situation that the same descriptor is used to represent feature values of low certainty and feature values close to zero, with a resulting ambiguity in the interpretation of this descriptor. Depending on the application, such an ambiguity may or may not be acceptable.
In particular, if a featured image will be used in subsequent processing, it may be a good idea to employ a feature representation that includes information aboutcertaintyorconfidence. This enables a new feature descriptor to be computed from several descriptors, for example, computed at the same image point but at different scales, or from different but neighboring points, in terms of a weighted average where the weights are derived from the corresponding certainties. In the simplest case, the corresponding computation can be implemented as a low-pass filtering of the featured image. The resulting feature image will, in general, be more stable to noise.
In addition to having certainty measures included in the representation, the representation of the corresponding feature values may itself be suitable for anaveragingoperation or not. Most feature representations can be averaged in practice, but only in certain cases can the resulting descriptor be given a correct interpretation in terms of a feature value. Such representations are referred to asaverageable.
For example, if the orientation of an edge is represented in terms of an angle, this representation must have a discontinuity where the angle wraps from its maximal value to its minimal value. Consequently, it can happen that two similar orientations are represented by angles that have a mean that does not lie close to either of the original angles and, hence, this representation is not averageable. There are other representations of edge orientation, such as thestructure tensor, which are averageable.
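The following small NumPy example illustrates the point numerically: averaging two nearly identical edge orientations as raw angles gives a misleading result, whereas a doubled-angle representation (the idea underlying the structure tensor) averages correctly; the specific angles are arbitrary:

```python
import numpy as np

a1, a2 = 0.05, np.pi - 0.05        # two nearly identical edge orientations (orientation is modulo pi)

naive_mean = (a1 + a2) / 2          # lies near pi/2, far from both original orientations
print(np.degrees(naive_mean))       # about 90 degrees -- a misleading "average"

# doubled-angle representation: map each orientation to a unit vector at angle 2*theta, average, halve
v = np.array([np.cos(2 * a1) + np.cos(2 * a2), np.sin(2 * a1) + np.sin(2 * a2)])
averaged = 0.5 * np.arctan2(v[1], v[0])
print(np.degrees(averaged) % 180)   # close to 0 (equivalently 180) degrees, as expected
```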
Another example relates to motion, where in some cases only the normal velocity relative to some edge can be extracted. If two such features have been extracted and they can be assumed to refer to same true velocity, this velocity is not given as the average of the normal velocity vectors. Hence, normal velocity vectors are not averageable. Instead, there are other representations of motions, using matrices or tensors, that give the true velocity in terms of an average operation of the normal velocity descriptors.[citation needed]
Features detected in each image can be matched across multiple images to establishcorresponding featuressuch ascorresponding points.
The algorithm is based on comparing and analyzing point correspondences between the reference image and the target image. If any part of the cluttered scene shares correspondences greater than the threshold, that part of the cluttered scene image is targeted and considered to include the reference object there.[18]
|
https://en.wikipedia.org/wiki/Feature_detection_(computer_vision)
|
Vector quantization(VQ) is a classicalquantizationtechnique fromsignal processingthat allows the modeling ofprobability density functionsby the distribution of prototype vectors. Developed in the early 1980s byRobert M. Gray, it was originally used fordata compression. It works by dividing a large set of points (vectors) into groups having approximately the same number of points closest to them. Each group is represented by itscentroidpoint, as ink-meansand some otherclusteringalgorithms. In simpler terms, vector quantization chooses a set of points to represent a larger set of points.
The density matching property of vector quantization is powerful, especially for identifying the density of large and high-dimensional data. Since data points are represented by the index of their closest centroid, commonly occurring data have low error, and rare data high error. This is why VQ is suitable forlossy data compression. It can also be used for lossy data correction anddensity estimation.
Vector quantization is based on thecompetitive learningparadigm, so it is closely related to theself-organizing mapmodel and tosparse codingmodels used indeep learningalgorithms such asautoencoder.
The simplest training algorithm for vector quantization is:[1]
1. Pick a sample point at random.
2. Move the nearest quantization vector (codebook centroid) towards this sample point by a small fraction of the distance.
3. Repeat.
A more sophisticated algorithm reduces the bias in the density matching estimation, and ensures that all points are used, by including an extra sensitivity parameter[citation needed]:
It is desirable to use a cooling schedule to produce convergence: seeSimulated annealing. Another (simpler) method isLBGwhich is based onK-Means.
The algorithm can be iteratively updated with 'live' data, rather than by picking random points from a data set, but this will introduce some bias if the data are temporally correlated over many samples.
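The following minimal NumPy sketch illustrates the simplest training scheme above with a fixed learning gain (a decreasing gain or cooling schedule would be used in practice, as noted); the data, codebook size and step size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(2000, 2))                 # training vectors
k, lr = 8, 0.05                                    # codebook size and learning gain
codebook = data[rng.choice(len(data), k, replace=False)].copy()   # initialise from data points

for step in range(20000):
    x = data[rng.integers(len(data))]              # sample a training point at random
    nearest = np.argmin(((codebook - x) ** 2).sum(axis=1))        # index of the closest centroid
    codebook[nearest] += lr * (x - codebook[nearest])             # move it towards the sample

print(np.round(codebook, 2))                       # learned prototype vectors
```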
Vector quantization is used for lossy data compression, lossy data correction, pattern recognition, density estimation and clustering.
Lossy data correction, or prediction, is used to recover data missing from some dimensions. It is done by finding the nearest group with the data dimensions available, then predicting the result based on the values for the missing dimensions, assuming that they will have the same value as the group's centroid.
Fordensity estimation, the area/volume that is closer to a particular centroid than to any other is inversely proportional to the density (due to the density matching property of the algorithm).
Vector quantization, also called "block quantization" or "pattern matching quantization" is often used inlossy data compression. It works by encoding values from a multidimensionalvector spaceinto a finite set of values from a discretesubspaceof lower dimension. A lower-space vector requires less storage space, so the data is compressed. Due to the density matching property of vector quantization, the compressed data has errors that are inversely proportional to density.
The transformation is usually done byprojectionor by using acodebook. In some cases, a codebook can be also used toentropy codethe discrete value in the same step, by generating aprefix codedvariable-length encoded value as its output.
The set of discrete amplitude levels is quantized jointly rather than each sample being quantized separately. Consider ak-dimensional vector[x1,x2,...,xk]{\displaystyle [x_{1},x_{2},...,x_{k}]}of amplitude levels. It is compressed by choosing the nearest matching vector from a set ofn-dimensional vectors[y1,y2,...,yn]{\displaystyle [y_{1},y_{2},...,y_{n}]}, withn<k.
All possible combinations of then-dimensional vector[y1,y2,...,yn]{\displaystyle [y_{1},y_{2},...,y_{n}]}form thevector spaceto which all the quantized vectors belong.
Only the index of the codeword in the codebook is sent instead of the quantized values. This conserves space and achieves more compression.
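As a hedged illustration of codebook-based compression, the sketch below encodes each sample as the index of its nearest codeword and reconstructs it on the decoder side from a shared codebook; both the codebook and the samples are toy placeholders:

```python
import numpy as np

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])   # illustrative codebook
samples = np.array([[0.1, -0.05], [0.9, 1.1], [0.2, 0.8]])

# encoder: index of the nearest codeword for each sample (this is all that is stored or sent)
indices = np.argmin(((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2), axis=1)
print(indices)              # [0 1 2]

# decoder: look the indices back up in the (shared) codebook
reconstructed = codebook[indices]
print(reconstructed)        # quantized approximation of the original samples
```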
Twin vector quantization(VQF) is part of theMPEG-4standard dealing with time domain weighted interleaved vector quantization.
The usage of video codecs based on vector quantization has declined significantly in favor of those based onmotion compensatedprediction combined withtransform coding, e.g. those defined inMPEGstandards, as the low decoding complexity of vector quantization has become less relevant.
VQ was also used in the eighties for speech[5]andspeaker recognition.[6]Recently it has also been used for efficientnearest neighbor search[7]and on-line signature recognition.[8]Inpattern recognitionapplications, one codebook is constructed for each class (each class being a user in biometric applications) using acoustic vectors of this user. In the testing phase the quantization distortion of a testing signal is worked out with the whole set of codebooks obtained in the training phase. The codebook that provides the smallest vector quantization distortion indicates the identified user.
The main advantage of VQ inpattern recognitionis its low computational burden when compared with other techniques such asdynamic time warping(DTW) andhidden Markov model(HMM). The main drawback when compared to DTW and HMM is that it does not take into account the temporal evolution of the signals (speech, signature, etc.) because all the vectors are mixed up. In order to overcome this problem a multi-section codebook approach has been proposed.[9]The multi-section approach consists of modelling the signal with several sections (for instance, one codebook for the initial part, another one for the center and a last codebook for the ending part).
As VQ seeks centroids that act as density points of nearby samples, it can also be used directly as a prototype-based clustering method: each centroid is then associated with one prototype.
By aiming to minimize the expected squared quantization error[10]and introducing a decreasing learning gain fulfilling the Robbins–Monro conditions, multiple iterations over the whole data set with a concrete but fixed number of prototypes converge to the solution of thek-meansclustering algorithm in an incremental manner.
VQ has been used to quantize a feature representation layer in the discriminator ofGenerative adversarial networks. The feature quantization (FQ) technique performs implicit feature matching.[11]It improves the GAN training, and yields an improved performance on a variety of popular GAN models: BigGAN for image generation, StyleGAN for face synthesis, and U-GAT-IT for unsupervised image-to-image translation.
Subtopics
Related topics
Part of this article was originally based on material from theFree On-line Dictionary of Computingand is used withpermissionunder the GFDL.
|
https://en.wikipedia.org/wiki/Vector_quantization
|
Inmachine learning, avariational autoencoder(VAE) is anartificial neural networkarchitecture introduced by Diederik P. Kingma andMax Welling.[1]It is part of the families ofprobabilistic graphical modelsandvariational Bayesian methods.[2]
In addition to being seen as anautoencoderneural network architecture, variational autoencoders can also be studied within the mathematical formulation ofvariational Bayesian methods, connecting a neural encoder network to its decoder through a probabilisticlatent space(for example, as amultivariate Gaussian distribution) that corresponds to the parameters of a variational distribution.
Thus, the encoder maps each point (such as an image) from a large complex dataset into a distribution within the latent space, rather than to a single point in that space. The decoder has the opposite function, which is to map from the latent space to the input space, again according to a distribution (although in practice, noise is rarely added during the decoding stage). By mapping a point to a distribution instead of a single point, the network can avoid overfitting the training data. Both networks are typically trained together with the usage of thereparameterization trick, although the variance of the noise model can be learned separately.[citation needed]
Although this type of model was initially designed forunsupervised learning,[3][4]its effectiveness has been proven forsemi-supervised learning[5][6]andsupervised learning.[7]
A variational autoencoder is a generative model built from a prior distribution over the latent variables and a noise (observation) distribution over the data. Usually such models are trained using theexpectation-maximizationmeta-algorithm (e.g.probabilistic PCA, (spike & slab) sparse coding). Such a scheme optimizes a lower bound of the data likelihood, which is usually computationally intractable, and in doing so requires the discovery of q-distributions, or variationalposteriors. These q-distributions are normally parameterized for each individual data point in a separate optimization process. However, variational autoencoders use a neural network as an amortized approach to jointly optimize across data points. In that way, the same parameters are reused for multiple data points, which can result in massive memory savings. The first neural network takes as input the data points themselves, and outputs parameters for the variational distribution. As it maps from a known input space to the low-dimensional latent space, it is called the encoder.
The decoder is the second neural network of this model. It is a function that maps from the latent space to the input space, e.g. as the means of the noise distribution. It is possible to use another neural network that maps to the variance, however this can be omitted for simplicity. In such a case, the variance can be optimized with gradient descent.
To optimize this model, one needs to know two terms: the "reconstruction error", and theKullback–Leibler divergence(KL-D). Both terms are derived from the free energy expression of the probabilistic model, and therefore differ depending on the noise distribution and the assumed prior of the data, here referred to as the p-distribution. For example, a standard VAE task such as IMAGENET is typically assumed to have Gaussian noise; however, tasks such as binarized MNIST require Bernoulli noise. The KL-D from the free energy expression maximizes the probability mass of the q-distribution that overlaps with the p-distribution, which unfortunately can result in mode-seeking behaviour. The "reconstruction" term is the remainder of the free energy expression, and requires a sampling approximation to compute its expectation value.[8]
More recent approaches replaceKullback–Leibler divergence(KL-D) withvarious statistical distances, see"Statistical distance VAE variants"below.
From the point of view of probabilistic modeling, one wants to maximize the likelihood of the datax{\displaystyle x}by their chosen parameterized probability distributionpθ(x)=p(x|θ){\displaystyle p_{\theta }(x)=p(x|\theta )}. This distribution is usually chosen to be a GaussianN(x|μ,σ){\displaystyle N(x|\mu ,\sigma )}which is parameterized byμ{\displaystyle \mu }andσ{\displaystyle \sigma }respectively, and as a member of the exponential family it is easy to work with as a noise distribution. Simple distributions are easy enough to maximize, however distributions where a prior is assumed over the latentsz{\displaystyle z}results in intractable integrals. Let us findpθ(x){\displaystyle p_{\theta }(x)}viamarginalizingoverz{\displaystyle z}.
wherepθ(x,z){\displaystyle p_{\theta }({x,z})}represents thejoint distributionunderpθ{\displaystyle p_{\theta }}of the observable datax{\displaystyle x}and its latent representation or encodingz{\displaystyle z}. According to thechain rule, the equation can be rewritten as
In the vanilla variational autoencoder,z{\displaystyle z}is usually taken to be a finite-dimensional vector of real numbers, andpθ(x|z){\displaystyle p_{\theta }({x|z})}to be aGaussian distribution. Thenpθ(x){\displaystyle p_{\theta }(x)}is a mixture of Gaussian distributions.
It is now possible to define the set of the relationships between the input data and its latent representation as
Unfortunately, the computation ofpθ(z|x){\displaystyle p_{\theta }(z|x)}is expensive and in most cases intractable. To make the computation feasible, it is necessary to introduce a further function that approximates the posterior distribution as
withϕ{\displaystyle \phi }defined as the set of real values that parametrizeq{\displaystyle q}. This is sometimes calledamortized inference, since by "investing" in finding a goodqϕ{\displaystyle q_{\phi }}, one can later inferz{\displaystyle z}fromx{\displaystyle x}quickly without doing any integrals.
In this way, the problem is to find a good probabilistic autoencoder, in which the conditional likelihood distributionpθ(x|z){\displaystyle p_{\theta }(x|z)}is computed by theprobabilistic decoder, and the approximated posterior distributionqϕ(z|x){\displaystyle q_{\phi }(z|x)}is computed by theprobabilistic encoder.
Parametrize the encoder asEϕ{\displaystyle E_{\phi }}, and the decoder asDθ{\displaystyle D_{\theta }}.
Like manydeep learningapproaches that use gradient-based optimization, VAEs require a differentiable loss function to update the network weights throughbackpropagation.
For variational autoencoders, the idea is to jointly optimize the generative model parametersθ{\displaystyle \theta }to reduce the reconstruction error between the input and the output, andϕ{\displaystyle \phi }to makeqϕ(z|x){\displaystyle q_{\phi }({z|x})}as close as possible topθ(z|x){\displaystyle p_{\theta }(z|x)}. As reconstruction loss,mean squared errorandcross entropyare often used.
As distance loss between the two distributions the Kullback–Leibler divergenceDKL(qϕ(z|x)∥pθ(z|x)){\displaystyle D_{KL}(q_{\phi }({z|x})\parallel p_{\theta }({z|x}))}is a good choice to squeezeqϕ(z|x){\displaystyle q_{\phi }({z|x})}underpθ(z|x){\displaystyle p_{\theta }(z|x)}.[8][9]
The distance loss just defined is expanded as
Now define theevidence lower bound(ELBO):Lθ,ϕ(x):=Ez∼qϕ(⋅|x)[lnpθ(x,z)qϕ(z|x)]=lnpθ(x)−DKL(qϕ(⋅|x)∥pθ(⋅|x)){\displaystyle L_{\theta ,\phi }(x):=\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\ln {\frac {p_{\theta }(x,z)}{q_{\phi }({z|x})}}\right]=\ln p_{\theta }(x)-D_{KL}(q_{\phi }({\cdot |x})\parallel p_{\theta }({\cdot |x}))}Maximizing the ELBOθ∗,ϕ∗=argmaxθ,ϕLθ,ϕ(x){\displaystyle \theta ^{*},\phi ^{*}={\underset {\theta ,\phi }{\operatorname {argmax} }}\,L_{\theta ,\phi }(x)}is equivalent to simultaneously maximizinglnpθ(x){\displaystyle \ln p_{\theta }(x)}and minimizingDKL(qϕ(z|x)∥pθ(z|x)){\displaystyle D_{KL}(q_{\phi }({z|x})\parallel p_{\theta }({z|x}))}. That is, maximizing the log-likelihood of the observed data, and minimizing the divergence of the approximate posteriorqϕ(⋅|x){\displaystyle q_{\phi }(\cdot |x)}from the exact posteriorpθ(⋅|x){\displaystyle p_{\theta }(\cdot |x)}.
The form given is not very convenient for maximization, but the following, equivalent form, is:Lθ,ϕ(x)=Ez∼qϕ(⋅|x)[lnpθ(x|z)]−DKL(qϕ(⋅|x)∥pθ(⋅)){\displaystyle L_{\theta ,\phi }(x)=\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\ln p_{\theta }(x|z)\right]-D_{KL}(q_{\phi }({\cdot |x})\parallel p_{\theta }(\cdot ))}wherelnpθ(x|z){\displaystyle \ln p_{\theta }(x|z)}is implemented as−12‖x−Dθ(z)‖22{\displaystyle -{\frac {1}{2}}\|x-D_{\theta }(z)\|_{2}^{2}}, since that is, up to an additive constant, whatx|z∼N(Dθ(z),I){\displaystyle x|z\sim {\mathcal {N}}(D_{\theta }(z),I)}yields. That is, we model the distribution ofx{\displaystyle x}conditional onz{\displaystyle z}to be a Gaussian distribution centered onDθ(z){\displaystyle D_{\theta }(z)}. The distribution ofqϕ(z|x){\displaystyle q_{\phi }(z|x)}andpθ(z){\displaystyle p_{\theta }(z)}are often also chosen to be Gaussians asz|x∼N(Eϕ(x),σϕ(x)2I){\displaystyle z|x\sim {\mathcal {N}}(E_{\phi }(x),\sigma _{\phi }(x)^{2}I)}andz∼N(0,I){\displaystyle z\sim {\mathcal {N}}(0,I)}, with which we obtain by the formula forKL divergence of Gaussians:Lθ,ϕ(x)=−12Ez∼qϕ(⋅|x)[‖x−Dθ(z)‖22]−12(Nσϕ(x)2+‖Eϕ(x)‖22−2Nlnσϕ(x))+Const{\displaystyle L_{\theta ,\phi }(x)=-{\frac {1}{2}}\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\|x-D_{\theta }(z)\|_{2}^{2}\right]-{\frac {1}{2}}\left(N\sigma _{\phi }(x)^{2}+\|E_{\phi }(x)\|_{2}^{2}-2N\ln \sigma _{\phi }(x)\right)+Const}HereN{\displaystyle N}is the dimension ofz{\displaystyle z}. For a more detailed derivation and more interpretations of ELBO and its maximization, seeits main page.
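As an illustration only, the following PyTorch sketch computes a negative ELBO of the kind above, assuming a diagonal (per-dimension) encoder variance rather than the single scalar variance used in the closed-form expression, and a unit-variance Gaussian decoder; x_recon stands for the decoder output D_theta(z), and mu and logvar for the encoder outputs:

```python
import torch

def negative_elbo(x, x_recon, mu, logvar):
    """Negative ELBO for a VAE with a diagonal-Gaussian encoder and a fixed-variance
    Gaussian decoder: squared-error reconstruction plus closed-form KL to N(0, I)."""
    recon = 0.5 * ((x - x_recon) ** 2).sum(dim=1)                  # -ln p_theta(x|z) up to a constant
    kl = 0.5 * (logvar.exp() + mu ** 2 - 1.0 - logvar).sum(dim=1)  # D_KL(q_phi(z|x) || p_theta(z))
    return (recon + kl).mean()

# illustrative shapes: batch of 4 inputs of dimension 10, latent dimension 3
x, x_recon = torch.rand(4, 10), torch.rand(4, 10)
mu, logvar = torch.zeros(4, 3), torch.zeros(4, 3)
print(negative_elbo(x, x_recon, mu, logvar))
```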
To efficiently search forθ∗,ϕ∗=argmaxθ,ϕLθ,ϕ(x){\displaystyle \theta ^{*},\phi ^{*}={\underset {\theta ,\phi }{\operatorname {argmax} }}\,L_{\theta ,\phi }(x)}the typical method isgradient ascent.
It is straightforward to find∇θEz∼qϕ(⋅|x)[lnpθ(x,z)qϕ(z|x)]=Ez∼qϕ(⋅|x)[∇θlnpθ(x,z)qϕ(z|x)]{\displaystyle \nabla _{\theta }\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\ln {\frac {p_{\theta }(x,z)}{q_{\phi }({z|x})}}\right]=\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\nabla _{\theta }\ln {\frac {p_{\theta }(x,z)}{q_{\phi }({z|x})}}\right]}However,∇ϕEz∼qϕ(⋅|x)[lnpθ(x,z)qϕ(z|x)]{\displaystyle \nabla _{\phi }\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\ln {\frac {p_{\theta }(x,z)}{q_{\phi }({z|x})}}\right]}does not allow one to put the∇ϕ{\displaystyle \nabla _{\phi }}inside the expectation, sinceϕ{\displaystyle \phi }appears in the probability distribution itself. Thereparameterization trick(also known as stochastic backpropagation[10]) bypasses this difficulty.[8][11][12]
The most important example is whenz∼qϕ(⋅|x){\displaystyle z\sim q_{\phi }(\cdot |x)}is normally distributed, asN(μϕ(x),Σϕ(x)){\displaystyle {\mathcal {N}}(\mu _{\phi }(x),\Sigma _{\phi }(x))}.
This can be reparametrized by lettingε∼N(0,I){\displaystyle {\boldsymbol {\varepsilon }}\sim {\mathcal {N}}(0,{\boldsymbol {I}})}be a "standardrandom number generator", and constructz{\displaystyle z}asz=μϕ(x)+Lϕ(x)ϵ{\displaystyle z=\mu _{\phi }(x)+L_{\phi }(x)\epsilon }. Here,Lϕ(x){\displaystyle L_{\phi }(x)}is obtained by theCholesky decomposition:Σϕ(x)=Lϕ(x)Lϕ(x)T{\displaystyle \Sigma _{\phi }(x)=L_{\phi }(x)L_{\phi }(x)^{T}}Then we have∇ϕEz∼qϕ(⋅|x)[lnpθ(x,z)qϕ(z|x)]=Eϵ[∇ϕlnpθ(x,μϕ(x)+Lϕ(x)ϵ)qϕ(μϕ(x)+Lϕ(x)ϵ|x)]{\displaystyle \nabla _{\phi }\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\ln {\frac {p_{\theta }(x,z)}{q_{\phi }({z|x})}}\right]=\mathbb {E} _{\epsilon }\left[\nabla _{\phi }\ln {\frac {p_{\theta }(x,\mu _{\phi }(x)+L_{\phi }(x)\epsilon )}{q_{\phi }(\mu _{\phi }(x)+L_{\phi }(x)\epsilon |x)}}\right]}and so we obtained an unbiased estimator of the gradient, allowingstochastic gradient descent.
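A minimal PyTorch sketch of this construction for the diagonal-covariance case, in which the Cholesky factor reduces to the diagonal matrix of standard deviations; the shapes and names are illustrative:

```python
import torch

def reparameterize(mu, logvar):
    std = torch.exp(0.5 * logvar)      # sigma = exp(logvar / 2); diagonal case of L_phi(x)
    eps = torch.randn_like(std)        # the "standard random number generator" epsilon ~ N(0, I)
    return mu + std * eps              # z, differentiable with respect to mu and logvar

mu = torch.zeros(4, 8, requires_grad=True)       # batch of 4, latent dimension 8
logvar = torch.zeros(4, 8, requires_grad=True)
z = reparameterize(mu, logvar)
z.sum().backward()                                # gradients flow back to mu and logvar
print(mu.grad.shape, logvar.grad.shape)
```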
Since we reparametrizedz{\displaystyle z}, we need to findqϕ(z|x){\displaystyle q_{\phi }(z|x)}. Letq0{\displaystyle q_{0}}be the probability density function forϵ{\displaystyle \epsilon }, then[clarification needed]lnqϕ(z|x)=lnq0(ϵ)−ln|det(∂ϵz)|{\displaystyle \ln q_{\phi }(z|x)=\ln q_{0}(\epsilon )-\ln |\det(\partial _{\epsilon }z)|}where∂ϵz{\displaystyle \partial _{\epsilon }z}is the Jacobian matrix ofz{\displaystyle z}with respect toϵ{\displaystyle \epsilon }. Sincez=μϕ(x)+Lϕ(x)ϵ{\displaystyle z=\mu _{\phi }(x)+L_{\phi }(x)\epsilon }, this islnqϕ(z|x)=−12‖ϵ‖2−ln|detLϕ(x)|−n2ln(2π){\displaystyle \ln q_{\phi }(z|x)=-{\frac {1}{2}}\|\epsilon \|^{2}-\ln |\det L_{\phi }(x)|-{\frac {n}{2}}\ln(2\pi )}
Many variational autoencoders applications and extensions have been used to adapt the architecture to other domains and improve its performance.
β{\displaystyle \beta }-VAE is an implementation with a weighted Kullback–Leibler divergence term to automatically discover and interpret factorised latent representations. With this implementation, it is possible to force manifold disentanglement forβ{\displaystyle \beta }values greater than one. This architecture can discover disentangled latent factors without supervision.[13][14]
The conditional VAE (CVAE), inserts label information in the latent space to force a deterministic constrained representation of the learned data.[15]
Some structures directly deal with the quality of the generated samples[16][17]or implement more than one latent space to further improve the representation learning.
Some architectures mix VAE andgenerative adversarial networksto obtain hybrid models.[18][19][20]
It is not necessary to use gradients to update the encoder. In fact, the encoder is not necessary for the generative model.[21]
After the initial work of Diederik P. Kingma andMax Welling,[22]several procedures were proposed to formulate the operation of the VAE in a more abstract way. In these approaches the loss function is composed of two parts:
We obtain the final formula for the loss:Lθ,ϕ=Ex∼Preal[‖x−Dθ(Eϕ(x))‖22]+d(μ(dz),Eϕ♯Preal)2{\displaystyle L_{\theta ,\phi }=\mathbb {E} _{x\sim \mathbb {P} ^{real}}\left[\|x-D_{\theta }(E_{\phi }(x))\|_{2}^{2}\right]+d\left(\mu (dz),E_{\phi }\sharp \mathbb {P} ^{real}\right)^{2}}
The statistical distanced{\displaystyle d}requires special properties: for instance, it has to be expressible as an expectation, because the loss function will need to be optimized bystochastic optimization algorithms. Several distances can be chosen, and this gave rise to several flavors of VAEs:
|
https://en.wikipedia.org/wiki/Variational_autoencoder
|
Control functions(also known astwo-stage residual inclusion) are statistical methods to correct forendogeneityproblems by modelling the endogeneity in theerror term. The approach thereby differs in important ways from other models that try to account for the sameeconometricproblem.Instrumental variables, for example, attempt to model the endogenous variableXas an ofteninvertiblemodel with respect to a relevant andexogenousinstrumentZ.Panel analysisuses special data properties to difference out unobserved heterogeneity that is assumed to be fixed over time.
Control functions were introduced byHeckmanand Robb[1]although the principle can be traced back to earlier papers.[2]A particular reason why they are popular is because they work for non-invertible models (such asdiscrete choice models) and allow forheterogeneouseffects, where effects at the individual level can differ from effects at the aggregate.[3]A well-known example of the control function approach is theHeckman correction.
Assume we start from a standard endogenous variable setup with additive errors, whereXis an endogenous variable, andZis an exogenous variable that can serve as an instrument.
A popular instrumental variable approach is to use a two-step procedure and estimate equation (2) first and then use the estimates of this first step to estimate equation (1) in a second step. The control function, however, uses that this model implies
The functionh(V) is effectively the control function that models the endogeneity, and is where this econometric approach gets its name.[4]
In aRubin causal modelpotential outcomes framework, whereY1is the outcome variable of people for who the participation indicatorDequals 1, the control function approach leads to the following model
as long as the potential outcomesY0andY1are independent ofDconditional onXandZ.[5]
Since the second-stage regression includesgenerated regressors, its variance-covariance matrix needs to be adjusted.[6][7]
Wooldridge and Terza provide a methodology to both deal with and test for endogeneity within the exponential regression framework, which the following discussion follows closely.[8]While the example focuses on aPoisson regressionmodel, it is possible to generalize to other exponential regression models, although this may come at the cost of additional assumptions (e.g. for binary response or censored data models).
Assume the following exponential regression model, whereai{\displaystyle a_{i}}is an unobserved term in the latent variable. We allow for correlation betweenai{\displaystyle a_{i}}andxi{\displaystyle x_{i}}(implyingxi{\displaystyle x_{i}}is possibly endogenous), but allow for no such correlation betweenai{\displaystyle a_{i}}andzi{\displaystyle z_{i}}.
The variableszi{\displaystyle z_{i}}serve as instrumental variables for the potentially endogenousxi{\displaystyle x_{i}}. One can assume a linear relationship between these two variables or alternatively project the endogenous variablexi{\displaystyle x_{i}}onto the instruments to get the following reduced form equation:
The usual rank condition is needed to ensure identification. The endogeneity is then modeled in the following way, whereρ{\displaystyle \rho }determines the severity of endogeneity andvi{\displaystyle v_{i}}is assumed to be independent ofei{\displaystyle e_{i}}.
Imposing these assumptions, assuming the models are correctly specified, and normalizingE[exp(ei)]=1{\displaystyle \operatorname {E} [\exp(e_{i})]=1}, we can rewrite the conditional mean as follows:
Ifvi{\displaystyle v_{i}}were known at this point, it would be possible to estimate the relevant parameters byquasi-maximum likelihood estimation(QMLE). Following the two-step procedure strategy, Wooldridge and Terza propose estimating equation (1) byordinary least squares. The fitted residuals from this regression can then be plugged into the estimating equation (2), and QMLE methods will lead to consistent estimators of the parameters of interest. Significance tests onρ^{\displaystyle {\hat {\rho }}}can then be used to test for endogeneity within the model.
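A hedged sketch of this two-step procedure using statsmodels: a first-stage OLS regression of the endogenous regressor on the instrument, followed by Poisson quasi-MLE with the fitted residuals included as a regressor. The simulated data, coefficient values and variable names are placeholders, and the standard errors are not adjusted for the generated regressor, as noted above:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)                          # instrument
v = rng.normal(size=n)                          # first-stage error
x = 0.8 * z + v                                 # endogenous regressor
a = 0.5 * v + rng.normal(scale=0.3, size=n)     # unobservable correlated with x through v
y = rng.poisson(np.exp(0.2 + 0.7 * x + a))      # outcome from an exponential mean model

# step 1: reduced-form regression of x on z; keep the residuals v_hat
first = sm.OLS(x, sm.add_constant(z)).fit()
v_hat = first.resid

# step 2: Poisson quasi-MLE of y on x and v_hat; the coefficient on v_hat captures endogeneity
X2 = sm.add_constant(np.column_stack([x, v_hat]))
second = sm.GLM(y, X2, family=sm.families.Poisson()).fit()
print(second.params)        # [constant, coefficient on x, rho-hat on the residual]
print(second.pvalues[-1])   # naive significance test on rho-hat (generated-regressor correction omitted)
```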
The original Heckit procedure makesdistributional assumptionsabout the error terms, however, more flexible estimation approaches with weaker distributional assumptions have been established.[9]Furthermore, Blundell and Powell show how the control function approach can be particularly helpful in models with nonadditive errors, such as discrete choice models.[10]This latter approach, however, does implicitly make strong distributional and functional form assumptions.[5]
|
https://en.wikipedia.org/wiki/Control_function_(econometrics)
|
Instatisticsandeconometrics,optimal instrumentsare a technique for improving theefficiencyofestimatorsinconditional moment models, a class ofsemiparametric modelsthat generateconditional expectationfunctions. To estimate parameters of a conditional moment model, the statistician can derive anexpectationfunction (defining "moment conditions") and use thegeneralized method of moments(GMM). However, there are infinitely many moment conditions that can be generated from a single model; optimal instruments provide the most efficient moment conditions.
As an example, consider thenonlinear regressionmodel
whereyis ascalar(one-dimensional)random variable,xis arandom vectorwith dimensionk, andθis ak-dimensionalparameter. The conditional moment restrictionE[u∣x]=0{\displaystyle E[u\mid x]=0}is consistent with infinitely many moment conditions. For example:
More generally, for any vector-valuedfunctionzofx, it will be the case that
That is,zdefines a finite set oforthogonalityconditions.
A natural question to ask, then, is whether anasymptotically efficientset of conditions is available, in the sense that no other set of conditions achieves lowerasymptotic variance.[1]Both econometricians[2][3]and statisticians[4]have extensively studied this subject.
The answer to this question is that such an optimal set of conditions generally exists, and it has been derived for a wide range of estimators.Takeshi Amemiyawas one of the first to work on this problem and show the optimal number of instruments for nonlinearsimultaneous equation modelswith homoskedastic and serially uncorrelated errors.[5]The form of the optimal instruments was characterized byLars Peter Hansen,[6]and results for nonparametric estimation of optimal instruments are provided by Newey.[7]A result for nearest neighbor estimators was provided by Robinson.[8]
The technique of optimal instruments can be used to show that, in a conditional momentlinear regressionmodel withiiddata, the optimal GMM estimator isgeneralized least squares. Consider the model
whereyis a scalar random variable,xis ak-dimensional random vector, andθis ak-dimensional parameter vector. As above, the moment conditions are
wherez=z(x)is an instrument set of dimensionp(p≥k). The task is to choosezto minimize the asymptotic variance of the resulting GMM estimator. If the data areiid, the asymptotic variance of the GMM estimator is
whereσ2(x)≡E[u2∣x]{\displaystyle \sigma ^{2}(x)\equiv E[u^{2}\mid x]}.
The optimal instruments are given by
which produces the asymptotic variance matrix
These are the optimal instruments because for any otherz, the matrix
ispositive semidefinite.
Giveniiddata(y1,x1),…,(yN,xN){\displaystyle (y_{1},x_{1}),\dots ,(y_{N},x_{N})}, the GMM estimator corresponding toz∗(x){\displaystyle z^{*}(x)}is
which is the generalized least squares estimator. (It is unfeasible becauseσ2(·)is unknown.)[1]
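As a hedged numerical illustration of this result, the NumPy sketch below compares the weighted (generalized) least squares estimator implied by the optimal instruments with ordinary least squares in a simulated heteroskedastic linear model. The skedastic function is taken as known here for illustration; the feasible version would estimate it first. All data and parameter values are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = np.column_stack([np.ones(n), rng.uniform(1.0, 3.0, n)])   # regressors (with intercept)
sigma2 = 0.5 + x[:, 1] ** 2                                    # conditional variance sigma^2(x)
u = rng.normal(scale=np.sqrt(sigma2))                          # heteroskedastic errors, E[u|x] = 0
theta_true = np.array([1.0, 2.0])
y = x @ theta_true + u

w = 1.0 / sigma2                                               # optimal-instrument weighting x / sigma^2(x)
theta_gls = np.linalg.solve((x * w[:, None]).T @ x, (x * w[:, None]).T @ y)
theta_ols = np.linalg.solve(x.T @ x, x.T @ y)
print("GLS:", np.round(theta_gls, 3), " OLS:", np.round(theta_ols, 3))
```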
|
https://en.wikipedia.org/wiki/Optimal_instruments
|
Manual image annotation is the process of manually defining regions in an image and creating a textual description of those regions. Such annotations can for instance be used to trainmachine learningalgorithms forcomputer visionapplications.
This is a list of computersoftwarewhich can be used for manual annotation of images.
|
https://en.wikipedia.org/wiki/List_of_manual_image_annotation_tools
|
Biological databasesare stores of biological information.[1]The journalNucleic Acids Researchregularly publishes special issues on biological databases and has a list of such databases. The 2018 issue has a list of about 180 such databases and updates to previously described databases.[2]Omics Discovery Indexcan be used to browse and search several biological databases. Furthermore, theNIAID Data Ecosystem Discovery Portaldeveloped by theNational Institute of Allergy and Infectious Diseases (NIAID)enables searching across databases.
Meta databases are databases of databases that collect data about data to generate new data. They are capable of merging information from different sources and making it available in a new and more convenient form, or with an emphasis on a particular disease or organism. Originally, metadata was only a common term referring simply todata about datasuch as tags, keywords, and markup headers.
Model organism databasesprovide in-depth biological data for intensively studied organisms.
The primary databases make up theInternational Nucleotide Sequence Database(INSD). They include:
DDBJ (Japan), GenBank (USA) and European Nucleotide Archive (Europe) are repositories for nucleotidesequencedata from allorganisms. All three accept nucleotide sequence submissions, and then exchange new and updated data on a daily basis to achieve optimal synchronisation between them. These three databases are primary databases, as they house original sequence data. They collaborate withSequence Read Archive(SRA), which archives raw reads from high-throughput sequencing instruments.
Secondary databases are:[clarification needed]
Other databases
Generic gene expression databases
Microarray gene expression databases
These databases collectgenomesequences, annotate and analyze them, and provide public access. Some addcurationof experimental literature to improve computed annotations. These databases may hold many species genomes, or a singlemodel organismgenome.
(See also:List of proteins in the human body)
Several publicly available data repositories and resources have been developed to support and manageproteinrelated information, biological knowledge discovery and data-driven hypothesis generation.[15]The databases in the table below are selected from the databases listed in theNucleic Acids Research (NAR)databases issues and database collection and the databases cross-referenced in theUniProtKB. Most of these databases are cross-referenced withUniProt/UniProtKB so that identifiers can be mapped to each other.[15]
Proteins in human:
There are about 20,000 protein-coding genes in the standard human genome (roughly 1,200 of them already haveWikipedia articlesabout them, as part of theGene Wiki). Including splice variants, there could be as many as 500,000 unique human proteins.[16]
Numerous databases collect information aboutspeciesand othertaxonomiccategories. The Catalogue of Life is a special case as it is a meta-database of about 150 specialized "global species databases" (GSDs) that have collected the names and other information on (almost) all described and thus "known" species.
Images play a critical role in biomedicine, ranging from images ofanthropologicalspecimens tozoology. However, there are relatively few databases dedicated to image collection, although some projects such asiNaturalistcollect photos as a main part of their data. A special case of "images" are 3-dimensional images such asprotein structuresor3D-reconstructionsof anatomical structures. Image databases include, among others:[22]
|
https://en.wikipedia.org/wiki/List_of_biological_databases
|
Adaptive controlis the control method used by a controller which must adapt to a controlled system with parameters which vary, or are initially uncertain.[1][2]For example, as an aircraft flies, its mass will slowly decrease as a result of fuel consumption; a control law is needed that adapts itself to such changing conditions. Adaptive control is different fromrobust controlin that it does not needa prioriinformation about the bounds on these uncertain or time-varying parameters; robust control guarantees that if the changes are within given bounds the control law need not be changed, while adaptive control is concerned with control law changing itself.
The foundation of adaptive control isparameter estimation, which is a branch ofsystem identification. Common methods of estimation includerecursive least squaresandgradient descent. Both of these methods provide update laws that are used to modify estimates in real-time (i.e., as the system operates).Lyapunov stabilityis used to derive these update laws and show convergence criteria (typically persistent excitation; relaxations of this condition are studied in Concurrent Learning adaptive control).Projectionand normalization are commonly used to improve the robustness of estimation algorithms.
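A minimal NumPy sketch of a recursive least squares update law of the kind mentioned above, applied to a toy plant with two unknown parameters; the plant model, forgetting factor and noise level are illustrative assumptions rather than part of any particular adaptive controller:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])          # unknown plant parameters
theta_hat = np.zeros(2)                     # parameter estimate, updated online
P = 1000.0 * np.eye(2)                      # covariance of the estimate (large = uncertain)
lam = 0.99                                  # forgetting factor, allows tracking slow variation

for t in range(500):
    u = rng.uniform(-1.0, 1.0)              # persistently exciting input
    phi = np.array([u, 1.0])                # regressor vector
    y = theta_true @ phi + 0.05 * rng.normal()      # measured output with noise

    # standard recursive least squares update
    K = P @ phi / (lam + phi @ P @ phi)     # gain vector
    theta_hat = theta_hat + K * (y - phi @ theta_hat)
    P = (P - np.outer(K, phi) @ P) / lam

print(np.round(theta_hat, 3))               # close to [2.0, -1.0]
```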
In general, one should distinguish between:
as well as between
Direct methods are ones wherein the estimated parameters are those directly used in the adaptive controller. In contrast, indirect methods are those in which the estimated parameters are used to calculate required controller parameters.[3]Hybrid methods rely on both estimation of parameters and direct modification of the control law.
There are several broad categories of feedback adaptive control (classification can vary):
Some special topics in adaptive control can be introduced as well:
In recent times, adaptive control has been merged with intelligent techniques such as fuzzy and neural networks to bring forth new concepts such as fuzzy adaptive control.
When designing adaptive control systems, special consideration is necessary ofconvergenceandrobustnessissues. Lyapunov stability is typically used to derive control adaptation laws and show convergence.
Usually these methods adapt the controllers to both the process statics and dynamics. In special cases the adaptation can be limited to the static behavior alone, leading to adaptive control based on characteristic curves for the steady-states or to extremum value control, optimizing the steady state. Hence, there are several ways to apply adaptive control algorithms.
A particularly successful application of adaptive control has been adaptive flight control.[9][10]This body of work has focused on guaranteeing stability of a model reference adaptive control scheme using Lyapunov arguments. Several successful flight-test demonstrations have been conducted, including fault tolerant adaptive control.[11]
|
https://en.wikipedia.org/wiki/Adaptive_control
|
Acognitive modelis a representation of one or morecognitive processesin humans or other animals for the purposes of comprehension and prediction. There are many types of cognitivemodels, and they can range from box-and-arrow diagrams to a set of equations to software programs that interact with the same tools that humans use to complete tasks (e.g., computer mouse and keyboard).[1][page needed]In terms ofinformation processing,cognitive modelingis modeling of human perception, reasoning, memory and action.[2][3]
Cognitive models can be developed within or without acognitive architecture, though the two are not always easily distinguishable. In contrast to cognitive architectures, cognitive models tend to be focused on a single cognitive phenomenon or process (e.g., list learning), how two or more processes interact (e.g., visual search and decision making), or making behavioral predictions for a specific task or tool (e.g., how instituting a new software package will affect productivity). Cognitive architectures tend to be focused on the structural properties of the modeled system, and help constrain the development of cognitive models within the architecture.[4]Likewise, model development helps to inform limitations and shortcomings of the architecture. Some of the most popular architectures for cognitive modeling includeACT-R,Clarion,LIDA, andSoar.
Cognitive modeling historically developed withincognitive psychology/cognitive science(includinghuman factors), and has received contributions from the fields ofmachine learningandartificial intelligenceamong others.
A number of key terms are used to describe the processes involved in the perception, storage, and production of speech. Typically, they are used by speech pathologists while treating a child patient. The input signal is the speech signal heard by the child, usually assumed to come from an adult speaker. The output signal is the utterance produced by the child. The unseen psychological events that occur between the arrival of an input signal and the production of speech are the focus of psycholinguistic models. Events that process the input signal are referred to as input processes, whereas events that process the production of speech are referred to as output processes. Some aspects of speech processing are thought to happen online—that is, they occur during the actual perception or production of speech and thus require a share of the attentional resources dedicated to the speech task. Other processes, thought to happen offline, take place as part of the child's background mental processing rather than during the time dedicated to the speech task.
In this sense, online processing is sometimes defined as occurring in real-time, whereas offline processing is said to be time-free (Hewlett, 1990). In box-and-arrow psycholinguistic models, each hypothesized level of representation or processing can be represented in a diagram by a “box,” and the relationships between them by “arrows,” hence the name. Sometimes (as in the models of Smith, 1973, and Menn, 1978, described later in this paper) the arrows represent processes additional to those shown in boxes. Such models make explicit the hypothesized information-processing activities carried out in a particular cognitive function (such as language), in a manner analogous to computer flowcharts that depict the processes and decisions carried out by a computer program. Box-and-arrow models differ widely in the number of unseen psychological processes they describe and thus in the number of boxes they contain. Some have only one or two boxes between the input and output signals (e.g., Menn, 1978; Smith, 1973), whereas others have multiple boxes representing complex relationships between a number of different information-processing events (e.g., Hewlett, 1990; Hewlett, Gibbon, & Cohen-McKenzie, 1998; Stackhouse & Wells, 1997). The most important box, however, and the source of much ongoing debate, is that representing the underlying representation (or UR). In essence, an underlying representation captures information stored in a child's mind about a word he or she knows and uses. As the following description of several models will illustrate, the nature of this information and thus the type(s) of representation present in the child's knowledge base have captured the attention of researchers for some time. (Elise Baker et al. Psycholinguistic Models of Speech Development and Their Application to Clinical Practice. Journal of Speech, Language, and Hearing Research. June 2001. 44. p 685–702.)
Acomputational modelis a mathematical model incomputational sciencethat requires extensive computational resources to study the behavior of a complex system by computer simulation. Computational cognitive models examine cognition and cognitive functions by developing process-based computational models formulated as sets of mathematical equations or computer simulations.[5]The system under study is often a complexnonlinear systemfor which simple, intuitive analytical solutions are not readily available. Rather than deriving a mathematical analytical solution to the problem, experimentation with the model is done by changing the parameters of the system in the computer, and studying the differences in the outcome of the experiments. Theories of operation of the model can be derived/deduced from these computational experiments.
Examples of common computational models areweather forecastingmodels,earth simulatormodels,flight simulatormodels, molecularprotein foldingmodels, andneural networkmodels.
Asymbolicmodel is expressed in characters, usually non-numeric ones, that require translation before they can be used.
A cognitive model issubsymbolicif it is made by constituent entities that are not representations in their turn, e.g., pixels, sound images as perceived by the ear, signal samples; subsymbolic units in neural networks can be considered particular cases of this category.
Hybrid computers are computers that exhibit features of analog computers and digital computers. The digital component normally serves as the controller and provides logical operations, while the analog component normally serves as a solver of differential equations. See more details athybrid intelligent system.
In the traditionalcomputational approach,representationsare viewed as static structures of discretesymbols.Cognitiontakes place by transforming static symbol structures indiscrete, sequential steps.Sensoryinformation is transformed into symbolic inputs, which produce symbolic outputs that get transformed intomotoroutputs. The entire system operates in an ongoing cycle.
What is missing from this traditional view is that human cognition happenscontinuouslyand in real time. Breaking down the processes into discrete time steps may not fullycapturethis behavior. An alternative approach is to define a system with (1) a state of the system at any given time, (2) a behavior, defined as the change over time in overall state, and (3) a state set orstate space, representing the totality of overall states the system could be in.[6]The system is distinguished by the fact that a change in any aspect of the system state depends on other aspects of the same or other system states.[7]
A typicaldynamicalmodel isformalizedby severaldifferential equationsthat describe how the system's state changes over time. By doing so, the form of the space of possibletrajectoriesand the internal and external forces that shape a specific trajectory that unfold over time, instead of the physical nature of the underlyingmechanismsthat manifest this dynamics, carry explanatory force. On this dynamical view, parametric inputs alter the system's intrinsic dynamics, rather than specifying an internal state that describes some external state of affairs.
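As a toy illustration of such a formalization (the equations and parameter values below are arbitrary choices, not a model from the literature), the following sketch integrates a two-dimensional state vector forward in time and records its trajectory through state space:

```python
import numpy as np

def simulate(x0, v0, k=1.0, c=0.2, u=0.0, dt=0.01, steps=2000):
    """Integrate a toy two-dimensional dynamical system with forward Euler
    and return the trajectory through its state space.

    State: (x, v).  Dynamics: dx/dt = v, dv/dt = -k*x - c*v + u,
    i.e. a damped oscillator driven by a parametric input u.
    """
    traj = np.empty((steps, 2))
    x, v = x0, v0
    for i in range(steps):
        traj[i] = (x, v)
        dx = v
        dv = -k * x - c * v + u
        x += dt * dx          # the change in each state variable depends
        v += dt * dv          # on the other variables, as described above
    return traj

trajectory = simulate(x0=1.0, v0=0.0)   # spirals toward the attractor at the origin
```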
Early work in the application of dynamical systems to cognition can be found in the model ofHopfield networks.[8][9]These networks were proposed as a model forassociative memory. They represent the neural level ofmemory, modeling systems of around 30 neurons which can be in either an on or off state. By letting thenetworklearn on its own, structure and computational properties naturally arise. Unlike previous models, “memories” can be formed and recalled by inputting a small portion of the entire memory. Time ordering of memories can also be encoded. The behavior of the system is modeled withvectorswhich can change values, representing different states of the system. This early model was a major step toward a dynamical systems view of human cognition, though many details had yet to be added and more phenomena accounted for.
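A minimal sketch of the associative recall just described, using a small binary Hopfield network with Hebbian weights; the network size, number of stored patterns, and update schedule are illustrative choices rather than those of the original model:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for binary (+1/-1) patterns; no self-connections."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / n

def recall(W, state, iterations=10):
    """Asynchronous updates: each unit flips to the sign of its net input."""
    state = state.copy()
    for _ in range(iterations):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

rng = np.random.default_rng(0)
memories = rng.choice([-1, 1], size=(3, 30))        # three stored "memories", 30 units
cue = memories[0].copy()
cue[:10] = rng.choice([-1, 1], size=10)             # corrupt part of the first memory
retrieved = recall(train_hopfield(memories), cue)   # usually converges back to memories[0]
```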
By taking into account theevolutionary developmentof the humannervous systemand the similarity of thebrainto other organs,Elmanproposed thatlanguageand cognition should be treated as a dynamical system rather than a digital symbol processor.[10]Neural networks of the type Elman implemented have come to be known asElman networks. Instead of treating language as a collection of staticlexicalitems andgrammarrules that are learned and then used according to fixed rules, the dynamical systems view defines thelexiconas regions of state space within a dynamical system. Grammar is made up ofattractorsand repellers that constrain movement in the state space. This means that representations are sensitive to context, with mental representations viewed as trajectories through mental space instead of objects that are constructed and remain static. Elman networks were trained with simple sentences to represent grammar as a dynamical system. Once a basic grammar had been learned, the networks could then parse complex sentences by predicting which words would appear next according to the dynamical model.[11]
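The recurrence at the heart of an Elman-style network can be sketched compactly; the dimensions, random weights, and tanh nonlinearity below are illustrative assumptions, and training on word-prediction data is omitted:

```python
import numpy as np

def elman_forward(inputs, W_xh, W_hh, W_hy):
    """One pass over a sequence: the hidden ("context") state carries the
    trajectory through state space from one time step to the next."""
    h = np.zeros(W_hh.shape[0])
    outputs = []
    for x in inputs:
        h = np.tanh(W_xh @ x + W_hh @ h)   # new state depends on input and previous state
        outputs.append(W_hy @ h)           # e.g. scores for a predicted next word
    return np.array(outputs), h

rng = np.random.default_rng(1)
seq = [rng.normal(size=5) for _ in range(4)]            # four 5-dimensional input vectors
W_xh, W_hh, W_hy = (rng.normal(scale=0.1, size=s) for s in [(8, 5), (8, 8), (5, 8)])
preds, final_state = elman_forward(seq, W_xh, W_hh, W_hy)
```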
A classic developmental error has been investigated in the context of dynamical systems:[12][13]TheA-not-B erroris proposed to be not a distinct error occurring at a specific age (8 to 10 months), but a feature of a dynamic learning process that is also present in older children. Children 2 years old were found to make an error similar to the A-not-B error when searching for toys hidden in a sandbox. After observing the toy being hidden in location A and repeatedly searching for it there, the 2-year-olds were shown a toy hidden in a new location B. When they looked for the toy, they searched in locations that were biased toward location A. This suggests that there is an ongoing representation of the toy's location that changes over time. The child's past behavior influences its model of locations of the sandbox, and so an account of behavior and learning must take into account how the system of the sandbox and the child's past actions is changing over time.[13]
One proposed mechanism of a dynamical system comes from analysis of continuous-time recurrent neural networks (CTRNNs). By focusing on the output of the neural networks rather than their states and examining fully interconnected networks, a three-neuron central pattern generator (CPG) can be used to represent systems such as leg movements during walking.[14] This CPG contains three motor neurons to control the foot, backward-swing, and forward-swing effectors of the leg. Outputs of the network represent whether the foot is up or down and how much force is being applied to generate torque in the leg joint. One feature of this pattern is that neuron outputs are either off or on most of the time. Another feature is that the states are quasi-stable, meaning that they eventually transition to other states. A simple pattern generator circuit like this is proposed to be a building block for a dynamical system. Sets of neurons that simultaneously transition from one quasi-stable state to another are defined as a dynamic module. These modules can in theory be combined to create larger circuits that comprise a complete dynamical system. However, the details of how this combination could occur are not fully worked out.
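A continuous-time recurrent neural network of the kind referred to above can be written down in a few lines; the three-neuron weights, biases, and time constants below are placeholder values rather than the tuned parameters of the cited central pattern generator:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn(W, theta, tau, I, y0, dt=0.01, steps=5000):
    """Integrate tau_i * dy_i/dt = -y_i + sum_j W_ij * sigma(y_j + theta_j) + I_i
    with forward Euler; returns the neuron outputs sigma(y + theta) over time."""
    y = np.array(y0, dtype=float)
    outputs = np.empty((steps, len(y)))
    for t in range(steps):
        outputs[t] = sigmoid(y + theta)
        dydt = (-y + W @ sigmoid(y + theta) + I) / tau
        y = y + dt * dydt
    return outputs

# Three fully interconnected neurons (placeholder parameters).
W = np.array([[4.5, -2.0, -2.0],
              [-2.0, 4.5, -2.0],
              [-2.0, -2.0, 4.5]])
out = ctrnn(W, theta=np.full(3, -2.25), tau=np.ones(3), I=np.zeros(3), y0=[0.1, 0.2, 0.3])
```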
Modern formalizations of dynamical systems applied to the study of cognition vary. One such formalization, referred to as “behavioral dynamics”,[15] treats the agent and the environment as a pair of coupled dynamical systems based on classical dynamical systems theory. In this formalization, information from the environment informs the agent's behavior and the agent's actions modify the environment. In the specific case of perception-action cycles, the coupling of the environment and the agent is formalized by two functions. The first transforms the representation of the agent's action into specific patterns of muscle activation that in turn produce forces in the environment. The second transforms the information from the environment (i.e., patterns of stimulation at the agent's receptors that reflect the environment's current state) into a representation that is useful for controlling the agent's actions. Other similar dynamical systems have been proposed (although not developed into a formal framework) in which the agent's nervous system, the agent's body, and the environment are coupled together.[16][17]
Behavioral dynamics have been applied to locomotive behavior.[15][18][19] Modeling locomotion with behavioral dynamics demonstrates that adaptive behaviors could arise from the interactions of an agent and the environment. According to this framework, adaptive behaviors can be captured by two levels of analysis. At the first level of perception and action, an agent and an environment can be conceptualized as a pair of dynamical systems coupled together by the forces the agent applies to the environment and by the structured information provided by the environment. Thus, behavioral dynamics emerge from the agent-environment interaction. At the second level of time evolution, behavior can be expressed as a dynamical system represented as a vector field. In this vector field, attractors reflect stable behavioral solutions, whereas bifurcations reflect changes in behavior. In contrast to previous work on central pattern generators, this framework suggests that stable behavioral patterns are an emergent, self-organizing property of the agent-environment system rather than being determined by the structure of either the agent or the environment.
In an extension of classical dynamical systems theory,[20] rather than coupling the environment's and the agent's dynamical systems to each other, an “open dynamical system” defines a “total system”, an “agent system”, and a mechanism to relate these two systems. The total system is a dynamical system that models an agent in an environment, whereas the agent system is a dynamical system that models an agent's intrinsic dynamics (i.e., the agent's dynamics in the absence of an environment). Importantly, the relation mechanism does not couple the two systems together, but rather continuously modifies the total system into the decoupled agent's total system. By distinguishing between total and agent systems, it is possible to investigate an agent's behavior when it is isolated from the environment and when it is embedded within an environment. This formalization can be seen as a generalization of the classical formalization: the classical agent corresponds to the agent system of an open dynamical system, and the agent coupled to its environment corresponds to the total system of an open dynamical system.
In the context of dynamical systems and embodied cognition, representations can be conceptualized as indicators or mediators. In the indicator view, internal states carry information about the existence of an object in the environment, where the state of a system during exposure to an object is the representation of that object. In the mediator view, internal states carry information about the environment which is used by the system in obtaining its goals. In this more complex account, the states of the system carry information that mediates between the information the agent takes in from the environment and the force exerted on the environment by the agent's behavior. The application of open dynamical systems has been discussed for four types of classical embodied cognition examples.[21]
The interpretations of these examples rely on the following logic: (1) the total system captures embodiment; (2) one or more agent systems capture the intrinsic dynamics of individual agents; (3) the complete behavior of an agent can be understood as a change to the agent's intrinsic dynamics in relation to its situation in the environment; and (4) the paths of an open dynamical system can be interpreted as representational processes. These embodied cognition examples show the importance of studying the emergent dynamics of agent-environment systems, as well as the intrinsic dynamics of agent systems. Rather than being at odds with traditional cognitive science approaches, dynamical systems are a natural extension of these methods and should be studied in parallel rather than in competition.
|
https://en.wikipedia.org/wiki/Cognitive_model
|
Computational electromagnetics(CEM),computational electrodynamicsorelectromagnetic modelingis the process of modeling the interaction ofelectromagnetic fieldswith physical objects and the environment using computers.
It typically involves usingcomputer programsto compute approximate solutions toMaxwell's equationsto calculateantennaperformance,electromagnetic compatibility,radar cross sectionand electromagneticwave propagationwhen not in free space. A large subfield isantenna modelingcomputer programs, which calculate theradiation patternand electrical properties of radio antennas, and are widely used to design antennas for specific applications.
Several real-world electromagnetic problems likeelectromagnetic scattering,electromagnetic radiation, modeling ofwaveguidesetc., are not analytically calculable, for the multitude of irregular geometries found in actual devices. Computational numerical techniques can overcome the inability to derive closed form solutions of Maxwell's equations under variousconstitutive relationsof media, andboundary conditions. This makescomputational electromagnetics(CEM) important to the design, and modeling of antenna, radar,satelliteand other communication systems,nanophotonicdevices and high speedsiliconelectronics,medical imaging, cell-phone antenna design, among other applications.
CEM typically solves the problem of computing the E (electric) and H (magnetic) fields across the problem domain (e.g., to calculate the antenna radiation pattern for an arbitrarily shaped antenna structure). Power flow direction (Poynting vector), a waveguide's normal modes, media-generated wave dispersion, and scattering can also be computed from the E and H fields. CEM models may or may not assume symmetry, simplifying real-world structures to idealized cylinders, spheres, and other regular geometrical objects. Where symmetry is present, CEM models exploit it extensively, reducing the problem from 3 spatial dimensions to 2D or even 1D.
An eigenvalue problem formulation of CEM allows the calculation of steady-state normal modes in a structure. Transient response and impulse field effects are more accurately modeled by CEM in the time domain, by FDTD. Curved geometrical objects are treated more accurately with finite elements (FEM) or non-orthogonal grids. The beam propagation method (BPM) can solve for the power flow in waveguides. CEM is application specific, even if different techniques converge to the same field and power distributions in the modeled domain.
The most common numerical approach is to discretize ("mesh") the problem space in terms of grids or regular shapes ("cells"), and solve Maxwell's equations simultaneously across all cells. Discretization consumes computer memory, and solving the relevant equations takes significant time. Large-scale CEM problems face memory and CPU limitations, and combating these limitations is an active area of research. High performance clustering, vector processing, and/orparallelismis often required to make the computation practical. Some typical methods involve: time-stepping through the equations over the whole domain for each time instant; bandedmatrix inversionto calculate the weights of basis functions (when modeled by finite element methods); matrix products (when using transfer matrix methods); calculating numericalintegrals(when using themethod of moments); usingfast Fourier transforms; and time iterations (when calculating by the split-step method or by BPM).
Choosing the right technique for solving a problem is important, as choosing the wrong one can either result in incorrect results, or results which take excessively long to compute. However, the name of a technique does not always tell one how it is implemented, especially for commercial tools, which will often have more than one solver.
Davidson[1] gives two tables comparing the FEM, MoM and FDTD techniques in the way they are normally implemented: one for open-region (radiation and scattering) problems and another for guided wave problems.
Maxwell's equations can be formulated as ahyperbolic systemofpartial differential equations. This gives access to powerful techniques for numerical solutions.
It is assumed that the waves propagate in the (x,y)-plane and restrict the direction of the magnetic field to be parallel to thez-axis and thus the electric field to be parallel to the (x,y) plane. The wave is called a transverse magnetic (TM) wave. In 2D and no polarization terms present, Maxwell's equations can then be formulated as:∂∂tu¯+A∂∂xu¯+B∂∂yu¯+Cu¯=g¯{\displaystyle {\frac {\partial }{\partial t}}{\bar {u}}+A{\frac {\partial }{\partial x}}{\bar {u}}+B{\frac {\partial }{\partial y}}{\bar {u}}+C{\bar {u}}={\bar {g}}}whereu,A,B, andCare defined asu¯=(ExEyHz),A=(000001ϵ01μ0),B=(00−1ϵ000−1μ00),C=(σϵ000σϵ0000).{\displaystyle {\begin{aligned}{\bar {u}}&=\left({\begin{matrix}E_{x}\\E_{y}\\H_{z}\end{matrix}}\right),\\[1ex]A&=\left({\begin{matrix}0&0&0\\0&0&{\frac {1}{\epsilon }}\\0&{\frac {1}{\mu }}&0\end{matrix}}\right),\\[1ex]B&=\left({\begin{matrix}0&0&{\frac {-1}{\epsilon }}\\0&0&0\\{\frac {-1}{\mu }}&0&0\end{matrix}}\right),\\[1ex]C&=\left({\begin{matrix}{\frac {\sigma }{\epsilon }}&0&0\\0&{\frac {\sigma }{\epsilon }}&0\\0&0&0\end{matrix}}\right).\end{aligned}}}
In this representation,g¯{\displaystyle {\bar {g}}}is theforcing function, and is in the same space asu¯{\displaystyle {\bar {u}}}. It can be used to express an externally applied field or to describe an optimizationconstraint. As formulated above:g¯=(Ex,constraintEy,constraintHz,constraint).{\displaystyle {\bar {g}}=\left({\begin{matrix}E_{x,{\text{constraint}}}\\E_{y,{\text{constraint}}}\\H_{z,{\text{constraint}}}\end{matrix}}\right).}
g¯{\displaystyle {\bar {g}}}may also be explicitly defined equal to zero to simplify certain problems, or to find acharacteristic solution, which is often the first step in a method to find the particular inhomogeneous solution.
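Since the coefficient matrices above depend only on the material parameters, they can be assembled directly. The short sketch below (a minimal illustration, not drawn from any particular solver; the free-space material values in the example call are placeholders) builds A, B, and C for given ε, μ, and σ and evaluates the time derivative implied by the hyperbolic system for supplied spatial derivatives of ū:

```python
import numpy as np

def tm_matrices(eps, mu, sigma):
    """Coefficient matrices A, B, C of the 2-D TM formulation
    du/dt + A du/dx + B du/dy + C u = g, with u = (Ex, Ey, Hz)."""
    A = np.array([[0, 0, 0],
                  [0, 0, 1 / eps],
                  [0, 1 / mu, 0]])
    B = np.array([[0, 0, -1 / eps],
                  [0, 0, 0],
                  [-1 / mu, 0, 0]])
    C = np.diag([sigma / eps, sigma / eps, 0.0])
    return A, B, C

def du_dt(u, du_dx, du_dy, A, B, C, g=np.zeros(3)):
    """Time derivative of u implied by the hyperbolic system."""
    return g - A @ du_dx - B @ du_dy - C @ u

A, B, C = tm_matrices(eps=8.854e-12, mu=4e-7 * np.pi, sigma=0.0)  # free space
```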
Thediscrete dipole approximationis a flexible technique for computing scattering and absorption by targets of arbitrarygeometry. The formulation is based on integral form of Maxwell equations. The DDA is an approximation of the continuum target by a finite array of polarizable points. The points acquiredipole momentsin response to the local electric field. The dipoles of course interact with one another via their electric fields, so the DDA is also sometimes referred to as the coupleddipoleapproximation. The resulting linear system of equations is commonly solved usingconjugate gradientiterations. The discretization matrix has symmetries (the integral form of Maxwell equations has form of convolution) enablingfast Fourier transformto multiply matrix times vector during conjugate gradient iterations.
Themethod of moments(MoM)[2]orboundary element method(BEM) is a numerical computational method of solving linear partial differential equations which have been formulated asintegral equations(i.e. inboundary integralform). It can be applied in many areas of engineering and science includingfluid mechanics,acoustics,electromagnetics,fracture mechanics, andplasticity.
MoM has become more popular since the 1980s. Because it requires calculating only boundary values, rather than values throughout the space, it is significantly more efficient in terms of computational resources for problems with a small surface/volume ratio. Conceptually, it works by constructing a "mesh" over the modeled surface. However, for many problems, MoM is significantly less computationally efficient than volume-discretization methods (finite element method, finite difference method, finite volume method). Boundary element formulations typically give rise to fully populated matrices. This means that the storage requirements and computational time will tend to grow according to the square of the problem size. By contrast, finite element matrices are typically banded (elements are only locally connected) and the storage requirements for the system matrices typically grow linearly with the problem size. Compression techniques (e.g. multipole expansions or adaptive cross approximation/hierarchical matrices) can be used to ameliorate these problems, though at the cost of added complexity and with a success rate that depends heavily on the nature and geometry of the problem.
MoM is applicable to problems for whichGreen's functionscan be calculated. These usually involve fields inlinearhomogeneousmedia. This places considerable restrictions on the range and generality of problems suitable for boundary elements. Nonlinearities can be included in the formulation, although they generally introduce volume integrals which require the volume to be discretized before solution, removing an oft-cited advantage of MoM.
The fast multipole method (FMM) is an alternative to MoM or Ewald summation. It is an accurate simulation technique and requires less memory and processor power than MoM. The FMM was first introduced by Greengard and Rokhlin[3][4] and is based on the multipole expansion technique. The first application of the FMM in computational electromagnetics was by Engheta et al. (1992).[5] The FMM also has applications in computational bioelectromagnetics in the charge-based boundary element fast multipole method. FMM can also be used to accelerate MoM.
While the fast multipole method is useful for accelerating MoM solutions of integral equations with static or frequency-domain oscillatory kernels, theplane wave time-domain (PWTD)algorithm employs similar ideas to accelerate the MoM solution of time-domain integral equations involving theretarded potential. The PWTD algorithm was introduced in 1998 by Ergin, Shanker, and Michielssen.[6]
Thepartial element equivalent circuit(PEEC) is a 3D full-wave modeling method suitable for combinedelectromagneticandcircuitanalysis. Unlike MoM, PEEC is a fullspectrummethod valid fromdcto the maximumfrequencydetermined by the meshing. In the PEEC method, theintegral equationis interpreted asKirchhoff's voltage lawapplied to a basic PEEC cell which results in a complete circuit solution for 3D geometries. The equivalent circuit formulation allows for additionalSPICEtype circuit elements to be easily included. Further, the models and the analysis apply to both the time and the frequency domains. The circuit equations resulting from the PEEC model are easily constructed using a modifiedloop analysis(MLA) ormodified nodal analysis(MNA) formulation. Besides providing a direct current solution, it has several other advantages over a MoM analysis for this class of problems since any type of circuit element can be included in a straightforward way with appropriate matrix stamps. The PEEC method has recently been extended to include nonorthogonal geometries.[7]This model extension, which is consistent with the classicalorthogonalformulation, includes the Manhattan representation of the geometries in addition to the more generalquadrilateralandhexahedralelements. This helps in keeping the number of unknowns at a minimum and thus reduces computational time for nonorthogonal geometries.[8]
The Cagniard-deHoop method of moments (CdH-MoM) is a 3-D full-wave time-domain integral-equation technique that is formulated via theLorentz reciprocity theorem. Since the CdH-MoM heavily relies on theCagniard-deHoop method, a joint-transform approach originally developed for the analytical analysis of seismic wave propagation in the crustal model of the Earth, this approach is well suited for the TD EM analysis of planarly-layered structures. The CdH-MoM has been originally applied to time-domain performance studies of cylindrical and planar antennas[9]and, more recently, to the TD EM scattering analysis of transmission lines in the presence of thin sheets[10]and electromagnetic metasurfaces,[11][12]for example.
Finite-difference frequency-domain (FDFD) provides a rigorous solution to Maxwell's equations in the frequency domain using the finite-difference method.[13] FDFD is arguably the simplest numerical method that still provides a rigorous solution. It is highly versatile and able to address a very wide range of problems in electromagnetics. The primary drawback of FDFD is poor efficiency compared to other methods. On modern computers, however, a large array of problems is easily handled, such as calculating guided modes in waveguides, scattering from an object, transmission and reflection from photonic crystals, and photonic band diagrams, and simulating metamaterials.
FDFD may be the best "first" method to learn in computational electromagnetics (CEM). It involves almost all the concepts encountered with other methods, but in a much simpler framework. Concepts include boundary conditions, linear algebra, injecting sources, representing devices numerically, and post-processing field data to calculate meaningful things. This will help a person learn other techniques as well as provide a way to test and benchmark those other techniques.
FDFD is very similar to finite-difference time-domain (FDTD). Both methods represent space as an array of points and enforce Maxwell's equations at each point. FDFD puts this large set of equations into a matrix and solves all the equations simultaneously using linear algebra techniques. In contrast, FDTD continually iterates over these equations to evolve a solution over time. Numerically, FDFD and FDTD are very similar, but their implementations are very different.
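A minimal 1-D sketch in the spirit described above, with simplifications that a production FDFD code would not make (scalar Helmholtz equation, zero Dirichlet boundaries instead of absorbing boundaries, a single point source, and arbitrary example dimensions):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def fdfd_1d(eps_r, dx, wavelength, src_index):
    """Solve d2E/dx2 + k0^2 * eps_r * E = -source on a uniform 1-D grid.
    All equations are placed in one sparse matrix and solved at once."""
    n = len(eps_r)
    k0 = 2 * np.pi / wavelength
    main = -2.0 / dx**2 + k0**2 * eps_r        # diagonal of the Helmholtz operator
    off = np.ones(n - 1) / dx**2               # coupling to neighboring points
    A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
    b = np.zeros(n)
    b[src_index] = -1.0                        # point source
    return spla.spsolve(A, b)

eps_r = np.ones(400)
eps_r[200:240] = 4.0                            # a dielectric slab in the domain
E = fdfd_1d(eps_r, dx=0.02e-6, wavelength=1.55e-6, src_index=50)
```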
Finite-difference time-domain (FDTD) is a popular CEM technique. It is easy to understand and has an exceptionally simple implementation for a full-wave solver. Implementing a basic FDTD solver takes at least an order of magnitude less work than implementing either an FEM or MoM solver. FDTD is the only technique that one person can realistically implement alone in a reasonable time frame, but even then, this will be for a quite specific problem.[1] Since it is a time-domain method, solutions can cover a wide frequency range with a single simulation run, provided the time step is small enough to satisfy the Nyquist–Shannon sampling theorem for the desired highest frequency.
FDTD belongs in the general class of grid-based differential time-domain numerical modeling methods.Maxwell's equations(inpartial differentialform) are modified to central-difference equations, discretized, and implemented in software. The equations are solved in a cyclic manner: theelectric fieldis solved at a given instant in time, then themagnetic fieldis solved at the next instant in time, and the process is repeated over and over again.
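A bare-bones 1-D illustration of the cyclic update described above; the normalized units (c = 1, Courant number 1), hard Gaussian source, and reflecting boundaries are simplifications for the sketch, not features of production FDTD codes:

```python
import numpy as np

def fdtd_1d(n_cells=400, n_steps=1000, src_cell=100):
    """Leapfrog Yee updates in 1-D, normalized so that eps0 = mu0 = c = 1
    and the Courant number S = c*dt/dx = 1."""
    Ez = np.zeros(n_cells)
    Hy = np.zeros(n_cells - 1)
    for t in range(n_steps):
        Ez[1:-1] += np.diff(Hy)                            # update E from the curl of H
        Ez[src_cell] += np.exp(-((t - 40) / 12.0) ** 2)    # additive Gaussian pulse source
        Hy += np.diff(Ez)                                  # then update H from the curl of E
    return Ez, Hy

Ez, Hy = fdtd_1d()    # endpoints act as perfect electric conductors (reflecting walls)
```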
The basic FDTD algorithm traces back to a seminal 1966 paper by Kane Yee inIEEE Transactions on Antennas and Propagation.Allen Tafloveoriginated the descriptor "Finite-difference time-domain" and its corresponding "FDTD" acronym in a 1980 paper inIEEE Trans. Electromagn. Compat.Since about 1990, FDTD techniques have emerged as the primary means to model many scientific and engineering problems addressing electromagnetic wave interactions with material structures. An effective technique based on a time-domain finite-volume discretization procedure was introduced by Mohammadian et al. in 1991.[14]Current FDTD modeling applications range from near-DC (ultralow-frequency geophysics involving the entire Earth-ionospherewaveguide) throughmicrowaves(radar signature technology, antennas, wireless communications devices, digital interconnects, biomedical imaging/treatment) to visible light (photonic crystals, nanoplasmonics,solitons, andbiophotonics). Approximately 30 commercial and university-developed software suites are available.
Among many time-domain methods, the discontinuous Galerkin time-domain (DGTD) method has become popular recently, since it integrates advantages of both the finite-volume time-domain (FVTD) method and the finite-element time-domain (FETD) method. Like FVTD, the numerical flux is used to exchange information between neighboring elements, so all operations of DGTD are local and easily parallelizable. Like FETD, DGTD employs an unstructured mesh and is capable of high-order accuracy if a high-order hierarchical basis function is adopted. With these merits, the DGTD method is widely implemented for the transient analysis of multiscale problems involving a large number of unknowns.[15][16]
The multiresolution time-domain (MRTD) method is an adaptive alternative to the finite-difference time-domain method (FDTD) based on wavelet analysis.
Thefinite element method(FEM) is used to find approximate solution ofpartial differential equations(PDE) andintegral equations. The solution approach is based either on eliminating the time derivatives completely (steady state problems), or rendering the PDE into an equivalentordinary differential equation, which is then solved using standard techniques such asfinite differences, etc.
In solvingpartial differential equations, the primary challenge is to create an equation which approximates the equation to be studied, but which isnumerically stable, meaning that errors in the input data and intermediate calculations do not accumulate and destroy the meaning of the resulting output. There are many ways of doing this, with various advantages and disadvantages. The finite element method is a good choice for solving partial differential equations over complex domains or when the desired precision varies over the entire domain.
The finite integration technique (FIT) is a spatial discretization scheme to numerically solve electromagnetic field problems in the time and frequency domains. It preserves basic topological properties of the continuous equations such as conservation of charge and energy. FIT was proposed in 1977 by Thomas Weiland and has been enhanced continually over the years.[17] This method covers the full range of electromagnetics (from static up to high frequency) and optic applications and is the basis for commercial simulation tools: CST Studio Suite, developed by Computer Simulation Technology (CST AG), and the Electromagnetic Simulation solutions developed by Nimbic.
The basic idea of this approach is to apply the Maxwell equations in integral form to a set of staggered grids. This method stands out due to high flexibility in geometric modeling and boundary handling as well as incorporation of arbitrary material distributions and material properties such asanisotropy, non-linearity and dispersion. Furthermore, the use of a consistent dual orthogonal grid (e.g.Cartesian grid) in conjunction with an explicit time integration scheme (e.g. leap-frog-scheme) leads to compute and memory-efficient algorithms, which are especially adapted for transient field analysis inradio frequency(RF) applications.
This class of marching-in-time computational techniques for Maxwell's equations uses either discrete Fourier ordiscrete Chebyshev transformsto calculate the spatial derivatives of the electric and magnetic field vector components that are arranged in either a 2-D grid or 3-D lattice of unit cells. PSTD causes negligible numerical phase velocity anisotropy errors relative to FDTD, and therefore allows problems of much greater electrical size to be modeled.[18]
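The distinguishing step of this class of methods is computing spatial derivatives spectrally. A minimal sketch of that single operation (not a full PSTD solver), assuming a periodic field sampled on a uniform grid:

```python
import numpy as np

def spectral_derivative(f, dx):
    """d/dx of a periodic, uniformly sampled field via the FFT:
    differentiate by multiplying each Fourier component by i*k."""
    k = 2 * np.pi * np.fft.fftfreq(len(f), d=dx)
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
df = spectral_derivative(np.sin(3 * x), dx=x[1] - x[0])   # approximately 3*cos(3*x)
```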
PSSD solves Maxwell's equations by propagating them forward in a chosen spatial direction. The fields are therefore held as a function of time, and (possibly) any transverse spatial dimensions. The method is pseudo-spectral because temporal derivatives are calculated in the frequency domain with the aid of FFTs. Because the fields are held as functions of time, this enables arbitrary dispersion in the propagation medium to be rapidly and accurately modelled with minimal effort.[19]However, the choice to propagate forward in space (rather than in time) brings with it some subtleties, particularly if reflections are important.[20]
Transmission line matrix (TLM) can be formulated in several ways: as a direct set of lumped elements solvable directly by a circuit solver (à la SPICE, HSPICE, et al.), as a custom network of elements, or via a scattering matrix approach. TLM is a very flexible analysis strategy akin to FDTD in capabilities, though more codes tend to be available with FDTD engines.
The locally one-dimensional FDTD (LOD-FDTD) method is an implicit method. In the two-dimensional case, Maxwell's equations are computed in two steps, whereas in the three-dimensional case Maxwell's equations are divided into three spatial coordinate directions. Stability and dispersion analysis of the three-dimensional LOD-FDTD method have been discussed in detail.[21][22]
Eigenmode expansion(EME) is a rigorous bi-directional technique to simulate electromagnetic propagation which relies on the decomposition of the electromagnetic fields into a basis set of local eigenmodes. The eigenmodes are found by solving Maxwell's equations in each local cross-section. Eigenmode expansion can solve Maxwell's equations in 2D and 3D and can provide a fully vectorial solution provided that the mode solvers are vectorial. It offers very strong benefits compared with the FDTD method for the modelling of optical waveguides, and it is a popular tool for the modelling offiber opticsandsilicon photonicsdevices.
Physical optics(PO) is the name of ahigh frequency approximation(short-wavelengthapproximation) commonly used in optics,electrical engineeringandapplied physics. It is an intermediate method between geometric optics, which ignoreswaveeffects, and full waveelectromagnetism, which is a precisetheory. The word "physical" means that it is more physical thangeometrical opticsand not that it is an exact physical theory.
The approximation consists of using ray optics to estimate the field on a surface and thenintegratingthat field over the surface to calculate the transmitted or scattered field. This resembles theBorn approximation, in that the details of the problem are treated as aperturbation.
Theuniform theory of diffraction(UTD) is ahigh frequencymethod for solvingelectromagneticscatteringproblems from electrically small discontinuities or discontinuities in more than one dimension at the same point.
Theuniform theory of diffractionapproximatesnear fieldelectromagnetic fields as quasi optical and uses ray diffraction to determine diffraction coefficients for each diffracting object-source combination. These coefficients are then used to calculate the field strength andphasefor each direction away from the diffracting point. These fields are then added to the incident fields and reflected fields to obtain a total solution.
Validation is one of the key issues facing electromagnetic simulation users. The user must understand and master the validity domain of the simulation. The question to answer is: "How far from reality are the results?"
Answering this question involves three steps: comparison between simulation results and analytical formulation, cross-comparison between codes, and comparison of simulation results with measurement.
For example, one can assess the value of the radar cross section of a plate with the analytical formula: RCSPlate=4πA2λ2,{\displaystyle {\text{RCS}}_{\text{Plate}}={\frac {4\pi A^{2}}{\lambda ^{2}}},}where A is the surface area of the plate and λ{\displaystyle \lambda } is the wavelength. The RCS of a plate computed at 35 GHz can then be used as a reference example.
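The formula can be evaluated directly; the plate size below (a 10 cm × 10 cm square at 35 GHz) is an arbitrary illustration rather than the reference case used in the cited comparisons:

```python
import numpy as np

def plate_rcs(area_m2, freq_hz, c=299_792_458.0):
    """Peak (normal-incidence) RCS of a flat plate: 4*pi*A^2 / lambda^2."""
    wavelength = c / freq_hz
    return 4 * np.pi * area_m2**2 / wavelength**2

rcs = plate_rcs(area_m2=0.1 * 0.1, freq_hz=35e9)      # about 17.1 m^2
rcs_dbsm = 10 * np.log10(rcs)                          # about 12.3 dBsm
```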
One example is the cross comparison of results from method of moments and asymptotic methods in their validity domains.[23]
The final validation step is made by comparison between measurements and simulation. For example, the RCS calculation[24]and the measurement[25]of a complex metallic object at 35 GHz. The computation implements GO, PO and PTD for the edges.
Validation processes can clearly reveal that some differences can be explained by the differences between the experimental setup and its reproduction in the simulation environment.[26]
There are now many efficient codes for solving electromagnetic scattering problems.
Analytical solutions, such as the Mie solution for scattering by spheres or cylinders, can be used to validate more involved techniques.
|
https://en.wikipedia.org/wiki/Computational_electromagnetics
|
Computer-aided design(CAD) is the use ofcomputers(orworkstations) to aid in the creation, modification, analysis, or optimization of adesign.[1]: 3This software is used to increase the productivity of the designer, improve the quality of design, improve communications through documentation, and to create a database for manufacturing.[1]: 4Designs made through CAD software help protect products and inventions when used inpatentapplications. CAD output is often in the form of electronic files for print,machining, or other manufacturing operations. The termscomputer-aided drafting(CAD) andcomputer-aided design and drafting(CADD) are also used.[2]
Its use in designing electronic systems is known aselectronic design automation(EDA). Inmechanical designit is known asmechanical design automation(MDA), which includes the process of creating atechnical drawingwith the use ofcomputer software.[3]
CAD software for mechanical design uses eithervector-based graphicsto depict the objects of traditional drafting, or may also produceraster graphicsshowing the overall appearance of designed objects. However, it involves more than just shapes. As in the manualdraftingoftechnicalandengineering drawings, the output of CAD must convey information, such asmaterials,processes,dimensions, andtolerances, according to application-specific conventions.
CAD may be used to design curves and figures intwo-dimensional(2D) space; or curves,surfaces, and solids inthree-dimensional(3D) space.[4][5]: 71, 106
CAD is an importantindustrial artextensively used in many applications, includingautomotive,shipbuilding, andaerospaceindustries, industrial andarchitectural design(building information modeling),prosthetics, and many more. CAD is also widely used to producecomputer animationforspecial effectsin movies,advertisingand technical manuals, often called DCCdigital content creation. The modern ubiquity and power of computers means that even perfume bottles and shampoo dispensers are designed using techniques unheard of by engineers of the 1960s. Because of its enormous economic importance, CAD has been a major driving force for research incomputational geometry,computer graphics(both hardware and software), anddiscrete differential geometry.[6]
The design ofgeometric modelsfor object shapes, in particular, is occasionally calledcomputer-aided geometric design(CAGD).[7]
Computer-aided design is one of the many tools used by engineers and designers and is used in many ways depending on the profession of the user and the type of software in question.
CAD is one part of the whole digital product development (DPD) activity within theproduct lifecycle management(PLM) processes, and as such is used together with other tools, which are either integrated modules or stand-alone products, such as:
CAD is also used for the accurate creation of photo simulations that are often required in the preparation of environmental impact reports, in which computer-aided designs of intended buildings are superimposed into photographs of existing environments to represent what that locale will be like, where the proposed facilities are allowed to be built. Potential blockage of view corridors and shadow studies are also frequently analyzed through the use of CAD.[8]
There are several different types of CAD,[9] each requiring the operator to think differently about how to use them and to design virtual components in a different manner. Virtually all CAD tools rely on constraint concepts that are used to define geometric or non-geometric elements of a model.
There are many producers of the lower-end 2D sketching systems, including a number of free and open-source programs. These provide an approach to the drawing process where scale and placement on the drawing sheet can easily be adjusted in the final draft as required, unlike in hand drafting.
3Dwireframeis an extension of 2D drafting into athree-dimensional space. Each line has to be manually inserted into the drawing. The final product has no mass properties associated with it and cannot have features directly added to it, such as holes. The operator approaches these in a similar fashion to the 2D systems, although many 3D systems allow using the wireframe model to make the final engineering drawing views.
3D "dumb" solidsare created in a way analogous to manipulations of real-world objects. Basic three-dimensional geometric forms (e.g., prisms, cylinders, spheres, or rectangles) have solid volumes added or subtracted from them as if assembling or cutting real-world objects. Two-dimensional projected views can easily be generated from the models. Basic 3D solids do not usually include tools to easily allow the motion of the components, set their limits to their motion, or identify interference between components.
There are several types of 3D solid modeling.
Top-end CAD systems offer the capability to incorporate more organic, aesthetic and ergonomic features into designs. Freeform surface modeling is often combined with solids to allow the designer to create products that fit the human form and visual requirements as well as the way they interface with the machine.
Originally, software for CAD systems was developed with computer languages such as Fortran and ALGOL, but with the advancement of object-oriented programming methods this has radically changed. Typical modern parametric feature-based modeler and freeform surface systems are built around a number of key C modules with their own APIs. A CAD system can be seen as built up from the interaction of a graphical user interface (GUI) with NURBS geometry or boundary representation (B-rep) data via a geometric modeling kernel. A geometry constraint engine may also be employed to manage the associative relationships between geometry, such as wireframe geometry in a sketch or components in an assembly.
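A geometric modeling kernel ultimately has to evaluate the curves and surfaces it stores. As a small, self-contained illustration (not the API of any particular kernel), the sketch below evaluates a point on a non-rational B-spline curve with the Cox–de Boor recursion; the degree, knot vector, and control points are arbitrary example values:

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def bspline_point(u, control_points, knots, degree):
    """Point on a (non-rational) B-spline curve: sum_i N_{i,p}(u) * P_i."""
    return sum(bspline_basis(i, degree, u, knots) * np.asarray(P)
               for i, P in enumerate(control_points))

# Quadratic curve with four control points and a clamped knot vector.
pts = [(0, 0), (1, 2), (3, 2), (4, 0)]
knots = [0, 0, 0, 0.5, 1, 1, 1]
mid = bspline_point(0.5, pts, knots, degree=2)    # evaluates to (2.0, 2.0)
```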
Unexpected capabilities of these associative relationships have led to a new form of prototyping called digital prototyping, in contrast to physical prototypes, which entail manufacturing time in the design process. That said, CAD models can be generated by a computer after the physical prototype has been scanned using an industrial CT scanning machine. Depending on the nature of the business, digital or physical prototypes can be initially chosen according to specific needs.
Today, CAD systems exist for all the major platforms (Windows,Linux,UNIXandMac OS X); some packages support multiple platforms.[11]
Currently, no special hardware is required for most CAD software. However, some CAD systems can do graphically and computationally intensive tasks, so a moderngraphics card, high speed (and possibly multiple)CPUsand large amounts ofRAMmay be recommended.
The human-machine interface is generally via acomputer mousebut can also be via a pen and digitizinggraphics tablet. Manipulation of the view of the model on the screen is also sometimes done with the use of aSpacemouse/SpaceBall. Some systems also support stereoscopic glasses forviewing the 3D model. Technologies that in the past were limited to larger installations or specialist applications have become available to a wide group of users. These include theCAVEorHMDsand interactivedeviceslike motion-sensingtechnology
Starting with the IBM Drafting System in the mid-1960s, computer-aided design systems began to provide more capabilities than just an ability to reproduce manual drafting with electronic drafting, and the cost-benefit for companies to switch to CAD became apparent. The software automated many tasks that are taken for granted from computer systems today, such as automated generation of bills of materials, auto layout in integrated circuits, interference checking, and many others. Eventually, CAD provided the designer with the ability to perform engineering calculations.[5] During this transition, calculations were still performed either by hand or by those individuals who could run computer programs. CAD was a revolutionary change in the engineering industry, where the previously separate roles of draftsman, designer, and engineer began to merge. CAD is an example of the pervasive effect computers were beginning to have on industry.
Current computer-aided design software packages range from 2Dvector-based drafting systems to 3Dsolidandsurface modelers. Modern CAD packages can also frequently allow rotations in three dimensions, allowing viewing of a designed object from any desired angle, even from the inside looking out.[5]Some CAD software is capable of dynamic mathematical modeling.[5]
CAD technology is used in the design of tools and machinery and in the drafting and design of all types of buildings, from small residential types (houses) to the largest commercial and industrial structures (hospitals and factories).[12]
CAD is mainly used for detailed design of 3D models or 2D drawings of physical components, but it is also used throughout the engineering process from conceptual design and layout of products, through strength and dynamic analysis of assemblies to definition of manufacturing methods of components. It can also be used to design objects such as jewelry, furniture, appliances, etc. Furthermore, many CAD applications now offer advanced rendering and animation capabilities so engineers can better visualize their product designs.4D BIMis a type of virtual construction engineering simulation incorporating time or schedule-related information for project management.
CAD has become an especially important technology within the scope of computer-aided technologies, with benefits such as lower product development costs and a greatly shortened design cycle. CAD enables designers to lay out and develop work on screen, print it out and save it for future editing, saving time on their drawings.
In the 2000s, some CAD software vendors shipped their distributions with dedicated license manager software that controlled how often or by how many users the CAD system could be utilized.[5]: 166It could run either on a local machine (by loading from a local storage device) or on a local network file server and was usually tied to a specific IP address in the latter case.[5]: 166
CAD software enables engineers and architects to design, inspect and manage engineering projects within an integratedgraphical user interface(GUI) on apersonal computersystem. Most applications supportsolid modelingwithboundary representation(B-Rep) andNURBSgeometry, and enable the same to be published in a variety of formats.[citation needed]
Based on market statistics, commercial software from Autodesk, Dassault Systèmes, Siemens PLM Software, and PTC dominates the CAD industry.[13][14] The following is a list of major CAD applications, grouped by usage statistics.[15]
|
https://en.wikipedia.org/wiki/Computer-aided_design
|
Engineering optimization[1][2][3]is the subject which usesoptimizationtechniques to achieve design goals inengineering.[4][5]It is sometimes referred to asdesign optimization.
|
https://en.wikipedia.org/wiki/Engineering_optimization
|
Finite element method(FEM) is a popular method for numerically solvingdifferential equationsarising in engineering andmathematical modeling. Typical problem areas of interest include the traditional fields ofstructural analysis,heat transfer,fluid flow, mass transport, andelectromagnetic potential. Computers are usually used to perform the calculations required. With high-speedsupercomputers, better solutions can be achieved and are often required to solve the largest and most complex problems.
FEM is a generalnumerical methodfor solvingpartial differential equationsin two- or three-space variables (i.e., someboundary value problems). There are also studies about using FEM to solve high-dimensional problems.[1]To solve a problem, FEM subdivides a large system into smaller, simpler parts calledfinite elements. This is achieved by a particular spacediscretizationin the space dimensions, which is implemented by the construction of ameshof the object: the numerical domain for the solution that has a finite number of points. FEM formulation of a boundary value problem finally results in a system ofalgebraic equations. The method approximates the unknown function over the domain.[2]The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. FEM then approximates a solution by minimizing an associated error function via thecalculus of variations.
Studying oranalyzinga phenomenon with FEM is often referred to as finite element analysis (FEA).
The subdivision of a whole domain into simpler parts has several advantages.[3]
A typical approach using the method involves two main steps: (1) dividing the domain of the problem into a collection of subdomains, with each subdomain represented by a set of element equations for the original problem, and (2) systematically recombining all sets of element equations into a global system of equations for the final calculation.
The global system of equations uses known solution techniques and can be calculated from theinitial valuesof the original problem to obtain a numerical answer.
In the first step above, the element equations are simple equations that locally approximate the original complex equations to be studied, where the original equations are oftenpartial differential equations(PDEs). To explain the approximation of this process, FEM is commonly introduced as a special case of theGalerkin method. The process, in mathematical language, is to construct an integral of theinner productof the residual and theweight functions; then, set the integral to zero. In simple terms, it is a procedure that minimizes the approximation error by fitting trial functions into the PDE. The residual is the error caused by the trial functions, and the weight functions arepolynomialapproximation functions that project the residual. The process eliminates all the spatial derivatives from the PDE, thus approximating the PDE locally using the following:
These equation sets are element equations. They arelinearif the underlying PDE is linear and vice versa. Algebraic equation sets that arise in the steady-state problems are solved usingnumerical linear algebraicmethods. In contrast,ordinary differential equationsets that occur in the transient problems are solved by numerical integrations using standard techniques such asEuler's methodor theRunge–Kutta method.
In the second step above, a global system of equations is generated from the element equations by transforming coordinates from the subdomains' local nodes to the domain's global nodes. This spatial transformation includes appropriateorientation adjustmentsas applied in relation to the referencecoordinate system. The process is often carried out using FEM software withcoordinatedata generated from the subdomains.
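In code, this local-to-global transformation amounts to a scatter-add of each element matrix into the global matrix using the element's global node numbers. A minimal one-dimensional sketch follows, with a hypothetical element_matrix callback standing in for whatever local equations the problem produces:

```python
import numpy as np

def assemble_global(n_nodes, elements, element_matrix):
    """Scatter-add per-element matrices into the global system matrix.
    `elements` lists the global node numbers of each element; `element_matrix(e)`
    returns that element's local matrix in its local node ordering."""
    K = np.zeros((n_nodes, n_nodes))
    for e in elements:
        Ke = element_matrix(e)
        for a, A in enumerate(e):            # local index -> global index
            for b, B in enumerate(e):
                K[A, B] += Ke[a, b]
    return K

# Four linear two-node elements on five nodes; each local matrix is the 1-D
# stiffness [[1, -1], [-1, 1]] / h for element length h (here h = 1).
elements = [(0, 1), (1, 2), (2, 3), (3, 4)]
K = assemble_global(5, elements, lambda e: np.array([[1.0, -1.0], [-1.0, 1.0]]))
```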
The practical application of FEM is known as finite element analysis (FEA). FEA, as applied inengineering, is a computational tool for performingengineering analysis. It includes the use ofmesh generationtechniques for dividing acomplex probleminto smaller elements, as well as the use of software coded with a FEM algorithm. When applying FEA, the complex problem is usually a physical system with the underlyingphysics, such as theEuler–Bernoulli beam equation, theheat equation, or theNavier–Stokes equations, expressed in either PDEs orintegral equations, while the divided, smaller elements of the complex problem represent different areas in the physical system.
FEA may be used for analyzing problems over complicated domains (e.g., cars and oil pipelines) when the domain changes (e.g., during a solid-state reaction with a moving boundary), when the desired precision varies over the entire domain, or when the solution lacks smoothness. FEA simulations provide a valuable resource, as they remove multiple instances of creating and testing complex prototypes for various high-fidelity situations.[citation needed]For example, in a frontal crash simulation, it is possible to increase prediction accuracy in important areas, like the front of the car, and reduce it in the rear of the car, thus reducing the cost of the simulation. Another example would be innumerical weather prediction, where it is more important to have accurate predictions over developing highly nonlinear phenomena, such astropical cyclonesin the atmosphere oreddiesin the ocean, rather than relatively calm areas.
A clear, detailed, and practical presentation of this approach can be found in the textbookThe Finite Element Method for Engineers.[4]
While it is difficult to quote the date of the invention of FEM, the method originated from the need to solve complexelasticityandstructural analysisproblems incivilandaeronautical engineering.[5]Its development can be traced back to work byAlexander Hrennikoff[6]andRichard Courant[7]in the early 1940s. Another pioneer wasIoannis Argyris. In the USSR, the introduction of the practical application of FEM is usually connected withLeonard Oganesyan.[8]It was also independently rediscovered in China byFeng Kangin the late 1950s and early 1960s, based on the computations of dam constructions, where it was called the "finite difference method" based on variation principles. Although the approaches used by these pioneers are different, they share one essential characteristic: themeshdiscretizationof a continuous domain into a set of discrete sub-domains, usually called elements.
Hrennikoff's work discretizes the domain by using alatticeanalogy, while Courant's approach divides the domain into finite triangular sub-regions to solvesecond-orderelliptic partial differential equationsthat arise from the problem of thetorsionof acylinder. Courant's contribution was evolutionary, drawing on a large body of earlier results for PDEs developed byLord Rayleigh,Walther Ritz, andBoris Galerkin.
The application of FEM gained momentum in the 1960s and 1970s due to the developments ofJ. H. Argyrisand his co-workers at theUniversity of Stuttgart;R. W. Cloughand his co-workers atUniversity of California Berkeley;O. C. Zienkiewiczand his co-workersErnest Hinton,Bruce Irons,[9]and others atSwansea University;Philippe G. Ciarletat the University ofParis 6; andRichard Gallagherand his co-workers atCornell University. During this period, additional impetus was provided by the available open-source FEM programs. NASA sponsored the original version ofNASTRAN. University of California Berkeley made the finite element programs SAP IV[10]and, later,OpenSeeswidely available. In Norway, the ship classification society Det Norske Veritas (nowDNV GL) developedSesamin 1969 for use in the analysis of ships.[11]A rigorous mathematical basis for FEM was provided in 1973 with a publication byGilbert StrangandGeorge Fix.[12]The method has since been generalized for thenumerical modelingof physical systems in a wide variety ofengineeringdisciplines, such aselectromagnetism,heat transfer, andfluid dynamics.[13][14]
A finite element method is characterized by avariational formulation, a discretization strategy, one or more solution algorithms, and post-processing procedures.
Examples of the variational formulation are theGalerkin method, the discontinuous Galerkin method, mixed methods, etc.
A discretization strategy is understood to mean a clearly defined set of procedures that cover (a) the creation of finite element meshes, (b) the definition of basis functions on reference elements (also called shape functions), and (c) the mapping of reference elements onto the elements of the mesh. Examples of discretization strategies are the h-version, p-version, hp-version, x-FEM, isogeometric analysis, etc. Each discretization strategy has certain advantages and disadvantages. A reasonable criterion in selecting a discretization strategy is to realize nearly optimal performance for the broadest set of mathematical models in a particular model class.
Various numerical solution algorithms can be classified into two broad categories: direct and iterative solvers. These algorithms are designed to exploit the sparsity of the matrices, which depends on the choice of variational formulation and discretization strategy.
Post-processing procedures are designed to extract the data of interest from a finite element solution. To meet the requirements of solution verification, postprocessors need to provide fora posteriorierror estimation in terms of the quantities of interest. When the errors of approximation are larger than what is considered acceptable, then the discretization has to be changed either by an automated adaptive process or by the action of the analyst. Some very efficient postprocessors provide for the realization ofsuperconvergence.
The following two problems demonstrate the finite element method.
P1 is a one-dimensional problemP1:{u″(x)=f(x)in(0,1),u(0)=u(1)=0,{\displaystyle {\text{ P1 }}:{\begin{cases}u''(x)=f(x){\text{ in }}(0,1),\\u(0)=u(1)=0,\end{cases}}}wheref{\displaystyle f}is given,u{\displaystyle u}is an unknown function ofx{\displaystyle x}, andu″{\displaystyle u''}is the second derivative ofu{\displaystyle u}with respect tox{\displaystyle x}.
P2 is a two-dimensional problem (Dirichlet problem)P2:{uxx(x,y)+uyy(x,y)=f(x,y)inΩ,u=0on∂Ω,{\displaystyle {\text{P2 }}:{\begin{cases}u_{xx}(x,y)+u_{yy}(x,y)=f(x,y)&{\text{ in }}\Omega ,\\u=0&{\text{ on }}\partial \Omega ,\end{cases}}}
whereΩ{\displaystyle \Omega }is a connected open region in the(x,y){\displaystyle (x,y)}plane whose boundary∂Ω{\displaystyle \partial \Omega }is nice (e.g., asmooth manifoldor apolygon), anduxx{\displaystyle u_{xx}}anduyy{\displaystyle u_{yy}}denote the second derivatives with respect tox{\displaystyle x}andy{\displaystyle y}, respectively.
The problem P1 can be solved directly by computing antiderivatives. However, this method of solving the boundary value problem (BVP) works only when there is one spatial dimension. It does not generalize to higher-dimensional problems or problems like u+u″=f{\displaystyle u+u''=f}. For this reason, we will develop the finite element method for P1 and outline its generalization to P2.
Our explanation will proceed in two steps, which mirror two essential steps one must take to solve a boundary value problem (BVP) using the FEM.
After this second step, we have concrete formulae for a large but finite-dimensional linear problem whose solution will approximately solve the original BVP. This finite-dimensional problem is then implemented on acomputer.
The first step is to convert P1 and P2 into their equivalentweak formulations.
Ifu{\displaystyle u}solves P1, then for any smooth functionv{\displaystyle v}that satisfies the displacement boundary conditions, i.e.v=0{\displaystyle v=0}atx=0{\displaystyle x=0}andx=1{\displaystyle x=1}, we have
Conversely, ifu{\displaystyle u}withu(0)=u(1)=0{\displaystyle u(0)=u(1)=0}satisfies (1) for every smooth functionv(x){\displaystyle v(x)}then one may show that thisu{\displaystyle u}will solve P1. The proof is easier for twice continuously differentiableu{\displaystyle u}(mean value theorem) but may be proved in adistributionalsense as well.
We define a new operator or mapϕ(u,v){\displaystyle \phi (u,v)}by usingintegration by partson the right-hand-side of (1):
where we have used the assumption thatv(0)=v(1)=0{\displaystyle v(0)=v(1)=0}.
If we integrate by parts using a form ofGreen's identities, we see that ifu{\displaystyle u}solves P2, then we may defineϕ(u,v){\displaystyle \phi (u,v)}for anyv{\displaystyle v}by∫Ωfvds=−∫Ω∇u⋅∇vds≡−ϕ(u,v),{\displaystyle \int _{\Omega }fv\,ds=-\int _{\Omega }\nabla u\cdot \nabla v\,ds\equiv -\phi (u,v),}
where∇{\displaystyle \nabla }denotes thegradientand⋅{\displaystyle \cdot }denotes thedot productin the two-dimensional plane. Once moreϕ{\displaystyle \,\!\phi }can be turned into an inner product on a suitable spaceH01(Ω){\displaystyle H_{0}^{1}(\Omega )}of once differentiable functions ofΩ{\displaystyle \Omega }that are zero on∂Ω{\displaystyle \partial \Omega }. We have also assumed thatv∈H01(Ω){\displaystyle v\in H_{0}^{1}(\Omega )}(seeSobolev spaces). The existence and uniqueness of the solution can also be shown.
We can loosely think ofH01(0,1){\displaystyle H_{0}^{1}(0,1)}to be theabsolutely continuousfunctions of(0,1){\displaystyle (0,1)}that are0{\displaystyle 0}atx=0{\displaystyle x=0}andx=1{\displaystyle x=1}(seeSobolev spaces). Such functions are (weakly) once differentiable, and it turns out that the symmetricbilinear mapϕ{\displaystyle \!\,\phi }then defines aninner productwhich turnsH01(0,1){\displaystyle H_{0}^{1}(0,1)}into aHilbert space(a detailed proof is nontrivial). On the other hand, the left-hand-side∫01f(x)v(x)dx{\displaystyle \int _{0}^{1}f(x)v(x)dx}is also an inner product, this time on theLp spaceL2(0,1){\displaystyle L^{2}(0,1)}. An application of theRiesz representation theoremfor Hilbert spaces shows that there is a uniqueu{\displaystyle u}solving (2) and, therefore, P1. This solution is a-priori only a member ofH01(0,1){\displaystyle H_{0}^{1}(0,1)}, but usingellipticregularity, will be smooth iff{\displaystyle f}is.
P1 and P2 are ready to be discretized, which leads to a common sub-problem (3). The basic idea is to replace the infinite-dimensional linear problem:
with a finite-dimensional version:
whereV{\displaystyle V}is a finite-dimensionalsubspaceofH01{\displaystyle H_{0}^{1}}. There are many possible choices forV{\displaystyle V}(one possibility leads to thespectral method). However, we takeV{\displaystyle V}as a space of piecewise polynomial functions for the finite element method.
We take the interval(0,1){\displaystyle (0,1)}, choosen{\displaystyle n}values ofx{\displaystyle x}with0=x0<x1<⋯<xn<xn+1=1{\displaystyle 0=x_{0}<x_{1}<\cdots <x_{n}<x_{n+1}=1}and we defineV{\displaystyle V}by:V={v:[0,1]→R:vis continuous,v|[xk,xk+1]is linear fork=0,…,n, andv(0)=v(1)=0}{\displaystyle V=\{v:[0,1]\to \mathbb {R} \;:v{\text{ is continuous, }}v|_{[x_{k},x_{k+1}]}{\text{ is linear for }}k=0,\dots ,n{\text{, and }}v(0)=v(1)=0\}}
where we definex0=0{\displaystyle x_{0}=0}andxn+1=1{\displaystyle x_{n+1}=1}. Observe that functions inV{\displaystyle V}are not differentiable according to the elementary definition of calculus. Indeed, ifv∈V{\displaystyle v\in V}then the derivative is typically not defined at anyx=xk{\displaystyle x=x_{k}},k=1,…,n{\displaystyle k=1,\ldots ,n}. However, the derivative exists at every other value ofx{\displaystyle x}, and one can use this derivative forintegration by parts.
We need V{\displaystyle V} to be a set of functions of Ω{\displaystyle \Omega }. Consider, for example, a triangulation of a 15-sided polygonal region Ω{\displaystyle \Omega } in the plane and a piecewise linear function on this polygon that is linear on each triangle of the triangulation; the space V{\displaystyle V} would consist of functions that are linear on each triangle of the chosen triangulation.
One hopes that as the underlying triangular mesh becomes finer and finer, the solution of the discrete problem (3) will, in some sense, converge to the solution of the original boundary value problem P2. To measure this mesh fineness, the triangulation is indexed by a real-valued parameterh>0{\displaystyle h>0}which one takes to be very small. This parameter will be related to the largest or average triangle size in the triangulation. As we refine the triangulation, the space of piecewise linear functionsV{\displaystyle V}must also change withh{\displaystyle h}. For this reason, one often readsVh{\displaystyle V_{h}}instead ofV{\displaystyle V}in the literature. Since we do not perform such an analysis, we will not use this notation.
To complete the discretization, we must select abasisofV{\displaystyle V}. In the one-dimensional case, for each control pointxk{\displaystyle x_{k}}we will choose the piecewise linear functionvk{\displaystyle v_{k}}inV{\displaystyle V}whose value is1{\displaystyle 1}atxk{\displaystyle x_{k}}and zero at everyxj,j≠k{\displaystyle x_{j},\;j\neq k}, i.e.,vk(x)={x−xk−1xk−xk−1ifx∈[xk−1,xk],xk+1−xxk+1−xkifx∈[xk,xk+1],0otherwise,{\displaystyle v_{k}(x)={\begin{cases}{x-x_{k-1} \over x_{k}\,-x_{k-1}}&{\text{ if }}x\in [x_{k-1},x_{k}],\\{x_{k+1}\,-x \over x_{k+1}\,-x_{k}}&{\text{ if }}x\in [x_{k},x_{k+1}],\\0&{\text{ otherwise}},\end{cases}}}
fork=1,…,n{\displaystyle k=1,\dots ,n}; this basis is a shifted and scaledtent function. For the two-dimensional case, we choose again one basis functionvk{\displaystyle v_{k}}per vertexxk{\displaystyle x_{k}}of the triangulation of the planar regionΩ{\displaystyle \Omega }. The functionvk{\displaystyle v_{k}}is the unique function ofV{\displaystyle V}whose value is1{\displaystyle 1}atxk{\displaystyle x_{k}}and zero at everyxj,j≠k{\displaystyle x_{j},\;j\neq k}.
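To make the one-dimensional basis concrete, the following Python sketch evaluates such a tent function on a grid of nodes; the helper name tent and the particular grid are assumptions introduced only for illustration.

    import numpy as np

    def tent(x, nodes, k):
        # Piecewise-linear basis function v_k: equals 1 at nodes[k], 0 at all
        # other nodes, and is linear on the two adjacent intervals.
        xkm1, xk, xkp1 = nodes[k - 1], nodes[k], nodes[k + 1]
        if xkm1 <= x <= xk:
            return (x - xkm1) / (xk - xkm1)
        if xk < x <= xkp1:
            return (xkp1 - x) / (xkp1 - xk)
        return 0.0

    nodes = np.linspace(0.0, 1.0, 6)   # x_0 = 0, ..., x_{n+1} = 1 with n = 4
    print(tent(nodes[2], nodes, 2))    # 1.0 at its own node
    print(tent(0.5, nodes, 2))         # 0.5, halfway down the right-hand slope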
Depending on the author, the word "element" in the "finite element method" refers to the domain's triangles, the piecewise linear basis function, or both. So, for instance, an author interested in curved domains might replace the triangles with curved primitives and so might describe the elements as being curvilinear. On the other hand, some authors replace "piecewise linear" with "piecewise quadratic" or even "piecewise polynomial". The author might then say "higher order element" instead of "higher degree polynomial". The finite element method is not restricted to triangles (tetrahedra in 3-d or higher-order simplexes in multidimensional spaces). Still, it can be defined on quadrilateral subdomains (hexahedra, prisms, or pyramids in 3-d, and so on). Higher-order shapes (curvilinear elements) can be defined with polynomial and even non-polynomial shapes (e.g., ellipse or circle).
Examples of methods that use higher degree piecewise polynomial basis functions are thehp-FEMandspectral FEM.
More advanced implementations (adaptive finite element methods) utilize a method to assess the quality of the results (based on error estimation theory) and modify the mesh during the solution, aiming to achieve an approximate solution within some bounds from the exact solution of the continuum problem. Mesh adaptivity may utilize various techniques; the most popular include refining and coarsening elements (h-adaptivity), changing the polynomial degree of the basis functions (p-adaptivity), and combining both (hp-adaptivity).
The primary advantage of this choice of basis is that the inner products⟨vj,vk⟩=∫01vjvkdx{\displaystyle \langle v_{j},v_{k}\rangle =\int _{0}^{1}v_{j}v_{k}\,dx}andϕ(vj,vk)=∫01vj′vk′dx{\displaystyle \phi (v_{j},v_{k})=\int _{0}^{1}v_{j}'v_{k}'\,dx}will be zero for almost allj,k{\displaystyle j,k}.
(The matrix containing⟨vj,vk⟩{\displaystyle \langle v_{j},v_{k}\rangle }in the(j,k){\displaystyle (j,k)}location is known as theGramian matrix.)
In the one dimensional case, thesupportofvk{\displaystyle v_{k}}is the interval[xk−1,xk+1]{\displaystyle [x_{k-1},x_{k+1}]}. Hence, the integrands of⟨vj,vk⟩{\displaystyle \langle v_{j},v_{k}\rangle }andϕ(vj,vk){\displaystyle \phi (v_{j},v_{k})}are identically zero whenever|j−k|>1{\displaystyle |j-k|>1}.
Similarly, in the planar case, ifxj{\displaystyle x_{j}}andxk{\displaystyle x_{k}}do not share an edge of the triangulation, then the integrals∫Ωvjvkds{\displaystyle \int _{\Omega }v_{j}v_{k}\,ds}and∫Ω∇vj⋅∇vkds{\displaystyle \int _{\Omega }\nabla v_{j}\cdot \nabla v_{k}\,ds}are both zero.
If we writeu(x)=∑k=1nukvk(x){\displaystyle u(x)=\sum _{k=1}^{n}u_{k}v_{k}(x)}andf(x)=∑k=1nfkvk(x){\displaystyle f(x)=\sum _{k=1}^{n}f_{k}v_{k}(x)}then problem (3), takingv(x)=vj(x){\displaystyle v(x)=v_{j}(x)}forj=1,…,n{\displaystyle j=1,\dots ,n}, becomes
If we denote byu{\displaystyle \mathbf {u} }andf{\displaystyle \mathbf {f} }the column vectors(u1,…,un)t{\displaystyle (u_{1},\dots ,u_{n})^{t}}and(f1,…,fn)t{\displaystyle (f_{1},\dots ,f_{n})^{t}}, and if we letL=(Lij){\displaystyle L=(L_{ij})}andM=(Mij){\displaystyle M=(M_{ij})}be matrices whose entries areLij=ϕ(vi,vj){\displaystyle L_{ij}=\phi (v_{i},v_{j})}andMij=∫vivjdx{\displaystyle M_{ij}=\int v_{i}v_{j}dx}then we may rephrase (4) as
It is not necessary to assumef(x)=∑k=1nfkvk(x){\displaystyle f(x)=\sum _{k=1}^{n}f_{k}v_{k}(x)}. For a general functionf(x){\displaystyle f(x)}, problem (3) withv(x)=vj(x){\displaystyle v(x)=v_{j}(x)}forj=1,…,n{\displaystyle j=1,\dots ,n}becomes actually simpler, since no matrixM{\displaystyle M}is used,
whereb=(b1,…,bn)t{\displaystyle \mathbf {b} =(b_{1},\dots ,b_{n})^{t}}andbj=∫fvjdx{\displaystyle b_{j}=\int fv_{j}dx}forj=1,…,n{\displaystyle j=1,\dots ,n}.
As we have discussed before, most of the entries ofL{\displaystyle L}andM{\displaystyle M}are zero because the basis functionsvk{\displaystyle v_{k}}have small support. So we now have to solve a linear system in the unknownu{\displaystyle \mathbf {u} }where most of the entries of the matrixL{\displaystyle L}, which we need to invert, are zero.
Such matrices are known assparse matrices, and there are efficient solvers for such problems (much more efficient than actually inverting the matrix.) In addition,L{\displaystyle L}is symmetric and positive definite, so a technique such as theconjugate gradient methodis favored. For problems that are not too large, sparseLU decompositionsandCholesky decompositionsstill work well. For instance,MATLAB's backslash operator (which uses sparse LU, sparse Cholesky, and other factorization methods) can be sufficient for meshes with a hundred thousand vertices.
The matrixL{\displaystyle L}is usually referred to as thestiffness matrix, while the matrixM{\displaystyle M}is dubbed themass matrix.
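As an illustration of the assembly and solution steps just described, the sketch below implements a minimal one-dimensional finite element solver in Python for the closely related model problem −u″ = f on (0, 1) with u(0) = u(1) = 0 (the sign convention differs from P1 above, so that the stiffness matrix is positive definite). The function name fem_1d, the lumped quadrature for the load vector, and the manufactured right-hand side are assumptions made for this example; it is a sketch, not a reference implementation.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import spsolve

    def fem_1d(f, n):
        # Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 using n interior
        # piecewise-linear ("tent") basis functions on a uniform grid.
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1.0 - h, n)              # interior nodes only
        # Stiffness matrix K_jk = integral of v_j' v_k' dx (tridiagonal, sparse).
        K = diags([-1.0 / h, 2.0 / h, -1.0 / h], [-1, 0, 1], shape=(n, n), format="csr")
        # Load vector b_j = integral of f v_j dx, approximated here by the
        # "lumped" quadrature h * f(x_j), adequate for smooth f.
        b = h * f(x)
        return x, spsolve(K, b)

    # Manufactured solution: u(x) = sin(pi x)  =>  -u'' = pi^2 sin(pi x).
    x, u = fem_1d(lambda x: np.pi**2 * np.sin(np.pi * x), n=50)
    print(np.max(np.abs(u - np.sin(np.pi * x))))    # small discretization error

On a uniform grid the assembled stiffness matrix is tridiagonal, with 2/h on the diagonal and −1/h on the off-diagonals, which is exactly the sparsity exploited by the solvers mentioned above.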
In general, the finite element method is characterized by the following process.
A separate consideration is the smoothness of the basis functions. For second-order elliptic boundary value problems, piecewise polynomial basis functions that are merely continuous suffice (i.e., the derivatives may be discontinuous). For higher-order partial differential equations, one must use smoother basis functions. For instance, for a fourth-order problem such as uxxxx+uyyyy=f{\displaystyle u_{xxxx}+u_{yyyy}=f}, one may use piecewise quadratic basis functions that are C1{\displaystyle C^{1}}.
Another consideration is the relation of the finite-dimensional spaceV{\displaystyle V}to its infinite-dimensional counterpart in the examples aboveH01{\displaystyle H_{0}^{1}}. Aconforming element methodis one in which spaceV{\displaystyle V}is a subspace of the element space for the continuous problem. The example above is such a method. If this condition is not satisfied, we obtain anonconforming element method, an example of which is the space of piecewise linear functions over the mesh, which are continuous at each edge midpoint. Since these functions are generally discontinuous along the edges, this finite-dimensional space is not a subspace of the originalH01{\displaystyle H_{0}^{1}}.
Typically, one has an algorithm for subdividing a given mesh. If the primary method for increasing precision is to subdivide the mesh, one has anh-method (his customarily the diameter of the largest element in the mesh.) In this manner, if one shows that the error with a gridh{\displaystyle h}is bounded above byChp{\displaystyle Ch^{p}}, for someC<∞{\displaystyle C<\infty }andp>0{\displaystyle p>0}, then one has an orderpmethod. Under specific hypotheses (for instance, if the domain is convex), a piecewise polynomial of orderd{\displaystyle d}method will have an error of orderp=d+1{\displaystyle p=d+1}.
If instead of makinghsmaller, one increases the degree of the polynomials used in the basis function, one has ap-method. If one combines these two refinement types, one obtains anhp-method (hp-FEM). In the hp-FEM, the polynomial degrees can vary from element to element. High-order methods with large uniformpare called spectral finite element methods (SFEM). These are not to be confused withspectral methods.
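The error bound Chp mentioned above suggests a simple empirical check: solving the same problem on two meshes lets one estimate the observed order of convergence from the ratio of the errors. The sketch below uses hypothetical error values chosen only for illustration.

    import math

    def observed_order(e_coarse, e_fine, h_coarse, h_fine):
        # Estimate p in  error ~ C * h**p  from two successive refinements.
        return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

    # Hypothetical errors measured with piecewise-linear elements (d = 1)
    # on meshes of size h and h/2 for a smooth problem on a convex domain:
    print(observed_order(4.0e-3, 1.0e-3, 0.1, 0.05))   # ~2.0, i.e. p = d + 1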
For vector partial differential equations, the basis functions may take values inRn{\displaystyle \mathbb {R} ^{n}}.
The Applied Element Method or AEM combines features of both FEM andDiscrete element methodor (DEM).
Yang and Lui introduced the Augmented Finite Element Method, whose goal was to model weak and strong discontinuities without needing the extra degrees of freedom (DoFs) required by partition-of-unity-based methods (PuM).
The Cut Finite Element Approach was developed in 2014.[15]The approach is "to make the discretization as independent as possible of the geometric description and minimize the complexity of mesh generation, while retaining the accuracy and robustness of a standard finite element method."[16]
The generalized finite element method (GFEM) uses local spaces consisting of functions, not necessarily polynomials, that reflect the available information on the unknown solution and thus ensure good local approximation. Then apartition of unityis used to “bond” these spaces together to form the approximating subspace. The effectiveness of GFEM has been shown when applied to problems with domains having complicated boundaries, problems with micro-scales, and problems with boundary layers.[17]
The mixed finite element method is a type of finite element method in which extra independent variables are introduced as nodal variables during the discretization of a partial differential equation problem.
Thehp-FEMcombines adaptively elements with variable sizehand polynomial degreepto achieve exceptionally fast, exponential convergence rates.[18]
Thehpk-FEMcombines adaptively elements with variable sizeh, polynomial degree of the local approximationsp, and global differentiability of the local approximations (k-1) to achieve the best convergence rates.
Theextended finite element method(XFEM) is a numerical technique based on the generalized finite element method (GFEM) and the partition of unity method (PUM). It extends the classical finite element method by enriching the solution space for solutions to differential equations with discontinuous functions. Extended finite element methods enrich the approximation space to naturally reproduce the challenging feature associated with the problem of interest: the discontinuity, singularity, boundary layer, etc. It was shown that for some problems, such an embedding of the problem's feature into the approximation space can significantly improve convergence rates and accuracy. Moreover, treating problems with discontinuities with XFEMs suppresses the need to mesh and re-mesh the discontinuity surfaces, thus alleviating the computational costs and projection errors associated with conventional finite element methods at the cost of restricting the discontinuities to mesh edges.
Several research codes implement this technique to various degrees:
XFEM has also been implemented in codes like Altair Radioss, ASTER, Morfeo, and Abaqus. It is increasingly being adopted by other commercial finite element software, with a few plugins and actual core implementations available (ANSYS, SAMCEF, OOFELIE, etc.).
The introduction of the scaled boundary finite element method (SBFEM) came from Song and Wolf (1997).[19]The SBFEM has been one of the most profitable contributions in the area of numerical analysis of fracture mechanics problems. It is a semi-analytical fundamental-solutionless method combining the advantages of finite element formulations and procedures and boundary element discretization. However, unlike the boundary element method, no fundamental differential solution is required.
The S-FEM, Smoothed Finite Element Methods, is a particular class of numerical simulation algorithms for the simulation of physical phenomena. It was developed by combining mesh-free methods with the finite element method.
Spectral element methods combine the geometric flexibility of finite elements with the high accuracy of spectral methods. Spectral methods approximate the solution of weak-form partial differential equations using high-order Lagrangian interpolants and are used only with certain quadrature rules.[20]
Loubignac iterationis an iterative method in finite element methods.
The crystal plasticity finite element method (CPFEM) is an advanced numerical tool developed by Franz Roters. Metals can be regarded as crystal aggregates, which behave anisotropically under deformation, exhibiting, for example, abnormal stress and strain localization. CPFEM, based on the slip (shear strain rate), can calculate dislocation, crystal orientation, and other texture information to account for crystal anisotropy during the computation. It has been applied in the numerical study of material deformation, surface roughness, fractures, etc.
The virtual element method (VEM), introduced by Beirão da Veiga et al. (2013)[21]as an extension ofmimeticfinite difference(MFD) methods, is a generalization of the standard finite element method for arbitrary element geometries. This allows admission of general polygons (orpolyhedrain 3D) that are highly irregular and non-convex in shape. The namevirtualderives from the fact that knowledge of the local shape function basis is not required and is, in fact, never explicitly calculated.
Some types of finite element methods (conforming, nonconforming, mixed finite element methods) are particular cases of thegradient discretization method(GDM). Hence the convergence properties of the GDM, which are established for a series of problems (linear and nonlinear elliptic problems, linear, nonlinear, and degenerate parabolic problems), hold as well for these particular FEMs.
Thefinite difference method(FDM) is an alternative way of approximating solutions of PDEs. The differences between FEM and FDM are:
Generally, FEM is the method of choice in all types of analysis in structural mechanics (i.e., solving for deformation and stresses in solid bodies or dynamics of structures). In contrast, computational fluid dynamics (CFD) tends to use FDM or other methods like the finite volume method (FVM). CFD problems usually require discretization of the problem into a large number of cells/gridpoints (millions and more). Therefore, the cost of the solution favors simpler, lower-order approximation within each cell. This is especially true for 'external flow' problems, like airflow around a car or airplane, or weather simulation.
Another method used for approximating solutions to a partial differential equation is the Fast Fourier Transform (FFT), where the solution is approximated by a Fourier series computed using the FFT. For approximating the mechanical response of materials under stress, FFT is often much faster,[24] but FEM may be more accurate.[25] One example of the respective advantages of the two methods is in simulation of rolling a sheet of aluminum (an FCC metal) and drawing a wire of tungsten (a BCC metal). This simulation did not have a sophisticated shape update algorithm for the FFT method. In both cases, the FFT method was more than 10 times as fast as FEM, but in the wire drawing simulation, where there were large deformations in grains, the FEM method was much more accurate. In the sheet rolling simulation, the results of the two methods were similar.[25] FFT has a larger speed advantage in cases where the boundary conditions are given in terms of the material's strain, and loses some of its efficiency in cases where the stress is used to apply the boundary conditions, as more iterations of the method are needed.[26]
The FE and FFT methods can also be combined in a voxel-based method to simulate deformation in materials, where the FE method is used for the macroscale stress and deformation, and the FFT method is used on the microscale to deal with the effects of the microscale on the mechanical response.[27] Unlike FEM, the FFT methods' similarity to image processing methods means that an actual image of the microstructure from a microscope can be input to the solver to get a more accurate stress response. Using a real image with FFT avoids meshing the microstructure, which would be required if using FEM simulation of the microstructure and might be difficult. Because Fourier approximations are inherently periodic, FFT can only be used in cases of periodic microstructure, but this is common in real materials.[27] FFT can also be combined with FEM methods by using Fourier components as the variational basis for approximating the fields inside an element, which can take advantage of the speed of FFT-based solvers.[28]
Various specializations under the umbrella of the mechanical engineering discipline (such as aeronautical, biomechanical, and automotive industries) commonly use integrated FEM in the design and development of their products. Several modern FEM packages include specific components such as thermal, electromagnetic, fluid, and structural working environments. In a structural simulation, FEM helps tremendously in producing stiffness and strength visualizations and minimizing weight, materials, and costs.[29]
FEM allows detailed visualization of where structures bend or twist, indicating the distribution of stresses and displacements. FEM software provides a wide range of simulation options for controlling the complexity of modeling and system analysis. Similarly, the desired level of accuracy required and associated computational time requirements can be managed simultaneously to address most engineering applications. FEM allows entire designs to be constructed, refined, and optimized before the design is manufactured. The mesh is an integral part of the model and must be controlled carefully to give the best results. Generally, the higher the number of elements in a mesh, the more accurate the solution of the discretized problem. However, there is a value at which the results converge, and further mesh refinement does not increase accuracy.[30]
This powerful design tool has significantly improved both the standard of engineering designs and the design process methodology in many industrial applications.[32]The introduction of FEM has substantially decreased the time to take products from concept to the production line.[32]Testing and development have been accelerated primarily through improved initial prototype designs using FEM.[33]In summary, benefits of FEM include increased accuracy, enhanced design and better insight into critical design parameters, virtual prototyping, fewer hardware prototypes, a faster and less expensive design cycle, increased productivity, and increased revenue.[32]
In the 1990s FEM was proposed for use in stochastic modeling for numerically solving probability models[34]and later for reliability assessment.[35]
FEM is widely applied for approximating differential equations that describe physical systems. The method is very popular in the computational fluid dynamics community, and there are many applications for solving the Navier–Stokes equations with FEM.[36][37][38] Recently, the application of FEM has also been increasing in computational plasma research. Promising numerical results using FEM for magnetohydrodynamics, the Vlasov equation, and the Schrödinger equation have been reported.[39][40]
|
https://en.wikipedia.org/wiki/Finite_element_method
|
In statistics, originally in geostatistics, kriging or Kriging (/ˈkriːɡɪŋ/), also known as Gaussian process regression, is a method of interpolation based on a Gaussian process governed by prior covariances. Under suitable assumptions of the prior, kriging gives the best linear unbiased prediction (BLUP) at unsampled locations.[1] Interpolating methods based on other criteria such as smoothness (e.g., smoothing spline) may not yield the BLUP. The method is widely used in the domain of spatial analysis and computer experiments. The technique is also known as Wiener–Kolmogorov prediction, after Norbert Wiener and Andrey Kolmogorov.
The theoretical basis for the method was developed by the French mathematicianGeorges Matheronin 1960, based on the master's thesis ofDanie G. Krige, the pioneering plotter of distance-weighted average gold grades at theWitwatersrandreef complex inSouth Africa. Krige sought to estimate the most likely distribution of gold based on samples from a few boreholes. The English verb isto krige, and the most common noun iskriging. The word is sometimes capitalized asKrigingin the literature.
Though computationally intensive in its basic formulation, kriging can be scaled to larger problems using variousapproximation methods.
Kriging predicts the value of a function at a given point by computing a weighted average of the known values of the function in the neighborhood of the point. The method is closely related toregression analysis. Both theories derive abest linear unbiased estimatorbased on assumptions oncovariances, make use ofGauss–Markov theoremto prove independence of the estimate and error, and use very similar formulae. Even so, they are useful in different frameworks: kriging is made for estimation of a single realization of a random field, while regression models are based on multiple observations of a multivariate data set.
The kriging estimation may also be seen as asplinein areproducing kernel Hilbert space, with the reproducing kernel given by the covariance function.[2]The difference with the classical kriging approach is provided by the interpretation: while the spline is motivated by a minimum-norm interpolation based on a Hilbert-space structure, kriging is motivated by an expected squared prediction error based on a stochastic model.
Kriging withpolynomial trend surfacesis mathematically identical togeneralized least squarespolynomialcurve fitting.
Kriging can also be understood as a form ofBayesian optimization.[3]Kriging starts with apriordistributionoverfunctions. This prior takes the form of a Gaussian process:N{\displaystyle N}samples from a function will benormally distributed, where thecovariancebetween any two samples is the covariance function (orkernel) of the Gaussian process evaluated at the spatial location of two points. Asetof values is then observed, each value associated with a spatial location. Now, a new value can be predicted at any new spatial location by combining the Gaussian prior with a Gaussianlikelihood functionfor each of the observed values. The resultingposteriordistribution is also Gaussian, with a mean and covariance that can be simply computed from the observed values, their variance, and the kernel matrix derived from the prior.
In geostatistical models, sampled data are interpreted as the result of a random process. The fact that these models incorporate uncertainty in their conceptualization doesn't mean that the phenomenon – the forest, the aquifer, the mineral deposit – has resulted from a random process, but rather it allows one to build a methodological basis for the spatial inference of quantities in unobserved locations and to quantify the uncertainty associated with the estimator.
A stochastic process is, in the context of this model, simply a way to approach the set of data collected from the samples. The first step in geostatistical modeling is to create a random process that best describes the set of observed data.
A value from locationx1{\displaystyle x_{1}}(generic denomination of a set ofgeographic coordinates) is interpreted as a realizationz(x1){\displaystyle z(x_{1})}of therandom variableZ(x1){\displaystyle Z(x_{1})}. In the spaceA{\displaystyle A}, where the set of samples is dispersed, there areN{\displaystyle N}realizations of the random variablesZ(x1),Z(x2),…,Z(xN){\displaystyle Z(x_{1}),Z(x_{2}),\ldots ,Z(x_{N})}, correlated between themselves.
The set of random variables constitutes a random function, of which only one realization is known – the setz(xi){\displaystyle z(x_{i})}of observed data. With only one realization of each random variable, it's theoretically impossible to determine anystatistical parameterof the individual variables or the function. The proposed solution in the geostatistical formalism consists inassumingvarious degrees ofstationarityin the random function, in order to make the inference of some statistic values possible.
For instance, if one assumes, based on the homogeneity of samples in areaA{\displaystyle A}where the variable is distributed, the hypothesis that thefirst momentis stationary (i.e. all random variables have the same mean), then one is assuming that the mean can be estimated by the arithmetic mean of sampled values.
The hypothesis of stationarity related to thesecond momentis defined in the following way: the correlation between two random variables solely depends on the spatial distance between them and is independent of their location. Thus ifh=x2−x1{\displaystyle \mathbf {h} =x_{2}-x_{1}}andh=|h|{\displaystyle h=|\mathbf {h} |}, then:
For simplicity, we defineC(xi,xj)=C(Z(xi),Z(xj)){\displaystyle C(x_{i},x_{j})=C{\big (}Z(x_{i}),Z(x_{j}){\big )}}andγ(xi,xj)=γ(Z(xi),Z(xj)){\displaystyle \gamma (x_{i},x_{j})=\gamma {\big (}Z(x_{i}),Z(x_{j}){\big )}}.
This hypothesis allows one to infer those two measures – the variogram and the covariogram – from the sampled data; both are computed over the set N(h) of pairs of observations whose separation is approximately h.
In this set,(i,j){\displaystyle (i,\;j)}and(j,i){\displaystyle (j,\;i)}denote the same element. Generally an "approximate distance"h{\displaystyle h}is used, implemented using a certain tolerance.
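A standard way to estimate the semivariogram from data is to average squared differences over all pairs of samples whose separation falls within a tolerance of a given lag h. The Python sketch below does this for hypothetical one-dimensional sample locations; the function name, the lags, and the tolerance are assumptions made for the example.

    import numpy as np

    def empirical_semivariogram(x, z, lags, tol):
        # gamma(h) = 1 / (2 |N(h)|) * sum over pairs (i, j) with |x_i - x_j| ~ h
        # of (z_i - z_j)**2, where each unordered pair is counted once.
        gamma = []
        for h in lags:
            num, count = 0.0, 0
            for i in range(len(x)):
                for j in range(i + 1, len(x)):            # (i, j) and (j, i) are the same pair
                    if abs(abs(x[i] - x[j]) - h) <= tol:  # "approximate distance" with tolerance
                        num += (z[i] - z[j]) ** 2
                        count += 1
            gamma.append(num / (2 * count) if count else np.nan)
        return np.array(gamma)

    # Hypothetical samples along a one-dimensional transect.
    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 10, 50))
    z = np.sin(x) + 0.1 * rng.standard_normal(50)
    print(empirical_semivariogram(x, z, lags=[0.5, 1.0, 2.0], tol=0.25))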
Spatial inference, or estimation, of a quantityZ:Rn→R{\displaystyle Z\colon \mathbb {R} ^{n}\to \mathbb {R} }, at an unobserved locationx0{\displaystyle x_{0}}, is calculated from a linear combination of the observed valueszi=Z(xi){\displaystyle z_{i}=Z(x_{i})}and weightswi(x0),i=1,…,N{\displaystyle w_{i}(x_{0}),\;i=1,\ldots ,N}:
The weightswi{\displaystyle w_{i}}are intended to summarize two extremely important procedures in a spatial inference process:
When calculating the weights wi{\displaystyle w_{i}}, there are two objectives in the geostatistical formalism: unbiasedness and minimal variance of estimation.
If the cloud of real values Z(x0){\displaystyle Z(x_{0})} is plotted against the estimated values Z^(x0){\displaystyle {\hat {Z}}(x_{0})}, the criterion for global unbiasedness, intrinsic stationarity or wide-sense stationarity of the field, implies that the mean of the estimations must be equal to the mean of the real values.
The second criterion says that the mean of the squared deviations (Z^(x)−Z(x)){\displaystyle {\big (}{\hat {Z}}(x)-Z(x){\big )}} must be minimal; the more dispersed the cloud of estimated values is relative to the cloud of real values, the more imprecise the estimator.
Depending on the stochastic properties of the random field and the various degrees of stationarity assumed, different methods for calculating the weights can be deduced, i.e. different types of kriging apply. Classical methods are:
The unknown valueZ(x0){\displaystyle Z(x_{0})}is interpreted as a random variable located inx0{\displaystyle x_{0}}, as well as the values of neighbors samplesZ(xi),i=1,…,N{\displaystyle Z(x_{i}),\ i=1,\ldots ,N}. The estimatorZ^(x0){\displaystyle {\hat {Z}}(x_{0})}is also interpreted as a random variable located inx0{\displaystyle x_{0}}, a result of the linear combination of variables.
Kriging seeks to minimize the mean square value of the following error in estimatingZ(x0){\displaystyle Z(x_{0})}, subject to lack of bias:
The two quality criteria referred to previously can now be expressed in terms of the mean and variance of the new random variableϵ(x0){\displaystyle \epsilon (x_{0})}:
Since the random function is stationary,E[Z(xi)]=E[Z(x0)]=m{\displaystyle E[Z(x_{i})]=E[Z(x_{0})]=m}, the weights must sum to 1 in order to ensure that the model is unbiased. This can be seen as follows:
Two estimators can haveE[ϵ(x0)]=0{\displaystyle E[\epsilon (x_{0})]=0}, but the dispersion around their mean determines the difference between the quality of estimators. To find an estimator with minimum variance, we need to minimizeE[ϵ(x0)2]{\displaystyle E[\epsilon (x_{0})^{2}]}.
Seecovariance matrixfor a detailed explanation.
where the literals{Varxi,Varx0,Covxix0}{\displaystyle \left\{\operatorname {Var} _{x_{i}},\operatorname {Var} _{x_{0}},\operatorname {Cov} _{x_{i}x_{0}}\right\}}stand for
Once the covariance model or variogram, C(h){\displaystyle C(\mathbf {h} )} or γ(h){\displaystyle \gamma (\mathbf {h} )}, valid over the whole field of analysis of Z(x){\displaystyle Z(x)}, has been defined, we can write an expression for the estimation variance of any estimator as a function of the covariances between the samples and the covariances between the samples and the point to estimate:
Some conclusions can be asserted from this expression. The variance of estimation:
Solving this optimization problem (seeLagrange multipliers) results in thekriging system:
The additional parameterμ{\displaystyle \mu }is aLagrange multiplierused in the minimization of the kriging errorσk2(x){\displaystyle \sigma _{k}^{2}(x)}to honor the unbiasedness condition.
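A minimal numerical sketch of this system in Python is shown below, using an assumed squared-exponential covariance and hypothetical sample data. The augmented matrix carries the extra row and column for the Lagrange multiplier enforcing the sum-to-one constraint on the weights; the estimation variance is computed directly as Var(Σ wᵢZ(xᵢ) − Z(x₀)) so that it does not depend on the sign convention chosen for the multiplier.

    import numpy as np

    def sq_exp(a, b, sigma2=1.0, length=1.0):
        # Squared-exponential covariance C(h) = sigma2 * exp(-h**2 / (2 * length**2)).
        d = np.subtract.outer(a, b)
        return sigma2 * np.exp(-0.5 * (d / length) ** 2)

    def ordinary_kriging(x, z, x0, cov=sq_exp):
        # Ordinary kriging at a single point x0: solve the kriging system
        # with the unbiasedness constraint sum(w) = 1 via a Lagrange multiplier.
        n = len(x)
        C = cov(x, x)                       # covariances between samples
        c0 = cov(x, np.array([x0]))[:, 0]   # covariances sample-to-target
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = C
        A[:n, n] = 1.0                      # Lagrange-multiplier column
        A[n, :n] = 1.0                      # constraint row: weights sum to 1
        rhs = np.append(c0, 1.0)
        sol = np.linalg.solve(A, rhs)
        w = sol[:n]
        estimate = w @ z
        # Estimation variance computed directly as Var(sum w_i Z_i - Z_0).
        variance = w @ C @ w - 2.0 * w @ c0 + cov(np.array([x0]), np.array([x0]))[0, 0]
        return estimate, variance

    x = np.array([0.0, 1.0, 2.5, 4.0])      # hypothetical sample locations
    z = np.array([1.2, 0.8, 0.3, 0.9])      # hypothetical observed values
    print(ordinary_kriging(x, z, 1.7))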
Simple kriging is mathematically the simplest, but the least general.[9]It assumes theexpectationof therandom fieldis known and relies on acovariance function. However, in most applications neither the expectation nor the covariance are known beforehand.
The practical assumptions for the application ofsimple krigingare:
The covariance function is a crucial design choice, since it stipulates the properties of the Gaussian process and thereby the behaviour of the model. The covariance function encodes information about, for instance, smoothness and periodicity, which is reflected in the estimate produced. A very common covariance function is the squared exponential, which heavily favours smooth function estimates.[10]For this reason, it can produce poor estimates in many real-world applications, especially when the true underlying function contains discontinuities and rapid changes.
Thekriging weightsofsimple kriginghave no unbiasedness condition and are given by thesimple kriging equation system:
This is analogous to a linear regression ofZ(x0){\displaystyle Z(x_{0})}on the otherz1,…,zn{\displaystyle z_{1},\ldots ,z_{n}}.
The interpolation by simple kriging is given by
The kriging error is given by
which leads to the generalised least-squares version of theGauss–Markov theorem(Chiles & Delfiner 1999, p. 159):
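A minimal Python sketch of simple kriging under these assumptions (known constant mean and known covariance) is given below: the weights solve C w = c₀, the prediction is m + wᵀ(z − m), and the kriging variance is C(0) − wᵀc₀. The squared-exponential covariance and the sample values are illustrative assumptions, not reference data.

    import numpy as np

    def sq_exp(a, b, sigma2=1.0, length=1.0):
        # Squared-exponential covariance C(h) = sigma2 * exp(-h**2 / (2 * length**2)).
        d = np.subtract.outer(a, b)
        return sigma2 * np.exp(-0.5 * (d / length) ** 2)

    def simple_kriging(x, z, x0, mean, cov=sq_exp):
        # Simple kriging with a known constant mean: the weights solve
        # C w = c0, with no unbiasedness constraint on the weights.
        C = cov(x, x)
        c0 = cov(x, np.array([x0]))[:, 0]
        w = np.linalg.solve(C, c0)
        estimate = mean + w @ (z - mean)
        variance = cov(np.array([x0]), np.array([x0]))[0, 0] - w @ c0   # kriging variance
        return estimate, variance

    x = np.array([0.0, 1.0, 2.5, 4.0])   # hypothetical sample locations
    z = np.array([1.2, 0.8, 0.3, 0.9])   # hypothetical observed values
    print(simple_kriging(x, z, 1.7, mean=0.8))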
See alsoBayesian Polynomial Chaos
Although kriging was developed originally for applications in geostatistics, it is a general method of statistical interpolation and can be applied within any discipline to sampled data from random fields that satisfy the appropriate mathematical assumptions. It can be used where spatially related data has been collected (in 2-D or 3-D) and estimates of "fill-in" data are desired in the locations (spatial gaps) between the actual measurements.
To date kriging has been used in a variety of disciplines, including the following:
Another very important and rapidly growing field of application, inengineering, is the interpolation of data coming out as response variables of deterministic computer simulations,[28]e.g.finite element method(FEM) simulations. In this case, kriging is used as ametamodelingtool, i.e. a black-box model built over a designed set ofcomputer experiments. In many practical engineering problems, such as the design of ametal formingprocess, a single FEM simulation might be several hours or even a few days long. It is therefore more efficient to design and run a limited number of computer simulations, and then use a kriging interpolator to rapidly predict the response in any other design point. Kriging is therefore used very often as a so-calledsurrogate model, implemented insideoptimizationroutines.[29]Kriging-based surrogate models may also be used in the case of mixed integer inputs.[30]
|
https://en.wikipedia.org/wiki/Kriging
|
Inmathematics, alinear approximationis an approximation of a generalfunctionusing alinear function(more precisely, anaffine function). They are widely used in the method offinite differencesto produce first order methods for solving or approximating solutions to equations.
Given a twice continuously differentiable functionf{\displaystyle f}of onerealvariable,Taylor's theoremfor the casen=1{\displaystyle n=1}states thatf(x)=f(a)+f′(a)(x−a)+R2{\displaystyle f(x)=f(a)+f'(a)(x-a)+R_{2}}whereR2{\displaystyle R_{2}}is the remainder term. The linear approximation is obtained by dropping the remainder:f(x)≈f(a)+f′(a)(x−a).{\displaystyle f(x)\approx f(a)+f'(a)(x-a).}
This is a good approximation when x{\displaystyle x} is close enough to a{\displaystyle a}, since a curve, when closely observed, will begin to resemble a straight line. Therefore, the expression on the right-hand side is just the equation for the tangent line to the graph of f{\displaystyle f} at (a,f(a)){\displaystyle (a,f(a))}. For this reason, this process is also called the tangent line approximation. Linear approximations in this case are further improved when the second derivative at a, f″(a){\displaystyle f''(a)}, is sufficiently small (close to zero), i.e., at or near an inflection point.
Iff{\displaystyle f}isconcave downin the interval betweenx{\displaystyle x}anda{\displaystyle a}, the approximation will be an overestimate (since the derivative is decreasing in that interval). Iff{\displaystyle f}isconcave up, the approximation will be an underestimate.[1]
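As a worked example of the tangent line approximation, the Python sketch below linearizes the square root function at a = 4; since the square root is concave down, the linear estimate is a slight overestimate, in line with the remark above. The helper name is an assumption made for the example.

    import math

    def linear_approx(f, fprime, a, x):
        # Tangent-line approximation  f(x) ~ f(a) + f'(a) * (x - a).
        return f(a) + fprime(a) * (x - a)

    # Approximate sqrt(4.1) by linearizing sqrt at a = 4, where f'(a) = 1 / (2 sqrt(a)).
    approx = linear_approx(math.sqrt, lambda t: 0.5 / math.sqrt(t), a=4.0, x=4.1)
    print(approx, math.sqrt(4.1))   # 2.025 vs. 2.02485..., a slight overestimate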
Linear approximations forvectorfunctions of a vector variable are obtained in the same way, with the derivative at a point replaced by theJacobianmatrix. For example, given a differentiable functionf(x,y){\displaystyle f(x,y)}with real values, one can approximatef(x,y){\displaystyle f(x,y)}for(x,y){\displaystyle (x,y)}close to(a,b){\displaystyle (a,b)}by the formulaf(x,y)≈f(a,b)+∂f∂x(a,b)(x−a)+∂f∂y(a,b)(y−b).{\displaystyle f\left(x,y\right)\approx f\left(a,b\right)+{\frac {\partial f}{\partial x}}\left(a,b\right)\left(x-a\right)+{\frac {\partial f}{\partial y}}\left(a,b\right)\left(y-b\right).}
The right-hand side is the equation of the plane tangent to the graph ofz=f(x,y){\displaystyle z=f(x,y)}at(a,b).{\displaystyle (a,b).}
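The same idea in two variables amounts to evaluating the tangent plane. The short sketch below applies the formula above to a hypothetical function f(x, y) = x·y², with the partial derivatives supplied by hand for illustration.

    def linear_approx_2d(f, fx, fy, a, b, x, y):
        # Tangent-plane approximation of f near (a, b).
        return f(a, b) + fx(a, b) * (x - a) + fy(a, b) * (y - b)

    f  = lambda x, y: x * y ** 2
    fx = lambda x, y: y ** 2        # partial derivative with respect to x
    fy = lambda x, y: 2 * x * y     # partial derivative with respect to y

    # Linearize at (2, 3) and evaluate at (2.1, 2.9).
    print(linear_approx_2d(f, fx, fy, 2.0, 3.0, 2.1, 2.9))   # 17.7
    print(f(2.1, 2.9))                                       # 17.661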
In the more general case ofBanach spaces, one hasf(x)≈f(a)+Df(a)(x−a){\displaystyle f(x)\approx f(a)+Df(a)(x-a)}whereDf(a){\displaystyle Df(a)}is theFréchet derivativeoff{\displaystyle f}ata{\displaystyle a}.
Gaussian opticsis a technique ingeometrical opticsthat describes the behaviour of light rays in optical systems by using theparaxial approximation, in which only rays which make small angles with theoptical axisof the system are considered.[2]In this approximation, trigonometric functions can be expressed as linear functions of the angles. Gaussian optics applies to systems in which all the optical surfaces are either flat or are portions of asphere. In this case, simple explicit formulae can be given for parameters of an imaging system such as focal distance, magnification and brightness, in terms of the geometrical shapes and material properties of the constituent elements.
The period of swing of asimple gravity pendulumdepends on itslength, the localstrength of gravity, and to a small extent on the maximumanglethat the pendulum swings away from vertical,θ0, called theamplitude.[3]It is independent of themassof the bob. The true periodTof a simple pendulum, the time taken for a complete cycle of an ideal simple gravity pendulum, can be written in several different forms (seependulum), one example being theinfinite series:[4][5]T=2πLg(1+116θ02+113072θ04+⋯){\displaystyle T=2\pi {\sqrt {L \over g}}\left(1+{\frac {1}{16}}\theta _{0}^{2}+{\frac {11}{3072}}\theta _{0}^{4}+\cdots \right)}
whereLis the length of the pendulum andgis the localacceleration of gravity.
However, if one takes the linear approximation (i.e. if the amplitude is limited to small swings[Note 1]), the period is:[6]T≈2πLg{\displaystyle T\approx 2\pi {\sqrt {L \over g}}}
In the linear approximation, the period of swing is approximately the same for different size swings: that is,the period is independent of amplitude. This property, calledisochronism, is the reason pendulums are so useful for timekeeping.[7]Successive swings of the pendulum, even if changing in amplitude, take the same amount of time.
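The following sketch compares the linearized period 2π√(L/g) with the first terms of the series quoted above for a few amplitudes, illustrating how weak the amplitude dependence is for modest swings; the pendulum length and amplitudes are illustrative values only.

    import math

    def period_small_angle(L, g=9.81):
        # Linearized (small-angle) pendulum period T = 2 * pi * sqrt(L / g).
        return 2 * math.pi * math.sqrt(L / g)

    def period_series(L, theta0, g=9.81):
        # First terms of the amplitude-dependent series for the true period.
        T0 = period_small_angle(L, g)
        return T0 * (1 + theta0 ** 2 / 16 + 11 * theta0 ** 4 / 3072)

    L = 1.0
    for theta0 in (0.1, 0.5, 1.0):   # amplitudes in radians
        print(theta0, period_small_angle(L), period_series(L, theta0))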
The electrical resistivity of most materials changes with temperature. If the temperatureTdoes not vary too much, a linear approximation is typically used:ρ(T)=ρ0[1+α(T−T0)]{\displaystyle \rho (T)=\rho _{0}[1+\alpha (T-T_{0})]}whereα{\displaystyle \alpha }is called thetemperature coefficient of resistivity,T0{\displaystyle T_{0}}is a fixed reference temperature (usually room temperature), andρ0{\displaystyle \rho _{0}}is the resistivity at temperatureT0{\displaystyle T_{0}}. The parameterα{\displaystyle \alpha }is an empirical parameter fitted from measurement data. Because the linear approximation is only an approximation,α{\displaystyle \alpha }is different for different reference temperatures. For this reason it is usual to specify the temperature thatα{\displaystyle \alpha }was measured at with a suffix, such asα15{\displaystyle \alpha _{15}}, and the relationship only holds in a range of temperatures around the reference.[8]When the temperature varies over a large temperature range, the linear approximation is inadequate and a more detailed analysis and understanding should be used.
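A minimal sketch of this linear temperature model is given below; the numerical values of rho0 and alpha are roughly copper-like and are included only for demonstration, not as reference data.

    def resistivity(T, rho0, alpha, T0=20.0):
        # Linear temperature model rho(T) = rho0 * (1 + alpha * (T - T0)),
        # valid only in a range of temperatures around the reference T0.
        return rho0 * (1 + alpha * (T - T0))

    # Illustrative, approximately copper-like numbers (ohm-metres, per degree C):
    print(resistivity(60.0, rho0=1.68e-8, alpha=0.0039))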
|
https://en.wikipedia.org/wiki/Linear_approximation
|
Amental modelis an internal representation of externalreality: that is, a way of representing reality within one'smind. Such models arehypothesizedto play a major role incognition,reasoninganddecision-making. The term for this concept was coined in 1943 byKenneth Craik, who suggested that the mind constructs "small-scalemodels" of reality that it uses to anticipate events. Mental models can help shapebehaviour, including approaches to solving problems and performing tasks.
Inpsychology, the termmental modelsis sometimes used to refer tomental representationsor mental simulation generally. The concepts ofschemaandconceptual modelsare cognitively adjacent. Elsewhere, it is used to refer to the"mental model" theory of reasoningdeveloped byPhilip Johnson-LairdandRuth M. J. Byrne.
The termmental modelis believed to have originated withKenneth Craikin his 1943 bookThe Nature of Explanation.[1][2]Georges-Henri LuquetinLe dessin enfantin(Children's drawings), published in 1927 by Alcan, Paris, argued that children construct internal models, a view that influenced, among others, child psychologistJean Piaget.
Jay Wright Forresterdefined general mental models thus:
The image of the world around us, which we carry in our head, is just a model. Nobody in his head imagines all the world, government or country. He has only selected concepts, and relationships between them, and uses those to represent the real system (Forrester, 1971).
Philip Johnson-LairdpublishedMental Models: Towards a Cognitive Science of Language, Inference and Consciousnessin 1983. In the same year,Dedre Gentnerand Albert Stevens edited a collection of chapters in a book also titledMental Models.[3]The first line of their book explains the idea further: "One function of this chapter is to belabor the obvious; people's views of the world, of themselves, of their own capabilities, and of the tasks that they are asked to perform, or topics they are asked to learn, depend heavily on the conceptualizations that they bring to the task." (see the book:Mental Models).
Since then, there has been much discussion and use of the idea inhuman-computer interactionandusabilityby researchers includingDonald NormanandSteve Krug(in his bookDon't Make Me Think).Walter KintschandTeun A. van Dijk, using the termsituation model(in their bookStrategies of Discourse Comprehension, 1983), showed the relevance of mental models for the production and comprehension ofdiscourse.
Charlie Mungerpopularized the use of multi-disciplinary mental models for making business and investment decisions.[4]
One view of human reasoning is that it depends on mental models. In this view, mental models can be constructed from perception, imagination, or the comprehension of discourse (Johnson-Laird, 1983). Such mental models are similar to architects' models or to physicists' diagrams in that their structure is analogous to the structure of the situation that they represent, unlike, say, the structure of logical forms used in formal rule theories of reasoning. In this respect, they are a little like pictures in thepicture theory of languagedescribed by philosopherLudwig Wittgensteinin 1922.Philip Johnson-LairdandRuth M.J. Byrnedeveloped theirmental model theory of reasoningwhich makes the assumption that reasoning depends, not on logical form, but on mental models (Johnson-Laird and Byrne, 1991).
Mental models are based on a small set of fundamental assumptions (axioms), which distinguish them from other proposed representations in thepsychology of reasoning(Byrne and Johnson-Laird, 2009). Each mental model represents a possibility. A mental model represents one possibility, capturing what is common to all the different ways in which the possibility may occur (Johnson-Laird and Byrne, 2002). Mental models are iconic, i.e., each part of a model corresponds to each part of what it represents (Johnson-Laird, 2006). Mental models are based on a principle of truth: they typically represent only those situations that are possible, and each model of a possibility represents only what is true in that possibility according to the proposition. However, mental models can represent what is false, temporarily assumed to be true, for example, in the case ofcounterfactual conditionalsandcounterfactual thinking(Byrne, 2005).
People infer that a conclusion is valid if it holds in all the possibilities. Procedures for reasoning with mental models rely on counter-examples to refute invalid inferences; they establish validity by ensuring that a conclusion holds over all the models of the premises. Reasoners focus on a subset of the possible models of multiple-model problems, often just a single model. The ease with which reasoners can make deductions is affected by many factors, including age and working memory (Barrouillet, et al., 2000). They reject a conclusion if they find a counterexample, i.e., a possibility in which the premises hold, but the conclusion does not (Schroyens, et al. 2003; Verschueren, et al., 2005).
Scientific debate continues about whether human reasoning is based on mental models, versus formalrules of inference(e.g., O'Brien, 2009), domain-specific rules of inference (e.g., Cheng & Holyoak, 2008; Cosmides, 2005), or probabilities (e.g., Oaksford and Chater, 2007). Many empirical comparisons of the different theories have been carried out (e.g., Oberauer, 2006).
A mental model is generally:
Mental models are a fundamental way to understand organizational learning. Mental models, in popular science parlance, have been described as "deeply held images of thinking and acting".[8]Mental models are so basic to understanding the world that people are hardly conscious of them.
S.N. Groesser and M. Schaffernicht (2012) describe three basic methods which are typically used:
These methods allow showing a mental model of a dynamic system, as an explicit, written model about a certain system based on internal beliefs. Analyzing these graphical representations has been an increasing area of research across many social science fields.[9]Additionally software tools that attempt to capture and analyze the structural and functional properties of individual mental models such as Mental Modeler, "a participatory modeling tool based in fuzzy-logic cognitive mapping",[10]have recently been developed and used to collect/compare/combine mental model representations collected from individuals for use in social science research, collaborative decision-making, and natural resource planning.
Creating a model simplifies reality as a way of making sense of it; systems thinking and system dynamics are disciplines that seek to keep such simplified models in step with reality.
These two disciplines can help to bring mental models into better correspondence with reality and to simulate it accurately. They increase the probability that the consequences of decisions and actions will be in accordance with what was planned.[5]
Experimental studies carried out inweightlessness[11]and on Earth usingneuroimaging[12]showed that humans are endowed with a mental model of the effects of gravity on object motion.
After analyzing the basic characteristics, it is necessary to consider the process of changing mental models, that is, the process of learning. Learning is a feedback-loop process, and the feedback loops can be illustrated as single-loop learning or double-loop learning.
In single-loop learning, mental models affect the way that people work with information and how they determine the final decision. The decision itself changes, but the mental models remain the same. It is the predominant method of learning because it is very convenient.
Double-loop learning is used when it is necessary to change the mental model on which a decision depends. Unlike single-loop learning, this model includes a shift in understanding, from simple and static to broader and more dynamic, such as taking into account changes in the surroundings and the need to reflect those changes in mental models.[6]
|
https://en.wikipedia.org/wiki/Mental_model
|
Mental rotationis the ability torotatemental representationsoftwo-dimensionalandthree-dimensionalobjects as it is related to the visual representation of such rotation within the human mind.[1]There is a relationship between areas of the brain associated withperceptionand mental rotation. There could also be a relationship between the cognitiverateof spatial processing, generalintelligenceand mental rotation.[2][3][4]
Mental rotation can be described as the brain moving objects in order to help understand what they are and where they belong. Mental rotation has been studied to try to figure out how the mind recognizes objects in their environment. Researchers generally call such objectsstimuli. Mental rotation is one cognitive function for the person to figure out what the altered object is.
Mental rotation can be separated into the followingcognitivestages:[2]
Originally developed in 1978 by Vandenberg and Kuse[5]based on the research by Shepard and Metzler (1971),[1]aMental Rotation Test(MRT) consists of a participant comparing two3Dobjects (orletters), often rotated in some axis, and states if they are the sameimageor if they aremirror images(enantiomorphs).[1]Commonly, the test will have pairs of images each rotated a specific number ofdegrees(e.g. 0°, 60°, 120° or 180°). A set number of pairs will be split between being the same image rotated, while others are mirrored. The researcher judges the participant on howaccuratelyandrapidlythey can distinguish between the mirrored and non-mirrored pairs.[6]
Roger Shepardand Jacqueline Metzler (1971) were some of the first to research the phenomenon.[7]Their experiment specifically tested mental rotation on three-dimensional objects. Each subject was presented with multiple pairs of three-dimensional, asymmetrical lined or cubed objects. The experiment was designed to measure how long it would take each subject to determine whether the pair of objects were indeed the same object or two different objects. Theirresearchshowed that thereaction timefor participants to decide if the pair of items matched or not waslinearlyproportionalto theangleof rotation from the original position. That is, the more an object has been rotated from the original, the longer it takes an individual to determine if the two images are of the same object or enantiomorphs.[8]
In 1978,Steven G. Vandenbergand Allan R. Kuse developed the Mental Rotations Test (MRT) to assess mental rotation abilities that was based on Shepard and Metzler's (1971) original study. TheMental Rotations Testwas constructed using India ink drawings. Each stimulus was a two-dimensional image of a three-dimensional object drawn by a computer. The image was then displayed on an oscilloscope. Each image was then shown at different orientations rotated around the vertical axis. The original test contained 20 items, demanding the comparison of four figures with a criterion figure, with two of them being correct. Following the basic ideas of Shepard and Metzler's experiment, this study found a significant difference in the mental rotation scores between men and women, with men performing better. Correlations with other measures showed strong association with tests of spatial visualization and no association with verbal ability.[9][10]
In 2000, a study was conducted to find out which part of the brain is activated during mental rotation. Seven volunteers (four males and three females) between the ages of twenty-nine to sixty-six participated in this experiment. For the study, the subjects were shown eight characters 4 times each (twice in normal orientation and twice reversed) and the subjects had to decide if the character was in its normal configuration or if it was the mirror image. During this task, a PET scan was performed and revealed activation in the right posterior parietal lobe.[11]
Functional magnetic resonance imaging (fMRI) studies of brain activation during mental rotation reveal consistent increased activation of the parietal lobe, specifically the intraparietal sulcus, that is dependent on the difficulty of the task. In general, the larger the angle of rotation, the more brain activity associated with the task. This increased brain activation is accompanied by longer times to complete the rotation task and higher error rates. Researchers have argued that the increased brain activation, increased time, and increased error rates indicate that task difficulty is proportional to the angle of rotation.[12][13]
A 2006 study observed the following brain areas to be activated during mental rotation as compared to baseline: bilateral medial temporal gyrus, left medial occipital gyrus, bilateral superior occipital gyrus, bilateral superior parietal lobe, and left inferior occipital gyrus during the rotation task.[14]
A study from 2008 suggested that differences may occur early during development. The experiment was done on 3- to 4-month-old infants using a 2D mental rotation task. The researchers used a preferential-looking apparatus, which measures how long the infant looks at a stimulus. They started by familiarizing the participants with the number "1" and its rotations, and then showed them a picture of a rotated "1" and its mirror image. The study showed that males were more responsive to the mirror image, suggesting that gendered differences may appear early in development. According to the study, this may mean that males and females process mental rotation differently even as infants.[15] Supporting the presence of such differences early in development, other studies have found that gendered differences in mental rotation tests were visible in all age groups, including young children, whereas these differences emerged much later for other categories of spatial tests.[16]
In 2020, Advances in Child Development and Behavior published a review that examined mental rotation abilities during very early development.[17] The authors concluded that an ability to mentally rotate objects can be detected in infants as young as 3 months of age. Also, MR processes in infancy likely remain stable over time into adulthood. Additional variables that appeared to influence infants' MR performance include motor activity, stimulus complexity, hormone levels, and parental attitudes.[18]
Physical objects that people imagine rotating in everyday life have many properties, such as textures, shapes, and colors. A study at the University of California Santa Barbara was conducted to specifically test the extent to which visual information, such as color, is represented during mental rotation. This study used several methods such as reaction time studies, verbal protocol analysis, and eye tracking. In the initial reaction time experiments, those with poor rotational ability were affected by the colors of the image, whereas those with good rotational ability were not. Overall, those with poor ability were faster and more accurate identifying images that were consistently colored. The verbal protocol analysis showed that the subjects with low spatial ability mentioned color in their mental rotation tasks more often than participants with high spatial ability. One thing that can be shown through this experiment is that those with higher rotational ability will be less likely to represent color in their mental rotation. Poor rotators will be more likely to represent color in their mental rotation using piecemeal strategies (Khooshabeh & Hegarty, 2008).
Research on how athleticism and artistic ability affect mental rotation has been conducted. Pietsch, S., & Jansen, P. (2012) showed that people who were athletes or musicians had faster reaction times than people who were not. They tested this by splitting participants aged 18 and older into three groups: music students, sports students, and education students. It was found that students who were focused on sports or music did much better than those who were education majors. Also, the male athletes and education majors in the experiment were faster than the respective females, but male and female musicians showed no significant difference in reaction time.
A 2007 study supported the results that musicians perform better on mental rotation tasks than non-musicians. In particular, orchestral musicians' MRT task performance exhibited aptitude levels significantly higher than the population baseline.[19]
Moreau, D., Clerc, et al. (2012) also investigated whether athletes were more spatially aware than non-athletes. This experiment took undergraduate college students and tested them with the mental rotation test before any sport training, and then again afterward. The participants were trained in two different sports to see if this would help their spatial awareness. It was found that the participants did better on the mental rotation test after they had trained in the sports than they did before the training. This experiment suggested that if people could find ways to train their mental rotation skills, they could perform demanding spatial activities with greater ease.
Researchers studied the difference in mental rotation ability between gymnasts, handball players, and soccer players with both in-depth and in-plane rotations. Results suggested that athletes were better at performing mental rotation tasks that were more closely related to their sport of expertise.[20]
There is a correlation between mental rotation and motor ability in children, and this connection is especially strong in boys ages 7–8. The study showed that there is considerable overlap between spatial reasoning and athletic ability, even among young children.[21]
A mental rotation test (MRT) was carried out on gymnasts, orienteers, runners, and non-athletes. Results showed that non-athletes were greatly outperformed by gymnasts and orienteers, but not by runners. Gymnasts (egocentric athletes) did not outperform orienteers (allocentric athletes).[22] Egocentric indicates understanding the position of your body as it relates to objects in space, and allocentric indicates understanding the relation of multiple objects in space independently of the self-perspective.
A study investigated the effect of mental rotation on postural stability. Participants performed a MR (mental rotation) task involving either foot stimuli, hand stimuli, or non-body stimuli (a car) and then had to balance on one foot. The results suggested that MR tasks involving foot stimuli were more effective at improving balance than hand or car stimuli, even after 60 minutes.[23]
Contrary to what one might expect, previous studies examining whether artists are superior at mental rotation have been mixed, and a recent study substantiates the null findings. It has been theorized that artists are adept at recognizing, creating, and activating visual stimuli, but not necessarily at manipulating them.[24]
A 2018 study examined the effect of studying various subjects within higher education on mental rotation ability.[25] The researchers found that architecture students performed significantly better than art students, who performed significantly better than both psychology and business majors, with gender and other demographic differences accounted for. These findings make sense intuitively, given that architecture students are highly acquainted with manipulating the orientation of structures in space.
Following the Vandenberg and Kuse study, subsequent research attempted to assess the presence of gendered differences in mental rotation ability. For the first couple of decades immediately following the research, the topic was addressed in different meta-analyses with inconclusive results. However, Voyer et al. conducted a comprehensive review in 1995, which showed that gender differences were reliable and more pronounced in specific tasks, indicating that sex affects the processes underlying performance in spatial memory tests. Analogous to other types of spatial reasoning tasks, men tended to outperform women by a statistically significant margin[16] across the MR literature.
As mentioned above, many studies have shown that there is a difference between male and female performance in mental rotation tasks. To learn more about this difference, brain activation during a mental rotation task was studied. In 2012, a study[26] was done in which males and females were asked to execute a mental rotation task, and their brain activity was recorded with an fMRI. The researchers found a difference in brain activation: males showed stronger activity in the brain areas used in the mental rotation task.
Furthermore, sex-related differences in mental rotation abilities may reflect evolutionary differences. On this account, men assumed the role of hunting and foraging, which necessitates a greater degree of visual-spatial processing than the child-rearing and domestic tasks which women performed. Biologically, males receive higher fetal exposure to androgens than females, and retain these relatively higher levels for life. This difference plays a significant role in human sexual dimorphism, and may be a causal factor in the differences observed regarding mental rotation. Notably, women with congenital adrenal hyperplasia (CAH), who are exposed to higher levels of fetal androgen than control women, tend to perform better on the MRT than women with normal amounts of fetal androgen exposure.[27] Additionally, the significant role of hormonal variation between the sexes was supported by a 2004 study, which revealed that testosterone (a principal androgen) level in young men was negatively correlated with the number of errors and response time in the MRT.[28] Therefore, higher levels of testosterone probably contribute to better performance.
Another study from 2015 focused on women and their abilities in a mental rotation task and an emotion recognition task. In this experiment the researchers induced a feeling or a situation in which women feel more or less powerful. They were able to conclude that women in a situation of power are better at a mental rotation task (but perform worse at an emotion recognition task) than other women.[29] The types of cognitive strategies that men and women typically employ may also be a contributing factor. The literature has established that men generally prefer holistic strategies, whereas women prefer analytic-verbal strategies and focus on specific parts of the whole puzzle. Women also tended to act more conservatively, sacrificing time to double-check items more often than men. Consequently, women require more time to execute their technique when completing tasks like the MRT. In order to determine the extent of this variable's significance, Hirnstein et al. (2009) created a modified MRT in which the number of matching figures could vary between zero and four, which, compared to the original MRT, favored the strategy most often employed by women. The research found that gender differences declined somewhat, but men still outperformed women.[30]
Along the same lines, a 2021 study found intriguing results in an attempt to discern the mechanisms behind the established gender disparity. The researchers hypothesized that task characteristics, not only anatomical or social differences, could explain men's advantage in mental rotation. In particular, the objects to be rotated were changed from the typical geometric or spherical shapes to male- or female-stereotyped objects, such as a tractor and a stroller, respectively. The results revealed significant gender differences only when male-stereotyped objects were used as rotational material. When female-stereotyped rotational material was used, men and women performed equally. This finding may explain underlying causes behind the usual disparate outcomes, in that the male tendency to do somewhat better on MRT tests probably stems from the evolutionary applicability of spatial reasoning. Objects that are not relevant to historical male gender roles, and are consequently generally unfamiliar to men, are much more difficult for men to conceptualize spatially than more familiar shapes.[31] Likewise, other recent studies suggest that differences in mental rotation tasks are a consequence of the procedure and the artificiality of the stimuli. A 2017 study leveraged photographs and three-dimensional models, evaluating multiple approaches and stimuli. Results show that changing the stimuli can eliminate any male advantages found from the Vandenberg and Kuse test (1978).[32]
Studying differences between male and female brains can have interesting applications. For example, it could help in the understanding of the autism spectrum disorders. One of the theories concerning autism is the EMB (extreme male brain) theory, which considers autistic people to have an "extreme male brain". In a study[33] from 2015, researchers confirmed that there is a difference between males and females in a mental rotation task (by studying people without autism): males are more successful. They then highlighted the fact that autistic people do not show this "male performance" in a mental rotation task, and concluded that "autistic people do not have an extreme version of a male cognitive profile as proposed by the EMB theory".[33]
Much of the current and future research pertains to expanding on what has been established by the literature and investigating underlying causes behind previous results. Future studies will consider additional factors that could influence MR ability, including demographics, various aptitudes, personality, and rare or deviant psychological profiles. Many current and planned studies examine the ways in which certain brain abnormalities, including many of those caused by traumatic injuries, affect one's ability to perform mental rotation. There is some evidence that what appears to be mental rotation in depth is actually a response to the properties of flat pictures.[34][35]
There may be relationships between competent bodily movement and the speed with which individuals can perform mental rotation. Researchers found that children who trained with mental rotation tasks had improved strategy skills after practicing.[36] People use many different strategies to complete tasks; psychologists study participants who use specific cognitive skills to compare competency and reaction times.[37] Others continue to examine the differences in competency of mental rotation based on the objects being rotated.[38] Participants' identification with the rotated object could hinder or help their mental rotation abilities across genders and ages, bearing on the earlier claim that males have faster reaction times.[26][39][40] Psychologists will continue to test similarities between mental rotation and physical rotation, examining the difference in reaction times and relevance to environmental implications.[41]
|
https://en.wikipedia.org/wiki/Mental_rotation
|
A mirror neuron is a neuron that fires both when an animal acts and when the animal observes the same action performed by another.[1][2][3] Thus, the neuron "mirrors" the behavior of the other, as though the observer were itself acting. Mirror neurons are not always physiologically distinct from other types of neurons in the brain; their main differentiating factor is their response patterns.[4] By this definition, such neurons have been directly observed in humans[5] and other primates,[6] as well as in birds.[7]
In humans, brain activity consistent with that of mirror neurons has been found in the premotor cortex, the supplementary motor area, the primary somatosensory cortex, and the inferior parietal cortex.[8] The function of the mirror system in humans is a subject of much speculation. Birds have been shown to have imitative resonance behaviors and neurological evidence suggests the presence of some form of mirroring system.[6][9] To date, no widely accepted neural or computational models have been put forward to describe how mirror neuron activity supports cognitive functions.[10][11][12]
The subject of mirror neurons continues to generate intense debate. In 2014, Philosophical Transactions of the Royal Society B published a special issue entirely devoted to mirror neuron research.[13] Some researchers speculate that mirror systems may simulate observed actions, and thus contribute to theory of mind skills,[14][15] while others relate mirror neurons to language abilities.[16] Neuroscientists such as Marco Iacoboni have argued that mirror neuron systems in the human brain help humans understand the actions and intentions of other people. In addition, Iacoboni has argued that mirror neurons are the neural basis of the human capacity for emotions such as empathy.[17]
In the 1980s and 1990s, neurophysiologists Giacomo Rizzolatti, Giuseppe Di Pellegrino, Luciano Fadiga, Leonardo Fogassi, and Vittorio Gallese at the University of Parma placed electrodes in the ventral premotor cortex of the macaque monkey to study neurons specialized in the control of hand and mouth actions; for example, taking hold of an object and manipulating it. During each experiment, the researchers allowed the monkey to reach for pieces of food, and recorded from single neurons in the monkey's brain, thus measuring the neuron's response to certain movements.[18][19] They found that some neurons responded when the monkey observed a person picking up a piece of food, and also when the monkey itself picked up the food.
The discovery was initially submitted to Nature, but was rejected for its "lack of general interest" before being published in a less competitive journal.[20]
A few years later, the same group published another empirical paper, discussing the role of the mirror-neuron system in action recognition, and proposing that the human Broca's area was the homologue region of the monkey ventral premotor cortex.[21] While these papers reported the presence of mirror neurons responding to hand actions, a subsequent study by Pier Francesco Ferrari and colleagues[22] described the presence of mirror neurons responding to mouth actions and facial gestures.
Further experiments confirmed that about 10% of neurons in the monkey inferior frontal and inferior parietal cortex have "mirror" properties and give similar responses to performed hand actions and observed actions. In 2002, Christian Keysers and colleagues reported that, in both humans and monkeys, the mirror system also responds to the sound of actions.[3][23][24]
Reports on mirror neurons have been widely published[21] and confirmed,[25] with mirror neurons found in both inferior frontal and inferior parietal regions of the brain. Recently, evidence from functional neuroimaging strongly suggests that humans have similar mirror neuron systems: researchers have identified brain regions which respond during both action and observation of action. Not surprisingly, these brain regions include those found in the macaque monkey.[1] However, functional magnetic resonance imaging (fMRI) can examine the entire brain at once and suggests that a much wider network of brain areas shows mirror properties in humans than previously thought. These additional areas include the somatosensory cortex and are thought to make the observer feel what it feels like to move in the observed way.[26][27]
Many implicitly assume that the mirroring function of mirror neurons is due primarily to heritable genetic factors and that the genetic predisposition to develop mirror neurons evolved because they facilitate action understanding.[28] In contrast, a number of theoretical accounts argue that mirror neurons could simply emerge due to learned associations, including the Hebbian theory,[29] the associative learning theory,[28] and canalization.[30]
The first animal in which researchers have studied mirror neurons individually is the macaque monkey. In these monkeys, mirror neurons are found in the inferior frontal gyrus (region F5) and the inferior parietal lobule.[1]
Mirror neurons are believed to mediate the understanding of other animals' behaviour. For example, a mirror neuron which fires when the monkey rips a piece of paper would also fire when the monkey sees a person rip paper, or hears paper ripping (without visual cues). These properties have led researchers to believe that mirror neurons encode abstract concepts of actions like 'ripping paper', whether the action is performed by the monkey or another animal.[1]
The function of mirror neurons in macaques remains unknown. Adult macaques do not seem to learn by imitation. Recent experiments by Ferrari and colleagues suggest that infant macaques can imitate a human's face movements, though only as neonates and during a limited temporal window.[31] Although it has not yet been empirically demonstrated, it has been proposed that mirror neurons cause this behaviour and other imitative phenomena.[32] Indeed, there is limited understanding of the degree to which monkeys show imitative behaviour.[10]
In adult monkeys, mirror neurons may enable the monkey to understand what another monkey is doing, or to recognize the other monkey's action.[33]
A number of studies have shown that rats and mice show signs of distress while witnessing another rodent receive footshocks.[34] Christian Keysers's group recorded from neurons while rats experienced pain or witnessed the pain of others, and revealed the presence of pain mirror neurons in the rat's anterior cingulate cortex, i.e. neurons that respond both while an animal experiences pain and while witnessing the pain of others.[35] Deactivating this region of the cingulate cortex led to reduced emotional contagion in the rats, so that observer rats showed reduced distress while witnessing another rat experience pain.[35] The homologous part of the anterior cingulate cortex has been associated with empathy for pain in humans,[36] suggesting a homology between the systems involved in emotional contagion in rodents and empathy/emotional contagion for pain in humans.
It is not normally possible to study single neurons in the human brain, so most evidence for mirror neurons in humans is indirect. Brain imaging experiments using functional magnetic resonance imaging (fMRI) have shown that the human inferior frontal cortex and superior parietal lobe are active when the person performs an action and also when the person sees another individual performing an action. It has been suggested that these brain regions contain mirror neurons, and they have been defined as the human mirror neuron system.[37] More recent experiments have shown that even at the level of single participants, scanned using fMRI, large areas containing multiple fMRI voxels increase their activity both during the observation and execution of actions.[26]
Neuropsychological studies looking at lesion areas that cause action knowledge, pantomime interpretation, and biological motion perception deficits have pointed to a causal link between the integrity of the inferior frontal gyrus and these behaviours.[38][39][40] Transcranial magnetic stimulation studies have confirmed this as well.[41][42] These results indicate that the activation in mirror neuron-related areas is unlikely to be merely epiphenomenal.
A study published in April 2010 reports recordings from single neurons with mirror properties in the human brain.[43] Mukamel et al. (Current Biology, 2010) recorded from the brains of 21 patients who were being treated at Ronald Reagan UCLA Medical Center for intractable epilepsy. The patients had been implanted with intracranial depth electrodes to identify seizure foci for potential surgical treatment. Electrode location was based solely on clinical criteria; the researchers, with the patients' consent, used the same electrodes to "piggyback" their research. The researchers found a small number of neurons that fired or showed their greatest activity both when the individual performed a task and when they observed a task. Other neurons had anti-mirror properties: they responded when the participant performed an action, but were inhibited when the participant saw that action.
The mirror neurons found were located in the supplementary motor area and medial temporal cortex (other brain regions were not sampled). For purely practical reasons, these regions are not the same as those in which mirror neurons had been recorded in the monkey: researchers in Parma were studying the ventral premotor cortex and the associated inferior parietal lobe, two regions in which epilepsy rarely occurs, and hence single-cell recordings in these regions are not usually done in humans. On the other hand, no one has to date looked for mirror neurons in the supplementary motor area or the medial temporal lobe in the monkey. Together, this does not suggest that humans and monkeys have mirror neurons in different locations, but rather that they may have mirror neurons both in the ventral premotor cortex and inferior parietal lobe, where they have been recorded in the monkey, and in the supplementary motor area and medial temporal lobe, where they have been recorded in humans – especially because detailed human fMRI analyses suggest activity compatible with the presence of mirror neurons in all these regions.[26]
Another study has suggested that human beings do not necessarily have more mirror neurons than monkeys, but instead that there is a core set of mirror neurons used in action observation and execution. For other proposed functions of mirror neurons, however, the mirror system may recruit additional areas of the brain for its auditory, somatosensory, and affective components.[44]
Human infant data using eye-tracking measures suggest that the mirror neuron system develops before 12 months of age and that this system may help human infants understand other people's actions.[45] A critical question concerns how mirror neurons acquire mirror properties. Two closely related models postulate that mirror neurons are trained through Hebbian[46] or associative learning[47][48][12] (see Associative Sequence Learning). However, if premotor neurons need to be trained by action in order to acquire mirror properties, it is unclear how newborn babies are able to mimic the facial gestures of another person (imitation of unseen actions), as suggested by the work of Meltzoff and Moore. One possibility is that the sight of tongue protrusion recruits an innate releasing mechanism in neonates. Careful analysis suggests that 'imitation' of this single gesture may account for almost all reports of facial mimicry by new-born infants.[49]
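As a purely illustrative toy version of the Hebbian/associative account mentioned above (not a model from the cited literature; all parameters and numbers are arbitrary), the sketch below shows how a motor unit that is repeatedly co-active with the sight of the animal's own action could acquire a response to observation alone:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.1    # learning rate (arbitrary)
w = 0.0      # strength of the visual -> motor connection

# Training phase: the animal performs a grasp while (usually) watching its own
# hand, so visual and motor activity are correlated (Hebbian co-activation).
for _ in range(200):
    motor = 1.0                                    # motor command for the grasp
    visual = 1.0 if rng.random() < 0.9 else 0.0    # hand usually in view
    w = min(w + eta * visual * motor, 1.0)         # Hebb rule with crude saturation

# Test phase: observation only, no motor command is issued.
observation_drive = w * 1.0
print(f"learned visual->motor weight: {w:.2f}")
print("motor unit responds to observation alone:", observation_drive > 0.5)
```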
Many studies link mirror neurons to understanding goals and intentions. Fogassi et al. (2005)[25] recorded the activity of 41 mirror neurons in the inferior parietal lobe (IPL) of two rhesus macaques. The IPL has long been recognized as an association cortex that integrates sensory information. The monkeys watched an experimenter either grasp an apple and bring it to his mouth or grasp an object and place it in a cup.
Only the type of action, and not the kinematic force with which models manipulated objects, determined neuron activity. It was also significant that neurons fired before the monkey observed the human model starting the second motor act (bringing the object to the mouth or placing it in a cup). Therefore, IPL neurons "code the same act (grasping) in a different way according to the final goal of the action in which the act is embedded."[25] They may furnish a neural basis for predicting another individual's subsequent actions and inferring intention.[25]
Understanding intention can be broken down into various stages, such as body perception and action identification. These stages correlate with various regions of the brain; for example, body parts and shapes are matched in the extrastriate and fusiform body areas of the brain. The action itself is identified and facilitated by the mirror neuron system.[50] Action understanding falls into two different processing levels, the mirror neuron system and the mentalizing system. Expected actions are primarily processed by the mirror neuron system, while unexpected actions are processed by a combination of the mirror neuron system and the mentalizing system.[51]
Another possible function of mirror neurons would be facilitation of learning. The mirror neurons code the concrete representation of the action, i.e., the representation that would be activated if the observer acted. This would allow us to simulate (to repeat internally) the observed action implicitly (in the brain), to collect our own motor programs of observed actions, and to get ready to reproduce the actions later; it is implicit training. As a result, the observer will produce the action explicitly (in his or her behavior) with agility and finesse. This happens due to associative learning processes: the more frequently a synaptic connection is activated, the stronger it becomes.[52]
Stephanie Preston and Frans de Waal,[53] Jean Decety,[54][55] Vittorio Gallese,[56][57] and Christian Keysers[3] have independently argued that the mirror neuron system is involved in empathy. A large number of experiments using fMRI, electroencephalography (EEG) and magnetoencephalography (MEG) have shown that certain brain regions (in particular the anterior insula, anterior cingulate cortex, and inferior frontal cortex) are active when people experience an emotion (disgust, happiness, pain, etc.) and when they see another person experiencing an emotion.[58][59][60][61][62][63][64] David Freedberg and Vittorio Gallese have also put forward the idea that this function of the mirror neuron system is crucial for aesthetic experiences.[65] Nevertheless, an experiment aimed at investigating the activity of mirror neurons in empathy conducted by Soukayna Bekkali and Peter Enticott at Deakin University yielded a different result. After analyzing the report's data, they came up with two conclusions about motor empathy and emotional empathy. First, there is no relationship between motor empathy and the activity of mirror neurons. Second, there is only weak evidence of these neurons' activity in the inferior frontal gyrus (IFG), and no evidence of emotional empathy associated with mirror neurons in key brain regions (inferior parietal lobule: IPL). In other words, there has not been a definitive conclusion about the role of mirror neurons in empathy and whether they are essential for human empathy.[66] However, these brain regions are not quite the same as the ones which mirror hand actions, and mirror neurons for emotional states or empathy have not yet been described in monkeys.
In a 2022 study, sixteen hand actions were given for each assignment. The assignment pictured both an activity word phrase and the intended word phrase. The hand actions were selected in "trials", each introduced twice: once with a matching word phrase and once with a misleading word phrase. The action words were depicted in two to three words, each beginning with the word "to", for instance "to point" (action) or "to spin" (intention).
Participants were expected to answer whether the word phrase matched the corresponding action or intention. Each word phrase had to be answered within 3000 ms, with a 1000 ms black screen between images; the black screen's purpose was to allow an adequate amount of time between responses. Participants pressed "x" or "m" on the keyboard to indicate their responses in a yes/no format.[67]
Christian Keysers at the Social Brain Lab and colleagues have shown that people who are more empathic according to self-report questionnaires have stronger activations both in the mirror system for hand actions[68] and the mirror system for emotions,[63] providing more direct support for the idea that the mirror system is linked to empathy.
Some researchers observed that the human mirror system does not passively respond to the observation of actions but is influenced by the mindset of the observer.[69] Researchers have also observed links between mirror neuron activity and empathetic engagement in patient care.[70]
Studies in rats have shown that the anterior cingulate cortex contains mirror neurons for pain, i.e. neurons responding both during the first-hand experience of pain and while witnessing the pain of others,[35] and inhibition of this region leads to reduced emotional contagion in rats[35] and mice,[34] and reduced aversion towards harming others.[71] This provides causal evidence for a link between pain mirror neurons, and emotional contagion and prosocial behavior, two phenomena associated with empathy, in rodents. That brain activity in the homologous brain region is associated with individual variability in empathy in humans[36] suggests that a similar mechanism may be at play across mammals.
V. S. Ramachandran has speculated that mirror neurons may provide the neurological basis of human self-awareness.[72] In an essay written for the Edge Foundation in 2009, Ramachandran gave the following explanation of his theory: "... I also speculated that these neurons can not only help simulate other people's behavior but can be turned 'inward'—as it were—to create second-order representations or meta-representations of your own earlier brain processes. This could be the neural basis of introspection, and of the reciprocity of self awareness and other awareness. There is obviously a chicken-or-egg question here as to which evolved first, but... The main point is that the two co-evolved, mutually enriching each other to create the mature representation of self that characterizes modern humans."[73]
In humans, functional MRI studies have reported finding areas homologous to the monkey mirror neuron system in the inferior frontal cortex, close to Broca's area, one of the hypothesized language regions of the brain. This has led to suggestions that human language evolved from a gesture performance/understanding system implemented in mirror neurons. Mirror neurons have been said to have the potential to provide a mechanism for action understanding, imitation learning, and the simulation of other people's behaviour.[74] This hypothesis is supported by some cytoarchitectonic homologies between monkey premotor area F5 and human Broca's area.[75] Rates of vocabulary expansion link to the ability of children to vocally mirror non-words and so to acquire the new word pronunciations. Such speech repetition occurs automatically, fast[76] and separately in the brain from speech perception.[77][78] Moreover, such vocal imitation can occur without comprehension, as in speech shadowing[79] and echolalia.[80]
Further evidence for this link comes from a recent study in which the brain activity of two participants was measured using fMRI while they were gesturing words to each other with hand gestures in a game of charades – a modality that some have suggested might represent the evolutionary precursor of human language. Analysis of the data using Granger causality revealed that the mirror-neuron system of the observer indeed reflects the pattern of activity in the motor system of the sender, supporting the idea that the motor concept associated with the words is indeed transmitted from one brain to another using the mirror system.[81]
The mirror neuron system seems inherently inadequate to play any role in syntax: this defining property of human languages is implemented in hierarchical recursive structure, which is flattened into linear sequences of phonemes, making the recursive structure inaccessible to sensory detection.[82]
The term automatic imitation is commonly used to refer to cases in which an individual, having observed a body movement, unintentionally performs a similar body movement or alters the way that a body movement is performed. Automatic imitation rarely involves overt execution of matching responses. Instead the effects typically consist of reaction time, rather than accuracy, differences between compatible and incompatible trials.
Research reveals that the existence of automatic imitation, which is a covert form of imitation, is distinct from spatial compatibility. It also indicates that, although automatic imitation is subject to input modulation by attentional processes, and output modulation by inhibitory processes, it is mediated by learned, long-term sensorimotor associations that cannot be altered directly by intentional processes. Many researchers believe that automatic imitation is mediated by the mirror neuron system.[83] Additionally, there are data that demonstrate that our postural control is impaired when people listen to sentences about other actions. For example, if the task is to maintain posture, people do it worse when they listen to sentences like this: "I get up, put on my slippers, go to the bathroom." This phenomenon may be due to the fact that during action perception there is similar motor cortex activation as if a human being performed the same action (mirror neuron system).[84]
In contrast with automatic imitation, motor mimicry is observed in (1) naturalistic social situations and (2) via measures of action frequency within a session rather than measures of speed and/or accuracy within trials.[85]
The integration of research on motor mimicry and automatic imitation could reveal plausible indications that these phenomena depend on the same psychological and neural processes. Preliminary evidence, however, comes from studies showing that social priming has similar effects on motor mimicry.[86][87]
Nevertheless, the similarities between automatic imitation, mirror effects, and motor mimicry have led some researchers to propose that automatic imitation is mediated by the mirror neuron system and that it is a tightly controlled laboratory equivalent of the motor mimicry observed in naturalistic social contexts. If true, then automatic imitation can be used as a tool to investigate how the mirror neuron system contributes to cognitive functioning and how motor mimicry promotes prosocial attitudes and behavior.[88][89]
Meta-analyses of imitation studies in humans suggest that there is enough evidence of mirror system activation during imitation that mirror neuron involvement is likely, even though no published studies have recorded the activities of single neurons. However, the mirror system alone is likely insufficient for motor imitation. Studies show that regions of the frontal and parietal lobes that extend beyond the classical mirror system are equally activated during imitation. This suggests that other areas, along with the mirror system, are crucial to imitation behaviors.[8]
It has also been proposed that problems with the mirror neuron system may underlie cognitive disorders, particularly autism.[90][91] However, the connection between mirror neuron dysfunction and autism is tentative and it remains to be demonstrated how mirror neurons are related to many of the important characteristics of autism.[10]
Some researchers claim there is a link between mirror neuron deficiency and autism. EEG recordings from motor areas are suppressed when someone watches another person move, a signal that may relate to the mirror neuron system. Additionally, by combining eye-movement tracking of biological motion with EEG recordings, a mu suppression index can be calculated.[92] This suppression was less in children with autism.[90] Although these findings have been replicated by several groups,[93][94] other studies have not found evidence of a dysfunctional mirror neuron system in autism.[10] In 2008, Oberman et al. published a research paper that presented conflicting EEG evidence. Oberman and Ramachandran found typical mu-suppression for familiar stimuli, but not for unfamiliar stimuli, leading them to conclude that the mirror neuron system of children with ASD (autism spectrum disorder) was functional, but less sensitive than that of typical children.[95] Based on the conflicting evidence presented by mu-wave suppression experiments, Patricia Churchland has cautioned that mu-wave suppression results cannot be used as a valid index for measuring the performance of mirror neuron systems.[96] Recent research indicates that mirror neurons do not play a role in autism:
...no clear cut evidence emerges for a fundamental mirror system deficit in autism. Behavioural studies have shown that people with autism have a good understanding of action goals. Furthermore, two independent neuroimaging studies have reported that the parietal component of the mirror system is functioning typically in individuals with autism.[97]
Some anatomical differences have been found in the mirror neuron related brain areas in adults with autism spectrum disorders, compared to non-autistic adults. All these cortical areas were thinner and the degree of thinning was correlated with autism symptom severity, a correlation nearly restricted to these brain regions.[98] Based on these results, some researchers claim that autism is caused by impairments in the mirror neuron system, leading to disabilities in social skills, imitation, empathy and theory of mind.[who?]
Many researchers have pointed out that the "broken mirrors" theory of autism is overly simplistic, and mirror neurons alone cannot explain the differences found in individuals with autism. First of all, as noted above, none of these studies were direct measures of mirror neuron activity; in other words, fMRI activity or EEG rhythm suppression does not unequivocally index mirror neurons. Dinstein and colleagues found normal mirror neuron activity in people with autism using fMRI.[99] In individuals with autism, deficits in intention understanding, action understanding and biological motion perception (the key functions of mirror neurons) are not always found,[100][101] or are task dependent.[102][103] Today, very few people believe an all-or-nothing problem with the mirror system can underlie autism. Instead, "additional research needs to be done, and more caution should be used when reaching out to the media."[104]
Research from 2010[99] concluded that autistic individuals do not exhibit mirror neuron dysfunction, although the small sample size limits the extent to which these results can be generalized. A more recent review argued there was not enough neurological evidence to support this “broken-mirror theory” of autism.[105]
In philosophy of mind, mirror neurons have become the primary rallying call of simulation theorists concerning our "theory of mind." "Theory of mind" refers to our ability to infer another person's mental state (i.e., beliefs and desires) from experiences or their behaviour.
There are several competing models which attempt to account for our theory of mind; the most notable in relation to mirror neurons is simulation theory. According to simulation theory, theory of mind is available because we subconsciously empathize with the person we're observing and, accounting for relevant differences, imagine what we would desire and believe in that scenario.[106][107] Mirror neurons have been interpreted as the mechanism by which we simulate others in order to better understand them, and therefore their discovery has been taken by some as a validation of simulation theory (which appeared a decade before the discovery of mirror neurons).[56] More recently, theory of mind and simulation have been seen as complementary systems, with different developmental time courses.[108][109][110]
At the neuronal level, in a 2015 study by Keren Haroush and Ziv Williams using jointly interacting primates performing an iterated prisoner's dilemma game, the authors identified neurons in the anterior cingulate cortex that selectively predicted an opponent's yet unknown decisions or covert state of mind. These "other-predictive neurons" differentiated between self and other decisions and were uniquely sensitive to social context, but they did not encode the opponent's observed actions or receipt of reward. These cingulate cells may therefore importantly complement the function of mirror neurons by providing additional information about other social agents that is not immediately observable or known.[111]
A series of recent studies conducted by Yawei Cheng, using a variety of neurophysiological measures, including MEG,[112] spinal reflex excitability,[113] and electroencephalography,[114][115] have documented the presence of a gender difference in the human mirror neuron system, with female participants exhibiting stronger motor resonance than male participants.
In another study, sex-based differences among mirror neuron mechanisms were reinforced in that the data showed enhanced empathetic ability in females relative to males[citation needed]. During an emotional social interaction, females showed a greater ability in emotional perspective taking[clarification needed] than did males when interacting with another person face-to-face. However, in the study, data showed that when it came to recognizing the emotions of others, all participants' abilities were very similar and there was no key difference between the male and female subjects.[116]
Baland Jalal and V. S. Ramachandran have hypothesized that the mirror neuron system is important in giving rise to the intruder hallucination and out-of-body experiences during sleep paralysis.[117] According to this theory, sleep paralysis leads to disinhibition of the mirror neuron system, paving the way for hallucinations of human-like shadowy beings. The deafferentation of sensory information during sleep paralysis is proposed as the mechanism for such mirror neuron disinhibition.[117] The authors suggest that their hypothesis on the role of the mirror neuron system could be tested:
"These ideas could be explored using neuroimaging, to examine the selective activation of brain regions associated with mirror neuron activity, when the individual is hallucinating an intruder or having an out-of-body experience during sleep paralysis ."[117]
Recent research, which measured mu-wave suppression, suggests that mirror neuron activity is positively correlated with psychotic symptoms (i.e., greater mu suppression/mirror neuron activity was highest among subjects with the greatest severity of psychotic symptoms). Researchers concluded that "higher mirror neuron activity may be the underpinning of schizophrenia sensory gating deficits and may contribute to sensory misattributions particularly in response to socially relevant stimuli, and be a putative mechanism for delusions and hallucinations."[118]
Although some in the scientific community have expressed excitement about the discovery of mirror neurons, there are scientists who have expressed doubts about both the existence and role of mirror neurons in humans. The consensus today[as of?] seems to be that the importance of so-called mirror neurons is widely overblown. According to scientists such as Hickok, Pascolo, and Dinstein, it is not clear whether mirror neurons really form a distinct class of cells (as opposed to an occasional phenomenon seen in cells that have other functions),[119] and whether mirror activity is a distinct type of response or simply an artifact of an overall facilitation of the motor system.[11]
In 2008, Ilan Dinstein et al. argued that the original analyses were unconvincing because they were based on qualitative descriptions of individual cell properties, and did not take into account the small number of strongly mirror-selective neurons in motor areas.[10] Other scientists have argued that the measurements of neuron fire delay seem not to be compatible with standard reaction times,[119] and pointed out that nobody has reported that an interruption of the motor areas in F5 would produce a decrease in action recognition.[11] Critics of this argument have replied that these authors have missed human neuropsychological and TMS studies reporting that disruption of these areas does indeed cause action deficits[39][41] without affecting other kinds of perception.[40]
In 2009, Lingnau et al. carried out an experiment in which they compared motor acts that were first observed and then executed to motor acts that were first executed and then observed. They concluded that there was a significant asymmetry between the two processes that indicated that mirror neurons do not exist in humans. They stated "Crucially, we found no signs of adaptation for motor acts that were first executed and then observed. Failure to find cross-modal adaptation for executed and observed motor acts is not compatible with the core assumption of mirror neuron theory, which holds that action recognition and understanding are based on motor simulation."[120] However, in the same year, Kilner et al. showed that if goal-directed actions are used as stimuli, both IPL and premotor regions show the repetition suppression between observation and execution that is predicted by mirror neurons.[121]
In 2009, Greg Hickok published an extensive argument against the claim that mirror neurons are involved in action-understanding: "Eight Problems for the Mirror Neuron Theory of Action Understanding in Monkeys and Humans." He concluded that "The early hypothesis that these cells underlie action understanding is likewise an interesting and prima facie reasonable idea. However, despite its widespread acceptance, the proposal has never been adequately tested in monkeys, and in humans there is strong empirical evidence, in the form of physiological and neuropsychological (double-) dissociations, against the claim."[11]
Vladimir Kosonogov sees another contradiction. The proponents of mirror neuron theory of action understanding postulate that the mirror neurons code the goals of others' actions because they are activated if the observed action is goal-directed. However, the mirror neurons are activated only when the observed action is goal-directed (object-directed action or a communicative gesture, which certainly has a goal too). How do they "know" that the definite action is goal-directed? At what stage of their activation do they detect a goal of the movement or its absence? In his opinion, the mirror neuron system can be activated only after the goal of the observed action is attributed by some other brain structures.[52]
Neurophilosophers such as Patricia Churchland have expressed both scientific and philosophical objections to the theory that mirror neurons are responsible for understanding the intentions of others. In chapter 5 of her 2011 book, Braintrust, Churchland points out that the claim that mirror neurons are involved in understanding intentions (through simulating observed actions) is based on assumptions that are clouded by unresolved philosophical issues. She makes the argument that intentions are understood (coded) at a more complex level of neural activity than that of individual neurons. Churchland states that "A neuron, though computationally complex, is just a neuron. It is not an intelligent homunculus. If a neural network represents something complex, such as an intention [to insult], it must have the right input and be in the right place in the neural circuitry to do that."[122]
Cecilia Heyes has advanced the theory that mirror neurons are the byproduct of associative learning as opposed to evolutionary adaptation. She argues that mirror neurons in humans are the product of social interaction and not an evolutionary adaptation for action-understanding. In particular, Heyes rejects the theory advanced by V. S. Ramachandran that mirror neurons have been "the driving force behind the great leap forward in human evolution."[12][123]
|
https://en.wikipedia.org/wiki/Mirror_neuron
|
Model-dependent realism is a view of scientific inquiry that focuses on the role of scientific models of phenomena.[1] It claims reality should be interpreted based upon these models, and where several models overlap in describing a particular subject, multiple, equally valid, realities exist. It claims that it is meaningless to talk about the "true reality" of a model as we can never be absolutely certain of anything. The only meaningful thing is the usefulness of the model.[2] The term "model-dependent realism" was coined by Stephen Hawking and Leonard Mlodinow in their 2010 book The Grand Design.[3]
Model-dependent realism asserts that all we can know about "reality" consists of networks of world pictures that explain observations by connecting them by rules to concepts defined in models. Will an ultimate theory of everything be found? Hawking and Mlodinow suggest it is unclear:
In the history of science we have discovered a sequence of better and better theories or models, from Plato to the classical theory of Newton to modern quantum theories. It is natural to ask: Will this sequence eventually reach an end point, an ultimate theory of the universe, that will include all forces and predict every observation we can make, or will we continue forever finding better theories, but never one that cannot be improved upon? We do not yet have a definitive answer to this question...[4]
A world picture consists of the combination of a set of observations accompanied by a conceptual model and by rules connecting the model concepts to the observations. Different world pictures that describe particular data equally well all have equal claims to be valid. There is no requirement that a world picture be unique, or even that the data selected include all available observations. The universe of all observations at present is covered by a network of overlapping world pictures and, where overlap occurs, multiple, equally valid, world pictures exist. At present, science requires multiple models to encompass existing observations:
Like the overlapping maps in a Mercator projection, where the ranges of different versions overlap, they predict the same phenomena. But just as there is no flat map that is a good representation of the earth's entire surface, there is no single theory that is a good representation of observations in all situations.[5]
Where several models are found for the same phenomena, no single model is preferable to the others within that domain of overlap.
While not rejecting the idea of "reality-as-it-is-in-itself", model-dependent realism suggests that we cannot know "reality-as-it-is-in-itself", but only an approximation of it provided by the intermediary of models. The view of models in model-dependent realism also is related to the instrumentalist approach to modern science, that a concept or theory should be evaluated by how effectively it explains and predicts phenomena, as opposed to how accurately it describes objective reality (a matter possibly impossible to establish). According to Hawking and Mlodinow, a model is a good model if it is elegant, contains few arbitrary or adjustable elements, agrees with and explains all existing observations, and makes detailed predictions about future observations that can disprove or falsify the model if they are not borne out.[6]
"If the modifications needed to accommodate new observations become too baroque, it signals the need for a new model."[7]Of course, an assessment like that is subjective, as are the other criteria.[8]According to Hawking and Mlodinow, even very successful models in use today do not satisfy all these criteria, which are aspirational in nature.[9]
|
https://en.wikipedia.org/wiki/Model-dependent_realism
|
In computational modelling, multiphysics simulation (often shortened to simply "multiphysics") is defined as the simultaneous simulation of different aspects of a physical system or systems and the interactions among them.[1] For example, simultaneous simulation of the physical stress on an object, the temperature distribution of the object and the thermal expansion which leads to the variation of the stress and temperature distributions would be considered a multiphysics simulation.[2] Multiphysics simulation is related to multiscale simulation, which is the simultaneous simulation of a single process on either multiple time or distance scales.[3]
As an interdisciplinary field, multiphysics simulation can span many science and engineering disciplines. Simulation methods frequently include numerical analysis, partial differential equations and tensor analysis.[4]
The implementation of a multiphysics simulation follows a typical series of steps.[1]
Mathematical models used in multiphysics simulations are generally a set of coupled equations. The equations can be divided into three categories according to their nature and intended role: governing equations, auxiliary equations and boundary/initial conditions. A governing equation describes a major physical mechanism or process. Multiphysics simulations are numerically implemented with discretization methods such as the finite element method, finite difference method, or finite volume method.[5]
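As a minimal, deliberately simplified illustration of these ideas (one-way coupling only, with arbitrary material values), the sketch below discretizes a steady 1D heat-conduction governing equation with finite differences and then feeds the resulting temperature field into a thermal-stress estimate for a fully constrained bar, echoing the stress/temperature/thermal-expansion example given at the start of this article:

```python
import numpy as np

# --- Thermal sub-problem: steady 1D conduction, T'' = 0, fixed end temperatures ---
n = 51                                  # grid points along the bar
T_left, T_right = 300.0, 400.0          # boundary temperatures in kelvin

A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0               # Dirichlet boundary conditions
b[0], b[-1] = T_left, T_right
for i in range(1, n - 1):               # central finite differences in the interior
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
T = np.linalg.solve(A, b)

# --- Mechanical sub-problem: thermal stress in a fully constrained bar ---
E = 200e9       # Young's modulus, Pa (illustrative, steel-like)
alpha = 12e-6   # thermal expansion coefficient, 1/K (illustrative)
T_ref = 300.0   # stress-free reference temperature
sigma = -E * alpha * (T - T_ref)        # sigma = -E*alpha*dT for full constraint

print(f"peak temperature rise  : {T.max() - T_ref:.1f} K")
print(f"peak compressive stress: {sigma.min() / 1e6:.0f} MPa")
```

In a genuinely two-way coupled problem the deformation would in turn modify the thermal problem, so the two solves would either be iterated to convergence or assembled into a single monolithic system.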
Generally speaking, multiphysics simulation is much harder than simulating the individual physical processes separately. The main additional difficulty is integrating the multiple aspects of the processes while properly handling the interactions among them. Such issues become especially difficult when different types of numerical methods are used for the individual physical aspects, for example when a fluid-structure interaction problem is simulated with a typical Eulerian finite volume method for the flow and a Lagrangian finite element method for the structural dynamics.
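When each physical aspect keeps its own solver and discretization, a common alternative to a monolithic system is a partitioned scheme in which the single-physics solvers exchange interface quantities and iterate to a consistent state. The sketch below shows such a fixed-point coupling loop with under-relaxation; the two solver functions are toy stand-ins, not an actual flow or structural code.

# Minimal sketch of a partitioned coupling loop (hypothetical solver functions).
# Each field keeps its own discretization; the fields exchange interface data and
# iterate until the exchanged quantity stops changing (fixed-point iteration with
# under-relaxation, a common stabilization for strongly coupled problems).

def solve_fluid(interface_displacement):
    """Stand-in for a flow solver: returns the traction it exerts on the interface."""
    return -2.0 * interface_displacement + 1.0      # toy linear response

def solve_structure(interface_traction):
    """Stand-in for a structural solver: returns the interface displacement."""
    return 0.4 * interface_traction                 # toy linear response

def coupled_step(d0=0.0, omega=0.5, tol=1e-10, max_iter=100):
    d = d0                                          # interface displacement guess
    for it in range(max_iter):
        t = solve_fluid(d)                          # field 1 with current interface state
        d_new = solve_structure(t)                  # field 2 with updated loads
        if abs(d_new - d) < tol:
            return d_new, it + 1
        d = (1.0 - omega) * d + omega * d_new       # under-relaxed update
    raise RuntimeError("coupling iteration did not converge")

d, iters = coupled_step()
print(f"converged interface displacement {d:.6f} after {iters} iterations")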
|
https://en.wikipedia.org/wiki/Multiphysics
|
Semiconductor device modelingcreates models for the behavior ofsemiconductor devicesbased on fundamental physics, such as the doping profiles of the devices. It may also include the creation ofcompact models(such as the well known SPICEtransistormodels), which try to capture the electrical behavior of such devices but do not generally derive them from the underlying physics. Normally it starts from the output of asemiconductor process simulation.
The figure to the right provides a simplified conceptual view of "the big picture". This figure shows two inverter stages and the resulting input-output voltage-time plot of the circuit. From the digital systems point of view the key parameters of interest are: timing delays, switching power, leakage current and cross-coupling (crosstalk) with other blocks. The voltage levels and transition speed are also of concern.
The figure also shows schematically the importance of I_on versus I_off, which in turn is related to drive current (and mobility) for the "on" device and several leakage paths for the "off" devices. Not shown explicitly in the figure are the capacitances—both intrinsic and parasitic—that affect dynamic performance.
Power scaling, which is now a major driving force in the industry, is reflected in the simplified equation shown in the figure; the critical parameters are capacitance, supply voltage and clock frequency. Key parameters that relate device behavior to system performance include the threshold voltage, driving current and subthreshold characteristics.
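The figure itself is not reproduced here, but the simplified relation it refers to is commonly written as P_dyn = alpha * C * Vdd^2 * f, the dynamic switching power in terms of activity factor, switched capacitance, supply voltage and clock frequency. A quick numerical check with illustrative values (not taken from the article):

# Quick numerical check of the dynamic switching-power relation P ~ alpha * C * Vdd^2 * f.
# The values below are illustrative only.
alpha = 0.1          # activity factor (fraction of nodes switching per cycle)
C = 1e-9             # switched capacitance [F]
Vdd = 1.0            # supply voltage [V]
f = 2e9              # clock frequency [Hz]

P_dyn = alpha * C * Vdd ** 2 * f
print(f"dynamic power: {P_dyn:.3f} W")                        # 0.2 W for these numbers

# Scaling intuition: lowering Vdd enters quadratically, so a 30 % supply reduction
# saves roughly half of the dynamic power at the same frequency.
print(f"at 0.7*Vdd:    {alpha * C * (0.7 * Vdd) ** 2 * f:.3f} W")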
It is the confluence of system performance issues with the underlying technology and device design variables that results in the ongoing scaling laws that we now codify asMoore's law.
The physics and modeling of devices in integrated circuits are dominated by MOS and bipolar transistor modeling. However, other devices are important, such as memory devices, which have rather different modeling requirements. There are of course also issues of reliability engineering—for example, electro-static discharge (ESD) protection circuits and devices—where substrate and parasitic devices are of pivotal importance. These effects and their modeling are not considered by most device modeling programs; the interested reader is referred to several excellent monographs in the area of ESD and I/O modeling.[1][2][3]
Physics-driven device modeling is intended to be accurate, but it is not fast enough for higher-level tools, including circuit simulators such as SPICE. Therefore, circuit simulators normally use more empirical models (often called compact models) that do not directly model the underlying physics. For example, inversion-layer mobility modeling, that is, the modeling of mobility and its dependence on physical parameters and on ambient and operating conditions, is an important topic both for TCAD (technology computer-aided design) physical models and for circuit-level compact models. However, mobility is not accurately modeled from first principles, so both kinds of model instead resort to fitting experimental data. For mobility modeling at the physical level the relevant variables are the various scattering mechanisms, carrier densities, and local potentials and fields, including their technology and ambient dependencies.
By contrast, at the circuit-level, models parameterize the effects in terms of terminal voltages and empirical scattering parameters. The two representations can be compared, but it is unclear in many cases how the experimental data is to be interpreted in terms of more microscopic behavior.
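As an illustration of what a compact model looks like, the sketch below implements the textbook long-channel square-law equations for an n-channel MOSFET, with threshold voltage, transconductance factor and channel-length modulation treated as fitted parameters. Production SPICE models such as BSIM are far more elaborate; this is only meant to show how terminal voltages and a handful of empirical parameters take the place of the underlying physics.

# Minimal sketch of a square-law compact model for an n-channel MOSFET.
# Only illustrative: real compact models add many more fitted parameters and effects.

def nmos_id(vgs, vds, vth=0.5, k=2e-4, lam=0.02):
    """Drain current [A] from the long-channel square-law equations.

    vth  threshold voltage [V]                        (fitted parameter)
    k    transconductance factor mu*Cox*W/L [A/V^2]   (fitted parameter)
    lam  channel-length modulation [1/V]              (fitted parameter)
    """
    vov = vgs - vth                       # overdrive voltage
    if vov <= 0.0:
        return 0.0                        # cut-off (subthreshold conduction ignored)
    if vds < vov:                         # triode / linear region
        return k * (vov * vds - 0.5 * vds ** 2)
    return 0.5 * k * vov ** 2 * (1.0 + lam * vds)   # saturation

# Sweep the gate voltage at a fixed drain bias, as a circuit simulator would.
for vgs in (0.4, 0.6, 0.8, 1.0, 1.2):
    print(f"Vgs = {vgs:.1f} V  ->  Id = {nmos_id(vgs, vds=1.2) * 1e6:8.2f} uA")

In a real flow, the fitted parameters would be extracted from measured I-V data rather than chosen by hand.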
The evolution of technology computer-aided design (TCAD)—the synergistic combination of process, device and circuit simulation and modeling tools—finds its roots inbipolartechnology, starting in the late 1960s, and the challenges of junction isolated, double-and triple-diffused transistors. These devices and technology were the basis of the first integrated circuits; nonetheless, many of the scaling issues and underlying physical effects are integral toIC design, even after four decades of IC development. With these early generations of IC, process variability and parametric yield were an issue—a theme that will reemerge as a controlling factor in future IC technology as well.
Process control issues—both for the intrinsic devices and all the associated parasitics—presented formidable challenges and mandated the development of a range of advanced physical models for process and device simulation. Starting in the late 1960s and into the 1970s, the modeling approaches exploited were dominantly one- and two-dimensional simulators. While TCAD in these early generations showed exciting promise in addressing the physics-oriented challenges of bipolar technology, the superior scalability and power consumption of MOS technology revolutionized the IC industry. By the mid-1980s, CMOS became the dominant driver for integrated electronics. Nonetheless, these early TCAD developments[4][5]set the stage for their growth and broad deployment as an essential toolset that has leveraged technology development through the VLSI and ULSI eras which are now the mainstream.
IC development for more than a quarter-century has been dominated by MOS technology. In the 1970s and 1980s NMOS was favored owing to speed and area advantages, coupled with technology limitations and concerns related to isolation, parasitic effects and process complexity. During that era of NMOS-dominated LSI and the emergence of VLSI, the fundamental scaling laws of MOS technology were codified and broadly applied.[6] It was also during this period that TCAD reached maturity in terms of realizing robust process modeling (primarily one-dimensional), which then became an integral technology design tool, used universally across the industry.[7] At the same time device simulation, dominantly two-dimensional owing to the nature of MOS devices, became the workhorse of technologists in the design and scaling of devices.[8][9] The transition from NMOS to CMOS technology resulted in the necessity of tightly coupled and fully 2D simulators for process and device simulation. This third generation of TCAD tools became critical to address the full complexity of twin-well CMOS technology (see Figure 3a), including issues of design rules and parasitic effects such as latchup.[10][11] An abbreviated perspective of this period, through the mid-1980s, is given in [12]; for the point of view of how TCAD tools were used in the design process, see [13].
|
https://en.wikipedia.org/wiki/Semiconductor_device_modeling
|
In cognitive psychology, spatial cognition is the acquisition, organization, utilization, and revision of knowledge about spatial environments. It is mostly about how animals, including humans, behave within space and the knowledge they build around it, rather than about space itself. These capabilities enable individuals to manage basic and high-level cognitive tasks in everyday life. Numerous disciplines (such as cognitive psychology, neuroscience, artificial intelligence, geographic information science, cartography, etc.) work together to understand spatial cognition in different species, especially in humans. Spatial cognition studies have thereby also helped to link cognitive psychology and neuroscience: scientists in both fields work together to figure out what role spatial cognition plays in the brain as well as to determine the surrounding neurobiological infrastructure.
In humans, spatial cognition is closely related to how people talk about their environment, find their way in new surroundings, and plan routes. A wide range of studies is therefore based on participants' reports, performance measures and the like, for example in order to determine the cognitive reference frames that allow subjects to perform. In this context the use of virtual reality is becoming increasingly widespread among researchers, since it offers the opportunity to confront participants with unknown environments in a highly controlled manner.[1]
Spatial cognition can be seen from a psychological point of view, meaning that people's behaviour within space is key. When people behave in space, they use cognitive maps, the most evolved form of spatial cognition. When using cognitive maps, information about landmarks and the routes between landmarks is stored and used.[2] This knowledge can be built from various sources: from tightly coordinated vision and locomotion (movement), but also from map symbols, verbal descriptions, and computer-based pointing systems. According to Montello, space implicitly refers to a person's body and their associated actions. He distinguishes different kinds of space: figural space, which is smaller than the body; vista space, which extends beyond the body but can still be seen from a single place; environmental space, which is learned by locomotion; and geographical space, which is the largest space and can only be learned through cartographic representation.
Space is represented in the human brain, and this representation can be distorted. When perceiving space and distance, distortions can occur. Distances are perceived differently depending on whether they are considered between a given location and a location that has high cognitive saliency, meaning that it stands out. Perceived locations and distances can be anchored to "reference points", locations which are better known than others, more frequently visited and more visible.[3] There are other kinds of distortions as well, notably distortions in distance estimation and distortions in angle alignment. Distortion in angle alignment means that one's personal north is taken as "the north": the map is mentally represented according to the orientation of the personal point of view at the time of learning. Since perceived distance is subjective and not necessarily correlated with objective distance, distortions can happen here too; distances tend to be overestimated for downtown routes, routes with turns, curved routes, and routes that cross borders or obstacles.
A classical approach to the acquisition of spatial knowledge, proposed by Siegel & White in 1975, defines three types of spatial knowledge – landmarks, route knowledge and survey knowledge – and pictures these three as stepping stones in a successive development of spatial knowledge.[4]
Within this framework, landmarks can be understood as salient objects in the environment of an actor, which are memorized without information about any metric relations at first. By traveling between landmarks, route knowledge evolves, which can be seen as sequential information about the space which connects landmarks. Finally, increased familiarity with an environment allows the development of so-called survey knowledge, which integrates both landmarks and routes and relates it to a fixed coordinate system, i.e. in terms of metric relations and alignment to absolute categories like compass bearings etc. This results in abilities like taking shortcuts never taken before, for example.
More recently, newer findings challenged this stairway-like model of acquisition of spatial knowledge. Whereas familiarity with an environment seems to be a crucial predictor of navigational performance indeed,[5][6]in many cases even survey knowledge can be established after minimal exploration of a new environment.[7][8][9]
In this context, Daniel R. Montello proposed a new framework, indicating that the changes in spatial knowledge that come with growing experience are quantitative rather than qualitative, i.e. the different types of spatial knowledge simply become more precise and confident.[10] Furthermore, the use of these different types appears to be predominantly task-dependent,[5][6] which leads to the conclusion that spatial navigation in everyday life requires multiple strategies with different emphases on landmarks, routes and overall survey knowledge.
Space can be classified according to its extension, as proposed by Montello, distinguishing between figural space, vista space, environmental space and geographical space. Figural space is the first and most restricted space and refers to the area that a person's body covers without any movement, including objects that can be easily reached. Vista space is the second subspace and refers to the space beyond the body that is still close enough to be completely visualized without moving, for example a room. Environmental space is the third subspace, which is said to "contain" the body because of its large size and can only be fully explored through movement, since not all of its objects and areas are directly visible, as in a city.[11] Environmental space is the subspace most relevant to human navigation, because it must be traversed by movement in order to be understood.[12] Geographical space is the last level; it is so large that it cannot be explored through movement alone and can only be fully understood through cartographic representations, which can illustrate an entire continent on a map.[11]
In order to build spatial knowledge, people construct a cognitive reality in which they compute their environment based on a reference point. This framing of the environment is a reference frame.[13]
Usually a distinction is made between egocentric (Latin ego: "I") and allocentric (ancient Greek allos: "another, external") reference frames. An egocentric frame of reference refers to placing yourself in the environment and viewing it in the first person, which means that objects' locations are understood relative to yourself.[13] The egocentric frame of reference is thus centered on the body. An allocentric frame of reference, on the other hand, encodes objects' locations relative to other objects or landmarks around them; it is centered on the surrounding world rather than on the observer. A third distinction can also be made, namely the geocentric reference frame.[14][15] It is similar to the allocentric reference frame in that it can encode a location independently of the position of the observer, but it achieves this by encoding space relative to axes that are distributed over an extended space rather than by referring to salient landmarks. Geocentric space is most commonly given coordinates in terms of longitude and latitude. The difference between an allocentric and a geocentric reference frame is that an allocentric reference frame is used for smaller-scale environments, whereas a geocentric reference frame is used for large-scale environments, like the earth.
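The difference between egocentric and allocentric encoding can be made concrete with a small coordinate-transformation sketch: given the observer's position and heading, an object's allocentric (map) coordinates can be converted into body-centred coordinates and back. The frame conventions and numbers below are illustrative assumptions, not a model taken from the cited literature.

# Minimal sketch: converting between allocentric (world/map) and egocentric
# (body-centred) coordinates.  Heading is measured counter-clockwise from the
# map's +x axis; the egocentric frame puts +x ahead of the observer and +y to
# the observer's left.  Purely illustrative.
import math

def to_egocentric(obj_xy, observer_xy, heading_rad):
    dx = obj_xy[0] - observer_xy[0]
    dy = obj_xy[1] - observer_xy[1]
    c, s = math.cos(-heading_rad), math.sin(-heading_rad)   # rotate world -> body
    return (c * dx - s * dy, s * dx + c * dy)

def to_allocentric(ego_xy, observer_xy, heading_rad):
    c, s = math.cos(heading_rad), math.sin(heading_rad)     # rotate body -> world
    return (observer_xy[0] + c * ego_xy[0] - s * ego_xy[1],
            observer_xy[1] + s * ego_xy[0] + c * ego_xy[1])

landmark = (4.0, 7.0)                       # allocentric position on the map
me, heading = (1.0, 2.0), math.radians(90)  # observer at (1, 2), facing map "north"
ego = to_egocentric(landmark, me, heading)
print("egocentric:", ego)                   # ~(5.0, -3.0): 5 m ahead, 3 m to the right
print("round trip:", to_allocentric(ego, me, heading))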
Whilst spatial information can be stored into these different frames, they already seem to develop together in early stages of childhood[16]and appear to be necessarily used in combination in order to solve everyday life tasks.[17][18][19]
A reference frame can also be used while navigating in space. Here, information is encoded in a way that affects how we memorize it. This reference frame is used when the observer has to communicate with another person about the objects contained in that space.
When navigating a space, an observer can take on either a route perspective or a survey perspective. A route perspective is when the observer navigates in relation to their own body and location, whereas a survey perspective is a bird's-eye view of the environment used to navigate a space. In terms of brain activation, the use of a route perspective does not influence the survey perspective, and vice versa. A perspective can be purely route or survey, but often a mix of the two is used in navigation. People can switch between the two seamlessly, and often without noticing.[20]
Active navigation appears to have a bigger impact on the establishment of route knowledge,[19][21][22]whereas the use of amapseemingly better supports survey knowledge about more large-scaled complex environments.[19][22][23]
There are also individual differences when it comes to experiencing space and the spatial cognition that people have. When looking at individual differences, it appears that most people have a preference for one reference frame with a different use of strategies to represent space. Some people have an inclination towards a route view (also called route strategy), while others have a preference towards a survey view (also called survey or orientation strategy).[24]The people that prefer a route perspective also tend to describe a space more in an egocentric frame of reference. People who have an inclination towards a survey perspective also tend to use an allocentric frame of reference more often. It has been observed that the latter perform better in navigational tasks when they have to learn a route from a map. These individual differences are self-reported with questionnaires.[25]
However, the perspective choice is also influenced by characteristics of the environment.[26]When there is a single path in the environment, people usually choose to employ a route perspective. When the environment is open and filled with landmarks, however, people tend to choose a survey perspective.
In this context, a discussion came up about different reference frames, which are the frameworks wherein spatial information is encoded. In general, two of them can be distinguished as the egocentric (Latinego:"I") and the allocentric (ancient Greekallos:"another, external") reference frame.
Within an egocentric reference frame, spatial information is encoded in terms of relations to the physical body of a navigator, whereas the allocentric reference frame defines relations of objects among each other, independent of the physical body of an "observer" and thus in a more absolute way, taking metrical conditions and general alignments such as cardinal directions into account.[27] This suggests that route knowledge, which is supported by direct navigation, is more likely to be encoded within an egocentric reference frame,[4][28] and survey knowledge, which is supported by map learning, is more likely to be encoded within an allocentric reference frame.[4][23] Furthermore, an interaction between the egocentric and allocentric views is possible. This combination is mostly used when imagining a spatial environment, and it creates a richer representation of the environment. However, when a perspective has not yet been experienced, using this combination is more demanding.[29]
As there are biases in other topics of psychology, there are also biases within the concept of spatial cognition. People make systematic errors when they utilize or try to retain information from spatial representations of the environment, such as geographic maps.[30]This shows that their mental representation of the maps and the knowledge they reflect are systematically distorted. Distortions are repetitive errors (bias) that people show in their cognitive maps when they are asked to estimate distances or angles. When an organism’s natural spatial perception is harmed, spatial distortion arises. This can be created experimentally in a variety of sensory modalities. Different types of distortions exist.
First of all, people tend to make errors when estimating distances. Compared to true measurements on the curved surface of the globe, there are misconceptions of shape, size, distance, and direction between geographical landmarks. This appears to happen because a three-dimensional surface cannot be displayed perfectly in two dimensions. People tend to regularize their cognitive maps by distorting the position of relatively small features (e.g., cities) to make them conform with the position of larger features (e.g., state boundaries).[31] Route lengths tend to be overestimated, and routes with major bends and curves are estimated to be longer than linear routes.[32] When interpreting the geographical relationships between two locations that are in separate geographical or political entities, people make large systematic errors.[33] The presence of a border, physical as well as emotional, contributes to biases in estimating distances between elements: people tend to overestimate the distance between two cities that belong to two different regions or countries. The distortion of distance can also be caused by the presence of salient landmarks. Environmental features are not cognitively equal; some may be larger, older, better known or more central in our daily activities. These landmarks are frequently used as reference elements for less salient elements, and when one element in a location is more salient, the distance between the reference point and the other point is estimated as shorter.[34]
Second, there is distortion when it comes to alignment, meaning arrangement in a straight line.[35] When objects are aligned with each other it is much easier to estimate the distance between them and to switch between different egocentric viewpoints of the objects. When a mental representation of a spatial environment needs to be created, people make far more errors when the objects in the environment are not aligned with one another, especially when the objects are memorized separately. When a person sees an object, there are fewer errors in spatial cognition when the object is placed facing the person's egocentric north. Performance in spatial cognition is best when the orientation is north-facing and decreases linearly with the angle of misalignment.[36]
Finally, the angle at which an object is placed in relation to another object plays a major role in distortions of spatial cognition. Angular errors increase significantly when the angle between two objects exceeds 90 degrees. This occurs in all age groups: younger, middle-aged and older adults. When an angle is unknown and has to be estimated, people tend to guess close to 90 degrees. In addition, angular error increases when the object or place toward which we are pointing (outside our visual field) is further away from our egocentric space. Familiarity plays an important role: pointing errors are smaller toward familiar places than toward unfamiliar ones. When people have to use their spatial memory to estimate an angle, forward errors are significantly smaller than backward errors, implying that memorizing the opposite direction is more difficult than memorizing the forward direction of travel.[37]
There are many strategies used to spatially encode the environment, and they are often used together within the same task. In a recent study, König et aliae[38]provided further evidence by letting participants learn the positions of streets and houses from an interactive map. Participants reproduced their knowledge in both relative and absolute terms by indicating the positions of houses and streets in relation to one another and their absolute locations using cardinal directions. Some participants were allowed three seconds to form their description, while others were not given a time limit. Their conclusions show that positions of houses were best remembered in relative tasks, while streets were best remembered in absolute tasks, and that increasing allotted time for cognitive reasoning improved performance for both.
These findings suggest that circumscribed objects like houses, which are sensorily available at one moment during active exploration, are more likely to be encoded in a relative/binary way, and that time for cognitive reasoning allows conversion into an absolute/unitary format, i.e. the deduction of their absolute position in terms of cardinal directions, compass bearings, etc. In contrast, bigger and more abstract objects like streets are more likely to be encoded in an absolute manner from the beginning.
That confirms the view of mixed strategies, in this case that spatial information of different objects is coded in distinct ways within the same task. Moreover, the orientation and location of objects like houses seems to be primarily learned in an action-oriented way, which is also in line with anenactiveframework for human cognition.
In a study of two congeneric rodent species, sex differences in hippocampal size were predicted by sex-specific patterns of spatial cognition. Hippocampal size is known to correlate positively with maze performance in laboratory mouse strains and with selective pressure for spatial memory among passerine bird species. In polygamous vole species (Rodentia:Microtus), males range more widely than females in the field and perform better on laboratory measures of spatial ability; both of these differences are absent in monogamous vole species. Ten females and males were taken from natural populations of two vole species, the polygamous meadow vole,M. pennsylvanicus, and the monogamous pine vole,M. pinetorum. Only in the polygamous species do males have larger hippocampi relative to the entire brain than do females.[39]This study shows that spatial cognition can vary depending on gender.
One study aimed to determine whether male cuttlefish (Sepia officinalis; a cephalopod mollusc) range over a larger area than females and whether this difference is associated with a cognitive dimorphism in orientation abilities. First, the researchers assessed the distance travelled by sexually immature and mature cuttlefish of both sexes when placed in an open field (test 1). Second, cuttlefish were trained to solve a spatial task in a T-maze, and the spatial strategy preferentially used (right/left turn or visual cues) was determined (test 2). The results showed that sexually mature males travelled a longer distance in test 1 and were more likely to use visual cues to orient in test 2, compared with the other three groups.[40]
Navigationis the ability of animals including humans to locate, track, and follow paths to arrive at a desired destination.[41][42]
Navigation requires information about the environment to be acquired from the body andlandmarksof the environment asframes of referenceto create amental representationof the environment, forming acognitive map. Humans navigate by transitioning between different spaces and coordinating both egocentric and allocentric frames of reference.[citation needed]
Navigation has two major components: locomotion and wayfinding.[43] Locomotion is the process of movement from one place to another in animals, including humans; it helps one understand an environment by moving through a space to create a mental representation of it.[44] Wayfinding is defined as an active process of following or deciding upon a path from one place to another through mental representations.[45] It involves processes such as representation, planning and decision-making, which help to avoid obstacles, stay on course and regulate pace when approaching particular objects.[43][46]
Navigation and wayfinding take place in environmental space. According to Dan Montello's space classification, there are four levels of space, the third being environmental. Environmental space represents a very large space, like a city, and can only be fully explored through movement, since not all objects and parts of the space are directly visible.[13] Barbara Tversky also systematized space, this time taking into consideration the three dimensions that correspond to the axes of the human body and its extensions: above/below, front/back and left/right. Tversky ultimately proposed a fourfold classification of navigable space: space of the body, space around the body, space of navigation and space of graphics.[47]
In human navigation people visualize different routes in their minds to plan how to get from one place to another. The things which they rely on to plan these routes vary from person to person and are the basis of differing navigational strategies.
Some people use measures of distance and absolute directional terms (north, south, east, and west) in order to visualize the best pathway from point to point. The use of these more general, external cues as directions is considered part of an allocentric navigation strategy.Allocentric navigationis typically seen in males and is beneficial primarily in large and/or unfamiliar environments.[48]This likely has some basis in evolution when males would have to navigate through large and unfamiliar environments while hunting.[49]The use of allocentric strategies when navigating primarily activates the hippocampus and parahippocampus in the brain. This navigation strategy relies more on a mental, spatial map than visible cues, giving it an advantage in unknown areas but a flexibility to be used in smaller environments as well. The fact that it is mainly males that favor this strategy is likely related to the generalization that males are better navigators than females as it is better able to be applied in a greater variety of settings.[48]
Egocentric navigationrelies on more local landmarks and personal directions (left/right) to navigate and visualize a pathway. This reliance on more local and well-known stimuli for finding their way makes it difficult to apply in new locations, but is instead most effective in smaller, familiar environments.[48]Evolutionarily, egocentric navigation likely comes from our ancestors who would forage for their food and need to be able to return to the same places daily to find edible plants. This foraging usually occurred in relatively nearby areas and was most commonly done by the females in hunter-gatherer societies.[49]Females, today, are typically better at knowing where various landmarks are and often rely on them when giving directions. Egocentric navigation causes high levels of activation in the right parietal lobe and prefrontal regions of the brain that are involved in visuospatial processing.[48]
Franz and Mallot proposed a navigation hierarchy inRobotics and Autonomous Systems 30(2006):[50]
There are two types of human wayfinding: aided and unaided.[13]Aided wayfinding requires a person to use various types ofmedia, such asmaps,GPS,directional signage, etc., in their navigation process which generally involves low spatial reasoning and is less cognitively demanding.
Unaided wayfinding involves no such devices for the person who is navigating.[13]Unaided wayfinding can be subdivided into ataxonomyof tasks depending on whether it is undirected or directed, which basically makes the distinction of whether there is a precise destination or not: undirected wayfinding means that a person is simplyexploringan environment for pleasure without any set destination.[51]
Directed wayfinding, instead, can be further subdivided into search vs. target approximation.[51]Search means that a person does not know where the destination is located and must find it either in an unfamiliar environment, which is labeled as an uninformed search, or in a familiar environment, labeled as an informed search.[citation needed]
In target approximation, on the other hand, the location of the destination is known to the navigator but a further distinction is made based on whether the navigator knows how to arrive or not to the destination. Path following means that the environment, the path, and the destination are all known which means that the navigator simply follows the path they already know and arrive at the destination without much thought. For example, when you are in your city and walking on the same path as you normally take from your house to your job or university.[51]
However, path finding means that the navigator knows where the destination is but does not know the route they have to take to arrive at the destination: you know where a specific store is but you do not know how to arrive there or what path to take. If the navigator does not know the environment, it is called path search which means that only the destination is known while neither the path nor the environment is: you are in a new city and need to arrive at the train station but do not know how to get there.[51]
Path planning, on the other hand, means that the navigator knows both where the destination is and is familiar with the environment so they only need to plan the route or path that they should take to arrive at their target. For example, if you are in your city and need to get to a specific store that you know the destination of but do not know the specific path you need to take to get there.[51]
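The taxonomy described in the preceding paragraphs can be summarized as a small decision tree over three questions: is there a destination, is its location known, and are the route and the environment known. The sketch below encodes that tree; the function and its labels simply mirror the terms used above and are only illustrative.

# Sketch of the unaided-wayfinding taxonomy described above as a decision tree.

def classify_wayfinding(has_destination, knows_destination_location=False,
                        knows_route=False, knows_environment=False):
    if not has_destination:
        return "undirected wayfinding (exploring for pleasure)"
    # directed wayfinding
    if not knows_destination_location:
        return "informed search" if knows_environment else "uninformed search"
    # target approximation: the destination's location is known
    if knows_route:
        return "path following"
    # path finding: destination known, route unknown
    return "path planning" if knows_environment else "path search"

# Examples matching the ones in the text:
print(classify_wayfinding(True, True, True, True))     # daily commute -> path following
print(classify_wayfinding(True, True, False, True))    # known store, unknown route -> path planning
print(classify_wayfinding(True, True, False, False))   # new city, find the station -> path search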
Navigation and wayfinding may differ between people by gender, age, and other attributes. In the spatial cognition domain, such factors include individual visuospatial abilities and inclinations, gender, and age, each discussed below.
Experimental, correlational and case-study approaches are used to find patterns in individual differences. The correlational approach is a way to understand individual differences in navigation and wayfinding abilities by comparing groups or by examining the relation between variables at the continuous level. The experimental approach examines the causality of the relationship between variables: it manipulates one variable (the independent variable) and investigates the impact on environment recall (the dependent variable). The case-study approach is used to understand to what extent a particular profile is related to spatial representation and associated features, such as cases of brain lesions or degenerative diseases (involving the brain structures and networks underlying the cognitive map) or cases of cognitive and behavioural difficulties in acquiring environment information in the absence of brain deficits (as in developmental topographical disorientation).[58]
Evidence shows there is a link between small-scale spatial abilities and large-scale spatial abilities. More specifically, visuospatial abilities (small-scale abilities) and wayfinding attitudes (large-scale spatial self-evaluations) are both related to the ability to create a mental representation of the environment, i.e. environment representation (a large-scale ability).[59]
Evidence presented in this section will focus on the research findings of correlational studies. Correlational studies between variables at a continuous level aim to test the degree to which small-scale visuospatial cognitive abilities and large-scale abilities are related.[60][61]
Moreover, correlational studies are also based on comparing groups on individual differences related to navigation and wayfinding. This may involve comparing participants with extreme scores on individual-difference measures (high vs. low self-reported wayfinding attitudes, high vs. low small-scale abilities) and examining differences in spatial and environment learning,[62][63] or comparing extreme high and low performance (after an environment-learning task) and examining differences in small-scale spatial abilities and wayfinding attitudes.[59]
Concerning correlational studies at the continuous level, a pioneering study was conducted by Allen et al. (1996). They asked participants to take a stroll in a small city. The authors measured recall performance and assessed visuospatial (small-scale) abilities, using spatial visualization, mental rotation and spatial memory tasks. A structural equation model showed that spatial sequential memory serves as a mediator in the relationship between the visuospatial ability factor and environmental knowledge.[60]
Further, Hegarty et al. (2006) asked participants to learn a path in a real, a virtual, and a videotaped environment. After the learning phase, participants were asked to estimate the distance and direction of certain landmarks in the environment. Participants also performed a battery of verbal and spatial tasks.[61]
Using a structural equation model, the results indicate that sense of direction and spatial ability factors are related, and that both factors are linked to verbal ability. However, verbal ability does not predict environment (navigation) learning. Instead, both spatial ability and sense of direction predict environmental learning: sense of direction predicts learning from direct experience, and visuospatial ability is strongly linked to visual learning (both virtual and videotaped). Both correlational studies showed a relation between small-scale spatial abilities and large-scale spatial abilities (examined through navigation learning).[60][61] Allen et al. (1996) suggest that the relation between these variables is mediated, and other evidence confirms that the relation between small-scale and large-scale spatial abilities can be mediated.[60] For instance, Meneghetti et al. (2016) showed that mental rotation ability (a small-scale ability) is related to environment learning (a path acquired virtually, a proxy for large-scale ability) through the mediation of visuospatial working memory (i.e. the ability to process and maintain temporary visuospatial information).[64]
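The mediation idea behind these structural-equation findings can be illustrated with a minimal regression-based check on synthetic data: a small-scale ability X influences a mediator M (e.g. visuospatial working memory), which in turn influences environment learning Y. This is only a sketch of the statistical idea, not the authors' actual analysis or data.

# Regression-based mediation check on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=n)                               # small-scale spatial ability
M = 0.6 * X + rng.normal(scale=0.8, size=n)          # mediator (e.g. visuospatial WM)
Y = 0.5 * M + rng.normal(scale=0.8, size=n)          # environment-learning score

def ols(y, *predictors):
    """Ordinary-least-squares coefficients: [intercept, coef_1, coef_2, ...]."""
    A = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(A, y, rcond=None)[0]

c_total = ols(Y, X)[1]                   # total effect of X on Y
a_path = ols(M, X)[1]                    # X -> M
_, b_path, c_direct = ols(Y, M, X)       # [intercept, M -> Y, direct X -> Y]

print(f"total effect   c  = {c_total:.3f}")
print(f"indirect path a*b = {a_path * b_path:.3f}")
print(f"direct effect  c' = {c_direct:.3f}  (close to zero suggests mediation)")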
An example of group comparison based on individual preferences is offered by Pazzaglia & Taylor (2007). They selected individuals with high and low survey preference (i.e. a preference to form a mental map) and examined differences in environment-learning performance across several tasks. The results showed that the high-survey group performed better, in particular making fewer navigation errors, than the low-survey group.[62]
An example of group comparison based on spatial environment performance is offered by Weisberg et al. (2014). They asked participants to learn paths in a virtual environment and tested their visuospatial abilities (small scale) and wayfinding preferences. Participants then performed pointing tasks (within and between routes) and model building. The results showed that participants with good pointing performance (between and within paths) had high visuospatial abilities (mental rotation) and favourable wayfinding inclinations (sense of direction).[65]
Gender is a source of individual differences in navigation and wayfinding. Men report more confidence during navigation than women, and gender differences also appear in the accuracy of the final environment representation, although these differences can be attenuated by several factors (such as the outcome variables used, feedback, and familiarity).[66][67]
Females experience higher levels of spatial anxiety than males.[54] Furthermore, two different wayfinding strategies are used by men and women: women tend to prefer a route strategy, whilst men more often use a survey (orientation) strategy.[54] The route strategy consists of following directional instructions, whilst the survey (orientation) strategy uses references in the environment in relation to one's own position.
Examining relations at the continuous level, gender is a predictor that can influence navigation success, although both males and females can perform successfully. However, the ability to form mental representations of new environments after navigation is affected by different patterns of relations involving strategy, beliefs/self-efficacy and visuospatial cognitive abilities. Both males and females therefore draw on individual visuospatial factors, abilities and inclinations that, through different patterns of relations, influence navigation and wayfinding performance.[57]
In the case of older adults, abilities in the spatial domain decrease. However, this generalization can be error-prone: it is necessary to consider what kind of spatial ability is meant, whether small-scale ability, large-scale ability, or spatial self-evaluations (such as wayfinding attitudes), and how these variables are related to each other. Moreover, other factors that decline with aging can also affect spatial abilities, such as memory functions, executive control, and other cognitive factors.[68]
Small-scale abilities, such as mental rotation, spatial visualization, spatial perception,[69] and perspective taking, decline.[70][71] The course of this decline depends on the type of ability, task features, and other individual differences (such as gender and expertise in these abilities). In general, these abilities decline around age 60, and the decline can start as early as age 50 for perspective taking.
Concerning wayfinding attitudes, which are generally self-reported, evidence suggests that they tend to be quite stable across the lifespan, for instance sense of direction,[72] with some changes such as a slight increase in spatial anxiety.[71]
Spatial learning and representation abilities also tend to decrease with age. Differences between young and older adults are related to several factors, both at the individual and at the environmental level. In fact, older adults are more likely to decline in spatial tasks based on allocentric knowledge (object-to-object relations) than in those based on egocentric knowledge (self-to-object relations).[73] When the task requires recognizing information, age differences are smaller than when active recall is required. When the environment is familiar, differences with respect to young adults are also smaller. In studies involving healthy adults aged 18-78, it was found that difficulty increased with age, particularly from age 70.[68] Biological factors involved in the decline include decreased activity of the hippocampus, the parahippocampal gyrus, and the retrosplenial cortex, resulting in difficulties in acquiring new spatial knowledge and applying it.[74]
Despite the decline of spatial abilities (such as visuospatial working memory and rotation), both spatial abilities and wayfinding attitudes contribute, to different extents, to maintaining spatial learning and navigation accuracy in the elderly.[75] Indeed, studies with samples of older adults showed that despite the decline of (small-scale) spatial abilities, the latter still play a functional role in environment learning.[76][77] Other studies showed the positive role of wayfinding attitudes, such as pleasure in exploring places, in maintaining spatial-learning accuracy. This is important because spatial learning is crucial for elders' security and, subsequently, their autonomy, an indicator of quality of life.[75]
|
https://en.wikipedia.org/wiki/Spatial_cognition
|
Incognitive psychologyandneuroscience,spatial memoryis a form of memory responsible for the recording and recovery of information needed to plan a course to a location and to recall the location of an object or the occurrence of an event.[1]Spatial memory is necessary for orientation in space.[2][3]Spatial memory can also be divided into egocentric and allocentric spatial memory.[4]A person's spatial memory is required to navigate in a familiar city. A rat's spatial memory is needed to learn the location of food at the end of amaze. In both humans and animals, spatial memories are summarized as acognitive map.[5]
Spatial memory has representations within working,short-term memoryandlong-term memory. Research indicates that there are specific areas of the brain associated with spatial memory.[6]Many methods are used for measuring spatial memory in children, adults, and animals.[5]
Short-term memory(STM) can be described as a system allowing one to temporarily store and manage information that is necessary to complete complex cognitive tasks.[7]Tasks which employ short-term memory includelearning,reasoning, and comprehension.[7]Spatial memory is a cognitive process that enables a person to remember different locations as well as spatial relations between objects.[7]This allows one to remember where an object is in relation to another object;[7]for instance, allowing someone tonavigatein a familiar city. Spatial memories are said to form after a person has already gathered and processedsensoryinformation about her or his environment.[7]
Working memory(WM) can be described as a limited capacity system that allows one to temporarily store and process information.[8]This temporary store enables one to complete or work on complex tasks while being able to keep information in mind.[8]For instance, the ability to work on a complicated mathematical problem utilizes one's working memory.[citation needed]
One influential theory of WM is theBaddeleyand Hitchmulti-component model of working memory.[8][9]The most recent version of this model suggests that there are four subcomponents to WM:phonological loop, thevisuo-spatial sketchpad, thecentral executive, and theepisodic buffer.[8]One component of this model, the visuo-spatial sketchpad, is likely responsible for the temporary storage, maintenance, and manipulation of both visual and spatial information.[8][9]
In contrast to the multi-component model, some researchers believe that STM should be viewed as a unitary construct.[9]In this respect, visual, spatial, and verbal information are thought to be organized by levels of representation rather than the type of store to which they belong.[9]Within the literature, it is suggested that further research into the fractionation of STM and WM be explored.[9][10]However, much of the research into the visuo-spatial memory construct have been conducted in accordance to the paradigm advanced by Baddeley and Hitch.[8][9][10][11][12]
Research into the exact function of the visuo-spatial sketchpad has indicated that both spatialshort-term memoryand working memory are dependent on executive resources and are not entirely distinct.[8]For instance, performance on a working memory but not on a short-term memory task was affected byarticulatory suppressionsuggesting that impairment on the spatial task was caused by the concurrent performance on a task that had extensive use of executive resources.[8]Results have also found that performances were impaired on STM and WM tasks with executive suppression.[8]This illustrates how, within the visuo-spatial domain, both STM and WM require similar utility of the central executive.[8]
Additionally, during a spatial visualisation task (which is related to executive functioning and not STM or WM) concurrent executive suppression impaired performance indicating that the effects were due to common demands on the central executive and not short-term storage.[8]The researchers concluded with the explanation that the central executive employscognitive strategiesenabling participants to both encode and maintain mental representations during short-term memory tasks.[8]
Although studies suggest that the central executive is intimately involved in a number of spatial tasks, the exact way in which they are connected remains to be seen.[13]
Spatial memory recall is built upon ahierarchical structure. People remember the general layout of a particular space and then "cue target locations" within that spatial set.[14]This paradigm includes an ordinal scale of features that an individual must attend to in order to inform his or her cognitive map.[15]Recollection of spatial details is a top-down procedure that requires an individual to recall the superordinate features of a cognitive map, followed by the ordinate and subordinate features. Two spatial features are prominent in navigating a path: general layout and landmark orienting (Kahana et al., 2006). People are not only capable of learning about the spatial layout of their surroundings, but they can also piece together novel routes and new spatial relations through inference.[citation needed]
A cognitive map is "a mental model of objects' spatial configuration that permits navigation along optimal path between arbitrary pairs of points."[16]This mental map is built upon two fundamental bedrocks: layout, also known as route knowledge, and landmark orientation. Layout is potentially the first method of navigation that people learn to utilize; its workings reflect our most basic understandings of the world.[citation needed]
Hermer and Spelke (1994) determined that when toddlers begin to walk, around eighteen months, they navigate by their sense of the world's layout. McNamara, Hardy and Hirtle identified region membership as a major building block of anyone's cognitive map (1989). Specifically, region membership is defined by any kind of boundary, whether physical, perceptual or subjective (McNamara et al., 1989). Boundaries are among the most basic and endemic qualities in the world around us. These boundaries are nothing more than axial lines which are a feature that people are biased towards when relating to space; for example, one axial line determinant is gravity (McNamara & Shelton, 2001; Kim & Penn, 2004). Axial lines aid everyone in apportioning our perceptions into regions. This parceled world idea is further supported by the finding that items that get recalled together are more likely than not to also be clustered within the same region of one's larger cognitive map.[15]Clustering shows that people tend to chunk information together according to smaller layouts within a larger cognitive map.[citation needed]
Boundaries are not the only determinants of layout. Clustering also demonstrates another important property of spatial conceptions: spatial recall is a hierarchical process. When someone recalls an environment or navigates terrain, that person implicitly recalls the overall layout first. Then, due to the concept's "rich correlational structure", a series of associations becomes activated. Eventually, the resulting cascade of activations awakens the particular details that correspond to the region being recalled. This is how people encode many entities at varying ontological levels, such as the location of a stapler, in a desk, which is in the office.
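The top-down cascade described above can be pictured as a lookup in a nested structure: the superordinate region is recalled first, then the ordinate and subordinate features. The toy representation below mirrors the stapler-in-a-desk-in-the-office example; it is a didactic sketch, not a cognitive model from the cited work.

# Toy illustration of top-down, hierarchical recall over a nested "cognitive map".
# The structure and contents are hypothetical.

cognitive_map = {
    "office": {
        "desk": {"stapler": "top drawer", "keys": "left corner"},
        "shelf": {"atlas": "second row"},
    },
    "kitchen": {
        "counter": {"kettle": "next to the sink"},
    },
}

def recall(target, region=cognitive_map, path=()):
    """Walk the hierarchy top-down and return the chain of regions to the target."""
    for name, contents in region.items():
        if name == target:
            return path + (name,)
        if isinstance(contents, dict):                 # descend into the sub-region
            found = recall(target, contents, path + (name,))
            if found:
                return found
    return None

print(" -> ".join(recall("stapler")))    # office -> desk -> stapler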
One can recall from only one region at a time (a bottleneck). A bottleneck in a person's cognitive navigational system can be an issue, for instance when a sudden detour is needed on a long road trip. Lack of experience in a locale, or simply its sheer size, can disorient one's mental layout, especially in a large and unfamiliar place with many overwhelming stimuli. In these environments, people are still able to orient themselves and find their way around using landmarks. This ability to "prioritize objects and regions in complex scenes for selection (and) recognition" was labeled by Chun and Jiang in 1998. Landmarks give people guidance by activating "learned associations between the global context and target locations."[14] Mallot and Gillner (2000) showed that subjects learned an association between a specific landmark and the direction of a turn, thereby furthering the relationship between associations and landmarks.[17] Shelton and McNamara (2001) succinctly summed up why landmarks, as markers, are so helpful: "location...cannot be described without making reference to the orientation of the observer."
People use both the layout of a particular space and the presence of orienting landmarks in order to navigate. Psychologists have yet to explain whether layout affects landmarks or if landmarks determine the boundaries of a layout. Because of this, the concept suffers from achicken and the eggparadox. McNamara has found that subjects use "clusters of landmarks as intrinsic frames of reference," which only confuses the issue further.[16]
People perceive objects in their environment relative to other objects in that same environment. Landmarks and layout are complementary systems for spatial recall, but it is unknown how these two systems interact when both types of information are available. As a result, people have to make certain assumptions about the interaction between the two systems. For example, cognitive maps are not "absolute" but rather, as anyone can attest, are "used to provide a default...(which) modulated according to...task demands."[14]Psychologists also think that cognitive maps are instance based, which accounts for "discriminative matching to past experience."[14]
This field has traditionally been hampered by confounding variables, such as cost and the potential for previous exposure to an experimental environment. Technological advancements, including those in virtual reality technology, have made findings more accessible. Virtual reality affords experimenters the luxury of extreme control over their test environment. Any variable can be manipulated, including things that would not be possible in reality.
During a 2006 study, researchers designed three different virtual towns, each of which had its own "unique road layout and a unique set of five stores."[16]However, the overall footprint of the different maps was exactly the same size, 80 sq. units. In this experiment, participants had to partake in two different sets of trials.
A study conducted at the University of Maryland compared the effect of different levels of immersion on spatial memory recall.[18]In the study, 40 participants used both a traditional desktop and a head-mounted display to view two environments, a medieval town, and an ornate palace, where they memorized two sets of 21 faces presented as 3D portraits. After viewing these 21 faces for 5 minutes, followed by a brief rest period, the faces in the virtual environments were replaced with numbers, and participants recalled which face was at each location. The study found on average, those who used the head-mounted display recalled the faces 8.8% more accurately, and with a greater confidence. The participants state that leveraging their innate vestibular and proprioceptive senses with the head-mounted display and mapping aspects of the environment relative to their body, elements that are absent with the desktop, was key to their success.
Within the literature, there is evidence that experts in a particular field are able to perform memory tasks in accordance with their skills at an exceptional level.[12]The level of skill displayed by experts may exceed the limits of the normal capacity of both STM and WM.[12]Because experts have an enormous amount of prelearned and task-specific knowledge, they may be able toencodeinformation in a more efficient way.[12]
An interesting study investigatingtaxidrivers' memory for streets inHelsinki,Finland, examined the role of prelearned spatial knowledge.[12]This study compared experts to a control group to determine how this prelearned knowledge in their skill domain allows them to overcome the capacity limitations of STM and WM.[12]The study used four levels of spatial randomness:
The results of this study indicate that the taxi drivers' (experts') recall of streets was higher in both the route order condition and the map order condition than in the two random conditions.[12]This indicates that the experts were able to use their prelearned spatial knowledge to organize the information in such a way that they surpassed STM and WM capacity limitations.[12]The organization strategy that the drivers employed is known aschunking.[12]Additionally, the comments made by the experts during the procedure point towards their use of route knowledge in completing the task.[12]To ensure that it was in fact spatial information that they were encoding, the researchers also presented lists in alphabetical order andsemanticcategories.[12]However, the researchers found that it was in fact spatial information that the experts were chunking, allowing them to surpass the limitations of both visuo-spatial STM and WM.[12]
Certain species ofparidaeandcorvidae(such as theblack-capped chickadeeand thescrub jay) are able to use spatial memory to remember where, when and what type of food they have cached.[19]Studies on rats and squirrels have also suggested that they are able to use spatial memory to locate previously hidden food.[19]Experiments using the radial maze have allowed researchers to control for a number of variables, such as the type of food hidden, the locations where the food is hidden, the retention interval, as well as any odor cues that could skew results of memory research.[19]Studies have indicated that rats have memory for where they have hidden food and what type of food they have hidden.[19]This is shown in retrieval behavior, such that the rats are selective in going more often to the arms of the maze where they have previously hidden preferred food than to arms with less preferred food or where no food was hidden.[19]
The evidence for the spatial memory of some species of animals, such as rats, indicates that they do use spatial memory to locate and retrieve hidden food stores.[19]
A study usingGPS trackingto see wheredomestic catsgo when their owners let them outside reported that cats have substantial spatial memory. Some of the cats in the study demonstrated exceptional long term spatial memory. One of them, usually traveling no further than 200 m (660 ft) to 250 m (820 ft) from its home, unexpectedly traveled some 1,250 m (4,100 ft) from its home. Researchers initially thought this to be a GPS malfunction, but soon discovered that the cat's owners went out of town that weekend, and that the house the cat went to was the owner's old house. The owners and the cat had not lived in that house for well over a year.[20]
Logie (1995) proposed that thevisuo-spatial sketchpadis broken down into two subcomponents, one visual and one spatial.[11]These are the visual cache and the inner scribe, respectively.[11]The visual cache is a temporary visual store including such dimensions as color and shape.[11]Conversely, the inner scribe is a rehearsal mechanism for visual information and is responsible for information concerning movement sequences.[11]Although a general lack of consensus regarding this distinction has been noted in the literature,[10][21][22]there is a growing amount of evidence that the two components are separate and serve different functions.[citation needed]
Visual memoryis responsible for retaining visual shapes and colors (i.e., what), whereas spatial memory is responsible for information about locations and movement (i.e., where). This distinction is not always straightforward since part of visual memory involves spatial information and vice versa. For example, memory for object shapes usually involves maintaining information about the spatial arrangement of the features which define the object in question.[21]
In practice, the two systems work together in some capacity but different tasks have been developed to highlight the unique abilities involved in either visual or spatial memory. For example, the visual patterns test (VPT) measures visual span whereas the Corsi Blocks Task measures spatial span. Correlational studies of the two measures suggest a separation between visual and spatial abilities, due to a lack of correlation found between them in both healthy andbrain damagedpatients.[10]
Support for the division of visual and spatial memory components is found through experiments using thedual-task paradigm. A number of studies have shown that the retention of visual shapes or colors (i.e., visual information) is disrupted by the presentation of irrelevant pictures or dynamic visual noise. Conversely, the retention of location (i.e., spatial information) is disrupted only by spatial tracking tasks, spatial tapping tasks, and eye movements.[21][22]For example, participants completed both the VPT and the Corsi Blocks Task in a selective interference experiment. During the retention interval of the VPT, the subject viewed irrelevant pictures (e.g.,avant-gardepaintings). The spatial interference task required participants to follow, by touching the stimuli, an arrangement of small wooden pegs which were concealed behind a screen. Both the visual and spatial spans were shortened by their respective interference tasks, confirming that the Corsi Blocks Task relates primarily to spatial working memory.[10]
There are a variety of tasks psychologists use to measure spatial memory in adults, children, and animal models. These tasks allow professionals to identify cognitive irregularities in adults and children, and allow researchers to administer drugs or induce lesions in subjects and measure the consequent effects on spatial memory.
The Corsi block-tapping test, also known as the Corsi span test, is a psychological test commonly used to determine the visual-spatial memory span and the implicit visual-spatial learning abilities of an individual.[23][24] Participants sit with nine wooden 3 x 3 cm blocks fastened before them on a 25 x 30 cm baseboard in a standard random arrangement. The experimenter taps a sequence on the blocks, which participants must then replicate. The blocks are numbered on the experimenter's side to allow for efficient pattern demonstration. The sequence length increases on each trial until the participant is no longer able to replicate the pattern correctly. The test can be used to measure both short-term and long-term spatial memory, depending on the length of time between presentation and recall.
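To make the stopping rule concrete, the following is a minimal sketch (in Python, not drawn from the cited studies) of how a span score could be computed for an incremental sequence-reproduction test such as the Corsi task. The single-trial-per-length stopping rule, the `respond` callback, and the simulated participant are illustrative assumptions.

```python
import random

def corsi_span(respond, n_blocks=9, start_len=2, max_len=9):
    """Estimate a Corsi-style span.

    `respond` is a callback that receives the presented sequence of block
    indices and returns the participant's reproduction. The span is the
    longest sequence reproduced correctly before the first failure -- the
    stopping rule described in the text above (an assumed convention).
    """
    span = 0
    for length in range(start_len, max_len + 1):
        sequence = random.sample(range(n_blocks), length)  # tap a pattern
        if respond(sequence) == sequence:
            span = length          # reproduced correctly; try a longer one
        else:
            break                  # first failure ends the test
    return span

# Example: a simulated participant who can hold at most five items,
# matching the typical span reported for the Corsi task.
def simulated_participant(seq, capacity=5):
    return seq if len(seq) <= capacity else seq[:capacity]

print(corsi_span(simulated_participant))   # -> 5
```

The same incremental logic would apply to the visual pattern span described below, with matrix patterns in place of tapped sequences.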
The test was created byCanadianneuropsychologistPhillip Corsi, who modeled it afterHebb'sdigit spantask by replacing the numerical test items with spatial ones. On average, most participants achieve a span of five items on the Corsi span test and seven on the digit span task.[citation needed]
The visual pattern span is similar to the Corsi block tapping test but regarded as a more pure test of visual short-term recall.[25]Participants are presented with a series of matrix patterns that have half their cells colored and the other half blank. The matrix patterns are arranged in a way that is difficult to code verbally, forcing the participant to rely on visual spatial memory. Beginning with a small 2 x 2 matrix, participants copy the matrix pattern from memory into an empty matrix. The matrix patterns are increased in size and complexity at a rate of two cells until the participant's ability to replicate them breaks down. On average, participants' performance tends to break down at sixteen cells.[citation needed]
This task is designed to measure spatial memory abilities in children.[23]The experimenter asks the participant to visualize a blank matrix with a little man. Through a series of directional instructions such as forwards, backwards, left or right, the experimenter guides the participant's little man on a pathway throughout the matrix. At the end, the participant is asked to indicate on a real matrix where the little man that he or she visualized finished. The length of the pathway varies depending on the level of difficulty (1–10) and the matrices themselves may vary in length from 2 x 2 cells to 6 x 6.[citation needed]
Dynamic mazes are intended for measuring spatial ability in children. With this test, an experimenter presents the participant with a drawing of a maze with a picture of a man in the center.[23]While the participant watches, the experimenter uses his or her finger to trace a pathway from the opening of the maze to the drawing of the man. The participant is then expected to replicate the demonstrated pathway through the maze to the drawing of the man. Mazes vary in complexity as difficulty increases.[citation needed]
First pioneered by Olton and Samuelson in 1976,[26]the radial arm maze is designed to test the spatial memory capabilities of rats. Mazes are typically designed with a center platform and a varying number of arms[27]branching off with food placed at the ends. The arms are usually shielded from each other in some way but not to the extent that external cues cannot be used as reference points.[citation needed]
In most cases, the rat is placed in the center of the maze and needs to explore each arm individually to retrieve food while simultaneously remembering which arms it has already pursued. The maze is set up so the rat is forced to return to the center of the maze before pursuing another arm. Measures are usually taken to prevent the rat from using itsolfactorysenses tonavigatesuch as placing extra food throughout the bottom of the maze.[citation needed]
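As a rough illustration of how such a session might be scored, the sketch below (Python; the scoring conventions are assumptions rather than the procedure of any cited study) counts re-entries into already-visited arms as working-memory errors and records how many visits were needed before every arm had been entered once.

```python
def score_radial_arm_visits(visits, n_arms=8):
    """Score a radial-arm-maze session from the ordered list of arm visits.

    A working-memory error is counted each time the animal re-enters an
    arm it has already visited; visits_to_complete is the number of visits
    needed before every arm has been entered at least once. These
    conventions are illustrative, not taken from the cited studies.
    """
    visited = set()
    errors = 0
    visits_to_complete = None
    for i, arm in enumerate(visits, start=1):
        if arm in visited:
            errors += 1            # re-entry into a previously visited arm
        else:
            visited.add(arm)
        if visits_to_complete is None and len(visited) == n_arms:
            visits_to_complete = i # all arms have now been collected
    return {"errors": errors, "visits_to_complete": visits_to_complete}

# Example session: the rat revisits arm 3 once before finishing all 8 arms.
print(score_radial_arm_visits([1, 3, 5, 3, 7, 2, 8, 4, 6]))
# {'errors': 1, 'visits_to_complete': 9}
```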
The Morris water navigation task is a classic test for studying spatial learning and memory in rats[28] and was first developed in 1981 by Richard G. Morris, for whom the test is named. The subject is placed in a round tank of translucent water with walls that are too high for it to climb out and water that is too deep for it to stand in. The walls of the tank are decorated with visual cues that serve as reference points. The rat must swim around the pool until, by chance, it discovers the hidden platform, just below the surface, onto which it can climb.[citation needed]
Typically, rats first swim around the edge of the pool before venturing out into the center in a meandering pattern until they stumble upon the hidden platform. However, as experience with the pool increases, the amount of time needed to locate the platform decreases, with veteran rats swimming directly to the platform almost immediately after being placed in the water. Because the task requires the rats to swim, most researchers believe that habituation is needed to decrease the animal's stress levels, since stress may impair cognitive testing results.[29]
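The two measures most often taken from this task can be sketched as follows; this is a hedged illustration assuming a particular coordinate convention for the pool (quadrants defined by the sign of the position relative to the pool centre), which is not specified in the cited sources.

```python
import math

def mean_escape_latency(trial_latencies):
    """Mean time (s) to reach the hidden platform across training trials;
    a shrinking mean over successive blocks is the usual sign of learning."""
    return sum(trial_latencies) / len(trial_latencies)

def quadrant_dwell_time(samples, dt, pool_center, target_center):
    """Probe-trial measure: seconds spent in the quadrant that formerly
    contained the platform. `samples` are (x, y) positions recorded every
    `dt` seconds; quadrant membership is judged by the sign of the position
    relative to the pool centre (an illustrative convention)."""
    cx, cy = pool_center
    tx, ty = target_center
    target_signs = (math.copysign(1, tx - cx), math.copysign(1, ty - cy))
    in_quadrant = sum(
        1 for (x, y) in samples
        if (math.copysign(1, x - cx), math.copysign(1, y - cy)) == target_signs
    )
    return in_quadrant * dt

# Example: 60 s probe trial sampled at 1 Hz; platform formerly in the (+, +) quadrant.
track = [(0.3, 0.4)] * 20 + [(-0.5, 0.2)] * 40      # 20 s spent in the target quadrant
print(quadrant_dwell_time(track, dt=1.0, pool_center=(0, 0), target_center=(0.5, 0.5)))  # 20.0
```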
Thehippocampusprovides animals with a spatial map of their environment.[30]It stores information regarding non-egocentric space (egocentric means in reference to one's body position in space) and therefore supports viewpoint independence in spatial memory.[31]This means that it allows for viewpoint manipulation from memory. It is important for long-term spatial memory of allocentric space (reference to external cues in space).[32]Maintenance and retrieval of memories are thus relational orcontext dependent.[33]The hippocampus makes use of reference and working memory and has the important role of processing information about spatial locations.[34]
Blockingplasticityin this region results in problems in goal-directed navigation and impairs the ability to remember precise locations.[35]Amnesicpatients with damage to the hippocampus cannot learn or remember spatial layouts, and patients having undergone hippocampal removal are severely impaired in spatial navigation.[31][36]
Monkeys with lesions to this area cannot learn object-place associations, and rats also display spatial deficits by not reacting to spatial change.[31][37] In addition, rats with hippocampal lesions have been shown to exhibit temporally ungraded (time-independent) retrograde amnesia for recognition of a learned platform task, but only when the entire hippocampus is lesioned, not when it is partially lesioned.[38] Deficits in spatial memory are also found in spatial discrimination tasks.[36]
Large differences in spatial impairment are found between the dorsal and ventral hippocampus. Lesions to the ventral hippocampus have no effect on spatial memory, while the dorsal hippocampus is required for retrieval, for processing short-term memory, and for transferring memory from short delays to longer delay periods.[39][40][41] Infusion of amphetamine into the dorsal hippocampus has also been shown to enhance memory for previously learned spatial locations.[42] These findings indicate a functional dissociation between the dorsal and ventral hippocampus.[citation needed]
Hemispheric differences within the hippocampus are also observed. A study of London taxi drivers asked them to recall complex routes around the city, as well as famous landmarks for which they had no knowledge of the spatial location. This resulted in activation of the right hippocampus solely during recall of the complex routes, indicating that the right hippocampus is used for navigation in large-scale spatial environments.[43]
The hippocampus is known to contain two separate memory circuits. One circuit is used for recollection-based place recognition memory and includes the entorhinal-CA1 system,[44] while the other, consisting of the hippocampal trisynaptic loop (entorhinal-dentate-CA3-CA1), is used for place recall memory;[45] facilitation of plasticity at the entorhinal-dentate synapse in mice is sufficient to enhance place recall.[46]
Place cellsare also found in the hippocampus.
Theparietal cortexencodes spatial information using an egocentric frame of reference. It is therefore involved in the transformation of sensory information coordinates into action or effector coordinates by updating the spatial representation of the body within the environment.[47]As a result, lesions to the parietal cortex produce deficits in the acquisition and retention of egocentric tasks, whereas minor impairment is seen among allocentric tasks.[48]
Rats with lesions to the anterior region of the posterior parietal cortex reexplore displaced objects, while rats with lesions to the posterior region of the posterior parietal cortex display no reaction to spatial change.[37]
Parietal cortex lesions are also known to produce temporally ungradedretrograde amnesia.[49]
The dorsalcaudal medialentorhinal cortex(dMEC) contains a topographically organized map of the spatial environment made up ofgrid cells.[50]This brain region thus transforms sensory input from the environment and stores it as a durable allocentric representation in the brain to be used forpath integration.[51]
The entorhinal cortex contributes to the processing and integration of geometric properties and information in the environment.[52] Lesions to this region impair the use of distal but not proximal landmarks during navigation and produce a delay-dependent deficit in spatial memory that is proportional to the length of the delay.[53][54] Lesions to this region are also known to create retention deficits for tasks learned up to 4 weeks, but not 6 weeks, prior to the lesion.[49]
Memory consolidationin the entorhinal cortex is achieved through extracellular signal-regulatedkinaseactivity.[55]
The medial prefrontal cortex processes egocentric spatial information. It participates in the processing of short-term spatial memory used to guide planned search behavior and is believed to join spatial information with its motivational significance.[41][56] The identification of neurons that anticipate expected rewards in a spatial task supports this hypothesis. The medial prefrontal cortex is also implicated in the temporal organization of information.[57]
Hemisphere specialization is found in this brain region. The left prefrontal cortex preferentially processes categorical spatial memory including source memory (reference to spatial relationships between a place or event), while the right prefrontal cortex preferentially processes coordinate spatial memory including item memory (reference to spatial relationships between features of an item).[58]
Lesions to the medial prefrontal cortex impair the performance of rats on a previously trained radial arm maze, but rats can gradually improve to the level of the controls as a function of experience.[59]Lesions to this area also cause deficits on delayed nonmatching-to-positions tasks and impairments in the acquisition of spatial memory tasks during training trials.[60][61]
Theretrosplenial cortexis involved in the processing of allocentric memory andgeometric propertiesin the environment.[52]Inactivation of this region accounts for impaired navigation in the dark and it may be involved in the process ofpath integration.[62]
Lesions to the retrosplenial cortex consistently impair tests of allocentric memory, while sparing egocentric memory.[63]Animals with lesions to the caudal retrosplenial cortex show impaired performance on a radial arm maze only when the maze is rotated to remove their reliance on intramaze cues.[64]
In humans, damage to the retrosplenial cortex results in topographical disorientation. Most cases involve damage to the right retrosplenial cortex and include Brodmann area 30. Patients are often impaired at learning new routes and at navigating through familiar environments.[65]However, most patients usually recover within 8 weeks.
The retrosplenial cortex preferentially processes spatial information in the right hemisphere.[65]
Theperirhinal cortexis associated with both spatial reference and spatial working memory.[34]It processes relational information of environmental cues and locations.[citation needed]
Lesions in the perirhinal cortex account for deficits in reference memory and working memory, and increase the rate offorgettingof information during training trials of the Morris water maze.[66]This accounts for the impairment in the initial acquisition of the task. Lesions also cause impairment on an object location task and reduce habituation to a novel environment.[34]
Spatial memories are formed after an animal gathers and processes sensory information about its surroundings (especiallyvisionandproprioception). In general, mammals require a functioning hippocampus (particularly area CA1) in order to form and process memories about space. There is some evidence that human spatial memory is strongly tied to the right hemisphere of the brain.[67][68][69]
Spatial learning requires bothNMDAandAMPAreceptors, consolidation requires NMDA receptors, and the retrieval of spatial memories requires AMPA receptors.[70]In rodents, spatial memory has been shown to covary with the size of a part of the hippocampalmossy fiberprojection.[71]
The function of NMDA receptors varies according to the subregion of the hippocampus. NMDA receptors are required in the CA3 of the hippocampus when spatial information needs to be reorganized, while NMDA receptors in the CA1 are required in the acquisition and retrieval of memory after a delay, as well as in the formation of CA1 place fields.[72]Blockade of the NMDA receptors prevents induction oflong-term potentiationand impairs spatial learning.[73]
The CA3 of the hippocampus plays an especially important role in the encoding and retrieval of spatial memories. The CA3 is innervated by two afferent paths known as the perforant path (PPCA3) and thedentate gyrus(DG)-mediated mossy fibers (MFs). The first path is regarded as the retrieval index path while the second is concerned with encoding.[74]
Topographical disorientation (TD) is a cognitive disorder that leaves the affected individual unable to orient themselves in a real or virtual environment. Patients also struggle with tasks that depend on spatial information. These problems may result from a disruption in the ability to access one's cognitive map, a mental representation of the surrounding environment, or from an inability to judge the location of objects in relation to oneself.[75]
Developmental topographical disorientation(DTD) is diagnosed when patients have shown an inability tonavigateeven familiar surroundings since birth and show no apparent neurological causes for this deficiency such as lesioning or brain damage. DTD is a relatively new disorder and can occur in varying degrees of severity.[citation needed]
A study examined whether topographical disorientation affected individuals with mild cognitive impairment (MCI). Forty-one patients diagnosed with MCI and 24 healthy control individuals were recruited, and specific inclusion criteria were set for the experiment.[citation needed]
TD was assessed clinically in all participants. Neurological and neuropsychological evaluations were conducted, and a magnetic resonance imaging scan was performed on each participant. Voxel-based morphometry was used to compare patterns of gray-matter atrophy between patients with and without TD and a group of normal controls. The researchers found TD in 17 of the 41 MCI patients (41.4%). Functional abilities were significantly more impaired in MCI patients with TD than in MCI patients without TD, and the presence of TD in MCI patients was associated with loss of gray matter in the medial temporal regions, including the hippocampus.[76]
Research with rats indicates that spatial memory may be adversely affected byneonataldamage to the hippocampus in a way that closely resemblesschizophrenia. Schizophrenia is thought to stem fromneurodevelopmentalproblems shortly after birth.[77]
Rats are commonly used as models of schizophrenia patients. Experimenters create lesions in the ventral hippocampal area shortly after birth, a procedure known as neonatal ventral hippocampal lesioning (NVHL). Adult rats with NVHL show typical indicators of schizophrenia, such as hypersensitivity to psychostimulants, reduced social interaction, and impaired prepulse inhibition, working memory, and set-shifting.[78][79][80][81][82] Similar to people with schizophrenia, lesioned rats fail to use environmental context in spatial learning tasks, for example showing difficulty completing the radial arm maze and the Morris water maze.[83][84][85]
Endonuclease VIII-like 1 (NEIL1) is aDNA repairenzyme that is widely expressed throughout thebrain. NEIL1 is aDNA glycosylasethat initiates the first step inbase excision repairby cleaving bases damaged by reactive oxygen species and then introducing a DNA strand break via an associatedlyasereaction. This enzyme recognizes and removesoxidized DNA basesincludingformamidopyrimidine,thymine glycol,5-hydroxyuraciland5-hydroxycytosine. NEIL1 promotes short-term spatial memory retention.[86]Mice lacking NEIL1 have impaired short-term spatial memory retention in a water maze test.[86]
Global Positioning System(GPS) technology has revolutionized the way we navigate and explore our environment. GPS has become an essential tool in our daily lives, providing real-time information about ourlocationand the directions we need to take to reach our destination. However, some researchers have raised concerns about the impact of GPS use on our spatial learning and memory.Spatial learningrefers to our ability to perceive, remember, and use spatial information acquired in the environment.Memory, on the other hand, involves our ability to store and retrieve information about the world around us. Both spatial learning and memory are crucial for our ability tonavigateand explore our environment effectively.
The use of GPS has been shown to have both positive and negative effects on spatial learning and memory. Research has shown that people who rely on GPS for navigation are less likely to develop and use mental maps and have a harder time remembering details about the environment, as GPS use can lead to a decline in those skills over time.[87]Furthermore, GPS users tend to rely more on thetechnologythan on their owncognitive abilities, leading to a loss of confidence in their navigational skills.[88]
However, this loss of confidence in one's own skills is counteracted by the knowledge that getting lost is no longer a problem, thanks to the GPS on our phones, which in turn restores confidence in our wayfinding ability. Beneficial outcomes attributed to GPS assistance include more efficient and accurate navigation, coupled with a significant reduction in the cognitive load required for navigation. When people use GPS devices, they do not have to worry about remembering the route, paying attention to landmarks, or constantly checking maps. This frees up cognitive resources for other tasks, leading to better performance on those tasks, higher levels of concentration and focus, and easier information processing and learning.[89]
To compensate for the issues that arise from GPS use, there has been substantial research that proposes alternative forms ofGPS navigationor additions to the existing ones that have been shown to enhance spatial learning. A study from 2021 implemented a3Dspatial audio system similar to an auditory compass, where users are directed towards their destination without explicit directions. Rather than being led passively through verbal directions, users are encouraged to take an active role in their own spatial navigation. This led to more accuratecognitive mapsof space, an improvement which was demonstrated when the participants of the study drew precisemapsafter performing ascavenger hunttask.[90]Another study suggested highlighting local features likelandmarks, along the route and atdecision points; or highlighting structural features that provide global orientation (not the details concerning the route taken by the study's participants, but landmarks of the larger area surrounding it). The study showed that accentuating local features in wayfinding maps (GPS) supports the acquisition of route knowledge, which was measured with a pointing and a global feature recall task.[91]
The use of GPS also provides advantages in spatial learning and memory for blind and visually impaired people, who often need to obtain information about locations ahead of time and practice a specific route with the help of a relative, friend, or specialized instructor before traveling it independently. GPS offers helpful information along the way, allowing them to become more independent and confident when traveling to a destination.[citation needed]
Another research paper claims that GPS can be used for patients suffering from dementia.
In a study from 2014, drivers with mild to very mild Alzheimer's disease (AD) completed three driving trials with different GPS settings (normal, visual-only, and audio-only). The participants were required to perform a variety of driving tasks on a driving simulator while following the GPS instructions. The study found that using single, simple auditory instructions, without the visual output of the GPS, could potentially help people with mild AD to improve their driving ability and reach their destination, supporting the idea that GPS use reduces cognitive load.[92]
Since GPS use would help the patients with wayfinding, it would allow them to stay safe in public, reclaim their sense ofself-sufficiency, and discourage "wandering". Overall, evidence is strongest about the use of GPS technologies for averting harm and promotingwellbeing.[93]
The impact of GPS use on spatial learning and memory is not yet fully understood, and further research is needed to explore the long-term effects of GPS use on these cognitive processes. However, it is clear that GPS technology has both benefits and drawbacks, and users should be aware of the potential impact of their reliance on GPS.
In conclusion, GPS technology has revolutionized the way we navigate and explore our environment, but its impact on spatial learning and memory is still a subject of debate. While GPS use can help people navigate more efficiently and confidently, and can aid populations who would otherwise be significantly hindered, it may lead to a decline in spatial cognitive skills over time. It is therefore essential for users to balance the benefits and drawbacks of GPS use and to be aware of its potential impact on their cognitive abilities.[citation needed]
Nonverbal learning disability(NVLD) is characterized by normal verbal abilities but impaired visuospatial abilities. Problem areas for children with nonverbal learning disability include arithmetic, geometry, and science. Impairments in spatial memory are linked to nonverbal learning disorder and other learning difficulties.[94]
Arithmeticword problemsinvolve written text containing a set of data followed by one or more questions and require the use of the four basic arithmetic operations (addition, subtraction, multiplication, or division).[22]Researchers suggest that successful completion of arithmetic word problems involves spatialworking memory(involved in building schematic representations) which facilitates the creation of spatial relationships between objects. Creating spatial relationships between objects is an important part of solving word problems because mental operations and transformations are required.[22]
Researchers investigated the role of spatial memory and visual memory in the ability to complete arithmetic word problems. Children in the study completed the Corsi block task (forward and backward series) and a spatial matrix task, as well as a visual memory task called the house recognition test. Poorproblem-solverswere impaired on the Corsi block tasks and the spatial matrix task, but performed normally on the house recognition test when compared to normally achieving children. The experiment demonstrated that poor problem solving is related specifically to deficient processing of spatial information.[22]
Sleep has been found to benefit spatial memory by enhancing hippocampal-dependent memory consolidation[95] and by elevating pathways responsible for synaptic strength, plasticity-related gene transcription, and protein translation (Piber, 2021).[96] Hippocampal areas activated during route learning are reactivated during subsequent sleep (NREM sleep in particular). One study demonstrated that the extent of reactivation during sleep correlated with the improvement in route retrieval, and therefore with memory performance, the following day.[97] The study supported the idea that sleep enhances the systems-level process of consolidation, which in turn improves behavioral performance. In comparison with a period of sleep, a period of wakefulness has no effect on stabilizing memory traces. Sleep after the first post-training night, i.e., on the second night, does not benefit spatial memory consolidation further. Therefore, sleep during the first post-training night, e.g. after learning a route, is most important.[95]
Further, early and late nocturnal sleep have been shown to have different effects on spatial memory. Stage N3 of NREM sleep, also referred to as slow wave sleep (SWS), is thought to play a salient role in the sleep-dependent formation of spatial memory in humans. In a study conducted by Plihal and Born (1999),[98] performance on mental rotation tasks was higher among participants who had early sleep intervals (23:00–02:00) after learning the task than among those who had late sleep intervals (03:00–06:00). These results suggest that early sleep, which is rich in SWS, has particular benefits for the formation of spatial memory. When the researchers examined whether early sleep would have a similar impact on a word-stem priming task (a verbal task), the results were the opposite. This was not surprising, as priming tasks rely mostly on procedural memory and thus benefit more from late retention sleep (dominated by REM sleep) than from early sleep.[98]
The association between sleep deprivation and spatial memory has also been researched. Sleep deprivation hinders improvement in memory performance by actively disrupting spatial memory consolidation.[95] As a result, spatial memory is enhanced by a period of sleep. Similar results were reported by another study examining the impact of total sleep deprivation (TSD) on rats' spatial memory (Guan et al., 2004).[99] In the first experiment, the rats were trained in the Morris water maze for 12 trials over 6 hours to find a hidden platform (transparent and not visible in the water) by using spatial cues in the environment. In each trial, they started from a different point and were allowed to swim for a maximum of 120 s to reach the platform. After the learning phase, they were given a probe trial 24 h later to test spatial memory. In this trial, the hidden platform was removed from the maze, and the time the animals spent in the target area (which the hidden platform had previously occupied) served as a measure of spatial memory persistence. Control rats, which had spontaneous sleep, spent significantly more time in the target quadrant than rats subjected to total sleep deprivation. In terms of spatial learning, indicated by the latency to find the hidden platform, there were no differences: for both control and sleep-deprived rats, the time required to find the platform decreased with every new trial.[99]
In the second experiment, the rats were trained to swim to a visible platform whose location was changed in each trial. For every new trial, the rats started from the opposite side of the pool from the platform. After training in a single trial, their memory was tested 24 h later, with the platform still in the maze. The distance and the time the rats needed to swim to the visible platform were taken as non-spatial memory measures. No significant difference was found between sleep-deprived and control rats. Similarly, in terms of learning, indicated by the latency to reach the visible platform, there were no significant differences. TSD therefore does not appear to affect non-spatial learning or non-spatial memory.[99]
Regarding the effects of sleep deprivation in humans, Piber (2021)[96] reviewed clinical observations showing that people with severe sleep disorders frequently have abnormalities in spatial memory. For example, insomnia patients, whose sleep is interrupted and non-restorative and who show deficits in daytime cognitive performance, have been documented to perform worse on spatial tasks than healthy participants (Li et al., 2016;[100] Chen et al., 2016;[101] Khassawneh et al., 2018;[102] He et al., 2021[103]).
Likewise, dreaming plays an important role in spatial memory. A study by Wamsley and Stickgold (2019)[104] found that participants who incorporated a recent learning experience into their overnight dream content showed greater overnight performance improvement, supporting the hypothesis that dreaming reflects memory processing in the sleeping brain. According to the authors, one explanation is that maze-related dreams indicate that performance-relevant components of the task memory are being reactivated in the sleeping brain. The study also supports the idea that dream reports can include an experimental learning task during all stages of sleep, including REM and NREM.[104]
Virtual reality(VR) has also been used to study the connection between dreams and spatial memory. Ribeiro, Gounden, and Quaglino (2021)[105]proposed spatialized elements in a VR context and found that after a full night of sleep in a home setting, when the material studied was incorporated into the dream content, the recall performance of these elements was better than the performance obtained after a comparable wake period.[105]
https://en.wikipedia.org/wiki/Spatial_memory
Inpsychologyandphilosophy,theory of mind(often abbreviated to ToM) refers to the capacity to understand other individuals by ascribingmental statesto them. A theory of mind includes the understanding that others'beliefs,desires,intentions,emotions, andthoughtsmay be different from one's own.[1]Possessing a functional theory of mind is crucial for success in everyday humansocial interactions. People utilize a theory of mind whenanalyzing,judging, andinferringother people's behaviors.
Theory of mind was first conceptualized by researchers evaluating the presence oftheory of mind in animals.[2][3]Today, theory of mind research also investigates factors affecting theory of mind in humans, such as whether drug and alcohol consumption,language development, cognitive delays, age, and culture can affect a person's capacity to display theory of mind.
It has been proposed that deficits in theory of mind may occur in people withautism,[5]anorexia nervosa,[6]schizophrenia,dysphoria,addiction,[7]andbrain damagecaused byalcohol's neurotoxicity.[8][9]Neuroimagingshows that themedial prefrontal cortex(mPFC), the posteriorsuperior temporal sulcus(pSTS), theprecuneus, and theamygdalaare associated with theory of mind tasks. Patients withfrontal lobeortemporoparietal junctionlesions find some theory of mind tasks difficult. One's theory of mind develops in childhood as theprefrontal cortexdevelops.[10]
The "theory of mind" is described as atheorybecause the behavior of the other person, such as their statements and expressions, is the only thing being directly observed; no one has direct access to the mind of another, and the existence and nature of the mind must be inferred.[11]It is typically assumed others have minds analogous to one's own; this assumption is based on three reciprocal social interactions, as observed injoint attention,[12]the functional use of language,[13]and the understanding of others' emotions and actions.[14]Theory of mind allows one to attribute thoughts, desires, and intentions to others, to predict or explain their actions, and to posit their intentions. It enables one to understand that mental states can be the cause of—and can be used to explain and predict—the behavior of others.[11]Being able to attribute mental states to others and understanding them as causes of behavior implies, in part, one must be able to conceive of the mind as a "generator of representations".[15]If a person does not have a mature theory of mind, it may be a sign of cognitive or developmental impairment.[16]
Theory of mind appears to be an innate potential ability in humans that requires social and other experience over many years for its full development. Different people may develop more or less effective theories of mind.Neo-Piagetian theories of cognitive developmentmaintain that theory of mind is a byproduct of a broaderhypercognitiveability of the human mind to register, monitor, and represent its own functioning.[17]
Empathy—the recognition and understanding of the states of mind of others, including their beliefs, desires, and particularly emotions—is a related concept. Empathy is often characterized as the ability to "put oneself into another's shoes". Recentneuro-ethologicalstudies of animal behavior suggest that rodents may exhibit empathetic abilities.[18]While empathy is known as emotional perspective-taking, theory of mind is defined as cognitive perspective-taking.[19]
Research on theory of mind, in humans and animals, adults and children, normally and atypically developing, has grown rapidly in the years sincePremackand Guy Woodruff's 1978 paper, "Does the chimpanzee have a theory of mind?".[11]The field ofsocial neurosciencehas also begun to address this debate by imaging the brains of humans while they perform tasks that require the understanding of an intention, belief, or other mental state in others.
An alternative account of theory of mind is given inoperantpsychology and providesempirical evidencefor a functional account of both perspective-taking and empathy. The most developed operant approach is founded on research on derived relational responding[jargon]and is subsumed withinrelational frame theory. Derived relational responding relies on the ability to identifyderived relations, or relationships between stimuli that are not directly learned orreinforced; for example, if "snake" is related to "danger" and "danger" is related to "fear", people may know to fear snakes even without learning an explicit connection between snakes and fear.[20]According to this view, empathy and perspective-taking comprise a complex set of derived relational abilities based on learning to discriminate and respond verbally to ever more complex relations between self, others, place, and time, and through established relations.[21][22][23]
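As a toy illustration of the transitive derivation described above (and only of that component, not of relational frame theory as a whole), the sketch below, in Python, derives the untrained snake-fear relation from the two trained relations; the relation encoding and the closure rule are illustrative assumptions.

```python
from itertools import product

def derive_relations(trained):
    """Return the transitive closure of a set of trained (A, B) relations.

    Given directly reinforced pairs such as ("snake", "danger") and
    ("danger", "fear"), the derived pair ("snake", "fear") emerges without
    ever being explicitly trained -- a toy version of the example in the text.
    """
    relations = set(trained)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(relations), repeat=2):
            if b == c and (a, d) not in relations:
                relations.add((a, d))   # derive A -> D from A -> B and B -> D
                changed = True
    return relations

trained = {("snake", "danger"), ("danger", "fear")}
print(("snake", "fear") in derive_relations(trained))   # True: fear of snakes is derived
```

The sketch captures only one kind of derived relation (combining trained relations); the fuller account also involves responding to relations in both directions and in context, which this toy closure does not model.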
Discussions of theory of mind have their roots in philosophical debate from the time ofRené Descartes'Second Meditation, which set the foundations for considering the science of the mind.
Two differing approaches in philosophy for explaining theory of mind aretheory-theoryandsimulation theory.[24]Theory-theory claims that individuals use "theories" grounded infolk psychologyto reason about others' minds. According to theory-theory, these folk psychology theories are developed automatically and innately by concepts and rules we have for ourselves, and then instantiated through social interactions.[25]In contrast, simulation-theory argues that individuals simulate the internal states of others to build mental models for their cognitive processes. A basic example of this is someone imagining themselves in the position of another person to infer the other person's thoughts and feelings.[26]Theory of mind is also closely related toperson perceptionandattribution theoryfromsocial psychology.
It is common and intuitive to assume that others have minds. Peopleanthropomorphizenon-human animals, inanimate objects, and even natural phenomena.Daniel Dennettreferred to this tendency as taking an "intentional stance" toward things: we assume they have intentions, to help predict their future behavior.[27]However, there is an important distinction between taking an "intentional stance" toward something and entering a "shared world" with it. The intentional stance is a functional relationship, describing the use of a theory due to its practical utility, rather than the accuracy of its representation of the world. As such, it is something people resort to during interpersonal interactions. A shared world is directly perceived and its existence structures reality itself for the perceiver. It is not just a lens, through which the perceiver views the world; it in many ways constitutes the cognition, as both its object and the blueprint used to structure perception into understanding.
The philosophical roots of another perspective, therelational frame theory(RFT) account of theory of mind, arise from contextual psychology, which refers to the study of organisms (both human and non-human) interacting in and with a historical and current situational context. It is an approach based oncontextualism, a philosophy in which any event is interpreted as an ongoing act inseparable from its current and historical context and in whicha radically functional approach to truth and meaningis adopted. As a variant of contextualism, RFT focuses on the construction of practical, scientific knowledge. This scientific form of contextual psychology is virtually synonymous with the philosophy of operant psychology.[28]
The study of which animals are capable of attributing knowledge and mental states to others, as well as the development of this ability in humanontogenyandphylogeny, identifies several behavioral precursors to theory of mind. Understanding attention, understanding of others' intentions, and imitative experience with others are hallmarks of a theory of mind that may be observed early in the development of what later becomes a full-fledged theory.
Simon Baron-Cohenproposed that infants' understanding of attention in others acts as a critical precursor to the development of theory of mind.[12]Understanding attention involves understanding that seeing can be directed selectively as attention, that the looker assesses the seen object as "of interest", and that seeing can induce beliefs. A possible illustration of theory of mind in infants is joint attention. Joint attention refers to when two people look at and attend to the same thing. Parents often use the act of pointing to prompt infants to engage in joint attention; understanding this prompt requires that infants take into account another person's mental state and understand that the person notices an object or finds it of interest. Baron-Cohen speculates that the inclination to spontaneously reference an object in the world as of interest, via pointing, ("Proto declarative pointing") and to likewise appreciate the directed attention of another, may be the underlying motive behind all human communication.[12]
Understanding others' intentions is another critical precursor to understanding other minds because intentionality is a fundamental feature of mental states and events. The "intentional stance" was defined byDaniel Dennett[29]as an understanding that others' actions are goal-directed and arise from particular beliefs or desires. Both two and three-year-old children could discriminate when an experimenter intentionally or accidentally marked a box with stickers.[30]Even earlier in development,Andrew N. Meltzofffound that 18-month-old infants could perform target tasks involving the manipulation of objects that adult experimenters attempted and failed, suggesting the infants could represent the object-manipulating behavior of adults as involving goals and intentions.[31]While attribution of intention and knowledge is investigated in young humans and nonhuman animals to detect precursors to a theory of mind, Gagliardi et al. have pointed out that even adult humans do not always act in a way consistent with an attributional perspective (i.e., based on attribution of knowledge to others).[32]In their experiment, adult human subjects attempted to choose the container baited with a small object from a selection of four containers when guided by confederates who could not see which container was baited.
Research in developmental psychology suggests that an infant's ability to imitate others lies at the origins of both theory of mind and other social-cognitive achievements likeperspective-takingand empathy.[33]According to Meltzoff, the infant's innate understanding that others are "like me" allows them to recognize the equivalence between the physical and mental states apparent in others and those felt by the self. For example, the infant uses their own experiences, orienting their head and eyes toward an object of interest to understand the movements of others who turn toward an object; that is, they will generally attend to objects of interest or significance. Some researchers in comparative disciplines have hesitated to put too much weight on imitation as a critical precursor to advanced human social-cognitive skills like mentalizing and empathizing, especially if true imitation is no longer employed by adults. A test of imitation by Alexandra Horowitz found that adult subjects imitated an experimenter demonstrating a novel task far less closely than children did. Horowitz points out that the precise psychological state underlying imitation is unclear and cannot, by itself, be used to draw conclusions about the mental states of humans.[34]
While much research has been done on infants, theory of mind develops continuously throughout childhood and into late adolescence as thesynapsesin the prefrontal cortex develop. The prefrontal cortex is thought to be involved in planning and decision-making.[35]Children seem to develop theory of mind skills sequentially. The first skill to develop is the ability to recognize that others have diverse desires. Children are able to recognize that others have diverse beliefs soon after. The next skill to develop is recognizing that others have access to different knowledge bases. Finally, children are able to understand that others may have false beliefs and that others are capable of hiding emotions. While this sequence represents the general trend in skill acquisition, it seems that more emphasis is placed on some skills in certain cultures, leading to more valued skills to develop before those that are considered not as important. For example, inindividualisticcultures such as the United States, a greater emphasis is placed on the ability to recognize that others have different opinions and beliefs. In acollectivisticculture, such as China, this skill may not be as important and therefore may not develop until later.[36]
There is evidence that the development of theory of mind is closely intertwined with language development in humans. One meta-analysis showed a moderate to strong correlation (r= 0.43) between performance on theory of mind and language tasks.[37]Both language and theory of mind begin to develop around the same time in children (between ages two and five), but many other abilities develop during this same time period as well, and they do not produce such high correlations with one another nor with theory of mind.
Pragmatic theories of communication assume that infants must possess an understanding of beliefs and mental states of others to infer the communicative content that proficient language users intend to convey.[38]Since spoken phrases can have different meanings depending on context, theory of mind can play a crucial role in understanding the intentions of others and inferring the meaning of words. Some empirical results suggest that even 13-month-old infants have an early capacity for communicative mind-reading that enables them to infer what relevant information is transferred between communicative partners, which implies that human language relies at least partially on theory of mind skills.[39]
Carol A. Miller posed further possible explanations for this relationship. Perhaps the extent of verbal communication and conversation involving children in a family could explain theory of mind development. Such language exposure could help introduce a child to the different mental states and perspectives of others.[40]Empirical findings indicate that participation in family discussion predicts scores on theory of mind tasks,[41]and that deaf children who have hearing parents and may not be able to communicate with their parents much during early years of development tend to score lower on theory of mind tasks.[42]
Another explanation of the relationship between language and theory of mind development has to do with a child's understanding of mental-state words such as "think" and "believe". Since a mental state is not something that one can observe from behavior, children must learn the meanings of words denoting mental states from verbal explanations alone, requiring knowledge of the syntactic rules, semantic systems, and pragmatics of a language.[40]Studies have shown that understanding of these mental state words predicts theory of mind in four-year-olds.[43]
A third hypothesis is that the ability to distinguish a whole sentence ("Jimmy thinks the world is flat") from its embedded complement ("the world is flat") and understand that one can be true while the other can be false is related to theory of mind development. Recognizing these complements as being independent of one another is a relatively complex syntactic skill and correlates with increased scores on theory of mind tasks in children.[44]
There is also evidence that the areas of the brain responsible for language and theory of mind are closely connected. Thetemporoparietal junction(TPJ) is involved in the ability to acquire new vocabulary, as well as to perceive and reproduce words. The TPJ also contains areas that specialize in recognizing faces, voices, and biological motion, and in theory of mind. Since all of these areas are located so closely together, it is reasonable to suspect that they work together. Studies have reported an increase in activity in the TPJ when patients are absorbing information through reading or images regarding other peoples' beliefs but not while observing information about physical control stimuli.[45]
Adults have theory of mind concepts that they developed as children (concepts such as belief, desire, knowledge, and intention). They use these concepts to meet the diverse demands of social life, ranging from snap decisions about how to trick an opponent in a competitive game, to keeping up with who knows what in a fast-moving conversation, to judging the guilt or innocence of the accused in a court of law.[46]
Boaz Keysar, Dale Barr, and colleagues found that adults often failed tousetheir theory of mind abilities to interpret a speaker's message, and acted as if unaware that the speaker lacked critical knowledge about a task. In one study, a confederate instructed adult participants to rearrange objects, some of which were not visible to the confederate, as part of a communication game. Only objects that were visible to both the confederate and the participant were part of the game. Despite knowing that the confederate could not see some of the objects, a third of the participants still tried to move those objects.[47]Other studies show that adults are prone toegocentric biases, with which they are influenced by their own beliefs, knowledge, or preferences when judging those of other people, or that they neglect other people's perspectives entirely.[48]There is also evidence that adults with greater memory,inhibitory capacity, and motivation are more likely to use their theory of mind abilities.[49]
In contrast, evidence about indirect effects of thinking about other people's mental states suggests that adults may sometimes use their theory of mind automatically. Agnes Kovacs and colleagues measured the time it took adults to detect the presence of a ball as it was revealed from behind an occluder. They found that adults' speed of response was influenced by whether another person (the "agent") in the scene thought there was a ball behind the occluder, even though adults were not asked to pay attention to what the agent thought.[50]
Dana Samson and colleagues measured the time it took adults to judge the number of dots on the wall of a room. They found that adults responded more slowly when another person standing in the room happened to see fewer dots than they did, even when they had never been asked to pay attention to what the person could see.[51]It has been questioned whether these "altercentric biases" truly reflect automatic processing of what another person is thinking or seeing or, instead, reflect attention and memory effects cued by the other person, but not involving any representation of what they think or see.[52]
Different theories seek to explain such results. If theory of mind is automatic, this would help explain how people keep up with the theory of mind demands of competitive games and fast-moving conversations. It might also explain evidence that human infants and some non-human species sometimes appear capable of theory of mind, despite their limited resources for memory and cognitive control.[53]If theory of mind is effortful and not automatic, on the other hand, this explains why it feels effortful to decide whether a defendant is guilty or whether a negotiator is bluffing. Economy of effort would help explain why people sometimes neglect to use their theory of mind.
Ian Apperly andStephen Butterfillsuggested that people have "two systems" for theory of mind,[54]in common with "two systems" accounts in many other areas of psychology.[55]In this account, "system 1" is cognitively efficient and enables theory of mind for a limited but useful set of circumstances. "System 2" is cognitively effortful, but enables much more flexible theory of mind abilities. PhilosopherPeter Carruthersdisagrees, arguing that the same core theory of mind abilities can be used in both simple and complex ways.[56]The account has been criticized by Celia Heyes who suggests that "system 1" theory of mind abilities do not require representation of mental states of other people, and so are better thought of as "sub-mentalizing".[52]
In older age, theory of mind capacities decline, irrespective of how exactly they are tested.[57]However, the decline in other cognitive functions is even stronger, suggesting that social cognition is better preserved. In contrast to theory of mind, empathy shows no impairments in aging.[58][59]
There are two kinds of theory of mind representations: cognitive (concerning mental states, beliefs, thoughts, and intentions) and affective (concerning the emotions of others). Cognitive theory of mind is further separated into first order (e.g., I think she thinks that) and second order (e.g. he thinks that she thinks that). There is evidence that cognitive and affective theory of mind processes are functionally independent from one another.[60]In studies of Alzheimer's disease, which typically occurs in older adults, patients display impairment with second order cognitive theory of mind, but usually not with first order cognitive or affective theory of mind. However, it is difficult to discern a clear pattern of theory of mind variation due to age. There have been many discrepancies in the data collected thus far, likely due to small sample sizes and the use of different tasks that only explore one aspect of theory of mind. Many researchers suggest that theory of mind impairment is simply due to the normal decline in cognitive function.[61]
Researchers propose that five key aspects of theory of mind develop sequentially for all children between the ages of three and five:[62]diverse desires, diverse beliefs, knowledge access, false beliefs, and hidden emotions.[62]Australian, American, and European children acquire theory of mind in this exact order,[10]and studies with children in Canada, India, Peru, Samoa, and Thailand indicate that they all pass the false belief task at around the same time, suggesting that children develop theory of mind consistently around the world.[63]
However, children fromIranandChinadevelop theory of mind in a slightly different order. Although they begin the development of theory of mind around the same time, toddlers from these countries understand knowledge access before Western children but take longer to understand diverse beliefs.[10][16]Researchers believe this swap in the developmental order is related to the culture ofcollectivismin Iran and China, which emphasizes interdependence and shared knowledge as opposed to the culture ofindividualismin Western countries, which promotes individuality and accepts differing opinions. Because of these different cultural values, Iranian and Chinese children might take longer to understand that other people have different beliefs and opinions. This suggests that the development of theory of mind is not universal and solely determined by innate brain processes but also influenced by social and cultural factors.[10]
Theory of mind can help historians to more properly understand historical figures' characters, for exampleThomas Jefferson. Emancipationists likeDouglas L. Wilsonand scholars at the Thomas Jefferson Foundation view Jefferson as an opponent of slavery all his life, noting Jefferson's attempts within the limited range of options available to him to undermine slavery, his many attempts at abolition legislation, the manner in which he provided for slaves, and his advocacy of their more humane treatment. This view contrasts with that of revisionists likePaul Finkelman, who criticizes Jefferson for racism, slavery, and hypocrisy. Emancipationist views on this hypocrisy recognize that if he tried to be true to his word, it would have alienated his fellow Virginians. In another example,Franklin D. Rooseveltdid not join NAACP leaders in pushing for federal anti-lynching legislation, as he believed that such legislation was unlikely to pass and that his support for it would alienate Southern congressmen, including many of Roosevelt's fellow Democrats.
Whether children younger than three or four years old have a theory of mind is a topic of debate among researchers. It is a challenging question, due to the difficulty of assessing what pre-linguistic children understand about others and the world. Tasks used in research into the development of theory of mind must take into account theumwelt[64]of the pre-verbal child.
One of the most important milestones in theory of mind development is the ability to attributefalse belief: in other words, to understand that other people can believe things which are not true. To do this, it is suggested, one must understand how knowledge is formed, that people's beliefs are based on their knowledge, that mental states can differ from reality, and that people's behavior can be predicted by their mental states. Numerous versions of false-belief task have been developed, based on the initial task created by Wimmer and Perner (1983).[65]
In the most common version of the false-belief task (often called theSally-Anne test), children are told a story about Sally and Anne. Sally has a marble, which she places into her basket, and then leaves the room. While she is out of the room, Anne takes the marble from the basket and puts it into the box. The child being tested is then asked where Sally will look for the marble once she returns. The child passes the task if she answers that Sally will look in the basket, where Sally put the marble; the child fails the task if she answers that Sally will look in the box. To pass the task, the child must be able to understand that another's mental representation of the situation is different from their own, and the child must be able to predict behavior based on that understanding.[66]Another example depicts a boy who leaves chocolate on a shelf and then leaves the room. His mother puts it in the fridge. To pass the task, the child must understand that the boy, upon returning, holds the false belief that his chocolate is still on the shelf.[67]
The results of research using false-belief tasks have been called into question: most typically developing children are able to pass the tasks from around age four.[68]Yet early studies asserted that 80% of children diagnosed with autism were unable to pass this test, while children with other disabilities like Down syndrome were able to.[69]However, this assertion could not be replicated by later studies.[70][71][72][73]Instead, it was concluded that children fail these tests due to a lack of understanding of extraneous processes and a basic lack of mental processing capabilities.[74]
Adults may also struggle with false beliefs, for instance when they showhindsight bias.[75]In one experiment, adult subjects who were asked for an independent assessment were unable to disregard information on actual outcome. Also in experiments with complicated situations, when assessing others' thinking, adults can fail to correctly disregard certain information that they have been given.[67]
Other tasks have been developed to try to extend the false-belief task. In the "unexpected contents" or "smarties" task, experimenters ask children what they believe to be the contents of a box that looks as though it holds Smarties chocolates. After the child guesses "Smarties", it is shown that the box in fact contains pencils. The experimenter then re-closes the box and asks the child what another person, who has not been shown the true contents of the box, will think is inside. The child passes the task by responding that the other person will think the box contains Smarties, and fails the task by responding that the other person will think the box contains pencils. Gopnik & Astington found that children pass this test at age four or five years.[76]However, no consensus has yet been reached on the validity of such tests or on the reproducibility of their results.[77]
The "false-photograph" task[78]also measures theory of mind development. In this task, children must reason about what is represented in a photograph that differs from the current state of affairs. Within the false-photograph task, either a location or identity change exists.[79]In the location-change task, the examiner puts an object in one location (e.g. chocolate in an open green cupboard), whereupon the child takes a Polaroid photograph of the scene. While the photograph is developing, the examiner moves the object to a different location (e.g. a blue cupboard), allowing the child to view the examiner's action. The examiner asks the child two control questions: "When we first took the picture, where was the object?" and "Where is the object now?" The subject is also asked a "false-photograph" question: "Where is the object in the picture?" The child passes the task if he/she correctly identifies the location of the object in the picture and the actual location of the object at the time of the question. However, the last question might be misinterpreted as "Where in this room is the object that the picture depicts?" and therefore some examiners use an alternative phrasing.[80]
To make it easier for animals, young children, and individuals with classicalautismto understand and perform theory of mind tasks, researchers have developed tests in which verbal communication is de-emphasized: some whose administration does not involve verbal communication on the part of the examiner, some whose successful completion does not require verbal communication on the part of the subject, and some that meet both of those standards. One category of tasks uses a preferential-looking paradigm, withlooking timeas the dependent variable. For instance, nine-month-old infants prefer looking at behaviors performed by a human hand over those made by an inanimate hand-like object.[81]Other paradigms look at rates of imitative behavior, the ability to replicate and complete unfinished goal-directed acts,[31]and rates of pretend play.[82]
Research on the early precursors of theory of mind has invented ways to observe preverbal infants' understanding of other people's mental states, including perception and beliefs. Using a variety of experimental procedures, studies show that infants from their first year of life have an implicit understanding of what other people see[83]and what they know.[84][85]A popular paradigm used to study infants' theory of mind is the violation-of-expectation procedure, which exploits infants' tendency to look longer at unexpected and surprising events compared to familiar and expected events. The amount of time they look at an event gives researchers an indication of what infants might be inferring, or their implicit understanding of events. One study using this paradigm found that 16-month-olds tend to attribute beliefs to a person whose visual perception was previously witnessed as being "reliable", compared to someone whose visual perception was "unreliable". Specifically, 16-month-olds were trained to expect a person's excited vocalization and gaze into a container to be associated with finding a toy in the reliable-looker condition or an absence of a toy in the unreliable-looker condition. Following this training phase, infants witnessed, in an object-search task, the same persons searching for a toy either in the correct or incorrect location after they both witnessed the location of where the toy was hidden. Infants who experienced the reliable looker were surprised and therefore looked longer when the person searched for the toy in the incorrect location compared to the correct location. In contrast, the looking time for infants who experienced the unreliable looker did not differ for either search locations. These findings suggest that 16-month-old infants can differentially attribute beliefs about a toy's location based on the person's prior record of visual perception.[86]
With the methods used to test theory of mind, it has been experimentally shown that very simple robots that only react by reflexes and are not built to have any complex cognition at all can pass the tests for having theory of mind abilities that psychology textbooks assume to be exclusive to humans older than four or five years. Whether such a robot passes the test is influenced by completely non-cognitive factors such as placement of objects and the structure of the robot body influencing how the reflexes are conducted. It has therefore been suggested that theory of mind tests may not actually test cognitive abilities.[87]
Furthermore, early research into theory of mind in autistic children[69]is argued to constituteepistemological violencedue to implicit or explicit negative and universal conclusions about autistic individuals being drawn from empirical data that viably supports other (non-universal) conclusions.[88]
Theory of mind impairment, or mind-blindness, describes a difficulty someone would have with perspective-taking. Individuals with theory of mind impairment struggle to see phenomena from any perspective other than their own.[89]Individuals who experience a theory of mind deficit have difficulty determining the intentions of others, lack understanding of how their behavior affects others, and have a difficult time with social reciprocity.[90]Theory of mind deficits have been observed in people with autism spectrum disorders, schizophrenia, and nonverbal learning disorder, as well as in people under the influence of alcohol and narcotics, sleep-deprived people, and people who are experiencing severe emotional or physical pain. Theory of mind deficits have also been observed in deaf children who are late signers (i.e., are born to hearing parents), but such a deficit is due to the delay in language learning, not any cognitive deficit, and therefore disappears once the child learns sign language.[91]
In 1985 Simon Baron-Cohen, Alan M. Leslie, and Uta Frith suggested that children with autism do not employ theory of mind and that autistic children have particular difficulties with tasks requiring the child to understand another person's beliefs.[69]These difficulties persist when children are matched for verbal skills, and they have been taken as a key feature of autism.[92]However, in a 2019 review, Gernsbacher and Yergeau argued that "the claim that autistic people lack a theory of mind is empirically questionable", as there have been numerous failed replications of classic ToM studies and the meta-analytical effect sizes of such replications were minimal to small.[70]
Many individuals classified as autistic have severe difficulty assigning mental states to others, and some seem to lack theory of mind capabilities.[93]Researchers who study the relationship between autism and theory of mind attempt to explain the connection in a variety of ways. One account assumes that theory of mind plays a role in the attribution of mental states to others and in childhood pretend play.[94]According to Leslie,[94]theory of mind is the capacity to mentally represent thoughts, beliefs, and desires, regardless of whether the circumstances involved are real. This might explain why some autistic individuals show extreme deficits in both theory of mind and pretend play. However, Hobson proposes a social-affective justification,[95]in which deficits in theory of mind in autistic people result from a distortion in understanding and responding to emotions. He suggests that typically developing individuals, unlike autistic individuals, are born with a set of skills (such as social referencing ability) that later lets them comprehend and react to other people's feelings. Other scholars emphasize that autism involves a specific developmental delay, so that autistic children vary in their deficiencies, because they experience difficulty in different stages of growth. Very early setbacks can alter proper advancement of joint-attention behaviors, which may lead to a failure to form a full theory of mind.[93]
It has been speculated that theory of mind exists on acontinuumas opposed to the traditional view of a discrete presence or absence.[82]While some research has suggested that some autistic populations are unable to attribute mental states to others,[12]recent evidence points to the possibility of coping mechanisms that facilitate the attribution of mental states.[96]A binary view regarding theory of mind contributes to thestigmatizationof autistic adults who do possess perspective-taking capacity, as the assumption that autistic people do not have empathy can become a rationale fordehumanization.[97]
Tine et al. report that autistic children score substantially lower on measures of social theory of mind (i.e., "reasoning aboutothers'mental states", p. 1) in comparison to children diagnosed withAsperger syndrome.[98]
Generally, children with more advanced theory of mind abilities display more advanced social skills, greater adaptability to new situations, and greater cooperation with others. As a result, these children are typically well-liked. However, "children may use their mind-reading abilities to manipulate, outwit, tease, or trick their peers."[99]Individuals possessing inferior theory of mind skills, such as children with autism spectrum disorder, may be socially rejected by their peers since they are unable to communicate effectively.Social rejectionhas been proven to negatively impact a child's development and can put the child at greater risk of developing depressive symptoms.[100]
Peer-mediated interventions (PMI) are a school-based treatment approach for children and adolescents with autism spectrum disorder in which peers are trained to be role models in order to promote social behavior. Laghi et al. studied whether analysis of prosocial (nice) and antisocial (nasty) theory-of-mind behaviors could be used, in addition to teacher recommendations, to select appropriate candidates for PMI programs. Selecting children with advanced theory-of-mind skills who use them in prosocial ways will theoretically make the program more effective. While the results indicated that analyzing the social uses of theory of mind of possible candidates for a PMI program may increase the program's efficacy, it may not be a good predictor of a candidate's performance as a role model.[35]
A 2014 Cochrane review on interventions based on theory of mind found that such a theory could be taught to individuals with autism, but reported little evidence of skill maintenance, generalization to other settings, or developmental effects on related skills.[101]
Some 21st century studies have shown that the results of some studies of theory of mind tests on autistic people may be misinterpreted based on the double empathy problem, which proposes that, rather than autistic people specifically having trouble with theory of mind, autistic people and non-autistic people have equal difficulty understanding one another due to their neurological differences.[102]Studies have shown that autistic adults perform better in theory of mind tests when paired with other autistic adults[103]as well as possibly autistic close family members.[104]Academics who acknowledge the double empathy problem also propose that autistic people likely understand non-autistic people to a greater degree than vice versa, due to the necessity of functioning in a non-autistic society.[105]
Psychopathy is another condition of particular importance when discussing theory of mind. Psychopathic individuals show impaired emotional behavior, including a lack of emotional responsiveness to others and deficient empathy, as well as impaired social behavior, yet there is considerable controversy regarding their theory of mind.[106]Different studies provide contradictory findings on the correlation between psychopathy and theory of mind impairment.
There has been some speculation about similarities between autistic and psychopathic individuals in theory of mind performance. In a 2008 study, Happé's advanced test of theory of mind was administered to a group of 25 incarcerated psychopaths and 25 incarcerated non-psychopaths. Performance on the task did not differ between psychopaths and non-psychopaths; however, the psychopaths performed significantly better than the most highly able adult autistic population.[107]This argues against a similarity between autistic and psychopathic individuals in theory of mind.
It has repeatedly been suggested that a deficient or biased grasp of others' mental states, or theory of mind, could contribute to antisocial behavior, aggression, and psychopathy.[108]In one study using the "Reading the Mind in the Eyes" test, participants viewed photographs of the eye region of a face and had to attribute a mental state, or emotion, to the individual depicted. The test is informative because magnetic resonance imaging studies have shown that the task produces increased activity in the dorsolateral prefrontal and left medial frontal cortices, the superior temporal gyrus, and the left amygdala. Although there is extensive literature suggesting amygdala dysfunction in psychopathy, psychopathic and non-psychopathic adults performed equally well on this test,[108]which argues against a theory of mind impairment in psychopathic individuals.
In another study, a systematic review and meta-analysis pooled data from 42 studies and found that psychopathic traits are associated with impaired performance on theory of mind tasks. This relationship was not moderated by age, population, psychopathy measure (self-report versus clinical checklist), or theory of mind task type (cognitive versus affective).[109]This synthesis of earlier studies thus supports an association between psychopathy and theory of mind impairment.
In 2009, a study was conducted to test whether impairment in the emotional aspects of theory of mind, rather than in general theory of mind abilities, may account for some of the impaired social behavior in psychopathy.[106]This study involved criminal offenders diagnosed with antisocial personality disorder who had high psychopathy features, participants with localized lesions in the orbitofrontal cortex, participants with non-frontal lesions, and healthy control subjects. Subjects were tested with a task that examines affective versus cognitive theory of mind. They found that the individuals with psychopathy and those with orbitofrontal cortex lesions were both impaired on affective theory of mind, but not on cognitive theory of mind, when compared to the control group.[106]
Individuals diagnosed withschizophreniacan show deficits in theory of mind. Mirjam Sprong and colleagues investigated the impairment by examining 29 different studies, with a total of over 1500 participants.[110]Thismeta-analysisshowed significant and stable deficit of theory of mind in people with schizophrenia. They performed poorly on false-belief tasks, which test the ability to understand that others can hold false beliefs about events in the world, and also on intention-inference tasks, which assess the ability to infer a character's intention from reading a short story. Schizophrenia patients withnegative symptoms, such as lack of emotion, motivation, or speech, have the most impairment in theory of mind and are unable to represent the mental states of themselves and of others. Paranoid schizophrenic patients also perform poorly because they have difficulty accurately interpreting others' intentions. The meta-analysis additionally showed that IQ, gender, and age of the participants do not significantly affect the performance of theory of mind tasks.[110]
Research suggests that impairment in theory of mind negatively affects clinical insight—the patient's awareness of their mental illness.[111]Insight requires theory of mind; a patient must be able to adopt a third-person perspective and see the self as others do.[112]A patient with good insight can accurately self-represent, by comparing himself with others and by viewing himself from the perspective of others.[111]Insight allows a patient to recognize and react appropriately to his symptoms. A patient who lacks insight does not realize that he has a mental illness, because of his inability to accurately self-represent. Therapies that teach patients perspective-taking and self-reflection skills can improve abilities in reading social cues and taking the perspective of another person.[111]
Research indicates that theory-of-mind deficit is a stable trait-characteristic rather than a state-characteristic of schizophrenia.[113]The meta-analysis conducted by Sprong et al. showed that patients in remission still had impairment in theory of mind. This indicates that the deficit is not merely a consequence of the active phase of schizophrenia.[110]
Schizophrenic patients' deficit in theory of mind impairs their interactions with others. Theory of mind is particularly important for parents, who must understand the thoughts and behaviors of their children and react accordingly. Dysfunctional parenting is associated with deficits in the first-order theory of mind, the ability to understand another person's thoughts, and in the second-order theory of mind, the ability to infer what one person thinks about another person's thoughts.[114]Compared with healthy mothers, mothers with schizophrenia are found to be more remote, quiet, self-absorbed, insensitive, unresponsive, and to have fewer satisfying interactions with their children.[114]They also tend to misinterpret their children's emotional cues, and often misunderstand neutral faces as negative.[114]Activities such as role-playing and individual or group-based sessions are effective interventions that help the parents improve on perspective-taking and theory of mind.[114]There is a strong association between theory of mind deficit and parental role dysfunction.
Impairments in theory of mind, as well as other social-cognitive deficits, are commonly found in people who havealcohol use disorders, due to theneurotoxiceffects of alcohol on the brain, particularly theprefrontal cortex.[8]
Individuals in amajor depressive episode, a disorder characterized by social impairment, show deficits in theory of mind decoding.[115]Theory of mind decoding is the ability to use information available in the immediate environment (e.g., facial expression, tone of voice, body posture) to accurately label the mental states of others. The opposite pattern, enhanced theory of mind, is observed in individuals vulnerable to depression, including those individuals with pastmajor depressive disorder (MDD),[116]dysphoric individuals,[117]and individuals with a maternal history of MDD.[118]
Children diagnosed withdevelopmental language disorder(DLD) exhibit much lower scores on reading and writing sections of standardized tests, yet have a normal nonverbal IQ. These language deficits can be any specific deficits in lexical semantics, syntax, or pragmatics, or a combination of multiple problems. Such children often exhibit poorer social skills than normally developing children, and seem to have problems decoding beliefs in others. A recent meta-analysis confirmed that children with DLD have substantially lower scores on theory of mind tasks compared to typically developing children.[119]This strengthens the claim that language development is related to theory of mind.
Research on theory of mind inautismled to the view that mentalizing abilities are subserved by dedicated mechanisms that can—in some cases—be impaired while general cognitive function remains largely intact.
Neuroimagingresearch supports this view, demonstrating specific brain regions are consistently engaged during theory of mind tasks.Positron emission tomography(PET) research on theory of mind, using verbal and pictorial story comprehension tasks, identifies a set of brain regions including themedial prefrontal cortex(mPFC), and area around posteriorsuperior temporal sulcus(pSTS), and sometimesprecuneusandamygdala/temporopolar cortex.[120][121]Research on the neural basis of theory of mind has diversified, with separate lines of research focusing on the understanding of beliefs, intentions, and more complex properties of minds such as psychological traits.
Studies fromRebecca Saxe's lab at MIT, using a false-belief versus false-photograph task contrast aimed at isolating the mentalizing component of the false-belief task, have consistently found activation in the mPFC, precuneus, and temporoparietal junction (TPJ), right-lateralized.[122][123]In particular, Saxe et al. proposed that the right TPJ (rTPJ) is selectively involved in representing the beliefs of others.[124]Some debate exists, as the same rTPJ region is consistently activated during spatial reorienting of visual attention;[125][126]Jean Decetyfrom the University of Chicago and Jason Mitchell from Harvard thus propose that the rTPJ subserves a more general function involved in both false-belief understanding and attentional reorienting, rather than a mechanism specialized for social cognition. However, it is possible that the observation of overlapping regions for representing beliefs and attentional reorienting may simply be due to adjacent, but distinct, neuronal populations that code for each. The resolution of typical fMRI studies may not be good enough to show that distinct/adjacent neuronal populations code for each of these processes. In a study following Decety and Mitchell, Saxe and colleagues used higher-resolution fMRI and showed that the peak of activation for attentional reorienting is approximately 6–10 mm above the peak for representing beliefs. Further corroborating that differing populations of neurons may code for each process, they found no similarity in the patterning of fMRI response across space.[127]
Using single-cell recordings in the humandorsomedial prefrontal cortex(dmPFC), researchers atMGHidentified neurons that encode information about others' beliefs, which were distinct from self-beliefs, across different scenarios in a false-belief task. They further showed that these neurons could provide detailed information about others' beliefs, and could accurately predict these beliefs' verity.[128]These findings suggest a prominent role of distinct neuronal populations in the dmPFC in theory of mind complemented by the TPJ and pSTS.
Functional imaging also illuminates the detection of mental state information in animations of moving geometric shapes similar to those used in Heider and Simmel (1944),[129]which typical humans automatically perceive as social interactions laden with intention and emotion. Three studies found remarkably similar patterns of activation during the perception of such animations versus a random or deterministic motion control: mPFC, pSTS,fusiform face area(FFA), and amygdala were selectively engaged during the theory of mind condition.[130]Another study presented subjects with an animation of two dots moving with a parameterized degree of intentionality (quantifying the extent to which the dots chased each other), and found that pSTS activation correlated with this parameter.[131]
A separate body of research implicates the posterior superior temporal sulcus in the perception of intentionality in human action. This area is also involved in perceiving biological motion, including body, eye, mouth, and point-light display motion.[132]One study found increased pSTS activation while watching a human lift his hand versus having his hand pushed up by a piston (intentional versus unintentional action).[133]Several studies found increased pSTS activation when subjects perceive a human action that is incongruent with the action expected from the actor's context and inferred intention. Examples would be: a human performing a reach-to-grasp motion on empty space next to an object, versus grasping the object;[134]a human shifting eye gaze toward empty space next to a checkerboard target versus shifting gaze toward the target;[135]an unladen human turning on a light with his knee, versus turning on a light with his knee while carrying a pile of books;[136]and a walking human pausing as he passes behind a bookshelf, versus walking at a constant speed.[137]In these studies, actions in the "congruent" case have a straightforward goal, and are easy to explain in terms of the actor's intention. The incongruent actions, on the other hand, require further explanation (why would someone twist empty space next to a gear?), and apparently demand more processing in the STS. This region is distinct from the temporoparietal area activated during false belief tasks.[137]pSTS activation in most of the above studies was largely right-lateralized, following the general trend in neuroimaging studies of social cognition and perception. Also right-lateralized are the TPJ activation during false belief tasks, the STS response to biological motion, and the FFA response to faces.
Neuropsychologicalevidence supports neuroimaging results regarding the neural basis of theory of mind. Studies with patients with a lesion of thefrontal lobesand thetemporoparietal junctionof the brain (between thetemporal lobeandparietal lobe) report that they have difficulty with some theory of mind tasks.[138]This shows that theory of mind abilities are associated with specific parts of the human brain. However, the fact that themedial prefrontal cortexand temporoparietal junction are necessary for theory of mind tasks does not imply that these regions are specific to that function.[125][139]TPJ and mPFC may subserve more general functions necessary for Theory of Mind.
Research byVittorio Gallese, Luciano Fadiga, andGiacomo Rizzolatti[140]shows that some sensorimotorneurons, referred to asmirror neuronsand first discovered in thepremotor cortexofrhesus monkeys, may be involved in action understanding. Single-electrode recording revealed that these neurons fired when a monkey performed an action, as well as when the monkey viewed another agent performing the same action.fMRIstudies with human participants show brain regions (assumed to contain mirror neurons) that are active when one person sees another person's goal-directed action.[141]These data led some authors to suggest that mirror neurons may provide the basis for theory of mind in the brain, and to supportsimulation theory of mind reading.[142]
There is also evidence against a link between mirror neurons and theory of mind. First,macaque monkeyshave mirror neurons but do not seem to have a 'human-like' capacity to understand theory of mind and belief. Second, fMRI studies of theory of mind typically report activation in the mPFC, temporal poles, and TPJ or STS,[143]but those brain areas are not part of the mirror neuron system. Some investigators, like developmental psychologistAndrew Meltzoffand neuroscientistJean Decety, believe that mirror neurons merely facilitate learning through imitation and may provide a precursor to the development of theory of mind.[144]Others, like philosopherShaun Gallagher, suggest that mirror-neuron activation, on a number of counts, fails to meet the definition of simulation as proposed by the simulation theory of mindreading.[145][146]
Several neuroimaging studies have looked at the neural basis for theory of mind impairment in subjects withAsperger syndromeandhigh-functioning autism(HFA). The first PET study of theory of mind in autism (also the first neuroimaging study using a task-induced activation paradigm in autism) replicated a prior study in non autistic individuals, which employed a story-comprehension task.[147]This study found displaced and diminishedmPFCactivation in subjects with autism. However, because the study used only six subjects with autism, and because the spatial resolution of PET imaging is relatively poor, these results should be considered preliminary.
A subsequent fMRI study scanned normally developing adults and adults with HFA while performing a "reading the mind in the eyes" task: viewing a photo of a human's eyes and choosing which of two adjectives better describes the person's mental state, versus a gender discrimination control.[148]The authors found activity inorbitofrontal cortex, STS, and amygdala in normal subjects, and found less amygdala activation and abnormal STS activation in subjects with autism.
A more recent PET study looked at brain activity in individuals with HFA and Asperger syndrome while viewing Heider-Simmel animations (see above) versus a random motion control.[149]In contrast to normally-developing subjects, those with autism showed little STS or FFA activation, and less mPFC and amygdala activation. Activity in extrastriate regions V3 and LO was identical across the two groups, suggesting intact lower-level visual processing in the subjects with autism. The study also reported less functional connectivity between STS and V3 in the autism group. However, decreased temporal correlation between activity in STS and V3 would be expected simply from the lack of an evoked response in STS to intent-laden animations in subjects with autism. A more informative analysis would be to compute functional connectivity after regressing out evoked responses from all time series.
A subsequent study, using the incongruent/congruent gaze-shift paradigm described above, found that in high-functioning adults with autism, posterior STS (pSTS) activation was undifferentiated while they watched a human shift gaze toward a target and then toward adjacent empty space.[150]The lack of additional STS processing in the incongruent state may suggest that these subjects fail to form an expectation of what the actor should do given contextual information, or that feedback about the violation of this expectation does not reach STS. Both explanations involve an impairment or deficit in the ability to link eye gaze shifts with intentional explanations. This study also found a significant anticorrelation between STS activation in the incongruent-congruent contrast and social subscale score on theAutism Diagnostic Interview-Revised, but not scores on the other subscales.
An fMRI study demonstrated that the righttemporoparietal junction(rTPJ) of higher-functioning adults with autism was not more selectively activated for mentalizing judgments when compared to physical judgments about self and other.[151]rTPJ selectivity for mentalizing was also related to individual variation on clinical measures of social impairment: individuals whose rTPJ was increasingly more active for mentalizing compared to physical judgments were less socially impaired, while those who showed little to no difference in response to mentalizing or physical judgments were the most socially impaired. This evidence builds on work in typical development that suggests rTPJ is critical for representing mental state information, whether it is about oneself or others. It also points to an explanation at the neural level for the pervasivemind-blindnessdifficulties in autism that are evident throughout the lifespan.[152]
The brain regions associated with theory of mind include thesuperior temporal gyrus(STS), the temporoparietal junction (TPJ), the medial prefrontal cortex (mPFC), the precuneus, and the amygdala.[153]The reduced activity in the mPFC of individuals with schizophrenia is associated with theory of mind deficit and may explain impairments in social function among people with schizophrenia.[154]Increased neural activity in mPFC is related to better perspective-taking, emotion management, and increased social functioning.[154]Disrupted brain activities in areas related to theory of mind may increase social stress or disinterest in social interaction, and contribute to the social dysfunction associated with schizophrenia.[154]
Group member average scores of theory of mind abilities, measured with the Reading the Mind in the Eyes test[155](RME), are possibly drivers of successful group performance.[156]High group average scores on the RME are correlated with the collective intelligence factor c, defined as a group's ability to perform a wide range of mental tasks,[156][157]a group intelligence measure similar to the g factor for general individual intelligence. RME is a theory of mind test for adults[155]that shows sufficient test-retest reliability[158]and consistently differentiates control groups from individuals with functional autism or Asperger syndrome.[155]It is one of the most widely accepted and well-validated tests of theory of mind abilities in adults.[159]
The evolutionary origin of theory of mind remains obscure. While many theories make claims about its role in the development of human language and social cognition, few of them specify in detail any evolutionary neurophysiological precursors. One theory claims that theory of mind has its roots in two defensive reactions—immobilization stress andtonic immobility—which are implicated in the handling of stressful encounters and also figure prominently in mammalian childrearing practice.[160]Their combined effect seems capable of producing many of the hallmarks of theory of mind, such as eye-contact, gaze-following, inhibitory control, and intentional attributions.
An open question is whether non-human animals have ageneticendowment andsocialenvironment that allows them to acquire a theory of mind like human children do.[11]This is a contentious issue because of the difficulty of inferring fromanimal behaviorthe existence ofthinkingor of particular thoughts, or the existence of a concept ofselforself-awareness,consciousness, andqualia. One difficulty with non-human studies of theory of mind is the lack of sufficient numbers of naturalistic observations, giving insight into what the evolutionary pressures might be on a species' development of theory of mind.
Non-human research still has a major place in this field. It is especially useful in illuminating which nonverbal behaviors signify components of theory of mind, and in pointing to possible stepping points in the evolution of that aspect of social cognition. While it is difficult to study human-like theory of mind and mental states in species of whose potential mental states we have an incomplete understanding, researchers can focus on simpler components of more complex capabilities. For example, many researchers focus on animals' understanding of intention, gaze, perspective, or knowledge (of what another being has seen). A study that looked at understanding of intention in orangutans, chimpanzees, and children showed that all three species understood the difference between accidental and intentional acts.[30]
Individuals exhibit theory of mind by extrapolating another's internal mental states from their observable behavior. So one challenge in this line of research is to distinguish this from more run-of-the-mill stimulus-response learning, with the other's observable behavior being the stimulus.
Most non-human theory of mind research has focused on monkeys and great apes, which are of most interest in the study of the evolution of human social cognition. Other studies relevant to the attribution of theory of mind have been conducted using plovers[161]and dogs,[162]which show preliminary evidence of understanding attention—one precursor of theory of mind—in others.
There has been some controversy over the interpretation of evidence purporting to show theory of mind ability—or inability—in animals.[163]For example, Povinelliet al.[164]presented chimpanzees with the choice of two experimenters from whom to request food: one who had seen where food was hidden, and one who, by virtue of one of a variety of mechanisms (having a bucket or bag over his head, a blindfold over his eyes, or being turned away from the baiting) does not know, and can only guess. They found that the animals failed in most cases to differentially request food from the "knower". By contrast, Hare, Call, and Tomasello found that subordinate chimpanzees were able to use the knowledge state of dominant rival chimpanzees to determine which container of hidden food they approached.[53]William Field andSue Savage-Rumbaughbelieve that bonobos have developed theory of mind, and cite their communications with a captive bonobo,Kanzi, as evidence.[165]
In one experiment, ravens (Corvus corax) took into account visual access of unseen conspecifics. The researchers argued that "ravens can generalize from their own perceptual experience to infer the possibility of being seen".[166]
Evolutionary anthropologist Christopher Krupenye studied the existence of theory of mind, and particularly false beliefs, in non-human primates.[167]
Keren HaroushandZiv Williamsoutlined the case for a group ofneuronsin primates' brains that uniquely predicted the choice selection of their interacting partner. These primates' neurons, located in theanterior cingulate cortexof rhesus monkeys, were observed using single-unit recording while the monkeys played a variant of the iterativeprisoner's dilemmagame.[168]By identifying cells that represent the yet unknown intentions of a game partner, Haroush & Williams' study supports the idea that theory of mind may be a fundamental and generalized process, and suggests thatanterior cingulate cortexneurons may act to complement the function of mirror neurons duringsocial interchange.[169]
|
https://en.wikipedia.org/wiki/Theory_of_mind
|
Neural architecture search (NAS)[1][2]is a technique for automating the design of artificial neural networks (ANN), a widely used class of models in machine learning. NAS has been used to design networks that are on par with or outperform hand-designed architectures.[3][4]Methods for NAS can be categorized according to the search space, the search strategy, and the performance estimation strategy they use.[1]
NAS is closely related tohyperparameter optimization[5]andmeta-learning[6]and is a subfield ofautomated machine learning(AutoML).[7]
Reinforcement learning(RL) can underpin a NAS search strategy. Barret Zoph andQuoc Viet Le[3]applied NAS with RL targeting theCIFAR-10dataset and achieved a network architecture that rivals the best manually-designed architecture for accuracy, with an error rate of 3.65, 0.09 percent better and 1.05x faster than a related hand-designed model. On thePenn Treebankdataset, that model composed a recurrent cell that outperformsLSTM, reaching a test set perplexity of 62.4, or 3.6 perplexity better than the prior leading system. On the PTB character language modeling task it achieved bits per character of 1.214.[3]
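The control loop behind this kind of RL-based search can be illustrated with a minimal REINFORCE-style sketch. The search space, the proxy evaluate function (a stand-in for actually training each sampled network), and all hyperparameters below are invented for illustration and are not the configuration used by Zoph and Le:

```python
# Toy REINFORCE-style controller for NAS (illustrative only).
# evaluate() is a hypothetical proxy for "train the sampled network and
# return validation accuracy"; in real NAS this step dominates the cost.
import math
import random

SEARCH_SPACE = {
    "num_layers": [2, 4, 6],
    "filters": [16, 32, 64],
    "kernel": [3, 5, 7],
}

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def evaluate(arch):
    # Invented proxy reward: prefers deeper nets with 5x5 kernels.
    return (arch["num_layers"] / 6.0) * 0.6 + (1.0 - abs(arch["kernel"] - 5) / 4.0) * 0.4

# One logit vector per decision in the search space (the controller's policy).
logits = {k: [0.0] * len(v) for k, v in SEARCH_SPACE.items()}
lr, baseline = 0.5, 0.0

for step in range(200):
    # Sample an architecture from the current policy.
    choices, arch = {}, {}
    for k, options in SEARCH_SPACE.items():
        probs = softmax(logits[k])
        idx = random.choices(range(len(options)), weights=probs)[0]
        choices[k], arch[k] = idx, options[idx]
    reward = evaluate(arch)
    baseline = 0.9 * baseline + 0.1 * reward   # moving-average baseline
    advantage = reward - baseline
    # REINFORCE update: raise the log-probability of the sampled choices
    # in proportion to the advantage.
    for k, options in SEARCH_SPACE.items():
        probs = softmax(logits[k])
        for i in range(len(options)):
            grad = (1.0 if i == choices[k] else 0.0) - probs[i]
            logits[k][i] += lr * advantage * grad

best = {k: SEARCH_SPACE[k][max(range(len(v)), key=lambda i: v[i])] for k, v in logits.items()}
print("most probable architecture:", best)
```

In a real system the reward would be the validation accuracy of a fully trained child network, which is what makes RL-based NAS so computationally expensive.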
Learning a model architecture directly on a large dataset can be a lengthy process. NASNet[4][8]addressed this issue by transferring a building block designed for a small dataset to a larger dataset. The design was constrained to use two types of convolutional cells to return feature maps that serve two main functions when convolving an input feature map: normal cells that return maps of the same extent (height and width) and reduction cells in which the returned feature map height and width is reduced by a factor of two. For the reduction cell, the initial operation applied to the cell's inputs uses a stride of two (to reduce the height and width).[4]The learned aspects of the design included elements such as which lower layer(s) each higher layer took as input, the transformations applied at that layer, and how multiple outputs are merged at each layer. In the studied example, the best convolutional layer (or "cell") was designed for the CIFAR-10 dataset and then applied to the ImageNet dataset by stacking copies of this cell, each with its own parameters. The approach yielded accuracy of 82.7% top-1 and 96.2% top-5. This exceeded the best human-invented architectures at a cost of 9 billion fewer FLOPS—a reduction of 28%. The system continued to exceed the manually-designed alternative at varying computation levels. The image features learned from image classification can be transferred to other computer vision problems. For example, for object detection, the learned cells integrated with the Faster-RCNN framework improved performance by 4.0% on the COCO dataset.[4]
In the so-called Efficient Neural Architecture Search (ENAS), a controller discovers architectures by learning to search for an optimal subgraph within a large graph. The controller is trained with policy gradient to select a subgraph that maximizes the validation set's expected reward. The model corresponding to the subgraph is trained to minimize a canonical cross-entropy loss. Because multiple child models share parameters, ENAS requires fewer GPU-hours than other approaches and 1000-fold fewer than "standard" NAS. On CIFAR-10, the ENAS design achieved a test error of 2.89%, comparable to NASNet. On Penn Treebank, the ENAS design reached test perplexity of 55.8.[9]
An alternative approach to NAS is based onevolutionary algorithms, which has been employed by several groups.[10][11][12][13][14][15][16]An Evolutionary Algorithm for Neural Architecture Search generally performs the following procedure.[17]First a pool consisting of different candidate architectures along with their validation scores (fitness) is initialised. At each step the architectures in the candidate pool are mutated (e.g.: 3x3 convolution instead of a 5x5 convolution). Next the new architectures are trained from scratch for a few epochs and their validation scores are obtained. This is followed by replacing the lowest scoring architectures in the candidate pool with the better, newer architectures. This procedure is repeated multiple times and thus the candidate pool is refined over time. Mutations in the context of evolving ANNs are operations such as adding or removing a layer, which include changing the type of a layer (e.g., from convolution to pooling), changing the hyperparameters of a layer, or changing the training hyperparameters. OnCIFAR-10andImageNet, evolution and RL performed comparably, while both slightly outperformedrandom search.[13][12]
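The procedure described above can be sketched in a few lines of Python. The architecture encoding and the fitness function here are toy stand-ins (a real run would briefly train each child network to obtain its validation score):

```python
# Toy evolutionary NAS loop mirroring the procedure described above.
# fitness() is a hypothetical proxy for "train briefly and measure
# validation accuracy"; a real run would train each candidate.
import random

KERNELS = [3, 5, 7]
DEPTHS = [2, 4, 6, 8]

def random_arch():
    return {"kernel": random.choice(KERNELS), "depth": random.choice(DEPTHS)}

def mutate(arch):
    child = dict(arch)
    if random.random() < 0.5:
        child["kernel"] = random.choice(KERNELS)   # e.g. 3x3 instead of 5x5
    else:
        child["depth"] = random.choice(DEPTHS)     # add or remove layers
    return child

def fitness(arch):
    # Invented score standing in for validation accuracy.
    return arch["depth"] * 0.1 - abs(arch["kernel"] - 5) * 0.05 + random.gauss(0, 0.01)

# Initialise the candidate pool with random architectures and their scores.
pool = [(a, fitness(a)) for a in (random_arch() for _ in range(10))]

for generation in range(50):
    parent, _ = random.choice(pool)        # pick a parent from the pool
    child = mutate(parent)                 # mutate it
    pool.append((child, fitness(child)))   # score the new architecture
    pool.sort(key=lambda x: x[1])
    pool.pop(0)                            # drop the lowest-scoring candidate

best_arch, best_score = max(pool, key=lambda x: x[1])
print("best architecture:", best_arch, "score:", round(best_score, 3))
```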
Bayesian Optimization(BO), which has proven to be an efficient method for hyperparameter optimization, can also be applied to NAS. In this context, the objective function maps an architecture to its validation error after being trained for a number of epochs. At each iteration, BO uses a surrogate to model this objective function based on previously obtained architectures and their validation errors. One then chooses the next architecture to evaluate by maximizing an acquisition function, such as expected improvement, which provides a balance between exploration and exploitation. Acquisition function maximization and objective function evaluation are often computationally expensive for NAS, and make the application of BO challenging in this context. Recently, BANANAS[18]has achieved promising results in this direction by introducing a high-performing instantiation of BO coupled to a neural predictor.
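A minimal sketch of this loop is shown below, using a small Gaussian-process surrogate with an RBF kernel and the expected-improvement acquisition over a fixed candidate pool. The architecture encoding, the true_error stand-in for training, and all constants are assumptions made for illustration; systems such as BANANAS replace the GP with a neural predictor and use far richer encodings:

```python
# Minimal Bayesian-optimization loop over a discrete architecture space.
# true_error() is a hypothetical stand-in for "train and measure validation error".
import math
import numpy as np

rng = np.random.default_rng(0)

def encode(arch):
    # Architecture -> fixed-length numeric vector (depth, width, kernel).
    return np.array(arch, dtype=float)

def true_error(arch):
    depth, width, kernel = arch
    return 0.1 + 0.02 * abs(depth - 6) + 0.01 * abs(width - 64) / 16 + 0.02 * abs(kernel - 5)

CANDIDATES = [(d, w, k) for d in (2, 4, 6, 8) for w in (16, 32, 64, 128) for k in (3, 5, 7)]

def rbf(A, B, ls=4.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def gp_posterior(X, y, Xs, noise=1e-6):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kss = rbf(Xs, Xs)
    sol = np.linalg.solve(K, y)
    mu = Ks.T @ sol
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # We minimise error, so improvement = best - mu.
    z = (best - mu) / sigma
    cdf = 0.5 * (1 + np.vectorize(math.erf)(z / math.sqrt(2)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    return (best - mu) * cdf + sigma * pdf

# Start from a few random evaluations, then iterate:
# fit surrogate -> maximise acquisition -> evaluate the chosen architecture.
observed = [CANDIDATES[i] for i in rng.choice(len(CANDIDATES), 3, replace=False)]
errors = [true_error(a) for a in observed]
for _ in range(10):
    X = np.stack([encode(a) for a in observed])
    Xs = np.stack([encode(a) for a in CANDIDATES])
    mu, sigma = gp_posterior(X, np.array(errors), Xs)
    ei = expected_improvement(mu, sigma, min(errors))
    nxt = CANDIDATES[int(np.argmax(ei))]
    observed.append(nxt)
    errors.append(true_error(nxt))

print("best architecture found:", observed[int(np.argmin(errors))])
```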
Another group used ahill climbingprocedure that applies network morphisms, followed by short cosine-annealing optimization runs. The approach yielded competitive results, requiring resources on the same order of magnitude as training a single network. E.g., on CIFAR-10, the method designed and trained a network with an error rate below 5% in 12 hours on a single GPU.[19]
While most approaches solely focus on finding architecture with maximal predictive performance, for most practical applications other objectives are relevant, such as memory consumption, model size or inference time (i.e., the time required to obtain a prediction). Because of that, researchers created amulti-objectivesearch.[16][20]
LEMONADE[16]is an evolutionary algorithm that adoptedLamarckismto efficiently optimize multiple objectives. In every generation, child networks are generated to improve thePareto frontierwith respect to the current population of ANNs.
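Multi-objective NAS methods of this kind maintain a set of non-dominated architectures rather than a single best one. The helper below sketches only that Pareto-filtering step for two hypothetical objectives (validation error and parameter count); it is not the LEMONADE algorithm itself, which additionally generates children via Lamarckian network morphisms:

```python
# Non-dominated filtering: keep architectures that no other candidate beats
# on both objectives (lower validation error AND fewer parameters is better).
# Illustrative helper only; the names and numbers below are invented.
def pareto_front(candidates):
    """candidates: list of (name, error, num_params) tuples."""
    front = []
    for name, err, size in candidates:
        dominated = any(
            (e <= err and s <= size) and (e < err or s < size)
            for _, e, s in candidates
        )
        if not dominated:
            front.append((name, err, size))
    return front

models = [("A", 0.08, 5e6), ("B", 0.07, 9e6), ("C", 0.09, 3e6), ("D", 0.10, 8e6)]
print(pareto_front(models))  # D is dominated by A; A, B, and C remain on the front
```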
Neural Architect[20]is claimed to be a resource-aware multi-objective RL-based NAS with network embedding and performance prediction. Network embedding encodes an existing network to a trainable embedding vector. Based on the embedding, a controller network generates transformations of the target network. A multi-objective reward function considers network accuracy, computational resource and training time. The reward is predicted by multiple performance simulation networks that are pre-trained or co-trained with the controller network. The controller network is trained via policy gradient. Following a modification, the resulting candidate network is evaluated by both an accuracy network and a training time network. The results are combined by a reward engine that passes its output back to the controller network.
RL-based and evolution-based NAS methods require thousands of GPU-days of searching and training to achieve state-of-the-art computer vision results, as described in the NASNet, mNASNet and MobileNetV3 papers.[4][21][22]
To reduce computational cost, many recent NAS methods rely on the weight-sharing idea.[23][24]In this approach, a single overparameterized supernetwork (also known as the one-shot model) is defined. A supernetwork is a very largeDirected Acyclic Graph(DAG) whose subgraphs are different candidate neural networks. Thus, in a supernetwork, the weights are shared among a large number of different sub-architectures that have edges in common, each of which is considered as a path within the supernet. The essential idea is to train one supernetwork that spans many options for the final design rather than generating and training thousands of networks independently. In addition to the learned parameters, a set of architecture parameters are learnt to depict preference for one module over another. Such methods reduce the required computational resources to only a few GPU days.
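A minimal single-path supernetwork can be sketched as follows, assuming PyTorch; the candidate operations, dimensions, and random-path sampling strategy are illustrative simplifications of what systems such as ENAS or one-shot NAS actually use:

```python
# Minimal weight-sharing supernetwork sketch (single-path sampling).
import random
import torch
import torch.nn as nn

class MixedLayer(nn.Module):
    """One layer of the supernet: several candidate ops share the same slot."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),  # skip connection
        ])

    def forward(self, x, choice):
        return self.ops[choice](x)

class SuperNet(nn.Module):
    def __init__(self, channels=16, depth=4, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.layers = nn.ModuleList([MixedLayer(channels) for _ in range(depth)])
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x, path):
        x = self.stem(x)
        for layer, choice in zip(self.layers, path):
            x = torch.relu(layer(x, choice))
        x = x.mean(dim=(2, 3))          # global average pooling
        return self.head(x)

net = SuperNet()
opt = torch.optim.SGD(net.parameters(), lr=0.01)
images = torch.randn(8, 3, 32, 32)      # dummy batch
labels = torch.randint(0, 10, (8,))

# Each step samples a random sub-architecture (a path through the DAG);
# every sampled path updates the same shared weights.
for step in range(5):
    path = [random.randrange(3) for _ in net.layers]
    loss = nn.functional.cross_entropy(net(images, path), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```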
More recent works further combine this weight-sharing paradigm with a continuous relaxation of the search space,[25][26][27][28]which enables the use of gradient-based optimization methods. These approaches are generally referred to as differentiable NAS and have proven very efficient in exploring the search space of neural architectures. One of the most popular algorithms among the gradient-based methods for NAS is DARTS.[27]However, DARTS faces problems such as performance collapse due to an inevitable aggregation of skip connections, as well as poor generalization, which were tackled by subsequent algorithms.[29][30][31][32]Methods such as those of[30][31]aim at making DARTS more robust and at smoothing the validation-accuracy landscape by introducing a Hessian-norm-based regularisation and random smoothing/adversarial attack, respectively. The cause of the performance degradation was later analyzed from the perspective of architecture selection.[33]
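The core of the continuous relaxation can be illustrated with a single DARTS-style mixed operation, again assuming PyTorch; a full DARTS implementation additionally alternates between updating network weights on training data and architecture parameters on validation data (a bilevel optimization), which is omitted here:

```python
# Sketch of a DARTS-style continuous relaxation for a single edge:
# the discrete choice among candidate ops is replaced by a softmax-weighted
# mixture, so the architecture parameters alpha can be learned by gradient descent.
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Identity(),
        ])
        # One architecture parameter per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

mixed = MixedOp(channels=16)
x = torch.randn(2, 16, 32, 32)
out = mixed(x)              # weighted mixture of all candidate operations
out.mean().backward()       # gradients flow to both the op weights and alpha

# After the search, the op with the largest alpha would be kept ("discretization").
print("strongest op index:", int(torch.argmax(mixed.alpha)))
```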
Differentiable NAS has shown to produce competitive results using a fraction of the search-time required by RL-based search methods. For example, FBNet (which is short for Facebook Berkeley Network) demonstrated that supernetwork-based search produces networks that outperform the speed-accuracy tradeoff curve of mNASNet and MobileNetV2 on the ImageNet image-classification dataset. FBNet accomplishes this using over 400xlesssearch time than was used for mNASNet.[34][35][36]Further, SqueezeNAS demonstrated that supernetwork-based NAS produces neural networks that outperform the speed-accuracy tradeoff curve of MobileNetV3 on the Cityscapes semantic segmentation dataset, and SqueezeNAS uses over 100x less search time than was used in the MobileNetV3 authors' RL-based search.[37][38]
Neural architecture search often requires large computational resources, due to its expensive training and evaluation phases. This further leads to a large carbon footprint required for the evaluation of these methods. To overcome this limitation, NAS benchmarks[39][40][41][42]have been introduced, from which one can either query or predict the final performance of neural architectures in seconds. A NAS benchmark is defined as a dataset with a fixed train-test split, a search space, and a fixed training pipeline (hyperparameters). There are primarily two types of NAS benchmarks: a surrogate NAS benchmark and a tabular NAS benchmark. A surrogate benchmark uses a surrogate model (e.g.: a neural network) to predict the performance of an architecture from the search space. On the other hand, a tabular benchmark queries the actual performance of an architecture trained up to convergence. Both of these benchmarks are queryable and can be used to efficiently simulate many NAS algorithms using only a CPU to query the benchmark instead of training an architecture from scratch.
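The workflow enabled by such benchmarks can be sketched as a simple lookup: a search algorithm proposes architectures and, instead of training them, queries a pre-computed table. The table and accuracies below are fabricated purely for illustration; real benchmarks such as NAS-Bench-101 and NAS-Bench-201 ship their own query APIs:

```python
# Simulating a NAS algorithm against a (hypothetical) tabular benchmark:
# instead of training, the final accuracy of each architecture is looked up.
import random

TABULAR_BENCHMARK = {
    ("conv3x3", "conv3x3", "skip"): 0.912,
    ("conv3x3", "conv5x5", "skip"): 0.905,
    ("conv5x5", "conv5x5", "pool"): 0.887,
    ("skip",    "conv3x3", "pool"): 0.871,
}

def query(arch):
    """Return the pre-computed test accuracy for an architecture."""
    return TABULAR_BENCHMARK[arch]

# Random search "runs" in milliseconds on a CPU because no training happens.
best_arch, best_acc = None, 0.0
for _ in range(100):
    arch = random.choice(list(TABULAR_BENCHMARK))
    acc = query(arch)
    if acc > best_acc:
        best_arch, best_acc = arch, acc

print(best_arch, best_acc)
```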
|
https://en.wikipedia.org/wiki/Neural_architecture_search
|
NNI (Neural Network Intelligence) is a free and open-source AutoML toolkit developed by Microsoft.[3][4]It is used to automate feature engineering, model compression, neural architecture search, and hyper-parameter tuning.[5][6]
The source code is licensed under the MIT License and available on GitHub.[7]
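A typical hyperparameter-tuning trial written against NNI's Python API might look roughly like the sketch below; the model training is replaced by a placeholder objective, and the parameter names are assumptions (the actual search space is declared separately in the experiment configuration and launched with the nnictl command-line tool):

```python
# Minimal sketch of an NNI hyperparameter-tuning trial script.
# nni.get_next_parameter() fetches the hyperparameters proposed by the tuner
# for this trial, and nni.report_final_result() sends the metric back.
# The "training" below is a placeholder standing in for a real model.
import nni

def train_and_evaluate(lr, batch_size):
    # Placeholder objective: pretend lr near 0.01 and batch size 64 work best.
    return 0.9 - abs(lr - 0.01) * 5 - abs(batch_size - 64) / 1000.0

if __name__ == "__main__":
    params = nni.get_next_parameter()            # e.g. {"lr": 0.01, "batch_size": 64}
    accuracy = train_and_evaluate(
        lr=params.get("lr", 0.01),
        batch_size=params.get("batch_size", 64),
    )
    nni.report_final_result(accuracy)            # metric consumed by the tuner
```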
|
https://en.wikipedia.org/wiki/Neural_Network_Intelligence
|
ModelOps (model operations or model operationalization), as defined by Gartner, "is focused primarily on the governance and lifecycle management of a wide range of operationalized artificial intelligence (AI) and decision models, including machine learning, knowledge graphs, rules, optimization, linguistic and agent-based models" in Multi-Agent Systems.[1]"ModelOps lies at the heart of any enterprise AI strategy".[2]It orchestrates the model lifecycles of all models in production across the entire enterprise, from putting a model into production, then evaluating and updating the resulting application according to a set of governance rules, including both technical and business key performance indicators (KPI's). It grants business domain experts the capability to evaluate AI models in production, independent of data scientists.[3]
AForbesarticle promoted ModelOps: "As enterprises scale up their AI initiatives to become a true Enterprise AI organization, having full operationalized analytics capability puts ModelOps in the center, connecting bothDataOpsandDevOps."[4]
In a 2018 Gartner survey, 37% of respondents reported that they had deployed AI in some form; however, Gartner pointed out that enterprises were still far from implementing AI, citing deployment challenges.[5]Enterprises were accumulating undeployed, unused, and unrefreshed models, while the models that were deployed were often deployed manually at the business-unit level, increasing the risk exposure of the entire enterprise.[6]Independent analyst firm Forrester also covered this topic in a 2018 report on machine learning and predictive analytics vendors: "Data scientists regularly complain that their models are only sometimes or never deployed. A big part of the problem is organizational chaos in understanding how to apply and design models into applications. But another big part of the problem is technology. Models aren't like software code because they need model management."[7]
In December 2018, Waldemar Hummer and Vinod Muthusamy of IBM Research AI, proposed ModelOps as "a programming model for reusable, platform-independent, and composable AI workflows" on IBM Programming Languages Day.[8]In their presentation, they noted the difference between the application development lifecycle, represented byDevOps, and the AI application lifecycle.[9]
The goal for developing ModelOps was to address the gap between model deployment and model governance, ensuring that all models were running in production with strong governance, aligned with technical and business KPI's, while managing the risk. In their presentation, Hummer and Muthusamy described a programmatic solution for AI-aware staged deployment and reusable components that would enable model versions to match business apps, and which would include AI model concepts such as model monitoring, drift detection, and active learning. The solution would also address the tension between model performance and business KPI's, application and model logs, and model proxies and evolving policies. Various cloud platforms were part of the proposal. In June 2019, Hummer, Muthusamy, Thomas Rausch, Parijat Dube, and Kaoutar El Maghraoui presented a paper at the 2019 IEEE International Conference on Cloud Engineering (IC2E).[10]The paper expanded on their 2018 presentation, proposing ModelOps as a cloud-based framework and platform for end-to-end development and lifecycle management of artificial intelligence (AI) applications. In the abstract, they stated that the framework would show how it is possible to extend the principles of software lifecycle management to enable automation, trust, reliability, traceability, quality control, and reproducibility of AI model pipelines.[11]In March 2020, ModelOp, Inc. published the first comprehensive guide to ModelOps methodology. The objective of this publication was to provide an overview of the capabilities of ModelOps, as well as the technical and organizational requirements for implementing ModelOps practices.[12]
One typical use case for ModelOps is in the financial services sector, where hundreds oftime-seriesmodels are used to focus on strict rules for bias and auditability. In these cases, model fairness and robustness are critical, meaning the models have to be fair and accurate, and they have to run reliably. ModelOps automates the model lifecycle of models in production. Such automation includes designing the model lifecycle, inclusive of technical, business and compliance KPI's and thresholds, to govern and monitor the model as it runs, monitoring the models for bias and other technical and business anomalies, and updating the model as needed without disrupting the applications. ModelOps is the dispatcher that keeps all of the trains running on time and on the right track, ensuring risk control, compliance and business performance.
Another use case is the monitoring of a diabetic's blood sugar levels based on a patient's real-time data. The model that predicts hypoglycemia must be constantly refreshed with current data; its business KPI's and anomalies must be continuously monitored; and it must be available in a distributed environment, so the information is available on a mobile device as well as reported to a larger system. The orchestration, governance, retraining, monitoring, and refreshing are done with ModelOps.
The ModelOps process focuses on automating the governance, management and monitoring of models in production across the enterprise, enabling AI and application developers to easily plug in lifecycle capabilities (such as bias-detection, robustness and reliability, drift detection, technical, business and compliance KPI's, regulatory constraints and approval flows) for putting AI models into production as business applications. The process starts with a standard representation of candidate models for production that includes ametamodel(the model specification) with all of the component and dependent pieces that go into building the model, such as the data, the hardware and software environments, the classifiers, and code plug-ins, and most importantly, the business and compliance/risk KPI's.
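To make the idea of a model specification more concrete, the following sketch shows one way such a metamodel might be expressed in Python. It is illustrative only: the ModelSpec fields, KPI names and the breaches() helper are assumptions made for this example, not part of any ModelOps product or standard.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ModelSpec:
        """Hypothetical metamodel for a candidate production model."""
        name: str
        version: str
        training_data: str                 # reference to the dataset snapshot used
        runtime: str                       # hardware/software environment
        artifacts: List[str]               # classifiers, code plug-ins, etc.
        technical_kpis: Dict[str, float]   # e.g. {"auc": 0.80} as minimum thresholds
        business_kpis: Dict[str, float]    # e.g. {"approval_rate": 0.60}
        compliance_checks: List[str] = field(default_factory=list)

    def breaches(spec: ModelSpec, observed: Dict[str, float]) -> List[str]:
        """Return the KPIs whose observed production values fall below their thresholds."""
        thresholds = {**spec.technical_kpis, **spec.business_kpis}
        return [k for k, floor in thresholds.items() if observed.get(k, 0.0) < floor]

A governance rule could then, for example, trigger retraining or an approval flow whenever breaches() returns a non-empty list.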
MLOps(machine learning operations) is a discipline that enables data scientists and IT professionals to collaborate and communicate while automating machine learning algorithms. It extends and expands on the principles ofDevOpsto support the automation of developing and deploying machine learning models and applications.[13]As a practice, MLOps is concerned with the routine deployment of machine learning (ML) models. However, the variety and uses of models have changed to include decision optimization models,optimizationmodels, andtransformational modelsthat are added to applications. ModelOps is an evolution of MLOps that expands its principles to include not just the routine deployment of machine learning models but also the continuous retraining, automated updating, and synchronized development and deployment of more complex machine learning models.[14]ModelOps refers to the operationalization of all AI models, including the machine learning models with which MLOps is concerned.[15]
|
https://en.wikipedia.org/wiki/ModelOps
|
Big data ethics, also known simply asdata ethics, refers to systemizing, defending, and recommending concepts of right and wrong conduct in relation todata, in particularpersonal data.[1]Since the dawn of theInternet, the sheer quantity and quality of data have dramatically increased and continue to do so exponentially.Big datadescribes this large amount of data that is so voluminous and complex that traditional data processing application software is inadequate to deal with it. Recent innovations in medical research and healthcare, such as high-throughput genome sequencing, high-resolution imaging, electronic medical patient records and a plethora of internet-connected health devices have triggered adata delugethat will reach the exabyte range in the near future. Data ethics is of increasing relevance as the quantity of data increases because of the scale of the impact.
Big data ethics are different frominformation ethicsbecause the focus of information ethics is more concerned with issues ofintellectual propertyand concerns relating to librarians, archivists, and information professionals, while big data ethics is more concerned with collectors and disseminators ofstructuredorunstructured datasuch asdata brokers, governments, and large corporations. However, sinceartificial intelligenceormachine learning systemsare regularly built using big data sets, the discussions surrounding data ethics are often intertwined with those in the ethics of artificial intelligence.[2]More recently, issues of big data ethics have also been researched in relation with other areas of technology and science ethics, includingethics in mathematicsandengineering ethics, as many areas of applied mathematics and engineering use increasingly large data sets.
Data ethics is concerned with the following principles:[3]
Ownership of data involves determining rights and duties over property, such as the ability to exercise individual control over (including limit the sharing of) personal data comprising one'sdigital identity. The question of data ownership arises when someone records observations on an individual person. The observer and the observed both state a claim to the data. Questions also arise as to the responsibilities that the observer and the observed have in relation to each other. These questions have become increasingly relevant with the Internet magnifying the scale and systematization of observing people and their thoughts. The question of personal data ownership relates to questions of corporate ownership and intellectual property.[4]
In the European Union, some people argue that theGeneral Data Protection Regulationindicates that individuals own their personal data, although this is contested.[5]
Concerns have been raised around how biases can be integrated into algorithm design resulting in systematic oppression[6]whether consciously or unconsciously. These manipulations often stem from biases in the data, the design of the algorithm, or the underlying goals of the organization deploying them. One major cause ofalgorithmic biasis that algorithms learn from historical data, which may perpetuate existing inequities. In many cases, algorithms exhibit reduced accuracy when applied to individuals from marginalized or underrepresented communities. A notable example of this is pulse oximetry, which has shown reduced reliability for certain demographic groups due to a lack of sufficient testing or information on these populations.[7]Additionally, many algorithms are designed to maximize specific metrics, such as engagement or profit, without adequately considering ethical implications. For instance, companies like Facebook and Twitter have been criticized for providing anonymity to harassers and for allowing racist content disguised as humor to proliferate, as such content often increases engagement.[8]These challenges are compounded by the fact that many algorithms operate as "black boxes" for proprietary reasons, meaning that the reasoning behind their outputs is not fully understood by users. This opacity makes it more difficult to identify and address algorithmic bias.
In terms of governance, big data ethics is concerned with which types of inferences and predictions should be made using big data technologies such as algorithms.[9]
Anticipatory governance is the practice of usingpredictive analyticsto assess possible future behaviors.[10]This has ethical implications because it affords the ability to target particular groups and places, which can encourage prejudice and discrimination.[10]For example,predictive policinghighlights certain groups or neighborhoods which should be watched more closely than others, which leads to more sanctions in these areas, and closer surveillance for those who fit the same profiles as those who are sanctioned.[11]
The term "control creep" refers to data that has been generated with a particular purpose in mind but which is repurposed.[10]This practice is seen with airline industry data which has been repurposed for profiling and managing security risks at airports.[10]
Privacy has been presented as a limitation to data usage which could also be considered unethical.[12]For example, the sharing of healthcare data can shed light on the causes of diseases, the effects of treatments, and can allow for tailored analyses based on individuals' needs.[12]This is of ethical significance in the big data ethics field because while many value privacy, the affordances of data sharing are also quite valuable, although they may contradict one's conception of privacy. Attitudes against data sharing may be based in a perceived loss of control over data and a fear of the exploitation of personal data.[12]However, it is possible to extract the value of data without compromising privacy.
Government surveillance of big data has the potential to undermine individual privacy by collecting and storing data on phone calls, internet activity, and geolocation, among other things. For example, the NSA’s collection of metadata exposed in global surveillance disclosures raised concerns about whether privacy was adequately protected, even when the content of communications was not analyzed. The right to privacy is often complicated by legal frameworks that grant governments broad authority over data collection for “national security” purposes. In the United States, the Supreme Court has not recognized a general right to "informational privacy," or control over personal information, though legislators have addressed the issue selectively through specific statutes.[13]From an equity perspective, government surveillance and privacy violations tend to disproportionately harm marginalized communities. Historically, activists involved in theCivil rights movementwere frequently targets of government surveillance as they were perceived as subversive elements. Programs such asCOINTELPROexemplified this pattern, involving espionage against civil rights leaders. This pattern persists today, with evidence of ongoing surveillance of activists and organizations.[14]
Additionally, the use of algorithms by governments to act on data obtained without consent introduces significant concerns about algorithmic bias. Predictive policing tools, for example, utilize historical crime data to predict “risky” areas or individuals, but these tools have been shown to disproportionately target minority communities.[15]One such tool, theCOMPASsystem, is a notable example; Black defendants are twice as likely to be misclassified as high risk compared to white defendants, and Hispanic defendants are similarly more likely to be classified as high risk than their white counterparts.[16]Marginalized communities often lack the resources or education needed to challenge these privacy violations or protect their data from nonconsensual use. Furthermore, there is a psychological toll, known as the “chilling effect,” where the constant awareness of being surveilled disproportionately impacts communities already facing societal discrimination. This effect can deter individuals from engaging in legal but potentially "risky" activities, such as protesting or seeking legal assistance, further limiting their freedoms and exacerbating existing inequities.
Some scholars such as Jonathan H. King and Neil M. Richards are redefining the traditional meaning of privacy, while others question whether or not privacy still exists.[9]In a 2014 article for theWake Forest Law Review, King and Richards argue that privacy in the digital age can be understood not in terms of secrecy but in terms of regulations which govern and control the use of personal information.[9]In the European Union, the right to be forgotten entitles EU countries to force the removal or de-linking of personal data from databases at an individual's request if the information is deemed irrelevant or out of date.[17]According to Andrew Hoskins, this law demonstrates the moral panic of EU members over the perceived loss of privacy and the ability to govern personal data in the digital age.[18]In the United States, citizens have the right to delete voluntarily submitted data.[17]This is very different from the right to be forgotten because much of the data produced using big data technologies and platforms are not voluntarily submitted.[17]While traditional notions of privacy are under scrutiny, different legal frameworks related to privacy in the EU and US demonstrate how countries are grappling with these concerns in the context of big data. For example, the "right to be forgotten" in the EU and the right to delete voluntarily submitted data in the US illustrate the varying approaches to privacy regulation in the digital age.[19]
The difference in value between the services facilitated by tech companies and the equity value of these tech companies is the difference in the exchange rate offered to the citizen and the "market rate" of the value of their data. Scientifically there are many holes in this rudimentary calculation: the financial figures of tax-evading companies are unreliable, either revenue or profit could be more appropriate, how a user is defined, a large number of individuals are needed for the data to be valuable, possible tiered prices for different people in different countries, etc. Although these calculations are crude, they serve to make the monetary value of data more tangible. Another approach is to find the data trading rates in the black market. RSA publishes a yearly cybersecurity shopping list that takes this approach.[20]
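As a purely illustrative sketch of the kind of crude calculation described above, the following Python lines divide a hypothetical platform's equity value and annual revenue by its user count; all figures are invented, and the choice of numerator is exactly the open question the text raises.

    # Hypothetical figures only; not data for any real company.
    market_cap = 500e9            # equity value in dollars
    annual_revenue = 70e9         # alternative basis mentioned above
    monthly_active_users = 2.0e9

    print(f"Implied equity value per user: ${market_cap / monthly_active_users:,.0f}")
    print(f"Implied revenue per user:      ${annual_revenue / monthly_active_users:,.0f}")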
This raises the economic question of whether free tech services in exchange for personal data is a worthwhile implicit exchange for the consumer. In the personal data trading model, rather than companies selling data, an owner can sell their personal data and keep the profit.[21]
The idea of open data is centered around the argument that data should be freely available and should not have restrictions that would prohibit its use, such as copyright laws. As of 2014, many governments had begun to move towards publishing open datasets for the purpose of transparency and accountability.[22]This movement has gained traction via "open data activists" who have called for governments to make datasets available to allow citizens themselves to extract meaning from the data and perform their own checks and balances.[22][9]King and Richards have argued that this call for transparency includes a tension between openness and secrecy.[9]
Activists and scholars have also argued that because this open-sourced model of data evaluation is based on voluntary participation, the availability of open datasets has a democratizing effect on a society, allowing any citizen to participate.[23]To some, the availability of certain types of data is seen as a right and an essential part of a citizen's agency.[23]
Open Knowledge Foundation(OKF) lists several dataset types it argues should be provided by governments for them to be truly open.[24]OKF has a tool called the Global Open Data Index (GODI), a crowd-sourced survey for measuring the openness of governments,[24]based on itsOpen Definition. GODI aims to be a tool for providing feedback to governments about the quality of their open datasets.[25]
Willingness to share data varies from person to person. Preliminary studies have been conducted into the determinants of the willingness to share data. For example, some have suggested that baby boomers are less willing to share data than millennials.[26]
The fallout fromEdward Snowden’s disclosuresin 2013 significantly reshaped public discourse around data collection and the privacy principle of big data ethics. The case revealed that governments controlled and possessed far more information about civilians than previously understood, violating the principle of ownership, particularly in ways that disproportionately affected disadvantaged communities. For instance, activists were frequently targeted, including members of movements such as Occupy Wall Street and Black Lives Matter.[14]This revelation prompted governments and organizations to revisit data collection and storage practices to better protect individual privacy while also addressing national security concerns. The case also exposed widespread online surveillance of other countries and their citizens, raising important questions about data sovereignty and ownership. In response, some countries, such as Brazil and Germany, took action to push back against these practices.[14]However, many developing nations lacked the technological independence necessary or were too generally dependent on the nations surveilling them to resist such surveillance, leaving them at a disadvantage in addressing these concerns.
TheCambridge Analytica scandalhighlighted significant ethical concerns in the use of big data. Data was harvested from approximately 87 million Facebook users without their explicit consent and used to display targeted political advertisements. This violated the currency principle of big data ethics, as individuals were initially unaware of how their data was being exploited. The scandal revealed how data collected for one purpose could be repurposed for entirely different uses, bypassing users' consent and emphasizing the need for explicit and informed consent in data usage.[27]Additionally, the algorithms used for ad delivery were opaque, challenging the principles of transaction transparency and openness. In some cases, the political ads spread misinformation,[27]often disproportionately targeting disadvantaged groups and contributing to knowledge gaps. Marginalized communities and individuals with lower digital literacy were disproportionately affected as they were less likely to recognize or act against exploitation. In contrast, users with more resources or digital literacy could better safeguard their data, exacerbating existing power imbalances.
|
https://en.wikipedia.org/wiki/Big_data_ethics
|
Big data maturity models (BDMM)are the artifacts used to measure big data maturity.[1]These models help organizations to create structure around their big data capabilities and to identify where to start.[2]They provide tools that assist organizations to define goals around their big data program and to communicate their big data vision to the entire organization. BDMMs also provide a methodology to measure and monitor the state of a company's big data capability, the effort required to complete their current stage or phase of maturity and to progress to the next stage. Additionally, BDMMs measure and manage the speed of both the progress and adoption of big data programs in the organization.[1]
The goals of BDMMs are:
Key organizational areas refer to "people, process and technology" and the subcomponents include[3]alignment, architecture, data,data governance, delivery, development, measurement, program governance, scope, skills, sponsorship,statistical modelling, technology, value and visualization.
The stages or phases in BDMMs depict the various ways in which data can be used in an organization and is one of the key tools to set direction and monitor the health of an organization's big data programs.[4][5]
An underlying assumption is that a high level of big data maturity correlates with an increase in revenue and reduction in operational expense. However, reaching the highest level of maturity involves major investments over many years.[6]Only a few companies are considered to be at a "mature" stage of big data and analytics.[7]These include internet-based companies (such asLinkedIn,Facebook, andAmazon) and other non-Internet-based companies, including financial institutions (fraud analysis, real-time customer messaging and behavioral modeling) and retail organizations (click-streamanalytics together with self-service analytics for teams).[6]
Big data maturity models can be broken down into three broad categories namely:[1]
Descriptive models assess the current firm maturity through qualitative positioning of the firm in various stages or phases. The model does not provide any recommendations as to how a firm would improve their big data maturity.
This descriptive model aims to assess the value generated from big data investments towards supporting strategic business initiatives.
Maturity levels
The model consists of the following maturity levels:
Assessment areas
Maturity levels also cover areas in matrix format focusing on: business strategy, information, analytics, culture and execution, architecture and governance.
[8]
Consisting of an assessment survey, this big data maturity model assesses an organization's readiness to execute big data initiatives. Furthermore, the model aims to identify the steps and appropriate technologies that will lead an organization towards big data maturity.[9]
Comparative big data maturity models aim to benchmark an organization in relation to its industry peers and normally consist of a survey containing quantitative and qualitative information.
The CSC big data maturity tool acts as a comparative tool to benchmark an organization's big data maturity. A survey is undertaken and the results are then compared to other organizations within a specific industry and within the wider market.[10]
The TDWI big data maturity model is a model in the current big data maturity area and therefore consists of a significant body of knowledge.[6]
Maturity stages
The different stages of maturity in the TDWI BDMM can be summarized as follows:
Stage 1: Nascent
The nascent stage can be characterized as a pre–big data environment. During this stage:
Stage 2: Pre-adoption
During the pre-adoption stage:
Stage 3: Early adoption
The "chasm"
Between the early adoption and corporate adoption stages, there is generally a series of hurdles that an organization needs to overcome. These hurdles include:
Stage 4: Corporate adoption
The corporate adoption stage is characterized by the involvement of end users: an organization gains further insight, and the way of conducting business is transformed. During this stage:
Stage 5: Mature / visionary
Only a few organizations can be considered as visionary in terms of big data and big data analytics. During this stage an organization:
Research findings
TDWI[6]assessed 600 organizations and found that the majority were either in the pre-adoption (50%) or early adoption (36%) stages. Additionally, only 8% of the sample had managed to move past the chasm towards corporate adoption or the mature/visionary stage.
The majority of prescriptive BDMMs follow a similar modus operandi in that the current situation is first assessed followed by phases plotting the path towards increased big data maturity. Examples are:
This maturity model is prescriptive in the sense that the model consists of four distinct phases that each plot a path towards big data maturity. Phases are:
[11]
The Radcliffe big data maturity model, as other models, also consists of distinct maturity levels ranging from:
[5]
This BDMM provides a framework that not only enables organizations to view the extent of their current maturity, but also to identify goals and opportunities for growth in big data maturity. The model consists of four stages namely,
[4]
The prescriptive model proposed by Van Veenstra aims to firstly explore the existing big data environment of the organization followed by exploitation opportunities and a growth path towards big data maturity. The model makes use of four phases namely:
[12]
Current BDMMs have been evaluated under the following criteria:[1]
The TDWI and CSC have the strongest overall performance with steady scores in each of the criteria groups. The overall results communicate that the top performer models are extensive, balanced, well-documented, easy to use, and they address a good number of big data capabilities that are utilized in business value creation. The models of Booz & Company and Knowledgent are close seconds and these mid-performers address big data value creation in a commendable manner, but fall short when examining the completeness of the models and the ease of application. Knowledgent suffers from poor quality of development, having barely documented any of its development processes. The rest of the models, i.e. Infotech, Radcliffe, van Veenstra and IBM, have been categorized as low performers. Whilst their content is well aligned with business value creation through big data capabilities, they all lack quality of development, ease of application and extensiveness. Lowest scores were awarded to IBM and Van Veenstra, since both are providing low level guidance for the respective maturity model's practical use, and they completely lack in documentation, ultimately resulting in poor quality of development and evaluation.[1]
|
https://en.wikipedia.org/wiki/Big_data_maturity_model
|
Big memorycomputers are machines with a large amount ofrandom-access memory(RAM). The computers are required for databases, graph analytics, or more generally,high-performance computing,data scienceandbig data.[1]Some database systems calledin-memory databasesare designed to run mostly in memory, rarely if ever retrieving data from disk or flash memory. Seelist of in-memory databases.
The performance of big memory systems depends on how thecentral processing units(CPUs) access the memory, via a conventionalmemory controlleror vianon-uniform memory access(NUMA). Performance also depends on the size and design of theCPU cache.
Performance also depends onoperating system(OS) design. Thehuge pagesfeature in Linux and other OSes can improve the efficiency ofvirtual memory.[2]Thetransparent huge pagesfeature in Linux can offer better performance for some big-memory workloads.[3]The "Large-Page Support" in Microsoft Windows enables server applications to establish large-page memory regions which are typically three orders of magnitude larger than the native page size.[4]
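As a minimal sketch of how a data-intensive application might check the huge-page configuration it is running under, the snippet below reads the sysfs file used by mainline Linux kernels for transparent huge pages; the path is Linux-specific and may be absent on kernels built without THP support, so treat it as an assumption rather than a portable API.

    from pathlib import Path

    THP_SETTING = Path("/sys/kernel/mm/transparent_hugepage/enabled")

    def thp_mode() -> str:
        """Return the active transparent-huge-page mode, e.g. 'always' or 'madvise'."""
        if not THP_SETTING.exists():
            return "unavailable"
        text = THP_SETTING.read_text()        # e.g. "always [madvise] never"
        start, end = text.find("["), text.find("]")
        return text[start + 1:end] if start != -1 and end != -1 else text.strip()

    if __name__ == "__main__":
        print("Transparent huge pages:", thp_mode())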
|
https://en.wikipedia.org/wiki/Big_memory
|
Data curationis the organization and integration ofdatacollected from various sources. It involves annotation, publication and presentation of the data so that the value of the data is maintained over time, and the data remains available for reuse and preservation. Data curation includes "all the processes needed for principled andcontrolled datacreation, maintenance, andmanagement, together with the capacity to add value to data".[1]In science, data curation may indicate the process of extraction of important information from scientific texts, such as research articles by experts, to be converted into an electronic format, such as an entry of abiological database.[2]
In the modern era ofbig data, the curation of data has become more prominent, particularly forsoftwareprocessing high volume and complex data systems.[3]The term is also used within the humanities,[4]where increasing cultural and scholarly data fromdigital humanitiesprojects requires the expertise and analytical practices of data curation.[5]In broad terms, curation means a range of activities and processes done to create, manage, maintain, andvalidateacomponent.[6]Specifically, data curation is the attempt to determine what information is worth saving and for how long.[7]
Theuser, rather than the database itself, typically initiates data curation and maintainsmetadata.[8]According to theUniversity of Illinois' Graduate School of Library and Information Science, "Data curation is the active and on-going management of data through its lifecycle of interest and usefulness to scholarship, science, and education; curation activities enable data discovery and retrieval, maintain quality, add value, and provide for re-use over time."[9]The data curation workflow is distinct fromdata qualitymanagement,data protection,lifecycle management, anddata movement.[8]
Census data has been available in tabulated punch card form since the early 20th century and has been electronic since the 1960s.[10]TheInter-university Consortium for Political and Social Research (ICPSR)website marks 1962 as the date of their first Survey Data Archive.[11]
Deep background on data libraries appeared in a 1982 issue of the Illinois journal,Library Trends.[12]For historical background on the data archive movement, see "Social Scientific Information Needs for Numeric Data: The Evolution of the International Data Archive Infrastructure."[13]The exact curation process undertaken within any organisation depends on the volume of data, how much noise the data contains, and what the expected future use of the data means to its dissemination.[3]
The crises in space data led to the 1999 creation of theOpen Archival Information System (OAIS)model,[14]stewarded by theConsultative Committee for Space Data Systems (CCSDS), which was formed in 1982.[15]
The termdata curationis sometimes used in the context ofbiological databases, where specific biological information is firstly obtained from a range of research articles and then stored within a specific category of database. For instance, information about anti-depressant drugs can be obtained from various sources and, after checking whether they are available as a database or not, they are saved under a drug's database's anti-depressive category. Enterprises are also utilizing data curation within their operational and strategic processes to ensure data quality and accuracy.[16][17]
The Dissemination Information Packages (DIPS) for Information Reuse (DIPIR) project is studying research data produced and used by quantitative social scientists, archaeologists, and zoologists. The intended audience is researchers who use secondary data and the digital curators, digital repository managers, data center staff, and others who collect, manage, and store digital information.[18]
TheProtein Data Bankwas established in 1971 atBrookhaven National Laboratory, and has grown into a global project.[19]A database for three-dimensional structural data of proteins and other large biological molecules, the PDB contains over 120,000 structures, all standardized, validated against experimental data, and annotated.
FlyBase, the primary repository of genetic and molecular data for the insect familyDrosophilidae, dates back to 1992. FlyBase annotates the entireDrosophila melanogastergenome.[20]
TheLinguistic Data Consortiumis a data repository for linguistic data, dating back to 1992.[21]
TheSloan Digital Sky Surveybegan surveying the night sky in 2000.[22]Computer scientistJim Gray, while working on the data architecture of the SDSS, championed the idea of data curation in the sciences.[23]
DataNetwas a research program of the U.S. National Science Foundation Office of Cyberinfrastructure, funding data management projects in the sciences.[24]DataONE(Data Observation Network for Earth) is one of the projects funded throughDataNet, helping the environmental science community preserve and share data.[25]
|
https://en.wikipedia.org/wiki/Data_curation
|
Data defined storage(also referred to as adata centric approach) is amarketingterm formanaging, protecting, and realizing the value from data by combining application, information andstoragetiers.[1]
This is a process in which users, applications, and devices gain access to a repository of capturedmetadatathat allows them toaccess,queryandmanipulaterelevant data, transforming it intoinformationwhile also establishing a flexible andscalableplatform for storing the underlying data. The technology is said toabstractthe data entirely from the storage, trying to provide fully transparent access for users.
Data defined storage explains information aboutmetadatawith an emphasis on the content, meaning and value of information over the media, type and location of data. Data-centric management enables organizations to adopt a single, unified approach to managing data across large,distributed locations, which includes the use of content and metadata indexing. The technology pillars include:
Data defined storage focuses on the benefits of bothobject storageandsoftware-defined storagetechnologies. However, object and software-defined storage can only be mapped to media independent data storage, which enables a media agnostic infrastructure - utilizing any type of storage, including low cost commodity storage to scale out to petabyte-level capacities. Data defined storage unifies all data repositories and exposes globally distributed stores through the global namespace, eliminatingdata silosand improving storage utilization.
The first marketing campaign to use the term data defined storage was from the company Tarmin, for its product GridBank. The term may have been mentioned as early as 2013.[2]
The term was used forobject storagewithopen protocolaccess for file system virtualization, such asCIFS,NFS,FTPas well asREST APIsand other cloud protocols such asAmazon S3,CDMIandOpenStack.
|
https://en.wikipedia.org/wiki/Data_defined_storage
|
Data engineeringrefers to the building ofsystemsto enable the collection and usage ofdata. This data is usually used to enable subsequentanalysisanddata science, which often involvesmachine learning.[1][2]Making the data usable usually involves substantialcomputeandstorage, as well asdata processing.
Around the 1970s/1980s the terminformation engineering methodology(IEM) was created to describedatabase designand the use ofsoftwarefor data analysis and processing.[3][4]These techniques were intended to be used bydatabase administrators(DBAs) and bysystems analystsbased upon an understanding of the operational processing needs of organizations for the 1980s. In particular, these techniques were meant to help bridge the gap between strategic business planning and information systems. A key early contributor (often called the "father" of information engineering methodology) was the AustralianClive Finkelstein, who wrote several articles about it between 1976 and 1980, and also co-authored an influentialSavant Institutereport on it with James Martin.[5][6][7]Over the next few years, Finkelstein continued work in a more business-driven direction, which was intended to address a rapidly changing business environment; Martin continued work in a more data processing-driven direction. From 1983 to 1987, Charles M. Richter, guided by Clive Finkelstein, played a significant role in revamping IEM as well as helping to design the IEM software product (user data), which helped automate IEM.
In the early 2000s, the data and data tooling was generally held by theinformation technology(IT) teams in most companies.[8]Other teams then used data for their work (e.g. reporting), and there was usually little overlap in data skillset between these parts of the business.
In the early 2010s, with the rise of theinternet, the massive increase in data volumes, velocity, and variety led to the termbig datato describe the data itself, and data-driven tech companies likeFacebookandAirbnbstarted using the phrasedata engineer.[3][8]Due to the new scale of the data, major firms likeGoogle, Facebook,Amazon,Apple,Microsoft, andNetflixstarted to move away from traditionalETLand storage techniques. They started creatingdata engineering, a type ofsoftware engineeringfocused on data, and in particularinfrastructure,warehousing,data protection,cybersecurity,mining,modelling,processing, andmetadatamanagement.[3][8]This change in approach was particularly focused oncloud computing.[8]Data started to be handled and used by many parts of the business, such assalesandmarketing, and not just IT.[8]
High-performance computing is critical for the processing and analysis of data. One particularly widespread approach to computing for data engineering isdataflow programming, in which the computation is represented as adirected graph(dataflow graph); nodes are the operations, and edges represent the flow of data.[9]Popular implementations includeApache Spark, and thedeep learningspecificTensorFlow.[9][10][11]More recent implementations, such asDifferential/TimelyDataflow, have usedincremental computingfor much more efficient data processing.[9][12][13]
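The short PySpark sketch below illustrates the dataflow style: each transformation adds a node to a directed graph of operations, and nothing executes until an action forces the graph to run. It assumes a local installation of Apache Spark's Python API; the column names and data are invented for the example.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dataflow-sketch").getOrCreate()

    # Transformations below only build the dataflow graph; no work happens yet.
    events = spark.createDataFrame(
        [("alice", "click", 3), ("bob", "click", 1), ("alice", "view", 7)],
        ["user", "action", "count"],
    )
    clicks_per_user = (
        events.filter(F.col("action") == "click")
              .groupBy("user")
              .agg(F.sum("count").alias("clicks"))
    )

    clicks_per_user.show()   # an action: triggers execution of the whole dataflow
    spark.stop()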
Data is stored in a variety of ways; one of the key deciding factors is how the data will be used.
Data engineers optimize data storage and processing systems to reduce costs. They use data compression, partitioning, and archiving.
If the data is structured and some form ofonline transaction processingis required, thendatabasesare generally used.[14]Originally mostlyrelational databaseswere used, with strongACIDtransaction correctness guarantees; most relational databases useSQLfor their queries. However, with the growth of data in the 2010s,NoSQLdatabases have also become popular since theyhorizontally scaledmore easily than relational databases by giving up the ACID transaction guarantees, as well as reducing theobject-relational impedance mismatch.[15]More recently,NewSQLdatabases — which attempt to allow horizontal scaling while retaining ACID guarantees — have become popular.[16][17][18][19]
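For illustration, the following sketch uses Python's built-in sqlite3 module as a stand-in for a relational OLTP database to show an ACID-style transaction: the two balance updates either both commit or both roll back. The schema and amounts are invented for the example, and a production system would use a server-class database rather than SQLite.

    import sqlite3

    conn = sqlite3.connect(":memory:")   # stand-in for a production relational database
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL NOT NULL)")
    conn.executemany("INSERT INTO accounts (id, balance) VALUES (?, ?)",
                     [(1, 100.0), (2, 50.0)])
    conn.commit()

    # A transfer as one transaction: either both updates apply or neither does.
    try:
        with conn:   # commits on success, rolls back if an exception is raised
            conn.execute("UPDATE accounts SET balance = balance - 25 WHERE id = 1")
            conn.execute("UPDATE accounts SET balance = balance + 25 WHERE id = 2")
    except sqlite3.Error:
        pass         # a failed transaction leaves the data unchanged

    print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())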
If the data is structured andonline analytical processingis required (but not online transaction processing), thendata warehousesare a main choice.[20]They enable data analysis, mining, andartificial intelligenceon a much larger scale than databases can allow,[20]and indeed data often flow from databases into data warehouses.[21]Business analysts, data engineers, and data scientists can access data warehouses using tools such as SQL orbusiness intelligencesoftware.[21]
Adata lakeis a centralized repository for storing, processing, and securing large volumes of data. A data lake can containstructured datafromrelational databases,semi-structured data,unstructured data, andbinary data. A data lake can be created on premises or in a cloud-based environment using the services frompublic cloudvendors such asAmazon,Microsoft, orGoogle.
If the data is less structured, then it is often just stored asfiles. There are several options:
The number and variety of different data processes and storage locations can become overwhelming for users. This inspired the usage of aworkflow management system(e.g.Airflow) to allow the data tasks to be specified, created, and monitored.[24]The tasks are often specified as adirected acyclic graph (DAG).[24]
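A minimal sketch of such a workflow, assuming Apache Airflow 2.x, is shown below; the DAG id, task names and placeholder functions are hypothetical, and real tasks would call extraction, transformation and loading code instead of printing.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull raw data from the source system")

    def transform():
        print("clean and reshape the extracted data")

    def load():
        print("write the result to the warehouse")

    with DAG(
        dag_id="example_etl",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)

        # The DAG: extract -> transform -> load
        t_extract >> t_transform >> t_load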
Business objectives that executives set for the future are defined in strategic business plans, refined in tactical business plans, and implemented in operational business plans. Most businesses today recognize the fundamental need to develop a business plan that follows this strategy. It is often difficult to implement these plans because of the lack of transparency at the tactical and operational levels of organizations. This kind of planning requires feedback to allow for early correction of problems that are due to miscommunication and misinterpretation of the business plan.
The design of data systems involves several components such as architecting data platforms, and designing data stores.[25][26]
This is the process of producing adata model, anabstract modelto describe the data and relationships between different parts of the data.[27]
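As a small illustration, the sketch below expresses a two-entity data model in Python dataclasses; the entities, fields and the foreign-key-style link between them are invented for the example.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Customer:
        customer_id: int
        name: str
        country: str

    @dataclass
    class Order:
        order_id: int
        customer_id: int   # relationship: the Customer who placed this Order
        order_date: date
        amount: float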
Adata engineeris a type of software engineer who createsbig dataETLpipelines to manage the flow of data through the organization. This makes it possible to take huge amounts of data and translate it intoinsights.[28]They are focused on the production readiness of data and things like formats, resilience, scaling, and security. Data engineers usually hail from a software engineering background and are proficient in programming languages likeJava,Python,Scala, andRust.[29][3]They will be more familiar with databases, architecture, cloud computing, andAgile software development.[3]
Data scientistsare more focused on the analysis of the data, they will be more familiar withmathematics,algorithms,statistics, andmachine learning.[3][30]
|
https://en.wikipedia.org/wiki/Data_engineering
|
Data lineagerefers to the process of tracking how data is generated, transformed, transmitted and used across a system over time.[1]It documents data's origins, transformations and movements, providing detailed visibility into its life cycle. This process simplifies the identification of errors indata analyticsworkflows, by enabling users to trace issues back to their root causes.[2]
Data lineage facilitates the ability to replay specific segments or inputs of thedataflow. This can be used indebuggingor regenerating lost outputs. Indatabase systems, this concept is closely related todata provenance, which involves maintaining records of inputs, entities, systems and processes that influence data.
Data provenance provides a historical record of data origins and transformations. It supports forensic activities such as data-dependency analysis, error/compromise detection, recovery, auditing and compliance analysis: "Lineageis a simple type ofwhy provenance."[3]
Data governanceplays a critical role in managing metadata by establishing guidelines, strategies and policies. Enhancing data lineage withdata qualitymeasures andmaster data managementadds business value. Although data lineage is typically represented through agraphical user interface(GUI), the methods for gathering and exposingmetadatato this interface can vary. Based on the metadata collection approach, data lineage can be categorized into three types: Those involving software packages for structured data,programming languagesandBig datasystems.
Data lineage information includes technical metadata about data transformations. Enriched data lineage may include additional elements such as data quality test results, reference data,data models, business terminology,data stewardshipinformation,program managementdetails andenterprise systemsassociated with data points and transformations. Data lineage visualization tools often include masking features that allow users to focus on information relevant to specific use cases. To unify representations across disparate systems,metadata normalizationor standardization may be required.
Representation broadly depends on the scope of themetadata managementand reference point of interest. Data lineage provides sources of the data and intermediate data flow hops from the reference point withbackward data lineage, leading to the final destination's data points and its intermediate data flows withforward data lineage. These views can be combined withend-to-endlineagefor a reference point that provides a complete audit trail of that data point of interest from sources to their final destinations. As the data points or hops increase, the complexity of such representation becomes incomprehensible. Thus, the best feature of the data lineage view is the ability to simplify the view by temporarily masking unwanted peripheral data points. Tools with the masking feature enablescalabilityof the view and enhance analysis with the best user experience for both technical and business users. Data lineage also enables companies to trace sources of specific business data to track errors, implement changes in processes and implementsystem migrationsto save significant amounts of time and resources. Data lineage can improve efficiency in business intelligence (BI) processes.[4]
Data lineage can berepresented visuallyto discover the data flow and movement from its source to destination via various changes and hops on its way in the enterprise environment. This includes how the data is transformed along the way, how the representation and parameters change and how the data splits or converges after each hop. A simple representation of the Data Lineage can be shown with dots and lines, where dots represent data containers for data points, and lines connecting them represent transformations the data undergoes between the data containers.
Data lineage can be visualized at various levels based on the granularity of the view. At a very high-level, data lineage is visualized as systems that the data interacts with before it reaches its destination. At its most granular, visualizations at the data point level can provide the details of the data point and its historical behavior, attribute properties and trends anddata qualityof the data passed through that specific data point in the data lineage.
The scope of the data lineage determines the volume of metadata required to represent its data lineage. Usually,data governanceanddata managementof an organization determine the scope of the data lineage based on theirregulations, enterprise data management strategy, data impact, reporting attributes and criticaldata elementsof the organization.
Distributed systems likeGoogleMap Reduce,[5]MicrosoftDryad,[6]Apache Hadoop[7](an open-source project) and Google Pregel[8]provide such platforms for businesses and users. However, even with these systems,Big Dataanalytics can take several hours, days or weeks to run, simply due to the data volumes involved. For example, a ratings prediction algorithm for theNetflix Prize challengetook nearly 20 hours to execute on 50 cores, and a large-scale image processing task to estimate geographic information took 3 days to complete using 400 cores.[9]TheLarge Synoptic Survey Telescopeis expected to generateterabytesof data every night and eventually store more than 50petabytes, while in thebioinformaticssector, the 12 largestgenome sequencinghouses in the world now store petabytes of data apiece.[10][failed verification]It is very difficult for adata scientistto trace an unknown or an unanticipated result.
Big data analytics is the process of examining large data sets to uncover hidden patterns, unknowncorrelations,market trends, customer preferences and other useful business information.Machine learning, among other algorithms, is used to transform and analyze the data. Due to the large size of the data, there could be unknown features in the data.
The massive scale andunstructurednature of data, the complexity of these analytics pipelines, and long runtimes pose significant manageability and debugging challenges. Even a single error in these analytics can be extremely difficult to identify and remove. While one may debug them by re-running the entire analytics through a debugger for stepwise debugging, this can be expensive due to the amount of time and resources needed.
Auditing and data validation are other major problems due to the growing ease of access to relevant data sources for use in experiments, the sharing of data between scientific communities and use of third-party data in business enterprises.[11][12][13][14]As such, more cost-efficient ways of analyzingdata intensive scale-able computing(DISC) are crucial to their continued effective use.
According to an EMC/IDC study,[15]2.8ZBof data were created and replicated in 2012. Furthermore, the same study stated that thedigital universewould double every two years between 2012 and 2020, and that there would be approximately 5.2TBof data for every person in 2020. Based on current technology, the storage of this much data will mean greater energy usage by data centers.[16]
Unstructured datausually refers to information that doesn't reside in a traditional row-column database. Unstructured data files often include text andmultimediacontent, such ase-mailmessages, word processing documents,videos,photos,audio files, presentations,web pagesand many other kinds of business documents. While these types of files may have an internal structure, they are still considered "unstructured" because the data they contain doesn't fit neatly into a database. The amount of unstructured data in enterprises is growing many times faster than structured databases are growing.Big datacan include both structured and unstructured data, but IDC estimates that 90 percent ofBig Datais unstructured data.[17]
The fundamental challenge of unstructured data sources is that they are difficult for non-technical business users and data analysts alike to unbox, understand and prepare for analytic use. Beyond issues of structure, the sheer volume of this type of data contributes to such difficulty. Because of this, current data mining techniques often leave out valuable information and make analyzing unstructured data laborious and expensive.[18]
In today's competitive business environment, companies have to find and analyze the relevant data they need quickly. The challenge is going through the volumes of data and accessing the level of detail needed, all at a high speed. The challenge only grows as the degree of granularity increases. One possible solution ishardware. Some vendors are using increased memory andparallel processingto crunch large volumes of data quickly. Another method is putting datain-memorybut using agrid computingapproach, where many machines are used to solve a problem. Both approaches allow organizations to explore huge data volumes. Even with this level of sophisticated hardware and software, a few of the large-scale image processing tasks take a few days to a few weeks.[19]Debugging the data processing is extremely hard due to the long run times.
A third approach of advanced data discovery solutions combinesself-service data prepwith visual data discovery, enabling analysts to simultaneously prepare and visualize dataside-by-sidein an interactive analysis environment offered by newer companies, such asTrifacta,Alteryxand others.[20]
Another method to track data lineage isspreadsheetprograms such asExcelthat offer users cell-level lineage, or the ability to see which cells are dependent on another. However, the structure of the transformation is lost. Similarly,ETLor mapping software provide transform-level lineage, yet this view typically doesn't display data and is too coarse-grainedto distinguish between transforms that are logically independent (e.g. transforms that operate on distinct columns) or dependent.[21]Big Dataplatforms have a very complicated structure, where data is distributed across a vast range. Typically, the jobs are mapped into several machines and results are later combined by the reduce operations. Debugging aBig Datapipeline becomes very challenging due to the very nature of the system. It will not be an easy task for the data scientist to figure out which machine's data has outliers and unknown features causing a particular algorithm to give unexpected results.
Data provenance or data lineage can be used to make the debugging of aBig Datapipeline easier. This necessitates the collection of data about data transformations. The section below explains data provenance in more detail.
Scientificdata provenanceprovides a historical record of the data and its origins. The provenance of data which is generated by complex transformations such as workflows is of considerable value to scientists.[22]From it, one can ascertain the quality of the data based on its ancestral data and derivations, track back sources of errors, allow automated re-enactment of derivations to update data, and provide attribution of data sources. Provenance is also essential to the business domain where it can be used to drill down to the source of data in adata warehouse, track the creation of intellectual property and provide an audit trail for regulatory purposes.
The use of data provenance is proposed in distributed systems to trace records through a dataflow, replay the dataflow on a subset of its original inputs and debug data flows. To do so, one needs to keep track of the set of inputs to each operator, which were used to derive each of its outputs. Although there are several forms of provenance, such as copy-provenance and how-provenance,[14][23]the information we need is a simple form ofwhy-provenance, or lineage, as defined by Cui et al.[24]
PROV is aW3Crecommendation of 2013 that defines a data model and a family of serializations for representing and interchanging provenance information on the Web.
Intuitively, for an operatorT{\displaystyle T}producing outputo{\displaystyle o}, lineage consists of triplets of form{i,T,o}{\displaystyle \{i,T,o\}}, wherei{\displaystyle i}is the set of inputs toT{\displaystyle T}used to deriveo{\displaystyle o}.[3]A query that finds the inputs deriving an output is called abackward tracing query, while one that finds the outputs produced by an input is called aforward tracing query.[27]Backward tracing is useful for debugging, while forward tracing is useful for tracking error propagation.[27]Tracing queries also form the basis for replaying an original dataflow.[12][24][27]However, to efficiently use lineage in aDISCsystem, we need to be able to capture lineage at multiple levels (or granularities) of operators and data, capture accurate lineage for DISC processing constructs and be able to trace through multiple dataflow stages efficiently.
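The sketch below follows this definition directly: lineage is stored as (inputs, operator, output) triplets, a backward tracing query walks from an output to everything it was derived from, and a forward tracing query walks from an input to everything derived from it. The storage layout and item names are assumptions for the example, and cycles are assumed not to occur.

    from typing import List, Set, Tuple

    # Each record is one lineage triplet: ({inputs}, operator, output).
    Lineage = List[Tuple[Set[str], str, str]]

    lineage: Lineage = [
        ({"raw_1", "raw_2"}, "clean",     "clean_1"),
        ({"clean_1"},        "aggregate", "report_1"),
    ]

    def backward_trace(lineage: Lineage, output: str) -> Set[str]:
        """All upstream data items the given output was derived from."""
        upstream: Set[str] = set()
        for inputs, _op, out in lineage:
            if out == output:
                for item in inputs:
                    upstream.add(item)
                    upstream |= backward_trace(lineage, item)
        return upstream

    def forward_trace(lineage: Lineage, item: str) -> Set[str]:
        """All downstream data items derived from the given input."""
        downstream: Set[str] = set()
        for inputs, _op, out in lineage:
            if item in inputs:
                downstream.add(out)
                downstream |= forward_trace(lineage, out)
        return downstream

    print(backward_trace(lineage, "report_1"))   # {'clean_1', 'raw_1', 'raw_2'}
    print(forward_trace(lineage, "raw_1"))       # {'clean_1', 'report_1'}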
A DISC system consists of several levels of operators anddata, and different use cases of lineage can dictate the level at which lineage needs to be captured. Lineage can be captured at the level of the job, using files and giving lineage tuples of form {IF_i, MRJob, OF_i}; lineage can also be captured at the level of each task, using records and giving, for example, lineage tuples of form {(k_rr, v_rr), map, (k_m, v_m)}. The first form of lineage is called coarse-grain lineage, while the second form is called fine-grain lineage. Integrating lineage across different granularities enables users to ask questions such as "Which file read by a MapReduce job produced this particular output record?" and can be useful in debugging across different operators and data granularities within a dataflow.[3]
To capture end-to-end lineage in a DISC system, we use the Ibis model,[28]which introduces the notion of containment hierarchies for operators and data. Specifically, Ibis proposes that an operator can be contained within another, and such a relationship between two operators is calledoperator containment. Operator containment implies that the contained (or child) operator performs a part of the logical operation of the containing (or parent) operator.[3]For example, a MapReduce task is contained in a job. Similar containment relationships exist for data as well, known as data containment. Data containment implies that the contained data is a subset of the containing data (superset).
Data lineage systems can be categorized as either eager or lazy.[27]
Eager collection systems capture the entire lineage of the data flow at run time. The kind of lineage they capture may be coarse-grain or fine-grain, but they do not require any further computations on the data flow after its execution.
Lazy lineage collection typically captures only coarse-grain lineage at run time. These systems incur low capture overheads due to the small amount of lineage they capture. However, to answer fine-grain tracing queries, they must replay the data flow on all (or a large part) of its input and collect fine-grain lineage during the replay. This approach is suitable for forensic systems, where a user wants to debug an observed bad output.
Eager fine-grain lineage collection systems incur higher capture overheads than lazy collection systems. However, they enable sophisticated replay and debugging.[3]
An actor is an entity that transforms data; it may be a Dryad vertex, individual map and reduce operators, a MapReduce job, or an entire dataflow pipeline. Actors act as black boxes and the inputs and outputs of an actor are tapped to capture lineage in the form of associations, where an association is a triplet{i,T,o}{\displaystyle \{i,T,o\}}that relates an inputi{\displaystyle i}with an outputo{\displaystyle o}for an actorT{\displaystyle T}. The instrumentation thus captures lineage in a dataflow one actor at a time, piecing it into a set of associations for each actor. The system developer needs to capture the data an actor reads (from other actors) and the data an actor writes (to other actors). For example, a developer can treat the Hadoop Job Tracker as an actor by recording the set of files read and written by each job.[29]
An association is a combination of the inputs, outputs and the operation itself. The operation is represented in terms of a black box, also known as the actor. The associations describe the transformations that are applied to the data. The associations are stored in the association tables. Each unique actor is represented by its association table. An association itself looks like {i, T, o}, where i is the set of inputs to the actor T and o is the set of outputs produced by the actor. Associations are the basic units of data lineage. Individual associations are later combined to construct the entire history of transformations that were applied to the data.[3]
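One way to capture such associations is to tap each actor as a black box and append an {i, T, o} row to that actor's association table every time it runs. The decorator and table layout below are illustrative assumptions, not a description of any particular lineage system.

    from collections import defaultdict

    # One association table per actor: actor name -> list of {i, T, o} rows.
    association_tables = defaultdict(list)

    def tap(actor_name):
        """Wrap an actor (treated as a black box) and record its associations."""
        def wrap(actor_fn):
            def run(inputs):
                outputs = actor_fn(inputs)               # execute the black box
                association_tables[actor_name].append(
                    {"i": set(inputs), "T": actor_name, "o": set(outputs)}
                )
                return outputs
            return run
        return wrap

    @tap("dedupe")
    def dedupe(records):
        return set(records)

    dedupe(["r1", "r1", "r2"])
    print(dict(association_tables))   # one {i, T, o} row recorded for the 'dedupe' actor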
Big datasystems increase capacity by adding new hardware or software entities into the distributed system. This process is calledhorizontal scaling. The distributed system acts as a single entity at the logical level even though it comprises multiple hardware and software entities. The system should continue to maintain this property after horizontal scaling. An important advantage of horizontal scalability is that it can provide the ability to increase capacity on the fly. Another major benefit is that horizontal scaling can be done using commodity hardware.
The horizontal scaling of Big Data systems should be taken into account when designing the architecture of the lineage store, because the lineage store itself must be able to scale in parallel with the Big Data system. The number of associations and the amount of storage required to hold them will grow with the size and capacity of the system. This makes a single, centralized lineage store inappropriate and impossible to scale, and the immediate solution is to distribute the lineage store itself.[3]
The best-case scenario is to use a local lineage store for every machine in the distributed system network, which allows the lineage store itself to scale horizontally. In this design, the lineage of the data transformations applied on a particular machine is stored in the local lineage store of that machine. The lineage store typically stores association tables; each actor is represented by its own association table, whose rows are the associations and whose columns represent inputs and outputs. This design solves two problems: it allows horizontal scaling of the lineage store, and it avoids the additional network latency that a single centralized lineage store would incur, since association data would otherwise have to be carried over the network.[29]
The information stored as associations needs to be combined by some means to obtain the data flow of a particular job. In a distributed system a job is broken down into multiple tasks, and one or more instances run a particular task. The results produced on these individual machines are later combined to finish the job. Tasks running on different machines perform multiple transformations on the data held by the machine, and all the transformations applied on a machine are stored in that machine's local lineage store. This information needs to be combined to obtain the lineage of the entire job, which should help the data scientist understand the data flow of the job and use it to debug the Big Data pipeline. The data flow is reconstructed in three stages.
The first stage of data flow reconstruction is the computation of the association tables. Association tables exist for each actor in each local lineage store, and the complete association table for an actor is computed by combining these individual tables. This is generally done using a series of equality joins based on the actors themselves; in a few scenarios the tables might also be joined using inputs as the key, and indexes can be used to improve the efficiency of a join. The joined tables need to be stored on a single instance or machine for further processing. Several schemes are used to pick the machine on which a join is computed, the simplest being to pick the one with the minimum CPU load; space constraints should also be kept in mind when picking the instance on which the join will happen.
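A sketch of this first stage, assuming each local lineage store can export its association tables as a dictionary mapping actor to a list of associations (the helper name is hypothetical); merging the per-machine tables for the same actor plays the role of the equality join on the actor key:

```python
from collections import defaultdict

def combine_association_tables(local_stores):
    """Merge per-machine association tables into one table per actor.

    `local_stores` is an iterable of dicts, one per local lineage store,
    each mapping an actor to its list of associations.
    """
    combined = defaultdict(list)
    for store in local_stores:
        for actor, associations in store.items():
            combined[actor].extend(associations)  # equality join on the actor key
    return combined
```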
The second step in data flow reconstruction is computing an association graph from the lineage information. The graph represents the steps in the data flow. The actors act as vertices and the associations act as edges. Each actor T is linked to its upstream and downstream actors in the data flow. An upstream actor of T is one that produced the input of T, while a downstream actor is one that consumes the output of T. Containment relationships are always considered while creating the links. The graph consists of three types of links or edges.
The simplest link is an explicitly specified link between two actors. These links are explicitly specified in the code of a machine learning algorithm. When an actor is aware of its exact upstream or downstream actor, it can communicate this information to the lineage API. This information is later used to link these actors during the tracing query. For example, in the MapReduce architecture, each map instance knows the exact record reader instance whose output it consumes.[3]
Developers can attach data flow archetypes to each logical actor. A data flow archetype explains how the child types of an actor type arrange themselves in a data flow. With the help of this information, one can infer a link between each actor of a source type and a destination type. For example, in the MapReduce architecture, the map actor type is the source for the reduce actor type. The system infers this from the data flow archetypes and duly links map instances with reduce instances. However, there may be several MapReduce jobs in the data flow, and linking all map instances with all reduce instances can create false links. To prevent this, such links are restricted to actor instances contained within a common actor instance of a containing (or parent) actor type. Thus, map and reduce instances are only linked to each other if they belong to the same job.[3]
In distributed systems there are sometimes implicit links, which are not specified during execution. For example, an implicit link exists between an actor that wrote to a file and another actor that read from it. Such links connect actors which use a common data set for execution: the dataset is the output of the first actor and the input of the actor that follows it.[3]
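A sketch of this second stage, building the association graph as an adjacency list. The three link types reduce to directed edges between actor instances; the archetype inference below simply pairs actors whose names start with the source and destination type and that share the same containing parent, which is an assumption made for exposition rather than a faithful reproduction of any particular system:

```python
from collections import defaultdict

def build_association_graph(associations, explicit_links, archetypes, parent_of):
    """Return a directed graph as a dict: actor -> set of downstream actors.

    associations   : iterable of (inputs, actor, outputs) triplets
    explicit_links : iterable of (upstream_actor, downstream_actor) pairs
    archetypes     : iterable of (source_type, destination_type) pairs, e.g. ("map", "reduce")
    parent_of      : dict mapping an actor instance to its containing actor (e.g. its job)
    """
    graph = defaultdict(set)
    actors = {actor for _, actor, _ in associations}

    # 1. Explicitly specified links.
    for up, down in explicit_links:
        graph[up].add(down)

    # 2. Links inferred from data flow archetypes, restricted to a common parent
    #    (assumes the actor type is a prefix of its name, e.g. "map-3").
    for src_type, dst_type in archetypes:
        for a in actors:
            for b in actors:
                if (a != b and a.startswith(src_type) and b.startswith(dst_type)
                        and parent_of.get(a) == parent_of.get(b)):
                    graph[a].add(b)

    # 3. Implicit links: one actor's output dataset is another actor's input.
    for _, a, out_a in associations:
        for in_b, b, _ in associations:
            if a != b and set(out_a) & set(in_b):
                graph[a].add(b)

    return graph
```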
The final step in the data flow reconstruction is thetopological sortingof the association graph. The directed graph created in the previous step is topologically sorted to obtain the order in which the actors have modified the data. This record of modifications by the different actors involved is used to track the data flow of theBig Datapipeline or task.
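A sketch of this final stage using Kahn's algorithm over the adjacency list produced above, yielding the order in which the actors modified the data:

```python
from collections import deque

def topological_order(graph):
    """Topologically sort a directed graph given as actor -> set of successors."""
    indegree = {node: 0 for node in graph}
    for successors in graph.values():
        for node in successors:
            indegree[node] = indegree.get(node, 0) + 1
    queue = deque(node for node, degree in indegree.items() if degree == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for succ in graph.get(node, ()):
            indegree[succ] -= 1
            if indegree[succ] == 0:
                queue.append(succ)
    if len(order) != len(indegree):
        raise ValueError("association graph contains a cycle")
    return order
```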
This is the most crucial step in Big Data debugging. The captured lineage is combined and processed to obtain the data flow of the pipeline, which helps the data scientist or developer look deeply into the actors and their transformations. This step allows the data scientist to figure out which part of the algorithm is generating the unexpected output. A Big Data pipeline can go wrong in two broad ways: the presence of a suspicious actor in the dataflow, or the existence of outliers in the data.
The first case can be debugged by tracing the dataflow. By using lineage and data-flow information together, a data scientist can figure out how the inputs are converted into outputs, and actors that behave unexpectedly can be caught in the process. These actors can either be removed from the data flow or augmented with new actors to change the dataflow, and the improved dataflow can be replayed to test its validity. Debugging faulty actors includes recursively performing coarse-grain replay on actors in the dataflow,[30] which can be expensive in resources for long dataflows. Another approach is to manually inspect lineage logs to find anomalies,[13][31] which can be tedious and time-consuming across several stages of a dataflow. Furthermore, these approaches work only when the data scientist can discover bad outputs. To debug analytics without known bad outputs, the data scientist needs to analyze the dataflow for suspicious behavior in general. Often, however, a user may not know the expected normal behavior and cannot specify predicates. One debugging methodology therefore retrospectively analyzes lineage to identify faulty actors in a multi-stage dataflow, on the premise that sudden changes in an actor's behavior, such as its average selectivity, processing rate or output size, are characteristic of an anomaly. Lineage can reflect such changes in actor behavior over time and across different actor instances, so mining lineage to identify such changes can be useful in debugging faulty actors in a dataflow.
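As an illustration of this idea, the sketch below computes each actor's average selectivity (output size over input size) from its associations and flags actors whose behaviour deviates sharply from the rest using a simple z-score test; the threshold and the use of selectivity alone are assumptions made for brevity:

```python
import statistics

def selectivity(association):
    inputs, _actor, outputs = association
    return len(outputs) / max(len(inputs), 1)

def flag_suspicious_actors(associations_by_actor, threshold=3.0):
    """Flag actor instances whose average selectivity is an outlier."""
    per_actor = {actor: statistics.mean(selectivity(a) for a in assocs)
                 for actor, assocs in associations_by_actor.items()}
    mean = statistics.mean(per_actor.values())
    spread = statistics.pstdev(per_actor.values()) or 1e-9
    return [actor for actor, value in per_actor.items()
            if abs(value - mean) / spread > threshold]
```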
The second problem, the existence of outliers, can also be identified by running the dataflow stepwise and examining the transformed outputs. The data scientist finds a subset of outputs that is not in accordance with the rest of the outputs; the inputs causing these bad outputs are the outliers in the data. This problem can be solved by removing the set of outliers from the data and replaying the entire dataflow, or by modifying the machine learning algorithm by adding, removing or moving actors in the dataflow. The changes to the dataflow are successful if the replayed dataflow does not produce bad outputs.
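A sketch of this procedure, under the assumptions that the outputs are numeric, that `trace_inputs_of` returns the contributing inputs of an output (i.e. its backward lineage), and that `run_dataflow` re-executes the dataflow on a given input set; all three names are illustrative placeholders:

```python
def outlier_outputs(outputs):
    """Return the ids of outputs lying outside 1.5 * IQR of the output values."""
    values = sorted(outputs.values())
    q1 = values[len(values) // 4]
    q3 = values[(3 * len(values)) // 4]
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return {oid for oid, value in outputs.items() if value < low or value > high}

def replay_without_outliers(inputs, outputs, trace_inputs_of, run_dataflow):
    """Remove the inputs that produced outlier outputs and replay the dataflow."""
    bad_outputs = outlier_outputs(outputs)
    bad_inputs = set()
    for output_id in bad_outputs:
        bad_inputs |= set(trace_inputs_of(output_id))
    cleaned = {input_id: value for input_id, value in inputs.items()
               if input_id not in bad_inputs}
    return run_dataflow(cleaned)
```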
Although the utilization of data lineage methodologies represents a novel approach to the debugging ofBig Datapipelines, the process is not straightforward. A number of challenges must be addressed, including the scalability of the lineage store, the fault tolerance of the lineage store, the accurate capture of lineage for black box operators, and numerous other considerations. These challenges must be carefully evaluated in order to develop a realistic design for data lineage capture, taking into account the inherent trade-offs between them.
DISC systems are primarily batch processing systems designed for high throughput. They execute several jobs per analytics, with several tasks per job. The overall number of operators executing at any time in a cluster can range from hundreds to thousands depending on the cluster size. Lineage capture for these systems must be able to scale to both large volumes of data and numerous operators to avoid being a bottleneck for the DISC analytics.
Lineage capture systems must also be fault tolerant to avoid rerunning data flows to capture lineage. At the same time, they must accommodate failures in the DISC system itself: they must be able to identify a failed DISC task and avoid storing duplicate copies of lineage from the partial lineage generated by the failed task and the lineage produced by the restarted task. A lineage system should also be able to gracefully handle multiple instances of local lineage systems going down. This can be achieved by storing replicas of lineage associations on multiple machines; a replica can then act as a backup if the primary copy is lost.
Lineage systems for DISC dataflows must be able to capture accurate lineage across black-box operators to enable fine-grain debugging. Current approaches to this include Prober, which seeks to find the minimal set of inputs that can produce a specified output for a black-box operator by replaying the dataflow several times to deduce the minimal set,[32]and dynamic slicing[33]to capture lineage forNoSQLoperators through binary rewriting to compute dynamic slices. Although producing highly accurate lineage, such techniques can incur significant time overheads for capture or tracing, and it may be preferable to instead trade some accuracy for better performance. Thus, there is a need for a lineage collection system for DISC dataflows that can capture lineage from arbitrary operators with reasonable accuracy, and without significant overheads in capture or tracing.
Tracing is essential for debugging, during which a user can issue multiple tracing queries. Thus, it is important that tracing has fast turnaround times. Ikeda et al.[27]can perform efficient backward tracing queries for MapReduce dataflows but are not generic to different DISC systems and do not perform efficient forward queries. Lipstick,[34]a lineage system for Pig,[35]while able to perform both backward and forward tracing, is specific to Pig and SQL operators and can only perform coarse-grain tracing for black-box operators. Thus, there is a need for a lineage system that enables efficient forward and backward tracing for generic DISC systems and dataflows with black-box operators.
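A sketch of generic backward and forward tracing over a flat collection of {i, T, o} associations; real systems index the association tables rather than scanning them, but the traversal logic is the same in spirit:

```python
def backward_trace(associations, output_id):
    """Return the set of input ids from which the given output is derived."""
    lineage, frontier = set(), {output_id}
    while frontier:
        data_id = frontier.pop()
        for inputs, _actor, outputs in associations:
            if data_id in outputs:
                new = set(inputs) - lineage
                lineage |= new
                frontier |= new
    return lineage

def forward_trace(associations, input_id):
    """Return the set of output ids affected by the given input."""
    affected, frontier = set(), {input_id}
    while frontier:
        data_id = frontier.pop()
        for inputs, _actor, outputs in associations:
            if data_id in inputs:
                new = set(outputs) - affected
                affected |= new
                frontier |= new
    return affected
```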
Replaying only specific inputs or portions of dataflow is crucial for efficient debugging and simulating what-if scenarios. Ikeda et al. present a methodology for a lineage-based refresh, which selectively replays updated inputs to recompute affected outputs.[36]This is useful during debugging for re-computing outputs when a bad input has been fixed. However, sometimes a user may want to remove the bad input and replay the lineage of outputs previously affected by the error to produce error-free outputs. We call this an exclusive replay. Another use of replay in debugging involves replaying bad inputs for stepwise debugging (called selective replay). Current approaches to using lineage in DISC systems do not address these. Thus, there is a need for a lineage system that can perform both exclusive and selective replays to address different debugging needs.
One of the primary debugging concerns in DISC systems is identifying faulty operators. In long dataflows with several hundreds of operators or tasks, manual inspection can be tedious and prohibitive. Even if lineage is used to narrow the subset of operators to examine, the lineage of a single output can still span several operators. There is a need for an inexpensive automated debugging system, which can substantially narrow the set of potentially faulty operators, with reasonable accuracy, to minimize the amount of manual examination required.
|
https://en.wikipedia.org/wiki/Data_lineage
|
Dataphilanthropyrefers to the practice of privatecompaniesdonatingcorporate data. This data is usually donated tononprofitsor donation-run organizations that have difficulty keeping up with expensive data collection technology.[1]The concept was introduced through theUnited Nations Global Pulseinitiative in 2011 to explore corporate data assets forhumanitarian,academic, andsocietalcauses.[2]For example, anonymized mobile data could be used to track disease outbreaks, or data on consumer actions may be shared with researchers to study public health and economic trends.[3]
A large portion of data collected from theinternetconsists of user-generated content, such asblogs, social media posts, and information submitted throughlead generationand data forms. Additionally, corporations gather and analyze consumer data to gain insight into customer behavior, identify potential markets, and inform investment decisions. United Nations Global Pulse directorRobert Kirkpatrickhas referred to this type of data as "massive passive data" or "data exhaust."[4]
While data philanthropy can enhance development policies, making users' private data available to various organizations raises concerns regardingprivacy, ownership, and the equitable use of data.[5]Different techniques, such asdifferential privacyandalphanumericstrings of information, can allow access to personal data while ensuring user anonymity. However, even if these algorithms work,re-identificationmay still be possible.[6]
Another challenge is convincing corporations to share their data. The data collected by corporations provides them with market competitiveness and insight regardingconsumer behavior. Corporations may fear losing their competitive edge if they share the information they have collected with the public.[6]
Numerous moral challenges are also encountered. In 2016,Mariarosaria Taddeo, a digital ethics professor at theUniversity of Oxford, proposed an ethical framework to address them.[7]
The goal of data philanthropy is to create a globaldata commonswhere companies, governments, and individuals can contribute anonymous, aggregated datasets.[2]The United Nations Global Pulse offers four different tactics that companies can use to share their data that preserve consumer anonymity:[6]
Many corporations take part in data philanthropy, including social networking platforms (e.g.,Facebook,Twitter), telecommunications providers (e.g.,Verizon,AT&T), and search engines (e.g.,Google,Bing). Collecting and sharing anonymized, aggregated user-generated data is made available through data-sharing systems to support research, policy development, and social impact initiatives. By participating in such efforts, these organizations contribute to causes regarded as beneficial to society, allowing institutions to give back meaningfully. With the onset oftechnologicaladvancements, the sharing of data on a global scale and an in-depth analysis of these data structures could mitigate the effects of global issues such asnatural disastersandepidemics. Robert Kirkpatrick, the Director of the United Nations Global Pulse, has argued that this aggregated information is beneficial for the common good and can lead to developments inresearchanddataproduction in a range of varied fields.[4]
Health researchers use digital disease detection by collecting data from various sources—such as social media platforms (e.g.,Twitter,Facebook), mobile devices (e.g.,cell phones,smartphones), online search queries,mobile apps, and sensor data from wearables and environmental sensors—to monitor and predict the spread of infectious diseases. This approach allows them to track and anticipate outbreaks of epidemics (e.g.,COVID-19,Ebola), pandemics, vector-borne diseases (e.g.,malaria,dengue fever), and respiratory illnesses (e.g.,influenza,SARS), improving response and intervention strategies for thespread of diseases.[8]
In 2008,Centers for Disease Control and Preventioncollaborated withGoogleand launchedGoogle Flu Trends, a website that tracked flu-related searches and user locations to track the spread of the flu. Users could visit Google Flu Trends to compare the amount of flu-related search activity versus the reported numbers of flu outbreaks on a graphical map. One drawback of this method of tracking was that Google searches are sometimes performed due to curiosity rather than when an individual is suffering from the flu. According to Ashley Fowlkes, an epidemiologist in theCDCInfluenza division, "The Google Flu Trends system tries to account for that type ofmedia biasby modeling search terms over time to see which ones remain stable."[8]Google Flu Trends is no longer publishing current flu estimates on the public website; however, visitors to the site can still view and download previous estimates. Current data can be shared with verified researchers.[9]
A study from theHarvard School of Public Health(HSPH), published in the October 12, 2012 issue ofScience, discussed how phone data helped curb the spread ofmalariain Kenya. The researchers mapped phone calls and texts made by 14,816,521 Kenyanmobile phonesubscribers.[10]When individuals left their primary living location, the destination and length of journey were calculated. This data was then compared to a 2009 malariaprevalencemap to estimate the disease's commonality in each location. Combining all this information, the researchers could estimate the probability of an individual carrying malaria and map the movement of the disease. This research can be used to track the spread of similar diseases.[10]
Calling patterns ofmobile phoneusers can determine thesocioeconomicstandings of the populace, which can be used to deduce "its access to housing, education, healthcare, and basic services such as water and electricity."[4]Researchers fromColumbia UniversityandKarolinska Instituteused dailySIM cardlocation data from both before and after the2010 Haiti earthquaketo estimate the movement of people both in response to the earthquake and during the related2010 Haiti cholera outbreak.[11]Their research suggests that mobile phone data can provide rapid and accurate estimates of population movements during disasters and outbreaks of infectious disease. Big data can also provide information on looming disasters and can assist relief organizations in rapid-response and locating displaced individuals. By analyzing specific patterns within this 'big data', governments andNGOscan enhance responses to disruptive events such as natural disasters, diseaseoutbreaks, and global economic crises. Leveraging real-time information enables a deeper understanding of individual well-being, allowing for more effective interventions. Corporations utilize digital services, such as human sensor systems, to detect and solve impending problems withincommunities. This is a strategy used by the private sector to anonymously share customer information for public benefit, while preserving user privacy.[4]
Povertystill remains a worldwide issue, with over 2.5 billion people[12]currently impoverished. Statistics indicate the widespread use of mobile phones, even within impoverished communities.[13]Additional data can be collected throughInternet access, social media, utility payments andgovernmentalstatistics. Data-driven activities can lead to the accumulation of 'big data', which in turn can assist international non-governmental organizations in documenting and evaluating the needs of underprivileged populations.[10]Through data philanthropy,NGOscan distribute information while cooperating with governments and private companies.[12]
Data philanthropy incorporates aspects of social philanthropy by allowing corporations to create profound impacts through the act of giving back by dispersing proprietary datasets.[14] The public sector collects and preserves information, considered an essential asset. Companies track and analyze users' online activities to gain insight into their needs related to new products and services.[15] These companies view the welfare of the population as key to business expansion and progression, using their data to highlight the issues facing global citizens.[4] Experts in the private sector emphasize the importance of integrating diverse data sources, such as retail, mobile, and social media data, to develop essential solutions for global challenges. In Data Philanthropy: New Paradigms for Collaborative Problem Solving (2022), authors Stefaan Verhulst and Andrew Young discuss this approach. Robert Kirkpatrick argues that, although sharing private information carries inherent risks, it ultimately yields public benefits, supporting the common good.[16] The digital revolution causes an extensive production of big data that is user-generated and available on the web. Corporations accumulate information on customer preferences through the digital services they use and the products they purchase, to gain clear insights into their clientele and future market opportunities.[4] However, the rights of individuals concerning privacy and ownership of data are controversial, as governments and other institutions can use this collective data for unethical purposes.
Data philanthropy is crucial in the academic field. Researchers face numerous challenges in obtaining data, which is often restricted to a select group of individuals who have exclusive access to certain resources, such as social media feeds. This limited access allows these researchers to generate additional insights and pursue innovative studies. For instance,X Corp. (formerly Twitter Inc.) offers access to its real-time APIs at different price points, such as $5,000 for the ability to read 1,000,000 posts each month, a cost that frequently exceeds the financial capabilities of many researchers.
Data philanthropy aids thehuman rightsmovement by assisting in dispersing evidence fortruth commissionsandwar crimes tribunals. Advocates for human rights gather data on abuses occurring within countries, which is then used for scientific analysis to raise awareness and drive action. For example, non-profit organizations compile data from human rights monitors in war zones to assist theUN High Commissioner for Human Rights. This data uncovers inconsistencies in the number of war casualties, leading to international attention and influencing global policy discussions.[17]
|
https://en.wikipedia.org/wiki/Data_philanthropy
|
Adocument-oriented database, ordocument store, is acomputer programand data storage system designed for storing, retrieving and managing document-oriented information, also known assemi-structured data.[1]
Document-oriented databases are one of the main categories ofNoSQLdatabases, and the popularity of the term "document-oriented database" has grown[2]with the use of the term NoSQL itself.XML databasesare a subclass of document-oriented databases that are optimized to work withXMLdocuments.Graph databasesare similar, but add another layer, therelationship, which allows them to link documents for rapid traversal.
Document-oriented databases are inherently a subclass of thekey-value store, another NoSQL database concept. The difference[contradictory]lies in the way the data is processed; in a key-value store, the data is considered to be inherently opaque to the database, whereas a document-oriented system relies on internal structure in thedocumentin order to extractmetadatathat the database engine uses for further optimization. Although the difference is often negligible due to tools in the systems,[a]conceptually the document-store is designed to offer a richer experience with modern programming techniques.
Document databases[b]contrast strongly with the traditionalrelational database(RDB). Relational databases generally store data in separatetablesthat are defined by the programmer, and a single object may be spread across several tables. Document databases store all information for a given object in a single instance in the database, and every stored object can be different from every other. This eliminates the need forobject-relational mappingwhile loading data into the database.
The central concept of a document-oriented database is the notion of adocument. While each document-oriented database implementation differs on the details of this definition, in general, they all assume documents encapsulate and encode data (or information) in some standard format or encoding. Encodings in use includeXML,YAML,JSON, as well as binary forms likeBSON.
Documents in a document store are roughly equivalent to the programming concept of an object. They are not required to adhere to a standard schema, nor will they have all the same sections, slots, parts or keys. Generally, programs using objects have many different types of objects, and those objects often have many optional fields. Every object, even those of the same class, can look very different. Document stores are similar in that they allow different types of documents in a single store, allow the fields within them to be optional, and often allow them to be encoded using different encoding systems. For example, the following is a document, encoded in JSON:
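(An illustrative document; the field names and values are arbitrary placeholders.)

```json
{
  "FirstName": "Bob",
  "Address": "5 Oak St.",
  "Hobby": "sailing"
}
```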
A second document might be encoded in XML as:
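(Again with placeholder values, chosen so that the two documents overlap only in part.)

```xml
<contact>
  <firstname>Bob</firstname>
  <lastname>Smith</lastname>
  <phone type="Cell">(123) 555-0178</phone>
  <phone type="Home">(890) 555-0133</phone>
  <address>
    <street>123 Back St.</street>
    <city>Boys</city>
    <state>AR</state>
    <zip>32225</zip>
    <country>US</country>
  </address>
</contact>
```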
These two documents share some structural elements with one another, but each also has unique elements. The structure, text and other data inside the document are usually referred to as the document's content and may be referenced via retrieval or editing methods (see below). Unlike a relational database, where every record contains the same fields and unused fields are left empty, there are no empty 'fields' in either document (record) in the above example. This approach allows new information to be added to some records without requiring that every other record in the database share the same structure.
Document databases typically provide for additionalmetadatato be associated with and stored along with the document content. That metadata may be related to facilities the datastore provides for organizing documents, providing security, or other implementation specific features.
The core operations that a document-oriented database supports for documents are similar to those of other databases, and while the terminology is not perfectly standardized, most practitioners will recognize them as CRUD: creation (or insertion), retrieval (query, search, read, or find), update (or edit), and deletion (or removal).
Documents are addressed in the database via a uniquekeythat represents that document. This key is a simpleidentifier(or ID), typically astring, aURI, or apath. The key can be used to retrieve the document from the database. Typically the database retains anindexon the key to speed up document retrieval, and in some cases the key is required to create or insert the document into the database.
Another defining characteristic of a document-oriented database is that, beyond the simple key-to-document lookup that can be used to retrieve a document, the database offers an API or query language that allows the user to retrieve documents based on content (or metadata). For example, you may want a query that retrieves all the documents with a certain field set to a certain value. The set of query APIs or query language features available, as well as the expected performance of the queries, varies significantly from one implementation to another. Likewise, the specific set of indexing options and configuration that are available vary greatly by implementation.
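As one concrete illustration, a sketch using MongoDB's Python driver (MongoDB is only one example of a document store; the database, collection, and field names below are placeholders):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
contacts = client["example_db"]["contacts"]

# Create: insert a document; the store assigns a unique key (_id) if none is supplied.
doc_id = contacts.insert_one({"FirstName": "Bob", "Hobby": "sailing"}).inserted_id

# Key-based lookup: retrieve the document by its identifier.
by_key = contacts.find_one({"_id": doc_id})

# Content-based query: retrieve every document with a certain field set to a certain value.
sailors = list(contacts.find({"Hobby": "sailing"}))
```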
It is here that the document store varies most from the key-value store. In theory, the values in a key-value store are opaque to the store; they are essentially black boxes. They may offer search systems similar to those of a document store, but may have less understanding of the organization of the content. Document stores use the metadata in the document to classify the content, allowing them, for instance, to understand that one series of digits is a phone number and another is a postal code. This lets them search on those types of data; for instance, a search for all phone numbers containing 555 would ignore the zip code 55555.
Document databases typically provide some mechanism for updating or editing the content (or metadata) of a document, either by allowing for replacement of the entire document, or individual structural pieces of the document.
Document database implementations offer a variety of ways of organizing documents, including notions of collections, tags, non-visible metadata, and directory hierarchies.
Sometimes these organizational notions vary in how much they are logical rather than physical (e.g. on disk or in memory) representations.
A document-oriented database is a specialized key-value store, which itself is another NoSQL database category. In a simple key-value store, the document content is opaque. A document-oriented database provides APIs or a query/update language that exposes the ability to query or update based on the internal structure in the document. This difference may be minor for users that do not need the richer query, retrieval, or editing APIs that are typically provided by document databases. Modern key-value stores often include features for working with metadata, blurring the lines between the two types of store.
Some search engine (akainformation retrieval) systems likeApache SolrandElasticsearchprovide enough of the core operations on documents to fit the definition of a document-oriented database.
In a relational database, data is first categorized into a number of predefined types, andtablesare created to hold individual entries, orrecords, of each type. The tables define the data within each record'sfields, meaning that every record in the table has the same overall form. The administrator also defines therelationshipsbetween the tables, and selects certain fields that they believe will be most commonly used for searching and definesindexeson them. A key concept in the relational design is that any data that may be repeated is normally placed in its own table, and if these instances are related to each other, a column is selected to group them together, theforeign key. This design is known asdatabase normalization.[3]
For example, an address book application will generally need to store the contact name, an optional image, one or more phone numbers, one or more mailing addresses, and one or more email addresses. In a canonical relational database, tables would be created for each of these record types with predefined fields for each bit of data: the CONTACT table might include FIRST_NAME, LAST_NAME and IMAGE columns, while the PHONE_NUMBER table might include COUNTRY_CODE, AREA_CODE, PHONE_NUMBER and TYPE (home, work, etc.). The PHONE_NUMBER table also contains a foreign key column, "CONTACT_ID", which holds the unique ID number assigned to the contact when it was created. In order to recreate the original contact, the database engine uses the foreign keys to look for the related items across the group of tables and reconstruct the original data.
In contrast, in a document-oriented database there may be no internal structure that maps directly onto the concept of a table, and the fields and relationships generally don't exist as predefined concepts. Instead, all of the data for an object is placed in a single document, and stored in the database as a single entry. In the address book example, the document would contain the contact's name, image, and any contact info, all in a single record. That entry is accessed through its key, which allows the database to retrieve and return the document to the application. No additional work is needed to retrieve the related data; all of this is returned in a single object.
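As a hedged illustration, the contact described above might be stored as a single document along these lines (the field names are placeholders, not a prescribed schema):

```json
{
  "FirstName": "Alice",
  "LastName": "Jones",
  "Phones": [
    {"Type": "home", "CountryCode": 1, "AreaCode": 212, "Number": "555-0101"},
    {"Type": "work", "CountryCode": 1, "AreaCode": 646, "Number": "555-0199"}
  ],
  "Emails": ["alice@example.com"]
}
```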
A key difference between the document-oriented and relational models is that the data formats are not predefined in the document case. In most cases, any sort of document can be stored in any database, and those documents can change in type and form at any time. If one wishes to add a COUNTRY_FLAG to a CONTACT, this field can be added to new documents as they are inserted; this will have no effect on the database or on the existing documents already stored. To aid retrieval of information from the database, document-oriented systems generally allow the administrator to provide hints to the database to look for certain types of information. These work in a similar fashion to indexes in the relational case. Most also offer the ability to add additional metadata outside of the content of the document itself, for instance, tagging entries as being part of an address book, which allows the programmer to retrieve related types of information, like "all the address book entries". This provides functionality similar to a table, but separates the concept (categories of data) from its physical implementation (tables).
In the classic normalized relational model, objects in the database are represented as separate rows of data with no inherent structure beyond that given to them as they are retrieved. This leads to problems when trying to translate programming objects to and from their associated database rows, a problem known asobject-relational impedance mismatch.[4]Document stores more closely, or in some cases directly, map programming objects into the store. These are often marketed using the termNoSQL.
Most XML databases are document-oriented databases.
|
https://en.wikipedia.org/wiki/Document-oriented_database
|
This is an alphabetical list of notable IT companies using the marketing termbig data:
|
https://en.wikipedia.org/wiki/List_of_big_data_companies
|