Eliminative materialism (also called eliminativism) is a materialist position in the philosophy of mind that expresses the idea that the majority of mental states in folk psychology do not exist.[1] Some supporters of eliminativism argue that no coherent neural basis will be found for many everyday psychological concepts such as belief or desire, since they are poorly defined. The argument is that psychological concepts of behavior and experience should be judged by how well they reduce to the biological level.[2] Other versions entail the nonexistence of conscious mental states such as pain and visual perceptions.[3]

Eliminativism about a class of entities is the view that the class of entities does not exist.[4] For example, materialism tends to be eliminativist about the soul; modern chemists are eliminativist about phlogiston; modern biologists are eliminativist about élan vital; and modern physicists are eliminativist about luminiferous ether. Eliminative materialism is the relatively new (1960s-70s) idea that certain classes of mental entities that common sense takes for granted, such as beliefs, desires, and the subjective sensation of pain, do not exist.[5][6] The most common versions are eliminativism about propositional attitudes, as expressed by Paul and Patricia Churchland,[7] and eliminativism about qualia (subjective interpretations about particular instances of subjective experience), as expressed by Daniel Dennett, Georges Rey,[3] and Jacy Reese Anthis.[8]

In the context of materialist understandings of psychology, eliminativism is the opposite of reductive materialism, which argues that mental states as conventionally understood do exist and directly correspond to the physical state of the nervous system.[9] An intermediate position, revisionary materialism, often argues that the mental state in question will prove to be somewhat reducible to physical phenomena, with some changes needed to the commonsense concept.[1][10]

Since eliminative materialism arguably claims that future research will fail to find a neuronal basis for various mental phenomena, it may need to wait for science to progress further. One might question the position on these grounds, but philosophers like Churchland argue that eliminativism is often necessary in order to open the minds of thinkers to new evidence and better explanations.[9] Views closely related to eliminativism include illusionism and quietism.

Various arguments have been made for and against eliminative materialism over the last 50 years. The view's history can be traced to David Hume, who rejected the idea of the "self" on the grounds that it was not based on any impression.[11] Most arguments for the view are based on the assumption that people's commonsense view of the mind is actually an implicit theory, to be compared and contrasted with other scientific theories in its explanatory success, accuracy, and ability to predict the future. Eliminativists argue that commonsense "folk" psychology has failed and will eventually need to be replaced by explanations derived from neuroscience. These philosophers therefore tend to emphasize the importance of neuroscientific research as well as developments in artificial intelligence. Philosophers who argue against eliminativism may take several approaches.
Simulation theorists, like Robert Gordon[12] and Alvin Goldman,[13] argue that folk psychology is not a theory but depends on internal simulation of others, and therefore is not subject to falsification in the same way that theories are. Jerry Fodor, among others,[14] argues that folk psychology is, in fact, a successful (even indispensable) theory. Another view is that eliminativism assumes the existence of the beliefs and other entities it seeks to "eliminate" and is thus self-refuting.[15]

Eliminativism maintains that the commonsense understanding of the mind is mistaken, and that neuroscience will one day reveal that mental states talked about in everyday discourse, using words such as "intend", "believe", "desire", and "love", do not refer to anything real. Because of the inadequacy of natural languages, people mistakenly think that they have such beliefs and desires.[2] Some eliminativists, such as Frank Jackson, claim that consciousness does not exist except as an epiphenomenon of brain function; others, such as Georges Rey, claim that the concept will eventually be eliminated as neuroscience progresses.[3][16] Consciousness and folk psychology are separate issues, and it is possible to take an eliminative stance on one but not the other.[4]

The roots of eliminativism go back to the writings of Wilfrid Sellars, W.V.O. Quine, Paul Feyerabend, and Richard Rorty.[5][6][17] The term "eliminative materialism" was first introduced by James Cornman in 1968 while describing a version of physicalism endorsed by Rorty. The later Ludwig Wittgenstein was also an important inspiration for eliminativism, particularly with his attack on "private objects" as "grammatical fictions".[4]

Early eliminativists such as Rorty and Feyerabend often conflated two different notions of the sort of elimination that the term "eliminative materialism" entailed. On the one hand, they claimed, the cognitive sciences that will ultimately give people a correct account of the mind's workings will not employ terms that refer to commonsense mental states like beliefs and desires; these states will not be part of the ontology of a mature cognitive science.[5][6] But critics immediately countered that this view was indistinguishable from the identity theory of mind.[2][18] Quine himself wondered what exactly was so eliminative about eliminative materialism:

Is physicalism a repudiation of mental objects after all, or a theory of them? Does it repudiate the mental state of pain or anger in favor of its physical concomitant, or does it identify the mental state with a state of the physical organism (and so a state of the physical organism with the mental state)?[19]

On the other hand, the same philosophers claimed that commonsense mental states simply do not exist. But critics pointed out that eliminativists could not have it both ways: either mental states exist and will ultimately be explained in terms of lower-level neurophysiological processes, or they do not.[2][18] Modern eliminativists have much more clearly expressed the view that mental phenomena simply do not exist and will eventually be eliminated from people's thinking about the brain in the same way that demons have been eliminated from people's thinking about mental illness and psychopathology.[4]

While it was a minority view in the 1960s, eliminative materialism gained prominence and acceptance during the 1980s.[20] Proponents of this view, such as B.F.
Skinner, often drew parallels to previously superseded scientific theories (such as the theory of the four humours, the phlogiston theory of combustion, and the vital force theory of life), all of which have been successfully eliminated, in attempting to establish their thesis about the nature of the mental. In these cases, science has not produced more detailed versions or reductions of these theories, but rejected them altogether as obsolete. Radical behaviorists, such as Skinner, argued that folk psychology is already obsolete and should be replaced by descriptions of histories of reinforcement and punishment.[21] Such views were eventually abandoned. Patricia and Paul Churchland argued that folk psychology will be gradually replaced as neuroscience matures.[20]

Eliminativism is not only motivated by philosophical considerations; it is also a prediction about what form future scientific theories will take. Eliminativist philosophers therefore tend to be concerned with data from the relevant brain and cognitive sciences.[22] In addition, because eliminativism is essentially predictive in nature, different theorists can and often do disagree about which aspects of folk psychology will be eliminated from folk psychological vocabulary. None of these philosophers are eliminativists tout court.[23][24][25]

Today, the eliminativist view is most closely associated with the Churchlands, who deny the existence of propositional attitudes (a subclass of intentional states), and with Daniel Dennett, who is generally considered an eliminativist about qualia and phenomenal aspects of consciousness. One way to summarize the difference between the Churchlands' view and Dennett's is that the Churchlands are eliminativists about propositional attitudes but reductionists about qualia, while Dennett is an anti-reductionist about propositional attitudes and an eliminativist about qualia.[4][25][26][27]

More recently, Brian Tomasik and Jacy Reese Anthis have made various arguments for eliminativism.[28][29] Elizabeth Irvine has argued that both science and folk psychology do not treat mental states as having phenomenal properties, so the hard problem "may not be a genuine problem for non-philosophers (despite its overwhelming obviousness to philosophers), and questions about consciousness may well 'shatter' into more specific questions about particular capacities."[30] In 2022, Anthis published Consciousness Semanticism: A Precise Eliminativist Theory of Consciousness, which asserts that "formal argumentation from precise semantics" dissolves the hard problem because of the contradiction between the precision implied by philosophical theory and the vagueness of its definitions, which implies there is no fact of the matter for phenomenological consciousness.[8]

Eliminativists such as Paul and Patricia Churchland argue that folk psychology is a fully developed but non-formalized theory of human behavior. It is used to explain and make predictions about human mental states and behavior. This view is often referred to as the theory of mind or simply theory-theory, for it theorizes the existence of an unacknowledged theory. As a theory in the scientific sense, eliminativists maintain, folk psychology must be evaluated on the basis of its predictive power and explanatory success as a research program for the investigation of the mind/brain.[31][32]

Such eliminativists have developed different arguments to show that folk psychology is a seriously mistaken theory and should be abolished.
They argue that folk psychology excludes from its purview, or has traditionally been mistaken about, many important mental phenomena that can be and are being examined and explained by modern neuroscience. Some examples are dreaming, consciousness, mental disorders, learning processes, and memory abilities. Furthermore, they argue, folk psychology's development over the last 2,500 years has not been significant; it is therefore a stagnant theory. The ancient Greeks already had a folk psychology comparable to modern views. In contrast to this lack of development, neuroscience is rapidly progressing and, in their view, can explain many cognitive processes that folk psychology cannot.[22][33]

Folk psychology retains characteristics of now obsolete theories or legends from the past. Ancient societies tried to explain the physical mysteries of nature by ascribing mental conditions to them in such statements as "the sea is angry". Gradually, these everyday folk psychological explanations were replaced by more efficient scientific descriptions. Today, eliminativists argue, there is no reason not to accept an effective scientific account of cognition. If such an explanation existed, then there would be no need for folk-psychological explanations of behavior, and the latter would be eliminated the same way as the mythological explanations the ancients used.[34]

Another line of argument is the meta-induction based on what eliminativists view as the disastrous historical record of folk theories in general. Ancient pre-scientific "theories" of folk biology, folk physics, and folk cosmology have all proven radically wrong. Eliminativists argue the same in the case of folk psychology. To the eliminativist, there seems no logical basis for making an exception just because folk psychology has lasted longer and is more intuitive or instinctively plausible than other folk theories.[33] Indeed, the eliminativists warn, considerations of intuitive plausibility may be precisely the result of the deeply entrenched position of folk psychology in society. It may be that people's beliefs and other such states are as theory-laden as external perceptions and hence that intuitions will tend to be biased in their favor.[23]

Much of folk psychology involves the attribution of intentional states (or, more specifically, a subclass of them, propositional attitudes). Eliminativists point out that these states are generally ascribed syntactic and semantic properties. An example of this is the language of thought hypothesis, which attributes a discrete, combinatorial syntax and other linguistic properties to these mental phenomena. Eliminativists argue that such discrete, combinatorial characteristics have no place in neuroscience, which speaks of action potentials, spiking frequencies, and other continuous and distributed effects. Hence, the syntactic structures assumed by folk psychology have no place in a structure like the brain.[22] To this there have been two responses. On the one hand, some philosophers deny that mental states are linguistic and see this as a straw man argument.[35][36] The other view is represented by those who subscribe to a "language of thought". They assert that mental states can be multiply realized and that functional characterizations are just higher-level characterizations of what happens at the physical level.[37][38]

It has also been argued against folk psychology that the intentionality of mental states like belief implies that they have semantic qualities.
Specifically, their meaning is determined by the things they are about in the external world. This makes it difficult to explain how they can play the causal roles they are supposed to in cognitive processes.[39]

In recent years, this latter argument has been fortified by the theory of connectionism. Many connectionist models of the brain have been developed in which the processes of language learning and other forms of representation are highly distributed and parallel. This tends to indicate that such discrete and semantically endowed entities as beliefs and desires are unnecessary.[40]

The problem of intentionality poses a significant challenge to materialist accounts of cognition. If thoughts are neural processes, we must explain how specific neural networks can be "about" external objects or concepts. We can think about Paris, for instance, but there is no clear mechanism by which neurons can represent a city.[41]

Traditional analogies fail to explain this phenomenon. Unlike a photograph, neurons do not physically resemble Paris. Nor can we appeal to conventional symbolism, as we might with a stop sign representing the action of stopping: such symbols derive their meaning from social agreement and interpretation, which are not applicable to a brain's workings. Attempts to posit a separate neural process that assigns meaning to the "Paris neurons" merely shift the problem without resolving it, as we then need to explain how this secondary process can assign meaning, initiating an infinite regress.[42]

The only way to break this regress is to postulate matter with intrinsic meaning, independent of external interpretation. But our current understanding of physics precludes the existence of such matter. The fundamental particles and forces physics describes have no inherent semantic properties that could ground intentionality. This physical limitation presents a formidable obstacle to materialist theories of mind that rely on neural representations. It suggests that intentionality, as commonly understood, may be incompatible with a purely physicalist worldview, and hence that our folk psychological concepts of intentional states will be eliminated in light of scientific understanding.[41]

Another argument for eliminative materialism stems from evolutionary theory. This argument suggests that natural selection, the process shaping our neural architecture, cannot solve the "disjunction problem", which challenges the idea that neural states can store specific, determinate propositional content. Natural selection, as Darwin described it, is primarily a process of selection against rather than selection for traits. It passively filters out traits below a certain fitness threshold rather than actively choosing beneficial ones. This lack of foresight or purpose in evolution becomes problematic when considering how neural states could come to represent unique propositions.[43][44]

The disjunction problem arises from the fact that natural selection cannot discriminate between coextensive properties. For example, consider two genes close together on a chromosome. One gene might code for a beneficial trait, while the other codes for a neutral or even harmful trait. Due to their proximity, these genes are often inherited together, a phenomenon known as genetic linkage. Natural selection cannot distinguish between these linked traits; it can only act on their combined effect on the organism's fitness.
Only random processes like genetic crossover, where chromosomes exchange genetic material during reproduction, can break these linkages. Until such a break occurs, natural selection remains "blind" to the linked genes' individual effects.[44][45]

Eliminativists argue that if natural selection, the process responsible for shaping our neural architecture, cannot solve the disjunction problem, then our brains cannot store unique, non-disjunctive propositions, as folk psychology requires. Instead, they suggest that neural states carry inherently disjunctive or indeterminate content. This argument leads eliminativists to reject the notion that neural states have specific, determinate informational content corresponding to the discrete, non-disjunctive propositions of folk psychology. This evolutionary argument adds to the eliminativist case that our commonsense understanding of beliefs, desires, and other propositional attitudes is flawed and should be replaced by a neuroscientific account that acknowledges the indeterminate nature of neural representations.[46][47]

Some eliminativists reject intentionality while accepting the existence of qualia; other eliminativists reject qualia while accepting intentionality. Many philosophers argue that intentionality cannot exist without consciousness and vice versa, and so any philosopher who accepts one while rejecting the other is being inconsistent: to be consistent, one must accept both qualia and intentionality or reject them both. Philosophers who argue for such a position include Philip Goff, Terence Horgan, Uriah Kriegel, and John Tienson.[48][49] The philosopher Keith Frankish accepts the existence of intentionality but holds to illusionism about consciousness because he rejects qualia. Goff notes that beliefs are a kind of propositional thought.

The thesis of eliminativism seems so obviously wrong to many critics, who find it undeniable that people know immediately and indubitably that they have minds, that argumentation seems unnecessary. This sort of intuition-pumping is illustrated by asking what happens when one asks oneself honestly whether one has mental states.[50] Eliminativists object to such a rebuttal of their position by claiming that intuitions are often mistaken. Analogies from the history of science are frequently invoked to buttress this observation: it may appear obvious that the sun travels around the earth, for example, but this was nevertheless proved wrong. Similarly, it may appear obvious that apart from neural events there are also mental conditions, but that could be false.[23]

But even if one accepts the susceptibility to error of people's intuitions, the objection can be reformulated: if the existence of mental conditions seems perfectly obvious and is central to our conception of the world, then enormously strong arguments are needed to deny their existence. Furthermore, these arguments, to be consistent, must be formulated in a way that does not presuppose the existence of entities like "mental states", "logical arguments", and "ideas", lest they be self-contradictory.[51] Those who accept this objection say that the arguments for eliminativism are far too weak to establish such a radical claim and that there is thus no reason to accept eliminativism.[50]

Some philosophers, such as Paul Boghossian, have attempted to show that eliminativism is in some sense self-refuting, since the theory presupposes the existence of mental phenomena.
If eliminativism is true, then eliminativists must accept an intentional property like truth, supposing that in order to assert something one must believe it. Hence, for eliminativism to be asserted as a thesis, the eliminativist must believe that it is true; if so, there are beliefs, and eliminativism is false.[15][52]

Georges Rey and Michael Devitt reply to this objection by invoking deflationary semantic theories that avoid analyzing predicates like "x is true" as expressing a real property. Such predicates are instead construed as logical devices, so that asserting that a sentence is true is just a quoted way of asserting the sentence itself. To say "'God exists' is true" is just to say "God exists". In this way, Rey and Devitt argue, insofar as dispositional replacements of "claims" and deflationary accounts of "true" are coherent, eliminativism is not self-refuting.[53]

Several philosophers, such as the Churchlands and Alex Rosenberg,[43][54] have developed a theory of structural resemblance or physical isomorphism that could explain how neural states can instantiate truth within the correspondence theory of truth. Neuroscientists use the word "representation" to identify the neural circuits' encoding of inputs from the peripheral nervous system in, for example, the visual cortex. But they use the word without according it any commitment to intentional content. In fact, there is an explicit commitment to describing neural representations in terms of structures of neural axonal discharges that are physically isomorphic to the inputs that cause them. Suppose that this way of understanding representation in the brain is preserved in the long-term course of research providing an understanding of how the brain processes and stores information. Then there will be considerable evidence that the brain is a neural network whose physical structure is isomorphic to the aspects of its environment it tracks and whose representations of these features consist in this physical isomorphism.[44]

Experiments in the 1980s with macaques isolated the structural resemblance between input vibrations the finger feels, measured in cycles per second, and representations of them in neural circuits, measured in action-potential spikes per second. This resemblance between two easily measured variables makes it unsurprising that they would be among the first such structural resemblances to be discovered. Macaques and humans have the same peripheral nervous system sensitivities and can make the same tactile discriminations. Subsequent research into neural processing has increasingly vindicated a structural resemblance or physical isomorphism approach to how information enters the brain and is stored and deployed.[43][55]

This isomorphism between brain and world is not a matter of some relationship between reality and a map of reality stored in the brain. Maps require interpretation if they are to be about what they map, and eliminativism and neuroscience share a commitment to explaining the appearance of aboutness by purely physical relationships between informational states in the brain and what they "represent". The brain-to-world relationship must be a matter of physical isomorphism, a sameness of form, outline, and structure, that does not require interpretation.[44]

This machinery can be applied to make "sense" of eliminativism in terms of the sentences eliminativists say or write.
When we say that eliminativism is true, that the brain does not store information in the form of unique sentences or statements expressing propositions or anything like them, there is a set of neural circuits that has no trouble coherently carrying this information. There is a possible translation manual that will guide us back from the vocalization or inscription eliminativists express to these circuits. These neural structures will differ from the neural circuits of those who explicitly reject eliminativism in ways that our translation manual will presumably shed some light on, giving us a neurological handle on disagreement and on the structural differences in neural circuitry, if any, between asserting p and asserting not-p when p expresses the eliminativist thesis.[43]

The physical isomorphism approach faces indeterminacy problems. Any given structure in the brain will be causally related to, and isomorphic in various respects to, many different structures in external reality. But we cannot discriminate the one it is intended to represent or that it is supposed to be true "of". These locutions are heavy with just the intentionality that eliminativism denies. Here is a problem of underdetermination or holism that eliminativism shares with intentionality-dependent theories of mind. Here, we can only invoke pragmatic criteria for discriminating successful structural representations, substituting true ones for unsuccessful ones, the ones we used to call false.[43]

Dennett notes that it is possible that such indeterminacy problems remain only hypothetical, not occurring in reality. He constructs a 4x4 "Quinian crossword puzzle" with words that must satisfy both the across and down definitions. Since there are multiple constraints on this puzzle, there is only one solution. We can thus think of the brain and its relation to the external world as a very large crossword puzzle that must satisfy exceedingly many constraints and to which there is only one possible solution. Therefore, in reality we may end up with only one physical isomorphism between the brain and the external world.[47]

When indeterminacy problems arose because the brain is physically isomorphic to multiple structures of the external world, it was urged that a pragmatic approach be used to resolve the problem. Another approach argues that the pragmatic theory of truth should be used from the start to decide whether certain neural circuits store true information about the external world. Pragmatism was founded by Charles Sanders Peirce and William James, and later refined by our understanding of the philosophy of science. According to pragmatism, to say that general relativity is true is to say that it makes more accurate predictions than other theories (Newtonian mechanics, Aristotle's physics, etc.). If computer circuits lack intentionality and do not store information using propositions, in what sense can computer A have true information about the world while computer B lacks it? If the computers were instantiated in autonomous cars, we could test whether A or B successfully completes a cross-country road trip. If A succeeds while B fails, the pragmatist can say that A holds true information about the world, because A's information allows it to make more accurate predictions (relative to B) about the world and to move around its environment more successfully.
Similarly, if brain A has information that enables the biological organism to make more accurate predictions about the world and helps the organism successfully move around in the environment, then A has true information about the world. Although not advocates of eliminativism, John Shook and Tibor Solymosi argue that pragmatism is a promising program for understanding advancements in neuroscience and integrating them into a philosophical picture of the world.[56]

The reason naturalism cannot be pragmatic in its epistemology starts with its metaphysics. Science tells us that we are components of the natural realm, indeed latecomers in the 13.8-billion-year-old universe. The universe was not organized around our needs and abilities, and what works for us is just a set of contingent facts that could have been otherwise. Once we have begun discovering things about the universe that work for us, science sets out to explain why they do. It is clear that one explanation for why things work for us must be ruled out as unilluminating, indeed question-begging: that they work for us because they work for us. If something works for us, enabling us to meet our needs and wants, there must be an explanation reflecting facts about us and the world that produce the needs and the means to satisfy them.[46]

The explanation of why scientific methods work for us must be a causal explanation. It must show what facts about reality make the methods we employ to acquire knowledge suitable for doing so. The explanation must show that our methods work, for example by having reliable technological application, not by coincidence, still less by miracle or accident. That means there must be some facts, events, or processes that operate in reality and brought about our pragmatic success. The demand that success be explained is a consequence of science's epistemology. If the truth of such explanations consists in the fact that they work for us (as pragmatism requires), then the explanation of why our scientific methods work is that they work. That is not a satisfying explanation.[46]

Some philosophers argue that folk psychology is quite successful.[14][57][58] Simulation theorists doubt that people's understanding of the mental can be explained in terms of a theory at all. Rather, they argue that people's understanding of others is based on internal simulations of how they would act and respond in similar situations.[12][13] Jerry Fodor believes in folk psychology's success as a theory, because it makes for an effective way of communicating in everyday life that can be accomplished with few words. Such effectiveness could not be achieved with complex neuroscientific terminology.[14]

Another problem for the eliminativist is the consideration that human beings undergo subjective experiences and hence that their conscious mental states have qualia. Since qualia are generally regarded as characteristics of mental states, their existence does not seem compatible with eliminativism.[59] Eliminativists such as Dennett and Rey respond by rejecting qualia.[60][61] Opponents of eliminativism see this response as problematic, since many claim that the existence of qualia is perfectly obvious. Many philosophers consider the "elimination" of qualia implausible, if not incomprehensible. They assert that, for instance, the existence of pain is simply beyond denial.[59]

Admitting that the existence of qualia seems obvious, Dennett nevertheless holds that "qualia" is a theoretical term from an outdated metaphysics stemming from Cartesian intuitions.
He argues that a precise analysis shows that the term is in the long run empty and full of contradictions. Eliminativism's claim about qualia is that there is no unbiased evidence for such experiences when they are regarded as something more than propositional attitudes.[25][62] In other words, it does not deny that pain exists, but it denies that pain exists independently of its effects on behavior. Influenced by Wittgenstein's Philosophical Investigations, Dennett and Rey have defended eliminativism about qualia even when other aspects of the mental are accepted.

Dennett offers philosophical thought experiments to argue that qualia do not exist.[63] First he lists five properties of qualia. The first thought experiment Dennett uses to demonstrate that qualia lack the listed necessary properties involves inverted qualia: consider two people who have different qualia but the same external physical behavior. But now the qualia supporter can present an "intrapersonal" variation. Suppose a neurosurgeon works on your brain and you discover that grass now looks red. Would this not be a case where we could confirm the reality of qualia, by noticing how the qualia have changed while every other aspect of our conscious experience remains the same? Not quite, Dennett replies via the next "intuition pump" (his term for an intuition-based thought experiment), "alternative neurosurgery". There are two different ways the neurosurgeon might have accomplished the inversion. First, they might have tinkered with something "early on", so that signals from the eye when you look at grass contain the information "red" rather than "green". This would result in genuine qualia inversion. But they might instead have tinkered with your memory. Here your qualia would remain the same, but your memory would be altered so that your current green experience would contradict your earlier memories of grass. You would still feel that the color of grass had changed, but in this case the qualia have not changed; your memories have. Would you be able to tell which of these scenarios is correct? No: your perceptual experience tells you that something has changed, but not whether your qualia have changed. Dennett concludes that, since (by hypothesis) the two surgical procedures can yield exactly the same introspective effects while only one inverts the qualia, nothing in the subject's experience can favor one hypothesis over the other. So unless he seeks outside help, the state of his own qualia must be as unknowable to him as the state of anyone else's. It is questionable, in short, that we have direct, infallible access to our conscious experience.[63]

Dennett's second thought experiment involves beer. Many people think of beer as an acquired taste: one's first sip is often unpleasant, but one gradually comes to enjoy it. But wait, Dennett asks: what is the "it" here? Compare the flavor of that first taste with the flavor now. Does the beer taste exactly the same both then and now, only now you like that taste whereas before you disliked it? Or has the way beer tastes gradually shifted, so that the taste you did not like at the beginning is not the same taste you now like? In fact most people simply cannot tell which is the correct analysis. But that is to give up again on the idea that we have special and infallible access to our qualia. Further, when forced to choose, many people feel that the second analysis is more plausible.
But then, if one's reactions to an experience are in any way constitutive of it, the experience is not so "intrinsic" after all, and another qualia property falls.[63]

Dennett's third thought experiment involves inverting goggles. Scientists have devised special eyeglasses that invert up and down for the wearer: when you put them on, everything looks upside down. When subjects first put them on, they can barely walk around without stumbling. But after subjects wear them for a while, something surprising occurs: they adapt and become able to walk around as easily as before. When you ask them whether they adapted by re-inverting their visual field or simply got used to walking around in an upside-down world, they cannot say. So, as in the beer-drinking case, either we simply do not have the special, infallible access to our qualia that would allow us to distinguish the two cases, or the way the world looks to us is actually a function of how we respond to the world, in which case qualia are not "intrinsic" properties of experience.[63]

Edward Feser objects to Dennett's position as follows. That you need to appeal to third-person neurological evidence to determine whether your memory of your qualia has been tampered with does not seem to show that your qualia themselves, past or present, can be known only by appealing to that evidence. You might still be directly aware of your qualia from the first-person, subjective point of view even if you do not know whether they are the same as the qualia you had yesterday, just as you might really be aware of the article in front of you even if you do not know whether it is the same as the article you saw yesterday. Questions about memory do not necessarily bear on the nature of your awareness of objects present here and now (even if they bear on what you can justifiably claim to know about such objects), whatever those objects happen to be. Dennett's assertion that scientific objectivity requires appealing exclusively to third-person evidence appears mistaken. What scientific objectivity requires is not denial of the first-person subjective point of view but rather a means of communicating inter-subjectively about what one can grasp only from that point of view. Given the relational structure that first-person phenomena like qualia appear to exhibit, a structure that Carnap devoted great effort to elucidating, such a means seems available: we can communicate what we know about qualia in terms of their structural relations to one another. Dennett fails to see that qualia can be essentially subjective and still relational or non-intrinsic, and thus communicable. This communicability ensures that claims about qualia are epistemologically objective; that is, they can in principle be grasped and evaluated by all competent observers, even though they are claims about phenomena that are arguably not metaphysically objective, i.e., about entities that exist only as grasped by a subject of experience. It is only the former sort of objectivity that science requires. It does not require the latter, and cannot plausibly require it if the first-person realm of qualia is what we know better than anything else.[64]

Illusionism is an active program within eliminative materialism to explain phenomenal consciousness as an illusion.
It is promoted by the philosophers Daniel Dennett, Keith Frankish, and Jay Garfield, and the neuroscientist Michael Graziano.[65][66] Graziano has advanced the attention schema theory of consciousness and postulates that consciousness is an illusion.[67][68] According to David Chalmers, proponents argue that once we can explain consciousness as an illusion without the need for a realist view of consciousness, we can construct a debunking argument against realist views of consciousness.[69] This line of argument draws from other debunking arguments like the evolutionary debunking argument in the field of metaethics. Such arguments note that morality is explained by evolution without positing moral realism, so there is a sufficient basis to debunk moral realism.[70]

Illusionists generally hold that once it is explained why people believe and say they are conscious, the hard problem of consciousness will dissolve. Chalmers agrees that a mechanism for these beliefs and reports can and should be identified using the standard methods of physical science, but disagrees that this would support illusionism, saying that the datum illusionism fails to account for is not reports of consciousness but rather first-person consciousness itself.[71] He separates consciousness from beliefs and reports about consciousness, but holds that a fully satisfactory theory of consciousness should explain how the two are "inextricably intertwined" so that their alignment does not require an inexplicable coincidence.[71] Illusionism has also been criticized by the philosopher Jesse Prinz.[72]
https://en.wikipedia.org/wiki/Eliminative_materialism
Quantum machine learning is the integration of quantum algorithms within machine learning programs.[1][2][3][4][5][6][7][8]

The most common use of the term refers to machine learning algorithms for the analysis of classical data executed on a quantum computer, i.e. quantum-enhanced machine learning.[9][10][11] While machine learning algorithms are used to compute immense quantities of data, quantum machine learning utilizes qubits and quantum operations or specialized quantum systems to improve the computational speed and data storage achieved by algorithms in a program.[12] This includes hybrid methods that involve both classical and quantum processing, where computationally difficult subroutines are outsourced to a quantum device.[13][14][15] These routines can be more complex in nature and executed faster on a quantum computer.[7] Furthermore, quantum algorithms can be used to analyze quantum states instead of classical data.[16][17]

Beyond quantum computing, the term "quantum machine learning" is also associated with classical machine learning methods applied to data generated from quantum experiments (i.e. machine learning of quantum systems), such as learning the phase transitions of a quantum system[18][19] or creating new quantum experiments.[20][21][22]

Quantum machine learning also extends to a branch of research that explores methodological and structural similarities between certain physical systems and learning systems, in particular neural networks. For example, some mathematical and numerical techniques from quantum physics are applicable to classical deep learning and vice versa.[23][24][25] Furthermore, researchers investigate more abstract notions of learning theory with respect to quantum information, sometimes referred to as "quantum learning theory".[26][27]

Quantum-enhanced machine learning refers to quantum algorithms that solve tasks in machine learning, thereby improving and often expediting classical machine learning techniques. Such algorithms typically require one to encode the given classical data set into a quantum computer to make it accessible for quantum information processing. Subsequently, quantum information processing routines are applied and the result of the quantum computation is read out by measuring the quantum system. For example, the outcome of the measurement of a qubit can reveal the result of a binary classification task. While many proposals for quantum machine learning algorithms are still purely theoretical and require a full-scale universal quantum computer to be tested, others have been implemented on small-scale or special-purpose quantum devices.

Associative (or content-addressable) memories are able to recognize stored content on the basis of a similarity measure, rather than fixed addresses as in random-access memories. As such, they must be able to retrieve both incomplete and corrupted patterns, the essential machine learning task of pattern recognition. Typical classical associative memories store p patterns in the O(n^2) interactions (synapses) of a real, symmetric energy matrix over a network of n artificial neurons. The encoding is such that the desired patterns are local minima of the energy functional and retrieval is done by minimizing the total energy, starting from an initial configuration. Unfortunately, classical associative memories are severely limited by the phenomenon of cross-talk.
When too many patterns are stored, spurious memories appear and quickly proliferate, so that the energy landscape becomes disordered and no retrieval is possible anymore. The number of storable patterns is typically limited by a linear function of the number of neurons, p ≤ O(n).

Quantum associative memories[2][3][4] (in their simplest realization) store patterns in a unitary matrix U acting on the Hilbert space of n qubits. Retrieval is realized by the unitary evolution of a fixed initial state to a quantum superposition of the desired patterns, with the probability distribution peaked on the pattern most similar to a given input. By its very quantum nature, the retrieval process is thus probabilistic. Because quantum associative memories are free from cross-talk, however, spurious memories are never generated. Correspondingly, they have a superior capacity compared to classical ones. The number of parameters in the unitary matrix U is O(pn). One can thus have efficient, spurious-memory-free quantum associative memories for any polynomial number of patterns.

A number of quantum algorithms for machine learning are based on the idea of amplitude encoding, that is, associating the amplitudes of a quantum state with the inputs and outputs of computations.[30][31][32] Since a state of n qubits is described by 2^n complex amplitudes, this information encoding can allow for an exponentially compact representation. Intuitively, this corresponds to associating a discrete probability distribution over binary random variables with a classical vector. The goal of algorithms based on amplitude encoding is to formulate quantum algorithms whose resources grow polynomially in the number of qubits n, which amounts to a logarithmic time complexity in the number of amplitudes and thereby in the dimension of the input.

Many quantum machine learning algorithms in this category are based on variations of the quantum algorithm for linear systems of equations[33] (colloquially called HHL, after the paper's authors) which, under specific conditions, performs a matrix inversion using an amount of physical resources growing only logarithmically in the dimensions of the matrix. One of these conditions is that a Hamiltonian which entry-wise corresponds to the matrix can be simulated efficiently, which is known to be possible if the matrix is sparse[34] or low rank.[35] For reference, any known classical algorithm for matrix inversion requires a number of operations that grows more than quadratically in the dimension of the matrix (e.g. O(n^2.373)), but such algorithms are not restricted to sparse matrices.

Quantum matrix inversion can be applied to machine learning methods in which the training reduces to solving a linear system of equations, for example least-squares linear regression,[31][32] the least-squares version of support vector machines,[30] and Gaussian processes.[36]

A crucial bottleneck of methods that simulate linear algebra computations with the amplitudes of quantum states is state preparation, which often requires one to initialise a quantum system in a state whose amplitudes reflect the features of the entire dataset.
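As a rough illustration of amplitude encoding, the following sketch (a minimal NumPy state-vector simulation; the helper name amplitude_encode is illustrative, not a library API, and no actual state-preparation circuit is constructed) pads a classical data vector to length 2^n and normalizes it so that it could serve as the amplitude vector of an n-qubit state:

```python
import numpy as np

def amplitude_encode(x):
    """Pad a classical vector to length 2**n and normalize it so it can
    serve as the amplitude vector of an n-qubit quantum state."""
    n = int(np.ceil(np.log2(len(x))))          # number of qubits needed
    padded = np.zeros(2**n, dtype=complex)
    padded[:len(x)] = x
    return padded / np.linalg.norm(padded), n

data = np.array([0.1, 0.4, 0.2, 0.7, 0.5, 0.3, 0.9, 0.6])
state, n_qubits = amplitude_encode(data)

print(n_qubits)                                  # 3 qubits hold 8 amplitudes
print(np.allclose(np.sum(np.abs(state)**2), 1))  # amplitudes form a valid state
```

The exponential compactness shows up in the counts: a vector of dimension 2^n is carried by only n qubits, which is why the quoted algorithms aim for resources polynomial in n.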
Although efficient methods for state preparation are known for specific cases,[37][38] this step easily hides the complexity of the task.[39][40]

Variational quantum algorithms (VQAs) are one of the most studied classes of quantum algorithms, as modern research demonstrates their applicability to the vast majority of known major applications of the quantum computer, and they appear to be a leading hope for gaining quantum supremacy.[41] VQAs are a mixed quantum-classical approach in which the quantum processor prepares quantum states and performs measurements, while the optimization is done by a classical computer. VQAs are considered well suited for NISQ devices, as they are more noise tolerant than other algorithms and may offer a quantum advantage with only a few hundred qubits. Researchers have studied circuit-based algorithms to solve optimization problems and to find the ground-state energy of complex systems, which were difficult to solve or required a long computation time on a classical computer.[42][43]

Variational quantum circuits (VQCs), also known as parametrized quantum circuits (PQCs), are based on variational quantum algorithms. VQCs consist of three parts: preparation of the initial state, the quantum circuit, and measurement. Researchers are extensively studying VQCs because they use the power of quantum computation to learn in a short time and also use fewer parameters than their classical counterparts. It has been shown theoretically and numerically that nonlinear functions, like those used in neural networks, can be approximated on quantum circuits. Owing to these advantages, VQCs have been substituted for neural networks in some reinforcement learning tasks and generative algorithms. The intrinsic susceptibility of quantum devices to decoherence, random gate errors, and measurement errors has a high potential to limit the training of variational circuits. Training the VQCs on classical devices before deploying them on quantum devices helps to mitigate the decoherence noise that accumulates over the many repetitions required for training.[44][45][46]

Pattern recognition is one of the important tasks of machine learning, and binary classification is one of the tools or algorithms for finding patterns. Binary classification is used in supervised learning and in unsupervised learning. In quantum machine learning, classical bits are converted to qubits and mapped to a Hilbert space; complex-valued data are used in a quantum binary classifier to exploit the advantages of Hilbert space.[47][48] By exploiting quantum-mechanical properties such as superposition, entanglement, and interference, the quantum binary classifier can produce accurate results in a short period of time.[49]

Another approach to improving classical machine learning with quantum information processing uses amplitude amplification methods based on Grover's search algorithm, which has been shown to solve unstructured search problems with a quadratic speedup compared to classical algorithms. These quantum routines can be employed for learning algorithms that translate into an unstructured search task, as can be done, for instance, in the case of the k-medians[50] and the k-nearest neighbors algorithms.[9] Other applications include quadratic speedups in the training of perceptrons[51] and in the computation of attention.[52]

An example of amplitude amplification being used in a machine learning algorithm is Grover's search algorithm minimization, in which a subroutine uses Grover's search algorithm to find an element less than some previously defined element.
This can be done with an oracle that determines whether or not a state, with its corresponding element, is less than the predefined one. Grover's algorithm can then find an element such that this condition is met. The minimization is initialized by some random element of the data set, and the subroutine is applied iteratively to find the minimum element in the data set. This minimization is notably used in quantum k-medians, and it has a speedup of at least O(sqrt(n/k)) compared to classical versions of k-medians, where n is the number of data points and k is the number of clusters.[50]

Amplitude amplification is often combined with quantum walks to achieve the same quadratic speedup. Quantum walks have been proposed to enhance Google's PageRank algorithm[53] as well as the performance of reinforcement learning agents in the projective simulation framework.[54]

Reinforcement learning is a branch of machine learning distinct from supervised and unsupervised learning, which also admits quantum enhancements.[55][54][56] In quantum-enhanced reinforcement learning, a quantum agent interacts with a classical or quantum environment and occasionally receives rewards for its actions, which allows the agent to adapt its behavior, in other words, to learn what to do in order to gain more rewards. In some situations, either because of the quantum processing capability of the agent,[54] or due to the possibility of probing the environment in superpositions,[29] a quantum speedup may be achieved. Implementations of these kinds of protocols have been proposed for systems of trapped ions[57] and superconducting circuits.[58] A quantum speedup of the agent's internal decision-making time[54] has been experimentally demonstrated in trapped ions,[59] while a quantum speedup of the learning time in a fully coherent ("quantum") interaction between agent and environment has been experimentally realized in a photonic setup.[60]

Quantum annealing is an optimization technique used to determine the local minima and maxima of a function over a given set of candidate functions. This is a method of discretizing a function with many local minima or maxima in order to determine the observables of the function. The process can be distinguished from simulated annealing by the quantum tunneling process, by which particles tunnel through kinetic or potential barriers from a high state to a low state. Quantum annealing starts from a superposition of all possible states of a system, weighted equally. Then the time-dependent Schrödinger equation guides the time evolution of the system, affecting the amplitude of each state as time increases. Eventually, the system can settle into the ground state of its instantaneous Hamiltonian.

As the depth of quantum circuits grows on NISQ devices, the noise level rises, posing a significant challenge to accurately computing costs and gradients for training models. The noise tolerance may be improved by using the quantum perceptron and the quantum algorithm on currently accessible quantum hardware.[citation needed]

A regular connection of similar components known as neurons forms the basis of even the most complex brain networks. Typically, a neuron has two operations: an inner product and an activation function. As opposed to the activation function, which is typically nonlinear, the inner product is a linear operation.
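A minimal classical sketch of such a neuron, assuming a plain NumPy implementation with a threshold activation (the function and parameter names are illustrative, not drawn from any quantum library), may help fix what a quantum neuron is expected to reproduce:

```python
import numpy as np

def neuron(x, w, b, activation="threshold"):
    """Toy neuron: a linear inner product followed by a nonlinear activation."""
    z = np.dot(w, x) + b                 # linear part: weighted inner product
    if activation == "threshold":
        return 1.0 if z >= 0 else 0.0    # step/threshold activation
    return np.tanh(z)                    # a smooth nonlinear alternative

x = np.array([0.2, -0.5, 0.9])           # illustrative input
w = np.array([0.7, 0.1, -0.3])           # illustrative weights
print(neuron(x, w, b=0.05))
```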
With quantum computing, such linear operations can be accomplished easily; additionally, because of its simplicity of implementation, the threshold function is preferred as the activation function in the majority of quantum neuron proposals.[citation needed]

Sampling from high-dimensional probability distributions is at the core of a wide spectrum of computational techniques with important applications across science, engineering, and society. Examples include deep learning, probabilistic programming, and other machine learning and artificial intelligence applications. A computationally hard problem, which is key for some relevant machine learning tasks, is the estimation of averages over probabilistic models defined in terms of a Boltzmann distribution. Sampling from generic probabilistic models is hard: algorithms relying heavily on sampling are expected to remain intractable no matter how large and powerful classical computing resources become. Even though quantum annealers, like those produced by D-Wave Systems, were designed for challenging combinatorial optimization problems, they have recently been recognized as potential candidates to speed up computations that rely on sampling by exploiting quantum effects.[61]

Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks.[62][63][64] The standard approach to training Boltzmann machines relies on the computation of certain averages that can be estimated by standard sampling techniques, such as Markov chain Monte Carlo algorithms. Another possibility is to rely on a physical process, like quantum annealing, that naturally generates samples from a Boltzmann distribution. The objective is to find the optimal control parameters that best represent the empirical distribution of a given dataset.

The D-Wave 2X system hosted at NASA Ames Research Center has been used for the learning of a special class of restricted Boltzmann machines that can serve as a building block for deep learning architectures.[63] Complementary work that appeared roughly simultaneously showed that quantum annealing can be used for supervised learning in classification tasks.[62] The same device was later used to train a fully connected Boltzmann machine to generate, reconstruct, and classify down-scaled, low-resolution handwritten digits, among other synthetic datasets.[65] In both cases, the models trained by quantum annealing had a similar or better performance in terms of quality. The ultimate question that drives this endeavour is whether there is a quantum speedup in sampling applications. Experience with the use of quantum annealers for combinatorial optimization suggests the answer is not straightforward. Reverse annealing has been used as well to solve a fully connected quantum restricted Boltzmann machine.[66]

Inspired by the success of Boltzmann machines based on the classical Boltzmann distribution, a new machine learning approach based on the quantum Boltzmann distribution of a transverse-field Ising Hamiltonian was recently proposed.[67] Due to the non-commutative nature of quantum mechanics, the training process of the quantum Boltzmann machine can become nontrivial. This problem was, to some extent, circumvented by introducing bounds on the quantum probabilities, allowing the authors to train the model efficiently by sampling.
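The following sketch illustrates, purely classically, the kind of sampling task at issue: estimating averages under the Boltzmann distribution of a small Ising model via Gibbs sampling. The couplings, fields, inverse temperature, and sweep counts are arbitrary illustrative values; quantum annealers are proposed as physical samplers for distributions of this general form.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                         # number of spins (toy size)
J = rng.normal(scale=0.5, size=(n, n))        # random symmetric couplings
J = (J + J.T) / 2
np.fill_diagonal(J, 0)
h = rng.normal(scale=0.1, size=n)             # local fields
beta = 1.0                                    # inverse temperature
s = rng.choice([-1.0, 1.0], size=n)           # random initial spin configuration

samples = []
for sweep in range(5000):
    for i in range(n):
        # conditional probability of spin i being +1 given the others,
        # for the energy E(s) = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i
        field = J[i] @ s + h[i]
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
        s[i] = 1.0 if rng.random() < p_up else -1.0
    if sweep > 1000:                          # discard burn-in sweeps
        samples.append(s.copy())

# estimated magnetizations <s_i> under the Boltzmann distribution
print(np.mean(samples, axis=0))
```

The averages estimated in the last line are exactly the quantities that gradient-based Boltzmann machine training needs, which is why replacing the Markov chain with a physical sampler is attractive.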
It is possible that a specific type of quantum Boltzmann machine has been trained on the D-Wave 2X by using a learning rule analogous to that of classical Boltzmann machines.[65][64][68]

Quantum annealing is not the only technology for sampling. In a prepare-and-measure scenario, a universal quantum computer prepares a thermal state, which is then sampled by measurements. This can reduce the time required to train a deep restricted Boltzmann machine, and provide a richer and more comprehensive framework for deep learning than classical computing.[69] The same quantum methods also permit efficient training of full Boltzmann machines and multi-layer, fully connected models, which do not have well-known classical counterparts. Relying on an efficient thermal state preparation protocol starting from an arbitrary state, quantum-enhanced Markov logic networks exploit the symmetries and the locality structure of the probabilistic graphical model generated by a first-order logic template.[70][19] This provides an exponential reduction in the computational complexity of probabilistic inference, and, while the protocol relies on a universal quantum computer, under mild assumptions it can be embedded on contemporary quantum annealing hardware.

Quantum analogues or generalizations of classical neural nets are often referred to as quantum neural networks (QNNs). The term is claimed by a wide range of approaches, including the implementation and extension of neural networks using photons, layered variational circuits, or quantum Ising-type models. Quantum neural networks are often defined as an expansion of Deutsch's model of a quantum computational network.[71] Within this model, nonlinear and irreversible gates, dissimilar to the Hamiltonian operator, are deployed to speculate about the given data set.[71] Such gates make certain phases unobservable and generate specific oscillations.[71] Quantum neural networks apply the principles of quantum information and quantum computation to classical neurocomputing.[72] Current research shows that QNNs can exponentially increase the amount of computing power and the degrees of freedom of a computer, which for a classical computer are limited by its size.[72] A quantum neural network has computational capabilities to decrease the number of steps, the qubits used, and the computation time.[71] The wave function is to quantum mechanics what the neuron is to neural networks.

To test quantum applications in a neural network, quantum dot molecules are deposited on a substrate of GaAs or similar material to record how they communicate with one another. Each quantum dot can be regarded as an island of electric activity, and when such dots are close enough (approximately 10-20 nm)[73] electrons can tunnel between the islands. An even distribution across the substrate in sets of two creates dipoles and ultimately two spin states, up or down. These states are commonly known as qubits, with corresponding states |0⟩ and |1⟩ in Dirac notation.[73]

The quantum convolutional neural network (QCNN) is a novel design for multi-dimensional vectors that uses quantum circuits as convolution filters.[74] It was inspired by the advantages of CNNs[75][76] and the power of QML. It is made using a combination of a variational quantum circuit (VQC)[77] and a deep neural network[78] (DNN), fully utilizing the power of extremely parallel processing on a superposition of a quantum state with a finite number of qubits.
The main strategy is to carry out an iterative optimization process on NISQ[79] devices without the need for quantum error correction, and without suffering from the negative impact of noise, which may even be absorbed into the circuit parameters.[80] For the QCNN to function as a CNN, the quantum circuit must handle spatial information effectively. The convolution filter is the most basic technique for making use of spatial information. One or more quantum convolutional filters make up a quantum convolutional neural network (QCNN), and each of these filters transforms input data using a quantum circuit that can be created in an organized or randomized way. The quantum convolutional filter consists of three parts: the encoder, the parameterized quantum circuit (PQC),[81] and the measurement. The quantum convolutional filter can be seen as an extension of the filter in the traditional CNN because it is designed with trainable parameters.

Quantum convolutional neural networks take advantage of hierarchical structures,[82] and for each subsequent layer the number of qubits from the preceding layer is decreased by a factor of two. For n input qubits, these structures have O(log(n)) layers, allowing for shallow circuit depth. Additionally, they are able to avoid the "barren plateau" problem, one of the most significant issues with PQC-based algorithms, ensuring trainability.[83] Although the QCNN model does not include a quantum operation that corresponds directly to classical pooling, the fundamental idea of the pooling layer is retained to ensure validity. In the QCNN architecture, the pooling layer is typically placed between succeeding convolutional layers. Its function is to shrink the spatial size of the representation while preserving crucial features, which allows it to reduce the number of parameters, streamline network computation, and manage over-fitting. Such a process can be accomplished by applying full tomography on the state to reduce it all the way down to one qubit, which is then processed further. The most frequently used unit type in the pooling layer is max pooling, although there are other types as well. Similar to conventional feed-forward neural networks, the last module is a fully connected layer with full connections to all activations in the preceding layer. Translational invariance, which requires identical blocks of parameterized quantum gates within a layer, is a distinctive feature of the QCNN architecture.[84]

Dissipative QNNs (DQNNs) are constructed from layers of qubits coupled by perceptrons, called building blocks, which have an arbitrary unitary design. Each node in a network layer of a DQNN is given a distinct collection of qubits, and each qubit is characterized by a unique quantum perceptron unitary.[85][86] As the name implies, the input state information is transported through the network in a feed-forward fashion, with a layer-to-layer transition map acting on the qubits of the two adjacent layers. The term "dissipative" refers to the fact that the output layer is formed by the ancillary qubits, while the input layers are dropped by tracing them out.[87] When performing a broad supervised learning task, DQNNs are used to learn a unitary matrix connecting the input and output quantum states. The training data for this task consists of the quantum states and the corresponding classical labels.

Inspired by the extremely successful classical generative adversarial network (GAN),[88] the dissipative quantum generative adversarial network (DQGAN) was introduced for unsupervised learning of unlabeled training data.
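The layer-to-layer map of a dissipative network of this kind can be written as a completely positive map: the qubits of one layer and fresh ancilla qubits for the next layer are acted on by a perceptron unitary, after which the earlier layer's qubits are traced out. A toy sketch with one qubit per layer and a randomly drawn (hence hypothetical) perceptron unitary:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(dim):
    """A hypothetical 'perceptron' unitary, drawn at random for illustration."""
    m = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    q, r = np.linalg.qr(m)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def dqnn_layer(rho_in, U):
    """One layer-to-layer transition: attach an ancilla qubit in |0>,
    apply the perceptron unitary, then trace out the input qubit."""
    ancilla = np.array([[1.0, 0.0], [0.0, 0.0]])        # |0><0|
    joint = np.kron(rho_in, ancilla)                    # input (x) ancilla
    joint = U @ joint @ U.conj().T
    joint = joint.reshape(2, 2, 2, 2)                   # indices (i, j, k, l)
    return np.einsum('ijil->jl', joint)                 # trace out the input qubit

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex) # |+><+| input state
U = random_unitary(4)
rho_out = dqnn_layer(rho, U)
print(np.trace(rho_out).real)                           # stays 1: still a valid state
```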
The generator and the discriminator are the two DQNNs that make up a single DQGAN.[86] The generator's goal is to create fake training states that the discriminator cannot differentiate from the genuine ones, while the discriminator's objective is to separate the real training states from the fake states created by the generator. Through alternating, adversarial training of the two networks, the generator learns the relevant features of the training set, which aids in producing sets that extend the training set. The DQGAN has a fully quantum architecture and is trained on quantum data.

Entangled Hidden Markov Models

An Entangled Hidden Markov Model (EHMM) is a quantum extension of the classical hidden Markov model (HMM), introduced by Abdessatar Souissi and El Gheteb Souedidi.[89] EHMMs establish a bridge between classical probability and quantum entanglement, providing a more profound understanding of quantum systems using observational data. Let \( d_H, d_O \) be two positive integers representing the dimensions of the hidden and observable states, respectively. Define:
- \( \mathcal{M}_{d_H} \) as the \( C^* \)-algebra of \( d_H \times d_H \) matrices;
- \( \mathcal{M}_{d_O} \) as the \( C^* \)-algebra of \( d_O \times d_O \) matrices;
- \( \mathbb{I}_{d_H} \) as the identity element of \( \mathcal{M}_{d_H} \);
- the Schur (Hadamard) product of two matrices \( A, B \in \mathcal{M}_{d_H} \) entrywise, \( (A \circ B)_{ij} = A_{ij} B_{ij} \).

Define the hidden and observable sample algebras
\[ \mathcal{A}_H = \bigotimes_{\mathbb{N}} \mathcal{M}_{d_H}, \quad \mathcal{A}_O = \bigotimes_{\mathbb{N}} \mathcal{M}_{d_O}, \]
with the full sample algebra
\[ \mathcal{A}_{H,O} = \bigotimes_{\mathbb{N}} (\mathcal{M}_{d_H} \otimes \mathcal{M}_{d_O}). \]

Hidden Quantum Markov Models (HQMMs) are a quantum-enhanced version of classical hidden Markov models, which are typically used to model sequential data in various fields like robotics and natural language processing.[90] Unlike other quantum-enhanced machine learning algorithms, HQMMs can be viewed as models inspired by quantum mechanics that can be run on classical computers as well.[91] Where classical HMMs use probability vectors to represent hidden 'belief' states, HQMMs use the quantum analogue: density matrices. Recent work has extended HQMMs through the introduction of Entangled Hidden Markov Models (EHMMs), which incorporate quantum entanglement into their structure.[92] The EHMM framework builds upon classical HQMMs by defining entangled transition expectations, which allow for enhanced modeling of quantum systems.[93] Additionally, EHMMs have been linked to matrix product states (MPS) and provide a new perspective on probabilistic graphical models in quantum settings. Since classical HMMs are a particular kind of Bayes net, HQMMs and EHMMs provide insights into quantum-analogous Bayesian inference, offering new pathways for modeling quantum probability and non-classical correlations in quantum information processing. Furthermore, empirical studies suggest that EHMMs improve the ability to model sequential data when compared to their classical counterparts, though further research is required to fully understand these benefits.

A linear map \( \mathcal{E}_H : \mathcal{M}_{d_H} \otimes \mathcal{M}_{d_H} \to \mathcal{M}_{d_H} \) is called a transition expectation if it is completely positive and identity-preserving. Similarly, a linear map \( \mathcal{E}_{H,O} : \mathcal{M}_{d_H} \otimes \mathcal{M}_{d_O} \to \mathcal{M}_{d_H} \) is called an emission operator if it is completely positive and identity-preserving.
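Completely positive maps of this kind also underlie the belief update of an HQMM: the 'belief' over hidden states is a density matrix, and observing a symbol updates it with a set of Kraus operators. A minimal sketch with hypothetical Kraus operators (the specific operators below are illustrative only, chosen so that they form a valid measurement):

```python
import numpy as np

def rot(t):
    """Real 2x2 rotation, used here as a stand-in unitary."""
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

# Hypothetical Kraus operators for two observation symbols (0 and 1).
# By construction they satisfy sum_y K_y^dagger K_y = I, as required.
a, b = 0.4, 1.1
K = [rot(0.3) @ np.diag([np.cos(a), np.cos(b)]),
     rot(-0.7) @ np.diag([np.sin(a), np.sin(b)])]

def hqmm_step(rho, y):
    """Observe symbol y: return the probability of y and the updated belief."""
    new = K[y] @ rho @ K[y].conj().T
    p = np.trace(new).real
    return p, new / p

rho = np.eye(2) / 2            # maximally uncertain belief (density matrix)
log_likelihood = 0.0
for y in [0, 1, 1, 0]:         # a toy observation sequence
    p, rho = hqmm_step(rho, y)
    log_likelihood += np.log(p)
print(log_likelihood, np.trace(rho).real)  # belief remains a unit-trace density matrix
```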
In the most general case of quantum machine learning, both the learning device and the system under study, as well as their interaction, are fully quantum. This section gives a few examples of results on this topic. One class of problem that can benefit from the fully quantum approach is that of 'learning' unknown quantum states, processes or measurements, in the sense that one can subsequently reproduce them on another quantum system. For example, one may wish to learn a measurement that discriminates between two coherent states, given not a classical description of the states to be discriminated, but instead a set of example quantum systems prepared in these states. The naive approach would be to first extract a classical description of the states and then implement an ideal discriminating measurement based on this information. This would only require classical learning. However, one can show that a fully quantum approach is strictly superior in this case.[94] (This also relates to work on quantum pattern matching.[95]) The problem of learning unitary transformations can be approached in a similar way.[96]

Going beyond the specific problem of learning states and transformations, the task of clustering also admits a fully quantum version, wherein both the oracle which returns the distance between data-points and the information processing device which runs the algorithm are quantum.[97] Finally, a general framework spanning supervised, unsupervised and reinforcement learning in the fully quantum setting was introduced in,[29] where it was also shown that the possibility of probing the environment in superpositions permits a quantum speedup in reinforcement learning. Such a speedup in the reinforcement-learning paradigm has been experimentally demonstrated in a photonic setup.[60]

The need for models that can be understood by humans emerges in quantum machine learning in analogy to classical machine learning and drives the research field of explainable quantum machine learning (or XQML,[98] in analogy to XAI/XML). These efforts are often also referred to as interpretable machine learning (IML, and by extension IQML).[99] XQML/IQML can be considered as an alternative research direction to the search for a quantum advantage.[100] For example, XQML has been used in the context of mobile malware detection and classification.[101] Quantum Shapley values have also been proposed to interpret gates within a circuit based on a game-theoretic approach.[98] For this purpose, gates instead of features act as players in a coalitional game with a value function that depends on measurements of the quantum circuit of interest. Additionally, a quantum version of the classical technique known as LIME (Local Interpretable Model-Agnostic Explanations)[102] has also been proposed, known as Q-LIME.[103]

The term "quantum machine learning" sometimes refers to classical machine learning performed on data from quantum systems. A basic example of this is quantum state tomography, where a quantum state is learned from measurement. Other applications include learning Hamiltonians[104] and automatically generating quantum experiments.[20]
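For the simplest case, a single qubit, state tomography reduces to estimating the three Pauli expectation values from measurement counts and assembling ρ = (I + ⟨X⟩X + ⟨Y⟩Y + ⟨Z⟩Z)/2. A sketch with simulated measurement data follows; it uses linear inversion only, whereas practical reconstructions usually add a positivity-enforcing step such as maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(7)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def simulate_expectation(rho, pauli, shots):
    """Simulate `shots` single-shot measurements of a Pauli observable."""
    evals, evecs = np.linalg.eigh(pauli)                  # outcomes -1 / +1
    probs = np.real([v.conj() @ rho @ v for v in evecs.T])
    probs = np.clip(probs, 0.0, None)                     # guard against rounding
    outcomes = rng.choice(evals, size=shots, p=probs / probs.sum())
    return outcomes.mean()

def tomography(rho_true, shots=2000):
    """Linear-inversion estimate of a single-qubit state from Pauli averages."""
    ex = {name: simulate_expectation(rho_true, M, shots)
          for name, M in (('X', X), ('Y', Y), ('Z', Z))}
    return 0.5 * (I2 + ex['X'] * X + ex['Y'] * Y + ex['Z'] * Z), ex

psi = np.array([np.cos(0.3), np.exp(1j * 0.5) * np.sin(0.3)])
rho_true = np.outer(psi, psi.conj())
rho_est, ex = tomography(rho_true)
print(np.round(rho_est, 3))
```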
Quantum learning theory pursues a mathematical analysis of the quantum generalizations of classical learning models and of the possible speed-ups or other improvements that they may provide. The framework is very similar to that of classical computational learning theory, but the learner in this case is a quantum information processing device, while the data may be either classical or quantum. Quantum learning theory should be contrasted with the quantum-enhanced machine learning discussed above, where the goal was to consider specific problems and to use quantum protocols to improve the time complexity of classical algorithms for these problems. Although quantum learning theory is still under development, partial results in this direction have been obtained.[105]

The starting point in learning theory is typically a concept class, a set of possible concepts. Usually a concept is a function on some domain, such as \( \{0,1\}^{n} \). For example, the concept class could be the set of disjunctive normal form (DNF) formulas on n bits or the set of Boolean circuits of some constant depth. The goal for the learner is to learn (exactly or approximately) an unknown target concept from this concept class. The learner may be actively interacting with the target concept, or passively receiving samples from it.

In active learning, a learner can make membership queries to the target concept c, asking for its value c(x) on inputs x chosen by the learner. The learner then has to reconstruct the exact target concept, with high probability. In the model of quantum exact learning, the learner can make membership queries in quantum superposition. If the complexity of the learner is measured by the number of membership queries it makes, then quantum exact learners can be polynomially more efficient than classical learners for some concept classes, but not more.[106] If complexity is measured by the amount of time the learner uses, then there are concept classes that can be learned efficiently by quantum learners but not by classical learners (under plausible complexity-theoretic assumptions).[106]

A natural model of passive learning is Valiant's probably approximately correct (PAC) learning. Here the learner receives random examples (x, c(x)), where x is distributed according to some unknown distribution D. The learner's goal is to output a hypothesis function h such that h(x) = c(x) with high probability when x is drawn according to D. The learner has to be able to produce such an 'approximately correct' h for every D and every target concept c in its concept class. One can consider replacing the random examples by potentially more powerful quantum examples \( \sum_{x} \sqrt{D(x)} \, |x, c(x)\rangle \). In the PAC model (and the related agnostic model), this does not significantly reduce the number of examples needed: for every concept class, classical and quantum sample complexity are the same up to constant factors.[107] However, for learning under some fixed distribution D, quantum examples can be very helpful, for example for learning DNF under the uniform distribution.[108] When considering time complexity, there exist concept classes that can be PAC-learned efficiently by quantum learners, even from classical examples, but not by classical learners (again, under plausible complexity-theoretic assumptions).[106]

This passive learning type is also the most common scheme in supervised learning: a learning algorithm typically takes the training examples as fixed, without the ability to query the labels of unlabelled examples. Outputting a hypothesis h is a step of induction.
Classically, an inductive model splits into a training and an application phase: the model parameters are estimated in the training phase, and the learned model is applied an arbitrary number of times in the application phase. In the asymptotic limit of the number of applications, this splitting of phases is also present with quantum resources.[109]

The earliest experiments were conducted using the adiabatic D-Wave quantum computer, for instance, to detect cars in digital images using regularized boosting with a nonconvex objective function in a demonstration in 2009.[110] Many experiments followed on the same architecture, and leading tech companies have shown interest in the potential of quantum machine learning for future technological implementations. In 2013, Google Research, NASA, and the Universities Space Research Association launched the Quantum Artificial Intelligence Lab, which explores the use of the adiabatic D-Wave quantum computer.[111][112] A more recent example trained a probabilistic generative model with arbitrary pairwise connectivity, showing that the model is capable of generating handwritten digits as well as reconstructing noisy images of bars and stripes and handwritten digits.[65]

Using a different annealing technology based on nuclear magnetic resonance (NMR), a quantum Hopfield network was implemented in 2009 that mapped the input data and memorized data to Hamiltonians, allowing the use of adiabatic quantum computation.[113] NMR technology also enables universal quantum computing,[citation needed] and it was used for the first experimental implementation of a quantum support vector machine to distinguish the handwritten digits '6' and '9' on a liquid-state quantum computer in 2015.[114] The training data involved pre-processing of the images, mapping them to normalized 2-dimensional vectors so that each image is represented as the state of a qubit. The two entries of the vector are the vertical and horizontal ratios of the pixel intensity of the image. Once the vectors are defined on the feature space, the quantum support vector machine was implemented to classify the unknown input vector. The readout avoids costly quantum tomography by reading out the final state in terms of the direction (up/down) of the NMR signal.

Photonic implementations are attracting more attention,[115] not least because they do not require extensive cooling. Simultaneous spoken-digit and speaker recognition and chaotic time-series prediction were demonstrated at data rates beyond 1 gigabyte per second in 2013.[116] Using non-linear photonics to implement an all-optical linear classifier, a perceptron model was capable of learning the classification boundary iteratively from training data through a feedback rule.[117] A core building block in many learning algorithms is to calculate the distance between two vectors: this was first experimentally demonstrated for up to eight dimensions using entangled qubits in a photonic quantum computer in 2015.[118]
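A sketch of the kind of pre-processing described above for the NMR support vector machine, assuming a small grayscale image: two intensity-ratio features are collected into a vector, normalized to unit length, and interpreted as the amplitudes of a single-qubit state cos(θ/2)|0⟩ + sin(θ/2)|1⟩. The exact definition of the "vertical and horizontal ratios" is an assumption here (top/bottom and left/right intensity sums), and details of the published experiment may differ.

```python
import numpy as np

def image_features(img):
    """Two hand-crafted features (an assumption about the exact definition):
    the vertical and horizontal ratios of summed pixel intensity."""
    top, bottom = img[: img.shape[0] // 2].sum(), img[img.shape[0] // 2:].sum()
    left, right = img[:, : img.shape[1] // 2].sum(), img[:, img.shape[1] // 2:].sum()
    v = np.array([top / (bottom + 1e-9), left / (right + 1e-9)])
    return v / np.linalg.norm(v)            # normalized 2-D feature vector

def encode_as_qubit(features):
    """Interpret the unit feature vector as amplitudes of a single-qubit state."""
    theta = 2 * np.arctan2(features[1], features[0])
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])   # a|0> + b|1>

img = np.random.default_rng(3).random((8, 8))   # stand-in for a digit image
state = encode_as_qubit(image_features(img))
print(state, np.linalg.norm(state))             # a valid, normalized qubit state
```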
Recently, based on a neuromimetic approach, a novel ingredient has been added to the field of quantum machine learning, in the form of a so-called quantum memristor, a quantized model of the standard classical memristor.[119] This device can be constructed by means of a tunable resistor, weak measurements on the system, and a classical feed-forward mechanism. An implementation of a quantum memristor in superconducting circuits has been proposed,[120] and an experiment with quantum dots has been performed.[121] A quantum memristor would implement nonlinear interactions in the quantum dynamics, which would aid the search for a fully functional quantum neural network.

Since 2016, IBM has offered an online cloud-based platform for quantum software developers, called the IBM Q Experience. This platform consists of several fully operational quantum processors accessible via the IBM Web API. In doing so, the company is encouraging software developers to pursue new algorithms through a development environment with quantum capabilities. New architectures are being explored on an experimental basis, up to 32 qubits, using both trapped-ion and superconducting quantum computing methods.

In October 2019, it was noted that the introduction of quantum random number generators (QRNGs) to machine learning models, including neural networks and convolutional neural networks for random initial weight distribution and random forests for splitting processes, had a profound effect on their performance when compared to the classical method of pseudorandom number generators (PRNGs).[122] However, in a more recent publication from 2021, these claims could not be reproduced for neural network weight initialization, and no significant advantage of using QRNGs over PRNGs was found.[123] The work also demonstrated that the generation of fair random numbers with a gate-based quantum computer is a non-trivial task on NISQ devices, and QRNGs are therefore typically much more difficult to use in practice than PRNGs.

A paper published in December 2018 reported on an experiment using a trapped-ion system demonstrating a quantum speedup of the deliberation time of reinforcement learning agents employing internal quantum hardware.[59] In March 2021, a team of researchers from Austria, the Netherlands, the US and Germany reported the experimental demonstration of a quantum speedup of the learning time of reinforcement learning agents interacting fully quantumly with the environment.[124][60] The relevant degrees of freedom of both agent and environment were realized on a compact and fully tunable integrated nanophotonic processor.

While machine learning itself is now not only a research field but also an economically significant and fast-growing industry, and quantum computing is a well-established field of both theoretical and experimental research, quantum machine learning remains a purely theoretical field of study. Attempts to experimentally demonstrate concepts of quantum machine learning remain insufficient.[citation needed] Further, another obstacle exists at the prediction stage, because the outputs of quantum learning models are inherently random.[125] This creates an often considerable overhead, as many executions of a quantum learning model have to be aggregated to obtain an actual prediction. Many of the leading scientists who publish extensively in the field of quantum machine learning warn about the extensive hype around the topic and are very restrained when asked about its practical uses in the foreseeable future. Sophia Chen[126] collected some of the statements made by well-known scientists in the field:
https://en.wikipedia.org/wiki/Quantum_machine_learning
CoDi is a cellular automaton (CA) model for spiking neural networks (SNNs).[1] CoDi is an acronym for Collect and Distribute, referring to the signals and spikes in a neural network. CoDi uses a von Neumann neighborhood modified for a three-dimensional space; each cell looks at the states of its six orthogonal neighbors and its own state. In a growth phase a neural network is grown in the CA-space based on an underlying chromosome. There are four types of cells: neuron body, axon, dendrite and blank. The growth phase is followed by a signaling or processing phase. Signals are distributed from the neuron bodies via their axon trees and collected from connecting dendrites.[1] These two basic interactions cover every case, and they can be expressed simply, using a small number of rules.

The neuron body cells collect neural signals from the surrounding dendritic cells and apply an internally defined function to the collected data. In the CoDi model the neurons sum the incoming signal values and fire after a threshold is reached. This behavior of the neuron bodies can be modified easily to suit a given problem. The output of the neuron bodies is passed on to their surrounding axon cells. Axonal cells distribute data originating from the neuron body. Dendritic cells collect data and eventually pass it to the neuron body. These two types of cell-to-cell interaction cover all kinds of cell encounters. Every cell has a gate, which is interpreted differently depending on the type of the cell. A neuron cell uses this gate to store its orientation, i.e. the direction in which the axon is pointing. In an axon cell, the gate points to the neighbor from which the neural signals are received. An axon cell accepts input only from this neighbor, but makes its own output available to all its neighbors. In this way axon cells distribute information. The source of information is always a neuron cell. Dendritic cells collect information by accepting information from any neighbor. They give their output (e.g. a Boolean OR operation on the binary inputs) only to the neighbor specified by their own gate. In this way, dendritic cells collect and sum neural signals until the final sum of collected neural signals reaches the neuron cell. Each axonal and dendritic cell belongs to exactly one neuron cell. This configuration of the CA-space is guaranteed by the preceding growth phase.

The CoDi model does not use explicit synapses, because dendrite cells that are in contact with an axonal trail (i.e. have an axon cell as neighbor) collect the neural signals directly from the axonal trail. This results from the behavior of axon cells, which distribute to every neighbor, and from the behavior of the dendrite cells, which collect from any neighbor. The strength of a neuron-neuron connection (a synapse) is represented by the number of their neighboring axon and dendrite cells. The exact structure of the network and the position of the axon-dendrite neighbor pairs determine the time delay and strength (weight) of a neuron-neuron connection. This principle implies that a single neuron-neuron connection can consist of several synapses with different time delays and independent weights.

The chromosome is initially distributed throughout the CA-space, so that every cell in the CA-space contains one instruction of the chromosome, i.e. one growth instruction, and the chromosome therefore belongs to the network as a whole.
The distributed chromosome technique of the CoDi model makes maximum use of the available CA-space and enables the growth of any type of network connectivity. The local connection of the grown circuitry to its chromosome allows local learning to be combined with the evolution of grown neural networks. Growth signals are passed to the direct neighbors of the neuron cell according to its chromosome information. The blank neighbors which receive a neural growth signal turn into either an axon cell or a dendrite cell. The growth signals include information specifying the cell type of the cell that is to be grown from the signal. To decide in which directions axonal or dendritic trails should grow, the grown cells consult their chromosome information, which encodes the growth instructions. These growth instructions can have an absolute or a relative directional encoding. An absolute encoding masks the six neighbors (i.e. directions) of a 3D cell with six bits. After a cell is grown, it accepts growth signals only from the direction from which it received its first signal. This reception-direction information is stored in the gate position of each cell's state.

The states of the CA cells have two parts, which are treated in different ways. The first part of the cell-state contains the cell's type and activity level, and the second part serves as an interface to the cell's neighborhood by containing the input signals from the neighbors. A characteristic of this CA is that only part of the state of a cell is passed to its neighbors, namely the signal, and then only to those neighbors specified in the fixed part of the cell state. This CA is called partitioned, because the state is partitioned into two parts, the first being fixed and the second variable for each cell. The advantage of this partitioning technique is that the amount of information that defines the new state of a CA cell is kept to a minimum, due to its avoidance of redundant information exchange. Since CAs are only locally connected, they are ideal for implementation on purely parallel hardware. When designing the CoDi CA-based neural network model, the objective was to implement it directly in hardware (FPGAs). Therefore, the CA was kept as simple as possible, by having a small number of bits to specify the state, keeping the CA rules few in number, and having few cellular neighbors. The CoDi model was implemented in the FPGA-based CAM-Brain Machine (CBM) by Korkin.[2]

CoDi was introduced by Gers et al. in 1998.[1] A specialized parallel machine based on FPGA hardware, the CAM-Brain Machine (CBM), was developed by Korkin et al. to run the CoDi model on a large scale.[2] De Garis conducted a series of experiments on the CBM evaluating the CoDi model. The original model, where learning is based on evolutionary algorithms, has been augmented with a local learning rule via feedback from dendritic spikes by Schwarzer.[3]
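The signal-collection and -distribution rules described above are simple enough to sketch directly. The toy update below uses a one-dimensional row of cells rather than the full 3-D von Neumann neighborhood, and a hypothetical threshold, purely to show the collect/distribute pattern; it is not the CBM implementation.

```python
DENDRITE, NEURON, AXON = 0, 1, 2
THRESHOLD = 2                      # hypothetical firing threshold

# A 1-D toy layout: two dendrite cells feed a neuron body, whose output
# travels outward along two axon cells.
types = [DENDRITE, DENDRITE, NEURON, AXON, AXON]
signal = [0, 0, 0, 0, 0]           # signal currently held by each cell
potential = 0                      # accumulated input of the neuron body

def step(external_input):
    """One CoDi-style update: dendrites collect toward the neuron,
    the neuron integrates and fires, axons distribute the spike outward."""
    global potential, signal
    new = [0] * len(signal)
    new[0] = external_input                    # input enters the dendritic tip
    new[1] = signal[0]                         # dendrites pass signals inward
    potential += signal[1]                     # the neuron body collects
    fired = potential >= THRESHOLD
    if fired:
        potential = 0                          # reset after the spike
    new[3] = 1 if fired else 0                 # axon cells distribute outward
    new[4] = signal[3]
    signal = new
    return fired

for t, x in enumerate([1, 1, 1, 0, 0, 0]):
    print(t, step(x), signal)
```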
https://en.wikipedia.org/wiki/CoDi
A cognitive computer is a computer that hardwires artificial intelligence and machine learning algorithms into an integrated circuit that closely reproduces the behavior of the human brain.[1] It generally adopts a neuromorphic engineering approach. Synonyms include neuromorphic chip and cognitive chip.[2][3]

In 2023, IBM's proof-of-concept NorthPole chip (optimized for 2-, 4- and 8-bit precision) achieved remarkable performance in image recognition.[4] In 2013, IBM developed Watson, a cognitive computer that uses neural networks and deep learning techniques.[5] The following year, it developed the TrueNorth microchip architecture,[6] which is designed to be closer in structure to the human brain than the von Neumann architecture used in conventional computers.[1] In 2017, Intel also announced its version of a cognitive chip, "Loihi", which it intended to make available to university and research labs in 2018. Intel (most notably with its Pohoiki Beach and Springs systems[7][8]), Qualcomm, and others are improving neuromorphic processors steadily.

TrueNorth was a neuromorphic CMOS integrated circuit produced by IBM in 2014.[9] It is a manycore processor network-on-a-chip design, with 4096 cores, each one having 256 programmable simulated neurons for a total of just over a million neurons. In turn, each neuron has 256 programmable "synapses" that convey the signals between them. Hence, the total number of programmable synapses is just over 268 million (2^28). Its basic transistor count is 5.4 billion. Because memory, computation, and communication are handled locally in each of the 4096 neurosynaptic cores, TrueNorth circumvents the von Neumann-architecture bottleneck and is very energy-efficient, with IBM claiming a power consumption of 70 milliwatts and a power density that is 1/10,000th that of conventional microprocessors.[10] The SyNAPSE chip operates at lower temperatures and power because it only draws the power necessary for computation.[11] Skyrmions have been proposed as models of the synapse on a chip.[12][13] The neurons are emulated using a Linear-Leak Integrate-and-Fire (LLIF) model, a simplification of the leaky integrate-and-fire model.[14] According to IBM, it does not have a clock,[15] operates on unary numbers, and computes by counting to a maximum of 19 bits.[6][16] The cores are event-driven, using both synchronous and asynchronous logic, and are interconnected through an asynchronous packet-switched mesh network on chip (NOC).[16] IBM developed a new ecosystem to program and use TrueNorth. It included a simulator, a new programming language, an integrated programming environment, and libraries.[15] This lack of backward compatibility with any previous technology (e.g., C++ compilers) poses serious vendor lock-in risks and other adverse consequences that may prevent it from commercialization in the future.[15][failed verification] In 2018, a cluster of TrueNorth chips network-linked to a master computer was used in stereo vision research that attempted to extract the depth of rapidly moving objects in a scene.[17]

In 2023, IBM released its NorthPole chip, which is a proof-of-concept for dramatically improving performance by intertwining compute with memory on-chip, thus eliminating the von Neumann bottleneck. It blends approaches from IBM's 2014 TrueNorth system with modern hardware designs to achieve speeds about 4,000 times faster than TrueNorth. It can run ResNet-50 or Yolo-v4 image recognition tasks about 22 times faster, with 25 times less energy and 5 times less space, when compared to GPUs which use the same 12-nm node process that it was fabricated with. It includes 224 MB of RAM and 256 processor cores and can perform 2,048 operations per core per cycle at 8-bit precision, and 8,192 operations at 2-bit precision. It runs at between 25 and 425 MHz.[4][18][19][20] It is an inferencing chip, but it cannot yet handle GPT-4 because of memory and accuracy limitations.[21]
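The Linear-Leak Integrate-and-Fire rule used for TrueNorth's neurons, as noted above, can be sketched in a few lines. The constants below (leak, threshold, weights) are illustrative, not the chip's actual parameters.

```python
def llif_step(v, spikes_in, weights, leak=1, threshold=64, reset=0):
    """One tick of a linear-leak integrate-and-fire neuron.
    `spikes_in` is a list of 0/1 input spikes, `weights` their synaptic weights."""
    v += sum(w * s for w, s in zip(weights, spikes_in))  # integrate inputs
    v -= leak                                            # linear (constant) leak
    if v >= threshold:                                   # fire and reset
        return reset, 1
    return max(v, 0), 0                                  # clamp below at zero

v = 0
weights = [20, 15, 30]                                   # illustrative weights
for t in range(10):
    spikes_in = [1, t % 2, 1]                            # a toy spike pattern
    v, out = llif_step(v, spikes_in, weights)
    print(t, v, out)
```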
Pohoiki Springs is a system that incorporates Intel's self-learning neuromorphic chip, named Loihi, introduced in 2017, perhaps named after the Hawaiian seamount Lōʻihi. Intel claims Loihi is about 1000 times more energy efficient than the general-purpose computing systems used to train neural networks. In theory, Loihi supports both machine learning training and inference on the same silicon independently of a cloud connection, and more efficiently than convolutional neural networks or deep learning neural networks. Intel points to a system for monitoring a person's heartbeat, taking readings after events such as exercise or eating, and using the chip to normalize the data and work out the 'normal' heartbeat. It can then spot abnormalities and deal with new events or conditions. The first iteration of the chip was made using Intel's 14 nm fabrication process and houses 128 clusters of 1,024 artificial neurons each, for a total of 131,072 simulated neurons.[22] This offers around 130 million synapses, far fewer than the human brain's 800 trillion synapses, and behind IBM's TrueNorth.[23] Loihi is available for research purposes among more than 40 academic research groups in a USB form factor.[24][25]

In October 2019, researchers from Rutgers University published a research paper demonstrating the energy efficiency of Intel's Loihi in solving simultaneous localization and mapping.[26] In March 2020, Intel and Cornell University published a research paper demonstrating the ability of Intel's Loihi to recognize different hazardous materials, which could eventually help to "diagnose diseases, detect weapons and explosives, find narcotics, and spot signs of smoke and carbon monoxide".[27]

Intel's Loihi 2, named Pohoiki Beach, was released in September 2021 with 64 cores.[28] It boasts faster speeds, higher-bandwidth inter-chip communications for enhanced scalability, increased capacity per chip, a more compact size due to process scaling, and improved programmability.[29]

Hala Point packages 1,152 Loihi 2 processors produced on the Intel 3 process node in a six-rack-unit chassis. The system supports up to 1.15 billion neurons and 128 billion synapses distributed over 140,544 neuromorphic processing cores, consuming 2,600 watts of power. It includes over 2,300 embedded x86 processors for ancillary computations. Intel claimed in 2024 that Hala Point, built from Loihi 2 chips, was the world's largest neuromorphic system, claimed to offer 10 times more neuron capacity and up to 12 times higher performance. Hala Point provides up to 20 quadrillion operations per second (20 petaops), with efficiency exceeding 15 trillion 8-bit operations per second per watt on conventional deep neural networks. Hala Point integrates processing, memory and communication channels in a massively parallelized fabric, providing 16 PB/s of memory bandwidth, 3.5 PB/s of inter-core communication bandwidth, and 5 TB/s of inter-chip bandwidth. The system can process its 1.15 billion neurons 20 times faster than a human brain. Its neuron capacity is roughly equivalent to that of an owl brain or the cortex of a capuchin monkey.
Loihi-based systems can perform inference and optimization using 100 times less energy, at speeds as much as 50 times faster, than CPU/GPU architectures. Intel claims that Hala Point can create LLMs, but this has not yet been demonstrated.[30] Much further research is needed.[21]

SpiNNaker (Spiking Neural Network Architecture) is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group at the Department of Computer Science, University of Manchester.[31]

Critics argue that a room-sized computer – as in the case of IBM's Watson – is not a viable alternative to a three-pound human brain.[32] Some also cite the difficulty for a single system to bring so many elements together, such as the disparate sources of information as well as computing resources.[33] In 2021, The New York Times published Steve Lohr's article "What Ever Happened to IBM's Watson?".[34] He wrote about some of IBM Watson's costly failures. One of them, a cancer-related project called the Oncology Expert Advisor,[35] was abandoned in 2016. During the collaboration, Watson could not use patient data, and it struggled to decipher doctors' notes and patient histories.
https://en.wikipedia.org/wiki/Cognitive_computer
Computational neuroscience(also known astheoretical neuroscienceormathematical neuroscience) is a branch ofneurosciencewhich employsmathematics,computer science, theoretical analysis and abstractions of the brain to understand the principles that govern thedevelopment,structure,physiologyandcognitive abilitiesof thenervous system.[1][2][3][4] Computational neuroscience employs computational simulations[5]to validate and solve mathematical models, and so can be seen as a sub-field of theoretical neuroscience; however, the two fields are often synonymous.[6]The term mathematical neuroscience is also used sometimes, to stress the quantitative nature of the field.[7] Computational neuroscience focuses on the description ofbiologicallyplausibleneurons(andneural systems) and their physiology and dynamics, and it is therefore not directly concerned with biologically unrealistic models used inconnectionism,control theory,cybernetics,quantitative psychology,machine learning,artificial neural networks,artificial intelligenceandcomputational learning theory;[8][9][10]although mutual inspiration exists and sometimes there is no strict limit between fields,[11][12][13]with model abstraction in computational neuroscience depending on research scope and the granularity at which biological entities are analyzed. Models in theoretical neuroscience are aimed at capturing the essential features of the biological system at multiple spatial-temporal scales, from membrane currents, and chemical coupling vianetwork oscillations, columnar and topographic architecture, nuclei, all the way up to psychological faculties like memory, learning and behavior. These computational models frame hypotheses that can be directly tested by biological or psychological experiments. The term 'computational neuroscience' was introduced byEric L. Schwartz, who organized a conference, held in 1985 inCarmel, California, at the request of the Systems Development Foundation to provide a summary of the current status of a field which until that point was referred to by a variety of names, such as neural modeling, brain theory and neural networks. The proceedings of this definitional meeting were published in 1990 as the bookComputational Neuroscience.[14]The first of the annual open international meetings focused on Computational Neuroscience was organized byJames M. Bowerand John Miller inSan Francisco, Californiain 1989.[15]The first graduate educational program in computational neuroscience was organized as the Computational and Neural Systems Ph.D. program at theCalifornia Institute of Technologyin 1985. The early historical roots of the field[16]can be traced to the work of people includingLouis Lapicque,Hodgkin&Huxley,HubelandWiesel, andDavid Marr. Lapicque introduced theintegrate and firemodel of the neuron in a seminal article published in 1907,[17]a model still popular forartificial neural networksstudies because of its simplicity (see a recent review[18]). About 40 years later,HodgkinandHuxleydeveloped thevoltage clampand created the first biophysical model of theaction potential.HubelandWieseldiscovered that neurons in theprimary visual cortex, the first cortical area to process information coming from theretina, have oriented receptive fields and are organized in columns.[19]David Marr's work focused on the interactions between neurons, suggesting computational approaches to the study of how functional groups of neurons within thehippocampusandneocortexinteract, store, process, and transmit information. 
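Lapicque's integrate-and-fire neuron, mentioned above, remains a convenient first model. Below is a minimal sketch of the common leaky variant, tau dV/dt = -(V - V_rest) + R*I(t), integrated with Euler steps; the parameter values are arbitrary illustrations rather than fits to any particular cell.

```python
import numpy as np

def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0, v_reset=-70.0,
                 v_thresh=-50.0, r_m=10.0):
    """Leaky integrate-and-fire: tau dV/dt = -(V - v_rest) + r_m * I(t).
    Returns the voltage trace and the spike times (in ms)."""
    v = v_rest
    trace, spikes = [], []
    for i, i_ext in enumerate(current):
        dv = (-(v - v_rest) + r_m * i_ext) / tau
        v += dt * dv
        if v >= v_thresh:              # threshold crossing: spike and reset
            spikes.append(i * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

# A step of input current produces a regular spike train.
current = np.concatenate([np.zeros(100), 2.0 * np.ones(400), np.zeros(100)])
trace, spikes = simulate_lif(current)
print(len(spikes), "spikes, first at", spikes[:3], "ms")
```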
Computational modeling of biophysically realistic neurons and dendrites began with the work of Wilfrid Rall, with the first multicompartmental model using cable theory. Research in computational neuroscience can be roughly categorized into several lines of inquiry. Most computational neuroscientists collaborate closely with experimentalists in analyzing novel data and synthesizing new models of biological phenomena.

Even a single neuron has complex biophysical characteristics and can perform computations (e.g.[20]). Hodgkin and Huxley's original model employed only two voltage-sensitive currents, the fast-acting sodium current and the delayed-rectifier potassium current. (Voltage-sensitive ion channels are glycoprotein molecules which extend through the lipid bilayer, allowing ions to traverse the axolemma under certain conditions.) Though successful in predicting the timing and qualitative features of the action potential, it nevertheless failed to predict a number of important features such as adaptation and shunting. Scientists now believe that there are a wide variety of voltage-sensitive currents, and the implications of the differing dynamics, modulations, and sensitivities of these currents are an important topic of computational neuroscience.[21]

The computational functions of complex dendrites are also under intense investigation. There is a large body of literature regarding how different currents interact with the geometric properties of neurons.[22] There are many software packages, such as GENESIS and NEURON, that allow rapid and systematic in silico modeling of realistic neurons. Blue Brain, a project founded by Henry Markram from the École Polytechnique Fédérale de Lausanne, aims to construct a biophysically detailed simulation of a cortical column on the Blue Gene supercomputer. Modeling the richness of biophysical properties on the single-neuron scale can supply mechanisms that serve as the building blocks for network dynamics.[23] However, detailed neuron descriptions are computationally expensive, and this computing cost can limit the pursuit of realistic network investigations, where many neurons need to be simulated. As a result, researchers who study large neural circuits typically represent each neuron and synapse with an artificially simple model, ignoring much of the biological detail. Hence there is a drive to produce simplified neuron models that can retain significant biological fidelity at a low computational overhead. Algorithms have been developed to produce faithful, faster-running, simplified surrogate neuron models from computationally expensive, detailed neuron models.[24]

Glial cells participate significantly in the regulation of neuronal activity at both the cellular and the network level. Modeling this interaction helps to clarify the potassium cycle,[25][26] which is important for maintaining homeostasis and preventing epileptic seizures. Modeling also reveals the role of glial protrusions that can, in some cases, penetrate the synaptic cleft to interfere with synaptic transmission and thus control synaptic communication.[27]

Computational neuroscience aims to address a wide array of questions, including: How do axons and dendrites form during development? How do axons know where to target and how to reach these targets? How do neurons migrate to the proper position in the central and peripheral systems? How do synapses form?
We know from molecular biology that distinct parts of the nervous system release distinct chemical cues, from growth factors to hormones, that modulate and influence the growth and development of functional connections between neurons. Theoretical investigations into the formation and patterning of synaptic connections and morphology are still nascent. One hypothesis that has recently garnered some attention is the minimal wiring hypothesis, which postulates that the formation of axons and dendrites effectively minimizes resource allocation while maintaining maximal information storage.[28]

Early models of sensory processing within a theoretical framework are credited to Horace Barlow. Somewhat similar to the minimal wiring hypothesis described in the preceding section, Barlow understood the processing of the early sensory systems to be a form of efficient coding, where the neurons encoded information in a way that minimized the number of spikes. Experimental and computational work has since supported this hypothesis in one form or another. For the example of visual processing, efficient coding is manifested in the forms of efficient spatial coding, color coding, temporal/motion coding, stereo coding, and combinations of them.[29] Further along the visual pathway, even the efficiently coded visual information is too much for the capacity of the information bottleneck, the visual attentional bottleneck.[30] A subsequent theory, the V1 Saliency Hypothesis (V1SH), has been developed concerning the exogenous attentional selection of a fraction of the visual input for further processing, guided by a bottom-up saliency map in the primary visual cortex.[31] Current research in sensory processing is divided between biophysical modeling of different subsystems and more theoretical modeling of perception. Current models of perception have suggested that the brain performs some form of Bayesian inference and integration of different sensory information in generating our perception of the physical world.[32][33]

Many models of the way the brain controls movement have been developed. This includes models of processing in the brain such as the cerebellum's role in error correction, skill learning in the motor cortex and the basal ganglia, or the control of the vestibulo-ocular reflex. This also includes many normative models, such as those of the Bayesian or optimal-control flavor, which are built on the idea that the brain efficiently solves its problems.

Earlier models of memory are primarily based on the postulates of Hebbian learning. Biologically relevant models such as the Hopfield net have been developed to address the properties of the associative (also known as "content-addressable") style of memory that occurs in biological systems. These attempts focus primarily on the formation of medium- and long-term memory, localized in the hippocampus. One of the major problems in neurophysiological memory is how it is maintained and changed through multiple time scales. Unstable synapses are easy to train but also prone to stochastic disruption. Stable synapses forget less easily, but they are also harder to consolidate. It is likely that computational tools will contribute greatly to our understanding of how synapses function and change in relation to external stimuli in the coming decades.
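A minimal sketch of the Hopfield-style associative memory mentioned above: patterns are stored with a Hebbian outer-product rule and recalled by repeatedly updating units toward a stored attractor. The network size and the stored patterns below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_hopfield(patterns):
    """Hebbian storage: W is the sum of outer products of the stored patterns."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)             # no self-connections
    return W

def recall(W, state, steps=10):
    """Asynchronous recall: repeatedly align each unit with its local field."""
    state = state.copy()
    for _ in range(steps):
        for i in rng.permutation(len(state)):
            state[i] = 1.0 if W[i] @ state >= 0 else -1.0
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]], dtype=float)
W = train_hopfield(patterns)
noisy = patterns[0].copy()
noisy[:2] *= -1                           # corrupt two units of the first pattern
print(recall(W, noisy))                   # converges back to the stored pattern
```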
Biological neurons are connected to each other in a complex, recurrent fashion. These connections are, unlike most artificial neural networks, sparse and usually specific. It is not known how information is transmitted through such sparsely connected networks, although specific areas of the brain, such as the visual cortex, are understood in some detail.[34] It is also unknown what the computational functions of these specific connectivity patterns are, if any. The interactions of neurons in a small network can often be reduced to simple models such as the Ising model. The statistical mechanics of such simple systems are well-characterized theoretically. Some recent evidence suggests that the dynamics of arbitrary neuronal networks can be reduced to pairwise interactions.[35] It is not known, however, whether such descriptive dynamics impart any important computational function. With the emergence of two-photon microscopy and calcium imaging, we now have powerful experimental methods with which to test the new theories regarding neuronal networks. In some cases the complex interactions between inhibitory and excitatory neurons can be simplified using mean-field theory, which gives rise to the population model of neural networks.[36] While many neurotheorists prefer such models with reduced complexity, others argue that uncovering structural-functional relations depends on including as much neuronal and network structure as possible. Models of this type are typically built in large simulation platforms like GENESIS or NEURON. There have been some attempts to provide unified methods that bridge and integrate these levels of complexity.[37]

Visual attention can be described as a set of mechanisms that limit some processing to a subset of incoming stimuli.[38] Attentional mechanisms shape what we see and what we can act upon. They allow for concurrent selection of some (preferably, relevant) information and inhibition of other information. In order to have a more concrete specification of the mechanism underlying visual attention and the binding of features, a number of computational models have been proposed aiming to explain psychophysical findings. In general, all models postulate the existence of a saliency or priority map for registering the potentially interesting areas of the retinal input, and a gating mechanism for reducing the amount of incoming visual information, so that the limited computational resources of the brain can handle it.[39] An example theory that is being extensively tested behaviorally and physiologically is the V1 Saliency Hypothesis that a bottom-up saliency map is created in the primary visual cortex to guide attention exogenously.[31] Computational neuroscience provides a mathematical framework for studying the mechanisms involved in brain function and allows complete simulation and prediction of neuropsychological syndromes.

Computational modeling of higher cognitive functions has only recently[when?] begun. Experimental data comes primarily from single-unit recording in primates. The frontal lobe and parietal lobe function as integrators of information from multiple sensory modalities. There are some tentative ideas regarding how simple mutually inhibitory functional circuits in these areas may carry out biologically relevant computation.[40] The brain seems to be able to discriminate and adapt particularly well in certain contexts. For instance, human beings seem to have an enormous capacity for memorizing and recognizing faces. One of the key goals of computational neuroscience is to dissect how biological systems carry out these complex computations efficiently and potentially replicate these processes in building intelligent machines.
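The pairwise (Ising-type) description of small networks mentioned above can be made concrete with a few lines of Glauber-dynamics sampling: binary units interact only through pairwise couplings J and biases h, and samples from the resulting distribution can then be compared with recorded activity. The coupling values below are random placeholders, not fitted to data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10                                        # number of binary "neurons"
J = rng.normal(scale=0.3, size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)                      # symmetric couplings, no self-terms
h = rng.normal(scale=0.1, size=n)             # biases

def glauber_sample(steps=2000):
    """Sample s in {-1,+1}^n from P(s) ~ exp(0.5*sum_ij J_ij s_i s_j + sum_i h_i s_i)."""
    s = rng.choice([-1.0, 1.0], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        field = J[i] @ s + h[i]               # local field on unit i
        p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
        s[i] = 1.0 if rng.random() < p_up else -1.0
    return s

samples = np.array([glauber_sample() for _ in range(100)])
print("mean activity:", samples.mean(0).round(2))
print("pairwise correlation of units 0 and 1:", (samples[:, 0] * samples[:, 1]).mean())
```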
The brain's large-scale organizational principles are illuminated by many fields, including biology, psychology, and clinical practice. Integrative neuroscience attempts to consolidate these observations through unified descriptive models and databases of behavioral measures and recordings. These are the bases for some quantitative modeling of large-scale brain activity.[41] The Computational Representational Understanding of Mind (CRUM) is another attempt at modeling human cognition through simulated processes like acquired rule-based systems and the manipulation of visual representations in decision making.

One of the ultimate goals of psychology/neuroscience is to be able to explain the everyday experience of conscious life. Francis Crick, Giulio Tononi and Christof Koch made some attempts to formulate consistent frameworks for future work in neural correlates of consciousness (NCC), though much of the work in this field remains speculative.[42]

Computational clinical neuroscience is a field that brings together experts in neuroscience, neurology, psychiatry, decision sciences and computational modeling to quantitatively define and investigate problems in neurological and psychiatric diseases, and to train scientists and clinicians who wish to apply these models to diagnosis and treatment.[43][44] Predictive computational neuroscience is a recent field that combines signal processing, neuroscience, clinical data and machine learning to predict brain states during coma[45] or anesthesia.[46] For example, it is possible to anticipate deep brain states using the EEG signal. These states can be used to anticipate the hypnotic drug concentration to administer to the patient. Computational psychiatry is an emerging field that brings together experts in machine learning, neuroscience, neurology, psychiatry, and psychology to provide an understanding of psychiatric disorders.[47][48][49]

A neuromorphic computer/chip is any device that uses physical artificial neurons (made from silicon) to do computations (see: neuromorphic computing, physical neural network). One of the advantages of using a physical model computer such as this is that it takes the computational load off the processor (in the sense that the structural and some of the functional elements do not have to be programmed, since they are in hardware). In recent times,[50] neuromorphic technology has been used to build supercomputers which are used in international neuroscience collaborations. Examples include the Human Brain Project SpiNNaker supercomputer and the BrainScaleS computer.[51]
https://en.wikipedia.org/wiki/Computational_neuroscience
Neuroethologyis the evolutionary and comparative approach to the study ofanimalbehavior and its underlying mechanistic control by the nervous system.[1][2][3]It is an interdisciplinary science that combines bothneuroscience(study of the nervous system) andethology(study of animal behavior in natural conditions). A central theme of neuroethology, which differentiates it from other branches of neuroscience, is its focus on behaviors that have been favored bynatural selection(e.g., finding mates, navigation, locomotion, and predator avoidance) rather than on behaviors that are specific to a particular disease state or laboratory experiment. Neuroethologists hope to uncover general principles of the nervous system from the study of animals with exaggerated or specialized behaviors. They endeavor to understand how the nervous system translates biologically relevant stimuli into natural behavior. For example, many bats are capable ofecholocationwhich is used for prey capture and navigation. The auditory system of bats is often cited as an example for how acoustic properties of sounds can be converted into a sensory map of behaviorally relevant features of sounds.[4] Neuroethology is an integrative approach to the study of animal behavior that draws upon several disciplines. Its approach stems from thetheorythat animals' nervous systems have evolved to address problems of sensing and acting in certain environmental niches and that their nervous systems are best understood in the context of the problems they have evolved to solve. In accordance withKrogh's principle, neuroethologists often study animals that are "specialists" in the behavior the researcher wishes to study e.g. honeybees and social behavior, bat echolocation, owl sound localization, etc. The scope of neuroethological inquiry might be summarized byJörg-Peter Ewert, a pioneer of neuroethology, when he considers the types of questions central to neuroethology in his 1980 introductory text to the field: Often central to addressing questions in neuroethology are comparative methodologies, drawing upon knowledge about related organisms' nervous systems, anatomies, life histories, behaviors and environmental niches. While it is not unusual for many types of neurobiology experiments to give rise to behavioral questions, many neuroethologists often begin their research programs by observing a species' behavior in its natural environment. Other approaches to understanding nervous systems include the systems identification approach, popular inengineering. The idea is to stimulate the system using a non-natural stimulus with certain properties. The system's response to the stimulus may be used to analyze the operation of the system. Such an approach is useful forlinearsystems, but the nervous system is notoriouslynonlinear, and neuroethologists argue that such an approach is limited. This argument is supported by experiments in the auditory system, which show that neural responses to complex sounds, like social calls, can not be predicted by the knowledge gained from studying the responses due to pure tones (one of the non-natural stimuli favored by auditory neurophysiologists). This is because of the non-linearity of the system. Modern neuroethology is largely influenced by the research techniques used. Neural approaches are necessarily very diverse, as is evident through the variety of questions asked, measuring techniques used, relationships explored, and model systems employed. 
Techniques utilized since 1984 include the use of intracellular dyes, which make maps of identified neurons possible, and the use of brain slices, which allow vertebrate brains to be observed more readily with intracellular electrodes (Hoyle 1984). Currently, other fields toward which neuroethology may be headed include computational neuroscience, molecular genetics, neuroendocrinology and epigenetics. The existing field of neural modeling may also expand into neuroethological terrain, due to its practical uses in robotics. In all this, neuroethologists must use the right level of simplicity to effectively guide research towards accomplishing the goals of neuroethology.

Critics of neuroethology might consider it a branch of neuroscience concerned with 'animal trivia'. Though neuroethological subjects tend not to be traditional neurobiological model systems (i.e. Drosophila, C. elegans, or Danio rerio), neuroethological approaches emphasizing comparative methods have uncovered many concepts central to neuroscience as a whole, such as lateral inhibition, coincidence detection, and sensory maps. The discipline of neuroethology has also discovered and explained the only vertebrate behavior for which the entire neural circuit has been described: the electric fish jamming avoidance response. Beyond its conceptual contributions, neuroethology makes indirect contributions to advancing human health. By understanding simpler nervous systems, many clinicians have used concepts uncovered by neuroethology and other branches of neuroscience to develop treatments for devastating human diseases.

Neuroethology owes part of its existence to the establishment of ethology as a unique discipline within zoology. Although animal behavior had been studied since the time of Aristotle (384–322 BC), it was not until the early twentieth century that ethology finally became distinguished from natural history (a strictly descriptive field) and ecology. The main catalysts behind this new distinction were the research and writings of Konrad Lorenz and Niko Tinbergen.

Konrad Lorenz was born in Austria in 1903, and is widely known for his contribution of the theory of fixed action patterns (FAPs): endogenous, instinctive behaviors involving a complex sequence of movements that are triggered ("released") by a certain kind of stimulus. This sequence always proceeds to completion, even if the original stimulus is removed. It is also species-specific and performed by nearly all members of the species. Lorenz constructed his famous "hydraulic model" to help illustrate this concept, as well as the concept of action-specific energy, or drives.

Niko Tinbergen was born in the Netherlands in 1907 and worked closely with Lorenz in the development of the FAP theory; their studies focused on the egg-retrieval response of nesting geese. Tinbergen performed extensive research on the releasing mechanisms of particular FAPs, and used the bill-pecking behavior of baby herring gulls as his model system. This led to the concept of the supernormal stimulus. Tinbergen is also well known for his four questions that he believed ethologists should be asking about any given animal behavior; among these is that of the mechanism of the behavior, on a physiological, neural and molecular level, and this question can be thought of in many regards as the keystone question in neuroethology. Tinbergen also emphasized the need for ethologists and neurophysiologists to work together in their studies, a unity that has become a reality in the field of neuroethology.
Unlikebehaviorism, which studies animals' reactions to non-naturalstimuliin artificial,laboratoryconditions, ethology sought to categorize and analyze the natural behaviors of animals in afield setting. Similarly, neuroethology asks questions about the neural bases ofnaturally occurringbehaviors, and seeks to mimic the natural context as much as possible in the laboratory. Although the development of ethology as a distinct discipline was crucial to the advent of neuroethology, equally important was the development of a more comprehensive understanding ofneuroscience. Contributors to this new understanding were the Spanish Neuroanatomist,Ramon y Cajal(born in 1852), and physiologistsCharles Sherrington,Edgar Adrian,Alan Hodgkin, andAndrew Huxley. Charles Sherrington, who was born in Great Britain in 1857, is famous for his work on the nerve synapse as the site of transmission of nerve impulses, and for his work on reflexes in the spinal cord. His research also led him to hypothesize that every muscular activation is coupled to an inhibition of the opposing muscle. He was awarded a Nobel Prize for his work in 1932 along with Lord Edgar Adrian who made the first physiological recordings of neural activity from single nerve fibers. Alan Hodgkin and Andrew Huxley (born 1914 and 1917, respectively, in Great Britain), are known for their collaborative effort to understand the production of action potentials in the giant axons of squid. The pair also proposed the existence of ion channels to facilitate action potential initiation, and were awarded the Nobel Prize in 1963 for their efforts. As a result of this pioneering research, many scientists then sought to connect the physiological aspects of the nervous and sensory systems to specific behaviors. These scientists –Karl von Frisch,Erich von Holst, andTheodore Bullock– are frequently referred to as the "fathers" of neuroethology.[5]Neuroethology did not really come into its own, though, until the 1970s and 1980s, when new, sophisticated experimental methods allowed researchers such asMasakazu Konishi,Walter Heiligenberg,Jörg-Peter Ewert, and others to study the neural circuits underlying verifiable behavior. The International Society for Neuroethology represents the present discipline of neuroethology, which was founded on the occasion of the NATO-Advanced Study Institute "Advances in Vertebrate Neuroethology" (August 13–24, 1981) organized by J.-P. Ewert, D.J. Ingle and R.R. Capranica, held at the University of Kassel in Hofgeismar, Germany (cf. report Trends in Neurosci. 5:141-143,1982). Its first president wasTheodore H. Bullock. The society has met every three years since its first meeting in Tokyo in 1986. Its membership draws from many research programs around the world; many of its members are students and faculty members from medical schools and neurobiology departments from various universities. Modern advances inneurophysiologytechniques have enabled more exacting approaches in an ever-increasing number of animal systems, as size limitations are being dramatically overcome. Survey of the most recent (2007) congress of the ISN meeting symposia topics gives some idea of the field's breadth: Neuroethology can help create advancements intechnologythrough an advanced understanding of animal behavior. Model systems were generalized from the study of simple and related animals to humans. 
For example, the neuronal cortical space map discovered in bats, a specialized champion of hearing and navigating, elucidated the concept of a computational space map. In addition, the discovery of the space map in the barn owl led to the first neuronal example of theJeffressmodel. This understanding is translatable to understanding spatial localization in humans, a mammalian relative of the bat. Today, knowledge learned from neuroethology are being applied in new technologies. For example, Randall Beer and his colleagues used algorithms learned from insect walking behavior to create robots designed to walk on uneven surfaces (Beer et al.). Neuroethology and technology contribute to one another bidirectionally. Neuroethologists seek to understand the neural basis of a behavior as it would occur in an animal's natural environment but the techniques for neurophysiological analysis are lab-based, and cannot be performed in the field setting. This dichotomy between field and lab studies poses a challenge for neuroethology. From the neurophysiology perspective, experiments must be designed for controls and objective rigor, which contrasts with the ethology perspective – that the experiment be applicable to the animal's natural condition, which is uncontrolled, or subject to the dynamics of the environment. An early example of this is when Walter Rudolf Hess developed focal brain stimulation technique to examine a cat's brain controls of vegetative functions in addition to other behaviors. Even though this was a breakthrough in technological abilities and technique, it was not used by many neuroethologists originally because it compromised a cat's natural state, and, therefore, in their minds, devalued the experiments' relevance to real situations. When intellectual obstacles like this were overcome, it led to a golden age of neuroethology, by focusing on simple and robust forms of behavior, and by applying modern neurobiological methods to explore the entire chain of sensory and neural mechanisms underlying these behaviors (Zupanc 2004). New technology allows neuroethologists to attach electrodes to even very sensitive parts of an animal such as its brain while it interacts with its environment.[6]The founders of neuroethology ushered this understanding and incorporated technology and creative experimental design. Since then even indirect technological advancements such as battery-powered and waterproofed instruments have allowed neuroethologists to mimic natural conditions in the lab while they study behaviors objectively. In addition, the electronics required for amplifying neural signals and for transmitting them over a certain distance have enabled neuroscientists to record from behaving animals[7]performing activities in naturalistic environments. Emerging technologies can complement neuroethology, augmenting the feasibility of this valuable perspective of natural neurophysiology. Another challenge, and perhaps part of the beauty of neuroethology, is experimental design. The value of neuroethological criteria speak to the reliability of these experiments, because these discoveries represent behavior in the environments in which they evolved. Neuroethologists foresee future advancements through using new technologies and techniques, such as computational neuroscience, neuroendocrinology, and molecular genetics that mimic natural environments.[8] In 1963, Akira Watanabe and Kimihisa Takeda discovered the behavior of thejamming avoidance responsein the knifefishEigenmanniasp. 
In collaboration with T. H. Bullock and colleagues, the characterization of this behavior was developed further. Finally, the work of W. Heiligenberg expanded it into a full neuroethology study by examining the series of neural connections that led to the behavior. Eigenmannia is a weakly electric fish that can generate electric discharges through electrocytes in its tail. Furthermore, it has the ability to electrolocate by analyzing the perturbations in its electric field. However, when the frequency of a neighboring fish's current is very close (less than a 20 Hz difference) to its own, the fish will avoid having the two signals interfere through a behavior known as the jamming avoidance response. If the neighbor's frequency is higher than the fish's discharge frequency, the fish will lower its frequency, and vice versa. The sign of the frequency difference is determined by analyzing the "beat" pattern of the incoming interference, which consists of the combination of the two fish's discharge patterns. Neuroethologists performed several experiments under Eigenmannia's natural conditions to study how it determined the sign of the frequency difference. They manipulated the fish's discharge by injecting it with curare, which prevented its natural electric organ from discharging. Then, an electrode was placed in its mouth and another was placed at the tip of its tail. Likewise, the neighboring fish's electric field was mimicked using another set of electrodes. This setup allowed neuroethologists to manipulate different discharge frequencies and observe the fish's behavior. From the results, they were able to conclude that the electric field frequency, rather than an internal frequency measure, was used as a reference. This experiment is significant in that it not only reveals a crucial neural mechanism underlying the behavior but also demonstrates the value neuroethologists place on studying animals in their natural habitats. The recognition of prey and predators in the toad was first studied in depth by Jörg-Peter Ewert (Ewert 1974; see also 2004). He began by observing the natural prey-catching behavior of the common toad (Bufo bufo) and concluded that the animal followed a sequence that consisted of stalking, binocular fixation, snapping, swallowing and mouth-wiping. Initially, however, the toad's actions were dependent on specific features of the sensory stimulus: whether it showed a worm or anti-worm configuration. It was observed that the worm configuration, which signaled prey, was characterized by movement along the object's long axis, whereas the anti-worm configuration, which signaled a predator, was characterized by movement along the short axis (Zupanc 2004). Ewert and coworkers adopted a variety of methods to study the predator-versus-prey behavioral response. They conducted recording experiments in which they inserted electrodes into the brain while the toad was presented with worm or anti-worm stimuli. This technique was repeated at different levels of the visual system and also allowed feature detectors to be identified. A central result was the discovery of prey-selective neurons in the optic tectum, whose axons could be traced towards the snapping-pattern-generating cells in the hypoglossal nucleus. The discharge patterns of prey-selective tectal neurons in response to prey objects – in freely moving toads – "predicted" prey-catching reactions such as snapping. Another approach, called the stimulation experiment, was carried out in freely moving toads.
Focal electrical stimuli were applied to different regions of the brain, and the toad's response was observed. When the thalamic-pretectal region was stimulated, the toad exhibited escape responses, but when the tectum was stimulated in an area close to prey-selective neurons, the toad engaged in prey catching behavior (Carew 2000). Furthermore, neuroanatomical experiments were carried out where the toad's thalamic-pretectal/tectal connection was lesioned and the resulting deficit noted: the prey-selective properties were abolished both in the responses of prey-selective neurons and in the prey catching behavior. These and other experiments suggest that prey selectivity results from pretecto-tectal influences. Ewert and coworkers showed in toads that there are stimulus-response mediating pathways that translate perception (of visual sign stimuli) into action (adequate behavioral responses). In addition there are modulatory loops that initiate, modify or specify this mediation (Ewert 2004). Regarding the latter, for example, the telencephalic caudal ventral striatum is involved in a loop gating the stimulus-response mediation in a manner of directed attention. The telencephalic ventral medial pallium („primordium hippocampi"), however, is involved in loops that either modify prey-selection due to associative learning or specify prey-selection due to non-associative learning, respectively. Computational neuroethology (CN[9]or CNE[10]) is concerned with the computer modelling of the neural mechanisms underlying animal behaviors. Together with the term "artificial ethology," the term "computational neuroethology" was first published in literature by Achacoso and Yamamoto in the Spring of 1990,[11]based on their pioneering work on theconnectomeof C. elegans in 1989,[12]with further publications in 1992.[13][14]Computational neuroethology was argued for in depth later in 1990 byRandall Beer[15]and byDave Cliff[16]both of whom acknowledged the strong influence ofMichael Arbib'sRana Computatrixcomputational model of neural mechanisms for visual guidance in frogs and toads.[17] CNE systems work within a closed-loop environment; that is, they perceive their (perhaps artificial) environment directly, rather than through human input, as is typical inAIsystems.[9][18]For example, Barlow et al. developed a time-dependent model for the retina of the horseshoe crabLimulus polyphemuson aConnection Machine(Model CM-2).[19]Instead of feeding the model retina with idealized input signals, they exposed the simulation to digitized video sequences made underwater, and compared its response with those of real animals.
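As a sketch of the closed-loop idea described above, the toy simulation below couples a one-dimensional "agent" to an environment so that each action changes the stimulus the agent senses next. The environment, the gradient-climbing rule, and all parameters are hypothetical and are not taken from any published computational neuroethology model.

```python
import random

class Environment:
    """A 1-D world with a food source; the agent senses a stimulus gradient."""
    def __init__(self, food_pos=10.0):
        self.food_pos = food_pos

    def sense(self, agent_pos):
        # Stimulus intensity falls off with distance to the food source.
        return 1.0 / (1.0 + abs(self.food_pos - agent_pos))

class Agent:
    """A trivial controller: keep moving while the stimulus improves,
    turn around when it gets worse (a klinokinesis-style rule)."""
    def __init__(self):
        self.pos, self.prev_stimulus, self.heading = 0.0, 0.0, 1.0

    def step(self, env):
        stimulus = env.sense(self.pos)          # perception...
        if stimulus < self.prev_stimulus:       # ...drives action...
            self.heading *= -1                  # turn around if it got worse
        self.prev_stimulus = stimulus
        self.pos += self.heading * random.uniform(0.5, 1.0)
        return stimulus                         # ...which changes the next percept

env, agent = Environment(), Agent()
for _ in range(50):
    agent.step(env)
print(f"final distance to food: {abs(env.food_pos - agent.pos):.2f}")
```

The essential point is that the stimulus is never scripted by the experimenter: it is regenerated on every step from the agent's own position, which is what "closing the loop" means here.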
https://en.wikipedia.org/wiki/Neuroethology
Neuroinformatics is the emergent field that combines informatics and neuroscience. Neuroinformatics is concerned with neuroscience data and with information processing by artificial neural networks.[1] There are three main directions in which neuroinformatics is applied.[2] Neuroinformatics encompasses philosophy (computational theory of mind), psychology (information processing theory), and computer science (natural computing, bio-inspired computing), among other disciplines. Neuroinformatics does not deal with matter or energy,[3] so it can be seen as a branch of neurobiology that studies various aspects of nervous systems. The term neuroinformatics seems to be used synonymously with cognitive informatics, described by the Journal of Biomedical Informatics as an interdisciplinary domain that focuses on human information processing, mechanisms and processes within the context of computing and computing applications.[4] According to the German National Library, neuroinformatics is synonymous with neurocomputing.[5] The Proceedings of the 10th IEEE International Conference on Cognitive Informatics and Cognitive Computing introduced the following description: "Cognitive Informatics (CI) as a transdisciplinary enquiry of computer science, information sciences, cognitive science, and intelligence science. CI investigates into the internal information processing mechanisms and processes of the brain and natural intelligence, as well as their engineering applications in cognitive computing."[6] According to the INCF, neuroinformatics is a research field devoted to the development of neuroscience data and knowledge bases together with computational models.[7] Models of neural computation are attempts to elucidate, in an abstract and mathematical fashion, the core principles that underlie information processing in biological nervous systems, or functional components thereof. Due to the complexity of nervous system behavior, the associated experimental error bounds are ill-defined, but the relative merit of the different models of a particular subsystem can be compared according to how closely they reproduce real-world behaviors or respond to specific input signals. In the closely related field of computational neuroethology, the practice is to include the environment in the model in such a way that the loop is closed. In cases where competing models are unavailable, or where only gross responses have been measured or quantified, a clearly formulated model can guide the scientist in designing experiments to probe biochemical mechanisms or network connectivity. Artificial neural networks (ANNs), usually simply called neural networks (NNs), are computing systems vaguely inspired by the biological neural networks that constitute animal brains.[8] An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons. An artificial neuron that receives a signal then processes it and can signal neurons connected to it. The "signal" at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. The connections are called edges. Neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold. Typically, neurons are aggregated into layers.
Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. Brain emulation is the concept of creating a functioning computational model and emulation of a brain or part of a brain. In December 2006,[9] the Blue Brain project completed a simulation of a rat's neocortical column. The neocortical column is considered the smallest functional unit of the neocortex. The neocortex is the part of the brain thought to be responsible for higher-order functions like conscious thought, and contains 10,000 neurons in the rat brain (and 10⁸ synapses). In November 2007,[10] the project reported the end of its first phase, delivering a data-driven process for creating, validating, and researching the neocortical column. An artificial neural network described as being "as big and as complex as half of a mouse brain"[11] was run on an IBM Blue Gene supercomputer by the University of Nevada's research team in 2007. Each second of simulated time took ten seconds of computer time. The researchers claimed to observe "biologically consistent" nerve impulses that flowed through the virtual cortex. However, the simulation lacked the structures seen in real mouse brains, and the researchers intend to improve the accuracy of the neuron and synapse models.[12] Mind uploading is the process of scanning a physical structure of the brain accurately enough to create an emulation of the mental state (including long-term memory and "self") and copying it to a computer in a digital form. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind.[13][14][15] Substantial mainstream research in related areas is being conducted in animal brain mapping and simulation, development of faster supercomputers, virtual reality, brain–computer interfaces, connectomics, and information extraction from dynamically functioning brains.[16] According to supporters, many of the tools and ideas needed to achieve mind uploading already exist or are currently under active development; however, they admit that others are, as yet, very speculative, but say they are still in the realm of engineering possibility. Research on brain–computer interfaces began in the 1970s at the University of California, Los Angeles under a grant from the National Science Foundation, followed by a contract from DARPA.[17][18] The papers published after this research also mark the first appearance of the expression brain–computer interface in scientific literature. Recently, studies in human–computer interaction through the application of machine learning to statistical temporal features extracted from frontal lobe EEG brainwave data have shown high levels of success in classifying mental states (relaxed, neutral, concentrating), mental emotional states (negative, neutral, positive)[19] and thalamocortical dysrhythmia.[20] Neuroinformatics is the scientific study of information flow and processing in the nervous system, and researchers use brain imaging techniques, such as magnetic resonance imaging, to reveal the organization of brain networks involved in human thought.
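The description of artificial neurons given above (a non-linear function applied to a weighted sum of inputs, with neurons aggregated into layers) corresponds to the forward pass sketched below. The layer sizes, weights, and tanh non-linearity are arbitrary placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, biases):
    # Each output neuron computes a non-linear function of the weighted
    # sum of its inputs, as described in the text.
    return np.tanh(weights @ x + biases)

# A toy network: 4 inputs -> 5 hidden units -> 2 outputs.
w1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
w2, b2 = rng.normal(size=(2, 5)), np.zeros(2)

x = np.array([0.2, -0.1, 0.7, 0.05])     # "signals" arriving at the input layer
hidden = layer(x, w1, b1)                # first transformation
output = layer(hidden, w2, b2)           # second transformation
print(output)
```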
Neuroinformatics (in context of library science) is also devoted to the development of neurobiology knowledge with computational models and analytical tools for sharing, integration, and analysis of experimental data and advancement of theories about the nervous system function. In the INCF context, this field refers to scientific information about primary experimental data, ontology, metadata, analytical tools, and computational models of the nervous system. The primary data includes experiments and experimental conditions concerning the genomic, molecular, structural, cellular, networks, systems and behavioural level, in all species and preparations in both the normal and disordered states.[28] In the recent decade, as vast amounts of diverse data about the brain were gathered by many research groups, the problem was raised of how to integrate the data from thousands of publications in order to enable efficient tools for further research. The biological and neuroscience data are highly interconnected and complex, and by itself, integration represents a great challenge for scientists.
The United States National Institute of Mental Health (NIMH), the National Institute of Drug Abuse (NIDA) and the National Science Foundation (NSF) provided the National Academy of Sciences Institute of Medicine with funds to undertake a careful analysis and study of the need to introduce computational techniques to brain research. The positive recommendations were reported in 1991.[29] This positive report enabled NIMH, now directed by Allan Leshner, to create the "Human Brain Project" (HBP), with the first grants awarded in 1993. Next, Koslow pursued the globalization of the HBP and neuroinformatics through the European Union and the Office for Economic Co-operation and Development (OECD), Paris, France. Two particular opportunities occurred in 1996. These two related initiatives were combined to form the United States proposal on "Biological Informatics". This initiative was supported by the White House Office of Science and Technology Policy and presented at the OECD MSF by Edwards and Koslow. An MSF committee was established on Biological Informatics with two subcommittees: 1. Biodiversity (Chair, James Edwards, NSF), and 2. Neuroinformatics (Chair, Stephen Koslow, NIH). At the end of two years the Neuroinformatics subcommittee of the Biological Working Group issued a report supporting a global neuroinformatics effort. Koslow, working with the NIH and the White House Office of Science and Technology Policy, established a new neuroinformatics working group to develop specific recommendations to support the more general recommendations of the first report. The Global Science Forum (GSF; renamed from MSF) of the OECD supported this recommendation. This scheme should eliminate national and disciplinary barriers and provide a most efficient approach to global collaborative research and data sharing. In this new scheme, each country is expected to fund the participating researchers from their country. The GSF neuroinformatics committee then developed a business plan for the operation, support and establishment of the INCF, which was supported and approved by the GSF Science Ministers at its 2004 meeting. In 2006 the INCF was created and its central office established and set into operation at the Karolinska Institute, Stockholm, Sweden, under the leadership of Sten Grillner. Sixteen countries (Australia, Canada, China, the Czech Republic, Denmark, Finland, France, Germany, India, Italy, Japan, the Netherlands, Norway, Sweden, Switzerland, the United Kingdom and the United States) and the EU Commission established the legal basis for the INCF and Programme in International Neuroinformatics (PIN). To date, eighteen countries (Australia, Belgium, Czech Republic, Finland, France, Germany, India, Italy, Japan, Malaysia, Netherlands, Norway, Poland, Republic of Korea, Sweden, Switzerland, the United Kingdom and the United States) are members of the INCF. Membership is pending for several other countries. The goal of the INCF is to coordinate and promote international activities in neuroinformatics. The INCF contributes to the development and maintenance of database and computational infrastructure and support mechanisms for neuroscience applications. The system is expected to provide access to all freely accessible human brain data and resources to the international research community. The more general task of the INCF is to provide conditions for developing convenient and flexible applications for neuroscience laboratories in order to improve our knowledge about the human brain and its disorders.
https://en.wikipedia.org/wiki/Neuroinformatics
Motion perceptionis the process of inferring the speed and direction of elements in a scene based onvisual,vestibularandproprioceptiveinputs. Although this process appears straightforward to most observers, it has proven to be a difficult problem from a computational perspective, and difficult to explain in terms ofneuralprocessing. Motion perception is studied by many disciplines, includingpsychology(i.e.visual perception),neurology,neurophysiology,engineering, andcomputer science. The inability to perceive motion is calledakinetopsiaand it may be caused by a lesion tocorticalareaV5in theextrastriate cortex.Neuropsychologicalstudies of a patient who could not see motion, seeing the world in a series of static "frames" instead, suggested that visual area V5 in humans[1]is homologous to motion processing area V5/MT in primates.[2][3][4] When two or more stimuli are alternatively switched on and off, they can produce two distinct motion perceptions. The first, known asbeta movement, is demonstrated in the yellow-ball figure and forms the basis for electronicnews tickerdisplays. However, at faster alternation rates, and when the distance between the stimuli is optimal, an illusory "object"—matching the background color—appears to move between the stimuli, alternately occluding them. This phenomenon is called thephi phenomenonand is often described as an example of "pure" motion detection, uncontaminated by form cues, unlike beta movement.[5]Nevertheless, this description is somewhat paradoxical since creating such motion without figural percepts is impossible. The phi phenomenon has been referred to as "first-order" motion perception. Werner E. Reichardt and Bernard Hassenstein have modelled it in terms of relatively simple "motion sensors" in the visual system, that have evolved to detect a change in luminance at one point on the retina and correlate it with a change in luminance at a neighbouring point on the retina after a short delay. Sensors that are proposed to work this way have been referred to as eitherHassenstein-Reichardt detectorsafter the scientistsBernhard HassensteinandWerner Reichardt, who first modelled them,[6]motion-energy sensors,[7]or Elaborated Reichardt Detectors.[8]These sensors are described as detecting motion by spatio-temporalcorrelationand are considered by some to be plausible models for how the visual system may detect motion. (Although, again, the notion of a "pure motion" detector suffers from the problem that there is no "pure motion" stimulus, i.e. a stimulus lacking perceived figure/ground properties). There is still considerable debate regarding the accuracy of the model and exact nature of this proposed process. It is not clear how the model distinguishes between movements of the eyes and movements of objects in the visual field, both of which produce changes in luminance on points on the retina. Second-ordermotion is when the moving contour is defined bycontrast,texture, flicker or some other quality that does not result in an increase in luminance or motion energy in theFourier spectrumof the stimulus.[9][10]There is much evidence to suggest that early processing of first- and second-order motion is carried out by separate pathways.[11]Second-order mechanisms have poorer temporal resolution and arelow-passin terms of the range ofspatial frequenciesto which they respond. (The notion that neural responses are attuned to frequency components of stimulation suffers from the lack of a functional rationale and has been generally criticized by G. 
Westheimer (2001) in an article called "The Fourier Theory of Vision.") Second-order motion produces a weaker motion aftereffect unless tested with dynamically flickering stimuli.[12] The motion direction of a contour is ambiguous, because the motion component parallel to the line cannot be inferred from the visual input. This means that a variety of contours of different orientations moving at different speeds can cause identical responses in a motion-sensitive neuron in the visual system. Some have speculated that, having extracted the hypothesized motion signals (first- or second-order) from the retinal image, the visual system must integrate those individual local motion signals at various parts of the visual field into a 2-dimensional or global representation of moving objects and surfaces. (It is not clear how this 2D representation is then converted into the perceived 3D percept.) Further processing is required to detect coherent motion or "global motion" present in a scene.[13] The ability of a subject to detect coherent motion is commonly tested using motion coherence discrimination tasks. For these tasks, dynamic random-dot patterns (also called random dot kinematograms) are used that consist of 'signal' dots moving in one direction and 'noise' dots moving in random directions. The sensitivity to motion coherence is assessed by measuring the ratio of 'signal' to 'noise' dots required to determine the coherent motion direction. The required ratio is called the motion coherence threshold. As in other aspects of vision, the observer's visual input is generally insufficient to determine the true nature of stimulus sources, in this case their velocity in the real world. In monocular vision, for example, the visual input will be a 2D projection of a 3D scene. The motion cues present in the 2D projection will by default be insufficient to reconstruct the motion present in the 3D scene; put differently, many 3D scenes will be compatible with a single 2D projection. The problem of motion estimation generalizes to binocular vision when we consider occlusion or motion perception at relatively large distances, where binocular disparity is a poor cue to depth. This fundamental difficulty is referred to as the inverse problem.[14] Nonetheless, some humans do perceive motion in depth. There are indications that the brain uses various cues, in particular temporal changes in disparity as well as monocular velocity ratios, for producing a sensation of motion in depth.[15] Two different binocular cues for the perception of motion in depth are hypothesized: inter-ocular velocity difference (IOVD) and changing disparity (CD) over time. Motion in depth based on inter-ocular velocity differences can be tested using dedicated binocularly uncorrelated random-dot kinematograms.[16] Study results indicate that the processing of these two binocular cues – IOVD and CD – may use fundamentally different low-level stimulus features, which may be processed jointly at later stages.[17][18] Additionally, as a monocular cue, the changing size of retinal images also contributes to the detection of motion in depth. Detection and discrimination of motion can be improved by training, with long-lasting effects. Participants trained to detect the movements of dots on a screen in only one direction become particularly good at detecting small movements in directions close to that in which they have been trained. This improvement was still present 10 weeks later. However, perceptual learning is highly specific.
For example, the participants show no improvement when tested around other motion directions, or for other sorts of stimuli.[19] A cognitive map is a type of mental representation that allows an individual to acquire, code, store, recall, and decode information about the relative locations and attributes of phenomena in their spatial environment.[20][21] Place cells work with other types of neurons in the hippocampus and surrounding regions of the brain to perform this kind of spatial processing,[22] but the ways in which they function within the hippocampus are still being researched.[23] Many species of mammals can keep track of spatial location even in the absence of visual, auditory, olfactory, or tactile cues, by integrating their movements; the ability to do this is referred to in the literature as path integration. A number of theoretical models have explored mechanisms by which path integration could be performed by neural networks. In most models, such as those of Samsonovich and McNaughton (1997)[24] or Burak and Fiete (2009),[25] the principal ingredients are (1) an internal representation of position, (2) internal representations of the speed and direction of movement, and (3) a mechanism for shifting the encoded position by the right amount when the animal moves. Because cells in the medial entorhinal cortex (MEC) encode information about position (grid cells[26]) and movement (head direction cells and conjunctive position-by-direction cells[27]), this area is currently viewed as the most promising candidate for the place in the brain where path integration occurs. Motion sensing using vision is crucial for detecting a potential mate, prey, or predator, and thus it is found in both vertebrate and invertebrate vision across a wide variety of species, although it is not found in all species. In vertebrates, the process takes place in the retina and more specifically in retinal ganglion cells, which are neurons that receive visual input from bipolar cells and amacrine cells and send output to higher regions of the brain including the thalamus, hypothalamus, and mesencephalon. The study of directionally selective units began with the discovery of such cells in the cerebral cortex of cats by David Hubel and Torsten Wiesel in 1959. Following the initial report, an attempt to understand the mechanism of directionally selective cells was pursued by Horace B. Barlow and William R. Levick in 1965.[28] Their in-depth experiments in the rabbit retina expanded the anatomical and physiological understanding of the vertebrate visual system and ignited interest in the field. Numerous studies that followed have largely unveiled the mechanism of motion sensing in vision. Alexander Borst and Thomas Euler's 2011 review paper, "Seeing Things in Motion: Models, Circuits and Mechanisms",[29] discusses important findings from the early discoveries through recent work on the subject and summarizes the current state of knowledge. Direction selective (DS) cells in the retina are defined as neurons that respond differentially to the direction of a visual stimulus. According to Barlow and Levick (1965), the term is used to describe a group of neurons that "gives a vigorous discharge of impulses when a stimulus object is moved through its receptive field in one direction."[28] The direction to which a set of neurons responds most strongly is their "preferred direction". In contrast, they do not respond at all to the opposite direction, the "null direction".
The preferred direction is not dependent on the stimulus; that is, regardless of the stimulus's size, shape, or color, the neurons respond when it is moving in their preferred direction and do not respond if it is moving in the null direction. There are three known types of DS cells in the mouse retina: ON/OFF DS ganglion cells, ON DS ganglion cells, and OFF DS ganglion cells. Each has a distinctive physiology and anatomy. Analogous directionally selective cells are not thought to exist in the primate retina.[30] ON/OFF DS ganglion cells act as local motion detectors. They fire at the onset and offset of a stimulus (a light source). If a stimulus is moving in the direction of the cell's preference, it will fire at the leading and the trailing edge. Their firing pattern is time-dependent and is supported by the Reichardt–Hassenstein model, which detects spatiotemporal correlation between two adjacent points. A detailed explanation of the Reichardt–Hassenstein model is provided later in the section. The anatomy of ON/OFF cells is such that the dendrites extend to two sublaminae of the inner plexiform layer and make synapses with bipolar and amacrine cells. They have four subtypes, each with its own preference for direction. Unlike ON/OFF DS ganglion cells, which respond to both the leading and the trailing edge of a stimulus, ON DS ganglion cells are responsive only to a leading edge. The dendrites of ON DS ganglion cells are monostratified and extend into the inner sublamina of the inner plexiform layer. They have three subtypes with different directional preferences. OFF DS ganglion cells act as centripetal motion detectors, and they respond only to the trailing edge of a stimulus. They are tuned to upward motion of a stimulus. The dendrites are asymmetrical and arbor toward the direction of their preference.[29] The first DS cells in invertebrates were found in flies in a brain structure called the lobula plate. The lobula plate is one of the three stacks of the neuropils in the fly's optic lobe. The "tangential cells" of the lobula plate comprise roughly 50 neurons, and they arborize extensively in the neuropil. The tangential cells are known to be directionally selective with distinctive directional preferences. One group is the horizontally sensitive (HS) cells, such as the H1 neuron, which depolarize most strongly in response to a stimulus moving in a horizontal direction (their preferred direction). On the other hand, they hyperpolarize when the direction of motion is opposite (the null direction). Vertically sensitive (VS) cells are another group of cells that are most sensitive to vertical motion. They depolarize when a stimulus is moving downward and hyperpolarize when it is moving upward. Both HS and VS cells respond with a fixed preferred direction and a null direction regardless of the color or contrast of the background or the stimulus. It is now known that motion detection in vision is based on the Hassenstein–Reichardt detector model.[31] This is a model used to detect correlation between two adjacent points. It consists of two symmetrical subunits. Both subunits have a receptor that can be stimulated by an input (light in the case of the visual system). In each subunit, when an input is received, a signal is sent to the other subunit. At the same time, the signal is delayed in time within the subunit and, after this temporal filter, is multiplied by the signal received from the other subunit.
Thus, within each subunit, the two brightness values, one received directly from its receptor with a time delay and the other received from the adjacent receptor, are multiplied. The multiplied values from the two subunits are then subtracted to produce an output. The direction of selectivity or preferred direction is determined by whether the difference is positive or negative. The direction which produces a positive outcome is the preferred direction. In order to confirm that the Reichardt-Hassenstein model accurately describes the directional selectivity in the retina, the study was conducted using optical recordings of free cytosolic calcium levels after loading a fluorescent indicator dye into the fly tangential cells. The fly was presented uniformly moving gratings while the calcium concentration in the dendritic tips of the tangential cells was measured. The tangential cells showed modulations that matched the temporal frequency of the gratings, and the velocity of the moving gratings at which the neurons respond most strongly showed a close dependency on the pattern wavelength. This confirmed the accuracy of the model both at the cellular and the behavioral level.[32] Although the details of the Hassenstein-Reichardt model have not been confirmed at an anatomical and physiological level, the site of subtraction in the model is now being localized to the tangential cells. When depolarizing current is injected into the tangential cell while presenting a visual stimulus, the response to the preferred direction of motion decreased, and the response to the null direction increased. The opposite was observed with hyperpolarizing current. The T4 and T5 cells, which have been selected as a strong candidate for providing input to the tangential cells, have four subtypes that each project into one of the four strata of the lobula plate that differ in the preferred orientation.[29] One of the early works on DS cells in vertebrates was done on the rabbit retina by H. Barlow and W. Levick in 1965. Their experimental methods include variations to the slit-experiments and recording of the action potentials in the rabbit retina. The basic set-up of the slit experiment was they presented a moving black-white grating through a slit of various widths to a rabbit and recorded the action potentials in the retina. This early study had a large impact on the study of DS cells by laying down the foundation for later studies. The study showed that DS ganglion cells derive their property from the basis of sequence-discriminating activity of subunits, and that this activity may be the result of inhibitory mechanism in response to the motion of image in the null direction. It also showed that the DS property of retinal ganglion cells is distributed over the entire receptive field, and not limited to specific zones. Direction selectivity is contained for two adjacent points in the receptive field separated by as small as 1/4°, but selectivity decreased with larger separations. They used this to support their hypothesis that discrimination of sequences gives rise to direction selectivity because normal movement would activate adjacent points in a succession.[28] ON/OFF DS ganglion cells can be divided into 4 subtypes differing in their directional preference, ventral, dorsal, nasal, or temporal. The cells of different subtypes also differ in their dendritic structure and synaptic targets in the brain. 
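The two-subunit computation described above (delay one receptor's signal, multiply it by the neighbouring receptor's signal, and subtract the mirror-image product) can be written out directly. The sketch below uses an arbitrary drifting sinusoidal stimulus, receptor spacing, and delay; it is only a numerical illustration of the Hassenstein–Reichardt scheme, not a model of any particular neuron.

```python
import numpy as np

def reichardt_correlator(left_signal, right_signal, delay):
    """Hassenstein-Reichardt detector for one pair of neighboring receptors.
    Positive output = motion from the left receptor toward the right one."""
    d_left = np.roll(left_signal, delay)     # delayed copy of the left input
    d_right = np.roll(right_signal, delay)   # delayed copy of the right input
    d_left[:delay] = 0.0                     # discard wrap-around samples
    d_right[:delay] = 0.0
    # Subunit 1: delayed-left x right; Subunit 2: delayed-right x left.
    return np.mean(d_left * right_signal - d_right * left_signal)

# A sinusoidal grating drifting rightward across two receptors 0.5 rad apart.
t = np.linspace(0, 10, 2000)
left = np.sin(2 * np.pi * 1.0 * t)
right = np.sin(2 * np.pi * 1.0 * t - 0.5)    # the right receptor sees it later

print("rightward grating:", reichardt_correlator(left, right, delay=25))  # > 0
print("leftward grating: ", reichardt_correlator(right, left, delay=25))  # < 0
```

The sign of the subtracted output carries the direction, mirroring the statement above that the preferred direction is the one that produces a positive outcome.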
The neurons that were identified to prefer ventral motion were also found to have dendritic projections in the ventral direction. Also, the neurons that prefer nasal motion had asymmetric dendritic extensions in the nasal direction. Thus, a strong association between the structural and functional asymmetry in ventral and nasal direction was observed. With a distinct property and preference for each subtype, there was an expectation that they could be selectively labeled by molecular markers. The neurons that were preferentially responsive to vertical motion were indeed shown to be selectively expressed by a specific molecular marker. However, molecular markers for other three subtypes have not been yet found.[33] The direction selective (DS) ganglion cells receive inputs from bipolar cells andstarburst amacrine cells. The DS ganglion cells respond to their preferred direction with a large excitatory postsynaptic potential followed by a small inhibitory response. On the other hand, they respond to their null direction with a simultaneous small excitatory postsynaptic potential and a large inhibitory postsynaptic potential. Starburst amacrine cells have been viewed as a strong candidate for direction selectivity in ganglion cells because they can release both GABA and Ach. Their dendrites branch out radiantly from a soma, and there is a significant dendritic overlap. Optical measurements of Ca2+concentration showed that they respond strongly to the centrifugal motion (the outward motion from the soma to the dendrites), while they don't respond well to the centripetal motion (the inward motion from the dendritic tips to the soma). When the starburst cells were ablated with toxins, direction selectivity was eliminated. Moreover, their release of neurotransmitters itself, specifically calcium ions, reflect direction selectivity, which may be presumably attributed to the synaptic pattern. The branching pattern is organized such that certain presynaptic input will have more influence on a given dendrite than others, creating a polarity in excitation and inhibition. Further evidence suggests that starburst cells release inhibitory neurotransmitters, GABA onto each other in a delayed and prolonged manner. This accounts for the temporal property of inhibition.[29] In addition to spatial offset due to GABAergic synapses, the important role of chloride transporters has started to be discussed. The popular hypothesis is that starburst amacrine cells differentially express chloride transporters along the dendrites. Given this assumption, some areas along the dendrite will have a positive chloride-ion equilibrium potential relative to the resting potential while others have a negative equilibrium potential. This means that GABA at one area will be depolarizing and at another area hyperpolarizing, accounting for the spatial offset present between excitation and inhibition.[34] Recent research (published March 2011) relying onserial block-face electron microscopy(SBEM) has led to identification of the circuitry that influences directional selectivity. This new technique provides detailed images of calcium flow and anatomy of dendrites of bothstarburst amacrine(SAC) and DS ganglion cells. By comparing the preferred directions of ganglion cells with their synapses on SAC's, Briggman et al. 
provide evidence for a mechanism primarily based on inhibitory signals from SAC's.[35] In this oversampled serial block-face scanning electron microscopy study of a single sampled retina, retinal ganglion cells were found to receive asymmetrical inhibitory inputs directly from starburst amacrine cells, which suggests that part of the computation of directional selectivity also occurs postsynaptically. Such postsynaptic models are unparsimonious: if any given starburst amacrine cell conveys motion information to retinal ganglion cells, then any computation of "local" direction selectivity performed postsynaptically by the retinal ganglion cells is redundant and dysfunctional. An acetylcholine (ACh) transmission model of directionally selective starburst amacrine cells provides a robust topological underpinning of motion sensing in the retina.[36]
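Returning to the motion coherence discrimination tasks described earlier in this article, the sketch below generates one frame-to-frame step of a random-dot kinematogram and applies a deliberately crude "observer" that vector-averages the dot displacements. The dot count, step size, and the averaging rule are illustrative assumptions rather than a model of the visual system.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_dot_frame_step(n_dots, coherence, signal_direction_rad, step=1.0):
    """One frame-to-frame update of a random-dot kinematogram.
    `coherence` is the fraction of 'signal' dots sharing the common direction;
    the remaining 'noise' dots move in random directions."""
    n_signal = int(round(coherence * n_dots))
    directions = np.concatenate([
        np.full(n_signal, signal_direction_rad),
        rng.uniform(0, 2 * np.pi, n_dots - n_signal),
    ])
    return step * np.column_stack([np.cos(directions), np.sin(directions)])

def crude_observer_direction(displacements):
    """Judge global motion direction by vector-averaging all dot displacements."""
    mean_dx, mean_dy = displacements.mean(axis=0)
    return np.arctan2(mean_dy, mean_dx)

for coherence in (0.05, 0.2, 0.8):
    disp = random_dot_frame_step(n_dots=200, coherence=coherence,
                                 signal_direction_rad=0.0)
    est = np.degrees(crude_observer_direction(disp))
    print(f"coherence {coherence:.2f}: estimated direction {est:+.1f} deg")
# At low coherence the estimate becomes unreliable; the coherence at which
# direction judgments become reliable plays the role of the motion
# coherence threshold described in the text.
```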
https://en.wikipedia.org/wiki/Motion_perception
Systems neuroscience is a subdiscipline of neuroscience and systems biology that studies the structure and function of the various neural circuits and systems that make up the central nervous system of an organism.[1] Systems neuroscience encompasses a number of areas of study concerned with how nerve cells behave when connected together to form neural pathways, neural circuits, and larger brain networks. At this level of analysis, neuroscientists study how different neural circuits work together to analyze sensory information, form perceptions of the external world, generate emotions, make decisions, and execute movements.[2] Researchers in systems neuroscience are concerned with the relation between molecular and cellular approaches to understanding brain structure and function, as well as with the study of high-level mental functions such as language, memory, and self-awareness (which are the purview of behavioral and cognitive neuroscience). To deepen their understanding of these relations, systems neuroscientists typically employ techniques for studying networks of neurons as they function, by way of electrophysiology using either single-unit recording or multi-electrode recording, functional magnetic resonance imaging (fMRI), and PET scans.[1] The term is commonly used in an educational framework: a common sequence of graduate school neuroscience courses consists of cellular/molecular neuroscience for the first semester, then systems neuroscience for the second semester. It is also sometimes used to distinguish a subdivision within a neuroscience department in a university. Systems neuroscience has three major branches in relation to measuring the brain: behavioral neuroscience, computational modeling, and brain activity. Through these three branches, systems neuroscience breaks down its core concepts and provides valuable information about how the functional systems of an organism operate both independently and intertwined with one another. Behavioral neuroscience in relation to systems neuroscience focuses on representational dissimilarity matrices (RDMs), which categorize brain activity patterns and compare them across different conditions, such as the differing levels of brain activity when observing an animal as opposed to an inanimate object. These models give a quantitative representation of behavior while providing comparable models of the patterns observed.[5] Correlations or anticorrelations between brain-activity patterns are used across experimental conditions to distinguish the processing of each brain region when stimuli are presented. Computational models provide a baseline description of brain activity, which is typically represented by the firing of a single neuron. This is essential for understanding systems neuroscience, as it shows the physical changes that occur during functional changes in an organism. While these models are important for understanding brain activity, a one-to-one correspondence of neuron firing has not yet been completely uncovered. Different measurements of the same activity lead to different patterns when, in theory, the patterns should be the same, or at least similar to one another. However, studies show fundamental differences when it comes to measuring the brain, and science strives to investigate this dissimilarity. Brain activity and brain imaging, in combination with computational models and the insights of behavioral neuroscience, help scientists understand the differences between the functional systems of an organism.
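The representational dissimilarity matrices (RDMs) mentioned above compare activity patterns across conditions; a common choice of dissimilarity is one minus the correlation between the patterns evoked by each pair of conditions. The sketch below uses simulated "voxel" patterns and made-up condition labels, so only the recipe, not the data, should be taken literally.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated activity patterns: rows = experimental conditions, columns = voxels
# (or recording channels). The two "animate" conditions share a common
# component, so they should end up less dissimilar to each other than to the
# "inanimate" conditions.
conditions = ["face", "dog", "chair", "house"]
shared_animate = rng.normal(size=100)
patterns = np.stack([
    shared_animate + 0.5 * rng.normal(size=100),   # face
    shared_animate + 0.5 * rng.normal(size=100),   # dog
    rng.normal(size=100),                          # chair
    rng.normal(size=100),                          # house
])

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation."""
    return 1.0 - np.corrcoef(patterns)

np.set_printoptions(precision=2, suppress=True)
print(conditions)
print(rdm(patterns))   # small values for the face/dog pair, near 1 elsewhere
```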
The three major branches of systems neuroscience work together to provide the most accurate information about brain activity that neuroimaging allows in its current state. While there can always be improvements to brain-activity measurements, typical electrophysiology and imaging studies can already provide massive amounts of information about the systems of an organism and how they may work intertwined with one another. For example, using the core branches of systems neuroscience, scientists have been able to dissect a migraine's attack on the nervous system by observing brain-activity dissimilarities and using computational modeling to compare a normally functioning brain with a brain affected by a migraine.[6] Systems neuroscience is studied through electrophysiology, which focuses on the electrical activity of biological systems in an organism. In electrophysiology studies, the activity levels of different systems in the body help explain abnormalities of systemic functioning, such as an abnormal heart rhythm or a stroke. While a major focus of electrophysiology is the heart, it also provides recordings of brain activity in relation to other bodily functions, which can be useful for linking neurological activity between systems. Although systems neuroscience is generally studied in relation to a human's level of functioning, many studies have been conducted on Drosophila, the small fruit fly, as it is considered easier to study due to its simpler brain structure and to genetic and environmental factors that are more controllable from an experimental standpoint. While there are strong dissimilarities between the functional capabilities of a fruit fly and those of a human, these studies still provide valuable insight into how a human brain might work. Neural circuits and neuron firing are more easily observed in fruit flies through functional brain imaging, as neuronal pathways are simplified and, therefore, easier to follow. These pathways may be simple, but understanding the basis of neuron firing in them can lead to important studies of human neuronal pathways and eventually to a one-to-one neuron correspondence when a system is functioning.[7]
https://en.wikipedia.org/wiki/Systems_neuroscience
Adeductive classifieris a type ofartificial intelligenceinference engine. It takes as input a set of declarations in aframe languageabout a domain such as medical research or molecular biology. For example, the names ofclasses, sub-classes, properties, and restrictions on allowable values. The classifier determines if the various declarations are logically consistent and if not will highlight the specific inconsistent declarations and the inconsistencies among them. If the declarations are consistent the classifier can then assert additional information based on the input. For example, it can add information about existing classes, create additional classes, etc. This differs from traditional inference engines that trigger off of IF-THEN conditions in rules. Classifiers are also similar totheorem proversin that they take as input and produce output viafirst-order logic. Classifiers originated withKL-ONEframe languages. They are increasingly significant now that they form a part in the enabling technology of theSemantic Web. Modern classifiers leverage theWeb Ontology Language. The models they analyze and generate are calledontologies.[1] A classic problem inknowledge representationfor artificial intelligence is the trade off between theexpressive powerand thecomputational efficiencyof the knowledge representation system. The most powerful form of knowledge representation is first-order logic. However, it is not possible to implement knowledge representation that provides the complete expressive power of first-order logic. Such a representation will include the capability to represent concepts such as the set of all integers which are impossible to iterate through. Implementing an assertion quantified for an infinite set by definition results in an undecidable non-terminating program. However, the problem is deeper than not being able to implement infinite sets. As Levesque demonstrated, the closer a knowledge representation mechanism comes to first-order logic, the more likely it is to result in expressions that require infinite or unacceptably large resources to compute.[2] As a result of this trade-off, a great deal of early work on knowledge representation for artificial intelligence involved experimenting with various compromises that provide a subset of first-order logic with acceptable computation speeds. One of the first and most successful compromises was to develop languages based predominately onmodus ponens, i.e. IF-THEN rules.Rule-based systemswere the predominant knowledge representation mechanism for virtually all earlyexpert systems. Rule-based systems provided acceptable computational efficiency while still providing powerful knowledge representation. Also, rules were highly intuitive to knowledge workers. Indeed, one of the data points that encouraged researchers to develop rule-based knowledge representation was psychological research that humans often represented complex logic via rules.[3] However, after the early success of rule-based systems there arose more pervasive use of frame languages instead of or more often combined with rules. Frames provided a more natural way to represent certain types of concepts, especially concepts in subpart or subclass hierarchies. This led to development of a new kind of inference engine known as a classifier. A classifier could analyze a class hierarchy (also known as anontology) and determine if it was valid. If the hierarchy was invalid the classifier would highlight the inconsistent declarations. 
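As a rough sketch of the kind of checking described above, the toy "classifier" below reads frame-style declarations (subclass links plus disjointness constraints), computes the transitive subclass closure, and flags any class that ends up beneath two classes declared disjoint. All class names are invented, and real classifiers for languages such as KL-ONE or OWL handle far richer constructs (property restrictions, value types, and automatic reclassification).

```python
# Toy frame-style declarations: subclass links and disjointness constraints.
subclass_of = {
    "Enzyme": {"Protein"},
    "Protein": {"Molecule"},
    "SmallMolecule": {"Molecule"},
    "OddAssay": {"Enzyme", "SmallMolecule"},   # suspicious declaration
}
disjoint = [("Protein", "SmallMolecule")]

def ancestors(cls, subclass_of):
    """Return every superclass reachable from `cls` (transitive closure)."""
    seen, stack = set(), list(subclass_of.get(cls, ()))
    while stack:
        parent = stack.pop()
        if parent not in seen:
            seen.add(parent)
            stack.extend(subclass_of.get(parent, ()))
    return seen

def check_consistency(subclass_of, disjoint):
    """Flag classes that are (directly or indirectly) subclasses of two
    classes that were declared disjoint."""
    problems = []
    for cls in subclass_of:
        ups = ancestors(cls, subclass_of) | {cls}
        for a, b in disjoint:
            if a in ups and b in ups:
                problems.append(f"{cls} falls under disjoint classes {a} and {b}")
    return problems

for message in check_consistency(subclass_of, disjoint) or ["declarations are consistent"]:
    print(message)
# -> OddAssay falls under disjoint classes Protein and SmallMolecule
```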
For a language to utilize a classifier, it requires a formal foundation. The first language to successfully demonstrate a classifier was the KL-ONE family of languages. TheLOOM languagefrom ISI was heavily influenced by KL-ONE, and also by the rising popularity of object-oriented tools and environments; LOOM provided a true object-oriented capability (e.g. message passing) in addition to frame language capabilities. Classifiers play a significant role in the vision for the next-generation Internet known as the Semantic Web. The Web Ontology Language provides a formalism that can be validated and reasoned over via classifiers such as HermiT and FaCT++.[4] The earliest versions of classifiers werelogic theorem provers. The first classifier to work with a frame language was theKL-ONEclassifier.[5][6]A later system, built on Common Lisp, was LOOM from the Information Sciences Institute. LOOM provided true object-oriented capabilities leveraging the Common Lisp Object System, along with a frame language.[7]In the Semantic Web, theProtegetool fromStanfordprovides classifiers (also known as reasoners) as part of the default environment.[8]
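The classify-and-check loop described above can be illustrated with a toy example. The following is a minimal Python sketch over an invented frame representation; the names (Ontology, add_class, check_consistency) are illustrative assumptions, not the KL-ONE, LOOM, or OWL reasoner APIs. It infers superclasses by transitive closure and flags any class that ends up subsumed by two classes declared disjoint.

```python
# A minimal, illustrative sketch of a deductive classifier over a toy frame
# representation. Real systems (KL-ONE, LOOM, OWL reasoners such as HermiT or
# FaCT++) implement far richer description-logic semantics.

class Ontology:
    def __init__(self):
        self.parents = {}      # class name -> set of declared direct superclasses
        self.disjoint = set()  # frozensets of class pairs declared disjoint

    def add_class(self, name, parents=()):
        self.parents.setdefault(name, set()).update(parents)
        for p in parents:
            self.parents.setdefault(p, set())

    def declare_disjoint(self, a, b):
        self.disjoint.add(frozenset((a, b)))

    def ancestors(self, name, seen=None):
        # Classification step: compute all inferred superclasses (transitive closure).
        seen = set() if seen is None else seen
        for p in self.parents.get(name, ()):
            if p not in seen:
                seen.add(p)
                self.ancestors(p, seen)
        return seen

    def check_consistency(self):
        # Highlight classes subsumed by two classes that were declared disjoint.
        problems = []
        for c in self.parents:
            ups = self.ancestors(c) | {c}
            for pair in self.disjoint:
                if pair <= ups:
                    problems.append((c, tuple(pair)))
        return problems


onto = Ontology()
onto.add_class("Protein", ["Molecule"])
onto.add_class("Enzyme", ["Protein"])
onto.add_class("NucleicAcid", ["Molecule"])
onto.declare_disjoint("Protein", "NucleicAcid")
onto.add_class("Ribozyme", ["Enzyme", "NucleicAcid"])  # inconsistent under the toy axioms

print(onto.ancestors("Ribozyme"))   # inferred superclasses: Enzyme, Protein, Molecule, NucleicAcid
print(onto.check_consistency())     # flags Ribozyme against the disjointness declaration
```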
https://en.wikipedia.org/wiki/Deductive_classifier
Afaceted classificationis aclassification schemeused in organizing knowledge into a systematic order. A faceted classification uses semantic categories, either general or subject-specific, that are combined to create the full classification entry. Many library classification systems use a combination of a fixed, enumerative taxonomy of concepts with subordinate facets that further refine the topic. There are two primary types of classification used for information organization: enumerative and faceted. An enumerative classification contains a full set of entries for all concepts.[1]A faceted classification system uses a set of semantically cohesive categories that are combined as needed to create an expression of a concept. In this way, the faceted classification is not limited to already defined concepts. While this makes the classification quite flexible, it also makes the resulting expression of topics complex.[2]To the extent possible, facets represent "clearly defined, mutually exclusive, and collectively exhaustive aspects of a subject. The premise is that any subject or class can be analyzed into its component parts (i.e., its aspects, properties, or characteristics)."[3]Some commonly used general-purpose facets are time, place, and form.[4] There are few purely faceted classifications; the best known of these is theColon ClassificationofShiyali Ramamrita Ranganathan, a general knowledge classification for libraries. Some other faceted classifications are specific to special topics, such as the Art and Architecture Thesaurus and the faceted classification of occupational safety and health topics created by D. J. Foskett for the International Labour Organization.[5] Many library classifications combine the enumerative and faceted classification techniques. TheDewey Decimal Classification, theLibrary of Congress Classification, and theUniversal Decimal Classificationall make use of facets at various points in their enumerated classification schedules. The allowed facets vary based on the subject area of the classification. These facets are recorded as tables that represent recurring types of subdivisions within subject areas. There are general facets that can be used wherever appropriate, such as geographic subdivisions of the topic. Other tables are applied only to specific areas of the schedules. Facets can be combined to create a complex subject statement.[4] Daniel Joudrey and Arlene Taylor describe faceted classification using an analogy: "If one thinks of each of the faces of a cut and polished diamond as a facet of the whole, one can picture a classification notation that has small notations standing for subparts of the whole topic, which are pieced together to create a complete classification notation."[6] Faceted classifications exhibit many of the same problems as classifications based on a hierarchy. In particular, some concepts could belong in more than one facet, so their placement in the classification may appear to be arbitrary to the classifier. It also tends to result in a complex notation because each facet must be distinguishable as recorded.[2] Search in systems with faceted classification can enable a user to navigate information along multiple paths corresponding to different orderings of the facets. This contrasts with traditional taxonomies in which the hierarchy of categories is fixed and unchanging.[7]It is also possible to use facets to filter search results to more quickly find desired results. TheColon Classificationdeveloped byS. R. 
Ranganathanis an example of a general faceted classification designed to be applied to all library materials. In the Colon Classification system, a book is assigned a set of values from each independent facet.[8]This facet formula uses punctuation marks and symbols placed between the facets to connect them. The Colon Classification was named after its use of the colon as the primary symbol in its notation.[9][10] Ranganathan stated thathierarchical classificationschemes like the Dewey Decimal Classification (DDC) or the Library of Congress Subject Headings are too limiting and finite to use for modern classification and that many items can pertain to more than one subject. He organized his classification scheme into 42 classes. Each class can be categorized according to particular characteristics, which he called facets. Ranganathan said that there are five fundamental categories that can be used to demonstrate the facets of a subject: personality, material, energy, space and time. He called this the PMEST formula.[11] Another example of a faceted classification scheme is theUniversal Decimal Classification(UDC), a complex multilingual classification that can be used in all fields of knowledge.[12]The Universal Decimal Classification scheme was created at the end of the nineteenth century by Belgian bibliographersPaul OtletandHenri La Fontaine. The goal of their system was to create an index that would be able to record knowledge even if it is stored in non-conventional ways, including materials in notebooks and ephemera. They also wanted their index to organize material systematically instead of alphabetically.[13] The UDC has an overall taxonomy of knowledge that is extended with a number of facets, such as language, form, place and time. Each facet has its own symbol in the notation, such as "=" for language, "-02" for materials, and "[...]" for subordinate concepts.[4] D. J. Foskett, a member of theClassification Research Groupin London, developed a classification of occupational safety and health materials for the library of theInternational Labour Organization.[5][14]After a study of the literature in the field, he created the classification around a set of subject facets. Notation was solely alphabetic, with the sub-facets organized hierarchically using extended codes, such as "g Industrial equipment and processes", "ge Machines".[14] While not strictly a classification system, theAATuses facets similar to those of Ranganathan's Colon Classification. Hierarchical classification refers to the classification of objects using onesinglehierarchical taxonomy. Faceted classification may actually employ hierarchy in one or more of its facets, but allows for the use of more than one taxonomy to classify objects.
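The way independent facets are combined into a single notation can be sketched in a few lines of Python. The facet order follows the PMEST formula described above, but the separator symbols and facet codes below are simplified placeholders, not the actual Colon Classification or UDC schedules.

```python
# Illustrative sketch: composing a faceted classification notation from
# independent facets, loosely in the spirit of Ranganathan's PMEST formula.
# Codes and separators are invented for the example.

PMEST_ORDER = ["personality", "matter", "energy", "space", "time"]
SEPARATOR = {"personality": ",", "matter": ";", "energy": ":", "space": ".", "time": "'"}

def build_notation(base_class, facets):
    """Combine a base class number with facet codes in citation order."""
    notation = base_class
    for facet in PMEST_ORDER:
        if facet in facets:
            notation += SEPARATOR[facet] + facets[facet]
    return notation

# A hypothetical entry: base class "L" with subject, process, place and period facets.
print(build_notation("L", {"personality": "185", "energy": "4", "space": "44", "time": "N5"}))
# -> L,185:4.44'N5
```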
https://en.wikipedia.org/wiki/Faceted_classification
The field ofcomplex networkshas emerged as an important area of science for generating novel insights into the nature of complex systems.[1]The application of network theory toclimate scienceis a young and emerging field.[2][3][4]To identify and analyze patterns in global climate, scientists model climate data as complex networks. Unlike most real-world networks wherenodesandedgesare well defined, in climate networks, nodes are identified as the sites in a spatial grid of the underlying global climate data set, which can be represented at various resolutions. Two nodes are connected by an edge depending on the degree of statistical similarity (that may be related to dependence) between the corresponding pairs oftime-seriestaken from climate records.[3][5]The climate network approach enables novel insights into the dynamics of theclimate systemover different spatial and temporal scales.[3] Depending upon the choice ofnodesand/oredges, climate networks may take many different forms, shapes, sizes and complexities. Tsonis et al. introduced the field of complex networks to climate. In their model, the nodes of the network were defined by a single variable (the 500 hPa field) fromNCEP/NCAR Reanalysisdatasets. To establish theedgesbetween nodes, thecorrelation coefficientat zero time lag was estimated for all possible pairs of nodes. A pair of nodes was considered connected if itscorrelation coefficientwas above a threshold of 0.5.[1] Steinhaeuser and team introduced the novel technique ofmultivariatenetworks inclimateby constructing networks from several climate variables separately and capturing their interaction in a multivariate predictive model. It was demonstrated in their studies that, in the context of climate, extracting predictors based onclusterattributes yields informative precursors that improvepredictiveskill.[5] Kawale et al. presented a graph-based approach to find dipoles in pressure data. Given the importance ofteleconnection, this methodology has the potential to provide significant insights.[6] Imme et al. introduced a new type of network construction in climate based on a temporal probabilistic graphical model, which provides an alternative viewpoint by focusing on information flow within the network over time.[7] Agarwal et al. proposed advanced linear[8]and nonlinear[9]methods to construct and investigate climate networks at different timescales. Climate networks constructed using SST datasets at different timescales showed that multi-scale analysis of climatic processes holds the promise of better understanding thesystem dynamicsthat may be missed when processes are analyzed at one timescale only.[10] Climate networks enable insights into thedynamicsof theclimatesystem over many spatial scales. The localdegree centralityand related measures have been used to identify super-nodes and to associate them with known dynamical interrelations in the atmosphere, calledteleconnectionpatterns. It was observed that climate networks possess“small world”properties owing to the long-range spatial connections.[2] Steinhaeuser et al. applied complex networks to explore the multivariate andmulti-scaledependence in climate data. The group's findings suggested a close similarity of observed dependence patterns in multiple variables over multiple time and spatial scales.[4] Tsonis and Roeber investigated the coupling architecture of the climate network. It was found that the overall network emerges from intertwined subnetworks.
One subnetwork operates at higher altitudes and the other operates in the tropics, while the equatorial subnetwork acts as an agent linking the two hemispheres. Although both subnetworks possess thesmall world property, they are significantly different from each other in terms of network properties such as thedegree distribution.[11] Donges et al. applied climate networks to physical and nonlinear dynamical interpretations of climate. The team used a measure of node centrality,betweenness centrality(BC), to demonstrate wave-like structures in theBCfields of climate networks constructed from monthly averaged reanalysis and atmosphere-ocean coupled general circulation model (AOGCM)surface air temperature(SAT) data.[12] Teleconnectionsare spatial patterns in the atmosphere that link weather and climate anomalies over large distances across the globe. Teleconnections are persistent, lasting for one to two weeks and often much longer, and they are recurrent, as similar patterns tend to occur repeatedly. The presence of teleconnections is associated with changes in temperature, wind, precipitation, and other atmospheric variables of great societal interest.[13] Numerous computational challenges arise at various stages of the network construction and analysis process in the field of climate networks.[14]
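The basic single-variable construction described above (grid points as nodes, edges from thresholded zero-lag correlations) can be sketched in Python with NumPy. The data array below is a random placeholder standing in for gridded reanalysis time series; the 0.5 threshold follows the text.

```python
# A minimal sketch of the correlation-threshold construction of a climate
# network: rows of `data` are node time series, and two nodes are linked when
# their zero-lag Pearson correlation exceeds a threshold.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_times = 200, 500                      # placeholder grid size and record length
data = rng.standard_normal((n_nodes, n_times))   # stand-in for anomaly time series

corr = np.corrcoef(data)                         # Pearson correlation, all node pairs

threshold = 0.5
adjacency = (corr > threshold).astype(int)       # some studies use |corr| instead
np.fill_diagonal(adjacency, 0)                   # no self-loops

degree = adjacency.sum(axis=1)
print("edges:", adjacency.sum() // 2, "max degree:", degree.max())
```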
https://en.wikipedia.org/wiki/Climate_as_complex_networks
Communicative ecologyis a conceptual model used in the field of media and communications research. The model is used to analyse and represent the relationships betweensocial interactions,discourse, and communicationmediaand technology of individuals, collectives and networks in physical and digital environments. Broadly, the term communicative ecology refers to "the context in whichcommunicationprocesses occur"(Foth & Hearn, 2007, p. 9). These processes are seen to involve people communicating with others in theirsocial networks, both face-to-face and using a mix of media and communication technologies(Tacchi, Slater & Hearn, 2003)(Tacchi, et al. 2007). The communicative ecology model enables researchers to take a holistic approach to understanding the dynamic interrelationships between social dimensions, discourse and communications technology in both physical and digital environments. The use of an ecological metaphor markedly expands the potential sphere of inquiry for communications and media research. It shifts the focus away from studies focusing on single communications devices or applications, for example the mobile telephone or email, towards whole system interactions. Consequently, it extends the possibility of research into population change and lifecycles, spatiotemporal dynamics, networks and clusters, and power relations within the frame of a communicative ecology. Also, as ecologies are not isolated entities, further questions regarding the similarities, differences, interrelationships and transactions between ecologies can be examined. A richer understanding can then be derived from micro and macro level analysis of the social and cultural context of communication(Foth & Hearn, 2007). The concept of communicative ecology has emerged amidst concerns that studies attempting to identify causal relationships between discrete technologies and social impacts neglect variables that are salient to the successful implementation and uptake of technologiesin situ(Dourish, 2004). This mirrors the way in which the biological field ofecologyemerged from the perceived inadequacies of studies of single species of flora and fauna. In a similar way, researchers who use the communicative ecology framework argue that media technologies should not be examined independent of their context of use. They assert thatnew mediamust be both studied and designed with reference to the users' wider set of social relationships, the nature of the communication itself and other media in use. Through use ofethnographic approachesa richer, more nuanced understanding of the communicative system of a given setting is able to be developed. New media are usually introduced into existing communication structures and must compete for attention in relation to the users' existing portfolio of communication tools. Consequently, if a new communication technology does not complement or enhance the existing toolset it risks rejection. The communicative ecology model allows researchers to examine how a new form of media or technology may or may not be integrated into existing communication patterns(Tacchi, 2006). However, the potential utility of the communicative ecology model is far broader than this. Any new form of social intervention, content or technology must be locally appropriate for it to succeed(Tacchi, 2006). 
Hence, the communicative ecology framework is useful forhuman-computer interactiondesigners, content creators, and inurban informatics,urban planning,community developmentandeducationwhen seeking opportunities to enhance or augment local, social, and communication practices. The concept of communicative ecology is derived from Altheide's "ecology of communication"(1994;1995). Altheide developed this concept to examine the mutually influential relationships between information technology, communication formats and social activities, within the context of people's social and physical environments, as they define and experience them. The concept is influenced byMcLuhan's(1962)research onmedia ecology, which demonstrates that new media and technology can influence communicative content, and also thesymbolic interactionistperspective of communication as embedded in context(Barnlund, 1979). Altheide considers ecology of communication to be a fluid construct that can be used as a frame to investigate the ways in which social activities are being created and modified through the use of technologies that, in turn, give rise to new communication formats. He is particularly interested in the relationship between social activities and technologies for surveillance and control. The communicative ecology concept has been further developed for use in studies of information and communication technologies (ICT) initiatives in developing nations(Slater, Tacchi & Lewis, 2002). A guide to the study of communicative ecologies using the ethnographic action research method, developed with the support of UNESCO, has spawned a proliferation of empirical research(Tacchi et al., 2007)(Tacchi, Slater & Hearn, 2003). Many of these investigations have focused on ICT for development projects associated with community technology centres and local information networks in South Asian and African nations(Slater et al., 2002;Slater & Tacchi, 2004;Pringle, Bajracharya, & Bajracharya, 2004;Sharma, 2005;Nair, Jennaway & Skuse, 2006;Rangaswamy, 2007). In these studies, local community members are often engaged as active participants in a research and project development process that provides opportunities for them to gain the ICT literacy skills necessary to create locally meaningful content(Subramanian, Nair & Sharma, 2004;Tacchi, 2005a,2007;Tacchi & Watkins, 2007). Many of these research activities investigate and support interventions that aim to alleviate poverty(Slater & Kwami, 2005), educate(Subramanian, 2005)and promote thedigital inclusionnecessary for citizens to actively participate in civic life and have their voices heard(Tacchi, 2005b;Skuse & Cousins, 2007,2008;Skuse, Fildes, Tacchi, Martin & Baulch, 2007;. Some studies have reported on the use of a particular ICT, for examplecommunity radio(Tacchi, 2005c)ormobile phones(Horst & Miller, 2006;Miller, 2007), in relation to broader communication patterns. More recently, the communicative ecology framework has been extended in studies of the nature of media use to support social networks inurban villagesand inner-city apartment buildings(Foth & Hearn, 2007). This paper introduced the concept of dimensions to the communicative ecology model.Button and Partridge (2007)used the model to examine the online communicative ecology of neighbourhood websites. The model has also been used to investigate how students communicate and reflect on their learning(Berry & Hamilton, 2006). 
A special issue of the Electronic Journal of Communication showcased the versatility of the communicative ecology approach(Hearn & Foth, 2007). In this issue,Allison (2007)looked at the communicative ecology from the perspective of the individual, whereasWilkin, Ball-Rokeach, Matsaganis and Cheong (2007)used a panoptic perspective to compare the ecologies of geo-ethnic communities.Peeples and Mitchell (2007)used the model to explore the social activity of protest.Powell (2007)focused on a particular medium, public internet access, in an urban context.Shepherd, Arnold, Bellamy and Gibbs (2007)extended the concept to attend to the material and spatial aspects of the communicative ecology of the domestic sphere. The term "communicative ecology" has also been used in other studies with various interpretations.Interactional sociolinguistsuse the term to describe the local communicative environment of a particular setting in which discourse is contextualised. Using methods drawn fromlinguistic anthropology, their research begins with a period of ethnography in which a rich understanding of the local communicative ecology is formed. Discourse is then analysed in relation to this ecological context(Gumperz, 1999).Roberts (2005)describes a communicative ecology as comprising the identity of participants, the topics of communication and the ways in which things are communicated, including tone of voice, directness, etc.Beier (2001)draws onHymes'(1974)work inethnography of communicationand uses the concept to understand the range of communicative practices of the Nanti people as a system of interaction. From anapplied linguisticsperspective,McArthur (2005)describes a communicative ecology as embracing the nature and evolution of language, media and communication technologies. He uses the term to discuss the interactions between the world's languages and communication technologies.Wagner (2004)uses the term to refer to the deep structures of meaning and communicative action that human language shares with other species, particularly thebonobo. In cultural studies of terrorism,White (2003)uses the term to describe the interchange of signs within interacting networks of individuals and collectives. In their study ofcomputer-mediated communicationin the workplace,Yates, Orlikowski and Woerner (2003)draw uponErickson's (2000)work on genre ecologies to suggest a communicative ecology can be identified by the types and frequencies of communicative practices, such as email threading activities. Their version of communicative ecology is influenced by members of a workplace engaging in common activities, the length of time over which interaction takes place, whether communication media is synchronous or asynchronous and members' linguistic or cultural background. There is not a single, agreed upon communicative ecology model, rather, this section highlights that there are various approaches to understanding and applying the model in various contexts. Furthermore, concepts that bear some similarity to communicative ecology includeactor-network theory(Latour, 2005),activity theory(Nardi, 1996)the communication infrastructure model(Ball-Rokeach, Kim & Matei, 2001)and the personal communication system(Boase, 2008). Often in sociological literature, an ecology is seen to be anchored in a geographical area of human settlement. 
In the case of a communicative ecology, while the majority of studies have been conducted in physical environments, it is also possible to use the framework to examine ecologies grounded in an online environment. In many cases, communicative ecologies move seamlessly across both kinds of settings. For example, settings may include both public and private spaces, transport infrastructure and websites, in any combination. Different settings have distinct affordances that may facilitate or hinder communication within an ecology. In a physical setting this might mean a neighbourhood has several coffee shops and parks where residents can interact. In an online setting, certain design features may enable certain types of communication and constrain others. For example, discussion boards facilitate one-to-many or many-to-many collective forms of communication but not one-to-one or peer-to-peer style networked communication that would be better served bySMSorinstant messaging(Foth & Hearn, 2007). Similar to biological ecologies, communicative ecologies have lifecycles. They can be described as new or well-established, active or dormant, or in a period of growth or decline. For example, residents of a new master-planned housing estate will have a young communicative ecology that is in a period of growth but may need cultivation in order to become active. In this case, sociocultural animation of the ecology may be required for it to become socially sustainable(Tacchi et al., 2003). Communicative ecologies can be conceived as having three layers and differing across several spectral dimensions. The nature of a communicative ecology changes as its members engage in and transition between different types of activities. A communicative ecology has three layers: social, discursive and technological(Foth & Hearn, 2007). These layers are seen to be intricately entwined and mutually constitutive, rather than discrete, hierarchical or as having unidirectional causal relationships. While it is challenging to consider each layer in isolation, analysing each layer independently can be a beneficial preliminary step prior to the examination of the complex, mutual shaping relationships that form part of the holistic view of a communicative ecology. The social layer refers to people and the various social structures with which they identify themselves, ranging from informal personal networks to formal institutions. For example, this may include groups of friends, formal community organisations and companies. The discursive layer refers to the themes or content of both mediated and unmediated communication. The technological layer comprises communication media and technologies. This includes both traditional media, such as newspapers and television, and new media including mobile phones and social networking sites. The devices and applications within this layer are differentiated according to the communication model they facilitate. For example, collective communication is made possible through one-to-many or many-to-many forms of media, such as television or online discussion boards, whereas networked communication can be enabled through one-to-one or peer-to-peer media, including instant messaging or SMS(Foth & Hearn, 2007). The layered nature of the communicative ecology framework enables the investigation of research questions surrounding the media preferences of diverse individuals and groups and how these choices influence their relationships. 
It also allows researchers to explore the nature of discourse between individuals and within groups, and how communication changes according to the nature of people's relationships with one another. The communicative ecology model is also useful for considering how different topics of communication affect choice of media and how different media shape communicative content. Communicative ecologies vary across several spectral dimensions. The dimensions identified to date, include networked/collective, global/local and online/offline(Foth & Hearn, 2007). The dimensional properties of communicative ecologies allow researchers to consider both the relative strength of each characteristic, and also how individuals and the ecology itself transitions fluidly between dimensions. For example, researchers can question how an individual or group's choice of media changes as they transition between networked and collective forms of interaction. They may also consider how different media may facilitate or constrain either networked or collective interaction. If interested in the global or local characteristics of an ecology, researchers may examine how communication with proximate others may be mediated differently from communication with others in distant locales. They could also explore which communication topics are more likely to occur at the local level rather than within globally distributed social networks. As users of new technologies now move seamlessly between what was formerly constituted as online and offline domains, researchers may use the communicative ecology model to address questions of how and why people choose certain assemblages of online and offline media to achieve particular communicative goals. Investigation of how the nature of discourse may affect the choice of online or offline modes of interaction is also possible. Communicative ecologies can also be characterised across several other dimensions. One example is the private/public dimension. People may choose to interact and communicate with each other in private settings, such as their home or via email, or in more public settings, such as a restaurant or chat room. The nature of an individual's communicative ecology changes as they transition between types of activity. For example, they may choose to use different media when communicating with colleagues as compared to planning an evening out with friends. Similarly, a workplace's communicative ecology may differ from that of a tennis club or a loosely joined network of environmental activists. The activities of everyday life can be grouped into various types. Five example categories are as follows. The first three are derived fromStebbins's (2007)typology of leisure activities. These groupings enable exploration of the patterns of social interactions, topics of communication and media applications that may be specific to an activity type. The study of a communicative ecology requires decisions to be made concerning the scope of both data collection and analysis. Prior to data collection, a decision must be made as to the investigative frame of the study, whereas decisions regarding the analytical scope of a study may be made after a rich picture of the ecology has emerged. While an ecological study aims to be holistic, an appropriate frame for the study must be decided upon at the outset. The scope of communicative ecology research settings is generally restricted to a bounded geographical space. 
It has been proposed that the communicative ecology framework is suited to studies at the level of the dwelling, neighbourhood, suburb or city(Hearn & Foth, 2007). However, it is also suitable for studies of communicative ecologies that are grounded in online environments. In this case, the setting could be limited to one or several websites(Button & Partridge, 2007). Temporal considerations may also shape data collection procedures. For example, an ecology could be examined at a single point of time or longitudinally. It is possible that the nature of a communicative ecology may also vary according to time of day, week or season. The analysis of a communicative ecology can occur at both macro and micro levels within this spatiotemporal frame. It can help to think of a communicative ecology as a map and its edges as the frame. We can increase the resolution of certain features by using a magnifying glass. By stepping away from the map, we can increase the granularity of our view and see how features interrelate with the ecology as a whole. In this way, the interrelationships between agents are not ignored, as they may be in studies that focus on single communication devices or applications, but may be temporarily set aside while other features are examined more closely. By viewing the map as a whole first, the researcher may be able to make better analytical choices than was possible prior to data collection. Some possible methods for delimiting the analytical scope of a communicative ecology study include using features of a layer or another characteristic. For example, the analysis could be delimited to examining the ecology of an individual, a small social network or group, or by certain demographic characteristics. It could focus on the ecology of a specific theme of communication, form of media or technology, setting or activity. Alternatively, it could investigate a single dimension of a communicative ecology, for example, only local or public forms of communication. Communicative ecology researchers speak in terms of "mapping" the ecology. This term may be misleading as it could appear to indicate the creation of cartographic renderings of the communicative ecology as it relates to its locality. Mapping the ecology, in the main, refers to drawing conceptual maps and creating or collecting oral or written descriptions of the phenomena that constitute the communicative ecology. There are two primary perspectives taken to communicative ecology research that are loosely correlative with theemic and eticpositions taken in classic ethnographic studies. A researcher can work from the outside of the ecology looking in with the aim of creating a holistic overview. Alternatively, they can position themselves within the communicative ecology with the aim of looking at it from the participants' points of view. The external view is useful if a comparison between local systems is desired. A centric view is better suited to understanding how people construct and make sense of their communicative ecology. The choice of perspective may enhance or limit the utility of the data. For example, a birds-eye view may fail to capture significant individual differences in the experience of a communicative ecology, such as those brought about by differing wealth or literacy levels. Ideally, communicative ecology research should use a variety of perspectives in order to obtain a more complete representation and deeper understanding. 
The study of communicative ecologies is commonly associated with a research approach known as ethnographic action research. This approach combines ethnographic methods, including participant observation and in-depth interviews, with participatory methods and action research. The ethnographic methods enable researchers to develop a rich understanding of the meanings derived from media and communication technologies. The action research methods allow the study to be located not only in communication theory, but also in grassroots communication practice. In this approach, participants can act as co-investigators in cycles of inquiry, action and reflection, and researchers are able to give back in a way that will develop the communicative ecology. In this way, ethnographic action research is suited to both research and project development agendas(Tacchi, 2006). A range of research approaches, and methods related to them, have been used to study communicative ecologies to date.
https://en.wikipedia.org/wiki/Communicative_ecology
Core–periphery structureis anetwork theorymodel. There are two main intuitions behind the definition of core–periphery network structures; one assumes that a network can only have one core, whereas the other allows for the possibility of multiple cores. These two intuitive conceptions serve as the basis for two modes of core–periphery structures. The first, discrete, model assumes that there are two classes of nodes. The first consists of a cohesive core sub-graph in which the nodes are highly interconnected, and the second is made up of a peripheral set of nodes that is loosely connected to the core. In an ideal core–periphery matrix, core nodes are adjacent to other core nodes and to some peripheral nodes while peripheral nodes are not connected with other peripheral nodes (Borgatti & Everett, 2000, p. 378). This requires, however, that there be an a priori partition that indicates whether a node belongs to the core or periphery. The second, continuous, model allows for the existence of three or more partitions of node classes. However, including more classes makes modifications to the discrete model more difficult. Borgatti & Everett (1999) suggest that, in order to overcome this problem, each node be assigned a measure of ‘coreness’ that will determine its class. Nevertheless, the threshold of what constitutes a high ‘coreness’ value must be justified theoretically. Hubs are commonly found inempirical networksand pose a problem for community detection as they usually have strong ties to many communities. Identifying core–periphery structures can help circumvent this problem by categorizing hubs as part of the network's core (Rombach et al., 2014, p. 160). Likewise, though all core nodes have high centrality measures, not all nodes with high centrality measures belong to the core. It is possible to find that a set of highly central nodes in a graph does not make an internally cohesive subgraph (Borgatti & Everett, 2000). The concept was first introduced into economics as "centre-periphery" byRaúl Prebischin the 1950s, but the origin of the idea could ultimately be traced back toThünen'sIsolated State(1826).[1]However, the qualitative notion thatsocial networkscan have a core–periphery structure has a long history in disciplines such associology,international relations(Nemeth & Smith, 1985), andeconomics(Snyder & Kick, 1979). Observed trade flows and diplomatic ties among countries fit this structure.Paul Krugman(1991) suggests that when transportation costs are low enough, manufacturers concentrate in a single region known as the core while other regions (the periphery) limit themselves to the supply of agricultural goods. The "centre-periphery" model was classically developed byJohn Friedmannin 1966 in his bookRegional Development Policy: A Case Study of Venezuela.[2] Forregional relations and variations in Russia, ProfessorNatalia Zubarevichproposed an extension of the centre-periphery model and is known as the author of the "theory of four Russias".[3]According to Zubarevich, the differing speed ofsocial modernisationacross the country is more accurately explained by the centre-periphery model. The entire population of the country can be divided into three roughly equal parts, with about a third of citizens in each, in addition to theunderdevelopedrepublics, where 6% of the country's population lives; this "fourth" Russia has its own specific features.[4][5][6]According to the concept, "First Russia" consists of thecities with millions of inhabitants, i.e. the most modernised and economically developed territories.
"Second Russia" are medium-sized cities with a pronounced industrial profile. "Third Russia" - small towns, workers' settlements, rural areas. Compared to "first" and "second Russia" - this is a deep periphery in terms of the quality of socio-economic life. The "fourth Russia" is made up of the national republics of the Caucasus, as well as the south of Siberia (Tuva, the Altai Republic). These territories also represent a periphery, but a specific one: thedemographic transitionhas not been completed here,urbanisationis in its infancy, andpatriarchal-clanprinciples are still strong in society.[7]Monoprofile towns (monotowns) are the most unstable part of the "second Russia".[8]
https://en.wikipedia.org/wiki/Core-periphery_structure
In the mathematical field ofgraph theory, theErdős–Rényi modelrefers to one of two closely related models for generatingrandom graphsor theevolution of a random network. These models are named afterHungarianmathematiciansPaul ErdősandAlfréd Rényi, who introduced one of the models in 1959.[1][2]Edgar Gilbertintroduced the other model contemporaneously with and independently of Erdős and Rényi.[3]In the model of Erdős and Rényi, all graphs on a fixed vertex set with a fixed number of edges are equally likely. In the model introduced by Gilbert, also called theErdős–Rényi–Gilbert model,[4]each edge has a fixed probability of being present or absent,independentlyof the other edges. These models can be used in theprobabilistic methodto prove the existence of graphs satisfying various properties, or to provide a rigorous definition of what it means for a property to hold foralmost allgraphs. There are two closely related variants of the Erdős–Rényi random graph model. The behavior of random graphs are often studied in the case wheren{\displaystyle n}, the number of vertices, tends to infinity. Althoughp{\displaystyle p}andM{\displaystyle M}can be fixed in this case, they can also be functions depending onn{\displaystyle n}. For example, the statement that almost every graph inG(n,2ln⁡(n)/n){\displaystyle G(n,2\ln(n)/n)}is connected means that, asn{\displaystyle n}tends to infinity, the probability that a graph onn{\displaystyle n}vertices with edge probability2ln⁡(n)/n{\displaystyle 2\ln(n)/n}is connected tends to1{\displaystyle 1}. The expected number of edges inG(n,p) is(n2)p{\displaystyle {\tbinom {n}{2}}p}, with a standard deviation asymptotic tos(n)=np(1−p){\displaystyle s(n)=n{\sqrt {p(1-p)}}}. Therefore, a rough heuristic is that if some property ofG(n,M) withM=(n2)p{\displaystyle M={\tbinom {n}{2}}p}does not significantly change in behavior ifMis changed by up tos(n), thenG(n,p) should share that behavior. This is formalized in a result of Łuczak.[5]Suppose thatPis a graph property such that for every sequenceM=M(n) with|M−(n2)p|=O(s(n)){\displaystyle |M-{\tbinom {n}{2}}p|=O(s(n))}, the probability that a graph sampled fromG(n,M) has propertyPtends toaasn→ ∞. Then the probability thatG(n,p) has propertyPalso tends toa. Implications in the other direction are less reliable, but a partial converse (also shown by Łuczak) is known whenPismonotonewith respect to the subgraph ordering (meaning that ifAis a subgraph ofBandBsatisfiesP, thenAwill satisfyPas well). Letε(n)≫s(n)/n3{\displaystyle \varepsilon (n)\gg s(n)/n^{3}}, and suppose that a monotone propertyPis true of bothG(n,p–ε) andG(n,p+ε) with a probability tending to the same constantaasn→ ∞. Then the probability thatG(n,(n2)p){\displaystyle G(n,{\tbinom {n}{2}}p)}has propertyPalso tends toa. For example, both directions of equivalency hold ifPis the property of beingconnected, or ifPis the property of containing aHamiltonian cycle. However, properties that are not monotone (e.g. the property of having an even number of edges) or that change too rapidly (e.g. the property of having at least12(n2){\displaystyle {\tfrac {1}{2}}{\tbinom {n}{2}}}edges) may behave differently in the two models. In practice, theG(n,p) model is the one more commonly used today, in part due to the ease of analysis allowed by the independence of the edges. With the notation above, a graph inG(n,p) has on average(n2)p{\displaystyle {\tbinom {n}{2}}p}edges. 
The distribution of thedegreeof any particular vertex isbinomial:[6]P(deg⁡(v)=k)=(n−1k)pk(1−p)n−1−k{\displaystyle P(\deg(v)=k)={\binom {n-1}{k}}p^{k}(1-p)^{n-1-k}}, wherenis the total number of vertices in the graph. For largenand constantnp, this distribution approaches aPoissondistribution with meannp. In a 1960 paper, Erdős and Rényi[7]described the behavior ofG(n,p) very precisely for various values ofp. Their results included that ifnp< 1, then a graph inG(n,p) will almost surely have no connected component of size larger than O(logn); ifnp= 1, then the largest component will almost surely have size of ordern2/3; ifnptends to a constantc> 1, then the graph will almost surely contain a unique giant component with a positive fraction of the vertices; ifp< (1 − ε)lnn/n, then the graph will almost surely contain isolated vertices and thus be disconnected; and ifp> (1 + ε)lnn/n, then the graph will almost surely be connected. Thusln⁡nn{\displaystyle {\tfrac {\ln n}{n}}}is a sharp threshold for the connectedness ofG(n,p). Further properties of the graph can be described almost precisely asntends to infinity. For example, there is ak(n) (approximately equal to 2log2(n)) such that the largestcliqueinG(n, 0.5) has almost surely either sizek(n) ork(n) + 1.[8] Thus, even though finding the size of the largest clique in a graph isNP-complete, the size of the largest clique in a "typical" graph (according to this model) is very well understood. Edge-dual graphs of Erdős–Rényi graphs are graphs with nearly the same degree distribution, but with degree correlations and a significantly higherclustering coefficient.[9] Inpercolation theoryone examines a finite or infinite graph and removes edges (or links) randomly. Thus the Erdős–Rényi process is in fact unweighted link percolation on thecomplete graph. (One refers to percolation in which nodes and/or links are removed with heterogeneous weights as weighted percolation). As percolation theory has much of its roots inphysics, much of the research done was on thelatticesin Euclidean spaces. The transition atnp= 1 from giant component to small component has analogs for these graphs, but for lattices the transition point is difficult to determine. Physicists often refer to study of the complete graph as amean field theory. Thus the Erdős–Rényi process is the mean-field case of percolation. Some significant work was also done on percolation on random graphs. From a physicist's point of view this would still be a mean-field model, so the justification of the research is often formulated in terms of the robustness of the graph, viewed as a communication network. Consider a random graph ofn≫ 1 nodes with an average degree⟨k⟩{\displaystyle \langle k\rangle }, from which a fraction1−p′{\displaystyle 1-p'}of nodes is removed at random, leaving only a fractionp′{\displaystyle p'}of the network. There exists a critical percolation thresholdpc′=1⟨k⟩{\displaystyle p'_{c}={\tfrac {1}{\langle k\rangle }}}below which the network becomes fragmented, while abovepc′{\displaystyle p'_{c}}a giant connected component of ordernexists. The relative size of the giant component,P∞, is given by[7][1][2][10]P∞=p′[1−exp⁡(−⟨k⟩P∞)]{\displaystyle P_{\infty }=p'[1-\exp(-\langle k\rangle P_{\infty })]}. Both of the major assumptions of theG(n,p) model (that edges are independent and that each edge is equally likely) may be inappropriate for modeling certain real-life phenomena. Erdős–Rényi graphs have low clustering, unlike many social networks.[11]Some modeling alternatives includeBarabási–Albert modelandWatts and Strogatz model. These alternative models are not percolation processes, but instead represent a growth model and a rewiring model, respectively. Another alternative family of random graph models, capable of reproducing many real-life phenomena, is that ofexponential random graph models. TheG(n,p) model was first introduced byEdgar Gilbertin a 1959 paper studying the connectivity threshold mentioned above.[3]TheG(n,M) model was introduced by Erdős and Rényi in their 1959 paper. As with Gilbert, their first investigations concerned the connectivity ofG(n,M), with the more detailed analysis following in 1960.
A continuum limit of the graph was obtained whenp{\displaystyle p}is of order1/n{\displaystyle 1/n}.[12]Specifically, consider the sequence of graphsGn:=G(n,1/n+λn−43){\displaystyle G_{n}:=G(n,1/n+\lambda n^{-{\frac {4}{3}}})}forλ∈R{\displaystyle \lambda \in \mathbb {R} }. The limit object can be constructed by a procedure that yields a sequence of random infinite graphs of decreasing sizes:(Γi)i∈N{\displaystyle (\Gamma _{i})_{i\in \mathbb {N} }}. The theorem[12]states that this sequence corresponds in a certain sense to the limit object ofGn{\displaystyle G_{n}}asn→+∞{\displaystyle n\to +\infty }.
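The sharp connectivity threshold at ln(n)/n described above is easy to check numerically; a minimal sketch using the G(n, p) generator in networkx (assuming networkx is installed):

```python
# Sample G(n, p) below and above the connectivity threshold ln(n)/n and count
# how often the resulting graph is connected.

import math
import networkx as nx

n = 2000
p_c = math.log(n) / n              # threshold for connectedness

for factor in (0.5, 1.5):          # below and above the threshold
    p = factor * p_c
    trials = 20
    connected = sum(
        nx.is_connected(nx.gnp_random_graph(n, p, seed=s)) for s in range(trials)
    )
    print(f"p = {factor} * ln(n)/n: connected in {connected}/{trials} trials")
```

Below the threshold the sampled graphs almost always contain isolated vertices and are disconnected; above it they are almost always connected.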
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model
This is aglossary of graph theory.Graph theoryis the study ofgraphs, systems of nodes orverticesconnected in pairs by lines oredges.
https://en.wikipedia.org/wiki/Glossary_of_graph_theory
Innetwork science, agradient networkis a directedsubnetworkof an undirected "substrate"networkwhere eachnodehas an associatedscalar potentialand one out-link that points to the node with the smallest (or largest) potential in its neighborhood, defined as the union of itself and itsneighborson the substrate network.[1] Transport takes place on a fixed networkG=G(V,E){\displaystyle G=G(V,E)}called the substrate graph. It hasNnodes,V={0,1,...,N−1}{\displaystyle V=\{0,1,...,N-1\}}and the set of edgesE={(i,j)|i,j∈V}{\displaystyle E=\{(i,j)|i,j\in V\}}. Given a nodei, we can define its set of neighbors in G by Si(1)= {j ∈ V | (i,j)∈ E}. Let us also consider a scalar field,h= {h0, ..,hN−1}, defined on the set of nodes V, so that every node i has a scalar valuehiassociated to it. The gradient ∇hiof the field at nodeiis defined as the directed edge ∇hi= (i, μ(i)) fromitoμ(i), whereμ(i) ∈ Si(1)∪ {i} andhμ(i)has the maximum value in {hj|j∈Si(1)∪ {i}}. The gradient network is then the directed graph ∇G= ∇G(V,F), whereFis the set of gradient edges onG. In general, the scalar field depends on time, due to the flow, external sources and sinks on the network. Therefore, the gradient network ∇Gwill in general be dynamic.[3] The concept of a gradient network was first introduced by Toroczkai and Bassler (2004).[4][5] Generally, real-world networks (such ascitation graphs, theInternet, cellular metabolic networks, the worldwide airport network), which often evolve to transport entities such as information, cars, power, water, forces, and so on, are not globally designed; instead, they evolve and grow through local changes. For example, if arouteron the Internet is frequently congested and packets are lost or delayed due to that, it will be replaced by several interconnected new routers.[2] Moreover, this flow is often generated or influenced by local gradients of a scalar. For example, electric current is driven by a gradient of electric potential, and in information networks, properties of nodes will generate a bias in the way information is transmitted from a node to its neighbors. This idea motivated the approach of studying the flow efficiency of a network using gradient networks, when the flow is driven by gradients of ascalar fielddistributed on the network.[2][3] Recent research investigates the connection betweennetwork topologyand the flow efficiency of the transport.[2] In a gradient network, thein-degreeof a node i,ki(in), is the number of gradient edges pointing into i, and the in-degree distribution isR(l)=P{ki(in)=l}{\displaystyle R(l)=P\{k_{i}^{(in)}=l\}}. When the substrate G is a random graph in which each pair of nodes is connected with probabilityP(i.e. anErdős–Rényi random graph) and the scalarshiare i.i.d.
(independent, identically distributed), an exact combinatorial expression forR(l)can be derived. In the limitN→∞{\displaystyle N\to \infty }andP→0{\displaystyle P\to 0}, the degree distribution becomes the power lawR(l)∝l−1{\displaystyle R(l)\propto l^{-1}}. This shows that in this limit, the gradient network of a random network is scale-free.[3] Furthermore, if the substrate network G is scale-free, as in theBarabási–Albert model, then the gradient network also follows a power law with the same exponent as that of G.[2] The fact that the topology of the substrate network influences the level ofnetwork congestioncan be illustrated by a simple example: if the network has a star-like structure, then the flow becomes congested at the central node, because the central node must handle all the flow from the other nodes. However, if the network has a ring-like structure, every node plays the same role and there is no flow congestion. Under the assumption that the flow is generated by gradients in the network, flow efficiency on networks can be characterized through the jamming factor (or congestion factor), defined asJ=1−⟨Nreceive/Nsend⟩{\displaystyle J=1-\langle N_{\text{receive}}/N_{\text{send}}\rangle }, whereNreceiveis the number of nodes that receive gradient flow,Nsendis the number of nodes that send gradient flow, and the average is taken over realizations of the network and of the scalar field. The value ofJis between 0 and 1;J=0{\displaystyle J=0}means no congestion, andJ=1{\displaystyle J=1}corresponds to maximal congestion. In the limitN→∞{\displaystyle N\to \infty }, for anErdős–Rényi random graph, the congestion factor approaches its maximal valueJ=1{\displaystyle J=1}. This result shows that random networks are maximally congested in that limit. On the contrary, for ascale-free network,Jis a constant for anyN, which means that scale-free networks are not prone to maximal jamming.[6] One problem in communication networks is understanding how to control congestion and maintain normal and efficient network function.[7] Zonghua Liu et al. (2006) showed that congestion is more likely to occur at the nodes with high degrees in networks, and that an efficient approach of selectively enhancing the message-processing capability of a small fraction (e.g. 3%) of nodes performs just as well as enhancing the capability of all nodes.[7] Ana L Pastore y Piontti et al. (2008) showed that relaxational dynamics can reduce network congestion.[8] Pan et al. (2011) studied jamming properties in a scheme where edges are given weights of a power of the scalar difference between node potentials.[9] Niu and Pan (2016) showed that congestion can be reduced by introducing a correlation between the gradient field and the local network topology.[10]
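The gradient-network construction and the congestion factor defined above can be sketched in Python. The substrate, the random scalars, and the exclusion of self-loops from the flow are modelling choices of this illustration; networkx and NumPy are assumed to be installed.

```python
# Minimal sketch: build an Erdos-Renyi substrate with i.i.d. scalars h_i, let
# each node send one directed edge to the node with the largest h in its
# closed neighbourhood, and estimate the congestion factor as
# J = 1 - N_receive / N_send (a single-realization estimate).

import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
n, p = 1000, 0.01
G = nx.gnp_random_graph(n, p, seed=1)            # substrate graph
h = rng.random(n)                                # i.i.d. scalar potentials

gradient = nx.DiGraph()
gradient.add_nodes_from(G)
for i in G:
    neighbourhood = list(G[i]) + [i]             # S_i^(1) union {i}
    mu = max(neighbourhood, key=lambda j: h[j])  # node with maximal potential
    if mu != i:                                  # a self-loop carries no flow
        gradient.add_edge(i, mu)

in_deg = np.array([gradient.in_degree(v) for v in gradient])
senders = gradient.number_of_edges()             # nodes that send gradient flow
receivers = np.count_nonzero(in_deg)             # nodes that receive gradient flow
J = 1 - receivers / senders
print("max in-degree:", in_deg.max(), " congestion factor J ~", round(J, 3))
```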
https://en.wikipedia.org/wiki/Gradient_network
Inmathematics,higher category theoryis the part ofcategory theoryat ahigher order, which means that some equalities are replaced by explicitarrowsin order to be able to explicitly study the structure behind those equalities. Higher category theory is often applied inalgebraic topology(especially inhomotopy theory), where one studies algebraicinvariantsofspaces, such as thefundamentalweak ∞-groupoid. In higher category theory, the concept of higher categorical structures, such as ∞-categories, allows for a more robust treatment ofhomotopy theory, enabling one to capture finer homotopical distinctions, such as differentiating twotopological spacesthat have the same fundamental group but differ in their higherhomotopy groups. This approach is particularly valuable when dealing with spaces with intricate topological features,[1]such asEilenberg-MacLane spaces. An ordinarycategoryhasobjectsandmorphisms, which are called1-morphismsin the context of higher category theory. A2-categorygeneralizes this by also including2-morphismsbetween the1-morphisms. Continuing this up ton-morphismsbetween (n− 1)-morphisms gives ann-category. Just as the category known asCat, which is thecategory of small categoriesandfunctors, is actually a2-categorywithnatural transformationsas its2-morphisms, the categoryn-Catof (small)n-categories is actually an (n+ 1)-category. Ann-categoryis defined by induction onn: a0-categoryis a set, and an (n+ 1)-category is a category enriched over the categoryn-Cat. So a1-categoryis just a (locally small) category. Themonoidalstructure ofSetis the one given by thecartesian productas tensor and asingletonas unit. In fact any category with finiteproductscan be given a monoidal structure. The recursive construction ofn-Catworks fine because if a categoryChas finite products, the category ofC-enriched categories has finite products too. While this concept is too strict for some purposes in, for example,homotopy theory, where "weak" structures arise in the form of higher categories,[2]strict cubical higher homotopy groupoids have also arisen as giving a new foundation for algebraic topology on the border betweenhomologyandhomotopy theory; see the articleNonabelian algebraic topology, referenced in the book below. In weakn-categories, the associativity and identity conditions are no longer strict (that is, they are not given by equalities), but rather are satisfied up to an isomorphism of the next level. An example intopologyis the composition ofpaths, where the identity and associativity conditions hold only up toreparameterization, and hence up tohomotopy, which is the2-isomorphismfor this2-category. Thesen-isomorphisms must behave well betweenhom-sets, and expressing this is the difficulty in the definition of weakn-categories. Weak2-categories, also calledbicategories, were the first to be defined explicitly. A particularity of these is that a bicategory with one object is exactly amonoidal category, so that bicategories can be said to be "monoidal categories with many objects." Weak3-categories, also calledtricategories, and higher-level generalizations are increasingly harder to define explicitly. Several definitions have been given, and telling when they are equivalent, and in what sense, has become a new object of study in category theory. Weak Kan complexes, or quasi-categories, aresimplicial setssatisfying a weak version of the Kan condition.André Joyalshowed that they are a good foundation for higher category theory by constructing theJoyal model structureon thecategory of simplicial sets, whose fibrant objects are exactly quasi-categories.
In 2009, the theory was systematized further byJacob Lurie, who simply calls them infinity categories, though the latter term is also a generic term for all models of (infinity,k) categories for anyk. Simplicially enriched categories, or simplicial categories, are categories enriched over simplicial sets. However, when they are viewed as a model for(infinity, 1)-categories, many categorical notions (e.g.,limits) do not agree with the corresponding notions in the sense of enriched categories. The same holds for other enriched models, such as topologically enriched categories. Topologically enriched categories (sometimes simply called topological categories) are categories enriched over some convenient category of topological spaces, e.g. the category ofcompactly generatedHausdorff spaces. Segal categories are models of higher categories introduced by Hirschowitz and Simpson in 1998,[3]partly inspired by results of Graeme Segal in 1974.
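The strict inductive definition sketched above can be written compactly; the following is a short LaTeX rendering of the enrichment recursion as described in the text (the shorthand "(n-Cat)-enriched" for the next level is this rendering's own notation).

```latex
% Strict n-categories by iterated enrichment, as described above:
% 0-Cat is Set with its cartesian monoidal structure (singleton as unit),
% and each further level is obtained by enriching over the previous one.
\[
  0\text{-}\mathbf{Cat} := \mathbf{Set},
  \qquad
  (n+1)\text{-}\mathbf{Cat} := \text{categories enriched over } n\text{-}\mathbf{Cat},
\]
\[
  \text{so that } n\text{-}\mathbf{Cat} \text{ is itself an } (n+1)\text{-category}.
\]
```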
https://en.wikipedia.org/wiki/Higher_category_theory
Theimmune network theoryis a theory of how theadaptive immune systemworks, that has been developed since 1974 mainly byNiels Jerne[1]andGeoffrey W. Hoffmann.[2][3]The theory states that the immune system is an interacting network of lymphocytes and molecules that have variable (V) regions. These V regions bind not only to things that are foreign to the vertebrate, but also to other V regions within the system. The immune system is therefore seen as a network, with the components connected to each other by V-V interactions. It has been suggested that the phenomena that the theory describes in terms of networks are also explained byclonal selection theory.[4][5] The scope of the symmetrical network theory developed by Hoffmann includes the phenomena of low dose and high dose tolerance, first reported for a single antigen byAvrion Mitchison,[6]and confirmed by Geoffrey Shellam andSir Gustav Nossal,[7]the helper[8]and suppressor roles[9]of T cells, the role of non-specific accessory cells in immune responses,[10]and the very important phenomenon called I-J. Jerne was awarded theNobel Prizefor Medicine or Physiology in 1984 partly for his work towards the clonal selection theory, as well as his proposal of the immune network concept.[11] The immune network theory has also inspired a subfield ofoptimizationalgorithms similar toartificial neural networks.[12] Heinz Kohler was involved in earlyidiotypicnetwork research and was the first to suggest that idiotypic network interactions are symmetrical.[13][3]He developed a detailed immune network theory based on symmetrical stimulatory, inhibitory and killing interactions. It offers a framework for understanding a large number of immunological phenomena based on a small number of postulates. The theory involves roles for B cells that make antibodies, T cells that regulate the production of antibodies by B cells, and non-specific accessory cells (A cells). Antibodies called IgG have two V regions and a molecular weight of 150,000. A central role in the theory is played by specific T cell factors, which have a molecular weight of approximately 50,000, and are postulated in the theory to have only one V region.[14][10][15]Hoffmann has proposed that for brevity specific T cell factors should be called tabs.[3]Tabs are able to exert a powerful suppressive effect on the production of IgG antibodies in response to foreign substances (antigens), as was demonstrated rigorously by Takemori and Tada.[14]Hoffmann and Gorczynski have reproduced the Takemori and Tada experiment, confirming the existence of specific T cell factors.[16]In the symmetrical network theory tabs are able to block V regions and also to have a stimulatory role when bound to a tab receptor on A cells. Symmetrical stimulatory interactions follow from the postulate that activation of B cells, T cells and A cells involves cross-linking of receptors. The symmetrical network theory has been developed with the assistance of mathematical modeling. In order to exhibit immune memory to any combination of a large number of different pathogens, the system has a large number of stable steady states. The system is also able to switch between steady states as has been observed experimentally. For example, low or high doses of an antigen can cause the system to switch to a suppressed state for the antigen, while intermediate doses can cause the induction of immunity. The theory accounts for the ability of T cells to have regulatory roles in both helping and suppressing immune responses. 
In 1976 Murphy et al. and Tada et al. independently reported a phenomenon in mice called I-J.[17][18] From the perspective of the symmetrical network theory, I-J is one of the most important phenomena in immunology, while for many immunologists who are not familiar with the details of the theory, I-J "does not exist". In practice I-J is defined by anti-I-J antibodies that are produced when mice of certain strains are immunized with tissue of certain other strains; see Murphy et al. and Tada et al., op. cit. I-J was found by these authors to map to within the Major Histocompatibility Complex, but no gene could be found at the site where I-J had been mapped in numerous experiments.[19] The absence of I-J gene(s) within the MHC at the place where I-J had been mapped became known as the "I-J paradox". This paradox resulted in regulatory T cells and tabs, which both express I-J determinants, falling out of favour, together with the symmetrical network theory, which is based on the existence of tabs. In the meantime, however, it has been shown that the I-J paradox can be resolved in the context of the symmetrical network theory.[20] The resolution of the I-J paradox involves a process of mutual selection (or "co-selection") of regulatory T cells and helper T cells, meaning that (a) those regulatory T cells are selected that have V regions with complementarity to as many helper T cells as possible, and (b) helper T cells are selected not only on the basis of their V regions having some affinity for MHC class II, but also on the basis of the V regions having some affinity for the selected regulatory T cell V regions. The helper T cells and regulatory T cells that are co-selected then form a mutually stabilizing construct, and for a given mouse genome, more than one such mutually stabilizing set can exist. This resolution of the I-J paradox leads to some testable predictions. However, considering the importance of the (unfound) I-J determinant for the theory, the I-J paradox solution is still subject to strong criticism, for example on grounds of falsifiability. An immune network model for HIV pathogenesis was published in 1994 postulating that HIV-specific T cells are preferentially infected (Hoffmann, 1994, op. cit.). The publication of this paper was followed in 2002 by a paper entitled "HIV preferentially infects HIV specific CD4+ T cells."[21] Under the immune network theory, the main cause of progression to AIDS after HIV infection is not the direct killing of infected T helper cells by the virus. Following an infection with HIV that manages to establish itself, there is a complex interaction between the HIV virus, the T helper cells that it infects, and regulatory T cells.[22] These three quasispecies apply selective pressure on one another and co-evolve in such a way that the viral epitopes eventually come to mimic the V regions of the main population of T regulatory cells. Once this happens, anti-HIV antibodies can bind to and kill most of the host's T regulatory cell population. This results in dysregulation of the immune system and eventually leads to further anti-self reactions, including against the T helper cell population. At that point, the adaptive immune system is completely compromised and AIDS ensues. Hence in this model, the onset of AIDS is primarily an auto-immune reaction triggered by the cross-reaction of anti-HIV antibodies with T regulatory cells.
Once this induced auto-immunity sets in, removing the HIV virus itself (for instance viaHAART) would not be sufficient to restore proper immune function. The co-evolution of the quasispecies mentioned above will take a variable time depending on the initial conditions at the time of infection (i.e. the epitopes of the first infection and the steady state of the host's immune cell population), which would explain why there is a variable period, which differs greatly between individual patients, between HIV infection and the onset of AIDS. It also suggests that conventional vaccines are unlikely to be successful, since they would not prevent the auto-immune reaction. In fact such vaccines may do more harm in certain cases, since if the original infection comes from a source with a "mature" infection, those virions will have a high affinity for anti-HIV T helper cells (see above), and so increasing the anti-HIV population via vaccination only serves to provide the virus with more easy targets. A hypothetical HIV vaccine concept based on immune network theory has been described.[23]The vaccine concept was based on a network theory resolution of the Oudin-Cazenave paradox.[24]This is a phenomenon that makes no sense in the context of clonal selection, without taking idiotypic network interactions into account. The vaccine concept comprised complexes of an anti-anti-HIV antibody and an HIV antigen, and was designed to induce the production of broadly neutralizing anti-HIV antibodies. A suitable anti-anti-HIV antibody envisaged for use in this vaccine is the monoclonal antibody 1F7, which was discovered by Sybille Muller and Heinz Kohler and their colleagues.[25]This monoclonal antibody binds to all of six well characterized broadly neutralizing anti-HIV antibodies.[26] A vaccine concept based on a more recent extension of immune network theory and also based on much more data has been described by Reginald Gorczynski and Geoffrey Hoffmann.[27]The vaccine typically involves three immune systems, A, B and C that can be combined to make an exceptionally strong immune system in a treated vertebrate C. In mouse models the vaccine has been shown to be effective in the prevention of inflammatory bowel disease; the prevention of tumour growth and prevention of metastases in a transplantable breast cancer; and in the treatment of an allergy. The immune system of C is stimulated by a combination of A anti-B (antigen-specific) and B anti-anti-B (antiidiotypic) antibodies. The former stimulate anti-anti-B T cells and the latter stimulate anti-B T cells within C. Mutual selection ("co-selection") of the anti-B and anti-anti-B T cells takes the system to a new stable steady state in which there are elevated levels of these two populations of T cells. An untreated vertebrate C with self antigens denoted C is believed to have a one-dimensional axis of lymphocytes that is defined by co-selection of anti-C and anti-anti-C lymphocytes. The treated vertebrate C has a two dimensional system of lymphocytes defined by co-selection of both anti-C and anti-anti-C lymphocytes and co-selection of anti-B and anti-anti-B lymphocytes. Experiments indicate that the two-dimensional system is more stable than the one-dimensional system.
https://en.wikipedia.org/wiki/Immune_network_theory
Irregular warfare(IW) is defined inUnited Statesjoint doctrine as "a violent struggle among state and non-state actors for legitimacy and influence over the relevant populations" and in U.S. law as "Department of Defense activities not involving armed conflict that support predetermined United States policy and military objectives conducted by, with, and through regular forces, irregular forces, groups, and individuals."[1][2]In practice, control of institutions and infrastructure is also important. Concepts associated with irregular warfare are older than the term itself.[3] Irregular warfare favors indirect warfare andasymmetric warfareapproaches, though it may employ the full range of military and other capabilities in order to erode the adversary's power, influence, and will. It is inherently a protracted struggle that will test the resolve of astateand its strategic partners.[4][5][6][7][8] The term "irregular warfare" in Joint doctrine was settled upon in distinction from "traditional warfare" and "unconventional warfare", and to differentiate it as such; it is unrelated to the distinction between "regular" and "irregular forces".[9] One of the earliest known uses of the termirregular warfareisCharles Edward Callwell's classic 1896 publication for theUnited KingdomWar Office,Small Wars: Their Principles and Practices, where he noted in defining 'small wars': "Small wars include the partisan warfare which usually arises when trained soldiers are employed in the quelling of sedition and of insurrections in civilised countries; they include campaigns of conquest when a Great Power adds the territory of barbarous races to its possessions; and they include punitive expeditions against tribes bordering upon distant colonies....Whenever a regular army finds itself engaged upon hostilities against irregular forces, or forces which in their armament, their organization, and their discipline are palpably inferior to it, the conditions of the campaign become distinct from the conditions of modern regular warfare, and it is with hostilities of this nature that this volume proposes to deal. Upon the organization of armies for irregular warfare valuable information is to be found in many instructive military works, official and non-official."[10] A similar usage appears in the 1986 English edition of "Modern Irregular Warfare in Defense Policy and as a Military Phenomenon" by formerNaziofficerFriedrich August Freiherr von der Heydte. The original 1972 German edition of the book is titled "Der Moderne Kleinkrieg als Wehrpolitisches und Militarisches Phänomen". The German word "Kleinkrieg" is literally translated as "Small War."[11]The word "Irregular," used in the title of the English translation of the book, seems to be a reference to non "regular armed forces" as per theThird Geneva Convention. Another early use of the term is in a 1996Central Intelligence Agency(CIA) document by Jeffrey B. White.[12]Majormilitary doctrinedevelopments related to IW were done between 2004 and 2007[13]as a result of theSeptember 11 attackson theUnited States.[14][15][unreliable source?]A key proponent of IW within US Department of Defense (DoD) isMichael G. 
Vickers, a former paramilitary officer in the CIA.[16] The CIA's Special Activities Center (SAC) is the premier American paramilitary clandestine unit for creating and for combating irregular warfare units.[17][18][19] For example, SAC paramilitary officers created and led successful irregular units from the Hmong tribe during the war in Laos in the 1960s,[20] from the Northern Alliance against the Taliban during the war in Afghanistan in 2001,[21] and from the Kurdish Peshmerga against Ansar al-Islam and the forces of Saddam Hussein during the war in Iraq in 2003.[22][23][24] Nearly all modern wars include at least some element of irregular warfare. Since the time of Napoleon, approximately 80% of conflict has been irregular in nature. However, the following conflicts may be considered exemplars of irregular warfare:[3][12] Activities and types of conflict included in IW are: According to the DoD, there are five core activities of IW: As a result of DoD Directive 3000.07,[6] United States armed forces are studying irregular warfare concepts using modeling and simulation.[29][30][31] There have been several military wargames and military exercises associated with IW, including:
https://en.wikipedia.org/wiki/Irregular_warfare
Network dynamics is a research field for the study of networks whose state changes over time. The dynamics may refer to the structure of connections of the units of a network,[1][2] to the collective internal state of the network,[3][4] or both. The networked systems could be from the fields of biology, chemistry, physics, sociology, economics, computer science, etc. Networked systems are typically characterized as complex systems consisting of many units coupled by specific, potentially changing, interaction topologies. For a dynamical systems approach to discrete network dynamics, see sequential dynamical system.
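The distinction between changing connections and changing node states can be made concrete with a small simulation. The following sketch is purely illustrative and not drawn from the article: the graph size, rewiring rate and averaging rule are arbitrary assumptions.

```python
# Illustrative sketch of the two senses of "network dynamics": the wiring of
# the network changing in time, and the internal states of the nodes changing
# on the current wiring.
import random
import networkx as nx

G = nx.erdos_renyi_graph(n=20, p=0.15, seed=1)
state = {v: random.random() for v in G}          # internal node states

for step in range(50):
    # (1) structural dynamics: occasionally rewire a random edge
    if G.number_of_edges() > 0 and random.random() < 0.2:
        u, v = random.choice(list(G.edges()))
        w = random.choice([x for x in G if x not in (u, v)])
        G.remove_edge(u, v)
        G.add_edge(u, w)
    # (2) state dynamics: each node relaxes toward the mean of its neighbors
    state = {
        v: (state[v] + sum(state[u] for u in G[v])) / (1 + G.degree(v))
        for v in G
    }

print(nx.number_connected_components(G), round(sum(state.values()) / len(state), 3))
```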
https://en.wikipedia.org/wiki/Network_dynamics
Network formation is an aspect of network science that seeks to model how a network evolves by identifying which factors affect its structure and how these mechanisms operate. Network formation hypotheses are tested either by using a dynamic model with an increasing network size or by making an agent-based model to determine which network structure is the equilibrium in a fixed-size network. A dynamic model, often used by physicists and biologists, begins as a small network or even a single node. The modeler then uses a (usually randomized) rule on how newly arrived nodes form links in order to increase the size of the network. The aim is to determine what the properties of the network will be when it grows in size. In this way, researchers try to reproduce properties common in most real networks, such as the small world network property or the scale-free network property. These properties are common in almost every real network including the World Wide Web, the metabolic network or the network of international air routes. The oldest model of this type is the Erdős–Rényi model, in which new nodes randomly choose other nodes to connect to. A second well-known model is the Watts and Strogatz model, which starts from a standard two-dimensional lattice and evolves by replacing links randomly. These models display some realistic network properties, but fail to account for others. One of the most influential models of network formation is the Barabási–Albert model. Here, the network also starts from a small system, and incoming nodes choose their links randomly, but the randomization is not uniform. Instead, nodes which already possess a greater number of links have a higher likelihood of becoming connected to incoming nodes. This mechanism is known as preferential attachment. In comparison to previous models, the Barabási–Albert model seems to more accurately reflect phenomena observed in real-world networks. The second approach to modeling network formation is agent- or game theory-based modelling. In these models, a network with a fixed number of nodes or agents is created. Every agent is given a utility function, a representation of its linking preferences, and directed to form links with other nodes based upon it. Usually, forming or maintaining a link has a cost, but having connections to other nodes has benefits. The method tests the hypothesis that, given some initial setting and parameter values, a certain network structure will emerge as an equilibrium of this game. Since the number of nodes is usually fixed, these models can rarely explain the properties of huge real-world networks; however, they are very useful for examining network formation in smaller groups. Jackson and Wolinsky pioneered these types of models in a 1996 paper, which has since inspired several game-theoretic models.[1] These models were further developed by Jackson and Watts, who placed this approach in a dynamic setting to see how the network structure evolves over time.[2] Usually, games with known network structure are widely applicable; however, there are various settings in which players interact without fully knowing who their neighbors are and what the network structure is. These games can be modeled using incomplete information network games. There are very few models that try to combine the two approaches.
However, in 2007, Jackson and Rogers modeled a growing network in which new nodes chose their connections partly based on random choices and partly based on maximizing their utility function.[3]With this general framework, modelers can reproduce almost every stylized trait of real-life networks.
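A minimal sketch of such a hybrid growth rule is given below. It is only illustrative of the general idea of mixing uniform and degree-proportional attachment; the seed network, the parameter frac_random and the number of links per new node are arbitrary assumptions, not the specification used by Jackson and Rogers.

```python
# Toy growing network: each arriving node forms m links, each chosen either
# uniformly at random or in proportion to current degree. Illustrative of the
# hybrid random/preferential idea above, not the exact Jackson-Rogers model;
# m, frac_random and the seed network are arbitrary assumptions.
import random
import networkx as nx

def grow_network(n_final=2000, m=3, frac_random=0.5, seed=0):
    rng = random.Random(seed)
    G = nx.complete_graph(m + 1)                     # small seed network
    # list in which each node appears once per unit of degree
    degree_weighted = [v for v in G for _ in range(G.degree(v))]
    for new in range(m + 1, n_final):
        targets = set()
        while len(targets) < m:
            if rng.random() < frac_random:
                targets.add(rng.randrange(new))          # uniform over existing nodes
            else:
                targets.add(rng.choice(degree_weighted)) # degree-proportional
        for t in targets:
            G.add_edge(new, t)
            degree_weighted.extend([new, t])
    return G

G = grow_network()
top_degrees = sorted((d for _, d in G.degree()), reverse=True)[:5]
print("largest degrees:", top_degrees)
```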
https://en.wikipedia.org/wiki/Network_formation
A network is an abstract structure capturing only the basics of connection patterns and little else. Because it is a generalized pattern, tools developed for analyzing, modeling and understanding networks can theoretically be implemented across disciplines. As long as a system can be represented by a network, there is an extensive set of tools – mathematical, computational, and statistical – that are well-developed and, if understood, can be applied to the analysis of the system of interest. Tools that are currently employed in risk assessment are often sufficient, but model complexity and limitations of computational power can deter risk assessors from including more causal connections and accounting for more Black Swan event outcomes. By applying network theory tools to risk assessment, computational limitations may be overcome, resulting in broader coverage of events with a narrowed range of uncertainties.[1] Decision-making processes are not incorporated into routine risk assessments; however, they play a critical role in such processes.[2] It is therefore very important for risk assessors to minimize confirmation bias by carrying out their analysis and publishing their results with minimal involvement of external factors such as politics, media, and advocates. In reality, however, it is nearly impossible to break the iron triangle among politicians, scientists (in this case, risk assessors), and advocates and media.[3] Risk assessors need to be sensitive to the difference between risk studies and risk perceptions.[4][5] One way to bring the two closer is to provide decision-makers with data they can easily rely on and understand. Employing networks in the risk analysis process can visualize causal relationships and identify heavily weighted or important contributors to the probability of the critical event.[6] Bow-tie diagrams, cause-and-effect diagrams, Bayesian networks (a directed acyclic network) and fault trees are a few examples of how network theories can be applied in risk assessment.[7] In epidemiology risk assessments (Figures 7 and 9), once a network model is constructed, we can visually identify, then quantify and evaluate, the potential exposure or infection risk of people related to the well-connected patients (Patients 1, 6, 35, 130 and 127 in Figure 7) or high-traffic places (Hotel M in Figure 9). In ecological risk assessments (Figure 8), a network model lets us identify the keystone species and determine how widely the impacts of the potential hazards being investigated will extend. Risk assessment is a method for dealing with uncertainty. For it to be beneficial to the overall risk management and decision-making process, it must be able to capture extreme and catastrophic events. Risk assessment involves two parts: risk analysis and risk evaluation, although the term "risk assessment" is sometimes used interchangeably with "risk analysis". In general, risk assessment can be divided into these steps:[8] Naturally, the number of steps required varies with each assessment. It depends on the scope of the analysis and the complexity of the study object.[9] Because there are always varying degrees of uncertainty involved in any risk analysis process, sensitivity and uncertainty analyses are usually carried out to mitigate the level of uncertainty and therefore improve the overall risk assessment result. A network is a simplified representation that reduces a system to an abstract structure. Simply put, it is a collection of points linked together by lines.
Each point is known as a "vertex" (plural: "vertices") or "node", and each line as an "edge" or "link".[10] Network modeling and analysis have already been applied in many areas, including computer, physical, biological, ecological, logistical and social science. Through the study of these models, we gain insights into the nature of individual components (i.e. vertices), connections or interactions between those components (i.e. edges), as well as the pattern of connections (i.e. the network). Undoubtedly, modifications of the structure (or pattern) of any given network can have a big effect on the behavior of the system it depicts. For example, connections in a social network affect how people communicate, exchange news, travel, and, less obviously, spread diseases. In order to gain a better understanding of how each of these systems functions, some knowledge of the structure of the network is necessary. Key structural concepts include the small-world effect; degree, hubs, and paths; centrality; components; directed networks; weighted networks; and trees. Early social network studies can be traced back to the end of the nineteenth century. However, the well-documented studies and the foundation of this field are usually attributed to a psychiatrist named Jacob Moreno. He published a book entitled Who Shall Survive? in 1934 which laid out the foundation for sociometry (later known as social network analysis). Another famous contributor to the early development of social network analysis is an experimental psychologist known as Stanley Milgram. His "small-world" experiments gave rise to concepts such as six degrees of separation and well-connected acquaintances (also known as "sociometric superstars"). This experiment was recently repeated by Dodds et al. by means of email messages, and the basic results were similar to Milgram's. The estimated true average path length (that is, the number of edges the email message had to pass from one unique individual to the intended targets in different countries) for the experiment was around five to seven, which does not deviate much from the original six degrees of separation.[14] A food web, or food chain, is an example of a directed network which describes the prey-predator relationship in a given ecosystem. Vertices in this type of network represent species, and the edges the prey-predator relationship. A collection of species may be represented by a single vertex if all members in that collection prey upon and are preyed on by the same organisms. A food web is often acyclic, with a few exceptions such as adults preying on juveniles, and parasitism.[15] Epidemiology is closely related to social networks. Contagious diseases can spread through connection networks such as workplaces, transportation, intimate bodily contact and water systems (see Figures 7 and 9). Though they exist only virtually, computer viruses spreading across internet networks are not much different from their physical counterparts. Therefore, understanding each of these network patterns can no doubt aid us in more precise prediction of the outcomes of epidemics and in preparing better disease prevention protocols. The simplest model of infection is the SI (susceptible – infected) model. Most diseases, however, do not behave in such a simple manner. Therefore, many modifications to this model have been made, such as the SIR (susceptible – infected – recovered), the SIS (the second S denotes reinfection) and SIRS models. The idea of latency is taken into account in models such as SEIR (where E stands for exposed).
The SIR model is also known as the Reed–Frost model.[16] To factor these into an outbreak network model, one must consider the degree distributions of vertices in the giant component of the network (outbreaks in small components are isolated and die out quickly, which does not allow them to become epidemics). Theoretically, weighted networks can provide more accurate information on the exposure probability of vertices, but more evidence is needed. Pastor-Satorras et al. pioneered much work in this area, which began with the simplest form (the SI model) applied to networks drawn from the configuration model.[17] The biology of how an infection causes disease in an individual is complicated and is another type of disease pattern specialists are interested in (a process known as pathogenesis, which involves the immunology of the host and the virulence factors of the pathogen).
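As an illustration of the compartmental models mentioned above, the following is a minimal discrete-time SIR simulation on a contact network. It is a sketch only: the network, the per-contact transmission probability beta and the recovery probability gamma are arbitrary illustrative values rather than parameters from any cited study.

```python
# Minimal discrete-time SIR outbreak sketch on a contact network. The network
# and the per-contact transmission / recovery probabilities are arbitrary
# illustrative choices, not parameters from any study cited above.
import random
import networkx as nx

def sir_outbreak(G, beta=0.05, gamma=0.1, seed_node=0, rng_seed=1):
    rng = random.Random(rng_seed)
    status = {v: "S" for v in G}
    status[seed_node] = "I"
    epidemic_curve = []
    while any(s == "I" for s in status.values()):
        newly_infected, newly_recovered = [], []
        for v, s in status.items():
            if s != "I":
                continue
            for u in G[v]:                      # contacts of an infectious node
                if status[u] == "S" and rng.random() < beta:
                    newly_infected.append(u)
            if rng.random() < gamma:            # recovery with immunity
                newly_recovered.append(v)
        for u in newly_infected:
            status[u] = "I"
        for v in newly_recovered:
            status[v] = "R"
        epidemic_curve.append(sum(s == "I" for s in status.values()))
    return epidemic_curve, status

G = nx.barabasi_albert_graph(500, 3, seed=2)    # heavy-tailed contact network
curve, final = sir_outbreak(G)
print("peak infectious:", max(curve), "| total ever infected:",
      sum(s == "R" for s in final.values()))
```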
https://en.wikipedia.org/wiki/Network_theory_in_risk_assessment
Network topologyis the arrangement of the elements (links,nodes, etc.) of a communication network.[1][2]Network topology can be used to define or describe the arrangement of various types of telecommunication networks, includingcommand and controlradio networks,[3]industrialfieldbussesandcomputer networks. Network topology is thetopological[4]structure of a network and may be depicted physically or logically. It is an application ofgraph theory[3]wherein communicating devices are modeled as nodes and the connections between the devices are modeled as links or lines between the nodes.Physical topologyis the placement of the various components of a network (e.g., device location and cable installation), whilelogical topologyillustrates how data flows within a network. Distances between nodes, physical interconnections,transmission rates, or signal types may differ between two different networks, yet their logical topologies may be identical. A network's physical topology is a particular concern of thephysical layerof theOSI model. Examples of network topologies are found inlocal area networks(LAN), a common computer network installation. Any given node in the LAN has one or more physical links to other devices in the network; graphically mapping these links results in a geometric shape that can be used to describe the physical topology of the network. A wide variety of physical topologies have been used in LANs, includingring,bus,meshandstar. Conversely, mapping thedata flowbetween the components determines the logical topology of the network. In comparison,Controller Area Networks, common in vehicles, are primarily distributedcontrol systemnetworks of one or more controllers interconnected with sensors and actuators over, invariably, a physical bus topology. Two basic categories of network topologies exist, physical topologies and logical topologies.[5] Thetransmission mediumlayout used to link devices is the physical topology of the network. For conductive or fiber optical mediums, this refers to the layout ofcabling, the locations of nodes, and the links between the nodes and the cabling.[1]The physical topology of a network is determined by the capabilities of the network access devices and media, the level of control or fault tolerance desired, and the cost associated with cabling or telecommunication circuits. In contrast, logical topology is the way that the signals act on the network media,[6]or the way that the data passes through the network from one device to the next without regard to the physical interconnection of the devices.[7]A network's logical topology is not necessarily the same as its physical topology. For example, the originaltwisted pair Ethernetusingrepeater hubswas a logical bus topology carried on a physical star topology.Token Ringis a logical ring topology, but is wired as a physical star from themedia access unit. Physically,Avionics Full-Duplex Switched Ethernet(AFDX) can be a cascaded star topology of multiple dual redundant Ethernet switches; however, theAFDX virtual linksare modeled astime-switchedsingle-transmitter bus connections, thus following the safety model of asingle-transmitter bus topologypreviously used in aircraft. Logical topologies are often closely associated withmedia access controlmethods and protocols. Some networks are able to dynamically change their logical topology through configuration changes to theirroutersand switches. 
The transmission media (often referred to in the literature as thephysical media) used to link devices to form a computer network includeelectrical cables(Ethernet,HomePNA,power line communication,G.hn),optical fiber(fiber-optic communication), andradio waves(wireless networking). In theOSI model, these are defined at layers 1 and 2 — the physical layer and the data link layer. A widely adoptedfamilyof transmission media used in local area network (LAN) technology is collectively known asEthernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined byIEEE 802.3. Ethernet transmits data over both copper and fiber cables. Wireless LAN standards (e.g. those defined byIEEE 802.11) use radio waves, or others useinfraredsignals as a transmission medium.Power line communicationuses a building's power cabling to transmit data. The orders of the following wired technologies are, roughly, from slowest to fastest transmission speed. Price is a main factor distinguishing wired- and wireless technology options in a business. Wireless options command a price premium that can make purchasing wired computers, printers and other devices a financial benefit. Before making the decision to purchase hard-wired technology products, a review of the restrictions and limitations of the selections is necessary. Business and employee needs may override any cost considerations.[12] There have been various attempts at transporting data over exotic media: Both cases have a largeround-trip delay time, which gives slow two-way communication, but does not prevent sending large amounts of information. Network nodes are the points of connection of the transmission medium to transmitters and receivers of the electrical, optical, or radio signals carried in the medium. Nodes may be associated with a computer, but certain types may have only a microcontroller at a node or possibly no programmable device at all. In the simplest of serial arrangements, oneRS-232transmitter can be connected by a pair of wires to one receiver, forming two nodes on one link, or a Point-to-Point topology. Some protocols permit a single node to only either transmit or receive (e.g.,ARINC 429). Other protocols have nodes that can both transmit and receive into a single channel (e.g.,CANcan have many transceivers connected to a single bus). While the conventionalsystembuilding blocks of acomputer networkincludenetwork interface controllers(NICs),repeaters,hubs,bridges,switches,routers,modems,gateways, andfirewalls, most address network concerns beyond the physical network topology and may be represented as single nodes on a particular physical network topology. Anetwork interface controller(NIC) iscomputer hardwarethat provides a computer with the ability to access the transmission media, and has the ability to process low-level network information. For example, the NIC may have a connector for accepting a cable, or an aerial for wireless transmission and reception, and the associated circuitry. The NIC responds to traffic addressed to anetwork addressfor either the NIC or the computer as a whole. InEthernetnetworks, each network interface controller has a uniqueMedia Access Control(MAC) address—usually stored in the controller's permanent memory. To avoid address conflicts between network devices, theInstitute of Electrical and Electronics Engineers(IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is sixoctets. 
The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce. Arepeateris anelectronicdevice that receives a networksignal, cleans it of unnecessary noise and regenerates it. The signal may be reformed orretransmittedat a higher power level, to the other side of an obstruction possibly using a different transmission medium, so that the signal can cover longer distances without degradation. Commercial repeaters have extendedRS-232segments from 15 meters to over a kilometer.[15]In most twisted pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart. Repeaters work within the physical layer of the OSI model, that is, there is no end-to-end change in the physical protocol across the repeater, or repeater pair, even if a different physical layer may be used between the ends of the repeater, or repeater pair. Repeaters require a small amount of time to regenerate the signal. This can cause apropagation delaythat affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters that can be used in a row, e.g., the Ethernet5-4-3 rule. A repeater with multiple ports is known as hub, anEthernet hubin Ethernet networks, aUSB hubin USB networks. Anetwork bridgeconnects and filters traffic between twonetwork segmentsat thedata link layer(layer 2) of theOSI modelto form a single network. This breaks the network's collision domain but maintains a unified broadcast domain. Network segmentation breaks down a large, congested network into an aggregation of smaller, more efficient networks. Bridges come in three basic types: Anetwork switchis a device that forwards and filtersOSI layer 2datagrams(frames) betweenportsbased on the destination MAC address in each frame.[16]A switch is distinct from a hub in that it only forwards the frames to the physical ports involved in the communication rather than all ports connected. It can be thought of as a multi-port bridge.[17]It learns to associate physical ports to MAC addresses by examining the source addresses of received frames. If an unknown destination is targeted, the switch broadcasts to all ports but the source. Switches normally have numerous ports, facilitating a star topology for devices, and cascading additional switches. Multi-layer switchesare capable of routing based on layer 3 addressing or additional logical levels. The termswitchis often used loosely to include devices such as routers and bridges, as well as devices that may distribute traffic based on load or based on application content (e.g., a WebURLidentifier). Arouteris aninternetworkingdevice that forwardspacketsbetween networks by processing the routing information included in the packet or datagram (Internet protocol information from layer 3). The routing information is often processed in conjunction with therouting table(or forwarding table). A router uses its routing table to determine where to forward packets. A destination in a routing table can include ablack holebecause data can go into it, however, no further processing is done for said data, i.e. the packets are dropped. Modems(MOdulator-DEModulator) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. 
To do this one or morecarrier signalsaremodulatedby the digital signal to produce ananalog signalthat can be tailored to give the required properties for transmission. Modems are commonly used for telephone lines, using adigital subscriber linetechnology. Afirewallis a network device for controlling network security and access rules. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase incyber attacks. The study of network topology recognizes eight basic topologies: point-to-point, bus, star, ring or circular, mesh, tree, hybrid, or daisy chain.[18] The simplest topology with a dedicated link between two endpoints. Easiest to understand, of the variations of point-to-point topology, is a point-to-pointcommunication channelthat appears, to the user, to be permanently associated with the two endpoints. A child'stin can telephoneis one example of aphysical dedicatedchannel. Usingcircuit-switchingorpacket-switchingtechnologies, a point-to-point circuit can be set up dynamically and dropped when no longer needed. Switched point-to-point topologies are the basic model of conventionaltelephony. The value of a permanent point-to-point network is unimpeded communications between the two endpoints. The value of an on-demand point-to-point connection is proportional to the number of potential pairs of subscribers and has been expressed asMetcalfe's Law. Daisy chainingis accomplished by connecting each computer in series to the next. If a message is intended for a computer partway down the line, each system bounces it along in sequence until it reaches the destination. A daisy-chained network can take two basic forms: linear and ring. In local area networks using bus topology, each node is connected by interface connectors to a single central cable. This is the 'bus', also referred to as thebackbone, ortrunk– alldata transmissionbetween nodes in the network is transmitted over this common transmission medium and is able to bereceivedby all nodes in the network simultaneously.[1] A signal containing the address of the intended receiving machine travels from a source machine in both directions to all machines connected to the bus until it finds the intended recipient, which then accepts the data. If the machine address does not match the intended address for the data, the data portion of the signal is ignored. Since the bus topology consists of only one wire it is less expensive to implement than other topologies, but the savings are offset by the higher cost of managing the network. Additionally, since the network is dependent on the single cable, it can be thesingle point of failureof the network. In this topology data being transferred may be accessed by any node. In a linear bus network, all of the nodes of the network are connected to a common transmission medium which has just two endpoints. When the electrical signal reaches the end of the bus, the signal is reflected back down the line, causing unwanted interference. To prevent this, the two endpoints of the bus are normally terminated with a device called aterminator. 
In a distributed bus network, all of the nodes of the network are connected to a common transmission medium with more than two endpoints, created by adding branches to the main section of the transmission medium – the physical distributed bus topology functions in exactly the same fashion as the physical linear bus topology because all nodes share a common transmission medium. In star topology (also called hub-and-spoke), every peripheral node (computer workstation or any other peripheral) is connected to a central node called a hub or switch. The hub is the server and the peripherals are the clients. The network does not necessarily have to resemble a star to be classified as a star network, but all of the peripheral nodes on the network must be connected to one central hub. All traffic that traverses the network passes through the central hub, which acts as asignal repeater. The star topology is considered the easiest topology to design and implement. One advantage of the star topology is the simplicity of adding additional nodes. The primary disadvantage of the star topology is that the hub represents a single point of failure. Also, since all peripheral communication must flow through the central hub, the aggregate central bandwidth forms a network bottleneck for large clusters. The extended star network topology extends a physical star topology by one or more repeaters between the central node and theperipheral(or 'spoke') nodes. The repeaters are used to extend the maximum transmission distance of the physical layer, the point-to-point distance between the central node and the peripheral nodes. Repeaters allow greater transmission distance, further than would be possible using just the transmitting power of the central node. The use of repeaters can also overcome limitations from the standard upon which the physical layer is based. A physical extended star topology in which repeaters are replaced with hubs or switches is a type of hybrid network topology and is referred to as a physical hierarchical star topology, although some texts make no distinction between the two topologies. A physical hierarchical star topology can also be referred as a tier-star topology. This topology differs from atree topologyin the way star networks are connected together. A tier-star topology uses a central node, while a tree topology uses a central bus and can also be referred as a star-bus network. A distributed star is a network topology that is composed of individual networks that are based upon the physical star topology connected in a linear fashion – i.e., 'daisy-chained' – with no central or top level connection point (e.g., two or more 'stacked' hubs, along with their associated star connected nodes or 'spokes'). A ring topology is adaisy chainin a closed loop. Data travels around the ring in one direction. When one node sends data to another, the data passes through each intermediate node on the ring until it reaches its destination. The intermediate nodes repeat (retransmit) the data to keep the signal strong.[5]Every node is a peer; there is no hierarchical relationship of clients and servers. If one node is unable to retransmit data, it severs communication between the nodes before and after it in the bus. Advantages: Disadvantages: The value of fully meshed networks is proportional to the exponent of the number of subscribers, assuming that communicating groups of any two endpoints, up to and including all the endpoints, is approximated byReed's Law. 
In afully connected network, all nodes are interconnected. (Ingraph theorythis is called acomplete graph.) The simplest fully connected network is a two-node network. A fully connected network doesn't need to usepacket switchingorbroadcasting. However, since the number of connections grows quadratically with the number of nodes: c=n(n−1)2.{\displaystyle c={\frac {n(n-1)}{2}}.\,} This makes it impractical for large networks. This kind of topology does not trip and affect other nodes in the network. In a partially connected network, certain nodes are connected to exactly one other node; but some nodes are connected to two or more other nodes with a point-to-point link. This makes it possible to make use of some of the redundancy of mesh topology that is physically fully connected, without the expense and complexity required for a connection between every node in the network. Hybrid topology is also known as hybrid network.[19]Hybrid networks combine two or more topologies in such a way that the resulting network does not exhibit one of the standard topologies (e.g., bus, star, ring, etc.). For example, atree network(orstar-bus network) is a hybrid topology in whichstar networksare interconnected viabus networks.[20][21]However, a tree network connected to another tree network is still topologically a tree network, not a distinct network type. A hybrid topology is always produced when two different basic network topologies are connected. Astar-ringnetwork consists of two or more ring networks connected using amultistation access unit(MAU) as a centralized hub. Snowflaketopology is meshed at the core, but tree shaped at the edges.[22] Two other hybrid network types arehybrid meshandhierarchical star.[20] Thestar topologyreduces the probability of a network failure by connecting all of the peripheral nodes (computers, etc.) to a central node. When the physical star topology is applied to a logical bus network such asEthernet, this central node (traditionally a hub) rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on the network, sometimes including the originating node. Allperipheralnodes may thus communicate with all others by transmitting to, and receiving from, the central node only. Thefailureof atransmission linelinking any peripheral node to the central node will result in the isolation of that peripheral node from all others, but the remaining peripheral nodes will be unaffected. However, the disadvantage is that the failure of the central node will cause the failure of all of the peripheral nodes. If the central node ispassive, the originating node must be able to tolerate the reception of anechoof its own transmission, delayed by the two-wayround triptransmission time(i.e. to and from the central node) plus any delay generated in the central node. Anactivestar network has an active central node that usually has the means to prevent echo-related problems. Atree topology(a.k.a.hierarchical topology) can be viewed as a collection of star networks arranged in ahierarchy. Thistree structurehas individual peripheral nodes (e.g. leaves) which are required to transmit to and receive from one other node only and are not required to act as repeaters or regenerators. Unlike the star network, the functionality of the central node may be distributed. As in the conventional star network, individual nodes may thus still be isolated from the network by a single-point failure of a transmission path to the node. 
If a link connecting a leaf fails, that leaf is isolated; if a connection to a non-leaf node fails, an entire section of the network becomes isolated from the rest. To alleviate the amount of network traffic that comes from broadcasting all signals to all nodes, more advanced central nodes were developed that are able to keep track of the identities of the nodes that are connected to the network. Thesenetwork switcheswilllearnthe layout of the network bylisteningon each port during normal data transmission, examining thedata packetsand recording the address/identifier of each connected node and which port it is connected to in alookup tableheld in memory. This lookup table then allows future transmissions to be forwarded to the intended destination only. Daisy chain topology is a way of connecting network nodes in a linear or ring structure. It is used to transmit messages from one node to the next until they reach the destination node. A daisy chain network can have two types: linear and ring. A linear daisy chain network is like an electrical series, where the first and last nodes are not connected. A ring daisy chain network is where the first and last nodes are connected, forming a loop. In a partially connected mesh topology, there are at least two nodes with two or more paths between them to provide redundant paths in case the link providing one of the paths fails. Decentralization is often used to compensate for the single-point-failure disadvantage that is present when using a single device as a central node (e.g., in star and tree networks). A special kind of mesh, limiting the number of hops between two nodes, is ahypercube. The number of arbitrary forks in mesh networks makes them more difficult to design and implement, but their decentralized nature makes them very useful. This is similar in some ways to agrid network, where a linear or ring topology is used to connect systems in multiple directions. A multidimensional ring has atoroidaltopology, for instance. Afully connected network,complete topology, orfull mesh topologyis a network topology in which there is a direct link between all pairs of nodes. In a fully connected network with n nodes, there aren(n−1)2{\displaystyle {\frac {n(n-1)}{2}}\,}direct links. Networks designed with this topology are usually very expensive to set up, but provide a high degree of reliability due to the multiple paths for data that are provided by the large number of redundant links between nodes. This topology is mostly seen inmilitaryapplications.
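The trade-offs discussed above (link counts, redundancy and single points of failure) can be illustrated by treating each basic topology as a graph. The sketch below is illustrative only: it approximates a linear daisy chain, ring, star and full mesh with standard graph generators, the node count is an arbitrary example, and the cut vertices it reports correspond to single points of failure at the node level.

```python
# Quick comparison of basic topologies as graphs: link counts and single
# points of failure (cut vertices). The node count is an arbitrary example.
import networkx as nx

n = 8
topologies = {
    "daisy chain": nx.path_graph(n),
    "ring":        nx.cycle_graph(n),
    "star":        nx.star_graph(n - 1),        # 1 hub + n-1 spokes
    "full mesh":   nx.complete_graph(n),
}

for name, G in topologies.items():
    cut_vertices = list(nx.articulation_points(G))
    print(f"{name:12s} nodes={G.number_of_nodes()} links={G.number_of_edges()} "
          f"single points of failure={cut_vertices}")
```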
https://en.wikipedia.org/wiki/Network_topology
Networks in labor economics refers to the effect social networks have on jobseekers obtaining employment. Research suggests that around half of the employed workforce found their jobs through social contacts.[1] It is believed that social networks not only contribute to the efficiency of job searching but can also explain, at least partly, wage differences and other inequalities in the workforce. Various models are used to quantify this effect, all having their own strengths and weaknesses. Models generally have to simplify the complex nature of social networks. Economic models of the role of social networks in job searching often use exogenous job networks. Using this framework, Calvo-Armengol and Jackson[2][3] were able to point out some network-related labor market issues. In their basic model, in which they attempt to formalize the transmission of job information among individuals, agents can be either employed with some non-zero wage, or unemployed with zero wage. The agents can get information about a job, and when they do so, they can decide whether to keep that information for themselves or pass it to their contacts. In the other phase, employed agents can lose their job with a given probability. An important implication of their model is that if someone who is employed has information about a job, she will pass it to her unemployed acquaintances, who will then become employed. Therefore, there is a positive correlation between the labor outcomes of an individual and those of her contacts. On the other hand, it can also give an explanation for long-term unemployment. If someone's acquaintances are unemployed as well, she has less chance of hearing about a job opportunity. They also conclude that different initial wages and employment can cause different drop-out rates from the labor market and thus can explain the existence of wage inequalities across social groups. Calvo-Armengol and Jackson also prove that position in the network and the structure of the network affect the probability of being unemployed. The effectiveness of job searching with personal contacts is a consequence not only of the individuals' behavior but of the employers' behavior as well. Employers often choose to hire acquaintances of their current employees instead of using a bigger pool of applicants. This is due to information asymmetry, as they hardly know anything about the productivity of the applicant, and revealing it would be rather time-consuming and expensive. However, employees might be aware of both their contacts' unobserved characteristics and the specific expectations of employers, so they can reduce this imbalance. Another benefit for the firm is that, due to the personal bond, present employees are motivated to choose a candidate who will perform well, since after the recommendation, their reputation is also at stake. Dustmann, Glitz and Schönberg[4] showed that using personal connections in job search increases the initial wage and decreases the probability of leaving the firm. Referral-based job networks can function even if there is no direct link between the referee and the potential worker. In the model of Finneran and Kelly,[5] there is a hierarchical network in which workers have the opportunity to refer their acquaintances if their employer is hiring. Workers are referred for a job with a probability that increases with their ability and productivity. In a hierarchical model like this, workers who work at a lower level, far from the information, never get an offer.
However, the authors have shown that there is a threshold of this referral probability above which even skilled workers who are low in the hierarchy can be referred. There is thus a critical density of referral linkages below which no qualified workers can be referred; if the density of these linkages is high enough, however, all qualified workers will be matched with a job, regardless of their position in the network.
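A toy simulation in the spirit of the job-information transmission model described above can make the mechanism concrete. Everything in the sketch is an illustrative assumption (the contact network, the probabilities p_hear and p_lose, and the number of steps); it is not the Calvo-Armengol and Jackson model itself.

```python
# Toy simulation in the spirit of the job-information model described above:
# job news arrives at random agents; employed agents pass it to an unemployed
# contact; employed agents lose their job with some probability. All parameters
# are invented for illustration.
import random
import networkx as nx

def simulate(G, p_hear=0.10, p_lose=0.02, steps=200, seed=0):
    rng = random.Random(seed)
    employed = {v: rng.random() < 0.5 for v in G}
    for _ in range(steps):
        for v in G:
            if rng.random() < p_hear:            # v hears about a job opening
                if not employed[v]:
                    employed[v] = True           # takes the job directly
                else:
                    idle = [u for u in G[v] if not employed[u]]
                    if idle:                     # passes it to an unemployed contact
                        employed[rng.choice(idle)] = True
        for v in G:                              # exogenous job destruction
            if employed[v] and rng.random() < p_lose:
                employed[v] = False
    return employed

G = nx.watts_strogatz_graph(200, 4, 0.1, seed=1)   # small-world contact network
employed = simulate(G)
print("overall employment rate:", sum(employed.values()) / len(employed))
```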
https://en.wikipedia.org/wiki/Networks_in_labor_economics
In network science, preferential attachment means that nodes of a network tend to connect to those nodes which have more links. If the network is growing and new nodes tend to connect to existing ones with probability linear in the degree of the existing nodes, then preferential attachment leads to a scale-free network. If this probability is sub-linear then the network's degree distribution is stretched exponential and hubs are much smaller than in a scale-free network. If this probability is super-linear then almost all nodes are connected to a few hubs. According to Kunegis, Blattner, and Moser, several online networks follow a non-linear preferential attachment model. Communication networks and online contact networks are sub-linear while interaction networks are super-linear.[1] The co-author network among scientists also shows signs of sub-linear preferential attachment.[2] For simplicity it can be assumed that the probability with which a new node connects to an existing one follows a power function of the existing node's degree k, π(k) ∝ k^α, where α > 0. This is a good approximation for many real networks, such as the Internet, the citation network or the actor network. If α = 1 then the preferential attachment is linear. If α < 1 then it is sub-linear, while if α > 1 then it is super-linear.[3] In measuring preferential attachment from real networks, the above power-law functional form k^α can be relaxed to a free-form function, i.e. π(k) can be measured for each k without any assumptions on the functional form of π(k). This is believed to be more flexible, and allows the discovery of non-log-linearity of preferential attachment in real networks.[4] In the sub-linear case the new nodes still tend to connect to the nodes with higher degree, but this effect is smaller than in the case of linear preferential attachment. There are fewer hubs and they are smaller than in a scale-free network. The degree of the largest hub depends only logarithmically on the number of nodes, growing roughly as (ln n)^(1/(1−α)), so it is smaller than the polynomial dependence found in scale-free networks.[5] If α > 1 then a few nodes tend to connect to every other node in the network. For α > 2 this process happens even more extremely: the number of connections between the other nodes remains finite in the limit as n goes to infinity, so the degree of the largest hub is proportional to the system size, k_max ∝ n.[5]
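The qualitative difference between the sub-linear, linear and super-linear regimes can be seen in a short simulation in which each arriving node attaches one link with probability proportional to k^α. This is an illustrative sketch; the network size and the α values are arbitrary choices.

```python
# Growth with attachment probability proportional to k**alpha: each new node
# attaches a single link to an existing node chosen with that weight.
# Illustrative sketch; network size and alpha values are arbitrary.
import random

def grow(n=3000, alpha=1.0, seed=0):
    rng = random.Random(seed)
    degree = [1, 1]                                   # two connected seed nodes
    for _ in range(2, n):
        weights = [k ** alpha for k in degree]        # pi(k) proportional to k^alpha
        target = rng.choices(range(len(degree)), weights=weights, k=1)[0]
        degree.append(1)                              # the newcomer's single link
        degree[target] += 1
    return degree

for alpha in (0.5, 1.0, 1.5):                         # sub-linear, linear, super-linear
    print(f"alpha = {alpha}: largest hub degree = {max(grow(alpha=alpha))}")
```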
https://en.wikipedia.org/wiki/Non-linear_preferential_attachment
In physics, chemistry, and materials science, percolation (from Latin percolare 'to filter, trickle through') refers to the movement and filtering of fluids through porous materials. It is described by Darcy's law. Broader applications have since been developed that cover connectivity of many systems modeled as lattices or graphs, analogous to connectivity of lattice components in the filtration problem that modulates capacity for percolation. During the last few decades, percolation theory, the mathematical study of percolation, has brought new understanding and techniques to a broad range of topics in physics, materials science, complex networks, epidemiology, and other fields. For example, in geology, percolation refers to filtration of water through soil and permeable rocks. The water flows to recharge the groundwater in the water table and aquifers. In places where infiltration basins or septic drain fields are planned to dispose of substantial amounts of water, a percolation test is needed beforehand to determine whether the intended structure is likely to succeed or fail. On a two-dimensional square lattice, percolation is defined as follows: a site is "occupied" with probability p or "empty" (in which case its edges are removed) with probability 1 − p; the corresponding problem is called site percolation. Percolation typically exhibits universality. Statistical physics concepts such as scaling theory, renormalization, phase transition, critical phenomena and fractals are used to characterize percolation properties. Combinatorics is commonly employed to study percolation thresholds. Due to the complexity involved in obtaining exact results from analytical models of percolation, computer simulations are typically used. The current fastest algorithm for percolation was published in 2000 by Mark Newman and Robert Ziff.[1]
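A brute-force site-percolation simulation makes the definition concrete: occupy each site independently with probability p and check whether an occupied cluster spans the lattice. The sketch below is illustrative only (the lattice size, sample counts and p values are arbitrary), and it is far less efficient than the Newman–Ziff algorithm mentioned above.

```python
# Site percolation on an L x L square lattice: occupy each site with
# probability p and test whether an occupied cluster spans top to bottom.
# Brute-force illustrative sketch (the Newman-Ziff algorithm cited above is
# far more efficient for sweeping p).
import numpy as np
from scipy.ndimage import label

def spans(p, L=200, seed=None):
    rng = np.random.default_rng(seed)
    occupied = rng.random((L, L)) < p
    clusters, _ = label(occupied)             # 4-connected clusters of occupied sites
    top, bottom = set(clusters[0]) - {0}, set(clusters[-1]) - {0}
    return bool(top & bottom)                 # some cluster touches both edges

for p in (0.50, 0.59, 0.65):                  # site threshold on the square lattice is ~0.5927
    hits = sum(spans(p, seed=s) for s in range(20))
    print(f"p={p:.2f}: spanning in {hits}/20 samples")
```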
https://en.wikipedia.org/wiki/Percolation
Policy network analysis is a field of research in political science focusing on the links and interdependence between government's sections and other societal actors, aiming to understand the policy-making process and public policy outcomes.[1] Although the number of definitions is almost as large as the number of approaches of analysis, Rhodes[1]: 426 aims to offer a minimally exclusive starting point: "Policy networks are sets of formal institutional and informal linkages between governmental and other actors structured around shared if endlessly negotiated beliefs and interests in public policy making and implementation." As Thatcher[2]: 391 notes, policy network approaches initially aimed to model specific forms of state-interest group relations, without giving exhaustive typologies. The most widely used paradigm of the 1970s and 1980s only analyzed two specific types of policy networks: policy communities and issue networks. Justifications of the usage of these concepts were deduced from empirical case studies.[2] Policy communities refer to relatively slowly changing networks that define the context of policy-making in specific policy segments. The network links are generally perceived as the relational ties between bureaucrats, politicians and interest groups. The main characteristic of policy communities – compared to issue networks – is that the boundaries of the networks are more stable and more clearly defined. This concept was studied in the context of policy-making in the United Kingdom.[2] In contrast, issue networks – a concept established in the literature about United States government – refer to a looser system, where a relatively large number of stakeholders are involved. Non-government actors in these networks usually include not only interest group representatives but also professional or academic experts. An important characteristic of issue networks is that membership is constantly changing, interdependence is often asymmetric and – compared to policy communities – it is harder to identify dominant actors.[3] New typological approaches appeared in the late 1980s and early 1990s with the aim of grouping policy networks into a system of mutually exclusive and collectively exhaustive categories.[2] One possible logic of typology is based on the degree of integration, membership size and distribution of resources in the network. This categorization – perhaps most importantly represented by R. A. W. Rhodes – allows the combination of policy communities and issue networks with categories like professional network, intragovernmental network and producer network.[4] Other approaches identify categories based on distinct patterns of state-interest group relations. Patterns include corporatism and pluralism, iron triangles, subgovernment and clientelism, while the differentiation is based on membership, stability and sectorality.[5] As the field of policy network analysis has grown since the late 20th century, scholars have developed competing descriptive, theoretical and prescriptive accounts. Each type gives different specific content to the term policy network and uses different research methodologies.[1] For several authors, policy networks describe specific forms of government policy-making. The three most important forms are interest intermediation, interorganizational analysis, and governance.[1] In an approach developed from the literature on US pluralism, policy networks are often analyzed in order to identify the most important actors influencing governmental decision-making.
From this perspective, a network-based assessment is useful to describe power positions, the structure of oligopoly in political markets, and the institutions of interest negotiation.[1] Another branch of descriptive literature, which emerged from the study of European politics, aims to understand the interdependency in decision-making between formal political institutions and the corresponding organizational structures. This viewpoint emphasizes the importance of overlapping organizational responsibilities and the distribution of power in shaping specific policy outcomes.[6] A third direction of descriptive scholarship is to describe general patterns of policy-making – the formal institutions of power-sharing between government, independent state bodies and the representatives of employer and labor interests.[7][8] The two most important theoretical approaches aiming to understand and explain actors' behavior in policy networks are the following: power dependence and rational choice.[1] In power dependence models, policy networks are understood as mechanisms for exchanging resources between the organizations in the network. The dynamic of exchange is determined by the comparative value of resources (e.g. legal, political or financial in nature) and individual capacities to deploy them in order to create better bargaining positions and achieve higher degrees of autonomy.[1] In policy network analysis, theorists complement standard rational choice arguments with the insights of new institutionalism. This "actor-centered institutionalism" is used to describe policy networks as structural arrangements between relatively stable sets of public and private players. Rational choice theorists identify links between network actors as channels to exchange multiple goods (e.g. knowledge, resources and information).[1] The prescriptive literature on policy networks focuses on the phenomenon's role in constraining or enabling certain governmental action. From this viewpoint, networks are seen as central elements of the realm of policy-making, at least partially defining the desirability of the status quo – and thus a possible target of reform initiatives.[1] The three most common network management approaches are the following: instrumental (a focus on altering dependency relations), institutional (a focus on rules, incentives and culture) and interactive (a focus on communication and negotiation).[9] As Rhodes[1] points out, there is a long-lasting debate in the field about general theories predicting the emergence of specific networks and corresponding policy outcomes depending on specific conditions. No theories have yet succeeded in achieving this level of generality, and some scholars doubt they ever will. Other debates focus on describing and theorizing change in policy networks. While some political scientists state that this might not be possible,[10] other scholars have made efforts towards the understanding of policy network dynamics. One example is the advocacy coalition framework, which aims to analyze the effect of commonly represented beliefs (in coalitions) on policy outcomes.[1][11]
https://en.wikipedia.org/wiki/Policy_network_analysis
Quantum complex networks are complex networks whose nodes are quantum computing devices.[1][2] Quantum mechanics has been used to create secure quantum communications channels that are protected from hacking.[3][4] Quantum communications offer the potential for secure enterprise-scale solutions.[5][2][6] In theory, it is possible to take advantage of quantum mechanics to create secure communications using features such as quantum key distribution, an application of quantum cryptography that enables secure communications.[3] Quantum teleportation can transfer data at a higher rate than classical channels.[4][relevant?] Successful quantum teleportation experiments were performed in 1998.[7] Prototypical quantum communication networks arrived in 2004.[8] Large-scale communication networks tend to have non-trivial topologies and characteristics, such as the small-world effect, community structure, or scale-free properties.[6] In quantum information theory, qubits are analogous to bits in classical systems. A qubit is a quantum object that, when measured, can be found to be in one of only two states, and that is used to transmit information.[3] Photon polarization or nuclear spin are examples of binary phenomena that can be used as qubits.[3] Quantum entanglement is a physical phenomenon characterized by correlation between the quantum states of two or more physically separate qubits.[3] Maximally entangled states are those that maximize the entropy of entanglement.[9][10] In the context of quantum communication, entangled qubits are used as a quantum channel.[3] Bell measurement is a kind of joint quantum-mechanical measurement of two qubits such that, after the measurement, the two qubits are maximally entangled.[3][10] Entanglement swapping is a strategy used in the study of quantum networks that allows connections in the network to change.[1][11] For example, consider four qubits, A, B, C and D, such that qubits A and B belong to the same station, while C and D belong to two different, distant stations, and where qubit A is entangled with qubit C and qubit B is entangled with qubit D. Performing a Bell measurement on qubits A and B entangles qubits A and B; it also entangles qubits C and D, despite the fact that these two qubits never interact directly with each other. Following this process, the entanglement between qubits A and C, and between qubits B and D, is lost. This strategy can be used to define network topology.[1][11][12] While models for quantum complex networks are not of identical structure, usually a node represents a set of qubits in the same station (where operations like Bell measurements and entanglement swapping can be applied) and an edge between node i{\displaystyle i} and j{\displaystyle j} means that a qubit in node i{\displaystyle i} is entangled with a qubit in node j{\displaystyle j}, although those two qubits are in different places and so cannot physically interact.[1][11] Quantum networks where the links are interaction terms[clarification needed] instead of entanglement are also of interest.[13][which?] Each node in the network contains a set of qubits in different states.
To represent the quantum state of these qubits, it is convenient to use Dirac notation and to represent the two possible states of each qubit as |0⟩{\displaystyle |0\rangle } and |1⟩{\displaystyle |1\rangle }.[1][11] In this notation, two particles are entangled if the joint wave function, |ψij⟩{\displaystyle |\psi _{ij}\rangle }, cannot be decomposed as a product of the states of the individual qubits,[3][10] |ψij⟩=|ϕ⟩i⊗|ϕ⟩j{\displaystyle |\psi _{ij}\rangle =|\phi \rangle _{i}\otimes |\phi \rangle _{j}}, where |ϕ⟩i{\displaystyle |\phi \rangle _{i}} represents the quantum state of the qubit at node i and |ϕ⟩j{\displaystyle |\phi \rangle _{j}} represents the quantum state of the qubit at node j. Another important concept is maximally entangled states. The four states (the Bell states) that maximize the entropy of entanglement between two qubits can be written as follows:[3][10] |Φ±⟩=(|00⟩±|11⟩)/√2{\displaystyle |\Phi ^{\pm }\rangle =(|00\rangle \pm |11\rangle )/{\sqrt {2}}} and |Ψ±⟩=(|01⟩±|10⟩)/√2{\displaystyle |\Psi ^{\pm }\rangle =(|01\rangle \pm |10\rangle )/{\sqrt {2}}}. The quantum random network model proposed by Perseguers et al. (2009)[1] can be thought of as a quantum version of the Erdős–Rényi model. In this model, each node contains N−1{\displaystyle N-1} qubits, one for each other node. The degree of entanglement between a pair of nodes, represented by p{\displaystyle p}, plays a similar role to the parameter p{\displaystyle p} in the Erdős–Rényi model, in which two nodes form a connection with probability p{\displaystyle p}, whereas in the context of quantum random networks p{\displaystyle p} refers to the probability of converting an entangled pair of qubits to a maximally entangled state using only local operations and classical communication.[14] Using Dirac notation, a pair of entangled qubits connecting the nodes i{\displaystyle i} and j{\displaystyle j} is represented as |ψij⟩=√(1−p/2)|00⟩+√(p/2)|11⟩{\displaystyle |\psi _{ij}\rangle ={\sqrt {1-p/2}}\,|00\rangle +{\sqrt {p/2}}\,|11\rangle }. For p=0{\displaystyle p=0}, the two qubits are not entangled: |ψij⟩=|00⟩{\displaystyle |\psi _{ij}\rangle =|00\rangle }, and for p=1{\displaystyle p=1}, we obtain the maximally entangled state: |ψij⟩=(|00⟩+|11⟩)/√2{\displaystyle |\psi _{ij}\rangle =(|00\rangle +|11\rangle )/{\sqrt {2}}}. For intermediate values of p{\displaystyle p}, 0<p<1{\displaystyle 0<p<1}, any entangled state is, with probability p{\displaystyle p}, successfully converted to the maximally entangled state using LOCC operations.[14] One feature that distinguishes this model from its classical analogue is the fact that, in quantum random networks, links are only truly established after they are measured, and it is possible to exploit this fact to shape the final state of the network.[relevant?] For an initial quantum complex network with an infinite number of nodes, Perseguers et al.[1] showed that the right measurements and entanglement swapping make it possible[how?] to collapse the initial network to a network containing any finite subgraph, provided that p{\displaystyle p} scales with N{\displaystyle N} as p∼NZ{\textstyle p\sim N^{Z}}, where Z≥−2{\displaystyle Z\geq -2}. This result is contrary to classical graph theory, where the type of subgraphs contained in a network is bounded by the value of Z{\displaystyle Z}.[15][why?] Entanglement percolation models attempt to determine whether a quantum network is capable of establishing a connection between two arbitrary nodes through entanglement, and to find the best strategies to create such connections.[11][16] In a model proposed by Cirac et al. (2007)[16] and applied to complex networks by Cuquet et al. (2009),[11] nodes are distributed in a lattice[16] or in a complex network,[11] and each pair of neighbors share two pairs of entangled qubits that can be converted to a maximally entangled qubit pair with probability p{\displaystyle p}. We can think of maximally entangled qubits as the true links between nodes.
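As a small numerical illustration of the pair state used in the quantum random network model above, the following sketch (Python with NumPy, both assumed to be available) computes the entropy of entanglement for a few values of p; it should vanish at p = 0 and reach its maximal value of 1 at p = 1. The parametrization √(1−p/2)|00⟩ + √(p/2)|11⟩ is the one given above and is only a convenient convention, not necessarily the exact form used in the cited papers.

```python
import numpy as np

def pair_state(p):
    """Two-qubit state sqrt(1 - p/2)|00> + sqrt(p/2)|11> as a length-4 vector."""
    psi = np.zeros(4)
    psi[0] = np.sqrt(1 - p / 2)   # amplitude of |00>
    psi[3] = np.sqrt(p / 2)       # amplitude of |11>
    return psi

def entanglement_entropy(psi):
    """Entropy of entanglement (in bits) of a pure two-qubit state."""
    # The singular values of the 2x2 reshaped state vector are the Schmidt
    # coefficients of the bipartition into the two qubits.
    schmidt = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
    probs = schmidt ** 2
    probs = probs[probs > 1e-12]          # drop zeros to avoid log(0)
    return float(-np.sum(probs * np.log2(probs)))

for p in (0.0, 0.5, 1.0):
    print(p, round(entanglement_entropy(pair_state(p)), 4))
# Expected output: 0.0 for p = 0, about 0.81 for p = 0.5, and 1.0 for p = 1.
```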
In classical percolation theory, with a probability p{\displaystyle p} that two nodes are connected, p{\displaystyle p} has a critical value (denoted by pc{\displaystyle p_{c}}), so that if p>pc{\displaystyle p>p_{c}} a path between two randomly selected nodes exists with a finite probability, and for p<pc{\displaystyle p<p_{c}} the probability of such a path existing is asymptotically zero.[17] pc{\displaystyle p_{c}} depends only on the network topology.[17] A similar phenomenon was found in the model proposed by Cirac et al. (2007),[16] where the probability of forming a maximally entangled state between two randomly selected nodes is zero if p<pc{\displaystyle p<p_{c}} and finite if p>pc{\displaystyle p>p_{c}}. The main difference between classical and entanglement percolation is that, in quantum networks, it is possible to change the links in the network, thereby changing the effective topology of the network. As a result, pc{\displaystyle p_{c}} depends on the strategy used to convert partially entangled qubits to maximally entangled qubits.[11][16] With a naïve approach, pc{\displaystyle p_{c}} for a quantum network is equal to pc{\displaystyle p_{c}} for a classical network with the same topology.[16] Nevertheless, it was shown that it is possible to take advantage of entanglement swapping to lower pc{\displaystyle p_{c}}, both in regular lattices[16] and complex networks.[11]
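To make the role of the critical probability concrete, the sketch below (Python with the networkx library assumed to be available) performs ordinary bond percolation on a square lattice: each potential link is treated as successfully converted to a maximally entangled pair with probability p, and the fraction of nodes in the largest connected cluster is measured. Sweeping p shows the transition near the known bond-percolation threshold of 1/2 for the square lattice; this corresponds to the naïve, classically equivalent strategy, not to the improved swapping-based strategies mentioned above.

```python
import random
import networkx as nx

def largest_cluster_fraction(side, p, seed=None):
    """Keep each edge of a side x side grid with probability p and return
    the fraction of nodes in the largest connected component."""
    rng = random.Random(seed)
    full = nx.grid_2d_graph(side, side)
    g = nx.Graph()
    g.add_nodes_from(full.nodes())
    g.add_edges_from(e for e in full.edges() if rng.random() < p)
    biggest = max(nx.connected_components(g), key=len)
    return len(biggest) / g.number_of_nodes()

for p in (0.3, 0.45, 0.5, 0.55, 0.7):
    runs = [largest_cluster_fraction(40, p, seed=s) for s in range(10)]
    print(p, round(sum(runs) / len(runs), 3))
# The largest-cluster fraction stays small below p ~ 0.5 and grows rapidly above it.
```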
https://en.wikipedia.org/wiki/Quantum_complex_network
Ingraph theory, aflow network(also known as atransportation network) is adirected graphwhere each edge has acapacityand each edge receives a flow. The amount of flow on an edge cannot exceed the capacity of the edge. Often inoperations research, a directed graph is called anetwork, the vertices are callednodesand the edges are calledarcs. A flow must satisfy the restriction that the amount of flow into a node equals the amount of flow out of it, unless it is asource, which has only outgoing flow, orsink, which has only incoming flow. A flow network can be used to model traffic in a computer network, circulation with demands, fluids in pipes, currents in an electrical circuit, or anything similar in which something travels through a network of nodes. As such, efficient algorithms for solving network flows can also be applied to solve problems that can be reduced to a flow network, including survey design, airline scheduling,image segmentation, and thematching problem. Anetworkis a directed graphG= (V,E)with a non-negativecapacityfunctioncfor each edge, and without multiple arcs (i.e. edges with the same source and target nodes).Without loss of generality, we may assume that if(u,v) ∈E, then(v,u)is also a member ofE. Additionally, if(v,u) ∉Ethen we may add(v,u)toEand then set thec(v,u) = 0. If two nodes inGare distinguished – one as the sourcesand the other as the sinkt– then(G,c,s,t)is called aflow network.[1] Flow functions model the net flow of units between pairs of nodes, and are useful when asking questions such aswhat is the maximum number of units that can be transferred from the source node s to the sink node t?The amount of flow between two nodes is used to represent the net amount of units being transferred from one node to the other. Theexcessfunctionxf:V→ ℝrepresents the net flow entering a given nodeu(i.e. the sum of the flows enteringu) and is defined byxf(u)=∑w∈Vf(w,u)−∑w∈Vf(u,w).{\displaystyle x_{f}(u)=\sum _{w\in V}f(w,u)-\sum _{w\in V}f(u,w).}A nodeuis said to beactiveifxf(u) > 0(i.e. the nodeuconsumes flow),deficientifxf(u) < 0(i.e. the nodeuproduces flow), orconservingifxf(u) = 0. In flow networks, the sourcesis deficient, and the sinktis active. Pseudo-flows, feasible flows, and pre-flows are all examples of flow functions. Thevalue|f|of a feasible flowffor a network, is the net flow into the sinktof the flow network, that is:|f| =xf(t). Note, the flow value in a network is also equal to the total outgoing flow of sources, that is:|f| = −xf(s). Also, if we defineAas a set of nodes inGsuch thats∈Aandt∉A, the flow value is equal to the total net flow going out of A (i.e.|f| =fout(A) −fin(A)).[2]The flow value in a network is the total amount of flow fromstot. Flow decomposition[3]is a process of breaking down a given flow into a collection of path flows and cycle flows. Every flow through a network can be decomposed into one or more paths and corresponding quantities, such that each edge in the flow equals the sum of all quantities of paths that pass through it. Flow decomposition is a powerful tool in optimization problems to maximize or minimize specific flow parameters. We do not use multiple arcs within a network because we can combine those arcs into a single arc. To combine two arcs into a single arc, we add their capacities and their flow values, and assign those to the new arc: Along with the other constraints, the skew symmetry constraint must be remembered during this step to maintain the direction of the original pseudo-flow arc. 
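The excess function and the flow value defined above are straightforward to compute from a table of arc flows. The sketch below (plain Python; the arcs and numbers are a made-up example, not one taken from this article) computes x_f(u) for every node, checks conservation at the intermediate nodes, and reads off |f| = x_f(t) = −x_f(s).

```python
from collections import defaultdict

# flow[(u, v)] = units currently sent along arc (u, v); a hypothetical example.
flow = {
    ("s", "a"): 3, ("s", "b"): 2,
    ("a", "t"): 3, ("b", "t"): 2,
}

def excess(flow):
    """x_f(u) = inflow(u) - outflow(u) for every node appearing in the flow."""
    x = defaultdict(int)
    for (u, v), f in flow.items():
        x[u] -= f   # f leaves u
        x[v] += f   # f enters v
    return dict(x)

x = excess(flow)
print(x)                      # {'s': -5, 'a': 0, 'b': 0, 't': 5}
print("value |f| =", x["t"])  # 5, equal to -x['s']
# Every node other than s (deficient) and t (active) has excess 0: flow is conserved there.
```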
Adding flow to an arc is the same as adding an arc with the capacity of zero.[citation needed] Theresidual capacityof an arcewith respect to a pseudo-flowfis denotedcf, and it is the difference between the arc's capacity and its flow. That is,cf(e) =c(e) −f(e). From this we can construct aresidual network, denotedGf(V,Ef), with a capacity functioncfwhich models the amount ofavailablecapacity on the set of arcs inG= (V,E). More specifically, capacity functioncfof each arc(u,v)in the residual network represents the amount of flow which can be transferred fromutovgiven the current state of the flow within the network. This concept is used inFord–Fulkerson algorithmwhich computes themaximum flowin a flow network. Note that there can be an unsaturated path (a path with available capacity) fromutovin the residual network, even though there is no such path fromutovin the original network.[citation needed]Since flows in opposite directions cancel out,decreasingthe flow fromvtouis the same asincreasingthe flow fromutov. Anaugmenting pathis a path(u1,u2, ...,uk)in the residual network, whereu1=s,uk=t, andfor allui,ui+ 1(cf(ui,ui+ 1) > 0) (1 ≤ i < k). More simply, an augmenting path is an available flow path from the source to the sink. A network is atmaximum flowif and only if there is no augmenting path in the residual networkGf. Thebottleneckis the minimum residual capacity of all the edges in a given augmenting path.[2]See example explained in the "Example" section of this article. The flow network is at maximum flow if and only if it has a bottleneck with a value equal to zero. If any augmenting path exists, its bottleneck weight will be greater than 0. In other words, if there is a bottleneck value greater than 0, then there is an augmenting path from the source to the sink. However, we know that if there is any augmenting path, then the network is not at maximum flow, which in turn means that, if there is a bottleneck value greater than 0, then the network is not at maximum flow. The term "augmenting the flow" for an augmenting path means updating the flowfof each arc in this augmenting path to equal the capacitycof the bottleneck. Augmenting the flow corresponds to pushing additional flow along the augmenting path until there is no remaining available residual capacity in the bottleneck. Sometimes, when modeling a network with more than one source, asupersourceis introduced to the graph.[4]This consists of a vertex connected to each of the sources with edges of infinite capacity, so as to act as a global source. A similar construct for sinks is called asupersink.[5] In Figure 1 you see a flow network with source labeleds, sinkt, and four additional nodes. The flow and capacity is denotedf/c{\displaystyle f/c}. Notice how the network upholds the capacity constraint and flow conservation constraint. The total amount of flow fromstotis 5, which can be easily seen from the fact that the total outgoing flow fromsis 5, which is also the incoming flow tot. By the skew symmetry constraint, fromctoais -2 because the flow fromatocis 2. In Figure 2 you see the residual network for the same given flow. Notice how there is positive residual capacity on some edges where the original capacity is zero in Figure 1, for example for the edge(d,c){\displaystyle (d,c)}. This network is not atmaximum flow. There is available capacity along the paths(s,a,c,t){\displaystyle (s,a,c,t)},(s,a,b,d,t){\displaystyle (s,a,b,d,t)}and(s,a,b,d,c,t){\displaystyle (s,a,b,d,c,t)}, which are then the augmenting paths. 
The bottleneck of the(s,a,c,t){\displaystyle (s,a,c,t)}path is equal tomin(c(s,a)−f(s,a),c(a,c)−f(a,c),c(c,t)−f(c,t)){\displaystyle \min(c(s,a)-f(s,a),c(a,c)-f(a,c),c(c,t)-f(c,t))}=min(cf(s,a),cf(a,c),cf(c,t)){\displaystyle =\min(c_{f}(s,a),c_{f}(a,c),c_{f}(c,t))}=min(5−3,3−2,2−1){\displaystyle =\min(5-3,3-2,2-1)}=min(2,1,1)=1{\displaystyle =\min(2,1,1)=1}. Picture a series of water pipes, fitting into a network. Each pipe is of a certain diameter, so it can only maintain a flow of a certain amount of water. Anywhere that pipes meet, the total amount of water coming into that junction must be equal to the amount going out, otherwise we would quickly run out of water, or we would have a buildup of water. We have a water inlet, which is the source, and an outlet, the sink. A flow would then be one possible way for water to get from source to sink so that the total amount of water coming out of the outlet is consistent. Intuitively, the total flow of a network is the rate at which water comes out of the outlet. Flows can pertain to people or material over transportation networks, or to electricity overelectrical distributionsystems. For any such physical network, the flow coming into any intermediate node needs to equal the flow going out of that node. This conservation constraint is equivalent toKirchhoff's current law. Flow networks also find applications inecology: flow networks arise naturally when considering the flow of nutrients and energy between different organisms in afood web. The mathematical problems associated with such networks are quite different from those that arise in networks of fluid or traffic flow. The field of ecosystem network analysis, developed byRobert Ulanowiczand others, involves using concepts frominformation theoryandthermodynamicsto study the evolution of these networks over time. The simplest and most common problem using flow networks is to find what is called themaximum flow, which provides the largest possible total flow from the source to the sink in a given graph. There are many other problems which can be solved using max flow algorithms, if they are appropriately modeled as flow networks, such asbipartite matching, theassignment problemand thetransportation problem. Maximum flow problems can be solved inpolynomial timewith various algorithms. Themax-flow min-cut theoremstates that finding a maximal network flow is equivalent to finding acutof minimum capacity that separates the source and the sink, where a cut is the division of vertices such that the source is in one division and the sink is in another. In amulti-commodity flow problem, you have multiple sources and sinks, and various "commodities" which are to flow from a given source to a given sink. This could be for example various goods that are produced at various factories, and are to be delivered to various given customers through thesametransportation network. In aminimum cost flow problem, each edgeu,v{\displaystyle u,v}has a given costk(u,v){\displaystyle k(u,v)}, and the cost of sending the flowf(u,v){\displaystyle f(u,v)}across the edge isf(u,v)⋅k(u,v){\displaystyle f(u,v)\cdot k(u,v)}. The objective is to send a given amount of flow from the source to the sink, at the lowest possible price. In acirculation problem, you have a lower boundℓ(u,v){\displaystyle \ell (u,v)}on the edges, in addition to the upper boundc(u,v){\displaystyle c(u,v)}. Each edge also has a cost.
Often, flow conservation holds forallnodes in a circulation problem, and there is a connection from the sink back to the source. In this way, you can dictate the total flow withℓ(t,s){\displaystyle \ell (t,s)}andc(t,s){\displaystyle c(t,s)}. The flowcirculatesthrough the network, hence the name of the problem. In anetwork with gainsorgeneralized networkeach edge has again, a real number (not zero) such that, if the edge has gaing, and an amountxflows into the edge at its tail, then an amountgxflows out at the head. In asource localization problem, an algorithm tries to identify the most likely source node of information diffusion through a partially observed network. This can be done in linear time for trees and cubic time for arbitrary networks and has applications ranging from tracking mobile phone users to identifying the originating source of disease outbreaks.[8]
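The residual-capacity, augmenting-path and bottleneck machinery described above leads directly to the Ford–Fulkerson method for the maximum flow problem; the sketch below is a minimal Edmonds–Karp variant (breadth-first search for shortest augmenting paths) in plain Python. The capacity dictionary at the end is a small made-up example rather than the network from the figures discussed earlier.

```python
from collections import deque, defaultdict

def max_flow(capacity, s, t):
    """Edmonds-Karp maximum flow; capacity is a dict {(u, v): c}."""
    residual = defaultdict(int)
    adj = defaultdict(set)
    for (u, v), c in capacity.items():
        residual[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)              # reverse arcs start with residual capacity 0

    total = 0
    while True:
        # Breadth-first search for a shortest augmenting path in the residual network.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total           # no augmenting path left: the flow is maximum

        # The bottleneck is the minimum residual capacity along the path found.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)

        # Augment: push the bottleneck along the path, updating residual capacities.
        for (u, v) in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        total += bottleneck

capacity = {("s", "a"): 5, ("s", "b"): 3, ("a", "b"): 2,
            ("a", "t"): 4, ("b", "t"): 4}
print(max_flow(capacity, "s", "t"))   # 8
```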
https://en.wikipedia.org/wiki/Random_networks
The spread of rumors is an important form of communication in society. There are two approaches to investigating the rumor spreading process: microscopic models and macroscopic models. The macroscopic models propose a macro view of this process and are mainly based on the widely used Daley–Kendall and Maki–Thompson models. In particular, rumor spread can be viewed as a stochastic process in social networks. By contrast, the microscopic models are more interested in micro-level interactions between individuals. In the last few years, there has been a growing interest in rumor propagation in online social networks, where different approaches have been proposed. The first category is mainly based on epidemic models. Pioneering research on rumor propagation using these models started during the 1960s.[1] A standard model of rumor spreading was introduced by Daley and Kendall.[1] Assume there are N people in total, and that the people in the network are categorized into three groups: ignorants, spreaders and stiflers, which are denoted as S, I, and R respectively hereinafter (in correspondence with the SIR model). The rumor is propagated through the population by pair-wise contacts between spreaders and others in the population. Any spreader involved in a pair-wise meeting attempts to "infect" the other individual with the rumor. If this other individual is an ignorant, he or she becomes a spreader. In the other two cases, either one or both of those involved in the meeting learn that the rumor is known and decide not to tell the rumor anymore, thereby turning into stiflers. One variant is the Maki–Thompson model.[2] In this model, the rumor is spread by directed contacts of the spreaders with others in the population. Furthermore, when a spreader contacts another spreader, only the initiating spreader becomes a stifler. Therefore, three types of interactions can happen with certain rates. Of course we always have conservation of individuals: S+I+R=N{\displaystyle S+I+R=N}. The change in each class in a small time interval Δt{\displaystyle \Delta t} is ΔS=−αSIΔt/N{\displaystyle \Delta S=-\alpha SI\,\Delta t/N}, ΔI=(αSI/N−βI(I+R)/N)Δt{\displaystyle \Delta I=(\alpha SI/N-\beta I(I+R)/N)\,\Delta t}, ΔR=βI(I+R)Δt/N{\displaystyle \Delta R=\beta I(I+R)\,\Delta t/N}. Since we know S{\displaystyle S}, I{\displaystyle I} and R{\displaystyle R} sum up to N{\displaystyle N}, we can reduce one equation from the above, which leads to a set of differential equations using the relative variables x=I/N{\displaystyle x=I/N} and y=S/N{\displaystyle y=S/N} as follows: dx/dt=αxy−βx(1−y){\displaystyle dx/dt=\alpha xy-\beta x(1-y)}, dy/dt=−αxy{\displaystyle dy/dt=-\alpha xy}, which we can write dx/dt=(α+β)xy−βx{\displaystyle dx/dt=(\alpha +\beta )xy-\beta x}, dy/dt=−αxy{\displaystyle dy/dt=-\alpha xy}. Compared with the ordinary SIR model, we see that the only difference to the ordinary SIR model is that we have a factor α+β{\displaystyle \alpha +\beta } in the first equation instead of just α{\displaystyle \alpha }. We immediately see that the ignorants can only decrease, since x,y≥0{\displaystyle x,y\geq 0} and dydt≤0{\displaystyle {dy \over dt}\leq 0}. Also, since the growth rate of x{\displaystyle x} is positive whenever y{\displaystyle y} is close to 1, the rumor model exhibits an "epidemic" even for arbitrarily small rate parameters. We model the process introduced above on a network in discrete time, that is, we can model it as a DTMC. Say we have a network with N nodes; then we can define Xi(t){\displaystyle X_{i}(t)} to be the state of node i at time t. Then X(t){\displaystyle X(t)} is a stochastic process on S={S,I,R}N{\displaystyle S=\{S,I,R\}^{N}}. At a single moment, some node i and node j interact with each other, and then one of them will change its state. Thus we define the function f{\displaystyle f} so that, for x{\displaystyle x} in S{\displaystyle S}, f(x,i,j){\displaystyle f(x,i,j)} is the resulting state of the network when, with the network in state x{\displaystyle x}, node i and node j interact with each other and one of them changes its state.
The transition matrix depends on the number of ties of node i and node j, as well as on the states of node i and node j. For any y=f(x,i,j){\displaystyle y=f(x,i,j)}, we try to find P(x,y){\displaystyle P(x,y)}. If node i is in state I and node j is in state S, then P(x,y)=αAji/ki{\displaystyle P(x,y)=\alpha A_{ji}/k_{i}}; if node i is in state I and node j is in state I, then P(x,y)=βAji/ki{\displaystyle P(x,y)=\beta A_{ji}/k_{i}}; if node i is in state I and node j is in state R, then P(x,y)=βAji/ki{\displaystyle P(x,y)=\beta A_{ji}/k_{i}}. For all other y{\displaystyle y}, P(x,y)=0{\displaystyle P(x,y)=0}. The procedure on a network is as follows:[3] at each step a spreader node i is chosen and one of its neighbours j is selected with probability pj=Ajiki{\displaystyle p_{j}={A_{ji} \over k_{i}}}; the pair then interacts according to the transition probabilities above. We would expect that this process spreads the rumor throughout a considerable fraction of the network. Note however that if we have strong local clustering around a node, what can happen is that many nodes become spreaders and have neighbors who are spreaders. Then, every time we pick one of those, they will recover and can extinguish the rumor spread. On the other hand, if we have a network that is small world, that is, a network in which the shortest path between two randomly chosen nodes is much smaller than one would expect, we can expect the rumor to spread far away. Also, we can compute the final number of people who once spread the news; this is given by r∞=1−e−(α+ββ)r∞{\displaystyle r_{\infty }=1-e^{-({\alpha +\beta \over \beta })r_{\infty }}}. The process, which does not have a threshold in a well-mixed population, exhibits a clear-cut phase transition on small-world networks. Plotting the asymptotic value of r∞{\displaystyle r_{\infty }} as a function of the rewiring probability p{\displaystyle p} illustrates this transition. The microscopic approaches are more focused on interactions between individuals: "who influenced whom." Models include the independent cascade model, the linear threshold model,[4] the energy model,[5] the HISBmodel,[6] and Galam's model.[7] The HISBmodel is a rumor propagation model that can reproduce the trend of this phenomenon and provides indicators to assess the impact of the rumor, in order to effectively understand the diffusion process and reduce the rumor's influence. The variety that exists in human nature makes people's decision-making ability pertaining to spreading information unpredictable, which is the primary challenge in modeling such a complex phenomenon. Hence, this model considers the impact of individual and social behaviors in the spreading process of rumors. The HISBmodel proposes an approach that is parallel to other models in the literature and is concerned more with how individuals spread rumors. Therefore, it tries to understand the behavior of individuals, as well as their social interactions in online social networks (OSNs), and to highlight their impact on the dissemination of rumors. Thus, the model attempts to answer the following questions: When does an individual spread a rumor? When does an individual accept a rumor? In which OSN does this individual spread the rumor? First, it proposes a formulation of individual behavior towards a rumor analogous to damped harmonic motion, which incorporates the opinions of individuals in the propagation process. Furthermore, it establishes rules of rumor transmission between individuals. As a result, it presents the HISBmodel propagation process, where new metrics are introduced to accurately assess the impact of a rumor spreading through OSNs.
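A minimal simulation helps make the network version of the Maki–Thompson dynamics above concrete. The sketch below (Python with networkx assumed to be available) repeatedly lets a randomly chosen spreader contact a random neighbour: an ignorant neighbour becomes a spreader with probability alpha, while meeting another spreader or a stifler turns the initiating spreader into a stifler with probability beta (these probabilities play the role of the rates α and β above). It then reports the final fraction of stiflers, i.e. of people who once spread the rumor. The graph size and parameter values are arbitrary illustrations.

```python
import random
import networkx as nx

def maki_thompson(G, alpha=1.0, beta=1.0, seed=None):
    """Simulate rumor spreading on graph G; return the final stifler fraction."""
    rng = random.Random(seed)
    state = {n: "ignorant" for n in G}
    first = rng.choice(list(G))
    state[first] = "spreader"
    spreaders = {first}

    while spreaders:
        i = rng.choice(list(spreaders))          # initiating spreader
        j = rng.choice(list(G.neighbors(i)))     # contacted neighbour
        if state[j] == "ignorant":
            if rng.random() < alpha:             # the rumor is passed on
                state[j] = "spreader"
                spreaders.add(j)
        elif rng.random() < beta:                # i learns the rumor is already known
            state[i] = "stifler"
            spreaders.discard(i)
    return sum(1 for n in G if state[n] == "stifler") / G.number_of_nodes()

G = nx.watts_strogatz_graph(n=1000, k=6, p=0.1, seed=1)
runs = [maki_thompson(G, seed=s) for s in range(10)]
print(round(sum(runs) / len(runs), 3))   # typical final fraction of stiflers
```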
https://en.wikipedia.org/wiki/Rumor_spread_in_social_network
Aservice networkis a structure that brings together several entities to deliver a particular service. For instance, one organisation (the buyer) may sub-contract another organisation (the supplier) to deliver after-sales services to a third party (the customer).[1]The buyer may use more than one supplier. Likewise, the supplier may participate in other networks. The rationale for a service network is that each organisation is focusing on what they do best.[2] Aservice networkcan also be defined as a collection of people and information brought together on theinternetto provide a specificserviceor achieve a common business objective. It is an evolving extension ofservice systemsand appliesEnterprise 2.0technologies, also known asenterprise social software, to enable corporations to leverage the advances of the consumer internet for the benefit of business. In this case, the service network is designed to benefit from thewisdom of crowdsand a human's natural tendency and desire to share information, collaborate, and self organize into communities of common interests and objectives. In business, the value of collaboration is clearly recognized, but the ability is often hampered by rigid organizational boundaries and fragmented information systems. A service network enables businesses to realize the benefits ofmass collaborationdespite the constraints of modern organizational structures and systems. The world's economy is shifting rapidly from agriculture and manufacturing to services. When the United States declared independence, 90% of the world's economy was on the farm.[3]Today, the services sector accounts for approximately 80% of the U.S. economy.[4]But unlike traditional disciplines like computer science and engineering, innovation and investment directed towardsservice innovationhad historically not kept pace with its growth. However, in 2007, momentum and investment in service innovation grew dramatically and the creation and evolution ofservice networksbegan in earnest along with many other service initiatives. The termservice networkis increasingly being used within the context ofservice innovationinitiatives that span academia, business, and government. Some examples include: Investments in service innovation include, but are not limited to, service networks. Service networks are typically delivered as an online or hosted solution, also referred to assoftware as a service (SaaS)solutions. It is possible for participants to have adversarial relationships with other members of the service network .[10]For instance, manufacturers may attempt todisintermediateservice firms when it is more profitable for the manufacturer to replace a whole product rather than repair it. One example inaviationis how manufacturers of airframes and components attempt to sign service contracts withairlines, capturing in the process the aftersales service market previously operated bymaintenanceand repair service firms.[10]The result is a network with internal adversarial dynamics.
https://en.wikipedia.org/wiki/Service_network
Asmall-world networkis agraphcharacterized by a highclustering coefficientand lowdistances. In an example of the social network, high clustering implies the high probability that two friends of one person are friends themselves. The low distances, on the other hand, mean that there is a short chain of social connections between any two people (this effect is known assix degrees of separation).[1]Specifically, a small-world network is defined to be a network where thetypicaldistanceLbetween two randomly chosen nodes (the number of steps required) grows proportionally to thelogarithmof the number of nodesNin the network, that is:[2] while theglobal clustering coefficientis not small. In the context of a social network, this results in thesmall world phenomenonof strangers being linked by a short chain ofacquaintances. Many empirical graphs show the small-world effect, includingsocial networks, wikis such as Wikipedia,gene networks, and even the underlying architecture of theInternet. It is the inspiration for manynetwork-on-chiparchitectures in contemporarycomputer hardware.[3] A certain category of small-world networks were identified as a class ofrandom graphsbyDuncan WattsandSteven Strogatzin 1998.[4]They noted that graphs could be classified according to two independent structural features, namely theclustering coefficient, and average node-to-nodedistance(also known asaverage shortest path length). Purely random graphs, built according to theErdős–Rényi (ER) model, exhibit a small average shortest path length (varying typically as the logarithm of the number of nodes) along with a small clustering coefficient. Watts and Strogatz measured that in fact many real-world networks have a small average shortest path length, but also a clustering coefficient significantly higher than expected by random chance. Watts and Strogatz then proposed a novel graph model, currently named theWatts and Strogatz model, with (i) a small average shortest path length, and (ii) a large clustering coefficient. The crossover in the Watts–Strogatz model between a "large world" (such as a lattice) and a small world was first described by Barthelemy and Amaral in 1999.[5]This work was followed by many studies, including exact results (Barrat and Weigt, 1999; Dorogovtsev andMendes; Barmpoutis and Murray, 2010). Small-world networks tend to containcliques, and near-cliques, meaning sub-networks which have connections between almost any two nodes within them. This follows from the defining property of a highclustering coefficient. Secondly, most pairs of nodes will be connected by at least one short path. This follows from the defining property that the mean-shortest path length be small. Several other properties are often associated with small-world networks. Typically there is an over-abundance ofhubs– nodes in the network with a high number of connections (known as highdegreenodes). These hubs serve as the common connections mediating the short path lengths between other edges. By analogy, the small-world network of airline flights has a small mean-path length (i.e. between any two cities you are likely to have to take three or fewer flights) because many flights are routed throughhubcities. This property is often analyzed by considering the fraction of nodes in the network that have a particular number of connections going into them (the degree distribution of the network). 
Networks with a greater than expected number of hubs will have a greater fraction of nodes with high degree, and consequently the degree distribution will be enriched at high degree values. This is known colloquially as a fat-tailed distribution. Graphs of very different topology qualify as small-world networks as long as they satisfy the two definitional requirements above. Network small-worldness has been quantified by a small-world coefficient, σ{\displaystyle \sigma }, calculated by comparing the clustering and path length of a given network to an Erdős–Rényi model with the same degree on average.[6][7] Another method for quantifying network small-worldness utilizes the original definition of the small-world network, comparing the clustering of a given network to an equivalent lattice network and its path length to an equivalent random network. The small-world measure (ω{\displaystyle \omega }) is defined as[8] ω=Lr/L−C/Cℓ{\displaystyle \omega ={\frac {L_{r}}{L}}-{\frac {C}{C_{\ell }}}}, where the characteristic path length L and clustering coefficient C are calculated from the network you are testing, Cℓ is the clustering coefficient for an equivalent lattice network and Lr is the characteristic path length for an equivalent random network. Still another method for quantifying small-worldness normalizes both the network's clustering and path length relative to these characteristics in equivalent lattice and random networks. The Small World Index (SWI) is defined as[9] SWI=(L−Lℓ)/(Lr−Lℓ)×(C−Cr)/(Cℓ−Cr){\displaystyle {\text{SWI}}={\frac {L-L_{\ell }}{L_{r}-L_{\ell }}}\times {\frac {C-C_{r}}{C_{\ell }-C_{r}}}}, where Lℓ is the characteristic path length of the equivalent lattice network and Cr the clustering coefficient of the equivalent random network. Both ω′ and SWI range between 0 and 1, and have been shown to capture aspects of small-worldness. However, they adopt slightly different conceptions of ideal small-worldness. For a given set of constraints (e.g. size, density, degree distribution), there exists a network for which ω′ = 1, and thus ω′ aims to capture the extent to which a network with given constraints is as small-worldly as possible. In contrast, there may not exist a network for which SWI = 1; thus SWI aims to capture the extent to which a network with given constraints approaches the theoretical small-world ideal of a network where C ≈ Cℓ and L ≈ Lr.[9] Small-world properties are found in many real-world phenomena, including websites with navigation menus, food webs, electric power grids, metabolite processing networks, networks of brain neurons, voter networks, telephone call graphs, and airport networks.[10] Cultural networks[11] and word co-occurrence networks[12] have also been shown to be small-world networks. Networks of connected proteins have small-world properties such as power-law-obeying degree distributions.[13] Similarly, transcriptional networks, in which the nodes are genes that are linked if one gene has an up- or down-regulatory genetic influence on the other, have small-world network properties.[14] In another example, the famous theory of "six degrees of separation" between people tacitly presumes that the domain of discourse is the set of people alive at any one time. The number of degrees of separation between Albert Einstein and Alexander the Great is almost certainly greater than 30,[15] and this network does not have small-world properties. A similarly constrained network would be the "went to school with" network: if two people went to the same college ten years apart from one another, it is unlikely that they have acquaintances in common amongst the student body. Similarly, the number of relay stations through which a message must pass was not always small. In the days when the post was carried by hand or on horseback, the number of times a letter changed hands between its source and destination would have been much greater than it is today.
The number of times a message changed hands in the days of the visual telegraph (circa 1800–1850) was determined by the requirement that two stations be connected by line-of-sight. Tacit assumptions, if not examined, can cause a bias in the literature on graphs in favor of finding small-world networks (an example of thefile drawer effect resulting from the publication bias). It is hypothesized by some researchers, such asAlbert-László Barabási, that the prevalence of small world networks in biological systems may reflect an evolutionary advantage of such an architecture. One possibility is that small-world networks are more robust to perturbations than other network architectures. If this were the case, it would provide an advantage to biological systems that are subject to damage bymutationorviral infection. In a small world network with a degree distribution following apower-law, deletion of a random node rarely causes a dramatic increase inmean-shortest pathlength (or a dramatic decrease in theclustering coefficient). This follows from the fact that most shortest paths between nodes flow throughhubs, and if a peripheral node is deleted it is unlikely to interfere with passage between other peripheral nodes. As the fraction of peripheral nodes in a small world network is much higher than the fraction ofhubs, the probability of deleting an important node is very low. For example, if the small airport inSun Valley, Idahowas shut down, it would not increase the average number of flights that other passengers traveling in the United States would have to take to arrive at their respective destinations. However, if random deletion of a node hits a hub by chance, the average path length can increase dramatically. This can be observed annually when northern hub airports, such as Chicago'sO'Hare airport, are shut down because of snow; many people have to take additional flights. By contrast, in a random network, in which all nodes have roughly the same number of connections, deleting a random node is likely to increase the mean-shortest path length slightly but significantly for almost any node deleted. In this sense, random networks are vulnerable to random perturbations, whereas small-world networks are robust. However, small-world networks are vulnerable to targeted attack of hubs, whereas random networks cannot be targeted for catastrophic failure. The main mechanism to construct small-world networks is theWatts–Strogatz mechanism. Small-world networks can also be introduced with time-delay,[16]which will not only produce fractals but also chaos[17]under the right conditions, or transition to chaos in dynamics networks.[18] Soon after the publication ofWatts–Strogatz mechanism, approaches have been developed byMashaghiand co-workers to generate network models that exhibit high degree correlations, while preserving the desired degree distribution and small-world properties. These approaches are based on edge-dual transformation and can be used to generate analytically solvable small-world network models for research into these systems.[19] Degree–diametergraphs are constructed such that the number of neighbors each vertex in the network has is bounded, while the distance from any given vertex in the network to any other vertex (thediameterof the network) is minimized. Constructing such small-world networks is done as part of the effort to find graphs of order close to theMoore bound. 
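To see the Watts–Strogatz mechanism mentioned above in action, the sketch below (Python with networkx assumed to be available) builds a rewired ring lattice and compares its average clustering C and average shortest path length L with those of an Erdős–Rényi graph of the same size and mean degree, in the spirit of the σ-style comparison described earlier. The parameter values are purely illustrative.

```python
import networkx as nx

n, k, p_rewire = 1000, 10, 0.05

ws = nx.watts_strogatz_graph(n, k, p_rewire, seed=42)   # small-world candidate
er = nx.gnp_random_graph(n, k / (n - 1), seed=42)       # random graph, same mean degree

def clustering_and_path_length(G):
    # Restrict to the largest connected component so path lengths are well defined.
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    return nx.average_clustering(giant), nx.average_shortest_path_length(giant)

c_ws, l_ws = clustering_and_path_length(ws)
c_er, l_er = clustering_and_path_length(er)
print("Watts-Strogatz: C = %.3f, L = %.2f" % (c_ws, l_ws))
print("Erdos-Renyi:    C = %.3f, L = %.2f" % (c_er, l_er))
print("sigma-like ratio:", round((c_ws / c_er) / (l_ws / l_er), 2))
# A small-world graph keeps L close to the random graph's value while C is
# much larger, so the ratio comes out well above 1.
```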
Another way to construct a small world network from scratch is given in Barmpoutiset al.,[20]where a network with very small average distance and very large average clustering is constructed. A fast algorithm of constant complexity is given, along with measurements of the robustness of the resulting graphs. Depending on the application of each network, one can start with one such "ultra small-world" network, and then rewire some edges, or use several small such networks as subgraphs to a larger graph. Small-world properties can arise naturally in social networks and other real-world systems via the process ofdual-phase evolution. This is particularly common where time or spatial constraints limit the addition of connections between vertices The mechanism generally involves periodic shifts between phases, with connections being added during a "global" phase and being reinforced or removed during a "local" phase. Small-world networks can change from scale-free class to broad-scale class whose connectivity distribution has a sharp cutoff following a power law regime due to constraints limiting the addition of new links.[21]For strong enough constraints, scale-free networks can even become single-scale networks whose connectivity distribution is characterized as fast decaying.[21]It was also shown analytically that scale-free networks are ultra-small, meaning that the distance scales according toL∝log⁡log⁡N{\displaystyle L\propto \log \log N}.[22] The advantages to small world networking forsocial movement groupsare their resistance to change due to the filtering apparatus of using highly connected nodes, and its better effectiveness in relaying information while keeping the number of links required to connect a network to a minimum.[23] The small world network model is directly applicable toaffinity grouptheory represented in sociological arguments byWilliam Finnegan. Affinity groups are social movement groups that are small and semi-independent pledged to a larger goal or function. Though largely unaffiliated at the node level, a few members of high connectivity function as connectivity nodes, linking the different groups through networking. This small world model has proven an extremely effective protest organization tactic against police action.[24]Clay Shirkyargues that the larger the social network created through small world networking, the more valuable the nodes of high connectivity within the network.[23]The same can be said for the affinity group model, where the few people within each group connected to outside groups allowed for a large amount of mobilization and adaptation. A practical example of this is small world networking through affinity groups that William Finnegan outlines in reference to the1999 Seattle WTO protests. Many networks studied in geology and geophysics have been shown to have characteristics of small-world networks. Networks defined in fracture systems and porous substances have demonstrated these characteristics.[25]The seismic network in the Southern California region may be a small-world network.[26]The examples above occur on very different spatial scales, demonstrating thescale invarianceof the phenomenon in the earth sciences. Small-world networks have been used to estimate the usability of information stored in large databases. The measure is termed the Small World Data Transformation Measure.[27][28]The greater the database links align to a small-world network the more likely a user is going to be able to extract information in the future. 
This usability typically comes at the cost of the amount of information that can be stored in the same repository. TheFreenetpeer-to-peer network has been shown to form a small-world network in simulation,[29]allowing information to be stored and retrieved in a manner that scales efficiency as the network grows. Nearest Neighbor Searchsolutions likeHNSWuse small-world networks to efficiently find the information in large item corpuses.[30][31] Both anatomical connections in thebrain[32]and the synchronization networks of cortical neurons[33]exhibit small-world topology. Structural and functional connectivity in the brain has also been found to reflect the small-world topology of short path length and high clustering.[34]The network structure has been found in the mammalian cortex across species as well as in large scale imaging studies in humans.[35]Advances inconnectomicsandnetwork neuroscience, have found the small-worldness of neural networks to be associated with efficient communication.[36] In neural networks, short pathlength between nodes and high clustering at network hubs supports efficient communication between brain regions at the lowest energetic cost.[36]The brain is constantly processing and adapting to new information and small-world network model supports the intense communication demands of neural networks.[37]High clustering of nodes forms local networks which are often functionally related. Short path length between these hubs supports efficient global communication.[38]This balance enables the efficiency of the global network while simultaneously equipping the brain to handle disruptions and maintain homeostasis, due to local subsystems being isolated from the global network.[39]Loss of small-world network structure has been found to indicate changes in cognition and increased risk of psychological disorders.[9] In addition to characterizing whole-brain functional and structural connectivity, specific neural systems, such as the visual system, exhibit small-world network properties.[6] A small-world network of neurons can exhibitshort-term memory. A computer model developed bySara Solla[40][41]had two stable states, a property (calledbistability) thought to be important inmemorystorage. An activating pulse generated self-sustaining loops of communication activity among the neurons. A second pulse ended this activity. The pulses switched the system between stable states: flow (recording a "memory"), and stasis (holding it). Small world neuronal networks have also been used as models to understandseizures.[42]
https://en.wikipedia.org/wiki/Small-world_networks
The structural cut-off is a concept in network science which imposes a degree cut-off in the degree distribution of a finite size network due to structural limitations (such as the simple graph property). Networks with vertices of degree higher than the structural cut-off will display structural disassortativity. The structural cut-off is a maximum degree cut-off that arises from the structure of a finite size network. Let Ekk′{\displaystyle E_{kk'}} be the number of edges between all vertices of degree k{\displaystyle k} and k′{\displaystyle k'} if k≠k′{\displaystyle k\neq k'}, and twice the number if k=k′{\displaystyle k=k'}. Given that multiple edges between two vertices are not allowed, Ekk′{\displaystyle E_{kk'}} is bounded by the maximum number of edges between the two degree classes, mkk′{\displaystyle m_{kk'}}. Then, the ratio can be written rkk′=Ekk′/mkk′=⟨k⟩NP(k,k′)/min{kP(k)N,k′P(k′)N,N2P(k)P(k′)}{\displaystyle r_{kk'}={\frac {E_{kk'}}{m_{kk'}}}={\frac {\langle k\rangle NP(k,k')}{\min\{kP(k)N,\,k'P(k')N,\,N^{2}P(k)P(k')\}}}}, where ⟨k⟩{\displaystyle \langle k\rangle } is the average degree of the network, N{\displaystyle N} is the total number of vertices, P(k){\displaystyle P(k)} is the probability a randomly chosen vertex will have degree k{\displaystyle k}, and P(k,k′)=Ekk′/⟨k⟩N{\displaystyle P(k,k')=E_{kk'}/\langle k\rangle N} is the probability that a randomly picked edge will connect on one side a vertex with degree k{\displaystyle k} with a vertex of degree k′{\displaystyle k'}. To be in the physical region, rkk′≤1{\displaystyle r_{kk'}\leq 1} must be satisfied. The structural cut-off ks{\displaystyle k_{s}} is then defined by rksks=1{\displaystyle r_{k_{s}k_{s}}=1}.[1] The structural cut-off plays an important role in neutral (or uncorrelated) networks, which do not display any assortativity. The cut-off takes the form ks(N)∼(⟨k⟩N)1/2{\displaystyle k_{s}(N)\sim (\langle k\rangle N)^{1/2}}, which is finite in any real network. Thus, if vertices of degree k≥ks{\displaystyle k\geq k_{s}} exist, it is physically impossible to attach enough edges between them to maintain the neutrality of the network. In a scale-free network the degree distribution is described by a power law with characteristic exponent γ{\displaystyle \gamma }, P(k)∼k−γ{\displaystyle P(k)\sim k^{-\gamma }}. In a finite scale-free network, the maximum degree of any vertex (also called the natural cut-off) scales as kmax∼N1/(γ−1){\displaystyle k_{\text{max}}\sim N^{1/(\gamma -1)}}. Then, networks with γ<3{\displaystyle \gamma <3}, which is the regime of most real networks, will have kmax{\displaystyle k_{\text{max}}} diverging faster than ks∼N1/2{\displaystyle k_{s}\sim N^{1/2}} in a neutral network. This has the important implication that an otherwise neutral network may show disassortative degree correlations if kmax>ks{\displaystyle k_{\text{max}}>k_{s}}. This disassortativity is not a result of any microscopic property of the network, but is purely due to the structural limitations of the network. In the analysis of networks, for a degree correlation to be meaningful, it must be checked that the correlations are not of structural origin. A network generated randomly by a network generation algorithm is in general not free of structural disassortativity. If a neutral network is required, then structural disassortativity must be avoided. There are a few methods by which this can be done.[2] In some real networks, the same methods as for generated networks can also be used. In many cases, however, it may not make sense to consider multiple edges between two vertices, or such information is not available. The high degree vertices (hubs) may also be an important part of the network that cannot be removed without changing other fundamental properties.
To determine whether the assortativity or disassortativity of a network is of structural origin, the network can be compared with a degree-preserving randomized version of itself (without multiple edges). Then any assortativity measure of the randomized version will be a result of the structural cut-off. If the real network displays any additional assortativity or disassortativity beyond the structural disassortativity, then it is a meaningful property of the real network. Other quantities that depend on the degree correlations, such as some definitions of therich-club coefficient, will also be impacted by the structural cut-off.[3]
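As a rough illustration of these checks (not a reproduction of any specific published analysis), the sketch below (Python with networkx assumed to be available) generates a Barabási–Albert scale-free graph, compares its maximum degree with the structural cut-off ks ≈ (⟨k⟩N)^(1/2) of the uncorrelated case, and then measures the degree assortativity before and after a degree-preserving randomization via double edge swaps. Disassortativity that persists in the randomized simple graph is of structural origin; only correlations beyond it would be a meaningful property of the original network.

```python
import networkx as nx

N = 5000
G = nx.barabasi_albert_graph(N, 3, seed=1)    # scale-free test graph

degrees = [d for _, d in G.degree()]
mean_k = sum(degrees) / N
k_s = (mean_k * N) ** 0.5                     # structural cut-off of the uncorrelated case
print("k_max =", max(degrees), " k_s =", round(k_s, 1))
print("assortativity (original):",
      round(nx.degree_assortativity_coefficient(G), 4))

# Degree-preserving randomization: rewire edges while keeping every degree fixed.
R = G.copy()
nx.double_edge_swap(R, nswap=10 * R.number_of_edges(),
                    max_tries=100 * R.number_of_edges(), seed=1)
print("assortativity (degree-preserving randomization):",
      round(nx.degree_assortativity_coefficient(R), 4))
# Whatever disassortativity the randomized version still shows is attributable
# to the structural cut-off rather than to genuine degree correlations.
```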
https://en.wikipedia.org/wiki/Structural_cut-off
Systems theoryis thetransdisciplinary[1]study ofsystems, i.e. cohesive groups of interrelated, interdependent components that can benaturalorartificial. Every system has causal boundaries, is influenced by its context, defined by its structure, function and role, and expressed through its relations with other systems. A system is "more than the sum of its parts" when it expressessynergyoremergent behavior.[2] Changing one component of a system may affect other components or the whole system. It may be possible to predict these changes in patterns of behavior. For systems that learn and adapt, the growth and the degree ofadaptationdepend upon how well the system is engaged with its environment and other contexts influencing its organization. Some systems support other systems, maintaining the other system to prevent failure. The goals of systems theory are to model a system's dynamics,constraints, conditions, and relations; and to elucidate principles (such as purpose, measure, methods, tools) that can be discerned and applied to other systems at every level of nesting, and in a wide range of fields for achieving optimizedequifinality.[3] General systems theory is about developing broadly applicable concepts and principles, as opposed to concepts and principles specific to one domain of knowledge. It distinguishes dynamic or active systems from static or passive systems. Active systems are activity structures or components that interact in behaviours and processes or interrelate through formal contextual boundary conditions (attractors). Passive systems are structures and components that are being processed. For example, a computer program is passive when it is a file stored on the hard drive and active when it runs in memory.[4]The field is related tosystems thinking, machine logic, andsystems engineering. Systems theory is manifest in the work of practitioners in many disciplines, for example the works of physicianAlexander Bogdanov, biologistLudwig von Bertalanffy, linguistBéla H. Bánáthy, and sociologistTalcott Parsons; in the study of ecological systems byHoward T. Odum,Eugene Odum; inFritjof Capra's study oforganizational theory; in the study ofmanagementbyPeter Senge; in interdisciplinary areas such ashuman resource developmentin the works ofRichard A. Swanson; and in the works of educatorsDebora Hammondand Alfonso Montuori.
As atransdisciplinary, interdisciplinary, andmultiperspectivalendeavor, systems theory brings together principles and concepts fromontology, thephilosophy of science,physics,computer science,biology, andengineering, as well asgeography,sociology,political science,psychotherapy(especiallyfamily systems therapy), andeconomics. Systems theory promotes dialogue between autonomous areas of study as well as withinsystems scienceitself. In this respect, with the possibility of misinterpretations, von Bertalanffy[5]believed a general theory of systems "should be an important regulative device in science," to guard against superficial analogies that "are useless in science and harmful in their practical consequences." Others remain closer to the direct systems concepts developed by the original systems theorists. For example,Ilya Prigogine, ofthe Center for Complex Quantum Systemsat theUniversity of Texas, has studiedemergent properties, suggesting that they offeranaloguesforliving systems. Thedistinctionofautopoiesisas made byHumberto MaturanaandFrancisco Varelarepresent further developments in this field. Important names in contemporary systems science includeRussell Ackoff,Ruzena Bajcsy,Béla H. Bánáthy,Gregory Bateson,Anthony Stafford Beer,Peter Checkland,Barbara Grosz,Brian Wilson,Robert L. Flood,Allenna Leonard,Radhika Nagpal,Fritjof Capra,Warren McCulloch,Kathleen Carley,Michael C. Jackson,Katia Sycara, andEdgar Morinamong others. With the modern foundations for a general theory of systems following World War I,Ervin László, in the preface for Bertalanffy's book,Perspectives on General System Theory, points out that thetranslationof "general system theory" from German into English has "wrought a certain amount of havoc":[6] It (General System Theory) was criticized as pseudoscience and said to be nothing more than an admonishment to attend to things in a holistic way. Such criticisms would have lost their point had it been recognized that von Bertalanffy's general system theory is a perspective or paradigm, and that such basic conceptual frameworks play a key role in the development of exact scientific theory. .. Allgemeine Systemtheorie is not directly consistent with an interpretation often put on 'general system theory,' to wit, that it is a (scientific) "theory of general systems." To criticize it as such is to shoot at straw men. Von Bertalanffy opened up something much broader and of much greater significance than a single theory (which, as we now know, can always be falsified and has usually an ephemeral existence): he created a new paradigm for the development of theories. Theorie (orLehre) "has a much broader meaning in German than the closest English words 'theory' and 'science'," just asWissenschaft(or 'Science').[6]These ideas refer to an organized body of knowledge and "any systematically presented set of concepts, whetherempirically,axiomatically, orphilosophically" represented, while many associateLehrewith theory and science in the etymology of general systems, though it also does not translate from the German very well; its "closest equivalent" translates to 'teaching', but "sounds dogmatic and off the mark."[6]An adequate overlap in meaning is found within the word "nomothetic", which can mean "having the capability to posit long-lasting sense." 
While the idea of a "general systems theory" might have lost many of its root meanings in the translation, by defining a new way of thinking about science and scientific paradigms, systems theory became a widespread term used for instance to describe the interdependence of relationships created in organizations. A system in this frame of reference can contain regularly interacting or interrelating groups of activities. For example, in noting the influence in the evolution of "an individually oriented industrial psychology [into] a systems and developmentally oriented organizational psychology," some theorists recognize that organizations have complex social systems; separating the parts from the whole reduces the overall effectiveness of organizations.[7] This differs from conventional models that center on individuals, structures, departments, and units, which separate the part from the whole instead of recognizing the interdependence between groups of individuals, structures, and processes that enable an organization to function.
László explains that the new systems view of organized complexity went "one step beyond the Newtonian view of organized simplicity", which either abstracted the parts from the whole or understood the whole without relation to the parts. The relationship between organisations and their environments can be seen as the foremost source of complexity and interdependence. In most cases, the whole has properties that cannot be known from analysis of the constituent elements in isolation.[8]
Béla H. Bánáthy, who argued—along with the founders of the systems society—that "the benefit of humankind" is the purpose of science, has made significant and far-reaching contributions to the area of systems theory. For the Primer Group at the International Society for the System Sciences, Bánáthy defines a perspective that iterates this view:[9][full citation needed]
The systems view is a world-view that is based on the discipline of SYSTEM INQUIRY. Central to systems inquiry is the concept of SYSTEM. In the most general sense, system means a configuration of parts connected and joined together by a web of relationships. The Primer Group defines system as a family of relationships among the members acting as a whole. Von Bertalanffy defined system as "elements in standing relationship."
Systems biology is a movement that draws on several trends in bioscience research. Proponents describe systems biology as a biology-based interdisciplinary study field that focuses on complex interactions in biological systems, claiming that it uses a new perspective (holism instead of reduction). Particularly from the year 2000 onwards, the biosciences use the term widely and in a variety of contexts. An often stated ambition of systems biology is the modelling and discovery of emergent properties, that is, properties of a system whose theoretical description is only possible using techniques that fall within the remit of systems biology. It is thought that Ludwig von Bertalanffy may have created the term systems biology in 1928.[10]
Systems ecology is an interdisciplinary field of ecology that takes a holistic approach to the study of ecological systems, especially ecosystems;[11][12][13] it can be seen as an application of general systems theory to ecology. Central to the systems ecology approach is the idea that an ecosystem is a complex system exhibiting emergent properties.
Systems ecology focuses on interactions and transactions within and between biological and ecological systems, and is especially concerned with the way the functioning of ecosystems can be influenced by human interventions. It uses and extends concepts from thermodynamics and develops other macroscopic descriptions of complex systems.
Systems chemistry is the science of studying networks of interacting molecules, to create new functions from a set (or library) of molecules with different hierarchical levels and emergent properties.[14] Systems chemistry is also related to the origin of life (abiogenesis).[15]
Systems engineering is an interdisciplinary approach and means for enabling the realisation and deployment of successful systems. It can be viewed as the application of engineering techniques to the engineering of systems, as well as the application of a systems approach to engineering efforts.[16] Systems engineering integrates other disciplines and specialty groups into a team effort, forming a structured development process that proceeds from concept to production to operation and disposal. Systems engineering considers both the business and the technical needs of all customers, with the goal of providing a quality product that meets the user's needs.[17][18]
Systems thinking is a crucial part of user-centered design processes and is necessary to understand the whole impact of a new human computer interaction (HCI) information system.[19] Overlooking this and developing software without insights from the future users (mediated by user experience designers) is a serious design flaw that can lead to complete failure of information systems and to increased stress and mental illness for their users, resulting in increased costs and a huge waste of resources.[20] It is currently surprisingly uncommon for organizations and governments to investigate the project management decisions leading to serious design flaws and lack of usability.[citation needed]
The Institute of Electrical and Electronics Engineers estimates that roughly 15% of the estimated $1 trillion used to develop information systems every year is completely wasted, with the produced systems discarded before implementation because of entirely preventable mistakes.[21] According to the CHAOS report published in 2018 by the Standish Group, a vast majority of information systems fail or partly fail according to their survey: pure success is the combination of high customer satisfaction with high return on value to the organization. The related figures for the year 2017 are: successful: 14%, challenged: 67%, failed: 19%.[22]
System dynamics is an approach to understanding the nonlinear behaviour of complex systems over time using stocks, flows, internal feedback loops, and time delays.[23]
Systems psychology is a branch of psychology that studies human behaviour and experience in complex systems. It received inspiration from systems theory and systems thinking, as well as the basics of theoretical work from Roger Barker, Gregory Bateson, Humberto Maturana and others. It takes an approach in which groups and individuals are considered as systems in homeostasis.
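The system dynamics approach described above, in which a stock is governed by flows, feedback loops, and time delays, can be illustrated with a minimal sketch. The following Python fragment is only an illustration of the general idea, not any particular published model; the goal, adjustment time, measurement delay, and time step are assumed values. A stock is pushed toward a goal by a negative feedback loop, but because the corrective flow responds to a delayed measurement of the stock, the stock overshoots before settling.

```python
# Minimal system dynamics sketch (illustrative assumptions only): one stock is
# driven toward a goal by a negative feedback flow that acts on a delayed
# measurement of the stock, which produces overshoot before it settles.

from collections import deque

def simulate(goal=100.0, delay_steps=50, adjust_time=5.0, dt=0.1, steps=600):
    stock = 0.0
    # the feedback loop "sees" the stock only after a fixed measurement delay
    delayed = deque([stock] * delay_steps, maxlen=delay_steps)
    history = []
    for _ in range(steps):
        perceived = delayed[0]                      # delayed information
        inflow = (goal - perceived) / adjust_time   # negative feedback flow
        stock += inflow * dt                        # integrate the stock
        delayed.append(stock)
        history.append(stock)
    return history

if __name__ == "__main__":
    h = simulate()
    print("peak value:", round(max(h), 1))   # overshoots the goal of 100
    print("final value:", round(h[-1], 1))   # then settles back toward the goal
```

Richer system dynamics models add more stocks and nonlinear flows, but the delayed negative feedback loop shown here is the basic building block of the approach.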
Systems psychology "includes the domain ofengineering psychology, but in addition seems more concerned with societal systems[24]and with the study of motivational, affective, cognitive and group behavior that holds the name engineering psychology."[25] In systems psychology, characteristics oforganizational behaviour(such as individual needs, rewards,expectations, and attributes of the people interacting with thesystems) "considers this process in order to create an effective system."[26] System theory has been applied in the field of neuroinformatics and connectionist cognitive science. Attempts are being made in neurocognition to merge connectionist cognitive neuroarchitectures with the approach of system theory anddynamical systems theory.[27] Predecessors Founders Other contributors Systems thinking can date back to antiquity, whether considering the first systems of written communication with SumeriancuneiformtoMaya numerals, or the feats of engineering with theEgyptian pyramids. Differentiated from Westernrationalisttraditions of philosophy,C. West Churchmanoften identified with theI Chingas a systems approach sharing a frame of reference similar topre-Socraticphilosophy andHeraclitus.[29]: 12–13Ludwig von Bertalanffytraced systems concepts to the philosophy ofGottfried LeibnizandNicholas of Cusa'scoincidentia oppositorum. While modern systems can seem considerably more complicated, they may embed themselves in history. Figures likeJames JouleandSadi Carnotrepresent an important step to introduce thesystems approachinto the (rationalist) hard sciences of the 19th century, also known as theenergy transformation. Then, thethermodynamicsof this century, byRudolf Clausius,Josiah Gibbsand others, established thesystemreference modelas a formal scientific object. Similar ideas are found inlearning theoriesthat developed from the same fundamental concepts, emphasising how understanding results from knowing concepts both in part and as a whole. In fact, Bertalanffy's organismic psychology paralleled the learning theory ofJean Piaget.[30]Some consider interdisciplinary perspectives critical in breaking away fromindustrial agemodels and thinking, wherein history represents history and math represents math, while the arts and sciencesspecializationremain separate and many treat teaching asbehavioristconditioning.[31] The contemporary work ofPeter Sengeprovides detailed discussion of the commonplace critique of educational systems grounded in conventional assumptions about learning,[32]including the problems with fragmented knowledge and lack of holistic learning from the "machine-age thinking" that became a "model of school separated from daily life." In this way, some systems theorists attempt to provide alternatives to, and evolved ideation from orthodox theories which have grounds in classical assumptions, including individuals such asMax WeberandÉmile Durkheimin sociology andFrederick Winslow Taylorinscientific management.[33]The theorists sought holistic methods by developing systems concepts that could integrate with different areas. Some may view the contradiction ofreductionismin conventional theory (which has as its subject a single part) as simply an example of changing assumptions. The emphasis with systems theory shifts from parts to the organization of parts, recognizing interactions of the parts as not static and constant but dynamic processes. Some questioned the conventionalclosed systemswith the development ofopen systemsperspectives. 
The shift originated fromabsoluteand universal authoritative principles and knowledge to relative and generalconceptualandperceptualknowledge[34]and still remains in the tradition of theorists that sought to provide means to organize human life. In other words, theorists rethought the precedinghistory of ideas; they did not lose them. Mechanistic thinking was particularly critiqued, especially the industrial-age mechanisticmetaphorfor the mind frominterpretationsofNewtonian mechanicsbyEnlightenmentphilosophers and later psychologists that laid the foundations of modern organizational theory and management by the late 19th century.[35] Where assumptions in Western science fromPlatoandAristotletoIsaac Newton'sPrincipia(1687) have historically influenced all areas from thehardtosocialsciences (see,David Easton's seminal development of the "political system" as an analytical construct), the original systems theorists explored the implications of 20th-century advances in terms of systems. Between 1929 and 1951,Robert Maynard Hutchinsat theUniversity of Chicagohad undertaken efforts to encourage innovation and interdisciplinary research in the social sciences, aided by theFord Foundationwith the university's interdisciplinaryDivision of the Social Sciencesestablished in 1931.[29]: 5–9 Many early systems theorists aimed at finding a general systems theory that could explain all systems in all fields of science. "General systems theory" (GST;German:allgemeine Systemlehre) was coined in the 1940s byLudwig von Bertalanffy, who sought a new approach to the study ofliving systems.[36]Bertalanffy developed the theory via lectures beginning in 1937 and then via publications beginning in 1946.[37]According toMike C. Jackson(2000), Bertalanffy promoted an embryonic form of GST as early as the 1920s and 1930s, but it was not until the early 1950s that it became more widely known in scientific circles.[38] Jackson also claimed that Bertalanffy's work was informed byAlexander Bogdanov's three-volumeTectology(1912–1917), providing the conceptual base for GST.[38]A similar position is held byRichard Mattessich(1978) andFritjof Capra(1996). Despite this, Bertalanffy never even mentioned Bogdanov in his works. The systems view was based on several fundamental ideas. First, all phenomena can be viewed as a web of relationships among elements, or asystem. Second, all systems, whetherelectrical,biological, orsocial, have commonpatterns,behaviors, andpropertiesthat the observer can analyze and use to develop greater insight into the behavior of complex phenomena and to move closer toward a unity of the sciences. System philosophy, methodology and application are complementary to this science.[6] Cognizant of advances in science that questioned classical assumptions in the organizational sciences, Bertalanffy's idea to develop a theory of systems began as early as theinterwar period, publishing "An Outline for General Systems Theory" in theBritish Journal for the Philosophy of Scienceby 1950.[39] In 1954, von Bertalanffy, along withAnatol Rapoport,Ralph W. Gerard, andKenneth Boulding, came together at theCenter for Advanced Study in the Behavioral Sciencesin Palo Alto to discuss the creation of a "society for the advancement of General Systems Theory." 
In December that year, a meeting of around 70 people was held inBerkeleyto form a society for the exploration and development of GST.[40]TheSociety for General Systems Research(renamed the International Society for Systems Science in 1988) was established in 1956 thereafter as an affiliate of theAmerican Association for the Advancement of Science(AAAS),[40]specifically catalyzing systems theory as an area of study. The field developed from the work of Bertalanffy, Rapoport, Gerard, and Boulding, as well as other theorists in the 1950s likeWilliam Ross Ashby,Margaret Mead,Gregory Bateson, andC. West Churchman, among others. Bertalanffy's ideas were adopted by others, working in mathematics, psychology, biology,game theory, andsocial network analysis. Subjects that were studied included those ofcomplexity,self-organization,connectionismandadaptive systems. In fields likecybernetics, researchers such as Ashby,Norbert Wiener,John von Neumann, andHeinz von Foersterexamined complex systems mathematically; Von Neumann discoveredcellular automataand self-reproducing systems, again with only pencil and paper.Aleksandr LyapunovandJules Henri Poincaréworked on the foundations ofchaos theorywithout anycomputerat all. At the same time,Howard T. Odum, known as a radiation ecologist, recognized that the study of general systems required a language that could depictenergetics,thermodynamicsandkineticsat any system scale. To fulfill this role, Odum developed a general system, oruniversal language, based on the circuit language ofelectronics, known as theEnergy Systems Language. TheCold Waraffected the research project for systems theory in ways that sorely disappointed many of the seminal theorists. Some began to recognize that theories defined in association with systems theory had deviated from the initial general systems theory view.[41]Economist Kenneth Boulding, an early researcher in systems theory, had concerns over the manipulation of systems concepts. Boulding concluded from the effects of the Cold War that abuses ofpoweralways prove consequential and that systems theory might address such issues.[29]: 229–233Since the end of the Cold War, a renewed interest in systems theory emerged, combined with efforts to strengthen anethical[42]view on the subject. In sociology, systems thinking also began in the 20th century, includingTalcott Parsons'action theory[43]andNiklas Luhmann'ssocial systems theory.[44][45]According to Rudolf Stichweh (2011):[43]: 2 Since its beginnings thesocial scienceswere an important part of the establishment of systems theory... [T]he two most influential suggestions were the comprehensive sociological versions of systems theory which were proposed by Talcott Parsons since the 1950s and by Niklas Luhmann since the 1970s. Elements of systems thinking can also be seen in the work ofJames Clerk Maxwell, particularlycontrol theory. 
Many early systems theorists aimed at finding a general systems theory that could explain all systems in all fields of science. Ludwig von Bertalanffy began developing his 'general systems theory' via lectures in 1937 and then via publications from 1946.[37] The concept received extensive focus in his 1968 book, General System Theory: Foundations, Development, Applications.[30]
There are many definitions of a general system. Properties that definitions commonly include are: an overall goal of the system; parts of the system and relationships between these parts; and emergent properties of the interaction between the parts of the system that are not produced by any part on its own.[46]: 58 Derek Hitchins defines a system in terms of entropy, as a collection of parts and relationships between the parts where the parts and their interrelationships decrease entropy.[46]: 58
Bertalanffy aimed to bring together under one heading the organismic science that he had observed in his work as a biologist. He wanted to use the word system for those principles that are common to systems in general. In General System Theory (1968), he wrote:[30]: 32
[T]here exist models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind, the nature of their component elements, and the relationships or "forces" between them. It seems legitimate to ask for a theory, not of systems of a more or less special kind, but of universal principles applying to systems in general.
In the preface to von Bertalanffy's Perspectives on General System Theory, Ervin László stated:[6]
Thus when von Bertalanffy spoke of Allgemeine Systemtheorie it was consistent with his view that he was proposing a new perspective, a new way of doing science. It was not directly consistent with an interpretation often put on "general system theory", to wit, that it is a (scientific) "theory of general systems." To criticize it as such is to shoot at straw men. Von Bertalanffy opened up something much broader and of much greater significance than a single theory (which, as we now know, can always be falsified and has usually an ephemeral existence): he created a new paradigm for the development of theories.
Bertalanffy outlines systems inquiry into three major domains: philosophy, science, and technology. In his work with the Primer Group, Béla H. Bánáthy generalized the domains into four integratable domains of systemic inquiry: philosophy, theory, methodology, and application. These operate in a recursive relationship, he explained; integrating 'philosophy' and 'theory' as knowledge, and 'method' and 'application' as action, systems inquiry is thus knowledgeable action.[47][failed verification]
General systems may be split into a hierarchy of systems, where there are fewer interactions between the different systems than between the components within each system. The alternative is heterarchy, where all components within the system interact with one another.[46]: 65 Sometimes an entire system will be represented inside another system as a part, sometimes referred to as a holon.[46] These hierarchies of systems are studied in hierarchy theory.[48] The amount of interaction between parts of systems higher in the hierarchy and parts of the system lower in the hierarchy is reduced. If all the parts of a system are tightly coupled (they interact with one another a lot) then the system cannot be decomposed into different systems.
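The definitional ideas above, a system as parts plus relationships, with hierarchy and coupling determining whether it can be decomposed, can be put into a small sketch. The following Python fragment is an invented illustration, not an implementation of any cited author's formalism; the part names, interaction weights, and threshold are assumptions. It treats a system as a weighted set of relationships and asks whether two candidate subsystems are only weakly coupled to each other, in the spirit of the near decomposability discussed in the next paragraph.

```python
# Toy sketch of a "system" as parts plus weighted relationships, with a simple
# coupling measure used to test whether it splits into loosely coupled subsystems.
# The part names, interaction weights, and the 0.1 threshold are invented.

interactions = {
    ("sensor", "controller"): 0.9,
    ("controller", "actuator"): 0.8,
    ("actuator", "sensor"): 0.7,      # feedback path closing the loop
    ("controller", "logger"): 0.05,   # weak link to a second, loosely coupled subsystem
    ("logger", "archive"): 0.9,
}

subsystem_a = {"sensor", "controller", "actuator"}
subsystem_b = {"logger", "archive"}

def coupling(block_x, block_y):
    """Total interaction strength between two sets of parts."""
    return sum(w for (p, q), w in interactions.items()
               if (p in block_x and q in block_y) or (p in block_y and q in block_x))

within_a = coupling(subsystem_a, subsystem_a)
within_b = coupling(subsystem_b, subsystem_b)
between = coupling(subsystem_a, subsystem_b)

print("within A:", within_a, "within B:", within_b, "between:", between)
# In this toy sense the system is "nearly decomposable" when the coupling between
# the candidate subsystems is much smaller than the coupling inside each of them.
print("nearly decomposable?", between < 0.1 * min(within_a, within_b))
```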
The amount of coupling between parts of a system may differ temporally, with some parts interacting more often than others, or may differ between processes in a system.[49]: 293 Herbert A. Simon distinguished between decomposable, nearly decomposable and nondecomposable systems.[46]: 72
Russell L. Ackoff distinguished general systems by how their goals and subgoals could change over time. He distinguished between goal-maintaining, goal-seeking, multi-goal and reflective (or goal-changing) systems.[46]: 73
Cybernetics is the study of the communication and control of regulatory feedback both in living and lifeless systems (organisms, organizations, machines), and in combinations of those. Its focus is how anything (digital, mechanical or biological) controls its behavior, processes information, reacts to information, and changes or can be changed to better accomplish those three primary tasks.
The terms systems theory and cybernetics have been widely used as synonyms. Some authors use the term cybernetic systems to denote a proper subset of the class of general systems, namely those systems that include feedback loops. However, Gordon Pask's differences of eternal interacting actor loops (that produce finite products) make general systems a proper subset of cybernetics. In cybernetics, complex systems have been examined mathematically by such researchers as W. Ross Ashby, Norbert Wiener, John von Neumann, and Heinz von Foerster.
Threads of cybernetics began in the late 1800s that led toward the publishing of seminal works (such as Wiener's Cybernetics in 1948 and Bertalanffy's General System Theory in 1968). Cybernetics arose more from engineering fields and GST from biology. If anything, it appears that although the two probably mutually influenced each other, cybernetics had the greater influence. Bertalanffy specifically made the point of distinguishing between the areas in noting the influence of cybernetics:
Systems theory is frequently identified with cybernetics and control theory. This again is incorrect. Cybernetics as the theory of control mechanisms in technology and nature is founded on the concepts of information and feedback, but as part of a general theory of systems.... [T]he model is of wide application but should not be identified with 'systems theory' in general ... [and] warning is necessary against its incautious expansion to fields for which its concepts are not made.[30]: 17–23
Cybernetics, catastrophe theory, chaos theory and complexity theory have the common goal of explaining complex systems that consist of a large number of mutually interacting and interrelated parts in terms of those interactions. Cellular automata, neural networks, artificial intelligence, and artificial life are related fields, but they do not try to describe general (universal) complex (singular) systems. The best context in which to compare the different "C"-theories about complex systems is historical, one that emphasizes different tools and methodologies, from pure mathematics in the beginning to pure computer science today. Since the beginning of chaos theory, when Edward Lorenz accidentally discovered a strange attractor with his computer, computers have become an indispensable source of information. One could not imagine the study of complex systems without the use of computers today. Complex adaptive systems (CAS), coined by John H.
Holland, Murray Gell-Mann, and others at the interdisciplinary Santa Fe Institute, are special cases of complex systems: they are complex in that they are diverse and composed of multiple, interconnected elements; they are adaptive in that they have the capacity to change and learn from experience. In contrast to control systems, in which negative feedback dampens and reverses disequilibria, CAS are often subject to positive feedback, which magnifies and perpetuates changes, converting local irregularities into global features.
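The contrast just drawn, negative feedback damping a disturbance in a control system versus positive feedback amplifying it in a complex adaptive system, can be seen in a few lines of code. The following Python fragment is a toy illustration under assumed gains and step size, not a model of any real system.

```python
# Toy contrast of feedback regimes (assumed gains and step size):
# negative feedback damps a disturbance; positive feedback amplifies it.

def run(feedback, steps=50, dt=0.1):
    """Evolve a small deviation x from equilibrium under the given feedback rule."""
    x = 1.0                       # initial disturbance away from equilibrium (x = 0)
    for _ in range(steps):
        x += feedback(x) * dt
    return x

negative = lambda x: -1.0 * x     # correction opposes the deviation
positive = lambda x: +1.0 * x     # "correction" reinforces the deviation

print("negative feedback, final deviation:", round(run(negative), 4))   # shrinks toward 0
print("positive feedback, final deviation:", round(run(positive), 2))   # grows rapidly
```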
https://en.wikipedia.org/wiki/Systems_theory
Common sense(fromLatinsensus communis) is "knowledge, judgement, and taste which is more or less universal and which is held more or less without reflection or argument".[1]As such, it is often considered to represent the basic level of sound practical judgement or knowledge of basic facts that any adult human being ought to possess.[2]It is "common" in the sense of being shared by nearly all people. Relevant terms from other languages used in such discussions include the aforementioned Latin, itself translating Ancient Greekκοινὴ αἴσθησις(koinḕ aísthēsis), and Frenchbon sens. However, these are not straightforward translations in all contexts, and in English different shades of meaning have developed. In philosophical and scientific contexts, since theAge of Enlightenmentthe term "common sense" has been used forrhetoricaleffect both approvingly and disapprovingly. On the one hand it has been a standard forgood taste, good sense, and source of scientific and logicalaxioms. On the other hand it has been equated toconventional wisdom, vulgarprejudice, andsuperstition.[3] "Common sense" has at least two older and more specialized meanings which have influenced the modern meanings, and are still important inphilosophy. The original historical meaning is the capability of the animal soul (ψῡχή,psūkhḗ), proposed byAristotleto explain how the different senses join and enable discrimination of particular objects by people and other animals. This common sense is distinct from the severalsensory perceptionsand from humanrational thought, but it cooperates with both. The second philosophical use of the term is Roman-influenced, and is used for the natural human sensitivity for other humans and the community.[4]Just like the everyday meaning, both of the philosophical meanings refer to a type of basic awareness and ability to judge that most people are expected to share naturally, even if they cannot explain why. All these meanings of "common sense", including the everyday ones, are interconnected in a complex history and have evolved during important political and philosophical debates in modernWestern civilisation, notably concerning science, politics and economics.[5]The interplay between the meanings has come to be particularly notable in English, as opposed to other western European languages, and the English term has in turn become international.[6] It was at the beginning of the 18th century that this old philosophical term first acquired its modern English meaning: "Those plain, self-evident truths or conventional wisdom that one needed no sophistication to grasp and no proof to accept precisely because they accorded so well with the basic (common sense) intellectual capacities and experiences of the whole social body."[7]This began withDescartes's criticism of it, and what came to be known as the dispute between "rationalism" and "empiricism". In the opening line of one of his most famous books,Discourse on Method, Descartes established the most common modern meaning, and its controversies, when he stated that everyone has a similar and sufficient amount of common sense (bon sens), but it is rarely used well. Therefore, a skeptical logical method described by Descartes needs to be followed and common sense should not be overly relied upon.[8]In the ensuing 18th centuryEnlightenment, common sense came to be seen more positively as the basis for empiricist modern thinking. 
It was contrasted tometaphysics, which was, likeCartesianism, associated with theAncien Régime.Thomas Paine's polemical pamphletCommon Sense(1776) has been described as the most influential political pamphlet of the 18th century, affecting both theAmericanandFrench revolutions.[3] The origin of the term "common sense" is in the works of Aristotle.Heller-Roazen (2008)writes that "In different ways the philosophers of medieval Latin and Arabic tradition, fromAl-FarabitoAvicenna,Averroës,Albert, andThomas, found in theDe Animaand theParva Naturaliathe scattered elements of a coherent doctrine of the "central" faculty of the sensuous soul."[9]It was "one of the most successful and resilient of Aristotelian notions".[10] The best-known case isDe AnimaBook III, chapter 1, especially at line 425a27.[11]The passage is about how the animal mind converts raw sense perceptions from thefive specializedsense perceptions, into perceptions of real things moving and changing, which can be thought about. According to Aristotle's understanding of perception, each of the five senses perceives one type of "perceptible" or "sensible" which is specific (ἴδια,idia) to it. For example, sight can see colour. But Aristotle was explaining how the animal mind, not just the human mind, links and categorizes different tastes, colours, feelings, smells and sounds in order to perceive real things in terms of the "common sensibles" (or "common perceptibles"). In this discussion, "common" (κοινή,koiné) is a term opposed to specific or particular (idia). The Greek for these common sensibles istá koiná(τά κοινᾰ́), which means shared or common things, and examples include the oneness of each thing, with its specific shape and size and so on, and the change or movement of each thing.[12]Distinct combinations of these properties are common to all perceived things.[13] In this passage, Aristotle explained that concerning thesekoiná(such as movement) people have a sense — a "common sense" or sense of the common things (aísthēsis koinḕ) — and there is no specific (idéā) sense perception for movement and otherkoiná, because then we would not perceive thekoináat all, except byaccident(κᾰτᾰ́ σῠμβεβηκός,katá sumbebēkós). As examples of perceiving by accident Aristotle mentions using the specific sense perception vision on its own to try to see that something is sweet, or to try to recognize a friend only by their distinctive color.Lee (2011, p. 31) explains that "when I see Socrates, it is not insofar as he is Socrates that he is visible to my eye, but rather because he is coloured". So the normal five individual senses do sense the common perceptibles according to Aristotle (and Plato), but it is not something they necessarily interpret correctly on their own. Aristotle proposes that the reason for having several senses is in fact that it increases the chances that we can distinguish and recognize things correctly, and not just occasionally or by accident.[14]Each sense is used to identify distinctions, such as sight identifying the difference between black and white, but, says Aristotle, all animals with perception must have "some one thing" that can distinguish black from sweet.[15]The common sense is where this comparison happens, and this must occur by comparing impressions (or symbols or markers;σημεῖον,sēmeîon, 'sign, mark') of what the specialist senses have perceived.[16]The common sense is therefore also where a type ofconsciousnessoriginates, "for it makes us aware of having sensations at all". 
And it receives physical picture imprints from the imaginative faculty, which are then memories that can be recollected.[17] The discussion was apparently intended to improve upon the account of Aristotle's friend and teacherPlatoin hisSocratic dialogue, theTheaetetus.[18]But Plato's dialogue presented an argument that recognisingkoináis an active thinking process in the rational part of the human soul, making the senses instruments of the thinking part of man. Plato's Socrates says this kind of thinking is not a kind of sense at all. Aristotle, trying to give a more general account of the souls of all animals, not just humans, moved the act of perception out of the rational thinking soul into thissensus communis, which is something like a sense, and something like thinking, but not rational.[19] The passage is difficult to interpret and there is little consensus about the details.[20]Gregorić (2007, pp. 204–205) has argued that this may be because Aristotle did not use the term as a standardized technical term at all. For example, in some passages in his works, Aristotle seems to use the term to refer to the individual sense perceptions simply being common to all people, or common to various types of animals. There is also difficulty with trying to determine whether the common sense is truly separable from the individual sense perceptions and from imagination, in anything other than a conceptual way as a capability. Aristotle never fully spells out the relationship between the common sense and theimaginative faculty(φᾰντᾰσῐ́ᾱ,phantasíā), although the two clearly work together in animals, and not only humans, for example in order to enable a perception of time. They may even be the same.[17][19]Despite hints by Aristotle himself that they were united, early commentators such asAlexander of Aphrodisiasand Al-Farabi felt they were distinct, but later, Avicenna emphasized the link, influencing future authors including Christian philosophers.[21][22]Gregorić (2007, p. 205) argues that Aristotle used the term "common sense" both to discuss the individual senses when these act as a unity, which Gregorić calls "the perceptual capacity of the soul", or the higher level "sensory capacity of the soul" that represents the senses and the imagination working as a unity. According to Gregorić, there appears to have been a standardization of the termkoinḕ aísthēsisas a term for the perceptual capacity (not the higher level sensory capacity), which occurred by the time of Alexander of Aphrodisias at the latest.[23] Compared to Plato, Aristotle's understanding of the soul (psūkhḗ) has an extra level of complexity in the form of thenoûsor "intellect"—which is something only humans have and enables humans to perceive things differently from other animals. It works with images coming from the common sense and imagination, using reasoning (λόγος,lógos) as well as theactive intellect. Thenoûsidentifies the trueforms of things, while the common sense identifies shared aspects of things. Though scholars have varying interpretations of the details, Aristotle's "common sense" was in any case not rational, in the sense that it implied no ability to explain the perception.Reasonorrationality(lógos) exists only in man according to Aristotle, and yet some animals can perceive "common perceptibles" such as change and shape, and some even have imagination according to Aristotle. 
Animals with imagination come closest to having something like reasoning and noûs.[24] Plato, on the other hand, was apparently willing to allow that animals could have some level of thought, meaning that he did not have to explain their sometimes complex behavior with a strict division between high-level perception processing and the human-like thinking such as being able to form opinions.[25] Gregorić additionally argues that Aristotle can be interpreted as using the verbs phroneîn and noeîn to distinguish two types of thinking or awareness, the first being found in animals and the second unique to humans and involving reason.[26] Therefore, in Aristotle (and the medieval Aristotelians) the universals used to identify and categorize things are divided into two. In medieval terminology these are the species sensibilis used for perception and imagination in animals, and the species intelligibilis or apprehendable forms used in the human intellect or noûs.
Aristotle also occasionally called the koinḕ aísthēsis (or one version of it) the prôton aisthētikón (πρῶτον αἰσθητῐκόν, lit. 'first of the senses'). (According to Gregorić, this is specifically in contexts where it refers to the higher order common sense that includes imagination.) Later philosophers developing this line of thought, such as Themistius, Galen, and Al-Farabi, called it the ruler of the senses or ruling sense, apparently a metaphor developed from a section of Plato's Timaeus (70b).[22] Augustine and some of the Arab writers also called it the "inner sense".[21]
The concept of the inner senses, plural, was further developed in the Middle Ages. Under the influence of the great Persian philosophers Al-Farabi and Avicenna, several inner senses came to be listed. "Thomas Aquinas and John of Jandun recognized four internal senses: the common sense, imagination, vis cogitativa, and memory. Avicenna, followed by Robert Grosseteste, Albert the Great, and Roger Bacon, argued for five internal senses: the common sense, imagination, fantasy, vis aestimativa, and memory."[27] By the time of Descartes and Hobbes, in the 1600s, the inner senses had been standardized to five wits, which complemented the more well-known five "external" senses.[21] Under this medieval scheme the common sense was understood to be seated not in the heart, as Aristotle had thought, but in the anterior Galenic ventricle of the brain. The anatomist Andreas Vesalius found no connections between the anterior ventricle and the sensory nerves, leading to speculation about other parts of the brain into the 1600s.[28]
"Sensus communis" is the Latin translation of the Greek koinḕ aísthēsis, which came to be recovered by Medieval scholastics when discussing Aristotelian theories of perception. In the earlier Latin of the Roman Empire, the term had taken a distinct ethical detour, developing new shades of meaning. These especially Roman meanings were apparently influenced by several Stoic Greek terms with the word koinḗ (κοινή, 'common, shared'); not only koinḕ aísthēsis, but also such terms as koinós noûs (κοινός νοῦς, 'common mind/thought/reason'), koinḗ énnoia (κοινή ἔννοιᾰ), and koinonoēmosúnē, all of which involve noûs—something, at least in Aristotle, that would not be present in "lower" animals.[29]
Another link between Latin communis sensus and Aristotle's Greek was in rhetoric, a subject that Aristotle was the first to systematize.
In rhetoric, a prudent speaker must take account of opinions (δόξαι,dóxai) that are widely held.[32]Aristotle referred to such commonly held beliefs not askoinaí dóxai(κοιναί δόξαι,lit.''common opinions''), which is a term he used for self-evident logical axioms, but with other terms such aséndóxa(ἔνδόξα). In hisRhetoricfor example Aristotle mentions "koinōn [...] tàs písteis" or "common beliefs", saying that "our proofs and arguments must rest on generally accepted principles, [...] when speaking of converse with the multitude".[33]In a similar passage in his own work on rhetoric,De Oratore,Cicerowrote that "in oratory the very cardinal sin is to depart from the language of everyday life and the usage approved by the sense of the community." The sense of the community is in this case one translation of "communis sensus" in the Latin of Cicero.[34][35] Whether the Latin writers such asCicerodeliberately used this Aristotelian term in a new more peculiarly Roman way, probably also influenced by Greek Stoicism, therefore remains a subject of discussion.Schaeffer (1990, p. 112) has proposed for example that theRoman Republicmaintained a very "oral" culture whereas in Aristotle's time rhetoric had come under heavy criticism from philosophers such as Socrates.Peters Agnew (2008)argues, in agreement with Shaftesbury, that the concept developed from the Stoic concept of ethical virtue, influenced by Aristotle, but emphasizing the role of both the individual perception, and shared communal understanding. A complex of ideas attached itself to the term, to be almost forgotten in the Middle Ages, and eventually returning into ethical discussion in 18th-century Europe, after Descartes. As with other meanings of common sense, for the Romans of the classical era "it designates a sensibility shared by all, from which one may deduce a number of fundamental judgments, that need not, or cannot, be questioned by rational reflection".[36]But even though Cicero did at least once use the term in a manuscript on Plato'sTimaeus(concerning a primordial "sense, one and common for all [...] connected with nature"), he and other Roman authors did not normally use it as a technical term limited to discussion about sense perception, as Aristotle apparently had inDe Anima, and as the Scholastics later would in the Middle Ages.[37]Instead of referring to all animal judgment, it was used to describe pre-rational, widely shared human beliefs, and therefore it was a near equivalent to the concept ofhumanitas. This was a term that could be used by Romans to imply not onlyhuman nature, but also humane conduct, good breeding, refined manners, and so on.[38]Apart from Cicero,Quintilian,Lucretius,Seneca,Horaceand some of the most influential Roman authors influenced by Aristotle's rhetoric and philosophy used the Latin term "sensus communis" in a range of such ways.[39]AsC. S. Lewiswrote: Quintilian says it is better to send a boy to school than to have a private tutor for him at home; for if he is kept away from the herd (congressus) how will he ever learn thatsensuswhich we callcommunis? (I, ii, 20). On the lowest level it means tact. In Horace the man who talks to you when you obviously don't want to talk lackscommunis sensus.[40] Compared to Aristotle and his strictest medieval followers, these Roman authors were not so strict about the boundary between animal-like common sense and specially human reasoning. 
As discussed above, Aristotle had attempted to make a clear distinction between, on the one hand, imagination and the sense perception which both use the sensiblekoiná, and which animals also have; and, on the other hand,noûs(intellect) and reason, which perceives another type ofkoiná, the intelligible forms, which (according to Aristotle) only humans have. In other words, these Romans allowed that people could have animal-like shared understandings of reality, not just in terms of memories of sense perceptions, but in terms of the way they would tend to explain things, and in the language they use.[41] One of the last notable philosophers to accept something like the Aristotelian "common sense" wasDescartesin the 17th century, but he also undermined it. He described this inner faculty when writing in Latin in hisMeditations on first philosophy.[42]The common sense is the link between the body and its senses, and the true human mind, which according to Descartes must be purely immaterial. Unlike Aristotle, who had placed it in the heart, by the time of Descartes this faculty was thought to be in the brain, and he located it in thepineal gland.[43]Descartes' judgement of this common sense was that it was enough to persuade the human consciousness of the existence of physical things, but often in a very indistinct way. To get a more distinct understanding of things, it is more important to be methodical and mathematical.[44]This line of thought was taken further, if not by Descartes himself then by those he influenced, until the concept of a faculty or organ of common sense was itself rejected. René Descartes is generally credited with making obsolete the notion that there was an actual faculty within the human brain that functioned as asensus communis. The French philosopher did not fully reject the idea of the inner senses, which he appropriated from theScholastics. But he distanced himself from the Aristotelian conception of a common sense faculty, abandoning it entirely by the time of hisPassions of the Soul(1649).[45] Contemporaries such asGassendiandHobbeswent beyond Descartes in some ways in their rejection of Aristotelianism, rejecting explanations involving anything other than matter and motion, including the distinction between the animal-like judgement of sense perception, a special separate common sense, and the human mind ornoûs, which Descartes had retained from Aristotelianism.[46]In contrast to Descartes who "found it unacceptable to assume that sensory representations may enter the mental realm from without"... According to Hobbes [...] man is no different from the other animals. [...] Hobbes' philosophy constituted a more profound rupture withPeripateticthought. He accepted mental representations but [...] "All sense is fancy", as Hobbes famously put it, with the only exception of extension and motion.[47] But Descartes used two different terms in his work, not only the Latin term "sensus communis", but also the French termbon sens, with which he opens hisDiscourse on Method. And this second concept survived better. This work was written in French, and does not directly discuss the Aristotelian technical theory of perception.Bon sensis the equivalent of modern English "common sense" or "good sense". 
As the Aristotelian meaning of the Latin term began to be forgotten after Descartes, his discussion ofbon sensgave a new way of definingsensus communisin various European languages (including Latin, even though Descartes himself did not translatebon sensassensus communis, but treated them as two separate things).[48] Schaeffer (1990, p. 2) writes that "Descartes is the source of the most common meaning ofcommon sensetoday: practical judgment". Gilson noted that Descartes actually gavebon senstwo related meanings, first the basic and widely shared ability to judge true and false, which he also callsraison(lit.''reason''); and second, wisdom, the perfected version of the first. The Latin term Descartes uses,bona mens(lit.''good mind''), derives from the Stoic authorSenecawho only used it in the second sense. Descartes was being original.[49] The idea that now became influential, developed in both the Latin and French works of Descartes, though coming from different directions, is that common good sense (and indeed sense perception) is not reliable enough for the new Cartesian method ofskepticalreasoning.[50]The Cartesian project to replace common good sense with clearly defined mathematical reasoning was aimed at certainty, and not mere probability. It was promoted further by people such as Hobbes,Spinoza, and others and continues to have important impacts on everyday life. In France, the Netherlands, Belgium, Spain and Italy, it was in its initial florescence associated with the administration of Catholic empires of the competingBourbon, andHabsburgdynasties, both seeking to centralize their power in a modern way, responding toMachiavellianismandProtestantismas part of theCounter-Reformation.[51] Cartesian theory offered a justification for innovative social change achieved through the courts and administration, an ability to adapt the law to changing social conditions by making the basis for legislation "rational" rather than "traditional".[52] So after Descartes, critical attention turned from Aristotle and his theory of perception, and more towards Descartes' own treatment of common good sense, concerning which several 18th-century authors found help in Roman literature. During theEnlightenment, Descartes' insistence upon a mathematical-style method of thinking that treated common sense and the sense perceptions sceptically, was accepted in some ways, but also criticized. On the one hand, the approach of Descartes is and was seen as radically sceptical in some ways. On the other hand, like the Scholastics before him, while being cautious of common sense, Descartes was instead seen to rely too much on undemonstrable metaphysical assumptions in order to justify his method, especially in its separation of mind and body (with thesensus communislinking them). Cartesians such asHenricus Regius,Geraud de Cordemoy, andNicolas Malebrancherealized that Descartes's logic could give no evidence of the "external world" at all, meaning it had to be taken on faith.[53]Though his own proposed solution was even more controversial, Berkeley famously wrote that enlightenment requires a "revolt from metaphysical notions to the plain dictates of nature and common sense".[54]Descartes and the Cartesian "rationalists", rejected reliance upon experience, the senses andinductive reasoning, and seemed to insist that certainty was possible. The alternative to induction, deductive reasoning, demanded a mathematical approach, starting from simple and certain assumptions. 
This in turn required Descartes (and later rationalists such as Kant) to assume the existence of innate or "a priori"knowledgein the human mind—a controversial proposal. In contrast to the rationalists, the "empiricists" took their orientation fromFrancis Bacon, whose arguments for methodical science were earlier than those of Descartes, and less directed towards mathematics and certainty. Bacon is known for his doctrine of the "idols of the mind", presented in hisNovum Organum, and in hisEssaysdescribed normal human thinking as biased towards believing in lies.[55]But he was also the opponent of all metaphysical explanations of nature, or over-reaching speculation generally, and a proponent of science based on small steps of experience, experimentation and methodical induction. So while agreeing upon the need to help common sense with a methodical approach, he also insisted that starting from common sense, including especially common sense perceptions, was acceptable and correct. He influencedLockeandPierre Bayle, in their critique of metaphysics, and in 1733Voltaire"introduced him as the "father" of thescientific method" to a French audience, an understanding that was widespread by 1750. Together with this, references to "common sense" became positive and associated with modernity, in contrast to negative references to metaphysics, which was associated with theAncien Régime.[3] As mentioned above, in terms of the more general epistemological implications of common sense, modern philosophy came to use the term common sense like Descartes, abandoning Aristotle's theory. While Descartes had distanced himself from it, John Locke abandoned it more openly, while still maintaining the idea of "common sensibles" that are perceived. But thenGeorge Berkeleyabandoned both.[45]David Humeagreed with Berkeley on this, and like Locke and Vico saw himself as following Bacon more than Descartes. In his synthesis, which he saw as the first Baconian analysis of man (something the lesser known Vico had claimed earlier), common sense is entirely built up from shared experience and shared innate emotions, and therefore it is indeed imperfect as a basis for any attempt to know the truth or to make the best decision. But he defended the possibility of science without absolute certainty, and consistently described common sense as giving a valid answer to the challenge ofextreme skepticism. Concerning such sceptics, he wrote: But would these prejudiced reasoners reflect a moment, there are many obvious instances and arguments, sufficient to undeceive them, and make them enlarge their maxims and principles. Do they not see the vast variety of inclinations and pursuits among our species; where each man seems fully satisfied with his own course of life, and would esteem it the greatest unhappiness to be confined to that of his neighbour? Do they not feel in themselves, that what pleases at one time, displeases at another, by the change of inclination; and that it is not in their power, by their utmost efforts, to recall that taste or appetite, which formerly bestowed charms on what now appears indifferent or disagreeable? [...] Do you come to a philosopher as to a cunning man, to learn something by magic or witchcraft, beyond what can be known by common prudence and discretion?[56] Once Thomas Hobbes andSpinozahad applied Cartesian approaches topolitical philosophy, concerns about the inhumanity of the deductive approach of Descartes increased. 
With this in mind, Shaftesbury andGiambattista Vicopresented new arguments for the importance of the Roman understanding of common sense, in what is now often referred to, afterHans-Georg Gadamer, as ahumanistinterpretation of the term.[57]Their concern had several inter-related aspects. One ethical concern was the deliberately simplified method that treated human communities as made up of selfish independent individuals (methodological individualism), ignoring thesense of communitythat the Romans understood as part of common sense. Another connected epistemological concern was that by consideringcommon good senseas inherently inferior to Cartesian conclusions developed from simple assumptions, an important type of wisdom was being arrogantly ignored. The Earl's seminal 1709 essaySensus Communis: An Essay on the Freedom of Wit and Humourwas a highly erudite and influential defense of the use of irony and humour in serious discussions, at least among men of "Good Breeding". He drew upon authors such asSeneca,Juvenal,HoraceandMarcus Aurelius, for whom, he saw, common sense was not just a reference to widely held vulgar opinions, but something cultivated among educated people living in better communities. One aspect of this, later taken up by authors such as Kant, was good taste. Another very important aspect of common sense particularly interesting to later British political philosophers such asFrancis Hutchesonwas what came to be calledmoral sentiment, which is different from a tribal or factional sentiment, but a more general fellow feeling that is very important for larger communities: A publick Spirit can come only from a social Feeling orSense of Partnershipwith Human Kind. Now there are none so far from beingPartnersin thisSense, or sharers in thiscommon Affection, as they who scarcely knowan Equall, nor consider themselves as subject to any law ofFellowshiporCommunity. And thus Morality and good Government go together.[58] Hutcheson described it as, "a Publick Sense, viz. "our Determination to be pleased with the Happiness of others, and to be uneasy at their Misery."" which, he explains, "was sometimes calledκοινονοημοσύνη[59]or Sensus Communis by some of the Antients".[60] A reaction to Shaftesbury in defense of the Hobbesian approach of treating communities as driven by individual self-interest, was not long coming inBernard Mandeville's controversial works. Indeed, this approach was never fully rejected, at least in economics. And so despite the criticism heaped upon Mandeville and Hobbes by Adam Smith, Hutcheson's student and successor in Glasgow university, Smith made self-interest a core assumption within nascent modern economics, specifically as part of the practical justification for allowing free markets. By the late enlightenment period in the 18th century, the communal sense had become the "moral sense" or "moral sentiment" referred to by Hume andAdam Smith, the latter writing in plural of the "moral sentiments" with the key one beingsympathy, which was not so much a public spirit as such, but a kind of extension of self-interest.Jeremy Benthamgives a summary of the plethora of terms used in British philosophy by the nineteenth century to describe common sense in discussions about ethics: Another man comes and alters the phrase: leaving out moral, and putting incommon, in the room of it. 
He then tells you, that his common sense teaches him what is right and wrong, as surely as the other's moral sense did: meaning by common sense, a sense of some kind or other, which he says, is possessed by all mankind: the sense of those, whose sense is not the same as the author's, being struck out of the account as not worth taking.[61] This was at least to some extent opposed to the Hobbesian approach, still today normal in economic theory, of trying to understand all human behaviour as fundamentally selfish, and would also be a foil to the new ethics of Kant. This understanding of a moral sense or public spirit remains a subject for discussion, although the term "common sense" is no longer commonly used for the sentiment itself.[62]In several European languages, a separate term for this type of common sense is used. For example, Frenchsens communand GermanGemeinsinnare used for this feeling of human solidarity, whilebon sens(good sense) andgesunder Verstand(healthy understanding) are the terms for everyday "common sense". According to Gadamer, at least in French and British philosophy a moral element in appeals to common sense (orbon sens), such as found in Reid, remains normal to this day.[63]But according to Gadamer, the civic quality implied in discussion ofsensus communisin other European countries did not take root in the German philosophy of the 18th and 19th centuries, despite the fact it consciously imitated much in English and French philosophy. "Sensus communiswas understood as a purely theoretical judgment, parallel to moral consciousness (conscience) andtaste."[64]The concept ofsensus communis"was emptied and intellectualized by the German enlightenment".[65]But German philosophy was becoming internationally important at this same time. Gadamer notes one less-known exception—theWürttemberg pietism, inspired by the 18th centurySwabianchurchman, M.Friedrich Christoph Oetinger, who appealed to Enlightenment figures in his critique of the Cartesian rationalism ofLeibnizandWolff, who were the most important German philosophers before Kant.[66] Vico, who taught classical rhetoric inNaples(where Shaftesbury died) under a Cartesian-influenced Spanish government, was not widely read until the 20th century, but his writings on common sense have been an important influence upon Hans-Georg Gadamer,Benedetto CroceandAntonio Gramsci.[29]Vico united the Roman and Greek meanings of the termcommunis sensus.[67]Vico's initial use of the term, which was of much inspiration to Gadamer for example, appears in hisOn the Study Methods of our Time, which was partly a defense of his own profession, given the reformist pressure upon both his University and the legal system in Naples. It presents common sense as something adolescents need to be trained in if they are not to "break into odd and arrogant behaviour when adulthood is reached", whereas teaching Cartesian method on its own harms common sense and stunts intellectual development. Rhetoric and elocution are not just for legal debate, but also educate young people to use their sense perceptions and their perceptions more broadly, building a fund of remembered images in their imagination, and then using ingenuity in creating linking metaphors, in order to makeenthymemes. Enthymemes are reasonings about uncertain truths and probabilities—as opposed to the Cartesian method, which was skeptical of all that could not be dealt with assyllogisms, including raw perceptions of physical bodies. 
Hence common sense is not just a "guiding standard ofeloquence" but also "the standard ofpractical judgment". The imagination or fantasy, which under traditional Aristotelianism was often equated with thekoinḕ aísthēsis, is built up under this training, becoming the "fund" (to use Schaeffer's term) accepting not only memories of things seen by an individual, but also metaphors and images known in the community, including the ones out of which language itself is made.[68] In its mature version, Vico's conception ofsensus communisis defined by him as "judgment without reflection, shared by an entire class, an entire people, and entire nation, or the entire human race". Vico proposed his own anti-Cartesian methodology for a new Baconian science, inspired, he said, byPlato,Tacitus,[69]Francis Bacon andGrotius. In this he went further than his predecessors concerning the ancient certainties available within vulgar common sense. What is required, according to his new science, is to find the common sense shared by different people and nations. He made this a basis for a new and better-founded approach to discussNatural Law, improving upon Grotius,John Selden, andPufendorfwho he felt had failed to convince, because they could claim no authority from nature. Unlike Grotius, Vico went beyond looking for one single set of similarities amongst nations but also established rules about how natural law properly changes as peoples change, and has to be judged relative to this state of development. He thus developed a detailed view of an evolving wisdom of peoples. Ancient forgotten wisdoms, he claimed, could be re-discovered by analysis of languages and myths formed under the influence of them.[70]This is comparable to bothMontesquieu'sSpirit of the Laws, as well as much laterHegelianhistoricism, both of which apparently developed without any awareness of Vico's work.[71] Contemporary with Hume, but critical of Hume's scepticism, a so-calledScottish school of Common Senseformed, whose basic principle was enunciated by its founder and greatest figure,Thomas Reid: If there are certain principles, as I think there are, which the constitution of our nature leads us to believe, and which we are under a necessity to take for granted in the common concerns of life, without being able to give a reason for them — these are what we call the principles of common sense; and what is manifestly contrary to them, is what we call absurd.[72] Thomas Reid was a successor to Francis Hutcheson and Adam Smith asProfessor of Moral Philosophy, Glasgow. While Reid's interests lay in the defense of common sense as a type of self-evident knowledge available to individuals, this was also part of a defense of natural law in the style of Grotius. He believed his use of "common sense" encompassed both the communal common sense described by Shaftesbury and Hutcheson, and the perceptive powers described by Aristotelians. Reid was criticised, partly for his critique of Hume, by Kant andJ. S. Mill, who were two of the most important influences in nineteenth century philosophy. He was blamed for over-stating Hume's scepticism of commonly held beliefs, and more importantly for not perceiving the problem with any claim that common sense could ever fulfill Cartesian (or Kantian) demands for absolute knowledge. Reid furthermore emphasized inborn common sense as opposed to only experience and sense perception. 
In this way his common sense has a similarity to the assertion ofa prioriknowledge asserted by rationalists like Descartes and Kant, despite Reid's criticism of Descartes concerning his theory of ideas. Hume was critical of Reid on this point. Despite the criticism, the influence of the Scottish school was notable for example upon Americanpragmatism, and modernThomism. The influence has been particularly important concerning the epistemological importance of asensus communisfor any possibility of rational discussion between people. Immanuel Kantdeveloped a new variant of the idea ofsensus communis, noting how having a sensitivity for what opinions are widely shared and comprehensible gives a sort of standard for judgment, and objective discussion, at least in the field ofaestheticsand taste: The common Understanding of men[gemeine Menschenverstand], which, as the mere sound (not yet cultivated) Understanding, we regard as the least to be expected from any one claiming the name of man, has therefore the doubtful honour of being given the name of common sense [Namen des Gemeinsinnes] (sensus communis); and in such a way that by the name common (not merely in our language, where the word actually has a double signification, but in many others) we understand vulgar, that which is everywhere met with, the possession of which indicates absolutely no merit or superiority. But under thesensus communiswe must include the Idea of acommunalsense [eines gemeinschaftlichen Sinnes], i.e. of a faculty of judgement, which in its reflection takes account (a priori) of the mode of representation of all other men in thought; in order as it were to compare its judgement with the collective Reason of humanity, and thus to escape the illusion arising from the private conditions that could be so easily taken for objective, which would injuriously affect the judgement.[73] Kant saw this concept as answering a particular need in his system: "the question of why aesthetic judgments are valid: since aesthetic judgments are a perfectly normal function of the same faculties of cognition involved in ordinary cognition, they will have the same universal validity as such ordinary acts of cognition".[74] But Kant's overall approach was very different from those of Hume or Vico. Like Descartes, he rejected appeals to uncertain sense perception and common sense (except in the very specific way he describes concerning aesthetics), or the prejudices of one's "Weltanschauung", and tried to give a new way to certainty through methodical logic, and an assumption of a type ofa prioriknowledge. He was also not in agreement with Reid and the Scottish school, who he criticized in hisProlegomena to Any Future Metaphysicsas using "the magic wand of common sense", and not properly confronting the "metaphysical" problem defined by Hume, which Kant wanted to be solved scientifically—the problem of how to use reason to consider how one ought to act. Kant used different words to refer to his aestheticsensus communis, for which he used Latin or else GermanGemeinsinn, and the more general English meaning which he associated with Reid and his followers, for which he used various terms such asgemeinen Menscheverstand,gesunden Verstand, orgemeinen Verstand.[75] According to Gadamer, in contrast to the "wealth of meaning" brought from the Roman tradition into humanism, Kant "developed his moral philosophy in explicit opposition to the doctrine of 'moral feeling' that had been worked out in English philosophy". 
Themoral imperative"cannot be based on feeling, not even if one does not mean an individual's feeling but common moral sensibility".[76]For Kant, thesensus communisonly applied to taste, and the meaning of taste was also narrowed as it was no longer understood as any kind of knowledge.[77]Taste, for Kant, is universal only in that it results from "the free play of all our cognitive powers", and is communal only in that it "abstracts from all subjective, private conditions such as attractiveness and emotion".[78] Kant himself did not see himself as a relativist, and was aiming to give knowledge a more solid basis, but asRichard J. Bernsteinremarks, reviewing this same critique of Gadamer: Once we begin to question whether there is a common faculty of taste (asensus communis), we are easily led down the path torelativism. And this is what did happen after Kant—so much so that today it is extraordinarily difficult to retrieve any idea of taste or aesthetic judgment that is more than the expression of personal preferences. Ironically (given Kant's intentions), the same tendency has worked itself out with a vengeance with regards to all judgments of value, including moral judgments.[79] Continuing the tradition of Reid and the enlightenment generally, the common sense of individuals trying to understand reality continues to be a serious subject in philosophy. In America, Reid influencedC. S. Peirce, the founder of the philosophical movement now known asPragmatism, which has become internationally influential. One of the names Peirce used for the movement was "Critical Common-Sensism". Peirce, who wrote afterCharles Darwin, suggested that Reid and Kant's ideas about inborn common sense could be explained by evolution. But while such beliefs might be well adapted to primitive conditions, they were not infallible, and could not always be relied upon. Another example still influential today is fromG. E. Moore, several of whose essays, such as the 1925 "A Defence of Common Sense", argued that individuals can make many types of statements about what they judge to be true, and that the individual and everyone else knows to be true.Michael Huemerhas advocated an epistemic theory he callsphenomenal conservatism, which he claims to accord with common sense by way ofinternalistintuition.[80] In twentieth century philosophy the concept of thesensus communisas discussed by Vico and especially Kant became a major topic of philosophical discussion. The theme of this discussion questions how far the understanding of eloquent rhetorical discussion (in the case of Vico), or communally sensitive aesthetic tastes (in the case of Kant) can give a standard or model for political, ethical and legal discussion in a world where forms ofrelativismare commonly accepted, and serious dialogue between very different nations is essential. Some philosophers such asJacques Rancièreindeed take the lead fromJean-François Lyotardand refer to the "postmodern" condition as one where there is "dissensus communis".[81] Hannah Arendtadapted Kant's concept ofsensus communisas a faculty of aesthetic judgement that imagines the judgements of others, into something relevant for political judgement. Thus she created a "Kantian" political philosophy, which, as she said herself, Kant did not write. She argued that there was often a banality to evil in the real world, for example in the case of someone likeAdolf Eichmann, which consisted in a lack ofsensus communisand thoughtfulness generally. 
Arendt and alsoJürgen Habermas, who took a similar position concerning Kant'ssensus communis, were criticised by Lyotard for their use of Kant'ssensus communisas a standard for real political judgement. Lyotard also saw Kant'ssensus communisas an important concept for understanding political judgement, not aiming at any consensus, but rather at a possibility of a "euphony" in "dis-sensus". Lyotard claimed that any attempt to impose anysensus communisin real politics would mean imposture by an empowered faction upon others.[82] In a parallel development,Antonio Gramsci, Benedetto Croce, and later Hans-Georg Gadamer took inspiration from Vico's understanding of common sense as a kind of wisdom of nations, going beyond Cartesian method. It has been suggested that Gadamer's most well-known work,Truth and Method, can be read as an "extended meditation on the implications of Vico's defense of the rhetorical tradition in response to the nascent methodologism that ultimately dominated academic enquiry".[83]In the case of Gadamer, this was in specific contrast to thesensus communisconcept in Kant, which he felt (in agreement with Lyotard) could not be relevant to politics if used in its original sense. Gadamer came into direct debate with his contemporary Habermas, the so-calledHermeneutikstreit. Habermas, with a self-declared Enlightenment "prejudice against prejudice" argued that if breaking free from the restraints of language is not the aim of dialectic, then social science will be dominated by whoever wins debates, and thus Gadamer's defense ofsensus communiseffectively defends traditional prejudices. Gadamer argued that being critical requires being critical of prejudices including the prejudice against prejudice. Some prejudices will be true. And Gadamer did not share Habermas' acceptance that aiming at going beyond language through method was not itself potentially dangerous. Furthermore, he insisted that because all understanding comes through language, hermeneutics has a claim to universality. As Gadamer wrote in the "Afterword" ofTruth and Method, "I find it frighteningly unreal when people like Habermas ascribe to rhetoric a compulsory quality that one must reject in favor of unconstrained, rational dialogue". Paul Ricoeurargued that Gadamer and Habermas were both right in part. As a hermeneutist like Gadamer he agreed with him about the problem of lack of any perspective outside of history, pointing out that Habermas himself argued as someone coming from a particular tradition. He also agreed with Gadamer that hermeneutics is a "basic kind of knowing on which others rest".[84]But he felt that Gadamer under-estimated the need for a dialectic that was critical and distanced, and attempting to go behind language.[85][86] A recent commentator on Vico, John D. Schaeffer has argued that Gadamer's approach tosensus communisexposed itself to the criticism of Habermas because it "privatized" it, removing it from a changing and oral community, following the Greek philosophers in rejecting true communal rhetoric, in favour of forcing the concept within aSocratic dialecticaimed at truth. Schaeffer claims that Vico's concept provides a third option to those of Habermas and Gadamer and he compares it to the recent philosophersRichard J. 
Bernstein,Bernard Williams,Richard Rorty, andAlasdair MacIntyre, and the recent theorist of rhetoric,Richard Lanham.[87] The other Enlightenment debate about common sense, concerning common sense as a term for an emotion or drive that is unselfish, also continues to be important in discussion of social science, and especiallyeconomics. The axiom that communities can be usefully modeled as a collection ofself-interested individualsis a central assumption in much of modernmathematical economics, and mathematical economics has now come to be an influential tool of political decision making. While the term "common sense" had already become less commonly used as a term for the empathetic moral sentiments by the time of Adam Smith, debates continue aboutmethodological individualismas something supposedly justified philosophically for methodological reasons (as argued for example byMilton Friedmanand more recently byGary S. Becker, both members of the so-calledChicago school of economics).[88]As in the Enlightenment, this debate therefore continues to combine debates about not only what the individual motivations of people are, but also what can be known about scientifically, and what should be usefully assumed for methodological reasons, even if the truth of the assumptions are strongly doubted. Economics and social science generally have been criticized as a refuge of Cartesian methodology. Hence, amongst critics of the methodological argument for assuming self-centeredness in economics are authors such asDeirdre McCloskey, who have taken their bearings from the above-mentioned philosophical debates involving Habermas, Gadamer, the anti-CartesianRichard Rortyand others, arguing that trying to force economics to follow artificial methodological laws is bad, and it is better to recognize social science as driven by rhetoric. Among Catholic theologians, writers such as theologianFrançois Fénelonand philosopherClaude Buffier(1661–1737) gave an anti-Cartesian defense of common sense as a foundation for knowledge. Other Catholic theologians took up this approach, and attempts were made to combine this with more traditional Thomism, for exampleJean-Marie de Lamennais. This was similar to the approach of Thomas Reid, who for example was a direct influence onThéodore Jouffroy. This meant basing knowledge upon something uncertain, and irrational.Matteo Liberatore, seeking an approach more consistent with Aristotle and Aquinas, equated this foundational common sense with thekoinaí dóxaiof Aristotle, that correspond to thecommunes conceptionesof Aquinas.[53]In the twentieth century, this debate is especially associated withÉtienne GilsonandReginald Garrigou-Lagrange.[89]Gilson pointed out that Liberatore's approach means categorizing such common beliefs as the existence of God or the immortality of the soul, under the same heading as (in Aristotle and Aquinas) such logical beliefs as that it is impossible for something to exist and not exist at the same time. This, according to Gilson, is going beyond the original meaning. Concerning Liberatore he wrote: Endeavours of this sort always end in defeat. In order to confer a technical philosophical value upon the common sense of orators and moralists it is necessary either to accept Reid's common sense as a sort of unjustified and unjustifiable instinct, which will destroy Thomism, or to reduce it to the Thomistintellectand reason, which will result in its being suppressed as a specifically distinct faculty of knowledge. 
In short, there can be no middle ground between Reid and St. Thomas.[53] Gilson argued that Thomism avoided the problem of having to decide between Cartesian innate certainties and Reid's uncertain common sense, and that "as soon as the problem of the existence of the external world was presented in terms of common sense, Cartesianism was accepted".[89] "Good Sense is, of all things among men, the most equally distributed; for every one thinks himself so abundantly provided with it, that those even who are the most difficult to satisfy in everything else, do not usually desire a larger measure of this quality than they already possess. And in this it is not likely that all are mistaken: the conviction is rather to be held as testifying that the power of judging aright and of distinguishingTruthfromError, which is properly what is calledGood SenseorReason, is by nature equal in all men; and that the diversity of our opinions, consequently, does not arise from some being endowed with a larger share of Reason than others, but solely from this, that we conduct our thoughts along different ways, and do not fix our attention on the same objects. For to be possessed of a vigorous mind is not enough; the prime requisite is rightly to apply it. The greatest minds, as they are capable of the highest excellencies, are open likewise to the greatest aberrations; and those who travel very slowly may yet make far greater progress, provided they keep always to the straight road, than those who, while they run, forsake it." Some say the Senses receive the Species of things, and deliver them to the Common-sense; and the Common Sense delivers them over to the Fancy, and the Fancy to the Memory, and the Memory to the Judgement, like handing of things from one to another, with many words making nothing understood. (Hobbes, Thomas,"II.: of imagination",The English Works of Thomas Hobbes of Malmesbury; Now First Collected and Edited by Sir William Molesworth, Bart., 11 vols., vol. 3 (Leviathan), London: Bohn).
https://en.wikipedia.org/wiki/Common_sense
In computing, linked data is structured data which is interlinked with other data so it becomes more useful through semantic queries. It builds upon standard Web technologies such as HTTP, RDF and URIs, but rather than using them to serve web pages only for human readers, it extends them to share information in a way that can be read automatically by computers. Part of the vision of linked data is for the Internet to become a global database.[1]

Tim Berners-Lee, director of the World Wide Web Consortium (W3C), coined the term in a 2006 design note about the Semantic Web project.[2] Linked data may also be open data, in which case it is usually described as Linked Open Data.[3]

In his 2006 "Linked Data" note, Tim Berners-Lee outlined four principles of linked data, paraphrased along the following lines:[2]

1. Use URIs to name (identify) things.
2. Use HTTP URIs so that these names can be looked up (dereferenced).
3. When a URI is looked up, provide useful information about the thing, using open standards such as RDF and SPARQL.
4. Include links to other related URIs, so that more things can be discovered.

Tim Berners-Lee later restated these principles in simplified form at a 2009 TED conference.[4] The components essential to a global Linked Data system as envisioned, and to any actual Linked Data subset within it, therefore include globally unique identifiers (URIs), a standard retrieval mechanism (HTTP), and a standard data model for the statements themselves (RDF).

Linked open data are linked data that are open data.[6][7][8] Tim Berners-Lee gives the clearest definition of linked open data as differentiated from linked data: Linked Open Data (LOD) is Linked Data which is released under an open license, which does not impede its reuse for free. Large linked open data sets include DBpedia, Wikibase, Wikidata and Open ICEcat.

In 2010, Tim Berners-Lee suggested a 5-star scheme for grading the quality of open data on the web, for which the highest ranking is Linked Open Data:[11] one star for data available on the web under an open licence, two for machine-readable structured data, three for a non-proprietary format, four for the use of URIs to denote things, and five for data that links to other people's data to provide context.

The term "linked open data" has been in use since at least February 2007, when the "Linking Open Data" mailing list[12] was created.[13] The mailing list was initially hosted by the SIMILE project[14] at the Massachusetts Institute of Technology. The goal of the W3C Semantic Web Education and Outreach group's Linking Open Data community project is to extend the Web with a data commons by publishing various open datasets as RDF on the Web and by setting RDF links between data items from different data sources. In October 2007, datasets consisted of over two billion RDF triples, which were interlinked by over two million RDF links.[16][17] By September 2011 this had grown to 31 billion RDF triples, interlinked by around 504 million RDF links. A detailed statistical breakdown was published in 2014.[18]

There are a number of European Union projects involving linked data. These include the linked open data around the clock (LATC) project,[19] the AKN4EU project for machine-readable legislative data,[20] the PlanetData project,[21] the DaPaaS (Data-and-Platform-as-a-Service) project,[22] and the Linked Open Data 2 (LOD2) project.[23][24][25] Data linking is one of the main goals of the EU Open Data Portal, which makes available thousands of datasets for anyone to reuse and link.

Ontologies are formal descriptions of data structures, and a number of well-known ontologies are used in linked data publishing. Clickable diagrams that show the individual datasets and their relationships within the DBpedia-spawned LOD cloud are available.[30][31]
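The principles above can be illustrated with a small sketch. The code below assumes the third-party Python library rdflib and uses made-up example URIs; it builds a handful of RDF statements about a resource, links it to an external dataset (DBpedia), and serializes the result as Turtle, the kind of payload an HTTP server might return when the URI is dereferenced. It is an illustration of the idea, not a recommended publishing workflow.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

# Hypothetical namespace for our own data; any HTTP URIs we control would do.
EX = Namespace("http://example.org/people/")

g = Graph()
g.bind("foaf", FOAF)
g.bind("ex", EX)

alice = EX["alice"]                        # 1. a URI names the thing
g.add((alice, RDF.type, FOAF.Person))      # 3. useful information, in RDF
g.add((alice, FOAF.name, Literal("Alice Example")))

# 4. link to another URI so that clients can discover more (here, DBpedia).
g.add((alice, FOAF.based_near, URIRef("http://dbpedia.org/resource/Berlin")))

# Serialize as Turtle; a server would return something like this when the
# HTTP URI is looked up (principle 2).
print(g.serialize(format="turtle"))
```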
https://en.wikipedia.org/wiki/Linked_data
Reason maintenance[1][2] is a knowledge representation approach to efficient handling of inferred information that is explicitly stored. Reason maintenance distinguishes between base facts, which can be defeated, and derived facts. As such it differs from belief revision which, in its basic form, assumes that all facts are equally important. Reason maintenance was originally developed as a technique for implementing problem solvers.[2] It encompasses a variety of techniques that share a common architecture:[3] two components, a reasoner and a reason maintenance system, communicate with each other via an interface. The reasoner uses the reason maintenance system to record its inferences and justifications of ("reasons" for) the inferences. The reasoner also informs the reason maintenance system which are the currently valid base facts (assumptions). The reason maintenance system uses the information to compute the truth value of the stored derived facts and to restore consistency if an inconsistency is derived.

A truth maintenance system, or TMS, is a knowledge representation method for representing both beliefs and their dependencies, together with an algorithm, the "truth maintenance algorithm", that manipulates and maintains the dependencies. The name truth maintenance is due to the ability of these systems to restore consistency. A truth maintenance system maintains consistency between old believed knowledge and current believed knowledge in the knowledge base (KB) through revision. If the current believed statements contradict the knowledge in the KB, then the KB is updated with the new knowledge. It may happen that the same data will again be believed, and the previous knowledge will be required in the KB; if the previous data are no longer present, they would have to be re-derived, but if the previous knowledge is still in the KB, no retracing of the same knowledge is needed. The use of a TMS avoids such retracing; it keeps track of the contradictory data with the help of a dependency record. This record reflects the retractions and additions, which make the inference engine (IE) aware of its current belief set. Each statement having at least one valid justification is made part of the current belief set. When a contradiction is found, the statement(s) responsible for the contradiction are identified and the records are appropriately updated. This process is called dependency-directed backtracking.

The TMS algorithm maintains the records in the form of a dependency network. Each node in the network is an entry in the KB (a premise, an antecedent, or an inference rule, for example). Each arc of the network represents an inference step through which the node was derived. A premise is a fundamental belief which is assumed to be true; premises do not need justifications. The set of premises is the basis from which justifications for all other nodes will be derived. There are two types of justification for a node: support-list (SL) justifications and conditional-proof (CP) justifications.

Many kinds of truth maintenance systems exist. Two major types are single-context and multi-context truth maintenance. In single-context systems, consistency is maintained among all facts in memory (the KB) and relates to the notion of consistency found in classical logic. Multi-context systems support paraconsistency by allowing consistency to be relevant to a subset of facts in memory, a context, according to the history of logical inference. This is achieved by tagging each fact or deduction with its logical history.
Multi-agent truth maintenance systems perform truth maintenance across multiple memories, often located on different machines. de Kleer's assumption-based truth maintenance system (ATMS, 1986) was utilized in systems based upon KEE on the Lisp Machine. The first multi-agent TMS was created by Mason and Johnson; it was a multi-context system. Bridgeland and Huhns created the first single-context multi-agent system.
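As a rough illustration of the dependency bookkeeping described above, the following toy sketch (plain Python, not Doyle's or de Kleer's actual algorithms) labels a statement as believed when it is an enabled premise or has at least one justification whose antecedents are all believed, and shows how retracting a premise withdraws everything that depended on it.

```python
class ToyTMS:
    """A minimal justification-based labelling sketch, for illustration only."""

    def __init__(self):
        self.premises = set()          # currently enabled base facts
        self.justifications = {}       # node -> list of antecedent sets

    def add_justification(self, node, antecedents):
        self.justifications.setdefault(node, []).append(set(antecedents))

    def enable(self, premise):
        self.premises.add(premise)

    def retract(self, premise):
        self.premises.discard(premise)

    def believed(self):
        """Recompute the belief set from scratch (simple fixpoint, no incrementality)."""
        current = set(self.premises)
        changed = True
        while changed:
            changed = False
            for node, justs in self.justifications.items():
                if node not in current and any(ants <= current for ants in justs):
                    current.add(node)
                    changed = True
        return current


tms = ToyTMS()
tms.add_justification("wet_ground", {"rain"})
tms.add_justification("slippery", {"wet_ground"})
tms.enable("rain")
print(tms.believed())   # {'rain', 'wet_ground', 'slippery'}
tms.retract("rain")
print(tms.believed())   # set() -- derived beliefs go out with their support
```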
https://en.wikipedia.org/wiki/Reason_maintenance
A concept inventory is a criterion-referenced test designed to help determine whether a student has an accurate working knowledge of a specific set of concepts. Historically, concept inventories have been in the form of multiple-choice tests in order to aid interpretability and facilitate administration in large classes. Unlike a typical, teacher-authored multiple-choice test, questions and response choices on concept inventories are the subject of extensive research. The aims of the research include ascertaining (a) the range of what individuals think a particular question is asking and (b) the most common responses to the questions. Concept inventories are evaluated to ensure test reliability and validity. In its final form, each question includes one correct answer and several distractors.

Ideally, a score on a criterion-referenced test reflects the degree of proficiency of the test taker with one or more KSAs (knowledge, skills, and/or abilities), and may report results with one unidimensional score and/or multiple sub-scores. Criterion-referenced tests differ from norm-referenced tests in that (in theory) the former report the level of proficiency relative to a pre-determined level, while the latter report standing relative to other test takers. Criterion-referenced tests may be used to determine whether a student has reached a predetermined level of proficiency (i.e., scoring above some cutoff score) and may therefore move on to the next unit or level of study.

The distractors are incorrect or irrelevant answers that are usually (but not always) based on students' commonly held misconceptions.[1] Test developers often research student misconceptions by examining students' responses to open-ended essay questions and conducting "think-aloud" interviews with students. The distractors chosen by students help researchers understand student thinking and give instructors insights into students' prior knowledge (and, sometimes, firmly held beliefs). This foundation in research underlies instrument construction and design, and plays a role in helping educators obtain clues about students' ideas, scientific misconceptions, and didaskalogenic ("teacher-induced" or "teaching-induced") confusions and conceptual lacunae that interfere with learning.

Concept inventories are education-related diagnostic tests.[2] In 1985 Halloun and Hestenes introduced a "multiple-choice mechanics diagnostic test" to examine students' concepts about motion.[3] It evaluates student understanding of basic concepts in classical (macroscopic) mechanics. A little later, the Force Concept Inventory (FCI), another concept inventory, was developed.[3][4][5] The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below). That said, there remain questions as to what exactly the FCI measures.[6] Results using the FCI have led to greater recognition in the science education community of the importance of students' "interactive engagement" with the materials to be mastered.[7]

Since the development of the FCI, other physics instruments have been developed.
These include the Force and Motion Conceptual Evaluation[8] and the Brief Electricity and Magnetism Assessment.[9] For a discussion of how a number of concept inventories were developed see Beichner.[10] In addition to physics, concept inventories have been developed in statistics,[11] chemistry,[12][13] astronomy,[14] basic biology,[15][16][17][18] natural selection,[19][20][21] genetics,[22] engineering,[23] geoscience,[24] and computer science.[25]

In many areas, foundational scientific concepts transcend disciplinary boundaries. An example of an inventory that assesses knowledge of such concepts is an instrument developed by Odom and Barrow (1995) to evaluate understanding of diffusion and osmosis.[26] In addition, there are non-multiple-choice conceptual instruments, such as the essay-based approach[13] and essay and oral exams used to measure student understanding of Lewis structures in chemistry.[20][27]

Some concept inventories are problematic. The concepts tested may not be fundamental or important in a particular discipline, the concepts involved may not be explicitly taught in a class or curriculum, or answering a question correctly may require only a superficial understanding of a topic. It is therefore possible to either over-estimate or under-estimate student content mastery. Concept inventories designed to identify trends in student thinking may not be useful for monitoring learning gains resulting from pedagogical interventions, and disciplinary mastery may not be the variable actually measured by a particular instrument. Users should be careful to ensure that concept inventories are actually testing conceptual understanding, rather than test-taking ability, language skills, or other abilities that can influence test performance.

The use of multiple-choice exams as concept inventories is not without controversy. The very structure of multiple-choice concept inventories raises questions about the extent to which complex, and often nuanced, situations and ideas must be simplified or clarified to produce unambiguous responses. For example, a multiple-choice exam designed to assess knowledge of key concepts in natural selection[19] does not meet a number of standards of quality control.[21] One problem with the exam is that the two members of each of several pairs of parallel items, with each pair designed to measure exactly one key concept in natural selection, sometimes have very different levels of difficulty.[20] Another problem is that the multiple-choice exam overestimates knowledge of natural selection as reflected in student performance on a diagnostic essay exam and a diagnostic oral exam, two instruments with reasonably good construct validity.[20] Although scoring concept inventories in the form of essay or oral exams is labor-intensive, costly, and difficult to implement with large numbers of students, such exams can offer a more realistic appraisal of the actual levels of students' conceptual mastery as well as their misconceptions.[13][20] Recently, however, computer technology has been developed that can score essay responses on concept inventories in biology and other domains,[28] promising to facilitate the scoring of concept inventories organized as (transcribed) oral exams as well as essays.
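As a simple illustration of how such an instrument might be scored and analysed, the sketch below uses a hypothetical answer key and invented student responses; it reports each student's standing against a pre-determined cutoff (the criterion-referenced use described above) and tallies which distractors are chosen most often, the kind of information researchers use to study common misconceptions. Item content, cutoff and data are made up.

```python
from collections import Counter

# Hypothetical 5-item inventory: item number -> correct option.
ANSWER_KEY = {1: "B", 2: "D", 3: "A", 4: "C", 5: "B"}

responses = {                     # student id -> chosen options per item
    "s01": {1: "B", 2: "A", 3: "A", 4: "C", 5: "D"},
    "s02": {1: "C", 2: "D", 3: "A", 4: "B", 5: "B"},
    "s03": {1: "B", 2: "A", 3: "D", 4: "C", 5: "B"},
}

def score(student_answers):
    """Number of items answered correctly."""
    return sum(choice == ANSWER_KEY[item] for item, choice in student_answers.items())

# Criterion-referenced report: proficiency against an assumed cutoff score.
CUTOFF = 4
for sid, answers in responses.items():
    s = score(answers)
    print(sid, s, "proficient" if s >= CUTOFF else "not yet proficient")

# Distractor analysis: which wrong choices are most popular on each item?
for item in ANSWER_KEY:
    wrong = Counter(ans[item] for ans in responses.values()
                    if ans[item] != ANSWER_KEY[item])
    print(f"item {item}: most common distractors {wrong.most_common(2)}")
```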
https://en.wikipedia.org/wiki/Concept_inventory
Aconceptual frameworkis ananalytical toolwith several variations and contexts. It can be applied in different categories of work where an overall picture is needed. It is used to make conceptual distinctions and organize ideas. Strong conceptual frameworks capture something real and do this in a way that is easy to remember and apply. Isaiah Berlinused the metaphor ofa "fox" and a "hedgehog"to make conceptual distinctions in how important philosophers and authors view the world.[1]Berlin describes hedgehogs as those who use a single idea or organizing principle to view the world (such asDante Alighieri,Blaise Pascal,Fyodor Dostoyevsky,Plato,Henrik IbsenandGeorg Wilhelm Friedrich Hegel). Foxes, on the other hand, incorporate a type ofpluralismand view the world through multiple, sometimes conflicting, lenses (examples includeJohann Wolfgang von Goethe,James Joyce,William Shakespeare,Aristotle,Herodotus,Molière, andHonoré de Balzac). Economists use the conceptual framework ofsupplyanddemandto distinguish between the behavior and incentive systems of firms and consumers.[2]Like many other conceptual frameworks, supply and demand can be presented through visual or graphical representations (seedemand curve). Bothpolitical scienceandeconomicsuseprincipal agent theoryas a conceptual framework. Thepolitics-administration dichotomyis a long-standing conceptual framework used inpublic administration.[3] All three of these cases are examples of a macro-level conceptual framework. The use of the termconceptual frameworkcrosses both scale (large and small theories)[4][5]and contexts (social science,[6][7]marketing,[8]applied science,[9]art[10]etc.). The explicit definition of what a conceptual framework is and its application can therefore vary. Conceptual frameworks are beneficial as organizing devices in empirical research. One set of scholars has applied the notion of a conceptual framework todeductive, empirical research at the micro- or individual study level.[11][12][13][14]They employAmerican football playsas a useful metaphor to clarify the meaning ofconceptual framework(used in the context of a deductive empirical study). Likewise, conceptual frameworks are abstract representations, connected to the research project's goal that direct the collection and analysis of data (on the plane of observation – the ground). Critically, a football play is a "plan of action" tied to a particular, timely, purpose, usually summarized as long or short yardage.[15]Shields and Rangarajan (2013) argue that it is this tie to "purpose" that makesAmerican football playssuch a good metaphor. They define a conceptual framework as "the way ideas are organized to achieve a research project's purpose".[13]Like football plays, conceptual frameworks are connected to a research purpose or aim.Explanation[16]is the most common type of research purpose employed in empirical research. The formalhypothesisof a scientific investigation is the framework associated withexplanation.[17] Explanatory research usually focuses on "why" or "what caused" a phenomenon. Formal hypotheses posit possible explanations (answers to the why question) that are tested by collecting data and assessing the evidence (usually quantitative using statistical tests). For example, Kai Huang wanted to determine what factors contributed to residential fires in U.S. cities. Three factors were posited to influence residential fires. 
These factors (environment, population, and building characteristics) became the hypotheses or conceptual framework he used to achieve his purpose – explain factors that influenced home fires inU.S.cities.[18] Several types of conceptual frameworks have been identified,[13][14][19]and line up with a research purpose in the following ways: Note that Shields and Rangarajan (2013) do not claim that the above is the only framework-purpose pairing. Nor do they claim the system is applicable toinductiveforms of empirical research. Rather, the conceptual framework-research purpose pairings they propose are useful and provide new scholars a point of departure to develop their ownresearch design.[13] Frameworks have also been used to explainconflict theoryand the balance necessary to reach what amounts to a resolution. Within these conflict frameworks, visible and invisible variables function under concepts of relevance. Boundaries form and within these boundaries, tensions regarding laws and chaos (or freedom) are mitigated. These frameworks often function like cells, with sub-frameworks, stasis, evolution and revolution.[22]Anomaliesmay exist without adequate "lenses" or "filters" to see them and may become visible only when the tools exist to define them.[23]
https://en.wikipedia.org/wiki/Conceptual_framework
Group concept mappingis a structured methodology for organizing the ideas of a group on any topic of interest and representing those ideas visually in a series of interrelated maps.[1][2]It is a type of integrativemixed method,[3][4]combining qualitative and quantitative approaches todata collectionandanalysis. Group concept mapping allows for a collaborative group process with groups of any size, including a broad and diverse array of participants.[1]Since its development in the late 1980s by William M.K. Trochim atCornell University, it has been applied to various fields and contexts, including community and public health,[5][6][7][8]social work,[9][10]health care,[11]human services,[12][13]and biomedical research and evaluation.[14][15][16] Group concept mapping integrates qualitative group processes withmultivariate analysisto help a group organize and visually represent its ideas on any topic of interest through a series of related maps.[1][2]It combines the ideas of diverse participants to show what the group thinks and values in relation to the specific topic of interest. It is a type of structured conceptualization used by groups to develop a conceptual framework, often to help guide evaluation and planning efforts.[2]Group concept mapping is participatory in nature, allowing participants to have an equal voice and to contribute through various methods.[1]A group concept map visually represents all the ideas of a group and how they relate to each other, and depending on the scale, which ideas are more relevant, important, or feasible. Group concept mapping involves a structured multi-step process, includingbrainstorming, sorting and rating,multidimensional scalingandcluster analysis, and the generation and interpretation of multiple maps.[1][2]The first step requires participants to brainstorm a large set of statements relevant to the topic of interest, usually in response to a focus prompt. Participants are then asked to individually sort those statements into categories based on their perceived similarity and rate each statement on one or more scales, such as importance or feasibility. The data is then analyzed using The Concept System software, which creates a series of interrelated maps usingmultidimensional scaling(MDS) of the sort data,hierarchical clusteringof the MDS coordinates applyingWard's method, and the computation of average ratings for each statement and cluster of statements.[17]The resulting maps display the individual statements in two-dimensional space with more similar statements located closer to each other, and grouped into clusters that partition the space on the map. The Concept System software also creates other maps that show the statements in each cluster rated on one or more scales, and absolute or relative cluster ratings between two cluster sets. As a last step in the process, participants are led through a structured interpretation session to better understand and label all the maps. Group concept mapping was developed as a methodology in the late 1980s by William M.K. Trochim atCornell University. Trochim is considered to be a leading evaluation expert, and he has taught evaluation and research methods at Cornell since 1980.[18]Originally called "concept mapping", the methodology has evolved since its inception with the maturation of the field and the continued advancement of the software, which is now a Web application. Group concept mapping can be used with any group for any topic of interest. 
It is often used by government agencies, academic institutions, national associations, not-for-profit and community-based organizations, and private businesses to help turn the ideas of the group into measurable actions. Applications include organizational development, strategic planning, needs assessment, curriculum development, research, and evaluation.[1] Group concept mapping is a well-documented, well-established methodology, and it has been used in hundreds of published papers.

More generally, concept mapping is any process used for visually representing relationships between ideas in pictures or diagrams.[1] A concept map is typically a diagram of multiple ideas, often represented as boxes or circles, linked in a graph (network) structure through arrows and words where each idea is connected to another.[19] The technique was originally developed in the 1970s by Joseph D. Novak at Cornell University.[19] Concept mapping may be done by an individual or a group. A mind map is a diagram used to visually represent information, centering on one word or idea with categories and sub-categories radiating off it in a tree structure.[20] Popularized by Tony Buzan in the 1970s, mind mapping is often a spontaneous exercise done by an individual or group to gather information about what they think around a single topic.

Unlike Novak's concept maps and Buzan's mind maps, group concept mapping has a structured mathematical process (sorting and rating, multidimensional scaling and cluster analysis) for organizing and visually representing multiple ideas of a group through a series of specific steps.[1] In other words, in group concept mapping the resulting visual representations are mathematically generated from mixed (qualitative and quantitative) data collected from a group of research subjects, whereas in Novak's concept maps and Buzan's mind maps the visual representations are drawn directly by the subjects, resulting in diagrams that are at once qualitative data and final product.
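The analysis steps described above (a co-sort similarity matrix, multidimensional scaling, and Ward's hierarchical clustering of the MDS coordinates) can be sketched in a few lines. The example below uses invented sort data and the scikit-learn and SciPy libraries as stand-ins for The Concept System software; it is a simplified illustration of the computation, not a reproduction of that tool.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical sort data: each participant groups six brainstormed statements
# (indexed 0..5) into piles; a pile is a set of statement indices.
sorts = [
    [{0, 1}, {2, 3}, {4, 5}],
    [{0, 1, 2}, {3}, {4, 5}],
    [{0, 1}, {2, 3, 4}, {5}],
]

n = 6
together = np.zeros((n, n))
for piles in sorts:
    for pile in piles:
        for i in pile:
            for j in pile:
                together[i, j] += 1

# Dissimilarity: fraction of participants who did NOT sort i and j together.
dist = 1.0 - together / len(sorts)
np.fill_diagonal(dist, 0.0)

# Two-dimensional point map of the statements.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)

# Hierarchical (Ward) clustering of the MDS coordinates into three clusters.
clusters = fcluster(linkage(coords, method="ward"), t=3, criterion="maxclust")
print(clusters)   # cluster label for each of the six statements
```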
https://en.wikipedia.org/wiki/Group_concept_mapping
Aninformation modelinsoftware engineeringis a representation of concepts and the relationships, constraints, rules, andoperationsto specifydata semanticsfor a chosen domain of discourse. Typically it specifies relations between kinds of things, but may also include relations with individual things. It can provide sharable, stable, and organized structure of information requirements or knowledge for the domain context.[1] The terminformation modelin general is used for models of individual things, such as facilities, buildings, process plants, etc. In those cases, the concept is specialised tofacility information model,building information model, plant information model, etc. Such an information model is an integration of a model of the facility with the data and documents about the facility. Within the field of software engineering anddata modeling, an information model is usually an abstract, formal representation of entity types that may include their properties, relationships and the operations that can be performed on them. The entity types in the model may be kinds of real-world objects, such as devices in a network, or occurrences, or they may themselves be abstract, such as for the entities used in a billing system. Typically, they are used to model a constrained domain that can be described by a closed set of entity types, properties, relationships and operations. An information model provides formalism to the description of a problem domain without constraining how that description is mapped to an actual implementation in software. There may be many mappings of the information model. Such mappings are calleddata models, irrespective of whether they areobject models(e.g. usingUML),entity relationship modelsorXML schemas. In 1976, anentity-relationship(ER) graphic notation was introduced byPeter Chen. He stressed that it was a "semantic" modelling technique and independent of any database modelling techniques such as Hierarchical, CODASYL, Relational etc.[2]Since then,languages for information modelshave continued to evolve. Some examples are the Integrated Definition Language 1 Extended (IDEF1X), theEXPRESSlanguage and theUnified Modeling Language(UML).[1] Research by contemporaries of Peter Chen such as J.R.Abrial (1974) and G.M Nijssen (1976) led to today's Fact Oriented Modeling (FOM) languages which are based on linguistic propositions rather than on "entities". FOM tools can be used to generate an ER model which means that the modeler can avoid the time-consuming and error prone practice of manual normalization. Object-Role Modeling language (ORM) and Fully Communication Oriented Information Modeling (FCO-IM) are both research results developed in the early 1990s, based upon earlier research. In the 1980s there were several approaches to extend Chen’s Entity Relationship Model. Also important in this decade is REMORA byColette Rolland.[3] TheICAMDefinition (IDEF) Language was developed from the U.S. Air Force ICAM Program during the 1976 to 1982 timeframe.[4]The objective of the ICAM Program, according to Lee (1999), was to increase manufacturing productivity through the systematic application of computer technology. IDEF includes three different modeling methods:IDEF0,IDEF1, andIDEF2for producing a functional model, an information model, and a dynamic model respectively.IDEF1Xis an extended version of IDEF1. The language is in the public domain. It is a graphical representation and is designed using the ER approach and the relational theory. 
It is used to represent the “real world” in terms of entities, attributes, and relationships between entities. Normalization is enforced by KEY Structures and KEY Migration. The language identifies property groupings (Aggregation) to form complete entity definitions.[1]

EXPRESS was created as ISO 10303-11 for formally specifying the information requirements of a product data model. It is part of a suite of standards informally known as the STandard for the Exchange of Product model data (STEP). It was first introduced in the early 1990s.[5][6] The language, according to Lee (1999), is a textual representation. In addition, a graphical subset of EXPRESS called EXPRESS-G is available. EXPRESS is based on programming languages and the O-O paradigm. A number of languages have contributed to EXPRESS, in particular Ada, Algol, C, C++, Euler, Modula-2, Pascal, PL/1, and SQL. EXPRESS consists of language elements that allow an unambiguous object definition and specification of constraints on the objects defined. It uses SCHEMA declarations to provide partitioning and it supports specification of data properties, constraints, and operations.[1]

UML is a modeling language for specifying, visualizing, constructing, and documenting the artifacts, rather than processes, of software systems. It was conceived originally by Grady Booch, James Rumbaugh, and Ivar Jacobson. UML was approved by the Object Management Group (OMG) as a standard in 1997. The language, according to Lee (1999), is non-proprietary and is available to the public. It is a graphical representation. The language is based on the object-oriented paradigm. UML contains notations and rules and is designed to represent data requirements in terms of O-O diagrams. UML organizes a model in a number of views that present different aspects of a system. The contents of a view are described in diagrams that are graphs with model elements. A diagram contains model elements that represent common O-O concepts such as classes, objects, messages, and relationships among these concepts.[1]

IDEF1X, EXPRESS, and UML can all be used to create a conceptual model and, according to Lee (1999), each has its own characteristics. Although some may lend themselves to a particular usage (e.g., implementation), one is not necessarily better than another. In practice, it may require more than one language to develop all information models when an application is complex. In fact, the modeling practice is often more important than the language chosen.[1]

Information models can also be expressed in formalized natural languages, such as Gellish. Gellish, which has natural language variants Gellish Formal English, Gellish Formal Dutch (Gellish Formeel Nederlands), etc., is an information representation or modeling language that is defined in the Gellish smart Dictionary-Taxonomy, which has the form of a Taxonomy/Ontology. A Gellish Database is suitable for storing not only information models, but also knowledge models, requirements models, and dictionaries, taxonomies and ontologies. Information models in Gellish English use Gellish Formal English expressions.
For example, a geographic information model might consist of a number of Gellish Formal English expressions, such as:

- the Eiffel Tower ⟨is located in⟩ Paris
- Paris ⟨is classified as a⟩ city

whereas information requirements and knowledge can be expressed, for example, as follows:

- a tower ⟨shall be located in a⟩ geographical area (a requirement)
- a tower ⟨can be located in a⟩ geographical area (a piece of knowledge)

Such Gellish expressions use names of concepts (such as 'city') and relation types (such as ⟨is located in⟩ and ⟨is classified as a⟩) that should be selected from the Gellish Formal English Dictionary-Taxonomy (or from one's own domain dictionary). The Gellish English Dictionary-Taxonomy enables the creation of semantically rich information models, because the dictionary contains definitions of more than 40,000 concepts, including more than 600 standard relation types. Thus, an information model in Gellish consists of a collection of Gellish expressions that use those phrases and dictionary concepts to express facts or make statements, queries and answers.

The Distributed Management Task Force (DMTF) provides a standard set of information models for various enterprise domains under the general title of the Common Information Model (CIM). Specific information models are derived from CIM for particular management domains. The TeleManagement Forum (TMF) has defined another such model for the telecommunication domain (the Shared Information/Data model, or SID). This includes views from the business, service and resource domains within the telecommunication industry. The TMF has established a set of principles that an OSS integration should adopt, along with a set of models that provide standardized approaches. The models interact with the information model (the Shared Information/Data Model, or SID) via a process model (the Business Process Framework, or eTOM) and a life cycle model.
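The fact-oriented, Gellish-style expressions shown earlier in this section can be mimicked very roughly as (left object, relation type, right object) tuples. The sketch below is a deliberate simplification (real Gellish expressions carry identifiers, intentions, units and other columns) and uses only the example facts and relation types mentioned above.

```python
# Simplified fact-oriented expressions as (left, relation type, right) tuples.
facts = [
    ("the Eiffel Tower", "is located in", "Paris"),
    ("Paris", "is classified as a", "city"),
]

def query(left=None, relation=None, right=None):
    """Return all stored expressions matching the given (partial) pattern."""
    return [f for f in facts
            if (left is None or f[0] == left)
            and (relation is None or f[1] == relation)
            and (right is None or f[2] == right)]

# What do we know about Paris?
print(query(left="Paris"))
# Which things are located in Paris?
print(query(relation="is located in", right="Paris"))
```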
https://en.wikipedia.org/wiki/Information_model
Idea networking is a qualitative method of doing a cluster analysis of any collection of statements, developed by Mike Metcalfe at the University of South Australia.[1] Networking lists of statements acts to reduce them into a handful of clusters or categories. The statements might be sourced from interviews, text, websites, focus groups, SWOT analysis or community consultation. Idea networking is inductive as it does not assume any prior classification system to cluster the statements. Rather, keywords or issues in the statements are individually linked (paired). These links can then be entered into network software to be displayed as a network with clusters. When named, these clusters provide emergent categories, meta themes, frames or concepts which represent, structure or sense-make the collection of statements.[1]

An idea network can be constructed in the following way:[1] The number of links per statement should be from 1 to 7; many more will result in a congested network diagram. This means the choice of why statements are linked may need grading as strong or weak, or by subsets. For example, statements linked as being about weather conditions may be further subdivided into those about good weather, wet weather or bad weather, etc. This linking is sometimes called 'coding' in thematic analysis, which highlights that the statements can be linked for several different reasons (source, context, time, etc.). There may be many tens of reasons why statements are linked, and the same statements may be linked for different reasons. The number of reasons should not be restricted to a small number, so as not to pre-determine the resultant clustering.

In his book Notes on the Synthesis of Form, the pragmatist Christopher Alexander suggested networking the ideas of clients as a means of identifying the major facets of an architectural design.[1] This is still used in modern design work, usually by means of cluster analysis. Modern social network analysis software provides a useful tool for networking these ideas. This simply adds ideas to the list of computers, power stations, people and events that can be networked (see Network theory).[3] The links between ideas can be represented in a matrix or network. Modern network diagramming software, with node repulsion algorithms, allows useful visual representation of these networks, revealing clusters of nodes. When networking people's statements or ideas, these become the nodes and the links are provided by an analyst linking those statements thought to be similar. Keywords, synonyms, experience or context might be used to provide this linking. For example, the statement (1) "That war is economics progressed by other means" might be considered linked to the statement (2) "That progress unfortunately needs the innovation which is a consequence of human conflict".

Linguistic pragmatism argues we use our conceptions to interpret our perceptions (sensory inputs).[1]: 18 These conceptions might be represented by words as conceptual ideas or concepts. For example, if we use the conceptual idea or concept of justice to interpret the actions of people, we get a different interpretation (or meaning) compared to using the conceptual idea of personal power. Using the conceptual idea of justice makes certain action ideas seem reasonable. These may include due process, legal representation, hearing both sides, and having norms or regulations for comparison. Therefore, there is a relationship between conceptual ideas and related, apparently rational, action ideas.
If the statements gathered at a consultative meeting are considered action ideas, then clusters of these similar action ideas might be considered to be examples of a meta idea or conceptual idea. These are also called themes or frames. Modern research extending Miller's magic number seven, plus or minus two, to idea handling suggests that a five-part classification is appropriate for humans.[1]: 145 Using networking to cluster statements is considered useful for several reasons.[1]
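A rough sketch of the procedure described above, using the networkx library: invented statements are linked when an analyst has coded them with a shared keyword, and a community-detection routine stands in for the clustering step performed by network software; naming the resulting clusters is still left to the analyst.

```python
import networkx as nx
from networkx.algorithms import community

# Hypothetical coded statements: each statement carries the keywords
# (reasons for linking) an analyst has assigned to it.
statements = {
    "S1": {"war", "economics"},
    "S2": {"progress", "innovation", "conflict"},
    "S3": {"economics", "trade"},
    "S4": {"innovation", "technology"},
    "S5": {"conflict", "war"},
}

# Link pairs of statements that share at least one keyword.
G = nx.Graph()
G.add_nodes_from(statements)
for a in statements:
    for b in statements:
        if a < b and statements[a] & statements[b]:
            G.add_edge(a, b)

# Clusters emerge from the network structure; naming them is the analyst's job.
for i, cluster in enumerate(community.greedy_modularity_communities(G), 1):
    print(f"cluster {i}: {sorted(cluster)}")
```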
https://en.wikipedia.org/wiki/Idea_networking
Concept mappingandmind mappingsoftware is used to create diagrams of relationships between concepts, ideas, or other pieces of information. It has been suggested that the mind mapping technique can improve learning and study efficiency up to 15% over conventionalnote-taking.[1]Many software packages and websites allow creating or otherwise supporting mind maps. Using a standard file format allows interchange of files between various programs. Many programs listed below support theOPMLfile format and theXMLfile format used byFreeMind.[citation needed] The following tools comply with theFree Software Foundation's (FSF) definition offree software. As such, they are alsoopen-source software. The following is a list of notable concept mapping and mind mappingapplicationswhich areproprietary software(albeit perhaps available at no cost, seefreeware).
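As an illustration of the file-format interchange mentioned above, the sketch below writes a small two-level mind map as OPML using only the Python standard library; the topic and branch names are invented, and real applications may expect additional attributes beyond the ones shown.

```python
import xml.etree.ElementTree as ET

# Build a minimal OPML outline: a central topic with two branches of leaves.
opml = ET.Element("opml", version="2.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "Example mind map"
body = ET.SubElement(opml, "body")

root_node = ET.SubElement(body, "outline", text="Central topic")
for branch, leaves in {"Branch A": ["idea 1", "idea 2"],
                       "Branch B": ["idea 3"]}.items():
    b = ET.SubElement(root_node, "outline", text=branch)
    for leaf in leaves:
        ET.SubElement(b, "outline", text=leaf)

# Write the outline to disk; OPML-aware tools can then import it.
ET.ElementTree(opml).write("mindmap.opml", encoding="utf-8", xml_declaration=True)
```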
https://en.wikipedia.org/wiki/List_of_concept-_and_mind-mapping_software
A nomological network (or nomological net[1]) is a representation of the concepts (constructs) of interest in a study, their observable manifestations, and the interrelationships between these. The term "nomological" derives from the Greek, meaning "lawful", or in philosophy of science terms, "law-like". It was Cronbach and Meehl's view of construct validity that in order to provide evidence that a measure has construct validity, a nomological network must be developed for its measure.[2] The necessary elements of a nomological network are: Validity evidence based on nomological validity is a general form of construct validity. It is the degree to which a construct behaves as it should within a system of related constructs (the nomological network).[3] Nomological networks are used in theory development and use a modernist approach.[4]
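A minimal sketch of the kind of evidence a nomological network appeals to, using invented data: two measures that theory says tap the same construct ought to correlate strongly with each other and only weakly with a measure of a theoretically distinct construct. The variable names and effect sizes below are assumptions for illustration only.

```python
import numpy as np

# Hypothetical scores on three observed measures for the same respondents:
# two intended to tap the same construct and one tapping a distinct construct.
rng = np.random.default_rng(0)
anxiety_scale_a = rng.normal(size=200)
anxiety_scale_b = 0.8 * anxiety_scale_a + rng.normal(scale=0.6, size=200)
job_satisfaction = rng.normal(size=200)

measures = np.vstack([anxiety_scale_a, anxiety_scale_b, job_satisfaction])

# In a nomological net we expect high correlations where theory predicts a
# link (the two anxiety scales) and low ones where it does not.
print(np.round(np.corrcoef(measures), 2))
```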
https://en.wikipedia.org/wiki/Nomological_network
Apersonal knowledge base(PKB) is an electronic tool used by an individual to express, capture, and later retrieve personal knowledge. It differs from a traditionaldatabasein that it contains subjective material particular to the owner, that others may not agree with nor care about. Importantly, a PKB consists primarily of knowledge, rather thaninformation; in other words, it is not a collection of documents or other sources an individual has encountered, but rather an expression of the distilled knowledge the owner has extracted from those sources or from elsewhere.[1][2][3] The termpersonal knowledge basewas mentioned as early as the 1980s,[4][5][6][7]but the term came to prominence in the 2000s when it was described at length in publications by computer scientist Stephen Davies and colleagues,[1][2]who compared PKBs on a number of different dimensions, the most important of which is thedata modelthat each PKB uses to organize knowledge.[1]: 18[3] Davies and colleagues examined three aspects of the data models of PKBs:[1]: 19–36 Davies and colleagues also emphasized the principle oftransclusion, "the ability to view the same knowledge element (not a copy) in multiple contexts", which they considered to be "pivotal" to an ideal PKB.[1][2]They concluded, after reviewing many design goals, that the ideal PKB was still to come in the future.[1][2] In their publications on PKBs, Davies and colleagues discussedknowledge graphsas they were implemented in some software of the time.[1][2]Later, other writers used the termpersonal knowledge graph(PKG) to refer to a PKB featuring a graph structure andgraph visualization.[8]However, the termpersonal knowledge graphis also used by software engineers to refer to the different subject of a knowledge graphabouta person,[9]in contrast to a knowledge graphcreated bya person in a PKB.[10] Davies and colleagues also differentiated PKBs according to theirsoftware architecture:file-based, database-based, orclient–serversystems (including Internet-based systems accessed through desktop computers and/or handheld mobile devices).[1]: 37–41 Non-electronic personal knowledge bases have probably existed in some form for centuries:Leonardo da Vinci's journals and notesare a famous example of the use ofnotebooks.Commonplace books,florilegia, annotatedprivate libraries, andcard files(in German,Zettelkästen) ofindex cardsandedge-notched cardsare examples of formats that have served this function in the pre-electronic age.[11] Undoubtedly the most famous early formulation of an electronic PKB wasVannevar Bush's description of the "memex" in 1945.[1][2][12]In a 1962 technical report,human–computer interactionpioneerDouglas Engelbart(who would later become famous for his 1968 "Mother of All Demos" that demonstrated almost all the fundamental elements of modern personal computing) described his use of edge-notched cards to partially model Bush's memex.[13] In their 2005 paper, Davies and colleagues mentioned the following, among others, as examples ofsoftware applicationsthat had been used to build PKBs using various data models and architectures:[1]
https://en.wikipedia.org/wiki/Personal_knowledge_base
The ISO/IEC 11179 metadata registry (MDR) standard is an international ISO/IEC standard for representing metadata for an organization in a metadata registry. It documents the standardization and registration of metadata to make data understandable and shareable.[1] The ISO/IEC 11179 model is a result of two principles of semantic theory, combined with basic principles of data modelling. The first principle from semantic theory is the thesaurus-type relation between wider and narrower (or more specific) concepts, e.g. the wide concept "income" has a relation to the narrower concept "net income". The second principle from semantic theory is the relation between a concept and its representation, e.g., "buy" and "purchase" are the same concept although different terms are used. A basic principle of data modelling is the combination of an object class and a characteristic. For example, "Person - hair color". When applied to data modelling, ISO/IEC 11179 combines a wide "concept" with an "object class" to form a more specific "data element concept". For example, the high-level concept "income" is combined with the object class "person" to form the data element concept "net income of person". Note that "net income" is more specific than "income". The different possible representations of a data element concept are then described with the use of one or more data elements. Differences in representation may be a result of the use of synonyms or different value domains in different data sets in a data holding. A value domain is the permitted range of values for a characteristic of an object class. An example of a value domain for "sex of person" is "M = Male, F = Female, U = Unknown". The letters M, F and U are then the permitted values of sex of person in a particular data set. The data element concept "monthly net income of person" may thus have one data element called "monthly net income of individual by 100 dollar groupings" and one called "monthly net income of person range 0-1000 dollars", etc., depending on the heterogeneity of representation that exists within the data holdings covered by one ISO/IEC 11179 registry. Note that these two examples have different terms for the object class (person/individual) and different value sets (a 0-1000 dollar range as opposed to 100 dollar groupings). The result of this is a catalogue of sorts, in which related data element concepts are grouped by a high-level concept and an object class, and data elements are grouped by a shared data element concept. Strictly speaking, this is not a hierarchy, even if it resembles one. ISO/IEC 11179 proper does not describe data as it is actually stored. It does not refer to the description of physical files, tables and columns. The ISO/IEC 11179 constructs are "semantic" as opposed to "physical" or "technical". The standard has two main purposes: definition and exchange. The core object is the data element concept, since it defines a concept and, ideally, describes data independent of its representation in any one system, table, column or organisation. The standard consists of seven parts: Part 1 explains the purpose of each part. Part 3 specifies the metamodel that defines the registry. Part 7, released in December 2019, provides an extension to Part 3 for registration of metadata about data sets. The other parts specify various aspects of the use of the registry. The data element is a foundational concept in an ISO/IEC 11179 metadata registry.
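The relationships described above can be paraphrased in a small, informal model. This is only an illustrative sketch, not the normative metamodel from Part 3; all class and attribute names are invented:

```python
# Toy model: a data element concept pairs an object class with a concept,
# and each data element binds that concept to one representation (a value
# domain), mirroring the "net income of person" and "sex of person" examples.
from dataclasses import dataclass

@dataclass
class ValueDomain:
    name: str
    permitted_values: dict  # e.g. code -> meaning, or a description of a range

@dataclass
class DataElementConcept:
    object_class: str       # e.g. "person"
    concept: str            # e.g. "monthly net income"

    @property
    def name(self):
        return f"{self.concept} of {self.object_class}"

@dataclass
class DataElement:
    concept: DataElementConcept
    representation: ValueDomain

sex_domain = ValueDomain("sex code", {"M": "Male", "F": "Female", "U": "Unknown"})
sex_de = DataElement(DataElementConcept("person", "sex"), sex_domain)

income_dec = DataElementConcept("person", "monthly net income")
income_by_100 = DataElement(income_dec, ValueDomain("100 dollar groupings", {}))
income_0_1000 = DataElement(income_dec, ValueDomain("range 0-1000 dollars", {}))

# Two data elements share one data element concept, so the catalogue groups
# them together even though their representations differ.
print(income_dec.name)
for de in (income_by_100, income_0_1000):
    print(" -", de.representation.name)
print(sex_de.concept.name, "->", sex_de.representation.permitted_values)
```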
The purpose of the registry is to maintain a semantically precise structure of data elements. Each data element in an ISO/IEC 11179 metadata registry: Data elements that store "codes" or enumerated values must also specify the semantics of each of the code values with precise definitions. Software AG's COTS Metadata Registry (MDR) product supports the ISO 11179 standard and continues to be sold and used for this purpose in both commercial and government applications (see the vendor tools section below). While commercial adoption is increasing, the spread of ISO/IEC 11179 has been more successful in the public sector, although the reason for this is unclear. ISO membership is open to organizations through their national bodies. Countries with public sector repositories across various industries include Australia, Canada, Germany, the United States and the United Kingdom. The United Nations and the US Government refer to and use the 11179 standards. 11179 is strongly recommended on the U.S. government's XML website[2] and is promoted by The Open Group as a foundation of the Universal Data Element Framework.[3] The Open Group is a vendor-neutral and technology-neutral consortium working to enable access to integrated information within and between enterprises based on open standards and global interoperability. Although the ISO/IEC 11179 metadata registry specification is a multi-part standard comprising several hundred pages, the primary model is presented in Part 3 and depicted in UML diagrams to facilitate understanding, supported by normative text. The eXtended Metadata Registry (XMDR) initiative, led by the US, explored the use of ontologies as the basis for MDR content in order to provide a richer semantic framework than could be achieved by lexical and syntactic naming conventions alone. The XMDR initiative experimented with a prototype using OWL, RDF and SPARQL to prove the concept, and resulted in Edition 3 of ISO/IEC 11179. The first part published was ISO/IEC 11179-3:2013. The primary extension in Edition 3 is the Concept Region, expanding the use of concepts to more components within the standard and supporting registration of a concept system for use within the registry. The standard also supports the use of externally defined concept systems. Edition 3 versions of Parts 1, 5, and 6 were published in 2015. Part 2, Classifications, is subsumed by the Concept Region in Part 3, but is being updated to a Technical Report (TR) to provide guidance on the development of classification schemes. Part 4 describes principles for forming data definitions; an Edition 3 has not been proposed. The following metadata registries state that they follow ISO/IEC 11179 guidelines, although there have been no formal third-party tests developed to test for metadata registry compliance, and no independent agencies certify ISO/IEC 11179 compliance. To some extent, certain existing software implementations suffer from poor design and potential security vulnerabilities, which hinder the adoption of ISO/IEC 11179.
https://en.wikipedia.org/wiki/ISO/IEC_11179
A representation term is a word, or a combination of words, used as part of a data element name. Representation class is sometimes used as a synonym for representation term. In ISO/IEC 11179, a representation class provides a way to classify or group data elements. A representation class is effectively a specialized classification scheme. Hence, there is currently some discussion in ISO over the merits of keeping representation class as a separate entity in 11179, versus collapsing it into the general classification scheme facility.[1] A clear distinction between the two mechanisms, however, is that 11179 allows a data element to be classified by only one representation class, whereas there is no such restriction on other classification schemes. ISO/IEC 11179 does not specify that representation terms should be drawn from the values of representation class, though it would make sense to do so, nor does it provide any mechanism to ensure any sort of consistency (whatever that might mean) between the representation terms used to name a data element and the representation class used to classify it. The term representation class has been used in metadata registry standards for many years. Today it has a combination of meanings and now goes well beyond how a data element is represented in a computer system. In practice the term is used to shed light on the semantics or meaning of the data element. There are several alternate definitions for representation class; some of these are taken from the ISO documents (note that these documents are copyrighted and extracts can only be taken under fair use rules). B.2.3 Representation class: "Representation class is the value domain for representation. The set of classes make it easy to distinguish among the elements in the registry. For instance, a data element categorized with the representation class amount is different from an element categorized as number. It probably will not make sense to compare the contents of these elements, or perform calculations using them together. Representation class is a mechanism by which the functional and/or presentational category of an item may be conveyed to the user." 3.3.51 data element representation class: "the class of representation of a data element"
https://en.wikipedia.org/wiki/Representation_class
A representation term is a word, or a combination of words, that semantically represents the data type (value domain) of a data element. A representation term is commonly referred to as a class word by those familiar with data dictionaries. ISO/IEC 11179-5:2005 defines representation term as a "designation of an instance of a representation class". As used in ISO/IEC 11179, the representation term is that part of a data element name that provides a semantic pointer to the underlying data type. A representation class is a class of representations; this representation class provides a way to classify or group data elements. A representation term may be thought of as an attribute of a data element in a metadata registry that classifies the data element according to the type of data stored in the data element.[1] Representation terms are typically "approved" by the organization or standards body using them. For example, the UN publishes its approved list as part of the UN/CEFACT Core Components Technical Specification. The Universal Data Element Framework uses a subset of CCTS representation terms and assigns numeric codes to those used. A value domain expresses the set of allowed values for a data element. The representation term (and typically the corresponding data type term) comprises a taxonomy for the value domains within a data set. This taxonomy is the representation class. Thus the representation term can be used to control the proliferation of value domains by ensuring that equivalent value domains use the same representation term. When a person or software agent is analyzing two separate metadata registries to find property equivalence, the representation term can be used as a guide. For example, if system A has a data element such as PersonGenderCode and system B has a data element such as PersonSexCode, the code suffix might assist the two systems to match only data elements that have the suffix "Code". However, a taxonomy of property terms (e.g., "Sex" or "Gender") is much more efficient in this respect. The representation term can be used in many ways to make inferences about data sets. Representation terms tell the observer of any data stream about the data types and give an indication of how the data element can be used. This is critical when mapping metadata registries to external data elements. For example, if you are sent a record about a person you may look for any "ID" suffix to understand how the remote system may differentiate two distinct records. Representation terms are also used to make inferences about the requirements of a property. For example, if a data stream had the data element PersonBirthDateAndTime, you would know that both the date and the time are available and relevant, not just the date. If the birth time were optional, separate data elements should be used, such as PersonBirthDate and PersonBirthTime. When creating a data warehouse, a business analyst looks at the representation terms to quickly find the dimensions and measures of a subject matter in order to build OLAP cubes. For example: The joint ISO/UN Core Components Technical Specification formally defines both the allowed set of representation terms and the corresponding set of data types. ISO 15000-5 is an implementation layer of ISO 11179 and normatively expresses a set of rules to semantically define conceptual and physical/logical data models for a wide variety of uses. In ISO 15000-5, the representation term provides a mechanism to harmonize the value domains of candidate data elements before they are added to the overall data model(s).
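A minimal sketch of the suffix-based matching idea described above; the list of representation terms and the element names are illustrative only, not taken from any particular standard:

```python
# Use class-word suffixes to propose candidate matches between the data
# elements of two hypothetical registries.
REPRESENTATION_TERMS = ("Code", "ID", "Name", "Date", "DateTime", "Amount", "Quantity")

def representation_term(element_name):
    """Return the representation-term suffix of a data element name, if any."""
    # Check longer terms first in case one term is a suffix of another.
    for term in sorted(REPRESENTATION_TERMS, key=len, reverse=True):
        if element_name.endswith(term):
            return term
    return None

def candidate_matches(elements_a, elements_b):
    """Pair up elements from two systems that share a representation term."""
    pairs = []
    for a in elements_a:
        term = representation_term(a)
        if term is None:
            continue
        for b in elements_b:
            if representation_term(b) == term:
                pairs.append((a, b, term))
    return pairs

system_a = ["PersonGenderCode", "PersonBirthDate"]
system_b = ["PersonSexCode", "AccountOpenDate"]
for a, b, term in candidate_matches(system_a, system_b):
    print(f"{a} <-> {b}  (shared representation term: {term})")
```

As the article notes, such suffix matching is only a coarse guide; a shared taxonomy of property terms narrows the candidates far more effectively.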
ISO 15000-5 is being used by a number of government agencies, standards development organizations, and private-sector organizations as the basis for data modeling. Some informal standards such as the Universal Data Element Framework (which refers to a representation term as a "Property Word") assign unique integer IDs to each representation term. This allows metadata mapping tools to map one set of data elements into other metadata vocabularies. An example of these mappings can be found at Property word ID. Note that as of November 2005 the UDEF concepts have not been widely adopted. For example, consider an XML data fragment containing elements such as <PersonID>, <PersonGivenName>, <PersonFamilyName> and <PersonBirthDate>. In such a fragment, the representation terms are "ID" for <PersonID>, the suffix "Name" for the given and family names, and "Date" for <PersonBirthDate>. The following are samples of representation terms that have been used for the exchange of electronic messages in systems such as NIEM or GJXDM 3.0: [note: the restrictions expressed here are limited to those specifications and do not represent universal consensus] [Note] This is an extremely limited set of the wide range of standards that specify the use of representation terms.
https://en.wikipedia.org/wiki/Representation_term
Simple Knowledge Organization System(SKOS) is aW3C recommendationdesigned for representation ofthesauri,classification schemes,taxonomies,subject-heading systems, or any other type of structuredcontrolled vocabulary. SKOS is part of theSemantic Webfamily of standards built uponRDFandRDFS, and its main objective is to enable easy publication and use of such vocabularies aslinked data. The most direct ancestor to SKOS was the RDF Thesaurus work undertaken in the second phase of the EU DESIRE project[1][citation needed]. Motivated by the need to improve the user interface and usability of multi-service browsing and searching,[2]a basic RDF vocabulary for Thesauri was produced. As noted later in the SWAD-Europe workplan, the DESIRE work was adopted and further developed in the SOSIG and LIMBER projects. A version of the DESIRE/SOSIG implementation was described in W3C's QL'98 workshop, motivating early work on RDF rule and query languages: A Query and Inference Service for RDF.[3] SKOS built upon the output of the Language Independent Metadata Browsing of European Resources (LIMBER) project funded by theEuropean Community, and part of theInformation Society Technologiesprogramme. In the LIMBER projectCCLRCfurther developed anRDFthesaurus interchange format[4]which was demonstrated on the European Language Social Science Thesaurus (ELSST) at theUK Data Archiveas a multilingual version of the English language Humanities and Social Science Electronic Thesaurus (HASSET) which was planned to be used by the Council of European Social Science Data Archives CESSDA. SKOS as a distinct initiative began in the SWAD-Europe project, bringing together partners from both DESIRE, SOSIG (ILRT) and LIMBER (CCLRC) who had worked with earlier versions of the schema. It was developed in the Thesaurus Activity Work Package, in the Semantic Web Advanced Development for Europe (SWAD-Europe) project.[5]SWAD-Europe was funded by theEuropean Community, and part of theInformation Society Technologiesprogramme. The project was designed to support W3C's Semantic Web Activity through research, demonstrators and outreach efforts conducted by the five project partners,ERCIM, the ILRT atBristol University,HP Labs,CCLRCand Stilo. The first release of SKOS Core and SKOS Mapping were published at the end of 2003, along with other deliverables on RDF encoding of multilingual thesauri[6]and thesaurus mapping.[7] Following the termination of SWAD-Europe, SKOS effort was supported by the W3C Semantic Web Activity[8]in the framework of the Best Practice and Deployment Working Group.[9]During this period, focus was put both on consolidation of SKOS Core, and development of practical guidelines for porting and publishing thesauri for the Semantic Web. The SKOS main published documents — the SKOS Core Guide,[10]the SKOS Core Vocabulary Specification,[11]and the Quick Guide to Publishing a Thesaurus on the Semantic Web[12]— were developed through the W3C Working Draft process. Principal editors of SKOS were Alistair Miles,[13]initially Dan Brickley, and Sean Bechhofer. The Semantic Web Deployment Working Group,[14]chartered for two years (May 2006 – April 2008), put in its charter to push SKOS forward on theW3C Recommendationtrack. The roadmap projected SKOS as a Candidate Recommendation by the end of 2007, and as a Proposed Recommendation in the first quarter of 2008. 
The main issues to solve were determining its precise scope of use, and its articulation with other RDF languages and standards used in libraries (such asDublin Core).[15][16] On August 18, 2009,W3Creleased the new standard that builds a bridge between the world ofknowledge organization systems– including thesauri, classifications, subject headings, taxonomies, andfolksonomies– and thelinked datacommunity, bringing benefits to both. Libraries, museums, newspapers, government portals, enterprises, social networking applications, and other communities that manage large collections of books, historical artifacts, news reports, business glossaries, blog entries, and other items can now use SKOS[17]to leverage the power of linked data. SKOS was originally designed as a modular and extensible family of languages, organized as SKOS Core, SKOS Mapping, and SKOS Extensions, and a Metamodel. The entire specification is now complete within the namespacehttp://www.w3.org/2004/02/skos/core#. In addition to the reference itself, the SKOS Primer (a W3C Working Group Note) summarizes the Simple Knowledge Organization System. The SKOS[18]defines the classes and properties sufficient to represent the common features found in a standard thesaurus. It is based on a concept-centric view of the vocabulary, where primitive objects are not terms, but abstract notions represented by terms. Each SKOS concept is defined as anRDF resource. Each concept can have RDF properties attached, including: Concepts can be organized inhierarchiesusing broader-narrower relationships, or linked by non-hierarchical (associative) relationships. Concepts can be gathered in concept schemes, to provide consistent and structured sets of concepts, representing whole or part of a controlled vocabulary. The principal element categories of SKOS are concepts, labels, notations, documentation, semantic relations, mapping properties, and collections. The associated elements are listed in the table below. The SKOS vocabulary is based on concepts. Concepts are the units of thought—ideas, meanings, or objects and events (instances or categories)—which underlie many knowledge organization systems. As such, concepts exist in the mind as abstract entities which are independent of the terms used to label them. In SKOS, aConcept(based on the OWLClass) is used to represent items in a knowledge organization system (terms, ideas, meanings, etc.) or such a system's conceptual or organizational structure.[19] AConceptSchemeis analogous to a vocabulary, thesaurus, or other way of organizing concepts. SKOS does not constrain a concept to be within a particular scheme, nor does it provide any way to declare a complete scheme—there is no way to say the scheme consists only of certain members. A topConcept is (one of) the upper concept(s) in a hierarchical scheme. Each SKOSlabelis a string ofUnicodecharacters, optionally with language tags, that are associated with a concept. TheprefLabelis the preferred human-readable string (maximum one per language tag), whilealtLabelcan be used for alternative strings, andhiddenLabelcan be used for strings that are useful to associate, but not meant for humans to read. A SKOSnotationis similar to a label, but this literal string has a datatype, like integer, float, or date; the datatype can even be made up (see 6.5.1 Notations, Typed Literals and Datatypes in the SKOS Reference). The notation is useful for classification codes and other strings not recognizable as words. 
The Documentation or Note properties provide basic information about SKOS concepts. All of these properties are considered a type of skos:note; they just provide more specific kinds of information. The property definition, for example, should contain a full description of the subject resource. More specific note types can be defined in a SKOS extension, if desired. A query for <A> skos:note ? will obtain all the notes about <A>, including definitions, examples, and scope, history and change, and editorial documentation. Any of these SKOS documentation properties can refer to several object types: a literal (e.g., a string); a resource node that has its own properties; or a reference to another document, for example using a URI. This enables the documentation to have its own metadata, like creator and creation date. Specific guidance on SKOS documentation properties can be found in the SKOS Primer Documentary Notes. SKOS semantic relations are intended to provide ways to declare relationships between concepts within a concept scheme. While there are no restrictions precluding their use with two concepts from separate schemes, this is discouraged because it is likely to overstate what can be known about the two schemes, and perhaps link them inappropriately. The property related simply makes an association relationship between two concepts; no hierarchy or generality relation is implied. The properties broader and narrower are used to assert a direct hierarchical link between two concepts. The meaning may be unexpected; the relation <A> broader <B> means that A has a broader concept called B, hence that B is broader than A. Narrower follows the same pattern. While the casual reader might expect broader and narrower to be transitive properties, SKOS does not declare them as such. Rather, the properties broaderTransitive and narrowerTransitive are defined as transitive super-properties of broader and narrower. These super-properties are (by convention) not used in declarative SKOS statements. Instead, when a broader or narrower relation is used in a triple, the corresponding transitive super-property also holds, and transitive relations can be inferred (and queried) using these super-properties. SKOS mapping properties are intended to express matching (exact or fuzzy) of concepts from one concept scheme to another, and by convention are used only to connect concepts from different schemes. The properties relatedMatch, broadMatch, and narrowMatch are a convenience, with the same meaning as the semantic properties related, broader, and narrower. (See the previous section regarding the meanings of broader and narrower.) The property relatedMatch makes a simple associative relationship between two concepts. When concepts are so closely related that they can generally be used interchangeably, exactMatch is the appropriate property (exactMatch relations are transitive, unlike any of the other Match relations). The closeMatch property indicates concepts that can only sometimes be used interchangeably, and so it is not a transitive property. The concept collections (Collection, orderedCollection) are labeled and/or ordered (orderedCollection) groups of SKOS concepts. Collections can be nested, and can have defined URIs or not (in which case they are blank nodes). Neither a SKOS Concept nor a ConceptScheme may be a Collection, nor vice versa, and SKOS semantic relations can only be used with a Concept (not a Collection).
The items in a Collection can not be connected to other SKOS Concepts through the Collection node; individual relations must be defined to each Concept in the Collection. All development work is carried out via the mailing list which is a completely open and publicly archived[20]mailing list devoted to discussion of issues relating to knowledge organisation systems, information retrieval and the Semantic Web. Anyone may participate informally in the development of SKOS by joining the discussions on public-esw-thes@w3.org – informal participation is warmly welcomed. Anyone who works for a W3C member organisation may formally participate in the development process by joining the Semantic Web Deployment Working Group – this entitles individuals to edit specifications and to vote on publication decisions. There are publicly available SKOS data sources. The SKOS metamodel is broadly compatible with the data model ofISO 25964-1– Thesauri for Information Retrieval. This data model can be viewed and downloaded from the website forISO 25964.[42] SKOS development has involved experts from both RDF and library community, and SKOS intends to allow easy migration of thesauri defined by standards such asNISOZ39.19 – 2005[43]orISO 25964.[42] SKOS is intended to provide a way to make a legacy of concept schemes available to Semantic Web applications, simpler than the more complex ontology language,OWL. OWL is intended to express complex conceptual structures, which can be used to generate rich metadata and support inference tools. However, constructing useful web ontologies is demanding in terms of expertise, effort, and cost. In many cases, this type of effort might be superfluous or unsuited to requirements, and SKOS might be a better choice. The extensibility of RDF makes possible further incorporation or extension of SKOS vocabularies into more complex vocabularies, including OWL ontologies.
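Concretely, the core constructs described above (concepts, labels, hierarchical relations and mappings) amount to a handful of RDF statements. A small sketch, assuming the third-party rdflib library (which ships a predefined SKOS namespace) and an invented http://example.org/ concept scheme:

```python
# Assert SKOS labels, a broader/narrower hierarchy, and a cross-scheme
# mapping, then walk the hierarchy upwards by hand.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/scheme/")
g = Graph()

animals, mammals, cats = EX.animals, EX.mammals, EX.cats
for concept in (animals, mammals, cats):
    g.add((concept, RDF.type, SKOS.Concept))

g.add((animals, SKOS.prefLabel, Literal("Animals", lang="en")))
g.add((mammals, SKOS.prefLabel, Literal("Mammals", lang="en")))
g.add((cats, SKOS.prefLabel, Literal("Cats", lang="en")))
g.add((cats, SKOS.altLabel, Literal("Felines", lang="en")))

# <A> skos:broader <B> reads "A has the broader concept B".
g.add((cats, SKOS.broader, mammals))
g.add((mammals, SKOS.broader, animals))

# A mapping to a concept in some other (equally invented) scheme.
g.add((cats, SKOS.exactMatch, URIRef("http://example.com/other/felis_catus")))

def broader_ancestors(graph, concept):
    """Collect all ancestors reachable via skos:broader.

    skos:broader is not declared transitive, so the closure is computed by
    walking the graph (mirroring what broaderTransitive would let you infer).
    """
    seen, frontier = set(), [concept]
    while frontier:
        node = frontier.pop()
        for parent in graph.objects(node, SKOS.broader):
            if parent not in seen:
                seen.add(parent)
                frontier.append(parent)
    return seen

for ancestor in broader_ancestors(g, cats):
    print("broader ancestor of Cats:", g.value(ancestor, SKOS.prefLabel))
print(g.serialize(format="turtle"))
```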
https://en.wikipedia.org/wiki/Simple_Knowledge_Organisation_System
Thesemanticspectrum, sometimes referred to as theontology spectrum, thesmart data continuum, orsemantic precision, is inlinguistics, a series of increasingly precise or rathersemanticallyexpressive definitions fordata elementsinknowledge representations, especially for machine use. At the low end of the spectrum is a simple binding of a single word or phrase and its definition. At the high end is a fullontologythat specifies relationships between data elements using preciseURIsfor relationships and properties. With increasedspecificitycomes increased precision and the ability to use tools to automaticallyintegratesystems, but also increased cost to build and maintain ametadata registry. Some steps in the semantic spectrum include the following: The following is a list of questions that may arise in determining semantic precision. Many organizations today are building ametadata registryto store their data definitions and to performmetadata publishing. The question of where they are on the semantic spectrum frequently arises. To determine where your systems are, some of the following questions are frequently useful. Today, much of the World Wide Web is stored asHypertext Markup Language. Search engines are severely hampered by their inability to understand the meaning of published web pages. These limitations have led to the advent of theSemantic webmovement.[1] In the past, many organizations that created custom database application used isolated teams of developers that did not formally publish their data definitions. These teams frequently used internal data definitions that were incompatible with other computer systems. This madeEnterprise Application IntegrationandData warehousingextremely difficult and costly. Many organizations today require that teams consult a centralized data registry before new applications are created. The job title of an individual that is responsible for coordinating an organization's data is aData architect. The first reference to this term was at the 1999AAAIOntologies Panel. The panel was organized by Chris Welty, who at the prodding of Fritz Lehmann and in collaboration with the panelists (Fritz,Mike Uschold,Mike Gruninger, andDeborah McGuinness) came up with a "spectrum" of kinds of information systems that were, at the time, referred to as ontologies. The "ontology spectrum" picture appeared in print in the introduction toFormal Ontology and Information Systems: Proceedings of the 2001 Conference.The ontology spectrum was also featured in a talk at the Semantics for the Web meeting in 2000 at Dagstuhl by Deborah McGuinness. McGuinness produced apaperdescribing the points on that spectrum that appeared in the book that emerged (much later) from that workshop called"Spinning the Semantic Web."Later, Leo Obrst extended the spectrum into two dimensions (which technically is not really a spectrum anymore) and added a lot more detail, which was included in his book,The Semantic Web: A Guide to the Future of XML, Web Services, and Knowledge Management. The concept of the Semantic precision inbusiness systemswas popularized byDave McCombin his bookSemantics in Business Systems: The Savvy Managers Guidepublished in 2003 where he frequently uses the termSemantic Precision. This discussion centered around a 10 level partition that included the following levels (listed in the order of increasing semantic precision): Note that there was formerly a special emphasis on the adding of formalis-arelationships to the spectrum which has been dropped. 
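As a rough illustration of the two ends of the spectrum described above, the sketch below contrasts a bare term-definition binding with ontology-style triples in which every element is a URI; all URIs except the standard RDF type predicate are invented:

```python
# Low end of the spectrum: a controlled glossary, just words bound to
# human-readable definitions. Nothing here is machine-interpretable.
glossary = {
    "invoice": "A document requesting payment for goods or services.",
}

# High end: ontology-style triples; subject, relationship and object are all
# precise, resolvable identifiers, which is what lets tools integrate systems
# automatically (at the cost of building and maintaining the ontology).
triples = [
    ("http://example.org/id/invoice",
     "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
     "http://example.org/ontology/CommercialDocument"),
    ("http://example.org/id/invoice",
     "http://example.org/ontology/requestsPaymentFor",
     "http://example.org/id/purchaseOrder-123"),
]

print(glossary["invoice"])
for subject, predicate, obj in triples:
    print(subject, predicate, obj)
```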
The companyCerebrahas also popularized this concept by describing the data formats that exist within an enterprise in their ability to store semantically precisemetadata. Their list includes: What these concepts share in common is the ability to store information with increasing precision to facilitate intelligent agents.
https://en.wikipedia.org/wiki/Semantic_spectrum
Automatic image annotation (also known as automatic image tagging or linguistic indexing) is the process by which a computer system automatically assigns metadata in the form of captioning or keywords to a digital image. This application of computer vision techniques is used in image retrieval systems to organize and locate images of interest from a database. The method can be regarded as a type of multi-class image classification with a very large number of classes, as large as the vocabulary size. Typically, image analysis in the form of extracted feature vectors and the training annotation words are used by machine learning techniques to attempt to automatically apply annotations to new images.[1] The first methods learned the correlations between image features and training annotations. Subsequently, techniques were developed using machine translation to attempt to translate the textual vocabulary into the 'visual vocabulary' represented by clustered regions known as blobs. Subsequent work has included classification approaches, relevance models, and other related methods. The advantage of automatic image annotation over content-based image retrieval (CBIR) is that queries can be specified more naturally by the user.[2] At present, CBIR generally requires users to search by image concepts such as color and texture, or by finding example queries. However, certain image features in example images may override the concept that the user is truly focusing on. Traditional methods of image retrieval, such as those used by libraries, have relied on manually annotated images, which is expensive and time-consuming, especially given the large and constantly growing image databases in existence.
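One simple way to realise the idea above is nearest-neighbour tag propagation: keywords from the training images whose feature vectors are most similar to a new image are transferred to it. The sketch below uses invented toy features and keywords, not real extracted image features:

```python
# k-nearest-neighbour annotation: rank training images by cosine similarity
# of their feature vectors to the query image, then vote on keywords.
import math
from collections import Counter

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# (feature vector, annotation keywords) pairs from a hand-labelled training set.
training = [
    ([0.9, 0.1, 0.0], {"sky", "sea", "beach"}),
    ([0.8, 0.2, 0.1], {"sky", "clouds"}),
    ([0.1, 0.9, 0.3], {"forest", "tree"}),
    ([0.2, 0.8, 0.4], {"tree", "grass"}),
]

def annotate(features, k=2, top_n=3):
    """Return the top_n keywords among the k most similar training images."""
    neighbours = sorted(training,
                        key=lambda item: cosine(features, item[0]),
                        reverse=True)[:k]
    votes = Counter(word for _, words in neighbours for word in words)
    return [word for word, _ in votes.most_common(top_n)]

print(annotate([0.85, 0.15, 0.05]))   # expected to lean towards sky/sea/clouds
```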
https://en.wikipedia.org/wiki/Automatic_image_annotation
The blogosphere is made up of all blogs and their interconnections. The term implies that blogs exist together as a connected community (or as a collection of connected communities) or as a social networking service in which everyday authors can publish their opinions and views. The term was coined on September 10, 1999 by Brad L. Graham, as a joke.[1][2] It was re-coined in 2002 by William Quick,[3] and was quickly adopted and propagated by the warblogger community. The term resembles the older word logosphere (from Greek logos meaning word, and sphere, interpreted as world), "the world of words", the universe of discourse.[4][5][better source needed] Despite the term's humorous intent, CNN, the BBC, and National Public Radio's programs Morning Edition, Day To Day, and All Things Considered used it several times to discuss public opinion. A number of media outlets in the late 2000s started treating the blogosphere as a gauge of public opinion, and it has been cited in both academic and non-academic work as evidence of rising or falling resistance to globalization, voter fatigue, and many other phenomena,[6][7] and also in reference to identifying influential bloggers[8] and "familiar strangers" in the blogosphere.[9][10] In 1999, Pyra Labs opened blogging to the masses by simplifying the process of creating and maintaining personal web spaces. At the time Pyra Labs released "Blogger", the number of blogs in existence was thought to be less than one hundred. Blogger led to the birth of the wider blogosphere.[11][12] In 2005, a Gallup poll showed that a third of Internet users read blogs at least on occasion,[13] and in May 2006, a study showed that there were over forty-two million bloggers contributing to the blogosphere. With less than 1 million blogs in existence at the start of 2003, the number of blogs had doubled every six months through 2006.[14] In 2011, it was estimated that there were more than 153 million blogs, with nearly 1 million new posts being produced by the blogosphere each day.[15] In a 2010 Technorati study, 36% of bloggers reported some sort of income from their blogs, most often in the form of ad revenue.[16] This shows a steady increase from their 2009 report, in which 28% of the blogging world reported their blog as a source of income, with the mean annual income from advertisements at $42,548.[17] Other common sources of blog-related income are paid speaking engagements and paid postings.[16] Paid postings may be subject to rules on clearly disclosing commercial advertisements as such (regulated by, for example, the Federal Trade Commission in the US and the Advertising Standards Authority in the UK). Sites such as Technorati, BlogPulse, and Tailrank track the interconnections between bloggers. Taking advantage of hypertext links, which act as markers for the subjects the bloggers are discussing, these sites can follow a piece of conversation as it moves from blog to blog. They can also help information researchers study how fast a meme spreads through the blogosphere, and determine which sites are the most important for gaining early recognition.[18] Sites also exist to track specific blogospheres, such as those related by a certain genre, culture, subject matter, or geopolitical location. In 2007, following six weeks of observation, social media expert Matthew Hurst mapped the blogosphere, generating a plot of the interconnections between blogs. The most densely populated areas represent the most active portions of the blogosphere. White dots represent individual blogs.
They are sized according to the number of links surrounding that particular blog. Links are plotted in both green and blue, with green representing one-way links and blue representing reciprocal links.[19] DISCOVER Magazinedescribed six major 'hot spots' of the blogosphere. While points 1 and 2 represent influential individual blogs, point 3 is the perfect example of "blogging island", where individual blogs are highly connected within a sub-community but lack many connections to the larger blogosphere. Point 4 describes a sociopolitical blogging niche, in which links demonstrate the constant dialogue between bloggers who write about the same subject of interest. Point 5 is an isolated sub-community of blogs dedicated to the world of pornography. Lastly, point 6 represents a collection of sports' lovers who largely segregate themselves but still manage to link back to the higher traffic blogs toward the center of the blogosphere.[19] Over time, the blogosphere developed as its own network of interconnections. In this time, bloggers began to engage in other online communities, specifically social networking sites, melding the two realms of social media together. According to Technorati's 2010 "State of the Blogosphere" report, 78% of bloggers were using the microblogging service Twitter, with much larger percentages of individuals who blogged as a part-time job (88%) or full-time for a specific company (88%). Almost half of all bloggers surveyed used Twitter to interact with the readers of their blog, while 72% of bloggers used it for blog promotion. For bloggers whose blog was their business (self-employed), 63% used Twitter to market their business. Additionally, according to the report, almost 9 out of 10 (87%) bloggers were using Facebook.[16] News blogshave become popular, and have created competition for traditional print newspaper and news magazines. TheHuffington Postwas ranked the most powerful blog in the world byThe Observerin 2008,[20]and has come to dominate current event reporting.
https://en.wikipedia.org/wiki/Blogosphere
An annotation is extra information associated with a particular point in a document or other piece of information. It can be a note that includes a comment or explanation.[1] Annotations are sometimes presented in the margin of book pages. For annotations of different digital media, see web annotation and text annotation. Annotation practices include highlighting a phrase or sentence and adding a comment, circling a word that needs defining, posing a question when something is not fully understood, and writing a short summary of a key section.[2] Annotation also invites students to "(re)construct a history through material engagement and exciting DIY (Do-It-Yourself) annotation practices."[3] The annotation practices available today offer students a remarkable set of tools for working in a more collaborative, connected way than was previously possible.[4] Text and film annotation is a technique that involves adding comments and text within a film. Analyzing videos is an undertaking that is never entirely free of preconceived notions, and the first step for researchers is to find their bearings within the field of possible research approaches and thus reflect on their own basic assumptions.[5] Annotations can be placed within the video and can be added while the video data is being recorded. Annotation is used as a tool in text and film to record one's thoughts and emotions in the markings.[2] In any number of steps of analysis, it can also be supplemented with more annotations. The anthropologist Clifford Geertz calls this a "thick description." This can give a sense of how useful annotation is, especially by adding a description of how it can be implemented in film.[5] Marginalia refers to writing or decoration in the margins of a manuscript. Medieval marginalia is so well known that amusing or disconcerting instances of it are fodder for viral aggregators such as Buzzfeed and Brainpickings, and the fascination with other readers' reading is manifest in sites such as Melville's Marginalia Online or Harvard's online exhibit of marginalia from six personal libraries.[4] Marginalia can also be a part of other websites such as Pinterest, or even meme generators and GIF tools. Textual scholarship is a discipline that often uses the technique of annotation to describe or add historical context to texts and physical documents, making them easier to understand.[6] Students often highlight passages in books in order to actively engage with the text. Students can use annotations to refer back to key phrases easily, or add marginalia to aid studying and to find connections between the text and prior knowledge or running themes.[7] Annotated bibliographies add commentary on the relevance or quality of each source, in addition to the usual bibliographic information that merely identifies the source. Students use annotation not only for academic purposes, but also for interpreting their own thoughts, feelings, and emotions.[2] Scalar and Omeka are examples of sites that students use. Annotation is used in multiple genres, such as mathematics, film, linguistics, and literary theory, and students find it most helpful in these areas. Most students reported the annotation process as helpful for improving overall writing ability, grammar, and academic vocabulary knowledge. Mathematical expressions (symbols and formulae) can be annotated with their natural language meaning.
This is essential for disambiguation, since symbols may have different meanings (e.g., "E" can be "energy" or "expectation value", etc.).[8][9] The annotation process can be facilitated and accelerated through recommendation, e.g., using the "AnnoMathTeX" system that is hosted by Wikimedia.[10][11][12] From a cognitive perspective, annotation has an important role in learning and instruction. As part of guided noticing, it involves highlighting, naming or labelling, and commenting on aspects of visual representations to help focus learners' attention on specific visual aspects. In other words, it means the assignment of typological representations (culturally meaningful categories) to topological representations (e.g. images).[13] This is especially important when experts, such as medical doctors, interpret visualizations in detail and explain their interpretations to others, for example by means of digital technology.[14] Here, annotation can be a way to establish common ground between interactants with different levels of knowledge.[15] The value of annotation has been empirically confirmed, for example, in a study which shows that in computer-based teleconsultations the integration of image annotation and speech leads to significantly improved knowledge exchange compared with the use of images and speech without annotation.[16] Annotations were removed on January 15, 2019, from YouTube after around a decade of service.[17] They had allowed users to provide information that popped up during videos, but YouTube indicated they did not work well on small mobile screens, and were being abused. Markup languages like XML and HTML annotate text in a way that is syntactically distinguishable from that text. They can be used to add information about the desired visual presentation, or machine-readable semantic information, as in the semantic web.[18] Tabular data formats include CSV and XLS. The process of assigning semantic annotations to tabular data is referred to as semantic labelling. Semantic labelling is the process of assigning annotations from ontologies to tabular data.[19][20][21][22] This process is also referred to as semantic annotation.[23][22] Semantic labelling is often done in a (semi-)automatic fashion. Semantic labelling techniques work on entity columns,[22] numeric columns,[19][21][24][25] coordinates,[26] and more.[26][25] Several types of semantic labelling utilise machine learning techniques. These techniques can be categorised, following the work of Flach,[27][28] as follows: geometric (using lines and planes, such as support-vector machines and linear regression), probabilistic (e.g., conditional random fields), logical (e.g., decision tree learning), and non-ML techniques (e.g., balancing coverage and specificity[22]). Note that the geometric, probabilistic, and logical machine learning models are not mutually exclusive.[27] Pham et al.[29] use the Jaccard index and TF-IDF similarity for textual data and the Kolmogorov–Smirnov test for numeric data. Alobaid and Corcho[21] use fuzzy clustering (c-means[30][31]) to label numeric columns. Limaye et al.[32] use TF-IDF similarity and graphical models, and use a support-vector machine to compute the weights. Venetis et al.[33] construct an isA database which consists of (instance, class) pairs and then compute maximum likelihood using these pairs. Alobaid and Corcho[34] approximated the q-q plot for predicting the properties of numeric columns.
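A minimal sketch in the spirit of the similarity-based approaches just mentioned: the values in a column are compared, via the Jaccard index, against small reference vocabularies that an ontology or knowledge graph might supply. The vocabularies, labels, and column here are invented:

```python
# Score each candidate label by the Jaccard similarity between the column's
# (normalised) cell values and the label's reference vocabulary.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Reference vocabularies a knowledge source might provide for each label.
reference = {
    "Country": {"france", "germany", "spain", "italy", "portugal"},
    "City": {"paris", "berlin", "madrid", "rome", "lisbon"},
    "Currency": {"euro", "dollar", "yen", "pound"},
}

def label_column(cells):
    values = {c.strip().lower() for c in cells}
    scores = {label: jaccard(values, vocab) for label, vocab in reference.items()}
    return max(scores, key=scores.get), scores

column = ["France", "Germany", "Spain", "Narnia"]   # one noisy value
best, scores = label_column(column)
print(best, scores)
```

Real systems replace the hand-made vocabularies with lookups against large knowledge graphs and combine several such signals, but the scoring idea is the same.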
Syed et al.[35]built Wikitology, which is "a hybrid knowledge base of structured and unstructured information extracted from Wikipedia augmented by RDF data from DBpedia and other Linked Data resources."[35]For the Wikitology index, they usePageRankforEntity linking, which is one of the tasks often used in semantic labelling. Since they were not able to query Google for all Wikipedia articles to get thePageRank, they usedDecision treeto approximate it.[35] Alobaid and Corcho[22]presented an approach to annotate entity columns. The technique starts by annotating the cells in the entity column with the entities from the reference knowledge graph (e.g.,DBpedia). The classes are then gathered and each one of them is scored based on several formulas they presented taking into account the frequency of each class and their depth according to the subClass hierarchy.[36] Here are some of the common semantic labelling tasks presented in the literature: This is the most common task in semantic labelling. Given a text of a cell and a data source, the approach predicts the entity and link it to the one identified in the given data source. For example, if the input to the approach were the text "Richard Feynman" and a URL to the SPARQL endpoint of DBpedia, the approach would return "http://dbpedia.org/resource/Richard_Feynman", which is the entity from DBpedia. Some approaches use exact match.[22]while others use similarity metrics such asCosine similarity[32] The subject column of a table is the column that contain the main subjects/entities in the table.[19][28][33][37][38]Some approaches expects the subject column as an input[22]while others predict the subject column such as TableMiner+.[38] Columns types are divided differently by different approaches.[28]Some divide them into strings/text and numbers[21][29][39][25]while others divide them further[28](e.g., Number Typology,[19]Date,[35][33]coordinates[40]). The relation betweenMadridandSpainis "capitalOf".[41]Such relations can easily be found in ontologies, such asDBpedia. Venetis et al.[33]use TextRunner[42]to extract the relation between two columns. Syed et al.[35]use the relation between the entities of the two columns and the most frequent relation is selected. T2D[43]is the most common gold standard for semantic labelling. Two versions exists of T2D: T2Dv1 (sometimes are referred to T2D as well) and T2Dv2.[43]Another known benchmarks are published with the SemTab Challenge.[44] The "annotate" function (also known as "blame" or "praise") used insource controlsystems such asGit,Team Foundation ServerandSubversiondetermines whocommittedchanges to the source code into the repository. This outputs a copy of the source code where each line is annotated with the name of the last contributor to edit that line (and possibly a revision number). This can help establish blame in the event a change caused a malfunction, or identify the author of brilliant code. A special case is theJava programming language, where annotations can be used as a special form of syntacticmetadatain the source code.[45]Classes, methods, variables, parameters and packages may be annotated. The annotations can be embedded inclass filesgenerated by the compiler and may be retained by theJava virtual machineand thus influence therun-timebehaviour of an application. 
It is possible to create meta-annotations out of the existing ones in Java.[46] Automatic image annotation is used to classify images forimage retrievalsystems.[47] Since the 1980s,molecular biologyandbioinformaticshave created the need forDNA annotation. DNA annotation or genome annotation is the process of identifying the locations of genes and all of the coding regions in a genome and determining what those genes do. An annotation (irrespective of the context) is a note added by way of explanation or commentary. Once a genome is sequenced, it needs to be annotated to make sense of it.[48] In thedigital imagingcommunity the term annotation is commonly used for visible metadata superimposed on animagewithout changing the underlying master image, such assticky notes, virtual laser pointers, circles, arrows, and black-outs (cf.redaction).[49] In themedical imagingcommunity, an annotation is often referred to as aregion of interestand is encoded inDICOMformat. In the United States, legal publishers such asThomson WestandLexis Nexispublish annotated versions ofstatutes, providing information aboutcourt casesthat have interpreted the statutes. Both the federalUnited States Codeand state statutes are subject to interpretation by thecourts, and the annotated statutes are valuable tools inlegal research.[50] One purpose of annotation is to transform the data into a form suitable for computer-aided analysis. Prior to annotation, an annotation scheme is defined that typically consists of tags. During tagging, transcriptionists manually add tags into transcripts where required linguistical features are identified in an annotation editor. The annotation scheme ensures that the tags are added consistently across the data set and allows for verification of previously tagged data.[51]Aside from tags, more complex forms of linguistic annotation include the annotation of phrases and relations, e.g., intreebanks. Many different forms of linguistic annotation have been developed, as well as different formats and tools for creating and managing linguistic annotations, as described, for example, in the Linguistic Annotation Wiki.[52]
https://en.wikipedia.org/wiki/Semantic_annotation
Athesaurus(pl.:thesauriorthesauruses), sometimes called asynonym dictionaryordictionary of synonyms, is areference workwhich arranges words by their meanings (or in simpler terms, a book where one can find different words with similar meanings to other words),[1][2]sometimes as a hierarchy ofbroader and narrower terms, sometimes simply as lists ofsynonymsandantonyms. They are often used by writers to help find the best word to express an idea: ...to find the word, or words, by which [an] idea may be most fitly and aptly expressed Synonym dictionaries have a long history. The word 'thesaurus' was used in 1852 byPeter Mark Rogetfor hisRoget's Thesaurus. While some works called "thesauri", such asRoget's Thesaurus, group words in ahierarchicalhypernymictaxonomyof concepts, others are organised alphabetically[4][2]or in some other way. Most thesauri do not include definitions, but many dictionaries include listings of synonyms. Some thesauri and dictionary synonym notes characterise the distinctions between similar words, with notes on their "connotations and varying shades of meaning".[5]Some synonym dictionaries are primarily concerned with differentiating synonyms by meaning and usage.Usage manualssuch as Fowler'sDictionary of Modern English UsageorGarner's Modern English Usageoftenprescribeappropriate usage of synonyms. Writers sometimes use thesauri to avoid repetition of words –elegant variation– which is often criticised by usage manuals: "Writers sometimes use them not just to vary their vocabularies but to dress them up too much".[6] The word "thesaurus" comes fromLatinthēsaurus, which in turn comes fromGreekθησαυρός(thēsauros) 'treasure, treasury, storehouse'.[7]The wordthēsaurosis of uncertain etymology.[7][8][9] Until the 19th century, a thesaurus was anydictionaryorencyclopedia,[9]as in theThesaurus Linguae Latinae(Dictionary of the Latin Language, 1532), and theThesaurus Linguae Graecae(Dictionary of the Greek Language, 1572). It was Roget who introduced the meaning "collection of words arranged according to sense", in 1852.[7] In antiquity,Philo of Byblosauthored the first text that could now be called a thesaurus. InSanskrit, theAmarakoshais a thesaurus in verse form, written in the 4th century. The study of synonyms became an important theme in 18th-century philosophy, andCondillacwrote, but never published, a dictionary of synonyms.[10][11] Some early synonym dictionaries include: Roget's Thesaurus, first compiled in 1805 by Peter Mark Roget, and published in 1852, followsJohn Wilkins' semantic arrangement of 1668. Unlike earlier synonym dictionaries, it does not include definitions or aim to help the user choose among synonyms. It has been continuously in print since 1852 and remains widely used across the English-speaking world.[20]Roget described his thesaurus in the foreword to the first edition:[21] It is now nearly fifty years since I first projected a system of verbal classification similar to that on which the present work is founded. Conceiving that such a compilation might help to supply my deficiencies, I had, in the year 1805, completed a classed catalogue of words on a small scale, but on the same principle, and nearly in the same form, as the Thesaurus now published. Roget's original thesaurus was organized into 1000 conceptual Heads (e.g., 806 Debt) organized into a four-leveltaxonomy. 
For example, debt is classed under V.ii.iv:[22] Each head includes direct synonyms: Debt, obligation, liability, ...; related concepts: interest, usance, usury; related persons: debtor, debitor, ... defaulter (808); verbs: to be in debt, to owe, ... see Borrow (788); phrases: to run up a bill or score, ...; and adjectives: in debt, indebted, owing, .... Numbers in parentheses are cross-references to other Heads. The book starts with a Tabular Synopsis of Categories laying out the hierarchy,[23] then the main body of the thesaurus listed by Head, and then an alphabetical index listing the different Heads under which a word may be found: Liable, subject to, 177; debt, 806; duty, 926.[24] Some recent versions have kept the same organization, though often with more detail under each Head.[25] Others have made modest changes such as eliminating the four-level taxonomy and adding new heads: one has 1075 Heads in fifteen Classes.[26] Some non-English thesauri have also adopted this model.[27] In addition to its taxonomic organization, the Historical Thesaurus of English (2009) includes the date when each word came to have a given meaning. It has the novel and unique goal of "charting the semantic development of the huge and varied vocabulary of English". Different senses of a word are listed separately. For example, three different senses of "debt" are listed in three different places in the taxonomy:[28] (1) a sum of money that is owed or due, a liability or obligation to pay; (2) an immaterial debt, an obligation to do something; and (3) an offence requiring expiation (figurative, Biblical). Other thesauri and synonym dictionaries are organized alphabetically. Most repeat the list of synonyms under each word.[29][30][31][32] Some designate a principal entry for each concept and cross-reference it.[33][34][35] A third system interfiles words and conceptual headings. Francis March's Thesaurus Dictionary gives for liability: CONTINGENCY, CREDIT–DEBT, DUTY–DERELICTION, LIBERTY–SUBJECTION, MONEY, each of which is a conceptual heading.[36] The CREDIT–DEBT article has multiple subheadings, including Nouns of Agent, Verbs, Verbal Expressions, etc. Under each are listed synonyms with brief definitions, e.g. "Credit. Transference of property on promise of future payment." The conceptual headings are not organized into a taxonomy. Benjamin Lafaye's Synonymes français (1841) is organized around morphologically related families of synonyms (e.g. logis, logement),[37] and his Dictionnaire des synonymes de la langue française (1858) is mostly alphabetical, but also includes a section on morphologically related synonyms, which is organized by prefix, suffix, or construction.[11] Before Roget, most thesauri and dictionary synonym notes included discussions of the differences among near-synonyms, as do some modern ones.[32][31][30][5] Merriam-Webster's Dictionary of Synonyms is a stand-alone modern English synonym dictionary that does discuss differences.[33] In addition, many general English dictionaries include synonym notes. Several modern synonym dictionaries in French are primarily devoted to discussing the precise demarcations among synonyms.[38][11] Some include short definitions.[36] Some give illustrative phrases.[32] Some include lists of objects within the category (hyponyms), e.g. breeds of dogs.[32] Bilingual synonym dictionaries are designed for language learners.
One such dictionary gives various French words listed alphabetically, with an English translation and an example of use.[39]Another one is organized taxonomically with examples, translations, and some usage notes.[40] Inlibraryandinformation science, a thesaurus is a kind ofcontrolled vocabulary. A thesaurus can form part of anontologyand be represented in theSimple Knowledge Organization System(SKOS).[41] Thesauri are used innatural language processingforword-sense disambiguation[42]andtext simplificationformachine translationsystems.[43]
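In computational settings, a thesaurus-like resource can be queried programmatically. The following minimal sketch uses WordNet through the NLTK library to group near-synonyms by sense, in the spirit of the sense-separated listings described above; WordNet and the function name used here are illustrative choices, not resources discussed in this article.

```python
# Illustrative sketch: querying WordNet (via the NLTK library) as a
# thesaurus-like resource. Assumes nltk is installed and the WordNet corpus
# has been downloaded with nltk.download('wordnet').
from nltk.corpus import wordnet as wn

def synonyms_by_sense(word):
    """Return a mapping from each sense's gloss to its near-synonyms,
    roughly analogous to the sense-separated listings described above."""
    senses = {}
    for synset in wn.synsets(word):
        # Each synset corresponds to one sense; its lemmas are near-synonyms.
        senses[synset.definition()] = sorted(
            {lemma.name().replace("_", " ") for lemma in synset.lemmas()}
        )
    return senses

if __name__ == "__main__":
    for gloss, words in synonyms_by_sense("debt").items():
        print(f"{gloss}: {', '.join(words)}")
```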
https://en.wikipedia.org/wiki/Thesaurus
A wiki (/ˈwɪki/ WICK-ee) is a form of hypertext publication on the internet which is collaboratively edited and managed by its audience directly through a web browser. A typical wiki contains multiple pages that can either be edited by the public or limited to use within an organization for maintaining its internal knowledge base. It takes its name from the first user-editable website, "WikiWikiWeb", with "wiki" meaning "quick".[1] Wikis are powered by wiki software, also known as wiki engines. Being a form of content management system, these differ from other web-based systems such as blog software or static site generators in that the content is created without any defined owner or leader. Wikis have little inherent structure, allowing one to emerge according to the needs of the users.[2] Wiki engines usually allow content to be written using a lightweight markup language and sometimes edited with the help of a rich-text editor.[3] There are dozens of different wiki engines in use, both standalone and part of other software, such as bug tracking systems. Some wiki engines are free and open-source, whereas others are proprietary. Some permit control over different functions (levels of access); for example, editing rights may permit changing, adding, or removing material. Others may permit access without enforcing access control. Further rules may be imposed to organize content. In addition to hosting user-authored content, wikis allow those users to interact, hold discussions, and collaborate.[4] There are hundreds of thousands of wikis in use, both public and private, including wikis functioning as knowledge management resources, note-taking tools, community websites, and intranets. Ward Cunningham, the developer of the first wiki software, WikiWikiWeb, originally described wiki as "the simplest online database that could possibly work".[5] "Wiki" (pronounced [wiki][note 1]) is a Hawaiian word meaning "quick".[6][7][8] The online encyclopedia project Wikipedia is the most popular wiki-based website, as well as being one of the internet's most popular websites, having been ranked consistently as such since at least 2007.[9] Wikipedia is not a single wiki but rather a collection of hundreds of wikis, with each one pertaining to a specific language, making it the largest reference work of all time.[10] The English-language Wikipedia has the largest collection of articles, standing at 6,995,363 as of May 2025.[11] In their 2001 book The Wiki Way: Quick Collaboration on the Web, Cunningham and co-author Bo Leuf described the essence of the wiki concept:[12][13] Some wikis will present users with an edit button or link directly on the page being viewed. This will open an interface for writing, formatting, and structuring page content. The interface may be a source editor, which is text-based and employs a lightweight markup language (also known as wikitext, wiki markup, or wikicode), or a visual editor. For example, in a source editor, starting lines of text with asterisks could create a bulleted list. The syntax and features of wiki markup languages for denoting style and structure can vary greatly among implementations. Some allow the use of HTML and CSS,[14] while others prevent the use of these to foster uniformity in appearance. A short section of Alice's Adventures in Wonderland rendered in wiki markup: "I've had nothing yet," Alice replied in an offended tone, "so I can't take more." "You mean you can't take less," said the Hatter. "It's very easy to take more than nothing."
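To make the idea of a lightweight markup language concrete, the following is a minimal, illustrative sketch of how a wiki engine might translate a few MediaWiki-style rules (bold, italics, and asterisk-prefixed bullets) into HTML. Real wiki engines use much more elaborate parsers; the rules and the function name here are assumptions made for illustration.

```python
# Minimal illustrative sketch of a source editor's markup rules: a few
# MediaWiki-style conventions translated into HTML. Real wiki engines use
# far more complete parsers; render_wikitext is a name invented here.
import re

def render_wikitext(source: str) -> str:
    html = []
    for line in source.splitlines():
        # Handle '''bold''' before ''italic'', since the bold marker
        # contains the italic marker as a substring.
        line = re.sub(r"'''(.+?)'''", r"<b>\1</b>", line)
        line = re.sub(r"''(.+?)''", r"<i>\1</i>", line)
        if line.startswith("* "):
            # A leading asterisk marks a bulleted list item; a fuller
            # implementation would also wrap runs of items in <ul> tags.
            html.append(f"<li>{line[2:]}</li>")
        else:
            html.append(f"<p>{line}</p>")
    return "\n".join(html)

print(render_wikitext("\"I've had ''nothing'' yet,\" Alice replied.\n* an item in a bulleted list"))
```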
While wiki engines have traditionally offered source editing to users, in recent years some implementations have added a rich text editing mode. This is usually implemented, usingJavaScript, as an interface which translates formatting instructions chosen from atoolbarinto the corresponding wiki markup or HTML. This is generated and submitted to the servertransparently, shielding users from the technical detail of markup editing and making it easier for them to change the content of pages. An example of such an interface is theVisualEditorinMediaWiki, the wiki engine used by Wikipedia.WYSIWYGeditors may not provide all the features available in wiki markup, and some users prefer not to use them, so a source editor will often be available simultaneously. Some wiki implementations keep a record of changes made to wiki pages, and may store every version of the page permanently. This allows authors to revert a page to an older version to rectify a mistake, or counteract a malicious or inappropriate edit to its content.[15] These stores are typically presented for each page in a list, called a "log" or "edit history", available from the page via a link in the interface. The list displaysmetadatafor each revision to the page, such as the time and date of when it was stored, and the name of the person who created it, alongside a link to view that specific revision. Adiff(short for "difference") feature may be available, which highlights the changes between any two revisions. The edit history view in many wiki implementations will includeedit summarieswritten by users when submitting changes to a page. Similar to the function of alog messagein arevision controlsystem, an edit summary is a short piece of text which summarizes and perhaps explains the change, for example "Corrected grammar" or "Fixed table formatting to not extend past page width". It is not inserted into the article's main text. Traditionally, wikis offer free navigation between their pages viahypertextlinks in page text, rather than requiring users to follow a formal or structured navigation scheme. Users may also createindexesortable of contentspages, hierarchical categorization via ataxonomy, or other forms ofad hoccontent organization. Wiki implementations can provide one or more ways to categorize ortagpages to support the maintenance of such index pages, such as abacklinkfeature which displays all pages that link to a given page. Adding categories or tags to a page makes it easier for other users to find it. Most wikis allow the titles of pages to be searched amongst, and some offerfull text searchof all stored content. Some wiki communities have established navigational networks between each other using a system calledWikiNodes. A WikiNode is a page on a wiki which describes and links to other, related wikis. Some wikis operate a structure ofneighborsanddelegates, wherein a neighbor wiki is one which discusses similar content or is otherwise of interest, and a delegate wiki is one which has agreed to have certain content delegated to it.[16]WikiNode networks act aswebringswhich may be navigated from one node to another to find a wiki which addresses a specific subject. The syntax used to create internal hyperlinks varies between wiki implementations. Beginning with the WikiWikiWeb in 1995, most wikis usedcamel caseto name pages,[17]which is when words in a phrase arecapitalizedand the spaces between them removed. In this system, the phrase "camel case" would be rendered as "CamelCase". 
In early wiki engines, when a page was displayed, any instance of a camel case phrase would be transformed into a link to another page named with the same phrase. While this system made it easy to link to pages, it had the downside of requiring pages to be named in a form deviating from standard spelling, and titles of a single word required abnormally capitalizing one of the letters (e.g. "WiKi" instead of "Wiki"). Some wiki implementations attempt to improve the display of camel case page titles and links by reinserting spaces and possibly also reverting to lower case, but this simplistic method is not able to correctly present titles of mixed capitalization. For example, "Kingdom of France" as a page title would be written as "KingdomOfFrance", and displayed as "Kingdom Of France". To avoid this problem, the syntax of wiki markup gained free links, wherein a term in natural language could be wrapped in special characters to turn it into a link without modifying it. The concept was given the name in its first implementation, in UseModWiki in February 2001.[18] In that implementation, link terms were wrapped in a double set of square brackets, for example [[Kingdom of France]]. This syntax was adopted by a number of later wiki engines. Both linking conventions are illustrated in the short sketch below. It is typically possible for users of a wiki to create links to pages that do not yet exist, as a way to invite the creation of those pages. Such links are usually differentiated visually in some fashion, such as being colored red instead of the default blue, which was the case in the original WikiWikiWeb, or by appearing as a question mark next to the linked words. WikiWikiWeb was the first wiki.[19] Ward Cunningham started developing it in 1994, and installed it on the Internet domain c2.com on March 25, 1995. Cunningham gave it the name after remembering a Honolulu International Airport counter employee telling him to take the "Wiki Wiki Shuttle" bus that runs between the airport's terminals, later observing that "I chose wiki-wiki as an alliterative substitute for 'quick' and thereby avoided naming this stuff quick-web."[20][21] Cunningham's system was inspired by his having used Apple's hypertext software HyperCard, which allowed users to create interlinked "stacks" of virtual cards.[22] HyperCard, however, was single-user, and Cunningham was inspired to build upon the ideas of Vannevar Bush, whose memex concept anticipated hypertext, by allowing users to "comment on and change one another's text."[3][23] Cunningham says his goals were to link together people's experiences to create a new literature to document programming patterns, and to harness people's natural desire to talk and tell stories with a technology that would feel comfortable to those not used to "authoring".[22] Launched in January 2001, Wikipedia became the most famous wiki site,[clarification needed] entering the top ten most popular websites in 2007. In the early 2000s, wikis were increasingly adopted in enterprise as collaborative software. Common uses included project communication, intranets, and documentation, initially for technical users. Some companies use wikis as their collaborative software and as a replacement for static intranets, and some schools and universities use wikis to enhance group learning.
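The two linking conventions described above, camel case page names and double-bracketed free links, can be illustrated with a short sketch. The regular expressions and function names below are hypothetical simplifications, not taken from any particular wiki engine.

```python
# Illustrative sketch of the two linking conventions described above.
# The regular expressions and function names are simplified assumptions,
# not taken from any particular wiki engine.
import re

CAMEL_CASE = re.compile(r"\b(?:[A-Z][a-z]+){2,}\b")  # e.g. KingdomOfFrance
FREE_LINK = re.compile(r"\[\[(.+?)\]\]")             # e.g. [[Kingdom of France]]

def camel_case_links(text: str):
    """Terms an early wiki engine would have turned into page links."""
    return CAMEL_CASE.findall(text)

def display_title(camel_title: str) -> str:
    """Reinsert spaces before capitals, as some engines do. Note that
    'KingdomOfFrance' becomes 'Kingdom Of France', not 'Kingdom of France',
    which is the limitation of the simplistic method noted above."""
    return re.sub(r"(?<!^)(?=[A-Z])", " ", camel_title)

def free_links(text: str):
    """Free-link targets, which keep their natural-language spelling."""
    return FREE_LINK.findall(text)

print(camel_case_links("See KingdomOfFrance and WikiWikiWeb."))  # ['KingdomOfFrance', 'WikiWikiWeb']
print(display_title("KingdomOfFrance"))                          # Kingdom Of France
print(free_links("Compare [[Kingdom of France]]."))              # ['Kingdom of France']
```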
On March 15, 2007, the word wiki was listed in the online Oxford English Dictionary.[24] In the late 1990s and early 2000s, the word "wiki" was used to refer to both user-editable websites and the software that powers them, and the latter definition is still occasionally in use.[2] By 2014, Ward Cunningham's thinking on the nature of wikis had evolved, leading him to write[25] that the word "wiki" should not be used to refer to a single website, but rather to a mass of user-editable pages or sites, so that a single website is not "a wiki" but "an instance of wiki". In this concept of wiki federation, in which the same content can be hosted and edited in more than one location in a manner similar to distributed version control, the idea of a single discrete "wiki" no longer made sense.[26] The software which powers a wiki may be implemented as a series of scripts which operate an existing web server, as a standalone application server that runs on one or more web servers, or, in the case of personal wikis, as a standalone application on a single computer. Some wikis use flat file databases to store page content, while others use a relational database,[27] as indexed database access is faster on large wikis, particularly for searching. Wikis can also be created on wiki hosting services (also known as wiki farms), where the server-side software is implemented by the wiki farm owner, who may host the wiki at no charge in exchange for advertisements being displayed on the wiki's pages. Some hosting services offer private, password-protected wikis requiring authentication to access. Free wiki farms generally contain advertising on every page. The four basic types of users who participate in wikis are readers, authors, wiki administrators and system administrators. System administrators are responsible for the installation and maintenance of the wiki engine and the container web server. Wiki administrators maintain content and, through having elevated privileges, are granted additional functions (including, for example, preventing edits to pages, deleting pages, changing users' access rights, or blocking them from editing).[28] Wikis are generally designed with a soft security philosophy in which it is easy to correct mistakes or harmful changes, rather than attempting to prevent them from happening in the first place. This allows them to be very open while providing a means to verify the validity of recent additions to the body of pages. Most wikis offer a recent changes page which shows recent edits, or a list of edits made within a given time frame.[29] Some wikis can filter the list to remove edits flagged by users as "minor" and automated edits.[30] The version history feature allows harmful changes to be reverted quickly and easily.[15] Some wiki engines provide additional content control, allowing remote monitoring and management of a page or set of pages to maintain quality. A person willing to maintain pages will be alerted of modifications to them, allowing them to verify the validity of new editions quickly.[31] Such a feature is often called a watchlist. Some wikis also implement patrolled revisions, in which editors with the requisite credentials can mark edits as being legitimate.
A flagged revisions system can prevent edits from going live until they have been reviewed.[32] Wikis may allow any person on the web to edit their content without having to register an account on the site first (anonymous editing), or require registration as a condition of participation.[33] On implementations where an administrator is able to restrict editing of a page or group of pages to a specific group of users, they may have the option to prevent anonymous editing while allowing it for registered users.[34] Critics of publicly editable wikis argue that they could be easily tampered with by malicious individuals, or even by well-meaning but unskilled users who introduce errors into the content. Proponents maintain that these issues will be caught and rectified by a wiki's community of users.[3][19] High editorial standards in medicine and health sciences articles, in which users typically use peer-reviewed journals or university textbooks as sources, have led to the idea of expert-moderated wikis.[35] Wiki implementations that retain and allow access to specific versions of articles have been useful to the scientific community, allowing expert peer reviewers to provide links to trusted versions of articles which they have analyzed.[36] Trolling and cybervandalism on wikis, where content is changed to something deliberately incorrect or a hoax, offensive material or nonsense is added, or content is maliciously removed, can be a major problem. On larger wiki sites it is possible for such changes to go unnoticed for a long period. In addition to using the approach of soft security for protecting themselves, larger wikis may employ sophisticated methods, such as bots that automatically identify and revert vandalism. For example, on Wikipedia, the bot ClueBot NG uses machine learning to identify likely harmful changes, and reverts these changes within minutes or even seconds.[37] Disagreements between users over the content or appearance of pages may cause edit wars, where competing users repetitively change a page back to a version that they favor. Some wiki software allows administrators to prevent pages from being editable until a decision has been made on what version of the page would be most appropriate.[4] Some wikis may be subject to external structures of governance which address the behavior of persons with access to the system, for example in academic contexts.[27] As most wikis allow the creation of hyperlinks to other sites and services, the addition of malicious hyperlinks, such as links to sites infected with malware, can also be a problem. For example, in 2006 a German Wikipedia article about the Blaster Worm was edited to include a hyperlink to a malicious website, and users of vulnerable Microsoft Windows systems who followed the link had their systems infected with the worm.[4] Some wiki engines offer a blacklist feature which prevents users from adding hyperlinks to specific sites that have been placed on the list by the wiki's administrators. The English Wikipedia has the largest user base among wikis on the World Wide Web[38] and ranks in the top 10 among all Web sites in terms of traffic.[39] Other large wikis include the WikiWikiWeb, Memory Alpha, Wikivoyage, and previously Susning.nu, a Swedish-language knowledge base. Medical and health-related wiki examples include Ganfyd, an online collaborative medical reference that is edited by medical professionals and invited non-medical experts.[40] Many wiki communities are private, particularly within enterprises.
They are often used as internal documentation for in-house systems and applications. Some companies use wikis to allow customers to help produce software documentation.[41]A study of corporate wiki users found that they could be divided into "synthesizers" and "adders" of content. Synthesizers' frequency of contribution was affected more by their impact on other wiki users, while adders' contribution frequency was affected more by being able to accomplish their immediate work.[42]From a study of thousands of wiki deployments, Jonathan Grudin concluded careful stakeholder analysis and education are crucial to successful wiki deployment.[43] In 2005, the Gartner Group, noting the increasing popularity of wikis, estimated that they would become mainstream collaboration tools in at least 50% of companies by 2009.[44][needs update]Wikis can be used forproject management.[45][46][unreliable source]Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries.[47]In those settings, they have been found useful for collaboration ongrant writing,strategic planning, departmental documentation, and committee work.[48]In the mid-2000s, the increasing trend among industries toward collaboration placed a heavier impetus upon educators to make students proficient in collaborative work, inspiring even greater interest in wikis being used in the classroom.[4] Wikis have found some use within the legal profession and within the government. Examples include theCentral Intelligence Agency'sIntellipedia, designed to share and collectintelligence assessments,DKosopedia, which was used by theAmerican Civil Liberties Unionto assist with review of documents about the internment of detainees inGuantánamo Bay;[49]and the wiki of theUnited States Court of Appeals for the Seventh Circuit, used to post court rules and allow practitioners to comment and ask questions. TheUnited States Patent and Trademark OfficeoperatesPeer-to-Patent, a wiki to allow the public to collaborate on findingprior artrelevant to the examination of pending patent applications.Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park.Cornell Law Schoolfounded a wiki-based legal dictionary calledWex, whose growth has been hampered by restrictions on who can edit.[34] In academic contexts, wikis have also been used as project collaboration and research support systems.[50][51] Acity wikiorlocal wikiis a wiki used as aknowledge baseandsocial networkfor a specificgeographicallocale.[52][53][54]The term city wiki is sometimes also used for wikis that cover not just a city, but a small town or an entire region. Such a wiki contains information about specific instances of things, ideas, people and places. 
Such highly localized information might be appropriate for a wiki targeted at local viewers, and could include: A study of several hundred wikis in 2008 showed that a relatively high number of administrators for a given content size is likely to reduce growth;[55]access controls restricting editing to registered users tends to reduce growth; a lack of such access controls tends to fuel new user registration; and that a higher ratio of administrators to regular users has no significant effect on content or population growth.[56] Joint authorship of articles, in which different users participate in correcting, editing, and compiling the finished product, can also cause editors to becometenants in commonof the copyright, making it impossible to republish without permission of all co-owners, some of whose identities may be unknown due to pseudonymous or anonymous editing.[4]Some copyright issues can be alleviated through the use of anopen contentlicense. Version 2 of theGNU Free Documentation Licenseincludes a specific provision for wiki relicensing, andCreative Commonslicenses are also popular. When no license is specified, an implied license to read and add content to a wiki may be deemed to exist on the grounds of business necessity and the inherent nature of a wiki. Wikis and their users can be held liable for certain activities that occur on the wiki. If a wiki owner displays indifference and forgoes controls (such as banning copyright infringers) that they could have exercised to stop copyright infringement, they may be deemed to have authorized infringement, especially if the wiki is primarily used to infringe copyrights or obtains a direct financial benefit, such as advertising revenue, from infringing activities.[4]In the United States, wikis may benefit fromSection 230 of the Communications Decency Act, which protects sites that engage in "Good Samaritan" policing of harmful material, with no requirement on the quality or quantity of such self-policing.[57]It has also been argued that a wiki's enforcement of certain rules, such as anti-bias, verifiability, reliable sourcing, and no-original-research policies, could pose legal risks.[58]Whendefamationoccurs on a wiki, theoretically, all users of the wiki can be held liable, because any of them had the ability to remove or amend the defamatory material from the "publication". It remains to be seen whether wikis will be regarded as more akin to aninternet service provider, which is generally not held liable due to its lack of control over publications' contents, than a publisher.[4]It has been recommended that trademark owners monitor what information is presented about their trademarks on wikis, since courts may use such content as evidence pertaining to public perceptions, and they can edit entries to rectify misinformation.[59] Active conferences and meetings about wiki-related topics include: Former wiki-related events include:
https://en.wikipedia.org/wiki/Wiki
Philosophical analysis is any of various techniques, typically used by philosophers in the analytic tradition, in order to "break down" (i.e. analyze) philosophical issues. Arguably the most prominent of these techniques is the analysis of concepts, known as conceptual analysis. While analysis is characteristic of the analytic tradition in philosophy, what is to be analyzed (the analysandum) often varies. In their papers, philosophers may focus on different areas. One might analyze linguistic phenomena such as sentences, or psychological phenomena such as sense data. However, arguably the most prominent analyses are written on concepts or propositions and are known as conceptual analysis.[1] A.C. Ewing distinguished between two forms of philosophical analysis. The first is "what the persons who make a certain statement usually intend to assert" and the second "the qualities, relations and species of continuants mentioned in the statement". As an illustration he takes the statement "I see a tree": this statement could be analysed in terms of what the everyday person intends when they say it, or it could be analysed metaphysically by asserting representationalism.[2] Conceptual analysis consists primarily in breaking down or analyzing concepts into their constituent parts in order to gain knowledge or a better understanding of a particular philosophical issue in which the concept is involved.[3] For example, the problem of free will in philosophy involves various key concepts, including the concepts of freedom, moral responsibility, determinism, ability, etc. The method of conceptual analysis tends to approach such a problem by breaking down the key concepts pertaining to the problem and seeing how they interact. Thus, in the long-standing debate on whether free will is compatible with the doctrine of determinism, several philosophers have proposed analyses of the relevant concepts to argue for either compatibilism or incompatibilism. A famous example of conceptual analysis at its best is given by Bertrand Russell in his theory of descriptions. Russell attempted to analyze propositions that involved definite descriptions, which pick out a unique individual (such as "The tallest spy"), and indefinite descriptions, which pick out a set of individuals (such as "a spy"). In his analysis of definite descriptions, superficially, these descriptions have the standard subject-predicate form of a proposition: thus "The present king of France is bald" appears to be predicating "baldness" of the subject, "the present king of France". However, Russell noted that this is problematic, because there is no present king of France (France is no longer a monarchy). Normally, to decide whether a proposition of the standard subject-predicate form is true or false, one checks whether the subject is in the extension of the predicate. The proposition is then true if and only if the subject is in the extension of the predicate. The problem is that there is no present king of France, so the present king of France cannot be found on the list of bald things or non-bald things. So, it would appear that the proposition expressed by "The present king of France is bald" is neither true nor false. However, analyzing the relevant concepts and propositions, Russell proposed that what definite descriptions really express are not propositions of the subject-predicate form, but rather they express existentially quantified propositions.
Thus, "The present king of France is bald" is analyzed, according to Russell's theory of descriptions, as "There exists an individual who is currently the king of France, there is only one such individual, and that individual is bald." Now one can determine the truth value of the proposition. Indeed, it is false, because it is not the case that there exists a unique individual who is currently the king of France and is bald, since there is no present king of France.[4][5] While the method of analysis is characteristic of contemporary analytic philosophy, its status continues to be a source of great controversy even among analytic philosophers. Several current criticisms of the analytic method derive from W.V. Quine's famous rejection of the analytic–synthetic distinction. While Quine's critique is well-known, it is highly controversial. Further, the analytic method seems to rely on some sort of definitional structure of concepts, so that one can give necessary and sufficient conditions for the application of the concept. For example, the concept "bachelor" is often analyzed as having the concepts "unmarried" and "male" as its components. Thus, the definition or analysis of "bachelor" is thought to be an unmarried male. But one might worry that these so-called necessary and sufficient conditions do not apply in every case. Wittgenstein, for instance, argues that language (e.g., the word 'bachelor') is used for various purposes and in an indefinite number of ways. Wittgenstein's famous thesis states that meaning is determined by use. This means that, in each case, the meaning of 'bachelor' is determined by its use in a context. So if it can be shown that the word means different things across different contexts of use, then cases where its meaning cannot be essentially defined as 'unmarried man' seem to constitute counterexamples to this method of analysis. This is just one example of a critique of the analytic method derived from a critique of definitions. There are several other such critiques.[6] This criticism is often said to have originated primarily with Wittgenstein's Philosophical Investigations. A third critique of the method of analysis derives primarily from psychological critiques of intuition. A key part of the analytic method involves analyzing concepts via "intuition tests". Philosophers tend to motivate various conceptual analyses by appeal to their intuitions about thought experiments.[7] In short, some philosophers feel strongly that the analytic method (especially conceptual analysis) is essential to and defines philosophy.[8] Yet, some philosophers argue that the method of analysis is problematic.[9] Some, however, take the middle ground and argue that while analysis is largely a fruitful method of inquiry, philosophers should not limit themselves to only using the method of analysis.
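The Russell analysis discussed above is commonly written out in first-order notation. The following rendering is a standard textbook formalization rather than a quotation from this article, with K(x) and B(x) chosen here as abbreviations.

```latex
% A standard first-order rendering of Russell's analysis, with K(x) read as
% "x is currently king of France" and B(x) as "x is bald"; the predicate
% letters are chosen here for illustration.
\exists x \, \bigl( K(x) \wedge \forall y \, ( K(y) \rightarrow y = x ) \wedge B(x) \bigr)
% The sentence comes out false because the first conjunct already fails:
% nothing satisfies K(x), so no witness for the existential can be found.
```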
https://en.wikipedia.org/wiki/Conceptual_analysis
Description logics (DL) are a family of formal knowledge representation languages. Many DLs are more expressive than propositional logic but less expressive than first-order logic. In contrast to the latter, the core reasoning problems for DLs are (usually) decidable, and efficient decision procedures have been designed and implemented for these problems. There are general, spatial, temporal, spatiotemporal, and fuzzy description logics, and each description logic features a different balance between expressive power and reasoning complexity by supporting different sets of mathematical constructors.[1] DLs are used in artificial intelligence to describe and reason about the relevant concepts of an application domain (known as terminological knowledge). They are of particular importance in providing a logical formalism for ontologies and the Semantic Web: the Web Ontology Language (OWL) and its profiles are based on DLs. The most notable application of DLs and OWL is in biomedical informatics where DL assists in the codification of biomedical knowledge.[citation needed] A description logic (DL) models concepts, roles and individuals, and their relationships. The fundamental modeling concept of a DL is the axiom—a logical statement relating roles and/or concepts.[2] This is a key difference from the frames paradigm where a frame specification declares and completely defines a class.[2] The description logic community uses different terminology than the first-order logic (FOL) community for operationally equivalent notions; some examples are given below. The Web Ontology Language (OWL) again uses a different terminology, also given in the table below. There are many varieties of description logics and there is an informal naming convention, roughly describing the operators allowed. The expressivity is encoded in the label for a logic starting with one of the following basic logics: Followed by any of the following extensions: Some canonical DLs that do not exactly fit this convention are: As an example, $\mathcal{ALC}$ is a centrally important description logic from which comparisons with other varieties can be made. $\mathcal{ALC}$ is simply $\mathcal{AL}$ with complement of any concept allowed, not just atomic concepts. $\mathcal{ALC}$ is used instead of the equivalent $\mathcal{ALUE}$. As a further example, the description logic $\mathcal{SHIQ}$ is the logic $\mathcal{ALC}$ plus extended cardinality restrictions, and transitive and inverse roles. The naming conventions aren't purely systematic, so that the logic $\mathcal{ALCOIN}$ might be referred to as $\mathcal{ALCNIO}$, and other abbreviations are also made where possible. The Protégé ontology editor supports $\mathcal{SHOIN}^{(\mathcal{D})}$. Three major biomedical informatics terminology bases, SNOMED CT, GALEN, and GO, are expressible in $\mathcal{EL}$ (with additional role properties). OWL 2 provides the expressiveness of $\mathcal{SROIQ}^{(\mathcal{D})}$, OWL-DL is based on $\mathcal{SHOIN}^{(\mathcal{D})}$, and for OWL-Lite it is $\mathcal{SHIF}^{(\mathcal{D})}$. Description logic was given its current name in the 1980s. Previous to this it was called (chronologically): terminological systems, and concept languages.
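As an illustration of the naming convention described above, a label such as $\mathcal{SHIQ}$ can be read letter by letter. The gloss below follows the usual letter meanings in the description logic literature and is an explanatory aid rather than text from this article.

```latex
% Reading a description logic label letter by letter, following the usual
% conventions in the literature (an explanatory gloss, not text from the article):
\mathcal{SHIQ} \;=\; \underbrace{\mathcal{S}}_{\mathcal{ALC}\ +\ \text{transitive roles}}
  \; \underbrace{\mathcal{H}}_{\text{role hierarchies}}
  \; \underbrace{\mathcal{I}}_{\text{inverse roles}}
  \; \underbrace{\mathcal{Q}}_{\text{qualified number restrictions}}
% Similarly, OWL DL's \mathcal{SHOIN}^{(\mathcal{D})} extends \mathcal{SHI} with
% nominals (O), unqualified number restrictions (N) and datatypes ((D)).
```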
Frames and semantic networks lack formal (logic-based) semantics.[5] DL was first introduced into knowledge representation (KR) systems to overcome this deficiency.[5] The first DL-based KR system was KL-ONE (by Ronald J. Brachman and Schmolze, 1985). During the '80s other DL-based systems using structural subsumption algorithms[5] were developed including KRYPTON (1983), LOOM (1987), BACK (1988), K-REP (1991) and CLASSIC (1991). This approach featured DL with limited expressiveness but relatively efficient (polynomial time) reasoning.[5] In the early '90s, the introduction of a new tableau based algorithm paradigm allowed efficient reasoning on more expressive DL.[5] DL-based systems using these algorithms — such as KRIS (1991) — show acceptable reasoning performance on typical inference problems even though the worst case complexity is no longer polynomial.[5] From the mid '90s, reasoners were created with good practical performance on very expressive DL with high worst case complexity.[5] Examples from this period include FaCT,[6] RACER (2001), CEL (2005), and KAON 2 (2005). DL reasoners, such as FaCT, FaCT++,[6] RACER, DLP and Pellet,[7] implement the method of analytic tableaux. KAON2 is implemented by algorithms which reduce a SHIQ(D) knowledge base to a disjunctive datalog program. The DARPA Agent Markup Language (DAML) and Ontology Inference Layer (OIL) ontology languages for the Semantic Web can be viewed as syntactic variants of DL.[8] In particular, the formal semantics and reasoning in OIL use the $\mathcal{SHIQ}$ DL.[9] The DAML+OIL DL was developed as a submission to[10]—and formed the starting point of—the World Wide Web Consortium (W3C) Web Ontology Working Group.[11] In 2004, the Web Ontology Working Group completed its work by issuing the OWL[12] recommendation. The design of OWL is based on the $\mathcal{SH}$ family of DL[13] with OWL DL and OWL Lite based on $\mathcal{SHOIN}^{(\mathcal{D})}$ and $\mathcal{SHIF}^{(\mathcal{D})}$ respectively.[13] The W3C OWL Working Group began work in 2007 on a refinement of - and extension to - OWL.[14] In 2009, this was completed by the issuance of the OWL2 recommendation.[15] OWL2 is based on the description logic $\mathcal{SROIQ}^{(\mathcal{D})}$.[16] Practical experience demonstrated that OWL DL lacked several key features necessary to model complex domains.[2] In DL, a distinction is drawn between the so-called TBox (terminological box) and the ABox (assertional box). In general, the TBox contains sentences describing concept hierarchies (i.e., relations between concepts) while the ABox contains ground sentences stating where in the hierarchy individuals belong (i.e., relations between individuals and concepts). For example, the statement (1) "Every employee is a person" belongs in the TBox, while the statement (2) "Bob is an employee" belongs in the ABox. Note that the TBox/ABox distinction is not significant, in the same sense that the two "kinds" of sentences are not treated differently in first-order logic (which subsumes most DL). When translated into first-order logic, a subsumption axiom like (1) is simply a conditional restriction to unary predicates (concepts) with only variables appearing in it. Clearly, a sentence of this form is not privileged or special over sentences in which only constants ("grounded" values) appear like (2). So why was the distinction introduced? The primary reason is that the separation can be useful when describing and formulating decision-procedures for various DL.
For example, a reasoner might process the TBox and ABox separately, in part because certain key inference problems are tied to one but not the other ('classification' is related to the TBox, 'instance checking' to the ABox). Another example is that the complexity of the TBox can greatly affect the performance of a given decision-procedure for a certain DL, independently of the ABox. Thus, it is useful to have a way to talk about that specific part of the knowledge base. The secondary reason is that the distinction can make sense from the knowledge base modeler's perspective. It is plausible to distinguish between our conception of terms/concepts in the world (class axioms in the TBox) and particular manifestations of those terms/concepts (instance assertions in the ABox). In the above example: when the hierarchy within a company is the same in every branch but the assignment to employees is different in every department (because there are other people working there), it makes sense to reuse the TBox for different branches that do not use the same ABox. There are two features of description logic that are not shared by most other data description formalisms: DL does not make the unique name assumption (UNA) or the closed-world assumption (CWA). Not having UNA means that two concepts with different names may be allowed by some inference to be shown to be equivalent. Not having CWA, or rather having the open world assumption (OWA), means that lack of knowledge of a fact does not immediately imply knowledge of the negation of a fact. Like first-order logic (FOL), a syntax defines which collections of symbols are legal expressions in a description logic, and semantics determine meaning. Unlike FOL, a DL may have several well-known syntactic variants.[8] The syntax of a member of the description logic family is characterized by its recursive definition, in which the constructors that can be used to form concept terms are stated. Some constructors are related to logical constructors in first-order logic (FOL) such as intersection or conjunction of concepts, union or disjunction of concepts, negation or complement of concepts, universal restriction and existential restriction. Other constructors have no corresponding construction in FOL, including restrictions on roles, for example inverse, transitivity and functionality. Let C and D be concepts, a and b be individuals, and R be a role. If a is R-related to b, then b is called an R-successor of a. The prototypical DL Attributive Concept Language with Complements ($\mathcal{ALC}$) was introduced by Manfred Schmidt-Schauß and Gert Smolka in 1991, and is the basis of many more expressive DLs.[5] The following definitions follow the treatment in Baader et al.[5] Let $N_C$, $N_R$ and $N_O$ be (respectively) sets of concept names (also known as atomic concepts), role names and individual names (also known as individuals, nominals or objects). Then the ordered triple ($N_C$, $N_R$, $N_O$) is the signature. The set of $\mathcal{ALC}$ concepts is the smallest set such that: every concept name in $N_C$ is a concept; $\top$ and $\bot$ are concepts; and, if $C$ and $D$ are concepts and $R \in N_R$ is a role, then $\neg C$, $C \sqcap D$, $C \sqcup D$, $\forall R.C$ and $\exists R.C$ are also concepts. A general concept inclusion (GCI) has the form $C \sqsubseteq D$ where $C$ and $D$ are concepts. Write $C \equiv D$ when $C \sqsubseteq D$ and $D \sqsubseteq C$. A TBox is any finite set of GCIs. An ABox is a finite set of assertional axioms.
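As a small worked example of the definitions just given, the following knowledge base reuses the employee example from earlier in the article. The specific axioms are illustrative assumptions, not drawn from any published ontology.

```latex
% A small illustrative knowledge base K = (T, A) in the notation just defined,
% echoing the employee example used earlier in the article; the axioms are
% invented for illustration, not taken from any published ontology.
%
% TBox T (terminological axioms):
\mathit{Employee} \sqsubseteq \mathit{Person}
\mathit{Manager} \sqsubseteq \mathit{Employee} \sqcap \exists \mathit{supervises}.\mathit{Employee}
%
% ABox A (assertional axioms about individuals):
\mathit{Employee}(\mathit{bob}) \qquad \mathit{Manager}(\mathit{alice}) \qquad \mathit{supervises}(\mathit{alice},\mathit{bob})
%
% Instance checking can then derive, for example, Person(bob) from the
% first GCI together with the assertion Employee(bob).
```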
A knowledge base (KB) is an ordered pair $(\mathcal{T}, \mathcal{A})$ for TBox $\mathcal{T}$ and ABox $\mathcal{A}$. The semantics of description logics are defined by interpreting concepts as sets of individuals and roles as sets of ordered pairs of individuals. Those individuals are typically assumed to be drawn from a given domain. The semantics of non-atomic concepts and roles is then defined in terms of atomic concepts and roles. This is done by using a recursive definition similar to the syntax. The following definitions follow the treatment in Baader et al.[5] A terminological interpretation $\mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$ over a signature $(N_C, N_R, N_O)$ consists of a non-empty set $\Delta^{\mathcal{I}}$ (the domain) and an interpretation function $\cdot^{\mathcal{I}}$ that maps every individual $a$ to an element $a^{\mathcal{I}} \in \Delta^{\mathcal{I}}$, every concept name $A$ to a set $A^{\mathcal{I}} \subseteq \Delta^{\mathcal{I}}$ and every role name $R$ to a relation $R^{\mathcal{I}} \subseteq \Delta^{\mathcal{I}} \times \Delta^{\mathcal{I}}$, such that $\top^{\mathcal{I}} = \Delta^{\mathcal{I}}$, $\bot^{\mathcal{I}} = \emptyset$, $(\neg C)^{\mathcal{I}} = \Delta^{\mathcal{I}} \setminus C^{\mathcal{I}}$, $(C \sqcap D)^{\mathcal{I}} = C^{\mathcal{I}} \cap D^{\mathcal{I}}$, $(C \sqcup D)^{\mathcal{I}} = C^{\mathcal{I}} \cup D^{\mathcal{I}}$, $(\exists R.C)^{\mathcal{I}} = \{x \mid \text{there is a } y \text{ such that } (x,y) \in R^{\mathcal{I}} \text{ and } y \in C^{\mathcal{I}}\}$ and $(\forall R.C)^{\mathcal{I}} = \{x \mid \text{for every } y, (x,y) \in R^{\mathcal{I}} \text{ implies } y \in C^{\mathcal{I}}\}$. Define $\mathcal{I} \models$ (read: holds in $\mathcal{I}$) as follows: $\mathcal{I} \models C \sqsubseteq D$ exactly when $C^{\mathcal{I}} \subseteq D^{\mathcal{I}}$; $\mathcal{I} \models C(a)$ exactly when $a^{\mathcal{I}} \in C^{\mathcal{I}}$; $\mathcal{I} \models R(a,b)$ exactly when $(a^{\mathcal{I}}, b^{\mathcal{I}}) \in R^{\mathcal{I}}$; $\mathcal{I} \models \mathcal{T}$ exactly when $\mathcal{I}$ satisfies every GCI in $\mathcal{T}$; and $\mathcal{I} \models \mathcal{A}$ exactly when $\mathcal{I}$ satisfies every assertion in $\mathcal{A}$. Let $\mathcal{K} = (\mathcal{T}, \mathcal{A})$ be a knowledge base; $\mathcal{I} \models \mathcal{K}$ exactly when $\mathcal{I} \models \mathcal{T}$ and $\mathcal{I} \models \mathcal{A}$. In addition to the ability to describe concepts formally, one also would like to employ the description of a set of concepts to ask questions about the concepts and instances described. The most common decision problems are basic database-query-like questions like instance checking (is a particular instance (member of an ABox) a member of a given concept) and relation checking (does a relation/role hold between two instances, in other words does a have property b), and the more global-database-questions like subsumption (is a concept a subset of another concept), and concept consistency (is there no contradiction among the definitions or chain of definitions). The more operators one includes in a logic and the more complicated the TBox (having cycles, allowing non-atomic concepts to include each other), usually the higher the computational complexity is for each of these problems (see Description Logic Complexity Navigator for examples). Many DLs are decidable fragments of first-order logic (FOL)[5] and are usually fragments of two-variable logic or guarded logic. In addition, some DLs have features that are not covered in FOL; this includes concrete domains (such as integer or strings, which can be used as ranges for roles such as hasAge or hasName) or an operator on roles for the transitive closure of that role.[5] Fuzzy description logics combine fuzzy logic with DLs. Since many concepts that are needed for intelligent systems lack well defined boundaries, or precisely defined criteria of membership, fuzzy logic is needed to deal with notions of vagueness and imprecision. This offers a motivation for a generalization of description logic towards dealing with imprecise and vague concepts. Description logic is related to—but developed independently of—modal logic (ML).[5] Many—but not all—DLs are syntactic variants of ML.[5] In general, an object corresponds to a possible world, a concept corresponds to a modal proposition, and a role-bounded quantifier to a modal operator with that role as its accessibility relation. Operations on roles (such as composition, inversion, etc.) correspond to the modal operations used in dynamic logic.[17] Temporal description logic represents—and allows reasoning about—time dependent concepts and many different approaches to this problem exist.[18] For example, a description logic might be combined with a modal temporal logic such as linear temporal logic. There are some semantic reasoners that deal with OWL and DL. These are some of the most popular:
https://en.wikipedia.org/wiki/Description_logic
A graphical model or probabilistic graphical model (PGM) or structured probabilistic model is a probabilistic model for which a graph expresses the conditional dependence structure between random variables. Graphical models are commonly used in probability theory, statistics—particularly Bayesian statistics—and machine learning. Generally, probabilistic graphical models use a graph-based representation as the foundation for encoding a distribution over a multi-dimensional space; the graph is a compact or factorized representation of a set of independences that hold in the specific distribution. Two branches of graphical representations of distributions are commonly used, namely, Bayesian networks and Markov random fields. Both families encompass the properties of factorization and independences, but they differ in the set of independences they can encode and the factorization of the distribution that they induce.[1] The undirected graph shown may have one of several interpretations; the common feature is that the presence of an edge implies some sort of dependence between the corresponding random variables. From this graph, we might deduce that B, C, and D are all conditionally independent given A. This means that if the value of A is known, then the values of B, C, and D provide no further information about each other. Equivalently (in this case), the joint probability distribution can be factorized as $P[A,B,C,D] = f_{AB}[A,B] \cdot f_{AC}[A,C] \cdot f_{AD}[A,D]$ for some non-negative functions $f_{AB}, f_{AC}, f_{AD}$. If the network structure of the model is a directed acyclic graph, the model represents a factorization of the joint probability of all random variables. More precisely, if the events are $X_1, \ldots, X_n$ then the joint probability satisfies $P[X_1, \ldots, X_n] = \prod_{i=1}^{n} P[X_i \mid \text{pa}(X_i)]$, where $\text{pa}(X_i)$ is the set of parents of node $X_i$ (nodes with edges directed towards $X_i$). In other words, the joint distribution factors into a product of conditional distributions. For example, in the directed acyclic graph shown in the Figure this factorization would be written out with one conditional factor per node. Any two nodes are conditionally independent given the values of their parents. In general, any two sets of nodes are conditionally independent given a third set if a criterion called d-separation holds in the graph. Local independences and global independences are equivalent in Bayesian networks. This type of graphical model is known as a directed graphical model, Bayesian network, or belief network. Classic machine learning models like hidden Markov models, neural networks and newer models such as variable-order Markov models can be considered special cases of Bayesian networks. One of the simplest Bayesian networks is the Naive Bayes classifier. The next figure depicts a graphical model with a cycle. This may be interpreted in terms of each variable 'depending' on the values of its parents in some manner. The particular graph shown suggests a joint probability density that factors according to the parent structure of each variable, but other interpretations are possible.[2] The framework of the models, which provides algorithms for discovering and analyzing structure in complex distributions to describe them succinctly and extract the unstructured information, allows them to be constructed and utilized effectively.[1] Applications of graphical models include causal inference, information extraction, speech recognition, computer vision, decoding of low-density parity-check codes, modeling of gene regulatory networks, gene finding and diagnosis of diseases, and graphical models for protein structure.
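A directed factorization of this kind can be written down directly in code. The toy Bayesian network below, with two children B and C sharing a single parent A, is an illustrative sketch; the structure and the probability values are invented for the example.

```python
# Illustrative sketch of the directed factorization
#   P[A, B, C] = P[A] * P[B | A] * P[C | A]
# for a toy Bayesian network in which B and C share the single parent A.
# The structure and the numbers are invented for the example.
from itertools import product

p_A = {True: 0.3, False: 0.7}
p_B_given_A = {True: {True: 0.8, False: 0.2},   # P[B | A=True]
               False: {True: 0.1, False: 0.9}}  # P[B | A=False]
p_C_given_A = {True: {True: 0.5, False: 0.5},
               False: {True: 0.2, False: 0.8}}

def joint(a: bool, b: bool, c: bool) -> float:
    """Joint probability as the product of each node's conditional given its parents."""
    return p_A[a] * p_B_given_A[a][b] * p_C_given_A[a][c]

# The factorized joint is a proper distribution: it sums to 1 over all assignments.
print(round(sum(joint(a, b, c) for a, b, c in product([True, False], repeat=3)), 10))

# Marginal over A. By construction B and C are conditionally independent given A,
# although they are not independent once A is summed out.
p_BC = {(b, c): sum(joint(a, b, c) for a in (True, False))
        for b, c in product([True, False], repeat=2)}
print(p_BC)
```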
https://en.wikipedia.org/wiki/Graphical_model
Grounded theoryis a systematic methodology that has been largely applied toqualitative researchconducted bysocial scientists. The methodology involves the construction of hypotheses and theories through the collecting and analysis of data.[1][2][3]Grounded theory involves the application ofinductive reasoning. The methodology contrasts with thehypothetico-deductive modelused in traditional scientific research. A study based on grounded theory is likely to begin with a question, or even just with the collection of qualitative data. As researchers review the data collected, ideas or concepts become apparent to the researchers. These ideas/concepts are said to "emerge" from the data. The researchers tag those ideas/concepts withcodesthat succinctly summarize the ideas/concepts. As more data are collected and re-reviewed, codes can be grouped into higher-level concepts and then into categories. These categories become the basis of a hypothesis or a new theory. Thus, grounded theory is quite different from the traditional scientific model of research, where the researcher chooses an existing theoretical framework, develops one or more hypotheses derived from that framework, and only then collects data for the purpose of assessing the validity of the hypotheses.[4] Grounded theory is a general research methodology, a way of thinking about and conceptualizing data. It is used in studies of diverse populations from areas like remarriage after divorce[5]and professional socialization.[6]Grounded theory methods were developed by two sociologists,Barney GlaserandAnselm Strauss.[7] While collaborating on research on dying hospital patients, Glaser and Strauss developed theconstant comparative methodwhich later became known as the grounded theory method. They summarized their research in the bookAwareness of Dying, which was published in 1965. Glaser and Strauss went on to describe their method in more detail in their 1967 book,The Discovery of Grounded Theory.[7]The three aims of the book were to: A turning point in the acceptance of the theory came after the publication ofAwareness of Dying. Their work on dying helped establish the influence of grounded theory inmedical sociology,psychology, andpsychiatry.[3][7]From its beginnings, grounded theory methods have become more prominent in fields as diverse asdrama,management,manufacturing, andeducation.[8] Grounded theory combines traditions inpositivist philosophy, generalsociology, and, particularly, thesymbolic interactionist branch of sociology. According to Ralph, Birks and Chapman,[9]grounded theory is "methodologically dynamic"[7]in the sense that, rather than being a complete methodology, grounded theory provides a means of constructing methods to better understand situations humans find themselves in. Glaser had a background in positivism, which helped him develop a system of labeling for the purpose ofcodingstudy participants' qualitative responses. He recognized the importance of systematic analysis for qualitative research. He thus helped ensure that grounded theory require the generation of codes, categories, and properties.[10] Strauss had a background insymbolic interactionism, a theory that aims to understand how people interact with each other in creating symbolic worlds and how an individual's symbolic world helps to shape a person's behavior. He viewed individuals as "active" participants in forming their own understanding of the world. 
Strauss underlined the richness of qualitative research in shedding light on social processes and the complexity of social life.[10] According to Glaser, the strategy of grounded theory is to interpret personal meaning in the context of social interaction.[11]The grounded theory system studies "the interrelationship between meaning in the perception of the subjects and their action".[12] Grounded theory constructs symbolic codes based on categories emerging from recorded qualitative data. The idea is to allow grounded theory methods to help us better understand the phenomenal world of individuals.[10]According to Milliken and Schreiber, another of the grounded theorist's tasks is to understand the socially-shared meanings that underlie individuals' behaviors and the reality of the participants being studied.[10] Grounded theory provides methods for generating hypotheses from qualitative data. After hypotheses are generated, it is up to other researchers to attempt to sustain or reject those hypotheses. Questions asked by the qualitative researcher employing grounded theory include "What is going on?" and "What is the main problem of the participants, and how are they trying to solve it?" Researchers using grounded theory methods do not aim for the "truth." Rather, those researchers try to conceptualize what has been taking place in the lives of study participants. When applying grounded theory methods, the researcher doesnotformulate hypotheses in advance of data collection as is often the case in traditional research, otherwise the hypotheses would be ungrounded in the data. Hypotheses are supposed to emerge from the data.[13] A goal of the researcher employing grounded theory methods is that of generating concepts that explain the way people resolve their central concerns regardless of time and place. These concepts organize the ground-level data. The concepts become the building blocks of hypotheses. The hypotheses become the constituents of a theory. In most behavioral research endeavors, persons or patients are units of analysis, whereas in grounded theory the unit of analysis is the incident.[13]Typically several hundred incidents are analyzed in a grounded theory study because every participant usually reports many incidents. When comparing many incidents in a certain area of study, the emerging concepts and their inter-relationships are paramount. Consequently, grounded theory is a general method that can use any kind of data although grounded theory is most commonly applied to qualitative data.[14][15] Most researchers oriented toward grounded theory do not apply statistical methods to the qualitative data they collect. The results of grounded theory research are not reported in terms ofstatistically significantfindings although there may be probability statements about the relationship between concepts.[16]Internal validityin its traditional research sense is not an issue in grounded theory. 
Rather, questions of fit, relevance, workability, and modifiability are more important in grounded theory.[7][17][16] In addition, adherents of grounded theory emphasize a theoretical validity rather than traditional ideas of internal validity or measurement-related validity.[18] Grounded theory adherents are "less charitable when discussing [psychometric] reliability, calling a single method of observation continually yielding an unvarying measurement a quixotic reliability."[18] A theory that is fitting has concepts that are closely connected to the incidents the theory purports to represent; fit depends on how thoroughly the constant comparison of incidents to concepts has been conducted. A qualitative study driven by grounded theory examines the genuine concerns of study participants; those concerns are not only of academic interest. Grounded theory works when it explains how study participants address the problem at hand and related problems. A theory is modifiable and can be altered when new relevant data are compared to existing data. Once the data are collected, grounded theory analysis involves the following basic steps: Theorizing is involved in all these steps. One is required to build and test theory all the way through till the end of a project.[20] The idea that all is data is a fundamental property of grounded theory. The idea means that everything that the researcher encounters when studying a certain area is data, including not only interviews or observations but anything that helps the researcher generate concepts for the emerging theory. According to Ralph, Birks, and Chapman, field notes can come from informal interviews, lectures, seminars, expert group meetings, newspaper articles, Internet mail lists, even television shows, conversations with friends, etc.[21] Coding places incidents into categories and then creates one or more hierarchies out of these categories in terms of categories and subcategories or properties of a category. A property might be on a continuum such as from low to high; this may be referred to as a dimension.[a] Constant comparison, in which categories are continually compared to one another, is used to create both subcategories and properties.[b] There is some variation in the meanings of the terms code, concept and category, with some authors viewing a code as identical to a category while others consider a concept to be more abstract than a code, with a code being more like a substantive code.[c] Different researchers have identified different types of codes and encourage different methods of coding, with Strauss and Glaser both going on to extend their work with different forms of coding. The core variable explains most of the participants' main concern with as much variation as possible. It has the most powerful properties to picture what's going on, but with as few properties as possible needed to do so. A popular type of core variable can be theoretically modeled as a basic social process that accounts for most of the variation in change over time, context, and behavior in the studied area. "grounded theory is multivariate. It happens sequentially, subsequently, simultaneously, serendipitously, and scheduled" (Glaser, 1998). Open coding or substantive coding is conceptualizing on the first level of abstraction. Written data from field notes or transcripts are conceptualized line by line. In the beginning of a study everything is coded in order to find out about the problem and how it is being resolved. The coding is often done in the margin of the field notes.
This phase is often tedious since it involves conceptualizing all the incidents in the data, which yields many concepts. These are compared as more data is coded, merged into new concepts, and eventually renamed and modified. The grounded theory researcher goes back and forth while comparing data, constantly modifying, and sharpening the growing theory at the same time they follow the build-up schedule of grounded theory's different steps. Strauss and Corbin proposedaxial codingand defined it in 1990 as "a set of procedures whereby data are put back together in new ways after open coding, by making connections between categories."[19]Glaser proposed a similar concept calledtheoretical coding.Theoretical codes help to develop an integrated theory by weaving fractured concepts into hypotheses that work together. The theory, of which the just-mentioned hypotheses are constituents, explains the main concern of the participants. It is, however, important that the theory is not forced on the data beforehand but is allowed to emerge during the comparative process of grounded theory. Theoretical codes, like substantive codes, should emerge from the process of constantly comparing the data in field notes and memos. Selective codingis conducted after the researcher has found the core variable or what is thought to be the tentative core. The core explains the behavior of the participants in addressing their main concern. The tentative core is never wrong. It just more or less fits with the data. After the core variable is chosen, researchers selectively code data with the core guiding their coding, not bothering about concepts of little relevance to the core and its sub-cores. In addition, the researcher now selectively samples new data with the core in mind, a process that is calledtheoretical sampling– a deductive component of grounded theory. Selective coding delimits the scope of the study (Glaser, 1998). Grounded theory is less concerned with data accuracy than with generating concepts that are abstract and general. Selective coding could be conducted by reviewing old field notes and/or memos that have already been coded once at an earlier stage or by coding newly gathered data. Strauss and Corbin proposed a "coding paradigm" that involved "conditions, context, action/interactional strategies and consequences."[19] Theoretical memoingis "the core stage of grounded theory methodology" (Glaser 1998). "Memos are the theorizing write-up of ideas about substantive codes and their theoretically coded relationships as they emerge during coding, collecting and analyzing data, and during memoing" (Glaser 1998). Memoing is also important in the early phase of a grounded theory study (e.g., during open coding). In memoing, the researcher conceptualizes incidents, helping the process along. Theoretical memos can be anything written or drawn in the context of the constant comparative method, an important component of grounded theory.[23]Memosare important tools to both refine and keep track of ideas that develop when researchers compare incidents to incidents and then concepts to concepts in the evolving theory. In memos, investigators develop ideas about naming concepts and relating them to each other. They examine relationships between concepts with the help of fourfold tables, diagrams, figures, or other means generating comparative power. Without memoing, the theory is superficial and the concepts generated are not very original. 
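To give a concrete, deliberately simplified picture of how open and selective coding organize material, the following is a minimal illustrative sketch. The incidents, code labels, and the keyword heuristic used for selective coding are invented for this example; real grounded theory coding is an interpretive human activity, not an automated one.

```python
# Toy illustration of open coding with constant comparison.
# All names and the keyword-overlap heuristic are hypothetical;
# grounded theory coding itself is done by the researcher, not by code.
from collections import defaultdict

codes = defaultdict(list)   # code label -> list of incidents (text fragments)
memos = []                  # running comparison/theoretical memos

def open_code(incident: str, label: str) -> None:
    """File an incident under a substantive code, comparing it with
    incidents already filed there (constant comparison)."""
    for earlier in codes[label]:
        memos.append(f"Compare '{incident}' with '{earlier}' under code '{label}'")
    codes[label].append(incident)

def selective_code(core: str) -> dict:
    """Once a tentative core variable is chosen, keep only codes
    judged relevant to it (here crudely: sharing a word with the core label)."""
    return {lbl: incs for lbl, incs in codes.items()
            if set(lbl.split()) & set(core.split())}

open_code("patient double-checks discharge instructions", "managing uncertainty")
open_code("patient phones ward twice after discharge", "managing uncertainty")
open_code("nurse repeats dosage verbally", "reassuring communication")

print(selective_code("managing uncertainty"))
print(len(memos), "comparison memo(s) written")
```

The point of the sketch is only the bookkeeping: incidents accumulate under codes, every new incident is compared with those already coded, and selective coding narrows attention to codes related to the core variable.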
Memoing works as an accumulation of written ideas into a bank of ideas about concepts and how they relate to each other. This bank contains rich parts of what will later be the written theory. Memoing is total creative freedom without rules of writing, grammar or style (Glaser 1998). The writing must be an instrument for outflow of ideas, and nothing else. When people write memos, the ideas become more realistic, being converted from thoughts into words, and thus ideas communicable to the afterworld. In grounded theory thepreconscious processingthat occurs when coding and comparing is recognized. The researcher is encouraged to register ideas about the ongoing study that eventually pop up in everyday situations, and awareness of theserendipityof the method is also necessary to achieve good results. Building on the work of sociologistRobert K. Merton,[24]his idea ofserendipity patternshas come to be applied in grounded theory research. Serendipity patterns refer to fairly common experiences when observing the world. Serendipity patterns include unanticipated and anomalous events. These patterns can become the impetus for the development of a new theory or the extension of an existing theory. Merton also coauthored (with Elinor Barber)The Travels and Adventures of Serendipity,[25]which traces the origins and uses of the word "serendipity" since it was coined. The book is "a study in sociological semantics and the sociology of science," as the subtitle declares. Merton and Barber further develop the idea of serendipity as scientific "method," as contrasted with purposeful discovery by experiment or retrospective prophecy. In the next step memos are sorted, which is the key to formulating a theory that could be clearly presented to others.Sortingputs fractured data back together. During sorting new ideas can emerge. The new ideas can, in turn, be recorded in new memos, giving rise to the memo-on-memos phenomenon. Sorting memos can help generate theory that explains the main action in the studied area. A theory written from unsorted memos may be rich in ideas but the connections among concepts are likely to be weak. Writing upthe sorted memos follows the sorting process. At this stage, a written theory takes shape. The different categories are now related to each other and the core variable. The theory should encompass the important emergent concepts and their careful description. The researcher may also construct tables and/or figures to optimize readability. In a laterrewritingstage, the relevant scholarly literature is woven into the theory. Finally, the theory is edited for style and language. Eventually, the researcher submits the resulting scholarly paper for publication. Most books on grounded theory do not explain what methodological details should be included in a scholarly article; however, some guidelines have been suggested.[26] Grounded theory gives the researcher freedom to generate new concepts in explaining human behavior.[7]Research based on grounded theory, however, follows a number of rules. These rules make grounded theory different from most other methods employed in qualitative research. No pre-research literature review.Reviewing the literature of the area under study is thought to generate preconceptions about what to find. The researcher is said to become sensitized to concepts in the extant literature. According to grounded theory, theoretical concepts should emerge from the data unsullied by what has come before. 
The literature should only be read at the sorting stage and be treated as more data to code and compared with what has already been coded and generated. No talk.Talking about the theory before it is written up drains the researcher of motivational energy. Talking can either render praise or criticism. Both can diminish the motivational drive to write memos that develop and refine the concepts and the theory.[16]Positive feedback, according to Glaser, can make researchers content with what they have and negative feedback hampers their self-confidence. Talking about the grounded theory should be restricted to persons capable of helping the researcher without influencing their final judgments.[16] Different approaches to grounded theory reflect different views on how preexisting theory should be used in research. InThe Discovery of Grounded Theory, Glaser and Strauss[7]advanced the view that, prior to conducting research, investigators should come to an area of study without any preconceived ideas regarding relevant concepts and hypotheses. In this way, the investigator, according to Glaser and Strauss, avoids imposing preconceived categories upon the research endeavor. Glaser later attempted to address the tension between not reading and reading the literature before a qualitative study begins.[17]Glaser raised the issue of the use of a literature review to enhance the researchers' "theoretical sensitivity," i.e., their ability to identify a grounded theory that is a good fit to the data. He suggested that novice researchers might delay reading the literature to avoid undue influence on their handling of the qualitative data they collect. Glaser believed that reading the relevant research literature (substantive literature) could lead investigators to apply preexisting concepts to the data, rather than interpret concepts emerging from the data. He, however, encouraged a broad reading of the literature to develop theoretical sensitivity. Strauss felt that reading relevant material could enhance the researcher's theoretical sensitivity.[27] There has been some divergence in the methodology of grounded theory. Over time, Glaser and Strauss came to disagree about methodology and other qualitative researchers have also modified ideas linked to grounded theory.[9]This divergence occurred most obviously after Strauss publishedQualitative Analysis for Social Scientists(1987).[28]In 1990, Strauss, together with Juliet Corbin, publishedBasics of Qualitative Research: Grounded Theory Procedures and Techniques.[19]The publication of the book was followed by a rebuke by Glaser (1992), who set out, chapter by chapter, to highlight the differences in what he argued was the original grounded theory and why what Strauss and Corbin had written was not grounded theory in its "intended form."[11]This divergence in methodology is a subject of much academic debate, which Glaser (1998) calls a "rhetorical wrestle".[16]Glaser continues to write about and teach the original grounded theory method. Grounded theory methods, according to Glaser, emphasizeinductionor emergence, and the individual researcher's creativity within a clear stagelike framework. 
By contrast, Strauss has been more interested in validation criteria and a systematic approach.[29] According to Kelle (2005), "the controversy between Glaser and Strauss boils down to the question of whether the researcher uses a well-defined "coding paradigm" and always looks systematically for "causal conditions," "phenomena/context, intervening conditions, action strategies," and "consequences" in the data (Straussian), or whether theoretical codes are employed as they emerge in the same way as substantive codes emerge, but drawing on a huge fund of "coding families" (Glaserian)."[29] A later version of grounded theory called constructivist grounded theory, which is rooted in pragmatism and constructivist epistemology, assumes that neither data nor theories are discovered, but are constructed by researchers as a result of their interactions with the field and study participants.[30] Proponents of this approach include Kathy Charmaz[31][32][33][34] and Antony Bryant.[35] In an interview, Charmaz justified her approach as follows: "Grounded theory methodology had been under attack. The postmodern critique of qualitative research had weakened its legitimacy and narrative analysts criticized grounded theory methodology for fragmenting participants' stories. Hence, grounded theory methodology was beginning to be seen as a dated methodology and some researchers advocated abandoning it. I agreed with much of the epistemological critique of the early versions of grounded theory methodology by people like Kenneth Gergen. However, I had long thought that the strategies of grounded theory methodology, including coding, memo writing, and theoretical sampling were excellent methodological tools. I saw no reason to discard these tools and every reason to shift the epistemological grounds on which researchers used them."[36] Data are co-constructed by the researcher and study participants, and colored by the researcher's perspectives, values, privileges, positions, interactions, and geographical locations.[citation needed] This position takes a middle ground between the realist and postmodernist positions by assuming an "obdurate reality" at the same time as it assumes multiple perspectives on that reality. Within the framework of this approach, a literature review prior to data collection is used in a productive and data-sensitive way without forcing the conclusions contained in the review on the collected data.[37][38] More recently, a critical realist version of grounded theory has been developed and applied in research devoted to developing mechanism-based explanations for social phenomena.[39][40][41][42] Critical realism (CR) is a philosophical approach associated with Roy Bhaskar, who argued for a structured and differentiated account of reality in which difference, stratification, and change are central.[citation needed] A critical realist grounded theory produces an explanation through an examination of the three domains of social reality: the "real," as the domain of structures and mechanisms; the "actual," as the domain of events; and the "empirical," as the domain of experiences and perceptions.[citation needed] Grounded theory has been "shaped by the desire to discover social and psychological processes."[43] Grounded theory, however, is not restricted to these two areas of study. As Gibbs points out, the process of grounded theory can be and has been applied to a number of different disciplines, including medicine, law, and economics.
The reach of grounded theory has extended to nursing, business, and education.[citation needed] Grounded theory focuses more on procedures than on the discipline to which grounded theory is applied. Rather than being limited to a particular discipline or form of data collection, grounded theory has been found useful across multiple research areas.[44] The benefits of using grounded theory include ecological validity, the discovery of novel phenomena, and parsimony.[citation needed] Ecological validity refers to the extent to which research findings accurately represent real-world settings. Research based on grounded theories is often thought to be ecologically valid because the research is especially close to the real-world participants. Although the constructs in a grounded theory are appropriately abstract (since their goal is to explain other, similar phenomena), they are context-specific, detailed, and tightly connected to the data.[citation needed] Because grounded theories are not tied to any preexisting theory, grounded theories are often fresh and new and have the potential for novel discoveries in science and other areas.[citation needed] Parsimony refers to a heuristic often used in science that suggests that when there are competing hypotheses that make the same prediction, the hypothesis that relies on the fewest assumptions is preferable. Grounded theories aim to provide practical and simple explanations of complex phenomena by attempting to link those phenomena to abstract constructs and hypothesizing relationships among those constructs.[citation needed] Grounded theory methods have earned their place as a standard social research methodology and have influenced researchers from varied disciplines and professions.[50] Grounded theory has been criticized based on the scientific idea of what a theory is. Thomas and James,[51] for example, distinguish the ideas of generalization, overgeneralization, and theory, noting that some scientific theories explain a broad range of phenomena succinctly, which grounded theory does not. Thomas and James observed that "The problems come when too much is claimed for [a theory], simply because it is empirical; problems come in distinguishing generalization from over-generalization, narrative from induction." They also write that grounded theory advocates sometimes claim to find causal implications when in truth they only find an association. There has been criticism of grounded theory on the grounds that it opens the door to letting too much researcher subjectivity enter.[51][52] The authors just cited suggest that it is impossible to free oneself of preconceptions in the collection and analysis of data in the way that Glaser and Strauss assert is necessary. Popper also undermines grounded theory's idea that hypotheses arise from data unaffected by prior expectations.[53] Popper wrote that "objects can be classified and can become similar or dissimilar, only in this way--by being related to needs and interests." On this view, observation is always selective, based on past research and the investigator's goals and motives, and preconceptionless research is therefore impossible.
Critics also note that grounded theory fails to mitigate participant reactivity and has the potential for an investigator steeped in grounded theory to over-identify with one or more study participants.[52] Although they suggest that one element of grounded theory worth keeping is the constant comparative method, Thomas and James point to the formulaic nature of grounded theory methods and the lack of congruence of those methods with open and creative interpretation, which ought to be the hallmark of qualitative inquiry.[51] The grounded theory approach can be criticized as being too empiricist, i.e., as relying too heavily on the empirical data. Grounded theory considers fieldwork data as the source of theory. Thus the theories that emerge from new fieldwork are set against the theories that preceded the fieldwork.[54] Strauss's version of grounded theory has also been criticized in several other ways.[55] Grounded theory was developed during an era when qualitative methods were often considered unscientific. But as the academic rigor of qualitative research became known, this type of research approach achieved wide acceptance. In American academia, qualitative research is often equated with grounded theory methods. Such equating of most qualitative methods with grounded theory has sometimes been criticized by qualitative researchers[who?] who take different approaches to methodology (for example, in traditional ethnography, narratology, and storytelling). One alternative to grounded theory is engaged theory. Engaged theory places equal emphasis on conducting on-the-ground empirical research but links that research to analytical processes of empirical generalization. Unlike grounded theory, engaged theory derives from the tradition of critical theory. Engaged theory locates analytical processes within a larger theoretical framework that specifies different levels of abstraction, allowing investigators to make claims about the wider world.[57] Braun and Clarke[58] regard thematic analysis as having fewer theoretical assumptions than grounded theory and as usable within several theoretical frameworks. They write that in comparison to grounded theory, thematic analysis is freer because it is not linked to any preexisting framework for making sense of qualitative data. Braun and Clarke, however, concede that there is a degree of similarity between grounded theory and thematic analysis but prefer thematic analysis.
https://en.wikipedia.org/wiki/Grounded_theory
Inductive logic programming(ILP) is a subfield ofsymbolic artificial intelligencewhich useslogic programmingas a uniform representation for examples, background knowledge and hypotheses. The term "inductive" here refers tophilosophical(i.e. suggesting a theory to explain observed facts) rather thanmathematical(i.e. proving a property for all members of a well-ordered set) induction. Given an encoding of the known background knowledge and a set of examples represented as a logicaldatabaseof facts, an ILP system will derive a hypothesised logic program whichentailsall the positive and none of the negative examples. Inductive logic programming is particularly useful inbioinformaticsandnatural language processing. Building on earlier work onInductive inference,Gordon Plotkinwas the first to formalise induction in aclausalsetting around 1970, adopting an approach of generalising from examples.[1][2]In 1981,Ehud Shapirointroduced several ideas that would shape the field in his new approach of model inference, an algorithm employing refinement and backtracing to search for a complete axiomatisation of given examples.[1][3]His first implementation was theModel Inference Systemin 1981:[4][5]aPrologprogram that inductively inferredHorn clauselogic programs from positive and negative examples.[1]The termInductive Logic Programmingwas first introduced in a paper byStephen Muggletonin 1990, defined as the intersection of machine learning and logic programming.[1]Muggleton and Wray Buntine introduced predicate invention andinverse resolutionin 1988.[1][6] Several inductive logic programming systems that proved influential appeared in the early 1990s.FOIL, introduced byRoss Quinlanin 1990[7]was based on upgradingpropositionallearning algorithmsAQandID3.[8]Golem, introduced by Muggleton and Feng in 1990, went back to a restricted form of Plotkin's least generalisation algorithm.[8][9]TheProgolsystem, introduced by Muggleton in 1995, first implemented inverse entailment, and inspired many later systems.[8][10][11]Aleph, a descendant of Progol introduced by Ashwin Srinivasan in 2001, is still one of the most widely used systems as of 2022[update].[10] At around the same time, the first practical applications emerged, particularly inbioinformatics, where by 2000 inductive logic programming had been successfully applied to drug design, carcinogenicity and mutagenicity prediction, and elucidation of the structure and function of proteins.[12]Unlike the focus onautomatic programminginherent in the early work, these fields used inductive logic programming techniques from a viewpoint ofrelational data mining. The success of those initial applications and the lack of progress in recovering larger traditional logic programs shaped the focus of the field.[13] Recently, classical tasks from automated programming have moved back into focus, as the introduction of meta-interpretative learning makes predicate invention and learning recursive programs more feasible. 
This technique was pioneered with the Metagol system introduced by Muggleton, Dianhuan Lin, Niels Pahlavi and Alireza Tamaddoni-Nezhad in 2014.[14] This allows ILP systems to work with fewer examples, and brought successes in learning string transformation programs, answer set grammars and general algorithms.[15] Inductive logic programming has adopted several different learning settings, the most common of which are learning from entailment and learning from interpretations.[16] In both cases, the input is provided in the form of background knowledge $B$, a logical theory (commonly in the form of clauses used in logic programming), as well as positive and negative examples, denoted $E^{+}$ and $E^{-}$ respectively. The output is given as a hypothesis $H$, itself a logical theory that typically consists of one or more clauses. The two settings differ in the format of examples presented. As of 2022, learning from entailment is by far the most popular setting for inductive logic programming.[16] In this setting, the positive and negative examples are given as finite sets $E^{+}$ and $E^{-}$ of positive and negated ground literals, respectively. A correct hypothesis $H$ is a set of clauses satisfying the following requirements, where the turnstile symbol $\models$ stands for logical entailment:[16][17][18]

\[
\begin{array}{ll}
\text{Completeness:} & B \cup H \models E^{+} \\
\text{Consistency:}  & B \cup H \cup E^{-} \not\models \mathit{false}
\end{array}
\]

Completeness requires any generated hypothesis $H$ to explain all positive examples $E^{+}$, and consistency forbids generation of any hypothesis $H$ that is inconsistent with the negative examples $E^{-}$, both given the background knowledge $B$. In Muggleton's setting of concept learning,[19] "completeness" is referred to as "sufficiency", and "consistency" as "strong consistency". Two further conditions are added: "Necessity", which postulates that $B$ does not entail $E^{+}$, does not impose a restriction on $H$, but forbids the generation of a hypothesis as long as the positive facts are explainable without it. "Weak consistency", which states that no contradiction can be derived from $B \land H$, forbids generation of any hypothesis $H$ that contradicts the background knowledge $B$. Weak consistency is implied by strong consistency; if no negative examples are given, both requirements coincide. Weak consistency is particularly important in the case of noisy data, where completeness and strong consistency cannot be guaranteed.[19] In learning from interpretations, the positive and negative examples are given as a set of complete or partial Herbrand structures, each of which is itself a finite set of ground literals. Such a structure $e$ is said to be a model of the set of clauses $B \cup H$ if for any substitution $\theta$ and any clause $\mathrm{head} \leftarrow \mathrm{body}$ in $B \cup H$ such that $\mathrm{body}\,\theta \subseteq e$, $\mathrm{head}\,\theta \subseteq e$ also holds.
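To make the learning-from-entailment conditions concrete, the following is a minimal sketch, not a real ILP system: it only verifies that a given candidate hypothesis satisfies completeness and consistency over a small, invented family example. The relations, the clause encoding, and the naive forward-chaining helper are all assumptions made for illustration; actual ILP systems search for the hypothesis rather than merely checking one.

```python
# Minimal sketch: check the completeness and consistency conditions for a
# candidate hypothesis H, with B and H given as function-free Horn clauses.
# The relations and the hypothesis are made up for illustration.
from itertools import product

# A clause is (head, [body atoms]); an atom is (predicate, (args...)),
# where arguments starting with an uppercase letter are variables.
B = [(("parent", ("ann", "mary")), []),
     (("parent", ("ann", "tom")), []),
     (("female", ("ann",)), []),
     (("female", ("mary",)), [])]

H = [(("daughter", ("X", "Y")), [("female", ("X",)), ("parent", ("Y", "X"))])]

E_pos = [("daughter", ("mary", "ann"))]
E_neg = [("daughter", ("tom", "ann"))]

def is_var(t):
    return t[0].isupper()

def consequences(clauses):
    """Naive forward chaining to the least Herbrand model of a definite program."""
    facts = {head for head, body in clauses if not body}
    consts = {a for _, args in facts for a in args}
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if not body:
                continue
            vars_ = sorted({a for _, args in [head] + body for a in args if is_var(a)})
            for binding in product(consts, repeat=len(vars_)):
                sub = dict(zip(vars_, binding))
                ground = lambda atom: (atom[0], tuple(sub.get(a, a) for a in atom[1]))
                if all(ground(b) in facts for b in body):
                    g = ground(head)
                    if g not in facts:
                        facts.add(g)
                        changed = True
    return facts

model = consequences(B + H)
complete = all(e in model for e in E_pos)        # B ∪ H ⊨ E+
consistent = all(e not in model for e in E_neg)  # for definite programs: no negative example is entailed
print("complete:", complete, "consistent:", consistent)
```

Under this toy encoding the hypothesis daughter(X,Y) ← female(X), parent(Y,X) is both complete and consistent with respect to the given examples.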
The goal is then to output a hypothesis that is complete, meaning every positive example is a model of $B \cup H$, and consistent, meaning that no negative example is a model of $B \cup H$.[16] An inductive logic programming system is a program that takes as input logic theories $B, E^{+}, E^{-}$ and outputs a correct hypothesis $H$ with respect to those theories. A system is complete if and only if, for any input logic theories $B, E^{+}, E^{-}$, any correct hypothesis $H$ with respect to these input theories can be found with its hypothesis search procedure. Inductive logic programming systems can be roughly divided into two classes, search-based and meta-interpretative systems. Search-based systems exploit the fact that the space of possible clauses forms a complete lattice under the subsumption relation, where one clause $C_{1}$ subsumes another clause $C_{2}$ if there is a substitution $\theta$ such that $C_{1}\theta$, the result of applying $\theta$ to $C_{1}$, is a subset of $C_{2}$. This lattice can be traversed either bottom-up or top-down. Bottom-up methods to search the subsumption lattice have been investigated since Plotkin's first work on formalising induction in clausal logic in 1970.[1][20] Techniques used include least general generalisation, based on anti-unification, and inverse resolution, based on inverting the resolution inference rule. A least general generalisation algorithm takes as input two clauses $C_{1}$ and $C_{2}$ and outputs the least general generalisation of $C_{1}$ and $C_{2}$, that is, a clause $C$ that subsumes $C_{1}$ and $C_{2}$, and that is subsumed by every other clause that subsumes $C_{1}$ and $C_{2}$. The least general generalisation can be computed by first computing all selections from $C_{1}$ and $C_{2}$, which are pairs of literals $(L, M) \in (C_{1}, C_{2})$ sharing the same predicate symbol and negated/unnegated status. Then, the least general generalisation is obtained as the disjunction of the least general generalisations of the individual selections, which can be obtained by first-order syntactical anti-unification.[21] To account for background knowledge, inductive logic programming systems employ relative least general generalisations, which are defined in terms of subsumption relative to a background theory. In general, such relative least general generalisations are not guaranteed to exist; however, if the background theory $B$ is a finite set of ground literals, then the negation of $B$ is itself a clause. In this case, a relative least general generalisation can be computed by disjoining the negation of $B$ with both $C_{1}$ and $C_{2}$ and then computing their least general generalisation as before.[22] Relative least general generalisations are the foundation of the bottom-up system Golem.[8][9] Inverse resolution is an inductive reasoning technique that involves inverting the resolution operator. Inverse resolution takes information about the resolvent of a resolution step to compute possible resolving clauses. Two types of inverse resolution operator are in use in inductive logic programming: V-operators and W-operators.
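The following is a small sketch of first-order anti-unification applied to two atoms, in the spirit of Plotkin-style least general generalisation. The nested-tuple term representation and the variable-naming scheme are assumptions made for this example; a full clause-level lgg would additionally pair up compatible literal selections as described above.

```python
# Sketch of least general generalisation (lgg) of two atoms by
# first-order anti-unification. Terms are nested tuples (functor, arg1, ...)
# or plain strings (constants); this representation is an assumption.

def lgg(t1, t2, table=None, counter=None):
    """Anti-unify two terms: identical subterms are kept, matching compound
    terms are generalised argument-wise, and each distinct mismatched pair
    is mapped to one fresh variable (the same pair always gets the same variable)."""
    if table is None:
        table, counter = {}, [0]
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(lgg(a, b, table, counter) for a, b in zip(t1[1:], t2[1:]))
    if (t1, t2) not in table:
        table[(t1, t2)] = f"V{counter[0]}"
        counter[0] += 1
    return table[(t1, t2)]

# lgg of the atoms p(f(a), a, b) and p(f(c), c, b):
a1 = ("p", ("f", "a"), "a", "b")
a2 = ("p", ("f", "c"), "c", "b")
print(lgg(a1, a2))   # ('p', ('f', 'V0'), 'V0', 'b')
```

Reusing the same variable for the repeated mismatch (a, c) is what makes the result least general: p(f(V0), V0, b) is more specific than, say, p(f(V0), V1, b), yet still subsumes both input atoms.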
A V-operator takes clauses $R$ and $C_{1}$ as input and returns a clause $C_{2}$ such that $R$ is the resolvent of $C_{1}$ and $C_{2}$. A W-operator takes two clauses $R_{1}$ and $R_{2}$ and returns three clauses $C_{1}$, $C_{2}$ and $C_{3}$ such that $R_{1}$ is the resolvent of $C_{1}$ and $C_{2}$ and $R_{2}$ is the resolvent of $C_{2}$ and $C_{3}$.[23] Inverse resolution was first introduced by Stephen Muggleton and Wray Buntine in 1988 for use in the inductive logic programming system Cigol.[6] By 1993, this spawned a surge of research into inverse resolution operators and their properties.[23] The ILP systems Progol,[11] Hail[24] and Imparo[25] find a hypothesis $H$ using the principle of inverse entailment[11] for theories $B$, $E$, $H$:

\[
B \land H \models E \iff B \land \neg E \models \neg H.
\]

First they construct an intermediate theory $F$, called a bridge theory, satisfying the conditions $B \land \neg E \models F$ and $F \models \neg H$. Then, as $H \models \neg F$, they generalize the negation of the bridge theory $F$ with anti-entailment.[26] However, the operation of anti-entailment is computationally more expensive, since it is highly nondeterministic. Therefore, an alternative hypothesis search can be conducted using the inverse subsumption (anti-subsumption) operation instead, which is less non-deterministic than anti-entailment. Questions arise about the completeness of the hypothesis search procedure of a specific inductive logic programming system. For example, the Progol hypothesis search procedure based on the inverse entailment inference rule is not complete, as shown by Yamamoto's example.[27] On the other hand, Imparo is complete both by its anti-entailment procedure[28] and by its extended inverse subsumption[29] procedure. Rather than explicitly searching the hypothesis graph, meta-interpretive or meta-level systems encode the inductive logic programming problem as a meta-level logic program which is then solved to obtain an optimal hypothesis. Formalisms used to express the problem specification include Prolog and answer set programming, with existing Prolog systems and answer set solvers used for solving the constraints.[30] An example of a Prolog-based system is Metagol, which is based on a meta-interpreter in Prolog, while ASPAL and ILASP are based on an encoding of the inductive logic programming problem in answer set programming.[30] Evolutionary algorithms in ILP use a population-based approach to evolve hypotheses, refining them through selection, crossover, and mutation. Methods like EvoLearner have been shown to outperform traditional approaches on structured machine learning benchmarks.[31] Probabilistic inductive logic programming adapts the setting of inductive logic programming to learning probabilistic logic programs. It can be considered as a form of statistical relational learning within the formalism of probabilistic logic programming.[34][35] The goal of probabilistic inductive logic programming is to find a probabilistic logic program $H$ such that the probability of the positive examples according to $H \cup B$ is maximized and the probability of the negative examples is minimized.[35] This problem has two variants: parameter learning and structure learning.
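As a small worked illustration of inverse entailment, consider the following toy instance; the relations and the particular clauses are invented for this sketch and are not drawn from any of the systems cited above.

```latex
% Toy instance of inverse entailment (relations made up for illustration).
\[
B = \{\, \mathrm{parent}(\mathrm{ann},\mathrm{mary}),\;
         \mathrm{female}(\mathrm{mary}) \,\},
\qquad
E = \mathrm{daughter}(\mathrm{mary},\mathrm{ann}).
\]
% A bridge ("bottom") clause collects ground literals entailed by B \land \neg E:
\[
\bot \;=\; \mathrm{daughter}(\mathrm{mary},\mathrm{ann}) \leftarrow
           \mathrm{parent}(\mathrm{ann},\mathrm{mary}) \wedge
           \mathrm{female}(\mathrm{mary}).
\]
% Generalising \bot (replacing constants by variables) gives a hypothesis
\[
H \;=\; \mathrm{daughter}(X,Y) \leftarrow
        \mathrm{parent}(Y,X) \wedge \mathrm{female}(X),
\]
% which satisfies B \wedge H \models E, consistent with the equivalence
% B \wedge H \models E \iff B \wedge \neg E \models \neg H.
```

The bridge clause plays the role of the theory $F$ above: it is entailed by $B \land \neg E$, and any clause that subsumes it (such as $H$) entails the example given the background knowledge.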
In the former, one is given the structure (the clauses) of $H$ and the goal is to infer the probability annotations of the given clauses, while in the latter the goal is to infer both the structure and the probability parameters of $H$. Just as in classical inductive logic programming, the examples can be given either directly as examples or as (partial) interpretations.[35] Parameter learning for languages following the distribution semantics has been performed by using an expectation-maximisation algorithm or by gradient descent. An expectation-maximisation algorithm consists of a cycle in which the steps of expectation and maximization are repeatedly performed. In the expectation step, the distribution of the hidden variables is computed according to the current values of the probability parameters, while in the maximisation step, the new values of the parameters are computed. Gradient descent methods compute the gradient of the target function and iteratively modify the parameters moving in the direction of the gradient.[35] Structure learning was pioneered by Daphne Koller and Avi Pfeffer in 1997,[36] where the authors learn the structure of first-order rules with associated probabilistic uncertainty parameters. Their approach involves generating the underlying graphical model in a preliminary step and then applying expectation-maximisation.[35] In 2008, De Raedt et al. presented an algorithm for performing theory compression on ProbLog programs, where theory compression refers to a process of removing as many clauses as possible from the theory in order to maximize the probability of a given set of positive and negative examples. No new clause can be added to the theory.[35][37] In the same year, Meert, W. et al. introduced a method for learning parameters and structure of ground probabilistic logic programs by considering the Bayesian networks equivalent to them and applying techniques for learning Bayesian networks.[38][35] ProbFOIL, introduced by De Raedt and Ingo Thon in 2010, combined the inductive logic programming system FOIL with ProbLog. Logical rules are learned from probabilistic data in the sense that both the examples themselves and their classifications can be probabilistic. The set of rules has to allow one to predict the probability of the examples from their description. In this setting, the parameters (the probability values) are fixed and the structure has to be learned.[39][35] In 2011, Elena Bellodi and Fabrizio Riguzzi introduced SLIPCASE, which performs a beam search among probabilistic logic programs by iteratively refining probabilistic theories and optimizing the parameters of each theory using expectation-maximisation.[40] Its extension SLIPCOVER, proposed in 2014, uses bottom clauses generated as in Progol to guide the refinement process, thus reducing the number of revisions and exploring the search space more effectively. Moreover, SLIPCOVER separates the search for promising clauses from that of the theory: the space of clauses is explored with a beam search, while the space of theories is searched greedily.[41][35] This article incorporates text from a free content work. Licensed under CC BY 4.0 (license statement/permission). Text taken from A History of Probabilistic Inductive Logic Programming, Fabrizio Riguzzi, Elena Bellodi and Riccardo Zese, Frontiers Media.
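To illustrate the gradient-based flavour of parameter learning described above in the simplest possible case, the sketch below fits the probability of a single probabilistic fact to observed outcomes by gradient ascent on the log-likelihood. The data and the single-fact program are invented; real systems in this family learn the parameters of whole probabilistic logic programs and couple the update step with probabilistic inference.

```python
# Toy sketch of parameter learning for one probabilistic fact p::burglary,
# by gradient ascent on the log-likelihood of positive and negative examples.
# The observations are invented for this illustration.
observations = [True, False, True, True, False, True, True, False]
p = 0.5       # initial probability parameter
lr = 0.05     # learning rate

for _ in range(2000):
    # d/dp log L = (1/p) per positive example - (1/(1-p)) per negative example
    grad = sum(1.0 / p if obs else -1.0 / (1.0 - p) for obs in observations)
    p = min(max(p + lr * grad / len(observations), 1e-6), 1 - 1e-6)

print(round(p, 3))   # converges to the empirical frequency, 5/8 = 0.625
```

In this degenerate one-parameter case the maximum-likelihood estimate is just the observed frequency, which is exactly what the gradient updates converge to.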
https://en.wikipedia.org/wiki/Inductive_logic_programming
Pattern theory, formulated by Ulf Grenander, is a mathematical formalism to describe knowledge of the world as patterns. It differs from other approaches to artificial intelligence in that it does not begin by prescribing algorithms and machinery to recognize and classify patterns; rather, it prescribes a vocabulary to articulate and recast the pattern concepts in precise language. Broad in its mathematical coverage, Pattern Theory spans algebra and statistics, as well as local topological and global entropic properties. In addition to the new algebraic vocabulary, its statistical approach is novel in its aims. The Brown University Pattern Theory Group was formed in 1972 by Ulf Grenander.[1] Many mathematicians are currently working in this group, noteworthy among them being the Fields Medalist David Mumford.[2] Mumford regards Grenander as his "guru" in Pattern Theory.[citation needed]
https://en.wikipedia.org/wiki/Pattern_theory
A schema (pl.: schemata) is a template in computer science used in the field of genetic algorithms that identifies a subset of strings with similarities at certain string positions. Schemata are a special case of cylinder sets, forming a basis for a product topology on strings.[1] In other words, schemata can be used to generate a topology on a space of strings. For example, consider binary strings of length 6. The schema 1**0*1 describes the set of all words of length 6 with 1's at the first and sixth positions and a 0 at the fourth position. The * is a wildcard symbol, which means that positions 2, 3 and 5 can have a value of either 1 or 0. The order of a schema is defined as the number of fixed positions in the template, while the defining length $\delta(H)$ is the distance between the first and last specific positions. The order of 1**0*1 is 3 and its defining length is 5. The fitness of a schema is the average fitness of all strings matching the schema. The fitness of a string is a measure of the value of the encoded problem solution, as computed by a problem-specific evaluation function. The length of a schema $H$, called $N(H)$, is defined as the total number of nodes in the schema. $N(H)$ is also equal to the number of nodes in the programs matching $H$.[2] If the child of an individual that matches schema $H$ does not itself match $H$, the schema is said to have been disrupted.[2] In evolutionary computing such as genetic algorithms and genetic programming, propagation refers to the inheritance of characteristics of one generation by the next. For example, a schema is propagated if individuals in the current generation match it and so do those in the next generation. Those in the next generation may be (but do not have to be) children of parents who matched it. Recently, schemata have been studied using order theory.[3] Two basic operators are defined for schemata: expansion and compression. The expansion maps a schema onto the set of words which it represents, while the compression maps a set of words onto a schema. In the following definitions, $\Sigma$ denotes an alphabet, $\Sigma^{l}$ denotes all words of length $l$ over the alphabet $\Sigma$, and $\Sigma_{*}$ denotes the alphabet $\Sigma$ with the extra symbol $*$. $\Sigma_{*}^{l}$ denotes all schemata of length $l$ over the alphabet $\Sigma_{*}$, as well as the empty schema $\epsilon_{*}$. For any schema $s \in \Sigma_{*}^{l}$, the following operator ${\uparrow}s$, called the expansion of $s$, maps $s$ to a subset of words in $\Sigma^{l}$:

\[
{\uparrow}s := \{\, b \in \Sigma^{l} \mid b_{i} = s_{i} \text{ or } s_{i} = * \text{ for each } i \in \{1, \dots, l\} \,\}
\]

Here the subscript $i$ denotes the character at position $i$ in a word or schema. When $s = \epsilon_{*}$, then ${\uparrow}s = \emptyset$. More simply put, ${\uparrow}s$ is the set of all words in $\Sigma^{l}$ that can be made by exchanging the $*$ symbols in $s$ with symbols from $\Sigma$.
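The basic schema quantities are easy to compute directly; the following is a small sketch over binary strings using the running example 1**0*1. The helper names are made up for this example.

```python
# Sketch of basic schema computations for binary strings.

def matches(schema: str, word: str) -> bool:
    """A word matches a schema if it agrees on every fixed (non-*) position."""
    return len(schema) == len(word) and all(
        s == '*' or s == w for s, w in zip(schema, word))

def order(schema: str) -> int:
    """Order: the number of fixed positions."""
    return sum(1 for s in schema if s != '*')

def defining_length(schema: str) -> int:
    """Defining length: distance between the first and last fixed positions."""
    fixed = [i for i, s in enumerate(schema) if s != '*']
    return fixed[-1] - fixed[0] if fixed else 0

H = "1**0*1"
print(matches(H, "110011"), matches(H, "010011"))  # True False
print(order(H), defining_length(H))                # 3 5
```

The printed values reproduce the figures given in the text: 1**0*1 has order 3 and defining length 5.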
For example, if $\Sigma = \{0,1\}$, $l = 3$ and $s = 10{*}$, then ${\uparrow}s = \{100, 101\}$. Conversely, for any $A \subseteq \Sigma^{l}$ we define ${\downarrow}A$, called the compression of $A$, which maps $A$ onto a schema $s \in \Sigma_{*}^{l}$: ${\downarrow}A := s$, where $s$ is a schema of length $l$ such that the symbol at position $i$ in $s$ is determined in the following way: if $x_{i} = y_{i}$ for all $x, y \in A$ then $s_{i} = x_{i}$, otherwise $s_{i} = *$. If $A = \emptyset$ then ${\downarrow}A = \epsilon_{*}$. One can think of this operator as stacking up all the items in $A$: if all elements in a column are equivalent, the symbol at that position in $s$ takes this value, otherwise there is a wildcard symbol. For example, let $A = \{100, 000, 010\}$; then ${\downarrow}A = {*}{*}0$. Schemata can be partially ordered. For any $a, b \in \Sigma_{*}^{l}$ we say $a \leq b$ if and only if ${\uparrow}a \subseteq {\uparrow}b$. It follows that $\leq$ is a partial ordering on a set of schemata from the reflexivity, antisymmetry and transitivity of the subset relation. For example, $\epsilon_{*} \leq 11 \leq 1{*} \leq {*}{*}$. This is because ${\uparrow}\epsilon_{*} \subseteq {\uparrow}11 \subseteq {\uparrow}1{*} \subseteq {\uparrow}{*}{*}$, since these expansions are $\emptyset \subseteq \{11\} \subseteq \{11, 10\} \subseteq \{11, 10, 01, 00\}$. The compression and expansion operators form a Galois connection, where ${\downarrow}$ is the lower adjoint and ${\uparrow}$ the upper adjoint.[3] For a set $A \subseteq \Sigma^{l}$, we call the process of calculating the compression on each subset of $A$, that is $\{\, {\downarrow}X \mid X \subseteq A \,\}$, the schematic completion of $A$, denoted $\mathcal{S}(A)$.[3] For example, let $A = \{110, 100, 001, 000\}$. The schematic completion of $A$ results in the following set:

\[
\mathcal{S}(A) = \{\, 001, 100, 000, 110, 00{*}, {*}00, 1{*}0, {*}{*}0, {*}0{*}, {*}{*}{*}, \epsilon_{*} \,\}
\]

The poset $(\mathcal{S}(A), \leq)$ always forms a complete lattice called the schematic lattice. The schematic lattice is similar to the concept lattice found in Formal concept analysis.
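The expansion and compression operators and the schematic completion can be spelled out in a few lines for a fixed binary alphabet; the sketch below reproduces the examples from the text. The string "eps*" standing in for the empty schema is an assumption made for this example.

```python
# Sketch of the expansion and compression operators and the schematic
# completion over the binary alphabet.
from itertools import combinations, product

ALPHABET = "01"
EMPTY = "eps*"   # stands in for the empty schema

def expand(schema: str):
    """All words obtained by filling each * position with an alphabet symbol."""
    if schema == EMPTY:
        return set()
    choices = [ALPHABET if c == '*' else c for c in schema]
    return {''.join(w) for w in product(*choices)}

def compress(words):
    """Map a set of equal-length words to the most specific schema covering them."""
    words = list(words)
    if not words:
        return EMPTY
    return ''.join(col[0] if len(set(col)) == 1 else '*' for col in zip(*words))

def schematic_completion(words):
    """Compressions of every subset of the given word set."""
    words = list(words)
    return {compress(sub) for r in range(len(words) + 1)
            for sub in combinations(words, r)}

print(sorted(expand("10*")))            # ['100', '101']
print(compress({"100", "000", "010"}))  # **0
A = {"110", "100", "001", "000"}
print(sorted(schematic_completion(A)))  # the schematic lattice elements of A
```

Running the last line yields exactly the eleven schemata listed for $\mathcal{S}(A)$ above, with "eps*" playing the role of $\epsilon_{*}$.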
https://en.wikipedia.org/wiki/Schema_(genetic_algorithms)
Ininformation science, anupper ontology(also known as atop-level ontology,upper model, orfoundation ontology) is anontology(in the sense used ininformation science) that consists of very general terms (such as "object", "property", "relation") that are common across all domains. An important function of an upper ontology is to support broadsemantic interoperabilityamong a large number of domain-specific ontologies by providing a common starting point for the formulation of definitions. Terms in the domain ontology are ranked under the terms in the upper ontology, e.g., the upper ontology classes aresuperclassesorsupersetsof all the classes in the domain ontologies. A number of upper ontologies have been proposed, each with its own proponents. Library classificationsystems predate upper ontology systems. Though library classifications organize and categorize knowledge using general concepts that are the same across all knowledge domains, neither system is a replacement for the other. Any standard foundational ontology is likely to be contested among different groups, each with its own idea of "what exists". One factor exacerbating the failure to arrive at a common approach has been the lack of open-source applications that would permit the testing of different ontologies in the same computational environment. The differences have thus been debated largely on theoretical grounds, or are merely the result of personal preferences. Foundational ontologies can however be compared on the basis of adoption for the purposes of supporting interoperability across domain ontologies. No particular upper ontology has yet gained widespread acceptance as ade factostandard. Different organizations have attempted todefine standardsfor specific domains. The 'Process Specification Language' (PSL) created by theNational Institute of Standards and Technology(NIST) is one example. Another important factor leading to the absence of wide adoption of any existing upper ontology is the complexity. Some upper ontologies—Cycis often cited as an example in this regard—are very large, ranging up to thousands of elements (classes, relations), with complex interactions among them and with a complexity similar to that of a humannatural language, and the learning process can be even longer than for a natural language because of the unfamiliar format and logical rules. The motivation to overcome this learning barrier is largely absent because of the paucity of publicly accessible examples of use. As a result, those building domain ontologies for local applications tend to create the simplest possible domain-specific ontology, not related to any upper ontology. Such domain ontologies may function adequately for the local purpose, but they are very time-consuming to relate accurately to other domain ontologies. To solve this problem, some genuinely top level ontologies have been developed, which are deliberately designed to have minimal overlap with any domain ontologies. Examples areBasic Formal Ontologyand theDOLCE(see below). Historically, many attempts in many societies[which?]have been made to impose or define a single set of concepts as more primal, basic, foundational, authoritative, true or rational than all others. A common objection[by whom?]to such attempts points out that humans lack the sort of transcendent perspective — orGod's eye view— that would be required to achieve this goal. 
Humans are bound by language or culture, and so lack the sort of objective perspective from which to observe the whole terrain of concepts and derive any one standard. Thomasson,[1]under the headline "1.5 Skepticism about Category Systems", wrote: "category systems, at least as traditionally presented, seem to presuppose that there is a unique true answer to the question of what categories of entity there are – indeed the discovery of this answer is the goal of most such inquiries into ontological categories. [...] But actual category systems offered vary so much that even a short survey of past category systems like that above can undermine the belief that such a unique, true and complete system of categories may be found. Given such a diversity of answers to the question of what the ontological categories are, by what criteria could we possibly choose among them to determine which is uniquely correct?" Another objection is the problem of formulating definitions. Top level ontologies are designed to maximize support for interoperability across a large number of terms. Such ontologies must therefore consist of terms expressing very general concepts, but such concepts are so basic to our understanding that there is no way in which they can be defined, since the very process of definition implies that a less basic (and less well understood) concept is defined in terms of concepts that are more basic and so (ideally) more well understood. Very general concepts can often only be elucidated, for example by means of examples, or paraphrase. Those[who?]who doubt the feasibility of general purpose ontologies are more inclined to ask "what specific purpose do we have in mind for this conceptual map of entities and what practical difference will this ontology make?" This pragmatic philosophical position surrenders all hope of devising the encoded ontology version of "The world is everything that is the case." (Wittgenstein,Tractatus Logico-Philosophicus). Finally, there are objections similar to those againstartificial intelligence[from whom?]. Technically, the complex concept acquisition and the social / linguistic interactions of human beings suggest any axiomatic foundation of "most basic" concepts must becognitive biologicalor otherwise difficult to characterize since we don't have axioms for such systems. Ethically, any general-purpose ontology could quickly become an actual tyranny by recruiting adherents into a political program designed to propagate it and its funding means, and possibly defend it by violence. Historically, inconsistent and irrational belief systems have proven capable of commanding obedience to the detriment or harm of persons both inside and outside a society that accepts them. How much more harmful would a consistent rational one be, were it to contain even one or two basic assumptions incompatible with human life? Many of those who doubt the possibility of developing wide agreement on a common upper ontology fall into one of two traps: In fact, different representations of assertions about the real world (though not philosophical models), if they accurately reflect the world, must be logically consistent, even if they focus on different aspects of the same physical object or phenomenon. If any two assertions about the real world are logically inconsistent, one or both must be wrong, and that is a topic for experimental investigation, not for ontological representation. 
In practice, representations of the real world are created as and known to be approximations to the basic reality, and their use is circumscribed by the limits of error of measurements in any given practical application. Ontologies are entirely capable of representing approximations, and are also capable of representing situations in which different approximations have different utility. Objections based on the different ways people perceive things attack a simplistic, impoverished view of ontology. The objection that there are logically incompatible models of the world is true, but in an upper ontology those different models can be represented as different theories, and the adherents of those theories can use them in preference to other theories, while preserving the logical consistency of thenecessaryassumptions of the upper ontology. Thenecessaryassumptions provide the logical vocabulary with which to specify the meanings of all of the incompatible models. It has never been demonstrated that incompatible models cannot be properly specified with a common, more basic set of concepts, while there are examples of incompatible theories that can be logically specified with only a few basic concepts. Many of the objections to upper ontology refer to the problems of life-critical decisions or non-axiomatized problem areas such as law or medicine or politics that are difficult even for humans to understand. Some of these objections do not apply to physical objects or standard abstractions that are defined into existence by human beings and closely controlled by them for mutual good, such as standards for electrical power system connections or the signals used in traffic lights. No single generalmetaphysicsis required to agree that some such standards are desirable. For instance, while time and space can be represented in many ways, some of these are already used in interoperable artifacts like maps or schedules. Objections to the feasibility of a common upper ontology also do not take into account the possibility of forging agreement on an ontology that contains all of theprimitiveontology elements that can be combined to create any number of more specialized concept representations. Adopting this tactic permits effort to be focused on agreement only on a limited number of ontology elements. By agreeing on the meanings of that inventory of basic concepts, it becomes possible to create and then accurately and automatically interpret an infinite number of concept representations as combinations of the basic ontology elements. Any domain ontology or database that uses the elements of such an upper ontology to specify the meanings of its terms will be automatically and accurately interoperable with other ontologies that use the upper ontology, even though they may each separately define a large number of domain elements not defined in other ontologies. In such a case, proper interpretation will require that the logical descriptions of domain-specific elements be transmitted along with any data that is communicated; the data will then be automatically interpretable because the domain element descriptions, based on the upper ontology, will be properly interpretable by any system that can properly use the upper ontology. In effect, elements in different domain ontologies can be *translated* into each other using the common upper ontology. An upper ontology based on such a set of primitive elements can include alternative views, provided that they are logically compatible. 
Logically incompatible models can be represented as alternative theories, or represented in a specialized extension to the upper ontology. The proper use of alternative theories is a piece of knowledge that can itself be represented in an ontology. Users that develop new domain ontologies and find that there are semantic primitives needed for their domain but missing from the existing common upper ontology can add those new primitives by the accepted procedure, expanding the common upper ontology as necessary. Most proponents[who?]of an upper ontology argue that several good ones may be created with perhaps different emphasis. Very few are actually arguing to discover just one within natural language or even an academic field. Most are simply standardizing some existing communication. Another view advanced is that there is almost total overlap of the different ways that upper ontologies have been formalized, in the sense that different ontologies focus on different aspects of the same entities, but the different views are complementary and not contradictory to each other; as a result, an internally consistent ontology that contains all the views, with means of translating the different views into the other, is feasible. Such an ontology has thus far not been constructed, however, because it would require a large project to develop so as to include all of the alternative views in the separately developed upper ontologies, along with their translations. The main barrier to construction of such an ontology is not the technical issues, but the reluctance of funding agencies to provide the funds for a large enough consortium of developers and users. Several common arguments against upper ontology can be examined more clearly by separating issues of concept definition (ontology), language (lexicons), and facts (knowledge). For instance, people have different terms and phrases for the same concept. However, that does not necessarily mean that those people are referring to different concepts. They may simply be using different language or idiom. Formal ontologies typically use linguistic labels to refer to concepts, but the terms that label ontology elements mean no more and no less than what their axioms say they mean. Labels are similar to variable names in software, evocative rather than definitive. The proponents of a common upper ontology point out that the meanings of the elements (classes, relations, rules) in an ontology depend only on theirlogical form, and not on the labels, which are usually chosen merely to make the ontologies more easily usable by their human developers. In fact, the labels for elements in an ontology need not be words — they could be, for example, images of instances of a particular type, or videos of an action that is represented by a particular type. It cannot be emphasized too strongly that words are *not* what are represented in an ontology, but entities in the real world, or abstract entities (concepts) in the minds of people. Words are not equivalent to ontology elements, but words *label* ontology elements. There can be many words that label a single concept, even in a single language (synonymy), and there can be many concepts labeled by a single word (ambiguity). Creating the mappings between human language and the elements of an ontology is the province of Natural Language Understanding. But the ontology itself stands independently as a logical and computational structure. 
For this reason, finding agreement on the structure of an ontology is actually easier than developing a controlled vocabulary, because all different interpretations of a word can be included, each *mapped* to the same word in the different terminologies. A second argument is that people believe different things, and therefore can't have the same ontology. However, people can assign different truth values to a particular assertion while accepting the validity of certain underlying claims, facts, or ways of expressing an argument with which they disagree. (Using, for instance, theissue/position/argument form.) This objection to upper ontologies ignores the fact that a single ontology can represent different belief systems, and also representing them as different belief systems, without taking a position on the validity of either. Even arguments about the existence of a thing require a certain sharing of a concept, even though its existence in the real world may be disputed. Separating belief from naming and definition also helps to clarify this issue, and show how concepts can be held in common, even in the face of differing beliefs. For instance,wikias a medium may permit such confusion but disciplined users can applydispute resolutionmethods to sort out their conflicts. It is also argued that most people share a common set of "semantic primitives", fundamental concepts, to which they refer when they are trying to explain unfamiliar terms to other people. An ontology that includes representations of those semantic primitives could in such a case be used to create logical descriptions of any term that a person may wish to define logically. That ontology would be one form of upper ontology, serving as a logical "interlingua" that can translate ideas in one terminology to itslogical equivalentin another terminology. Advocates[who?]argue that most disagreement about the viability of an upper ontology can be traced to the conflation of ontology, language and knowledge, or too-specialized areas of knowledge: many people, or agents or groups will have areas of their respective internal ontologies that do not overlap. If they can cooperate and share a conceptual map at all, this may be so very useful that it outweighs any disadvantages that accrue from sharing. To the degree it becomes harder to share concepts the deeper one probes, the more valuable such sharing tends to get. If the problem is as basic as opponents of upper ontologies claim, then, it also applies to a group of humans trying to cooperate, who might need machine assistance to communicate easily. If nothing else, such ontologies are implied bymachine translation, used when people cannot practically communicate. Whether "upper" or not, these seem likely to proliferate. The following table contains data mainly from "A Comparison of Upper Ontologies"[2]article by V Mascardi, V Cordi and P Rosso (2007). The Basic Formal Ontology (BFO) framework developed byBarry Smithand his associates consists of a series of sub-ontologies at different levels of granularity. The ontologies are divided into two varieties: relating to continuant entities such as three-dimensional enduring objects, and occurrent entities (primarily) processes conceived as unfolding in successive phases through time. BFO thus incorporates both three-dimensionalist and four-dimensionalist perspectives on reality within a single framework. 
Interrelations are defined between the two types of ontologies in a way which gives BFO the facility to deal with both static/spatial and dynamic/temporal features of reality. A continuant domain ontology descending from BFO can be conceived as an inventory of entities existing at a time. Each occurrent ontology can be conceived as an inventory of processes unfolding through a given interval of time. Both BFO itself and each of its extension sub-ontologies can be conceived as a window on a certain portion of reality at a given level of granularity. The more than 350 ontology frameworks based on BFO are catalogued on the BFO website.[9] These apply the BFO architecture to different domains through the strategy of downward population. The Cell Ontology, for example, populates downward from BFO by importing the BFO branch terminating with object, and defining a cell as a subkind of object. Other examples of ontologies extending BFO are the Ontology for Biomedical Investigations (OBI) and the other ontologies of the Open Biomedical Ontologies Foundry. In addition to these examples, BFO and its extensions are increasingly being used in defense and security domains, for example in the Common Core Ontology framework.[10] BFO also serves as the upper level of the Sustainable Development Goals (SDG) Interface Ontology developed by the United Nations Environment Programme,[11] and of the Industrial Ontologies Foundry (IOF) initiative of the manufacturing industry.[12] BFO has been documented in the textbook Building Ontologies with Basic Formal Ontology,[13] published by MIT Press in 2015. Business Objects Reference Ontology (BORO) is an upper ontology designed for developing ontological or semantic models for large, complex operational applications; it consists of a top ontology as well as a process for constructing the ontology. It is built upon a series of clear metaphysical choices to provide a solid (metaphysical) foundation. A key choice was for an extensional (and hence four-dimensional) ontology, which provides it with a simple criterion of identity. Elements of it have appeared in a number of standards. For example, the ISO standard ISO 15926 – Industrial automation systems and integration – was heavily influenced by an early version. The IDEAS (International Defence Enterprise Architecture Specification for exchange) standard is based upon BORO, which in turn was used to develop DODAF 2.0. Although the "CIDOC object-oriented Conceptual Reference Model" (CRM) is a domain ontology, specialised to the purposes of representing cultural heritage, a subset called CRM Core is a generic upper ontology, including:[14][15] a persistent item, a physical or conceptual item that has a persistent identity recognized within the duration of its existence by its identification rather than by its continuity or by observation (a persistent item is comparable to an endurant); a propositional object, a set of statements about real or imaginary things; and a symbolic object, a sign/symbol or an aggregation of signs or symbols. COSMO (COmmon Semantic MOdel) is an ontology that was initiated as a project of the COSMO working group of the Ontology and Taxonomy Coordinating Working Group, with the goal of developing a foundation ontology that can serve to enable broad general Semantic Interoperability. The current version is an OWL ontology, but a Common Logic-compliant version is anticipated in the future. The ontology and explanatory files are available at the COSMO site.
The goal of the COSMO working group was to develop a foundation ontology by a collaborative process that would allow it to represent all of the basic ontology elements that all members feel are needed for their applications. The development of COSMO is fully open, and any comments or suggestions from any sources are welcome. After some discussion and input from members in 2006, the development of the COSMO has been continued primarily by Patrick Cassidy, the chairman of the COSMO Working Group. Contributions and suggestions from any interested party are still welcome and encouraged. Many of the types (OWL classes) in the current COSMO have been taken from the OpenCyc OWL version 0.78 and from SUMO. Other elements were taken from other ontologies (such as BFO and DOLCE), or developed specifically for COSMO. Development of the COSMO initially focused on including representations of all of the words in the Longman Dictionary of Contemporary English (LDOCE) controlled defining vocabulary (2148 words). These words are sufficient to define (linguistically) all of the entries in the LDOCE. It is hypothesized that the ontological representations of the concepts represented by those terms will be sufficient to specify the meanings of any specialized ontology element, thereby serving as a basis for general Semantic Interoperability. Interoperability via COSMO is enabled by using the COSMO (or an ontology derived from it) as an interlingua by which other domain ontologies can be translated into each other's terms and can thereby communicate accurately. As new domains are linked into COSMO, additional semantic primitives may be recognized and added to its structure. The current (January 2021) OWL version of COSMO has over 24,000 types (OWL classes), over 1,350 relations, and over 21,000 restrictions. The COSMO itself (COSMO.owl) and other related and explanatory files can be obtained at the link for COSMO in the External Links section below. Cyc is a proprietary system, under development since 1986, consisting of a foundation ontology and several domain-specific ontologies (called microtheories). A subset of those ontologies was released for free under the name OpenCyc in 2002 and was available until circa 2016. A subset of Cyc called ResearchCyc was made available for free non-commercial research use in 2006. Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE) is a foundational ontology designed in 2002 in the context of the WonderWeb EU project,[16] developed by Nicola Guarino and his associates at the Laboratory for Applied Ontology (LOA). As implied by its acronym, DOLCE is oriented toward capturing the ontological categories underlying natural language and human common sense. DOLCE, however, does not commit to a strictly referentialist metaphysics related to the intrinsic nature of the world. Rather, the categories it introduces are thought of as cognitive artifacts, which are ultimately dependent on human perception, cultural imprints, and social conventions. In this sense, they are intended to be merely descriptive (rather than prescriptive) notions that support the formal specification of domain conceptualizations. DOLCE-Ultralite,[17] designed by Aldo Gangemi and colleagues at the Semantic Technology Lab of the National Research Council (Italy), is the Web Ontology Language (OWL) version of DOLCE. It simplifies some modal axioms of DOLCE, and extends it to cover the Descriptions and Situations framework, also designed in the WonderWeb project.
DOLCE-Ultralite is the source of some core ontology design patterns,[18]and is widely adopted in ontology projects worldwide. The general formal ontology (GFO), developed by Heinrich Herre and his colleagues of the research group Onto-Med inLeipzig, is a realistic ontology integrating processes and objects. It attempts to include many aspects of recent philosophy, which is reflected both in its taxonomic tree and its axiomatizations. GFO allows for different axiomatizations of its categories (such as the existence ofatomic time-intervalsvs.dense time). The basic principles of GFO are published in the Onto-Med Report Nr. 8 and in "General Formal Ontology (GFO): A Foundational Ontology for Conceptual Modelling".[19][20] Two GFO specialties, among others, are its account of persistence and its time model. Regarding persistence, the distinction between endurants (objects) and perdurants (processes) is made explicit within GFO by the introduction of a special category, a persistent.[21]A persistant is a special category with the intention that its instances "remain identical" (over time). With respect to time, time intervals are taken as primitive in GFO, and time-points (called "time boundaries") as derived. Moreover, time-points may coincide, which is convenient for modelling instantaneous changes. gist is developed and supported by Semantic Arts. gist (not an acronym – it means to get the essence of) is a "minimalist upper ontology". gist is targeted at enterprise information systems, although it has been applied to healthcare delivery applications. gist has been used to build enterprise ontologies for a number of major commercial and governmental agencies including: Procter & Gamble, Sentara Healthcare, Washington State Department of Labor & Industries, LexisNexis, Sallie Mae and two major Financial Services firms. gist is freely available with a Creative Commons share alike license. gist is actively maintained, and has been in use for over 10 years. As of October 2020 it is at version 9.4.[22] gist was the subject of a paper exploring how to bridge modeling differences between ontologies.[23]In a paper describing the OQuaRE methodology for evaluating ontologies, the gist unit of measure ontology (at that time, a separate module) scored the highest in the manual evaluation against 10 other unit of measure ontologies, and scored above average in the automated evaluation. The authors stated: "This ontology could easily be tested and validated, its knowledge could be effectively reused and adapted for different specified environments".[24] ISO 15926 is an International Standard for the representation ofprocess plant life-cycleinformation. This representation is specified by a generic, conceptual data model that is suitable as the basis for implementation in a shared database or data warehouse. The data model is designed to be used in conjunction with reference data: standard instances that represent information common to a number of users, process plants, or both. The support for a specific life-cycle activity depends on the use of appropriate reference data in conjunction with the data model. To enable integration of life-cycle information the model excludes all information constraints that are appropriate only to particular applications within the scope. ISO 15926-2 defines a generic model with 201 entity types. It has been prepared by Technical Committee ISO/TC 184, Industrial automation systems and integration, Subcommittee SC 4, Industrial data. 
TheSuggested Upper Merged Ontology(SUMO) is another comprehensive ontology project. It includes anupper ontology, created by theIEEEworking group P1600.1 (originally byIan NilesandAdam Pease). It is extended with many domain ontologies and a complete set of links to WordNet. It is open source. Upper Mapping and Binding Exchange Layer (UMBEL) is an ontology of 28,000 reference concepts that maps to a simplified subset of theOpenCycontology, that is intended to provide a way of linking the precise OpenCyc ontology with less formal ontologies.[25]It also has formal mappings toWikipedia,DBpedia,PROTONandGeoNames. It has been developed and maintained asopen sourceby Structured Dynamics. WordNet, a freely available database originally designed as asemantic networkbased onpsycholinguisticprinciples, was expanded by addition of definitions and is now also viewed as adictionary. It qualifies as an upper ontology by including the most general concepts as well as more specialized concepts, related to each other not only by thesubsumption relations, but by other semantic relations as well, such as part-of and cause. However, unlike Cyc, it has not been formally axiomatized so as to make the logical relations between the concepts precise. It has been widely used inNatural language processingresearch. YAMATO is developed by Riichiro Mizoguchi, formerly at the Institute of Scientific and Industrial Research of theUniversity of Osaka, and now at theJapan Advanced Institute of Science and Technology. Major features of YAMATO are: YAMATO has been extensively used for developing other, more applied, ontologies such as a medical ontology,[30]an ontology of gene,[31]an ontology of learning/instructional theories,[32]an ontology of sustainability science,[33]and an ontology of the cultural domain.[34]
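The WordNet database mentioned above can be explored programmatically; the following sketch uses the NLTK interface to walk hypernym (subsumption) links from a specific synset up to the most general concepts, which is what lets WordNet play the role of an informal upper ontology. The exact chain printed depends on the installed WordNet version, and the NLTK data must be downloaded first.

```python
# Sketch: using WordNet (via NLTK) as an informal upper ontology by walking
# hypernym (subsumption) links from a specific concept up to the most general ones.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

dog = wn.synsets("dog")[0]               # one specific concept (synset)
path = dog.hypernym_paths()[0]           # a chain from the most general concept down to it
print(" -> ".join(s.name() for s in path))
# e.g. entity.n.01 -> physical_entity.n.01 -> ... -> canine.n.02 -> dog.n.01

# WordNet also records other semantic relations, such as part-of and member-of.
print(dog.part_meronyms() or dog.member_holonyms())
```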
https://en.wikipedia.org/wiki/Upper_ontology
In representation learning, knowledge graph embedding (KGE), also called knowledge representation learning (KRL) or multi-relation learning,[1] is a machine learning task of learning a low-dimensional representation of a knowledge graph's entities and relations while preserving their semantic meaning.[1][2][3] Leveraging their embedded representation, knowledge graphs (KGs) can be used for various applications such as link prediction, triple classification, entity recognition, clustering, and relation extraction.[1][4] A knowledge graph G={E,R,F}{\displaystyle {\mathcal {G}}=\{E,R,F\}} is a collection of entities E{\displaystyle E}, relations R{\displaystyle R}, and facts F{\displaystyle F}.[5] A fact is a triple (h,r,t)∈F{\displaystyle (h,r,t)\in F} that denotes a link r∈R{\displaystyle r\in R} between the head h∈E{\displaystyle h\in E} and the tail t∈E{\displaystyle t\in E} of the triple. Another notation that is often used in the literature to represent a triple (or fact) is <head,relation,tail>{\displaystyle <head,relation,tail>}. This notation follows the resource description framework (RDF).[1][5] A knowledge graph represents the knowledge related to a specific domain; leveraging this structured representation, it is possible to infer new knowledge from it after some refinement steps.[6] In practice, however, knowledge graphs suffer from data sparsity, and using them directly in real-world applications is computationally inefficient.[3][7] The embedding of a knowledge graph is a function that translates each entity and each relation into a vector of a given dimension d{\displaystyle d}, called the embedding dimension.[7] It is even possible to embed the entities and relations with different dimensions.[7] The embedding vectors can then be used for other tasks. A knowledge graph embedding is characterized by four aspects: the representation space, the scoring function, the encoding model, and the additional (auxiliary) information used during training.[1] All algorithms for creating a knowledge graph embedding follow the same approach.[7] First, the embedding vectors are initialized to random values.[7] Then, they are iteratively optimized using a training set of triples. In each iteration, a batch of size b{\displaystyle b} is sampled from the training set, and for each of its triples a corrupted triple is generated—i.e., a triple that does not represent a true fact in the knowledge graph.[7] The corruption of a triple involves substituting the head or the tail (or both) of the triple with another entity that makes the fact false.[7] The original triple and the corrupted triple are added to the training batch, and then the embeddings are updated by optimizing a scoring function.[5][7] Iteration stops when a stop condition is reached.[7] Usually, the stop condition depends on the overfitting of the training set.[7] At the end, the learned embeddings should have extracted semantic meaning from the training triples and should correctly predict unseen true facts in the knowledge graph.[5] The following is the pseudocode for the general embedding procedure.[9][7]
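The original pseudocode is not reproduced here; the following is a minimal Python sketch of the same procedure—random initialization, batches of true triples, corruption (negative sampling), and updates driven by a scoring function. The TransE-style L1 score, the margin ranking loss, and the hand-written gradient step are illustrative assumptions of this sketch, not the method of any particular system.

```python
# Minimal sketch of the generic embedding procedure described above.
import numpy as np

rng = np.random.default_rng(0)

def train_embeddings(triples, n_entities, n_relations, dim=50,
                     epochs=100, batch_size=128, lr=0.01, margin=1.0):
    # Step 1: initialize entity and relation embeddings to random values.
    E = rng.normal(scale=0.1, size=(n_entities, dim))
    R = rng.normal(scale=0.1, size=(n_relations, dim))

    def score(h, r, t):
        # Plausibility of (h, r, t): smaller distance means more plausible.
        return np.linalg.norm(E[h] + R[r] - E[t], ord=1)

    triples = np.asarray(triples)
    for _ in range(epochs):
        # Step 2: sample a batch of true triples and corrupt each of them.
        batch = triples[rng.choice(len(triples), size=batch_size)]
        for h, r, t in batch:
            if rng.random() < 0.5:
                h_c, t_c = h, rng.integers(n_entities)      # corrupt the tail
            else:
                h_c, t_c = rng.integers(n_entities), t      # corrupt the head
            # Step 3: the true triple should score better than the corrupted one
            # by at least the margin; otherwise take a small gradient step.
            if margin + score(h, r, t) - score(h_c, r, t_c) > 0:
                g_pos = np.sign(E[h] + R[r] - E[t])
                g_neg = np.sign(E[h_c] + R[r] - E[t_c])
                E[h] -= lr * g_pos
                E[t] += lr * g_pos
                E[h_c] += lr * g_neg
                E[t_c] -= lr * g_neg
                R[r] -= lr * (g_pos - g_neg)
    return E, R
```

Practical implementations vectorize this loop, compute gradients with an automatic-differentiation framework, and typically re-normalize the entity embeddings after each step.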
These indexes are often used to measure the embedding quality of a model. The simplicity of the indexes makes them very suitable for evaluating the performance of an embedding algorithm even on a large scale.[10] Given Q{\displaystyle Q} as the set of all ranked predictions of a model, it is possible to define three different performance indexes: Hits@K, MR, and MRR.[10] Hits@K (H@K for short) is a performance index that measures the probability of finding the correct prediction among the top K predictions of the model.[10] Usually, k=10{\displaystyle k=10} is used.[10] Hits@K reflects the accuracy with which an embedding model predicts the missing element of a given triple.[10] Hits@K=|{q∈Q:q<k}|/|Q|∈[0,1]{\displaystyle {\text{Hits@K}}={\frac {|\{q\in Q:q<k\}|}{|Q|}}\in [0,1]} Larger values mean better predictive performance.[10] Mean rank is the average ranking position of the items predicted by the model among all the possible items.[10] MR=1|Q|∑q∈Qq{\displaystyle MR={\frac {1}{|Q|}}\sum _{q\in Q}{q}} The smaller the value, the better the model.[10] Mean reciprocal rank averages the reciprocal ranks of the correct predictions.[10] If the first predicted triple is correct, then 1 is added; if the second is correct, 12{\displaystyle {\frac {1}{2}}} is added; and so on.[10] Mean reciprocal rank is generally used to quantify the effect of search algorithms.[10] MRR=1|Q|∑q∈Q1q∈[0,1]{\displaystyle MRR={\frac {1}{|Q|}}\sum _{q\in Q}{\frac {1}{q}}\in [0,1]} The larger the index, the better the model.[10] Knowledge graph completion (KGC) is a collection of techniques to infer knowledge from an embedded knowledge graph representation.[11] In particular, this technique completes a triple by inferring the missing entity or relation.[11] The corresponding sub-tasks are named link or entity prediction (i.e., guessing an entity from the embedding, given the other entity of the triple and the relation) and relation prediction (i.e., forecasting the most plausible relation that connects two entities).[11] Triple classification is a binary classification problem.[1] Given a triple, the trained model evaluates the plausibility of the triple, using the embedding to determine whether the triple is true or false.[11] The decision is made with the model score function and a given threshold.[11] Clustering is another application that leverages the embedded representation of a sparse knowledge graph to place semantically similar entities close to one another in a 2D space.[4] The use of knowledge graph embedding is increasingly pervasive in many applications.
In the case ofrecommender systems, the use of knowledge graph embedding can overcome the limitations of the usualreinforcement learning,[12][13]as well as limitations of the conventionalcollaborative filteringmethod.[14]Training this kind of recommender system requires a huge amount of information from the users; however, knowledge graph techniques can address this issue by using a graph already constructed over a prior knowledge of the item correlation and using the embedding to infer from it the recommendation.[12]Drug repurposingis the use of an already approved drug, but for a therapeutic purpose different from the one for which it was initially designed.[15]It is possible to use the task of link prediction to infer a new connection between an already existing drug and a disease by using a biomedical knowledge graph built leveraging the availability of massive literature and biomedical databases.[15]Knowledge graph embedding can also be used in the domain of social politics.[4] Given a collection of triples (or facts)F={<head,relation,tail>}{\displaystyle {\mathcal {F}}=\{<head,relation,tail>\}}, the knowledge graph embedding model produces, for each entity and relation present in the knowledge graph a continuous vector representation.[7](h,r,t){\displaystyle (h,r,t)}is the corresponding embedding of a triple withh,t∈IRd{\displaystyle h,t\in {\rm {I\!R}}^{d}}andr∈IRk{\displaystyle r\in {\rm {I\!R}}^{k}}, whered{\displaystyle d}is the embedding dimension for the entities, andk{\displaystyle k}for the relations.[7]The score function of a given model is denoted byfr(h,t){\displaystyle {\mathcal {f}}_{r}(h,t)}and measures the distance of the embedding of the head from the embedding of tail given the embedding of the relation. In other words, it quantifies the plausibility of the embedded representation of a given fact.[5] Rossi et al. 
propose a taxonomy of the embedding models and identify three main families of models: tensor decomposition models, geometric models, and deep learning models.[5] Tensor decomposition is a family of knowledge graph embedding models that use a multi-dimensional matrix to represent the knowledge graph;[1][5][18] this matrix is only partially known, since the graph never describes a particular domain exhaustively.[5] In particular, these models use a third-order (3D) tensor, which is then factorized into low-dimensional vectors that are the embeddings.[5][18] A third-order tensor is suitable for representing a knowledge graph because it records only the existence or absence of a relation between entities,[18] and so is simple, and there is no need to know a priori the network structure,[16] making this class of embedding models light and easy to train, even though they suffer from the high dimensionality and sparsity of the data.[5][18] This family of models uses a linear equation to embed the connection between the entities through a relation.[1] In particular, the embedded representation of the relations is a bidimensional matrix.[5] During the embedding procedure, these models only use individual facts to compute the embedded representation and ignore the other associations of the same entity or relation.[19] The geometric space defined by this family of models encodes the relation as a geometric transformation between the head and tail of a fact.[5] For this reason, to compute the embedding of the tail, it is necessary to apply a transformation τ{\displaystyle \tau } to the head embedding, and a distance function δ{\displaystyle \delta } is used to measure the goodness of the embedding or to score the reliability of a fact.[5] fr(h,t)=δ(τ(h,r),t){\displaystyle {\mathcal {f}}_{r}(h,t)=\delta (\tau (h,r),t)} Geometric models are similar to the tensor decomposition models, but the main difference between the two is that geometric models have to preserve the applicability of the transformation τ{\displaystyle \tau } in the geometric space in which it is defined.[5] This class of models is inspired by the idea of translation invariance introduced in word2vec.[7] A pure translational model relies on the fact that the embedding vectors of the entities are close to each other after applying a proper relational translation in the geometric space in which they are defined.[19] In other words, given a fact, the embedding of the head plus the embedding of the relation should equal the embedding of the tail.[5] The closeness of the entity embeddings is given by some distance measure and quantifies the reliability of a fact.[18] It is possible to associate additional information with each element in the knowledge graph and with the facts that combine them.[1] Each entity and relation can be enriched with text descriptions, weights, constraints, and so on in order to improve the overall description of the domain with a knowledge graph.[1] During the embedding of the knowledge graph, this information can be used to learn specialized embeddings for these characteristics together with the usual embedded representation of entities and relations, at the cost of learning a larger number of vectors.[5] This family of models employs a rotation-like transformation, either in addition to or instead of a translation.[5] This group of embedding models uses deep neural networks to learn patterns from the knowledge graph, which is the input data.[5] These models are general enough to distinguish the types of entities and relations, temporal information, path information, and underlying structural information,[19] and they address the limitations of distance-based and semantic-matching-based models in representing all the features of a knowledge graph.[1] The use of deep learning for knowledge graph embedding has shown good predictive performance, even though such models are more expensive in the training phase, data-hungry, and often require a pre-trained embedding representation of the knowledge graph coming from a different embedding model.[1][5] This family of models, instead of using fully connected layers, employs one or more convolutional layers that convolve the input data by applying a low-dimensional filter, capable of embedding complex structures with few parameters by learning nonlinear features.[1][5][19] This family of models uses capsule neural networks to create a more stable representation that is able to recognize a feature in the input without losing spatial information.[5] The network is composed of convolutional layers, but they are organized in capsules, and the overall output of a capsule is sent to a higher-level capsule chosen by a dynamic routing process.[5] This class of models leverages recurrent neural networks.[5] The advantage of this architecture is that it can memorize a sequence of facts rather than just elaborate single events.[41] The machine learning task for knowledge graph embedding that is most often used to evaluate the embedding accuracy of the models is link prediction.[1][3][5][6][7][19] Rossi et al.[5] produced an extensive benchmark of the models, and other surveys produce similar results.[3][7][19][26] The benchmark involves five datasets: FB15k,[9] WN18,[9] FB15k-237,[42] WN18RR,[37] and YAGO3-10.[43] More recently, it has been argued that these datasets are far from real-world applications, and that other datasets should be integrated as a standard benchmark.[44]
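As a concrete illustration of how the link-prediction indexes defined earlier are obtained, the following sketch computes Hits@K, MR, and MRR from the ranks that a trained model assigns to the correct entity of each test triple; the rank values here are invented.

```python
# Sketch: computing Hits@K, MR and MRR from the rank of the correct answer for
# each test triple. The list of ranks is invented for illustration.
def hits_at_k(ranks, k=10):
    return sum(1 for q in ranks if q <= k) / len(ranks)   # fraction ranked in the top K

def mean_rank(ranks):
    return sum(ranks) / len(ranks)                        # lower is better

def mean_reciprocal_rank(ranks):
    return sum(1.0 / q for q in ranks) / len(ranks)       # higher is better

ranks = [1, 3, 12, 2, 58, 1, 7]
print(hits_at_k(ranks, k=10), mean_rank(ranks), mean_reciprocal_rank(ranks))
```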
https://en.wikipedia.org/wiki/Knowledge_graph_embedding
A topic map is a standard for the representation and interchange of knowledge, with an emphasis on the findability of information. Topic maps were originally developed in the late 1990s as a way to represent back-of-the-book index structures so that multiple indexes from different sources could be merged. However, the developers quickly realized that with a little additional generalization, they could create a meta-model with potentially far wider application.[further explanation needed] The ISO/IEC standard is formally known as ISO/IEC 13250:2003. A topic map represents information using topics (representing any subject or concept), associations (representing the relationships between topics), and occurrences (representing information resources relevant to a particular topic). Topic maps are similar to concept maps and mind maps in many respects, though only topic maps are ISO standards. Topic maps are a form of semantic web technology similar to RDF. Topics, associations, and occurrences can all be typed, where the types must be defined by the creator(s) of the topic map. The definition of the allowed types is known as the ontology of the topic map. Topic maps explicitly support the concept of merging of identity between multiple topics or topic maps. Furthermore, because ontologies are topic maps themselves, they can also be merged, thus allowing for the automated integration of information from diverse sources into a coherent new topic map. Features such as subject identifiers (URIs given to topics) and PSIs (published subject indicators) are used to control merging between differing taxonomies. Scoping on names provides a way to organise the various names given to a particular topic by different sources. The work of standardizing topic maps (ISO/IEC 13250) took place under the umbrella of the ISO/IEC JTC 1/SC 34/WG 3 committee (ISO/IEC Joint Technical Committee 1, Subcommittee 34, Working Group 3 – Document description and processing languages – Information Association).[1][2] However, WG3 was disbanded and maintenance of ISO/IEC 13250 was assigned to WG8. The topic maps (ISO/IEC 13250) reference model and data model standards are defined independently of any specific serialization or syntax. The specification is summarized in the abstract as follows: "This specification provides a model and grammar for representing the structure of information resources used to define topics, and the associations (relationships) between topics. Names, resources, and relationships are said to be characteristics of abstract subjects, which are called topics. Topics have their characteristics within scopes: i.e. the limited contexts within which the names and resources are regarded as their name, resource, and relationship characteristics. One or more interrelated documents employing this grammar is called a topic map." Note that XTM 1.0 predates, and therefore is not compatible with, the more recent versions of the (ISO/IEC 13250) standard. Other serialization formats have been proposed or defined as part of ISO/IEC 13250. As described below, there are also other serialization formats, such as LTM and AsTMa=, that have not been put forward as standards. Linear topic map notation (LTM) serves as a kind of shorthand for writing topic maps in plain text editors. This is useful for writing short personal topic maps or exchanging partial topic maps by email. The format can be converted to XTM. Another format, AsTMa, serves a similar purpose; when writing topic maps manually it is much more compact, but it too can be converted to XTM. Alternatively, it can be used directly with the Perl module TM (which also supports LTM).
The data formats of XTM and LTM are similar to the W3C standards for RDF/XML or the older N3 notation.[3] A de facto API standard called Common Topic Maps Application Programming Interface (TMAPI) was published in April 2004 and is supported by many Topic Maps implementations and vendors. In normal use it is often desirable to have a way to query the data within a particular Topic Maps store arbitrarily. Many implementations provide a syntax by which this can be achieved (somewhat like 'SQL for Topic Maps'), but the syntax tends to vary considerably between implementations. With this in mind, work has gone into defining a standardized query syntax, the Topic Maps Query Language (TMQL). It can also be desirable to define a set of constraints that can be used to guarantee or check the semantic validity of topic maps data for a particular domain (somewhat like database constraints for topic maps). Constraints can be used to define things like 'every document needs an author' or 'all managers must be human'. There are often implementation-specific ways of achieving these goals, but work has gone into defining a standardized constraint language, the Topic Maps Constraint Language (TMCL). TMCL is functionally similar to RDF Schema with Web Ontology Language (OWL).[3] The "Topic Maps" concept has existed for a long time. The HyTime standard was proposed as far back as 1992. Earlier revisions of ISO 13250 also exist. More information about such standards can be found at the ISO Topic Maps site.[citation needed] Some work has been undertaken to provide interoperability between the W3C's RDF/OWL/SPARQL family of semantic web standards and the ISO's family of Topic Maps standards, though the two have slightly different goals.[citation needed] The semantic expressive power of Topic Maps is, in many ways, equivalent to that of RDF,[citation needed] but the major differences are that Topic Maps (i) provide a higher level of semantic abstraction (providing a template of topics, associations and occurrences, while RDF only provides a template of two arguments linked by one relationship) and (hence) (ii) allow n-ary relationships (hypergraphs) between any number of nodes, while RDF is limited to triples.[citation needed]
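To make the topic/association/occurrence model and the n-ary point above concrete, here is a small in-memory sketch in Python; it is not XTM, LTM, or any other standard serialization, and the example data is invented.

```python
# Illustrative sketch (not a standard serialization): a minimal in-memory model
# of the topic map primitives discussed above.
from dataclasses import dataclass, field

@dataclass
class Topic:
    identifier: str                                    # subject identifier (e.g., a URI)
    names: dict = field(default_factory=dict)          # scope -> name
    occurrences: list = field(default_factory=list)    # (type, resource) pairs

@dataclass
class Association:
    assoc_type: str
    roles: dict                                        # role -> Topic; n-ary, unlike an RDF triple

puccini = Topic("http://example.org/topics/puccini",
                names={"en": "Giacomo Puccini"},
                occurrences=[("biography", "http://example.org/puccini.html")])
tosca = Topic("http://example.org/topics/tosca", names={"en": "Tosca"})
rome = Topic("http://example.org/topics/rome", names={"en": "Rome", "it": "Roma"})

# A single association can involve more than two role players, which is the
# n-ary (hypergraph) feature contrasted with RDF triples above.
premiere = Association("premiere",
                       roles={"work": tosca, "composer": puccini, "place": rome})
```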
https://en.wikipedia.org/wiki/Topic_map
Wikibase is a set of software tools for working with versioned semi-structured data in a central repository. It is based upon JSON instead of the unstructured data of wikitext normally used in MediaWiki. It stores and organizes information that can be collaboratively edited and read by humans and by computers, translated into multiple languages, and shared with the rest of the world as part of the Linked Open Data (LOD) web.[3] It is primarily made up of two MediaWiki extensions: the Wikibase Repository, an extension for storing and managing data, and the Wikibase Client, which allows for the retrieval and embedding of structured data from a Wikibase repository. It was developed by Wikimedia Deutschland for, and is used by, Wikidata.[4] The data model for Wikibase consists of "entities", which include individual "items"; labels or identifiers to describe them (potentially in multiple languages); and semantic statements that attribute "properties" to the item. These property values may be other items within the database, textual information, or other semi-structured information.[5] Wikibase has a JavaScript-based user interface and a fully featured API, and it provides exports of all or subsets of the data in many formats. Projects using it include Wikidata, Wikimedia Commons,[6] Europeana's Project, Lingua Libre,[7] FactGrid, the OpenStreetMap wiki,[8] and wikibase.cloud.
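As a rough illustration of that data model, the sketch below shows the kind of record a Wikibase item corresponds to: multilingual labels plus statements attributing properties to the item. The field names are abridged and the structure is simplified; a real Wikibase instance exposes a richer JSON document through its API, and the identifiers here merely follow Wikidata's naming conventions.

```python
# Simplified, illustrative sketch of a Wikibase-style item: multilingual labels
# and statements that attribute properties to the item. Not the exact JSON schema.
item = {
    "id": "Q42",
    "labels": {
        "en": "Douglas Adams",
        "de": "Douglas Adams",
    },
    "descriptions": {
        "en": "English writer and humorist",
    },
    "statements": [
        # property -> value; the value may be another item, text, a date, etc.
        {"property": "P31", "value": {"item": "Q5"}},   # instance of: human
        {"property": "P800", "value": {"item": "Q25169"}},
    ],
}
```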
https://en.wikipedia.org/wiki/Wikibase
YAGO(Yet Another GreatOntology) is an open source[3]knowledge basedeveloped at theMax Planck Institute for InformaticsinSaarbrücken. It is automatically extracted fromWikidataandSchema.org. YAGO4, which was released in 2020, combines data that was extracted from Wikidata with relationship designators from Schema.org.[4]The previous version of YAGO, YAGO3, had knowledge of more than 10 million entities and contained more than 120 million facts about these entities.[5]The information in YAGO3 was extracted fromWikipedia(e.g., categories, redirects, infoboxes),WordNet(e.g., synsets, hyponymy), andGeoNames.[6]The accuracy of YAGO was manually evaluated to be above 95% on a sample of facts.[7]To integrate it to thelinked datacloud, YAGO has been linked to theDBpediaontology[8]and to theSUMOontology.[9] YAGO3 is provided inTurtleandtsvformats. Dumps of the wholedatabaseare available, as well as thematic and specialized dumps. It can also be queried through various online browsers and through aSPARQLendpoint hosted by OpenLink Software. The source code of YAGO3 is available onGitHub. YAGO has been used in theWatsonartificial intelligence system.[10]
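Since YAGO3 dumps are distributed in Turtle and the knowledge base can be queried over SPARQL, a sketch of that workflow might look as follows; the rdflib library and the tiny snippet of data are assumptions of this example, not YAGO's actual contents or identifiers.

```python
# Sketch: loading a small Turtle snippet and running a SPARQL query with rdflib,
# the same pattern one would use against a Turtle dump or a SPARQL endpoint.
# The facts below are invented for illustration.
from rdflib import Graph

data = """
@prefix ex: <http://example.org/> .
@prefix schema: <http://schema.org/> .

ex:Elvis_Presley a schema:Person ;
    schema:birthPlace ex:Tupelo .
"""

g = Graph()
g.parse(data=data, format="turtle")

results = g.query("""
    PREFIX schema: <http://schema.org/>
    SELECT ?person ?place WHERE { ?person schema:birthPlace ?place . }
""")
for person, place in results:
    print(person, place)
```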
https://en.wikipedia.org/wiki/YAGO_(database)
In mathematics, specifically order theory, the join of a subset S{\displaystyle S} of a partially ordered set P{\displaystyle P} is the supremum (least upper bound) of S,{\displaystyle S,} denoted ⋁S,{\textstyle \bigvee S,} and similarly, the meet of S{\displaystyle S} is the infimum (greatest lower bound), denoted ⋀S.{\textstyle \bigwedge S.} In general, the join and meet of a subset of a partially ordered set need not exist. Join and meet are dual to one another with respect to order inversion. A partially ordered set in which all pairs have a join is a join-semilattice. Dually, a partially ordered set in which all pairs have a meet is a meet-semilattice. A partially ordered set that is both a join-semilattice and a meet-semilattice is a lattice. A lattice in which every subset, not just every pair, possesses a meet and a join is a complete lattice. It is also possible to define a partial lattice, in which not all pairs have a meet or join but the operations (when defined) satisfy certain axioms.[1] The join/meet of a subset of a totally ordered set is simply the maximal/minimal element of that subset, if such an element exists. If a subset S{\displaystyle S} of a partially ordered set P{\displaystyle P} is also an (upward) directed set, then its join (if it exists) is called a directed join or directed supremum. Dually, if S{\displaystyle S} is a downward directed set, then its meet (if it exists) is a directed meet or directed infimum. Let A{\displaystyle A} be a set with a partial order ≤,{\displaystyle \,\leq ,\,} and let x,y∈A.{\displaystyle x,y\in A.} An element m{\displaystyle m} of A{\displaystyle A} is called the meet (or greatest lower bound or infimum) of x and y{\displaystyle x{\text{ and }}y} and is denoted by x∧y,{\displaystyle x\wedge y,} if the following two conditions are satisfied: first, m≤x{\displaystyle m\leq x} and m≤y{\displaystyle m\leq y} (that is, m{\displaystyle m} is a lower bound of x and y{\displaystyle x{\text{ and }}y}); and second, for any w∈A{\displaystyle w\in A} such that w≤x{\displaystyle w\leq x} and w≤y,{\displaystyle w\leq y,} we have w≤m{\displaystyle w\leq m} (that is, m{\displaystyle m} is greater than or equal to every other lower bound of x and y{\displaystyle x{\text{ and }}y}). The meet need not exist, either since the pair has no lower bound at all, or since none of the lower bounds is greater than all the others. However, if there is a meet of x and y,{\displaystyle x{\text{ and }}y,} then it is unique, since if both m and m′{\displaystyle m{\text{ and }}m^{\prime }} are greatest lower bounds of x and y,{\displaystyle x{\text{ and }}y,} then m≤m′ and m′≤m,{\displaystyle m\leq m^{\prime }{\text{ and }}m^{\prime }\leq m,} and thus m=m′.{\displaystyle m=m^{\prime }.}[2] If not all pairs of elements from A{\displaystyle A} have a meet, then the meet can still be seen as a partial binary operation on A.{\displaystyle A.}[1] If the meet does exist then it is denoted x∧y.{\displaystyle x\wedge y.} If all pairs of elements from A{\displaystyle A} have a meet, then the meet is a binary operation on A,{\displaystyle A,} and it is easy to see that this operation fulfills the following three conditions: for any elements x,y,z∈A,{\displaystyle x,y,z\in A,} a. x∧y=y∧x{\displaystyle x\wedge y=y\wedge x} (commutativity), b. x∧(y∧z)=(x∧y)∧z{\displaystyle x\wedge (y\wedge z)=(x\wedge y)\wedge z} (associativity), and c. x∧x=x{\displaystyle x\wedge x=x} (idempotency). Joins are defined dually, with the join of x and y,{\displaystyle x{\text{ and }}y,} if it exists, denoted by x∨y.{\displaystyle x\vee y.} An element j{\displaystyle j} of A{\displaystyle A} is the join (or least upper bound or supremum) of x and y{\displaystyle x{\text{ and }}y} in A{\displaystyle A} if the following two conditions are satisfied: first, x≤j{\displaystyle x\leq j} and y≤j{\displaystyle y\leq j} (that is, j{\displaystyle j} is an upper bound of x and y{\displaystyle x{\text{ and }}y}); and second, for any w∈A{\displaystyle w\in A} such that x≤w{\displaystyle x\leq w} and y≤w,{\displaystyle y\leq w,} we have j≤w{\displaystyle j\leq w} (that is, j{\displaystyle j} is less than or equal to every other upper bound of x and y{\displaystyle x{\text{ and }}y}). By definition, a binary operation ∧{\displaystyle \,\wedge \,} on a set A{\displaystyle A} is a meet if it satisfies the three conditions a, b, and c. The pair (A,∧){\displaystyle (A,\wedge )} is then a meet-semilattice.
Moreover, we then may define abinary relation≤{\displaystyle \,\leq \,}onA, by stating thatx≤y{\displaystyle x\leq y}if and only ifx∧y=x.{\displaystyle x\wedge y=x.}In fact, this relation is apartial orderonA.{\displaystyle A.}Indeed, for any elementsx,y,z∈A,{\displaystyle x,y,z\in A,} Both meets and joins equally satisfy this definition: a couple of associated meet and join operations yield partial orders which are the reverse of each other. When choosing one of these orders as the main ones, one also fixes which operation is considered a meet (the one giving the same order) and which is considered a join (the other one). If(A,≤){\displaystyle (A,\leq )}is apartially ordered set, such that each pair of elements inA{\displaystyle A}has a meet, then indeedx∧y=x{\displaystyle x\wedge y=x}if and only ifx≤y,{\displaystyle x\leq y,}since in the latter case indeedx{\displaystyle x}is a lower bound ofxandy,{\displaystyle x{\text{ and }}y,}and sincex{\displaystyle x}is thegreatestlower bound if and only if it is a lower bound. Thus, the partial order defined by the meet in the universal algebra approach coincides with the original partial order. Conversely, if(A,∧){\displaystyle (A,\wedge )}is ameet-semilattice, and the partial order≤{\displaystyle \,\leq \,}is defined as in the universal algebra approach, andz=x∧y{\displaystyle z=x\wedge y}for some elementsx,y∈A,{\displaystyle x,y\in A,}thenz{\displaystyle z}is the greatest lower bound ofxandy{\displaystyle x{\text{ and }}y}with respect to≤,{\displaystyle \,\leq ,\,}sincez∧x=x∧z=x∧(x∧y)=(x∧x)∧y=x∧y=z{\displaystyle z\wedge x=x\wedge z=x\wedge (x\wedge y)=(x\wedge x)\wedge y=x\wedge y=z}and thereforez≤x.{\displaystyle z\leq x.}Similarly,z≤y,{\displaystyle z\leq y,}and ifw{\displaystyle w}is another lower bound ofxandy,{\displaystyle x{\text{ and }}y,}thenw∧x=w∧y=w,{\displaystyle w\wedge x=w\wedge y=w,}whencew∧z=w∧(x∧y)=(w∧x)∧y=w∧y=w.{\displaystyle w\wedge z=w\wedge (x\wedge y)=(w\wedge x)\wedge y=w\wedge y=w.}Thus, there is a meet defined by the partial order defined by the original meet, and the two meets coincide. In other words, the two approaches yield essentially equivalent concepts, a set equipped with both a binary relation and a binary operation, such that each one of these structures determines the other, and fulfill the conditions for partial orders or meets, respectively. If(A,∧){\displaystyle (A,\wedge )}is a meet-semilattice, then the meet may be extended to a well-defined meet of anynon-emptyfinite set, by the technique described initerated binary operations. Alternatively, if the meet defines or is defined by a partial order, some subsets ofA{\displaystyle A}indeed have infima with respect to this, and it is reasonable to consider such an infimum as the meet of the subset. For non-empty finite subsets, the two approaches yield the same result, and so either may be taken as a definition of meet. In the case whereeachsubset ofA{\displaystyle A}has a meet, in fact(A,≤){\displaystyle (A,\leq )}is acomplete lattice; for details, seecompleteness (order theory). If somepower set℘(X){\displaystyle \wp (X)}is partially ordered in the usual way (by⊆{\displaystyle \,\subseteq }) then joins are unions and meets are intersections; in symbols,∨=∪and∧=∩{\displaystyle \,\vee \,=\,\cup \,{\text{ and }}\,\wedge \,=\,\cap \,}(where the similarity of these symbols may be used as a mnemonic for remembering that∨{\displaystyle \,\vee \,}denotes the join/supremum and∧{\displaystyle \,\wedge \,}denotes the meet/infimum[note 1]). 
More generally, suppose thatF≠∅{\displaystyle {\mathcal {F}}\neq \varnothing }is afamily of subsetsof some setX{\displaystyle X}that ispartially orderedby⊆.{\displaystyle \,\subseteq .\,}IfF{\displaystyle {\mathcal {F}}}is closed under arbitrary unions and arbitrary intersections and ifA,B,(Fi)i∈I{\displaystyle A,B,\left(F_{i}\right)_{i\in I}}belong toF{\displaystyle {\mathcal {F}}}thenA∨B=A∪B,A∧B=A∩B,⋁i∈IFi=⋃i∈IFi,and⋀i∈IFi=⋂i∈IFi.{\displaystyle A\vee B=A\cup B,\quad A\wedge B=A\cap B,\quad \bigvee _{i\in I}F_{i}=\bigcup _{i\in I}F_{i},\quad {\text{ and }}\quad \bigwedge _{i\in I}F_{i}=\bigcap _{i\in I}F_{i}.}But ifF{\displaystyle {\mathcal {F}}}is not closed under unions thenA∨B{\displaystyle A\vee B}exists in(F,⊆){\displaystyle ({\mathcal {F}},\subseteq )}if and only if there exists a unique⊆{\displaystyle \,\subseteq }-smallestJ∈F{\displaystyle J\in {\mathcal {F}}}such thatA∪B⊆J.{\displaystyle A\cup B\subseteq J.}For example, ifF={{1},{2},{1,2,3},R}{\displaystyle {\mathcal {F}}=\{\{1\},\{2\},\{1,2,3\},\mathbb {R} \}}then{1}∨{2}={1,2,3}{\displaystyle \{1\}\vee \{2\}=\{1,2,3\}}whereas ifF={{1},{2},{1,2,3},{0,1,2},R}{\displaystyle {\mathcal {F}}=\{\{1\},\{2\},\{1,2,3\},\{0,1,2\},\mathbb {R} \}}then{1}∨{2}{\displaystyle \{1\}\vee \{2\}}does not exist because the sets{0,1,2}and{1,2,3}{\displaystyle \{0,1,2\}{\text{ and }}\{1,2,3\}}are the only upper bounds of{1}and{2}{\displaystyle \{1\}{\text{ and }}\{2\}}in(F,⊆){\displaystyle ({\mathcal {F}},\subseteq )}that could possibly be theleastupper bound{1}∨{2}{\displaystyle \{1\}\vee \{2\}}but{0,1,2}⊈{1,2,3}{\displaystyle \{0,1,2\}\not \subseteq \{1,2,3\}}and{1,2,3}⊈{0,1,2}.{\displaystyle \{1,2,3\}\not \subseteq \{0,1,2\}.}IfF={{1},{2},{0,2,3},{0,1,3}}{\displaystyle {\mathcal {F}}=\{\{1\},\{2\},\{0,2,3\},\{0,1,3\}\}}then{1}∨{2}{\displaystyle \{1\}\vee \{2\}}does not exist because there is no upper bound of{1}and{2}{\displaystyle \{1\}{\text{ and }}\{2\}}in(F,⊆).{\displaystyle ({\mathcal {F}},\subseteq ).}
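The examples just given can be checked mechanically: the join of two members of a family of sets ordered by inclusion exists exactly when there is a unique smallest member of the family containing their union. The following sketch does this for the families above (with the member ℝ omitted, since it does not affect the result); because the families are finite, a unique minimal upper bound is automatically the least one.

```python
# Sketch: computing the join of two sets inside a family F ordered by inclusion.
def join(a, b, family):
    upper_bounds = [s for s in family if a | b <= s]        # members containing A ∪ B
    minimal = [u for u in upper_bounds
               if not any(v < u for v in upper_bounds)]      # minimal upper bounds
    return minimal[0] if len(minimal) == 1 else None         # unique least one, or no join

F1 = [{1}, {2}, {1, 2, 3}]
F2 = [{1}, {2}, {1, 2, 3}, {0, 1, 2}]
F3 = [{1}, {2}, {0, 2, 3}, {0, 1, 3}]
print(join({1}, {2}, F1))   # {1, 2, 3}
print(join({1}, {2}, F2))   # None: two incomparable minimal upper bounds, so no join
print(join({1}, {2}, F3))   # None: no upper bound at all
```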
https://en.wikipedia.org/wiki/Join_and_meet
The concept of alatticearises inorder theory, a branch of mathematics. TheHasse diagrambelow depicts the inclusion relationships among some important subclasses of lattices. 1. Aboolean algebrais acomplementeddistributive lattice. (def) 2. A boolean algebra is aheyting algebra.[1] 3. A boolean algebra isorthocomplemented.[2] 4. A distributive orthocomplemented lattice isorthomodular. 5. A boolean algebra is orthomodular. (1,3,4) 6. An orthomodular lattice is orthocomplemented. (def) 7. An orthocomplemented lattice is complemented. (def) 8. A complemented lattice is bounded. (def) 9. Analgebraic latticeis complete. (def) 10. Acomplete latticeis bounded. 11. A heyting algebra is bounded. (def) 12. A bounded lattice is a lattice. (def) 13. A heyting algebra isresiduated. 14. A residuated lattice is a lattice. (def) 15. A distributive lattice is modular.[3] 16. A modular complemented lattice is relatively complemented.[4] 17. A boolean algebra isrelatively complemented. (1,15,16) 18. A relatively complemented lattice is a lattice. (def) 19. A heyting algebra is distributive.[5] 20. Atotally ordered setis a distributive lattice. 21. Ametric latticeismodular.[6] 22. A modular lattice is semi-modular.[7] 23. Aprojective latticeis modular.[8] 24. A projective lattice is geometric. (def) 25. Ageometric latticeis semi-modular.[9] 26. A semi-modular lattice is atomic.[10][disputed–discuss] 27. Anatomiclattice is a lattice. (def) 28. A lattice is a semi-lattice. (def) 29. Asemi-latticeis apartially ordered set. (def)
https://en.wikipedia.org/wiki/Map_of_lattices
In the mathematical discipline of order theory, a complemented lattice is a bounded lattice (with least element 0 and greatest element 1), in which every element a has a complement, i.e. an element b satisfying a ∨ b = 1 and a ∧ b = 0. Complements need not be unique. A relatively complemented lattice is a lattice such that every interval [c, d], viewed as a bounded lattice in its own right, is a complemented lattice. An orthocomplementation on a complemented lattice is an involution that is order-reversing and maps each element to a complement. An orthocomplemented lattice satisfying a weak form of the modular law is called an orthomodular lattice. In bounded distributive lattices, complements are unique. Every complemented distributive lattice has a unique orthocomplementation and is in fact a Boolean algebra. A complemented lattice is a bounded lattice (with least element 0 and greatest element 1), in which every element a has a complement, i.e. an element b such that a ∨ b = 1 and a ∧ b = 0. In general an element may have more than one complement. However, in a (bounded) distributive lattice every element will have at most one complement.[1] A lattice in which every element has exactly one complement is called a uniquely complemented lattice.[2] A lattice with the property that every interval (viewed as a sublattice) is complemented is called a relatively complemented lattice. In other words, a relatively complemented lattice is characterized by the property that for every element a in an interval [c, d] there is an element b such that a ∨ b = d and a ∧ b = c. Such an element b is called a complement of a relative to the interval. A distributive lattice is complemented if and only if it is bounded and relatively complemented.[3][4] The lattice of subspaces of a vector space provides an example of a complemented lattice that is not, in general, distributive. An orthocomplementation on a bounded lattice is a function that maps each element a to an "orthocomplement" a⊥ in such a way that the following axioms are satisfied:[5] the complement law (a⊥ ∨ a = 1 and a⊥ ∧ a = 0), the involution law (a⊥⊥ = a), and order-reversal (if a ≤ b then b⊥ ≤ a⊥). An orthocomplemented lattice or ortholattice is a bounded lattice equipped with an orthocomplementation. The lattice of subspaces of an inner product space, with the orthogonal complement operation, provides an example of an orthocomplemented lattice that is not, in general, distributive.[6] Boolean algebras are a special case of orthocomplemented lattices, which in turn are a special case of complemented lattices (with extra structure). The ortholattices are most often used in quantum logic, where the closed subspaces of a separable Hilbert space represent quantum propositions and behave as an orthocomplemented lattice. Orthocomplemented lattices, like Boolean algebras, satisfy de Morgan's laws: (a ∨ b)⊥ = a⊥ ∧ b⊥ and (a ∧ b)⊥ = a⊥ ∨ b⊥. A lattice is called modular if for all elements a, b and c the implication "if a ≤ c, then a ∨ (b ∧ c) = (a ∨ b) ∧ c" holds. This is weaker than distributivity; e.g. the lattice M3 is modular, but not distributive. A natural further weakening of this condition for orthocomplemented lattices, necessary for applications in quantum logic, is to require it only in the special case b = a⊥. An orthomodular lattice is therefore defined as an orthocomplemented lattice such that for any two elements the implication "if a ≤ c, then a ∨ (a⊥ ∧ c) = c" holds.
Lattices of this form are of crucial importance for the study ofquantum logic, since they are part of the axiomisation of theHilbert spaceformulationofquantum mechanics.Garrett BirkhoffandJohn von Neumannobserved that thepropositionalcalculusin quantum logic is "formally indistinguishable from the calculus of linear subspaces [of a Hilbert space] with respect toset products,linear sumsand orthogonal complements" corresponding to the roles ofand,orandnotin Boolean lattices. This remark has spurred interest in the closed subspaces of a Hilbert space, which form an orthomodular lattice.[7]
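As a small, concrete illustration of the complement condition from the definitions above, the sketch below searches for complements in the bounded lattice of divisors of n ordered by divisibility, where the meet is gcd, the join is lcm, the least element is 1, and the greatest element is n; the choice of lattice is purely illustrative.

```python
# Sketch: searching for complements in the lattice of divisors of n under
# divisibility (meet = gcd, join = lcm, bottom = 1, top = n).
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def complements(a, n):
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    return [b for b in divisors if gcd(a, b) == 1 and lcm(a, b) == n]

print(complements(3, 12))   # [4]: 3 has a complement among the divisors of 12
print(complements(2, 12))   # []: 2 has none, so this lattice is not complemented
print([complements(d, 30) for d in (1, 2, 3, 5, 6, 10, 15, 30)])
# every divisor of 30 has exactly one complement: that lattice is a Boolean algebra
```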
https://en.wikipedia.org/wiki/Orthocomplemented_lattice
In mathematics, a total order or linear order is a partial order in which any two elements are comparable. That is, a total order is a binary relation ≤{\displaystyle \leq } on some set X{\displaystyle X}, which satisfies the following for all a,b{\displaystyle a,b} and c{\displaystyle c} in X{\displaystyle X}: 1. a≤a{\displaystyle a\leq a} (reflexive); 2. if a≤b{\displaystyle a\leq b} and b≤c{\displaystyle b\leq c} then a≤c{\displaystyle a\leq c} (transitive); 3. if a≤b{\displaystyle a\leq b} and b≤a{\displaystyle b\leq a} then a=b{\displaystyle a=b} (antisymmetric); 4. a≤b{\displaystyle a\leq b} or b≤a{\displaystyle b\leq a} (strongly connected). Requirements 1. to 3. just make up the definition of a partial order. Reflexivity (1.) already follows from strong connectedness (4.), but is required explicitly by many authors nevertheless, to indicate the kinship to partial orders.[1] Total orders are sometimes also called simple,[2] connex,[3] or full orders.[4] A set equipped with a total order is a totally ordered set;[5] the terms simply ordered set,[2] linearly ordered set,[3][5] toset[6] and loset[7][8] are also used. The term chain is sometimes defined as a synonym of totally ordered set,[5] but generally refers to a totally ordered subset of a given partially ordered set. An extension of a given partial order to a total order is called a linear extension of that partial order. For delimitation purposes, a total order as defined above is sometimes called a non-strict order. For each (non-strict) total order ≤{\displaystyle \leq } there is an associated relation <{\displaystyle <}, called the strict total order associated with ≤{\displaystyle \leq }, that can be defined in two equivalent ways: a<b{\displaystyle a<b} if a≤b{\displaystyle a\leq b} and a≠b{\displaystyle a\neq b}, or equivalently, a<b{\displaystyle a<b} if not b≤a{\displaystyle b\leq a}. Conversely, the reflexive closure of a strict total order <{\displaystyle <} is a (non-strict) total order. Thus, a strict total order on a set X{\displaystyle X} is a strict partial order on X{\displaystyle X} in which any two distinct elements are comparable. That is, a strict total order is a binary relation <{\displaystyle <} on some set X{\displaystyle X}, which satisfies the following for all a,b{\displaystyle a,b} and c{\displaystyle c} in X{\displaystyle X}: it is irreflexive (not a<a{\displaystyle a<a}), asymmetric (if a<b{\displaystyle a<b} then not b<a{\displaystyle b<a}), transitive (if a<b{\displaystyle a<b} and b<c{\displaystyle b<c} then a<c{\displaystyle a<c}), and connected (if a≠b{\displaystyle a\neq b} then a<b{\displaystyle a<b} or b<a{\displaystyle b<a}). Asymmetry follows from transitivity and irreflexivity;[9] moreover, irreflexivity follows from asymmetry.[10] The term chain is sometimes defined as a synonym for a totally ordered set, but it is generally used for referring to a subset of a partially ordered set that is totally ordered for the induced order.[1][12] Typically, the partially ordered set is a set of subsets of a given set that is ordered by inclusion, and the term is used for stating properties of the set of the chains. This high number of nested levels of sets explains the usefulness of the term. A common example of the use of chain for referring to totally ordered subsets is Zorn's lemma, which asserts that, if every chain in a partially ordered set X has an upper bound in X, then X contains at least one maximal element.[13] Zorn's lemma is commonly used with X being a set of subsets; in this case, the upper bound is obtained by proving that the union of the elements of a chain in X is in X. This is the way that is generally used to prove that a vector space has Hamel bases and that a ring has maximal ideals. In some contexts, the chains that are considered are order isomorphic to the natural numbers with their usual order or its opposite order. In this case, a chain can be identified with a monotone sequence, and is called an ascending chain or a descending chain, depending on whether the sequence is increasing or decreasing.[14] A partially ordered set has the descending chain condition if every descending chain eventually stabilizes.[15] For example, an order is well founded if it has the descending chain condition. Similarly, the ascending chain condition means that every ascending chain eventually stabilizes. For example, a Noetherian ring is a ring whose ideals satisfy the ascending chain condition.
In other contexts, only chains that arefinite setsare considered. In this case, one talks of afinite chain, often shortened as achain. In this case, thelengthof a chain is the number of inequalities (or set inclusions) between consecutive elements of the chain; that is, the number minus one of elements in the chain.[16]Thus asingleton setis a chain of length zero, and anordered pairis a chain of length one. Thedimensionof a space is often defined or characterized as the maximal length of chains of subspaces. For example, thedimension of a vector spaceis the maximal length of chains oflinear subspaces, and theKrull dimensionof acommutative ringis the maximal length of chains ofprime ideals. "Chain" may also be used for some totally ordered subsets ofstructuresthat are not partially ordered sets. An example is given byregular chainsof polynomials. Another example is the use of "chain" as a synonym for awalkin agraph. One may define a totally ordered set as a particular kind oflattice, namely one in which we have We then writea≤bif and only ifa=a∧b{\displaystyle a=a\wedge b}. Hence a totally ordered set is adistributive lattice. A simplecountingargument will verify that any non-empty finite totally ordered set (and hence any non-empty subset thereof) has a least element. Thus every finite total order is in fact awell order. Either by direct proof or by observing that every well order isorder isomorphicto anordinalone may show that every finite total order isorder isomorphicto aninitial segmentof the natural numbers ordered by <. In other words, a total order on a set withkelements induces a bijection with the firstknatural numbers. Hence it is common to index finite total orders or well orders withorder typeω by natural numbers in a fashion which respects the ordering (either starting with zero or with one). Totally ordered sets form afull subcategoryof thecategoryofpartially ordered sets, with themorphismsbeing maps which respect the orders, i.e. mapsfsuch that ifa≤bthenf(a) ≤f(b). Abijectivemapbetween two totally ordered sets that respects the two orders is anisomorphismin this category. For any totally ordered setXwe can define theopen intervals We can use these open intervals to define atopologyon any ordered set, theorder topology. When more than one order is being used on a set one talks about the order topology induced by a particular order. For instance ifNis the natural numbers,<is less than and>greater than we might refer to the order topology onNinduced by<and the order topology onNinduced by>(in this case they happen to be identical but will not in general). The order topology induced by a total order may be shown to be hereditarilynormal. A totally ordered set is said to becompleteif every nonempty subset that has anupper bound, has aleast upper bound. For example, the set ofreal numbersRis complete but the set ofrational numbersQis not. In other words, the various concepts ofcompleteness(not to be confused with being "total") do not carry over torestrictions. For example, over thereal numbersa property of the relation≤is that everynon-emptysubsetSofRwith anupper boundinRhas aleast upper bound(also called supremum) inR. However, for the rational numbers this supremum is not necessarily rational, so the same property does not hold on the restriction of the relation≤to the rational numbers. There are a number of results relating properties of the order topology to the completeness of X: A totally ordered set (with its order topology) which is acomplete latticeiscompact. 
Examples are the closed intervals of real numbers, e.g. the unit interval [0,1], and the affinely extended real number system (extended real number line). There are order-preserving homeomorphisms between these examples. For any two disjoint total orders (A1,≤1){\displaystyle (A_{1},\leq _{1})} and (A2,≤2){\displaystyle (A_{2},\leq _{2})}, there is a natural order ≤+{\displaystyle \leq _{+}} on the set A1∪A2{\displaystyle A_{1}\cup A_{2}}, which is called the sum of the two orders or sometimes just A1+A2{\displaystyle A_{1}+A_{2}}: for x,y∈A1∪A2{\displaystyle x,y\in A_{1}\cup A_{2}}, x≤+y{\displaystyle x\leq _{+}y} holds if and only if x,y∈A1{\displaystyle x,y\in A_{1}} and x≤1y{\displaystyle x\leq _{1}y}, or x,y∈A2{\displaystyle x,y\in A_{2}} and x≤2y{\displaystyle x\leq _{2}y}, or x∈A1{\displaystyle x\in A_{1}} and y∈A2{\displaystyle y\in A_{2}}. Intuitively, this means that the elements of the second set are added on top of the elements of the first set. More generally, if (I,≤){\displaystyle (I,\leq )} is a totally ordered index set, and for each i∈I{\displaystyle i\in I} the structure (Ai,≤i){\displaystyle (A_{i},\leq _{i})} is a linear order, where the sets Ai{\displaystyle A_{i}} are pairwise disjoint, then the natural total order on ⋃iAi{\displaystyle \bigcup _{i}A_{i}} is defined by x≤y{\displaystyle x\leq y} if and only if x,y∈Ai{\displaystyle x,y\in A_{i}} and x≤iy{\displaystyle x\leq _{i}y} for some i{\displaystyle i}, or x∈Ai{\displaystyle x\in A_{i}} and y∈Aj{\displaystyle y\in A_{j}} with i<j{\displaystyle i<j}. The first-order theory of total orders is decidable, i.e. there is an algorithm for deciding which first-order statements hold for all total orders. Using interpretability in S2S, the monadic second-order theory of countable total orders is also decidable.[17] There are several ways to take two totally ordered sets and extend to an order on the Cartesian product, though the resulting order may only be partial. Here are three of these possible orders, listed such that each order is stronger than the next: the lexicographical order, where (a,b)≤(c,d){\displaystyle (a,b)\leq (c,d)} if a<c{\displaystyle a<c}, or a=c{\displaystyle a=c} and b≤d{\displaystyle b\leq d} (this one is a total order); the product order, where (a,b)≤(c,d){\displaystyle (a,b)\leq (c,d)} if a≤c{\displaystyle a\leq c} and b≤d{\displaystyle b\leq d}; and the reflexive closure of the direct product of the corresponding strict total orders, where (a,b)≤(c,d){\displaystyle (a,b)\leq (c,d)} if a<c{\displaystyle a<c} and b<d{\displaystyle b<d}, or a=c{\displaystyle a=c} and b=d{\displaystyle b=d}. Each of these orders extends the next in the sense that if we have x≤y{\displaystyle x\leq y} in the product order, this relation also holds in the lexicographic order, and so on. All three can similarly be defined for the Cartesian product of more than two sets. Applied to the vector space Rn, each of these makes it an ordered vector space. See also examples of partially ordered sets. A real function of n real variables defined on a subset of Rn defines a strict weak order and a corresponding total preorder on that subset. A binary relation that is antisymmetric, transitive, and reflexive (but not necessarily total) is a partial order. A group with a compatible total order is a totally ordered group. There are only a few nontrivial structures that are (interdefinable as) reducts of a total order. Forgetting the orientation results in a betweenness relation. Forgetting the location of the ends results in a cyclic order. Forgetting both data results in the use of point-pair separation to distinguish, on a circle, the two intervals determined by a point-pair.[18]
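A quick sketch of the difference between the first two of these orders, using Python tuples (whose built-in comparison is lexicographic); the example pairs are arbitrary.

```python
# Sketch: lexicographic order versus product order on a Cartesian product.
def lex_le(p, q):
    return p <= q                                  # tuple comparison is lexicographic

def product_le(p, q):
    return all(a <= b for a, b in zip(p, q))       # componentwise comparison

print(lex_le((1, 5), (2, 0)))      # True: 1 < 2 decides, the second component is ignored
print(product_le((1, 5), (2, 0)))  # False: 5 <= 0 fails, so the pair is incomparable
print(product_le((1, 0), (2, 5)))  # True, and hence also True in the lexicographic order
```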
https://en.wikipedia.org/wiki/Total_order
Inabstract algebra, askew latticeis analgebraic structurethat is anon-commutativegeneralization of alattice. While the termskew latticecan be used to refer to any non-commutative generalization of a lattice, since 1989 it has been used primarily as follows. Askew latticeis asetSequipped with twoassociative,idempotentbinary operations∧{\displaystyle \wedge }and∨{\displaystyle \vee }, calledmeetandjoin, that validate the following dual pair of absorption laws Given that∨{\displaystyle \vee }and∧{\displaystyle \wedge }are associative and idempotent, these identities are equivalent to validating the following dual pair of statements: For over 60 years, noncommutative variations of lattices have been studied with differing motivations. For some the motivation has been an interest in the conceptual boundaries oflattice theory; for others it was a search for noncommutative forms oflogicandBoolean algebra; and for others it has been the behavior ofidempotentsinrings. Anoncommutative lattice, generally speaking, is analgebra(S;∧,∨){\displaystyle (S;\wedge ,\vee )}where∧{\displaystyle \wedge }and∨{\displaystyle \vee }areassociative,idempotentbinaryoperationsconnected byabsorption identitiesguaranteeing that∧{\displaystyle \wedge }in some way dualizes∨{\displaystyle \vee }. The precise identities chosen depends upon the underlying motivation, with differing choices producing distinctvarieties of algebras. Pascual Jordan, motivated by questions inquantum logic, initiated a study ofnoncommutative latticesin his 1949 paper,Über Nichtkommutative Verbände,[2]choosing the absorption identities He referred to those algebras satisfying them asSchrägverbände. By varying or augmenting these identities, Jordan and others obtained a number of varieties of noncommutative lattices. Beginning with Jonathan Leech's 1989 paper,Skew lattices in rings,[1]skew lattices as defined above have been the primary objects of study. This was aided by previous results aboutbands. This was especially the case for many of the basic properties. Natural partial order and natural quasiorder In a skew latticeS{\displaystyle S}, the naturalpartial orderis defined byy≤x{\displaystyle y\leq x}ifx∧y=y=y∧x{\displaystyle x\wedge y=y=y\wedge x}, or dually,x∨y=x=y∨x{\displaystyle x\vee y=x=y\vee x}. The naturalpreorderonS{\displaystyle S}is given byy⪯x{\displaystyle y\preceq x}ify∧x∧y=y{\displaystyle y\wedge x\wedge y=y}or duallyx∨y∨x=x{\displaystyle x\vee y\vee x=x}. While≤{\displaystyle \leq }and⪯{\displaystyle \preceq }agree on lattices,≤{\displaystyle \leq }properly refines⪯{\displaystyle \preceq }in the noncommutative case. The induced naturalequivalenceD{\displaystyle D}is defined byxDy{\displaystyle xDy}ifx⪯y⪯x{\displaystyle x\preceq y\preceq x}, that is,x∧y∧x=x{\displaystyle x\wedge y\wedge x=x}andy∧x∧y=y{\displaystyle y\wedge x\wedge y=y}or dually,x∨y∨x=x{\displaystyle x\vee y\vee x=x}andy∨x∨y=y{\displaystyle y\vee x\vee y=y}. The blocks of the partitionS/D{\displaystyle S/D}are lattice ordered byA>B{\displaystyle A>B}if and only ifa∈A{\displaystyle a\in A}andb∈B{\displaystyle b\in B}exist such thata>b{\displaystyle a>b}. This permits us to drawHasse diagramsof skew lattices such as the following pair: E.g., in the diagram on the left above, thata{\displaystyle a}andb{\displaystyle b}areD{\displaystyle D}related is expressed by the dashed segment. The slanted lines reveal the natural partial order between elements of the distinctD{\displaystyle D}-classes. 
The elements1{\displaystyle 1},c{\displaystyle c}and0{\displaystyle 0}form the singletonD{\displaystyle D}-classes. Rectangular Skew Lattices Skew lattices consisting of a singleD{\displaystyle D}-class are calledrectangular. They are characterized by the equivalent identities:x∧y∧x=x{\displaystyle x\wedge y\wedge x=x},y∨x∨y=y{\displaystyle y\vee x\vee y=y}andx∨y=y∧x{\displaystyle x\vee y=y\wedge x}. Rectangular skew lattices are isomorphic to skew lattices having the following construction (and conversely): given nonempty setsL{\displaystyle L}andR{\displaystyle R}, onL×R{\displaystyle L\times R}define(x,y)∨(z,w)=(z,y){\displaystyle (x,y)\vee (z,w)=(z,y)}and(x,y)∧(z,w)=(x,w){\displaystyle (x,y)\wedge (z,w)=(x,w)}. TheD{\displaystyle D}-class partition of a skew latticeS{\displaystyle S}, as indicated in the above diagrams, is the unique partition ofS{\displaystyle S}into its maximal rectangular subalgebras, Moreover,D{\displaystyle D}is acongruencewith the inducedquotientalgebraS/D{\displaystyle S/D}being the maximal lattice image ofS{\displaystyle S}, thus making every skew latticeS{\displaystyle S}a lattice of rectangular subalgebras. This is the Clifford–McLean theorem for skew lattices, first given for bands separately byCliffordand McLean. It is also known asthe first decomposition theorem for skew lattices. Right (left) handed skew lattices and the Kimura factorization A skew lattice is right-handed if it satisfies the identityx∧y∧x=y∧x{\displaystyle x\wedge y\wedge x=y\wedge x}or dually,x∨y∨x=x∨y{\displaystyle x\vee y\vee x=x\vee y}. These identities essentially assert thatx∧y=y{\displaystyle x\wedge y=y}andx∨y=x{\displaystyle x\vee y=x}in eachD{\displaystyle D}-class. Every skew latticeS{\displaystyle S}has a unique maximal right-handed imageS/L{\displaystyle S/L}where the congruenceL{\displaystyle L}is defined byxLy{\displaystyle xLy}if bothx∧y=x{\displaystyle x\wedge y=x}andy∧x=y{\displaystyle y\wedge x=y}(or dually,x∨y=y{\displaystyle x\vee y=y}andy∨x=x{\displaystyle y\vee x=x}). Likewise a skew lattice is left-handed ifx∧y=x{\displaystyle x\wedge y=x}andx∨y=y{\displaystyle x\vee y=y}in eachD{\displaystyle D}-class. Again the maximal left-handed image of a skew latticeS{\displaystyle S}is the imageS/R{\displaystyle S/R}where the congruenceR{\displaystyle R}is defined in dual fashion toL{\displaystyle L}. Many examples of skew lattices are either right- or left-handed. In the lattice of congruences,R∨L=D{\displaystyle R\vee L=D}andR∩L{\displaystyle R\cap L}is the identity congruenceΔ{\displaystyle \Delta }. The induced epimorphismS→S/D{\displaystyle S\rightarrow S/D}factors through both induced epimorphismsS→S/L{\displaystyle S\rightarrow S/L}andS→S/R{\displaystyle S\rightarrow S/R}. SettingT=S/D{\displaystyle T=S/D}, the homomorphismk:S→S/L×S/R{\displaystyle k:S\rightarrow S/L\times S/R}defined byk(x)=(Lx,Rx){\displaystyle k(x)=(L_{x},R_{x})}, induces an isomorphismk∗:S∼S/L×TS/R{\displaystyle k*:S\sim S/L\times _{T}S/R}. This is the Kimura factorization ofS{\displaystyle S}into a fibred product of its maximal right- and left-handed images. Like the Clifford–McLean theorem, Kimura factorization (or thesecond decomposition theorem for skew lattices) was first given for regular bands (bands that satisfy the middle absorption identity,xyxzx=xyzx{\displaystyle xyxzx=xyzx}). Indeed, both∧{\displaystyle \wedge }and∨{\displaystyle \vee }are regular band operations. 
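To make the rectangular construction above concrete, here is a small brute-force check in Python (our own illustration, with the absorption laws of a skew lattice written out in the comments): on LEFT × RIGHT the operations (x, y) ∨ (z, w) = (z, y) and (x, y) ∧ (z, w) = (x, w) are idempotent and associative, satisfy the dual absorption laws, and form a single D-class.

from itertools import product

LEFT, RIGHT = ["l1", "l2"], ["r1", "r2", "r3"]
S = list(product(LEFT, RIGHT))      # carrier of the rectangular skew lattice on LEFT x RIGHT

def join(p, q):                     # (x, y) v (z, w) = (z, y)
    return (q[0], p[1])

def meet(p, q):                     # (x, y) ^ (z, w) = (x, w)
    return (p[0], q[1])

for x, y in product(S, repeat=2):
    assert meet(x, x) == x and join(x, x) == x                  # idempotency
    assert meet(x, join(x, y)) == x == meet(join(y, x), x)      # x ^ (x v y) = x = (y v x) ^ x
    assert join(x, meet(x, y)) == x == join(meet(y, x), x)      # x v (x ^ y) = x = (y ^ x) v x
    assert meet(meet(x, y), x) == x and join(x, y) == meet(y, x)    # rectangularity
    for z in S:                                                 # associativity of both operations
        assert meet(meet(x, y), z) == meet(x, meet(y, z))
        assert join(join(x, y), z) == join(x, join(y, z))
print("rectangular skew lattice axioms verified on", len(S), "elements")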
The above symbolsD{\displaystyle D},R{\displaystyle R}andL{\displaystyle L}come, of course, from basic semigroup theory.[1][3][4][5][6][7][8][9] Skew lattices form a variety. Rectangular skew lattices, left-handed and right-handed skew lattices all form subvarieties that are central to the basic structure theory of skew lattices. Here are several more. Symmetric skew lattices A skew latticeSis symmetric if for anyx,y∈S{\displaystyle x,y\in S},x∧y=y∧x{\displaystyle x\wedge y=y\wedge x}if and only ifx∨y=y∨x{\displaystyle x\vee y=y\vee x}. Occurrences of commutation are thus unambiguous for such skew lattices, with subsets of pairwise commuting elements generating commutative subalgebras, i.e., sublattices. (This is not true for skew lattices in general.) Equational bases for this subvariety, first given by Spinks[10]are:x∨y∨(x∧y)=(y∧x)∨y∨x{\displaystyle x\vee y\vee (x\wedge y)=(y\wedge x)\vee y\vee x}andx∧y∧(x∨y)=(y∨x)∧y∧x{\displaystyle x\wedge y\wedge (x\vee y)=(y\vee x)\wedge y\wedge x}. Alattice sectionof a skew latticeS{\displaystyle S}is a sublatticeT{\displaystyle T}ofS{\displaystyle S}meeting eachD{\displaystyle D}-class ofS{\displaystyle S}at a single element.T{\displaystyle T}is thus an internal copy of the latticeS/D{\displaystyle S/D}with the compositionT⊆S→S/D{\displaystyle T\subseteq S\rightarrow S/D}being an isomorphism. All symmetric skew lattices for which|S/D|≤ℵ0{\displaystyle |S/D|\leq \aleph _{0}}admit a lattice section.[9]Symmetric or not, having a lattice sectionT{\displaystyle T}guarantees thatS{\displaystyle S}also has internal copies ofS/L{\displaystyle S/L}andS/R{\displaystyle S/R}given respectively byT[R]=⋃t∈TRt{\displaystyle T[R]=\bigcup _{t\in T}R_{t}}andT[L]=⋃t∈TLt{\displaystyle T[L]=\bigcup _{t\in T}L_{t}}, whereRt{\displaystyle R_{t}}andLt{\displaystyle Lt}are theR{\displaystyle R}andL{\displaystyle L}congruence classes oft{\displaystyle t}inT{\displaystyle T}. ThusT[R]⊆S→S/L{\displaystyle T[R]\subseteq S\rightarrow S/L}andT[L]⊆S→S/R{\displaystyle T[L]\subseteq S\rightarrow S/R}are isomorphisms.[7]This leads to a commuting diagram of embedding dualizing the preceding Kimura diagram. Cancellative skew lattices A skew lattice is cancellative ifx∨y=x∨z{\displaystyle x\vee y=x\vee z}andx∧y=x∧z{\displaystyle x\wedge y=x\wedge z}impliesy=z{\displaystyle y=z}and likewisex∨z=y∨z{\displaystyle x\vee z=y\vee z}andx∧z=y∧z{\displaystyle x\wedge z=y\wedge z}impliesx=y{\displaystyle x=y}. Cancellatice skew lattices are symmetric and can be shown to form a variety. Unlike lattices, they need not be distributive, and conversely. Distributive skew lattices Distributive skew lattices are determined by the identities: x∧(y∨z)∧x=(x∧y∧x)∨(x∧z∧x){\displaystyle x\wedge (y\vee z)\wedge x=(x\wedge y\wedge x)\vee (x\wedge z\wedge x)}(D1) x∨(y∧z)∨x=(x∨y∨x)∧(x∨z∨x).{\displaystyle x\vee (y\wedge z)\vee x=(x\vee y\vee x)\wedge (x\vee z\vee x).}(D'1) Unlike lattices, (D1) and (D'1) are not equivalent in general for skew lattices, but they are for symmetric skew lattices.[8][11][12]The condition (D1) can be strengthened to x∧(y∨z)∧w=(x∧y∧w)∨(x∧z∧w){\displaystyle x\wedge (y\vee z)\wedge w=(x\wedge y\wedge w)\vee (x\wedge z\wedge w)}(D2) in which case (D'1) is a consequence. A skew latticeS{\displaystyle S}satisfies both (D2) and its dual,x∨(y∧z)∨w=(x∨y∨w)∧(x∨z∨w){\displaystyle x\vee (y\wedge z)\vee w=(x\vee y\vee w)\wedge (x\vee z\vee w)}, if and only if it factors as the product of a distributive lattice and a rectangular skew lattice. 
In this latter case (D2) can be strengthened to x∧(y∨z)=(x∧y)∨(x∧z){\displaystyle x\wedge (y\vee z)=(x\wedge y)\vee (x\wedge z)}and(y∨z)∧w=(y∧w)∨(z∧w){\displaystyle (y\vee z)\wedge w=(y\wedge w)\vee (z\wedge w)}. (D3) On its own, (D3) is equivalent to (D2) when symmetry is added.[1]We thus have six subvarieties of skew lattices determined respectively by (D1), (D2), (D3) and their duals. Normal skew lattices As seen above,∧{\displaystyle \wedge }and∨{\displaystyle \vee }satisfy the identityxyxzx=xyzx{\displaystyle xyxzx=xyzx}. Bands satisfying the stronger identity,xyzx=xzyx{\displaystyle xyzx=xzyx}, are called normal. A skew lattice is normal skew if it satisfies x∧y∧z∧x=x∧z∧y∧x.(N){\displaystyle x\wedge y\wedge z\wedge x=x\wedge z\wedge y\wedge x.(N)} For each element a in a normal skew latticeS{\displaystyle S}, the seta∧S∧a{\displaystyle a\wedge S\wedge a}defined by {a∧x∧a|x∈S{\displaystyle a\wedge x\wedge a|x\in S}} or equivalently {x∈S|x≤a{\displaystyle x\in S|x\leq a}} is a sublattice ofS{\displaystyle S}, and conversely. (Thus normal skew lattices have also been called local lattices.) When both∧{\displaystyle \wedge }and∨{\displaystyle \vee }are normal,S{\displaystyle S}splits isomorphically into a productT×D{\displaystyle T\times D}of a latticeT{\displaystyle T}and a rectangular skew latticeD{\displaystyle D}, and conversely. Thus both normal skew lattices and split skew lattices form varieties. Returning to distribution,(D2)=(D1)+(N){\displaystyle (D2)=(D1)+(N)}so that(D2){\displaystyle (D2)}characterizes the variety of distributive, normal skew lattices, and (D3) characterizes the variety of symmetric, distributive, normal skew lattices. Categorical skew lattices A skew lattice is categorical if nonempty composites of coset bijections are coset bijections. Categorical skew lattices form a variety. Skew lattices in rings and normal skew lattices are examples of algebras in this variety.[3]Leta>b>c{\displaystyle a>b>c}witha∈A{\displaystyle a\in A},b∈B{\displaystyle b\in B}andc∈C{\displaystyle c\in C},φ{\displaystyle \varphi }be the coset bijection fromA{\displaystyle A}toB{\displaystyle B}takinga{\displaystyle a}tob{\displaystyle b},ψ{\displaystyle \psi }be the coset bijection fromB{\displaystyle B}toC{\displaystyle C}takingb{\displaystyle b}toc{\displaystyle c}and finallyχ{\displaystyle \chi }be the coset bijection fromA{\displaystyle A}toC{\displaystyle C}takinga{\displaystyle a}toc{\displaystyle c}. A skew latticeS{\displaystyle S}is categorical if one always has the equalityψ∘φ=χ{\displaystyle \psi \circ \varphi =\chi }, i.e. , if the composite partial bijectionψ∘φ{\displaystyle \psi \circ \varphi }if nonempty is a coset bijection from aC{\displaystyle C}-coset ofA{\displaystyle A}to anA{\displaystyle A}-coset ofC{\displaystyle C}. That is(A∧b∧A)∩(C∨b∨C)=(C∨a∨C)∧b∧(C∨a∨C)=(A∧c∧A)∨b∨(A∧c∧A){\displaystyle (A\wedge b\wedge A)\cap (C\vee b\vee C)=(C\vee a\vee C)\wedge b\wedge (C\vee a\vee C)=(A\wedge c\wedge A)\vee b\vee (A\wedge c\wedge A)}. All distributive skew lattices are categorical. Though symmetric skew lattices might not be. 
In a sense they reveal the independence between the properties of symmetry and distributivity.[1][3][5][8][9][10][12][13] A zero element in a skew latticeSis an element 0 ofSsuch that for allx∈S,{\displaystyle x\in S,}0∧x=0=x∧0{\displaystyle 0\wedge x=0=x\wedge 0}or, dually,0∨x=x=x∨0.{\displaystyle 0\vee x=x=x\vee 0.}(0) A Boolean skew lattice is a symmetric, distributive normal skew lattice with 0,(S;∨,∧,0),{\displaystyle (S;\vee ,\wedge ,0),}such thata∧S∧a{\displaystyle a\wedge S\wedge a}is a Boolean lattice for eacha∈S.{\displaystyle a\in S.}Given such skew latticeS, a difference operator \ is defined by x \ y =x−x∧y∧x{\displaystyle x-x\wedge y\wedge x}where the latter is evaluated in the Boolean latticex∧S∧x.{\displaystyle x\wedge S\wedge x.}[1]In the presence of (D3) and (0), \ is characterized by the identities: y∧x∖y=0=x∖y∧y{\displaystyle y\wedge x\setminus y=0=x\setminus y\wedge y}and(x∧y∧x)∨x∖y=x=x∖y∨(x∧y∧x).{\displaystyle (x\wedge y\wedge x)\vee x\setminus y=x=x\setminus y\vee (x\wedge y\wedge x).}(S B) One thus has a variety of skew Boolean algebras(S;∨,∧,0){\displaystyle (S;\vee ,\wedge ,\,0)}characterized by identities (D3), (0) and (S B). A primitive skew Boolean algebra consists of 0 and a single non-0D-class. Thus it is the result of adjoining a 0 to a rectangular skew latticeDvia (0) withx∖y=x{\displaystyle x\setminus y=x}, ify=0{\displaystyle y=0}and0{\displaystyle 0}otherwise. Every skew Boolean algebra is asubdirect productof primitive algebras. Skew Boolean algebras play an important role in the study of discriminator varieties and other generalizations inuniversal algebraof Boolean behavior.[14][15][16][17][18][19][20][21][22][23][24] LetA{\displaystyle A}be aringand letE(A){\displaystyle E(A)}denote thesetof allidempotentsinA{\displaystyle A}. For allx,y∈A{\displaystyle x,y\in A}setx∧y=xy{\displaystyle x\wedge y=xy}andx∨y=x+y−xy{\displaystyle x\vee y=x+y-xy}. Clearly∧{\displaystyle \wedge }but also∨{\displaystyle \vee }isassociative. If a subsetS⊆E(A){\displaystyle S\subseteq E(A)}is closed under∧{\displaystyle \wedge }and∨{\displaystyle \vee }, then(S,∧,∨){\displaystyle (S,\wedge ,\vee )}is a distributive, cancellative skew lattice. To find such skew lattices inE(A){\displaystyle E(A)}one looks at bands inE(A){\displaystyle E(A)}, especially the ones that are maximal with respect to some constraint. In fact, every multiplicative band in(){\displaystyle ()}that is maximal with respect to being right regular (= ) is also closed under∨{\displaystyle \vee }and so forms a right-handed skew lattice. In general, every right regular band inE(A){\displaystyle E(A)}generates a right-handed skew lattice inE(A){\displaystyle E(A)}. Dual remarks also hold for left regular bands (bands satisfying the identityxyx=xy{\displaystyle xyx=xy}) inE(A){\displaystyle E(A)}. Maximal regular bands need not to be closed under∨{\displaystyle \vee }as defined; counterexamples are easily found using multiplicative rectangular bands. These cases are closed, however, under the cubic variant of∨{\displaystyle \vee }defined byx∇y=x+y+yx−xyx−yxy{\displaystyle x\nabla y=x+y+yx-xyx-yxy}since in these casesx∇y{\displaystyle x\nabla y}reduces toyx{\displaystyle yx}to give the dual rectangular band. 
By replacing the condition of regularity by normality (xyzw = xzyw), every maximal normal multiplicative band S in E(A) is also closed under ∇, and (S; ∧, ∨, /, 0), where x/y = x − xyx, forms a Boolean skew lattice. When E(A) itself is closed under multiplication, then it is a normal band and thus forms a Boolean skew lattice. In fact, any skew Boolean algebra can be embedded into such an algebra.[25] When A has a multiplicative identity 1, the condition that E(A) is multiplicatively closed is well known to imply that E(A) forms a Boolean algebra. Skew lattices in rings continue to be a good source of examples and motivation.[22][26][27][28][29] Skew lattices consisting of exactly two D-classes are called primitive skew lattices. Given such a skew lattice S with D-classes A > B in S/D, then for any a ∈ A and b ∈ B, the subsets A ∧ b ∧ A = {u ∧ b ∧ u : u ∈ A} ⊆ B and B ∨ a ∨ B = {v ∨ a ∨ v : v ∈ B} ⊆ A are called, respectively, cosets of A in B and cosets of B in A. These cosets partition B and A, with b ∈ A ∧ b ∧ A and a ∈ B ∨ a ∨ B. Cosets are always rectangular subalgebras in their D-classes. What is more, the partial order ≥ induces a coset bijection φ : B ∨ a ∨ B → A ∧ b ∧ A defined by: φ(x) = y iff x > y, for x ∈ B ∨ a ∨ B and y ∈ A ∧ b ∧ A. Collectively, coset bijections describe ≥ between the subsets A and B. They also determine ∨ and ∧ for pairs of elements from distinct D-classes. Indeed, given a ∈ A and b ∈ B, let φ be the coset bijection between the cosets B ∨ a ∨ B in A and A ∧ b ∧ A in B. Then: a ∨ b = a ∨ φ⁻¹(b), b ∨ a = φ⁻¹(b) ∨ a and a ∧ b = φ(a) ∧ b, b ∧ a = b ∧ φ(a). In general, given a, c ∈ A and b, d ∈ B with a > b and c > d, then a, c belong to a common B-coset in A and b, d belong to a common A-coset in B if and only if a > b // c > d. Thus each coset bijection is, in some sense, a maximal collection of mutually parallel pairs a > b. Every primitive skew lattice S factors as the fibred product of its maximal left- and right-handed primitive images S/R ×2 S/L. Right-handed primitive skew lattices are constructed as follows.
LetA=∪iAi{\displaystyle A=\cup _{i}A_{i}}andB=∪jBj{\displaystyle B=\cup _{j}B_{j}}be partitions of disjoint nonempty setsA{\displaystyle A}andB{\displaystyle B}, where allAi{\displaystyle A_{i}}andBj{\displaystyle B_{j}}share a common size. For each pairi,j{\displaystyle i,j}pick a fixed bijectionφi,j{\displaystyle \varphi _{i},j}fromAi{\displaystyle A_{i}}ontoBj{\displaystyle B_{j}}. OnA{\displaystyle A}andB{\displaystyle B}separately setx∧y=y{\displaystyle x\wedge y=y}andx∨y=x{\displaystyle x\vee y=x}; but givena∈A{\displaystyle a\in A}andb∈B{\displaystyle b\in B}, set a∨b=a,b∨a=a′,a∧b=b{\displaystyle a\vee b=a,b\vee a=a',a\wedge b=b}andb∧a=b′{\displaystyle b\wedge a=b'} whereφi,j(a′)=b{\displaystyle \varphi _{i,j}(a')=b}andφi,j(a)=b′{\displaystyle \varphi _{i,j}(a)=b'}witha′{\displaystyle a'}belonging to the cellAi{\displaystyle A_{i}}ofa{\displaystyle a}andb′{\displaystyle b'}belonging to the cellBj{\displaystyle B_{j}}ofb{\displaystyle b}. The variousφi,j{\displaystyle \varphi i,j}are the coset bijections. This is illustrated in the following partial Hasse diagram where|Ai|=|Bj|=2{\displaystyle |A_{i}|=|B_{j}|=2}and the arrows indicate theφi,j{\displaystyle \varphi _{i,j}}-outputs and≥{\displaystyle \geq }fromA{\displaystyle A}andB{\displaystyle B}. One constructs left-handed primitive skew lattices in dual fashion. All right [left] handed primitive skew lattices can be constructed in this fashion.[1] A nonrectangular skew latticeS{\displaystyle S}is covered by its maximal primitive skew lattices: given comparableD{\displaystyle D}-classesA>B{\displaystyle A>B}inS/D{\displaystyle S/D},A∪B{\displaystyle A\cup B}forms a maximal primitive subalgebra ofS{\displaystyle S}and everyD{\displaystyle D}-class inS{\displaystyle S}lies in such a subalgebra. The coset structures on these primitive subalgebras combine to determine the outcomesx∨y{\displaystyle x\vee y}andx∧y{\displaystyle x\wedge y}at least whenx{\displaystyle x}andy{\displaystyle y}are comparable under⪯{\displaystyle \preceq }. It turns out thatx∨y{\displaystyle x\vee y}andx∧y{\displaystyle x\wedge y}are determined in general by cosets and their bijections, although in a slightly less direct manner than the⪯{\displaystyle \preceq }-comparable case. In particular, given two incomparableD-classes A and B with joinD-classJand meetD-classM{\displaystyle M}inS/D{\displaystyle S/D}, interesting connections arise between the two coset decompositions of J (or M) with respect to A and B.[3] Thus a skew lattice may be viewed as a coset atlas of rectangular skew lattices placed on the vertices of a lattice and coset bijections between them, the latter seen as partial isomorphisms between the rectangular algebras with each coset bijection determining a corresponding pair of cosets. This perspective gives, in essence, the Hasse diagram of the skew lattice, which is easily drawn in cases of relatively small order. (See the diagrams in Section 3 above.) Given a chain ofD-classesA>B>C{\displaystyle A>B>C}inS/D{\displaystyle S/D}, one has three sets of coset bijections: from A to B, from B to C and from A to C. In general, given coset bijectionsφ:A→B{\displaystyle \varphi :A\rightarrow B}andψ:B→C{\displaystyle \psi :B\rightarrow C}, the composition of partial bijectionsψφ{\displaystyle \psi \varphi }could be empty. If it is not, then a unique coset bijectionχ:A→C{\displaystyle \chi :A\rightarrow C}exists such thatψφ⊆χ{\displaystyle \psi \varphi \subseteq \chi }. 
(Again,χ{\displaystyle \chi }is a bijection between a pair of cosets inA{\displaystyle A}andC{\displaystyle C}.) This inclusion can be strict. It is always an equality (givenψφ≠∅{\displaystyle \psi \varphi \neq \emptyset }) on a given skew latticeSprecisely whenSis categorical. In this case, by including the identity maps on each rectangularD-class and adjoining empty bijections between properly comparableD-classes, one has a category of rectangular algebras and coset bijections between them. The simple examples in Section 3 are categorical.
https://en.wikipedia.org/wiki/Skew_lattice
Incombinatorialmathematics, anEulerian posetis agraded posetin which every nontrivialintervalhas the same number of elements of even rank as of odd rank. An Eulerian poset which is alatticeis anEulerian lattice. These objects are named afterLeonhard Euler. Eulerian lattices generalizeface latticesofconvex polytopesand much recent research has been devoted to extending known results frompolyhedral combinatorics, such as various restrictions onf-vectors of convexsimplicial polytopes, to this more general setting.
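As a quick sanity check of the definition, the sketch below (plain Python; the helper names are ours) verifies by brute force that the Boolean lattice B3 of subsets of a three-element set is Eulerian: every nontrivial interval contains as many elements of even rank as of odd rank.

from itertools import combinations

def subsets(base):
    return [frozenset(c) for r in range(len(base) + 1) for c in combinations(base, r)]

B3 = subsets({1, 2, 3})             # the Boolean lattice B_3, ordered by inclusion
rank = len                          # the rank of a subset is its cardinality

def eulerian(poset, leq, rank):
    """Check that every nontrivial interval [x, y] has equally many elements of even and odd rank."""
    for x in poset:
        for y in poset:
            if leq(x, y) and x != y:
                interval = [z for z in poset if leq(x, z) and leq(z, y)]
                evens = sum(1 for z in interval if rank(z) % 2 == 0)
                if 2 * evens != len(interval):
                    return False
    return True

print(eulerian(B3, lambda a, b: a <= b, rank))   # True: B_3 is an Eulerian lattice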
https://en.wikipedia.org/wiki/Eulerian_lattice
Inlogicanduniversal algebra,Post's latticedenotes thelatticeof allcloneson a two-element set {0, 1}, ordered byinclusion. It is named forEmil Post, who published a complete description of the lattice in 1941.[1]The relative simplicity of Post's lattice is in stark contrast to the lattice of clones on a three-element (or larger) set, which has thecardinality of the continuum, and a complicated inner structure. A modern exposition of Post's result can be found in Lau (2006).[2] ABoolean function, orlogical connective, is ann-aryoperationf:2n→2for somen≥ 1, where2denotes the two-element set {0, 1}. Particular Boolean functions are theprojections and given anm-ary functionf, andn-ary functionsg1, ...,gm, we can construct anothern-ary function called theircomposition. A set of functions closed under composition, and containing all projections, is called aclone. LetBbe a set of connectives. The functions which can be defined by aformulausingpropositional variablesand connectives fromBform a clone [B], indeed it is the smallest clone which includesB. We call [B] the clonegeneratedbyB, and say thatBis thebasisof [B]. For example, [¬, ∧] are all Boolean functions, and [0, 1, ∧, ∨] are the monotone functions. We use the operations ¬, Np, (negation), ∧, Kpq, (conjunctionormeet), ∨, Apq, (disjunctionorjoin), →, Cpq, (implication), ↔, Epq, (biconditional), +, Jpq(exclusive disjunctionorBoolean ringaddition), ↛, Lpq,[3](nonimplication), ?: (the ternaryconditional operator) and the constant unary functions 0 and 1. Moreover, we need the threshold functions For example, thn1is the large disjunction of all the variablesxi, and thnnis the large conjunction. Of particular importance is themajority function We denote elements of2n(i.e., truth-assignments) as vectors:a= (a1, ...,an). The set2ncarries a naturalproductBoolean algebrastructure. That is, ordering, meets, joins, and other operations onn-ary truth assignments are defined pointwise: Intersectionof an arbitrary number of clones is again a clone. It is convenient to denote intersection of clones by simplejuxtaposition, i.e., the cloneC1∩C2∩ ... ∩Ckis denoted byC1C2...Ck. Some special clones are introduced below: The set of all clones is aclosure system, hence it forms acomplete lattice. The lattice iscountably infinite, and all its members are finitely generated. All the clones are listed in the table below. The eight infinite families have actually also members withk= 1, but these appear separately in the table:T01= P0,T11= P1,PT01= PT11= P,MT01= MP0,MT11= MP1,MPT01= MPT11= MP. The lattice has a natural symmetry mapping each cloneCto its dual cloneCd= {fd|f∈C}, wherefd(x1, ...,xn) = ¬f(¬x1, ..., ¬xn)is thede Morgan dualof a Boolean functionf. For example,Λd= V,(T0k)d= T1k, andMd= M. The complete classification of Boolean clones given by Post helps to resolve various questions about classes of Boolean functions. For example: If one only considers clones that are required to contain the constant functions, the classification is much simpler: there are only 7 such clones: UM, Λ, V, U, A, M, and ⊤. While this can be derived from the full classification, there is a simpler proof, taking less than a page.[5] Composition alone does not allow to generate a nullary function from the corresponding unary constant function, this is the technical reason why nullary functions are excluded from clones in Post's classification. If we lift the restriction, we get more clones. 
Namely, each cloneCin Post's lattice which contains at least one constant function corresponds to two clones under the less restrictive definition:C, andCtogether with all nullary functions whose unary versions are inC. Post originally did not work with the modern definition of clones, but with the so-callediterative systems, which are sets of operations closed under substitution as well as permutation and identification of variables. The main difference is that iterative systems do not necessarily contain all projections. Every clone is an iterative system, and there are 20 non-empty iterative systems which are not clones. (Post also excluded the empty iterative system from the classification, hence his diagram has no least element and fails to be a lattice.) As another alternative, some authors work with the notion of aclosed class, which is an iterative system closed under introduction of dummy variables. There are four closed classes which are not clones: the empty set, the set of constant 0 functions, the set of constant 1 functions, and the set of all constant functions.
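The following sketch (plain Python; all helper names are ours) spells out the basic ingredients used above: projections, composition, the threshold functions th_k^n, and the majority function maj = th_2^3. It verifies that disjunction lies in the clone generated by negation and conjunction, and that maj is monotone and self-dual.

from itertools import product

def proj(n, k):
    """The k-th n-ary projection: returns its k-th argument."""
    return lambda *x: x[k - 1]

def compose(f, gs):
    """Composition: given m-ary f and n-ary g1..gm, build x -> f(g1(x), ..., gm(x))."""
    return lambda *x: f(*(g(*x) for g in gs))

def th(n, k):
    """Threshold function th_k^n: outputs 1 iff at least k of its n arguments are 1."""
    return lambda *x: int(sum(x) >= k)

NOT = lambda x: 1 - x
AND = lambda x, y: x & y

# Disjunction is generated by negation and conjunction (De Morgan), so the clone
# [NOT, AND] contains OR -- it is in fact the clone of all Boolean functions.
OR = compose(NOT, [compose(AND, [compose(NOT, [proj(2, 1)]), compose(NOT, [proj(2, 2)])])])
print(all(OR(x, y) == (x | y) for x, y in product((0, 1), repeat=2)))        # True

# The majority function maj = th_2^3 is monotone and self-dual, so it lies in
# the clone of monotone functions and the clone of self-dual functions.
maj = th(3, 2)
pts = list(product((0, 1), repeat=3))
monotone = all(maj(*a) <= maj(*b) for a in pts for b in pts
               if all(ai <= bi for ai, bi in zip(a, b)))
self_dual = all(maj(*x) == 1 - maj(*(1 - xi for xi in x)) for x in pts)
print(monotone, self_dual)                                                    # True True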
https://en.wikipedia.org/wiki/Post%27s_lattice
In mathematics, aTamari lattice, introduced byDov Tamari(1962), is apartially ordered setin which the elements consist of different ways of grouping a sequence of objects into pairs using parentheses; for instance, for a sequence of four objectsabcd, the five possible groupings are ((ab)c)d, (ab)(cd), (a(bc))d,a((bc)d), anda(b(cd)). Each grouping describes a different order in which the objects may be combined by abinary operation; in the Tamari lattice, one grouping is ordered before another if the second grouping may be obtained from the first by only rightward applications of theassociative law(xy)z=x(yz). For instance, applying this law withx=a,y=bc, andz=dgives the expansion (a(bc))d=a((bc)d), so in the ordering of the Tamari lattice (a(bc))d≤a((bc)d). In this partial order, any two groupingsg1andg2have a greatest common predecessor, themeetg1∧g2, and a least common successor, thejoing1∨g2. Thus, the Tamari lattice has the structure of alattice. TheHasse diagramof this lattice isisomorphicto thegraph of vertices and edgesof anassociahedron. The number of elements in a Tamari lattice for a sequence ofn+ 1 objects is thenthCatalan numberCn. The Tamari lattice can also be described in several other equivalent ways: The Tamari lattice of the groupings ofn+1 objects is called Tn. The correspondingassociahedronis called Kn+1. InThe Art of Computer ProgrammingT4is called theTamari lattice of order 4and its Hasse diagram K5theassociahedron of order 4.
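A small sketch (plain Python; groupings are represented as nested pairs and the function names are ours) generates the parenthesizations of abcd, confirms that there are five of them, and checks that one rightward application of the associative law carries (a(bc))d to a((bc)d), the covering relation underlying the Tamari order.

def groupings(objs):
    """All ways to parenthesize a tuple of objects into nested pairs."""
    if len(objs) == 1:
        return [objs[0]]
    out = []
    for i in range(1, len(objs)):
        for left in groupings(objs[:i]):
            for right in groupings(objs[i:]):
                out.append((left, right))
    return out

def right_rotations(t):
    """Groupings reachable from t by one rightward application of (xy)z = x(yz)."""
    if not isinstance(t, tuple):
        return []
    left, right = t
    moves = []
    if isinstance(left, tuple):                       # ((x, y), z)  ->  (x, (y, z))
        x, y = left
        moves.append((x, (y, right)))
    moves += [(l, right) for l in right_rotations(left)]
    moves += [(left, r) for r in right_rotations(right)]
    return moves

all_groupings = groupings(("a", "b", "c", "d"))
print(len(all_groupings))                             # 5, the Catalan number C_3
print(("a", (("b", "c"), "d")) in right_rotations((("a", ("b", "c")), "d")))  # True: (a(bc))d <= a((bc)d)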
https://en.wikipedia.org/wiki/Tamari_lattice
Inlattice theory, abounded latticeLis called a0,1-simple latticeif nonconstant lattice homomorphisms ofLpreserve the identity of its top and bottom elements. That is, ifLis 0,1-simple and ƒ is a function fromLto some other lattice that preserves joins and meets and does not map every element ofLto a single element of the image, then it must be the case that ƒ−1(ƒ(0)) = {0} and ƒ−1(ƒ(1)) = {1}.[1] For instance, letLnbe a lattice withnatomsa1,a2, ...,an, top and bottom elements 1 and 0, and no other elements. Then forn≥ 3,Lnis 0,1-simple. However, forn= 2, the function ƒ that maps 0 anda1to 0 and that mapsa2and 1 to 1 is a homomorphism, showing thatL2is not 0,1-simple.
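A minimal check of the example above (plain Python; the encoding of L_n as strings is ours): for n = 2 the collapsing map is indeed a nonconstant homomorphism onto the two-element lattice, which is exactly why L_2 fails to be 0,1-simple.

# The lattice L_n: bottom "0", top "1", and n pairwise incomparable atoms a1, ..., an.
def make_Ln(n):
    atoms = [f"a{i}" for i in range(1, n + 1)]
    elems = ["0", "1"] + atoms
    def meet(x, y):
        if x == y: return x
        if x == "1": return y
        if y == "1": return x
        return "0"                    # two distinct atoms, or anything meeting 0
    def join(x, y):
        if x == y: return x
        if x == "0": return y
        if y == "0": return x
        return "1"
    return elems, meet, join

# For n = 2, collapsing {0, a1} to 0 and {a2, 1} to 1 is a lattice homomorphism
# onto the two-element lattice, so L_2 is not 0,1-simple.
elems, meet, join = make_Ln(2)
f = {"0": 0, "a1": 0, "a2": 1, "1": 1}
hom = all(f[meet(x, y)] == (f[x] & f[y]) and f[join(x, y)] == (f[x] | f[y])
          for x in elems for y in elems)
print(hom)   # True, and f is nonconstant, yet the preimage of f(0) is {0, a1}, not {0}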
https://en.wikipedia.org/wiki/0,1-simple_lattice
Data conversionis the conversion ofcomputer datafrom oneformatto another. Throughout a computer environment, data isencodedin a variety of ways. For example,computer hardwareis built on the basis of certain standards, which requires that data contains, for example,parity bitchecks. Similarly, theoperating systemis predicated on certain standards for data and file handling. Furthermore, each computer program handles data in a different manner. Whenever any one of these variables is changed, data must be converted in some way before it can be used by a different computer, operating system or program. Even different versions of these elements usually involve different data structures. For example, the changing ofbitsfrom one format to another, usually for the purpose of application interoperability or of the capability of using new features, is merely a data conversion. Data conversions may be as simple as the conversion of atext filefrom onecharacter encodingsystem to another; or more complex, such as the conversion of office file formats, or theconversion of image formatsandaudio file formats. There are many ways in which data is converted within the computer environment. This may be seamless, as in the case of upgrading to a newer version of a computer program. Alternatively, the conversion may require processing by the use of a special conversion program, or it may involve a complex process of going through intermediary stages, or involving complex "exporting" and "importing" procedures, which may include converting to and from a tab-delimited or comma-separated text file. In some cases, a program may recognize several data file formats at the data input stage and then is also capable of storing the output data in several different formats. Such a program may be used to convert a file format. If the source format or target format is not recognized, then at times a third program may be available which permits the conversion to an intermediate format, which can then be reformatted using the first program. There are many possible scenarios. Before any data conversion is carried out, the user or application programmer should keep a few basics of computing andinformation theoryin mind. These include: For example, atrue colorimage can easily be converted to grayscale, while the opposite conversion is a painstaking process. Converting aUnixtext file to aMicrosoft(DOS/Windows) text file involves adding characters, but this does not increase theentropysince it is rule-based; whereas the addition of color information to a grayscale image cannot be reliably done programmatically, as it requires adding new information, so any attempt to add color would requireestimationby the computer based on previous knowledge. Converting a 24-bitPNGto a 48-bit one does not add information to it, it only pads existingRGBpixel values with zeroes[citation needed], so that a pixel with a value of FF C3 56, for example, becomes FF00 C300 5600. The conversion makes it possible to change a pixel to have a value of, for instance, FF80 C340 56A0, but the conversion itself does not do that, only further manipulation of the image can. Converting an image or audio file in alossyformat (likeJPEGorVorbis) to alossless(likePNGorFLAC) or uncompressed (likeBMPorWAV) format only wastes space, since the same image with its loss of original information (the artifacts of lossy compression) becomes the target. 
A JPEG image can never be restored to the quality of the original image from which it was made, no matter how much the user tries the "JPEG ArtifactRemoval" feature of his or her image manipulation program. Automatic restoration of information that was lost through alossy compressionprocess would probably require important advances inartificial intelligence. Because of these realities of computing and information theory, data conversion is often a complex and error-prone process that requires the help of experts. Data conversion can occur directly from one format to another, but many applications that convert between multiple formats use anintermediate representationby way of which any source format is converted to its target.[1]For example, it is possible to convertCyrillictext fromKOI8-RtoWindows-1251using a lookup table between the two encodings, but the modern approach is to convert the KOI8-R file toUnicodefirst and from that to Windows-1251. This is a more manageable approach; rather than needing lookup tables for all possible pairs of character encodings, an application needs only one lookup table for each character set, which it uses to convert to and from Unicode, thereby scaling the number of tables down from hundreds to a few tens.[citation needed] Pivotal conversion is similarly used in other areas. Office applications, when employed to convert between office file formats, use their internal, default file format as a pivot. For example, aword processormay convert anRTFfile to aWordPerfectfile by converting the RTF toOpenDocumentand then that to WordPerfect format. An image conversion program does not convert aPCXimage toPNGdirectly; instead, when loading the PCX image, it decodes it to a simple bitmap format for internal use in memory, and when commanded to convert to PNG, that memory image is converted to the target format. An audio converter that converts fromFLACtoAACdecodes the source file to rawPCMdata in memory first, and then performs the lossy AAC compression on that memory image to produce the target file. The objective of data conversion is to maintain all of the data, and as much of the embedded information as possible. This can only be done if the target format supports the same features and data structures present in the source file. Conversion of a word processing document to a plain text file necessarily involves loss of formatting information, because plain text format does not support word processing constructs such as marking a word as boldface. For this reason, conversion from one format to another which does not support a feature that is important to the user is rarely carried out, though it may be necessary for interoperability, e.g. converting a file from one version ofMicrosoft Wordto an earlier version to enable transfer and use by other users who do not have the same later version of Word installed on their computer. Loss of information can be mitigated by approximation in the target format. There is no way of converting a character likeätoASCII, since the ASCII standard lacks it, but the information may be retained by approximating the character asae. Of course, this is not an optimal solution, and can impact operations like searching and copying; and if a language makes a distinction betweenäandae, then that approximation does involve loss of information. Data conversion can also suffer from inexactitude, the result of converting between formats that are conceptually different. 
The WYSIWYG paradigm, extant in word processors and desktop publishing applications, versus the structural-descriptive paradigm, found in SGML, XML and many applications derived therefrom, like HTML and MathML, is one example. Using a WYSIWYG HTML editor conflates the two paradigms, and the result is HTML files with suboptimal, if not nonstandard, code. In the WYSIWYG paradigm a double linebreak signifies a new paragraph, as that is the visual cue for such a construct, but a WYSIWYG HTML editor will usually convert such a sequence to <BR><BR>, which is structurally not a new paragraph at all. As another example, converting from PDF to an editable word processor format is a difficult task, because PDF records the textual information like engraving on stone, with each character given a fixed position and linebreaks hard-coded, whereas word processor formats accommodate text reflow. PDF does not know of a word space character: the space between two letters and the space between two words differ only in quantity. Therefore, a title with ample letter-spacing for effect will usually end up with spaces in the word processor file, for example INTRODUCTION with spacing of 1 em as I N T R O D U C T I O N on the word processor. Successful data conversion requires thorough knowledge of the workings of both source and target formats. In the case where the specification of a format is unknown, reverse engineering will be needed to carry out conversion. Reverse engineering can achieve close approximation of the original specifications, but errors and missing features can still result. Data format conversion can also occur at the physical layer of an electronic communication system. Conversion between line codes such as NRZ and RZ can be accomplished when necessary. Manolescu, Dragos (2006). Pattern Languages of Program Design 5. Upper Saddle River, NJ: Addison-Wesley. ISBN 0321321944.
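Returning to the character-encoding example above, a minimal sketch of pivotal conversion using Python's built-in codecs (the sample string is ours): the KOI8-R bytes are first decoded to Unicode, the pivot, and then encoded to Windows-1251.

source_bytes = "Привет, мир".encode("koi8_r")       # stand-in for a source file's raw KOI8-R bytes

pivot_text = source_bytes.decode("koi8_r")          # step 1: source encoding -> Unicode (the pivot)
target_bytes = pivot_text.encode("windows-1251")    # step 2: Unicode -> target encoding

# The round trip through the pivot loses nothing here, since both encodings cover this text.
assert target_bytes.decode("windows-1251") == pivot_text
print(target_bytes)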
https://en.wikipedia.org/wiki/Data_conversion
Minimal mappingsare the result of an advanced technique ofsemantic matching, a technique used incomputer scienceto identify information which is semantically related.[1] Semantic matching has been proposed as a valid solution to the semantic heterogeneity problem, namely, supporting diversity in knowledge.[2]Given any two graph-like structures, e.g. classifications,databases, orXML schemasandontologies, matching is anoperatorwhich identifies those nodes in the two structures that semantically correspond to one another. For example, applied to file systems, it can identify that a folder labeled “car” is semantically equivalent to another folder “automobile” because they are synonyms in English. The proposed technique works on lightweight ontologies, namely, tree structures where each node is labeled by a natural language sentence, for example in English.[3]These sentences are translated into a formal logical formula (according to an unambiguous,artificial language). The formula codifies the meaning of the node, accounting for its position in the graph. For example, in case the folder “car” is under another folder “red” we can say that the meaning of the folder “car” is “red car” in this case. This is translated into the logical formula “red AND car”. The output of matching is a mapping, namely a set of semantic correspondences between the two graphs. Each mapping element is attached with asemantic relation, for exampleequivalence. Among all possible mappings, the minimal mapping is such that all other mapping elements can be computed from the minimal set in an amount of time proportional to the size of the input graphs (linear time) and none of the elements in the minimal set can be dropped without preventing such a computation. The main advantage of minimal mappings is that they minimize the number of nodes for subsequent processing. Notice that this is a rather important feature because the number of possible mappings can reachn×mwithnandmthe size of the two input ontologies. In particular, minimal mappings become crucial with large ontologies, e.g.DMOZ, where even relatively small (non-minimal) subsets of the number of possible mapping elements, potentially millions of them, are unmanageable. Minimal mappings provide usability advantages. Many systems and corresponding interfaces, mostly graphical, have been provided for the management of mappings but all of them scale poorly with the number of nodes. Visualizations of large graphs are rather messy.[4]Maintenance of smaller mappings is much easier, faster and less error prone.
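A deliberately simplified toy version of the idea (plain Python; it is not the actual matching algorithm, and the folder names and synonym table are invented): the concept of a node is taken to be the conjunction of the normalized labels on its path from the root, and two nodes are reported as semantically equivalent when those conjunctions coincide.

SYNONYMS = {"automobile": "car"}          # canonicalization table, assumed for illustration

def normalize(label):
    return SYNONYMS.get(label.lower(), label.lower())

def concept(path):
    """Logical formula for a node, e.g. ('red', 'car') -> the conjunction 'red AND car',
    represented here simply as a set of atoms."""
    return frozenset(normalize(l) for l in path)

left = {"n1": ("vehicles", "red", "car")}             # folder .../red/car in one classification
right = {"m1": ("vehicles", "red", "automobile")}     # folder .../red/automobile in the other

mapping = [(a, b, "equivalence")
           for a, pa in left.items() for b, pb in right.items()
           if concept(pa) == concept(pb)]
print(mapping)    # [('n1', 'm1', 'equivalence')]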
https://en.wikipedia.org/wiki/Minimal_mappings
The Rule Interchange Format (RIF) is a W3C Recommendation. RIF is part of the infrastructure for the semantic web, along with (principally) SPARQL, RDF and OWL. Although originally envisioned by many as a "rules layer" for the semantic web, in reality the design of RIF is based on the observation that there are many "rules languages" in existence, and what is needed is to exchange rules between them.[1] RIF includes three dialects, a Core dialect which is extended into a Basic Logic Dialect (BLD) and Production Rule Dialect (PRD).[2] The RIF working group was chartered in late 2005. Among its goals was drawing in members of the commercial rules marketplace. The working group started with more than 50 members and two chairs drawn from industry, Christian de Sainte Marie of ILOG and Chris Welty of IBM. The charter, to develop an interchange format between existing rule systems, was influenced by a workshop in the spring of 2005 in which it was clear that one rule language would not serve the needs of all interested parties (Dr. Welty described the outcome of the workshop as a Nash Equilibrium[3]). RIF became a W3C Recommendation on June 22, 2010.[4] A rule is perhaps one of the simplest notions in computer science: it is an IF-THEN construct. If some condition (the IF part) that is checkable in some dataset holds, then the conclusion (the THEN part) is processed. Deriving somewhat from its roots in logic, rule systems use a notion of predicates that hold or not of some data object or objects. For example, the fact that two people are married might be represented with predicates as MARRIED(LISA, JOHN): MARRIED is a predicate that can be said to hold between LISA and JOHN. Adding the notion of variables, a rule could be something like IF MARRIED(?x, ?y) THEN LOVES(?x, ?y). We would expect that for every pair of ?x and ?y (e.g. LISA and JOHN) for which the MARRIED predicate holds, some computer system that could understand this rule would conclude that the LOVES predicate holds for that pair as well. Rules are a simple way of encoding knowledge, and are a drastic simplification of first order logic, for which it is relatively easy to implement inference engines that can process the conditions and draw the right conclusions. A rule system is an implementation of a particular syntax and semantics of rules, which may extend the simple notion described above to include existential quantification, disjunction, logical conjunction, negation, functions, non-monotonicity, and many other features. Rule systems have been implemented and studied since the mid-1970s and saw significant uptake in the 1980s during the height of so-called Expert Systems. The standard RIF dialects are Core, BLD and PRD. These dialects depend on an extensive list of datatypes with built-in functions and predicates on those datatypes. The relations among the various RIF dialects can be depicted in a Venn diagram.[5] Datatypes and Built-Ins (DTB) specifies a list of datatypes, built-in functions and built-in predicates expected to be supported by RIF dialects. Some of the datatypes are adapted from XML Schema Datatypes,[6] XPath functions[7] and rdf:PlainLiteral functions.[8] The Core dialect comprises a common subset of most rule dialects. RIF-Core is a subset of both RIF-BLD and RIF-PRD. Framework for Logic Dialects (FLD) describes mechanisms for specifying the syntax and semantics of logic RIF dialects, including RIF-BLD and RIF-Core, but not RIF-PRD, which is not a logic-based RIF dialect.
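Returning to the MARRIED/LOVES example above, here is a minimal sketch, in plain Python rather than in any RIF syntax, of how a forward-chaining engine might apply such a rule until no new facts appear; the helper names are ours.

facts = {("MARRIED", "LISA", "JOHN")}      # facts as (predicate, subject, object) triples

def apply_rule(facts, if_pred, then_pred):
    """IF if_pred(?x, ?y) THEN then_pred(?x, ?y), applied until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        new = {(then_pred, x, y) for (p, x, y) in derived if p == if_pred}
        changed = not new <= derived
        derived |= new
    return derived

print(sorted(apply_rule(facts, "MARRIED", "LOVES")))
# [('LOVES', 'LISA', 'JOHN'), ('MARRIED', 'LISA', 'JOHN')]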
The Basic Logic Dialect (BLD) adds features to the Core dialect that are not directly available such as: logic functions, equality in the then-part andnamed arguments. RIF BLD corresponds to positive datalogs, that is, logic programs without functions or negations. RIF-BLD has amodel-theoreticsemantics. Theframesyntax of RIF BLD is based onF-logic, but RIF BLD doesn't have thenon-monotonic reasoningfeatures of F-logic.[9] The Production Rules Dialect (PRD) can be used to modelproduction rules. Features that are notably in PRD but not BLD include negation and retraction of facts (thus, PRD is not monotonic). PRD rules are order dependent, hence conflict resolution strategies are needed when multiple rules can be fired. The PRD specification defines one such resolution strategy based onforward chainingreasoning. RIF-PRD has anoperational semantics, whereas the condition formulas also have a model-theoretic semantics. Example (Example 1.2 in[10]) Several other RIF dialects exist. None of them are officially endorsed by W3C and they are not part of the RIF specification. The Core Answer Set Programming Dialect (CASPD)[11]is based onanswer set programming, that is, declarative logic programming based on the answer set semantics (stable model semantics). Example: The Uncertainty Rule Dialect (URD)[12]supports a direct representation of uncertain knowledge. Example: RIF-SILK[13]can be used to modeldefault logic. It is based on declarative logic programming with thewell-founded semantics. RIF-SILK also includes a number of other features present in more sophisticated declarative logic programming languages such as SILK.[14] Example
https://en.wikipedia.org/wiki/Rule_Interchange_Format
Semantic interoperabilityis the ability ofcomputersystems to exchangedatawith unambiguous, shared meaning.Semanticinteroperability is a requirement to enable machine computablelogic, inferencing, knowledge discovery, and data federation betweeninformation systems.[1] Semantic interoperability is therefore concerned not just with the packaging of data (syntax), but the simultaneous transmission of the meaning with the data (semantics). This is accomplished by adding data about the data (metadata), linking each data element to a controlled, sharedvocabulary. The meaning of the data is transmitted with the data itself, in one self-describing "information package" that is independent of any information system. It is this shared vocabulary, and its associated links to anontology, which provides the foundation and capability of machine interpretation, inference, and logic. Syntactic interoperability (seebelow) is a prerequisite for semantic interoperability.Syntactic interoperabilityrefers to the packaging and transmission mechanisms for data. In healthcare, HL7 has been in use for over thirty years (which predates the internet and web technology), and uses the pipe character (|) as a data delimiter. The current internet standard fordocument markupisXML, which uses "< >" as a data delimiter. The data delimiters convey no meaning to the data other than to structure the data. Without adata dictionaryto translate the contents of the delimiters, the data remains meaningless. While there are many attempts at creating data dictionaries and information models to associate with these data packaging mechanisms, none have been practical to implement. This has only perpetuated the ongoing "babelization" of data and inability to exchange data with meaning. Since the introduction of theSemantic Webconcept byTim Berners-Leein 1999,[2]there has been growing interest and application of theW3C(World Wide Web Consortium) standards to provide web-scale semantic data exchange, federation, and inferencing capabilities. Syntactic interoperability, provided by for instanceXMLor theSQLstandards, is a pre-requisite to semantic. It involves a common data format and common protocol to structure any data so that the manner of processing the information will be interpretable from the structure. It also allows detection of syntactic errors, thus allowing receiving systems to request resending of any message that appears to be garbled or incomplete. No semantic communication is possible if thesyntaxis garbled or unable to represent the data. However, information represented in one syntax may in some cases be accurately translated into a different syntax. Where accurate translation of syntaxes is possible, systems using different syntaxes may also interoperate accurately. In some cases, the ability to accurately translate information among systems using different syntaxes may be limited to one direction, when the formalisms used have different levels ofexpressivity(ability to express information). A singleontologycontaining representations of every term used in every application is generally considered impossible, because of the rapid creation of new terms or assignments of new meanings to old terms. However, though it is impossible to anticipateeveryconcept that a user may wish to represent in a computer, there is the possibility of finding some finite set of "primitive" concept representations that can be combined to create any of the more specific concepts that users may need for any given set of applications or ontologies. 
Having a foundation ontology (also called upper ontology) that contains all those primitive elements would provide a sound basis for general semantic interoperability, and allow users to define any new terms they need by using the basic inventory of ontology elements, and still have those newly defined terms properly interpreted by any other computer system that can interpret the basic foundation ontology. Whether the number of such primitive concept representations is in fact finite, or will expand indefinitely, is a question under active investigation. If it is finite, then a stable foundation ontology suitable to support accurate and general semantic interoperability can evolve after some initial foundation ontology has been tested and used by a wide variety of users. At the present time, no foundation ontology has been adopted by a wide community, so such a stable foundation ontology is still in the future. One persistent misunderstanding that recurs in discussions of semantics is "the confusion of words and meanings". The meanings of words change, sometimes rapidly. But a formal language such as used in an ontology can encode the meanings (semantics) of concepts in a form that does not change. In order to determine the meaning of a particular word (or term in a database, for example) it is necessary to label each fixed concept representation in an ontology with the word(s) or term(s) that may refer to that concept. When multiple words refer to the same (fixed) concept in language this is called synonymy; when one word is used to refer to more than one concept, that is called ambiguity. Ambiguity and synonymy are among the factors that make computer understanding of language very difficult. For many human-readable terms, the use of words to refer to concepts (the meanings of the words used) is very sensitive to the context and the purpose of any use. The role of ontologies in supporting semantic interoperability is to provide a fixed set of concepts whose meanings and relations are stable and can be agreed to by users. The task of determining which terms, in which contexts (each database is a different context), refer to which concepts is then separated from the task of creating the ontology, and must be taken up by the designer of a database, or the designer of a form for data entry, or the developer of a program for language understanding. When the meaning of a word used in some interoperable context is changed, then to preserve interoperability it is necessary to change the pointer to the ontology element(s) that specifies the meaning of that word. A knowledge representation language may be sufficiently expressive to describe nuances of meaning in well understood fields. There are at least five levels of complexity of these. For general semi-structured data one may use a general purpose language such as XML.[3] Languages with the full power of first-order predicate logic may be required for many tasks. Human languages are highly expressive, but are considered too ambiguous to allow the accurate interpretation desired, given the current level of human language technology. Semantic interoperability healthcare systems leverage data in a standardized way as they break down and share information.
For example, two systems can now recognize terminology, medication symbols, and other nuances while exchanging data automatically, without human intervention. Semantic interoperability may be distinguished from other forms of interoperability by considering whether the information transferred has, in its communicated form, all of the meaning required for the receiving system to interpret it correctly, even when thealgorithmsused by the receiving system are unknown to the sending system. Consider sending one number: If that number is intended to be the sum of money owed by one company to another, it implies some action or lack of action on the part of both those who send it and those who receive it. It may be correctly interpreted if sent in response to a specific request, and received at the time and in the form expected. This correct interpretation does not depend only on the number itself, which could represent almost any of millions of types of quantitative measurement, rather it depends strictly on the circumstances of transmission. That is, the interpretation depends on both systems expecting that the algorithms in the other system use the number in exactly the same sense, and it depends further on the entire envelope of transmissions that preceded the actual transmission of the bare number. By contrast, if the transmitting system does not know how the information will be used by other systems, it is necessary to have a shared agreement on how information with some specific meaning (out of many possible meanings) will appear in a communication. For a particular task, one solution is to standardize a form, such as a request for payment; that request would have to encode, in standardized fashion, all of the information needed to evaluate it, such as: the agent owing the money, the agent owed the money, the nature of the action giving rise to the debt, the agents, goods, services, and other participants in that action; the time of the action; the amount owed and currency in which the debt is reckoned; the time allowed for payment; the form of payment demanded; and other information. When two or more systems have agreed on how to interpret the information in such a request, they can achieve semantic interoperabilityfor that specific type of transaction. For semantic interoperability generally, it is necessary to provide standardized ways to describe the meanings of many more things than just commercial transactions, and the number of concepts whose representation needs to be agreed upon are at a minimum several thousand. How to achieve semantic interoperability for more than a few restricted scenarios is currently a matter of research and discussion. For the problem of General Semantic Interoperability, some form of foundation ontology ('upper ontology') is required that is sufficiently comprehensive to provide the definition of concepts for more specialized ontologies in multiple domains. Over the past decade, more than ten foundation ontologies have been developed, but none have as yet been adopted by a wide user base. The need for a single comprehensive all-inclusive ontology to support Semantic Interoperability can be avoided by designing the common foundation ontology as a set of basic ("primitive") concepts that can be combined to create the logical descriptions of the meanings of terms used in local domain ontologies or local databases. 
This tactic of composing meanings from a small inventory of primitive concepts is based on the following principle: if the meanings of the basic ontology elements in the common foundation ontology are agreed on, and the terms used in local domain ontologies or databases are defined as logical combinations of those basic elements, then any system that correctly interprets the foundation ontology can also compute the intended meanings of the locally defined terms. This tactic therefore limits the need for prior agreement on meanings to only those ontology elements in the commonFoundation Ontology(FO). Based on several considerations, this may require fewer than 10,000 elements (types and relations). However, for ease of understanding and use, additional ontology elements with more detail and specifics can help users find the exact location in the FO where specific domain concepts can be found or added. In practice, the FO, focused on representations of the primitive concepts, will likely be used together with a set of domain extension ontologies whose elements are specified using the FO elements. Such pre-existing extensions will reduce the cost of creating domain ontologies by providing existing elements with the intended meaning, and will reduce the chance of error by using elements that have already been tested. Domain extension ontologies may be logically inconsistent with each other, and such inconsistencies need to be detected whenever different domain extensions are used in a communication. Whether use of such a single foundation ontology can itself be avoided by sophisticated mapping techniques among independently developed ontologies is also under investigation. The practical significance of semantic interoperability has been measured by several studies that estimate the cost (in lost efficiency) due to lack of semantic interoperability. One study,[4]focusing on the lost efficiency in the communication of healthcare information, estimated that US$77.8 billion per year could be saved by implementing an effective interoperability standard in that area. Other studies, of the construction industry[5]and of the automobile manufacturing supply chain,[6]estimate costs of over US$10 billion per year due to lack of semantic interoperability in those industries. In total, these numbers can be extrapolated to indicate that well over US$100 billion per year is lost because of the lack of a widely used semantic interoperability standard in the US alone. Not every policy field that might offer large cost savings from semantic interoperability standards has yet been studied, but for an overview of which policy fields stand to profit from semantic interoperability, see 'Interoperability' in general. Such policy fields includeeGovernment, health, security and many more. The EU also set up theSemantic Interoperability Centre Europein June 2007. Digital transformation holds huge benefits for enabling organizations to be more efficient, more flexible, and more nimble in responding to changes in business and operating conditions. This involves the need to integrate heterogeneous data and services throughout organizations. Semantic interoperability addresses the need for shared understanding of the meaning and context. To support this, a cross-organization expert group involving ISO/IEC JTC1, ETSI, oneM2M and W3C is collaborating with AIOTI on accelerating adoption of semantic technologies in the IoT. The group has recently published two joint white papers on semantic interoperability, respectively named “Semantic IoT Solutions – A Developer Perspective” and “Towards semantic interoperability standards based on ontologies”. This follows on the success of the earlier white paper on “Semantic Interoperability for the Web of Things.”
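Returning to the domain extension ontologies mentioned above, the possibility that two extensions define the same term differently can be caricatured with a simple check before any exchange takes place; all definitions below are invented.

```python
# Sketch: detecting conflicting definitions between two domain extensions
# of a common foundation ontology. All definitions are invented examples.

extension_finance = {
    "Customer": frozenset({"Person", "PartyTo", "PurchaseEvent"}),
    "Invoice":  frozenset({"Document", "Records", "MonetaryObligation"}),
}
extension_logistics = {
    "Customer": frozenset({"Organization", "Receives", "Shipment"}),  # clashes
    "Shipment": frozenset({"PhysicalObject", "MovedBetween", "Location"}),
}

def find_conflicts(ext_a: dict, ext_b: dict) -> list:
    """Return the terms that both extensions define, but differently."""
    shared = ext_a.keys() & ext_b.keys()
    return [term for term in shared if ext_a[term] != ext_b[term]]

conflicts = find_conflicts(extension_finance, extension_logistics)
if conflicts:
    print("Definitions to reconcile before exchange:", conflicts)
else:
    print("The two extensions can be used together in this communication.")
```

A real implementation would run a logical reasoner over OWL or Common Logic axioms rather than comparing sets of primitive names, but the decision point is the same: conflicts must be found and resolved before data are exchanged.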
https://www.w3.org/blog/2019/10/aioti-iso-iec-jtc1-etsi-onem2m-and-w3c-collaborate-on-two-joint-white-papers-on-semantic-interoperability-targeting-developers-and-standardization-engineers/
https://en.wikipedia.org/wiki/Semantic_interoperability
Semantic unificationis the process of unifying lexically different concept representations that are judged to have the same semantic content (i.e., meaning). In business processes, conceptual semantic unification is defined as "the mapping of two expressions onto an expression in an exchange format which is equivalent to the given expression".[1] Semantic unification has since been applied to the fields ofbusiness processesandworkflow management. In the early 1990s Charles Petri[full citation needed]at Stanford University[full citation needed]introduced the term "semantic unification" for business models; later references can be found in,[2]and the notion was later formalized in Fawsy Bendeck's dissertation.[3]Petri introduced the term "pragmatic semantic unification" to refer to approaches in which the results are tested against a running application using the semantic mappings.[4]In this pragmatic approach, the accuracy of the mapping is not as important as its usability. In general, semantic unification as used in business processes is employed to find a common unified concept onto which two lexicalized expressions are mapped so that both receive the same interpretation.[citation needed]
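A minimal sketch of the idea, with invented field names and a deliberately trivial exchange format: two lexically different records are mapped onto a shared vocabulary and judged semantically equivalent if the mapped forms coincide.

```python
# Sketch of semantic unification: two lexically different records are mapped
# onto a common exchange format and judged equivalent if the results match.
# All field names and concept identifiers are invented for the example.

# Concept mappings from each local vocabulary to shared concept identifiers.
MAPPING_SYSTEM_A = {"cust_name": "ex:customerName", "amt_due": "ex:amountOwed"}
MAPPING_SYSTEM_B = {"clientName": "ex:customerName", "balance": "ex:amountOwed"}

def to_exchange_format(record: dict, mapping: dict) -> dict:
    """Rewrite a local record into the shared exchange vocabulary."""
    return {mapping[field]: value for field, value in record.items()}

record_a = {"cust_name": "ACME Ltd", "amt_due": 150.0}
record_b = {"clientName": "ACME Ltd", "balance": 150.0}

unified_a = to_exchange_format(record_a, MAPPING_SYSTEM_A)
unified_b = to_exchange_format(record_b, MAPPING_SYSTEM_B)

# The two expressions unify if their exchange-format forms are equivalent.
print("Semantically unified?", unified_a == unified_b)
```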
https://en.wikipedia.org/wiki/Semantic_unification
Semantic analysisis a method for eliciting and representingknowledgeaboutorganisations.[vague][1] Initially the problem must be defined by domain experts and passed to the project analyst(s). The next step is the generation of candidate affordances. This step will generate a list of semantic units that may be included in the schema. Candidate grouping follows, in which some of the semantic units that will appear in the schema are placed in simple groups. Finally, the groups are integrated into anontologychart. Semantic analysis always starts from the problem definition, which, if not clear, requires the analyst to employ relevantliterature,interviewswith thestakeholders, and other techniques to collect supplementaryinformation. All assumptions made must be genuine and must not limit the system.
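The workflow just described (candidate affordances, grouping, and integration into an ontology chart) can be sketched in code as follows; the affordances, agents and grouping rule are invented for the example, and a real ontology chart would also record the dependencies between affordances.

```python
# Sketch of the semantic-analysis workflow: candidate semantic units are
# grouped and then integrated into a simple ontology chart. Invented example.
from dataclasses import dataclass, field

@dataclass
class Affordance:
    name: str         # a candidate semantic unit, e.g. "employs"
    antecedent: str   # the agent or affordance that makes it possible

@dataclass
class OntologyChart:
    root: str = "society"
    groups: dict = field(default_factory=dict)

    def add_group(self, group_name: str, affordances: list):
        self.groups[group_name] = affordances

# Steps 1-2: problem definition and generation of candidate affordances.
candidates = [Affordance("employs", "organization"),
              Affordance("works for", "person"),
              Affordance("pays", "organization")]

# Step 3: candidate grouping, here simply by shared antecedent.
chart = OntologyChart()
for antecedent in {a.antecedent for a in candidates}:
    chart.add_group(antecedent,
                    [a.name for a in candidates if a.antecedent == antecedent])

# Step 4: the integrated chart.
print(chart.root, chart.groups)
```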
https://en.wikipedia.org/wiki/Semantic_analysis_(knowledge_representation)
Human–computer interaction(HCI) is the process through which people operate and engage with computer systems. Research in HCI covers the design and the use ofcomputer technology, which focuses on theinterfacesbetween people (users) andcomputers. HCI researchers observe the ways humans interact with computers and design technologies that allow humans to interact with computers in novel ways. These include visual, auditory, and tactile (haptic) feedback systems, which serve as channels for interaction in both traditional interfaces and mobile computing contexts.[1]A device that allows interaction between human being and a computer is known as a "human–computer interface". As a field of research, human–computer interaction is situated at the intersection ofcomputer science,behavioral sciences,design,media studies, andseveral other fields of study. The term was popularized byStuart K. Card,Allen Newell, andThomas P. Moranin their 1983 book,The Psychology of Human–Computer Interaction.The first known use was in 1975 by Carlisle.[2]The term is intended to convey that, unlike other tools with specific and limited uses, computers have many uses which often involve an open-ended dialogue between the user and the computer. The notion of dialogue likens human–computer interaction to human-to-human interaction: an analogy that is crucial to theoretical considerations in the field.[3][4] Humans interact with computers in many ways, and the interface between the two is crucial to facilitating this interaction. HCI is also sometimes termedhuman–machine interaction(HMI),man-machine interaction(MMI) orcomputer-human interaction(CHI). Desktop applications, web browsers, handheld computers, and computer kiosks make use of the prevalentgraphical user interfaces(GUI) of today.[5]Voice user interfaces(VUIs) are used forspeech recognitionand synthesizing systems, and the emergingmulti-modaland Graphical user interfaces (GUI) allow humans to engage withembodied character agentsin a way that cannot be achieved with other interface paradigms. TheAssociation for Computing Machinery(ACM) defines human–computer interaction as "a discipline that is concerned with the design, evaluation, and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them".[5]A key aspect of HCI is user satisfaction, also referred to as End-User Computing Satisfaction. It goes on to say: "Because human–computer interaction studies a human and a machine in communication, it draws from supporting knowledge on both the machine and the human side. On the machine side, techniques incomputer graphics,operating systems,programming languages, and development environments are relevant. On the human side,communication theory,graphicandindustrial designdisciplines,linguistics,social sciences,cognitive psychology,social psychology, andhuman factorssuch ascomputer user satisfactionare relevant. And, of course, engineering and design methods are relevant."[5]HCI ensures that humans can safely and efficiently interact with complex technologies in fields like aviation and healthcare.[6] Due to the multidisciplinary nature of HCI, people with different backgrounds contribute to its success. Poorly designedhuman-machine interfacescan lead to many unexpected problems. 
A classic example is theThree Mile Island accident, a nuclear meltdown accident, where investigations concluded that the design of the human-machine interface was at least partly responsible for the disaster.[7][8][9]Similarly, some accidents in aviation have resulted from manufacturers' decisions to use non-standardflight instrumentsor throttle quadrant layouts: even though the new designs were proposed to be superior in basic human-machine interaction, pilots had already internalized the "standard" layout. Thus, the conceptually good idea had unintended results.[10] A human–computer interface can be described as the interface of communication between a human user and a computer. The flow of information between the human and computer is defined as theloop of interaction. The loop of interaction has several aspects to it, including: Human–computer interaction involves the ways in which humans make, or do not make, use of computational artifacts, systems, and infrastructures. Much of the research in this field seeks toimprovethe human–computer interaction by improving theusabilityof computer interfaces.[11]How usability is to be precisely understood, how it relates to other social and cultural values, and when it is, and when it may not be, a desirable property of computer interfaces is increasingly debated.[12][13] Much of the research in the field of human–computer interaction takes an interest in: Visions of what researchers in the field seek to achieve might vary. When pursuing a cognitivist perspective, researchers of HCI may seek to align computer interfaces with the mental model that humans have of their activities. When pursuing apost-cognitivistperspective, researchers of HCI may seek to align computer interfaces with existing social practices or existing sociocultural values. Researchers in HCI are interested in developing design methodologies, experimenting with devices, prototyping software and hardware systems, exploring interaction paradigms, and developing models and theories of interaction. The following experimental design principles are considered when evaluating a currentuser interfaceor designing a new one: The iterative design process is repeated until a sensible, user-friendly interface is created.[16] Various strategies for human–computerinteraction designhave developed since the field's conception in the 1980s. Most design methodologies derive from a model of how users, designers, and technical systems interact. Early methodologies treated users' cognitive processes as predictable and quantifiable, and encouraged designers to draw on cognitive science in areas such as memory and attention when structuring user interfaces. Modern models instead center on continual feedback and dialogue between users, designers, and engineers, and push for technical systems to be built around the kinds of experiences users want to have, rather than wrappinguser experiencearound a finished system. Topics in human–computer interaction include the following: Human-AI Interaction explores how users engage with artificial intelligence systems, particularly focusing on usability, trust, and interpretability.
The research mainly aims to design AI-driven interfaces that are transparent, explainable, and ethically responsible.[20]Studies highlight the importance of explainable AI (XAI) and human-in-the-loop decision-making, ensuring that AI outputs are understandable and trustworthy.[21]Researchers also develop design guidelines for human-AI interaction, improving the collaboration between users and AI systems.[22] Augmented reality (AR) integrates digital content with the real world. It enhances human perception and interaction with physical environments. AR research mainly focuses on adaptive user interfaces, multimodal input techniques, and real-world object interaction.[23]Advances in wearable AR technology improve usability, enabling more natural interaction with AR applications.[24] Virtual reality (VR) creates a fully immersive digital environment, allowing users to interact with computer-generated worlds through sensory input devices. Research focuses on user presence, interaction techniques, and cognitive effects of immersion.[25]A key area of study is the impact of VR on cognitive load and user adaptability, influencing how users process information in virtual spaces.[26] Mixed reality (MR) blends elements of both augmented reality (AR) and virtual reality (VR). It enables real-time interaction with both physical and digital objects. HCI research in MR concentrates on spatial computing, real-world object interaction, and context-aware adaptive interfaces.[27]MR technologies are increasingly applied in education, training simulations, and healthcare, enhancing learning outcomes and user engagement.[28] Extended reality (XR) is an umbrella term encompassing AR, VR, and MR, offering a continuum between real and virtual environments. Research investigates user adaptability, interaction paradigms, and ethical implications of immersive technologies.[29]Recent studies highlight how AI-driven personalization and adaptive interfaces improve the usability of XR applications.[30] Accessibility in human–computer interaction (HCI) focuses on designing inclusive digital experiences, ensuring usability for people with diverse abilities. Research in this area is related to assistive technologies, adaptive interfaces, and universal design principles.[31]Studies indicate that accessible design benefits not only people with disabilities but also enhances usability for all users.[32] Social computing is an interactive and collaborative behavior considered between technology and people. In recent years, there has been an explosion of social science research focusing on interactions as the unit of analysis, as there are a lot of social computing technologies that include blogs, emails, social networking, quick messaging, and various others. Much of this research draws from psychology, social psychology, and sociology. 
For example, one study found that people expected a computer with a man's name to cost more than a machine with a woman's name.[33]Other research finds that individuals perceive their interactions with computers more negatively than their interactions with humans, despite behaving the same way towards these machines.[34] In human–computer interaction, a semantic gap usually exists between how humans and computers understand each other's behavior.Ontology, as a formal representation of domain-specific knowledge, can be used to address this problem by solving the semantic ambiguities between the two parties.[35] In the interaction of humans and computers, research has studied how computers can detect, process, and react to human emotions to develop emotionally intelligent information systems. Researchers have suggested several 'affect-detection channels'. The appeal of detecting human emotions in an automated and digital fashion lies in the improvements it could bring to the effectiveness of human–computer interaction. The influence of emotions in human–computer interaction has been studied in fields such as financial decision-making usingECGand organizational knowledge sharing usingeye-trackingand face readers as affect-detection channels. In these fields, it has been shown that affect-detection channels have the potential todetect human emotionsand that information systems can incorporate the data obtained from affect-detection channels to improve decision models. Abrain–computer interface(BCI) is a direct communication pathway between an enhanced or wiredbrainand an external device. BCI differs fromneuromodulationin that it allows for bidirectional information flow. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.[36] Security interaction is the study of interaction between humans and computers specifically as it pertains toinformation security. Its aim, in plain terms, is to improve theusabilityof security features inend userapplications. Unlike HCI, which has roots in the early days ofXerox PARCduring the 1970s, HCISec is a nascent field of study by comparison. Interest in this topic tracks with that ofInternet security, which has become an area of broad public concern only in very recent years. When security features exhibit poor usability, the following are common reasons: Traditionally, computer use was modeled as a human–computer dyad in which the two were connected by a narrow explicit communication channel, such as text-based terminals. Much work has been done to make the interaction between a computing system and a human more reflective of the multidimensional nature of everyday communication. Because of potential issues, human–computer interaction shifted focus beyond the interface to respond to observations as articulated byDouglas Engelbart: "If ease of use were the only valid criterion, people would stick to tricycles and never try bicycles."[37] How humans interact with computers continues to evolve rapidly. Human–computer interaction is affected by developments in computing. These forces include: As of 2010[update]the future for HCI is expected[38]to include the following characteristics: One of the main conferences for new research in human–computer interaction is the annually heldAssociation for Computing Machinery's (ACM)Conference on Human Factors in Computing Systems, usually referred to by its short name CHI (pronouncedkai, orKhai).
CHI is organized by the ACM Special Interest Group on Computer-Human Interaction (SIGCHI). CHI is a large conference, with thousands of attendees, and is quite broad in scope. It is attended by academics, practitioners, and industry people, with company sponsors such as Google, Microsoft, and PayPal. There are also dozens of other smaller, regional, or specialized HCI-related conferences held around the world each year, including:[39]
https://en.wikipedia.org/wiki/Human_Computer_Interaction
Aglossary(fromAncient Greek:γλῶσσα,glossa; language, speech, wording), also known as avocabularyorclavis, is an alphabetical list oftermsin a particulardomain of knowledgewith thedefinitionsfor those terms.[citation needed]Traditionally, a glossary appears at the end of abookand includes terms within that book that are either newly introduced, uncommon, or specialized. While glossaries are most commonly associated withnon-fictionbooks,fictionnovels sometimes include a glossary for unfamiliar terms. A bilingual glossary is a list of terms in one language defined in a second language orglossedbysynonyms(or at least near-synonyms) in another language. In a general sense, a glossary contains explanations ofconceptsrelevant to a certain field of study or action. In this sense, the term is related to the notion ofontology. Automatic methods have also been provided that transform a glossary into an ontology[1]or a computational lexicon.[2] Acore glossaryis a simple glossary orexplanatory dictionarythat enables definition of other concepts, especially for newcomers to a language or field of study. It contains a small working vocabulary and definitions for important or frequently encountered concepts, usually including idioms or metaphors useful in a culture. Computational approachesto the automated extraction of glossaries from corpora[3]or the Web[4][5]have been developed in recent years[timeframe?]. These methods typically start from domainterminologyand extract one or more glosses for each term of interest. Glosses can then be analyzed to extracthypernymsof the defined term and other lexical and semantic relations.
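As a toy illustration of the last point, the following snippet pulls a candidate hypernym out of glosses written in the common "an X is a Y of/that ..." style; real gloss-analysis systems rely on parsing and lexical patterns far more robust than a single regular expression, and the glosses below are invented examples.

```python
# Toy extraction of hypernyms from glossary definitions that follow the
# "a/an/the Y ..." pattern. The glosses are invented examples.
import re

glossary = {
    "glossary": "an alphabetical list of terms in a particular domain of "
                "knowledge with the definitions for those terms",
    "ontology": "a formal representation of the concepts and relations "
                "relevant to a domain",
}

# Capture the noun phrase after the article, stopping at a common connector.
HYPERNYM_PATTERN = re.compile(r"^(?:an?|the)\s+(.+?)(?:\s+(?:of|that|in|with)\s|$)")

for term, gloss in glossary.items():
    match = HYPERNYM_PATTERN.match(gloss)
    hypernym = match.group(1) if match else None
    print(f"{term} -> candidate hypernym: {hypernym}")
```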
https://en.wikipedia.org/wiki/Glossary
Ininformation science, anontologyencompasses a representation, formal naming, and definitions of the categories, properties, and relations between the concepts, data, or entities that pertain to one, many, or alldomains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of terms and relational expressions that represent the entities in that subject area. The field which studies ontologies so conceived is sometimes referred to asapplied ontology.[1] Everyacademic disciplineor field, in creating its terminology, thereby lays the groundwork for an ontology. Each uses ontological assumptions to frame explicit theories, research and applications. Improved ontologies may improve problem solving within that domain,interoperabilityof data systems, and discoverability of data. Translating research papers within every field is a problem made easier when experts from different countries maintain acontrolled vocabularyofjargonbetween each of their languages.[2]For instance, thedefinition and ontology of economicsis a primary concern inMarxist economics,[3]but also in othersubfields of economics.[4]An example of economics relying on information science occurs in cases where a simulation or model is intended to enable economic decisions, such as determining whatcapital assetsare at risk and by how much (seerisk management). What ontologies in bothinformation scienceandphilosophyhave in common is the attempt to represent entities, including both objects and events, with all their interdependent properties and relations, according to a system of categories. In both fields, there is considerable work on problems ofontology engineering(e.g.,QuineandKripkein philosophy,SowaandGuarinoin information science),[5]and debates concerning to what extentnormativeontology is possible (e.g.,foundationalismandcoherentismin philosophy,BFOandCycin artificial intelligence). Applied ontologyis considered by some as a successor to prior work in philosophy. However many current efforts are more concerned with establishingcontrolled vocabulariesof narrow domains than with philosophicalfirst principles, or with questions such as the mode of existence offixed essencesor whether enduring objects (e.g.,perdurantismandendurantism) may be ontologically more primary thanprocesses.Artificial intelligencehas retained considerable attention regardingapplied ontologyin subfields likenatural language processingwithinmachine translationandknowledge representation, but ontology editors are being used often in a range of fields, including biomedical informatics,[6]industry.[7]Such efforts often use ontology editing tools such asProtégé.[8] Ontologyis a branch ofphilosophyand intersects areas such asmetaphysics,epistemology, andphilosophy of language, as it considers how knowledge, language, and perception relate to the nature of reality.Metaphysicsdeals with questions like "what exists?" and "what is the nature of reality?". One of five traditional branches of philosophy, metaphysics is concerned with exploring existence through properties, entities and relations such as those betweenparticularsanduniversals,intrinsic and extrinsic properties, oressenceandexistence. Metaphysics has been an ongoing topic of discussion since recorded history. Thecompoundwordontologycombinesonto-, from theGreekὄν,on(gen.ὄντος,ontos), i.e. "being; that which is", which is thepresentparticipleof theverbεἰμί,eimí, i.e. "to be, I am", and-λογία,-logia, i.e. 
"logical discourse", seeclassical compoundsfor this type of word formation.[9][10] While theetymologyis Greek, the oldest extant record of the word itself, theNeo-Latinformontologia, appeared in 1606 in the workOgdoas ScholasticabyJacob Lorhard(Lorhardus) and in 1613 in theLexicon philosophicumbyRudolf Göckel(Goclenius).[11] The first occurrence in English ofontologyas recorded by theOED(Oxford English Dictionary, online edition, 2008) came inArcheologia Philosophica NovaorNew Principles of PhilosophybyGideon Harvey. Since the mid-1970s, researchers in the field ofartificial intelligence(AI) have recognized thatknowledge engineeringis the key to building large and powerful AI systems[citation needed]. AI researchers argued that they could create new ontologies ascomputational modelsthat enable certain kinds ofautomated reasoning, which was onlymarginally successful. In the 1980s, the AI community began to use the termontologyto refer to both a theory of a modeled world and a component ofknowledge-based systems. In particular, David Powers introduced the wordontologyto AI to refer to real world or robotic grounding,[12][13]publishing in 1990 literature reviews emphasizing grounded ontology in association with the call for papers for a AAAI Summer Symposium Machine Learning of Natural Language and Ontology, with an expanded version published in SIGART Bulletin and included as a preface to the proceedings.[14]Some researchers, drawing inspiration from philosophical ontologies, viewed computational ontology as a kind of applied philosophy.[15] In 1993, the widely cited web page and paper "Toward Principles for the Design of Ontologies Used for Knowledge Sharing" byTom Gruber[16]usedontologyas a technical term incomputer scienceclosely related to earlier idea ofsemantic networksandtaxonomies. Gruber introduced the term asa specification of a conceptualization: An ontology is a description (like a formal specification of a program) of the concepts and relationships that can formally exist for an agent or a community of agents. This definition is consistent with the usage of ontology as set of concept definitions, but more general. And it is a different sense of the word than its use in philosophy.[17] Attempting to distance ontologies from taxonomies and similar efforts inknowledge modelingthat rely onclassesandinheritance, Gruber stated (1993): Ontologies are often equated with taxonomic hierarchies of classes, class definitions, and the subsumption relation, but ontologies need not be limited to these forms. Ontologies are also not limited toconservative definitions, that is, definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world (Enderton, 1972). To specify a conceptualization, one needs to state axioms thatdoconstrain the possible interpretations for the defined terms.[16] Recent experimental ontology frameworks have also explored resonance-based AI-human co-evolution structures, such as IAMF (Illumination AI Matrix Framework). Though not yet widely adopted in academic discourse, such models propose phased approaches to ethical harmonization and structural emergence.[18] As refinement of Gruber's definition Feilmayr and Wöß (2016) stated: "An ontology is a formal, explicit specification of a shared conceptualization that is characterized by high semantic expressiveness required for increased complexity."[19] Contemporary ontologies share many structural similarities, regardless of the language in which they are expressed. 
Most ontologies describe individuals (instances), classes (concepts), attributes and relations. A domain ontology (or domain-specific ontology) represents concepts which belong to a realm of the world, such as biology or politics. Each domain ontology typically models domain-specific definitions of terms. For example, the wordcardhas many different meanings. An ontology about the domain ofpokerwould model the "playing card" meaning of the word, while an ontology about the domain ofcomputer hardwarewould model the "punched card" and "video card" meanings. Since domain ontologies are written by different people, they represent concepts in very specific and unique ways, and are often incompatible within the same project. As systems that rely on domain ontologies expand, they often need to merge domain ontologies by hand-tuning each entity or using a combination of software merging and hand-tuning. This presents a challenge to the ontology designer. Different ontologies in the same domain arise due to different languages, different intended usage of the ontologies, and different perceptions of the domain (based on cultural background, education, ideology, etc.)[citation needed]. At present, merging ontologies that are not developed from a commonupper ontologyis a largely manual process and therefore time-consuming and expensive. Domain ontologies that use the same upper ontology to provide a set of basic elements with which to specify the meanings of the domain ontology entities can be merged with less effort. There are studies on generalized techniques for merging ontologies,[20]but this area of research is still ongoing, and it is a recent event to see the issue sidestepped by having multiple domain ontologies using the same upper ontology like theOBO Foundry. An upper ontology (or foundation ontology) is a model of the commonly shared relations and objects that are generally applicable across a wide range of domain ontologies. It usually employs acore glossarythat overarches the terms and associated object descriptions as they are used in various relevant domain ontologies. Standardized upper ontologies available for use includeBFO,BORO method,Dublin Core,GFO,Cyc,SUMO,UMBEL, andDOLCE.[21][22]WordNethas been considered an upper ontology by some and has been used as a linguistic tool for learning domain ontologies.[23] TheGellishontology is an example of a combination of an upper and a domain ontology. A survey of ontology visualization methods is presented by Katifori et al.[24]An updated survey of ontology visualization methods and tools was published by Dudás et al.[25]The most established ontology visualization methods, namely indented tree and graph visualization are evaluated by Fu et al.[26]A visual language for ontologies represented inOWLis specified by theVisual Notation for OWL Ontologies (VOWL).[27] Ontology engineering (also called ontology building) is a set of tasks related to the development of ontologies for a particular domain.[28]It is a subfield ofknowledge engineeringthat studies the ontology development process, the ontology life cycle, the methods and methodologies for building ontologies, and the tools and languages that support them.[29][30] Ontology engineering aims to make explicit the knowledge contained in software applications, and organizational procedures for a particular domain. Ontology engineering offers a direction for overcoming semantic obstacles, such as those related to the definitions of business terms and software classes. 
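To make the "card" example above concrete, the sketch below (using the rdflib library, with placeholder URIs rather than any published ontology) places the poker sense and the hardware sense of the word in two domain namespaces, each anchored to a class of a shared upper ontology, so that the identical label does not collapse the two meanings.

```python
# Sketch: two domain ontologies give the word "card" different meanings,
# anchored to a shared (invented) upper ontology. Requires: pip install rdflib
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, OWL

UPPER = Namespace("http://example.org/upper#")     # placeholder upper ontology
POKER = Namespace("http://example.org/poker#")     # placeholder domain 1
HW = Namespace("http://example.org/hardware#")     # placeholder domain 2

g = Graph()
g.bind("upper", UPPER)
g.bind("poker", POKER)
g.bind("hw", HW)

# Both domain classes are labelled "card" but are distinct resources.
g.add((POKER.PlayingCard, RDF.type, OWL.Class))
g.add((POKER.PlayingCard, RDFS.subClassOf, UPPER.InformationBearingObject))
g.add((POKER.PlayingCard, RDFS.label, Literal("card")))

g.add((HW.VideoCard, RDF.type, OWL.Class))
g.add((HW.VideoCard, RDFS.subClassOf, UPPER.EngineeredArtifact))
g.add((HW.VideoCard, RDFS.label, Literal("card")))

# The same label maps to two different classes; merging tools must keep them apart.
for cls in g.subjects(RDFS.label, Literal("card")):
    print(cls, "->", list(g.objects(cls, RDFS.subClassOf)))
```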
Known challenges with ontology engineering include: Ontology editorsare applications designed to assist in the creation or manipulation of ontologies. It is common for ontology editors to use one or moreontology languages. Aspects of ontology editors include: visual navigation possibilities within theknowledge model,inference enginesandinformation extraction; support for modules; the import and export of foreignknowledge representationlanguages forontology matching; and the support of meta-ontologies such asOWL-S,Dublin Core, etc.[31] Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting a domain's terms from natural language text. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process. Information extraction andtext mininghave been explored to automatically link ontologies to documents, for example in the context of the BioCreative challenges.[32] Epistemological assumptions, which in research ask "What do you know?" or "How do you know it?", create the foundation researchers use when approaching a certain topic or area for potential research. As epistemology is directly linked to knowledge and how we come to accept certain truths, individuals conducting academic research must understand what allows them to begin theory building. Simply put, epistemological assumptions force researchers to question how they arrive at the knowledge they have.[citation needed] Anontology languageis aformal languageused to encode an ontology. There are a number of such languages for ontologies, both proprietary and standards-based: The W3CLinking Open Data community projectcoordinates attempts to converge different ontologies into a worldwideSemantic Web. The development of ontologies has led to the emergence of services providing lists or directories of ontologies called ontology libraries. The following are libraries of human-selected ontologies. The following are both directories and search engines. In general, ontologies can be used beneficially in several fields.
https://en.wikipedia.org/wiki/Domain_ontology
Common Logic(CL) is a framework for a family oflogic languages, based onfirst-order logic, intended to facilitate the exchange and transmission ofknowledgeincomputer-based systems.[1] The CL definition permits and encourages the development of a variety of different syntactic forms, calleddialects. A dialect may use any desired syntax, but it must be possible to demonstrate precisely how the concrete syntax of a dialect conforms to the abstract CL semantics, which are based on amodel theoreticinterpretation. Each dialect may be then treated as aformal language. Once syntactic conformance is established, a dialect gets the CL semantics for free, as they are specified relative to the abstract syntax only, and hence are inherited by any conformant dialect. In addition, all CL dialects are comparable (i.e., can be automatically translated to a common language), although some may be more expressive than others. In general, a less expressive subset of CL may betranslatedto a more expressive version of CL, but the reverse translation is only defined on a subset of the larger language. Common Logic is published byISOas "ISO/IEC 24707:2007 - Information technology — Common Logic (CL): a framework for a family of logic-based languages".[2]It is available for purchase from ISO's catalog, and is freely available from ISO's index of publicly available standards.[3][4] The CL Standard includes specifications for three dialects, theCommon Logic Interchange Format(CLIF) (Annex A), theConceptual Graph Interchange Format(CGIF) (Annex B), and anXML-based notation for Common Logic (XCL) (Annex C). The semantics of these dialects are defined in the Standard by their translation to the abstract syntax and semantics of Common Logic. Many other logic-based languages could also be defined as subsets of CL by means of similar translations; among them are theRDFandOWLlanguages, which have been defined by theW3C. The ISO standard's development began in June 2003 under Working Group 2 (Metadata) ofSub-Committee 32 (Data Interchange)underISO/IEC JTC 1, and was completed in October 2007. A technical corrigendum, correcting some errors in the original standard, is being prepared at the time being.
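To give a feel for how a dialect's concrete syntax can be mapped onto an abstract syntax, the following toy parser turns a CLIF-style sentence (CLIF uses a parenthesized, s-expression-like notation) into a nested Python structure; it is an illustration only, not a conformant CLIF parser.

```python
# Toy parser turning a CLIF-style s-expression into a nested list,
# standing in for Common Logic's abstract syntax. Not a conformant parser.

def tokenize(text: str) -> list:
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens: list):
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # discard the closing ")"
        return expr
    return token  # an atom: a name or variable

# A CLIF-like sentence: every cat is a mammal.
sentence = "(forall (x) (if (Cat x) (Mammal x)))"
abstract_syntax = parse(tokenize(sentence))
print(abstract_syntax)
# ['forall', ['x'], ['if', ['Cat', 'x'], ['Mammal', 'x']]]
```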
https://en.wikipedia.org/wiki/Common_Logic
FOAF(an acronym offriend of a friend) is amachine-readableontologydescribingpersons, their activities and their relations to other people and objects. Anyone can use FOAF to describe themselves. FOAF allows groups of people to describesocial networkswithout the need for a centralised database. FOAF is a descriptive vocabulary expressed using theResource Description Framework(RDF) and theWeb Ontology Language(OWL). Computers may use these FOAF profiles to find, for example, all people living in Europe, or to list all people both you and a friend of yours know.[1][2]This is accomplished by defining relationships between people. Each profile has a unique identifier (such as the person'se-mail addresses, internationaltelephone number,Facebookaccount name, aJabber ID, or aURIof the homepage or weblog of the person), which is used when defining these relationships. The FOAF project, which defines and extends the vocabulary of a FOAF profile, was started in 2000 by Libby Miller and Dan Brickley. It can be considered the firstSocial Semantic Webapplication,[citation needed]in that it combinesRDFtechnology with 'social web' concerns.[clarification needed] Tim Berners-Lee, in a 2007 essay,[3]redefined thesemantic webconcept into theGiant Global Graph(GGG), where relationships transcend networks and documents. He considers the GGG to be on equal ground with theInternetand theWorld Wide Web, stating that "I express my network in a FOAF file, and that is a start of the revolution." FOAF is one of the key components of theWebIDspecifications, in particular for the WebID+TLS protocol, which was formerly known as FOAF+SSL. Although it is a relatively simple use-case and standard, FOAF has had limited adoption on the web. For example, theLive JournalandDeadJournalblogging sites support FOAF profiles for all their members,[4]My Operacommunity supported FOAF profiles for members as well as groups. FOAF support is present onIdenti.ca,FriendFeed,WordPressandTypePadservices.[5] Yandexblog search platform supports search over FOAF profile information.[6]Prominent client-side FOAF support was available inSafari[7]web browser before RSS support was removed in Safari 6 and in the Semantic Radar[8]plugin forFirefoxbrowser.Semantic MediaWiki, thesemantic annotationandlinked dataextension ofMediaWikisupports mapping properties to external ontologies, including FOAF which is enabled by default. There are also modules or plugins to support FOAF profiles or FOAF+SSL authorization for programming languages,[9][10]as well as forcontent management systems.[11] The following FOAF profile (written inTurtleformat) states that James Wales is the name of the person described here. His e-mail address, homepage and depiction areweb resources, which means that each can be described using RDF as well. He has Wikimedia as an interest, and knows Angela Beesley (which is the name of a 'Person' resource).
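A sketch of what such a profile might look like, written in Turtle and loaded with the rdflib library, is shown below; the mailbox, homepage and depiction URLs are placeholders rather than the values of the original example.

```python
# Sketch of a FOAF profile like the one described above, written in Turtle
# and loaded with rdflib. Contact URLs are placeholders. Requires rdflib.
from rdflib import Graph
from rdflib.namespace import FOAF, RDF

TURTLE_PROFILE = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://example.com/profile#me> a foaf:Person ;
    foaf:name      "James Wales" ;
    foaf:mbox      <mailto:jwales@example.com> ;       # placeholder
    foaf:homepage  <http://example.com/jwales/> ;      # placeholder
    foaf:depiction <http://example.com/jwales.jpg> ;   # placeholder
    foaf:interest  <http://www.wikimedia.org/> ;
    foaf:knows     [ a foaf:Person ; foaf:name "Angela Beesley" ] .
"""

g = Graph()
g.parse(data=TURTLE_PROFILE, format="turtle")

# Every resource typed as foaf:Person, including the blank node for the friend.
for person in g.subjects(RDF.type, FOAF.Person):
    for name in g.objects(person, FOAF.name):
        print("Person in profile:", name)
```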
https://en.wikipedia.org/wiki/FOAF_(software)
Framesare anartificial intelligencedata structureused to divideknowledgeinto substructures by representing "stereotypedsituations". They were proposed byMarvin Minskyin his 1974 article "A Framework for Representing Knowledge". Frames are the primary data structure used in artificial intelligence frame languages; they are stored asontologiesofsets. Frames are also an extensive part ofknowledge representation and reasoningschemes. They were originally derived fromsemantic networksand are therefore part of structure-basedknowledge representations. According toRussellandNorvig'sArtificial Intelligence: A Modern Approach, structural representations assemble "[...]facts about particular objects and event types and arrange the types into a largetaxonomichierarchy analogous to abiological taxonomy". The frame contains information on how to use the frame, what to expect next, and what to do when these expectations are not met. Some information in the frame is generally unchanged while other information, stored in "terminals", usually changes. Terminals can be considered variables. Top-level frames carry information that is always true about the problem in hand; terminals, however, do not have to be true. Their value might change with the new information encountered. Different frames may share the same terminals. Each piece of information about a particular frame is held in a slot. The information can contain: A frame's terminals are pre-filled with default values, a design based on how thehuman mindworks. For example, when a person is told "a boy kicks a ball", most people will visualize a particular ball (such as a familiarsoccer ball) rather than imagining some abstract ball with no attributes. One particular strength of frame-based knowledge representations is that, unlike semantic networks, they allow for exceptions in particular instances. This gives frames a degree of flexibility that allows representations to reflect real-world phenomena more accurately. Likesemantic networks, frames can be queried using spreading activation. Following the rules ofinheritance, any value given to a slot that is inherited by subframes will be updated (IF-ADDED) to the corresponding slots in the subframes and any new instances of a particular frame will feature that new value as the default. Because frames are based on structures, it is possible to generate a semantic network given a set of frames even though it lacks explicit arcs. References toNoam Chomskyand hisgenerative grammarof 1950 are generally missing fromMinsky's work. The simplified structures of frames allow for easy analogical reasoning, a much prized feature in anyintelligent agent. The procedural attachments provided by frames also allow a degree of flexibility that makes for a more realistic representation and gives a natural affordance for programming applications. Worth noticing here is the easy analogical reasoning (comparison) that can be done between a boy and a monkey just by having similarly named slots (see the sketch below). Also notice that Alex, an instance of a boy, inherits default values like "Sex" from the more general parent object Boy, but the boy may also have different instance values in the form of exceptions such as the number of legs. Aframe languageis a technology used forknowledge representationinartificial intelligence. Frame languages are similar toclass hierarchiesinobject-oriented languagesalthough their fundamental design goals are different.
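The boy/monkey comparison and the Alex instance referred to above can be reconstructed in a minimal Python sketch; the slot names and values are illustrative, not Minsky's original tables.

```python
# Minimal frame system sketch: slots with defaults, inheritance from a parent
# frame, and per-instance exceptions. Slot names and values are illustrative.

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        """Look up a slot, falling back to the parent frame's default."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

# Generic frames with default slot values.
boy = Frame("Boy", sex="male", legs=2, diet="omnivore")
monkey = Frame("Monkey", sex="male", legs=2, diet="omnivore")

# Alex is an instance of Boy; the legs slot is an exception to the default.
alex = Frame("Alex", parent=boy, legs=1)

print(alex.get("sex"))   # "male": default inherited from the Boy frame
print(alex.get("legs"))  # 1: instance-level exception overrides the default

# Analogical comparison: similarly named slots make Boy and Monkey comparable.
shared_slots = boy.slots.keys() & monkey.slots.keys()
print({slot: (boy.get(slot), monkey.get(slot)) for slot in shared_slots})
```

A fuller frame language would also attach procedures (IF-ADDED or IF-NEEDED triggers) and facets to slots; the sketch only covers defaults, inheritance, and exceptions.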
Frames are focused on explicit and intuitive representation of knowledge whereas objects focus onencapsulationandinformation hiding. Frames originated in AI research and objects primarily insoftware engineering. However, in practice, the techniques and capabilities of frame andobject-oriented languagesoverlap significantly. A simple example of concepts modeled in a frame language is theFriend of A Friend (FOAF) ontologydefined as part of theSemantic Webas a foundation for social networking and calendar systems. The primary frame in this simple example is aPerson. Example slots are the person'semail,home page, phone,etc. The interests of each person can be represented by additional frames describing the space of business and entertainment domains. The slotknowslinks each person with other persons. Default values for a person's interests can beinferredby the web of people they are friends of.[1] The earliest frame-based languages were custom developed for specific research projects and were not packaged as tools to be re-used by other researchers. Just as withexpert systeminference engines, researchers soon realized the benefits of extracting part of the core infrastructure and developing general-purpose frame languages that were not coupled to specific applications. One of the first general-purpose frame languages was KRL.[2]One of the most influential early frame languages wasKL-ONE.[3]KL-ONE spawned several subsequent Frame languages. One of the most widely used successors to KL-ONE was theLoom languagedeveloped by Robert MacGregor at theInformation Sciences Institute.[4] In the 1980s, Artificial Intelligence generated a great deal of interest in the business world fueled byexpert systems. This led to the development of many commercial products for the development of knowledge-based systems. These early products were usually developed inLispand integrated constructs such as IF-THEN rules forlogical reasoningwith Frame hierarchies for representing data. One of the most well known of these early Lisp knowledge-base tools was theKnowledge Engineering Environment(KEE) fromIntellicorp. KEE provided a full Frame language with multiple inheritance, slots, triggers, default values, and a rule engine that supported backward and forward chaining. As with most early commercial versions of AI software KEE was originally deployed inLisponLisp machineplatforms but was eventually ported toPCsandUnix workstations.[5] The research agenda of theSemantic Webspawned a renewed interest in automatic classification and frame languages. An example is theWeb Ontology Language(OWL) standard for describing information on the Internet. OWL is a standard to provide a semantic layer on top of the Internet. The goal is that rather than organizing the web using keywords as most applications (e.g. Google) do today the web can be organized by concepts organized in an ontology. The name of the OWL language itself provides a good example of the value of a Semantic Web. If one were to search for "OWL" using the Internet today most of the pages retrieved would be on the birdOwlrather than the standardOWL. With a Semantic Web it would be possible to specify the concept "Web Ontology Language" and the user would not need to worry about the various possible acronyms or synonyms as part of the search. Likewise, the user would not need to worry about homonyms crowding the search results with irrelevant data such as information about birds of prey as in this simple example. 
In addition to OWL, various standards and technologies that are relevant to the Semantic Web and were influenced by Frame languages includeOILandDAML. TheProtegeOpen Source software tool from Stanford University provides an ontology editing capability that is built on OWL and has the full capabilities of a classifier. However it ceased to explicitly support frames as of version 3.5 (which is maintained for those preferring frame orientation), the version current in 2017 being 5. The justification for moving from explicit frames being that OWL DL is more expressive and "industry standard".[6] Frame languages have a significant overlap withobject-orientedlanguages. The terminologies and goals of the two communities were different but as they moved from the academic world and labs to the commercial world developers tended to not care about philosophical issues and focused primarily on specific capabilities, taking the best from either camp regardless of where the idea began. What both paradigms have in common is a desire to reduce the distance between concepts in the real world and their implementation in software. As such bothparadigmsarrived at the idea of representing the primary software objects intaxonomiesstarting with very general types and progressing to more specific types. The following table illustrates thecorrelationbetween standard terminology from the object-oriented and frame language communities: The primary difference between the two paradigms was in the degree thatencapsulationwas considered a major requirement. For the object-oriented paradigm encapsulation was one of, if not the most, critical requirement. The desire to reduce the potential interactions between software components and hence manage large complex systems was a key driver of object-oriented technology. For the frame language camp this requirement was less critical than the desire to provide a vast array of possible tools to represent rules, constraints, and programming logic. In the object-oriented world everything is controlled by methods and the visibility of methods. So for example, accessing the data value of an object property must be done via an accessor method. This method controls things such as validating the data type and constraints on the value being retrieved or set on the property. In Frame languages these same types of constraints could be handled in multiple ways. Triggers could be defined to fire before or after a value was set or retrieved. Rules could be defined that managed the same types of constraints. The slots themselves could be augmented with additional information (called "facets" in some languages) again with the same type of constraint information. The other main differentiator between frame and OO languages wasmultiple inheritance(allowing a frame or class to have two or more superclasses). For frame languages multiple inheritance was a requirement. This follows from the desire to model the world the way humans do, human conceptualizations of the world seldom fall into rigidly defined non-overlappingtaxonomies. For many OO languages, especially in the later years of OO, single inheritance was either strongly desired or required. 
Multiple inheritance was seen as a possible step in the analysis phase to model a domain but something that should be eliminated in the design and implementation phases in the name of maintaining encapsulation andmodularity.[7] Although the early frame languages such asKRLdid not includemessage passing, driven by the demands of developers, most of the later frame languages (e.g. Loom, KEE) included the ability to define messages on Frames.[8] On the object-oriented side, standards have also emerged that provide essentially the equivalent functionality that frame languages provided, albeit in a different format and all standardized on object libraries. For example, theObject Management Grouphas standardized specifications for capabilities such as associating test data and constraints with objects (analogous to common uses for facets in Frames and to constraints in Frame languages such as Loom) and for integrating rule engines.[9][10] Early work on Frames was inspired by psychological research going back to the 1930s that indicated people use stored stereotypical knowledge to interpret and act in new cognitive situations.[11]The term Frame was first used byMarvin Minskyas a paradigm to understand visual reasoning and natural language processing.[12]In these and many other types of problems the potential solution space for even the smallest problem is huge. For example, extracting the phonemes from a raw audio stream ordetecting the edgesof an object. Things that seem trivial to humans are actually quite complex. In fact, how difficult they really were was probably not fully understood until AI researchers began to investigate the complexity of getting computers to solve them. The initial notion of Frames or Scripts as they were also called is that they would establish the context for a problem and in so doing automatically reduce the possible search space significantly. The idea was also adopted by Schank and Abelson who used it to illustrate how an AI system could process common human interactions such as ordering a meal at a restaurant.[13]These interactions were standardized as Frames with slots that stored relevant information about each Frame. Slots are analogous to object properties in object-oriented modeling and to relations in entity-relation models. Slots often had default values but also required further refinement as part of the execution of each instance of the scenario. I.e., the execution of a task such as ordering at a restaurant was controlled by starting with a basic instance of the Frame and then instantiating and refining various values as appropriate. Essentially the abstract Frame represented an object class and the frame instances an object instance. In this early work, the emphasis was primarily on the static data descriptions of the Frame. Various mechanisms were developed to define the range of a slot, default values, etc. However, even in these early systems there were procedural capabilities. One common technique was to use "triggers" (similar to the database concept oftriggers) attached to slots. A trigger is simply procedural code that have attached to a slot. The trigger could fire either before and/or after a slot value was accessed or modified. As with object classes, Frames were organized insubsumptionhierarchies. For example, a basic frame might be ordering at a restaurant. An instance of that would be Joe goes to Dairy Queen. A specialization (essentially asubclass) of the restaurant frame would be a frame for ordering at a fancy restaurant. 
The fancy restaurant frame would inherit all the default values from the restaurant frame but also would either add more slots or change one or more of the default values (e.g., expected price range) for the specialized frame.[14][15] Much of the early Frame language research (e.g. Schank and Abelson) had been driven by findings from experimental psychology and attempts to design knowledge representation tools that corresponded to the patterns humans were thought to use to function in daily tasks. These researchers were less interested in mathematical formality since they believed such formalisms were not necessarily good models for the way the average human conceptualizes the world. The way humans use language for example is often far from truly logical. Similarly, in linguistics,Charles J. Fillmorein the mid-1970s started working on his theory offrame semantics, which later would lead to computational resources likeFrameNet.[16]Frame semantics was motivated by reflections on human language and human cognition. Researchers such asRon Brachmanon the other hand wanted to give AI researchers the mathematical formalism and computational power that were associated with Logic. Their aim was to map the Frame classes, slots, constraints, and rules in a Frame language to set theory and logic. One of the benefits of this approach is that the validation and even creation of the models could be automated using theorem provers and other automated reasoning capabilities. The drawback was that it could be more difficult to initially specify the model in a language with a formal semantics. This evolution also illustrates a classic divide in AI research known as the "neats vs. scruffies". The "neats" were researchers who placed the most value on mathematical precision and formalism which could be achieved viaFirst Order LogicandSet Theory. The "scruffies" were more interested in modeling knowledge in representations that were intuitive and psychologically meaningful to humans.[17] The most notable of the more formal approaches was theKL-ONElanguage.[18]KL-ONE later went on to spawn several subsequent Frame languages. The formal semantics of languages such as KL-ONE gave these frame languages a new type of automated reasoning capability known as theclassifier. The classifier is an engine that analyzes the various declarations in the frame language: the definition of sets, subsets, relations, etc. The classifier can then automatically deduce various additional relations and can detect when some parts of a model are inconsistent with each other. In this way many of the tasks that would normally be executed by forward or backward chaining in an inference engine can instead be performed by the classifier.[19] This technology is especially valuable in dealing with the Internet. It is an interesting result that the formalism of languages such as KL-ONE can be most useful dealing with the highly informal and unstructured data found on the Internet. On the Internet it is simply not feasible to require all systems to standardize on one data model. It is inevitable that terminology will be used in multiple inconsistent forms. The automatic classification capability of the classifier engine provides AI developers with a powerful toolbox to help bring order and consistency to a very inconsistent collection of data (i.e., the Internet). The vision for an enhanced Internet, where pages are ordered not just by text keywords but by classification of concepts is known as theSemantic Web. 
Classification technology originally developed for Frame languages is a key enabler of the Semantic Web.[20][21]The "neats vs. scruffies" divide also emerged in Semantic Web research, culminating in the creation of theLinking Open Datacommunity—their focus was on exposing data on the Web rather than modeling.
https://en.wikipedia.org/wiki/Frame_language
TheFAO geopolitical ontologyis anontologydeveloped by theFood and Agriculture Organization of the United Nations (FAO)to describe, manage and exchange data related togeopoliticalentities such as countries, territories, regions and other similar areas. Anontologyis a kind of dictionary that describes information in a certain domain using concepts and relationships. It is often implemented usingOWL(Web Ontology Language), anXML-based standard language that can be interpreted by computers. The advantage of describing information in an ontology is that it enables to acquiredomain knowledgeby defining hierarchical structures of classes, adding individuals, setting object properties and datatype properties, and assigning restrictions. The geopolitical ontology provides names in seven languages (Arabic, Chinese, French, English, Spanish, Russian and Italian) and identifiers in various international coding systems (ISO2,ISO3,AGROVOC,FAOSTAT, FAOTERM,[2]GAUL,UN,UNDPandDBPediaID codes) for territories and groups. Moreover, theFAOgeopolitical ontology tracks historical changes from 1985 up until today;[3]providesgeolocation(geographical coordinates); implements relationships amongcountriesand countries, or countries and groups, including properties such ashas border with,is predecessor of,is successor of,is administered by,has members, andis in group; and disseminates country statistics including country area, land area, agricultural area,GDPorpopulation. The FAO geopolitical ontology provides a structured description of data sources. This includes: source name, source identifier, source creator and source's update date. Concepts are described using theDublin Corevocabulary[4] In summary, the main objectives of the FAO geopolitical ontology are: It is possible todownloadthe FAO geopolitical ontology in OWL[5]and RDF[6]formats. Documentation is available in theFAO Country ProfilesGeopolitical information web page.[7] The geopolitical ontology contains : TheFAOgeopolitical ontology is implemented inOWL. It consists of classes, properties, individuals and restrictions. Table 1 shows all classes, gives a brief description and lists some individuals that belong to each class. Note that the current version of the geopolitical ontology does not provide individuals of the class "disputed" territories. Table 2 and Table 3 illustrate datatype properties and object properties. The FAO Geopolitical ontology is embracing the W3CLinked Open Data(LOD) initiative[14]and released itsRDFversion of the geopolitical ontology in March 2011. The term 'Linked Open Data' refers to a set of best practices for publishing and connecting structured data on the Web. The key technologies that support Linked Data are URIs, HTTP and RDF. The RDF version of the geopolitical ontology is compliant with all Linked data principles to be included in the Linked Open Data cloud, as explained in the following.[15][16] Every resource in the OWL format of the FAO Geopolitical Ontology has a unique URI. Dereferenciation was implemented to allow for three different URIs to be assigned to each resource as follows: In addition the current URIs used for OWL format needed to be kept to allow for backwards compatibility for other systems that are using them. Therefore, the new URIs for the FAO Geopolitical Ontology in LOD were carefully created, using “Cool URIs for Semantic Web” and considering other good practices for URIs, such as DBpedia URIs. 
The URIs of the geopolitical ontology need to be permanent; consequently, all transient information, such as year, version, or format, was avoided in the definition of the URIs. The new URIs can be accessed online.[6] In addition, "owl:sameAs" is used to map the new URIs to the OWL representation. When a non-information resource is looked up without any specific representation format, the server redirects the request to an information resource with an HTML representation. For example, to retrieve the resource "Italy",[17] which is a non-information resource, the server redirects to the HTML page for "Italy".[18]

The total number of triple statements in the FAO geopolitical ontology is 22,495. Inclusion in the LOD Cloud requires at least 50 links to a dataset already in the current LOD Cloud; the FAO geopolitical ontology has 195 links to DBpedia, which is already part of the LOD Cloud. The FAO geopolitical ontology provides the entire dataset as an RDF dump.[19] The RDF version of the FAO geopolitical ontology has already been registered in CKAN,[20] and a request was made to add it to the LOD Cloud.

The FAO Country Profiles is an information retrieval tool which groups the FAO's vast archive of information on its global activities in agriculture and rural development in one single area and catalogues it exclusively by country. The FAO Country Profiles system provides access to country-based heterogeneous data sources.[21] Several benefits are expected from using the geopolitical ontology in the system,[22] and the ontology is described on a dedicated page within the FAO Country Profiles.
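The redirection behaviour described above can be sketched with a short client-side example. The URI below is a placeholder, not the real FAO identifier; the point is only to show how one URI can yield an RDF or an HTML information resource depending on the Accept header.

```python
# A sketch of Linked Data content negotiation for a non-information resource.
import requests

uri = "http://example.org/geopolitical/Italy"   # placeholder URI (assumption)

rdf = requests.get(uri, headers={"Accept": "application/rdf+xml"})
html = requests.get(uri, headers={"Accept": "text/html"})

# After redirection, each response ends up at a different information resource.
print(rdf.url, rdf.headers.get("Content-Type"))
print(html.url, html.headers.get("Content-Type"))
```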
https://en.wikipedia.org/wiki/Geopolitical_ontology
The International Defence Enterprise Architecture Specification for exchange Group (IDEAS Group) is a project involving four nations (plus NATO as observers) and covering MODAF (UK), DoDAF (US), DNDAF[1] (Canada) and the Australian Defence Architecture Framework (AUSDAF). The deliverable of the project is a data exchange format for military Enterprise Architectures. The initial scope for exchange is the architectural data required to support coalition operations planning. The work has begun with the development of a formal ontology to specify the data exchange semantics. The W3C Resource Description Framework (RDF) and Web Ontology Language (OWL) will be the formats used for data exchange. A demonstration of multinational interoperability is scheduled for September 2007, based on exchanging process models for casualty tracking. The need for IDEAS was identified in 2005 by the Australian, Canadian, UK and US defence departments. The main purpose of IDEAS is to support coalition military operations planning. The ability to exchange architectures between countries enables better understanding of each other's capabilities, communications mechanisms and standard procedures. IDEAS is a formal, higher-order, 4D (see four dimensionalism) ontology. It is extensional (see Extension (metaphysics)), using physical existence as its criterion for identity. In practical terms, this means the ontology is well suited to managing change over time and identifying elements with a degree of precision that is not possible using names alone. The ontology is being built using the BORO Method, which has proven useful for the multidisciplinary team working on IDEAS. BORO forces the ontology developer to consider each concept in terms of its physical extent. This means there can be no argument about names or meaning—something either exists or it doesn't. The BORO Method also deals with classes and relationships by tracing them back to their members (classes) or ends (relationships). The concepts specified in IDEAS and the BORO Method have also been employed in the Information Exchange Standard in UK Government. To date, there have been three IDEAS implementations. The IDEAS work has been presented at a number of conferences.[2][3] It has also been cited in a Cutter Consortium white paper[4] and in a book on Systems Engineering from Springer Verlag.[5]
https://en.wikipedia.org/wiki/IDEAS_Group
The Meta-Object Facility (MOF) is an Object Management Group (OMG) standard for model-driven engineering. Its purpose is to provide a type system for entities in the CORBA architecture and a set of interfaces through which those types can be created and manipulated. MOF may be used for domain-driven software design and object-oriented modelling.[1]: 15  MOF was developed to provide a type system for use in the CORBA architecture, a set of schemas by which the structure, meaning and behaviour of objects could be defined, and a set of CORBA interfaces through which these schemas could be created, stored and manipulated.[2] MOF is designed as a four-layered architecture. It provides a meta-meta model at the top layer, called the M3 layer. This M3-model is the language used by MOF to build metamodels, called M2-models. The most prominent example of a layer-2 MOF model is the UML metamodel, the model that describes UML itself. These M2-models describe elements of the M1-layer, and thus M1-models; these would be, for example, models written in UML. The last layer is the M0-layer, or data layer, which is used to describe real-world objects. Beyond the M3-model, MOF describes the means to create and manipulate models and metamodels by defining CORBA interfaces that describe those operations. Because of the similarities between the MOF M3-model and UML structure models, MOF metamodels are usually modeled as UML class diagrams. A conversion from MOF specification models (M3-, M2-, or M1-layer) to W3C XML and XSD is specified by the XMI (ISO/IEC 19503) specification. XMI is an XML-based exchange format for models.[1]: xi  From MOF to Java there is the Java Metadata Interchange (JMI) specification by the Java Community Process.[1]: xi  MOF also provides specifications that ease the automatic generation of CORBA IDL interfaces.[1]: 3  MOF is a closed metamodeling architecture; it defines an M3-model, which conforms to itself. MOF allows a strict meta-modeling architecture: every model element on every layer is strictly in correspondence with a model element of the layer above. MOF only provides a means to define the structure, or abstract syntax, of a language or of data. For defining metamodels, MOF plays exactly the role that EBNF plays for defining programming language grammars. MOF is a Domain Specific Language (DSL) used to define metamodels, just as EBNF is a DSL for defining grammars. Similarly to EBNF, MOF could be defined in MOF. In short, MOF uses the notion of MOF::Classes (not to be confused with UML::Classes), as known from object orientation, to define concepts (model elements) on a metalayer. MOF may be used to define object-oriented metamodels (such as UML) as well as non-object-oriented metamodels (e.g. a Petri net or a Web Service metamodel). As of May 2006, the OMG had defined two compliance points for MOF: EMOF (Essential MOF) and CMOF (Complete MOF). In June 2006, a request for proposal was issued by the OMG for a third variant, SMOF (Semantic MOF). The variant Ecore defined in the Eclipse Modeling Framework is more or less aligned with OMG's EMOF. Another related standard is OCL, which describes a formal language that can be used to define model constraints in terms of predicate logic. QVT, which introduces means to query, view and transform MOF-based models, is a very important standard, approved in 2008. See Model Transformation Language for further information.
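The four-layer idea can be made concrete with a small, purely illustrative sketch (not OMG-specified code, and the names are invented for the example): an M3 meta-metaclass is used to define an M2 metamodel element, which defines an M1 model element, whose instances are M0 runtime data.

```python
# Illustrative only: the nesting of instantiation across the MOF layers.
class MOFClass:                       # M3: the meta-metamodel element
    def __init__(self, name, attributes):
        self.name, self.attributes = name, attributes

# M2: a toy UML-like metamodel element defined with the M3 element
uml_class = MOFClass("Class", attributes=["name", "properties"])

# M1: a user model expressed with the M2 element
customer_model = {"metaclass": uml_class, "name": "Customer",
                  "properties": ["id", "email"]}

# M0: a runtime object conforming to the M1 model
customer_42 = {"model": customer_model, "id": 42, "email": "a@example.org"}

print(customer_42["model"]["metaclass"].name)   # -> "Class"
```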
MOF is an international standard, ISO/IEC 19502:2005 (Meta Object Facility). MOF can be viewed as a standard for writing metamodels, for example in order to model the abstract syntax of Domain Specific Languages. Kermeta is an extension to MOF that allows executable actions to be attached to EMOF meta-models, hence making it possible to also model a DSL's operational semantics and readily obtain an interpreter for it. JMI defines a Java API for manipulating MOF models. OMG's MOF is not to be confused with the Managed Object Format (MOF) defined by the Distributed Management Task Force (DMTF) in section 6 of the Common Information Model (CIM) Infrastructure Specification, version 2.5.0.[3]
https://en.wikipedia.org/wiki/Meta-Object_Facility
The Object Management Group (OMG) is a computer industry standards consortium. OMG task forces develop enterprise integration standards for a range of technologies. The goal of the OMG was a common portable and interoperable object model with methods and data that work using all types of development environments on all types of platforms.[1] The group provides only specifications, not implementations. But before a specification can be accepted as a standard by the group, the members of the submitter team must guarantee that they will bring a conforming product to market within a year. This is an attempt to prevent unimplemented (and unimplementable) standards. Other private companies or open source groups are encouraged to produce conforming products, and OMG is attempting to develop mechanisms to enforce true interoperability. OMG hosts four technical meetings per year for its members and interested nonmembers. The technical meetings provide a neutral forum to discuss, develop and adopt standards that enable software interoperability. Founded in 1989 by eleven companies (including Hewlett-Packard, IBM, Sun Microsystems, Apple Computer, American Airlines, iGrafx, and Data General), OMG's initial focus was to create a heterogeneous distributed object standard. The founding executive team included Christopher Stone and John Slitz. Current leadership includes chairman and CEO Richard Soley, President and COO Bill Hoffman and Vice President and Technical Director Jason McC. Smith. Since 2000, the group's international headquarters has been located in Boston, Massachusetts. In 1997, the Unified Modeling Language (UML) was added to the list of OMG adopted technologies. UML is a standardized general-purpose modeling language in the field of object-oriented software engineering. In June 2005, the Business Process Management Initiative (BPMI.org) and OMG announced the merger of their respective Business Process Management (BPM) activities to form the Business Modeling and Integration Domain Task Force (BMI DTF). In 2006 the Business Process Model and Notation (BPMN) was adopted as a standard by OMG. In 2007 the Business Motivation Model (BMM) was adopted as a standard by the OMG. The BMM is a metamodel that provides a vocabulary for corporate governance and strategic planning and is particularly relevant to businesses undertaking governance, regulatory compliance, business transformation and strategic planning activities. In 2009 OMG, together with the Software Engineering Institute at Carnegie Mellon, launched the Consortium of IT Software Quality (CISQ). In 2011 OMG formed the Cloud Standards Customer Council.[2] Founding sponsors included CA, IBM, Kaavo, Rackspace and Software AG. The CSCC is an OMG end-user advocacy group dedicated to accelerating the successful adoption of cloud computing and to drilling down into the standards, security and interoperability issues surrounding the transition to the cloud. In September 2011, the OMG Board of Directors voted to adopt the Vector Signal and Image Processing Library (VSIPL) as the latest OMG specification. Work for adopting the specification was led by Mentor Graphics' Embedded Software Division, RunTime Computing Solutions and The Mitre Corporation, as well as the High Performance Embedded Computing Software Initiative (HPEC-SI). VSIPL is an application programming interface (API). VSIPL and VSIPL++ contain functions used for common signal processing kernels and other computations. These functions include basic arithmetic, trigonometric, transcendental, signal processing, linear algebra, and image processing functions.
The VSIPL family of libraries has been implemented by multiple vendors for a range of processor architectures, including x86, PowerPC, Cell, and NVIDIA GPUs. VSIPL and VSIPL++ are designed to maintain portability across a range of processor architectures. Additionally, VSIPL++ was designed from the start to include support for parallelism. In late 2012 and early 2013, the group's Board of Directors adopted the Automated Function Point (AFP) specification.[3] The push for adoption was led by the Consortium for IT Software Quality (CISQ). AFP provides a standard for automating the popular function point measure according to the counting guidelines of the International Function Point Users Group (IFPUG). On March 27, 2014, OMG announced it would be managing the newly formed Industrial Internet Consortium (IIC).[4][5] Of the many standards maintained by the OMG, 13 have been ratified as ISO standards.[6]
https://en.wikipedia.org/wiki/Object_Management_Group
In knowledge representation, particularly in the Semantic Web, a metaclass is a class whose instances can themselves be classes. Similar to their role in programming languages, metaclasses in ontology languages can have properties otherwise applicable only to individuals, while retaining the same class's ability to be classified in a concept hierarchy. This enables knowledge about instances of those metaclasses to be inferred by semantic reasoners using statements made in the metaclass. Metaclasses thus enhance the expressivity of knowledge representations in a way that can be intuitive for users. While classes are suitable to represent a population of individuals, metaclasses can, as one of their features, be used to represent the conceptual dimension of an ontology.[1] Metaclasses are supported in the Web Ontology Language (OWL) and the data-modeling vocabulary RDFS. Metaclasses are often modeled by setting them as the object of claims involving rdf:type and rdfs:subClassOf—built-in properties commonly referred to as instance of and subclass of. Instance of entails that the subject of the claim is an instance, i.e. an individual that is a member of a class. Subclass of entails that the subject is a class. In the context of instance of and subclass of, the key difference between metaclasses and ordinary classes is that metaclasses are the object of instance of claims made about a class, while ordinary classes are not objects of such claims. For example, in the claim "Bob instance of Human", Bob is the subject and an instance, while the object, Human, is an ordinary class; but a further claim that "Human instance of Animal species" makes Animal species a metaclass, because it has a member, Human, that is also a class. OWL 2 DL supports metaclasses by a feature called punning,[2] in which one entity is interpreted as two different types of thing—a class and an individual—depending on its syntactic context. For example, through punning, an ontology could have a concept hierarchy such as "Harry the eagle instance of golden eagle", "golden eagle subclass of bird", and "golden eagle instance of species". In this case, the punned entity would be golden eagle, because it is represented as a class (second claim) and an instance (third claim), whereas the metaclass would be species, as it has an instance that is a class. Punning also enables other properties that would otherwise be applicable only to ordinary instances to be used directly on classes, for example "golden eagle conservation status least concern".[3] Having arisen from the fields of knowledge representation, description logic and formal ontology, Semantic Web languages have a closer relationship to philosophical ontology than do conventional programming languages such as Java or Python. Accordingly, the nature of metaclasses is informed by philosophical notions such as abstract objects, the abstract and concrete, and the type-token distinction. Metaclasses permit concepts to be construed as tokens of other concepts while retaining their ontological status as types. This enables types to be enumerated over, while preserving the ability to inherit from types. For example, metaclasses could allow a machine reasoner to infer from a human-friendly ontology how many elements are in the periodic table, or, given that number of protons is a property of chemical elements and isotopes are a subclass of elements, how many protons exist in the isotope hydrogen-2.
Metaclasses are sometimes organized by levels, in a way similar to the simple theory of types,[4] where classes that are not metaclasses are assigned the first level, classes of classes in the first level are in the second level, classes of classes in the second level are in the next level, and so on.[5] Following the type-token distinction, real-world objects such as Abraham Lincoln or the planet Mars are grouped into classes of similar objects: Abraham Lincoln is said to be an instance of human, and Mars is an instance of planet. This is a kind of is-a relationship. Metaclasses are classes of classes, such as, for example, the nuclide concept. In chemistry, atoms are often classified as elements and, more specifically, isotopes. The last glass of water one drank contained many hydrogen atoms, each of which is an instance of hydrogen. Hydrogen itself, a class of atoms, is an instance of nuclide. Nuclide is a class of classes, hence a metaclass. In RDF, the rdf:type property is used to state that a resource is an instance of a class.[6] This enables metaclasses to be created simply by using rdf:type in a chain-like fashion. For example, in the two triples "Harry the eagle rdf:type golden eagle" and "golden eagle rdf:type species", the resource species is a metaclass, because golden eagle is used as a class in the first statement and the class golden eagle is said to be an instance of the class species in the second statement. RDF also provides rdf:Property as a way to create properties beyond those defined in the built-in vocabulary. Properties can be used directly on metaclasses, for example "species quantity 8.7 million", where quantity is a property defined via rdf:Property and species is a metaclass per the preceding example. RDFS, an extension of RDF, introduced rdfs:Class and rdfs:subClassOf and enriched how vocabularies can classify concepts.[7][8] Whereas rdf:type enables vocabularies to represent instantiation, the property rdfs:subClassOf enables vocabularies to represent subsumption. RDFS thus makes it possible for vocabularies to represent taxonomies, also known as subsumption hierarchies or concept hierarchies, which is an important addition to the type–token distinction made possible by RDF. Notably, the resource rdfs:Class is an instance of itself,[7] demonstrating both the use of metaclasses in the language's internal implementation and a reflexive usage of rdf:type. RDFS is its own metamodel.[9] In some OWL flavors, such as OWL 1 DL, entities can be either classes or instances, but cannot be both. This limitation forbids metaclasses and metamodeling.[10] This is not the case in the OWL 1 Full flavor, but that flavor makes the model computationally undecidable.[11] In OWL 2, metaclasses can be implemented with punning, which is a way to treat classes as if they were individuals.[2] Other approaches have also been proposed and used to check the properties of ontologies at a meta level.[12] OWL 2 supports metaclasses through a feature called punning. In metaclasses implemented by punning, the same subject is interpreted as two fundamentally different types of thing—a class and an individual—depending on its syntactic context. This is similar to a pun in natural language, where different senses of the same word are emphasized to illustrate a point. Unlike in natural language, where puns are typically used for comedic or rhetorical effect, the main goal of punning in Semantic Web technologies is to make concepts easier to represent, closer to how they are discussed in everyday speech or academic literature.
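The rdf:type chain above can be reproduced with a few lines of rdflib. This is a small sketch rather than a normative recipe; the ex: namespace and the metaclass-detection rule (any class that appears as the object of rdf:type for a resource that is itself used as a class) are illustrative choices.

```python
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")   # illustrative namespace
g = Graph()
g.add((EX.harryTheEagle, RDF.type, EX.goldenEagle))   # individual -> class
g.add((EX.goldenEagle, RDFS.subClassOf, EX.bird))     # class hierarchy
g.add((EX.goldenEagle, RDF.type, EX.species))         # class -> metaclass

def is_used_as_class(node):
    # a node is used as a class if something is typed with it,
    # or if it occurs as the subject of an rdfs:subClassOf statement
    return (None, RDF.type, node) in g or (node, RDFS.subClassOf, None) in g

metaclasses = {o for s, _, o in g.triples((None, RDF.type, None))
               if is_used_as_class(s)}
print(metaclasses)   # only EX.species qualifies here
```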
Although OWL 2 permits the same symbol to assume different roles, its standard semantics (known as Direct Semantics) still interprets the symbol differently depending on whether it is used as an individual, a class, or a property.[13][14] In the ontology editor Protégé, metaclasses are templates for the classes that are their instances.[15] Some ontologies, such as that of the Cyc AI project, classify both classes and metaclasses.[5] Classes are divided into fixed-order classes and variable-order classes. In the case of fixed-order classes, an order is attributed to metaclasses by measuring the distance to individuals, in terms of the number of "instance of" triples needed to reach an individual. Classes that are not metaclasses are classes of individuals, so their order is 1 (first-order classes). Metaclasses whose instances are first-order classes have order 2 (second-order classes), and so on. Variable-order metaclasses, on the other hand, can have instances of different orders; one example of a variable-order metaclass is the class of all fixed-order classes.
https://en.wikipedia.org/wiki/Metaclass_(Semantic_Web)
Machine interpretation of documents and services in a Semantic Web environment is primarily enabled by (a) the capability to mark documents, document segments and services with semantic tags and (b) the ability to establish contextual relations between the tags with a domain model, which is formally represented as an ontology. Human beings use natural languages to communicate an abstract view of the world. Natural language constructs are symbolic representations of human experience and are close to the conceptual model that Semantic Web technologies deal with. Thus, natural language constructs have naturally been used to represent ontology elements. This makes it convenient to apply Semantic Web technologies in the domain of textual information. In contrast, multimedia documents are perceptual recordings of human experience. An attempt to use a conceptual model to interpret these perceptual records is severely impaired by the semantic gap that exists between the perceptual media features and the conceptual world. Notably, concepts have their roots in the perceptual experience of human beings, and the apparent disconnect between the conceptual and the perceptual world is rather artificial. The key to semantic processing of multimedia data lies in harmonizing the seemingly isolated conceptual and perceptual worlds. Representation of domain knowledge needs to be extended to enable perceptual modeling, over and above the conceptual modeling that is currently supported. The perceptual model of a domain primarily comprises observable media properties of its concepts. Such perceptual models are useful for semantic interpretation of media documents, just as conceptual models help in the semantic interpretation of textual documents. The Multimedia Ontology Language (M-OWL) is an ontology representation language that enables such perceptual modeling. It assumes a causal model of the world, where observable media features are caused by underlying concepts. In MOWL, it is possible to associate different types of media features, in different media formats and at different levels of abstraction, with the concepts in a closed domain. The associations are probabilistic in nature, to account for inherent uncertainties in the observation of media patterns. The spatial and temporal relations between the media properties characterizing a concept (or event) can also be expressed using MOWL. Often the concepts in a domain inherit the media properties of related concepts, as when a historic monument inherits the color and texture properties of its building material. It is possible to reason with the media properties of the concepts in a domain to derive an observation model for a concept. Finally, MOWL supports an abductive reasoning framework using Bayesian networks, which is robust against imperfect observations of media data. The W3C has undertaken the initiative of standardizing ontology representation for web-based applications. The Web Ontology Language (OWL), standardized in 2004 after maturing through XML(S), RDF(S) and DAML+OIL, is a result of that effort. Ontologies in OWL (and some of its predecessor languages) have been successfully used to establish the semantics of text in specific application contexts. The concepts and properties in these traditional ontology languages are expressed as text, making an ontology readily usable for semantic analysis of textual documents. Semantic processing of media data, however, calls for perceptual modeling of domain concepts with their media properties.
M-OWL has been proposed as an ontology language that enables such perceptual modeling. While M-OWL is syntactically an extension of OWL, it uses a completely different semantics, based on a probabilistic causal model of the world. These syntactic extensions make it possible to associate media properties with concepts and to express their probabilistic and spatio-temporal relations, and MOWL is accompanied by reasoning tools that support the derivation of observation models and abductive inference with Bayesian networks.
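The causal, abductive style of reasoning described above can be illustrated with a toy example. This is not MOWL syntax; the concepts, media features and probabilities below are invented for illustration, and the sketch simply applies Bayes' rule to recover a concept from observed media features.

```python
# A concept probabilistically "causes" observable media features; observing
# features lets us reason back (abductively) to the concept.
prior = {"TajMahal": 0.01, "Other": 0.99}              # illustrative concepts
likelihood = {                                         # P(feature | concept)
    "TajMahal": {"white_marble_texture": 0.9, "dome_shape": 0.8},
    "Other":    {"white_marble_texture": 0.05, "dome_shape": 0.1},
}

def posterior(observed_features):
    scores = {}
    for concept, p in prior.items():
        for feature in observed_features:
            p *= likelihood[concept][feature]
        scores[concept] = p
    total = sum(scores.values())
    return {concept: score / total for concept, score in scores.items()}

print(posterior({"white_marble_texture", "dome_shape"}))
```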
https://en.wikipedia.org/wiki/Multimedia_Web_Ontology_Language
A semantic reasoner, reasoning engine, rules engine, or simply a reasoner, is a piece of software able to infer logical consequences from a set of asserted facts or axioms. The notion of a semantic reasoner generalizes that of an inference engine by providing a richer set of mechanisms to work with. The inference rules are commonly specified by means of an ontology language, and often a description logic language. Many reasoners use first-order predicate logic to perform reasoning; inference commonly proceeds by forward chaining and backward chaining. There are also examples of probabilistic reasoners, including non-axiomatic reasoning systems[1] and probabilistic logic networks.[2] One notable reasoner in this area is S-LOR (Sensor-based Linked Open Rules), a rule-based reasoning engine and an approach for sharing and reusing interoperable rules to deduce meaningful knowledge from sensor measurements; it is released under the GNU GPLv3 license.
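Forward chaining, mentioned above, repeatedly applies rules to the known facts until no new facts can be derived. The following is a minimal sketch (not modeled on any particular product) for two common RDFS-style rules: propagation of rdf:type along rdfs:subClassOf, and subclass transitivity.

```python
SUBCLASS, TYPE = "rdfs:subClassOf", "rdf:type"

facts = {("ex:GoldenEagle", SUBCLASS, "ex:Bird"),
         ("ex:Bird", SUBCLASS, "ex:Animal"),
         ("ex:harry", TYPE, "ex:GoldenEagle")}

def forward_chain(facts):
    facts = set(facts)
    while True:
        new = set()
        for (a, p1, b) in facts:
            for (c, p2, d) in facts:
                if p1 == SUBCLASS and p2 == SUBCLASS and b == c:
                    new.add((a, SUBCLASS, d))     # subClassOf is transitive
                if p1 == TYPE and p2 == SUBCLASS and b == c:
                    new.add((a, TYPE, d))         # instances inherit supertypes
        if new <= facts:                          # fixed point reached
            return facts
        facts |= new

print(sorted(forward_chain(facts)))   # includes ("ex:harry", TYPE, "ex:Animal")
```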
https://en.wikipedia.org/wiki/Semantic_reasoner
Simple Knowledge Organization System (SKOS) is a W3C recommendation designed for representation of thesauri, classification schemes, taxonomies, subject-heading systems, or any other type of structured controlled vocabulary. SKOS is part of the Semantic Web family of standards built upon RDF and RDFS, and its main objective is to enable easy publication and use of such vocabularies as linked data. The most direct ancestor of SKOS was the RDF thesaurus work undertaken in the second phase of the EU DESIRE project.[1] Motivated by the need to improve the user interface and usability of multi-service browsing and searching,[2] a basic RDF vocabulary for thesauri was produced. As noted later in the SWAD-Europe workplan, the DESIRE work was adopted and further developed in the SOSIG and LIMBER projects. A version of the DESIRE/SOSIG implementation was described in W3C's QL'98 workshop, motivating early work on RDF rule and query languages: A Query and Inference Service for RDF.[3] SKOS built upon the output of the Language Independent Metadata Browsing of European Resources (LIMBER) project, funded by the European Community as part of the Information Society Technologies programme. In the LIMBER project, CCLRC further developed an RDF thesaurus interchange format[4] which was demonstrated on the European Language Social Science Thesaurus (ELSST) at the UK Data Archive, as a multilingual version of the English-language Humanities and Social Science Electronic Thesaurus (HASSET), which was planned to be used by the Council of European Social Science Data Archives (CESSDA). SKOS as a distinct initiative began in the SWAD-Europe project, bringing together partners from DESIRE, SOSIG (ILRT) and LIMBER (CCLRC) who had worked with earlier versions of the schema. It was developed in the Thesaurus Activity Work Package of the Semantic Web Advanced Development for Europe (SWAD-Europe) project.[5] SWAD-Europe was funded by the European Community as part of the Information Society Technologies programme. The project was designed to support W3C's Semantic Web Activity through research, demonstrators and outreach efforts conducted by the five project partners: ERCIM, the ILRT at Bristol University, HP Labs, CCLRC and Stilo. The first release of SKOS Core and SKOS Mapping was published at the end of 2003, along with other deliverables on RDF encoding of multilingual thesauri[6] and thesaurus mapping.[7] Following the termination of SWAD-Europe, the SKOS effort was supported by the W3C Semantic Web Activity[8] in the framework of the Best Practice and Deployment Working Group.[9] During this period, the focus was put both on consolidation of SKOS Core and on development of practical guidelines for porting and publishing thesauri for the Semantic Web. The main published SKOS documents — the SKOS Core Guide,[10] the SKOS Core Vocabulary Specification,[11] and the Quick Guide to Publishing a Thesaurus on the Semantic Web[12] — were developed through the W3C Working Draft process. The principal editors of SKOS were Alistair Miles,[13] initially Dan Brickley, and Sean Bechhofer. The Semantic Web Deployment Working Group,[14] chartered for two years (May 2006 – April 2008), included in its charter the goal of moving SKOS forward on the W3C Recommendation track. The roadmap projected SKOS as a Candidate Recommendation by the end of 2007, and as a Proposed Recommendation in the first quarter of 2008.
The main issues to solve were determining its precise scope of use and its articulation with other RDF languages and standards used in libraries (such as Dublin Core).[15][16] On August 18, 2009, W3C released the new standard, which builds a bridge between the world of knowledge organization systems – including thesauri, classifications, subject headings, taxonomies, and folksonomies – and the linked data community, bringing benefits to both. Libraries, museums, newspapers, government portals, enterprises, social networking applications, and other communities that manage large collections of books, historical artifacts, news reports, business glossaries, blog entries, and other items can now use SKOS[17] to leverage the power of linked data. SKOS was originally designed as a modular and extensible family of languages, organized as SKOS Core, SKOS Mapping, SKOS Extensions, and a metamodel. The entire specification is now complete within the namespace http://www.w3.org/2004/02/skos/core#. In addition to the reference itself, the SKOS Primer (a W3C Working Group Note) summarizes the Simple Knowledge Organization System. SKOS[18] defines the classes and properties sufficient to represent the common features found in a standard thesaurus. It is based on a concept-centric view of the vocabulary, where the primitive objects are not terms but abstract notions represented by terms. Each SKOS concept is defined as an RDF resource. Each concept can have RDF properties attached, including labels, notations, and documentation notes. Concepts can be organized in hierarchies using broader-narrower relationships, or linked by non-hierarchical (associative) relationships. Concepts can be gathered in concept schemes, to provide consistent and structured sets of concepts representing all or part of a controlled vocabulary. The principal element categories of SKOS are concepts, labels, notations, documentation, semantic relations, mapping properties, and collections. The SKOS vocabulary is based on concepts. Concepts are the units of thought—ideas, meanings, or objects and events (instances or categories)—which underlie many knowledge organization systems. As such, concepts exist in the mind as abstract entities which are independent of the terms used to label them. In SKOS, a Concept (based on the OWL Class) is used to represent items in a knowledge organization system (terms, ideas, meanings, etc.) or such a system's conceptual or organizational structure.[19] A ConceptScheme is analogous to a vocabulary, thesaurus, or other way of organizing concepts. SKOS does not constrain a concept to be within a particular scheme, nor does it provide any way to declare a complete scheme—there is no way to say that the scheme consists only of certain members. A topConcept is (one of) the upper concept(s) in a hierarchical scheme. Each SKOS label is a string of Unicode characters, optionally with a language tag, that is associated with a concept. The prefLabel is the preferred human-readable string (at most one per language tag), while altLabel can be used for alternative strings, and hiddenLabel can be used for strings that are useful to associate but are not meant for humans to read. A SKOS notation is similar to a label, but this literal string has a datatype, like integer, float, or date; the datatype can even be made up (see section 6.5.1, "Notations, Typed Literals and Datatypes", in the SKOS Reference). Notations are useful for classification codes and other strings not recognizable as words.
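The concept/label pattern described above can be written directly with rdflib, which ships a SKOS namespace. The URIs and labels below are illustrative; the sketch only shows the shape of the data, not any particular published vocabulary.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/scheme/")   # illustrative scheme namespace
g = Graph()
g.add((EX.animals, RDF.type, SKOS.ConceptScheme))
g.add((EX.eagle, RDF.type, SKOS.Concept))
g.add((EX.eagle, SKOS.inScheme, EX.animals))
g.add((EX.eagle, SKOS.prefLabel, Literal("eagle", lang="en")))
g.add((EX.eagle, SKOS.prefLabel, Literal("aigle", lang="fr")))
g.add((EX.eagle, SKOS.altLabel, Literal("eagles", lang="en")))
g.add((EX.eagle, SKOS.notation, Literal("B-201")))   # a made-up notation code

print(g.serialize(format="turtle"))
```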
The Documentation or Note properties provide basic information about SKOS concepts. All of these properties are considered a type of skos:note; they just provide more specific kinds of information. The property definition, for example, should contain a full description of the subject resource. More specific note types can be defined in a SKOS extension, if desired. A query for "<A> skos:note ?" will obtain all the notes about <A>, including definitions, examples, and scope, history, change, and editorial documentation. Any of these SKOS documentation properties can refer to several object types: a literal (e.g., a string); a resource node that has its own properties; or a reference to another document, for example using a URI. This enables the documentation to have its own metadata, like creator and creation date. Specific guidance on SKOS documentation properties can be found in the SKOS Primer documentary notes. SKOS semantic relations are intended to provide ways to declare relationships between concepts within a concept scheme. While there are no restrictions precluding their use with two concepts from separate schemes, this is discouraged, because it is likely to overstate what can be known about the two schemes and perhaps link them inappropriately. The property related simply makes an associative relationship between two concepts; no hierarchy or generality relation is implied. The properties broader and narrower are used to assert a direct hierarchical link between two concepts. The meaning may be unexpected: the relation "<A> broader <B>" means that A has a broader concept called B—hence that B is broader than A. Narrower follows the same pattern. While the casual reader might expect broader and narrower to be transitive properties, SKOS does not declare them as such. Rather, the properties broaderTransitive and narrowerTransitive are defined as transitive super-properties of broader and narrower. These super-properties are (by convention) not used in declarative SKOS statements. Instead, when a broader or narrower relation is used in a triple, the corresponding transitive super-property also holds, and transitive relations can be inferred (and queried) using these super-properties. SKOS mapping properties are intended to express matching (exact or fuzzy) of concepts from one concept scheme to another, and by convention are used only to connect concepts from different schemes. The properties relatedMatch, broadMatch, and narrowMatch are a convenience, with the same meanings as the semantic properties related, broader, and narrower. (See the previous section regarding the meanings of broader and narrower.) The property relatedMatch makes a simple associative relationship between two concepts. When concepts are so closely related that they can generally be used interchangeably, exactMatch is the appropriate property (exactMatch relations are transitive, unlike any of the other Match relations). The closeMatch property indicates concepts that can only sometimes be used interchangeably, and so it is not a transitive property. The concept collections (Collection, orderedCollection) are labeled and/or ordered (orderedCollection) groups of SKOS concepts. Collections can be nested, and can have defined URIs or not (in the latter case they are known as blank nodes). Neither a SKOS Concept nor a ConceptScheme may be a Collection, nor vice versa, and SKOS semantic relations can only be used with a Concept (not a Collection).
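The hierarchy semantics above can be illustrated with a short query. Only direct skos:broader links are asserted; the transitive closure is obtained at query time, here via a SPARQL property path as a stand-in for the skos:broaderTransitive inference a reasoner would normally provide. The URIs are illustrative.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/scheme/")
g = Graph()
g.add((EX.goldenEagle, SKOS.broader, EX.eagle))   # direct links only
g.add((EX.eagle, SKOS.broader, EX.bird))

q = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX ex:   <http://example.org/scheme/>
SELECT ?ancestor WHERE { ex:goldenEagle skos:broader+ ?ancestor }
"""
for row in g.query(q):
    print(row.ancestor)   # eagle and bird (order not guaranteed)
```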
The items in a Collection cannot be connected to other SKOS concepts through the Collection node; individual relations must be defined to each Concept in the Collection. All development work is carried out via the mailing list, which is a completely open and publicly archived[20] mailing list devoted to discussion of issues relating to knowledge organisation systems, information retrieval and the Semantic Web. Anyone may participate informally in the development of SKOS by joining the discussions on public-esw-thes@w3.org – informal participation is warmly welcomed. Anyone who works for a W3C member organisation may formally participate in the development process by joining the Semantic Web Deployment Working Group; this entitles individuals to edit specifications and to vote on publication decisions. There are publicly available SKOS data sources. The SKOS metamodel is broadly compatible with the data model of ISO 25964-1 – Thesauri for Information Retrieval. This data model can be viewed and downloaded from the website for ISO 25964.[42] SKOS development has involved experts from both the RDF and library communities, and SKOS intends to allow easy migration of thesauri defined by standards such as NISO Z39.19-2005[43] or ISO 25964.[42] SKOS is intended to provide a way to make a legacy of concept schemes available to Semantic Web applications that is simpler than the more complex ontology language, OWL. OWL is intended to express complex conceptual structures, which can be used to generate rich metadata and support inference tools. However, constructing useful web ontologies is demanding in terms of expertise, effort, and cost. In many cases, this type of effort might be superfluous or unsuited to requirements, and SKOS might be a better choice. The extensibility of RDF makes possible further incorporation or extension of SKOS vocabularies into more complex vocabularies, including OWL ontologies.
https://en.wikipedia.org/wiki/SKOS
The iPlant Collaborative, renamed Cyverse in 2017, is a virtual organization created by a cooperative agreement funded by the US National Science Foundation (NSF) to create cyberinfrastructure for the plant sciences (botany).[1] The NSF compared cyberinfrastructure to physical infrastructure: "... the distributed computer, information and communication technologies combined with the personnel and integrating components that provide a long-term platform to empower the modern scientific research endeavor".[2] In September 2013 it was announced that the National Science Foundation had renewed iPlant's funding for a second 5-year term, with an expansion of scope to all non-human life science research.[3] The project develops computing systems and software that combine computing resources, like those of TeraGrid, with bioinformatics and computational biology software. Its goal is easier collaboration among researchers, with improved data access and processing efficiency. Primarily centered in the United States, it collaborates internationally. Biology is relying more and more on computers.[4] Plant biology is changing with the rise of new technologies.[5] With the advent of bioinformatics, computational biology, DNA sequencing, geographic information systems and other technologies, computers can greatly assist researchers who study plant life looking for solutions to challenges in medicine, biofuels, biodiversity and agriculture, and to problems like drought tolerance, plant breeding, and sustainable farming.[6] Many of these problems cross traditional disciplines, and facilitating collaboration between plant scientists of diverse backgrounds and specialties is necessary.[6][7][8] In 2006, the NSF solicited proposals to create "a new type of organization – a cyberinfrastructure collaborative for plant science" with a program titled "Plant Science Cyberinfrastructure Collaborative" (PSCIC), with Christopher Greer as program director.[9] A proposal was accepted (adopting the convention of using the word "Collaborative" as a noun) and iPlant was officially created on February 1, 2008.[1][9] Funding was estimated at $10 million per year over five years.[10] Richard Jorgensen led the team through the proposal stage and was the principal investigator (PI) from 2008 to 2009.[10] Gregory Andrews, Vicki Chandler, Sudha Ram and Lincoln Stein served as Co-Principal Investigators (Co-PIs) from 2008 to 2009. In late 2009, Stephen Goff was named PI and Daniel Stanzione was added as a Co-PI.[1][11][12] As of May 2014, Co-PI Stanzione had been replaced by four new Co-PIs: Doreen Ware at Cold Spring Harbor, Nirav Merchant and Eric Lyons at the University of Arizona, and Matthew Vaughn at the Texas Advanced Computing Center.[13] The iPlant project supports what has been called e-Science, a use of information systems technology that is being adopted by the research community in efforts such as the National Center for Ecological Analysis and Synthesis (NCEAS), ELIXIR,[14] and the Bamboo Technology Project that started in September 2010.[15][16] iPlant is "designed to create the foundation to support the computational needs of the research community and facilitate progress toward solutions of major problems in plant biology."[6][17] The project works as a collaboration.
It seeks input from the wider plant science community on what to build.[18] Based on that input, it has enabled easier use of large data sets,[19] created a community-driven research environment to share existing data collections within a research area and between research areas,[20] and shares data with provenance tracking.[21][22] One model studied for collaboration was Wikipedia.[23][24] Several more recent National Science Foundation awards mentioned iPlant explicitly in their descriptions, as either a design pattern to follow or a collaborator with whom the recipient would work.[25] The primary institution for the iPlant project is the University of Arizona, located within the BIO5 Institute in Tucson.[26] Since its inception in 2008, personnel have worked at other institutions including Cold Spring Harbor Laboratory, the University of North Carolina, Wilmington, and the University of Texas at Austin in the Texas Advanced Computing Center.[27] Purdue University and Arizona State University were part of the original project group.[10] Other collaborating institutions that received support from iPlant for their work on a Grand Challenge in phylogenetics starting in March 2009 included Yale University, the University of Florida, and the University of Pennsylvania.[27] A trait evolution group was led at the University of Tennessee.[28] A visualization workshop employing iPlant was run by Virginia Tech in 2011.[29] The NSF requires that funding subcontracts stay within the United States, but international collaboration started in 2009 with the Technical University of Munich[27] and in 2010 with the University of Toronto.[29][30] East Main Evaluation & Consulting provides external oversight, advice, and assistance.[31] The iPlant project makes its cyberinfrastructure available in several different ways and offers services to make it accessible to its primary audience. The design was meant to grow in response to the needs of the research community it serves.[6] The Discovery Environment integrates community-recommended software tools into a system that can handle terabytes of data, using high-performance supercomputers to perform these tasks much more quickly. It has an interface designed to hide the underlying complexity from the end user. The goal was to make the cyberinfrastructure available to non-technical end users who are not as comfortable using a command-line interface.[6][32] A set of application programming interfaces (APIs) allows developers to access iPlant services, including authentication, data management and high-performance supercomputing resources, from custom, locally produced software.[6][33] Atmosphere is a cloud computing platform that provides easy access to pre-configured, frequently used analysis routines, relevant algorithms, and data sets, and accommodates computationally and data-intensive bioinformatics tasks.[6] It uses the Eucalyptus virtualization platform.[34][35] The iPlant Semantic Web effort uses an iPlant-created architecture, protocol, and platform called the Simple Semantic Web Architecture and Protocol (SSWAP) for semantic web linking using a plant-science-focused ontology.[6][36][37] SSWAP is based on the notion of RESTful web services with an ontology based on the Web Ontology Language (OWL).[38][39] The Taxonomic Name Resolution Service (TNRS) is a free utility for correcting and standardizing plant names.
This is needed because plant names that are misspelled, out of date (because a newer synonym is preferred), or incomplete make it hard to use computers to process large lists.[6][40][41] My-Plant.org is a social networking community for plant biologists, educators and others to come together to share information and research, collaborate, and track the latest developments in plant science.[6][42] The My-Plant network uses the terminology of clades to group users, in a manner similar to the phylogenetics of the plants themselves.[42] It was implemented using Drupal as its content management system.[42] The DNA Subway website uses a graphical user interface (GUI) to generate DNA sequence annotations, explore plant genomes for members of gene and transposon families, and conduct phylogenetic analyses. It makes high-level DNA analysis available to faculty and students by simplifying annotation and comparative genomics workflows.[6][43] It was developed for iPlant by the Dolan DNA Learning Center.[44][45]
https://en.wikipedia.org/wiki/SSWAP
Variational message passing (VMP) is an approximate inference technique for continuous- or discrete-valued Bayesian networks with conjugate-exponential parents, developed by John Winn. VMP was developed as a means of generalizing the approximate variational methods used by such techniques as latent Dirichlet allocation, and works by updating an approximate distribution at each node through messages in the node's Markov blanket.

Given some set of hidden variables $H$ and observed variables $V$, the goal of approximate inference is to maximize a lower bound on the (log) probability that the graphical model is in the configuration $V$. Over some probability distribution $Q$ (to be defined later), the log likelihood decomposes as

$$\log P(V) = \sum_{H} Q(H)\log\frac{P(H,V)}{Q(H)} + \sum_{H} Q(H)\log\frac{Q(H)}{P(H\mid V)}.$$

So, if we define our lower bound to be

$$L(Q) = \sum_{H} Q(H)\log\frac{P(H,V)}{Q(H)},$$

then the log likelihood is simply this bound plus the relative entropy between $Q$ and $P(H\mid V)$. Because the relative entropy is non-negative, the function $L$ defined above is indeed a lower bound of the log likelihood of our observation $V$. The distribution $Q$ will have a simpler character than that of $P$, because marginalizing over $P$ is intractable for all but the simplest of graphical models. In particular, VMP uses a factorized distribution

$$Q(H) = \prod_{i} Q_{i}(H_{i}),$$

where $H_{i}$ is a disjoint part of the graphical model. The likelihood estimate needs to be as large as possible; because it is a lower bound, getting closer to $\log P$ improves the approximation of the log likelihood. By substituting in the factorized version of $Q$, $L(Q)$, parameterized over the hidden nodes $H_{i}$ as above, is simply the negative relative entropy between $Q_{j}$ and $Q_{j}^{*}$ plus other terms independent of $Q_{j}$, if $Q_{j}^{*}$ is defined as

$$Q_{j}^{*}(H_{j}) = \frac{1}{Z_{j}}\exp\!\big(\mathbb{E}_{-j}\{\ln P(H,V)\}\big),$$

where $\mathbb{E}_{-j}\{\ln P(H,V)\}$ is the expectation over all distributions $Q_{i}$ except $Q_{j}$ and $Z_{j}$ is a normalizing constant. Thus, if we set $Q_{j}$ to be $Q_{j}^{*}$, the bound $L$ is maximized.

Parents send their children the expectation of their sufficient statistic, while children send their parents their natural parameter; this also requires messages to be sent from the co-parents of the node. Because all nodes in VMP come from exponential families and all parents of nodes are conjugate to their children nodes, the expectation of the sufficient statistic can be computed from the normalization factor. The algorithm begins by computing the expected value of the sufficient statistics at each node. Then, until the likelihood converges to a stable value (this is usually accomplished by setting a small threshold value and running the algorithm until it increases by less than that threshold), each node in turn receives the messages from its parents and children, updates its approximate posterior, and sends updated messages to its neighbours.

Because every child must be conjugate to its parent, this has limited the types of distributions that can be used in the model. For example, the parents of a Gaussian distribution must be a Gaussian distribution (corresponding to the mean) and a gamma distribution (corresponding to the precision, the reciprocal of the variance $\sigma^{2}$ in more common parameterizations). Discrete variables can have Dirichlet parents, and Poisson and exponential nodes must have gamma parents. More recently, VMP has been extended to handle models that violate this conditional conjugacy constraint.[1]
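The fixed-point updates that VMP automates can be sketched for a toy conjugate-exponential model. The example below is not Winn's implementation; it is a minimal mean-field coordinate update for data with unknown mean and precision (Gaussian likelihood, Gaussian-Gamma priors), following the standard conjugate results, where the quantities exchanged between the two factors play the role of the messages described above.

```python
# Model: x_i ~ N(mu, 1/tau), mu ~ N(mu0, 1/(lambda0*tau)), tau ~ Gamma(a0, b0)
# Variational posterior q(mu, tau) = q(mu) q(tau), updated to a fixed point.
import numpy as np

def variational_updates(x, mu0=0.0, lambda0=1.0, a0=1.0, b0=1.0, iters=50):
    n, xbar = len(x), np.mean(x)
    e_tau = a0 / b0                                   # initial E[tau]
    for _ in range(iters):
        # q(mu) = N(mu_n, 1/lambda_n): the "message" from tau is E[tau]
        mu_n = (lambda0 * mu0 + n * xbar) / (lambda0 + n)
        lambda_n = (lambda0 + n) * e_tau
        # q(tau) = Gamma(a_n, b_n): the "messages" from mu are E[mu], E[mu^2]
        e_mu, e_mu2 = mu_n, mu_n**2 + 1.0 / lambda_n
        a_n = a0 + 0.5 * (n + 1)
        b_n = b0 + 0.5 * (np.sum(x**2) - 2 * e_mu * np.sum(x) + n * e_mu2
                          + lambda0 * (e_mu2 - 2 * mu0 * e_mu + mu0**2))
        e_tau = a_n / b_n
    return (mu_n, lambda_n), (a_n, b_n)

x = np.random.default_rng(0).normal(2.0, 0.5, size=200)
q_mu, q_tau = variational_updates(x)
print(q_mu, q_tau)
```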
https://en.wikipedia.org/wiki/Variational_message_passing
Generalized filtering is a generic Bayesian filtering scheme for nonlinear state-space models.[1] It is based on a variational principle of least action, formulated in generalized coordinates of motion.[2] Note that "generalized coordinates of motion" are related to—but distinct from—generalized coordinates as used in (multibody) dynamical systems analysis. Generalized filtering furnishes posterior densities over hidden states (and parameters) generating observed data using a generalized gradient descent on variational free energy, under the Laplace assumption. Unlike classical (e.g. Kalman-Bucy or particle) filtering, generalized filtering eschews Markovian assumptions about random fluctuations. Furthermore, it operates online, assimilating data to approximate the posterior density over unknown quantities, without the need for a backward pass. Special cases include variational filtering,[3] dynamic expectation maximization[4] and generalized predictive coding.

Definition: Generalized filtering rests on the tuple $(\Omega, U, X, S, p, q)$, comprising a sample space $\Omega$, control states $U$, hidden states $X$, sensor states $S$, a generative density $p(\tilde{s}, \tilde{x}, \tilde{u} \mid m)$ over sensor, hidden and control states, and a variational density $q(\tilde{x}, \tilde{u} \mid \tilde{\mu})$ with mean $\tilde{\mu}$. Here ~ denotes a variable in generalized coordinates of motion: $\tilde{u} = [u, u', u'', \ldots]^{T}$.

The objective is to approximate the posterior density over hidden and control states, given sensor states and a generative model, and to estimate the (path integral of) model evidence $p(\tilde{s}(t) \mid m)$ to compare different models. This generally involves an intractable marginalization over hidden states, so model evidence (or marginal likelihood) is replaced with a variational free energy bound.[5] Writing the Gibbs energy as $G(\tilde{s}, \tilde{x}, \tilde{u}) = -\ln p(\tilde{s}, \tilde{x}, \tilde{u} \mid m)$ and denoting the Shannon entropy of the density $q$ by $H[q] = E_{q}[-\log(q)]$, we can write the variational free energy in two ways:

$$F = E_{q}\big[G(\tilde{s}, \tilde{x}, \tilde{u})\big] - H[q] = D_{\mathrm{KL}}\big[q(\tilde{x}, \tilde{u} \mid \tilde{\mu}) \,\big\|\, p(\tilde{x}, \tilde{u} \mid \tilde{s}, m)\big] - \ln p(\tilde{s} \mid m).$$

The second equality shows that minimizing variational free energy (i) minimizes the Kullback-Leibler divergence between the variational and true posterior density and (ii) renders the variational free energy (a bound approximation to) the negative log evidence (because the divergence can never be less than zero).[6] Under the Laplace assumption $q(\tilde{x}, \tilde{u} \mid \tilde{\mu}) = \mathcal{N}(\tilde{\mu}, C)$ the variational density is Gaussian, and the precision that minimizes free energy is $C^{-1} = \Pi = \partial_{\tilde{\mu}\tilde{\mu}} G(\tilde{\mu})$. This means that free energy can be expressed in terms of the variational mean[7] (omitting constants):

$$F(\tilde{s}, \tilde{\mu}) = G(\tilde{s}, \tilde{\mu}) + \tfrac{1}{2}\ln\big\lvert \partial_{\tilde{\mu}\tilde{\mu}} G(\tilde{\mu}) \big\rvert.$$

The variational means that minimize the (path integral of) free energy can now be recovered by solving the generalized filter

$$\dot{\tilde{\mu}} = D\tilde{\mu} - \partial_{\tilde{\mu}} F(s, \tilde{\mu}),$$

where $D$ is a block-matrix derivative operator of identity matrices such that $D\tilde{u} = [u', u'', \ldots]^{T}$.

Generalized filtering is based on the following lemma: the self-consistent solution to $\dot{\tilde{\mu}} = D\tilde{\mu} - \partial_{\tilde{\mu}} F(s, \tilde{\mu})$ satisfies the variational principle of stationary action, where action is the path integral of variational free energy, $S = \int dt\, F(\tilde{s}(t), \tilde{\mu}(t))$.

Proof: self-consistency requires the motion of the mean to be the mean of the motion, $\dot{\tilde{\mu}} = D\tilde{\mu}$, so that $\partial_{\tilde{\mu}} F(s, \tilde{\mu}) = 0$ and, by the fundamental lemma of variational calculus, $\delta_{\tilde{\mu}} S = 0$. Put simply, small perturbations to the path of the mean do not change variational free energy, and it has the least action of all possible (local) paths.
Remarks: Heuristically, generalized filtering performs a gradient descent on variational free energy in a moving frame of reference, $\dot{\tilde{\mu}} - D\tilde{\mu} = -\partial_{\tilde{\mu}} F(s, \tilde{\mu})$, where the frame itself minimizes variational free energy. For a related example in statistical physics, see Kerr and Graham,[8] who use ensemble dynamics in generalized coordinates to provide a generalized phase-space version of the Langevin and associated Fokker-Planck equations.

In practice, generalized filtering uses local linearization[9] over intervals $\Delta t$ to recover discrete updates to the means of hidden variables at each interval (usually the interval between observations).

Usually, the generative density or model is specified in terms of a nonlinear input-state-output model with continuous nonlinear functions $f$ and $g$, mapping hidden and control states to the motion of the hidden states and to the sensor states respectively. The corresponding generalized model (under local linearity assumptions) is obtained from the chain rule. Gaussian assumptions about the random fluctuations $\omega$ then prescribe the likelihood and the empirical priors on the motion of hidden states. The covariances $\tilde{\Sigma} = V \otimes \Sigma$ factorize into a covariance $\Sigma$ among variables and correlations $V$ among generalized fluctuations that encode their autocorrelation; these correlations depend on $\ddot{\rho}(0)$, the second derivative of the autocorrelation function evaluated at zero, which is a ubiquitous measure of roughness in the theory of stochastic processes.[10] Crucially, the precision (inverse variance) of high-order derivatives falls to zero fairly quickly, which means it is only necessary to model relatively low-order generalized motion (usually between two and eight orders) for any given or parameterized autocorrelation function.

When time series are observed as a discrete sequence of $N$ observations, the implicit sampling is treated as part of the generative process, where (using Taylor's theorem) the samples are expressed in terms of the generalized motion at each point in time. In principle, the entire sequence could be used to estimate hidden variables at each point in time. However, the precision of samples in the past and future falls quickly and can be ignored. This allows the scheme to assimilate data online, using local observations around each time point (typically between two and eight).

For any slowly varying model parameters of the equations of motion $f(x, u, \theta)$ or precision $\tilde{\Pi}(x, u, \theta)$, generalized filtering takes an analogous form, where $\mu$ corresponds to the variational mean of the parameters. Here, the solution $\dot{\tilde{\mu}} = 0$ minimizes variational free energy when the motion of the mean is small. This can be seen by noting $\dot{\mu} = \dot{\mu}' = 0 \Rightarrow \partial_{\mu} F = 0 \Rightarrow \delta_{\mu} S = 0$. It is straightforward to show that this solution corresponds to a classical Newton update.[11]

Classical filtering under Markovian or Wiener assumptions is equivalent to assuming that the precision of the motion of random fluctuations is zero. In this limiting case, one only has to consider the states and their first derivative, $\tilde{\mu} = (\mu, \mu')$.
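The local-linearization update referred to above is not reproduced in the source text; a common form of such an update (following Ozaki-style local linearization, and stated here as an assumption rather than a quotation) is:

```latex
\Delta\tilde{\mu} \;=\; \bigl(e^{J\,\Delta t}-I\bigr)\,J^{-1}\,\dot{\tilde{\mu}},
\qquad
J \;=\; \partial_{\tilde{\mu}}\dot{\tilde{\mu}}
     \;=\; D-\partial_{\tilde{\mu}\tilde{\mu}}F(s,\tilde{\mu}).
```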
This means that generalized filtering takes the form of a Kalman-Bucy filter, with prediction and correction terms; substituting this first-order filtering into the discrete update scheme above gives the equivalent of (extended) Kalman filtering.[12]

Particle filtering is a sampling-based scheme that relaxes assumptions about the form of the variational or approximate posterior density. The corresponding generalized filtering scheme is called variational filtering.[3] In variational filtering, an ensemble of particles diffuses over the free-energy landscape in a frame of reference that moves with the expected (generalized) motion of the ensemble. This provides a relatively simple scheme that eschews Gaussian (unimodal) assumptions. Unlike particle filtering, it does not require proposal densities, or the elimination or creation of particles.

Variational Bayes rests on a mean-field partition of the variational density. This partition induces a variational update or step for each marginal density, which is usually solved analytically using conjugate priors. In generalized filtering, this leads to dynamic expectation maximisation,[4] which comprises a D-step that optimizes the sufficient statistics of unknown states, an E-step for parameters and an M-step for precisions.

Generalized filtering is usually used to invert hierarchical models. The ensuing generalized gradient descent on free energy can then be expressed compactly in terms of prediction errors, where (omitting high-order terms) $\Pi^{(i)}$ is the precision of random fluctuations at the i-th level. This is known as generalized predictive coding,[11] with linear predictive coding as a special case.

Generalized filtering has been applied primarily to biological time series—in particular, functional magnetic resonance imaging and electrophysiological data. This is usually in the context of dynamic causal modelling, to make inferences about the underlying architectures of (neuronal) systems generating data.[13] It is also used to simulate inference in terms of generalized (hierarchical) predictive coding in the brain.[14]
https://en.wikipedia.org/wiki/Generalized_filtering