Collective wisdom, also called group wisdom and co-intelligence, is shared knowledge arrived at by individuals and groups through collaboration. Collective intelligence, which is sometimes used synonymously with collective wisdom, is more of a shared decision process than collective wisdom. Unlike collective wisdom, collective intelligence is not uniquely human and has been associated with animal and plant life. Collective intelligence is essentially consensus-driven decision-making, whereas collective wisdom is not necessarily focused on the decision process. Collective wisdom is a more amorphous phenomenon which can be characterized by collective learning over time. Collective wisdom, which may be said to have a more distinctly human quality than collective intelligence, is contained in such early works as the Torah, the Bible, the Koran, the works of Plato, Confucius and Buddha, the Bhagavad Gita, and the many myths and legends from all cultures. Drawing from the idea of universal truth, the point of collective wisdom is to make life easier/more enjoyable through understanding human behavior, whereas the point of collective intelligence is to make life easier/more enjoyable through the application of acquired knowledge. While collective intelligence may be said to have more mathematical and scientific bases, collective wisdom also accounts for the spiritual realm of human behaviors and consciousness. Thomas Jefferson referred to the concept of collective wisdom when he made his statement, "A Nation's best defense is an educated citizenry". In effect, the ideal of a democracy is that government functions best when everyone participates. British philosopher Thomas Hobbes uses his Leviathan to illustrate how mankind's collective consciousness grows to create collective wisdom. Émile Durkheim argues in The Elementary Forms of Religious Life (1912) that society by definition constitutes a higher intelligence because it transcends the individual over space and time, thereby achieving collective wisdom. 19th-century Prussian physicist Gustav Fechner argued for a collective consciousness of mankind, and cited Durkheim as the most credible scholar in the field of "collective consciousness". Fechner also referred to the work of the Jesuit priest Pierre Teilhard de Chardin, whose concept of the noosphere was a precursor to the term collective intelligence. H. G. Wells's concept of the "world brain", as described in his book of essays with the same title, has more recently been examined in depth by Pierre Lévy in his book The Universe-Machine: Creation, Cognition and Computer Culture. Howard Bloom's treatise "The Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century" examines similarities in organizational patterns in nature, human brain function, society, and the cosmos. He also posits the theory that group selection directs evolutionary change through collective information processing. Alexander Flor related the world brain concept to current developments in global knowledge networking spawned by new information and communication technologies in an online paper, A Global Knowledge Network.[1] He also discussed the collective mind within the context of social movements in Asia in the book Development Communication Praxis.[2] Dave Pollard's restatement of collective wisdom: "Many cognitive, coordination and cooperation problems are best solved by canvassing groups (the larger the better) of reasonably informed, unbiased, engaged people.
The group's answer is almost invariably much better than any individual expert's answer, even better than the best answer of the experts in the group." Harnessing the collective wisdom of people is an area of intense contemporary interest and cutting-edge research. The application of the term to methodologies that are designed to harness collective wisdom is credited to the work of Alexander Christakis and his group.[3][4] As the challenges society faces today are extremely complex, the only solution is to develop technologies capable of harnessing the collective intelligence and collective wisdom of many people, or even crowds. The Institute for 21st Century Agoras, founded in 2002 by Alexander Christakis, the Wisdom Research Network of the University of Chicago,[5] launched in 2010, and the MIT Center for Collective Intelligence, founded by Thomas W. Malone in 2007, are some examples. The Collective Wisdom Initiative was formed in 2000 with the support of the Fetzer Institute for the purpose of gathering material on the research, theory, and practice of collective wisdom. It was a collaboration of practitioners and academics in areas such as business, health care, mental health, education, criminal justice, and conflict resolution.[6] Several of the founding members subsequently co-authored The Power of Collective Wisdom, which presents six stances or principles that support the power of collective wisdom: deep listening, suspension of certainty, seeing whole systems/seeking diverse perspectives, respect for others/group discernment, welcoming all that is arising, and trust in the transcendent.[7] Two strands of thought relating to collective wisdom follow very different paths. The first suggests that aggregates of people and information will succeed in advancing wisdom, that wisdom is built on the accumulation of data and knowledge, without a need for judgement or qualification. Some have faulted this belief for failing to take into account the importance of 'adaptive assessment'.[8] The second argues that wisdom is only possible in reflective states of mind, including metacognition. According to Alan Briskin, wisdom requires systematic reflection on the inner self and the outer states of social order. Mark Bauerlein has made the case that the hypercommunication of knowledge has hobbled rather than promoted intellectual development.[9]
https://en.wikipedia.org/wiki/Collective_wisdom
The conventional wisdom or received opinion is the body of ideas or explanations generally accepted by the public and/or by experts in a field.[1] The term "conventional wisdom" dates back to at least 1838, as a synonym for "commonplace knowledge".[2][n 1] It was used in a number of works, occasionally in a benign[3] or neutral[4] sense, but more often pejoratively.[5] Despite this previous usage, the term is often credited to the economist John Kenneth Galbraith, who used it in his 1958 book The Affluent Society:[6] "It will be convenient to have a name for the ideas which are esteemed at any time for their acceptability, and it should be a term that emphasizes this predictability. I shall refer to these ideas henceforth as the conventional wisdom."[7] Galbraith specifically prepended "The" to the phrase to emphasize its uniqueness, and sharpened its meaning to narrow it to those commonplace beliefs that are also acceptable and comfortable to society, thus enhancing their ability to resist facts that might diminish them.[citation needed] He repeatedly referred to it throughout the text of The Affluent Society, invoking it to explain the high degree of resistance in academic economics to new ideas. For these reasons, he is usually credited with the invention and popularization of the phrase in modern usage.[citation needed]
https://en.wikipedia.org/wiki/Conventional_wisdom
Dispersed knowledge in economics is the notion that no single agent has information as to all of the factors which influence prices and production throughout the system.[1] The term has been both expanded upon and popularized by American economist Thomas Sowell.[2] Each agent in a market for assets, goods, or services possesses incomplete knowledge as to most of the factors which affect prices in that market. For example, no agent has full information as to other agents' budgets, preferences, resources or technologies, not to mention their plans for the future and numerous other factors which affect prices in those markets.[3] Market prices are the result of price discovery, in which each agent participating in the market makes use of its current knowledge and plans to decide on the prices and quantities at which it chooses to transact. The resulting prices and quantities of transactions may be said to reflect the current state of knowledge of the agents currently in the market, even though no single agent commands information as to the entire set of such knowledge.[4] Some economists believe that market transactions provide the basis for a society to benefit from the knowledge that is dispersed among its constituent agents. For example, in his Principles of Political Economy, John Stuart Mill states that one of the justifications for a laissez faire government policy is his belief that self-interested individuals throughout the economy, acting independently, can make better use of dispersed knowledge than could the best possible government agency.[5] Friedrich Hayek claimed that "dispersed knowledge is essentially dispersed, and cannot possibly be gathered together and conveyed to an authority charged with the task of deliberately creating order".[6] Today, the best and most comprehensive book on dispersed knowledge is Knowledge and Decisions by Thomas Sowell, which Hayek called "the best book on general economics in many a year."[7] Dispersed knowledge gives rise to uncertainty, which in turn leads to different kinds of results. Richard LeFauve highlights the advantages of organizational structure in companies: "Before if we had a tough decision to make, we would have two or three different perspectives with strong support of all three. In a traditional organization the bossman decides after he's heard all three alternatives. At Saturn we take time to work it out, and what generally happens is that you end up with a fourth answer which none of the portions had in the first place, but one that all three portions of the organization fully support" (AutoWeek, Oct. 8, 1990, p. 20). Companies are thus expected to take dispersed knowledge seriously and adjust to meet demand.[10] Tsoukas stated: "A firm's knowledge is distributed, not only in a computational sense . . . or in Hayek's (1945, p. 521) sense that the factual knowledge of the particular circumstances of time and place cannot be surveyed as a whole. But, more radically, a firm's knowledge is distributed in the sense that it is inherently indeterminate: nobody knows in advance what that knowledge is or need be. Firms are faced with radical uncertainty: they do not, they cannot, know what they need to know."[11] There are several strategies for addressing the problems caused by dispersed knowledge. First, knowledge can be replaced by access to knowledge.[12][13] Second, the capability to complete incomplete knowledge can close the knowledge gaps that dispersed knowledge creates.
Third, institutions can be designed with suitable coordination mechanisms.[14] Fourth, large organizational units can be broken down into smaller ones.[15] Finally, providing more data to decision makers helps them make better decisions.
https://en.wikipedia.org/wiki/Dispersed_knowledge
Dollar voting is an analogy that refers to the theoretical impact of consumer choice on producers' actions by means of the flow of consumer payments to producers for their goods and services. In some principles textbooks of the mid-20th century, the term "dollar voting" was used to describe the process by which consumers' choices influence firms' production decisions.[citation needed] Products that consumers buy will tend to be produced in the future. Products that do not sell as well as expected will receive fewer productive resources in the future. According to this analogy, consumers vote for "winners" and "losers" with their purchases. This argument was used to explain market allocations of goods and services under the catchphrase "consumer sovereignty".[citation needed] Consumer boycotts sometimes aim to change producers' behaviour. The goals of selective boycotts, or dollar voting, have been diverse, including cutting corporate revenues, removal of key executives, and reputational damage.[1] The modern idea of dollar voting can be traced back to its development by James M. Buchanan in Individual Choice in Voting and the Market.[2] As a public choice theorist, Buchanan considered economic participation by the individual to be a form of pure democracy.[3][non-primary source needed] Dollar voting is also known as political consumerism; its history in the United States can be traced back to the American Revolution, when colonists boycotted several British products in protest of taxation without representation.[4] If voters feel disenfranchised politically, they may instead use their spending power to influence politics and the economy. Consumers use dollar voting because they hope to impact society's values and the use of resources.[4] Dollar voting has faced criticism in modern America for being class-bound. Dollar voting is archetypically used by middle and upper middle class consumers who spend their money at local farmers markets, community agricultural programs, and the preparation of "slow food".[5] These purchases do not affect low-income producers and consumers in the food market.[5] Dollar voting has also been criticized as a form of conspicuous consumption for the well-off.[5] Dollar voting has also been criticized for being a sort of consumer vigilantism. While most economists and economic philosophers accept that consumers have a right to their personal moral choices in the market, large-scale movements to influence consumer spending could have potentially dangerous implications.[example needed][6] Efforts to encourage corporations and firms to act in environmentally friendly ways have become popular. It is unclear whether firms that create negative environmental externalities will actually change their method of production to satisfy such desires.[7] Dollar voting also could dissuade citizens from law-making efforts to check unmitigated self-interest in firms and consumers, instead shifting this responsibility over to the market.
https://en.wikipedia.org/wiki/Dollar_voting
The Dunning–Kruger effect is a cognitive bias in which people with limited competence in a particular domain overestimate their abilities. It was first described by the psychologists David Dunning and Justin Kruger in 1999. Some researchers also include the opposite effect for high performers: their tendency to underestimate their skills. In popular culture, the Dunning–Kruger effect is often misunderstood as a claim about general overconfidence of people with low intelligence instead of specific overconfidence of people unskilled at a particular task. Numerous similar studies have been done. The Dunning–Kruger effect is usually measured by comparing self-assessment with objective performance. For example, participants may take a quiz and estimate their performance afterward, which is then compared to their actual results. The original study focused on logical reasoning, grammar, and social skills. Other studies have been conducted across a wide range of tasks. They include skills from fields such as business, politics, medicine, driving, aviation, spatial memory, examinations in school, and literacy. There is disagreement about the causes of the Dunning–Kruger effect. According to the metacognitive explanation, poor performers misjudge their abilities because they fail to recognize the qualitative difference between their performances and the performances of others. The statistical model explains the empirical findings as a statistical effect in combination with the general tendency to think that one is better than average. Some proponents of this view hold that the Dunning–Kruger effect is mostly a statistical artifact. The rational model holds that overly positive prior beliefs about one's skills are the source of false self-assessment. Another explanation claims that self-assessment is more difficult and error-prone for low performers because many of them have very similar skill levels. There is also disagreement about where the effect applies and about how strong it is, as well as about its practical consequences. Inaccurate self-assessment could potentially lead people to make bad decisions, such as choosing a career for which they are unfit, or engaging in dangerous behavior. It may also inhibit people from addressing their shortcomings to improve themselves. Critics argue that such an effect would have much more dire consequences than what is observed. The Dunning–Kruger effect is defined as the tendency of people with low ability in a specific area to give overly positive assessments of this ability.[2][3][4] This is often seen as a cognitive bias, i.e. as a systematic tendency to engage in erroneous forms of thinking and judging.[5][6][7] In the case of the Dunning–Kruger effect, this applies mainly to people with low skill in a specific area trying to evaluate their competence within this area. The systematic error concerns their tendency to greatly overestimate their competence, i.e.
to see themselves as more skilled than they are.[5] The Dunning–Kruger effect is usually defined specifically for the self-assessments of people with a low level of competence.[8][5][9] But some theorists do not restrict it to the bias of people with low skill, also discussing the reverse effect, i.e., the tendency of highly skilled people to underestimate their abilities relative to the abilities of others.[2][4][9] In this case, the source of the error may not be the self-assessment of one's skills, but an overly positive assessment of the skills of others.[2] This phenomenon can be understood as a form of the false-consensus effect, i.e., the tendency to "overestimate the extent to which other people share one's beliefs, attitudes, and behaviours".[10][2][9] As David Dunning has put it: "Not knowing the scope of your own ignorance is part of the human condition. The problem with it is we see it in other people, and we don't see it in ourselves. The first rule of the Dunning–Kruger club is you don't know you're a member of the Dunning–Kruger club." Some researchers include a metacognitive component in their definition. In this view, the Dunning–Kruger effect is the thesis that those who are incompetent in a given area tend to be ignorant of their incompetence, i.e., they lack the metacognitive ability to become aware of their incompetence. This definition lends itself to a simple explanation of the effect: incompetence often includes being unable to tell the difference between competence and incompetence. For this reason, it is difficult for the incompetent to recognize their incompetence.[12][5] This is sometimes termed the "dual-burden" account, since low performers are affected by two burdens: they lack a skill and they are unaware of this deficiency.[9] Other definitions focus on the tendency to overestimate one's ability and see the relation to metacognition as a possible explanation that is not part of the definition.[5][9][13] This contrast is relevant since the metacognitive explanation is controversial. Many criticisms of the Dunning–Kruger effect target this explanation but accept the empirical findings that low performers tend to overestimate their skills.[8][9][13] Among laypeople, the Dunning–Kruger effect is often misunderstood as the claim that people with low intelligence are more confident in their knowledge and skills than people with high intelligence.[14] According to psychologist Robert D. McIntosh and his colleagues, it is sometimes understood in popular culture as the claim that "stupid people are too stupid to know they are stupid".[15] But the Dunning–Kruger effect applies not to intelligence in general but to skills in specific tasks. Nor does it claim that people lacking a given skill are as confident as high performers. Rather, low performers overestimate themselves, but their confidence level is still below that of high performers.[14][1][7] The most common approach to measuring the Dunning–Kruger effect is to compare self-assessment with objective performance. The self-assessment is sometimes called subjective ability in contrast to the objective ability corresponding to the actual performance.[7] The self-assessment may be done before or after the performance.[9] If done afterward, the participants receive no independent clues during the performance as to how well they did. Thus, if the activity involves answering quiz questions, no feedback is given as to whether a given answer was correct.[13] The measurement of the subjective and the objective abilities can be in absolute or relative terms.
When done in absolute terms, self-assessment and performance are measured according to objective standards, e.g. concerning how many quiz questions were answered correctly. When done in relative terms, the results are compared with a peer group. In this case, participants are asked to assess their performances in relation to the other participants, for example in the form of estimating the percentage of peers they outperformed.[17][13][2] The Dunning–Kruger effect is present in both cases, but tends to be significantly more pronounced when done in relative terms. This means that people are usually more accurate when predicting their raw score than when assessing how well they did relative to their peer group.[18] The main point of interest for researchers is usually the correlation between subjective and objective ability.[7] To provide a simplified form of analysis of the measurements, objective performances are often divided into four groups. They start from the bottom quartile of low performers and proceed to the top quartile of high performers.[2][7] The strongest effect is seen for the participants in the bottom quartile, who tend to see themselves as being part of the top two quartiles when measured in relative terms.[19][7][20] The initial study by David Dunning and Justin Kruger examined the performance and self-assessment of undergraduate students in inductive, deductive, and abductive logical reasoning; English grammar; and appreciation of humor. Across four studies, the research indicates that the participants who scored in the bottom quartile overestimated their test performance and their abilities. Their test scores placed them in the 12th percentile, but they ranked themselves in the 62nd percentile.[21][22][5] Other studies focus on how a person's self-view causes inaccurate self-assessments.[23] Some studies indicate that the extent of the inaccuracy depends on the type of task and can be improved by becoming a better performer.[24][25][21] Overall, the Dunning–Kruger effect has been studied across a wide range of tasks, in aviation, business, debating, chess, driving, literacy, medicine, politics, spatial memory, and other fields.[5][9][26] Many studies focus on students—for example, how they assess their performance after an exam. In some cases, these studies gather and compare data from different countries.[27][28] Studies are often done in laboratories; the effect has also been examined in other settings. Examples include assessing hunters' knowledge of firearms and large Internet surveys.[19][13] Various theorists have tried to provide models to explain the Dunning–Kruger effect's underlying causes.[13][20][9] The original explanation by Dunning and Kruger holds that a lack of metacognitive abilities is responsible. This interpretation is not universally accepted, and many alternative explanations are discussed in the academic literature. Some of them focus only on one specific factor, while others see a combination of various factors as the cause.[29][13][5] The metacognitive explanation rests on the idea that part of acquiring a skill consists in learning to distinguish between good and bad performances of the skill. It assumes that people of low skill level are unable to properly assess their performance because they have not yet acquired the discriminatory ability to do so. This leads them to believe that they are better than they actually are because they do not see the qualitative difference between their performance and that of others.
In this regard, they lack the metacognitive ability to recognize their incompetence.[5][7][30] This model has also been called the "dual-burden account" or the "double-burden of incompetence", since the burden of regular incompetence is paired with the burden of metacognitive incompetence.[9][13][15] The metacognitive lack may hinder some people from becoming better by hiding their flaws from them.[31] This can then be used to explain how self-confidence is sometimes higher for unskilled people than for people with an average skill: only the latter are aware of their flaws.[32][33] Some attempts have been made to measure metacognitive abilities directly to examine this hypothesis. Some findings suggest that poor performers have reduced metacognitive sensitivity, but it is not clear that its extent is sufficient to explain the Dunning–Kruger effect.[9] Another study concluded that unskilled people lack information but that their metacognitive processes have the same quality as those of skilled people.[15] An indirect argument for the metacognitive model is based on the observation that training people in logical reasoning helps them make more accurate self-assessments.[2] Many criticisms of the metacognitive model hold that it has insufficient empirical evidence and that alternative models offer a better explanation.[20][9][13] A different interpretation is further removed from the psychological level and sees the Dunning–Kruger effect as mainly a statistical artifact.[7][34][30] It is based on the idea that the statistical effect known as regression toward the mean explains the empirical findings. This effect happens when two variables are not perfectly correlated: if one picks a sample that has an extreme value for one variable, it tends to show a less extreme value for the other variable. For the Dunning–Kruger effect, the two variables are actual performance and self-assessed performance. If a person with low actual performance is selected, their self-assessed performance tends to be higher.[13][7][30] Most researchers acknowledge that regression toward the mean is a relevant statistical effect that must be taken into account when interpreting the empirical findings. This can be achieved by various methods.[35][9] Some theorists, like Gilles Gignac and Marcin Zajenkowski, go further and argue that regression toward the mean in combination with other cognitive biases, like the better-than-average effect, can explain most of the empirical findings.[2][7][9] This type of explanation is sometimes called "noise plus bias".[15] According to the better-than-average effect, people generally tend to rate their abilities, attributes, and personality traits as better than average.[36][37] For example, the average IQ is 100, but people on average think their IQ is 115.[7] The better-than-average effect differs from the Dunning–Kruger effect since it does not track how the overly positive outlook relates to skill. The Dunning–Kruger effect, on the other hand, focuses on how this type of misjudgment happens for poor performers.[38][2][4] When the better-than-average effect is paired with regression toward the mean, it shows a similar tendency.
This way, it can explain both that unskilled people greatly overestimate their competence and that the reverse effect for highly skilled people is much less pronounced.[7][9][30] This can be shown using simulated experiments that have almost the same correlation between objective and self-assessed ability as actual experiments.[7] Some critics of this model have argued that it can explain the Dunning–Kruger effect only when assessing one's ability relative to one's peer group. But it may not be able to explain self-assessment relative to an objective standard.[39][9] A further objection claims that seeing the Dunning–Kruger effect as a regression toward the mean is only a form of relabeling the problem and does not explain what mechanism causes the regression.[40][41] Based on statistical considerations, Nuhfer et al. arrive at the conclusion that there is no strong tendency to overly positive self-assessment and that the label "unskilled and unaware of it" applies only to few people.[42][43] Science communicator Jonathan Jarry makes the case that this effect is the only one shown in the original and subsequent papers.[44] Dunning has defended his findings, writing that purely statistical explanations often fail to consider key scholarly findings while adding that self-misjudgements are real regardless of their underlying cause.[45] The rational model of the Dunning–Kruger effect explains the observed regression toward the mean not as a statistical artifact but as the result of prior beliefs.[13][30][20] If low performers expect to perform well, this can cause them to give an overly positive self-assessment. This model uses a psychological interpretation that differs from the metacognitive explanation. It holds that the error is caused by overly positive prior beliefs and not by the inability to correctly assess oneself.[30] For example, after answering a ten-question quiz, a low performer with only four correct answers may believe they got two questions right and five questions wrong, while they are unsure about the remaining three. Because of their positive prior beliefs, they will automatically assume that they got these three remaining questions right and thereby overestimate their performance.[13] Another model sees the way high and low performers are distributed as the source of erroneous self-assessment.[46][20] It is based on the assumption that many low performers' skill levels are very similar, i.e., that "many people [are] piled up at the bottom rungs of skill level".[2] This would make it much more difficult for them to accurately assess their skills in relation to their peers.[9][46] According to this model, the reason for the increased tendency to give false self-assessments is not a lack of metacognitive ability but a more challenging situation in which this ability is applied.[46][2][9] One criticism of this interpretation is directed against the assumption that this type of distribution of skill levels can always be used as an explanation. While it can be found in various fields where the Dunning–Kruger effect has been researched, it is not present in all of them. Another criticism holds that this model can explain the Dunning–Kruger effect only when the self-assessment is measured relative to one's peer group.
But it may fail when it is measured relative to absolute standards.[2] A further explanation, sometimes given by theorists with an economic background, focuses on the fact that participants in the corresponding studies lack incentive to give accurate self-assessments.[47][48] In such cases, intellectual laziness or a desire to look good to the experimenter may motivate participants to give overly positive self-assessments. For this reason, some studies were conducted with additional incentives to be accurate. One study gave participants a monetary reward based on how accurate their self-assessments were. These studies failed to show any significant increase in accuracy for the incentive group in contrast to the control group.[47] There are disagreements about the Dunning–Kruger effect's magnitude and practical consequences as compared to other psychological effects. Claims about its significance often focus on how it causes affected people to make decisions that have bad outcomes for them or others. For example, according to Gilles E. Gignac and Marcin Zajenkowski, it can have long-term consequences by leading poor performers into careers for which they are unfit. High performers underestimating their skills, though, may forgo viable career opportunities matching their skills in favor of less promising ones that are below their skill level. In other cases, the wrong decisions can also have short-term effects. For example, Pavel et al. hold that overconfidence can lead pilots to operate a new aircraft for which they lack adequate training or to engage in flight maneuvers that exceed their proficiency.[4][7][8] Emergency medicine is another area where the correct assessment of one's skills and the risks of treatment matters. According to Lisa TenEyck, the tendencies of physicians in training to be overconfident must be considered to ensure the appropriate degree of supervision and feedback.[33] Schlösser et al. hold that the Dunning–Kruger effect can also negatively affect economic activities. This is the case, for example, when the price of a good, such as a used car, is lowered by the buyers' uncertainty about its quality. An overconfident buyer unaware of their lack of knowledge may be willing to pay a much higher price because they do not take into account all the potential flaws and risks relevant to the price.[2] Another implication concerns fields in which researchers rely on people's self-assessments to evaluate their skills. This is common, for example, in vocational counseling or to estimate students' and professionals' information literacy skills.[3][7] According to Khalid Mahmood, the Dunning–Kruger effect indicates that such self-assessments often do not correspond to the underlying skills. It implies that they are unreliable as a method for gathering this type of data.[3] Regardless of the field in question, the metacognitive ignorance often linked to the Dunning–Kruger effect may inhibit low performers from improving themselves. Since they are unaware of many of their flaws, they may have little motivation to address and overcome them.[49][50] Not all accounts of the Dunning–Kruger effect focus on its negative sides. Some also concentrate on its positive sides, e.g. that ignorance is sometimes bliss.
In this sense, optimism can lead people to experience their situation more positively, and overconfidence may help them achieve even unrealistic goals.[51] To distinguish the negative from the positive sides, two phases have been suggested as relevant for realizing a goal: preparatory planning and the execution of the plan. According to Dunning, overconfidence may be beneficial in the execution phase by increasing motivation and energy. However, it can be detrimental in the planning phase, since the agent may ignore bad odds, take unnecessary risks, or fail to prepare for contingencies. For example, being overconfident may be advantageous for a general on the day of battle because of the additional inspiration passed on to his troops, but disadvantageous in the weeks before if it leads him to ignore the need for reserve troops or additional protective gear.[52] Historical precursors of the Dunning–Kruger effect were expressed by theorists such as Charles Darwin ("Ignorance more frequently begets confidence than does knowledge") and Bertrand Russell ("...in the modern world the stupid are cocksure while the intelligent are full of doubt").[53][5] In 2000, Kruger and Dunning were awarded the satirical Ig Nobel Prize in recognition of the scientific work recorded in "their modest report".[54]
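To make the statistical ("noise plus bias") account discussed above concrete, the sketch below simulates participants whose self-assessed percentile is only weakly tied to their actual percentile and is pulled toward a better-than-average anchor, then groups them by quartile of actual performance in the style of the original analysis. It is a minimal illustration: the 0.3 weighting, the 65th-percentile anchor, and the noise level are invented assumptions, not estimates from any published study.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Actual skill expressed as a percentile, uniform by construction.
    actual = rng.uniform(0, 100, n)

    # Self-assessment: weakly tied to actual skill, pulled toward a
    # "better than average" anchor, plus noise (all values assumed).
    self_assessed = np.clip(0.3 * (actual - 50) + 65 + rng.normal(0, 15, n), 0, 100)

    # Group by quartile of actual performance and compare the averages.
    quartile = np.digitize(actual, [25, 50, 75])
    for q in range(4):
        mask = quartile == q
        print(f"Q{q + 1}: actual mean {actual[mask].mean():5.1f}, "
              f"self-assessed mean {self_assessed[mask].mean():5.1f}")

Run as written, the bottom quartile's average self-assessment lands far above its average actual percentile while the top quartile's lands somewhat below it, reproducing the qualitative pattern of strong overestimation by low performers and milder underestimation by high performers without assuming any metacognitive deficit.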
https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
The Delphi method or Delphi technique (/ˈdɛlfaɪ/ DEL-fy; also known as Estimate-Talk-Estimate or ETE) is a structured communication technique or method, originally developed as a systematic, interactive forecasting method that relies on a panel of experts.[1][2][3][4][5] Delphi has been widely used for business forecasting and has certain advantages over another structured forecasting approach, prediction markets.[6] Delphi can also be used to help reach expert consensus and develop professional guidelines.[7] It is used for such purposes in many health-related fields, including clinical medicine, public health, and research.[7][8] Delphi is based on the principle that forecasts (or decisions) from a structured group of individuals are more accurate than those from unstructured groups.[9] The experts answer questionnaires in two or more rounds. After each round, a facilitator or change agent[10] provides an anonymised summary of the experts' forecasts from the previous round as well as the reasons they provided for their judgments. Thus, experts are encouraged to revise their earlier answers in light of the replies of other members of their panel. It is believed that during this process the range of the answers will decrease and the group will converge towards the "correct" answer. Finally, the process is stopped after a predefined stopping criterion (e.g., number of rounds, achievement of consensus, stability of results), and the mean or median scores of the final rounds determine the results.[11] Special attention has to be paid to the formulation of the Delphi theses and the definition and selection of the experts in order to avoid methodological weaknesses that severely threaten the validity and reliability of the results.[12][13] Ensuring that participants have the requisite expertise and that more domineering participants do not overwhelm weaker-willed ones can be a barrier to reaching true consensus, as the first group tends to be less inclined to change their minds and the second group is more motivated to fit in.[14] The name Delphi derives from the Oracle of Delphi, although the authors of the method were unhappy with the oracular connotation of the name, "smacking a little of the occult".[15] The Delphi method assumes that group judgments are more valid than individual judgments. The Delphi method was developed at the beginning of the Cold War to forecast the impact of technology on warfare.[16] In 1944, General Henry H. Arnold ordered the creation of a report for the U.S. Army Air Corps on the future technological capabilities that might be used by the military. Different approaches were tried, but the shortcomings of traditional forecasting methods, such as theoretical approaches, quantitative models or trend extrapolation, quickly became apparent in areas where precise scientific laws had not yet been established. To combat these shortcomings, the Delphi method was developed by Project RAND during the 1950s–1960s (1959) by Olaf Helmer, Norman Dalkey, and Nicholas Rescher.[17] It has been used ever since, together with various modifications and reformulations, such as the Imen-Delphi procedure.[18] Experts were asked to give their opinion on the probability, frequency, and intensity of possible enemy attacks. Other experts could anonymously give feedback. This process was repeated several times until a consensus emerged. In 2021, a cross-disciplinary study by Beiderbeck et al. focused on new directions and advancements of the Delphi method, including Real-time Delphi formats.
The authors provide a methodological toolbox for designing Delphi surveys, including, among other elements, sentiment analyses drawn from the field of psychology.[19] The following key characteristics of the Delphi method help the participants to focus on the issues at hand and separate Delphi from other methodologies. In this technique a panel of experts is drawn from both inside and outside the organisation. The panel consists of experts having knowledge of the area requiring decision making. Each expert is asked to make anonymous predictions. Usually all participants remain anonymous. Their identity is not revealed, even after the completion of the final report. This prevents the authority, personality, or reputation of some participants from dominating others in the process. Arguably, it also frees participants (to some extent) from their personal biases, minimizes the "bandwagon effect" or "halo effect", allows free expression of opinions, encourages open critique, and facilitates admission of errors when revising earlier judgments. The initial contributions from the experts are collected in the form of answers to questionnaires and their comments on these answers. The panel director controls the interactions among the participants by processing the information and filtering out irrelevant content. This avoids the negative effects of face-to-face panel discussions and solves the usual problems of group dynamics. The Delphi method allows participants to comment on the responses of others and on the progress of the panel as a whole, and to revise their own forecasts and opinions in real time. The person coordinating the Delphi method is usually known as a facilitator or leader, and facilitates the responses of their panel of experts, who are selected for a reason, usually that they hold knowledge of an opinion or view. The facilitator sends out questionnaires, surveys, etc., and if the panel of experts accept, they follow instructions and present their views. Responses are collected and analyzed, then common and conflicting viewpoints are identified. If consensus is not reached, the process continues through thesis and antithesis, to gradually work towards synthesis and the building of consensus. During the past decades, facilitators have used many different measures and thresholds to measure the degree of consensus or dissent. A comprehensive literature review and summary is compiled in an article by von der Gracht.[20] The first applications of the Delphi method were in the field of science and technology forecasting. The objective of the method was to combine expert opinions on the likelihood and expected development time of a particular technology into a single indicator. One of the first such reports, prepared in 1964 by Gordon and Helmer, assessed the direction of long-term trends in science and technology development, covering such topics as scientific breakthroughs, population control, automation, space progress, war prevention and weapon systems. Other forecasts of technology dealt with vehicle-highway systems, industrial robots, intelligent internet, broadband connections, and technology in education. Later the Delphi method was applied in other areas, especially those related to public policy issues, such as economic trends, health and education. It was also applied successfully and with high accuracy in business forecasting.
For example, in one case reported by Basu and Schroeder (1977),[21] the Delphi method predicted the sales of a new product during the first two years with an inaccuracy of 3–4% compared with actual sales. Quantitative methods produced errors of 10–15%, and traditional unstructured forecast methods had errors of about 20%. (This is only one example; the overall accuracy of the technique is mixed.) The Delphi method has also been used as a tool to implement multi-stakeholder approaches for participative policy-making in developing countries. The governments of Latin America and the Caribbean have successfully used the Delphi method as an open-ended public-private sector approach to identify the most urgent challenges for their regional ICT-for-development eLAC Action Plans.[22] As a result, governments have widely acknowledged the value of collective intelligence from civil society, academic and private sector participants of the Delphi, especially in a field of rapid change, such as technology policies. In the early 1980s Jackie Awerman of Jackie Awerman Associates, Inc. designed a modified Delphi method for identifying the roles of various contributors to the creation of a patent-eligible product (Epsilon Corporation, Chemical Vapor Deposition Reactor). The results were then used by patent attorneys to determine bonus distribution percentages to the general satisfaction of all team members.[citation needed] From the 1970s, the use of the Delphi technique in public policy-making introduced a number of methodological innovations. Further innovations came from the use of computer-based (and later web-based) Delphi conferences, as discussed by Turoff and Hiltz.[23] According to Bolognini,[24] web-based Delphis offer further possibilities relevant in the context of interactive policy-making and e-democracy. One successful example of a (partially) web-based policy Delphi is the five-round Delphi exercise (with 1,454 contributions) for the creation of the eLAC Action Plans in Latin America.
It is believed to be the most extensive online participatory policy-making foresight exercise in the history of intergovernmental processes in the developing world at this time.[22] In addition to the specific policy guidance provided, the authors list the following lessons learned: "(1) the potential of Policy Delphi methods to introduce transparency and accountability into public decision-making, especially in developing countries; (2) the utility of foresight exercises to foster multi-agency networking in the development community; (3) the usefulness of embedding foresight exercises into established mechanisms of representative democracy and international multilateralism, such as the United Nations; (4) the potential of online tools to facilitate participation in resource-scarce developing countries; and (5) the resource-efficiency stemming from the scale of international foresight exercises, and therefore its adequacy for resource-scarce regions."[22] The Delphi technique is widely used to help reach expert consensus in health-related settings.[7] For example, it is frequently employed in the development of medical guidelines and protocols.[7] Some examples of its application in public health contexts include non-alcoholic fatty liver disease,[25] iodine deficiency disorders,[26] building responsive health systems for communities affected by migration,[27] the role of health systems in advancing well-being for those living with HIV,[28] policies and interventions to reduce harmful gambling,[29] the regulation of electronic cigarettes,[30][31][32] and recommendations to end the COVID-19 pandemic.[33] Use of the Delphi method in the development of guidelines for the reporting of health research[8] is recommended, especially for experienced developers.[34] Since this advice was given in 2010, two systematic reviews have found that fewer than 30% of published reporting guidelines incorporated Delphi methods into the development process.[35][36] A number of Delphi forecasts are carried out using websites that allow the process to be conducted in real time. For instance, the TechCast Project uses a panel of 100 experts worldwide to forecast breakthroughs in all fields of science and technology. Another example is the Horizon Project, where educational futurists collaborate online using the Delphi method to identify the technological advancements to look out for in education over the next few years. Traditionally the Delphi method has aimed at a consensus of the most probable future by iteration. Other versions, such as the Policy Delphi,[37][38] offer decision support methods aimed at structuring and discussing the diverse views of the preferred future. In Europe, more recent web-based experiments have used the Delphi method as a communication technique for interactive decision-making and e-democracy.[39] The Argument Delphi, developed by Osmo Kuusi, focuses on ongoing discussion and finding relevant arguments rather than on the output. The Disaggregative Policy Delphi, developed by Petri Tapio, uses cluster analysis as a systematic tool to construct various scenarios of the future in the latest Delphi round.[40] The respondents' views on the probable and the preferable future are dealt with as separate cases. The computerization of the Argument Delphi is relatively difficult because of several problems such as argument resolution, argument aggregation and argument evaluation.
A computerization of the Argument Delphi, developed by Sadi Evren Seker, proposes solutions to such problems.[41] A fast-track Delphi was developed to provide consensual expert opinion on the state of scientific knowledge in public health crises.[42] It can provide results within three weeks, whereas a conventional Delphi can take several months (sometimes years).[42] Today the Delphi method is a widely accepted forecasting tool and has been used successfully in thousands of studies in areas ranging from technology forecasting to drug abuse.[43] Overall, the track record of the Delphi method is mixed.[44] There have been many cases when the method produced poor results. Still, some authors attribute this to poor application of the method and not to weaknesses of the method itself. The RAND Methodological Guidance for Conducting and Critically Appraising Delphi Panels is a manual for conducting Delphi research that provides guidance on best practices and offers an appraisal tool.[44] It helps researchers avoid, or mitigate, potential drawbacks of Delphi method research, and helps readers understand the confidence that can be placed in study results. It must also be realized that in areas such as science and technology forecasting, the degree of uncertainty is so great that exact and always correct predictions are impossible, so a high degree of error is to be expected. An important challenge for the method is ensuring sufficiently knowledgeable panelists: if panelists are misinformed about a topic, the use of Delphi may only add confidence to their ignorance.[6] One of the initial problems of the method was its inability to make complex forecasts with multiple factors. Potential future outcomes were usually considered as if they had no effect on each other. Later on, several extensions to the Delphi method were developed to address this problem, such as cross impact analysis, which takes into consideration the possibility that the occurrence of one event may change the probabilities of other events covered in the survey. Still, the Delphi method can be used most successfully in forecasting single scalar indicators. Delphi has characteristics similar to prediction markets, as both are structured approaches that aggregate diverse opinions from groups. Yet there are differences that may be decisive for their relative applicability to different problems.[6] Some advantages of prediction markets derive from the possibility of providing incentives for participation, while Delphi appears to have certain advantages of its own over prediction markets. More recent research has also focused on combining the two: in a research study at Deutsche Börse, elements of the Delphi method were integrated into a prediction market.[45]
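The Estimate-Talk-Estimate loop described above (anonymous estimates, a facilitator reporting an aggregate, revision, and a stopping criterion) can be sketched in a few lines. The toy simulation below is an illustration only: the panel values, the revision weight, and the consensus threshold are invented, and real Delphi studies elicit reasons as well as numbers rather than mechanically averaging.

    import numpy as np

    rng = np.random.default_rng(1)

    def delphi(initial_estimates, max_rounds=5, consensus_iqr=2.0, weight=0.5):
        """Toy Estimate-Talk-Estimate loop: the facilitator reports the group
        median after each round, and each expert revises part-way toward it."""
        estimates = np.array(initial_estimates, dtype=float)
        for round_no in range(1, max_rounds + 1):
            median = np.median(estimates)
            q1, q3 = np.percentile(estimates, [25, 75])
            print(f"round {round_no}: median={median:.1f}, IQR={q3 - q1:.1f}")
            if q3 - q1 <= consensus_iqr:      # stopping criterion: consensus reached
                break
            estimates += weight * (median - estimates)   # anonymous revision step
        return np.median(estimates)

    # Hypothetical panel of nine experts forecasting first-year sales (thousands of units).
    panel = rng.normal(120, 30, size=9)
    print("final forecast:", round(delphi(panel), 1))

Because every expert moves toward the same median, the interquartile range shrinks each round and the process converges, mirroring the assumption that the range of answers decreases over successive rounds.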
https://en.wikipedia.org/wiki/Delphi_method
Ensemble forecasting is a method used in or within numerical weather prediction. Instead of making a single forecast of the most likely weather, a set (or ensemble) of forecasts is produced. This set of forecasts aims to give an indication of the range of possible future states of the atmosphere. Ensemble forecasting is a form of Monte Carlo analysis. The multiple simulations are conducted to account for the two usual sources of uncertainty in forecast models: (1) the errors introduced by the use of imperfect initial conditions, amplified by the chaotic nature of the equations of the atmosphere, which is often referred to as sensitive dependence on initial conditions; and (2) errors introduced because of imperfections in the model formulation, such as the approximate mathematical methods used to solve the equations. Ideally, the verified future atmospheric state should fall within the predicted ensemble spread, and the amount of spread should be related to the uncertainty (error) of the forecast. In general, this approach can be used to make probabilistic forecasts of any dynamical system, and not just for weather prediction. Today ensemble predictions are commonly made at most of the major operational weather prediction facilities worldwide. Experimental ensemble forecasts are made at a number of universities, such as the University of Washington, and ensemble forecasts in the US are also generated by the US Navy and Air Force. There are various ways of viewing the data, such as spaghetti plots, ensemble means or postage stamps, where a number of different results from the model runs can be compared. As proposed by Edward Lorenz in 1963, it is impossible for long-range forecasts—those made more than two weeks in advance—to predict the state of the atmosphere with any degree of skill owing to the chaotic nature of the fluid dynamics equations involved.[1] Furthermore, existing observation networks have limited spatial and temporal resolution (for example, over large bodies of water such as the Pacific Ocean), which introduces uncertainty into the true initial state of the atmosphere. While a set of equations, known as the Liouville equations, exists to determine the initial uncertainty in the model initialization, the equations are too complex to run in real time, even with the use of supercomputers.[2] The practical importance of ensemble forecasts derives from the fact that in a chaotic and hence nonlinear system, the rate of growth of forecast error is dependent on starting conditions. An ensemble forecast therefore provides a prior estimate of state-dependent predictability, i.e. an estimate of the types of weather that might occur, given inevitable uncertainties in the forecast initial conditions and in the accuracy of the computational representation of the equations. These uncertainties limit forecast model accuracy to about six days into the future.[3] The first operational ensemble forecasts were produced for sub-seasonal timescales in 1985.[4] However, it was realised that the philosophy underpinning such forecasts was also relevant on shorter timescales – timescales where predictions had previously been made by purely deterministic means.
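The role of sensitive dependence on initial conditions can be illustrated with the Lorenz 1963 system mentioned above, used here as a stand-in for the atmosphere. The sketch below is a toy example, not operational practice: the forward-Euler integrator, the 20-member ensemble size, and the perturbation magnitude of 0.001 are all choices made only for brevity. Each member starts from a slightly perturbed "analysis" and the members diverge as the integration proceeds.

    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One crude forward-Euler step of the Lorenz 1963 equations (toy 'atmosphere')."""
        x, y, z = state
        deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
        return state + dt * deriv

    rng = np.random.default_rng(42)
    analysis = np.array([1.0, 1.0, 1.0])      # best estimate of the initial state
    n_members, n_steps = 20, 1000             # 20 members, 10 model time units

    # Initial-condition uncertainty: perturb the analysis for each member.
    members = analysis + rng.normal(0.0, 1e-3, size=(n_members, 3))
    for _ in range(n_steps):
        members = np.array([lorenz_step(m) for m in members])

    print("ensemble mean  :", np.round(members.mean(axis=0), 2))
    print("ensemble spread:", np.round(members.std(axis=0), 2))

Despite perturbations of only 0.001, the spread grows over a long enough integration until it is comparable to the size of the attractor itself, which is exactly why the spread of an ensemble carries information about state-dependent predictability.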
Edward Epstein recognized in 1969 that the atmosphere could not be completely described with a single forecast run due to inherent uncertainty, and proposed a stochastic dynamic model that produced means and variances for the state of the atmosphere.[5] Although these Monte Carlo simulations showed skill, in 1974 Cecil Leith revealed that they produced adequate forecasts only when the ensemble probability distribution was a representative sample of the probability distribution in the atmosphere.[6] It was not until 1992 that ensemble forecasts began being prepared by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction (NCEP). There are two main sources of uncertainty that must be accounted for when making an ensemble weather forecast: initial condition uncertainty and model uncertainty.[7] Initial condition uncertainty arises due to errors in the estimate of the starting conditions for the forecast, both because of limited observations of the atmosphere and because of uncertainties involved in using indirect measurements, such as satellite data, to measure the state of atmospheric variables. Initial condition uncertainty is represented by perturbing the starting conditions between the different ensemble members. This explores the range of starting conditions consistent with our knowledge of the current state of the atmosphere, together with its past evolution. There are a number of ways to generate these initial condition perturbations. The ECMWF model, the Ensemble Prediction System (EPS),[8] uses a combination of singular vectors and an ensemble of data assimilations (EDA) to simulate the initial probability density.[9] The singular vector perturbations are more active in the extra-tropics, while the EDA perturbations are more active in the tropics. The NCEP ensemble, the Global Ensemble Forecasting System, uses a technique known as vector breeding.[10][11] Perturbing the initial state derived from satellite measurements so that the perturbed states remain physically realistic is a difficult task. Deep learning has also produced techniques for perturbing the complex initial state in a nearly physical way using flow matching.[12] Model uncertainty arises due to the limitations of the forecast model. The process of representing the atmosphere in a computer model involves many simplifications, such as the development of parametrisation schemes, which introduce errors into the forecast. Several techniques to represent model uncertainty have been proposed. When developing a parametrisation scheme, many new parameters are introduced to represent simplified physical processes. These parameters may be very uncertain. For example, the 'entrainment coefficient' represents the turbulent mixing of dry environmental air into a convective cloud, and so represents a complex physical process using a single number. In a perturbed parameter approach, uncertain parameters in the model's parametrisation schemes are identified and their value changed between ensemble members.
While in probabilistic climate modelling, such as climateprediction.net, these parameters are often held constant globally and throughout the integration,[13] in modern numerical weather prediction it is more common to stochastically vary the value of the parameters in time and space.[14] The degree of parameter perturbation can be guided using expert judgement,[15] or by directly estimating the degree of parameter uncertainty for a given model.[16] A traditional parametrisation scheme seeks to represent the average effect of the sub-grid-scale motion (e.g. convective clouds) on the resolved-scale state (e.g. the large-scale temperature and wind fields). A stochastic parametrisation scheme recognises that there may be many sub-grid-scale states consistent with a particular resolved-scale state. Instead of predicting the most likely sub-grid-scale motion, a stochastic parametrisation scheme represents one possible realisation of the sub-grid. It does this by including random numbers in the equations of motion. This samples from the probability distribution assigned to uncertain processes. Stochastic parametrisations have significantly improved the skill of weather forecasting models, and are now used in operational forecasting centres worldwide.[17] Stochastic parametrisations were first developed at the European Centre for Medium Range Weather Forecasts.[18] When many different forecast models are used to try to generate a forecast, the approach is termed multi-model ensemble forecasting. This method of forecasting can improve forecasts when compared to a single model-based approach.[19] When the models within a multi-model ensemble are adjusted for their various biases, the process is known as "superensemble forecasting". This type of forecast significantly reduces errors in model output.[20] When models of different physical processes are combined, such as combinations of atmospheric, ocean and wave models, the multi-model ensemble is called a hyper-ensemble.[21] The ensemble forecast is usually evaluated by comparing the ensemble average of the individual forecasts for one forecast variable to the observed value of that variable (the "error"). This is combined with consideration of the degree of agreement between various forecasts within the ensemble system, as represented by their overall standard deviation or "spread". Ensemble spread can be visualised through tools such as spaghetti diagrams, which show the dispersion of one quantity on prognostic charts for specific time steps in the future. Another tool where ensemble spread is used is a meteogram, which shows the dispersion in the forecast of one quantity for one specific location. It is common for the ensemble spread to be too small, such that the observed atmospheric state falls outside the ensemble forecast. This can lead the forecaster to be overconfident in their forecast.[22] This problem becomes particularly severe for forecasts of the weather about 10 days in advance,[23] particularly if model uncertainty is not accounted for in the forecast. The spread of the ensemble forecast indicates how confident the forecaster can be in his or her prediction. When ensemble spread is small and the forecast solutions are consistent within multiple model runs, forecasters perceive more confidence in the forecast in general.[22] When the spread is large, this indicates more uncertainty in the prediction.
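As an illustration of how the quantities just discussed are extracted from an ensemble, the short sketch below computes the ensemble mean, the spread, and an event probability from 50 hypothetical rainfall forecasts at one location. The members are drawn from an arbitrary gamma distribution purely so that the example runs; they do not come from any real forecast system.

    import numpy as np

    rng = np.random.default_rng(7)

    # 50 hypothetical 24 h rainfall forecasts (mm) at one location (invented values).
    members = rng.gamma(shape=1.5, scale=6.0, size=50)

    ens_mean = members.mean()                     # ensemble mean forecast
    spread = members.std(ddof=1)                  # ensemble spread (standard deviation)
    p_exceed_10mm = (members > 10.0).mean()       # fraction of members above 10 mm

    print(f"ensemble mean : {ens_mean:.1f} mm")
    print(f"spread        : {spread:.1f} mm")
    print(f"P(rain > 10mm): {p_exceed_10mm:.0%}")

The exceedance fraction is the same calculation as the rainfall example discussed below, where 30 of 50 members above a threshold yields a 60% probability; whether such probabilities can be trusted is precisely the reliability question addressed there.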
Ideally, aspread-skill relationshipshould exist, whereby the spread of the ensemble is a good predictor of the expected error in the ensemble mean. If the forecast isreliable,the observed state will behave as if it is drawn from the forecast probability distribution. Reliability (orcalibration) can be evaluated by comparing the standard deviation of the error in the ensemble mean with the forecast spread: for a reliable forecast, the two should match, both at different forecast lead times and for different locations.[24] The reliability of forecasts of a specific weather event can also be assessed. For example, if 30 of 50 members indicated greater than 1 cm rainfall during the next 24 h, theprobability of exceeding1 cm could be estimated to be 60%. The forecast would be considered reliable if, considering all the situations in the past when a 60% probability was forecast, on 60% of those occasions did the rainfall actually exceed 1 cm. In practice, the probabilities generated from operational weather ensemble forecasts are not highly reliable, though with a set of past forecasts (reforecastsorhindcasts) and observations, the probability estimates from the ensemble can be adjusted to ensure greater reliability. Another desirable property of ensemble forecasts isresolution.This is an indication of how much the forecast deviates from the climatological event frequency – provided that the ensemble is reliable, increasing this deviation will increase the usefulness of the forecast. This forecast quality can also be considered in terms ofsharpness, or how small the spread of the forecast is. The key aim of a forecaster should be to maximise sharpness, while maintaining reliability.[25]Forecasts at long leads will inevitably not be particularly sharp (have particularly high resolution), for the inevitable (albeit usually small) errors in the initial condition will grow with increasing forecast lead until the expected difference between two model states is as large as the difference between two random states from the forecast model's climatology. If ensemble forecasts are to be used for predicting probabilities of observed weather variables they typically need calibration in order to create unbiased and reliable forecasts. For forecasts of temperature one simple and effective method of calibration islinear regression, often known in this context asmodel output statistics. The linear regression model takes the ensemble mean as a predictor for the real temperature, ignores the distribution of ensemble members around the mean, and predicts probabilities using the distribution of residuals from the regression. In this calibration setup the value of the ensemble in improving the forecast is then that the ensemble mean typically gives a better forecast than any single ensemble member would, and not because of any information contained in the width or shape of the distribution of the members in the ensemble around the mean. However, in 2004, a generalisation of linear regression (now known asNonhomogeneous Gaussian regression) was introduced[26]that uses a linear transformation of the ensemble spread to give the width of the predictive distribution, and it was shown that this can lead to forecasts with higher skill than those based on linear regression alone. This proved for the first time that information in the shape of the distribution of the members of an ensemble around the mean, in this case summarized by the ensemble spread, can be used to improve forecasts relative tolinear regression. 
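A minimal sketch of the regression-based calibration described above, under the assumption that a set of past ensemble-mean forecasts and matching observations is available; the synthetic training data below stand in for a reforecast archive. Plain linear regression ignores the spread, while the nonhomogeneous (NGR) variant would additionally model the predictive width as a function of the spread (indicated only in a comment here).

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)

    # Synthetic reforecast archive: past ensemble-mean temperature forecasts and observations.
    ens_mean_past = rng.normal(15.0, 5.0, size=500)
    obs_past = 0.8 * ens_mean_past + 2.0 + rng.normal(0.0, 1.5, size=500)

    # Model output statistics: regress observations on the ensemble mean.
    b, a = np.polyfit(ens_mean_past, obs_past, deg=1)
    residual_std = np.std(obs_past - (a + b * ens_mean_past), ddof=2)

    # Calibrated probabilistic forecast for a new case.
    ens_mean_today = 18.0
    predictive_mean = a + b * ens_mean_today
    # NGR would replace residual_std by a width fitted as a function of the
    # ensemble spread, so that a wider ensemble gives a wider forecast distribution.
    prob_above_20 = 1.0 - norm.cdf(20.0, loc=predictive_mean, scale=residual_std)
    print(f"P(T > 20 C) = {prob_above_20:.2f}")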
Whether or not linear regression can be beaten by using the ensemble spread in this way varies, depending on the forecast system, forecast variable and lead time. In addition to being used to improve predictions of uncertainty, the ensemble spread can also be used as a predictor for the likely size of changes in the mean forecast from one forecast to the next.[27]This works because, in some ensemble forecast systems, narrow ensembles tend to precede small changes in the mean, while wide ensembles tend to precede larger changes in the mean. This has applications in the trading industries, for whom understanding the likely sizes of future forecast changes can be important. The Observing System Research and Predictability Experiment(THORPEX) is a 10-year international research and development programme to accelerate improvements in the accuracy of one-day to two-week high impact weather forecasts for the benefit of society, the economy and the environment. It establishes an organizational framework that addresses weather research and forecast problems whose solutions will be accelerated through international collaboration among academic institutions, operational forecast centres and users of forecast products. One of its key components isTHORPEX Interactive Grand Global Ensemble(TIGGE), a World Weather Research Programme to accelerate the improvements in the accuracy of 1-day to 2 week high-impact weather forecasts for the benefit of humanity. Centralized archives of ensemble model forecast data, from many international centers, are used to enable extensivedata sharingand research.
https://en.wikipedia.org/wiki/Ensemble_forecasting
In the field ofhuman factors and ergonomics,human reliability(also known ashuman performanceorHU) is the probability that a human performs a task to a sufficient standard.[1]Reliabilityofhumanscan be affected by many factors such asage, physicalhealth,mental state,attitude,emotions, personal propensity for certain mistakes, andcognitive biases. Human reliability is important to theresilienceofsocio-technical systems, and has implications for fields likemanufacturing,medicineandnuclear power. Attempts made to decreasehuman errorand increase reliability in human interaction with technology includeuser-centered designanderror-tolerant design. Human error, human performance, and human reliability are especially important to consider when work is performed in a complex and high-risk environment.[2] Strategies for dealing with performance-shaping factors such aspsychological stress,cognitive load,fatigueinclude heuristics andbiasessuch asconfirmation bias,availability heuristic, andfrequency bias. A variety of methods exist forhuman reliability analysis(HRA).[3][4]Two general classes of methods are those based onprobabilistic risk assessment(PRA) and those based on acognitivetheory ofcontrol. One method for analyzing human reliability is a straightforward extension ofprobabilistic risk assessment(PRA): in the same way that equipment can fail in apower plant, so can a human operator commit errors. In both cases, an analysis (functional decompositionfor equipment andtask analysisfor humans) would articulate a level of detail for which failure or error probabilities can be assigned. This basic idea is behind theTechnique for Human Error Rate Prediction(THERP).[5]THERP is intended to generate human error probabilities that would be incorporated into a PRA. TheAccident Sequence Evaluation Program(ASEP) human reliability procedure is a simplified form of THERP; an associated computational tool is Simplified Human Error Analysis Code (SHEAN).[6]More recently, theUS Nuclear Regulatory Commissionhas published the Standardized Plant Analysis Risk – Human Reliability Analysis (SPAR-H) method to take account of the potential for human error.[7][8] Erik Hollnagel has developed this line of thought in his work on the Contextual Control Model (COCOM)[9]and the Cognitive Reliability and Error Analysis Method (CREAM).[10]COCOM models human performance as a set of controlmodes—strategic(based on long-term planning),tactical(based on procedures),opportunistic(based on present context), and scrambled (random) – and proposes a model of how transitions between these control modes occur. This model of control mode transition consists of a number of factors, including the human operator's estimate of the outcome of the action (success or failure), the time remaining to accomplish the action (adequate or inadequate), and the number of simultaneous goals of the human operator at that time. CREAM is a human reliability analysis method that is based on COCOM. Related techniques insafety engineeringandreliability engineeringincludefailure mode and effects analysis,hazop,fault tree, andSAPHIRE(Systems Analysis Programs for Hands-on Integrated Reliability Evaluations). The Human Factors Analysis and Classification System (HFACS) was developed initially as a framework to understand the role of human error inaviation accidents.[11][12]It is based on James Reason'sSwiss cheese modelof human error in complex systems. 
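As a purely illustrative sketch of the PRA-style arithmetic that methods such as THERP feed into, the snippet below combines nominal human error probabilities for the steps of a decomposed task, assuming the steps fail independently. The step names and probabilities are hypothetical placeholders, not values from THERP tables, and real analyses also model dependence and recovery.

    # Hypothetical human error probabilities (HEPs) for the steps of one task.
    step_heps = {
        "read_procedure": 0.003,
        "select_control": 0.001,
        "verify_indicator": 0.01,
    }

    # Probability that the whole task is completed without error, assuming
    # independent steps (a simplifying assumption for illustration only).
    p_success = 1.0
    for hep in step_heps.values():
        p_success *= (1.0 - hep)

    p_task_error = 1.0 - p_success
    print(f"probability of at least one error in the task: {p_task_error:.4f}")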
HFACS distinguishes between the "active failures" of unsafe acts, and "latent failures" of preconditions for unsafe acts, unsafesupervision, and organizational influences. These categories were developed empirically on the basis of many aviation accident reports. "Unsafe acts" are performed by the human operator "on the front line" (e.g., thepilot, theair traffic controller, or the driver). Unsafe acts can be either errors (in perception, decision making or skill-based performance) or violations. Violations, or the deliberate disregard for rules and procedures, can be routine or exceptional. Routine violations occur habitually and are usually tolerated by the organization or authority. Exceptional violations are unusual and often extreme. For example, driving 60 mph in a 55-mph speed limit zone is a routine violation, while driving 130 mph in the same zone is exceptional. There are two types of preconditions for unsafe acts: those that relate to the human operator's internal state and those that relate to the human operator's practices or ways of working. Adverse internal states include those related tophysiology(e.g., illness) and mental state (e.g., mentally fatigued, distracted). A third aspect of 'internal state' is really a mismatch between the operator's ability and the task demands. Four types of unsafe supervision are: inadequate supervision; planned inappropriate operations; failure to correct a known problem; and supervisory violations. Organizational influences include those related toresources management(e.g., inadequate human or financial resources),organizational climate(structures,policies, andculture), and organizational processes (such asprocedures,schedules, oversight).
https://en.wikipedia.org/wiki/Human_reliability
Inprobability theory, thelaw of large numbersis amathematical lawthat states that theaverageof the results obtained from a large number of independent random samples converges to the true value, if it exists.[1]More formally, the law of large numbers states that given a sample of independent and identically distributed values, thesample meanconverges to the truemean. The law of large numbers is important because it guarantees stable long-term results for the averages of somerandomevents.[1][2]For example, while acasinomay losemoneyin a single spin of theroulettewheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game. Importantly, the law applies (as the name indicates) only when alarge numberof observations are considered. There is no principle that a small number of observations will coincide with the expected value or that a streak of one value will immediately be "balanced" by the others (see thegambler's fallacy). The law of large numbers only applies to theaverageof the results obtained from repeated trials and claims that this average converges to the expected value; it does not claim that thesumofnresults gets close to the expected value timesnasnincreases. Throughout its history, many mathematicians have refined this law. Today, the law of large numbers is used in many fields including statistics, probability theory, economics, and insurance.[3] For example, a single roll of a six-sideddiceproduces one of the numbers 1, 2, 3, 4, 5, or 6, each with equalprobability. Therefore, theexpected valueof the roll is: 1+2+3+4+5+66=3.5{\displaystyle {\frac {1+2+3+4+5+6}{6}}=3.5} According to the law of large numbers, if a large number of six-sided dice are rolled, the average of their values (sometimes called thesample mean) will approach 3.5, with the precision increasing as more dice are rolled. It follows from the law of large numbers that theempirical probabilityof success in a series ofBernoulli trialswill converge to the theoretical probability. For aBernoulli random variable, the expected value is the theoretical probability of success, and the average ofnsuch variables (assuming they areindependent and identically distributed (i.i.d.)) is precisely the relative frequency. For example, afair cointoss is a Bernoulli trial. When a fair coin is flipped once, the theoretical probability that the outcome will be heads is equal to1⁄2. Therefore, according to the law of large numbers, the proportion of heads in a "large" number of coin flips "should be" roughly1⁄2. In particular, the proportion of heads afternflips willalmost surelyconvergeto1⁄2asnapproaches infinity. Although the proportion of heads (and tails) approaches1⁄2, almost surely theabsolute differencein the number of heads and tails will become large as the number of flips becomes large. That is, the probability that the absolute difference is a small number approaches zero as the number of flips becomes large. Also, almost surely the ratio of the absolute difference to the number of flips will approach zero. Intuitively, the expected difference grows, but at a slower rate than the number of flips. Another good example of the law of large numbers is theMonte Carlo method. These methods are a broad class ofcomputationalalgorithmsthat rely on repeatedrandom samplingto obtain numerical results. The larger the number of repetitions, the better the approximation tends to be. 
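A short simulation makes the dice and coin examples concrete; the sample sizes and seed below are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(3)

    # Rolling a fair six-sided die: the running average approaches 3.5.
    rolls = rng.integers(1, 7, size=100_000)
    for n in (10, 1_000, 100_000):
        print(f"average of {n:>7} rolls: {rolls[:n].mean():.3f}")

    # Fair coin flips: the proportion of heads approaches 1/2, even though the
    # absolute difference between heads and tails tends to grow.
    flips = rng.integers(0, 2, size=1_000_000)  # 1 = heads
    heads = flips.cumsum()
    for k in (100, 10_000, 1_000_000):
        proportion = heads[k - 1] / k
        diff = abs(2 * heads[k - 1] - k)        # |heads - tails|
        print(f"n={k:>9}: proportion of heads {proportion:.4f}, |heads - tails| = {diff}")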
The reason that this method is important is mainly that, sometimes, it is difficult or impossible to use other approaches.[4] The average of the results obtained from a large number of trials may fail to converge in some cases. For instance, the average ofnresults taken from theCauchy distributionor somePareto distributions(α<1) will not converge asnbecomes larger; the reason isheavy tails.[5]The Cauchy distribution and the Pareto distribution represent two cases: the Cauchy distribution does not have an expectation,[6]whereas the expectation of the Pareto distribution (α<1) is infinite.[7]One way to generate the Cauchy-distributed example is where the random numbers equal thetangentof an angle uniformly distributed between −90° and +90°.[8]Themedianis zero, but the expected value does not exist, and indeed the average ofnsuch variables have the same distribution as one such variable. It does not converge in probability toward zero (or any other value) asngoes to infinity. If the trials embed aselection bias, typical in human economic/rational behaviour, the law of large numbers does not help in solving the bias, even if the number of trials is increased the selection bias remains. The Italian mathematicianGerolamo Cardano(1501–1576) stated without proof that the accuracies of empirical statistics tend to improve with the number of trials.[9][3]This was then formalized as a law of large numbers. A special form of the law of large numbers (for a binary random variable) was first proved byJacob Bernoulli.[10][3]It took him over 20 years to develop a sufficiently rigorous mathematical proof which was published in hisArs Conjectandi(The Art of Conjecturing) in 1713. He named this his "golden theorem" but it became generally known as "Bernoulli's theorem". This should not be confused withBernoulli's principle, named after Jacob Bernoulli's nephewDaniel Bernoulli. In 1837,S. D. Poissonfurther described it under the name"la loi des grands nombres"("the law of large numbers").[11][12][3]Thereafter, it was known under both names, but the "law of large numbers" is most frequently used. After Bernoulli and Poisson published their efforts, other mathematicians also contributed to refinement of the law, includingChebyshev,[13]Markov,Borel,Cantelli,KolmogorovandKhinchin.[3]Markov showed that the law can apply to a random variable that does not have a finite variance under some other weaker assumption, and Khinchin showed in 1929 that if the series consists of independent identically distributed random variables, it suffices that theexpected valueexists for the weak law of large numbers to be true.[14][15]These further studies have given rise to two prominent forms of the law of large numbers. One is called the "weak" law and the other the "strong" law, in reference to two different modes ofconvergenceof the cumulative sample means to the expected value; in particular, as explained below, the strong form implies the weak.[14] There are two different versions of the law of large numbers that are described below. They are called thestrong lawof large numbersand theweak lawof large numbers.[16][1]Stated for the case whereX1,X2, ... is an infinite sequence ofindependent and identically distributed (i.i.d.)Lebesgue integrablerandom variables with expected value E(X1) = E(X2) = ... 
= μ, both versions of the law state that the sample average $\overline{X}_n = \frac{1}{n}(X_1 + \cdots + X_n)$ converges to the expected value: $\overline{X}_n \to \mu$ as $n \to \infty$. (Lebesgue integrability of $X_j$ means that the expected value $E(X_j)$ exists according to Lebesgue integration and is finite. It does not mean that the associated probability measure is absolutely continuous with respect to Lebesgue measure.) Introductory probability texts often additionally assume identical finite variance $\operatorname{Var}(X_i) = \sigma^2$ (for all $i$) and no correlation between the random variables. In that case, the variance of the average of $n$ random variables is
$\operatorname{Var}(\overline{X}_n) = \operatorname{Var}\!\left(\tfrac{1}{n}(X_1 + \cdots + X_n)\right) = \frac{1}{n^2}\operatorname{Var}(X_1 + \cdots + X_n) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n},$
which can be used to shorten and simplify the proofs. This assumption of finite variance is not necessary: large or infinite variance will make the convergence slower, but the law of large numbers holds anyway.[17] Mutual independence of the random variables can be replaced by pairwise independence[18] or exchangeability[19] in both versions of the law. The difference between the strong and the weak version is concerned with the mode of convergence being asserted. For interpretation of these modes, see Convergence of random variables. The weak law of large numbers (also called Khinchin's law) states that given a collection of independent and identically distributed (i.i.d.) samples from a random variable with finite mean, the sample mean converges in probability to the expected value.[20] That is, for any positive number ε,
$\lim_{n\to\infty} \Pr\!\left(\,|\overline{X}_n - \mu| < \varepsilon\,\right) = 1.$
Interpreting this result, the weak law states that for any nonzero margin ε, no matter how small, with a sufficiently large sample there will be a very high probability that the average of the observations will be close to the expected value; that is, within the margin. As mentioned earlier, the weak law applies in the case of i.i.d. random variables, but it also applies in some other cases. For example, the variance may be different for each random variable in the series, keeping the expected value constant. If the variances are bounded, then the law applies, as shown by Chebyshev as early as 1867. (If the expected values change during the series, then we can simply apply the law to the average deviation from the respective expected values. The law then states that this converges in probability to zero.) In fact, Chebyshev's proof works so long as the variance of the average of the first n values goes to zero as n goes to infinity.[15] As an example, assume that each random variable in the series follows a Gaussian (normal) distribution with mean zero, but with variance equal to $2n/\log(n+1)$, which is not bounded. At each stage, the average will be normally distributed (as the average of a set of normally distributed variables). The variance of the sum is equal to the sum of the variances, which is asymptotic to $n^2/\log n$. The variance of the average is therefore asymptotic to $1/\log n$ and goes to zero. There are also examples of the weak law applying even though the expected value does not exist. 
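Both the σ²/n behaviour derived above and the heavy-tailed failure case can be seen in a short simulation. The exponential draws, the sample sizes and the seed are arbitrary choices; the second part revisits the Cauchy counterexample mentioned earlier, whose sample mean never settles down because the expectation does not exist.

    import numpy as np

    rng = np.random.default_rng(5)

    # 1. sigma^2 / n: the variance of the sample average shrinks with n.
    #    Exponential(1) draws (variance 1) are an arbitrary illustrative choice.
    for n in (10, 100, 1000):
        averages = rng.exponential(scale=1.0, size=(20_000, n)).mean(axis=1)
        print(f"n={n:>5}: empirical Var(mean) = {averages.var(ddof=1):.5f}, sigma^2/n = {1.0 / n:.5f}")

    # 2. Cauchy counterexample: with no expectation, the sample mean keeps
    #    jumping around, however large n becomes.
    for n in (10**2, 10**4, 10**6):
        print(f"n={n:>9}: Cauchy sample mean = {rng.standard_cauchy(n).mean():+.3f}")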
Thestrong law of large numbers(also calledKolmogorov's law) states that the sample averageconverges almost surelyto the expected value[21] That is, Pr(limn→∞X¯n=μ)=1.{\displaystyle \Pr \!\left(\lim _{n\to \infty }{\overline {X}}_{n}=\mu \right)=1.} What this means is that, as the number of trialsngoes to infinity, the probability that the average of the observations converges to the expected value, is equal to one. The modern proof of the strong law is more complex than that of the weak law, and relies on passing to an appropriate sub-sequence.[17] The strong law of large numbers can itself be seen as a special case of thepointwise ergodic theorem. This view justifies the intuitive interpretation of the expected value (for Lebesgue integration only) of a random variable when sampled repeatedly as the "long-term average". Law 3 is called the strong law because random variables which converge strongly (almost surely) are guaranteed to converge weakly (in probability). However the weak law is known to hold in certain conditions where the strong law does not hold and then the convergence is only weak (in probability). SeeDifferences between the weak law and the strong law. The strong law applies to independent identically distributed random variables having an expected value (like the weak law). This was proved by Kolmogorov in 1930. It can also apply in other cases. Kolmogorov also showed, in 1933, that if the variables are independent and identically distributed, then for the average to converge almost surely onsomething(this can be considered another statement of the strong law), it is necessary that they have an expected value (and then of course the average will converge almost surely on that).[22] If the summands are independent but not identically distributed, then provided that eachXkhas a finite second moment and ∑k=1∞1k2Var⁡[Xk]<∞.{\displaystyle \sum _{k=1}^{\infty }{\frac {1}{k^{2}}}\operatorname {Var} [X_{k}]<\infty .} This statement is known asKolmogorov's strong law, see e.g.Sen & Singer (1993, Theorem 2.3.10). Theweak lawstates that for a specified largen, the averageX¯n{\displaystyle {\overline {X}}_{n}}is likely to be nearμ.[23]Thus, it leaves open the possibility that|X¯n−μ|>ε{\displaystyle |{\overline {X}}_{n}-\mu |>\varepsilon }happens an infinite number of times, although at infrequent intervals. (Not necessarily|X¯n−μ|≠0{\displaystyle |{\overline {X}}_{n}-\mu |\neq 0}for alln). Thestrong lawshows that thisalmost surelywill not occur. It does not imply that with probability 1, we have that for anyε> 0the inequality|X¯n−μ|<ε{\displaystyle |{\overline {X}}_{n}-\mu |<\varepsilon }holds for all large enoughn, since the convergence is not necessarily uniform on the set where it holds.[24] The strong law does not hold in the following cases, but the weak law does.[25][26] There are extensions of the law of large numbers to collections of estimators, where the convergence is uniform over the collection; thus the nameuniform law of large numbers. Supposef(x,θ) is somefunctiondefined forθ∈ Θ, and continuous inθ. Then for any fixedθ, the sequence {f(X1,θ),f(X2,θ), ...} will be a sequence of independent and identically distributed random variables, such that the sample mean of this sequence converges in probability to E[f(X,θ)]. This is thepointwise(inθ) convergence. A particular example of auniform law of large numbersstates the conditions under which the convergence happensuniformlyinθ. 
If[29][30] Then E[f(X,θ)] is continuous inθ, and supθ∈Θ‖1n∑i=1nf(Xi,θ)−E⁡[f(X,θ)]‖→P0.{\displaystyle \sup _{\theta \in \Theta }\left\|{\frac {1}{n}}\sum _{i=1}^{n}f(X_{i},\theta )-\operatorname {E} [f(X,\theta )]\right\|{\overset {\mathrm {P} }{\rightarrow }}\ 0.} This result is useful to derive consistency of a large class of estimators (seeExtremum estimator). Borel's law of large numbers, named afterÉmile Borel, states that if an experiment is repeated a large number of times, independently under identical conditions, then the proportion of times that any specified event is expected to occur approximately equals the probability of the event's occurrence on any particular trial; the larger the number of repetitions, the better the approximation tends to be. More precisely, ifEdenotes the event in question,pits probability of occurrence, andNn(E) the number of timesEoccurs in the firstntrials, then with probability one,[31]Nn(E)n→pasn→∞.{\displaystyle {\frac {N_{n}(E)}{n}}\to p{\text{ as }}n\to \infty .} This theorem makes rigorous the intuitive notion of probability as the expected long-run relative frequency of an event's occurrence. It is a special case of any of several more general laws of large numbers in probability theory. Chebyshev's inequality. LetXbe arandom variablewith finiteexpected valueμand finite non-zerovarianceσ2. Then for anyreal numberk> 0, Pr(|X−μ|≥kσ)≤1k2.{\displaystyle \Pr(|X-\mu |\geq k\sigma )\leq {\frac {1}{k^{2}}}.} GivenX1,X2, ... an infinite sequence ofi.i.d.random variables with finite expected valueE(X1)=E(X2)=⋯=μ<∞{\displaystyle E(X_{1})=E(X_{2})=\cdots =\mu <\infty }, we are interested in the convergence of the sample average X¯n=1n(X1+⋯+Xn).{\displaystyle {\overline {X}}_{n}={\tfrac {1}{n}}(X_{1}+\cdots +X_{n}).} The weak law of large numbers states: This proof uses the assumption of finitevarianceVar⁡(Xi)=σ2{\displaystyle \operatorname {Var} (X_{i})=\sigma ^{2}}(for alli{\displaystyle i}). The independence of the random variables implies no correlation between them, and we have that Var⁡(X¯n)=Var⁡(1n(X1+⋯+Xn))=1n2Var⁡(X1+⋯+Xn)=nσ2n2=σ2n.{\displaystyle \operatorname {Var} ({\overline {X}}_{n})=\operatorname {Var} ({\tfrac {1}{n}}(X_{1}+\cdots +X_{n}))={\frac {1}{n^{2}}}\operatorname {Var} (X_{1}+\cdots +X_{n})={\frac {n\sigma ^{2}}{n^{2}}}={\frac {\sigma ^{2}}{n}}.} The common mean μ of the sequence is the mean of the sample average: E(X¯n)=μ.{\displaystyle E({\overline {X}}_{n})=\mu .} UsingChebyshev's inequalityonX¯n{\displaystyle {\overline {X}}_{n}}results in P⁡(|X¯n−μ|≥ε)≤σ2nε2.{\displaystyle \operatorname {P} (\left|{\overline {X}}_{n}-\mu \right|\geq \varepsilon )\leq {\frac {\sigma ^{2}}{n\varepsilon ^{2}}}.} This may be used to obtain the following: P⁡(|X¯n−μ|<ε)=1−P⁡(|X¯n−μ|≥ε)≥1−σ2nε2.{\displaystyle \operatorname {P} (\left|{\overline {X}}_{n}-\mu \right|<\varepsilon )=1-\operatorname {P} (\left|{\overline {X}}_{n}-\mu \right|\geq \varepsilon )\geq 1-{\frac {\sigma ^{2}}{n\varepsilon ^{2}}}.} Asnapproaches infinity, the expression approaches 1. And by definition ofconvergence in probability, we have obtained ByTaylor's theoremforcomplex functions, thecharacteristic functionof any random variable,X, with finite mean μ, can be written as φX(t)=1+itμ+o(t),t→0.{\displaystyle \varphi _{X}(t)=1+it\mu +o(t),\quad t\rightarrow 0.} AllX1,X2, ... have the same characteristic function, so we will simply denote thisφX. 
Among the basic properties of characteristic functions there are φ1nX(t)=φX(tn)andφX+Y(t)=φX(t)φY(t){\displaystyle \varphi _{{\frac {1}{n}}X}(t)=\varphi _{X}({\tfrac {t}{n}})\quad {\text{and}}\quad \varphi _{X+Y}(t)=\varphi _{X}(t)\varphi _{Y}(t)\quad }ifXandYare independent. These rules can be used to calculate the characteristic function ofX¯n{\displaystyle {\overline {X}}_{n}}in terms ofφX: φX¯n(t)=[φX(tn)]n=[1+iμtn+o(tn)]n→eitμ,asn→∞.{\displaystyle \varphi _{{\overline {X}}_{n}}(t)=\left[\varphi _{X}\left({t \over n}\right)\right]^{n}=\left[1+i\mu {t \over n}+o\left({t \over n}\right)\right]^{n}\,\rightarrow \,e^{it\mu },\quad {\text{as}}\quad n\to \infty .} The limiteitμis the characteristic function of the constant random variable μ, and hence by theLévy continuity theorem,X¯n{\displaystyle {\overline {X}}_{n}}converges in distributionto μ: X¯n→Dμforn→∞.{\displaystyle {\overline {X}}_{n}\,{\overset {\mathcal {D}}{\rightarrow }}\,\mu \qquad {\text{for}}\qquad n\to \infty .} μ is a constant, which implies that convergence in distribution to μ and convergence in probability to μ are equivalent (seeConvergence of random variables.) Therefore, This shows that the sample mean converges in probability to the derivative of the characteristic function at the origin, as long as the latter exists. We give a relatively simple proof of the strong law under the assumptions that theXi{\displaystyle X_{i}}areiid,E[Xi]=:μ<∞{\displaystyle {\mathbb {E} }[X_{i}]=:\mu <\infty },Var⁡(Xi)=σ2<∞{\displaystyle \operatorname {Var} (X_{i})=\sigma ^{2}<\infty }, andE[Xi4]=:τ<∞{\displaystyle {\mathbb {E} }[X_{i}^{4}]=:\tau <\infty }. Let us first note that without loss of generality we can assume thatμ=0{\displaystyle \mu =0}by centering. In this case, the strong law says that Pr(limn→∞X¯n=0)=1,{\displaystyle \Pr \!\left(\lim _{n\to \infty }{\overline {X}}_{n}=0\right)=1,}orPr(ω:limn→∞Sn(ω)n=0)=1.{\displaystyle \Pr \left(\omega :\lim _{n\to \infty }{\frac {S_{n}(\omega )}{n}}=0\right)=1.}It is equivalent to show thatPr(ω:limn→∞Sn(ω)n≠0)=0,{\displaystyle \Pr \left(\omega :\lim _{n\to \infty }{\frac {S_{n}(\omega )}{n}}\neq 0\right)=0,}Note thatlimn→∞Sn(ω)n≠0⟺∃ϵ>0,|Sn(ω)n|≥ϵinfinitely often,{\displaystyle \lim _{n\to \infty }{\frac {S_{n}(\omega )}{n}}\neq 0\iff \exists \epsilon >0,\left|{\frac {S_{n}(\omega )}{n}}\right|\geq \epsilon \ {\mbox{infinitely often}},}and thus to prove the strong law we need to show that for everyϵ>0{\displaystyle \epsilon >0}, we havePr(ω:|Sn(ω)|≥nϵinfinitely often)=0.{\displaystyle \Pr \left(\omega :|S_{n}(\omega )|\geq n\epsilon {\mbox{ infinitely often}}\right)=0.}Define the eventsAn={ω:|Sn|≥nϵ}{\displaystyle A_{n}=\{\omega :|S_{n}|\geq n\epsilon \}}, and if we can show that∑n=1∞Pr(An)<∞,{\displaystyle \sum _{n=1}^{\infty }\Pr(A_{n})<\infty ,}then the Borel-Cantelli Lemma implies the result. So let us estimatePr(An){\displaystyle \Pr(A_{n})}. We computeE[Sn4]=E[(∑i=1nXi)4]=E[∑1≤i,j,k,l≤nXiXjXkXl].{\displaystyle {\mathbb {E} }[S_{n}^{4}]={\mathbb {E} }\left[\left(\sum _{i=1}^{n}X_{i}\right)^{4}\right]={\mathbb {E} }\left[\sum _{1\leq i,j,k,l\leq n}X_{i}X_{j}X_{k}X_{l}\right].}We first claim that every term of the formXi3Xj,Xi2XjXk,XiXjXkXl{\displaystyle X_{i}^{3}X_{j},X_{i}^{2}X_{j}X_{k},X_{i}X_{j}X_{k}X_{l}}where all subscripts are distinct, must have zero expectation. This is becauseE[Xi3Xj]=E[Xi3]E[Xj]{\displaystyle {\mathbb {E} }[X_{i}^{3}X_{j}]={\mathbb {E} }[X_{i}^{3}]{\mathbb {E} }[X_{j}]}by independence, and the last term is zero—and similarly for the other terms. 
Therefore the only terms in the sum with nonzero expectation areE[Xi4]{\displaystyle {\mathbb {E} }[X_{i}^{4}]}andE[Xi2Xj2]{\displaystyle {\mathbb {E} }[X_{i}^{2}X_{j}^{2}]}. Since theXi{\displaystyle X_{i}}are identically distributed, all of these are the same, and moreoverE[Xi2Xj2]=(E[Xi2])2{\displaystyle {\mathbb {E} }[X_{i}^{2}X_{j}^{2}]=({\mathbb {E} }[X_{i}^{2}])^{2}}. There aren{\displaystyle n}terms of the formE[Xi4]{\displaystyle {\mathbb {E} }[X_{i}^{4}]}and3n(n−1){\displaystyle 3n(n-1)}terms of the form(E[Xi2])2{\displaystyle ({\mathbb {E} }[X_{i}^{2}])^{2}}, and soE[Sn4]=nτ+3n(n−1)σ4.{\displaystyle {\mathbb {E} }[S_{n}^{4}]=n\tau +3n(n-1)\sigma ^{4}.}Note that the right-hand side is a quadratic polynomial inn{\displaystyle n}, and as such there exists aC>0{\displaystyle C>0}such thatE[Sn4]≤Cn2{\displaystyle {\mathbb {E} }[S_{n}^{4}]\leq Cn^{2}}forn{\displaystyle n}sufficiently large. By Markov,Pr(|Sn|≥nϵ)≤1(nϵ)4E[Sn4]≤Cϵ4n2,{\displaystyle \Pr(|S_{n}|\geq n\epsilon )\leq {\frac {1}{(n\epsilon )^{4}}}{\mathbb {E} }[S_{n}^{4}]\leq {\frac {C}{\epsilon ^{4}n^{2}}},}forn{\displaystyle n}sufficiently large, and therefore this series is summable. Since this holds for anyϵ>0{\displaystyle \epsilon >0}, we have established the strong law of large numbers.[32]The proof can be strengthened immensely by dropping all finiteness assumptions on the second and fourth moments. It can also be extended for example to discuss partial sums of distributions without any finite moments. Such proofs use more intricate arguments to prove the same Borel-Cantelli predicate, a strategy attributed to Kolmogorov to conceptually bring the limit inside the probability parentheses.[33] The law of large numbers provides an expectation of an unknown distribution from a realization of the sequence, but also any feature of theprobability distribution.[1]By applyingBorel's law of large numbers, one could easily obtain the probability mass function. For each event in the objective probability mass function, one could approximate the probability of the event's occurrence with the proportion of times that any specified event occurs. The larger the number of repetitions, the better the approximation. As for the continuous case:C=(a−h,a+h]{\displaystyle C=(a-h,a+h]}, for small positive h. Thus, for large n: Nn(C)n≈p=P(X∈C)=∫a−ha+hf(x)dx≈2hf(a){\displaystyle {\frac {N_{n}(C)}{n}}\thickapprox p=P(X\in C)=\int _{a-h}^{a+h}f(x)\,dx\thickapprox 2hf(a)} With this method, one can cover the whole x-axis with a grid (with grid size 2h) and obtain a bar graph which is called ahistogram. One application of the law of large numbers is an important method of approximation known as theMonte Carlo method,[3]which uses a random sampling of numbers to approximate numerical results. The algorithm to compute an integral of f(x) on an interval [a,b] is as follows:[3] We can find the integral off(x)=cos2(x)x3+1{\displaystyle f(x)=cos^{2}(x){\sqrt {x^{3}+1}}}on [-1,2]. Using traditional methods to compute this integral is very difficult, so the Monte Carlo method can be used here.[3]Using the above algorithm, we get ∫−12f(x)dx{\displaystyle \int _{-1}^{2}f(x){dx}}= 0.905 when n=25 and ∫−12f(x)dx{\displaystyle \int _{-1}^{2}f(x){dx}}= 1.028 when n=250 We observe that as n increases, the numerical value also increases. 
The actual value of the integral is $\int_{-1}^{2} f(x)\,dx \approx 1.000194$; as the number of samples increases, the Monte Carlo estimate tends to settle near this true value, as the law of large numbers guarantees.[3] Another example is the integration of $f(x) = \frac{e^{x}-1}{e-1}$ on [0, 1].[34] Using the Monte Carlo method and the LLN, we can see that as the number of samples increases, the numerical value gets closer to 0.4180233.[34]
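The Monte Carlo estimates quoted above can be reproduced (up to random fluctuation) with a few lines of code: the estimator draws uniform points on the interval and averages f, scaled by the interval length. The sample sizes and the random seed are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(7)

    def mc_integral(f, a, b, n):
        """Plain Monte Carlo estimate of the integral of f over [a, b]."""
        x = rng.uniform(a, b, size=n)
        return (b - a) * f(x).mean()

    f1 = lambda x: np.cos(x) ** 2 * np.sqrt(x ** 3 + 1)   # integrand on [-1, 2]
    f2 = lambda x: (np.exp(x) - 1) / (np.e - 1)           # integrand on [0, 1]

    for n in (25, 250, 25_000, 2_500_000):
        print(f"n={n:>9}: f1 on [-1, 2] ~ {mc_integral(f1, -1.0, 2.0, n):.6f}, "
              f"f2 on [0, 1] ~ {mc_integral(f2, 0.0, 1.0, n):.6f}")

As n grows the two estimates settle near 1.000194 and 0.4180233 respectively, which is the law of large numbers at work: the average of f over uniformly random points converges to its expected value, i.e. the integral divided by the interval length.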
https://en.wikipedia.org/wiki/Law_of_large_numbers
Insoftware development,Linus's lawis the assertion that "given enough eyeballs, allbugsare shallow". The law was formulated byEric S. Raymondin his essay and bookThe Cathedral and the Bazaar(1999), and was named in honor ofLinus Torvalds.[1][2] A more formal statement is: "Given a large enoughbeta-testerand co-developerbase, almost every problem will be characterized quickly and the fix obvious to someone." Presenting the code to multiple developers with the purpose of reaching consensus about its acceptance is a simple form ofsoftware reviewing. Researchers and practitioners have repeatedly shown the effectiveness of reviewing processes in finding bugs and security issues.[3] InFacts and Fallacies about Software Engineering,Robert Glassrefers to the law as a "mantra" of theopen sourcemovement, but calls it a fallacy due to the lack of supporting evidence and because research has indicated that the rate at which additional bugs are uncovered does not scale linearly with the number of reviewers; rather, there is a small maximum number of useful reviewers, between two and four, and additional reviewers above this number uncover bugs at a much lower rate.[4]While closed-source practitioners also promote stringent, independentcode analysisduring a software project's development, they focus on in-depth review by a few and not primarily the number of "eyeballs".[5] The persistence of theHeartbleedsecurity bug in a critical piece of code for two years has been considered as a refutation of Raymond's dictum.[6][7][8][9]Larry Seltzer suspects that the availability of source code may cause some developers and researchers to perform less extensive tests than they would withclosed sourcesoftware, making it easier for bugs to remain.[9]In 2015, theLinux Foundation's executive director Jim Zemlin argued that the complexity of modern software has increased to such levels that specific resource allocation is desirable to improve its security. Regarding some of 2014's largest global open sourcesoftware vulnerabilities, he says, "In these cases, the eyeballs weren't really looking".[8]Large scale experiments or peer-reviewed surveys to test how well the mantra holds in practice have not been performed.[10] Empirical support of the validity of Linus's law[11]was obtained by comparing popular and unpopular projects of the same organization. Popular projects are projects with the top 5% ofGitHubstars (7,481 stars or more). Bug identification was measured using the corrective commit probability, the ratio of commits determined to be related to fixing bugs. The analysis showed that popular projects had a higher ratio of bug fixes (e.g., Google's popular projects had a 27% higher bug fix rate than Google's less popular projects). Since it is unlikely that Google lowered its code quality standards in more popular projects, this is an indication of increased bug detection efficiency in popular projects.
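A rough way to estimate a corrective commit probability like the one used in that study is to classify commit messages by bug-fixing keywords. The keyword list and the classification rule below are simplifying assumptions for illustration; the published methodology is more elaborate, and the sketch assumes git is installed and is run inside a repository.

    import re
    import subprocess

    # Hypothetical keyword heuristic; the cited study uses a more careful model.
    BUGFIX_PATTERN = re.compile(r"\b(fix(es|ed)?|bug|defect|fault|error|patch)\b", re.IGNORECASE)

    def corrective_commit_probability(repo_path="."):
        """Fraction of commits whose message looks like a bug fix."""
        log = subprocess.run(
            ["git", "-C", repo_path, "log", "--pretty=%s"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        if not log:
            return 0.0
        fixes = sum(1 for message in log if BUGFIX_PATTERN.search(message))
        return fixes / len(log)

    print(f"corrective commit probability: {corrective_commit_probability():.2%}")

Comparing this ratio between widely watched and little-watched projects of the same organisation is, in essence, the comparison the study describes.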
https://en.wikipedia.org/wiki/Linus%27s_law
Anexpertis somebody who has a broad and deepunderstandingandcompetencein terms ofknowledge,skillandexperiencethroughpracticeandeducationin a particular field or area of study. Informally, an expert is someone widely recognized as areliablesource oftechniqueor skill whose faculty for judging or deciding rightly, justly, or wisely is accorded authority and status bypeersor thepublicin a specific well-distinguished domain. An expert, more generally, is a person with extensive knowledge orabilitybased on research, experience, or occupation and in a particular area of study. Experts are called in for advice on their respective subject, but they do not always agree on the particulars of a field of study. An expert can be believed, by virtue ofcredentials,training,education,profession,publicationor experience, to have special knowledge of a subject beyond that of the average person, sufficient that others mayofficially(andlegally) rely upon the individual'sopinionon that topic. Historically, an expert was referred to as asage. The individual was usually a profoundthinkerdistinguished forwisdomand soundjudgment. In specific fields, the definition of expert is well established by consensus and therefore it is not always necessary for individuals to have a professional or academicqualificationfor them to be accepted as an expert. In this respect, a shepherd with fifty years of experience tending flocks would be widely recognized as having complete expertise in the use and training of sheep dogs and the care of sheep. Research in this area attempts to understand the relation between expert knowledge, skills and personal characteristics and exceptional performance. Some researchers have investigated the cognitive structures and processes of experts. The fundamental aim of this research is to describe what it is that experts know and how they use their knowledge to achieve performance that most people assume requires extreme or extraordinary ability. Studies have investigated the factors that enable experts to be fast and accurate.[1] Expertise characteristics, skills and knowledge of a person (that is, expert) or of a system, which distinguish experts from novices and less experienced people. In many domains there are objective measures of performance capable of distinguishing experts from novices: expert chess players will almost always win games against recreational chess players; expertmedical specialistsare more likely to diagnose a disease correctly; etc. The word expertise is used to refer also toexpert determination, where an expert is invited to decide a disputed issue. The decision may be binding or advisory, according to the agreement between the parties in dispute. There are two academic approaches to the understanding and study of expertise. The first understands expertise as an emergent property ofcommunities of practice. In this view expertise is socially constructed; tools for thinking and scripts for action are jointly constructed within social groups enabling that group jointly to define and acquire expertise in some domain. In the second view, expertise is a characteristic of individuals and is a consequence of the human capacity for extensive adaptation to physical and social environments. Many accounts of the development of expertise emphasize that it comes about through long periods of deliberate practice. In many domains of expertise estimates of 10 years' experience[2]deliberate practice are common. 
Recent research on expertise emphasizes the nurture side of thenature and nurtureargument.[2]Some factors not fitting the nature-nurture dichotomy are biological but not genetic, such as starting age, handedness, and season of birth.[3][4][5] In the field of education there is a potential "expert blind spot" (see alsoDunning–Kruger effect) in newly practicing educators who are experts in their content area. This is based on the "expert blind spot hypothesis" researched byMitchell Nathanand Andrew Petrosino.[6]Newly practicing educators with advanced subject-area expertise of an educational content area tend to use the formalities and analysis methods of their particular area of expertise as a major guiding factor of student instruction and knowledge development, rather than being guided by student learning and developmental needs that are prevalent among novice learners. The blind spot metaphor refers to the physiological blind spot in human vision in which perceptions of surroundings and circumstances are strongly impacted by their expectations. Beginning practicing educators tend to overlook the importance of novice levels of prior knowledge and other factors involved in adjusting and adapting pedagogy for learner understanding. This expert blind spot is in part due to an assumption that novices' cognitive schemata are less elaborate, interconnected, and accessible than experts' and that their pedagogical reasoning skills are less well developed.[7]Essential knowledge of subject matter for practicing educators consists of overlapping knowledge domains: subject matter knowledge and pedagogical content matter.[8]Pedagogical content matter consists of an understanding of how to represent certain concepts in ways appropriate to the learner contexts, including abilities and interests. The expert blind spot is a pedagogical phenomenon that is typically overcome through educators' experience with instructing learners over time.[9][10] In line with the socially constructed view of expertise, expertise can also be understood as a form ofpower; that is, experts have the ability to influence others as a result of their defined social status. By a similar token, a fear of experts can arise from fear of an intellectual elite's power. In earlier periods of history, simply being able to read made one part of an intellectual elite. The introduction of theprinting pressin Europe during the fifteenth century and the diffusion of printed matter contributed to higher literacy rates and wider access to the once-rarefied knowledge of academia. The subsequent spread of education and learning changed society, and initiated an era of widespread education whose elite would now instead be those who produced the written content itself for consumption, in education and all other spheres.[citation needed] Plato's "Noble Lie", concerns expertise. Plato did not believe most people were clever enough to look after their own and society's best interest, so the few clever people of the world needed to lead the rest of the flock. 
Therefore, the idea was born that only the elite should know the truth in its complete form and the rulers, Plato said, must tell the people of the city "the noble lie" to keep them passive and content, without the risk of upheaval and unrest.[citation needed] In contemporary society, doctors and scientists, for example, are considered to be experts in that they hold a body of dominant knowledge that is, on the whole, inaccessible to the layman.[11]However, this inaccessibility and perhaps even mystery that surrounds expertise does not cause the layman to disregard the opinion of the experts on account of the unknown. Instead, the complete opposite occurs whereby members of the public believe in and highly value the opinion of medical professionals or of scientific discoveries,[11]despite not understanding it. A number of computational models have been developed incognitive scienceto explain the development from novice to expert. In particular,Herbert A. Simonand Kevin Gilmartin proposed a model of learning in chess called MAPP (Memory-Aided Pattern Recognizer).[12]Based on simulations, they estimated that about 50,000chunks(units of memory) are necessary to become an expert, and hence the many years needed to reach this level. More recently, theCHREST model(Chunk Hierarchy and REtrieval STructures) has simulated in detail a number of phenomena in chess expertise (eye movements, performance in a variety of memory tasks, development from novice to expert) and in other domains.[13][14] An important feature of expert performance seems to be the way in which experts are able to rapidly retrieve complex configurations of information from long-term memory. They recognize situations because they have meaning. It is perhaps this central concern with meaning and how it attaches to situations which provides an important link between the individual and social approaches to the development of expertise. Work on "Skilled Memory and Expertise" byAnders EricssonandJames J. Staszewskiconfronts the paradox of expertise and claims that people not only acquire content knowledge as they practice cognitive skills, they also develop mechanisms that enable them to use a large and familiar knowledge base efficiently.[1] Work onexpert systems(computer software designed to provide an answer to a problem, or clarify uncertainties where normally one or more human experts would need to be consulted) typically is grounded on the premise that expertise is based on acquired repertoires of rules and frameworks for decision making which can be elicited as the basis for computer supported judgment and decision-making. However, there is increasing evidence that expertise does not work in this fashion. Rather, experts recognize situations based on experience of many prior situations. They are in consequence able to make rapid decisions in complex and dynamic situations. In a critique of the expert systems literature, Dreyfus & Dreyfus suggest:[15] If one asks an expert for the rules he or she is using, one will, in effect, force the expert to regress to the level of a beginner and state the rules learned in school. Thus, instead of using rules he or she no longer remembers, as the knowledge engineers suppose, the expert is forced to remember rules he or she no longer uses. ... No amount of rules and facts can capture the knowledge an expert has when he or she has stored experience of the actual outcomes of tens of thousands of situations. 
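For concreteness, the "repertoire of rules" premise that expert systems rest on, and that Dreyfus and Dreyfus criticise above, can be pictured as a small forward-chaining rule base. The rules and facts below are invented placeholders, not a real diagnostic system; production-quality expert systems add certainty factors, conflict resolution and explanation facilities.

    # A toy forward-chaining rule engine: (conditions, conclusion) pairs that a
    # knowledge engineer might elicit from a hypothetical expert.
    RULES = [
        ({"fever", "cough"}, "possible_flu"),
        ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
        ({"rash"}, "possible_allergy"),
    ]

    def infer(facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"fever", "cough", "short_of_breath"}))
    # -> includes 'possible_flu' and 'refer_to_doctor'

The critique quoted above is precisely that experienced practitioners do not consult explicit rules of this kind; they recognise whole situations, which is difficult to capture in such a representation.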
The role of long-term memory in the skilled memory effect was first articulated by Chase and Simon in their classic studies of chess expertise. They asserted that organized patterns of information stored in long-term memory (chunks) mediated experts' rapid encoding and superior retention. Their study revealed that all subjects retrieved about the same number of chunks, but the size of the chunks varied with subjects' prior experience. Experts' chunks contained more individual pieces than those of novices. This research did not investigate how experts find, distinguish, and retrieve the right chunks from the vast number they hold without a lengthy search of long-term memory. Skilled memory enables experts to rapidly encode, store, and retrieve information within the domain of their expertise and thereby circumvent the capacity limitations that typically constrain novice performance. For example, it explains experts' ability to recall large amounts of material displayed for only brief study intervals, provided that the material comes from their domain of expertise. When unfamiliar material (not from their domain of expertise) is presented to experts, their recall is no better than that of novices. The first principle of skilled memory, themeaningful encoding principle,states that experts exploit prior knowledge to durably encode information needed to perform a familiar task successfully. Experts form more elaborate and accessible memory representations than novices. The elaborate semantic memory network creates meaningful memory codes that create multiple potential cues and avenues for retrieval. The second principle, theretrieval structure principlestates that experts develop memory mechanisms called retrieval structures to facilitate the retrieval of information stored in long-term memory. These mechanisms operate in a fashion consistent with the meaningful encoding principle to provide cues that can later be regenerated to retrieve the stored information efficiently without a lengthy search. The third principle, thespeed up principlestates that long-term memory encoding and retrieval operations speed up with practice, so that their speed and accuracy approach the speed and accuracy of short-term memory storage and retrieval. Examples of skilled memory research described in the Ericsson and Stasewski study include:[1] Much of the research regarding expertise involves the studies of how experts and novices differ in solving problems.[16]Mathematics[17]and physics[18]are common domains for these studies. One of the most cited works in this area examines how experts (PhD students in physics) and novices (undergraduate students that completed one semester of mechanics) categorize and represent physics problems. They found that novices sort problems into categories based upon surface features (e.g., keywords in the problem statement or visual configurations of the objects depicted). 
Experts, however, categorize problems based upon their deep structures (i.e., the main physics principle used to solve the problem).[19] Their findings also suggest that while the schemas of both novices and experts are activated by the same features of a problem statement, the experts' schemas contain more procedural knowledge which aid in determining which principle to apply, and novices' schemas contain mostly declarative knowledge which do not aid in determining methods for solution.[19] Relative to a specific field, an expert has: Marie-Line Germain developed a psychometric measure of perception of employee expertise called the Generalized Expertise Measure.[20]She defined a behavioral dimension in experts, in addition to the dimensions suggested by Swanson and Holton.[21]Her 16-item scale contains objective expertise items and subjective expertise items. Objective items were named Evidence-Based items. Subjective items (the remaining 11 items from the measure below) were named Self-Enhancement items because of their behavioral component.[20] Scholars inrhetorichave also turned their attention to the concept of the expert. Considered an appeal to ethos or "the personal character of the speaker",[22]established expertise allows a speaker to make statements regarding special topics of which the audience may be ignorant. In other words, the expert enjoys the deference of the audience's judgment and can appeal to authority where a non-expert cannot. In The Rhetoric of Expertise, E. Johanna Hartelius defines two basic modes of expertise: autonomous and attributed expertise. While an autonomous expert can "possess expert knowledge without recognition from other people," attributed expertise is "a performance that may or may not indicate genuine knowledge." With these two categories, Hartelius isolates the rhetorical problems faced by experts: just as someone with autonomous expertise may not possess the skill to persuade people to hold their points of view, someone with merely attributed expertise may be persuasive but lack the actual knowledge pertaining to a given subject. The problem faced by audiences follows from the problem facing experts: when faced with competing claims of expertise, what resources do non-experts have to evaluate claims put before them?[23] Hartelius and other scholars have also noted the challenges that projects such as Wikipedia pose to how experts have traditionally constructed their authority. In "Wikipedia and the Emergence of Dialogic Expertise", she highlights Wikipedia as an example of the "dialogic expertise" made possible by collaborative digital spaces. Predicated upon the notion that "truth emerges from dialogue", Wikipedia challenges traditional expertise both because anyone can edit it and because no single person, regardless of their credentials, can end a discussion by fiat. In other words, the community, rather than single individuals, direct the course of discussion. 
The production of knowledge, then, as a process of dialogue and argumentation, becomes an inherently rhetorical activity.[24] Hartelius calls attention to two competing norm systems of expertise: “network norms of dialogic collaboration” and “deferential norms of socially sanctioned professionalism”; Wikipedia being evidence of the first.[25]Drawing on aBakhtinianframework, Hartelius posits that Wikipedia is an example of an epistemic network that is driven by the view that individuals' ideas clash with one another so as to generate expertise collaboratively.[25]Hartelius compares Wikipedia's methodology of open-ended discussions of topics to that ofBakhtin's theory of speech communication, where genuinedialogueis considered a live event, which is continuously open to new additions and participants.[25]Hartelius acknowledges thatknowledge,experience,training,skill, andqualificationare important dimensions of expertise but posits that the concept is more complex than sociologists and psychologists suggest.[25]Arguing that expertise is rhetorical, then, Hartelius explains that expertise "is not simply about one person's skills being different from another's. It is also fundamentally contingent on a struggle for ownership and legitimacy."[25]Effective communication is an inherent element in expertise in the same style as knowledge is. Rather than leaving each other out, substance and communicative style are complementary.[25]Hartelius further suggests that Wikipedia's dialogic construction of expertise illustrates both the instrumental and the constitutive dimensions of rhetoric; instrumentally as it challengestraditional encyclopediasand constitutively as a function of its knowledge production.[25]Going over the historical development of the encyclopedic project, Hartelius argues that changes in traditional encyclopedias have led to changes in traditional expertise. Wikipedia's use ofhyperlinksto connect one topic to another depends on, and develops, electronic interactivity meaning that Wikipedia's way of knowing is dialogic.[25]Dialogic expertise then, emerges from multiple interactions between utterances within thediscourse community.[25]The ongoing dialogue between contributors on Wikipedia not only results in the emergence of truth; it also explicates the topics one can be an expert of. As Hartelius explains, "the very act of presenting information about topics that are not included in traditional encyclopedias is a construction of new expertise."[25]While Wikipedia insists that contributors must only publish preexisting knowledge, the dynamics behind dialogic expertise creates new information nonetheless. Knowledge production is created as a function of dialogue.[25]According to Hartelius, dialogic expertise has emerged on Wikipedia not only because of its interactive structure but also because of the site's hortative discourse which is not found in traditional encyclopedias.[25]By Wikipedia's hortative discourse, Hartelius means various encouragements to edit certain topics and instructions on how to do so that appear on the site.[25]One further reason to the emergence of dialogic expertise on Wikipedia is the site'scommunity pages, which function as atechne; explicating Wikipedia's expert methodology.[25] Building on Hartelius, Damien Pfister developed the concept of "networked expertise". Noting that Wikipedia employs a"many to many"rather than a "one to one" model of communication, he notes how expertise likewise shifts to become a quality of a group rather than an individual. 
With the information traditionally associated with individual experts now stored within a text produced by a collective, knowing about something is less important than knowing how to find something. As he puts it, "With the internet, the historical power of subject matter expertise is eroded: the archival nature of the Web means that what and how to information is readily available." The rhetorical authority previously afforded to subject matter expertise, then, is given to those with the procedural knowledge of how to find information called for by a situation.[26] An expert differs from a specialist in that a specialist has to be able to solve a problem and an expert has to know its solution. The opposite of an expert is generally known as a layperson, while someone who occupies a middle grade of understanding is generally known as a technician and is often employed to assist experts. A person may well be an expert in one field and a layperson in many other fields. The concepts of experts and expertise are debated within the field of epistemology under the general heading of expert knowledge. In contrast, the opposite of a specialist would be a generalist or polymath. The term is widely used informally, with people being described as 'experts' in order to bolster the relative value of their opinion, when no objective criteria for their expertise are available. The term crank is likewise used to disparage opinions. Academic elitism arises when experts become convinced that only their opinion is useful, sometimes on matters beyond their personal expertise. In contrast to an expert, a novice (known colloquially as a newbie or 'greenhorn') is any person who is new to a science, field of study, activity, or social cause and who is undergoing training in order to meet the normal requirements of being regarded as a mature and equal participant. "Expert" is also mistakenly interchanged with the term "authority" in new media. An expert can be an authority if, through relationships with people and technology, that expert is allowed to control access to his or her expertise. However, a person who merely wields authority is not by right an expert. In new media, users are being misled by the term "authority". Many sites and search engines such as Google and Technorati use the term "authority" to denote the link value and traffic to a particular topic. However, this authority only measures populist information. It in no way assures that the author of that site or blog is an expert. An expert is not to be confused with a professional. A professional is someone who gets paid to do something. An amateur is the opposite of a professional, not the opposite of an expert. Some characteristics of the development of an expert have been identified. Mark Twain defined an expert as "an ordinary fellow from another town".[29] Will Rogers described an expert as "A man fifty miles from home with a briefcase." Danish scientist and Nobel laureate Niels Bohr defined an expert as "A person that has made every possible mistake within his or her field."[30] Malcolm Gladwell describes expertise as a matter of practicing the correct way for a total of around 10,000 hours.
https://en.wikipedia.org/wiki/Networked_expertise
Vox populi (/ˌvɒks ˈpɒpjʊli, -laɪ/ VOKS POP-yuu-lee, -⁠lye[1]) is a Latin phrase (originally Vox populi, vox Dei – "The voice of the people is the voice of God") that literally means "voice of the people." It is used in English to mean "the opinion of the majority of the people."[1][2] In journalism, vox pop or man on the street refers to short interviews with members of the public.[3] American television personality Steve Allen, as the host of The Tonight Show, further developed the "man on the street" interviews and audience-participation comedy breaks that have become commonplace on late-night TV. Usually the interviewees are shown in public places, and are supposed to be giving spontaneous opinions in a chance encounter – unrehearsed persons, not selected in any way. As such, journalists almost always refer to them by the abbreviation vox pop.[4] In U.S. broadcast journalism, it is often referred to as a man on the street interview or MOTS.[5] The results of such an interview are unpredictable at best, and therefore vox pop material is usually edited down very tightly. This presents difficulties of balance, in that the selection used ought to be, from the point of view of journalistic standards, a fair cross-section of opinions. Although the two can often be confused, a vox pop is not a form of survey. Each person is asked the same question; the aim is to get a variety of answers and opinions on any given subject. Journalists are usually instructed to approach a wide range of people to get varied answers from different points of view. The interviewees should be of various ages, sexes, classes and communities so that the diverse views and reactions of the general public will be known. Generally, the vox pop question will be asked of different persons in different parts of streets or public places. As an exception, for a specific topic or situation that does not concern the general public, the question can be asked only of a specific group in order to learn that group's perception of or reaction to the topic or issue; e.g., a question can be asked of a group of students about the quality of their education. With increasing public familiarity with the term, several radio and television programs have been named "vox pop" in allusion to this practice. The Latin phrase Vox populi, vox dei (/ˌvɒks ˈpɒpjuːli ˌvɒks ˈdeɪi/), 'The voice of the people [is] the voice of god', is an old proverb. An early reference to the expression is in a letter from Alcuin of York to Charlemagne in 798 CE.[6] The full quotation from Alcuin reads:[7][8] Nec audiendi qui solent dicere, Vox populi, vox Dei, quum tumultuositas vulgi semper insaniae proxima sit. ("And those people should not be listened to who keep saying the voice of the people is the voice of God, since the riotousness of the crowd is always very close to madness.") Writing in the early 12th century, William of Malmesbury refers to the saying as a "proverb".[9] Of those who promoted the phrase and the idea, Archbishop of Canterbury Walter Reynolds brought charges against King Edward II in 1327 in a sermon "Vox populi, vox Dei".[10][11] John Locke, in his Of the Conduct of the Understanding (1706), criticises the phrase, writing "I don't remember God delivering his oracles by the multitude, or nature delivering truths by the herd!".[12]
https://en.wikipedia.org/wiki/Vox_populi
Amazon Mechanical Turk(MTurk) is acrowdsourcingwebsite with which businesses can hire remotely located "crowdworkers" to perform discrete on-demand tasks that computers are currently unable to do as economically. It is operated underAmazon Web Services, and is owned byAmazon.[1]Employers, known asrequesters,post jobs known asHuman Intelligence Tasks(HITs), such as identifying specific content in an image or video, writing product descriptions, or answering survey questions. Workers, colloquially known asTurkersorcrowdworkers, browse among existing jobs and complete them in exchange for a fee set by the requester. To place jobs, requesters use an openapplication programming interface(API), or the more limited MTurk Requester site.[2]As of April 2019[update], requesters could register from 49 approved countries.[3] The service was conceived byVenky Harinarayanin a U.S. patent disclosure in 2001.[4][5][6]Amazon coined the termartificial artificial intelligencefor processes that outsource some parts of a computer program to humans, for those tasks carried out much faster by humans than computers. It is claimed[by whom?]thatJeff Bezoswas responsible for proposing the development of Amazon's Mechanical Turk to realize this process.[7] The nameMechanical Turkwas inspired by an 18th-centurychess-playingautomatonof the same name, often simply nicknamed as "The Turk". Made by German-Hungarian author and engineerWolfgang von Kempelen, the machine became an international spectacle, touring Europe, and beating bothNapoleon BonaparteandBenjamin Franklin. It was later revealed that this "machine" was not an automaton, but rather controlled by a humanchess masterhidden in the cabinet beneath the board, puppeting the movements of a humanoid dummy. Analogously, the Mechanical Turk online service uses remote human labor hidden behind a computer interface to help employers perform tasks that are not currently possible using a true machine. MTurk launched publicly on November 2, 2005. Its user base grew quickly. In early- to mid-November 2005, there were tens of thousands of jobs, all uploaded to the system by Amazon itself for some of its internal tasks that required human intelligence. HIT types expanded to include transcribing, rating, image tagging, surveys, and writing. In March 2007, there were reportedly more than 100,000 workers in over 100 countries.[8]This increased to over 500,000 registered workers from over 190 countries in January 2011.[9]That year, Techlist published an interactive map pinpointing the locations of 50,000 of their MTurk workers around the world.[10]By 2018, research demonstrated that while over 100,000 workers were available on the platform at any time, only around 2,000 were actively working.[11] A user of Mechanical Turk can be either a "Worker" (contractor) or a "Requester" (employer). Workers have access to a dashboard that displays three sections: total earnings, HIT status, and HIT totals. Workers set their own hours and are not under any obligation to accept any particular task. Amazon classifies Workers ascontractorsrather than employees and does not pay payroll taxes. Classifying Workers as contractors allows Amazon to avoid things likeminimum wage,overtime, andworkers compensation—this is a common practice among "gig economy" platforms. In the United States, where a supermajority of MTurk workers are located, workers are legally required to report their income asself-employmentincome. 
The differing legality of this arrangement in countries with stronger labour laws makes the actual international accessibility of the program uncertain. In 2013, the average wage for the multiple microtasks assigned, if performed quickly, was about one dollar an hour, with each task averaging a few cents.[12] However, calculating average hourly earnings on a microtask site is extremely difficult; several sources of data show average hourly earnings in the $5–$9 per hour range[13][14][15][16] among a substantial number of Workers, while the most experienced, active, and proficient workers may earn over $20 per hour.[17] Workers can have a postal address anywhere in the world. Payment for completing tasks can be redeemed on Amazon.com via gift certificate (gift certificates are the only payment option available to international workers, apart from India) or can be transferred to a Worker's U.S. bank account. Requesters can ask that Workers fulfill qualifications before engaging in a task, and they can establish a test designed to verify the qualification. They can also accept or reject the result sent by the Worker, which affects the Worker's reputation. As of April 2019, Requesters paid Amazon a minimum 20% commission on the price of successfully completed jobs, with increased amounts for additional services.[8] Requesters can use the Amazon Mechanical Turk API to programmatically integrate the results of the work directly into their business processes and systems. When employers set up a job, they must specify the details of the job they want completed. Workers have been primarily located in the United States since the platform's inception,[18] with demographics generally similar to the overall Internet population in the U.S.[19] Within the U.S., workers are fairly evenly spread across states, proportional to each state's share of the U.S. population.[20] As of 2019, between 15 and 30 thousand people in the U.S. complete at least one HIT each month, and about 4,500 new people join MTurk each month.[21] Cash payments for Indian workers were introduced in 2010, which updated the demographics of workers, who nevertheless remained primarily within the United States.[22] A website showing worker demographics in May 2015 showed that 80% of workers were located in the United States, with the remaining 20% located elsewhere in the world, most of whom were in India.[23] In May 2019, approximately 60% were in the U.S. and 40% elsewhere (approximately 30% in India).[24] In early 2023, about 90% of workers were from the U.S. and about half of the remainder were from India.[25] Since 2010, numerous researchers have explored the viability of Mechanical Turk to recruit subjects for social science experiments. Researchers have generally found that while samples of respondents obtained through Mechanical Turk do not perfectly match all relevant characteristics of the U.S. population, they are also not wildly misrepresentative.[26][27] As a result, thousands of papers that rely on data collected from Mechanical Turk workers are published each year, including hundreds in top-ranked academic journals. A challenge with using MTurk for human-subject research has been maintaining data quality. A study published in 2021 found that the types of quality control approaches used by researchers (such as checking for bots, VPN users, or workers willing to submit dishonest responses) can meaningfully influence survey results.
They demonstrated this via impact on three common behavioral/mental healthcare screening tools.[28]Even though managing data quality requires work from researchers, there is a large body of research showing how to gather high quality data from MTurk.[29][30][31][32]The cost of using MTurk is considerably lower than many other means of conducting surveys, so many researchers continue to use it. The general consensus among researchers is that the service works best for recruiting a diverse sample; it is less successful with studies that require more precisely defined populations or that require a representative sample of the population as a whole.[33]Many papers have been published on the demographics of the MTurk population.[20][34][35]MTurk workers tend to be younger, more educated, more liberal, and slightly less wealthy than the U.S. population overall.[36] Supervised Machine Learningalgorithms require large amounts of human-annotated data to be trained successfully. Machine learning researchers have hired Workers through Mechanical Turk to produce datasets such as SQuAD, aquestion answeringdataset.[37] Since 2007[update], the service has been used to search for prominent missing individuals. This use was first suggested during the search forJames Kim, but his body was found before any technical progress was made. That summer, computer scientistJim Graydisappeared on his yacht and Amazon'sWerner Vogels, a personal friend, made arrangements forDigitalGlobe, which provides satellite data forGoogle MapsandGoogle Earth, to put recent photography of theFarallon Islandson Mechanical Turk. A front-page story onDiggattracted 12,000 searchers who worked with imaging professionals on the same data. The search was unsuccessful.[38] In September 2007, a similar arrangement was repeated in thesearch for aviator Steve Fossett. Satellite data was divided into 85-square-metre (910 sq ft) sections, and Mechanical Turk users were asked to flag images with "foreign objects" that might be a crash site or other evidence that should be examined more closely.[39]This search was also unsuccessful. The satellite imagery was mostly within a 50-mile radius,[40]but the crash site was eventually found by hikers about a year later, 65 miles away.[41] MTurk has also been used as a tool for artistic creation. One of the first artists to work with Mechanical Turk wasxtine burrough, withThe Mechanical Olympics(2008),[42][43]Endless Om(2015), andMediations on Digital Labor(2015).[44][45]Another work was artistAaron Koblin'sTen Thousand Cents(2008).[further explanation needed] Programmers have developed browser extensions andscriptsdesigned to simplify the process of completing jobs. Amazon has stated that they disapprove of scripts that completely automate the process and preclude the human element. This is because of the concern that the task completion process—e.g. answering a survey—could be gamed with random responses, and the resultant collected data could be worthless.[46]Accounts using so-called automated bots have been banned.There are services that extend the capabilities to MTurk.[clarification needed] Amazon makes available anapplication programming interface(API) for the MTurk system. 
The MTurk API lets a programmer submit jobs, retrieve completed work, and approve or reject that work.[47]In 2017, Amazon launched support for AWS Software Development Kits (SDK), allowing for nine new SDKs available to MTurk Users.[importance?]MTurk is accessible via API from the following languages: Python, JavaScript, Java, .NET, Go, Ruby, PHP, or C++.[48]Web sites and web services can use the API to integrate MTurk work into other web applications, providing users with alternatives to the interface Amazon has built for these functions. Amazon Mechanical Turk provides a platform for processing images, a task well-suited to human intelligence. Requesters have created tasks that ask workers to label objects found in an image, select the most relevant picture in a group of pictures, screen inappropriate content, classify objects in satellite images, or digitize text from images such as scanned forms filled out by hand.[49] Companies with large online catalogues use Mechanical Turk to identify duplicates and verify details of item entries. For example: removing duplicates in yellow pages directory listings, checking restaurant details (e.g. phone number and hours), and finding contact information from web pages (e.g. author name and email).[12][49] Diversification and scale of personnel of Mechanical Turk allow collecting information at a large scale, which would be difficult outside of a crowd platform. Mechanical Turk allows Requesters to amass a large number of responses to various types of surveys, from basic demographics to academic research. Other uses include writing comments, descriptions, and blog entries to websites and searching data elements or specific fields in large government and legal documents.[49] Companies use Mechanical Turk's crowd labor to understand and respond to different types of data. Common uses include editing and transcription of podcasts, translation, and matching search engine results.[12][49] The validity of research conducted with the Mechanical Turk worker pool has long been debated among experts.[50]This is largely because questions of validity[51][52]are complex: they involve not only questions of whether the research methods were appropriate and whether the study was well-executed, but also questions about the goal of the project, how the researchers used MTurk, who was sampled, and what conclusions were drawn. Most experts agree that MTurk is better suited for some types of research than others. MTurk appears well-suited for questions that seek to understand whether two or more things are related to each other (called correlational research; e.g., are happy people more healthy?) and questions that attempt to show one thing causes another thing (experimental research; e.g., being happy makes people more healthy). Fortunately, these categories capture most of the research conducted by behavioral scientists, and most correlational and experimental findings found in nationally representative samples replicate on MTurk.[53] The type of research that is not well-suited for MTurk is often called "descriptive research." Descriptive research seeks to describe how or what people think, feel, or do; one example is public opinion polling. MTurk is not well-suited to such research because it does not select a representative sample of the general population. Instead, MTurk is a nonprobability,[jargon]convenience sample. Descriptive research is best conducted with a probability-based, representative sample of the population researchers want to understand. 
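As a rough illustration of the requester-side API workflow described above (submitting a job, retrieving completed work, and approving or rejecting it), the sketch below uses the AWS SDK for Python (boto3) against the Requester sandbox. The HIT title, reward, and question file are placeholders, and the exact parameter values should be checked against the current MTurk documentation rather than taken as a recommended configuration.

```python
# Minimal sketch of posting and approving a HIT via the MTurk API using boto3.
# The endpoint targets the Requester *sandbox*; question.xml, the reward, and the
# other parameters are illustrative placeholders only.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion or HTMLQuestion XML document describes the task shown to Workers.
with open("question.xml") as f:
    question_xml = f.read()

hit = mturk.create_hit(
    Title="Categorize an image",
    Description="Choose the label that best describes the image.",
    Keywords="image, categorization",
    Reward="0.05",                    # price per assignment, in USD, as a string
    MaxAssignments=3,                 # how many Workers may complete the task
    AssignmentDurationInSeconds=600,
    LifetimeInSeconds=86400,
    Question=question_xml,
)
hit_id = hit["HIT"]["HITId"]

# Later: fetch submitted work and approve it (approval releases payment to the Worker).
assignments = mturk.list_assignments_for_hit(
    HITId=hit_id, AssignmentStatuses=["Submitted"]
)
for a in assignments["Assignments"]:
    mturk.approve_assignment(AssignmentId=a["AssignmentId"])
```

In practice the results would then be pulled into the requester's own pipeline, which is the integration pattern the paragraph above describes.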
When compared to the general population, people on MTurk are younger, more highly educated, more liberal, and less religious.[54][20][35] Mechanical Turk has been criticized by journalists and activists for its interactions with and use of labor. Computer scientist Jaron Lanier noted how the design of Mechanical Turk "allows you to think of the people as software components" in a way that conjures "a sense of magic, as if you can just pluck results out of the cloud at an incredibly low cost".[55] A similar point is made in the book Ghost Work by Mary L. Gray and Siddharth Suri.[56] Critics of MTurk argue that workers are forced onto the site by precarious economic conditions and then exploited by requesters with low wages and a lack of power when disputes occur. Journalist Alana Semuels's article "The Internet Is Enabling a New Kind of Poorly Paid Hell" in The Atlantic is typical of such criticisms of MTurk.[57] Some academic papers have obtained findings that support or serve as the basis for such common criticisms,[58][59] but others contradict them.[60][61] A recent academic commentary argued that, as a matter of ethics, study participants on sites like MTurk should be clearly warned about the circumstances in which they might later be denied payment,[62] even though such statements may not reduce the rate of careless responding.[63] A paper published by a team at CloudResearch[16] shows that only about 7% of people on MTurk view completing HITs as something akin to a full-time job. Most people report that MTurk is a way to earn money during their leisure time or as a side gig. In 2019, the typical worker spent five to eight hours per week and earned around $7 per hour. The sampled workers did not report rampant mistreatment at the hands of requesters; they reported trusting requesters more than employers outside of MTurk. Similar findings were presented in a review of MTurk by the Fair Crowd Work organization, a collective of crowd workers and unions.[64] The minimum payment that Amazon allows for a task is one cent. Because tasks are typically simple and repetitive, the majority of tasks pay only a few cents,[65][66] but there are also well-paying tasks on the site. Many criticisms of MTurk stem from the fact that a majority of tasks offer low wages. In addition, workers are considered independent contractors rather than employees. Independent contractors are not protected by the Fair Labor Standards Act or other legislation that protects workers' rights. Workers on MTurk must compete with others for good HIT opportunities, as well as spend time searching for tasks and performing other actions for which they are not compensated. The low payment offered for many tasks has fueled criticism of Mechanical Turk for exploiting workers and not compensating them for the true value of the tasks they complete.[67] One study of 3.8 million tasks completed by 2,767 workers showed that "workers earned a median hourly wage of about $2 an hour", with 4% of workers earning more than $7.25 per hour.[68] The Pew Research Center and the International Labour Office published data indicating people made around $5.00 per hour in 2015.[14][69] A study focused on workers in the U.S.
indicated average wages of at least $5.70 an hour,[70] and data from the CloudResearch study found average wages of about $6.61 per hour.[16] Some evidence suggests that very active and experienced people can earn $20 per hour or more.[71] The Nation magazine reported in 2014 that some Requesters had taken advantage of Workers by having them do the tasks, then rejecting their submissions in order to avoid paying them.[72] Available data indicates that rejections are fairly rare: Workers report having a small minority of their HITs rejected, perhaps as low as 1%.[16] In the Facebook–Cambridge Analytica data scandal, Mechanical Turk was one of the means of covertly gathering private information for a massive database.[73] The system paid people a dollar or two to install a Facebook-connected app and answer personal questions. The survey task, as a work for hire, was not used for a demographic or psychological research project as it might have seemed. The purpose was instead to bait the worker into revealing personal information about the worker's identity that was not already collected by Facebook or Mechanical Turk. Others have criticized the fact that the marketplace does not allow workers to negotiate with employers. In response to criticisms of payment evasion and lack of representation, a group developed a third-party platform called Turkopticon, which allows workers to give feedback on their employers. This allows workers to avoid potentially unscrupulous jobs and to recommend superior employers.[74][75] Another platform, called Dynamo, allows workers to gather anonymously and organize campaigns to better their work environment, such as the Guidelines for Academic Requesters and the Dear Jeff Bezos Campaign.[76][77][78][79] Amazon made it harder for workers to enroll in Dynamo by closing the requester account that provided workers with a required code for Dynamo membership. Workers created third-party plugins to identify higher-paying tasks, but Amazon updated its website to prevent these plugins from working.[80] Workers have complained that Amazon's payment system will on occasion stop working.[80] Mechanical Turk is comparable in some respects to the now discontinued Google Answers service. However, Mechanical Turk is a more general marketplace that can potentially help distribute any kind of work task all over the world. The Collaborative Human Interpreter (CHI) by Philipp Lenssen also suggested using distributed human intelligence to help computer programs perform tasks that computers cannot do well. MTurk could be used as the execution engine for the CHI. In 2014, the Russian search giant Yandex launched Toloka, a system similar to Mechanical Turk.[81]
https://en.wikipedia.org/wiki/Amazon_Mechanical_Turk
Google Opinion Rewards is a loyalty program developed by Google. It was initially launched as a survey mobile app for Android and iOS. The app allows users to answer surveys and earn rewards. On Android, users earn Google Play credits which can be redeemed by buying paid apps from Google Play. On iOS, users are paid via PayPal. Users in the available countries who are over 18 years old are eligible.[4] Google Opinion Rewards works with Google Surveys: market researchers create surveys through Google Surveys, and answers are collected through Google Opinion Rewards from app users.[5] This process provides surveyors with a large pool of surveyees quickly. This "fast and easy" surveying process has been criticized due to contention over the validity of results as well as concern over the privacy and security of the app users' data.[5][6] In November 2013, the app was initially launched for Android users in the US.[7] In April 2014, the app was made available in Australia, Canada and the UK, the first time the app had been available outside the US.[8] In August 2014, the app was made available in Germany[9] and the Netherlands.[10] In September 2014, the app was made available in Italy and Japan.[11][12] In June 2015, the app was made available in Mexico and Brazil.[13] In November 2015, the app was made available in Spain.[14] In May 2016, the app was made available in Denmark, Norway and Sweden.[15] In September 2016, the app was made available in France.[16] In December 2016, the app was made available in Switzerland and Austria.[17] In May 2017, the app was made available in India, Singapore and Turkey.[18] In October 2017, the app was launched for iOS users.[19] In November 2017, the app was made available in Belgium and New Zealand.[20] In May 2018, Google announced it would incorporate Cross Media Panel, another one of its rewards-based programs, into the Google Opinion Rewards program.[21] In December 2019, the app was made available in Taiwan.[22] In January 2020, the app was made available in Poland, Chile and the United Arab Emirates.[23] In April 2020, the app was made available in Hong Kong and Malaysia.[24] In June 2020, the app reached 50 million downloads on Google Play.[25] In November 2020, the app was made available in Thailand.[26] In July 2021, the app was made available in the Czech Republic, Indonesia, Ireland, Russia and South Korea.[27] In March 2022, the app was made available in Colombia, Finland, Hungary, South Africa and Vietnam.[28] In September 2024, the app reached 100 million downloads on Google Play.[29] The app is currently available for download in 39 countries.[4][28] The Google Opinion Rewards app is composed of one main page, displaying the balance and available tasks, leading users to the survey and back to the main page once submitted. The drop-down menu navigates users to their Reward history, Google Play account, or Settings. Each task consists of multiple pages, with the first explaining what the survey is about, followed by a series of multiple-choice questions, and finally one displaying the value of Google Play credits earned.[30][6] The application provides a surveyor with a large group of people to answer market research questions.
Surveyors purchase this service from Google through Google Consumer Surveys, providing researchers the ability to create an online survey with the survey question they want answered and publish it to one of Google's platforms, including Google Opinion Rewards.[31] When users take a survey using Google Opinion Rewards, their answers are combined with the large pool of other respondents and shared with the market researcher.[32]The respondents' demographics including age, gender and geographic location are inferred based on "anonymous browsing history and IP address" or taken from the demographics questions asked when setting up user account.[33] Users can earn from $0.10 to $1 for each survey. A minimum of $2 must be earned to be eligible to receive rewards.[34]Apple users can cash out the value of their earnings to their PayPal account. Android users can transfer the value of their earnings into Google Play credits to purchase other apps from the Google Play Store. These apps include digital books, TV shows, movies, games, and apps. Credits can also be used to make in-app purchases. If Android users want to exchange Google Play credits to cash, they can do so externally through third party apps that can exchange the Google Play credits to PayPal for a percentage of their earnings.[35] The rewards must be redeemed within one year from the date received, or else they will expire.[36] Google Opinion Rewards offers benefits to both the surveyor and surveyee. The surveyor benefits from the large pool of surveyees achieved through the financial incentive and the wide demographic available online, in addition to the fast return of results, cost effective price and convenient surveying methodology. The surveyee benefits from the rewards offered in exchange for their time and opinion.[37][38]However, online surveys have also sparked criticism due to highlighted concerns. The two common concerns regarding Google Opinion Rewards include the privacy and security of surveyee information and the validity of results. The validity of results produced is a major concern regarding online surveys. Google Surveys have been designed on a "quick, inexpensive surveying" model compared to traditional interview or paper methods.[5]However, this quick and efficient method to increase the sample size raises the concern ofvalidity, as people are perceived to not pay attention when partaking in an online survey. This is known as 'inattentive response behavior', 'satisficing behavior' or 'careless responding'.[6][37] In addition to the suspected satisficing and forging of data for financial reward, surveyors have been critical of the program's 'random' sampling methodology or rather the lack thereof.[37]Online surveys attract surveyees from a "highly selective subgroup of the general population" whereby all respondents have a specific 'similar' demographic, economic status, values and habits, which can affect the randomization of participants selected within the sample.[37]As noted by social scientist Helen Ball, "It is not easy to use random sampling techniques with online surveys as there is no systematic way to collect a traditional probability sample of the general population using the internet"[38] To overcome these issues, Google has introduced measures to enhance surveycredibilityandreliability.[39] Addressing the concern regarding participants attentiveness, Google has introduced 'Attention checks' to deter careless responding behavior and forging of data. 
Google tests attentiveness by asking specific questions to ensure the surveyee is paying attention when taking surveys. Such questions include asking the surveyee to choose a specific choice from the multiple choices offered to ensure the question is read. If such questions are answered incorrectly the users account will be flagged and the user will receive fewer surveys. This action has increased researchers' confidence in the quality of their results and the "internal validity of the research."[40] Furthermore, the simple, "fast and easy" design adopted by Google for the online surveys has been recognised by economists Riccardo Vecchio, Gerarda Casoa, Luigi Cembaloa and Massimiliano Borrelloa as an effective display to ensure questions are understood and answered correctly, further enhancing the reliability of the results. They state, "After beginning a survey, Google Opinion Rewards presents only one question to the user at a time, and they are only presented with the next question when they have answered the one presented to them. The presentation of each question is plain to ensure minimal misinterpretation of individuals in the user base."[6] Privacy and security of surveyee information are of concern as Google Opinion Rewards collects information about the surveyee including their name, age and location in addition to the information provided in their responses.[41]While Google has been criticized for collecting data of users such as with their Screenwise Meter,[42]people are generally aware that Google collects data, which Google claims is for the purpose of improving their services,[43]however, users are worried their data would be sold to and accessed by other parties which may use the data for nefarious purposes or even spamming and targeted advertising.[44] While adding this information online provides a form of transparency, contention remains as the extensive list of 'Data Linked To You' contains private information considered highlyintrusive. Furthermore, users also remain unsure what 'broad terms' like 'other' data mean to their privacy, which falls under data that is linked to the user. Such data is collected and shared with the market researcher despite the user being uncertain about what it entails. It is for this reason that paid online surveys are often seen or perceived as atrade-offwhereby, those surveyed trade their privacy for a reward.[44] While Google clearly states its privacy policy, it has not disclosed the security measures in place to protect the app users' data that has been collected as it may contain private information that is highly demanded. Therefore, the security is especially of concern due to the information being stored online, which is susceptible to being intercepted or accessed bycyber criminalsduring acyberattack.[44]
https://en.wikipedia.org/wiki/Google_Opinion_Rewards
Common Voice is a crowdsourcing project started by Mozilla to create a free and open speech corpus. The project is supported by volunteers who record sample sentences with a microphone and review recordings of other users. The transcribed sentences are collected in a voice database available under the public domain license CC0.[1] This license ensures that developers can use the database for voice-to-text and text-to-voice applications without restrictions or costs. Common Voice aims to provide diverse voice samples. According to Mozilla's Katharina Borchert, many existing projects took datasets from public radio or otherwise had datasets that underrepresented both women and people with pronounced accents.[2] The first dataset was released in November 2017. More than 20,000 users worldwide had recorded 500 hours of English sentences.[3] In February 2019, the first batch of languages was released for use. This included 18 languages: English, French, German and Mandarin Chinese, but also less prevalent languages such as Welsh and Kabyle. In total, this included almost 1,400 hours of recorded voice data from more than 42,000 contributors.[4] As of July 2020, the database had amassed 7,226 hours of voice recordings in 54 languages, 5,591 hours of which had been verified by volunteers.[5] In May 2021, following the work to add Kinyarwanda, the project received a grant to add Kiswahili.[6] At the beginning of 2022, Bengali.AI partnered with Common Voice to launch the "Bangla Speech Recognition" project, which aims to make machines understand the Bangla language; 2,000 hours of voice had been collected, with an aim of more than 10,000 hours.[7] In September 2022, it was announced that the Twi language of Ghana was the 100th language to be added to the Mozilla Common Voice database.[8] As of October 2022, Mozilla Common Voice officially collects voice data for more than 100 languages.[9]
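Because the corpus is released under CC0, a downloaded language pack can be inspected with a few lines of code. The sketch below is only an illustration under assumptions: it presumes the typical layout of a Common Voice release (a tab-separated validated.tsv index with "path" and "sentence" columns, plus a clips/ directory of audio files), and the directory name is a hypothetical placeholder; the layout may differ between dataset versions.

```python
# Rough sketch of reading a downloaded Common Voice language release.
# Assumes the usual release layout (validated.tsv plus a clips/ folder of MP3s);
# column names and file layout may vary between versions.
from pathlib import Path
import csv

release_dir = Path("cv-corpus-en")           # hypothetical extraction directory
index_file = release_dir / "validated.tsv"   # clips whose transcripts were verified by volunteers

with open(index_file, newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f, delimiter="\t"))

print(f"{len(rows)} validated clips")
for row in rows[:3]:
    audio_path = release_dir / "clips" / row["path"]   # relative audio filename
    print(audio_path, "->", row["sentence"])           # transcript for the clip
```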
https://en.wikipedia.org/wiki/Common_Voice
In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets, or its elements can be viewed as generalized truth values. It is also a special case of a De Morgan algebra and a Kleene algebra (with involution). Every Boolean algebra gives rise to a Boolean ring, and vice versa, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨). However, the theory of Boolean rings has an inherent asymmetry between the two operators, while the axioms and theorems of Boolean algebra express the symmetry of the theory described by the duality principle.[1] The term "Boolean algebra" honors George Boole (1815–1864), a self-educated English mathematician. He introduced the algebraic system initially in a small pamphlet, The Mathematical Analysis of Logic, published in 1847 in response to an ongoing public controversy between Augustus De Morgan and William Hamilton, and later as a more substantial book, The Laws of Thought, published in 1854. Boole's formulation differs from that described above in some important respects. For example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons and Charles Sanders Peirce. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall Stone in the 1930s, and with Garrett Birkhoff's 1940 Lattice Theory. In the 1960s, Paul Cohen, Dana Scott, and others found deep new results in mathematical logic and axiomatic set theory using offshoots of Boolean algebra, namely forcing and Boolean-valued models. A Boolean algebra is a set A, equipped with two binary operations ∧ (called "meet" or "and") and ∨ (called "join" or "or"), a unary operation ¬ (called "complement" or "not") and two elements 0 and 1 in A (called "bottom" and "top", or "least" and "greatest" element, also denoted by the symbols ⊥ and ⊤, respectively), such that for all elements a, b and c of A, the following axiom pairs hold:[2] associativity, a ∨ (b ∨ c) = (a ∨ b) ∨ c and a ∧ (b ∧ c) = (a ∧ b) ∧ c; commutativity, a ∨ b = b ∨ a and a ∧ b = b ∧ a; absorption, a ∨ (a ∧ b) = a and a ∧ (a ∨ b) = a; identity, a ∨ 0 = a and a ∧ 1 = a; distributivity, a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c) and a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c); and complements, a ∨ ¬a = 1 and a ∧ ¬a = 0. Note, however, that the absorption law and even the associativity law can be excluded from the set of axioms, as they can be derived from the other axioms (see Proven properties). A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra. (In older works, some authors required 0 and 1 to be distinct elements in order to exclude this case.) It follows from the last three pairs of axioms above (identity, distributivity and complements), or from the absorption axiom, that a = b ∧ a if and only if a ∨ b = b. The relation ≤ defined by a ≤ b if these equivalent conditions hold is a partial order with least element 0 and greatest element 1. The meet a ∧ b and the join a ∨ b of two elements coincide with their infimum and supremum, respectively, with respect to ≤. The first four pairs of axioms constitute a definition of a bounded lattice. It follows from the first five pairs of axioms that any complement is unique.
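Since the two-element algebra over {0, 1} is the simplest structure satisfying these conditions, a brute-force check makes the definition concrete. The sketch below, added here as an illustration only, verifies the axiom pairs listed above for the meet, join, and complement of ordinary two-valued logic.

```python
# Brute-force check that ({0, 1}, and, or, not, 0, 1) satisfies the Boolean algebra axioms.
from itertools import product

meet = lambda a, b: a & b     # ∧
join = lambda a, b: a | b     # ∨
comp = lambda a: 1 - a        # ¬

for a, b, c in product((0, 1), repeat=3):
    # associativity, commutativity, absorption, identity, distributivity, complements
    assert join(a, join(b, c)) == join(join(a, b), c) and meet(a, meet(b, c)) == meet(meet(a, b), c)
    assert join(a, b) == join(b, a) and meet(a, b) == meet(b, a)
    assert join(a, meet(a, b)) == a and meet(a, join(a, b)) == a
    assert join(a, 0) == a and meet(a, 1) == a
    assert join(a, meet(b, c)) == meet(join(a, b), join(a, c))
    assert meet(a, join(b, c)) == join(meet(a, b), meet(a, c))
    assert join(a, comp(a)) == 1 and meet(a, comp(a)) == 0

print("all axioms hold in the two-element Boolean algebra")
```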
The set of axioms is self-dual in the sense that if one exchanges ∨ with ∧ and 0 with 1 in an axiom, the result is again an axiom. Therefore, by applying this operation to a Boolean algebra (or Boolean lattice), one obtains another Boolean algebra with the same elements; it is called its dual.[3] For example, for any ring R with identity, the set A = {e ∈ R : e² = e and ex = xe for all x ∈ R} of its central idempotents becomes a Boolean algebra when its operations are defined by e ∨ f := e + f − ef and e ∧ f := ef. A homomorphism between two Boolean algebras A and B is a function f : A → B such that for all a, b in A: f(a ∨ b) = f(a) ∨ f(b), f(a ∧ b) = f(a) ∧ f(b), f(0) = 0 and f(1) = 1. It then follows that f(¬a) = ¬f(a) for all a in A. The class of all Boolean algebras, together with this notion of morphism, forms a full subcategory of the category of lattices. An isomorphism between two Boolean algebras A and B is a homomorphism f : A → B with an inverse homomorphism, that is, a homomorphism g : B → A such that the composition g ∘ f : A → A is the identity function on A, and the composition f ∘ g : B → B is the identity function on B. A homomorphism of Boolean algebras is an isomorphism if and only if it is bijective. Every Boolean algebra (A, ∧, ∨) gives rise to a ring (A, +, ·) by defining a + b := (a ∧ ¬b) ∨ (b ∧ ¬a) = (a ∨ b) ∧ ¬(a ∧ b) (this operation is called symmetric difference in the case of sets and XOR in the case of logic) and a · b := a ∧ b. The zero element of this ring coincides with the 0 of the Boolean algebra; the multiplicative identity element of the ring is the 1 of the Boolean algebra. This ring has the property that a · a = a for all a in A; rings with this property are called Boolean rings. Conversely, if a Boolean ring A is given, we can turn it into a Boolean algebra by defining x ∨ y := x + y + (x · y) and x ∧ y := x · y.[4][5] Since these two constructions are inverses of each other, we can say that every Boolean ring arises from a Boolean algebra, and vice versa. Furthermore, a map f : A → B is a homomorphism of Boolean algebras if and only if it is a homomorphism of Boolean rings. The categories of Boolean rings and Boolean algebras are equivalent;[6] in fact the categories are isomorphic. Hsiang (1985) gave a rule-based algorithm to check whether two arbitrary expressions denote the same value in every Boolean ring. More generally, Boudet, Jouannaud, and Schmidt-Schauß (1989) gave an algorithm to solve equations between arbitrary Boolean-ring expressions. Employing the similarity of Boolean rings and Boolean algebras, both algorithms have applications in automated theorem proving. An ideal of the Boolean algebra A is a nonempty subset I such that for all x, y in I we have x ∨ y in I and for all a in A we have a ∧ x in I. This notion of ideal coincides with the notion of ring ideal in the Boolean ring A. An ideal I of A is called prime if I ≠ A and if a ∧ b in I always implies a in I or b in I. Furthermore, for every a ∈ A we have that a ∧ −a = 0 ∈ I, and then if I is prime we have a ∈ I or −a ∈ I for every a ∈ A. An ideal I of A is called maximal if I ≠ A and if the only ideal properly containing I is A itself. For an ideal I, if a ∉ I and −a ∉ I, then I ∪ {a} or I ∪ {−a} is contained in another proper ideal J. Hence, such an I is not maximal, and therefore the notions of prime ideal and maximal ideal are equivalent in Boolean algebras. Moreover, these notions coincide with the ring-theoretic ones of prime ideal and maximal ideal in the Boolean ring A. The dual of an ideal is a filter. A filter of the Boolean algebra A is a nonempty subset p such that for all x, y in p we have x ∧ y in p and for all a in A we have a ∨ x in p. The dual of a maximal (or prime) ideal in a Boolean algebra is an ultrafilter. Ultrafilters can alternatively be described as 2-valued morphisms from A to the two-element Boolean algebra.
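As a concrete illustration of the ring correspondence described above, the sketch below checks, on the power set of a three-element set, that defining a + b := (a ∧ ¬b) ∨ (b ∧ ¬a) and a · b := a ∧ b gives an idempotent multiplication (a · a = a), that the two expressions given for a + b agree, and that translating back with x ∨ y := x + y + x · y recovers the original join. It is only a finite sanity check of the stated identities, not a general proof.

```python
# Check the Boolean algebra <-> Boolean ring correspondence on the power set of {0, 1, 2}.
from itertools import chain, combinations

W = frozenset({0, 1, 2})
subsets = [frozenset(s) for s in chain.from_iterable(combinations(W, r) for r in range(4))]

meet = lambda a, b: a & b        # ∧  (intersection)
join = lambda a, b: a | b        # ∨  (union)
comp = lambda a: W - a           # ¬  (complement relative to W)

# Ring operations built from the algebra: + is symmetric difference, · is meet.
add = lambda a, b: join(meet(a, comp(b)), meet(b, comp(a)))
mul = meet

for a in subsets:
    assert mul(a, a) == a                                        # a·a = a, so this is a Boolean ring
    for b in subsets:
        assert add(a, b) == meet(join(a, b), comp(meet(a, b)))   # (a∧¬b)∨(b∧¬a) = (a∨b)∧¬(a∧b)
        assert join(a, b) == add(add(a, b), mul(a, b))           # x ∨ y = x + y + x·y

print("algebra-to-ring and ring-to-algebra translations agree on the power set of {0, 1, 2}")
```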
The statementevery filter in a Boolean algebra can be extended to an ultrafilteris called theultrafilter lemmaand cannot be proven inZermelo–Fraenkel set theory(ZF), ifZFisconsistent. Within ZF, the ultrafilter lemma is strictly weaker than theaxiom of choice. The ultrafilter lemma has many equivalent formulations:every Boolean algebra has an ultrafilter,every ideal in a Boolean algebra can be extended to a prime ideal, etc. It can be shown that everyfiniteBoolean algebra is isomorphic to the Boolean algebra of all subsets of a finite set. Therefore, the number of elements of every finite Boolean algebra is apower of two. Stone'scelebratedrepresentation theorem for Boolean algebrasstates thateveryBoolean algebraAis isomorphic to the Boolean algebra of allclopen setsin some (compacttotally disconnectedHausdorff) topological space. The first axiomatization of Boolean lattices/algebras in general was given by the English philosopher and mathematicianAlfred North Whiteheadin 1898.[7][8]It included theabove axiomsand additionallyx∨ 1 = 1andx∧ 0 = 0. In 1904, the American mathematicianEdward V. Huntington(1874–1952) gave probably the most parsimonious axiomatization based on∧,∨,¬, even proving the associativity laws (see box).[9]He also proved that these axioms areindependentof each other.[10]In 1933, Huntington set out the following elegant axiomatization for Boolean algebra.[11]It requires just one binary operation+and aunary functional symboln, to be read as 'complement', which satisfy the following laws: Herbert Robbinsimmediately asked: If the Huntington equation is replaced with its dual, to wit: do (1), (2), and (4) form a basis for Boolean algebra? Calling (1), (2), and (4) aRobbins algebra, the question then becomes: Is every Robbins algebra a Boolean algebra? This question (which came to be known as theRobbins conjecture) remained open for decades, and became a favorite question ofAlfred Tarskiand his students. In 1996,William McCuneatArgonne National Laboratory, building on earlier work by Larry Wos, Steve Winker, and Bob Veroff, answered Robbins's question in the affirmative: Every Robbins algebra is a Boolean algebra. Crucial to McCune's proof was the computer programEQPhe designed. For a simplification of McCune's proof, see Dahn (1998). Further work has been done for reducing the number of axioms; seeMinimal axioms for Boolean algebra. Removing the requirement of existence of a unit from the axioms of Boolean algebra yields "generalized Boolean algebras". Formally, adistributive latticeBis a generalized Boolean lattice, if it has a smallest element0and for any elementsaandbinBsuch thata≤b, there exists an elementxsuch thata∧x= 0anda∨x=b. Defininga\bas the uniquexsuch that(a∧b) ∨x=aand(a∧b) ∧x= 0, we say that the structure(B, ∧, ∨, \, 0)is ageneralized Boolean algebra, while(B, ∨, 0)is ageneralized Booleansemilattice. Generalized Boolean lattices are exactly theidealsof Boolean lattices. A structure that satisfies all axioms for Boolean algebras except the two distributivity axioms is called anorthocomplemented lattice. Orthocomplemented lattices arise naturally inquantum logicas lattices ofclosedlinear subspacesforseparableHilbert spaces.
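The laws in Huntington's 1933 axiomatization and the dual equation proposed by Robbins, referred to above but not reproduced in the text, are standardly stated as follows, writing + for the binary operation and n for complement; the numbering matches the references to (1), (2), and (4) in the paragraph above.

```latex
% Standard statements of Huntington's 1933 axioms (1)-(3) and the Robbins equation (4).
\begin{align*}
&(1)\quad x + y = y + x                        && \text{(commutativity)} \\
&(2)\quad (x + y) + z = x + (y + z)            && \text{(associativity)} \\
&(3)\quad n(n(x) + y) + n(n(x) + n(y)) = x     && \text{(Huntington equation)} \\
&(4)\quad n(n(x + y) + n(x + n(y))) = x        && \text{(Robbins equation)}
\end{align*}
```

The Robbins conjecture, answered affirmatively by McCune as described above, was thus the question of whether (1), (2), and (4) alone axiomatize Boolean algebra.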
https://en.wikipedia.org/wiki/Boolean_algebra_(structure)
Boolean algebra is a mathematically rich branch of abstract algebra. The Stanford Encyclopaedia of Philosophy defines Boolean algebra as 'the algebra of two-valued logic with only sentential connectives, or equivalently of algebras of sets under union and complementation.'[1] Just as group theory deals with groups, and linear algebra with vector spaces, Boolean algebras are models of the equational theory of the two values 0 and 1 (whose interpretation need not be numerical). Common to Boolean algebras, groups, and vector spaces is the notion of an algebraic structure, a set closed under some operations satisfying certain equations.[2] Just as there are basic examples of groups, such as the group ℤ of integers and the symmetric group Sₙ of permutations of n objects, there are also basic examples of Boolean algebras such as the following. Boolean algebra thus permits applying the methods of abstract algebra to mathematical logic and digital logic. Unlike groups of finite order, which exhibit complexity and diversity and whose first-order theory is decidable only in special cases, all finite Boolean algebras share the same theorems and have a decidable first-order theory. Instead, the intricacies of Boolean algebra are divided between the structure of infinite algebras and the algorithmic complexity of their syntactic structure. Boolean algebra treats the equational theory of the maximal two-element finitary algebra, called the Boolean prototype, and the models of that theory, called Boolean algebras.[3] These terms are defined as follows. An algebra is a family of operations on a set, called the underlying set of the algebra. We take the underlying set of the Boolean prototype to be {0,1}. An algebra is finitary when each of its operations takes only finitely many arguments. For the prototype each argument of an operation is either 0 or 1, as is the result of the operation. The maximal such algebra consists of all finitary operations on {0,1}. The number of arguments taken by each operation is called the arity of the operation. An operation on {0,1} of arity n, or n-ary operation, can be applied to any of 2ⁿ possible values for its n arguments. For each choice of arguments, the operation may return 0 or 1, whence there are 2^(2^n) n-ary operations. The prototype therefore has two operations taking no arguments, called zeroary or nullary operations, namely zero and one. It has four unary operations, two of which are constant operations, another is the identity, and the most commonly used one, called negation, returns the opposite of its argument: 1 if 0, 0 if 1. It has sixteen binary operations; again two of these are constant, another returns its first argument, yet another returns its second, one is called conjunction and returns 1 if both arguments are 1 and otherwise 0, another is called disjunction and returns 0 if both arguments are 0 and otherwise 1, and so on. The number of (n+1)-ary operations in the prototype is the square of the number of n-ary operations, so there are 16² = 256 ternary operations, 256² = 65,536 quaternary operations, and so on. A family is indexed by an index set. In the case of a family of operations forming an algebra, the indices are called operation symbols, constituting the language of that algebra. The operation indexed by each symbol is called the denotation or interpretation of that symbol. Each operation symbol specifies the arity of its interpretation, whence all possible interpretations of a symbol have the same arity.
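The counting argument above (2ⁿ argument tuples, hence 2^(2^n) n-ary operations) can be made concrete with a short enumeration. The snippet below is an added illustration only; it lists every possible truth-table column for small arities and checks the counts 2, 4, 16, 256.

```python
# Enumerate all n-ary operations on {0, 1} by listing every possible column of a truth table.
from itertools import product

def nary_operations(n):
    """Yield each n-ary operation as a dict mapping argument tuples to 0 or 1."""
    args = list(product((0, 1), repeat=n))             # the 2**n possible argument tuples
    for outputs in product((0, 1), repeat=len(args)):  # one output choice per argument tuple
        yield dict(zip(args, outputs))

for n in range(4):
    count = sum(1 for _ in nary_operations(n))
    assert count == 2 ** (2 ** n)
    print(f"arity {n}: {count} operations")   # prints 2, 4, 16, 256
```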
In general it is possible for an algebra to interpret distinct symbols with the same operation, but this is not the case for the prototype, whose symbols are in one-one correspondence with its operations. The prototype therefore has 2^(2^n) n-ary operation symbols, called the Boolean operation symbols and forming the language of Boolean algebra. Only a few operations have conventional symbols, such as ¬ for negation, ∧ for conjunction, and ∨ for disjunction.[4] It is convenient to consider the i-th n-ary symbol to be ⁿfᵢ as done below in the section on truth tables. An equational theory in a given language consists of equations between terms built up from variables using symbols of that language. Typical equations in the language of Boolean algebra are x ∧ y = y ∧ x, x ∧ x = x, x ∧ ¬x = y ∧ ¬y, and x ∧ y = x. An algebra satisfies an equation when the equation holds for all possible values of its variables in that algebra when the operation symbols are interpreted as specified by that algebra. The laws of Boolean algebra are the equations in the language of Boolean algebra satisfied by the prototype. The first three of the above examples are Boolean laws, but not the fourth since 1 ∧ 0 ≠ 1. The equational theory of an algebra is the set of all equations satisfied by the algebra. The laws of Boolean algebra therefore constitute the equational theory of the Boolean prototype. A model of a theory is an algebra interpreting the operation symbols in the language of the theory and satisfying the equations of the theory. That is, a Boolean algebra is a set and a family of operations thereon interpreting the Boolean operation symbols and satisfying the same laws as the Boolean prototype.[5] If we define a homologue of an algebra to be a model of the equational theory of that algebra, then a Boolean algebra can be defined as any homologue of the prototype. Example 1. The Boolean prototype is a Boolean algebra, since trivially it satisfies its own laws. It is thus the prototypical Boolean algebra. We did not call it that initially in order to avoid any appearance of circularity in the definition. The operations need not be all explicitly stated. A basis is any set from which the remaining operations can be obtained by composition. A "Boolean algebra" may be defined from any of several different bases. Three bases for Boolean algebra are in common use, the lattice basis, the ring basis, and the Sheffer stroke or NAND basis. These bases impart respectively a logical, an arithmetical, and a parsimonious character to the subject. The common elements of the lattice and ring bases are the constants 0 and 1, and an associative commutative binary operation, called meet x ∧ y in the lattice basis, and multiplication xy in the ring basis. The distinction is only terminological. The lattice basis has the further operations of join, x ∨ y, and complement, ¬x. The ring basis has instead the arithmetic operation x ⊕ y of addition (the symbol ⊕ is used in preference to + because the latter is sometimes given the Boolean reading of join). To be a basis is to yield all other operations by composition, whence any two bases must be intertranslatable. The lattice basis translates x ∨ y to the ring basis as x ⊕ y ⊕ xy, and ¬x as x ⊕ 1. Conversely the ring basis translates x ⊕ y to the lattice basis as (x ∨ y) ∧ ¬(x ∧ y). Both of these bases allow Boolean algebras to be defined via a subset of the equational properties of the Boolean operations. For the lattice basis, it suffices to define a Boolean algebra as a distributive lattice satisfying x ∧ ¬x = 0 and x ∨ ¬x = 1, called a complemented distributive lattice.
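The distinction drawn above between equations that are laws of the prototype and equations that are not can be tested mechanically. The sketch below, added as an illustration, checks the four sample equations over every valuation in {0, 1}.

```python
# Check which of the four sample equations hold for every valuation in the prototype {0, 1}.
from itertools import product

equations = {
    "x∧y = y∧x":    lambda x, y: (x & y) == (y & x),
    "x∧x = x":      lambda x, y: (x & x) == x,
    "x∧¬x = y∧¬y":  lambda x, y: (x & (1 - x)) == (y & (1 - y)),
    "x∧y = x":      lambda x, y: (x & y) == x,
}

for name, holds in equations.items():
    is_law = all(holds(x, y) for x, y in product((0, 1), repeat=2))
    print(f"{name}: {'law' if is_law else 'not a law (fails at some valuation)'}")
# The first three are laws; the last fails at x = 1, y = 0, matching 1 ∧ 0 ≠ 1.
```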
The ring basis turns a Boolean algebra into a Boolean ring, namely a ring satisfying x² = x. Emil Post gave a necessary and sufficient condition for a set of operations to be a basis for the nonzeroary Boolean operations. A nontrivial property is one shared by some but not all operations making up a basis. Post listed five nontrivial properties of operations, identifiable with the five Post's classes, each preserved by composition, and showed that a set of operations formed a basis if, for each property, the set contained an operation lacking that property. (The converse of Post's theorem, extending "if" to "if and only if," is the easy observation that a property from among these five holding of every operation in a candidate basis will also hold of every operation formed by composition from that candidate, whence by nontriviality of that property the candidate will fail to be a basis.) Post's five properties are: being monotone, being affine (linear over the two-element field), being self-dual, preserving 0, and preserving 1. The NAND (dually NOR) operation lacks all these, thus forming a basis by itself. The finitary operations on {0,1} may be exhibited as truth tables, thinking of 0 and 1 as the truth values false and true.[7] They can be laid out in a uniform and application-independent way that allows us to name, or at least number, them individually.[8] These names provide a convenient shorthand for the Boolean operations. The names of the n-ary operations are binary numbers of 2ⁿ bits. There being 2^(2^n) such operations, one cannot ask for a more succinct nomenclature. Note that each finitary operation can be called a switching function. This layout and associated naming of operations is illustrated here in full for arities from 0 to 2. These tables continue at higher arities, with 2ⁿ rows at arity n, each row giving a valuation or binding of the n variables x₀, ..., xₙ₋₁ and each column headed ⁿfᵢ giving the value ⁿfᵢ(x₀, ..., xₙ₋₁) of the i-th n-ary operation at that valuation. The operations include the variables, for example ¹f₂ is x₀ while ²f₁₀ is x₀ (as two copies of its unary counterpart) and ²f₁₂ is x₁ (with no unary counterpart). Negation or complement ¬x₀ appears as ¹f₁ and again as ²f₅, along with ²f₃ (¬x₁, which did not appear at arity 1), disjunction or union x₀ ∨ x₁ as ²f₁₄, conjunction or intersection x₀ ∧ x₁ as ²f₈, implication x₀ → x₁ as ²f₁₃, exclusive-or or symmetric difference x₀ ⊕ x₁ as ²f₆, set difference x₀ − x₁ as ²f₂, and so on. As a minor detail important more for its form than its content, the operations of an algebra are traditionally organized as a list. Although we are here indexing the operations of a Boolean algebra by the finitary operations on {0,1}, the truth-table presentation above serendipitously orders the operations first by arity and second by the layout of the tables for each arity. This permits organizing the set of all Boolean operations in the traditional list format. The list order for the operations of a given arity is determined by the following two rules. When programming in C or Java, bitwise disjunction is denoted x|y, conjunction x&y, and negation ~x. A program can therefore represent for example the operation x ∧ (y ∨ z) in these languages as x&(y|z), having previously set x = 0xaa, y = 0xcc, and z = 0xf0 (the "0x" indicates that the following constant is to be read in hexadecimal or base 16), either by assignment to variables or defined as macros. These one-byte (eight-bit) constants correspond to the columns for the input variables in the extension of the above tables to three variables.
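The C/Java idiom just described can be checked directly; in the sketch below (written in Python for consistency with the other examples here, where the same bitwise operators are available) the one-byte constants encode the three input columns, and a single bitwise expression computes the whole eight-row column of x ∧ (y ∨ z) at once. It is an added illustration, not part of the source text.

```python
# Evaluate x ∧ (y ∨ z) on all eight valuations at once using the one-byte column encodings.
x, y, z = 0xAA, 0xCC, 0xF0     # columns for the three input variables (10101010, 11001100, 11110000)

column = x & (y | z)           # bitwise AND/OR act on all eight rows simultaneously
print(format(column, "08b"))   # 10101000 (0xA8): the truth-table column of x ∧ (y ∨ z)
```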
This technique is almost universally used in raster graphics hardware to provide a flexible variety of ways of combining and masking images, the typical operations being ternary and acting simultaneously on source, destination, and mask bits. Example 2. Allbit vectorsof a given length form a Boolean algebra "pointwise", meaning that anyn-ary Boolean operation can be applied tonbit vectors one bit position at a time. For example, the ternary OR of three bit vectors each of length 4 is the bit vector of length 4 formed by or-ing the three bits in each of the four bit positions, thus0100∨1000∨1001 = 1101. Another example is the truth tables above for then-ary operations, whose columns are all the bit vectors of length2nand which therefore can be combined pointwise whence then-ary operations form a Boolean algebra.[9]This works equally well for bit vectors of finite and infinite length, the only rule being that the bit positions all be indexed by the same set in order that "corresponding position" be well defined. Theatomsof such an algebra are the bit vectors containing exactly one 1. In general the atoms of a Boolean algebra are those elementsxsuch thatx∧yhas only two possible values,xor0. Example 3. Thepower set algebra, the set2Wof all subsets of a given setW.[10]This is just Example 2 in disguise, withWserving to index the bit positions. Any subsetXofWcan be viewed as the bit vector having 1's in just those bit positions indexed by elements ofX. Thus the all-zero vector is the empty subset ofWwhile the all-ones vector isWitself, these being the constants 0 and 1 respectively of the power set algebra. The counterpart of disjunctionx∨yis unionX∪Y, while that of conjunctionx∧yis intersectionX∩Y. Negation¬xbecomes~X, complement relative toW. There is also set differenceX\Y=X∩~Y, symmetric difference(X\Y)∪(Y\X), ternary unionX∪Y∪Z, and so on. The atoms here are the singletons, those subsets with exactly one element. Examples 2 and 3 are special cases of a general construct of algebra calleddirect product, applicable not just to Boolean algebras but all kinds of algebra including groups, rings, etc. The direct product of any familyBiof Boolean algebras whereiranges over some index setI(not necessarily finite or even countable) is a Boolean algebra consisting of allI-tuples(...xi,...)whosei-th element is taken fromBi. The operations of a direct product are the corresponding operations of the constituent algebras acting within their respective coordinates; in particular operationnfjof the product operates onnI-tuples by applying operationnfjofBito thenelements in thei-th coordinate of thentuples, for alliinI. When all the algebras being multiplied together in this way are the same algebraAwe call the direct product adirect powerofA. The Boolean algebra of all 32-bit bit vectors is the two-element Boolean algebra raised to the 32nd power, or power set algebra of a 32-element set, denoted232. The Boolean algebra of all sets of integers is2Z. All Boolean algebras we have exhibited thus far have been direct powers of the two-element Boolean algebra, justifying the name "power set algebra". It can be shown that every finite Boolean algebra isisomorphicto some power set algebra.[11]Hence the cardinality (number of elements) of a finite Boolean algebra is a power of2, namely one of1,2,4,8,...,2n,...This is called arepresentation theoremas it gives insight into the nature of finite Boolean algebras by giving arepresentationof them as power set algebras. 
This representation theorem does not extend to infinite Boolean algebras: although every power set algebra is a Boolean algebra, not every Boolean algebra need be isomorphic to a power set algebra. In particular, whereas there can be nocountably infinitepower set algebras (the smallest infinite power set algebra is the power set algebra2Nof sets of natural numbers,shownbyCantorto beuncountable), there exist various countably infinite Boolean algebras. To go beyond power set algebras we need another construct. Asubalgebraof an algebraAis any subset ofAclosed under the operations ofA. Every subalgebra of a Boolean algebraAmust still satisfy the equations holding ofA, since any violation would constitute a violation forAitself. Hence every subalgebra of a Boolean algebra is a Boolean algebra.[12] Asubalgebraof a power set algebra is called afield of sets; equivalently a field of sets is a set of subsets of some setWincluding the empty set andWand closed under finite union and complement with respect toW(and hence also under finite intersection). Birkhoff's [1935] representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to a field of sets. NowBirkhoff's HSP theoremfor varieties can be stated as, every class of models of the equational theory of a classCof algebras is the Homomorphic image of aSubalgebraof adirect Productof algebras ofC. Normally all three of H, S, and P are needed; what the first of these two Birkhoff theorems shows is that for the special case of the variety of Boolean algebrasHomomorphismcan be replaced byIsomorphism. Birkhoff's HSP theorem for varieties in general therefore becomes Birkhoff's ISP theorem for thevarietyof Boolean algebras. It is convenient when talking about a setXof natural numbers to view it as a sequencex0,x1,x2,...of bits, withxi= 1if and only ifi∈X. This viewpoint will make it easier to talk aboutsubalgebrasof the power set algebra2N, which this viewpoint makes the Boolean algebra of all sequences of bits.[13]It also fits well with the columns of a truth table: when a column is read from top to bottom it constitutes a sequence of bits, but at the same time it can be viewed as the set of those valuations (assignments to variables in the left half of the table) at which the function represented by that column evaluates to 1. Example 4.Ultimately constant sequences. Any Boolean combination of ultimately constant sequences is ultimately constant; hence these form a Boolean algebra. We can identify these with the integers by viewing the ultimately-zero sequences as nonnegative binary numerals (bit0of the sequence being the low-order bit) and the ultimately-one sequences as negative binary numerals (thinktwo's complementarithmetic with the all-ones sequence being−1). This makes the integers a Boolean algebra, with union being bit-wise OR and complement being−x−1. There are only countably many integers, so this infinite Boolean algebra is countable. The atoms are the powers of two, namely 1,2,4,.... Another way of describing this algebra is as the set of all finite and cofinite sets of natural numbers, with the ultimately all-ones sequences corresponding to the cofinite sets, those sets omitting only finitely many natural numbers. Example 5.Periodic sequence. A sequence is calledperiodicwhen there exists some numbern> 0, called a witness to periodicity, such thatxi=xi+nfor alli≥ 0. The period of a periodic sequence is its least witness. 
Negation leaves period unchanged, while the disjunction of two periodic sequences is periodic, with period at most the least common multiple of the periods of the two arguments (the period can be as small as1, as happens with the union of any sequence and its complement). Hence the periodic sequences form a Boolean algebra. Example 5 resembles Example 4 in being countable, but differs in being atomless. The latter is because the conjunction of any nonzero periodic sequencexwith a sequence of coprime period (greater than 1) is neither0norx. It can be shown that all countably infinite atomless Boolean algebras are isomorphic, that is, up to isomorphism there is only one such algebra. Example 6.Periodic sequence with period a power of two. This is a propersubalgebraof Example 5 (a proper subalgebra equals the intersection of itself with its algebra). These can be understood as the finitary operations, with the first period of such a sequence giving the truth table of the operation it represents. For example, the truth table ofx0in the table of binary operations, namely2f10, has period2(and so can be recognized as using only the first variable) even though 12 of the binary operations have period4. When the period is2nthe operation only depends on the firstnvariables, the sense in which the operation is finitary. This example is also a countably infinite atomless Boolean algebra. Hence Example 5 is isomorphic to a proper subalgebra of itself! Example 6, and hence Example 5, constitutes the free Boolean algebra on countably many generators, meaning the Boolean algebra of all finitary operations on a countably infinite set of generators or variables. Example 7.Ultimately periodic sequences, sequences that become periodic after an initial finite bout of lawlessness. They constitute a proper extension of Example 5 (meaning that Example 5 is a propersubalgebraof Example 7) and also of Example 4, since constant sequences are periodic with period one. Sequences may vary as to when they settle down, but any finite set of sequences will all eventually settle down no later than their slowest-to-settle member, whence ultimately periodic sequences are closed under all Boolean operations and so form a Boolean algebra. This example has the same atoms and coatoms as Example 4, whence it is not atomless and therefore not isomorphic to Example 5/6. However it contains an infinite atomlesssubalgebra, namely Example 5, and so is not isomorphic to Example 4, everysubalgebraof which must be a Boolean algebra of finite sets and their complements and therefore atomic. This example is isomorphic to the direct product of Examples 4 and 5, furnishing another description of it. Example 8. Thedirect productof a Periodic Sequence (Example 5) with any finite but nontrivial Boolean algebra. (The trivial one-element Boolean algebra is the unique finite atomless Boolean algebra.) This resembles Example 7 in having both atoms and an atomlesssubalgebra, but differs in having only finitely many atoms. Example 8 is in fact an infinite family of examples, one for each possible finite number of atoms. These examples by no means exhaust the possible Boolean algebras, even the countable ones. Indeed, there are uncountably many nonisomorphic countable Boolean algebras, which Jussi Ketonen [1978] classified completely in terms of invariants representable by certain hereditarily countable sets. Then-ary Boolean operations themselves constitute a power set algebra2W, namely whenWis taken to be the set of2nvaluations of theninputs. 
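Returning briefly to Example 4: Python's arbitrary-precision integers happen to model it directly, since their bitwise operators behave as if integers were the ultimately constant bit sequences described there, with ~x equal to −x − 1. A minimal sketch (not part of the original article):

```python
# Finite sets of naturals <-> nonnegative ints; cofinite sets <-> negative ints.
def from_set(s):
    """Encode a finite set of natural numbers as an ultimately-zero bit sequence."""
    n = 0
    for i in s:
        n |= 1 << i
    return n

evens_up_to_10 = from_set({0, 2, 4, 6, 8, 10})
odds_up_to_9   = from_set({1, 3, 5, 7, 9})

union      = evens_up_to_10 | odds_up_to_9     # still a finite set (nonnegative int)
complement = ~evens_up_to_10                   # a cofinite set: a negative integer
assert complement == -evens_up_to_10 - 1       # complement is -x - 1

# Atoms are the powers of two, i.e. the singleton sets:
assert from_set({5}) == 2 ** 5
```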
In terms of the naming system of operationsnfiwhereiin binary is a column of a truth table, the columns can be combined with Boolean operations of any arity to produce other columns present in the table. That is, we can apply any Boolean operation of aritymtomBoolean operations of aritynto yield a Boolean operation of arityn, for anymandn. The practical significance of this convention for both software and hardware is thatn-ary Boolean operations can be represented as words of the appropriate length. For example, each of the 256 ternary Boolean operations can be represented as an unsigned byte. The available logical operations such as AND and OR can then be used to form new operations. If we takex,y, andz(dispensing with subscripted variables for now) to be10101010,11001100, and11110000respectively (170, 204, and 240 in decimal,0xaa,0xcc, and0xf0in hexadecimal), their pairwise conjunctions arex∧y= 10001000,y∧z= 11000000, andz∧x= 10100000, while their pairwise disjunctions arex∨y= 11101110,y∨z= 11111100, andz∨x= 11111010. The disjunction of the three conjunctions is11101000, which also happens to be the conjunction of three disjunctions. We have thus calculated, with a dozen or so logical operations on bytes, that the two ternary operations and are actually the same operation. That is, we have proved the equational identity for the two-element Boolean algebra. By the definition of "Boolean algebra" this identity must therefore hold in every Boolean algebra. This ternary operation incidentally formed the basis for Grau's [1947] ternary Boolean algebras, which he axiomatized in terms of this operation and negation. The operation is symmetric, meaning that its value is independent of any of the3! = 6permutations of its arguments. The two halves of its truth table11101000are the truth tables for∨,1110, and∧,1000, so the operation can be phrased asifzthenx∨yelsex∧y. Since it is symmetric it can equally well be phrased as either ofifxtheny∨zelsey∧z, orifythenz∨xelsez∧x. Viewed as a labeling of the 8-vertex 3-cube, the upper half is labeled 1 and the lower half 0; for this reason it has been called themedian operator, with the evident generalization to any odd number of variables (odd in order to avoid the tie when exactly half the variables are 0). The technique we just used to prove an identity of Boolean algebra can be generalized to all identities in a systematic way that can be taken as a sound and completeaxiomatizationof, oraxiomatic systemfor, the equational laws ofBoolean logic. The customary formulation of an axiom system consists of a set of axioms that "prime the pump" with some initial identities, along with a set of inference rules for inferring the remaining identities from the axioms and previously proved identities. In principle it is desirable to have finitely many axioms; however as a practical matter it is not necessary since it is just as effective to have a finiteaxiom schemahaving infinitely many instances each of which when used in a proof can readily be verified to be a legal instance, the approach we follow here. Boolean identities are assertions of the forms=twheresandtaren-ary terms, by which we shall mean here terms whose variables are limited tox0throughxn-1. Ann-arytermis either an atom or an application. An applicationmfi(t0,...,tm-1)is a pair consisting of anm-ary operationmfiand a list orm-tuple(t0,...,tm-1)ofmn-ary terms calledoperands. Associated with every term is a natural number called itsheight. 
Atoms are of zero height, while applications are of height one plus the height of their highest operand. Now what is an atom? Conventionally an atom is either a constant (0 or 1) or a variablexiwhere0 ≤i<n. For the proof technique here it is convenient to define atoms instead to ben-ary operationsnfi, which although treated here as atoms nevertheless mean the same as ordinary terms of the exact formnfi(x0,...,xn-1)(exact in that the variables must listed in the order shown without repetition or omission). This is not a restriction because atoms of this form include all the ordinary atoms, namely the constants 0 and 1, which arise here as then-ary operationsnf0andnf−1for eachn(abbreviating22n−1to−1), and the variablesx0,...,xn-1as can be seen from the truth tables wherex0appears as both the unary operation1f2and the binary operation2f10whilex1appears as2f12. The following axiom schema and three inference rules axiomatize the Boolean algebra ofn-ary terms. The meaning of the side condition onA1is thatioĵis that2n-bit number whosev-th bit is theĵv-th bit ofi, where the ranges of each quantity areu:m,v: 2n,ju: 22n, andĵv: 2m. (Sojis anm-tuple of2n-bit numbers whileĵas the transpose ofjis a2n-tuple ofm-bit numbers. Bothjandĵtherefore containm2nbits.) A1is an axiom schema rather than an axiom by virtue of containingmetavariables, namelym,i,n, andj0throughjm-1. The actual axioms of the axiomatization are obtained by setting the metavariables to specific values. For example, if we takem=n=i=j0= 1, we can compute the two bits ofioĵfromi1= 0andi0= 1, soioĵ= 2(or10when written as a two-bit number). The resulting instance, namely1f1(1f1) =1f2, expresses the familiar axiom¬¬x=xof double negation. RuleR3then allows us to infer¬¬¬x= ¬xby takings0to be1f1(1f1)or¬¬x0,t0to be1f2orx0, andmfito be1f1or¬. For eachmandnthere are only finitely many axioms instantiatingA1, namely22m× (22n)m. Each instance is specified by2m+m2nbits. We treatR1as an inference rule, even though it is like an axiom in having no premises, because it is a domain-independent rule along withR2andR3common to all equational axiomatizations, whether of groups, rings, or any other variety. The only entity specific to Boolean algebras is axiom schemaA1. In this way when talking about different equational theories we can push the rules to one side as being independent of the particular theories, and confine attention to the axioms as the only part of the axiom system characterizing the particular equational theory at hand. This axiomatization is complete, meaning that every Boolean laws=tis provable in this system. One first shows by induction on the height ofsthat every Boolean law for whichtis atomic is provable, usingR1for the base case (since distinct atoms are never equal) andA1andR3for the induction step (san application). This proof strategy amounts to a recursive procedure for evaluatingsto yield an atom. Then to proves=tin the general case whentmay be an application, use the fact that ifs=tis an identity thensandtmust evaluate to the same atom, call itu. So first proves=uandt=uas above, that is, evaluatesandtusingA1,R1, andR3, and then invokeR2to infers=t. InA1, if we view the numbernmas the function typem→n, andmnas the applicationm(n), we can reinterpret the numbersi,j,ĵ, andioĵas functions of typei: (m→2)→2,j:m→((n→2)→2),ĵ: (n→2)→(m→2), andioĵ: (n→2)→2. The definition(ioĵ)v=iĵvinA1then translates to(ioĵ)(v) =i(ĵ(v)), that is,ioĵis defined to be composition ofiandĵunderstood as functions. 
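The arithmetic behind the side condition on A1 can be made concrete in a few lines. The sketch below (an illustration following the bit conventions just described; the helper name compose is an invention for this sketch) computes i∘ĵ and reproduces the double-negation instance, where m = n = i = j0 = 1 yields i∘ĵ = 2.

```python
def compose(i, js, m, n):
    """Index of the n-ary operation i∘ĵ, where i indexes an m-ary operation
    (a 2**m-bit number) and js is a list of m indices of n-ary operations
    (2**n-bit numbers). The v-th bit of the result is the ĵ_v-th bit of i,
    with ĵ_v the m-bit number whose u-th bit is the v-th bit of js[u]."""
    result = 0
    for v in range(2 ** n):
        j_hat_v = sum(((js[u] >> v) & 1) << u for u in range(m))
        result |= ((i >> j_hat_v) & 1) << v
    return result

# Double negation: m = n = 1, i = 1 (negation), j0 = 1 (negation) gives 2,
# the index of the identity, expressing ¬¬x = x.
assert compose(1, [1], 1, 1) == 2

# Composing once more recovers negation, expressing ¬¬¬x = ¬x.
assert compose(1, [compose(1, [1], 1, 1)], 1, 1) == 1
```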
So the content ofA1amounts to defining term application to be essentially composition, modulo the need to transpose them-tuplejto make the types match up suitably for composition. This composition is the one in Lawvere's previously mentioned category of power sets and their functions. In this way we have translated the commuting diagrams of that category, as the equational theory of Boolean algebras, into the equational consequences ofA1as the logical representation of that particular composition law. Underlying every Boolean algebraBis apartially ordered setorposet(B,≤). Thepartial orderrelation is defined byx≤yjust whenx=x∧y, or equivalently wheny=x∨y. Given a setXof elements of a Boolean algebra, anupper boundonXis an elementysuch that for every elementxofX,x≤y, while a lower bound onXis an elementysuch that for every elementxofX,y≤x. AsupofXis a least upper bound onX, namely an upper bound onXthat is less or equal to every upper bound onX. Dually aninfofXis a greatest lower bound onX. The sup ofxandyalways exists in the underlying poset of a Boolean algebra, beingx∨y, and likewise their inf exists, namelyx∧y. The empty sup is 0 (the bottom element) and the empty inf is 1 (top). It follows that every finite set has both a sup and an inf. Infinite subsets of a Boolean algebra may or may not have a sup and/or an inf; in a power set algebra they always do. Any poset(B,≤)such that every pairx,yof elements has both a sup and an inf is called alattice. We writex∨yfor the sup andx∧yfor the inf. The underlying poset of a Boolean algebra always forms a lattice. The lattice is said to bedistributivewhenx∧(y∨z) = (x∧y)∨(x∧z), or equivalently whenx∨(y∧z) = (x∨y)∧(x∨z), since either law implies the other in a lattice. These are laws of Boolean algebra whence the underlying poset of a Boolean algebra forms a distributive lattice. Given a lattice with a bottom element 0 and a top element 1, a pairx,yof elements is calledcomplementarywhenx∧y= 0andx∨y= 1, and we then say thatyis a complement ofxand vice versa. Any elementxof a distributive lattice with top and bottom can have at most one complement. When every element of a lattice has a complement the lattice is called complemented. It follows that in a complemented distributive lattice, the complement of an element always exists and is unique, making complement a unary operation. Furthermore, every complemented distributive lattice forms a Boolean algebra, and conversely every Boolean algebra forms a complemented distributive lattice. This provides an alternative definition of a Boolean algebra, namely as any complemented distributive lattice. Each of these three properties can be axiomatized with finitely many equations, whence these equations taken together constitute a finite axiomatization of the equational theory of Boolean algebras. In a class of algebras defined as all the models of a set of equations, it is usually the case that some algebras of the class satisfy more equations than just those needed to qualify them for the class. The class of Boolean algebras is unusual in that, with a single exception, every Boolean algebra satisfies exactly the Boolean identities and no more. The exception is the one-element Boolean algebra, which necessarily satisfies every equation, evenx=y, and is therefore sometimes referred to as the inconsistent Boolean algebra. 
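The alternative definition given above, as a complemented distributive lattice, lends itself to a quick machine check on a small example. The following sketch (illustrative only) verifies distributivity, uniqueness of complements, and the agreement of the order x ≤ y ⇔ x = x ∧ y with inclusion on the power set of a three-element set.

```python
from itertools import combinations

W = frozenset({'a', 'b', 'c'})
elements = [frozenset(s) for r in range(len(W) + 1)
            for s in combinations(sorted(W), r)]

meet = lambda x, y: x & y        # greatest lower bound (intersection)
join = lambda x, y: x | y        # least upper bound (union)
bot, top = frozenset(), W

# Distributivity: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z).
assert all(meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
           for x in elements for y in elements for z in elements)

# Every element has exactly one complement, namely W \ x.
for x in elements:
    comps = [y for y in elements if meet(x, y) == bot and join(x, y) == top]
    assert comps == [W - x]

# The underlying partial order x ≤ y iff x = x ∧ y coincides with inclusion.
assert all((x == meet(x, y)) == (x <= y) for x in elements for y in elements)
```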
A Booleanhomomorphismis a functionh:A→Bbetween Boolean algebrasA,Bsuch that for every Boolean operationmfi: ThecategoryBoolof Boolean algebras has as objects all Boolean algebras and as morphisms the Boolean homomorphisms between them. There exists a unique homomorphism from the two-element Boolean algebra2to every Boolean algebra, since homomorphisms must preserve the two constants and those are the only elements of2. A Boolean algebra with this property is called aninitialBoolean algebra. It can be shown that any two initial Boolean algebras are isomorphic, so up to isomorphism2istheinitial Boolean algebra. In the other direction, there may exist many homomorphisms from a Boolean algebraBto2. Any such homomorphism partitionsBinto those elements mapped to 1 and those to 0. The subset ofBconsisting of the former is called anultrafilterofB. WhenBis finite its ultrafilters pair up with its atoms; one atom is mapped to 1 and the rest to 0. Each ultrafilter ofBthus consists of an atom ofBand all the elements above it; hence exactly half the elements ofBare in the ultrafilter, and there as many ultrafilters as atoms. For infinite Boolean algebras the notion of ultrafilter becomes considerably more delicate. The elements greater than or equal to an atom always form an ultrafilter, but so do many other sets; for example, in the Boolean algebra of finite and cofinite sets of integers, the cofinite sets form an ultrafilter even though none of them are atoms. Likewise, the powerset of the integers has among its ultrafilters the set of all subsets containing a given integer; there are countably many of these "standard" ultrafilters, which may be identified with the integers themselves, but there are uncountably many more "nonstandard" ultrafilters. These form the basis fornonstandard analysis, providing representations for such classically inconsistent objects as infinitesimals and delta functions. Recall the definition of sup and inf from the section above on the underlying partial order of a Boolean algebra. Acomplete Boolean algebrais one every subset of which has both a sup and an inf, even the infinite subsets. Gaifman [1964] andHales[1964] independently showed that infinitefreecomplete Boolean algebrasdo not exist. This suggests that a logic with set-sized-infinitary operations may have class-many terms—just as a logic with finitary operations may have infinitely many terms. There is however another approach to introducing infinitary Boolean operations: simply drop "finitary" from the definition of Boolean algebra. A model of the equational theory of the algebra ofalloperations on {0,1} of arity up to the cardinality of the model is called a complete atomic Boolean algebra, orCABA. (In place of this awkward restriction on arity we could allow any arity, leading to a different awkwardness, that the signature would then be larger than any set, that is, a proper class. One benefit of the latter approach is that it simplifies the definition of homomorphism between CABAs of differentcardinality.) Such an algebra can be defined equivalently as acomplete Boolean algebrathat isatomic, meaning that every element is a sup of some set of atoms. Free CABAs exist for all cardinalities of a setVofgenerators, namely thepower setalgebra22V, this being the obvious generalization of the finite free Boolean algebras. This neatly rescues infinitary Boolean logic from the fate the Gaifman–Hales result seemed to consign it to. 
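Before moving on, the finite case described above, in which ultrafilters pair up with atoms, can be verified mechanically on the eight-element power set algebra. The sketch below (illustrative only) enumerates the homomorphisms onto 2 and checks that each one sends exactly the elements above a single atom to 1.

```python
from itertools import combinations, product

W = frozenset('abc')
elements = [frozenset(s) for r in range(4) for s in combinations(sorted(W), r)]
atoms = [e for e in elements if len(e) == 1]

def is_homomorphism(h):
    """h maps elements to 0/1 and must preserve 0, 1, meet, join and complement."""
    return (h[frozenset()] == 0 and h[W] == 1
            and all(h[x & y] == h[x] & h[y] and h[x | y] == h[x] | h[y]
                    for x in elements for y in elements)
            and all(h[W - x] == 1 - h[x] for x in elements))

homs = []
for values in product((0, 1), repeat=len(elements)):   # all candidate maps to {0,1}
    h = dict(zip(elements, values))
    if is_homomorphism(h):
        homs.append(h)

# Exactly one homomorphism per atom; each maps to 1 precisely the elements
# above (i.e. containing) that atom.
assert len(homs) == len(atoms) == 3
assert all(any(all(h[x] == (a <= x) for x in elements) for a in atoms) for h in homs)
```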
The nonexistence offreecomplete Boolean algebrascan be traced to failure to extend the equations of Boolean logic suitably to all laws that should hold for infinitary conjunction and disjunction, in particular the neglect of distributivity in the definition of complete Boolean algebra. A complete Boolean algebra is calledcompletely distributivewhen arbitrary conjunctions distribute over arbitrary disjunctions and vice versa. A Boolean algebra is a CABA if and only if it is complete and completely distributive, giving a third definition of CABA. A fourth definition is as any Boolean algebra isomorphic to a power set algebra. A complete homomorphism is one that preserves all sups that exist, not just the finite sups, and likewise for infs. The categoryCABAof all CABAs and their complete homomorphisms is dual to the category of sets and their functions, meaning that it is equivalent to the opposite of that category (the category resulting from reversing all morphisms). Things are not so simple for the categoryBoolof Boolean algebras and their homomorphisms, whichMarshall Stoneshowed in effect (though he lacked both the language and the conceptual framework to make the duality explicit) to be dual to the category oftotally disconnectedcompact Hausdorff spaces, subsequently calledStone spaces. Another infinitary class intermediate between Boolean algebras andcomplete Boolean algebrasis the notion of asigma-algebra. This is defined analogously to complete Boolean algebras, but withsupsandinfslimited to countable arity. That is, asigma-algebrais a Boolean algebra with all countable sups and infs. Because the sups and infs are of boundedcardinality, unlike the situation withcomplete Boolean algebras, the Gaifman-Hales result does not apply andfreesigma-algebrasdo exist. Unlike the situation with CABAs however, the free countably generated sigma algebra is not a power set algebra. We have already encountered several definitions of Boolean algebra, as a model of the equational theory of the two-element algebra, as a complemented distributive lattice, as a Boolean ring, and as a product-preserving functor from a certain category (Lawvere). Two more definitions worth mentioning are:. (The circularity in this definition can be removed by replacing "finite Boolean algebra" by "finite power set" equipped with the Boolean operations standardly interpreted for power sets.) To put this in perspective, infinite sets arise as filtered colimits of finite sets, infinite CABAs as filtered limits of finite power set algebras, and infinite Stone spaces as filtered limits of finite sets. Thus if one starts with the finite sets and asks how these generalize to infinite objects, there are two ways: "adding" them gives ordinary or inductive sets while "multiplying" them givesStone spacesorprofinite sets. The same choice exists for finite power set algebras as the duals of finite sets: addition yields Boolean algebras as inductive objects while multiplication yields CABAs or power set algebras as profinite objects. A characteristic distinguishing feature is that the underlying topology of objects so constructed, when defined so as to beHausdorff, isdiscretefor inductive objects andcompactfor profinite objects. The topology of finite Hausdorff spaces is always both discrete and compact, whereas for infinite spaces "discrete"' and "compact" are mutually exclusive. 
Thus when generalizing finite algebras (of any kind, not just Boolean) to infinite ones, "discrete" and "compact" part company, and one must choose which one to retain. The general rule, for both finite and infinite algebras, is that finitary algebras are discrete, whereas their duals are compact and feature infinitary operations. Between these two extremes, there are many intermediate infinite Boolean algebras whose topology is neither discrete nor compact.
https://en.wikipedia.org/wiki/Boolean_algebras_canonically_defined
TheBoyer–Moore majority vote algorithmis analgorithmfor finding themajorityof a sequence of elements usinglinear timeand a constant number of words of memory. It is named afterRobert S. BoyerandJ Strother Moore, who published it in 1981,[1]and is a prototypical example of astreaming algorithm. In its simplest form, the algorithm finds a majority element, if there is one: that is, an element that occurs repeatedly for more than half of the elements of the input. A version of the algorithm that makes a second pass through the data can be used to verify that the element found in the first pass really is a majority.[1] If a second pass is not performed and there is no majority, the algorithm will not detect that no majority exists. In the case that no strict majority exists, the returned element can be arbitrary; it is not guaranteed to be the element that occurs most often (themodeof the sequence). It is not possible for a streaming algorithm to find the most frequent element in less than linear space, for sequences whose number of repetitions can be small.[2] The algorithm maintains in itslocal variablesa sequence element and a counter, with the counter initially zero. It then processes the elements of the sequence, one at a time. When processing an elementx, if the counter is zero, the algorithm storesxas its remembered sequence element and sets the counter to one. Otherwise, it comparesxto the stored element and either increments the counter (if they are equal) or decrements the counter (otherwise). At the end of this process, if the sequence has a majority, it will be the element stored by the algorithm. This can be expressed inpseudocodeas the following steps: Even when the input sequence has no majority, the algorithm will report one of the sequence elements as its result. However, it is possible to perform a second pass over the same input sequence in order to count the number of times the reported element occurs and determine whether it is actually a majority. This second pass is needed, as it is not possible for a sublinear-space algorithm to determine whether there exists a majority element in a single pass through the input.[3] The amount of memory that the algorithm needs is the space for one element and one counter. In therandom accessmodel of computing usually used for theanalysis of algorithms, each of these values can be stored in amachine wordand the total space needed isO(1). If an array index is needed to keep track of the algorithm's position in the input sequence, it doesn't change the overall constant space bound. The algorithm'sbit complexity(the space it would need, for instance, on aTuring machine) is higher, the sum of thebinary logarithmsof the input length and the size of the universe from which the elements are drawn.[2]Both the random access model and bit complexity analyses only count the working storage of the algorithm, and not the storage for the input sequence itself. Similarly, on a random access machine, the algorithm takes timeO(n)(linear time) on an input sequence ofnitems, because it performs only a constant number of operations per input item. The algorithm can also be implemented on a Turing machine in time linear in the input length (ntimes the number of bits per input item).[4] After processingninput elements, the input sequence can be partitioned into(n−c) / 2pairs of unequal elements, andccopies ofmleft over. 
This is a proof by induction; it is trivially true when n = c = 0, and it is maintained every time an element x is added: if the counter is zero, x becomes the new stored element m and the counter becomes one, so x is the single remaining copy of m and the earlier elements remain partitioned into unequal pairs; if x equals the stored element m, the counter increases by one and x joins the remaining copies of m; and if x differs from m while the counter is positive, the counter decreases by one and x is paired with one of the previously remaining copies of m to form a new unequal pair. In all cases, the loop invariant is maintained.[1] After the entire sequence has been processed, it follows that no element x ≠ m can have a majority, because x can equal at most one element of each unequal pair and none of the remaining c copies of m. Thus, if there is a majority element, it can only be m.[1]
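A direct transcription of the algorithm into Python, including the optional verifying second pass, might look as follows (a sketch consistent with the description above, not the authors' original presentation):

```python
def boyer_moore_candidate(seq):
    """One pass: return a candidate that is the majority element if one exists."""
    candidate, count = None, 0
    for x in seq:
        if count == 0:
            candidate, count = x, 1
        elif x == candidate:
            count += 1
        else:
            count -= 1
    return candidate

def majority(seq):
    """Two passes: return the majority element, or None if there is none."""
    seq = list(seq)                        # the second pass re-reads the input
    candidate = boyer_moore_candidate(seq)
    if candidate is not None and 2 * sum(x == candidate for x in seq) > len(seq):
        return candidate
    return None

assert majority([1, 2, 1, 1, 3, 1]) == 1   # 1 occurs 4 times out of 6
assert majority([1, 2, 3]) is None         # no strict majority exists
```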
https://en.wikipedia.org/wiki/Boyer%E2%80%93Moore_majority_vote_algorithm
Themajority problem, ordensity classification task, is the problem of finding one-dimensionalcellular automatonrules that accurately performmajority voting. Using local transition rules, cells cannot know the total count of all the ones in system. In order to count the number of ones (or, by symmetry, the number of zeros), the system requires a logarithmic number of bits in the total size of the system. It also requires the system send messages over a distance linear in the size of the system and for the system to recognize a non-regular language. Thus, this problem is an important test case in measuring the computational power of cellular automaton systems. Given a configuration of a two-state cellular automaton withi+jcells total,iof which are in the zero state andjof which are in the one state, a correct solution to the voting problem must eventually set all cells to zero ifi>jand must eventually set all cells to one ifi<j. The desired eventual state is unspecified ifi=j. The problem can also be generalized to testing whether the proportion of zeros and ones is above or below some threshold other than 50%. In this generalization, one is also given a thresholdρ{\displaystyle \rho }; a correct solution to the voting problem must eventually set all cells to zero ifii+j<ρ{\displaystyle {\tfrac {i}{i+j}}<\rho }and must eventually set all cells to one ifji+j>ρ{\displaystyle {\tfrac {j}{i+j}}>\rho }. The desired eventual state is unspecified ifji+j=ρ{\displaystyle {\tfrac {j}{i+j}}=\rho }. Gács, Kurdyumov, andLevinfound an automaton that, although it does not always solve the majority problem correctly, does so in many cases.[1]In their approach to the problem, the quality of a cellular automaton rule is measured by the fraction of the2i+j{\displaystyle 2^{i+j}}possible starting configurations that it correctly classifies. The rule proposed by Gacs, Kurdyumov, and Levin sets the state of each cell as follows. If a cell is 0, its next state is formed as the majority among the values of itself, its immediate neighbor to the left, and its neighbor three spaces to the left. If, on the other hand, a cell is 1, its next state is formed symmetrically, as the majority among the values of itself, its immediate neighbor to the right, and its neighbor three spaces to the right. In randomly generated instances, this achieves about 78% accuracy in correctly determining the majority. Das,Mitchell, and Crutchfield showed that it is possible to develop better rules usinggenetic algorithms.[2] In 1995, Land and Belew[3]showed that no two-state rule with radiusrand density ρ correctly solves the voting problem on all starting configurations when the number of cells is sufficiently large (larger than about 4r/ρ). Their argument shows that because the system isdeterministic, every cell surrounded entirely by zeros or ones must then become a zero. Likewise, any perfect rule can never make the ratio of ones go aboveρ{\displaystyle \rho }if it was below (or vice versa). They then show that any assumed perfect rule will either cause an isolated one that pushed the ratio overρ{\displaystyle \rho }to be cancelled out or, if the ratio of ones is less thanρ{\displaystyle \rho }, will cause an isolated one to introduce spurious ones into a block of zeros causing the ratio of ones to become greater thanρ{\displaystyle \rho }. 
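The Gacs–Kurdyumov–Levin rule described above lends itself to direct simulation. The sketch below is illustrative only; the lattice size of 149 cells and the number of update steps are arbitrary choices, not taken from the sources. It estimates the fraction of random starting configurations that the rule classifies correctly.

```python
import random

def gkl_step(cells):
    """One synchronous update of the GKL rule on a cyclic array of 0/1 cells."""
    n = len(cells)
    nxt = []
    for i, c in enumerate(cells):
        if c == 0:
            votes = (cells[i], cells[i - 1], cells[i - 3])                    # self, left, 3 left
        else:
            votes = (cells[i], cells[(i + 1) % n], cells[(i + 3) % n])        # self, right, 3 right
        nxt.append(1 if sum(votes) >= 2 else 0)
    return nxt

def trial(n=149, steps=300):
    cells = [random.randint(0, 1) for _ in range(n)]
    target = 1 if 2 * sum(cells) > n else 0          # n is odd, so no ties
    for _ in range(steps):
        cells = gkl_step(cells)
    return all(c == target for c in cells)

runs = 200
print(sum(trial() for _ in range(runs)) / runs)      # typically well above chance,
                                                     # roughly in line with the ~78% figure
```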
In 2013, Busic, Fatès, Marcovici and Mairesse gave a simpler proof of the impossibility of a perfect density classifier, which holds for both deterministic and stochastic cellular automata and in any dimension.[4] As observed by Capcarrere, Sipper, and Tomassini,[5][6] the majority problem may be solved perfectly if one relaxes the definition by which the automaton is said to have recognized the majority. In particular, for the Rule 184 automaton, when run on a finite universe with cyclic boundary conditions, each cell will infinitely often remain in the majority state for two consecutive steps while only finitely many times being in the minority state for two consecutive steps. Alternatively, a hybrid automaton that runs Rule 184 for a number of steps linear in the size of the array, and then switches to the majority rule (Rule 232), which sets each cell to the majority of itself and its neighbors, solves the majority problem with the standard recognition criterion of either all zeros or all ones in the final state. However, this machine is not itself a cellular automaton.[7] Moreover, it has been shown that this composite rule, due to Fukś, is very sensitive to noise and cannot outperform the noisy Gacs–Kurdyumov–Levin automaton, itself an imperfect classifier, at any level of noise (e.g., from the environment or from dynamical mistakes).[8] Since the task thus ranges from impossible to rather simple depending on how the desired output is defined, the problem was generalised with the following definition: a perfect density classifying automaton is one for which the set of configurations reachable when the density of the starting configuration is below the threshold is disjoint from the set of configurations reachable when the density of the starting configuration is above the threshold. Using that definition, Capcarrere and Sipper[9] were able to prove two necessary conditions for a cellular automaton to be a perfect density classifier: (1) the density of the initial configuration must be conserved over time, and (2) the rule table must exhibit a density of 0.5 (even when the threshold for classification is different from 0.5). That last property is notable in that it ties a condition on the form of the rule to a global behaviour.
https://en.wikipedia.org/wiki/Majority_problem_(cellular_automaton)
Inargumentation theory, anargumentum ad populum(Latinfor 'appeal to the people')[1]is afallacious argumentwhich is based on claiming a truth or affirming something is good or correct because many people think so.[2] Other names for the fallacy include: Argumentum ad populumis a type ofinformal fallacy,[1][14]specifically afallacy of relevance,[15][16]and is similar to anargument from authority(argumentum ad verecundiam).[14][4][9]It uses an appeal to the beliefs, tastes, or values of a group of people,[12]stating that because a certain opinion or attitude is held by a majority, or even everyone, it is therefore correct.[12][17] Appeals to popularity are common in commercial advertising that portrays products as desirable because they are used by many people[9]or associated with popular sentiments[18]instead of communicating the merits of the products themselves. Theinverseargument, that something that is unpopular must be flawed, is also a form of this fallacy.[6] The fallacy is similar in structure to certain other fallacies that involve a confusion between the "justification" of a belief and its "widespread acceptance" by a given group of people. When an argument uses the appeal to the beliefs of a group of experts, it takes on the form of an appeal to authority; if the appeal relates to the beliefs of a group of respected elders or the members of one's community over a long time, then it takes on the form of anappeal to tradition. The philosopherIrving Copidefinedargumentum ad populumdifferently from an appeal to popular opinion itself,[19]as an attempt to rouse the "emotions and enthusiasms of the multitude".[19][20] Douglas N. Waltonargues that appeals to popular opinion can be logically valid in some cases, such as in political dialogue within ademocracy.[21] In some circumstances, a person may argue that the fact that Y people believe X to be true implies that X isfalse. This line of thought is closely related to theappeal to spitefallacy given that it invokes a person's contempt for the general populace or something about the general populace to persuade them that most are wrong about X. Thisad populumreversal commits the same logical flaw as the original fallacy given that the idea "X is true" is inherently separate from the idea that "Y people believe X": "Y people believe in X as true, purely because Y people believe in it, and not because of any further considerations. Therefore X must be false." While Y people can believe X to be true for fallacious reasons, X might still be true. Their motivations for believing X do not affect whether X is true or false. Y = most people, a given quantity of people, people of a particular demographic. X = a statement that can be true or false. Examples: In general, the reversal usually goes: "Most people believe A and B are both true. B is false. Thus, A is false." The similar fallacy ofchronological snobberyis not to be confused with thead populumreversal. Chronological snobbery is the claim that if belief in both X and Y was popularly held in the past and if Y was recently proved to be untrue then X must also be untrue. That line of argument is based on a belief in historical progress and not—like thead populumreversal is—on whether or not X and/or Y is currently popular. Appeals to public opinion are valid in situations where consensus is the determining factor for the validity of a statement, such as linguistic usage and definitions of words. 
Linguistic descriptivists argue that correct grammar, spelling, and expressions are defined by the language's speakers, especially in languages which do not have a central governing body. According to this viewpoint, if an incorrect expression is commonly used, it becomes correct. In contrast, linguistic prescriptivists believe that incorrect expressions are incorrect regardless of how many people use them.[22] Special functions are mathematical functions that have well-established names and mathematical notations due to their significance in mathematics and other scientific fields. There is no formal definition of what makes a function a special function; instead, the term special function is defined by consensus. Functions generally considered to be special functions include logarithms, trigonometric functions, and the Bessel functions.
https://en.wikipedia.org/wiki/Appeal_to_the_majority
Arrow's impossibility theorem is a key result in social choice theory showing that no ranking-based decision rule for a group can satisfy the requirements of rational choice.[1] Specifically, Arrow showed no such rule can satisfy independence of irrelevant alternatives, the principle that a choice between two alternatives A and B should not depend on the quality of some third, unrelated option C.[2][3][4] The result is often cited in discussions of voting rules,[5] where it implies no ranked voting rule can eliminate the spoiler effect,[6][7][8] though this was known before Arrow (dating back to the Marquis de Condorcet's voting paradox, showing the impossibility of majority rule). Arrow's theorem generalizes Condorcet's findings, showing the same problems extend to every group decision procedure based on relative comparisons, including non-majoritarian rules like collective leadership or consensus decision-making.[1] While the impossibility theorem shows all ranked voting rules must have spoilers, the frequency of spoilers differs dramatically by rule. Plurality-rule methods like choose-one and ranked-choice (instant-runoff) voting are highly sensitive to spoilers,[9][10] creating them even in some situations (like center squeezes) where they are not mathematically necessary.[11][12] By contrast, majority-rule (Condorcet) methods of ranked voting uniquely minimize the number of spoiled elections[12] by restricting them to voting cycles,[11] which are rare in ideologically-driven elections.[13][14] Under some models of voter preferences (like the left-right spectrum assumed in the median voter theorem), spoilers disappear entirely for these methods.[15][16] Rated voting rules, where voters assign a separate grade to each candidate, are not affected by Arrow's theorem.[17][18][19] Arrow initially asserted the information provided by these systems was meaningless and therefore could not be used to prevent paradoxes, leading him to overlook them.[20] However, Arrow would later describe this as a mistake,[21][22] stating rules based on cardinal utilities (such as score and approval voting) are not subject to his theorem.[23][24] When Kenneth Arrow proved his theorem in 1950, it inaugurated the modern field of social choice theory, a branch of welfare economics studying mechanisms to aggregate preferences and beliefs across a society.[25] Such a mechanism of study can be a market, voting system, constitution, or even a moral or ethical framework.[1] In the context of Arrow's theorem, citizens are assumed to have ordinal preferences, i.e. orderings of candidates. If A and B are different candidates or alternatives, then A ≻ B means A is preferred to B. Individual preferences (or ballots) are required to satisfy intuitive properties of orderings, e.g. they must be transitive: if A ⪰ B and B ⪰ C, then A ⪰ C. The social choice function is then a mathematical function that maps the individual orderings to a new ordering that represents the preferences of all of society.
Arrow's theorem assumes as background that anynon-degeneratesocial choice rule will satisfy:[26] Arrow's original statement of the theorem includednon-negative responsivenessas a condition, i.e., thatincreasingthe rank of an outcome should not make themlose—in other words, that a voting rule shouldn't penalize a candidate for being more popular.[2]However, this assumption is not needed or used in his proof (except to derive the weaker condition of Pareto efficiency), and Arrow later corrected his statement of the theorem to remove the inclusion of this condition.[3][29] A commonly-considered axiom ofrational choiceisindependence of irrelevant alternatives(IIA), which says that when deciding betweenAandB, one's opinion about a third optionCshould not affect their decision.[2] IIA is sometimes illustrated with a short joke by philosopherSidney Morgenbesser:[30] Arrow's theorem shows that if a society wishes to make decisions while always avoiding such self-contradictions, it cannot use ranked information alone.[30] Condorcet's exampleis already enough to see the impossibility of a fairranked voting system, given stronger conditions for fairness than Arrow's theorem assumes.[31]Suppose we have three candidates (A{\displaystyle A},B{\displaystyle B}, andC{\displaystyle C}) and three voters whose preferences are as follows: IfC{\displaystyle C}is chosen as the winner, it can be argued any fair voting system would sayB{\displaystyle B}should win instead, since two voters (1 and 2) preferB{\displaystyle B}toC{\displaystyle C}and only one voter (3) prefersC{\displaystyle C}toB{\displaystyle B}. However, by the same argumentA{\displaystyle A}is preferred toB{\displaystyle B}, andC{\displaystyle C}is preferred toA{\displaystyle A}, by a margin of two to one on each occasion. Thus, even though each individual voter has consistent preferences, the preferences of society are contradictory:A{\displaystyle A}is preferred overB{\displaystyle B}which is preferred overC{\displaystyle C}which is preferred overA{\displaystyle A}. Because of this example, some authors creditCondorcetwith having given an intuitive argument that presents the core of Arrow's theorem.[31]However, Arrow's theorem is substantially more general; it applies to methods of making decisions other than one-person-one-vote elections, such asmarketsorweighted voting, based onranked ballots. LetA{\displaystyle A}be a set ofalternatives. A voter'spreferencesoverA{\displaystyle A}are acompleteandtransitivebinary relationonA{\displaystyle A}(sometimes called atotal preorder), that is, a subsetR{\displaystyle R}ofA×A{\displaystyle A\times A}satisfying: The element(a,b){\displaystyle (\mathbf {a} ,\mathbf {b} )}being inR{\displaystyle R}is interpreted to mean that alternativea{\displaystyle \mathbf {a} }is preferred to alternativeb{\displaystyle \mathbf {b} }. This situation is often denoteda≻b{\displaystyle \mathbf {a} \succ \mathbf {b} }oraRb{\displaystyle \mathbf {a} R\mathbf {b} }. Denote the set of all preferences onA{\displaystyle A}byΠ(A){\displaystyle \Pi (A)}. LetN{\displaystyle N}be a positive integer. Anordinal (ranked)social welfare functionis a function[2] which aggregates voters' preferences into a single preference onA{\displaystyle A}. AnN{\displaystyle N}-tuple(R1,…,RN)∈Π(A)N{\displaystyle (R_{1},\ldots ,R_{N})\in \Pi (A)^{N}}of voters' preferences is called apreference profile. 
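The margins quoted above in Condorcet's example correspond to the classic cyclic profile (voter 1: A ≻ B ≻ C, voter 2: B ≻ C ≻ A, voter 3: C ≻ A ≻ B), which can be checked mechanically. The following sketch (illustrative only) computes the pairwise majorities and exhibits the cycle.

```python
from itertools import permutations

# Each ballot lists the candidates from most to least preferred.
ballots = [('A', 'B', 'C'),   # voter 1
           ('B', 'C', 'A'),   # voter 2
           ('C', 'A', 'B')]   # voter 3

def prefers(ballot, x, y):
    """True if this ballot ranks x above y."""
    return ballot.index(x) < ballot.index(y)

for x, y in permutations('ABC', 2):
    margin = sum(prefers(b, x, y) for b in ballots)
    if 2 * margin > len(ballots):
        print(f"{x} beats {y} by {margin} votes to {len(ballots) - margin}")
# Prints: A beats B 2-1, B beats C 2-1, C beats A 2-1 -- a majority cycle.
```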
Arrow's impossibility theorem: If there are at least three alternatives, then there is no social welfare function satisfying all three of the conditions listed below:[32] Arrow's proof used the concept ofdecisive coalitions.[3] Definition: Our goal is to prove that thedecisive coalitioncontains only one voter, who controls the outcome—in other words, adictator. The following proof is a simplification taken fromAmartya Sen[33]andAriel Rubinstein.[34]The simplified proof uses an additional concept: Thenceforth assume that the social choice system satisfies unrestricted domain, Pareto efficiency, and IIA. Also assume that there are at least 3 distinct outcomes. Field expansion lemma—if a coalitionG{\displaystyle G}is weakly decisive over(x,y){\displaystyle (x,y)}for somex≠y{\displaystyle x\neq y}, then it is decisive. Letz{\displaystyle z}be an outcome distinct fromx,y{\displaystyle x,y}. Claim:G{\displaystyle G}is decisive over(x,z){\displaystyle (x,z)}. Let everyone inG{\displaystyle G}votex{\displaystyle x}overz{\displaystyle z}. By IIA, changing the votes ony{\displaystyle y}does not matter forx,z{\displaystyle x,z}. So change the votes such thatx≻iy≻iz{\displaystyle x\succ _{i}y\succ _{i}z}inG{\displaystyle G}andy≻ix{\displaystyle y\succ _{i}x}andy≻iz{\displaystyle y\succ _{i}z}outside ofG{\displaystyle G}. By Pareto,y≻z{\displaystyle y\succ z}. By coalition weak-decisiveness over(x,y){\displaystyle (x,y)},x≻y{\displaystyle x\succ y}. Thusx≻z{\displaystyle x\succ z}.◻{\displaystyle \square } Similarly,G{\displaystyle G}is decisive over(z,y){\displaystyle (z,y)}. By iterating the above two claims (note that decisiveness implies weak-decisiveness), we find thatG{\displaystyle G}is decisive over all ordered pairs in{x,y,z}{\displaystyle \{x,y,z\}}. Then iterating that, we find thatG{\displaystyle G}is decisive over all ordered pairs inX{\displaystyle X}. Group contraction lemma—If a coalition is decisive, and has size≥2{\displaystyle \geq 2}, then it has a proper subset that is also decisive. LetG{\displaystyle G}be a coalition with size≥2{\displaystyle \geq 2}. Partition the coalition into nonempty subsetsG1,G2{\displaystyle G_{1},G_{2}}. Fix distinctx,y,z{\displaystyle x,y,z}. Design the following voting pattern (notice that it is the cyclic voting pattern which causes the Condorcet paradox): voters inG1:x≻iy≻izvoters inG2:z≻ix≻iyvoters outsideG:y≻iz≻ix{\displaystyle {\begin{aligned}{\text{voters in }}G_{1}&:x\succ _{i}y\succ _{i}z\\{\text{voters in }}G_{2}&:z\succ _{i}x\succ _{i}y\\{\text{voters outside }}G&:y\succ _{i}z\succ _{i}x\end{aligned}}} (Items other thanx,y,z{\displaystyle x,y,z}are not relevant.) SinceG{\displaystyle G}is decisive, we havex≻y{\displaystyle x\succ y}. So at least one is true:x≻z{\displaystyle x\succ z}orz≻y{\displaystyle z\succ y}. Ifx≻z{\displaystyle x\succ z}, thenG1{\displaystyle G_{1}}is weakly decisive over(x,z){\displaystyle (x,z)}. Ifz≻y{\displaystyle z\succ y}, thenG2{\displaystyle G_{2}}is weakly decisive over(z,y){\displaystyle (z,y)}. Now apply the field expansion lemma. By Pareto, the entire set of voters is decisive. Thus by the group contraction lemma, there is a size-one decisive coalition—a dictator. Proofs using the concept of thepivotal voteroriginated from Salvador Barberá in 1980.[35]The proof given here is a simplified version based on two proofs published inEconomic Theory.[32][36] Assume there arenvoters. 
We assign all of these voters an arbitrary ID number, ranging from1throughn, which we can use to keep track of each voter's identity as we consider what happens when they change their votes.Without loss of generality, we can say there are three candidates who we callA,B, andC. (Because of IIA, including more than 3 candidates does not affect the proof.) We will prove that any social choice rule respecting unanimity and independence of irrelevant alternatives (IIA) is a dictatorship. The proof is in three parts: Consider the situation where everyone prefersAtoB, and everyone also prefersCtoB. By unanimity, society must also prefer bothAandCtoB. Call this situationprofile[0, x]. On the other hand, if everyone preferredBto everything else, then society would have to preferBto everything else by unanimity. Now arrange all the voters in some arbitrary but fixed order, and for eachiletprofile ibe the same asprofile 0, but moveBto the top of the ballots for voters 1 throughi. Soprofile 1hasBat the top of the ballot for voter 1, but not for any of the others.Profile 2hasBat the top for voters 1 and 2, but no others, and so on. SinceBeventually moves to the top of the societal preference as the profile number increases, there must be some profile, numberk, for whichBfirstmovesaboveAin the societal rank. We call the voterkwhose ballot change causes this to happen thepivotal voter forBoverA. Note that the pivotal voter forBoverAis not,a priori, the same as the pivotal voter forAoverB. In part three of the proof we will show that these do turn out to be the same. Also note that by IIA the same argument applies ifprofile 0is any profile in whichAis ranked aboveBby every voter, and the pivotal voter forBoverAwill still be voterk. We will use this observation below. In this part of the argument we refer to voterk, the pivotal voter forBoverA, as thepivotal voterfor simplicity. We will show that the pivotal voter dictates society's decision forBoverC. That is, we show that no matter how the rest of society votes, ifpivotal voterranksBoverC, then that is the societal outcome. Note again that the dictator forBoverCis not a priori the same as that forCoverB. In part three of the proof we will see that these turn out to be the same too. In the following, we call voters 1 throughk − 1,segment one, and votersk + 1throughN,segment two. To begin, suppose that the ballots are as follows: Then by the argument in part one (and the last observation in that part), the societal outcome must rankAaboveB. This is because, except for a repositioning ofC, this profile is the same asprofile k − 1from part one. Furthermore, by unanimity the societal outcome must rankBaboveC. Therefore, we know the outcome in this case completely. Now suppose that pivotal voter movesBaboveA, but keepsCin the same position and imagine that any number (even all!) of the other voters change their ballots to moveBbelowC, without changing the position ofA. Then aside from a repositioning ofCthis is the same asprofile kfrom part one and hence the societal outcome ranksBaboveA. Furthermore, by IIA the societal outcome must rankAaboveC, as in the previous case. In particular, the societal outcome ranksBaboveC, even though Pivotal Voter may have been theonlyvoter to rankBaboveC.ByIIA, this conclusion holds independently of howAis positioned on the ballots, so pivotal voter is a dictator forBoverC. 
In this part of the argument we refer back to the original ordering of voters, and compare the positions of the different pivotal voters (identified by applying parts one and two to the other pairs of candidates). First, the pivotal voter forBoverCmust appear earlier (or at the same position) in the line than the dictator forBoverC: As we consider the argument of part one applied toBandC, successively movingBto the top of voters' ballots, the pivot point where society ranksBaboveCmust come at or before we reach the dictator forBoverC. Likewise, reversing the roles ofBandC, the pivotal voter forCoverBmust be at or later in line than the dictator forBoverC. In short, ifkX/Ydenotes the position of the pivotal voter forXoverY(for any two candidatesXandY), then we have shown Now repeating the entire argument above withBandCswitched, we also have Therefore, we have and the same argument for other pairs shows that all the pivotal voters (and hence all the dictators) occur at the same position in the list of voters. This voter is the dictator for the whole election. Arrow's impossibility theorem still holds if Pareto efficiency is weakened to the following condition:[4] Arrow's theorem establishes that no ranked voting rule canalwayssatisfy independence of irrelevant alternatives, but it says nothing about the frequency of spoilers. This led Arrow to remark that "Most systems are not going to work badly all of the time. All I proved is that all can work badly at times."[37][38] Attempts at dealing with the effects of Arrow's theorem take one of two approaches: either accepting his rule and searching for the least spoiler-prone methods, or dropping one or more of his assumptions, such as by focusing onrated votingrules.[30] The first set of methods studied by economists are themajority-rule, orCondorcet, methods. These rules limit spoilers to situations where majority rule is self-contradictory, calledCondorcet cycles, and as a result uniquely minimize the possibility of a spoiler effect among ranked rules. (Indeed, many different social welfare functions can meet Arrow's conditions under such restrictions of the domain. It has been proven, however, that under any such restriction, if there exists any social welfare function that adheres to Arrow's criteria, thenCondorcet methodwill adhere to Arrow's criteria.[12]) Condorcet believed voting rules should satisfy both independence of irrelevant alternatives and themajority rule principle, i.e. if most voters rankAliceahead ofBob,Aliceshould defeatBobin the election.[31] Unfortunately, as Condorcet proved, this rule can be intransitive on some preference profiles.[39]Thus, Condorcet proved a weaker form of Arrow's impossibility theorem long before Arrow, under the stronger assumption that a voting system in the two-candidate case will agree with a simple majority vote.[31] Unlike pluralitarian rules such asranked-choice runoff (RCV)orfirst-preference plurality,[9]Condorcet methodsavoid the spoiler effect in non-cyclic elections, where candidates can be chosen by majority rule. Political scientists have found such cycles to be fairly rare, suggesting they may be of limited practical concern.[14]Spatial voting modelsalso suggest such paradoxes are likely to be infrequent[40][13]or even non-existent.[15] Soon after Arrow published his theorem,Duncan Blackshowed his own remarkable result, themedian voter theorem. 
The theorem proves that if voters and candidates are arranged on a left-right spectrum, Arrow's conditions are all fully compatible, and all will be met by any rule satisfying Condorcet's majority-rule principle.[15][16]

More formally, Black's theorem assumes preferences are single-peaked: a voter's happiness with a candidate goes up and then down as the candidate moves along some spectrum. For example, in a group of friends choosing a volume setting for music, each friend would likely have their own ideal volume; as the volume gets progressively too loud or too quiet, they would be increasingly dissatisfied. If the domain is restricted to profiles where every individual has a single-peaked preference with respect to the linear ordering, then social preferences are acyclic. In this situation, Condorcet methods satisfy a wide variety of highly desirable properties, including being fully spoilerproof.[15][16][12]

The rule does not fully generalize from the political spectrum to the political compass, a result related to the McKelvey-Schofield chaos theorem.[15][41] However, a well-defined Condorcet winner does exist if the distribution of voters is rotationally symmetric or otherwise has a uniquely-defined median.[42][43] In most realistic situations, where voters' opinions follow a roughly normal distribution or can be accurately summarized by one or two dimensions, Condorcet cycles are rare (though not unheard of).[40][11]

The Campbell-Kelly theorem shows that Condorcet methods are the most spoiler-resistant class of ranked voting systems: whenever it is possible for some ranked voting system to avoid a spoiler effect, a Condorcet method will do so.[12] In other words, replacing a ranked method with its Condorcet variant (i.e. elect a Condorcet winner if one exists, and otherwise run the method) will sometimes prevent a spoiler effect, but can never create a new one.[12]

In 1977, Ehud Kalai and Eitan Muller gave a full characterization of domain restrictions admitting a nondictatorial and strategyproof social welfare function. These correspond to preferences for which there is a Condorcet winner.[44]

Holliday and Pacuit devised a voting system that provably minimizes the number of candidates who are capable of spoiling an election, albeit at the cost of occasionally failing vote positivity (though at a much lower rate than seen in instant-runoff voting).[11]

As shown above, the proof of Arrow's theorem relies crucially on the assumption of ranked voting, and is not applicable to rated voting systems. This opens up the possibility of passing all of the criteria given by Arrow. These systems ask voters to rate candidates on a numerical scale (e.g. from 0–10), and then elect the candidate with the highest average (for score voting) or median (graduated majority judgment).[45]: 4–5

Because Arrow's theorem no longer applies, other results are required to determine whether rated methods are immune to the spoiler effect, and under what circumstances. Intuitively, cardinal information can only lead to such immunity if it is meaningful; simply providing cardinal data is not enough.[46]

Some rated systems, such as range voting and majority judgment, pass independence of irrelevant alternatives when the voters rate the candidates on an absolute scale.
However, when they use relative scales, more general impossibility theorems show that the methods (within that context) still fail IIA.[47] As Arrow later suggested, relative ratings may provide more information than pure rankings,[48][49][50][37][51] but this information does not suffice to render the methods immune to spoilers. While Arrow's theorem does not apply to graded systems, Gibbard's theorem still does: no voting game can be straightforward (i.e. have a single, clear, always-best strategy).[52]

Arrow's framework assumed individual and social preferences are orderings or rankings, i.e. statements about which outcomes are better or worse than others.[53] Taking inspiration from the strict behaviorism popular in psychology, some philosophers and economists rejected the idea of comparing internal human experiences of well-being.[54][30] Such philosophers claimed it was impossible to compare the strength of preferences across people who disagreed; Sen gives as an example that it would be impossible to know whether the Great Fire of Rome was good or bad, because despite killing thousands of Romans, it had the positive effect of letting Nero expand his palace.[50]

Arrow originally agreed with these positions and rejected cardinal utility, leading him to focus his theorem on preference rankings.[54][3] However, he later stated that cardinal methods can provide additional useful information, and that his theorem is not applicable to them. John Harsanyi noted Arrow's theorem could be considered a weaker version of his own theorem[55] and other utility representation theorems like the VNM theorem, which generally show that rational behavior requires consistent cardinal utilities.[56]

Behavioral economists have shown individual irrationality involves violations of IIA (e.g. with decoy effects),[57] suggesting human behavior can cause IIA failures even if the voting method itself does not.[58] However, past research has typically found such effects to be fairly small,[59] and such psychological spoilers can appear regardless of electoral system. Balinski and Laraki discuss techniques of ballot design derived from psychometrics that minimize these psychological effects, such as asking voters to give each candidate a verbal grade (e.g. "bad", "neutral", "good", "excellent") and issuing instructions to voters that refer to their ballots as judgments of individual candidates.[45] Similar techniques are often discussed in the context of contingent valuation.[51]

In addition to the above practical resolutions, there exist unusual (less-than-practical) situations where Arrow's requirement of IIA can be satisfied. Supermajority rules can avoid Arrow's theorem at the cost of being poorly decisive (i.e. frequently failing to return a result). In this case, a threshold that requires a 2/3 majority for ordering 3 outcomes, 3/4 for 4, etc.
does not produce voting paradoxes.[60] In spatial (n-dimensional ideology) models of voting, this can be relaxed to require only 1 − 1/e (roughly 64%) of the vote to prevent cycles, so long as the distribution of voters is well-behaved (quasiconcave).[61] These results provide some justification for the common requirement of a two-thirds majority for constitutional amendments, which is sufficient to prevent cyclic preferences in most situations.[61]

Fishburn shows all of Arrow's conditions can be satisfied for uncountably infinite sets of voters given the axiom of choice;[62] however, Kirman and Sondermann demonstrated this requires disenfranchising almost all members of a society (eligible voters form a set of measure 0), leading them to refer to such societies as "invisible dictatorships".[63]

Arrow's theorem is not related to strategic voting, which does not appear in his framework,[3][1] though the theorem does have important implications for strategic voting (being used as a lemma to prove Gibbard's theorem[26]). The Arrovian framework of social welfare assumes all voter preferences are known and the only issue is in aggregating them.[1]

Monotonicity (called positive association by Arrow) is not a condition of Arrow's theorem.[3] This misconception is caused by a mistake by Arrow himself, who included the axiom in his original statement of the theorem but did not use it.[2] Dropping the assumption does not allow for constructing a social welfare function that meets his other conditions.[3]

Contrary to a common misconception, Arrow's theorem deals with the limited class of ranked-choice voting systems, rather than voting systems as a whole.[1][64]

Dr. Arrow: Well, I'm a little inclined to think that score systems where you categorize in maybe three or four classes (in spite of what I said about manipulation) is probably the best. [...] And some of these studies have been made. In France, [Michel] Balinski has done some studies of this kind which seem to give some support to these scoring methods.
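The supermajority workaround described above is easy to illustrate. In the sketch below (Python; the three-ballot cycle is a standard textbook profile, not taken from this source, and the choice of a strict threshold is an assumption), a 2/3 requirement leaves the classic Condorcet cycle with no social preference at all rather than a cyclic one.

```python
# Three ballots forming the classic Condorcet cycle.
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]
threshold = 2 / 3

def socially_preferred(x, y):
    """True only if strictly more than 2/3 of ballots rank x above y."""
    support = sum(b.index(x) < b.index(y) for b in ballots) / len(ballots)
    return support > threshold

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    # Each pairwise margin is exactly 2/3, so no strict social preference forms
    # and no cycle is produced.
    print(f"{x} socially preferred to {y}:", socially_preferred(x, y))
```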
https://en.wikipedia.org/wiki/Arrow%27s_theorem
Condorcet's jury theorem is a political science theorem about the relative probability of a given group of individuals arriving at a correct decision. The theorem was first expressed by the Marquis de Condorcet in his 1785 work Essay on the Application of Analysis to the Probability of Majority Decisions.[1]

The assumptions of the theorem are that a group wishes to reach a decision by majority vote. One of the two outcomes of the vote is correct, and each voter has an independent probability p of voting for the correct decision. The theorem asks how many voters we should include in the group. The result depends on whether p is greater than or less than 1/2: if p > 1/2 (each voter is more likely than not to be correct), adding more voters increases the probability that the majority decision is correct, and this probability approaches 1 as the group grows; if p < 1/2, adding voters makes things worse, and the optimal jury consists of a single voter. Since Condorcet, many other researchers have proved various other jury theorems, relaxing some or all of Condorcet's assumptions.

To avoid the need for a tie-breaking rule, we assume n is odd. Essentially the same argument works for even n if ties are broken by adding a single voter. Now suppose we start with n voters, and let m of these voters vote correctly. Consider what happens when we add two more voters (to keep the total number odd). The majority vote changes in only two cases: either m was one vote short of a majority of the n votes and both new voters vote correctly, or m was exactly a majority of the n votes and both new voters vote incorrectly. The rest of the time, either the new votes cancel out, only increase the gap, or don't make enough of a difference. So we only care what happens when a single vote (among the first n) separates a correct from an incorrect majority.

Restricting our attention to this case, we can imagine that the first n − 1 votes cancel out and that the deciding vote is cast by the n-th voter. In this case the probability of getting a correct majority is just p. Now suppose we send in the two extra voters. The probability that they change an incorrect majority to a correct majority is (1 − p)p², while the probability that they change a correct majority to an incorrect majority is p(1 − p)². The first of these probabilities is greater than the second if and only if p > 1/2, proving the theorem.

This proof is direct; it just sums up the probabilities of the majorities. Each term of the sum multiplies the number of combinations of a majority by the probability of that majority. Each majority is counted using a combination, n items taken k at a time, where n is the jury size and k is the size of the majority. Probabilities range from 0 (the vote is always wrong) to 1 (always right). Each person decides independently, so the probabilities of their decisions multiply. The probability of each correct decision is p. The probability of an incorrect decision, q, is the opposite of p, i.e. 1 − p. The power notation p^x is a shorthand for x multiplications of p. Committee or jury accuracies can easily be estimated by using this approach in computer spreadsheets or programs.

As an example, let us take the simplest case of n = 3, p = 0.8. We need to show that 3 people have a higher than 0.8 chance of being right. Indeed, the probability of a correct majority is 0.8³ + 3 × 0.8² × 0.2 = 0.512 + 0.384 = 0.896, which exceeds 0.8.

Asymptotics is "the calculus of approximations". It is used to solve hard problems that cannot be solved exactly and to provide simpler forms of complicated results, from early results like Taylor's and Stirling's formulas to the prime number theorem. An important topic in the study of asymptotics is the asymptotic distribution, a probability distribution that is in a sense the "limiting" distribution of a sequence of distributions. The probability of a correct majority decision P(n, p), when the individual probability p is close to 1/2, grows linearly in terms of p − 1/2.
For n voters, each one having probability p of deciding correctly, and for odd n (where there are no possible ties), P(n, p) can be expanded as a series in powers of p − 1/2, and the asymptotic approximation in terms of n is very accurate. The expansion is only in odd powers and c_3 < 0. In simple terms, this says that when the decision is difficult (p close to 1/2), the gain from having n voters grows proportionally to √n.[2]

The Condorcet jury theorem has recently been used to conceptualize score integration when several physician readers (radiologists, endoscopists, etc.) independently evaluate images for disease activity. This task arises in central reading performed during clinical trials and has similarities to voting. According to the authors, the application of the theorem can translate individual reader scores into a final score in a fashion that is mathematically sound (by avoiding averaging of ordinal data), mathematically tractable for further analysis, and consistent with the scoring task at hand (based on decisions about the presence or absence of features, a subjective classification task).[3]

The Condorcet jury theorem is also used in ensemble learning in the field of machine learning.[4] An ensemble method combines the predictions of many individual classifiers by majority voting. Assuming that each of the individual classifiers predicts with slightly greater than 50% accuracy and that their predictions are independent, the accuracy of the ensemble's majority prediction will be far greater than the accuracy of any individual classifier.

Many political theorists and philosophers use Condorcet's jury theorem (CJT) to defend democracy; see Brennan[5] and references therein. Nevertheless, it is an empirical question whether the theorem holds in real life or not. Note that the CJT is a double-edged sword: it can either prove that majority rule is an (almost) perfect mechanism to aggregate information, when p > 1/2, or an (almost) perfect disaster, when p < 1/2. A disaster would mean that the wrong option is chosen systematically. Some authors have argued that we are in the latter scenario. For instance, Bryan Caplan has extensively argued that voters' knowledge is systematically biased toward (probably) wrong options. In the CJT setup, this could be interpreted as evidence for p < 1/2.

Recently, another approach to studying the applicability of the CJT was taken.[6] Instead of considering the homogeneous case, each voter is allowed to have a probability p_i ∈ [0, 1], possibly different from that of other voters. This case was previously studied by Daniel Berend and Jacob Paroush[7] and includes the classical theorem of Condorcet (when p_i = p for all i) and other results, like the Miracle of Aggregation (when p_i = 1/2 for most voters and p_i = 1 for a small proportion of them). Then, following a Bayesian approach, the prior probability (in this case, a priori) of the thesis predicted by the theorem is estimated. That is, if we choose an arbitrary sequence of voters (i.e., a sequence (p_i)), will the thesis of the CJT hold? The answer is no. More precisely, if a random sequence of p_i is taken following an unbiased distribution that does not favor competence (p_i > 1/2) or incompetence (p_i < 1/2), then the thesis predicted by the theorem will almost surely not hold.
With this new approach, proponents of the CJT should present strong evidence of competence, to overcome the low prior probability. That is, it is not only the case that there is evidence against competence (posterior probability), but also that we cannot expect the CJT to hold in the absence of any evidence (prior probability).
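The quantitative claims above are easy to check directly. The sketch below (Python, parameters chosen for illustration) computes P(n, p) by summing the binomial terms described in the proof: the first call reproduces the worked case n = 3, p = 0.8 (0.896 > 0.8), the second shows how more voters hurt when p < 1/2, and the loop checks that for p close to 1/2 the gain over 1/2 grows roughly in proportion to √n.

```python
from math import comb, sqrt

def majority_correct(n, p):
    """P(n, p) for odd n: probability that more than half of n independent voters are right."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(majority_correct(3, 0.8))        # 0.896, above the individual 0.8
print(majority_correct(3, 0.4))        # 0.352, below the individual 0.4

for n in (11, 101, 1001):
    gain = majority_correct(n, 0.51) - 0.5
    # The last column is roughly constant, consistent with gain ∝ sqrt(n).
    print(n, round(gain, 4), round(gain / sqrt(n), 5))
```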
https://en.wikipedia.org/wiki/Condorcet%27s_jury_theorem
The majority criterion is a voting system criterion applicable to voting rules over ordinal preferences. It requires that if only one candidate is ranked first by over 50% of voters, that candidate must win.[1]

Some methods that comply with this criterion include any Condorcet method, instant-runoff voting, Bucklin voting, plurality voting, and approval voting.

The mutual majority criterion is a generalized form of the criterion meant to account for when the majority prefers multiple candidates above all others; voting methods which pass majority but fail mutual majority can encourage all but one of the majority's preferred candidates to drop out in order to ensure one of the majority-preferred candidates wins, creating a spoiler effect.[2]

By the majority criterion, a candidate C should win if a majority of voters answers affirmatively to the question "Do you (strictly) prefer C to every other candidate?" The Condorcet criterion gives a stronger and more intuitive notion of majoritarianism (and as such is sometimes referred to as majority rule). According to it, a candidate C should win if for every other candidate Y there is a majority of voters that answers affirmatively to the question "Do you prefer C to Y?" A Condorcet system necessarily satisfies the majority criterion, but not vice versa. A Condorcet winner C only has to defeat every other candidate "one-on-one", in other words, when comparing C to any specific alternative. To be the majority choice of the electorate, a candidate C must be able to defeat every other candidate simultaneously, i.e. voters who are asked to choose between C and "anyone else" must pick C instead of any other candidate. Equivalently, a Condorcet winner can have several different majority coalitions supporting them in each one-on-one matchup, whereas a majority winner must instead have a single (consistent) majority that supports them across all one-on-one matchups.

In systems with absolute rating categories such as score and highest median methods, it is not clear how the majority criterion should be defined. There are three notable definitions for a candidate A:

The first criterion is not satisfied by any common cardinal voting method. Ordinal ballots can only tell us whether A is preferred to B (not by how much), and so if we only know most voters prefer A to B, it is reasonable to say the majority should win. However, with cardinal voting systems, there is more information available, as voters also state the strength of their preferences. Thus in cardinal voting systems a sufficiently motivated minority can sometimes outweigh the voices of a majority, if they would be strongly harmed by a policy or candidate.

Approval voting non-trivially satisfies the ranked majority criterion, because it satisfies IIA. Any candidate receiving more than 50% of the vote will be elected by plurality. Instant-runoff voting satisfies the majority criterion: if a candidate is ranked first by a majority of the electorate, they will win in the first round.

For example, suppose 100 voters cast the following votes: 55 rank A > B > C, 35 rank B > C > A, and 10 rank C > B > A. Under the Borda count, A has 110 Borda points (55 × 2 + 35 × 0 + 10 × 0), B has 135 Borda points (55 × 1 + 35 × 2 + 10 × 1), and C has 55 Borda points (55 × 0 + 35 × 1 + 10 × 2). Candidate A is the first choice of a majority of voters, but candidate B wins the election.
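The Borda example above can be reproduced in a few lines; a minimal sketch in Python, using the ballot counts implied by the point breakdowns:

```python
ballots = {("A", "B", "C"): 55, ("B", "C", "A"): 35, ("C", "B", "A"): 10}

borda = {"A": 0, "B": 0, "C": 0}
first_choices = {"A": 0, "B": 0, "C": 0}
for ranking, count in ballots.items():
    first_choices[ranking[0]] += count
    for points, candidate in enumerate(reversed(ranking)):   # 0, 1, 2 points
        borda[candidate] += points * count

print(first_choices)   # {'A': 55, 'B': 35, 'C': 10}: A is the majority favorite
print(borda)           # {'A': 110, 'B': 135, 'C': 55}: yet Borda elects B
```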
Any Condorcet method will automatically satisfy the majority criterion.

For example, suppose 100 voters cast the following votes under score voting: 80 voters rate candidate A at 10 and candidate B at 9, while 20 voters rate A at 0 and B at 10. Candidate B would win with a total of 80 × 9 + 20 × 10 = 720 + 200 = 920 rating points, versus 800 for candidate A. Because candidate A is rated higher than candidate B by a (substantial) majority of the voters, but B is declared the winner, this voting system fails to satisfy the criterion due to using additional information about the voters' opinions. Conversely, if the bloc of voters who rate A highest know they are in the majority, such as from pre-election polls, they can strategically give a maximal rating to A and a minimal rating to all others, and thereby guarantee the election of their favorite candidate. In this regard, if there exists a majority coalition, the coalition will have the ability to coordinate and elect their favorite candidate.

STAR voting fails majority, but satisfies the majority loser criterion.

It is controversial how to interpret the term "prefer" in the definition of the criterion. If majority support is interpreted in a relative sense, with a majority rating a preferred candidate above any other, the method does not pass, even with only two candidates. If the word "prefer" is interpreted in an absolute sense, as rating the preferred candidate with the highest available rating, then it does.

If "A is preferred" means that the voter gives a better grade to A than to every other candidate, majority judgment can fail catastrophically. Consider the following case when n is large: A is preferred by a majority, but B's median is Good and A's median is only Fair, so B would win. In fact, A can be preferred by up to (but not including) 100% of all voters, an exceptionally severe violation of the criterion. If we define the majority criterion as requiring a voter to uniquely top-rate candidate A, then this system passes the criterion; any candidate who receives the highest grade from a majority of voters receives the highest grade (and so can only be defeated by another candidate who has majority support).
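Returning to the score-voting example above, the failure is easy to reproduce (a minimal Python sketch; the individual ratings are reconstructed from the totals given):

```python
# 80 voters: A = 10, B = 9; 20 voters: A = 0, B = 10.
ballots = [{"A": 10, "B": 9}] * 80 + [{"A": 0, "B": 10}] * 20

totals = {c: sum(b[c] for b in ballots) for c in ("A", "B")}
majority_for_A = sum(b["A"] > b["B"] for b in ballots)

print(totals)            # {'A': 800, 'B': 920}: B wins on total score
print(majority_for_A)    # 80 of 100 voters rate A above B
```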
https://en.wikipedia.org/wiki/Majority_favorite_criterion
The majority loser criterion is a criterion to evaluate single-winner voting systems.[1][2][3][4] The criterion states that if a majority of voters give a candidate no support, i.e. do not list that candidate on their ballot, that candidate must lose (unless no candidate is accepted by a majority of voters).

Either of the Condorcet loser criterion or the mutual majority criterion implies the majority loser criterion. However, the Condorcet criterion does not imply the majority loser criterion, since the minimax method satisfies the Condorcet but not the majority loser criterion. Also, the majority criterion is logically independent from the majority loser criterion, since the plurality rule satisfies the majority but not the majority loser criterion, and the anti-plurality rule satisfies the majority loser but not the majority criterion. There is no positional scoring rule which satisfies both the majority and the majority loser criterion,[5][6] but several non-positional rules, including many Condorcet rules, do satisfy both. Some voting systems, like instant-runoff voting, fail the criterion if extended to handle incomplete ballots.
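The independence of the two criteria noted above can be illustrated with a small sketch (Python, hypothetical profile): under plurality, a candidate ranked last by a majority can still win on first preferences.

```python
# 40 voters rank A>B>C, 35 rank B>C>A, 25 rank C>B>A.
ballots = {("A", "B", "C"): 40, ("B", "C", "A"): 35, ("C", "B", "A"): 25}

first = {}
last_place_for_A = 0
for ranking, count in ballots.items():
    first[ranking[0]] = first.get(ranking[0], 0) + count
    if ranking[-1] == "A":
        last_place_for_A += count

print(first)              # {'A': 40, 'B': 35, 'C': 25}: A wins under plurality
print(last_place_for_A)   # 60 of 100 voters rank A last, so A is a majority loser
```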
https://en.wikipedia.org/wiki/Majority_loser_criterion
Themutual majority criterionis a criterion for evaluatingelectoral systems. It is also known as themajority criterion for solid coalitionsand thegeneralized majority criterion. This criterion requires that whenever amajorityof voters prefer a group of candidates above all others, then the winner must be a candidate from that group.[1]The mutual majority criterion may also be thought of as the single-winner case of Droop-Proportionality for Solid Coalitions. Let L be a subset of candidates. Asolid coalitionin support of L is a group of voters who strictly prefer all members of L to all candidates outside of L. In other words, each member of the solid coalition ranks their least-favorite member of L higher than their favorite member outside L. Note that the members of the solid coalition may rank the members of L differently. The mutual majority criterion says that if there is asolid coalitionof voters in support of L, and this solid coalition consists of more than half of all voters, then the winner of the election must belong to L. This is similar to but stricter than themajority criterion, where the requirement applies only to the case that L is only one single candidate. It is also stricter than themajority loser criterion, which only applies when L consists of all candidates except one.[2] AllSmith-efficientCondorcet methodspass the mutual majority criterion.[3] Methods which pass mutual majority but fail theCondorcet criterionmay nullify the voting power of voters outside the mutual majority whenever they fail to elect the Condorcet winner. Anti-plurality voting,range voting, and theBorda countfail themajority-favorite criterionand hence fail the mutual majority criterion. In addition,minimax, thecontingent vote, Young's method,first past the post, and Black fail, even though they pass the majority-favorite criterion.[4] TheSchulze method,ranked pairs,instant-runoff voting,Nanson's method, andBucklin votingpass this criterion. The mutual majority criterion implies themajority criterionso the Borda count's failure of the latter is also a failure of the mutual majority criterion. The set solely containing candidate A is a set S as described in the definition. Assume four candidates A, B, C, and D with 100 voters and the following preferences: The results would be tabulated as follows: Result: Candidates A, B and C each are strictly preferred by more than the half of the voters (52%) over D, so {A, B, C} is a set S as described in the definition and D is a Condorcet loser. Nevertheless, Minimax declaresDthe winner because its biggest defeat is significantly the smallest compared to the defeats A, B and C caused each other. Suppose thatTennesseeis holding an election on the location of itscapital. The population is concentrated around four major cities.All voters want the capital to be as close to them as possible.The options are: The preferences of each region's voters are: 58% of the voters prefer Nashville, Chattanooga and Knoxville to Memphis. Therefore, the three eastern cities build a setSas described in the definition. But, since the supporters of the three cities split their votes, Memphis wins under plurality voting.
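The Tennessee illustration above can be sketched in code. The vote shares below are the ones commonly used for this example and are assumed here, since the preference table did not survive extraction: 42% of voters are closest to Memphis, 26% to Nashville, 15% to Chattanooga and 17% to Knoxville, with each group ranking the cities by distance from home.

```python
ballots = {
    ("Memphis", "Nashville", "Chattanooga", "Knoxville"): 42,
    ("Nashville", "Chattanooga", "Knoxville", "Memphis"): 26,
    ("Chattanooga", "Knoxville", "Nashville", "Memphis"): 15,
    ("Knoxville", "Chattanooga", "Nashville", "Memphis"): 17,
}

plurality_totals = {}
rank_memphis_last = 0
for ranking, share in ballots.items():
    plurality_totals[ranking[0]] = plurality_totals.get(ranking[0], 0) + share
    if ranking[-1] == "Memphis":
        rank_memphis_last += share

print(max(plurality_totals, key=plurality_totals.get))  # Memphis wins under plurality
print(rank_memphis_last)   # yet 58% rank all three eastern cities above Memphis
```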
https://en.wikipedia.org/wiki/Mutual_majority_criterion
Majoritarian democracyis a form ofdemocracybased upon a principle ofmajority rule.[1]Majoritarian democracy contrasts withconsensus democracy, rule by as many people as possible.[1][2][3][4] Arend Lijphartoffers what is perhaps the dominant definition of majoritarian democracy. He identifies that majoritarian democracy is based on theWestminster model, and majority rule.[5]According to Lijphart, the key features of a majoritarian democracy are: In the majoritarian vision of democracy, voters mandate elected politicians to enact the policies they proposed during their electoral campaign.[6]Electionsare the focal point of political engagement, with limited ability for the people to influencepolicymakingbetween elections.[7] Though common, majoritarian democracy is not universally accepted – majoritarian democracy is criticized as having the inherent danger of becoming a "tyranny of the majority" whereby the majority in society could oppress or exclude minority groups,[1]which can lead to violence and civil war.[2][3]Some argue[who?]that since parliament, statutes and preparatory works are very important in majoritarian democracies,[citation needed]and considering the absence of a tradition to exercisejudicial reviewat the national level,[citation needed]majoritarian democracies are undemocratic.[citation needed] Fascismrejects majoritarian democracy because the latter assumes equality of citizens and fascists claim that fascism is a form ofauthoritarian democracythat represents the views of a dynamic organized minority of a nation rather than the disorganized majority.[8] There are few, if any, purely majoritarian democracies. In many democracies, majoritarianism is modified or limited by one or several mechanisms which attempt to represent minorities. TheUnited Kingdomis the classical example of a majoritarian system.[5]The United Kingdom's Westminster system has been borrowed and adapted in many other democracies. Majoritarian features of theUnited Kingdom's political systeminclude: However, even in the United Kingdom, majoritarianism has been at least somewhat limited by the introduction ofdevolved parliaments.[10] Australiais a generally majoritarian democracy, although some have argued that it typifies a form of 'modified majoritarianism'.[9]This is because while the lower house of theAustralian Parliamentis elected viapreferential voting, the upper house is elected via proportional representation.Proportional representationis a voting system that allows for greater minority representation.[11]Canada is subject to a similar debate.[12] TheUnited Stateshas some elements of majoritarianism - such as first-past-the-post voting in many contexts - however this is complicated by variation among states. In addition, a strict separation of powers and strongfederalismmediates majoritarianism. An example of this complexity can be seen in the role of theElectoral Collegein presidential elections, as a result of which a candidate who loses thepopular votemay still go on to win the presidency.[13]
https://en.wikipedia.org/wiki/Majoritarian_democracy
No independence before majority rule (abbreviated NIBMAR) was a policy adopted by the British government requiring the implementation of majority rule in a colony, rather than rule by the white colonial minority, before the empire granted independence to its colonies. It was sometimes reinterpreted by some commentators as "no independence before majority African rule", though this addition was not government policy.[1]

In particular, the NIBMAR position was advocated with respect to the future status of Rhodesia as an independent state. British prime minister Harold Wilson was pressured into adopting the approach during a conference in London. Wilson was not initially inclined to do so, fearing it would slow the rate at which Rhodesia could be granted independence, but Lester Pearson, the Prime Minister of Canada, formulated a draft resolution committing Wilson to NIBMAR. Wilson defended the policy when it was attacked as disastrous by opposition Conservatives.[2] The accomplishment was short-lived, however, as Wilson continued to extend offers to Ian Smith, the Rhodesian Prime Minister, which Smith ultimately rejected.[3] The UK policy of NIBMAR led Smith's government to declare Rhodesia's independence without British consent.
https://en.wikipedia.org/wiki/No_independence_before_majority_rule
Mob ruleorochlocracyormobocracyis apejorativeterm describing an oppressivemajoritarianform ofgovernmentcontrolled by the common people through theintimidationof authorities. Ochlocracy is distinguished fromdemocracyor similarly legitimate and representative governments by the absence or impairment of a procedurally civil process reflective of the entire polity.[1] Ochlocracy comes fromLatinochlocratia, fromGreekὀχλοκρατία(okhlokratía), fromὄχλος(ókhlos, "mass", "mob", or "common people") andκράτος(krátos, "rule").[2][3]An ochlocrat is one who is an advocate or partisan of ochlocracy. The adjective may be either ochlocratic or ochlocratical. Ochlocracy is synonymous in meaning and usage to mob rule ormobocracy, whichwas coinedin the 18th century from the sense of "mob" meaning the common rabble that arose from the Latin phrasemobile vulgus("the ficklecrowd") in the 1680s during disputes over theUnited Kingdom'sGlorious Revolution. Polybiusappears to have coined the term ochlocracy in his 2nd century BC workHistories(6.4.6).[4]He uses it to name the "pathological" version of popular rule, in opposition to the good version, which he refers to as democracy. There are numerous mentions of the word "ochlos" in theTalmud, in which "ochlos" refers to anything from "mob", "populace", to "armed guard", as well as in the writings ofRashi, a Jewish commentator on the Bible. The word was first recorded in English in 1584, derived from theFrenchochlocratie(1568), which stems from the original Greekokhlokratia, fromokhlos("mob") andkratos("rule", "power", "strength"). Ancient Greek political thinkers[5]regarded ochlocracy as one of the three "bad" forms of government (tyranny,oligarchy, and ochlocracy) as opposed to the three "good" forms of government:monarchy,aristocracy, anddemocracy. They distinguished "good" and "bad" according to whether the government form would act in the interest of the whole community ("good") or in the exclusive interests of a group or individual at the expense of justice ("bad").[citation needed] Polybius' predecessor,Aristotle, distinguished between different forms of democracy, stating that those disregarding therule of lawdevolved into ochlocracy.[6]Aristotle's teacher,Plato, considered democracy itself to be a degraded form of government and the term is absent from his work.[7] The threat of "mob rule" to a democracy is restrained by ensuring that the rule of law protectsminoritiesor individuals against short-termdemagogueryormoral panic.[8]However, considering how laws in a democracy are established or repealed by the majority, the protection of minorities by rule of law is questionable. Some authors, like the Bosnian political theoretician Jasmin Hasanović, connect the emergence of ochlocracy in democratic societies with thedecadence of democracyinneo-liberalWestern societies, in which "the democratic role of the people has been reduced mainly to the electoral process".[1] During the late 17th and the early 18th centuries, English life was very disorderly. Although theDuke of Monmouth's rising of 1685 was the last rebellion, there was scarcely a year in whichLondonor the provincial towns did not see aggrieved people breaking out into riots. InQueen Anne's reign (1702–14) the word "mob", first heard of not long before, came into general use. 
With no police force, there was little public order.[9]Several decades later, the anti-CatholicGordon Riotsswept through London and claimed hundreds of lives; at the time, a proclamation painted on the wall of Newgate prison announced that the inmates had been freed by the authority of "His Majesty, King Mob". TheSalem Witch Trialsincolonial Massachusettsduring the 1690s, in which the unified belief of the townspeople overpowered the logic of the law, also has been cited by one essayist as an example of mob rule.[10] In 1837,Abraham Lincolnwrote aboutlynchingand "the increasing disregard for law which pervades the country – the growing disposition to substitute the wild and furious passions in lieu of the sober judgment of courts, and the worse than savage mobs for the executive ministers of justice."[11] Mob violence played a prominent role in the early history of theLatter Day Saint movement.[12]Examples include theexpulsions from Missouri, theHaun's Mill massacre, thedeath of Joseph Smith, theexpulsion from Nauvoo, the murder ofJoseph Standing, theCane Creek Massacre,[13][14]and theMountain Meadows Massacre. Inan 1857 speech,Brigham Younggave an address demanding military action against "mobocrats." Notes Bibliography
https://en.wikipedia.org/wiki/Ochlocracy
Quadratic voting (QV) is a voting system that encourages voters to express their true relative intensity of preference (utility) between multiple options or elections.[1] By doing so, quadratic voting seeks to mitigate tyranny of the majority, where minority preferences are by default repressed since, under majority rule, majority cooperation is needed to make any change. Quadratic voting prevents this failure mode by allowing voters to vote multiple times on any one option at the cost of not being able to vote as much on other options. This enables minority issues to be addressed where the minority has a sufficiently strong preference relative to the majority (since motivated minorities can vote multiple times), while also disincentivizing extremism, i.e. putting all votes on one issue (since additional votes require more and more sacrifice of influence over other issues).

Quadratic voting works by having voters allocate "credits" (usually distributed equally, although some proposals talk about using real money) to various issues. The number of votes cast is determined by a quadratic cost function: the number of votes an individual casts for a given issue is equal to the square root of the number of credits they allocate (put another way, casting 3 votes requires allocating the square, or quadratic, of the number of votes, i.e. 9 credits).[2] Because the quadratic cost function makes each additional vote more expensive (to go from 2 to 3 votes you need to allocate 5 extra credits, but from 3 to 4 you would need to add 7), voters are incentivized not to over-allocate to a single issue and instead to spread their credits across multiple issues in order to make better use of them. This incentive creates voting outcomes more closely aligned with a voter's true relative expected utility between options. Compared to score voting or cumulative voting, where voters may simply not vote for anyone other than their favorite, QV disincentivizes this behavior by giving voters who more accurately represent their preferences across multiple options more overall votes than those who don't.[3]

The quadratic cost function uniquely enables people to purchase votes in a way that reflects the strength of their preferences proportionally. As a result, the total votes cast on a given issue will correspond to the intensity of preferences among voters, effectively balancing the collective outcome according to both the direction and strength of individual preferences. This occurs because the marginal cost of each additional vote increases linearly with the number of votes cast. If the marginal cost increased less than linearly, someone who values the issue twice as much might buy disproportionately more votes, predisposing the system to favor intense special interests with concentrated preferences. In the limiting case where marginal costs remain constant, this results in a "one-dollar-one-vote" dynamic.
Conversely, if the cost function rises faster than quadratically, it leads voters to limit themselves to a single vote, pushing the system towardmajority rulewhere only the number of voters matters, rather than the intensity of preference.[1] Quadratic voting is based uponmarket principles, where each voter is given abudgetof vote credits that they have the personal decisions and delegation to spend in order to influence the outcome of a range of decisions. If a participant has a strong support for or against a specific decision, additional votes could be allocated to proportionally demonstrate the voter's support. A votepricingrule determines the cost of additional votes, with each vote becoming increasingly more expensive. By increasing votercreditcosts, this demonstrates an individual's support and interests toward the particular decision.[4] By contrast, majority rule based on individual person voting has the potential to lead to focus on only the most popular policies, so smaller policies would not be placed on as much significance. The larger proportion of voters who vote for a policy even with lesser passion compared to the minority proportion of voters who have higher preferences in a less popular topic can lead to a reduction of aggregatewelfare. In addition, the complicating structures of contemporary democracy with institutional self-checking (i.e.,federalism,separation of powers) will continue to expand its policies, so quadratic voting is responsible for correcting any significant changes ofone-person-one-votepolicies.[5] Robustness of a voting system can be defined as how sensitive a voting scheme is to non-ideal behavior from either voters or outside influence. The robustness of QV with respect to various non-idealities has been studied, including collusion among voters, outside attacks on the voting process, and irrationality of the voters. Collusion is possible in most voting schemes to one extent or another, and what is key is the sensitivity of the voting scheme to collusion. It has been shown that QV exhibits similar sensitivity to collusion as one-person-one-vote systems, and is much less sensitive to collusion than the VCG or Groves and Ledyard mechanisms.[6]Proposals have been put forward to make QV more robust with respect to both collusion and outside attacks.[7]The effects of voter irrationality and misconceptions on QV results have been examined critically by QV by a number of authors. QV has been shown to be less sensitive to 'underdog effects' than one-person-one-vote.[6]When the election is not close, QV has also been shown to be efficient in the face of a number of deviations from perfectly rational behavior, including voters believing vote totals are signals in and of themselves, voters using their votes to express themselves personally, and voter belief that their votes are more pivotal than they actually are. Although such irrational behavior can cause inefficiency in closer elections, the efficiency gains through preference expression are often sufficient to make QV net beneficial compared to one-person-one-vote systems.[6]Some distortionary behaviors can occur for QV in small populations due to people stoking issues to get more return for themselves,[8]but this issue has not been shown to be a practical issue for larger populations. 
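The pricing rule described above is simple enough to sketch directly. In the Python snippet below (a minimal illustration; the budget and issue names are invented), casting v votes on one issue costs v² credits, and a 100-credit budget buys either ten votes on a single issue or five votes on each of four issues, the same arithmetic that appears in the Colorado House experiment described later.

```python
from math import isqrt

BUDGET = 100

def cost(votes: int) -> int:
    return votes ** 2                 # quadratic cost of casting `votes` votes on one issue

def max_votes(credits: int) -> int:
    return isqrt(credits)             # affordable votes = floor(sqrt(credits))

spread = {"issue A": 5, "issue B": 5, "issue C": 5, "issue D": 5}
assert sum(cost(v) for v in spread.values()) == BUDGET    # 4 × 25 credits

print(max_votes(BUDGET))                       # 10 votes if the whole budget goes to one issue
print(cost(3) - cost(2), cost(4) - cost(3))    # marginal costs 5 and 7, as noted above
```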
Due to QV allowing people to express preferences continuously, it has been proposed that QV may be more sensitive than 1p1v to social movements that instill misconceptions or otherwise alter voters' behavior away from rationality in a coordinated manner.[9] The quadratic nature of the voting suggests that a voter can use their votes more efficiently by spreading them across many issues. For example, a voter with a budget of 16 vote credits can apply 1 vote credit to each of the 16 issues. However, if the individual has a stronger passion or sentiment on an issue, they could allocate 4 votes, at the cost of 16 credits, to the singular issue, using up their entire budget.[10] One of the earliest known models idealizing quadratic voting was proposed by 3 scientists:William Vickrey,Edward H. Clarke, and Theodore Groves. Together they theorized theVickrey–Clarke–Groves mechanism(VCG mechanism). The purpose of this mechanism was to find the balance between being a transparent, easy-to-understand function that the market could understand in addition to being able to calculate and charge the specific price of any resource. This balance could then theoretically act as motivation for users to not only honestly declare their utilities, but also charge them the correct price.[11]This theory was easily able to be applied into a voting system that could allow people to cast votes while presenting the intensity of their preference. However, much like the majority of the other voting systems proposed during this time, it proved to be too difficult to understand,[12]vulnerable to cheating, weak equilibria, and other impractical deficiencies.[13]As this concept continued developing, E. Glen Weyl, a Microsoft researcher, applied the concept to democratic politics and corporate governance and coining the phrase Quadratic Voting.[1][additional citation(s) needed] The main motivation of Weyl to create a quadratic voting model was to combat against the "tyranny of the majority" outcome that is a direct result of the majority-rule model. He believed the two main problems of the majority-rule model are that it doesn't always advance the public good and it weakens democracy.[14]The stable majority has always been systematically benefited at the direct expense of minorities.[15]On the other hand, even hypothetically if the majority wasn't to be concentrated in a single group, tyranny of the majority would still exist because a social group will still be exploited. Therefore, Weyl concluded that this majority rule system will always cause social harm.[14]He also believed another reason is that the majority rule system weakens democracy. Historically, to discourage political participation of minorities, the majority doesn't hesitate to set legal or physical barriers. As a result, this success of a temporary election is causing democratic institutions to weaken around the world.[14] To combat this, Weyl developed the quadratic voting model and its application to democratic politics. The model theoretically optimizes social welfare by allowing everyone the chance to vote equally on a proposal as well as giving the minority the opportunity to buy more votes to level out the playing field.[14] Quadratic voting in corporate governance is aimed to optimize corporate values through the use of a more fair voting system. 
Common issues with shareholder voting includes blocking out policies that may benefit the corporate value but don't benefit their shareholder value or having the majority commonly outvote the minority.[16]This poor corporate governance could easily contribute to detrimental financial crises.[17] With quadratic voting, not only are shareholders stripped of their voting rights, but instead corporate employees can buy as many votes as they want and participate in electoral process. Using the quadratic voting model, one vote would be $1, while two votes would be $4, and so on. The collected money gets transferred to the treasury where it gets distributed to the shareholders. To combat voter fraud, the votes are confidential and collusion is illegal. With this, not only is the majority shareholders' power against the minority stripped, but with the participation of everyone, it ensures that the policies are made for the corporate's best interest instead of the shareholders' best interest.[16] The most common objection to QV, are that if it uses real currency (as opposed to a uniformly distributed artificial currency) it efficiently selects the outcome for which the population has the highest willingness to pay. Willingness to pay, however, is not directly proportional to the utility gained by the voting population. For example, if those who are wealthy can afford to buy more votes relative to the rest of the population, this would distort voting outcomes to favor the wealthy in situations where voting is polarized on the basis of wealth.[4][18][Note 1]While the wealthy having undue influence on voting processes is not a unique feature of QV as a voting process, the direct involvement of money in some proposals of the QV process has caused many to have concerns about this method.[citation needed] Several alternative proposals have been put forward to counter this concern, with the most popular being QV with an artificial currency. Usually, the artificial currency is distributed on a uniform basis, thus giving every individual an equal say, but allowing individuals to more flexibly align their voting behavior with their preferences. While many have objected to QV with real currency, there has been fairly broad-based approval of QV with an artificial currency.[18][19][6] Other proposed methods for ameliorating objections to the use of money in real currency QV are: Many areas have been proposed for quadratic voting, including corporate governance in the private sector,[20]allocating budgets, cost-benefit analyses for public goods,[21]more accurate polling and sentiment data,[22]and elections and other democratic decisions.[5] Quadratic voting was conducted in an experiment by the Democraticcaucusof theColorado House of Representativesin April 2019. Lawmakers used it to decide on their legislative priorities for the coming two years, selecting among 107 possible bills. Each member was given 100 virtual tokens that would allow them to put either 10 votes on one bill (as 100 virtual tokens represented 10 votes for one bill) or 5 votes each (25 virtual tokens) on 4 different bills. In the end, the winner was Senate Bill 85, theEqual Pay for Equal WorkAct, with a total of 60 votes.[23]From this demonstration of quadratic voting, no representative spent all 100 tokens on a single bill, and there was delineation between the discussion topics that were the favorites andalso-rans. 
The computer interface and systematic structure was contributed by Democracy Earth, which is an open-source liquid democracy platform to foster governmental transparency.[24] The first use of quadratic voting in Taiwan was hosted byRadicalxChangeinTaipei, where quadratic voting was used to vote in the Taiwanese presidentialHackathon.[25]The Hackathon projects revolved around 'Cooperative Plurality' – the concept of discovering the richness of diversity that is repressed through human cooperation.[26]Judges were given 99 points with 1 vote costing 1 point and 2 votes costing 4 points and so on. This stopped the follow-up effect and group influenced decision that happened with judges in previous years.[25]This event was considered a successful application of quadratic voting. In Leipzig, Germany,VoltGermany – a pan-European party – held its second party congress and used quadratic voting to determine the most valued topics in their party manifesto among its members.[27]Partner with Deora, Leapdao, a technology start-up company, launched its quadratic voting software consisting of a "burner wallet". Since there was limited time and it was a closed environment, the "burner wallet" with a QR code acted as a private key that allowed congress to access their pre-funded wallet and a list of all the proposals on the voting platform.[28]The event was considered a success because it successfully generated a priority list that ranked the importance of the topics. Quadratic voting also allowed researchers to analyze voter distributions. For example, the topic of Education showed especially high or emotional value to voters with the majority deciding to cast 4 or 9 voice-credits (2 or 3 votes) and a minority casting 25-49 voice-credits (5-7 votes).[28]On the other hand, the topic of Renewed Economy showed a more typical distribution with a majority of voters either not vote or max out at 9 voice-credits (3 votes). This indicates that there are less emotionally invested voters on this proposal as many of them didn't even spend tokens to vote on it.[28] In Brazil, the city council ofGramadohas used quadratic voting to define priorities for the year and to reach consensus on tax amendments.[29] Vitalik Buterinin collaboration with Zoë Hitzig andE. Glen Weylproposed quadratic funding, a way to allocate the distribution of funds (for example, from a government's budget, a philanthropic source, or collected directly from participants) based on quadratic voting, noting that such a mechanism allows for optimal production of public goods without needing to be determined by a centralized legislature. Weyl argues that this fills a gap with traditional free markets – which encourage the production of goods and services for the benefit of individuals, but fail to create outcomes desirable to society as a whole – while still benefiting from the flexibility and diversity free markets have compared to many government programs.[30][31][32] The Gitcoin Grants initiative is an early adopter of quadratic funding. However, this implementation differs in several ways from the original QF scheme.[33]Led by Kevin Owocki, Scott Moore, and Vivek Singh, the initiative has distributed more than $60,000,000 to over 3,000 open-source software development projects as of 2022.[34] Global hackathon organizerDoraHacks' developer incentive platform DoraHacks has leveraged quadratic funding to help many open Web3 ecosystems likeSolana,Filecoinand BSC distribute more than $10,000,000 to 1,500 projects. 
Schemes have been designed by the DoraHacks team to enhance the integrity of quadratic funding. DoraHacks and Gitcoin are considered the largest quadratic funding platforms for funding public goods and open source projects.[35]
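As a rough sketch of how quadratic funding allocates money, in the scheme as usually described (the Buterin–Hitzig–Weyl formulation; this formula is not quoted in the text above and the contribution lists are invented), a project's total allocation is proportional to the square of the sum of the square roots of its individual contributions, so many small donors attract a much larger implied match than a single large donor giving the same total.

```python
from math import sqrt

def quadratic_funding(contributions):
    """Ideal quadratic-funding allocation: (sum of square roots of contributions) squared."""
    return sum(sqrt(c) for c in contributions) ** 2

broad_support = [1.0] * 100          # 100 donors giving 1 unit each
single_whale = [100.0]               # one donor giving 100 units

print(quadratic_funding(broad_support))   # 10000.0: a large implied matching subsidy
print(quadratic_funding(single_whale))    # 100.0: no match beyond the donation itself
```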
https://en.wikipedia.org/wiki/Quadratic_voting
This article discusses the methods and results of comparing different electoral systems. There are two broad ways to compare voting systems: by testing them against logical criteria such as those discussed in the preceding sections, and by measuring how well they perform in simulated or empirical elections.

Voting methods can be evaluated by measuring their accuracy under random simulated elections aiming to be faithful to the properties of elections in real life. The first such evaluation was conducted by Chamberlin and Cohen in 1978, who measured the frequency with which certain non-Condorcet systems elected Condorcet winners.[1]

The Marquis de Condorcet viewed elections as analogous to jury votes where each member expresses an independent judgement on the quality of candidates. Candidates differ in terms of their objective merit, but voters have imperfect information about the relative merits of the candidates. Such jury models are sometimes known as valence models. Condorcet and his contemporary Laplace demonstrated that, in such a model, voting theory could be reduced to probability by finding the expected quality of each candidate.[2]

The jury model implies several natural concepts of accuracy for voting systems under different models. However, Condorcet's model is based on the extremely strong assumption of independent errors, i.e. voters will not be systematically biased in favor of one group of candidates or another. This is usually unrealistic: voters tend to communicate with each other, form parties or political ideologies, and engage in other behaviors that can result in correlated errors.

Duncan Black proposed a one-dimensional spatial model of voting in 1948, viewing elections as ideologically driven.[4] His ideas were later expanded by Anthony Downs.[5] Voters' opinions are regarded as positions in a space of one or more dimensions; candidates have positions in the same space; and voters choose candidates in order of proximity (measured under Euclidean distance or some other metric). Spatial models imply a different notion of merit for voting systems: the more acceptable the winning candidate may be as a location parameter for the voter distribution, the better the system. A political spectrum is a one-dimensional spatial model.

Neutral voting models try to minimize the number of parameters, as an example of the nothing-up-my-sleeve principle. The most common such model is the impartial anonymous culture model (or Dirichlet model). These models assume voters assign each candidate a utility completely at random (from a uniform distribution).

Tideman and Plassmann conducted a study which showed that a two-dimensional spatial model gave a reasonable fit to 3-candidate reductions of a large set of electoral rankings; jury models, neutral models, and one-dimensional spatial models were all inadequate.[6] They looked at Condorcet cycles in voter preferences (an example of which is A being preferred to B by a majority of voters, B to C, and C to A) and found that the number of them was consistent with small-sample effects, concluding that "voting cycles will occur very rarely, if at all, in elections with many voters."
The relevance of sample size had been studied previously by Gordon Tullock, who argued graphically that although finite electorates will be prone to cycles, the area in which candidates may give rise to cycling shrinks as the number of voters increases.[7]

A utilitarian model views voters as ranking candidates in order of utility. The rightful winner, under this model, is the candidate who maximizes overall social utility. A utilitarian model differs from a spatial model in several important ways; it follows from the last of these properties that no voting system which gives equal influence to all voters is likely to achieve maximum social utility. Extreme cases of conflict between the claims of utilitarianism and democracy are referred to as the 'tyranny of the majority'. See Laslier's, Merlin's, and Nurmi's comments in Laslier's write-up.[8] James Mill seems to have been the first to claim the existence of an a priori connection between democracy and utilitarianism – see the Stanford Encyclopedia article.[9]

Suppose that the i-th candidate in an election has merit x_i (we may assume that x_i ~ N(0, σ²)[10]), and that voter j's level of approval for candidate i may be written as x_i + ε_ij (we will assume that the ε_ij are iid. N(0, τ²)). We assume that a voter ranks candidates in decreasing order of approval. We may interpret ε_ij as the error in voter j's valuation of candidate i, and regard a voting method as having the task of finding the candidate of greatest merit. Each voter will rank the better of two candidates higher than the less good with a determinate probability p (which under the normal model outlined here is equal to 1/2 + (1/π) tan⁻¹(σ/τ), as can be confirmed from a standard formula for Gaussian integrals over a quadrant). Condorcet's jury theorem shows that so long as p > 1/2, the majority vote of a jury will be a better guide to the relative merits of two candidates than is the opinion of any single member.

Peyton Young showed that three further properties apply to votes between arbitrary numbers of candidates, suggesting that Condorcet was aware of the first and third of them.[11]

Robert F. Bordley constructed a 'utilitarian' model which is a slight variant of Condorcet's jury model.[12] He viewed the task of a voting method as that of finding the candidate who has the greatest total approval from the electorate, i.e. the highest sum of individual voters' levels of approval. This model makes sense even with σ² = 0, in which case p takes the value 1/2 + (1/π) tan⁻¹(1/(n − 1)), where n is the number of voters. He performed an evaluation under this model, finding as expected that the Borda count was most accurate.

A simulated election can be constructed from a distribution of voters in a suitable space. The illustration shows voters satisfying a bivariate Gaussian distribution centred on O. There are 3 randomly generated candidates, A, B and C. The space is divided into 6 segments by 3 lines, with the voters in each segment having the same candidate preferences. The proportion of voters ordering the candidates in any way is given by the integral of the voter distribution over the associated segment. The proportions corresponding to the 6 possible orderings of candidates determine the results yielded by different voting systems. Those which elect the best candidate, i.e.
the candidate closest to O (who in this case is A), are considered to have given a correct result, and those which elect someone else have exhibited an error. By looking at results for large numbers of randomly generated candidates the empirical properties of voting systems can be measured. The evaluation protocol outlined here is modelled on the one described by Tideman and Plassmann.[6] Evaluations of this type are commonest for single-winner electoral systems. Ranked voting systems fit most naturally into the framework, but other types of ballot (such as FPTP and approval voting) can be accommodated with lesser or greater effort. The evaluation protocol can be varied in a number of ways. One of the main uses of evaluations is to compare the accuracy of voting systems when voters vote sincerely. If an infinite number of voters satisfy a Gaussian distribution, then the rightful winner of an election can be taken to be the candidate closest to the mean/median, and the accuracy of a method can be identified with the proportion of elections in which the rightful winner is elected. The median voter theorem guarantees that all Condorcet systems will give 100% accuracy (and the same applies to Coombs' method[14]). Evaluations published in research papers use multidimensional Gaussians, making the calculation numerically difficult.[1][15][16][17] The number of voters is kept finite and the number of candidates is necessarily small. The computation is much more straightforward in a single dimension, which allows an infinite number of voters and an arbitrary number m of candidates. Results for this simple case are shown in the first table, which is directly comparable with Table 5 (1000 voters, medium dispersion) of the cited paper by Chamberlin and Cohen. The candidates were sampled randomly from the voter distribution and a single Condorcet method (Minimax) was included in the trials for confirmation. The relatively poor performance of the Alternative vote (IRV) is explained by the well-known and common source of error illustrated by the diagram, in which the election satisfies a univariate spatial model and the rightful winner B will be eliminated in the first round. A similar problem exists in all dimensions. An alternative measure of accuracy is the average distance of voters from the winner (in which smaller means better). This is unlikely to change the ranking of voting methods, but is preferred by people who interpret distance as disutility. The second table shows the average distance (in standard deviations) minus √(2/π) (which is the average distance of a variate from the centre of a standard Gaussian distribution) for 10 candidates under the same model. James Green-Armytage et al. published a study in which they assessed the vulnerability of several voting systems to manipulation by voters.[18] They say little about how they adapted their evaluation for this purpose, mentioning simply that it "requires creative programming". An earlier paper by the first author gives a little more detail.[19] The number of candidates in their simulated elections was limited to 3. This removes the distinction between certain systems; for instance Black's method and the Dasgupta-Maskin method are equivalent on 3 candidates. The conclusions from the study are hard to summarise, but the Borda count performed badly; Minimax was somewhat vulnerable; and IRV was highly resistant.
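To make the sincere-voting evaluation described above concrete, here is a minimal Monte Carlo sketch under simplified assumptions: a one-dimensional Gaussian electorate of finite size (unlike the infinite electorate used for the tables in the text), candidates sampled from the voter distribution, and the rightful winner defined as the candidate closest to the voter median. The accuracy figures it prints are illustrative only; the parameters and tie-breaking rules are arbitrary choices, not those of any cited paper.

```python
# Rough Monte Carlo sketch of a sincere-voting evaluation in one dimension.
# Accuracy = share of trials in which a method elects the candidate closest
# to the voter median ("rightful winner" under the spatial model).
import random
import statistics

def rankings_from_positions(voters, cands):
    # Each voter ranks candidates by increasing distance (sincere voting).
    return [sorted(range(len(cands)), key=lambda c: abs(v - cands[c])) for v in voters]

def plurality(ranks, m):
    tally = [0] * m
    for r in ranks:
        tally[r[0]] += 1
    return max(range(m), key=tally.__getitem__)      # ties broken arbitrarily

def borda(ranks, m):
    score = [0] * m
    for r in ranks:
        for pos, c in enumerate(r):
            score[c] += m - 1 - pos
    return max(range(m), key=score.__getitem__)

def irv(ranks, m):
    alive = set(range(m))
    while len(alive) > 1:
        tally = {c: 0 for c in alive}
        for r in ranks:
            tally[next(c for c in r if c in alive)] += 1
        alive.remove(min(alive, key=tally.__getitem__))  # eliminate weakest
    return alive.pop()

def trial(n_voters=1001, m=5):
    voters = [random.gauss(0, 1) for _ in range(n_voters)]
    cands = [random.gauss(0, 1) for _ in range(m)]
    ranks = rankings_from_positions(voters, cands)
    best = min(range(m), key=lambda c: abs(cands[c] - statistics.median(voters)))
    return {name: fn(ranks, m) == best
            for name, fn in (("plurality", plurality), ("Borda", borda), ("IRV", irv))}

if __name__ == "__main__":
    trials, results = 500, {"plurality": 0, "Borda": 0, "IRV": 0}
    for _ in range(trials):
        for name, correct in trial().items():
            results[name] += correct
    print({name: round(hits / trials, 2) for name, hits in results.items()})
```

A Condorcet method is omitted here because, with sincere voters in one dimension, the median voter theorem implies it would elect the rightful winner in every trial.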
Green-Armytage et al. showed that limiting any method to elections with no Condorcet winner (choosing the Condorcet winner when there was one) would never increase its susceptibility to tactical voting. They reported that the 'Condorcet-Hare' system, which uses IRV as a tie-break for elections not resolved by the Condorcet criterion, was as resistant to tactical voting as IRV on its own and more accurate. Condorcet-Hare is equivalent to Copeland's method with an IRV tie-break in elections with 3 candidates. Some systems, and the Borda count in particular, are vulnerable when the distribution of candidates is displaced relative to the distribution of voters. The attached table shows the accuracy of the Borda count (as a percentage) when an infinite population of voters satisfies a univariate Gaussian distribution and m candidates are drawn from a similar distribution offset by x standard deviations. Red colouring indicates figures which are worse than random. Recall that all Condorcet methods give 100% accuracy for this problem. (And notice that the reduction in accuracy as x increases is not seen when there are only 3 candidates.) Sensitivity to the distribution of candidates can be thought of as a matter either of accuracy or of resistance to manipulation. If one expects that in the course of things candidates will naturally come from the same distribution as voters, then any displacement will be seen as attempted subversion; but if one thinks that factors determining the viability of candidacy (such as financial backing) may be correlated with ideological position, then one will view it more in terms of accuracy. Published evaluations take different views of the candidate distribution. Some simply assume that candidates are drawn from the same distribution as voters.[16][18] Several older papers assume equal means but allow the candidate distribution to be more or less tight than the voter distribution.[20][1] A paper by Tideman and Plassmann approximates the relationship between candidate and voter distributions based on empirical measurements.[15] This is less realistic than it may appear, since it makes no allowance for the candidate distribution to adjust to exploit any weakness in the voting system. A paper by James Green-Armytage looks at the candidate distribution as a separate issue, viewing it as a form of manipulation and measuring the effects of strategic entry and exit. Unsurprisingly he finds the Borda count to be particularly vulnerable.[19] The task of a voting system under a spatial model is to identify the candidate whose position most accurately represents the distribution of voter opinions. This amounts to choosing a location parameter for the distribution from the set of alternatives offered by the candidates. Location parameters may be based on the mean, the median, or the mode; but since ranked preference ballots provide only ordinal information, the median is the only acceptable statistic. This can be seen from the diagram, which illustrates two simulated elections with the same candidates but different voter distributions. In both cases the mid-point between the candidates is the 51st percentile of the voter distribution; hence 51% of voters prefer A and 49% prefer B. If we consider a voting method to be correct if it elects the candidate closest to the median of the voter population, then since the median is necessarily slightly to the left of the 51% line, a voting method will be considered to be correct if it elects A in each case.
The mean of the teal distribution is also slightly to the left of the 51% line, but the mean of the orange distribution is slightly to the right. Hence if we consider a voting method to be correct if it elects the candidate closest to the mean of the voter population, then a method will not be able to obtain full marks unless it produces different winners from the same ballots in the two elections. Clearly this will impute spurious errors to voting methods. The same problem will arise for any cardinal measure of location; only the median gives consistent results. The median is not defined for multivariate distributions, but the univariate median has a property which generalizes conveniently. The median of a distribution is the position whose average distance from all points within the distribution is smallest. This definition generalizes to the geometric median in multiple dimensions. The distance is often defined as a voter's disutility function. If we have a set of candidates and a population of voters, then it is not necessary to solve the computationally difficult problem of finding the geometric median of the voters and then identify the candidate closest to it; instead we can identify the candidate whose average distance from the voters is minimized. This is the metric which has been generally deployed since Merrill onwards;[20] see also Green-Armytage and Darlington.[19][16] The candidate closest to the geometric median of the voter distribution may be termed the 'spatial winner'. Data from real elections can be analysed to compare the effects of different systems, either by comparing between countries or by applying alternative electoral systems to the real election data. The electoral outcomes can be compared through democracy indices, measures of political fragmentation, voter turnout,[21][22] political efficacy and various economic and judicial indicators. The practical criteria to assess real elections include the share of wasted votes, the complexity of vote counting, proportionality of the representation elected based on parties' shares of votes, and barriers to entry for new political movements.[23] Additional opportunities for comparison of real elections arise through electoral reforms. A Canadian example of such an opportunity is seen in the City of Edmonton, which went from first-past-the-post voting in the 1917 Alberta general election to five-member plurality block voting in the 1921 Alberta general election, to five-member single transferable voting in the 1926 Alberta general election, then to FPTP again in the 1959 Alberta general election. One party swept all the Edmonton seats in 1917, 1921 and 1959. Under STV in 1926, two Conservatives, one Liberal, one Labour and one United Farmers MLA were elected. Traditionally the merits of different electoral systems have been argued by reference to logical criteria. These have the form of rules of inference for electoral decisions, licensing the deduction, for instance, that "if E and E' are elections such that R(E, E'), and if A is the rightful winner of E, then A is the rightful winner of E'". The absolute criteria state that, if the set of ballots is a certain way, a certain candidate must or must not win. These are criteria that state that, if a certain candidate wins in one circumstance, the same candidate must (or must not) win in a related circumstance. These are criteria which relate to the process of counting votes and determining a winner. These are criteria that relate to a voter's incentive to use certain forms of strategy.
They could also be considered as relative result criteria; however, unlike the criteria in that section, these criteria are directly relevant to voters; the fact that a method passes these criteria can simplify the process of figuring out one's optimal strategic vote. Ballots are broadly distinguishable into two categories, cardinal and ordinal, where cardinal ballots request individual measures of support for each candidate and ordinal ballots request relative measures of support. A few methods do not fall neatly into one category, such as STAR, which asks the voter to give independent ratings for each candidate, but uses both the absolute and relative ratings to determine the winner. Comparing two methods based on ballot type alone is mostly a matter of voter experience preference, unless the ballot type is connected back to one of the other mathematical criteria listed here. Criterion A is "stronger" than B if satisfying A implies satisfying B. For instance, the Condorcet criterion is stronger than the majority criterion, because all majority winners are Condorcet winners. Thus, any voting method that satisfies the Condorcet criterion must satisfy the majority criterion. The following table shows which of the above criteria are met by several single-winner methods. Not every criterion is listed. The concerns raised above are used by social choice theorists to devise systems that are accurate and resistant to manipulation. However, there are also practical reasons why one system may be more socially acceptable than another, which fall under the fields of public choice and political science.[8][16] Important practical considerations include the following. Other considerations include barriers to entry to political competition[28] and the likelihood of gridlocked government.[29] Multi-winner electoral systems at their best seek to produce assemblies representative in a broader sense than that of making the same decisions as would be made by single-winner votes. They can also be a route to one-party sweeps of a city's seats if a non-proportional system, such as plurality block voting or ticket voting, is used. Evaluating the performance of multi-winner voting methods requires different metrics than are used for single-winner systems. The following have been proposed. The following table shows which of the above criteria are met by several multiple winner methods.
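As a toy illustration of how such logical criteria are checked, the sketch below uses an invented three-candidate profile on which the plurality winner differs from the Condorcet winner; a single profile of this kind is enough to show that plurality fails the Condorcet criterion, whereas showing that a method satisfies a criterion requires an argument covering all possible profiles.

```python
# Toy illustration of checking a method against a logical criterion: on this
# hypothetical profile the Condorcet winner (B) differs from the plurality
# winner (A), exhibiting a failure of the Condorcet criterion by plurality.

# (count, ranking) pairs: 40 voters rank A>B>C, 35 rank C>B>A, 25 rank B>C>A.
profile = [(40, "ABC"), (35, "CBA"), (25, "BCA")]

def pairwise_margin(x, y):
    """Net number of voters preferring x to y."""
    return sum(n if r.index(x) < r.index(y) else -n for n, r in profile)

def condorcet_winner():
    for x in "ABC":
        if all(pairwise_margin(x, y) > 0 for y in "ABC" if y != x):
            return x
    return None

def plurality_winner():
    tally = {c: 0 for c in "ABC"}
    for n, r in profile:
        tally[r[0]] += n
    return max(tally, key=tally.get)

print("Condorcet winner:", condorcet_winner())   # B beats A 60-40 and C 65-35
print("Plurality winner:", plurality_winner())   # A, with 40 first preferences
```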
https://en.wikipedia.org/wiki/Voting_system_criterion
In Internet culture, the 1% rule is a general rule of thumb pertaining to participation in an Internet community, stating that only 1% of the users of a website actively create new content, while the other 99% of the participants only lurk. Variants include the 1–9–90 rule (sometimes 90–9–1 principle or the 89:10:1 ratio),[1] which states that in a collaborative website such as a wiki, 90% of the participants of a community only consume content, 9% of the participants change or update content, and 1% of the participants add content. Similar rules are known in information science; for instance, the 80/20 rule known as the Pareto principle states that 20 percent of a group will produce 80 percent of the activity, regardless of how the activity is defined. According to the 1% rule, about 1% of Internet users create content, while 99% are just consumers of that content. For example, for every person who posts on a forum, generally about 99 other people view that forum but do not post. The term was coined by authors and bloggers Ben McConnell and Jackie Huba,[2] although there were earlier references to this concept[3] that did not use the name. The terms lurk and lurking, in reference to online activity, are used to refer to online observation without engaging others in the Internet community.[4] A 2007 study of radical jihadist Internet forums found 87% of users had never posted on the forums, 13% had posted at least once, 5% had posted 50 or more times, and only 1% had posted 500 or more times.[5] A 2014 peer-reviewed paper entitled "The 1% Rule in Four Digital Health Social Networks: An Observational Study" empirically examined the 1% rule in health-oriented online forums. The paper concluded that the 1% rule was consistent across the four support groups, with a handful of "Superusers" generating the vast majority of content.[6] A study later that year, from a separate group of researchers, replicated the 2014 van Mierlo study in an online forum for depression.[7] Results indicated that the frequency distribution of contributions followed Zipf's law, which is a specific type of power law. The "90–9–1" version of this rule states that for websites where users can both create and edit content, 1% of people create content, 9% edit or modify that content, and 90% view the content without contributing. However, the actual percentage is likely to vary depending upon the subject. For example, if a forum requires content submissions as a condition of entry, the percentage of people who participate will probably be significantly higher than 1%, but the content producers will still be a minority of users. This is validated in a study conducted by Michael Wu, who uses economics techniques to analyze the participation inequality across hundreds of communities segmented by industry, audience type, and community focus.[8] The 1% rule is often misunderstood to apply to the Internet in general, but it applies more specifically to any given Internet community. It is for this reason that one can see evidence for the 1% principle on many websites, but aggregated together one can see a different distribution. This latter distribution is still unknown and likely to shift, but various researchers and pundits have speculated on how to characterize the sum total of participation.
Research in late 2012 suggested that only 23% of the population (rather than 90%) could properly be classified as lurkers, while 17% of the population could be classified as intense contributors of content.[9] Several years prior, results were reported on a sample of students from Chicago where 60% of the sample created content in some form.[10] A similar concept was introduced by Will Hill of AT&T Laboratories[11] and later cited by Jakob Nielsen; this was the earliest known reference to the term "participation inequality" in an online context.[12] The term regained public attention in 2006 when it was used in a strictly quantitative context within a blog entry on the topic of marketing.[2]
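As an illustration of how a Zipf-like (power-law) pattern of activity produces the kind of concentration the 1% rule describes, the following sketch draws per-user post counts from a heavy-tailed distribution. The parameters are arbitrary, and the shift by one to create non-posting users is a modeling convenience; nothing here is taken from the cited studies.

```python
# Illustrative sketch (not data from the cited studies): draw per-user post
# counts from a heavy-tailed Zipf distribution and report what share of the
# content comes from the most active 1% of users.
import numpy as np

rng = np.random.default_rng(0)
users = 100_000
posts = rng.zipf(a=2.0, size=users) - 1       # shift so many users post nothing
posts = np.sort(posts)[::-1]                  # most active users first
total = posts.sum()

top1_share = posts[: users // 100].sum() / total   # content share of top 1%
lurker_share = np.mean(posts == 0)                 # fraction who never post

print(f"top 1% of users produce {top1_share:.0%} of posts")
print(f"{lurker_share:.0%} of users never post")
```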
https://en.wikipedia.org/wiki/1%25_rule
The Bradley effect, less commonly known as the Wilder effect,[1][2] is a theory concerning observed discrepancies between voter opinion polls and election outcomes in some United States government elections where a white and a non-white candidate run against each other.[3][4][5] The theory proposes that some white voters who intend to vote for the white candidate would nonetheless tell pollsters that they are undecided or likely to vote for the non-white candidate. It was named after Los Angeles mayor Tom Bradley, an African-American who lost the 1982 California gubernatorial election to California attorney general George Deukmejian, an Armenian-American,[6] despite Bradley's being ahead in voter polls going into the elections.[7] The Bradley effect posits that the inaccurate polls were skewed by the phenomenon of social desirability bias.[8][9] Specifically, some voters give inaccurate polling responses for fear that, by stating their true preference, they will open themselves to criticism of racial motivation, even when neither candidate is considered "white" under the traditional rubric of American racism, as was the case in the Bradley/Deukmejian election.[10] Members of the public may feel under pressure to provide an answer that is deemed to be more publicly acceptable, or politically correct. The reluctance to give accurate polling answers has sometimes extended to post-election exit polls as well. The race of the pollster conducting the interview may factor into voters' answers. Some analysts have dismissed the validity of the Bradley effect.[11] Others have argued that it may have existed in past elections but not in more recent ones, such as when the African-American Barack Obama was elected President of the United States in 2008 and 2012, both times against white opponents.[12] Others believe that it is a persistent phenomenon.[13] Similar effects have been posited in other contexts, for example the spiral of silence and the shy Tory factor.[12] Bradley himself was unpopular at the state level for a variety of reasons, and went on to lose the 1986 California gubernatorial election to Deukmejian by more than 22 percentage points before settling a longstanding federal corruption probe into his alleged involvement with several multi-million-dollar illegal schemes.[14] In 1991, Bradley would infamously describe the Justice Department's decision not to indict him, after he repaid a portion of illegally transferred funds at issue in the probe, as a "Christmas gift."[14] In 1982, Tom Bradley, the long-time mayor of Los Angeles, ran as the Democratic Party's candidate for Governor of California against Republican candidate George Deukmejian, who was of Armenian descent. Most polls in the final days before the election showed Bradley with a significant lead.[15] Based on exit polls, a number of media outlets projected Bradley as the winner and early editions of the next day's San Francisco Chronicle featured a headline proclaiming "Bradley Win Projected." However, despite winning a majority of the votes cast on election day, Bradley narrowly lost the overall race once absentee ballots were included.[11] Post-election research indicated that a smaller percentage of white voters actually voted for Bradley than polls had predicted, and that previously undecided voters had voted for Deukmejian in statistically anomalous numbers.[4][16] A month prior to the election, Bill Roberts, Deukmejian's campaign manager, predicted that white voters would break for his candidate.
He told reporters that he expected Deukmejian to receive approximately 5 percent more votes than polling numbers indicated because white voters were giving inaccurate polling responses to conceal the appearance of racial prejudice. Deukmejian disavowed Roberts's comments, and Roberts resigned his post as campaign manager.[17] Some news outlets and columnists have attributed the theory's origin to Charles Henry, a professor ofAfrican-American Studiesat theUniversity of California, Berkeley.[18][19][20]Henry researched the election in its aftermath, and in a 1983 study reached the controversial conclusion that race was the most likely factor in Bradley's defeat. One critic of the Bradley effect theory charged thatMervin Fieldof The Field Poll had already offered the theory as explanation for his poll's errors, suggesting it (without providing supporting data for the claim) on the day after the election.[11]Ken Khachigian, a senior strategist and day-to-day tactician in Deukmejian's 1982 campaign, has noted that Field's final pre-election poll was badly timed, since it was taken over the weekend, and most late polls failed to register a surge in support for Deukmejian in the campaign's final two weeks.[21]In addition, the exit polling failed to consider absentee balloting in an election which saw an "unprecedented wave of absentee voters" organized on Deukmejian's behalf. In short, Khachigian argues, the "Bradley effect" was simply an attempt to come up with an excuse for what was really the result of flawed opinion polling practices.[22] Other elections which have been cited as possible demonstrations of the Bradley effect include the 1983 race forMayor of Chicago, the 1988Democraticprimaryrace inWisconsinfor President of the United States, and the 1989 race forMayor of New York City.[23][24][25] The 1983 race in Chicago featured a black candidate,Harold Washington, running against a white candidate,Bernard Epton. More so than the California governor's race the year before,[26]the Washington-Epton matchup evinced strong and overt racial overtones throughout the campaign.[27][28]Two polls conducted approximately two weeks before the election showed Washington with a 14-point lead in the race. A third conducted just three days before the election confirmed Washington continuing to hold a lead of 14 points. But in the election's final results, Washington won by less than four points.[23] In the 1988 Democratic presidential primary in Wisconsin, pre-election polls pegged black candidateJesse Jackson—at the time, a legitimate challenger to white candidate and frontrunnerMichael Dukakis—as likely to receive approximately one-third of the white vote.[29]Ultimately, however, Jackson carried only about one quarter of that vote, with the discrepancy in the heavily white state contributing to a large margin of victory for Dukakis over the second-place Jackson.[30] In the 1989 race for Mayor of New York, a poll conducted just over a week before the election showed black candidateDavid Dinkinsholding an 18-point lead over white candidateRudy Giuliani. Four days before the election, a new poll showed that lead to have shrunk, but still standing at 14 points. On the day of the election, Dinkins prevailed by only two points.[23] Similar voter behavior was noted in the 1989 race forGovernor of Virginiabetween DemocratL. Douglas Wilder, an African-American, and RepublicanMarshall Coleman, who was white. 
In that race, Wilder prevailed, but by less than half of one percent, when pre-election poll numbers showed him on average with a 9 percent lead.[31][23]The discrepancy was attributed to white voters telling pollsters that they were undecided when they actually voted for Coleman.[32] After the 1989 Virginia gubernatorial election, the Bradley effect was sometimes called the Wilder effect.[33][24]Both terms are still used; and less commonly, the term "Dinkins effect" is also used.[5] Also sometimes mentioned are: In 1995, whenColin Powell's name was floated as a possible 1996 Republican presidential candidate, Powell reportedly spoke of being cautioned by publisherEarl G. Gravesabout the phenomenon described by the Bradley effect. With regard to opinion polls showing Powell leading a hypothetical race with then-incumbentBill Clinton, Powell was quoted as saying, "Every time I see Earl Graves, he says, 'Look, man, don't let them hand you no crap. When [white voters] go in that booth, they ain't going to vote for you.'"[24][37] Analyses of recent elections suggest that there may be some evidence of a diminution in the 'Bradley Effect'. However, at this stage, such evidence is too limited to confirm a trend. A few analysts, such as political commentator andThe Weekly StandardeditorFred Barnes, attributed the four-point loss by Indian American candidateBobby Jindalin the2003 Louisiana gubernatorial runoff electionto the Bradley effect. In making his argument, Barnes mentioned polls that had shown Jindal with a lead.[38]Others, such asNational ReviewcontributorRod Dreher, countered that later polls taken just before the election correctly showed that lead to have evaporated, and reported the candidates to be statistically tied.[39][40]In 2007,Jindal ran again, this time securing an easy victory, with his final vote total[41]remaining in line with or stronger than the predictions of the polls conducted shortly before the election.[42] In 2006, there was speculation that the Bradley effect might appear in theTennessee race for United States SenatorbetweenHarold Ford, Jr.and white candidateBob Corker.[43][24][36][44][45]Ford lost by a slim margin, but an examination of exit polling data indicated that the percentage of white voters who voted for him remained close to the percentage that indicated they would do so in polls conducted prior to the election.[24][46]Several other 2006 biracial contests saw pre-election polls predict their respective elections' final results with similar accuracy.[23] In therace for United States Senator from Maryland, blackRepublicancandidateMichael Steelelost by a wider margin than predicted by late polls. However, those polls correctly predicted Steele's numbers, with the discrepancy in his margin of defeat resulting from their underestimating the numbers for his whiteDemocraticopponent, then longtime RepresentativeBen Cardin. 
Those same polls also underestimated the Democratic candidate in the state'srace for governor—a race in which both candidates were white.[23] The overall accuracy of the polling data from the 2006 elections was cited, both by those who argue that the Bradley effect has diminished in American politics,[23][45][47]and those who doubt its existence in the first place.[48]When asked about the issue in 2007, Douglas Wilder indicated that while he believed there was still a need for black candidates to be wary of polls, he felt that voters were displaying "more openness" in their polling responses and becoming "less resistant" to giving an accurate answer than was the case at the time of his gubernatorial election.[49]When asked about the possibility of seeing a Bradley effect in 2008, Joe Trippi, who had been a deputy campaign manager for Tom Bradley in 1982, offered a similar assessment, saying, "The country has come a hell of a long way. I think it's a mistake to think that there'll be any kind of big surprise like there was in the Bradley campaign in 1982. But I also think it'd be a mistake to say, 'It's all gone.'"[50] Inaccurate polling statistics attributed to the Bradley effect are not limited to pre-election polls. In the initial hours after voting concluded in the Bradley-Deukmejian race in 1982, similarly inaccurate exit polls led some news organizations to project Bradley to have won.[51]Republican pollsterV. Lance Tarrance, Jr.argues that this was not indicative of the Bradley effect; rather the exit polls were wrong because Bradley actually won on election day turnout, but lost the absentee vote.[52] Exit polls in the Wilder-Coleman race in 1989 also proved inaccurate in their projection of a ten-point win for Wilder, despite those same exit polls accurately predicting other statewide races.[23][31][53]In 2006, a ballot measure inMichiganto endaffirmative actiongenerated exit poll numbers showing the race to be too close to call. Ultimately, the measure passed by a wide margin.[54] The causes of the polling errors are debated, but pollsters generally believe that perceived societal pressures have led some white voters to be less than forthcoming in their poll responses. These voters supposedly have harbored a concern that declaring their support for a white candidate over a non-white candidate will create a perception that the voter is racially prejudiced.[45][55]During the 1988 Jackson presidential campaign, Murray Edelman, a veteran election poll analyst for news organizations and a former president of theAmerican Association for Public Opinion Research, found the race of the pollster conducting the interview to be a factor in the discrepancy. 
Edelman's research showed white voters to be more likely to indicate support for Jackson when asked by a black interviewer than when asked by a white interviewer.[5] Andrew Kohut, who was the president ofthe Gallup Organizationduring the 1989 Dinkins/Giuliani race and later president of thePew Research Center, which conducted research into the phenomenon, has suggested that the discrepancies may arise, not from white participants giving false answers, but rather from white voters who have negative opinions of blacks being less likely to participate in polling at all than white voters who do not share such negative sentiments with regard to blacks.[56][57] While there is widespread belief in a racial component as at least a partial explanation for the polling inaccuracies in the elections in question, it is not universally accepted that this is the primary factor. Peter Brodnitz, a pollster and contributor to the newsletterThe Polling Report, worked on the 2006 campaign of blackU.S. SenatecandidateHarold Ford, Jr., and contrary to Edelman's findings in 1988, Brodnitz indicated that he did not find the race of the interviewer to be a factor in voter responses in pre-election polls. Brodnitz suggested that late-deciding voters tend to have moderate-to-conservativepolitical opinions and that this may account in part for last-minute decision-makers breaking largely away from black candidates, who have generally been moreliberalthan their white opponents in the elections in question.[5]Another prominent skeptic of the Bradley effect is Gary Langer, the director of polling forABC News. Langer has described the Bradley effect as "a theory in search of data." He has argued that inconsistency of its appearance, particularly in more recent elections, casts doubt upon its validity as a theory.[48][58] Of all of the races presented as possible examples of the Bradley effect theory, perhaps the one most fiercely rebutted by the theory's critics is the 1982 Bradley/Deukmejian contest itself. People involved with both campaigns, as well as those involved with the inaccurate polls have refuted the significance of the Bradley effect in determining that election's outcome. FormerLos Angeles Timesreporter Joe Mathews said that he talked to more than a dozen people who played significant roles in either the Bradley or Deukmejian campaign and that only two felt there was a significant race-based component to the polling failures.[59]Mark DiCamillo, Director of The Field Poll, which was among those that had shown Bradley with a strong lead, has not ruled out the possibility of a Bradley effect as a minor factor, but also said that the organization's own internal examination after that election identified other possible factors that may have contributed to their error, including a shift in voter preference after the final pre-election polls and a high-profile ballot initiative in the same election, a Republican absentee ballot program and a low minority turnout, each of which may have caused pre-election polls to inaccurately predict which respondents were likely voters.[60] Prominent Republican pollsterV. Lance Tarrance, Jr.flatly denies that the Bradley effect occurred during that election, echoing the absentee ballot factor cited by DiCamillo.[11]Tarrance also reports that his own firm's pre-election polls done for the Deukmejian campaign showed the race as having closed from a wide lead for Bradley one month prior to the election down to a statistical dead heat by the day of the election. 
While acknowledging that some news sources projected a Bradley victory based upon Field Poll exit polls which were also inaccurate, he counters that at the same time, other news sources were able to correctly predict Deukmejian's victory by using other exit polls that were more accurate. Tarrance claims that The Field Poll speculated, without supplying supporting data, in offering the Bradley effect theory as an explanation for why its polling had failed, and he attributes the emergence of the Bradley effect theory to media outlets focusing on this, while ignoring that there were other conflicting polls which had been correct all along.[11] Sal Russo, a consultant for Deukmejian in the race, has said that another private pollster working for the campaign, Lawrence Research, also accurately captured the late surge in favor of Deukmejian, polling as late as the night before the election. According to Russo, that firm's prediction after its final poll was an extremely narrow victory for Deukmejian. He asserts that the failure of pre-election polls such as The Field Poll arose, largely because they stopped polling too soon, and that the failure of the exit polls was due to their inability to account for absentee ballots.[61] Blair Levin, a staffer on the Bradley campaign in 1982 said that as he reviewed early returns at a Bradley hotel on election night, he saw that Deukmejian would probably win. In those early returns, he had taken particular note of the high number of absentee ballots, as well as a higher-than-expected turnout in California'sCentral Valleyby conservative voters who had been mobilized to defeat the handgun ballot initiative mentioned by DiCamillo. According to Levin, even as he heard the "victory" celebration going on among Bradley supporters downstairs, those returns had led him to the conclusion that Bradley was likely to lose.[62][63]John Phillips, the primary sponsor of the controversial gun control proposition, said that he felt as though he, rather than polling inaccuracies, was the primary target of the blame assigned by those present at the Bradley hotel that night.[59]Nelson Rising, Bradley's campaign chair, spoke of having warned Bradley long before any polling concerns arose that endorsing the ballot initiative would ultimately doom his campaign. Rejecting the idea that the Bradley effect theory was a factor in the outcome, Rising said, "If there is such an effect, it shouldn't be named for Bradley, or associated with him in any way."[59] In 2008, several political analysts[64][65][66][67]discussing the Bradley effect referred to a study authored by Daniel J. Hopkins, a post-doctoral fellow inHarvard University's Department of Government, which sought to determine whether the Bradley effect theory was valid, and whether an analogous phenomenon might be observed in races between a female candidate and a male candidate. Hopkins analyzed data from 133 elections between 1989 and 2006, compared the results of those elections to the corresponding pre-election poll numbers, and considered some of the alternate explanations which have been offered for any discrepancies therein. The study concluded finally that the Bradley effect was a real phenomenon, amounting to a median gap of 3.1 percentage points before 1996, but that it was likely not the sole factor in those discrepancies, and further that it had ceased to manifest itself at all by 1996. 
The study also suggested a connection between the Bradley effect and the level of racial rhetoric exhibited in the discussion of the political issues of the day. It asserted that the timing of the disappearance of the Bradley effect coincided with that of a decrease in such rhetoric in American politics over such potentially racially charged issues as crime andwelfare. The study found no evidence of a corresponding effect based upon gender—in fact, female Senate candidates received on average 1.2 percentage points more votes than polls had predicted.[68] The2008 presidential campaignofBarack Obama, a blackUnited States Senator, brought a heightened level of scrutiny to the Bradley effect,[69]as observers searched for signs of the effect in comparing Obama's polling numbers to the actual election results during the Democratic primary elections.[5][24][46][70][71]After a victorious showing in theIowa caucuses, where votes were cast publicly, polls predicted that Obama would also capture theNew Hampshire Democratic primary electionby a large margin overHillary Clinton, a white senator. However, Clinton defeated Obama by three points in the New Hampshire race, where ballots were cast secretly, immediately initiating suggestions by some analysts that the Bradley effect may have been at work.[72][58]Other analysts cast doubt on that hypothesis, saying that the polls underestimated Clinton rather than overestimated Obama.[73]Clinton may have also benefited from theprimacy effectin the New Hampshire primary as she was listed ahead of Obama on every New Hampshire ballot.[74] After theSuper Tuesdayprimaries of February 5, 2008,political scienceresearchers from theUniversity of Washingtonfound trends suggesting the possibility that with regard to Obama, the effect's presence or absence may be dependent on the percentage of the electorate that is black. The researchers noted that to that point in the election season, opinion polls taken just prior to an election tended to overestimate Obama in states with a black population below eight percent, to track him within the polls' margins of error in states with a black population between ten and twenty percent, and to underestimate him in states with a black population exceeding twenty-five percent. The first finding suggested the possibility of the Bradley effect, while the last finding suggested the possibility of a "reverse" Bradley effect in which black voters might have been reluctant to declare to pollsters their support for Obama or are underpolled. For example, many general election polls in North Carolina and Virginia assume that black voters will be 15% to 20% of each state's electorate; they were around a quarter of each state's electorate in 2004.[75][76]That high support effect has been attributed to high black voter turnout in those states' primaries, with blacks supporting Obama by margins that often exceeded 97%. 
With only one exception, each state that had opinion polls incorrectly predict the outcome of the Democratic contest also had polls that accurately predicted the outcome of the state's Republican contest, which featured only white candidates.[77] Alternatively, Douglas Wilder has suggested that a 'reverse Bradley effect' may be possible because some Republicans may not openly say they will vote for a black candidate, but may do so on election day.[78] The "Fishtown Effect" is a scenario where prejudiced or racist white voters cast their vote for a black candidate solely on economic concerns.[79][80] Fishtown, a mostly white and economically depressed neighborhood in Philadelphia, voted 81% for Obama in the 2008 election.[81] Alternatively, writer Alisa Valdes-Rodriguez suggested another plausible factor is something called the "Huxtable effect", where the positive image of the respectable African American character Cliff Huxtable, a respected middle-class obstetrician and father on the 1980s television series The Cosby Show, made young voters who grew up with that series' initial run comfortable with the idea of an African American man being a viable presidential candidate, which enhanced Obama's election chances with that population.[82] Others have called it the "Palmer effect", on the theory that David Palmer, a fictional president played by Dennis Haysbert during the second and third seasons of the television drama 24, showed viewers that an African American man can be a strong commander in chief.[83] This election was widely scrutinized as analysts tried to definitively determine whether the Bradley effect is still a significant factor in the political sphere.[84] An inspection of the discrepancy between pre-election polls and Obama's ultimate support[85] reveals significant bivariate support for the hypothesized "reverse Bradley effect". On average, Obama received three percentage points more support in the primaries and caucuses than he did during polling; however, he also had a strong ground campaign, and many polls do not question voters with only cell phones, who are predominantly young.[86] Obama went on to win the election with 53% of the popular vote and a large electoral college victory. Following the 2008 presidential election, a number of news sources reported that the result confirmed the absence of a 'Bradley Effect' in view of the close correlation between the pre-election polls and the actual share of the popular vote.[87] However, it has been suggested that such assumptions based on the overall share of the vote are too simplistic because they ignore the fact that underlying factors can be contradictory and hence masked in overall voting figures. For instance, it has been suggested that an extant Bradley Effect was masked by the unusually high turnout amongst African Americans and other Democratic-leaning voter groups under the unique circumstances of the 2008 election (i.e. the first serious bid for President by an African-American candidate).[13] Although both candidates in the 2016 United States presidential election were white, a similar phenomenon may have caused polls to inaccurately predict the election outcome. According to major opinion polling, former United States Senator and Secretary of State Hillary Clinton was predicted[88] to defeat businessman Donald Trump.
Nevertheless, Trump won the key Rust Belt states of Ohio, Michigan, Pennsylvania, and Wisconsin, giving him more electoral votes than Secretary Clinton. Post-election analysis of public opinion polling showed that Trump's base was larger than predicted, leading some experts to suggest that some "shy Trumpers" were hiding their preferences to avoid being seen as prejudiced by pollsters.[89] There may have also been some cases in which male respondents were hiding their preferences to avoid being seen as sexist, as Hillary Clinton was the first female major party candidate for President.[89] In a 2019 press conference, Trump estimated the effect to be between 6 and 10% in his favor, saying of it, "I don't know if I consider that to be a compliment, but in one way it is a compliment."[90] However, many pollsters have disputed this claim. A 2016 poll conducted by Morning Consult showed that Trump performed better in general election polls regardless of whether the poll was conducted online or by live interviewer over the phone. This finding led Morning Consult's chief research officer to conclude that there was little evidence that poll respondents were feeling pressured to downplay their true general election preferences.[91] Harry Enten, an analyst for FiveThirtyEight.com, noted that Trump generally underperformed his polling in Democratic-leaning states like California and New York, where the stigma against voting for Trump likely would have been stronger, and overperformed his polls in places like Wisconsin and Ohio. Enten concluded that, although Trump did better than the polls predicted in many states, he "didn't do so in a pattern consistent with a 'shy Trump' effect".[92] The Bradley effect, as well as a variant of the so-called shy Tory factor that involves prospective voters' expressed intentions to vote for candidates belonging to the U.S. Republican Party, reportedly skewed a number of opinion polls running up to the 2018 U.S. elections.[93] Notably, the effect was arguably present in the Florida gubernatorial election between black Democrat Andrew Gillum, the mayor of Tallahassee, and white Republican Ron DeSantis, a U.S. Congressman. Despite Gillum having led in most polls before the election, DeSantis ultimately won by a margin of 0.4%.[94]
https://en.wikipedia.org/wiki/Bradley_effect
Consensus decision-makingis agroup decision-makingprocess in which participants work together to develop proposals for actions that achieve a broad acceptance.Consensusis reached when everyone in the groupassentsto a decision (or almost everyone; seestand aside) even if some do not fully agree to or support all aspects of it. It differs from simpleunanimity, which requires all participants to support a decision. Consensus decision-making in a democracy isconsensus democracy.[1] The wordconsensusis Latin meaning "agreement, accord", derived fromconsentiremeaning "feel together".[2]A noun,consensuscan represent a generally accepted opinion[3]– "general agreement or concord; harmony", "a majority of opinion"[4]– or the outcome of a consensus decision-making process. This article refers to the processandthe outcome (e.g. "to decidebyconsensus" and "aconsensus was reached"). Consensus decision-making, as a self-described practice, originates from severalnonviolent,direct actiongroups that were active in theCivil rights,PeaceandWomen'smovements in the USA duringcounterculture of the 1960s. The practice gained popularity in the 1970s through theanti-nuclearmovement, and peaked in popularity in the early 1980s.[5]Consensus spread abroad through theanti-globalizationandclimatemovements, and has become normalized inanti-authoritarianspheres in conjunction withaffinity groupsand ideas ofparticipatory democracyandprefigurative politics.[6] TheMovement for a New Society(MNS) has been credited for popularizing consensus decision-making.[7][6]Unhappy with the inactivity of theReligious Society of Friends(Quakers) against theVietnam War,Lawrence ScottstartedA Quaker Action Group(AQAG) in 1966 to try and encourage activism within the Quakers. By 1971 AQAG members felt they needed not only to end the war, but transform civil society as a whole, and renamed AQAG to MNS. MNS members used consensus decision-making from the beginning as a non-religious adaptation of theQuaker decision-makingthey were used to. MNS trained the anti-nuclearClamshell Alliance(1976)[8][9]andAbalone Alliance(1977) to use consensus, and in 1977 publishedResource Manual for a Living Revolution,[10]which included a section on consensus. An earlier account of consensus decision-making comes from theStudent Nonviolent Coordinating Committee[11](SNCC), the main student organization of thecivil rights movement, founded in 1960. Early SNCC memberMary King, later reflected: "we tried to make all decisions by consensus ... 
it meant discussing a matter and reformulating it until no objections remained".[12] This way of working was brought to the SNCC at its formation by the Nashville student group, who had received nonviolence training from James Lawson and Myles Horton at the Highlander Folk School.[11] However, as the SNCC faced growing internal and external pressure toward the mid-1960s, it developed into a more hierarchical structure, eventually abandoning consensus.[13] Women Strike for Peace (WSP) is also reported to have used consensus independently from its founding in 1961. Eleanor Garst (herself influenced by Quakers) introduced the practice as part of the loose and participatory structure of WSP.[14] As consensus grew in popularity, it became less clear who influenced whom. Food Not Bombs, which started in 1980 in connection with an occupation of Seabrook Station Nuclear Power Plant organized by the Clamshell Alliance, adopted consensus for their organization.[15] Consensus was used in the 1999 Seattle WTO protests, which inspired the S11 (World Economic Forum protest) in 2000 to do so too.[16] Consensus was used at the first Camp for Climate Action (2006) and subsequent camps. Occupy Wall Street (2011) made use of consensus in combination with techniques such as the people's microphone and hand signals. Consensus decision-making has several characteristic features. Consensus decision-making is an alternative to commonly practiced group decision-making processes.[19] Robert's Rules of Order, for instance, is a guide book used by many organizations. This book on parliamentary procedure allows the structuring of debate and passage of proposals that can be approved through a form of majority vote. It does not emphasize the goal of full agreement. Critics of such a process believe that it can involve adversarial debate and the formation of competing factions. These dynamics may harm group member relationships and undermine the ability of a group to cooperatively implement a contentious decision. Consensus decision-making attempts to address the problems that critics identify in such processes. Proponents claim several beneficial outcomes for the consensus process.[17][20] Consensus is not synonymous with unanimity – though that may be a rule agreed to in a specific decision-making process. The level of agreement necessary to finalize a decision is known as a decision rule.[17][21] Diversity of opinion is normal in almost all situations, and will be represented proportionately in an appropriately functioning group. Even with goodwill and social awareness, citizens are likely to disagree in their political opinions and judgments. Differences of interest as well as of perception and values will lead the citizens to divergent views about how to direct and use the organized political power of the community, in order to promote and protect common interests. If political representatives reflect this diversity, then there will be as much disagreement in the legislature as there is in the population.[22] To ensure the agreement or consent of all participants is valued, many groups choose unanimity or near-unanimity as their decision rule. Groups that require unanimity allow individual participants the option of blocking a group decision. This provision motivates a group to make sure that all group members consent to any new proposal before it is adopted. When there is potential for a block to a group decision, both the group and dissenters in the group are encouraged to collaborate until agreement can be reached.
Simplyvetoinga decision is not considered a responsible use of consensus blocking. Some common guidelines for the use of consensus blocking include:[17][23] A participant who does not support a proposal may have alternatives to simply blocking it. Some common options may include the ability to: The basic model for achieving consensus as defined by any decision rule involves: All attempts at achieving consensus begin with a good faith attempt at generating full-agreement, regardless of decision rule threshold. In thespokescouncilmodel,affinity groupsmake joint decisions by each designating a speaker and sitting behind that circle of spokespeople, akin to thespokesof a wheel. While speaking rights might be limited to each group's designee, the meeting may allot breakout time for the constituent groups to discuss an issue and return to the circle via their spokesperson. In the case of an activist spokescouncil preparing for theA16 Washington D.C. protests in 2000, affinity groups disputed their spokescouncil's imposition of nonviolence in their action guidelines. They received the reprieve of letting groups self-organize their protests, and as the city's protest was subsequently divided into pie slices, each blockaded by an affinity group's choice of protest. Many of the participants learned about the spokescouncil model on the fly by participating in it directly, and came to better understand their planned action by hearing others' concerns and voicing their own.[29] InDesigning an All-Inclusive Democracy(2007), Emerson proposes a consensus oriented approach based on theModified Borda Count(MBC) voting method. The group first elects, say, three referees or consensors. The debate on the chosen problem is initiated by the facilitator calling for proposals. Every proposed option is accepted if the referees decide it is relevant and conforms with theUniversal Declaration of Human Rights. The referees produce and display a list of these options. The debate proceeds, with queries, comments, criticisms and/or even new options. If the debate fails to come to a verbal consensus, the referees draw up a final list of options - usually between 4 and 6 - to represent the debate. When all agree, the chair calls for a preferential vote, as per the rules for a Modified Borda Count. The referees decide which option, or which composite of the two leading options, is the outcome. If its level of support surpasses a minimum consensus coefficient, it may be adopted.[30][31] Groups that require unanimity commonly use a core set of procedures depicted in this flow chart.[32][33][34] Once an agenda for discussion has been set and, optionally, the ground rules for the meeting have been agreed upon, each item of the agenda is addressed in turn. Typically, each decision arising from an agenda item follows through a simple structure: Quaker-based consensus[35]is said to be effective because it puts in place a simple, time-tested structure that moves a group towards unity. The Quaker model is intended to allow hearing individual voices while providing a mechanism for dealing with disagreements.[20][36][37] The Quaker model has been adapted byEarlham Collegefor application to secular settings, and can be effectively applied in any consensus decision-making process. Its process includes: Key components of Quaker-based consensus include a belief in a commonhumanityand the ability to decide together. The goal is "unity, not unanimity." 
Ensuring that group members speak only once until others are heard encourages a diversity of thought. The facilitator is understood as serving the group rather than acting as person-in-charge.[38]In the Quaker model, as with other consensus decision-making processes, articulating the emerging consensus allows members to be clear on the decision in front of them. As members' views are taken into account they are likely to support it.[39] The consensus decision-making process often has several roles designed to make the process run more effectively. Although the name and nature of these roles varies from group to group, the most common are thefacilitator,consensor, a timekeeper, an empath and a secretary or notes taker. Not all decision-making bodies use all of these roles, although the facilitator position is almost always filled, and some groups use supplementary roles, such as aDevil's advocateor greeter. Some decision-making bodies rotate these roles through the group members in order to build the experience and skills of the participants, and prevent any perceived concentration of power.[40] The common roles in a consensus meeting are: Critics of consensus blocking often observe that the option, while potentially effective for small groups of motivated or trained individuals with a sufficiently high degree ofaffinity, has a number of possible shortcomings, notably Consensus seeks to improvesolidarityin the long run. Accordingly, it should not be confused withunanimityin the immediate situation, which is often a symptom ofgroupthink. Studies of effective consensus process usually indicate a shunning of unanimity or "illusion of unanimity"[53]that does not hold up as a group comes under real-world pressure (when dissent reappears).Cory Doctorow,Ralph Naderand other proponents ofdeliberative democracyor judicial-like methods view explicit dissent as a symbol of strength. In his book about Wikipedia,Joseph Reagleconsiders the merits and challenges of consensus in open and online communities.[54]Randy Schutt,[55]Starhawk[56]and other practitioners ofdirect actionfocus on the hazards of apparent agreement followed by action in which group splits become dangerously obvious. Unanimous, or apparently unanimous, decisions can have drawbacks.[57]They may be symptoms of asystemic bias, a rigged process (where anagendais not published in advance or changed when it becomes clear who is present to consent), fear of speaking one's mind, a lack of creativity (to suggest alternatives) or even a lack of courage (to go further along the same road to a more extreme solution that would not achieve unanimous consent). Unanimity is achieved when the full group apparently consents to a decision. It has disadvantages insofar as further disagreement, improvements or better ideas then remain hidden, but effectively ends the debate moving it to an implementation phase. Some consider all unanimity a form of groupthink, and some experts propose "coding systems ... for detecting the illusion of unanimity symptom".[58]InConsensus is not Unanimity, long-time progressive change activist Randy Schutt writes: Many people think of consensus as simply an extended voting method in which everyone must cast their votes the same way. Since unanimity of this kind rarely occurs in groups with more than one member, groups that try to use this kind of process usually end up being either extremely frustrated or coercive. 
Decisions are never made (leading to the demise of the group), they are made covertly, or some group or individual dominates the rest. Sometimes a majority dominates, sometimes a minority, sometimes an individual who employs "the Block." But no matter how it is done, this coercive process is not consensus.[55]

Confusion between unanimity and consensus, in other words, usually causes consensus decision-making to fail, and the group then either reverts to majority or supermajority rule or disbands. Most robust models of consensus exclude uniformly unanimous decisions and require at least documentation of minority concerns. Some state clearly that unanimity is not consensus but rather evidence of intimidation, lack of imagination, lack of courage, failure to include all voices, or deliberate exclusion of contrary views.

Some proponents of consensus decision-making view procedures that use majority rule as undesirable for several reasons. Majority voting is regarded as competitive rather than cooperative, framing decision-making in a win/lose dichotomy that ignores the possibility of compromise or other mutually beneficial solutions.[59] Carlos Santiago Nino, on the other hand, has argued that majority rule leads to better deliberation practice than the alternatives, because it requires each member of the group to make arguments that appeal to at least half the participants.[60]

Some advocates of consensus would assert that a majority decision reduces the commitment of each individual decision-maker to the decision. Members of a minority position may feel less commitment to a majority decision, and even majority voters who may have taken their positions along party or bloc lines may have a sense of reduced responsibility for the ultimate decision. The result of this reduced commitment, according to many consensus proponents, is potentially less willingness to defend or act upon the decision.

Majority voting cannot measure consensus. Indeed, by counting so many "for" and so many "against", it measures the very opposite: the degree of dissent. The Modified Borda Count has been put forward as a voting method which better approximates consensus.[61][31][30]

Some formal models based on graph theory attempt to explore the implications of suppressed dissent and subsequent sabotage of the group as it takes action.[62]

High-stakes decision-making, such as the judicial decisions of appeals courts, always requires explicit documentation of this kind. Even so, agreement that defies factional explanation is still observed. Nearly 40% of the decisions of the United States Supreme Court, for example, are unanimous, though often for widely varying reasons. "Consensus in Supreme Court voting, particularly the extreme consensus of unanimity, has often puzzled Court observers who adhere to ideological accounts of judicial decision making."[63] Historical evidence is mixed on whether particular Justices' views were suppressed in favour of public unity.[64]

Heitzig and Simmons (2012) suggest using random selection as a fall-back method to strategically incentivize consensus over blocking.[50] However, this makes it very difficult to tell the difference between those who support the decision and those who merely tactically tolerate it for the incentive. Once they receive that incentive, they may undermine or refuse to implement the agreement in various and non-obvious ways. Voting systems in general avoid offering incentives (or "bribes") to change a heartfelt vote.
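One way to picture the fall-back rule attributed above to Heitzig and Simmons is as a two-stage procedure: seek consensus first, and only if no proposal clears the bar, fall back to a random draw among the proposals on the table. The sketch below is a minimal illustration of that idea, not their actual mechanism; the data structures and the consensus_threshold parameter are assumptions made for the example.

```python
import random

def decide(proposals, support, consensus_threshold=1.0, rng=random.Random(0)):
    """Toy two-stage rule: adopt a proposal that reaches the consensus
    threshold; otherwise fall back to a uniform random draw among the
    proposals on the table.  The incentive to keep negotiating is that the
    fall-back outcome is outside everyone's control.

    support[p] is the fraction of participants backing proposal p.
    """
    # Stage 1: look for a proposal with consensus-level support.
    for p in proposals:
        if support.get(p, 0.0) >= consensus_threshold:
            return p, "adopted by consensus"
    # Stage 2: no consensus, so fall back to random selection.
    return rng.choice(proposals), "adopted by random fall-back"

# Example: no proposal reaches full agreement, so the fall-back applies.
proposals = ["A", "B", "C"]
support = {"A": 0.7, "B": 0.9, "C": 0.4}
print(decide(proposals, support))
```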
In the Abilene paradox, a group can unanimously agree on a course of action that no individual member of the group desires, because no one individual is willing to go against the perceived will of the decision-making body.[65]

Since consensus decision-making focuses on discussion and seeks the input of all participants, it can be a time-consuming process. This is a potential liability in situations where decisions must be made speedily, or where it is not possible to canvass the opinions of all delegates in a reasonable time. Additionally, the time commitment required to engage in the consensus decision-making process can sometimes act as a barrier to participation for individuals unable or unwilling to make the commitment.[66] However, once a decision has been reached, it can be acted on more quickly than a decision that is simply handed down. American businessmen complained that in negotiations with a Japanese company they had to discuss the idea with everyone, even the janitor, yet once a decision was made the Americans found the Japanese were able to act much more quickly because everyone was on board, while the Americans had to struggle with internal opposition.[67]

Outside of Western culture, multiple other cultures have used consensus decision-making. One early example is the Haudenosaunee (Iroquois) Confederacy Grand Council, which used a 75% supermajority to finalize its decisions,[68] potentially as early as 1142.[69] In the Zulu and Xhosa (South African) process of indaba, community leaders gather to listen to the public and negotiate figurative thresholds towards an acceptable compromise. The technique was also used during the 2015 United Nations Climate Change Conference.[70][71] In Aceh and Nias cultures (Indonesian), family and regional disputes, from playground fights to estate inheritance, are handled through a musyawarah consensus-building process in which parties mediate to find peace and avoid future hostility and revenge. The resulting agreements are expected to be followed, and range from advice and warnings to compensation and exile.[72][73]

The origins of formal consensus-making can be traced significantly further back, to the Religious Society of Friends, or Quakers, who adopted the technique as early as the 17th century.[74] Anabaptists, including some Mennonites, have a history of using consensus decision-making,[75] and some believe Anabaptists practiced consensus as early as the Martyrs' Synod of 1527.[74] Some Christians trace consensus decision-making back to the Bible. The Global Anabaptist Mennonite Encyclopedia references, in particular, Acts 15[76] as an example of consensus in the New Testament. The lack of legitimate consensus process in the unanimous conviction of Jesus by corrupt priests[77] in an illegally held Sanhedrin court (which had rules preventing unanimous conviction in a hurried process) strongly influenced the views of pacifist Protestants, including the Anabaptists (Mennonites/Amish), Quakers and Shakers. In particular it influenced their distrust of expert-led courtrooms, and led them to "be clear about process" and to convene in a way that assures that "everyone must be heard".[78]

The Modified Borda Count voting method has been advocated as more 'consensual' than majority voting by, among others, Ramón Llull in 1199, Nicholas Cusanus in 1435, Jean-Charles de Borda in 1784, Hother Hage in 1860, Charles Dodgson (Lewis Carroll) in 1884, and Peter Emerson in 1986.
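To make the Modified Borda Count mechanics described earlier more concrete, the sketch below tallies a small set of preferential ballots in an MBC style, in which a ballot ranking only m of the n options gives its top choice m points (so truncated ballots carry less weight), and the leading option is adopted only if it clears a consensus-coefficient threshold. The ballots, the 0.75 threshold and the exact scoring convention are illustrative assumptions, not a statement of Emerson's published rules.

```python
def modified_borda_count(ballots, options):
    """Tally preferential ballots MBC-style (a sketch).

    Each ballot is an ordered list of preferred options.  A ballot that
    ranks m of the n options gives m points to its 1st choice, m-1 to its
    2nd, and so on down to 1 point, so partial ballots earn fewer points.
    """
    scores = {opt: 0 for opt in options}
    for ballot in ballots:
        m = len(ballot)
        for rank, opt in enumerate(ballot):
            scores[opt] += m - rank
    return scores

def consensus_coefficient(scores, n_options, n_voters):
    """Leading option's score as a share of the maximum attainable score."""
    best = max(scores, key=scores.get)
    return best, scores[best] / (n_options * n_voters)

# Illustrative ballots over four options; the 0.75 threshold is an assumption.
options = ["A", "B", "C", "D"]
ballots = [["B", "A", "C", "D"], ["A", "B", "D", "C"], ["B", "C", "A"], ["C", "B", "A", "D"]]
scores = modified_borda_count(ballots, options)
winner, coeff = consensus_coefficient(scores, len(options), len(ballots))
print(scores, winner, round(coeff, 2))  # B leads with a coefficient of about 0.81
```

Under this toy threshold the leading option would clear 0.75 and could be adopted; with a lower coefficient the referees would instead look at a composite of the two leading options, as described above.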
Japanese companies normally use consensus decision-making, meaning that unanimous support on the board of directors is sought for any decision.[79] A ringi-sho is a circulation document used to obtain agreement. It must first be signed by the lowest-level manager, and then upwards, and may need to be revised and the process started over.[80]

In the Internet Engineering Task Force (IETF), decisions are assumed to be taken by rough consensus.[81] The IETF has studiously refrained from defining a mechanical method for verifying such consensus, apparently in the belief that any such codification leads to attempts to "game the system." Instead, a working group (WG) chair or BoF chair is supposed to articulate the "sense of the group." One tradition in support of rough consensus is the tradition of humming rather than (countable) hand-raising; this allows a group to quickly discern the prevalence of dissent, without making it easy to slip into majority rule.[82] Much of the business of the IETF is carried out on mailing lists, where all parties can speak their views at all times.

In 2001, Robert Rocco Cottone published a consensus-based model of professional decision-making for counselors and psychologists.[83] Based on social constructivist philosophy, the model operates as a consensus-building model, as the clinician addresses ethical conflicts through a process of negotiating to consensus. Conflicts are resolved by consensually agreed-on arbitrators who are selected early in the negotiation process.

The United States Bureau of Land Management's policy is to seek to use collaborative stakeholder engagement as standard operating practice for natural resources projects, plans, and decision-making, except under unusual conditions such as when constrained by law, regulation, or other mandates, or when conventional processes are important for establishing new, or reaffirming existing, precedent.[84]

The Polish–Lithuanian Commonwealth of 1569–1795 used consensus decision-making in the form of the liberum veto ('free veto') in its Sejms (legislative assemblies). A type of unanimous consent, the liberum veto originally allowed any member of a Sejm to veto an individual law by shouting Sisto activitatem! (Latin: "I stop the activity!") or Nie pozwalam! (Polish: "I do not allow!").[85] Over time it developed into a much more extreme form, where any Sejm member could unilaterally and immediately force the end of the current session and nullify any previously passed legislation from that session.[86] Due to excessive use, and to sabotage from neighboring powers bribing Sejm members, legislating became very difficult and weakened the Commonwealth. Soon after the Commonwealth banned the liberum veto as part of its Constitution of 3 May 1791, it dissolved under pressure from neighboring powers.[87]

Sociocracy has many of the same aims as consensus and is applied in a similar range of situations.[88] It is slightly different in that broad support for a proposal is defined as the lack of disagreement (sometimes called 'reasoned objection') rather than affirmative agreement.[89] To reflect this difference from the common understanding of the word consensus, in sociocracy the process is called gaining 'consent' (not consensus).[90]
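The sociocratic distinction just described, adopting when no reasoned objection remains rather than when everyone affirmatively agrees, can be pictured as two slightly different decision predicates. The sketch below is only an illustration of that contrast; the per-participant position labels are assumptions for the example, not a sociocratic standard.

```python
from typing import Dict

# Each participant records a position on the proposal:
#   "agree", "stand_aside" (no objection, but not active support), or "object".
Positions = Dict[str, str]

def consent_reached(positions: Positions) -> bool:
    """Sociocratic-style consent: adopt when nobody raises a reasoned objection."""
    return all(p != "object" for p in positions.values())

def affirmative_consensus(positions: Positions) -> bool:
    """A stricter reading of consensus: adopt only when everyone actively agrees."""
    return all(p == "agree" for p in positions.values())

positions = {"Ana": "agree", "Ben": "stand_aside", "Chloe": "agree"}
print(consent_reached(positions))        # True  -- no objections were raised
print(affirmative_consensus(positions))  # False -- Ben does not actively agree
```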
https://en.wikipedia.org/wiki/Consensus_decision-making
Democracy(fromAncient Greek:δημοκρατία,romanized:dēmokratía,dēmos'people' andkratos'rule')[1]is aform of governmentin whichpolitical poweris vested in thepeopleor thepopulationof a state.[2][3][4]Under a minimalist definition of democracy, rulers are elected through competitiveelectionswhile more expansive or maximalist definitions link democracy to guarantees ofcivil libertiesandhuman rightsin addition to competitive elections.[5][6][4] In adirect democracy, the people have the directauthoritytodeliberateand decidelegislation. In arepresentative democracy, the people choose governingofficialsthroughelectionsto do so. The definition of "the people" and the waysauthorityis shared among them or delegated by them have changed over time and at varying rates in different countries. Features of democracy oftentimes includefreedom of assembly,association,personal property,freedom of religionandspeech,citizenship,consent of the governed,voting rights, freedom from unwarranted governmentaldeprivationof theright to lifeandliberty, andminority rights. The notion of democracy has evolved considerably over time. Throughout history, one can find evidence of direct democracy, in whichcommunitiesmake decisions throughpopular assembly. Today, thedominantform of democracy is representative democracy, where citizens electgovernmentofficials to govern on their behalf such as in aparliamentaryorpresidential democracy. In the common variant ofliberal democracy, the powers of the majority are exercised within the framework of a representative democracy, but aconstitutionandsupreme courtlimit the majority and protect theminority—usually through securing the enjoyment by all of certain individual rights, such asfreedomof speech or freedom of association.[7][8] The term appeared in the 5th century BC inGreek city-states, notablyClassical Athens, to mean "rule of the people", in contrast toaristocracy(ἀριστοκρατία,aristokratía), meaning "rule of an elite".[9]In virtually all democratic governments throughout ancient and modern history, democraticcitizenshipwas initially restricted to an elite class, which was later extended to all adult citizens. In most modern democracies, this was achieved through thesuffragemovements of the 19th and 20thcenturies. Democracy contrasts with forms of government wherepoweris not vested in thegeneral populationof astate, such asauthoritariansystems. 
Historically a rare and vulnerable form of government,[10] democratic systems of government have become more prevalent since the 19th century, in particular with various waves of democratization.[11] Democracy garners considerable legitimacy in the modern world,[12] as public opinion across regions tends to strongly favor democratic systems of government relative to alternatives,[13][14] and as even authoritarian states try to present themselves as democratic.[15][16] According to the V-Dem Democracy indices and The Economist Democracy Index, less than half the world's population lives in a democracy as of 2022.[17][18]

Although democracy is generally understood to be defined by voting,[1][8] no consensus exists on a precise definition of democracy.[19] Karl Popper says that the "classical" view of democracy is, "in brief, the theory that democracy is the rule of the people and that the people have a right to rule".[20] One study identified 2,234 adjectives used to describe democracy in the English language.[21]

Democratic principles are reflected in all eligible citizens being equal before the law and having equal access to legislative processes.[22] For example, in a representative democracy, every vote has (in theory) equal weight, and the freedom of eligible citizens is secured by legitimised rights and liberties which are typically enshrined in a constitution,[23][24] while other uses of "democracy" may encompass direct democracy, in which citizens vote on issues directly. According to the United Nations, democracy "provides an environment that respects human rights and fundamental freedoms, and in which the freely expressed will of people is exercised."[25]

One theory holds that democracy requires three fundamental principles: upward control (sovereignty residing at the lowest levels of authority), political equality, and social norms by which individuals and institutions only consider acceptable acts that reflect the first two principles of upward control and political equality.[26] Legal equality, political freedom and rule of law[27] are often identified by commentators as foundational characteristics for a well-functioning democracy.[19]

In some countries, notably in the United Kingdom (which originated the Westminster system), the dominant principle is that of parliamentary sovereignty, while maintaining judicial independence.[28][29] In India, parliamentary sovereignty is subject to the Constitution of India, which includes judicial review.[30] Though the term "democracy" is typically used in the context of a political state, the principles are also potentially applicable to private organisations, such as clubs, societies and firms.

Democracies may use many different decision-making methods, but majority rule is the dominant form. Without compensation, such as legal protections of individual or group rights, political minorities can be oppressed by the "tyranny of the majority". Majority rule involves a competitive approach, opposed to consensus democracy, creating the need that elections, and generally deliberation, be substantively and procedurally "fair", i.e. just and equitable.
In some countries,freedom of political expression,freedom of speech, andfreedom of the pressare considered important to ensure that voters are well informed, enabling them to vote according to their own interests and beliefs.[31][32] It has also been suggested that a basic feature of democracy is the capacity of all voters to participate freely and fully in the life of their society.[33]With its emphasis on notions ofsocial contractand thecollective willof all the voters, democracy can also be characterised as a form of politicalcollectivismbecause it is defined as a form of government in which all eligible citizens have an equal say inlawmaking.[34] Republics, though often popularly associated with democracy because of the shared principle of rule byconsent of the governed, are not necessarily democracies, asrepublicanismdoes not specifyhowthe people are to rule.[35]Classically the term "republic" encompassed both democracies andaristocracies.[36][37]In a modern sense the republican form of government is a form of government without amonarch. Because of this, democracies can be republics orconstitutional monarchies, such as the United Kingdom. Democratic assembliesare as old as the human species and are found throughout human history,[39]but up until the nineteenth century, major political figures have largely opposed democracy.[40]Republican theorists linked democracy to small size: as political units grew in size, the likelihood increased that the government would turn despotic.[10][41]At the same time, small political units were vulnerable to conquest.[10]Montesquieuwrote, "If a republic be small, it is destroyed by a foreign force; if it is large, it is ruined by an internal imperfection."[42]According to Johns Hopkins University political scientistDaniel Deudney, the creation of the United States, with its large size and its system of checks and balances, was a solution to the dual problems of size.[10][43]Forms of democracy occurred organically in societies around the world that had no contact with each other.[44][45] The termdemocracyfirst appeared in ancient Greek political and philosophical thought in the city-state ofAthensduringclassical antiquity.[46][47]The word comes fromdêmos'(common) people' andkrátos'force/might'.[48]UnderCleisthenes, what is generally held as the first example of a type of democracy in the sixth-century BC (508–507 BC) was established in Athens. Cleisthenes is referred to as "the father ofAthenian democracy".[49]The first attested use of the word democracy is found in prose works of the 430s BC, such asHerodotus'Histories, but its usage was older by several decades, as two Athenians born in the 470s were named Democrates, a new political name—likely in support of democracy—given at a time of debates over constitutional issues in Athens.Aeschylusalso strongly alludes to the word in his playThe Suppliants, staged in c.463 BC, where he mentions "the demos's ruling hand" [demou kratousa cheir]. Before that time, the word used to define the new political system of Cleisthenes was probablyisonomia, meaning political equality.[50] Athenian democracy took the form of direct democracy, and it had two distinguishing features: therandom selectionof ordinary citizens to fill the few existing government administrative and judicial offices,[51]and a legislative assembly consisting of all Athenian citizens.[52]All eligible citizens were allowed to speak and vote in the assembly, which set the laws of the city-state. 
However, Athenian citizenship excluded women, slaves, foreigners (μέτοικοι / métoikoi), and youths below the age of military service.[53][54] Effectively, only 1 in 4 residents in Athens qualified as citizens. Owning land was not a requirement for citizenship.[55] The exclusion of large parts of the population from the citizen body is closely related to the ancient understanding of citizenship. In most of antiquity the benefit of citizenship was tied to the obligation to fight war campaigns.[56]

Athenian democracy was not only direct in the sense that decisions were made by the assembled people, but also the most direct in the sense that the people, through the assembly, boule and courts of law, controlled the entire political process, and a large proportion of citizens were involved constantly in the public business.[57] Even though the rights of the individual were not secured by the Athenian constitution in the modern sense (the ancient Greeks had no word for "rights"[58]), those who were citizens of Athens enjoyed their liberties not in opposition to the government but by living in a city that was not subject to another power and by not being subjects themselves to the rule of another person.[59]

Range voting appeared in Sparta as early as 700 BC. The Spartan ecclesia was an assembly of the people, held once a month, in which every male citizen of at least 20 years of age could participate. In the assembly, Spartans elected leaders and cast votes by range voting and shouting; the vote was then decided by how loudly the crowd shouted. Aristotle called this "childish", as compared with the stone voting ballots used by the Athenian citizenry. Sparta adopted it because of its simplicity, and to prevent the biased voting, buying, or cheating that was predominant in early democratic elections.[60]

Even though the Roman Republic contributed significantly to many aspects of democracy, only a minority of Romans were citizens with votes in elections for representatives. The votes of the powerful were given more weight through a system of weighted voting, so most high officials, including members of the Senate, came from a few wealthy and noble families.[62] In addition, the overthrow of the Roman Kingdom was the first case in the Western world of a polity being formed with the explicit purpose of being a republic, although it did not have much of a democracy. The Roman model of governance inspired many political thinkers over the centuries.[63]

Vaishali, capital city of the Vajjika League (Vrijji mahajanapada) of India, is considered one of the first examples of a republic, around the 6th century BC.[64][65][66]

Other cultures, such as the Iroquois in the Americas, also developed a form of democratic society between 1450 and 1660 (and possibly in 1142[67]), well before contact with the Europeans. This democracy continues to the present day and is the world's oldest standing representative democracy.[68][69]

While most regions in Europe during the Middle Ages were ruled by clergy or feudal lords, there existed various systems involving elections or assemblies, although often involving only a small part of the population. In Scandinavia, bodies known as things consisted of freemen presided over by a lawspeaker. These deliberative bodies were responsible for settling political questions, and variants included the Althing in Iceland and the Løgting in the Faeroe Islands.[70][71] The veche, found in Eastern Europe, was a similar body to the Scandinavian thing.
In the RomanCatholic Church, thepopehas been elected by apapal conclavecomposed of cardinals since 1059. The first documented parliamentary body in Europe was theCortes of León. Established byAlfonso IXin 1188, the Cortes had authority over setting taxation, foreign affairs and legislating, though the exact nature of its role remains disputed.[72]TheRepublic of Ragusa, established in 1358 and centered around the city ofDubrovnik, provided representation and voting rights to its male aristocracy only. Various Italian city-states and polities had republic forms of government. For instance, theRepublic of Florence, established in 1115, was led by theSignoriawhose members were chosen bysortition. In the 10th–15th centuryFrisia, a distinctly non-feudal society, the right to vote on local matters and on county officials was based on land size. TheKouroukan Fougadivided theMali Empireinto ruling clans (lineages) that were represented at a great assembly called theGbara. However, the charter made Mali more similar to aconstitutional monarchythan ademocratic republic.[73][74] TheParliament of Englandhad its roots in the restrictions on the power of kings written intoMagna Carta(1215), which explicitly protected certain rights of the King's subjects and implicitly supported what became the English writ ofhabeas corpus, safeguarding individual freedom against unlawful imprisonment with the right to appeal.[75][76]The first representative national assembly inEnglandwasSimon de Montfort's Parliamentin 1265.[77][78]The emergence ofpetitioningis some of the earliest evidence of parliament being used as a forum to address the general grievances of ordinary people. However, the power to call parliament remained at the pleasure of the monarch.[79] Studies have linked the emergence of parliamentary institutions in Europe during the medieval period to urban agglomeration and the creation of new classes, such as artisans,[80]as well as the presence of nobility and religious elites.[81]Scholars have also linked the emergence of representative government to Europe's relative political fragmentation.[82]Political scientistDavid Stasavagelinks the fragmentation of Europe, and its subsequent democratization, to the manner in which the Roman Empire collapsed: Roman territory was conquered by small fragmented groups of Germanic tribes, thus leading to the creation of small political units where rulers were relatively weak and needed the consent of the governed to ward off foreign threats.[83] InPoland,noble democracywas characterized by an increase in the activity of the middlenobility, which wanted to increase their share in exercising power at the expense of the magnates. Magnates dominated the most important offices in the state (secular and ecclesiastical) and sat on the royal council, later the senate. The growing importance of the middle nobility had an impact on the establishment of the institution of the landsejmik(local assembly), which subsequently obtained more rights. During the fifteenth and first half of the sixteenth century, sejmiks received more and more power and became the most important institutions of local power. In 1454,Casimir IV Jagiellongranted the sejmiks the right to decide on taxes and to convene a mass mobilization in theNieszawa Statutes. He also pledged not to create new laws without their consent.[84] In 17th century England, there wasrenewed interest in Magna Carta.[85]The Parliament of England passed thePetition of Rightin 1628 which established certain liberties for subjects. 
TheEnglish Civil War(1642–1651) was fought between the King and an oligarchic but elected Parliament,[86][87]during which the idea of a political party took form with groups debating rights to political representation during thePutney Debatesof 1647.[88]Subsequently,the Protectorate(1653–59) and theEnglish Restoration(1660) restored more autocratic rule, although Parliament passed theHabeas Corpus Actin 1679 which strengthened the convention that forbade detention lacking sufficient cause or evidence. After theGlorious Revolutionof 1688, theBill of Rightswas enacted in 1689 which codified certain rights and liberties and is still in effect. The Bill set out the requirement for regular elections, rules for freedom of speech in Parliament and limited the power of the monarch, ensuring that, unlike much of Europe at the time,royal absolutismwould not prevail.[89][90]Economic historiansDouglass NorthandBarry Weingasthave characterized the institutions implemented in the Glorious Revolution as a resounding success in terms of restraining the government and ensuring protection for property rights.[91] Renewed interest in the Magna Carta, the English Civil War, and the Glorious Revolution in the 17th century prompted the growth ofpolitical philosophyon the British Isles.Thomas Hobbeswas the first philosopher to articulate a detailedsocial contract theory. Writing in theLeviathan(1651), Hobbes theorized that individuals living in thestate of natureled lives that were "solitary, poor, nasty, brutish and short" and constantly waged awar of all against all. In order to prevent the occurrence of an anarchic state of nature, Hobbes reasoned that individuals ceded their rights to a strong, authoritarian power. In other words, Hobbes advocated for an absolute monarchy which, in his opinion, was the best form of government. Later, philosopher and physicianJohn Lockewould posit a different interpretation of social contract theory. Writing in hisTwo Treatises of Government(1689), Locke posited that all individuals possessed the inalienable rights to life, liberty and estate (property).[92]According to Locke, individuals would voluntarily come together to form a state for the purposes of defending their rights. Particularly important for Locke were property rights, whose protection Locke deemed to be a government's primary purpose.[93]Furthermore, Locke asserted that governments werelegitimateonly if they held theconsent of the governed. For Locke, citizens had theright to revoltagainst a government that acted against their interest or became tyrannical. Although they were not widely read during his lifetime, Locke's works are considered the founding documents ofliberalthought and profoundly influenced the leaders of theAmerican Revolutionand later theFrench Revolution.[94]His liberal democratic framework of governance remains the preeminent form of democracy in the world. In the Cossack republics of Ukraine in the 16th and 17th centuries, theCossack HetmanateandZaporizhian Sich, the holder of the highest post ofHetmanwas elected by the representatives from the country's districts. In North America, representative government began inJamestown, Virginia, with the election of theHouse of Burgesses(forerunner of theVirginia General Assembly) in 1619. English Puritans who migrated from 1620 established colonies in New England whose local governance was democratic;[95]although these local assemblies had some small amounts of devolved power, the ultimate authority was held by the Crown and the English Parliament. 
The Puritans (Pilgrim Fathers), Baptists, and Quakers who founded these colonies applied the democratic organisation of their congregations also to the administration of their communities in worldly matters.[96][97][98]

The first Parliament of Great Britain was established in 1707, after the merger of the Kingdom of England and the Kingdom of Scotland under the Acts of Union. Two key documents of the UK's uncodified constitution, the English Declaration of Right, 1689 (restated in the Bill of Rights 1689) and the Scottish Claim of Right 1689, had both cemented Parliament's position as the supreme law-making body and said that the "election of members of Parliament ought to be free".[99] However, Parliament was only elected by male property owners, who amounted to 3% of the population in 1780.[100] The first known British person of African heritage to vote in a general election, Ignatius Sancho, voted in 1774 and 1780.[101]

During the Age of Liberty in Sweden (1718–1772), civil rights were expanded and power shifted from the monarch to parliament.[102] The taxed peasantry was represented in parliament, although with little influence, but commoners without taxed property had no suffrage.

The creation of the short-lived Corsican Republic in 1755 was an early attempt to adopt a democratic constitution (all men and women above the age of 25 could vote).[103] This Corsican Constitution was the first based on Enlightenment principles and included female suffrage, something that was not included in most other democracies until the 20th century.

Colonial America had similar property qualifications as Britain, and in the period before 1776 the abundance and availability of land meant that large numbers of colonists met such requirements, with at least 60 per cent of adult white males able to vote.[104] The great majority of white men were farmers who met the property ownership or taxpaying requirements. With few exceptions, no blacks or women could vote. Vermont, on declaring independence from Great Britain in 1777, adopted a constitution modelled on Pennsylvania's, with citizenship and democratic suffrage for males with or without property.[105] The United States Constitution of 1787 is the oldest surviving, still active, governmental codified constitution. The Constitution provided for an elected government and protected civil rights and liberties, but did not end slavery nor extend voting rights in the United States, instead leaving the issue of suffrage to the individual states.[106] Generally, states limited suffrage to white male property owners and taxpayers.[107] At the time of the first Presidential election in 1789, about 6% of the population was eligible to vote.[108] The Naturalization Act of 1790 limited U.S. citizenship to whites only.[109] The Bill of Rights in 1791 set limits on government power to protect personal freedoms, but had little impact on judgements by the courts for the first 130 years after ratification.[110]

In 1789, Revolutionary France adopted the Declaration of the Rights of Man and of the Citizen and, although short-lived, the National Convention was elected by all men in 1792.[111] The Polish-Lithuanian Constitution of 3 May 1791 sought to implement a more effective constitutional monarchy, introduced political equality between townspeople and nobility, and placed the peasants under the protection of the government, mitigating the worst abuses of serfdom.
In force for less than 19 months, it was declared null and void by the Grodno Sejm that met in 1793.[112][113] Nonetheless, the 1791 Constitution helped keep alive Polish aspirations for the eventual restoration of the country's sovereignty over a century later.

In the United States, the 1828 presidential election was the first in which non-property-holding white males could vote in the vast majority of states. Voter turnout soared during the 1830s, reaching about 80% of the adult white male population in the 1840 presidential election.[114] North Carolina was the last state to abolish the property qualification, in 1856, resulting in a close approximation to universal white male suffrage (however, tax-paying requirements remained in five states in 1860 and survived in two states until the 20th century).[115][116][117] In the 1860 United States census, the slave population had grown to four million,[118] and in Reconstruction after the Civil War, three constitutional amendments were passed: the 13th Amendment (1865) that ended slavery; the 14th Amendment (1868) that gave black people citizenship; and the 15th Amendment (1870) that gave black males a nominal right to vote.[119][120][nb 1] Full enfranchisement of citizens was not secured until after the civil rights movement gained passage by the US Congress of the Voting Rights Act of 1965.[121][122]

The voting franchise in the United Kingdom was expanded and made more uniform in a series of reforms that began with the Reform Act 1832 and continued into the 20th century, notably with the Representation of the People Act 1918 and the Equal Franchise Act 1928. Universal male suffrage was established in France in March 1848 in the wake of the French Revolution of 1848.[123] During that year, several revolutions broke out in Europe as rulers were confronted with popular demands for liberal constitutions and more democratic government.[124]

In 1876, the Ottoman Empire transitioned from an absolute monarchy to a constitutional one, and held two elections the next year to elect members to its newly formed parliament.[125] Provisional Electoral Regulations were issued, stating that the elected members of the Provincial Administrative Councils would elect members to the first Parliament. Later that year, a new constitution was promulgated, which provided for a bicameral Parliament with a Senate appointed by the Sultan and a popularly elected Chamber of Deputies. Only men above the age of 30 who were competent in Turkish and had full civil rights were allowed to stand for election. Reasons for disqualification included holding dual citizenship, being employed by a foreign government, being bankrupt, being employed as a servant, or having "notoriety for ill deeds". Full universal suffrage was achieved in 1934.[126]

In 1893, the self-governing colony of New Zealand became the first country in the world (except for the short-lived 18th-century Corsican Republic) to establish active universal suffrage by recognizing women as having the right to vote.[127]

20th-century transitions to liberal democracy have come in successive "waves of democracy", variously resulting from wars, revolutions, decolonisation, and religious and economic circumstances.[11] Global waves of "democratic regression" reversing democratization have also occurred, in the 1920s and 1930s, in the 1960s and 1970s, and in the 2010s.[128][129]

World War I and the dissolution of the autocratic Ottoman and Austro-Hungarian empires resulted in the creation of new nation-states in Europe, most of them at least nominally democratic.
In the 1920s democratic movements flourished and women's suffrage advanced, but the Great Depression brought disenchantment and most of the countries of Europe, Latin America, and Asia turned to strong-man rule or dictatorships. Fascism and dictatorships flourished in Nazi Germany, Italy, Spain and Portugal, as well as non-democratic governments in the Baltics, the Balkans, Brazil, Cuba, China, and Japan, among others.[130]

World War II brought a definitive reversal of this trend in Western Europe. The democratisation of the American, British, and French sectors of occupied Germany (disputed[131]), Austria, Italy, and occupied Japan served as a model for the later theory of government change. However, most of Eastern Europe, including the Soviet sector of Germany, fell into the non-democratic Soviet-dominated bloc. The war was followed by decolonisation, and again most of the newly independent states had nominally democratic constitutions. India emerged as the world's largest democracy and continues to be so.[132] Countries that were once part of the British Empire often adopted the British Westminster system.[133][134]

In 1948, the Universal Declaration of Human Rights mandated democracy: "The will of the people shall be the basis of the authority of government; this will shall be expressed in periodic and genuine elections which shall be by universal and equal suffrage and shall be held by secret vote or by equivalent free voting procedures."

By 1960, the vast majority of countries were nominally democracies, although much of the world's population lived in nominal democracies that experienced sham elections and other forms of subterfuge (particularly in "Communist" states and the former colonies). A subsequent wave of democratisation brought substantial gains toward true liberal democracy for many states, dubbed the "third wave of democracy". Portugal, Spain, and several of the military dictatorships in South America returned to civilian rule in the 1970s and 1980s.[nb 2] This was followed by countries in East and South Asia by the mid-to-late 1980s. Economic malaise in the 1980s, along with resentment of Soviet oppression, contributed to the collapse of the Soviet Union, the associated end of the Cold War, and the democratisation and liberalisation of the former Eastern bloc countries. The most successful of the new democracies were those geographically and culturally closest to western Europe, and they are now either part of the European Union or candidate states. In 1986, after the toppling of the most prominent Asian dictatorship, the only democratic state of its kind at the time emerged in the Philippines with the rise of Corazon Aquino, who would later be known as the mother of Asian democracy.

The liberal trend spread to some states in Africa in the 1990s, most prominently in South Africa. Some recent examples of attempts at liberalisation include the Indonesian Revolution of 1998, the Bulldozer Revolution in Yugoslavia, the Rose Revolution in Georgia, the Orange Revolution in Ukraine, the Cedar Revolution in Lebanon, the Tulip Revolution in Kyrgyzstan, and the Jasmine Revolution in Tunisia.

According to Freedom House, in 2007 there were 123 electoral democracies (up from 40 in 1972).[136] According to the World Forum on Democracy, electoral democracies now represent 120 of the 192 existing countries and constitute 58.2 per cent of the world's population. At the same time, liberal democracies, i.e.
countries that Freedom House regards as free and respectful of basic human rights and the rule of law, number 85 and represent 38 per cent of the global population.[137] Also in 2007, the United Nations declared 15 September the International Day of Democracy.[138]

Many countries reduced their voting age to 18 years; the major democracies began to do so in the 1970s, starting in Western Europe and North America.[139][140][141] Most electoral democracies continue to exclude those younger than 18 from voting.[142] The voting age has been lowered to 16 for national elections in a number of countries, including Brazil, Austria, Cuba, and Nicaragua. In California, a 2004 proposal to permit a quarter vote at 14 and a half vote at 16 was ultimately defeated. In 2008, the German parliament proposed but shelved a bill that would grant the vote to each citizen at birth, to be used by a parent until the child claims it for themselves.

According to Freedom House, starting in 2005 there have been 17 consecutive years in which declines in political rights and civil liberties throughout the world have outnumbered improvements,[143][144] as populist and nationalist political forces have gained ground everywhere from Poland (under the Law and Justice party) to the Philippines (under Rodrigo Duterte).[143][128] In a Freedom House report released in 2018, Democracy Scores for most countries declined for the 12th consecutive year.[145] The Christian Science Monitor reported that nationalist and populist political ideologies were gaining ground, at the expense of rule of law, in countries like Poland, Turkey and Hungary. For example, in Poland, the President appointed 27 new Supreme Court judges over legal objections from the European Commission. In Turkey, thousands of judges were removed from their positions following a failed coup attempt during a government crackdown.[146]

"Democratic backsliding" in the 2010s was attributed to economic inequality and social discontent,[148] personalism,[149] poor government management of the COVID-19 pandemic,[150][151] as well as other factors such as manipulation of civil society, "toxic polarization", foreign disinformation campaigns,[152] racism and nativism, excessive executive power,[153][154][155] and decreased power of the opposition.[156] Within English-speaking Western democracies, "protection-based" attitudes combining cultural conservatism and leftist economic attitudes were the strongest predictor of support for authoritarian modes of governance.[157]

Aristotle contrasted rule by the many (democracy/timocracy) with rule by the few (oligarchy/aristocracy/elitism) and with rule by a single person (tyranny/autocracy/absolute monarchy).
He also thought that there was a good and a bad variant of each system (he considered democracy to be the degenerate counterpart to timocracy).[158][159]

A common view among early and Renaissance republican theorists was that democracy could only survive in small political communities.[160] Heeding the lessons of the Roman Republic's shift to monarchism as it grew larger, these theorists held that the expansion of territory and population inevitably led to tyranny.[160] Democracy was therefore highly fragile and rare historically, as it could only survive in small political units, which due to their size were vulnerable to conquest by larger political units.[160] Montesquieu famously said, "if a republic is small, it is destroyed by an outside force; if it is large, it is destroyed by an internal vice."[160] Rousseau asserted, "It is, therefore, the natural property of small states to be governed as a republic, of middling ones to be subject to a monarch, and of large empires to be swayed by a despotic prince."[160]

Among modern political theorists, there are different fundamental conceptions of democracy. The theory of aggregative democracy claims that the aim of the democratic process is to solicit citizens' preferences and aggregate them together to determine what social policies society should adopt. Proponents of this view therefore hold that democratic participation should primarily focus on voting, where the policy with the most votes gets implemented. Different variants of aggregative democracy exist. According to the minimalist conception, elections are a mechanism for competition between politicians. Joseph Schumpeter articulated this view famously in his book Capitalism, Socialism, and Democracy.[161] Contemporary proponents of minimalism include William H. Riker, Adam Przeworski, and Richard Posner. According to the median voter theorem, governments will tend to produce laws and policies close to the views of the median voter, with half of the electorate to their left and the other half to their right. Anthony Downs suggests that ideological political parties are necessary to act as a mediating broker between individuals and governments. Downs laid out this view in his 1957 book An Economic Theory of Democracy.[162]
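The median voter claim above has a simple numerical reading: if voters have ideal points on a single left–right axis and each sincerely votes for the platform nearest their own position, a platform located at the median beats any platform further from it. The sketch below is a toy illustration of that logic under those one-dimensional, sincere-voting assumptions; the voter positions are invented for the example.

```python
import statistics

def majority_winner(voters, platform_a, platform_b):
    """Return the platform preferred by a majority of voters, assuming each
    voter sincerely picks the platform closest to their own ideal point."""
    votes_a = sum(1 for v in voters if abs(v - platform_a) < abs(v - platform_b))
    votes_b = len(voters) - votes_a  # ties are counted for B, which only hurts A
    return platform_a if votes_a > votes_b else platform_b

# Voter ideal points on a 0-10 left/right scale (illustrative numbers).
voters = [1.0, 2.5, 4.0, 5.5, 6.0, 7.5, 9.0]
median = statistics.median(voters)  # 5.5

# A platform at the median defeats a platform positioned away from it, which is
# why office-seeking parties are pulled toward the median voter's position.
print(majority_winner(voters, platform_a=median, platform_b=8.0))  # prints 5.5
```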
According to the theory of direct democracy, on the other hand, citizens should vote directly, not through their representatives, on legislative proposals. Proponents of direct democracy offer varied reasons to support this view: political activity can be valuable in itself, it socialises and educates citizens, and popular participation can check powerful elites. In this view, citizens do not rule themselves unless they directly decide laws and policies.

Robert A. Dahl argues that the fundamental democratic principle is that, when it comes to binding collective decisions, each person in a political community is entitled to have his or her interests given equal consideration (not necessarily that all people are equally satisfied by the collective decision). He uses the term polyarchy to refer to societies in which there exists a certain set of institutions and procedures which are perceived as leading to such democracy. First and foremost among these institutions is the regular occurrence of free and open elections which are used to select representatives who then manage all or most of the public policy of the society. However, these polyarchic procedures may not create a full democracy if, for example, poverty prevents political participation.[163] Similarly, Ronald Dworkin argues that "democracy is a substantive, not a merely procedural, ideal."[164]

Deliberative democracy is based on the notion that democracy is government by deliberation. Unlike aggregative democracy, deliberative democracy holds that, for a democratic decision to be legitimate, it must be preceded by authentic deliberation, not merely the aggregation of preferences that occurs in voting. Authentic deliberation is deliberation among decision-makers that is free from distortions of unequal political power, such as power a decision-maker obtained through economic wealth or the support of interest groups.[165][166][167] If the decision-makers cannot reach consensus after authentically deliberating on a proposal, then they vote on the proposal using a form of majority rule. Citizens' assemblies are considered by many scholars as practical examples of deliberative democracy,[168][169][170] with a recent OECD report identifying citizens' assemblies as an increasingly popular mechanism to involve citizens in governmental decision-making.[171]

Measurement of democracy varies according to these different fundamental conceptions. Minimalist evaluations focus on free and fair elections,[161] while maximalist evaluations consider additional values, such as human rights, deliberation, economic outcomes or state capacity.[173] Democracy indices are quantitative and comparative assessments of the state of democracy[174] for different countries according to various definitions of democracy.[175] The indices differ in whether they are categorical, classifying countries into democracies, hybrid regimes, and autocracies,[176][177] or continuous.[178] The quantitative nature of democracy indices enables data-analytical approaches for studying the causal mechanisms of regime transformation processes.

Democracy has taken a number of forms, both in theory and practice. Some varieties of democracy provide better representation and more freedom for their citizens than others.[182][183] However, if any democracy is not structured to prohibit the government from excluding the people from the legislative process, or any branch of government from altering the separation of powers in its favour, then a branch of the system can accumulate too much power and destroy the democracy.[184][185][186] The following kinds of democracy are not exclusive of one another: many specify details of aspects that are independent of one another and can co-exist in a single system.

Several variants of democracy exist, but there are two basic forms, both of which concern how the whole body of all eligible citizens executes its will. One form is direct democracy, in which all eligible citizens participate actively in political decision-making, for example by voting on policy initiatives directly.[187] In most modern democracies, the whole body of eligible citizens remains the sovereign power, but political power is exercised indirectly through elected representatives; this is called a representative democracy. Direct democracy is a political system in which citizens participate in decision-making personally, rather than relying on intermediaries or representatives.
A direct democracy gives the voting population the power to decide matters of policy and law themselves. Within modern-day representative governments, certain electoral tools like referendums, citizens' initiatives and recall elections are referred to as forms of direct democracy.[188] However, some advocates of direct democracy argue for local assemblies of face-to-face discussion. Direct democracy as a government system currently exists in the Swiss cantons of Appenzell Innerrhoden and Glarus,[189] the Rebel Zapatista Autonomous Municipalities,[190] communities affiliated with the CIPO-RFM,[191] the Bolivian city councils of FEJUVE,[192] and Kurdish cantons of Rojava.[193]

Some modern democracies that are predominantly representative in nature also rely heavily upon forms of political action that are directly democratic. These democracies, which combine elements of representative democracy and direct democracy, are termed semi-direct democracies or participatory democracies. Examples include Switzerland and some U.S. states, where frequent use is made of referendums and initiatives.

The Swiss confederation is a semi-direct democracy.[189] At the federal level, citizens can propose changes to the constitution (federal popular initiative) or ask for a referendum to be held on any law voted by the parliament.[189] Between January 1995 and June 2005, Swiss citizens voted 31 times, to answer 103 questions (during the same period, French citizens participated in only two referendums).[189] In the past 120 years, however, fewer than 250 initiatives have been put to referendum.[194] The US state of California also makes extensive use of referendums, and has more than 20 million voters.[195]

In New England, town meetings are often used, especially in rural areas, to manage local government. This creates a hybrid form of government, with a local direct democracy and a representative state government. For example, most Vermont towns hold annual town meetings in March in which town officers are elected, budgets for the town and schools are voted on, and citizens have the opportunity to speak and be heard on political matters.[196]

The use of a lot system, a characteristic of Athenian democracy, is a feature of some versions of direct democracy. In this system, important governmental and administrative tasks are performed by citizens picked by lottery.[197]

Representative democracy involves the election of government officials by the people being represented. If the head of state is also democratically elected then it is called a democratic republic.[198] The most common mechanisms involve election of the candidate with a majority or a plurality of the votes. Most Western countries have representative systems.[189]

Representatives may be elected by a particular district (or constituency), or may represent the entire electorate through a proportional system, with some systems using a combination of the two. Some representative democracies also incorporate elements of direct democracy, such as referendums. A characteristic of representative democracy is that while the representatives are elected by the people to act in the people's interest, they retain the freedom to exercise their own judgement as to how best to do so.
Such reasons have driven criticism of representative democracy,[199][200] with critics pointing out the contradictions between representation mechanisms and democracy.[201][202]

Parliamentary democracy is a representative democracy in which the government is appointed by, and can be dismissed by, representatives, as opposed to "presidential rule", wherein the president is both head of state and head of government and is elected by the voters. Under a parliamentary democracy, government is exercised by delegation to an executive ministry and subject to ongoing review, checks and balances by the legislative parliament elected by the people.[203][204][205][206]

In a parliamentary system, the prime minister may be dismissed by the legislature at any point in time for not meeting the expectations of the legislature. This is done through a vote of no confidence, by which the legislature decides whether or not to remove the prime minister from office with majority support for dismissal.[207] In some countries, the prime minister can also call an election at any point in time, typically when the prime minister believes that they are in good favour with the public and likely to be re-elected. In other parliamentary democracies, extra elections are virtually never held, a minority government being preferred until the next ordinary elections. An important feature of parliamentary democracy is the concept of the "loyal opposition". The essence of the concept is that the second largest political party (or opposition) opposes the governing party (or coalition), while still remaining loyal to the state and its democratic principles.

Presidential democracy is a system in which the public elects the president through an election. The president serves as both the head of state and head of government, controlling most of the executive powers. The president serves for a specific term and cannot exceed that amount of time. The legislature often has only a limited ability to remove a president from office. Elections typically have a fixed date and are not easily changed. The president has direct control over the cabinet, specifically appointing the cabinet members.[207]

The executive usually has the responsibility to execute or implement legislation and may have limited legislative powers, such as a veto. However, a legislative branch passes legislation and budgets. This provides some measure of separation of powers. In consequence, however, the president and the legislature may end up in the control of separate parties, allowing one to block the other and thereby interfere with the orderly operation of the state. This may be the reason why presidential democracy is not very common outside the Americas, Africa, and Central and Southeast Asia.[207]

A semi-presidential system is a system of democracy in which the government includes both a prime minister and a president. The particular powers held by the prime minister and president vary by country.[207]

Many countries, such as the United Kingdom, Spain, the Netherlands, Belgium, the Scandinavian countries, Thailand, Japan and Bhutan, turned powerful monarchs into constitutional monarchs (often gradually) with limited or symbolic roles. For example, in the predecessor states to the United Kingdom, constitutional monarchy began to emerge and has continued uninterrupted since the Glorious Revolution of 1688 and the passage of the Bill of Rights 1689.[28][89] Strongly limited constitutional monarchies, such as the United Kingdom, have been referred to as crowned republics by writers such as H. G.
Wells.[208] In other countries, the monarchy was abolished along with the aristocratic system (as inFrance,China,Russia,Germany,Austria,Hungary,Italy,Greece, andEgypt). An elected person, with or without significant powers, became the head of state in these countries. Elite upper houses of legislatures, which often had lifetime or hereditary tenure, were common in many states. Over time, these either had their powers limited (as with the BritishHouse of Lords) or else became elective and remained powerful (as with theAustralian Senate). The termrepublichas many different meanings, but today often refers to a representative democracy with an electedhead of state, such as apresident, serving for a limited term, in contrast to states with a hereditarymonarchas a head of state, even if these states also are representative democracies with an elected or appointedhead of governmentsuch as aprime minister.[209] TheFounding Fathers of the United Statesoften criticiseddirect democracy, which in their view often came without the protection of a constitution enshrining inalienable rights;James Madisonargued, especially inThe FederalistNo. 10, that what distinguished a directdemocracyfrom arepublicwas that the former became weaker as it got larger and suffered more violently from the effects of faction, whereas a republic could get stronger as it got larger and combats faction by its very structure.[210] Professors Richard Ellis ofWillamette Universityand Michael Nelson ofRhodes Collegeargue that much constitutional thought, from Madison to Lincoln and beyond, has focused on "the problem of majority tyranny". They conclude, "The principles of republican government embedded in the Constitution represent an effort by the framers to ensure that the inalienable rights of life, liberty, and the pursuit of happiness would not be trampled by majorities."[211]What was critical to American values,John Adamsinsisted,[212]was that the government be "bound by fixed laws, which the people have a voice in making, and a right to defend." As Benjamin Franklin was exiting after writing the US Constitution,Elizabeth Willing Powel[213]asked him "Well, Doctor, what have we got—a republic or a monarchy?". He replied "A republic—if you can keep it."[214] A liberal democracy is a representative democracy which enshrines aliberalpolitical philosophy, where the ability of the elected representatives to exercise decision-making power is subject to therule of law, moderated by a constitution or laws such as the protection of the rights and freedoms of individuals, and constrained on the extent to which the will of the majority can be exercised against the rights of minorities.[215] Socialistthought has several different views on democracy, for examplesocial democracyordemocratic socialism. Many democratic socialists and social democrats believe in a form ofparticipatory,industrial,economicand/orworkplace democracycombined with arepresentative democracy. Marxist theorysupports a democratic society centering theworking class.[216]Some Marxists andTrotskyistsbelieve indirect democracyorworkers' councils(which are sometimes calledsoviets). 
This system can begin with workplace democracy and can manifest itself as soviet democracy or dictatorship of the proletariat.[217][218] Trotskyist groups have interpreted socialist democracy to be synonymous with multi-party far-left representation, autonomous union organizations, workers' control of production,[219] internal party democracy and the mass participation of the working masses.[220][221] Some communist parties support a soviet republic with democratic centralism.[222] Within democracy in Marxism there can be hostility to what is commonly called "liberal democracy". Anarchists are split in this domain, depending on whether they believe that majority rule is tyrannical or not. To many anarchists, the only form of democracy considered acceptable is direct democracy. Pierre-Joseph Proudhon argued that the only acceptable form of direct democracy is one in which it is recognised that majority decisions are not binding on the minority, even when unanimous.[223] However, anarcho-communist Murray Bookchin criticised individualist anarchists for opposing democracy,[224] and said that "majority rule" is consistent with anarchism.[225] Some anarcho-communists oppose the majoritarian nature of direct democracy, feeling that it can impede individual liberty, and opt in favour of a non-majoritarian form of consensus democracy, similar to Proudhon's position on direct democracy.[226] Sortition is the process of choosing decision-making bodies by random selection. These bodies can be more representative of the opinions and interests of the people at large than an elected legislature or other decision-maker. The technique was in widespread use in Athenian Democracy and Renaissance Florence[227] and is still used in modern jury selection and citizens' assemblies. Consociational democracy, also called consociationalism, is a form of democracy based on a power-sharing formula among elites representing the social groups within a society. In 1969, Arend Lijphart argued that this would stabilize democracies divided by factions.[228] A consociational democracy allows for simultaneous majority votes in two or more ethno-religious constituencies, and policies are enacted only if they gain majority support from both or all of them. The qualified majority voting rule in the European Council of Ministers is a consociational approach for supranational democracies. This system, set out in the Treaty of Rome, allocates votes to member states in part according to their population, but heavily weighted in favour of the smaller states. A consociational democracy requires consensus among representatives, while a consensus democracy requires consensus of the electorate. Consensus democracy[233] requires consensus decision-making and supermajorities to obtain broader support than a simple majority. In contrast, in majoritarian democracy minority opinions can potentially be ignored by vote-winning majorities.[234] Constitutions typically require consensus or supermajorities.[235] Inclusive democracy is a political theory and political project that aims for direct democracy in all fields of social life: political democracy in the form of face-to-face assemblies which are confederated, economic democracy in a stateless, moneyless and marketless economy, democracy in the social realm, i.e. self-management in places of work and education, and ecological democracy which aims to reintegrate society and nature.
The theoretical project of inclusive democracy emerged from the work of political philosopherTakis Fotopoulosin "Towards An Inclusive Democracy" and was further developed in the journalDemocracy & Natureand its successorThe International Journal of Inclusive Democracy.[237][238][239][240][241][242] Aparpolityor participatory polity is a theoretical form of democracy that is ruled by anested councilstructure. The guiding philosophy is that people should have decision-making power in proportion to how much they are affected by the decision. Local councils of 25–50 people are completely autonomous on issues that affect only them, and these councils send delegates to higher level councils who are again autonomous regarding issues that affect only the population affected by that council. A council court of randomly chosen citizens serves as a check on thetyranny of the majority, and rules on which body gets to vote on which issue. Delegates may vote differently from how their sending council might wish but are mandated to communicate the wishes of their sending council. Delegates are recallable at any time. Referendums are possible at any time via votes of lower-level councils, however, not everything is a referendum as this is most likely a waste of time. A parpolity is meant to work in tandem with aparticipatory economy. Radical democracyis based on the idea that there are hierarchical and oppressive power relations that exist in society. Radical democracy's role is to make visible and challenge those relations by allowing for difference, dissent and antagonisms in decision-making processes.[248] Cosmopolitan democracy, also known asglobal democracyorworld federalism, is a political system in which democracy is implemented on a global scale, either directly or through representatives. An important justification for this kind of system is that the decisions made in national or regional democracies often affect people outside the constituency who, by definition, cannot vote. By contrast, in a cosmopolitan democracy, the people who are affected by decisions also have a say in them.[250] According to its supporters, any attempt to solve global problems is undemocratic without some form of cosmopolitan democracy. The general principle of cosmopolitan democracy is to expand some or all of the values and norms of democracy, including the rule of law; the non-violent resolution of conflicts; and equality among citizens, beyond the limits of the state. To be fully implemented, this would require reforming existinginternational organisations, e.g., theUnited Nations, as well as the creation of new institutions such as aWorld Parliament, which ideally would enhance public control over, and accountability in, international politics. Cosmopolitan democracy has been promoted, among others, by physicist Albert Einstein,[251]writer Kurt Vonnegut, columnistGeorge Monbiot, and professorsDavid HeldandDaniele Archibugi.[252]The creation of theInternational Criminal Courtin 2003 was seen as a major step forward by many supporters of this type of cosmopolitan democracy. Creative democracy is advocated by American philosopherJohn Dewey. The main idea about creative democracy is that democracy encourages individual capacity building and the interaction among the society. Dewey argues that democracy is a way of life in his work of "Creative Democracy: The Task Before Us"[253]and an experience built on faith in human nature, faith in human beings, and faith in working with others. 
Democracy, in Dewey's view, is amoral idealrequiring actual effort and work by people; it is not an institutional concept that exists outside of ourselves. "The task of democracy", Dewey concludes, "is forever that of creation of a freer and more humane experience in which all share and to which all contribute". Guided democracy is a form of democracy that incorporates regular popular elections, but which often carefully "guides" the choices offered to the electorate in a manner that may reduce the ability of the electorate to truly determine the type of government exercised over them. Such democracies typically have only one central authority which is often not subject to meaningful public review by any other governmental authority. Russian-style democracy has often been referred to as a "guided democracy".[254]Russian politicians have referred to their government as having only one center of power/ authority, as opposed to most other forms of democracy which usually attempt to incorporate two or more naturally competing sources of authority within the same government.[255] Aside from the public sphere, similar democratic principles and mechanisms of voting and representation have been used to govern other kinds of groups. Manynon-governmental organisationsdecide policy and leadership by voting. Mosttrade unionsandcooperativesare governed by democratic elections.Corporationsare ultimately governed by theirshareholdersthroughshareholder democracy. Corporations may also employ systems such asworkplace democracyto handle internal governance.Amitai Etzionihas postulated a system that fuses elements of democracy withsharia law, termed Islamic democracy orIslamocracy.[256]There is also a growing number ofDemocratic educationalinstitutions such asSudbury schoolsthat are co-governed by students and staff. Shareholder democracy is a concept relating to the governance of corporations by their shareholders. In the United States, shareholders are typically granted voting rights according to theone share, one voteprinciple. Shareholders may vote annually to elect the company'sboard of directors, who themselves may choose the company'sexecutives. The shareholder democracy framework may be inaccurate for companies which have differentclasses of stockthat further alter the distribution of voting rights. Several justifications for democracy have been postulated.[257] Social contract theoryargues that thelegitimacy of governmentis based onconsent of the governed, i.e. an election, and that political decisions must reflect thegeneral will. Some proponents of the theory likeJean-Jacques Rousseauadvocate for adirect democracyon this basis.[258] Condorcet's jury theoremis logical proof that if each decision-maker has a better than chance probability of making the right decision, then having the largest number of decision-makers, i.e. a democracy, will result in the best decisions. This has also been argued by theories ofthe wisdom of the crowd. Democracy tends to improveconflict resolution.[259] InWhy Nations Fail, economistsDaron AcemogluandJames A. Robinsonargue that democracies are more economically successful because undemocratic political systems tend to limit markets and favormonopoliesat the expense of thecreative destructionwhich is necessary for sustainedeconomic growth. A 2019 study by Acemoglu and others estimated that countries switching to democratic from authoritarian rule had on average a 20% higher GDP after 25 years than if they had remained authoritarian. 
The study examined 122 transitions to democracy and 71 transitions to authoritarian rule, occurring from 1960 to 2010.[260]Acemoglu said this was because democracies tended to invest more in health care and human capital, and reduce special treatment of regime allies.[261] A 2023 study analyzed the long-term effects of democracy on economic prosperity using new data on GDP per capita and democracy for a dataset between 1789 and 2019. The results indicate that democracy substantially increases economic development.[262] A democratic transition describes a phase in a country'spolitical system, often created as a result of an incomplete change from anauthoritarianregime to a democratic one (or vice versa).[263][264] Several philosophers and researchers have outlined historical and social factors seen as supporting the evolution of democracy. Other commentators have mentioned the influence of economic development.[267]In a related theory,Ronald Inglehartsuggests that improved living-standards in modern developed countries can convince people that they can take their basic survival for granted, leading to increased emphasis onself-expression values, which correlates closely with democracy.[268][269] Douglas M. Gibler and Andrew Owsiak in their study argued about the importance of peace and stable borders for the development of democracy. It has often been assumed thatdemocracy causes peace, but this study shows that, historically, peace has almost always predated the establishment of democracy.[270] Carroll Quigleyconcludes that the characteristics of weapons are the main predictor of democracy:[271][272]Democracy—this scenario—tends to emerge only when the best weapons available are easy for individuals to obtain and use.[273]By the 1800s, guns were the best personal weapons available, and in the United States of America (already nominally democratic), almost everyone could afford to buy a gun, and could learn how to use it fairly easily. Governments could not do any better: it became the age of mass armies of citizen soldiers with guns.[273]Similarly, Periclean Greece was an age of the citizen soldier and democracy.[274] Other theories stressed the relevance ofeducationand ofhuman capital—and within them ofcognitive abilityto increasing tolerance, rationality, political literacy and participation. Two effects of education and cognitive ability are distinguished:[275][need quotation to verify][276][277] Evidence consistent with conventional theories of why democracy emerges and is sustained has been hard to come by. Statistical analyses have challengedmodernisation theoryby demonstrating that there is no reliable evidence for the claim that democracy is more likely to emerge when countries become wealthier, more educated, or less unequal.[278]In fact, empirical evidence shows that economic growth and education may not lead to increased demand for democratization as modernization theory suggests: historically, most countries attained high levels of access to primary education well before transitioning to democracy.[279]Rather than acting as a catalyst for democratization, in some situations education provision may instead be used by non-democratic regimes to indoctrinate their subjects and strengthen their power.[279] The assumed link between education and economic growth is called into question when analyzing empirical evidence. Across different countries, the correlation between education attainment and math test scores is very weak (.07). 
A similarly weak relationship exists between per-pupil expenditures and math competency (.26). Additionally, historical evidence suggests that average human capital (measured using literacy rates) of the masses does not explain the onset of industrialization in France from 1750 to 1850 despite arguments to the contrary.[280]Together, these findings show that education does not always promote human capital and economic growth as is generally argued to be the case. Instead, the evidence implies that education provision often falls short of its expressed goals, or, alternatively, that political actors use education to promote goals other than economic growth and development. Some scholars have searched for the "deep" determinants of contemporary political institutions, be they geographical or demographic.[281][282] An example of this is the disease environment. Places with different mortality rates had different populations and productivity levels around the world. For example, in Africa, thetsetse fly—which afflicts humans and livestock—reduced the ability of Africans to plough the land. This made Africa less settled. As a consequence, political power was less concentrated.[283]This also affected the colonial institutions European countries established in Africa.[284]Whether colonial settlers could live or not in a place made them develop different institutions which led to different economic and social paths. This also affected the distribution of power and the collective actions people could take. As a result, some African countries ended up having democracies and othersautocracies. An example of geographical determinants for democracy is having access to coastal areas and rivers. This natural endowment has a positive relation with economic development thanks to the benefits oftrade.[285]Trade brought economic development, which in turn, broadened power. Rulers wanting to increase revenues had to protect property-rights to create incentives for people to invest. As more people had more power, more concessions had to be made by the ruler and in many[quantify]places this process lead to democracy. These determinants defined the structure of the society moving the balance of political power.[286] Robert Michels asserts that although democracy can never be fully realised, democracy may be developed automatically in the act of striving for democracy: The peasant in the fable, when on his deathbed, tells his sons that a treasure is buried in the field. After the old man's death the sons dig everywhere in order to discover the treasure. They do not find it. But their indefatigable labor improves the soil and secures for them a comparative well-being. The treasure in the fable may well symbolise democracy.[287] Democracy in modern times has almost always faced opposition from the previously existing government, and many times it has faced opposition from social elites. The implementation of a democratic government from a non-democratic state is typically brought by peaceful or violentdemocratic revolution. Steven Levitskysays: “It's not up to voters to defend a democracy. That’s asking far, far too much of voters, to cast their ballot on the basis of some set of abstract principles or procedures. With the exception of a handful of cases, voters never, ever — in any society, in any culture — prioritize democracy over all else. Individual voters worry about much more mundane things, as is their right. 
It is up to élites and institutions to protect democracy — not voters.”[301] Some democratic governments have experienced suddenstate collapseandregime changeto an undemocratic form of government. Domestic military coups or rebellions are the most common means by which democratic governments have been overthrown.[302](SeeList of coups and coup attempts by countryandList of civil wars.) Examples include theSpanish Civil War, theCoup of 18 Brumairethat ended theFrench First Republic, and the28 May 1926 coup d'étatwhich ended theFirst Portuguese Republic. Some military coups are supported by foreign governments, such as the1954 Guatemalan coup d'étatand the1953 Iranian coup d'état. Other types of a sudden end to democracy include: Democratic backslidingcan end democracy in a gradual manner, by increasing emphasis onnational securityand erodingfree and fair elections,freedom of expression,independence of the judiciary,rule of law. A famous example is theEnabling Act of 1933, which lawfully ended democracy inWeimar Germanyand marked the transition toNazi Germany.[304] Temporary or long-termpolitical violenceand government interference can preventfree and fair elections, which erode the democratic nature of governments. This has happened on a local level even in well-established democracies like the United States; for example, theWilmington insurrection of 1898and African-Americandisfranchisement after the Reconstruction era. Criticism has been a key part of democracy, its functions, and its development throughout history. Some critics call upon the constitutional regime to be true to its own highestprinciples; others reject the values promoted byconstitutional democracy.[305] Platofamously opposed democracy, arguing for a 'government of the best qualified'.James Madisonextensively studied the historic attempts at and arguments on democracy in his preparation for theConstitutional Convention, andWinston Churchillremarked that "No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time."[306] The theory of democracy relies on the implicit assumption that voters are well informed aboutsocial issues, policies, and candidates so that they can make a truly informed decision. Since the late 20th century there has been a growing concern that voters may be poorly informed due to thenews media's focusing more on entertainment and gossip and less on serious journalistic research on political issues.[310][311] The media professors Michael Gurevitch andJay Blumlerhave proposed a number of functions that the mass media are expected to fulfill in a democracy:[312] This proposal has inspired a lot of discussions over whether the news media are actually fulfilling the requirements that a well functioning democracy requires.[313]Commercial mass media are generally not accountable to anybody but their owners, and they have no obligation to serve a democratic function.[313][314]They are controlled mainly by economicmarket forces. Fierce economic competition may force the mass media to divert themselves from any democratic ideals and focus entirely on how to survive the competition.[315][316] Thetabloidizationand popularization of the news media is seen in an increasing focus on human examples rather than statistics and principles. There is more focus on politicians as personalities and less focus on political issues in the popular media. 
Election campaigns are covered more as horse races and less as debates about ideologies and issues. The dominant media focus on spin, conflict, and competitive strategies has made voters perceive politicians as egoists rather than idealists. This fosters mistrust and a cynical attitude to politics, less civic engagement, and less interest in voting.[317][318][319] The ability to find effective political solutions to social problems is hampered when problems tend to be blamed on individuals rather than on structural causes.[318] This person-centered focus may have far-reaching consequences not only for domestic problems but also for foreign policy, when international conflicts are blamed on foreign heads of state rather than on political and economic structures.[320][321] A strong media focus on fear and terrorism has allowed military logic to penetrate public institutions, leading to increased surveillance and the erosion of civil rights.[322] The responsiveness[323] and accountability of the democratic system are compromised when a lack of access to substantive, diverse, and undistorted information handicaps citizens' ability to evaluate the political process.[314][319] The fast pace and trivialization of the competitive news media are dumbing down the political debate. Thorough and balanced investigation of complex political issues does not fit into this format. Political communication is characterized by short time horizons, short slogans, simple explanations, and simple solutions. This is conducive to political populism rather than serious deliberation.[314][322] Commercial mass media are often differentiated along the political spectrum so that people can hear mainly opinions that they already agree with. Too much controversy and too many diverse opinions are not always profitable for the commercial news media.[324] Political polarization emerges when different people read different news and watch different TV channels. This polarization has been worsened by the emergence of social media, which allow people to communicate mainly with groups of like-minded people, the so-called echo chambers.[325] Extreme political polarization may undermine trust in democratic institutions, leading to erosion of civil rights and free speech, and in some cases even reversion to autocracy.[326] Many media scholars have discussed non-commercial news media with public service obligations as a means to improve the democratic process by providing the kind of political content that a free market does not provide.[327][328] The World Bank has recommended public service broadcasting in order to strengthen democracy in developing countries. These broadcasting services should be accountable to an independent regulatory body that is adequately protected from interference from political and economic interests.[329] Public service media have an obligation to provide reliable information to voters. Many countries have publicly funded radio and television stations with public service obligations, especially in Europe and Japan,[330] while such media are weak or non-existent in other countries, including the US.[331] Several studies have shown that the stronger the dominance of commercial broadcast media over public service media, the less the amount of policy-relevant information in the media and the greater the focus on horse race journalism, personalities, and the peccadillos of politicians.
Public service broadcasters are characterized by more policy-relevant information and more respect for journalistic norms and impartiality than the commercial media. However, the trend of deregulation has put the public service model under increased pressure from competition with commercial media.[330][332][333] The emergence of the internet and social media has profoundly altered the conditions for political communication. Social media have given ordinary citizens easy access to voice their opinions and share information while bypassing the filters of the large news media. This is often seen as an advantage for democracy.[334] The new possibilities for communication have fundamentally changed the way social movements and protest movements operate and organize. The internet and social media have provided powerful new tools for democracy movements in developing countries and emerging democracies, enabling them to bypass censorship, voice their opinions, and organize protests.[335][336] A serious problem with social media is that they have no truth filters. The established news media have to guard their reputation for trustworthiness, while ordinary citizens may post unreliable information.[335] In fact, studies show that false stories go viral more readily than true stories.[337][338] The proliferation of false stories and conspiracy theories may undermine public trust in the political system and public officials.[338][326] Reliable information sources are essential for the democratic process. Less democratic governments rely heavily on censorship, propaganda, and misinformation in order to stay in power, while independent sources of information are able to undermine their legitimacy.[339] Democracy promotion can increase the quality of already existing democracies, reduce political apathy, and reduce the chance of democratic backsliding. Democracy promotion measures include voting advice applications,[340] participatory democracy,[341] increasing youth suffrage, increasing civic education,[342] reducing barriers to entry for new political parties,[343] increasing proportionality[344] and reducing presidentialism.[345]
https://en.wikipedia.org/wiki/Democracy
In journalism, mainstream media (MSM) is a term and abbreviation used to refer collectively to the various large mass news media that influence many people and both reflect and shape prevailing currents of thought.[1] The term is used in contrast with alternative media. The term is often used for large news conglomerates, including newspapers and broadcast media, that underwent successive mergers in many countries. The concentration of media ownership has raised concerns about a homogenization of viewpoints presented to news consumers. Consequently, the term mainstream media has been used in conversation and the blogosphere, sometimes in oppositional, pejorative or dismissive senses, in discussion of the mass media and media bias. In the United States, movie production has been dominated by major studios since the early 20th century; before that, there was a period in which Edison's Trust monopolized the industry.[citation needed] In the early 21st century, the music and television industries were subject to media consolidation, with Sony Music Entertainment's parent company merging its music division with Bertelsmann AG's BMG to form Sony BMG, and Warner Bros. Entertainment's The WB and CBS Corp.'s UPN merging to form The CW. In the case of Sony BMG, there existed a "Big Five", later "Big Four", of major record companies, while The CW's creation was an attempt to consolidate ratings and stand up to the "Big Four" of American network (terrestrial) television (although The CW was actually partially owned by one of the Big Four, CBS). In television, the vast majority of broadcast and basic cable networks, over a hundred in all, are controlled by eight corporations: Fox Corporation, The Walt Disney Company (which includes the ABC, ESPN, FX and Disney brands), National Amusements (which owns Paramount Global), Comcast (which owns NBCUniversal), Warner Bros. Discovery, E. W. Scripps Company, Altice USA, or some combination thereof.[2] Over time the rate of media mergers has increased, while the number of media outlets has also increased. This has resulted in a higher concentration of media ownership, with fewer companies owning more media outlets.[3] Some critics, such as Ben Bagdikian, have assailed this concentration of ownership, arguing that large media acquisitions limit the information accessible to the public.[4] Other commentators, such as Ben Compaine and Jack Shafer, find Bagdikian's critique overblown.[4] Shafer noted that U.S. media consumers have a wide variety of news sources, including independent national and local sources.[4] Compaine argues that, based on economic metrics such as the Herfindahl–Hirschman Index, the media industry is not very highly concentrated and did not become more concentrated during the 1990s and 2000s.[4] Compaine also points out that most media mergers are not purely acquisitions, but also include divestitures.[4] One of the biggest mergers and acquisitions in the mainstream media world was Disney's acquisition of 21st Century Fox and all of its assets. One of the main things accomplished by this merger was completing the rights to the rest of the Marvel movie franchise: previously, Disney did not have the rights to franchises such as X-Men and certain Spider-Man movie rights, but with the acquisition it now does. 21st Century Fox was purchased for 71.3 billion dollars in March 2019. As of 2020, the remaining Murdoch media assets were held by two companies, with publishing assets and Australian media assets going to News Corp, and broadcasting assets going to Fox Corporation.[8] Trust in the media declined in the 1970s, and then again in the 2000s.
Since the 2000s, distrust in the media has been polarized, as Republicans have grown substantially more distrustful of the media than Democrats.[12] As of 2022, only 56% of 18-to-27-year-olds reported that they trust information from US-based mainstream media.[13] Growing distrust of the media is linked to a host of different indicators, with those who subscribe to more radical ideologies or populist followings more likely to harbor a distrust of the media.[14] Other characteristics such as age, race, and gender have also been found to produce different levels of trust in the media regarding specific issues.[14] In the UK, in 1922, after the closure of many radio stations, the British Broadcasting Company started its first daily radio transmissions and began to grow an audience.[15] Later that year John Reith, a Scottish engineer, was appointed the first General Manager of the BBC.[15] On 1 January 1927, the BBC was fully established by Royal Charter and renamed the British Broadcasting Corporation, with Reith as its first Director-General.[15] In November 1936 the BBC began to expand into television broadcasting and was the first broadcaster to offer a regularly scheduled TV service.[16] Today the BBC is one of two chartered public broadcasting companies in the United Kingdom. The second is ITV (Independent Television), which was established in 1955 as the first public commercial television company, following the Television Act 1954, in an effort to break up the monopoly the BBC held on television broadcasting; it gained fifteen regional broadcasting licenses in less than twenty years.[17][18] Today the BBC and ITV are the two free-to-air digital services offered to everyone in the United Kingdom and are each other's biggest competitors. The BBC has nine national television channels (including BBC Three, the first channel to switch from broadcast television to online), an interactive channel, ten national and forty local radio stations, BBC Online, and BBC Worldwide.[19] ITV currently holds thirteen of the fifteen regional broadcasting licenses in the United Kingdom, which carry its multiple channels, including ITV, ITV Hub, ITV2, ITVBe, ITV3, ITV4, CITV, ITV Encore, BritBox (a video-on-demand service operated in collaboration with the BBC to bring British television content to the United States and Canada), and Cirkus, its own video-on-demand service.[20] News consumption has shifted with age demographics alongside the rise of digital platforms such as social media. Traditional outlets like television and newspapers, commonly associated with "mainstream media", face declining audiences as younger users increasingly turn to platforms such as TikTok, Instagram, and Facebook for news. According to the Pew Research Center,[21] these platforms are a primary source of information for Millennials and Gen Z, a change that moves away from traditional media towards more online-focused platforms. This shift in consumer platform preferences has led to a crisis in the smaller local news scene, with an estimated average of two newspapers going out of business per week.[22] Larger mainstream media companies with greater budgets will also be forced to navigate the technological shift, with large news companies such as The New York Times and Fox News having dedicated teams working on high-quality online websites.
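Compaine's concentration claim mentioned earlier rests on a measurable quantity. The Herfindahl–Hirschman Index is simply the sum of squared market shares (expressed in percentage points), ranging from near 0 for a fragmented market up to 10,000 for a monopoly; under the commonly used U.S. antitrust guidelines, values below 1,500 are treated as unconcentrated and values above 2,500 as highly concentrated. The sketch below uses invented market shares purely for illustration, not actual media-industry data.

```python
def herfindahl_hirschman_index(market_shares_percent):
    """Sum of squared market shares (in percent); ranges from near 0 up to 10,000."""
    return sum(share ** 2 for share in market_shares_percent)

# Illustrative only: eight hypothetical firms splitting a media market.
shares = [25, 20, 15, 12, 10, 8, 6, 4]
hhi = herfindahl_hirschman_index(shares)

# Conventional U.S. antitrust guideline thresholds.
if hhi < 1500:
    label = "unconcentrated"
elif hhi <= 2500:
    label = "moderately concentrated"
else:
    label = "highly concentrated"

print(f"HHI = {hhi} ({label})")  # HHI = 1610 -> moderately concentrated
```

A single firm with a 100% share would score 10,000, while ten equal firms would score 1,000, which is why low measured values of this index support the claim that an industry is not highly concentrated.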
https://en.wikipedia.org/wiki/Mainstream_media
ThePact of Forgetting(Spanish:Pacto del Olvido) is the political decision by both leftist and rightist parties of Spain to avoid confronting directly the legacy ofFrancoismafter the death ofFrancisco Francoin 1975.[1]The Pact of Forgetting was an attempt to move on from theCivil Warand subsequent repression and to concentrate on the future of Spain.[2]In making a smooth transition from autocracy and totalitarianism to democracy, the Pact ensured that there were no prosecutions for persons responsible for human rights violations or similar crimes committed during the Francoist period. On the other hand, Francoist public memorials, such as the mausoleum of theValley of the Fallen, fell into disuse for official occasions.[3]Also, the celebration of "Day of Victory" during the Franco era was changed to "Armed Forces Day" so respect was paid to bothNationalistandRepublicanparties of the Civil War. The pact underpinned thetransition to democracyof the 1970s[4]and ensured that difficult questions about the recent past were suppressed for fear of endangering 'national reconciliation' and the restoration of liberal-democratic freedoms. Responsibility for the Spanish Civil War, and for the repression that followed, was not to be placed upon any particular social or political group. "In practice, this presupposed suppressing painful memories derived from the post civil war division of the population into 'victors' and 'vanquished'".[5]While many historians accept that the pact served a purpose at the time of transition,[6]there is more controversy as to whether it should still be adhered to. Paul Preston takes the view that Franco had time to impose his own version of history, which still prevents contemporary Spain from "looking upon its recent violent past in an open and honest way".[7]In 2006, two-thirds of Spaniards favored a "fresh investigation into the war".[8] "It is estimated that 400,000 people spent time in prisons, camps, or forced labor battalions".[9]Some historians believe that the repression committed by the Francoist State was most severe and prevalent in the immediate years after theSpanish Civil Warand through the 1940s. During this time of the repression, there was an escalation of torture, illegal detention, and execution. This style of repression remained frequent until the end of theSpanish State. Especially during 1936–1939, Nationalist Forces seized control of cities and towns in the Franco-led military coup and would hunt down any protesters or those who were labeled as a threat to the government and believed to sympathize with the Republican cause.[10]"Waves of these individuals were condemned on mere hearsay without trial, loaded onto trucks, taken to deserted areas outside city boundaries, summarily shot, and buried in mass, shallow graves that began dotting the Spanish countryside in the wake of the advancing Nationalist."[11] Advances in DNA technology gave scope for the identification of the remains of Republicans executed by Franco supporters. The year 2000 saw the foundation of theAssociation for the Recovery of Historical Memorywhich grew out of the quest by a sociologist,Emilio Silva-Barrera, to locate and identify the remains of his grandfather, who was shot by Franco's forces in 1936. Such projects have been the subject of political debate in Spain, and are referenced for example in the 2021 filmParallel Mothers. There have been other notable references to the Civil War in the arts since the year 2000 (for example,Javier Cercas' 2001 novelSoldiers of Salamis). 
However, the subject of the Civil War had not been "off limits" in the arts in previous decades; for example, Francoist repression is referenced in the 1973 filmSpirit of the Beehive,[citation needed]and arguably[by whom?]the pact is mainly a political construct. The clearest and most explicit expression of the Pact is theSpanish 1977 Amnesty Law.[12] The Pact was challenged by the socialist government elected in 2004, which under prime ministerJose Luis Rodriguez Zapateropassed theHistorical Memory Law. Among other measures, the Historical Memory Law rejected the legitimacy of laws passed and trials conducted by the Francoist regime. The Law repealed some Francoist laws and ordered the removal of remainingsymbols of Francoismfrom public buildings.[8] The Historical Memory Law has been criticised by some on the left (for not going far enough) and also by some on the right (for example, as a form of "vengeance").[13]After thePartido Populartook power in 2011 it did not repeal the Historical Memory Law, but it closed the government office dedicated to the exhumation of victims of Francoist repression.[14]UnderMariano Rajoy, the government was not willing to spend public money on exhumations in Spain,[15]although the Partido Popular supported the repatriation of the remains of Spanish soldiers who fought in theBlue Divisionfor Hitler. In 2010 there was a judicial controversy pertaining to the 1977 Spanish Amnesty Law. Spanish judgeBaltasar Garzónchallenged the Pact of Forgetting by saying that those who committedcrimes against humanityduring theSpanish Stateare not subject to the amnesty law or statutes of limitation. Relatives of those who were executed or went missing during the Franco regime demanded justice for their loved ones. Some of those who were targeted and buried in mass graves during the Franco regime were teachers, farmers, shop owners, women who did not marry in church and those on the losing side of war.[16]However, the Spanish Supreme Court challenged the investigations by Garzón. They investigated the judge for alleged abuse of power, knowingly violating the amnesty law, following a complaint from Miguel Bernard, the secretary general of a far-right group in Spain called "Manos Limpias". Bernard had criticized Garzón by saying:[17] [Garzón] cannot prosecute Francoism. It's already history, and only historians can judge that period. He uses justice for his own ego. He thought that, by prosecuting Francoism, he could become the head of the International Criminal Court and even win the Nobel Peace Prize. Although Garzón was eventually cleared of abuse of power in this instance, the Spanish judiciary upheld the Amnesty Law, discontinuing his investigations into Francoist crimes.[7] In 2022 theDemocratic Memory Lawenacted by the government ofPedro Sánchezfurther dealt with the legacy of Francoism and included measures such as to make the government responsible for exhuming and identifying the bodies of those killed by the fascist regime and buried in unmarked graves, to create an official register of victims and to remove a number of remaining Francoist symbols from the country. TheUnited Nationshas repeatedly urged Spain to repeal the amnesty law, for example in 2012,[18]and most recently in 2013.[19]This is on the basis that under international law amnesties do not apply to crimes against humanity. 
According to theInternational Covenant on Civil and Political Rights, Article 7, "no one shall be subjected to torture or to cruel,inhuman or degrading treatmentor punishment".[20]Furthermore, Judge Garzón had drawn attention to Article 15, which does not admit political exceptions to punishing individuals for criminal acts. It has also been argued that crimes during the Franco era, or at least those of the Civil War period, were not yet illegal. This is because international law regarding crimes of humanity developed in the aftermath of the Second World War and for crimes prior to that period the principle ofnullum crimen sine lege, or "no crime without a law", could be said to apply.[20] In 2013, an Argentinian judge was investigating Franco-era crimes under the international legal principle ofuniversal justice.[19][21] In Poland, which underwent a laterdemocratic transition, the Spanish agreement not to prosecute politically-motivated wrongdoing juridically and not to use the past in daily politics was seen as the example to follow.[22]In the 1990s theprogressivemedia hailed the Spanish model, which reportedly refrained from revanchism and from the vicious circle of "settling accounts".[23]The issue was highly related to the debate on "decommunization" in general and on "lustration" in particular; the latter was about measures intended against individuals involved in the pre-1989 regime. Liberal and left-wing media firmly opposed any such plan, and they referred the Spanish pattern as the civilized way of moving from one political system to another.[24]In a debate about transition from communism, held by two opinion leadersVaclav HavelandAdam Michnik, the Spanish model was highly recommended.[25]Later, the policies of prime minister Zapatero were viewed as dangerous "playing with fire",[26]and pundits ridiculed him as the one who was "rattling with skeletons pulled from cupboards" and "winning the civil war lost years ago"; they compared him toJarosław Kaczyński[27]and leaders of allegedly sectarian, fanatically anti-communist, nationalistic, Catholic groupings.[28]However, during the 2010s the left-wing media were gradually abandoning their early criticism of prime minister Zapatero;[29]they were rather agonizing about Rajoy and his strategy to park the "historical memory" politics in obscurity.[30]With the threat of "lustration" now gone, progressist authors have effectively made a U-turn; currently they are rather skeptical about the alleged "pact of forgetting"[31]and advocate the need to make further legislative steps advanced by theSánchezgovernment on the path towards "democratic memory".[32]The Polish right, which in the 1990s was rather muted about the solution adopted in Spain, since then remains consistently highly critical about the "historical memory" politics of bothPSOEandPPgovernments.[33]
https://en.wikipedia.org/wiki/Pact_of_forgetting
"Shy Tory factor" is a name given by Britishopinion pollingcompanies to a phenomenon first observed bypsephologistsin the early 1990s. They observed that the share of the electoral vote won by theConservative Party(known colloquially as theTories) was significantly higher than the equivalent share in opinion polls.[1]The accepted explanation was that "shy Tories" were voting Conservative after telling pollsters they would not.[2]The general elections held in 1992 and 2015 are examples where it has allegedly affected the overall results but has further been discussed in other elections where the Conservatives did unexpectedly well. It has also been applied to the success of theRepublican Partyin the United States or the continued electoral victories of thePeople's Action Partyin Singapore.[1][3][4] The finalopinion polling for the 1992 United Kingdom general electiongave the Conservatives between 38% and 39% of the vote, about 1% behind theLabour Party, suggesting that the election would produce ahung parliamentor a narrow Labour majority and end 13 years of Tory rule. In the final results, the Conservatives received almost 42% (a lead of 7.6% over Labour) and won their fourth successive general election, although they now had a 21-seat majority compared to the 102-seat majority they had gained in the election five years previously. As a result of this failure to predict the result, theMarket Research Societyheld an inquiry into the reasons why the polls had been so much at variance with actual public opinion. The report found that 2% of the 8.5% error could be explained by Conservative supporters refusing to disclose their voting intentions; it cited as evidence the fact thatexit pollson election day also underestimated the Conservative lead. Following the 1992 election, most opinion pollsters altered their methodology to try to correct for this observed behaviour of the electorate.[1]The methods varied for different companies. Some, includingPopulus,YouGov, andICM Research, began to adopt the tactic of asking their interviewees how they had voted at the previous election and then assuming that they would vote that way again at a discounted rate.[5]Others weighted their panel so that their past vote was exactly in line with the actual result of the election. For a time, opinion poll results were published both for unadjusted and adjusted methods. Polling companies found that telephone and personal interviews are more likely to generate a shy response than automated calling or internet polls.[5]In the1997 general election, the result produced a smaller gap between the parties than polls had shown, but a big majority for the Labour Party because theswingwas not uniform; the polling companies that had adjusted for the "Shy Tory effect" got closer to the voting proportions than those that did not.[6] Opinion polling for the 2015 United Kingdom general election underestimated the Conservative vote, with most polls predicting a hung parliament, and exit polls suggesting Conservatives as the largest party but not majority, whereas the actual result was a slim Conservative majority of 12 seats.[7]Of the 92 election polls which met the standards of theBritish Polling Councilin the six weeks before the 2015 election, none foresaw the 6.5% difference in the popular vote between the Conservative Party and Labour Party. 
One poll had Labour leading by 6%, two polls had Labour ahead by 4%, 7 polls had Labour ahead by 3%, 15 polls had Labour ahead by 2%, 17 polls had Labour ahead by 1%, 17 polls had a dead heat, 15 polls had the Conservatives ahead by 1%, 7 polls had the Conservatives ahead by 2%, 3 polls had the Conservatives ahead by 3%, 5 polls had the Conservatives ahead by 4%, one poll had the Conservatives ahead by 5%, and two polls had the Conservatives ahead by 6%. The two polls that gave the Conservatives a 6% lead were published two weeks before the voting, and the final polls from those polling companies, published on the eve of the voting, gave a dead heat and a 1% Labour lead.[8]The result was eventually a Conservative Party majority with a popular vote share of 36.8% with the Labour Party achieving 30.4%. It was later widely claimed in the media that the "Shy Tory factor" had again occurred as it had done in 1992.[9] The British Polling Council subsequently launched an independent enquiry into how polls were so wrong amid widespread criticism that polls are no longer a trustworthy avenue of measuring voting intentions.[10][11]This enquiry found that, contrary to the popular reporting, there was no "Shy Tory factor" in the election, and the polling had been incorrect for other reasons, most importantly unrepresentative samples.[12]
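As a concrete illustration of the past-vote weighting described above, the sketch below reweights a hypothetical sample so that respondents' recalled previous vote matches the actual result of the last election, then tabulates weighted current voting intention. The respondent data and previous-election shares are invented for illustration; real polling companies combine this with other demographic weights and, in some designs, a discounted reallocation of "don't know" responses to the party a respondent previously supported.

```python
from collections import Counter

# Hypothetical raw sample: each respondent's recalled vote at the previous
# election and their current voting intention. Illustrative data only.
respondents = [
    {"past": "Con", "intention": "Con"},
    {"past": "Con", "intention": "Lab"},
    {"past": "Lab", "intention": "Lab"},
    {"past": "Lab", "intention": "Lab"},
    {"past": "Lab", "intention": "Con"},
    {"past": "Other", "intention": "Other"},
]

# Actual vote shares at the previous election (hypothetical figures).
previous_result = {"Con": 0.42, "Lab": 0.35, "Other": 0.23}

# Weight each respondent so recalled past vote matches the previous result.
past_counts = Counter(r["past"] for r in respondents)
n = len(respondents)
for r in respondents:
    sample_share = past_counts[r["past"]] / n
    r["weight"] = previous_result[r["past"]] / sample_share

# Weighted current voting intention.
totals = Counter()
for r in respondents:
    totals[r["intention"]] += r["weight"]
weight_sum = sum(totals.values())
for party, weight in totals.items():
    print(f"{party}: {weight / weight_sum:.1%}")
```

Respondents from groups that are under-represented relative to the previous election result receive weights above 1, which is how the adjustment compensates for "shy" supporters who are missing from the raw sample.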
https://en.wikipedia.org/wiki/Shy_Tory_Factor
Insocial science researchsocial-desirability biasis a type ofresponse biasthat is the tendency ofsurveyrespondents to answer questions in a manner that will be viewed favorably by others.[1]It can take the form of over-reporting "good behavior" or under-reporting "bad" or undesirable behavior. The tendency poses a serious problem with conducting research withself-reports. This bias interferes with the interpretation of average tendencies as well as individual differences. Topics where socially desirable responding (SDR) is of special concern are self-reports of abilities, personality, sexual behavior, and drug use. When confronted with the question "How often do youmasturbate?," for example, respondents may be pressured by a socialtabooagainst masturbation, and either under-report the frequency or avoid answering the question. Therefore, the mean rates of masturbation derived from self-report surveys are likely to be severely underestimated. When confronted with the question, "Do you use drugs/illicit substances?" the respondent may be influenced by the fact thatcontrolled substances, including the more commonly usedmarijuana, are generally illegal. Respondents may feel pressured to deny any drug use orrationalizeit, e.g. "I only smoke marijuana when my friends are around." The bias can also influence reports of number of sexual partners. In fact, the bias may operate in opposite directions for different subgroups: Whereas men tend to inflate the numbers, women tend to underestimate theirs. In either case, the mean reports from both groups are likely to be distorted by social desirability bias. Other topics that are sensitive to social-desirability bias include: In 1953, Allen L. Edwards introduced the notion of social desirability to psychology, demonstrating the role of social desirability in the measurement of personality traits. He demonstrated that social desirability ratings of personality trait descriptions are very highly correlated with the probability that a subsequent group of people will endorse these trait self-descriptions. In his first demonstration of this pattern, the correlation between one group of college students’ social desirability ratings of a set of traits and the probability that college students in a second group would endorse self-descriptions describing the same traits was so high that it could distort the meaning of the personality traits. In other words, do these self-descriptions describe personality traits or social desirability?[13] Edwards subsequently developed the first Social Desirability Scale, a set of 39, true-false questions extracted from the Minnesota Multiphasic Personality Inventory (MMPI), questions that judges could, with high agreement, order according to their social desirability.[4]These items were subsequently found to be very highly correlated with a wide range of measurement scales, MMPI personality and diagnostic scales.[14]The SDS is also highly correlated with the Beck Hopelessness Inventory.[15] The fact that people differ in their tendency to engage in socially desirable responding (SDR) is a special concern to those measuring individual differences with self-reports. Individual differences in SDR make it difficult to distinguish those people with good traits who are responding factually from those distorting their answers in a positive direction. When SDR cannot be eliminated, researchers may resort to evaluating the tendency and then control for it. 
A separate SDR measure must be administered together with the primary measure (test or interview) aimed at the subject matter of the research/investigation. The key assumption is that respondents who answer in a socially desirable manner on that scale are also responding desirably to all self-reports throughout the study. In some cases, the entire questionnaire package from high scoring respondents may simply be discarded. Alternatively, respondents' answers on the primary questionnaires may be statistically adjusted commensurate with their SDR tendencies. For example, this adjustment is performed automatically in the standard scoring of MMPI scales. The major concern with SDR scales is that they confound style with content. After all, people actually differ in the degree to which they possess desirable traits (e.g. nuns versus criminals). Consequently, measures of social desirability confound true differences with social-desirability bias. Until the 1990s, the most commonly used measure of socially desirable responding was theMarlowe–Crowne Social Desirability Scale.[16]The original version comprised 33 True-False items. A shortened version, the Strahan–Gerbasi only comprises ten items, but some have raised questions regarding the reliability of this measure.[17] In 1991,Delroy L. Paulhuspublished theBalanced Inventory of Desirable Responding(BIDR): a questionnaire designed to measure two forms of SDR.[18]This forty-item instrument provides separate subscales for "impression management," the tendency to give inflated self-descriptions to an audience; andself-deceptive enhancement, the tendency to give honest but inflated self-descriptions. The commercial version of the BIDR is called the "Paulhus Deception Scales (PDS)."[19] Scales designed to tap response styles are available in all major languages, including Italian[20][21]and German.[22] Anonymous survey administration, compared with in-person or phone-based administration, has been shown to elicit higher reporting of items with social-desirability bias.[23]In anonymous survey settings, the subject is assured that their responses will not be linked to them, and they are not asked to divulge sensitive information directly to a surveyor. Anonymity can be established through self-administration of paper surveys returned by envelope, mail, or ballot boxes, or self-administration of electronic survey viacomputer, smartphone, or tablet.[1][24]Audio-assisted electronic surveys have also been established for low-literacy or non-literate study subjects.[1][25] Confidentiality can be established in non-anonymous settings by ensuring that only study staff are present and by maintaining data confidentiality after surveys are complete. Including assurances of data confidentiality in surveys has a mixed effect on sensitive-question response; it may either increase response due to increased trust, or decrease response by increasing suspicion and concern.[1] Several techniques have been established to reduce bias when asking questions sensitive to social desirability.[23]Complex question techniques may reduce social-desirability bias, but may also be confusing or misunderstood by respondents. Beyond specific techniques, social-desirability bias may be reduced by neutral question and prompt wording.[1] The Ballot Box Method (BBM) provides survey respondents anonymity by allowing them to respond in private by self-completing their responses to the sensitive survey questions on a secret ballot and submitting them to a locked box. 
The interviewer has no knowledge of what is recorded on the secret ballot and does not have access to the lock on the box, providing obscurity to the responses and limiting the potential for SDB. However, a unique control number on each ballot allows the answers to be reunited with a corresponding questionnaire that contains less sensitive questions.[26][27]The BBM has been used successfully to obtain estimates of sensitive sexual behaviours during an HIV prevention study,[26]as well as illegal environmental resource use.[27][28]In a validation study where observed behaviour was matched to reported behaviour using various SDB control methods, the BBM was by far the most accurate bias reduction method, performing significantly better than the Randomized Response Technique (RRT).[27] Therandomized response techniqueasks a participant to respond with a fixed answer or to answer truthfully based on the outcome of a random act.[25]For example, respondents secretly throw a coin and respond "yes" if it comes up heads (regardless of their actual response to the question), and are instructed to respond truthfully if it comes up tails. This enables the researcher to estimate the actual prevalence of the given behavior among the study population without needing to know the true state of any one individual respondent. Research shows that the validity of the randomized response technique is limited.[29]Validation research has shown that the RRT actually performs worse than direct questioning for some sensitive behaviours and care should be taken when considering its use.[27] The nominative technique asks a participant about the behavior of their close friends, rather than about their own behavior.[30]Participants are asked how many close friends they know have done for certain a sensitive behavior and how many other people they think know about that behavior. Population estimates of behaviors can be derived from the response. The similar best-friend methodology asks the participant about the behavior of one best friend.[31] Theunmatched-counttechnique asks respondents to indicate how many of a list of several items they have done or are true for them.[32]Respondents are randomized to receive either a list of non-sensitive items or that same list plus the sensitive item of interest. Differences in the total number of items between the two groups indicate how many of those in the group receiving the sensitive item said yes to it. The grouped-answer method, also known as the two-card or three-card method, combines answer choices such that the sensitive response is combined with at least one non-sensitive response option.[33] These methods ask participants to select one response based on two or more questions, only one of which is sensitive.[34]For example, a participant will be asked whether their birth year is even and whether they have performed an illegal activity; if yes to both or no to both, to select A, and if yes to one but no to the other, select B. By combining sensitive and non-sensitive questions, the participant's response to the sensitive item is masked. Research shows that the validity of the crosswise model is limited.[35] Bogus-pipelinetechniques are those in which a participant believes that an objective test, like a lie detector, will be used along with survey response, whether or not that test or procedure is actually used.[1]Researches using this technique must convince the participants that there is a machine that can measure accurately their true attitudes and desires. 
While this can raise ethical questions surrounding deception in psychological research, the technique quickly became popular in the 1970s. By the 1990s, however, its use began to wane. Interested in this change, Roese and Jamison (1993) drew on twenty years of research to conduct a meta-analysis of the effectiveness of the bogus-pipeline technique in reducing social desirability bias. They concluded that while the technique was significantly effective, it had perhaps become less used simply because it went out of fashion or became cumbersome for researchers to use regularly. However, Roese and Jamison argued that simple adjustments can be made to the technique to make it more user-friendly for researchers.[36] "Extreme-response style" (ERS) takes the form of an exaggerated preference for the extremes, e.g. for '1' or '7' on 7-point scales. Its converse, "moderacy bias", entails a preference for middle-range (or midpoint) responses (e.g. 3–5 on 7-point scales). "Acquiescence" (ARS) is the tendency to respond to items with agreement or affirmation independent of their content ("yea"-saying). These response styles differ from social-desirability bias in that they are unrelated to the question's content and may be present in socially neutral as well as socially favorable or unfavorable contexts, whereas SDR is, by definition, tied to the latter.
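The estimators behind the coin-flip randomized response design and the unmatched-count technique described above follow directly from those designs. The Python sketch below is illustrative only, assuming the 50/50 forced-"yes" coin design and a simple two-list experiment; the function names and the numbers in the example are made up for demonstration and are not drawn from any study cited here.

```python
def rrt_prevalence(yes_count, n, p_forced_yes=0.5, p_truthful=0.5):
    """Estimate prevalence under a forced-response randomized response design.

    With probability p_forced_yes the respondent must say "yes" regardless of
    the truth; with probability p_truthful they answer truthfully, so
    P(yes) = p_forced_yes + p_truthful * prevalence.
    """
    p_yes = yes_count / n
    estimate = (p_yes - p_forced_yes) / p_truthful
    return max(0.0, min(1.0, estimate))  # clamp to the valid range [0, 1]


def unmatched_count_prevalence(mean_with_item, mean_without_item):
    """Estimate prevalence from an unmatched-count (list) experiment.

    The treatment group sees the control list plus the sensitive item, so the
    difference in mean item counts estimates the share endorsing that item.
    """
    return mean_with_item - mean_without_item


# Illustrative numbers only: 640 of 1,000 respondents said "yes" under the
# coin-flip design, and mean list counts were 2.7 (with the sensitive item)
# versus 2.4 (without it).
print(rrt_prevalence(640, 1000))             # -> 0.28
print(unmatched_count_prevalence(2.7, 2.4))  # -> roughly 0.30
```

In both cases the researcher learns an aggregate prevalence without ever knowing any individual respondent's true answer, which is what makes these designs useful for questions subject to social-desirability bias.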
https://en.wikipedia.org/wiki/Social_desirability_bias
The spiral of silence theory is a political science and mass communication theory which states that an individual's perception of the distribution of public opinion influences that individual's willingness to express their own opinions.[1][2] Also known as the theory of public opinion, the spiral of silence theory claims that individuals will be more confident and outward with their opinion when they notice that their personal opinion is shared throughout a group. But if the individual notices that their opinion is unpopular with the group, they will be more inclined to be reserved and remain silent. In other words, from the individual's perspective, "not isolating themself is more important than their own judgement", meaning that their perception of how others in the group perceive them matters more to them than the need for their opinion to be heard.[3] According to Glynn (1995), "the major components of the spiral of silence include (1) an issue of public interest; (2) divisiveness on the issue; (3) a quasi-statistical sense that helps an individual perceive the climate of opinion as well as estimate the majority and minority opinion; (4) 'fear of isolation' from social interaction [though whether this is a causal factor in the willingness to speak out is contested[2]]; (5) an individual's belief that a minority (or 'different') opinion isolates oneself from others; and (6) a 'hardcore' group of people whose opinions are unaffected by others' opinions."[1] The theory is not without criticism, with some arguing that its widely understood definition and parameters have not been updated to reflect the behavior of 21st-century society. Others point out that there is no room within the theory to account for variables of influence other than social isolation. In 1974, Elisabeth Noelle-Neumann, a German political scientist, created the model called the "Spiral of Silence". She believed that an "individual's willingness to express his or her opinion was a function of how he or she perceived public opinion."[4] In 1974, Noelle-Neumann and her husband founded the "Public Opinion Organization" in Germany. She was also the President of the "World Association for Public Opinion Research" from 1978 to 1980. Noelle-Neumann developed the spiral of silence theory from research on the 1965 West German federal election. The research, according to Noelle-Neumann, "measured a lot more than we understood."[5] The two major parties were locked in a dead heat from December until September, while a series of questions about public perception of the election winner showed steady, independent movement. During the final days of the election, 3 to 4% of the voters shifted in the direction of the public's perception of the winner. A similar shift happened in the 1972 election, which began the development of the spiral of silence as a theory of public opinion.[1] According to Shelly Neill, "Introduced in 1974, the Spiral of Silence Theory [...]
explores hypotheses to determine why some groups remain silent while others are more vocal in forums of public disclosure."[6] The spiral of silence theory suggests that "people who have believed that they hold a minority viewpoint on a public issue will remain in the background where their communication will be restrained; those who believe that they hold a majority viewpoint will be more encouraged to speak."[7] The spiral of silence theory arose from a combination of high public uncertainty about a topic with an increase in the flow of communication.[8] The theory explains the formation of social norms at both the micro and macro level. "As a micro-theory, the spiral of silence examines opinion expression, controlling for people's predispositions – such as fear of isolation, and also demographic variables that have been shown to influence people's willingness to publicly express opinions on issues, such as agricultural biotechnology."[9] This micro effect is seen in experiments such as the Asch conformity experiments, conducted as early as the 1950s, in which a group of students are asked to compare the length of lines. All but one student are coached ahead of time on what answers to give and how to behave. When the coached subjects gave unanimously incorrect answers, the dissenter tended to agree with the majority, at times even when the difference between the lines was as great as seven inches.[10] On the macro level, the spiral of silence occurs as more and more members of the perceived minority fall silent. This is when public perceptions of the opinion climate begin to shift.[9] "In other words, a person's individual reluctance to express his or her opinion, simply based on perceptions of what everyone else thinks, has important implications at the social level."[9] As one opinion gains interest, the amount of exposure it receives increases, leading the public to believe it is the majority. The perceived minority then faces the threat and fear of isolation from society unless they conform. As the opinion gains momentum, the perceived minority falls deeper into silence. This continues until the perceived minority no longer speaks out against it, either by presenting an image of agreement or by actually conforming, and the opinion of the perceived majority ultimately becomes a social norm.[11] Large-scale effects of the spiral of silence can be seen in the growth of the dominant opinion within a country's political climate or on other such issues. The spiral of silence has continued to be observed and studied since its inception. In today's world, technology can play a key part in the spiral of silence, something that could not have been predicted at the time the theory was formulated. For example, survey data showed that during the 2016 US presidential election, opinion congruence with Democratic candidate Hillary Clinton in society at large and with Republican candidate Donald Trump on Facebook was indirectly associated with willingness to express one's opinion both offline and online.[12] The spiral model is used to visually represent the theory. It claims that an individual is more likely to go down the spiral if his or her opinion does not conform with the perceived majority opinion.[11] In summary, the spiral model describes a process of formation, change, and reinforcement of public opinion. The tendency of the one to speak up and the other to be silent begins a spiraling process which increasingly establishes one opinion as the dominant one.
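The reinforcement loop summarized above, in which the perceived climate shapes who speaks and the set of speakers in turn shapes the perceived climate, can be illustrated with a toy simulation. This is only a sketch of the mechanism under arbitrary assumptions (a fixed population, willingness to speak proportional to the perceived share of one's own camp); it is not a model used by Noelle-Neumann or by any of the studies cited here.

```python
import random

def simulate_spiral(n_agents=1000, share_a=0.45, rounds=10, seed=1):
    """Toy illustration of a spiraling opinion climate.

    Agents hold opinion 'A' or 'B'. Each round they gauge the opinion climate
    from those who spoke in the previous round and speak with a probability
    equal to the perceived share of their own camp, so a camp that appears to
    shrink falls increasingly silent.
    """
    random.seed(seed)
    opinions = ['A' if random.random() < share_a else 'B' for _ in range(n_agents)]
    speaking = [True] * n_agents  # everyone is willing to speak at the start

    for r in range(1, rounds + 1):
        vocal = [op for op, s in zip(opinions, speaking) if s]
        if not vocal:
            break
        perceived_a = vocal.count('A') / len(vocal)  # the perceived climate
        speaking = [
            random.random() < (perceived_a if op == 'A' else 1 - perceived_a)
            for op in opinions
        ]
        print(f"round {r}: perceived share of A among the vocal = {perceived_a:.2f}")

simulate_spiral()
```

Run with these made-up parameters, the perceived share of the initial minority opinion shrinks round after round as its holders fall silent, even though the underlying distribution of opinions never changes; that divergence between perceived and actual opinion is the dynamic the theory describes.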
Furthermore, Noelle-Neumann describes the spiral of silence as a dynamic process, in which predictions about public opinion become fact as mass media coverage of the majority opinion becomes the status quo and the minority becomes less likely to speak out.[14] The basic ideas behind the spiral of silence are not unique and are closely related to theories of conformity. In 1987, Kerr, MacCoun, Hansen and Hymes introduced the idea of the "momentum effect", which states that if some members of a group move toward a particular opinion, others will follow.[15] Others have described the similar "gain-loss effect" (Aronson & Linder, 1965) and "bandwagon effect" (Myers & Lamm, 1976). Experiments also show how the spiral of silence and the bandwagon effect jointly undermine minority positions when pre-election polls are shown to voters.[16] Scholars have often interpreted "public opinion" in misguided ways, confusing it with government and thereby limiting the understanding of the term as it relates to the theory. Noelle-Neumann clarifies this by distinguishing three meanings of "public". The first is the legal sense, used to define "public land" or "public spaces". The second concerns the issues of the people as seen in journalism. Finally, "public" as in the "public eye" is used in social psychology and refers to the way people think outwardly about their relationships; this is the sense intended to emphasize how subjects feel in social settings during research. Scholars have marveled at the power public opinion has in making regulations, norms, and moral rules triumph over the individual self without ever troubling legislators, governments, or courts for assistance.[11] "Common Opinion" is how the Scottish social philosopher David Hume referred to public opinion in his 1739 work A Treatise of Human Nature. Agreement and a sense of the common are what lie behind the English and French "opinion".[11] In researching the term opinion (Meinung in German), researchers were led back to Plato's Republic, in which a quote from Socrates concluded that opinion takes the middle position. Immanuel Kant considered opinion to be an "insufficient judgment, subjectively as well as objectively."[17] How valuable opinion might be was left open; however, the suggestion that it represents the unified agreement of a population, or of a segment of the population, was still considered.[11] The term public opinion first emerged in France during the eighteenth century. The definition of public opinion has been debated over time, and there has not been much progress in settling on a single classification of the phrase; however, the German historian Hermann Oncken stated: Whoever desires to grasp and define the concept of public opinion will recognize quickly that he is dealing with a Proteus, a being that appears simultaneously in a thousand guises, both visible and as a phantom, impotent and surprisingly efficacious, which presents itself in innumerable transformations and is forever slipping through our fingers just as we believe we have a firm grip on it... That which floats and flows cannot be understood by being locked up in a formula... After all, when asked, everyone knows exactly what public opinion means.[11] It was said to be a "fiction that belonged in a museum of the history of ideas; it could only be of historical interest."[11] In contradiction to that assessment, the term public opinion never fell out of use.
During the early 1970s, Elisabeth Noelle-Neumann was creating the theory of the spiral of silence. She was attempting to explain why Germans who disagreed with Hitler and the Nazis remained silent until after his regime ended; behavior like that has come to be known as the spiral of silence. Noelle-Neumann began to question whether she was indeed getting a handle on what public opinion actually was. "The spiral of silence might be one of the forms in which public opinion appeared; it might be a process through which a new, youthful public opinion develops or whereby the transformed meaning of an old opinion spreads."[11] The American sociologist Edward Ross described public opinion in 1898 using the word "cheap". "The equation of 'public opinion' with 'ruling opinion' runs like a common thread through its many definitions. This speaks to the fact that something clinging to public opinion sets up conditions that move individuals to act, even against their own will."[18] Other scholars point out that the emergence of public opinion depends on an open public discourse rather than "on the discipline imposed by an apparent majority dominant enough to intimidate but whose views may or may not support actions that are in the common interest."[19] They have also considered whose opinion establishes public opinion, assumed to be persons of a community who are ready to express themselves responsibly about questions of public relevance. Scholars have also looked into the forms of public opinion, said to be those that are openly expressed and accessible: opinions that are made public, especially in the mass media. Controversy surrounding the term has centered on the combination of the two words that form the phrase.[11] Neumann (1955) suggests two concepts of public opinion. Public Opinion as Rationality: public opinion, or the "dominant view", comes after conscious, rational public discussion. Childs (1965) and Wilson (1933) believe that "the rational model is based on the notion of an enlightened, rational public that is willing to and capable of participating in political processes." In all, it is political and necessary for generating social change. Public Opinion as Social Control: this is at the root of the spiral of silence theory. It refers to "opinions that can be expressed without risking sanctions or social isolation, or opinions that have to be expressed in order to avoid isolation" (Noelle-Neumann 1983). Social systems require cohesion; to achieve this, individuals are threatened with social isolation. Mass media's effects on both public opinion and the perception of public opinion are central to the spiral of silence theory. One of the earliest works that called attention to the relationship between media and the formation of public opinion was Walter Lippmann's book Public Opinion, published in 1922.[20] Lippmann's ideas regarding the effects of media influenced the emergence of the spiral of silence theory. In building the spiral theory, Noelle-Neumann states that "the reader can only complete and explain the world by making use of a consciousness which in large measure has been created by the mass media."[18] Agenda-setting theory is another body of work that Noelle-Neumann builds on in characterizing the media's effect on public opinion.
Agenda-setting theory describes the relationship between media and public opinion by asserting that the public importance of an issue depends on its salience in the media.[21] Along with setting the agenda, the media further determine the salient issues through a constant battle with other events attempting to gain a place on the agenda.[18] The media battle with these news alternatives by creating "pseudo-crises" and "pseudo-novelties."[18] The media's characteristics as a communication tool further affect people's perception of their own ideas in relation to public opinion.[18] According to Noelle-Neumann, the media are a "one-sided, indirect, public form of communication, contrasting threefold with the most natural form of human communication, the conversation."[18] When an issue hits the media and proves salient, a dominant point of view usually emerges. These characteristics of the media in particular further overwhelm one's individual ideas. While some media communication theories, such as the hypodermic needle model,[22] assume a passive audience, the spiral model assumes an active audience "who consumes media products in the context of their personal and social goals."[22] Knowledge "gained from the mass media may offer ammunition for people to express their opinions and offer a rationale for their own stance."[23] Ho et al. point out that "among individuals who paid high amount of media attention, those who have a low fear of isolation were significantly more likely to offer a rationale for their own opinion than were those who have a high fear of isolation."[23] Noelle-Neumann regards the media as central to the formulation of the spiral of silence theory, whereas some scholars question whether the dominant idea in one's social environment overwhelms the dominant idea that the media propose as the perceived social norm.[24][25] Some empirical research aligns with this perspective, suggesting that an individual's "micro-climate" overwhelms the effects of the media.[25] Other articles further suggest that talking with others is the primary way of understanding the opinion climate.[26] Current literature suggests that the spiral model can be applied to the social media context. Researchers Chaudhry and Gruzd (2019) found that social media actually weakens this theory. They contend that while the spiral of silence suggests that the minority is uncomfortable expressing its opinions because of the fear of isolation, "the vocal minority are comfortable expressing unpopular views, questioning the explanatory power of this popular theory in the online context."[27] However, in another study, Gearhart and Zhang examine whether the use of social media increases people's motivation to express their opinions about political issues. The results suggest that social media users "who have received a strong negative reaction to their politically related posts are likely to censor themselves, exemplifying the spiral of silence effect".[28] Another study found that the fear of isolation causes people not to want to share their opinions on social media in the first place.
Similar to the Gearhart and Zhang study, results from this study showed that people are more likely to self-censor on social media, for example by not posting political content and by being selective about what they follow or like.[29] Another study confirms the positive relationship between speaking out and issue importance in the social media context as well: individuals who view gay bullying as a significant social issue are more likely to comment on Facebook.[30] Artificially generated social engagement is also worth noting. As social media becomes more and more important in our daily lives, deceptive social bots have been successfully used to manipulate online conversations and opinions.[31] Social bots are social media accounts managed by computer algorithms. They can automatically generate content and interact with human users, often impersonating or imitating humans.[32] Current research shows that social bots are being used on a large scale to manage the opinion climate and influence public opinion on social media.[33] In some cases, even a small number of social bots can easily direct public opinion on social media and trigger a spiral of silence.[34] For example, scholars have found that social bots affected political discussion around the 2016 U.S. presidential election[35] and the 2017 French presidential election.[36] The spiral of silence theory rests on the assumption that individuals will scan their environment to assess the climate of opinion and to find the dominant point of view. Perception matters because these perceptions influence an individual's behavior and attitudes.[4] Sherif (1967)[citation needed] believes individuals use frames of reference based on past experience to inform their perception: the "social environment as a frame of reference for interpreting new information has important implications for public opinion research." It is also worth mentioning that the assessment of one's social environment may not always correlate with reality.[9] Noelle-Neumann attributed this ability to assess the opinion climate on an issue to a so-called "quasi-statistical organ", which refers to how individuals unconsciously assess the distribution of viewpoints and the chances that certain viewpoints will succeed over others.[37] People assume they can sense and figure out what others are thinking.[9] The mass media play a large part in determining what the dominant opinion is, since our direct observation is limited to a small percentage of the population. The mass media have an enormous impact on how public opinion is portrayed and can dramatically affect an individual's perception of where public opinion lies, whether or not that portrayal is factual.[38] Pluralistic ignorance may occur in some cases in which the minority opinion is incorrectly accepted as the norm.[1] Group members may privately reject a norm but falsely assume that other group members accept it. This phenomenon, also known as a collective illusion, occurs when people in a group think everyone else holds a different opinion from theirs and go along with the perceived norm.[39][40] The spiral of silence can lead to a social group or society isolating or excluding members because of those members' opinions. The theory stipulates that individuals have a fear of isolation, and this fear of isolation consequently leads them to remain silent instead of voicing their opinions.
The fear of isolation is the "engine that drives the spiral of silence".[41] Essentially, people fear becoming social isolates and thus take measures to avoid such a consequence, as demonstrated by the psychologist Solomon Asch in the Asch conformity experiments.[42] People feel more comfortable agreeing with the dominant opinions instead of expressing their own ideas.[9] An underlying idea of the spiral of silence theory is that public opinion acts as a form of social control. According to Noelle-Neumann's definition, this key concept describes "opinions on controversial issues that one can express in public without isolating oneself".[43] This assumption supposes that public opinion is governed by norms and conventions, the violation of which leads to sanctions against the individuals who violate them. Building on the assumption that going against public opinion will lead to social sanctions, Noelle-Neumann argues that human beings have an inherent fear of isolation and will adapt their behavior so as not to be isolated from others.[11] This "fear of isolation" is so strong that people will not express opinions they assume to differ from public opinion. This fear of social isolation is a central concept in Noelle-Neumann's theory, but across different studies of the theory it has been conceptualized in many different ways.[44] Some researchers have considered fear of social isolation to be transitory and triggered by exposure to a situation in which an individual is expected to express an opinion. In this conceptualization, an individual's perception of the opinion climate in a specific situation would trigger the fear of isolation in that moment.[45][full citation needed][46] Other researchers have argued that, instead of a situation-specific reaction, fear of social isolation can be viewed as an individual characteristic that varies between people and leads individuals to continuously monitor their environment for cues about the opinion climate. While there are individuals who generally do not suffer from a fear of isolation (what Noelle-Neumann referred to as Hardcores[11]), others are constantly aware of their social environment and face a constant fear of isolation. Individuals who bear this characteristic fear of social isolation and at the same time perceive their opinion to be incongruent with the majority opinion climate are less likely to be willing to voice their opinion. In this line of spiral of silence research, fear of social isolation is a key concept in the formation of public opinion; however, research has often assumed this conceptualization as fact without empirical proof[47][full citation needed] or has been inconsistent in the empirical measurement of the phenomenon.[46] Recent research has been able to capture the concept of fear of social isolation in a more reliable and consistent way. One example is research conducted by Mathes (2012),[full citation needed] in which the researchers used an individual-differences approach based on individuals' character traits and measured fear of social isolation using psychometric measures. Mathes (2012),[full citation needed] as well as other researchers,[citation needed] considers fear of social isolation to be a reaction to encountering a perceived hostile opinion climate, which in turn leads the individual not to voice their own opinions and thereby sets the spiral of silence in motion.
Although many accept fear of isolation to be the motivation behind the theory, arguments have been made for other causal factors.[1] For example, Lasorsa[48] proposed that it may be less a fear of isolation fueling the spiral and more a matter of political interest (in the case of political debate) and self-efficacy. From a more positive standpoint, Taylor suggested the benefits of opinion expression, whether that opinion was common or not, to be the motivation.[2] When studying the willingness to discuss an issue as divisive as abortion, Salmon and Neuwirth found only "mixed supportive evidence" for fear of isolation, and instead found that knowledge of and personal concern about the issue played important roles.[49] More examples follow at the end of the article. Where opinions are relatively definite and static – customs, for example – one has to express or act according to this opinion in public or run the risk of becoming isolated. In contrast, where opinions are in flux, or disputed, the individual will try to find out which opinion he can express without becoming isolated. The theory explains a vocal minority (the complement of the silent majority) by stating that people who are highly educated, or who have greater affluence, and the few other cavalier individuals who do not fear isolation (if that is accepted to be the causal factor), are likely to speak out regardless of public opinion.[50] It further states that this minority is a necessary factor of change while the compliant majority is a necessary factor of stability, with both being a product of evolution. This vocal minority remains at the top of the spiral in defiance of threats of isolation. The theory calls these vocal minorities the hardcore nonconformists or the avant-garde. Hardcore nonconformists are "people who have already been rejected for their beliefs and have nothing to lose by speaking out."[41] The hardcore has the ability to reconfigure majority opinion, while the avant-garde are "the intellectuals, artists, and reformers in the isolated minority who speak out because they are convinced they are ahead of the times."[41] The hardcore is best understood when the majority voice loses power in public opinion due to a lack of alternatives. As a result of the hardcore's efforts to educate the public, even narrow-minded views may shift. The hardcore may be instrumental in changing public opinion even though it frequently engages in irrational acts to prove its point. The spiral of silence has brought insight into diverse topics, ranging from speaking about popular culture phenomena[51] to smoking.[52] Considering that the spiral of silence is more likely to occur around controversial issues and issues with a moral component,[18] many scholars have applied the theory to controversial topics such as abortion,[53] affirmative action,[54] capital punishment,[55] and mandatory COVID-19 vaccines and masking.[56] The spiral of silence theory can also be applied in a social capital context. Recent studies see social capital as "a variable that enables citizens to develop norms of trust and reciprocity, which are necessary for successful engagement in collective activities".[57] One study examines three individual-level indicators of social capital: civic engagement, trust, and neighborliness, and the relationship between these indicators and people's willingness to express their opinions and their perception of support for those opinions.
The results suggest that civic engagement has a direct effect on people's willingness to express their opinions, and that neighborliness and trust have direct positive effects on people's perception of support for their opinions.[57] The study also shows that "only a direct (but not indirect) effect of civic engagement on opinion expression further highlights a potential difference between bonding and bridging social capital".[57] Literature predating the spiral of silence theory suggests a relationship between social conformity and culture, motivating communication scholars to conduct cross-cultural analyses of the theory. Scholars in the field of psychology in particular had previously addressed the cultural variance involved in conformity to the majority opinion.[58] More recent studies confirm the link between conformity and culture: a meta-analysis of Asch conformity experiments, for example, suggests that collectivist cultures are more likely to exhibit conformity than individualistic cultures.[59] "A Cross-Cultural Test of the Spiral of Silence" by Huiping Huang analyzes the results of a telephone survey conducted in Taiwan and the United States. The hypotheses tested rested on the belief that the United States is an "individualistic" society, while Taiwan is a "collectivist" society. This suggested that the spiral of silence is less likely to be activated in the United States, because individuals are more likely to put emphasis on their personal goals. They put the "I" identity over the "we" identity and strive for personal success. Therefore, it was hypothesized that they would be more likely to speak out, regardless of whether they are in the minority. On the other hand, it was predicted that individuals in Taiwan put more emphasis on the collective goal, so they would conform to the majority influence in hopes of avoiding tension and conflict. The study also tested the effect of motives, including self-efficacy and self-assurance. Telephone surveys were conducted; the citizens of the United States were questioned about American involvement in Somalia, and the citizens of Taiwan about the possibility of a direct presidential election. Both issues focused on politics and human rights and were therefore comparable. Respondents were asked to choose "favor", "neutral" or "oppose" on the given issue for the categories of themselves, family and friends, the media, society, and society in the future. Measurements were also taken of the individualism and collectivism constructs and of the "motives of not expressing opinion", using 1–10 and 1–5 scales respectively to rate agreement with given statements. The results supported the original hypothesis. Overall, Americans were more likely to speak out than Taiwanese. Being incongruous with the majority lessened the motivation of the Taiwanese to speak out (and they had a higher collectivist score), but had little effect on the Americans. In Taiwan, perceived future support and beliefs about society played a large role in the likelihood of voicing an opinion, supporting the claim that the spiral of silence was activated. In the United States, it was hypothesized that because respondents were more individualistic, they would be more likely to speak out when in the minority, or incongruous, group. This was not borne out, but Huang suggests that perhaps the issue chosen was not directly relevant to respondents, who therefore found it "unnecessary to voice their objections to the majority opinion."
Lack of self-efficacy led to a lack of speaking out in both countries.[60] "Basque Nationalism and the Spiral of Silence" is an article by Spencer and Croucher that analyzes public perception of ETA (Euskadi Ta Askatasuna, a militant separatist group) in Spain and France. The study was conducted in a similar way to the one above, with Basque individuals from Spain and France being questioned about their support of ETA. They were asked questions such as "How likely would you be to enter into a conversation with a stranger on a train about ETA?" The cultural differences between the two regions in which ETA operated were taken into consideration. The results supported the spiral of silence theory: while there was a highly unfavorable opinion of the group, there was a lack of outcry to stop it. Individuals claimed that they were more likely to voice their opinions to non-Basques, suggesting that they have a "fear of isolation" with regard to fellow Basques. Furthermore, the Spanish individuals questioned were more likely to be silent because of their greater proximity to the violent acts.[61] One study by Henson and Denker "investigates perceptions of silencing behaviors, political affiliation, and political differences as correlates to perceptions of the university classroom climates and communication behaviors."[62] They looked at whether students' view of the classroom changes when they perceive the instructor and other classmates as holding a different political affiliation and as communicating through silencing behaviors. The article stated that little research has examined student-teacher interactions in the classroom and how students are influenced by them.[62] The goal of the article was to "determine how political ideas are expressed in the university classrooms, and thus, assess the influence of classroom communication on the perceptions of political tolerance."[62] The article claimed that university classrooms are an adequate place in which to scrutinize the spiral of silence theory because they are settings that combine interpersonal, cultural, media, and political communication. Henson and Denker said, "Because classroom interactions and societal discourse are mutually influential, instructors and students bring their own biases and cultural perspectives into the classroom."[62] The study asked whether there was a correlation between students' perception that they were being politically silenced and their perceived differences in student-instructor political affiliation. The study also questioned whether there was any connection between the perceived climate and the similarity of the student and instructor in their political affiliations.[62] The researchers used participants from a Midwestern university's communication courses. The students answered a survey about their perceptions of political silencing, classroom climate, and the climate created by the instructor. The results found that perceived similarities in political party and ideological differences between student and instructor were positively related to perceived political silencing.[62] While studies of the spiral of silence theory focused on face-to-face interaction before 2000, the theory was later applied to computer-mediated communication environments.
The first study in this context analyzed communication behaviors in online chat rooms regarding the issue of abortion, and revealed that minority opinion holders were more likely to speak out, whereas their comments remained neutral.[63]Another study focused on the Korean bulletin board postings regarding the national election, and found a relationship between online postings and the presentation of candidates in the mainstream media.[64]The third study focuses on the online review system, suggesting that the fear of isolation tend to reduce the willingness of members to voice neutral and negative reviews.[65]The spiral of silence theory is extended "into the context of non-anonymous multichannel communication platforms" and "the need to consider the role of communicative affordances in online opinion expression" is also addressed.[65] The concept of isolation has a variety of definitions, dependent upon the circumstances it is investigated in. In one instance the problem of isolation has been defined associal withdrawal, defined as low relative frequencies of peer interaction.[66][67]Other researchers have defined isolation as low levels of peer acceptance or high levels ofpeer rejection.[68]Research that considers isolation with regard to the Internet either focuses on how the Internet makes individuals more isolated from society by cutting off their contact from live human beings[69][70][71]or how the Internet decreasessocial isolationof people by allowing them to expand theirsocial networksand giving them more means to stay in touch with friends and family.[72][73]Since the development of the Internet, and in particular theWorld Wide Web, a wide variety of groups have come into existence, including Web andInternet Relay Chat(IRC),newsgroups, multiuser dimensions (MUDs), and, more recently, commercialvirtual communities.[74]The theories and hypotheses about howInternet-based groups impact individuals are numerous and wide-ranging. Some researchers view these fast growing virtual chat cliques,online games, or computer-based marketplaces as a new opportunity, particularly for stigmatized people, to take a more active part in social life.[75][76][77] Traditionally, social isolation has been presented as a one-dimensional construct organized around the notion of a person's position outside thepeer groupand refers to isolation from the group as a result of being excluded from the group by peers.[78]From children to adults, literature shows that people understand the concept of isolation and fear the repercussions of being isolated from groups of which they are a member. Fearing isolation, people did not feel free to speak up if they feel they hold dissenting views, which means people restrict themselves to having conversation with like-minded individuals, or have no conversation whatsoever.[79]Witschge further explained, "Whether it is fear of harming others, or fear to get harmed oneself, there are factors that inhibit people from speaking freely, and which thus results in a non-ideal type of discussion, as it hinders diversity and equality of participants and viewpoints to arise fully."[80] The medium of the Internet has the power to free people from the fear of social isolation, and in doing so, shuts down the spiral of silence. One article demonstrates that social media can weaken the fear of isolation. 
The research shows that the vocal minority who hold racist viewpoints are willing to express unpopular views on Facebook.[27]The Internet allows people to find a place where they can find groups of people with like mindsets and similar points of view. Van Alstyne and Brynjolfsson stated that "Internet users can seek out interactions with like-minded individuals who have similar values, and thus become less likely to trust important decisions to people whose values differ from their own."[81]The features of the Internet could not only bring about more people to deliberate by freeing people of psychological barriers, but also bring new possibilities in that it "makes manageable large-scale,many-to-manydiscussion and deliberation."[82]Unliketraditional mediathat limit participation, the Internet brings the characteristics of empowerment, enormous scales of available information, specific audiences can be targeted effectively and people can be brought together through the medium.[83] The Internet is a place where manyreferenceandsocial groupsare available with similar views. It has become a place where it appears that people have less of a fear of isolation. One research article examined individuals' willingness to speak their opinion online and offline. Through survey results from 305 participants, a comparison and contrast of online and offline spiral of silence behaviors was determined.[84]Liu and Fahmy stated that "it is easy to quit from an online discussion without the pressure of complying with the majority group."[85]This is not to say that a spiral of silence does not occur in an online environment. People are still less likely to speak out, even in an online setting, when there is a dominant opinion that differs from their own.[85]But people in the online environment will speak up if someone has a reference group that speaks up for them.[85] Online, the presence of one person who encourages a minority point of view can put an end to a spiral of silence. Studies of the spiral of silence in online behavior have not acknowledged that a person may be more likely to speak out against dominant views offline as well.[85]The person might have characteristics that make them comfortable speaking out against dominant views offline, which make them just as comfortable speaking out in an online setting. Although research suggests that people will disclose their opinions more often in an online setting, silencing of views can still occur. One study indicates that people on Facebook are less willing to discuss the Snowden and NSA stories than an offline situation such as a family dinner or public meeting.[86]Another research article examined the influence of different opinion climates inonline forums(opinion congruence with the majority of forum participants vs. website source) and found personal opinion congruence was more influential than the online site in which the forum is situated in.[87]Nekmat and Gonzenbach said it might be worth researching whether the factors in these studies or other factors cause people to be more comfortable when it comes to speaking their mind while online.[87] The nature of the Internet facilitates not only the participation of more people, but also a more heterogeneous group of people. 
Page stated, "The onward rush of electronic communications technology will presumably increase the diversity of available ideas and the speed and ease with which they fly about and compete with each other."[88]The reason people engage in deliberations is because of their differences, and the Internet allows differences to be easily found. The Internet seems the perfect place to find different views of a very diverse group of people who are at the same time open to such difference and disagreement needed for deliberation. Noelle-Neumann's initial idea of cowering and muted citizens is difficult to reconcile with empirical studies documenting uninhibited discussion in computer-mediated contexts such aschat roomsand newsgroups.[89][90][91][92] The Internet provides an anonymous setting, and it can be argued that in an anonymous setting, fears of isolation and humiliation would be reduced. Wallace recognized that when people believe their actions cannot be attributed to them personally, they tend to become less inhibited bysocial conventionsand restraints. This can be very positive, particularly when people are offered the opportunity to discuss difficult personal issues under conditions in which they feel safer.[93] The groups' ability to taunt an individual is lessened on the Internet,[citation needed]thus reducing the tendency to conform. Wallace goes on to summarize a number of empirical studies that do find that dissenters feel more liberated to express their views online than offline, which might result from the fact that the person in the minority would not have to endure taunts or ridicule from people that are making up the majority, or be made to feel uncomfortable for having a different opinion.[94]Stromer-Galley considered that "an absence ofnon-verbalcues, which leads to a lowered sense of social presence, and a heightened sense of anonymity" frees people from the psychological barriers that keep them from engaging in a face-to-face deliberation.[95] The crux of the spiral of silence is that people believe consciously or subconsciously that the expression of unpopular opinions will lead to negative repercussions. These beliefs may not exist on the Internet for several reasons. First, embarrassment and humiliation depends on the physical presence of others.[citation needed]Incomputer-mediated communication, physical isolation often already exists and poses no further threat.[63]Second, a great deal ofnormative influenceis communicated through nonverbal cues, such aseye contactandgestures,[96]but computer-mediated communication typically precludes many of these cues. Third, Kiesler, Siegel, and McQuire observe that nonverbalsocial contextcues convey formality and status inequality inface-to-face communication.[97]When these cues are removed, the importance ofsocial statusas a source of influence recedes. Group hierarchies that develop in face-to-face interaction emerge less clearly in a mediated environment.[98]The form and consequences ofconformityinfluence should undergo significant changes given the interposition of a medium that reduces thesocial presenceof participants.[63]Social presence is defined as the degree ofsalienceof the other person in the interaction[99]or the degree to which the medium conveys some of the person's presence.[100] An important issue in obtaining heterogeneity in conversation is equal access for all the participants andequal opportunitiesto influence in the discussion. 
When people believe they are ignorant about a topic, incapable of participating in a discussion, or not equal to their peers, they tend not to become involved in a deliberation at all. When people do decide to participate, their participation might be overruled by dominant others, or their contribution might be valued less or more depending on their status.[63] Dahlberg praises the Internet for its potential to liberate people from the social hierarchies and power relations that exist offline: "The 'blindness' of cyberspace to bodily identity... [is supposed to allow] people to interact as if they were equals. Arguments are said to be assessed by the value of the claims themselves and not the social position of the poster".[101] Gastil sees this feature as one of the strongest points of the Internet: "if computer-mediated interaction can consistently reduce the independent influence of status, it will have a powerful advantage over face-to-face deliberation".[102] While status cues are difficult to detect, perceptions about status converge, and this lessens stereotyping and prejudice.[94] It may be that people do feel more equal in online forums than they feel offline. Racism, ageism, and other kinds of discrimination against out-groups "seems to be diminishing because the cues to out-group status are not as obvious".[103] In addition, the Internet has rapidly and dramatically increased the capacity to develop, share and organize information,[104] realizing more equal access to information.[105] The relationship between the perception of public opinion and willingness to speak up is mainly measured through surveys.[106] Survey respondents are often asked whether they would reveal their opinions in a hypothetical situation, right after their perceptions of public opinion and their own opinions have been recorded. Whether hypothetical questions can reflect real-life cases has been questioned by some communication scholars, leading to criticism of this methodology as unable to capture what the respondent would do in a real-life situation.[107] A research study addressed this criticism by comparatively testing a spiral model both in a hypothetical survey and in a focus group.[107] The findings are in line with the critique of hypothetical survey questions, demonstrating a significant increase in the spiral of silence in focus groups.[107] Among the different approaches to survey methodology, cross-sectional study design is the leading method employed to analyze and test the theory.[106] Cross-sectional design involves the analysis of the relationship between public opinion and willingness to speak at one point in time.[106] While many researchers employ cross-sectional designs, some scholars have employed panel data.[108] Under this methodology, three specific approaches have been used. Noelle-Neumann herself tested the theory at the aggregate level. Using this approach, the change process is "observed by comparing the absolute share of people perceiving a majority climate with people willing to express their views over time."[109] The second approach that has been used in spiral of silence research is conducting separate regressions for each panel survey wave. The drawback of this approach is that individual change in climate and opinion perception is ignored.[109] The last approach, used by a few scholars in spiral of silence research, is to use change scores as dependent variables.
However, as intuitive as this approach may be, it "leads to well-documented difficulties with respect to statistical properties, such as regression to the mean or the negative correlation of the change score with the time one state".[109] Critics of this theory most often claim that individuals have different influences that affect whether they speak out or not. Research indicates that people fear isolation in their small social circles more than they do in the population at large. Within a large nation, one can always find a group of people who share one's opinions; however, people fear isolation from their close family and friends more. Research has demonstrated that this fear of isolation is stronger than the fear of being isolated from the entire public, as it is typically measured.[54] Scholars have argued that both personal characteristics and the cultures of different groups influence whether a person will willingly speak out. If one person "has a positive self-concept and lacks a sense of shame, that person will speak out regardless of how she or he perceives the climate of public opinion."[110] Another influence critics cite for people choosing not to speak out against public opinion is culture. Some cultures are more individualistic, which would support the expression of an individual's own opinion, while collectivist cultures support the overall group's opinion and needs. Gender can also be considered a cultural factor: in some cultures, women's "perception of language, not public opinion, forces them to remain quiet."[110] Scheufele and Moy further assert that certain conflict styles and cultural indicators should be used to understand these differences.[38] The nature of issues also influences the dynamic processes of the spiral of silence.[111] Yeric and Todd present three issue types: enduring issues, which are discussed by the public for a long time; emerging issues, which are new to the public but have the potential to become enduring issues; and transitory issues, which do not stay in the public consciousness for very long but come up from time to time.[112] The research suggests that issue type affects people's willingness to express their opinions; Facebook users are more likely to post their real thoughts on emerging issues such as gay marriage in an incongruent opinion climate.[111] Another criticism of spiral of silence research is that the research designs do not observe movement in the spiral over time. Critics propose that Noelle-Neumann's emphasis on time[18] in the formation of the spiral should be reflected in the methodology as well, and that the dynamic nature of the spiral model should be acknowledged.
They argue that the spiral of silence theory involves a "time factor", considering that changes in public opinion eventually lead to changes in people's assessments of public opinion.[109] Also, according to Spilchal, the spiral of silence theory "ignores the evidence of the historical development of public opinion, both in theory and practice, through the extension of suffrage, organisation of political propaganda groups, the establishment of pressure groups and political parties, the eligibility of ever wider circles of public officials and, eventually, the installation of several forms of direct democracy."[113] Some scholars also offer readings of the theory in contemporary society by pointing out that "it is not so much the actual statistical majority that generates pressure for conformity as it is the climate of opinion conveyed in large measure by the media."[19] Under the strong influence of media coverage, the climate of opinion "is not invariably an accurate reflection of the distribution of opinions within the polity."[19] Further, Scheufele and Moy[38] find problems in the operationalization of key terms, including willingness to speak out. This construct should be measured in terms of actually speaking out, not voting or other conceptually similar constructs. Conformity experiments have no moral component, yet morality is a key construct in the model. These conformity experiments, particularly those by Asch, form part of the base of the theory, and scholars question whether they are relevant to the development of the spiral of silence.[38] While the existence of groups holding opinions other than those supposed to be dominant in a society provides a space for some people to express seemingly unpopular opinions, the assumption within such groups that criticism of their underrepresented opinion equates to support for society's mainstream views is a source of false dilemmas. Some research indicates that such false dilemmas, especially when there are inconsistencies both in mainstream views and in organized opposition views, cause a spiral of silence that specifically silences logically consistent criticism from third, fourth, or further viewpoints.
https://en.wikipedia.org/wiki/Spiral_of_silence
"The quiet Australians" is an expression that was used by Australian politician Scott Morrison when his Liberal/National Coalition unexpectedly won the 2019 Australian federal election on 18 May 2019, meaning Morrison would continue as Prime Minister of Australia.[1][2] Describing the outcome as a miracle, Morrison stated that "the quiet Australians ... have won a great victory":[3] This is, this is the best country in the world in which to live. It is those Australians that we have been working for, for the last five and a half years since we came to Government, under Tony Abbott's leadership back in 2013. It has been those Australians who have worked hard every day, they have their dreams, they have their aspirations; to get a job, to get an apprenticeship, to start a business, to meet someone amazing. To start a family, to buy a home, to work hard and provide the best you can for your kids. To save your retirement and to ensure that when you're in your retirement, that you can enjoy it because you've worked hard for it. These are the quiet Australians who have won a great victory tonight. Morrison used this term prior to the election, stating "Too many of us have been quiet for too long and it's time to speak up", and "To those quiet Australians who are out there, now is not the time to turn back".[4][5] After the election, he compared Quiet Australians to Robert Menzies's "forgotten people" and John Howard's "battlers".[6] In December, when congratulating Boris Johnson for winning the 2019 United Kingdom general election, Morrison asked him to "say g'day to the quiet Britons for us".[7] The term "The Quiet Australians" has been referenced by media outlets and commentators.[2][8][9] Stan Grant wrote that "Retirees, middle-class parents, and those dependent on the mining industry for their livelihoods all felt they were in the firing line. Christian leaders now say that religious freedom was a sleeper issue that turned votes in critical marginal seats. Throughout the world, long-silent voices are making themselves heard and it is shaking up politics as usual. People are saying they want to belong and they want their leaders to put them first".[10] The Guardian compared Morrison's Quiet Australians to Richard Nixon's "silent majority".[11] Media outlets have been investigating who the Quiet Australians might be. The Australian referred to voters who ignored messaging that "presumed to tell them how to think and what to do" and voted for a Prime Minister who "spoke not over but right to them".[12] SBS News stated that "They don't make a lot of noise online or call into radio stations, they don't campaign in the streets or protest outside parliament".[13] The Australian Financial Review used data from the Australian Election Study to describe Quiet Australians as "increasingly disaffected with the political system", and noted that education had surpassed income as the demographic characteristic most correlated with a swing to either major party.
Moreover, the "election also saw the re-emergence of religion as a political force".[14] Panelists on ABC's Q&A discussed the 2019 election results in an episode titled "First Australians and Quiet Australians".[15] The Order of Australia Association uses the term "Quiet Australians" for its collection of stories about the service rendered by award recipients, intended to serve as a national resource to inspire and educate Australians.[16] Three years later, the 2022 Australian federal election resulted in a loss for Morrison's Coalition. The opposition Labor Party formed a majority government, with Anthony Albanese as the new Prime Minister. The Australian Greens had unprecedented success, and several Liberal seats were lost to teal independents.[17] In the lead-up to the election, media outlets and politicians invoked the quiet Australians. Senior Liberal MP and Treasurer Josh Frydenberg played down polls suggesting he was in danger of losing the blue-ribbon (very safe Liberal) seat of Kooyong, saying "There are many – as the Prime Minister calls them – quiet Australians out there."[18] Frydenberg ended up losing the seat to teal independent Monique Ryan. The Sydney Morning Herald published an opinion piece on various types of voters in Australia and quoted Rodney Tiffen, a Sydney University political science professor, who identified the label as more of a tactical grouping and an assertion that the loudest opinions may not be the majority, rather than a distinct group. The article compared the quiet Australians with the "Canberra bubble", a term for political insiders who are out of touch with the expectations of mainstream Australian society.[19] The Guardian argued that while Morrison targeted quiet Australians in the previous election, this time he was instead appealing to anxious Australian parents by focusing on transgender people in a "culture war".[20] After the election, media outlets attempted to explain the result by again invoking the quiet Australians. Sky News Australia argued that the Liberals should support the construction of a nuclear power industry, as an alternative to fossil fuels, to win back quiet Australians who had deserted the party for teal independents campaigning for action on anthropogenic climate change.[21] Paul Osborne, writing for the Australian Associated Press, argued that Morrison had angered the quiet Australians and turned them "cranky".[22] Peter Hartcher wrote in The Sydney Morning Herald that "the quiet Australians spoke and they said 'enough.'" Hartcher argued that Morrison had tried to transform the Liberals into a right-wing populist party and had thus lost the support of fiscal conservatives and liberals to the teals, while at the same time his failures of crisis leadership had lost working-class and middle-class seats to Labor. Hartcher identified all these groups as quiet Australians.[23] The Guardian commented on the Greens' campaign strategy of mass door-knocking and conversations with voters, reporting that the Greens planned to repeat this "social work" strategy to target quiet Australians.[24] Morrison stepped down as Liberal leader and commented on his election loss, saying he looked forward to going back to being a quiet Australian in the shire of Sydney.[25]
https://en.wikipedia.org/wiki/The_Quiet_Australians
InCanada, avisible minority(French:minorité visible) is defined by theGovernment of Canadaas "persons, other than aboriginal peoples, who are non-Caucasian in race or non-white in colour".[1]The term is used primarily as a demographic category byStatistics Canada, in connection with that country'sEmployment Equitypolicies. The qualifier "visible" was chosen by the Canadian authorities as a way to single out newer immigrant minorities from both Aboriginal Canadians and other "older" minorities distinguishable bylanguage(Frenchvs.English) andreligion(Catholicsvs.Protestants), which are "invisible" traits. The term visibleminorityis sometimes used as a euphemism for "non-white". This is incorrect, in that the government definitions differ: Aboriginal people are not considered to be visible minorities, but are not necessarily white either. In some cases, members of "visible minorities" may be visually indistinguishable from the majority population and/or may form amajority-minoritypopulation locally (as is the case inVancouverandToronto). Since the reform ofCanada's immigration laws in the 1960s, immigration has been primarily of peoples from areas other than Europe, many of whom are visible minorities within Canada. 9,639,200Canadiansidentified as a member of a visibleminority groupin the2021 Canadian Census, for 26.53% of the total population.[2][3]This was an increase from the2016 Census, when visible minorities accounted for 22.2% of the total population; from the2011 Census, when visible minorities accounted for 19.1% of the total population; from the2006 Census, when the proportion was 16.2%; from2001, when the proportion was 13.4%; over1996(11.2%); over1991(9.4%) and1981(4.7%). In 1961, the visible minority population was less than 1%. The increase represents a significant shift inCanada's demographicsrelated to record high immigration since the advent of its multiculturalism policies. 
Statistics Canada projects that by 2041, visible minorities will make up 38.2–43.0% of the total Canadian population, compared with 26.5% in 2021.[4][5][2][3] Statistics Canada further projects that visible minorities will make up 42.1–47.3% of Canada's working-age population (15 to 64 years), compared with 28.5% in 2021.[4][5][2][3] As per the 2021 census, of the provinces, British Columbia had the highest proportion of visible minorities, representing 34.4% of its population, followed by Ontario at 34.3%, Alberta at 27.8% and Manitoba at 22.2%.[2][3] Additionally, as of 2021, the largest visible minority group was South Asian Canadians with a population of approximately 2.6 million, representing roughly 7.1% of the country's population, followed by Chinese Canadians (4.7%) and Black Canadians (4.3%).[2][3] [Provincial breakdowns for Alberta, British Columbia, Manitoba, Ontario and Quebec accompany each census; the national averages were 26.5% (2021 Census),[19] 22.3% (2016 Census),[13] 19.1% (2011 Census),[12] 16.2% (2006 Census)[11] and 13.4% (2001 Census).[10]] According to the Employment Equity Act of 1995, the definition of visible minority is: "persons, other than aboriginal peoples, who are non-Caucasian in race or non-white in colour".[20] This definition can be traced back to the 1984 Report of the Abella Commission on Equality in Employment. The Commission described the term visible minority as an "ambiguous categorization", but for practical purposes interpreted it to mean "visibly non-white".[21] The Canadian government uses an operational definition by which it identifies the following groups as visible minorities: "Chinese, South Asian, Black, Filipino, Latin American, Southeast Asian, Arab, West Asian, Korean, Japanese, Visible minority, n.i.e. (n.i.e. means "not included elsewhere"), and Multiple visible minority".[22] If census respondents write in multiple entries, like "Black and Malaysian", "Black and French" or "South Asian and European", they would be included in the Black[23] or South Asian counts respectively.[24] However, the 2006 Census states that respondents who add a European ethnic response in combination with certain visible minority groups are not counted as visible minorities. They must add another non-European ethnic response to be counted as such: In contrast, in accordance with employment equity definitions, persons who reported 'Latin American' and 'White,' 'Arab' and 'White,' or 'West Asian' and 'White' have been excluded from the visible minority population. Likewise, persons who reported 'Latin American,' 'Arab' or 'West Asian' and who provided a European write-in response such as 'French' have been excluded from the visible minority population as well. These persons are included in the 'Not a visible minority' category. However, persons who reported 'Latin American,' 'Arab' or 'West Asian' and a non-European write-in response are included in the visible minority population.[25] The term "non-white" is used in the wording of the Employment Equity Act and in employment equity questionnaires distributed to applicants and employees.
This is intended as a shorthand phrase for those who are in the Aboriginal and/or visible minority groups.[26] The classification "visible minorities" has attracted controversy, both nationally and from abroad. TheUN Committee on the Elimination of Racial Discriminationhas stated that they have doubts regarding the use of this term since this term may be considered objectionable by certain minorities and recommended an evaluation of this term. In response, the Canadian government made efforts to evaluate how this term is used in Canadian society through commissioning of scholars and open workshops.[27] Since 2008, census data and media reports have suggested that the "visible minorities" label no longer makes sense in some large Canadian cities, due to immigration trends in recent decades. For example, "visible minorities" comprise themajorityof the population in many municipalities across the country, primarily in British Columbia, Ontario, and Alberta.[28] Yet another criticism of the label concerns the composition of "visible minorities". Critics have noted that the groups comprising "visible minorities" have little in common with each other, as they include both disadvantaged groups and non-disadvantaged groups.[29][30]The concept of visible minority has been cited in demography research as an example of astatistext, meaning a census category that has been contrived for a particular public policy purpose.[31][32]As the term "visible minorities" is seen as creating aracializedgroup, some advocate for "global majority" as a more appropriate alternative.[33]
https://en.wikipedia.org/wiki/Visible_minority
This article discusses the methods and results of comparing different electoral systems. There are two broad ways to compare voting systems: by measuring how well they perform in simulated or real elections, and by testing them against formally defined logical criteria. Voting methods can be evaluated by measuring their accuracy under random simulated elections aiming to be faithful to the properties of elections in real life. The first such evaluation was conducted by Chamberlin and Cohen in 1978, who measured the frequency with which certain non-Condorcet systems elected Condorcet winners.[1] The Marquis de Condorcet viewed elections as analogous to jury votes where each member expresses an independent judgement on the quality of candidates. Candidates differ in terms of their objective merit, but voters have imperfect information about the relative merits of the candidates. Such jury models are sometimes known as valence models. Condorcet and his contemporary Laplace demonstrated that, in such a model, voting theory could be reduced to probability by finding the expected quality of each candidate.[2] The jury model implies several natural concepts of accuracy for voting systems under different models. However, Condorcet's model is based on the extremely strong assumption of independent errors, i.e. voters will not be systematically biased in favor of one group of candidates or another. This is usually unrealistic: voters tend to communicate with each other, form parties or political ideologies, and engage in other behaviors that can result in correlated errors. Duncan Black proposed a one-dimensional spatial model of voting in 1948, viewing elections as ideologically driven.[4] His ideas were later expanded by Anthony Downs.[5] Voters' opinions are regarded as positions in a space of one or more dimensions; candidates have positions in the same space; and voters choose candidates in order of proximity (measured under Euclidean distance or some other metric). Spatial models imply a different notion of merit for voting systems: the more acceptable the winning candidate may be as a location parameter for the voter distribution, the better the system. A political spectrum is a one-dimensional spatial model. Neutral voting models try to minimize the number of parameters, serving as an example of the nothing-up-my-sleeve principle. The most common such model is the impartial anonymous culture model (or Dirichlet model). These models assume voters assign each candidate a utility completely at random (from a uniform distribution). Tideman and Plassmann conducted a study which showed that a two-dimensional spatial model gave a reasonable fit to 3-candidate reductions of a large set of electoral rankings. Jury models, neutral models, and one-dimensional spatial models were all inadequate.[6] They looked at Condorcet cycles in voter preferences (an example of which is A being preferred to B by a majority of voters, B to C and C to A) and found that the number of them was consistent with small-sample effects, concluding that "voting cycles will occur very rarely, if at all, in elections with many voters."
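To make the preceding models concrete, the following is a minimal Python sketch (written for illustration, not code from the cited studies; the parameter choices are assumptions) that generates random three-candidate elections under an impartial-culture model and under a two-dimensional spatial model, and counts how often no Condorcet winner exists — the kind of cycle-frequency question Tideman and Plassmann examined.

```python
# Illustrative sketch: frequency of elections with no Condorcet winner
# under (a) an impartial-culture model and (b) a 2-D spatial model.
import numpy as np

rng = np.random.default_rng(0)

def condorcet_winner(rankings, n_cands):
    """Return the Condorcet winner's index, or None if a cycle prevents one.
    rankings[v] lists candidate indices from most to least preferred by voter v."""
    n_voters = len(rankings)
    position = np.argsort(rankings, axis=1)  # position[v][c] = rank of candidate c on ballot v
    for a in range(n_cands):
        if all(np.sum(position[:, a] < position[:, b]) > n_voters / 2
               for b in range(n_cands) if b != a):
            return a
    return None

def impartial_culture(n_voters, n_cands):
    # every strict ranking is equally likely, independently for each voter
    return np.array([rng.permutation(n_cands) for _ in range(n_voters)])

def spatial_model(n_voters, n_cands, dims=2):
    # voters and candidates are points in a plane; voters rank by proximity
    voters = rng.normal(size=(n_voters, dims))
    cands = rng.normal(size=(n_cands, dims))
    dists = np.linalg.norm(voters[:, None, :] - cands[None, :, :], axis=2)
    return np.argsort(dists, axis=1)

def cycle_rate(model, trials=2000, n_voters=101, n_cands=3):
    return sum(condorcet_winner(model(n_voters, n_cands), n_cands) is None
               for _ in range(trials)) / trials

print("no Condorcet winner (impartial culture):", cycle_rate(impartial_culture))
print("no Condorcet winner (2-D spatial model):", cycle_rate(spatial_model))
```

In runs of this kind the spatial model typically produces far fewer cycles than the neutral model, consistent with the conclusion quoted above.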
The relevance of sample size had been studied previously by Gordon Tullock, who argued graphically that although finite electorates will be prone to cycles, the area in which candidates may give rise to cycling shrinks as the number of voters increases.[7] A utilitarian model views voters as ranking candidates in order of utility. The rightful winner, under this model, is the candidate who maximizes overall social utility. A utilitarian model differs from a spatial model in several important ways: It follows from the last property that no voting system which gives equal influence to all voters is likely to achieve maximum social utility. Extreme cases of conflict between the claims of utilitarianism and democracy are referred to as the 'tyranny of the majority'. See Laslier's, Merlin's, and Nurmi's comments in Laslier's write-up.[8] James Mill seems to have been the first to claim the existence of an a priori connection between democracy and utilitarianism – see the Stanford Encyclopedia article.[9] Suppose that the $i$th candidate in an election has merit $x_i$ (we may assume that $x_i \sim N(0,\sigma^2)$[10]), and that voter $j$'s level of approval for candidate $i$ may be written as $x_i + \varepsilon_{ij}$ (we will assume that the $\varepsilon_{ij}$ are i.i.d. $N(0,\tau^2)$). We assume that a voter ranks candidates in decreasing order of approval. We may interpret $\varepsilon_{ij}$ as the error in voter $j$'s valuation of candidate $i$ and regard a voting method as having the task of finding the candidate of greatest merit. Each voter will rank the better of two candidates higher than the less good with a determinate probability $p$ (which under the normal model outlined here is equal to $\tfrac{1}{2}+\tfrac{1}{\pi}\tan^{-1}\!\left(\tfrac{\sigma}{\tau}\right)$, as can be confirmed from a standard formula for Gaussian integrals over a quadrant[citation needed]). Condorcet's jury theorem shows that so long as $p>\tfrac{1}{2}$, the majority vote of a jury will be a better guide to the relative merits of two candidates than is the opinion of any single member. Peyton Young showed that three further properties apply to votes between arbitrary numbers of candidates, suggesting that Condorcet was aware of the first and third of them.[11] Robert F. Bordley constructed a 'utilitarian' model which is a slight variant of Condorcet's jury model.[12] He viewed the task of a voting method as that of finding the candidate who has the greatest total approval from the electorate, i.e. the highest sum of individual voters' levels of approval. This model makes sense even with $\sigma^2=0$, in which case $p$ takes the value $\tfrac{1}{2}+\tfrac{1}{\pi}\tan^{-1}\!\left(\tfrac{1}{n-1}\right)$, where $n$ is the number of voters. He performed an evaluation under this model, finding as expected that the Borda count was most accurate. A simulated election can be constructed from a distribution of voters in a suitable space. The illustration shows voters satisfying a bivariate Gaussian distribution centred on O. There are 3 randomly generated candidates, A, B and C. The space is divided into 6 segments by 3 lines, with the voters in each segment having the same candidate preferences. The proportion of voters ordering the candidates in any way is given by the integral of the voter distribution over the associated segment. The proportions corresponding to the 6 possible orderings of candidates determine the results yielded by different voting systems. Those which elect the best candidate, i.e.
the candidate closest to O (who in this case is A), are considered to have given a correct result, and those which elect someone else have exhibited an error. By looking at results for large numbers of randomly generated candidates the empirical properties of voting systems can be measured. The evaluation protocol outlined here is modelled on the one described by Tideman and Plassmann.[6] Evaluations of this type are commonest for single-winner electoral systems. Ranked voting systems fit most naturally into the framework, but other types of ballot (such as FPTP and Approval voting) can be accommodated with lesser or greater effort. The evaluation protocol can be varied in a number of ways. One of the main uses of evaluations is to compare the accuracy of voting systems when voters vote sincerely. If an infinite number of voters satisfy a Gaussian distribution, then the rightful winner of an election can be taken to be the candidate closest to the mean/median, and the accuracy of a method can be identified with the proportion of elections in which the rightful winner is elected. The median voter theorem guarantees that all Condorcet systems will give 100% accuracy (and the same applies to Coombs' method[14]). Evaluations published in research papers use multidimensional Gaussians, making the calculation numerically difficult.[1][15][16][17] The number of voters is kept finite and the number of candidates is necessarily small. The computation is much more straightforward in a single dimension, which allows an infinite number of voters and an arbitrary number $m$ of candidates. Results for this simple case are shown in the first table, which is directly comparable with Table 5 (1000 voters, medium dispersion) of the cited paper by Chamberlin and Cohen. The candidates were sampled randomly from the voter distribution and a single Condorcet method (Minimax) was included in the trials for confirmation. The relatively poor performance of the Alternative vote (IRV) is explained by the well-known and common source of error illustrated by the diagram, in which the election satisfies a univariate spatial model and the rightful winner B will be eliminated in the first round. A similar problem exists in all dimensions. An alternative measure of accuracy is the average distance of voters from the winner (in which smaller means better). This is unlikely to change the ranking of voting methods, but is preferred by people who interpret distance as disutility. The second table shows the average distance (in standard deviations) minus $\sqrt{2/\pi}$ (which is the average distance of a variate from the centre of a standard Gaussian distribution) for 10 candidates under the same model. James Green-Armytage et al. published a study in which they assessed the vulnerability of several voting systems to manipulation by voters.[18] They say little about how they adapted their evaluation for this purpose, mentioning simply that it "requires creative programming". An earlier paper by the first author gives a little more detail.[19] The number of candidates in their simulated elections was limited to 3. This removes the distinction between certain systems; for instance Black's method and the Dasgupta-Maskin method are equivalent on 3 candidates. The conclusions from the study are hard to summarise, but the Borda count performed badly; Minimax was somewhat vulnerable; and IRV was highly resistant.
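The single-dimension accuracy evaluation described above can be sketched in a few lines of code. The following is a rough, assumption-laden illustration (finite voter samples stand in for the infinite population, and the rules are simplified) that scores first-preference plurality, the Borda count and IRV by how often they elect the candidate closest to the voter median; by the median voter theorem a Condorcet method would score 100% here and is omitted. The offset parameter shifts the candidate distribution relative to the voters, anticipating the displacement experiment discussed further below.

```python
# Illustrative 1-D accuracy evaluation: how often does each rule elect
# the candidate nearest the voter median?
import numpy as np

rng = np.random.default_rng(1)

def rankings_1d(voters, cands):
    # each voter ranks candidates by distance on the 1-D spectrum
    return np.argsort(np.abs(voters[:, None] - cands[None, :]), axis=1)

def plurality(rankings, m):
    return np.bincount(rankings[:, 0], minlength=m).argmax()

def borda(rankings, m):
    # m-1 points for a first preference, m-2 for a second, and so on
    scores = np.zeros(m)
    for r in rankings:
        scores[r] += np.arange(m - 1, -1, -1)
    return scores.argmax()

def irv(rankings, m):
    active = set(range(m))
    while True:
        firsts = [next(c for c in r if c in active) for r in rankings]
        counts = {c: firsts.count(c) for c in active}
        top = max(counts, key=counts.get)
        if counts[top] > len(rankings) / 2 or len(active) == 2:
            return top
        active.remove(min(counts, key=counts.get))  # eliminate the weakest candidate

def trial(n_voters=1001, m=5, offset=0.0):
    voters = rng.normal(size=n_voters)
    cands = rng.normal(loc=offset, size=m)
    best = np.abs(cands - np.median(voters)).argmin()  # the "rightful" winner
    ranks = rankings_1d(voters, cands)
    return {name: int(rule(ranks, m) == best)
            for name, rule in [("plurality", plurality), ("Borda", borda), ("IRV", irv)]}

results = [trial() for _ in range(500)]
for name in ("plurality", "Borda", "IRV"):
    print(name, "accuracy:", sum(r[name] for r in results) / len(results))
```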
The authors showed that limiting any method to elections with no Condorcet winner (choosing the Condorcet winner when there was one) would never increase its susceptibility totactical voting. They reported that the 'Condorcet-Hare' system which uses IRV as a tie-break for elections not resolved by the Condorcet criterion was as resistant to tactical voting as IRV on its own and more accurate. Condorcet-Hare is equivalent toCopeland's methodwith an IRV tie-break in elections with 3 candidates. Some systems, and the Borda count in particular, are vulnerable when the distribution of candidates is displaced relative to the distribution of voters. The attached table shows the accuracy of the Borda count (as a percentage) when an infinite population of voters satisfies a univariate Gaussian distribution andmcandidates are drawn from a similar distribution offset byxstandard distributions. Red colouring indicates figures which are worse than random. Recall that all Condorcet methods give 100% accuracy for this problem. (And notice that the reduction in accuracy asxincreases is not seen when there are only 3 candidates.) Sensitivity to the distribution of candidates can be thought of as a matter either of accuracy or of resistance to manipulation. If one expects that in the course of things candidates will naturally come from the same distribution as voters, then any displacement will be seen as attempted subversion; but if one thinks that factors determining the viability of candidacy (such as financial backing) may be correlated with ideological position, then one will view it more in terms of accuracy. Published evaluations take different views of the candidate distribution. Some simply assume that candidates are drawn from the same distribution as voters.[16][18]Several older papers assume equal means but allow the candidate distribution to be more or less tight than the voter distribution.[20][1]A paper by Tideman and Plassmann approximates the relationship between candidate and voter distributions based on empirical measurements.[15]This is less realistic than it may appear, since it makes no allowance for the candidate distribution to adjust to exploit any weakness in the voting system. A paper by James Green-Armytage looks at the candidate distribution as a separate issue, viewing it as a form of manipulation and measuring the effects of strategic entry and exit. Unsurprisingly he finds the Borda count to be particularly vulnerable.[19] The task of a voting system under a spatial model is to identify the candidate whose position most accurately represents the distribution of voter opinions. This amounts to choosing a location parameter for the distribution from the set of alternatives offered by the candidates. Location parameters may be based on the mean, the median, or the mode; but since ranked preference ballots provide only ordinal information, the median is the only acceptable statistic. This can be seen from the diagram, which illustrates two simulated elections with the same candidates but different voter distributions. In both cases the mid-point between the candidates is the 51st percentile of the voter distribution; hence 51% of voters prefer A and 49% prefer B. If we consider a voting method to be correct if it elects the candidate closest to themedianof the voter population, then since the median is necessarily slightly to the left of the 51% line, a voting method will be considered to be correct if it elects A in each case. 
The mean of the teal distribution is also slightly to the left of the 51% line, but the mean of the orange distribution is slightly to the right. Hence if we consider a voting method to be correct if it elects the candidate closest to themeanof the voter population, then a method will not be able to obtain full marks unless it produces different winners from the same ballots in the two elections. Clearly this will impute spurious errors to voting methods. The same problem will arise for any cardinal measure of location; only the median gives consistent results. The median is not defined for multivariate distributions but the univariate median has a property which generalizes conveniently. The median of a distribution is the position whose average distance from all points within the distribution is smallest. This definition generalizes to thegeometric medianin multiple dimensions. The distance is often defined as a voter'sdisutility function. If we have a set of candidates and a population of voters, then it is not necessary to solve the computationally difficult problem of finding the geometric median of the voters and then identify the candidate closest to it; instead we can identify the candidate whose average distance from the voters is minimized. This is the metric which has been generally deployed since Merrill onwards;[20]see also Green-Armytage and Darlington.[19][16] The candidate closest to the geometric median of the voter distribution may be termed the 'spatial winner'. Data from real elections can be analysed to compare the effects of different systems, either by comparing between countries or by applying alternative electoral systems to the real election data. The electoral outcomes can be compared throughdemocracy indices, measures ofpolitical fragmentation,voter turnout,[21][22]political efficacyand various economic and judicial indicators. The practical criteria to assess real elections include the share ofwasted votes, the complexity ofvote counting,proportionalityof the representation elected based on parties' shares of votes, andbarriers to entryfor new political movements.[23]Additional opportunities for comparison of real elections arise throughelectoral reforms. A Canadian example of such an opportunity is seen in the City of Edmonton (Canada), which went fromfirst-past-the-post votingin1917 Alberta general electionto five-memberplurality block votingin1921 Alberta general election, to five-membersingle transferable votingin1926 Alberta general election, then to FPTP again in1959 Alberta general election. One party swept all the Edmonton seats in 1917, 1921 and 1959. Under STV in 1926, two Conservatives, one Liberal, one Labour and one United Farmers MLA were elected. Traditionally the merits of different electoral systems have been argued by reference to logical criteria. These have the form ofrules of inferencefor electoral decisions, licensing the deduction, for instance, that "ifEandE' are elections such thatR(E,E'), and ifAis the rightful winner ofE, thenAis the rightful winner ofE' ". The absolute criteria state that, if the set of ballots is a certain way, a certain candidate must or must not win. These are criteria that state that, if a certain candidate wins in one circumstance, the same candidate must (or must not) win in a related circumstance. These are criteria which relate to the process of counting votes and determining a winner. These are criteria that relate to a voter's incentive to use certain forms of strategy. 
They could also be considered as relative result criteria; however, unlike the criteria in that section, these criteria are directly relevant to voters; the fact that a method passes these criteria can simplify the process of figuring out one's optimal strategic vote. Ballots are broadly distinguishable into two categories,cardinalandordinal, where cardinal ballots request individual measures of support for each candidate and ordinal ballots request relative measures of support. A few methods do not fall neatly into one category, such as STAR, which asks the voter to give independent ratings for each candidate, but uses both the absolute and relative ratings to determine the winner. Comparing two methods based on ballot type alone is mostly a matter of voter experience preference, unless the ballot type is connected back to one of the other mathematical criterion listed here. Criterion A is "stronger" than B if satisfying A implies satisfying B. For instance, the Condorcet criterion is stronger than the majority criterion, because all majority winners are Condorcet winners. Thus, any voting method that satisfies the Condorcet criterion must satisfy the majority criterion. The following table shows which of the above criteria are met by several single-winner methods. Not every criterion is listed. type The concerns raised above are used bysocial choice theoriststo devise systems that are accurate and resistant to manipulation. However, there are also practical reasons why one system may be more socially acceptable than another, which fall under the fields ofpublic choiceandpolitical science.[8][16]Important practical considerations include: Other considerations includebarriers to entryto thepolitical competition[28]and likelihood ofgridlocked government.[29] Multi-winner electoral systems at their best seek to produce assemblies representative in a broader sense than that of making the same decisions as would be made by single-winner votes. They can also be route to one-party sweeps of a city's seats, if a non-proportional system, such asplurality block votingorticket voting, is used. Evaluating the performance of multi-winner voting methods requires different metrics than are used for single-winner systems. The following have been proposed. The following table shows which of the above criteria are met by several multiple winner methods.
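As a concrete illustration of the average-distance ("spatial winner") metric described earlier, the following minimal sketch (illustrative assumptions only) picks the candidate whose mean Euclidean distance from a simulated electorate is smallest, avoiding the harder computation of the geometric median itself.

```python
# Illustrative computation of the "spatial winner": the candidate minimizing
# the average distance to the voters.
import numpy as np

rng = np.random.default_rng(2)

voters = rng.normal(size=(10_000, 2))      # a bivariate Gaussian electorate
candidates = rng.normal(size=(4, 2))       # four candidates in the same space

# mean distance from the voters to each candidate
mean_dist = np.linalg.norm(voters[:, None, :] - candidates[None, :, :], axis=2).mean(axis=0)

spatial_winner = int(mean_dist.argmin())
print("average voter-to-candidate distances:", np.round(mean_dist, 3))
print("spatial winner: candidate", spatial_winner)
```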
https://en.wikipedia.org/wiki/Comparison_of_electoral_systems
Anelectionis a formalgroup decision-makingprocess whereby apopulationchooses an individual or multiple individuals to holdpublic office. Elections have been the usual mechanism by which modernrepresentative democracyhas operated since the 17th century.[1]Elections may fill offices in thelegislature, sometimes in theexecutiveandjudiciary, and forregional and local government. This process is also used in many other private andbusinessorganisations, from clubs tovoluntary associationsandcorporations. The global use of elections as a tool for selecting representatives in modern representative democracies is in contrast with the practice in the democraticarchetype, ancientAthens, where the elections were considered anoligarchicinstitution and most political offices were filled usingsortition, also known as allotment, by which officeholders were chosen by lot.[1] Electoral reformdescribes the process of introducing fairelectoral systemswhere they are not in place, or improving the fairness or effectiveness of existing systems.Psephologyis the study of results and otherstatisticsrelating to elections (especially with a view to predicting future results). Election is the fact of electing, or being elected. Toelectmeans "to select or make a decision", and so sometimes other forms of ballot such asreferendumsare referred to as elections, especially in theUnited States. Elections were used as early in history asancient Greeceandancient Rome, and throughout theMedieval periodto select rulers such as theHoly Roman Emperor(seeimperial election) and thepope(seepapal election).[2] ThePalaKingGopala(ruledc.750s– 770s CE) in early medievalBengalwas elected by a group of feudal chieftains. Such elections were quite common in contemporary societies of the region.[3][4]In theChola Empire, around 920 CE, inUthiramerur(in present-dayTamil Nadu), palm leaves were used for selecting the village committee members. The leaves, with candidate names written on them, were put inside a mud pot. To select the committee members, a young boy was asked to take out as many leaves as the number of positions available. This was known as theKudavolaisystem.[5][6] The first recorded popular elections of officials to public office, by majority vote, where all citizens were eligible both to vote and to hold public office, date back to theEphorsofSpartain 754 BC, under themixed governmentof theSpartan Constitution.[7][8]Atheniandemocratic elections, where all citizens could hold public office, were not introduced for another 247 years, until the reforms ofCleisthenes.[9]Under the earlierSolonian Constitution(c.574 BC), all Athenian citizens were eligible to vote in the popular assemblies, on matters of law and policy, and as jurors, but only the three highest classes of citizens could vote in elections. Nor were the lowest of the four classes of Athenian citizens (as defined by the extent of their wealth and property, rather than by birth) eligible to hold public office, through the reforms ofSolon.[10][11]The Spartan election of the Ephors, therefore, also predates the reforms of Solon in Athens by approximately 180 years.[12] Questions ofsuffrage, especially suffrage for minority groups, have dominated the history of elections. 
Males, the dominant cultural group in North America and Europe, often dominated theelectorateand continue to do so in many countries.[2]Early elections in countries such as theUnited Kingdomandthe United Stateswere dominated bylandedorruling classmales.[2]By 1920 all Western European and North American democracies had universal adult male suffrage (except Switzerland) and many countries began to considerwomen's suffrage.[2]Despite legally mandated universal suffrage for adult males, political barriers were sometimes erected to prevent fair access to elections (seecivil rights movement).[2] Elections are held in a variety of political, organizational, and corporate settings. Many countries hold elections to select people to serve in their governments, but other types of organizations hold elections as well. For example, many corporations hold elections amongshareholdersto select aboard of directors, and these elections may be mandated bycorporate law.[13]In many places, an election to the government is usually a competition among people who have already won aprimary electionwithin apolitical party.[14]Elections within corporations and other organizations often use procedures and rules that are similar to those of governmental elections.[15] The question of who may vote is a central issue in elections. The electorate does not generally include the entire population; for example, many countries prohibit those who are under the age of majority from voting. All jurisdictions require a minimum age for voting. In Australia,Aboriginal peoplewere not given the right to vote until 1962 (see1967 referendum entry) and in 2010 the federal government removed the rights of prisoners serving for three years or more to vote (a large proportion of whom were Aboriginal Australians). Suffrage is typically only for citizens of the country, though further limits may be imposed. In the European Union, one can vote in municipal elections if one lives in the municipality and is an EU citizen; the nationality of the country of residence is not required. In some countries, voting is required by law. Eligible voters may be subject to punitive measures such as a fine for not casting a vote. In Western Australia, the penalty for a first time offender failing to vote is a $20.00 fine, which increases to $50.00 if the offender refused to vote prior.[16] Historically the size of eligible voters, the electorate, was small having the size of groups or communities of privileged men likearistocratsand men of a city (citizens). With the growth of the number of people withbourgeoiscitizen rights outside of cities, expanding the term citizen, the electorates grew to numbers beyond the thousands. Elections with an electorate in the hundred thousands appeared in the final decades of theRoman Republic, by extending voting rights to citizens outside of Rome with theLex Julia of 90 BC, reaching an electorate of 910,000 and estimatedvoter turnoutof maximum 10% in 70 BC,[17]only again comparable in size to thefirst elections of the United States. At the same time theKingdom of Great Britainhad in 1780 about 214,000 eligible voters, 3% of the whole population.[18]Naturalizationcan reshape the electorate of a country.[19] Arepresentative democracyrequires a procedure to govern nomination for political office. In many cases, nomination for office is mediated throughpreselectionprocesses in organized political parties.[20] Non-partisan systems tend to be different from partisan systems as concerns nominations. 
In adirect democracy, one type ofnon-partisan democracy, any eligible person can be nominated. Although elections were used in ancient Athens, in Rome, and in the selection of popes and Holy Roman emperors, the origins of elections in the contemporary world lie in the gradual emergence of representative government in Europe and North America beginning in the 17th century. In some systems no nominations take place at all, with voters free to choose any person at the time of voting—with some possible exceptions such as through a minimum age requirement—in the jurisdiction. In such cases, it is not required (or even possible) that the members of the electorate be familiar with all of the eligible persons, though such systems may involveindirect electionsat larger geographic levels to ensure that some first-hand familiarity among potential electees can exist at these levels (i.e., among the elected delegates). Electoral systems are the detailed constitutional arrangements and voting systems that convert the vote into a political decision. The first step is for voters to cast theballots, which may be simple single-choice ballots, but other types, such as multiple choice orranked ballotsmay also be used. Then the votes are tallied, for which variousvote counting systemsmay be used. and the voting system then determines the result on the basis of the tally. Most systems can be categorized as eitherproportional,majoritarianormixed. Among the proportional systems, the most commonly used areparty-list proportional representation(list PR) systems, among majoritarian arefirst-past-the-postelectoral system (single winnerplurality voting) and different methods of majority voting (such as the widely usedtwo-round system).Mixed systemscombine elements of both proportional and majoritarian methods, with some typically producing results closer to the former (mixed-member proportional) or the other (e.g.parallel voting). Many countries have growing electoral reform movements, which advocate systems such asapproval voting,single transferable vote,instant runoff votingor aCondorcet method; these methods are also gaining popularity for lesser elections in some countries where more important elections still use more traditional counting methods. While openness andaccountabilityare usually considered cornerstones of a democratic system, the act of casting a vote and the content of a voter's ballot are usually an important exception. Thesecret ballotis a relatively modern development, but it is now considered crucial in mostfree and fair elections, as it limits the effectiveness of intimidation. When elections are called, politicians and their supporters attempt to influence policy by competing directly for the votes of constituents in what are called campaigns. Supporters for a campaign can be either formally organized or loosely affiliated, and frequently utilizecampaign advertising. It is common for political scientists to attempt to predict elections viapolitical forecastingmethods. The most expensive election campaign included US$7 billion spent on the2012 United States presidential electionand is followed by the US$5 billion spent on the2014 Indian general election.[21] The nature of democracy is that elected officials are accountable to the people, and they must return to the voters at prescribed intervals to seek theirmandateto continue in office. For that reason, most democratic constitutions provide that elections are held at fixed regular intervals. 
In the United States, elections for public offices are typically held between every two and six years in most states and at the federal level, with exceptions for elected judicial positions that may have longer terms of office. There is a variety of schedules, for example, presidents: thePresident of Irelandis elected every seven years, thePresident of Russiaand thePresident of Finlandevery six years, thePresident of Franceevery five years,President of the United Statesevery four years. Predetermined or fixed election dates have the advantage of fairness and predictability. They tend to greatly lengthen campaigns, and makedissolving the legislature(parliamentary system) more problematic if the date should happen to fall at a time when dissolution is inconvenient (e.g. when war breaks out). Other states (e.g., theUnited Kingdom) only set maximum time in office, and the executive decides exactly when within that limit it will actually go to the polls. In practice, this means the government remains in power for close to its full term, and chooses an election date it calculates to be in its best interests (unless something special happens, such as amotion of no-confidence). This calculation depends on a number of variables, such as its performance in opinion polls and the size of its majority. Rolling electionsare elections in which allrepresentativesin a body are elected, but these elections are spread over a period of time rather than all at once. Examples are the presidentialprimariesin theUnited States,Elections to the European Parliament(where, due to differing election laws in each member state, elections are held on different days of the same week) and, due to logistics, general elections inLebanonandIndia. The voting procedure in theLegislative Assemblies of the Roman Republicare also a classical example. In rolling elections, voters have information about previous voters' choices. While in the first elections, there may be plenty of hopeful candidates, in the last rounds consensus on one winner is generally achieved. In today's context of rapid communication, candidates can put disproportionate resources into competing strongly in the first few stages, because those stages affect the reaction of latter stages. In many of the countries with weakrule of law, the most common reason why elections do not meet international standards of being "free and fair" is interference from the incumbent government.Dictatorsmay use the powers of the executive (police, martial law, censorship, physical implementation of the election mechanism, etc.) to remain in power despite popular opinion in favour of removal. Members of a particular faction in a legislature may use the power of the majority or supermajority (passing criminal laws, and defining the electoral mechanisms including eligibility and district boundaries) to prevent the balance of power in the body from shifting to a rival faction due to an election.[2] Non-governmental entities can also interfere with elections, through physical force, verbal intimidation, or fraud, which can result in improper casting or counting of votes. Monitoring for and minimizing electoral fraud is also an ongoing task in countries with strong traditions of free and fair elections. 
Problems that prevent an election from being "free and fair" take various forms.[22] The electorate may be poorly informed about issues or candidates due to lack offreedom of the press, lack of objectivity in the press due to state or corporate control, or lack of access to news and political media.Freedom of speechmay be curtailed by the state, favouring certain viewpoints or statepropaganda. Schedulingfrequent electionscan also lead tovoter fatigue. Gerrymandering,wasted votesand manipulatingelectoral thresholdscan prevent that all votes count equally. Exclusion of opposition candidates from eligibility for office, needlessly highnomination ruleson who may be a candidate, are some of the ways the structure of an election can be changed to favour a specific faction or candidate. Those in power may arrest or assassinate candidates, suppress or even criminalize campaigning, close campaign headquarters, harass or beat campaign workers, or intimidate voters with violence.Foreign electoral interventioncan also occur, with the United States interfering between 1946 and 2000 in 81 elections andRussiaor theSoviet Unionin 36.[23]In 2018 the most intense interventions, utilizing false information, were byChinainTaiwanand byRussiainLatvia; the next highest levels were in Bahrain, Qatar and Hungary.[24] This can include falsifying voter instructions,[25]violation of thesecret ballot,ballot stuffing, tampering with voting machines,[26]destruction of legitimately cast ballots,[27]voter suppression, voter registration fraud, failure to validate voter residency, fraudulent tabulation of results, and use of physical force or verbal intimation at polling places. Other examples include persuading candidates not to run, such as through blackmailing, bribery, intimidation or physical violence. Asham election, orshow election, is an election that is held purely for show; that is, without any significant political choice or real impact on the results of the election.[28] Sham elections are a common event indictatorial regimesthat feel the need to feign the appearance of publiclegitimacy. Published results usually show nearly 100%voter turnoutand high support (typically at least 80%, and close to 100% in many cases) for the prescribed candidates or for thereferendumchoice that favours thepolitical partyin power. Dictatorial regimes can also organize sham elections with results simulating those that might be achieved in democratic countries.[29] Sometimes, only one government-approved candidate is allowed to run in sham elections with no opposition candidates allowed, or opposition candidates are arrested on false charges (or even without any charges) before the election to prevent them from running.[30][31][32] Ballots may contain only one "yes" option, or in the case of a simple "yes or no" question, security forces oftenpersecutepeople who pick "no", thus encouraging them to pick the "yes" option. In other cases, those who vote receive stamps in their passport for doing so, while those who did not vote (and thus do not receive stamps) are persecuted asenemies of the people.[33][34] Sham elections can sometimes backfire against the party in power, especially if the regime believes they are popular enough to win without coercion, fraud or suppressing the opposition. 
The most famous example of this was the1990 Myanmar general election, in which the government-sponsoredNational Unity Partysuffered a landslide defeat by the oppositionNational League for Democracyand consequently, the results were annulled.[35] Examples of sham elections include: the1929and1934electionsinFascist Italy, the1942 general electioninImperial Japan, those inNazi Germany,East Germanyother than the election in 1990, the1940 elections of Stalinist "People's Parliaments"to legitimise theSoviet occupationofEstonia,LatviaandLithuania, those inEgyptunderGamal Abdel Nasser,Anwar Sadat,Hosni Mubarak, andAbdel Fattah el-Sisi, those inBangladeshunderSheikh Hasina, those inRussiaunderVladimir Putin,[36]those in Syria underHafez Al-Assadand his sonBashar Al-Assad, those inVenezuelaUnderHugo ChavezandNicolas Maduroand most Notably in2018and2024, the1928,1935,1942,1949,1951and1958 electionsin Portugal, those inIndonesiaduringNew Orderregime, those inBelarusand Most Notably in2020, the1991and2019 Kazakh presidential elections, those inNorth Korea,[37]the1995and2002 presidential referendumsinSaddam Hussein's Iraq. InMexico, all of the presidential elections from1929to1982are considered to be sham elections, as theInstitutional Revolutionary Party(PRI) and its predecessors governed the country in ade factosingle-party system without serious opposition, and they won all of the presidential elections in that period with more than 70% of the vote. The first seriously competitive presidential election in modern Mexican history was that of1988, in which for the first time the PRI candidate faced two strong opposition candidates, though it is believed that the government rigged the result. The first fair election was held in1994, though the opposition did not win until2000. A predetermined conclusion is permanently established by the regime throughsuppressionof the opposition,coercionof voters,vote rigging, reporting several votes received greater than the number of voters, outright lying, or some combination of these. In an extreme example,Charles D. B. KingofLiberiawas reported to have won by 234,000 votes in the1927 general election, a "majority" that was over fifteen times larger than the number of eligible voters.[38] Some scholars argue that the predominance of elections in modernliberal democraciesmasks the fact that they are actually aristocratic selection mechanisms[39]that deny each citizen an equal chance of holding public office. Such views were expressed as early as the time ofAncient GreecebyAristotle.[39]According to Frenchpolitical scientistBernard Manin, the inegalitarian nature of elections stems from four factors: the unequal treatment of candidates by voters, the distinction of candidates required by choice, the cognitive advantage conferred by salience, and the costs of disseminating information.[40]These four factors result in the evaluation of candidates based on voters' partial standards of quality and social saliency (for example, skin colour and good looks). This leads to self-selection biases in candidate pools due to unobjective standards of treatment by voters and the costs (barriers to entry) associated with raising one's political profile. 
Ultimately, the result is the election of candidates who are superior (whether in actuality or as perceived within a cultural context) and objectively unlike the voters they are supposed to represent.[40] Evidence suggests that the concept of electing representatives was originally conceived to be different fromdemocracy.[41]Prior to the 18th century, some societies inWestern Europeusedsortitionas a means to select rulers, a method which allowed regular citizens to exercise power, in keeping with understandings of democracy at the time.[42]The idea of what constituted a legitimate government shifted in the 18th century to includeconsent, especially with the rise of theenlightenment. From this point onward, sortition fell out of favor as a mechanism for selecting rulers. On the other hand, elections began to be seen as a way for the masses to express popular consent repeatedly, resulting in the triumph of the electoral process until the present day.[43] This conceptual misunderstanding of elections as open and egalitarian when they are not innately so may thus be a root cause of theproblems in contemporary governance.[44]Those in favor of this view argue that the modern system of elections was never meant to give ordinary citizens the chance to exercise power - merely privileging their right to consent to those who rule.[45]Therefore, the representatives that modern electoral systems select for are too disconnected, unresponsive, and elite-serving.[39][46][47]To deal with this issue, various scholars have proposed alternative models of democracy, many of which include a return to sortition-based selection mechanisms. The extent to which sortition should be the dominant mode of selecting rulers[46]or instead be hybridised with electoral representation[48]remains a topic of debate.
https://en.wikipedia.org/wiki/Election
This is a list of electoral systems by country in alphabetical order. An electoral system is used to elect national legislatures and heads of state. [The country lists group electoral systems under headings such as: multi-member constituencies, majoritarian; multi-member constituencies, proportional; mixed majoritarian and proportional; other; indirect election; and no relevant electoral system information.]
https://en.wikipedia.org/wiki/List_of_electoral_systems_by_country
The matrix vote is a voting procedure which can be used when one group of people wishes to elect a smaller number of persons, each of whom is to have a different assignment. An example of its use is the situation in which a parliament elects a government of ten ministers. The matrix vote is proportional.[citation needed] It is ideally suited, therefore, to the formation of power-sharing governments, especially in post-conflict scenarios, and not least because it works without any resort to party or sectarian labels.[1][2]
https://en.wikipedia.org/wiki/Matrix_vote
In social choice theory and politics, a spoiler effect happens when a losing candidate affects the results of an election simply by participating.[1][2] Voting rules that are not affected by spoilers are said to be spoilerproof.[3][4] The frequency and severity of spoiler effects depend substantially on the voting method. Instant-runoff voting (IRV), the two-round system (TRS), and especially first-past-the-post (FPP) without winnowing or primary elections[5] are highly sensitive to spoilers (though IRV and TRS less so in some circumstances), and all three rules are affected by center-squeeze and vote splitting.[6][7][8][9] Majority-rule (or Condorcet) methods are only rarely affected by spoilers, which are limited to rare[10][11] situations called cyclic ties.[10][11][12] Rated voting systems are not subject to Arrow's theorem. Whether such methods are spoilerproof depends on the nature of the rating scales the voters use to express their opinions.[13][3][6][14] Spoiler effects can also occur in some methods of proportional representation, such as the single transferable vote (STV or RCV-PR) and the largest remainders method of party-list representation, where it is called a new party paradox. A new party entering an election causes some seats to shift from one unrelated party to another, even if the new party wins no seats.[15] This kind of spoiler effect is avoided by divisor methods and proportional approval.[15]: Thm.8.3 In decision theory, independence of irrelevant alternatives is a fundamental principle of rational choice which says that a decision between two outcomes, A or B, should not depend on the quality of a third, unrelated outcome C. A famous joke by Sidney Morgenbesser illustrates this principle:[16] A man is deciding whether to order apple, blueberry, or cherry pie before settling on apple. The waitress informs him that the cherry pie is very good and a favorite of most customers. The man replies "in that case, I'll have the blueberry." Politicians and social choice theorists have long argued for the unfairness of spoiler effects. The mathematician and political economist Nicolas de Condorcet was the first to study the spoiler effect, in the 1780s.[17] Voting systems that violate independence of irrelevant alternatives are susceptible to being manipulated by strategic nomination. Such systems may produce an incentive to entry, increasing a candidate's chances of winning if similar candidates join the race, or an incentive to exit, reducing the candidate's chances of winning. Some systems are particularly infamous for their ease of manipulation, such as the Borda count, which exhibits a particularly severe entry incentive, letting any party "clone their way to victory" by running a large number of candidates. This famously forced de Borda to concede that "my system is meant only for honest men,"[18][19] and eventually led to its abandonment by the French Academy of Sciences.[19] Other systems exhibit an exit incentive. The vote splitting effect in plurality voting demonstrates this method's strong exit incentive: if multiple candidates with similar views run in an election, their supporters' votes will be diluted, which may cause a unified opposition candidate to win despite having less support.
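The vote-splitting mechanism described above can be seen in a small worked example. The profile below is invented purely for illustration (the names and numbers are hypothetical): two similar candidates divide a like-minded majority under first-preference plurality, while the pairwise counts show that either of them would defeat the plurality winner head-to-head.

```python
# Toy vote-splitting example: A1 and A2 split a 60-voter bloc, so B wins
# under first-preference plurality despite losing every head-to-head count.
from collections import Counter
from itertools import combinations

# (number of ballots, ranking from most to least preferred)
profile = [
    (35, ["A1", "A2", "B"]),
    (25, ["A2", "A1", "B"]),
    (40, ["B", "A1", "A2"]),
]

# First-preference plurality totals
plurality = Counter()
for count, ranking in profile:
    plurality[ranking[0]] += count
print("plurality totals:", dict(plurality))  # B leads with 40 of 100 votes

# Pairwise (head-to-head) comparisons
candidates = ["A1", "A2", "B"]
for x, y in combinations(candidates, 2):
    x_over_y = sum(c for c, r in profile if r.index(x) < r.index(y))
    y_over_x = sum(c for c, r in profile if r.index(y) < r.index(x))
    print(f"{x} vs {y}: {x_over_y}-{y_over_x}")
# A1 beats B 60-40: removing the "spoiler" A2, or using a majority-rule
# method, would change the winner from B to A1.
```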
This effect encourages groups of similar candidates to form an organization to make sure they don't step on each other's toes.[20] Differentelectoral systemshave different levels of vulnerability to spoilers. In general, spoilers are common withplurality voting, somewhat common inplurality-runoff methods, rare withmajoritarian methods, and with a varying level of spoiler vulnerability with mostrated voting methods.[note 1] In cases where there are many similar candidates, spoiler effects occur most often infirst-preference plurality (FPP).[citation needed]For example, in the United States, vote splitting is common inprimaries, where many similar candidates run against each other. The purpose of a primary election is to eliminate vote splitting among candidates from the same party in thegeneral electionby running only one candidate. In a two-party system, party primaries effectively turnFPPinto atwo-round system.[21][22][23] Vote splitting is the most common cause of spoiler effects inFPP. In these systems, the presence of many ideologically-similar candidates causes their vote total to be split between them, placing these candidates at a disadvantage.[24][25]This is most visible in elections where a minor candidate draws votes away from a major candidate with similar politics, thereby causing a strong opponent of both to win.[24][26] Plurality-runoff methods like thetwo-round systemandRCVstill experience vote-splitting in each round. This produces a kind of spoiler effect called acenter squeeze. Compared to plurality without primaries, the elimination of weak candidates in earlier rounds reduces their effect on the final results; however, spoiled elections remain common compared to other systems.[25][27][28]As a result, instant-runoff voting still tends towardstwo-party rulethrough the process known asDuverger's law.[6][29]A notable example of this can be seen inAlaska's 2024 race, where party elites pressured candidateNancy Dahlstrominto dropping out to avoid a repeat of thespoiled 2022 election.[30][31][32] Spoiler effects rarely occur when usingtournament solutions, where candidates are compared in one-on-one matchups to determine relative preference. For each pair of candidates, there is a count for how many voters prefer the first candidate in the pair to the second candidate. The resulting table of pairwise counts eliminates the step-by-step redistribution of votes, which is usually the cause for spoilers in other methods.[12]This pairwise comparison means that spoilers can only occur when there is aCondorcet cycle, where there is no single candidate preferred to all others.[12][33][34] Theoretical models suggest that somewhere between 90% and 99% of real-world elections have a Condorcet winner,[33][34]and the first Condorcet cycle in a ranked American election was found in 2021.[35]Some systems like theSchulze methodandranked pairshave stronger spoiler resistance guarantees that limit which candidates can spoil an election without aCondorcet winner.[36]: 228–229 Rated voting methods ask voters to assign each candidate a score on a scale (e.g. rating them from 0 to 10), instead of listing them from first to last.Highest medianandscore (highest mean) votingare the two most prominent examples of rated voting rules. Whenever voters rate candidates independently, the rating given to one candidate does not affect the ratings given to the other candidates. 
Any new candidate cannot change the winner of the race without becoming the winner themselves, which would disqualify them from the definition of a spoiler. For this to hold, in some elections, some voters must use less than their full voting power despite having meaningful preferences among viable candidates. The outcome of rated voting depends on the scale used by the voter or assumed by the mechanism.[37] If the voters use relative scales, i.e. scales that depend on what candidates are running, then the outcome can change if candidates who don't win drop out.[38] Empirical results from panel data suggest that judgments are at least in part relative.[39][40] Thus, rated methods, as used in practice, may exhibit a spoiler effect caused by the interaction between the voters and the system, even if the system itself passes IIA given an absolute scale (a minimal sketch of this scale effect appears at the end of this section).

A spoiler campaign in the United States is often one that cannot realistically win but can still determine the outcome by pulling support from a more competitive candidate.[41] The two major parties in the United States, the Republican Party and Democratic Party, have regularly won 98% of all state and federal seats.[42] The US presidential elections most consistently cited as having been spoiled by third-party candidates are 1844[43] and 2000.[44][45][46][43] The 2016 election is more disputed as to whether it contained spoiler candidates or not.[47][48][49] For the 2024 presidential election, Republican lawyers and operatives have fought to keep right-leaning third parties like the Constitution Party off swing state ballots[50] while working to get Cornel West on battleground ballots.[51] Democrats have helped some right-leaning third parties gain ballot access while challenging ballot access of left-leaning third parties like the Green Party.[52] Barry Burden argues that they have almost no chance of winning the 2024 election but are often motivated by particular issues.[53]

Third-party candidates are always controversial because almost anyone could play spoiler.[54][55] This is especially true in close elections, where the chances of a spoiler effect increase.[56] Strategic voting, especially prevalent during high-stakes elections with high political polarization, often leads to a third party that underperforms its poll numbers, with voters wanting to make sure their least favorite candidate is not in power.[42][57][58] Third-party campaigns are more likely to result in the candidate a third-party voter least wants in the White House.[55] Third-party candidates prefer to focus on their platform rather than on their impact on the frontrunners.[55]

An unintentional spoiler is one that has a realistic chance of winning but falls short and affects the outcome of the election.
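To make the role of the rating scale concrete, the following Python sketch (an illustration with invented candidates A, B, and C and invented utilities, not data from any of the elections above; the helper functions are hypothetical, not from a library) tallies the same ballots two ways: on a fixed, absolute 0–10 scale, and after each voter rescales their ballot to the candidates actually running (a min–max "relative" scale). With absolute scores, a losing candidate's withdrawal leaves the remaining totals untouched; with relative scores, the same withdrawal flips the winner.

def score_totals(ballots, candidates):
    # Sum the scores each candidate receives over all ballots.
    return {c: sum(b[c] for b in ballots) for c in candidates}

def normalize(ballot, candidates, top=10):
    # Rescale a ballot so the voter's best running candidate gets `top` and the worst gets 0.
    lo = min(ballot[c] for c in candidates)
    hi = max(ballot[c] for c in candidates)
    if hi == lo:  # voter is indifferent among those still running
        return {c: 0 for c in candidates}
    return {c: top * (ballot[c] - lo) / (hi - lo) for c in candidates}

# Underlying (absolute) utilities on a 0-10 scale.
ballots = 3 * [{"A": 10, "B": 9, "C": 0}] + 2 * [{"A": 0, "B": 9, "C": 10}]

full = ["A", "B", "C"]
reduced = ["A", "B"]  # C has withdrawn

print(score_totals(ballots, full))      # A=30, B=45, C=20: B wins
print(score_totals(ballots, reduced))   # absolute scale: B still wins after C withdraws
print(score_totals([normalize(b, reduced) for b in ballots], reduced))
                                        # relative scale: A=30, B=20, so A now wins

The flip in the last line comes entirely from the voters renormalizing their ballots, not from the tallying rule itself, which is the voter–system interaction described above.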
Some third-party candidates express ambivalence about which major party they prefer and their possible role as spoiler[59][60] or deny the possibility.[61]

In Burlington, Vermont's second IRV election, spoiler Kurt Wright knocked out Democrat Andy Montroll in the second round, leading to the election of Bob Kiss, despite the election results showing most voters preferred Montroll to Kiss.[62] The results of every possible one-on-one matchup can be computed from the ballots. [Table of pairwise vote totals among Kiss, Montroll, Simpson, Smith, and Wright omitted.] In the resulting overall preference ranking, Montroll was preferred over Kiss by 54% of voters, over Wright by 56%, and over Smith by 60%. Had Wright not run, Montroll would have won instead of Kiss.[62][63]

Because all ballots were fully released, it is possible to reconstruct the winners under other voting methods. While Wright would have won under plurality, Kiss won under IRV, and would have won under a two-round vote or a traditional nonpartisan blanket primary. Montroll, being the majority-preferred candidate, would have won if the ballots were counted using ranked pairs (or any other Condorcet method).[64]

In Alaska's first-ever IRV election, Nick Begich was eliminated in the first round to advance Mary Peltola and Sarah Palin. However, the pairwise comparison shows that Begich was the Condorcet winner while Palin was both the Condorcet loser and a spoiler.[65]

In the wake of the election, a poll found 54% of Alaskans, including a third of Peltola voters, supported a repeal of RCV.[67][68][69] Observers noted such pathologies would have occurred under Alaska's previous primary system as well, leading several to suggest Alaska adopt any one of several alternatives without this behavior.[70]
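The Burlington and Alaska cases both follow the center-squeeze pattern, which can be reproduced on a small artificial profile. The Python sketch below uses hypothetical ballots and candidate labels (not the actual Burlington or Alaska data) and deliberately simplified tallying functions; it elects three different candidates from the same ranked ballots under plurality, instant-runoff, and pairwise (Condorcet) counting.

from collections import Counter

ballots = (40 * [["Right", "Center", "Left"]] +
            8 * [["Center", "Right", "Left"]] +
           20 * [["Center", "Left", "Right"]] +
           32 * [["Left", "Center", "Right"]])

def plurality_winner(ballots):
    # The candidate with the most first-place votes wins.
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def irv_winner(ballots):
    # Repeatedly drop the candidate with the fewest top preferences
    # until someone holds a majority of the remaining top preferences.
    remaining = {c for b in ballots for c in b}
    while True:
        tally = Counter(next(c for c in b if c in remaining) for b in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return leader
        remaining.remove(min(tally, key=tally.get))

def condorcet_winner(ballots):
    # A candidate who beats every rival in one-on-one comparisons, if one exists.
    candidates = {c for b in ballots for c in b}
    for c in candidates:
        if all(sum(b.index(c) < b.index(d) for b in ballots) * 2 > len(ballots)
               for d in candidates - {c}):
            return c
    return None  # no Condorcet winner: a cyclic tie

print(plurality_winner(ballots), irv_winner(ballots), condorcet_winner(ballots))
# Prints: Right Left Center -- three different results from the same ballots.

As in Burlington, the plurality winner, the IRV winner, and the majority-preferred (Condorcet) candidate all differ, and the Condorcet candidate is the first one eliminated under IRV.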
https://en.wikipedia.org/wiki/Spoiler_effect
Psephology (/sɪˈfɒlədʒi/; from Greek ψῆφος, psephos, 'pebble') is the study of elections and voting.[1] Psephology attempts to both forecast and explain election results. The term is more common in Britain and in those English-speaking communities that rely heavily on the British standard of the language.[citation needed]

Psephology uses historical precinct voting data, public opinion polls, campaign finance information and similar statistical data. The term was coined in 1948 by W. F. R. Hardie (1902–1990) in the United Kingdom after R. B. McCallum, a friend of Hardie's, requested a word to describe the study of elections. Its first documented usage in writing appeared in 1952.[2]

The term draws from the Greek word for pebble, as the ancient Greeks used pebbles to vote. (Similarly, the word "ballot" is derived from the medieval French word "ballotte", meaning a small ball.[3])

Psephology is a division of political science that deals with the examination as well as the statistical analysis of elections and polls. People who practise psephology are called psephologists. A few of the major tools that are used by a psephologist are historical precinct voting data, campaign finance information, and other related data. Public opinion polls also play an important role in psephology. Psephology also has various applications, specifically in analysing the results of election returns for current indicators, as opposed to predictive purposes. For instance, the Gallagher Index measures the amount of proportional representation in an election (a worked example is sketched below).

Degrees in psephology are not offered (instead, a psephologist might have a degree in political science and/or statistics). Knowledge of demographics, statistical analysis and politics (especially electoral systems and voting behaviour) are prerequisites for becoming a psephologist. Notable psephologists include a number of political scientists, statisticians, and election broadcasters.
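As an illustration of the Gallagher Index mentioned above, the short Python sketch below computes the standard least-squares disproportionality formula, LSq = sqrt(0.5 * sum((v_i - s_i)^2)), over each party's vote share v_i and seat share s_i in percent. The party shares used here are invented for the example.

from math import sqrt

def gallagher_index(vote_pct, seat_pct):
    # Least-squares index over parties, with vote and seat shares given in percent.
    return sqrt(0.5 * sum((v - s) ** 2 for v, s in zip(vote_pct, seat_pct)))

votes = [42.0, 31.0, 18.0, 9.0]   # each party's share of the vote (%)
seats = [55.0, 30.0, 10.0, 5.0]   # each party's share of the seats (%)

print(round(gallagher_index(votes, seats), 2))  # about 11.18 for these made-up figures

A value of 0 indicates a perfectly proportional result; single-member plurality systems typically produce much higher values than party-list proportional systems.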
https://en.wikipedia.org/wiki/Psephology
Adultismis a bias or prejudice against children or youth.[1][2]It has been defined as "the power adults have over children", or the abuse thereof,[2]as well as "prejudice and accompanying systematic discrimination against young people",[3]and "bias towards adults... and the social addiction to adults, including their ideas, activities, and attitudes". It can be considered a subtype ofageism, or prejudice and discrimination due to age in general. This phenomenon is said to affect families, schools, justice systems and the economy, in addition to other areas of society. Its impacts are largely regarded as negative, except in cases related tochild protectionand the overridingsocial contract.[4]Increased study of adultism has recently occurred in the fields of education,psychology,civic engagement,higher educationand further, with contributions fromEurope,North AmericaandSouth America.[5] According to one writer, "the term 'adultism' has been varyingly employed since at least the 1840s, when it was used to describe traits of an animal that matured faster than expected."[6]More familiar to current usage, the word was used by Patterson Du Bois in 1903,[7]with a meaning broadly similar to that used by Jack Flasher in a journal article seventy-five years later. In France in the 1930s, the same word was used for an entirely different topic, the author describing a condition wherein a child possessed adult-like "physique and spirit": That 1930s usage of the word in France was superseded by a late 1970s American journal article proposing that adultism is the abuse of the power that adults have over children. The author identified examples not only in parents but also in teachers, psychotherapists, the clergy, police, judges, and juries.[2] John Bell in 1995 defined adultism as "behaviors and attitudes based on the assumptions that adults are better than young people, and entitled to act upon young people without agreement".[9][10]Adam Fletcher in 2016 called it "an addiction to the attitudes, ideas, beliefs, and actions of adults."[11]Adultism is popularly used to describe anydiscriminationagainst young people and is sometimes distinguished fromageism, which is simply prejudice on the grounds of age, although it commonly refers to prejudice against older people, not specifically against youth. 
It has been suggested that adultism, which is associated with a view of the self that trades on rejecting and excluding child-subjectivity, has always been present in Western culture.[12]

Fletcher[4] suggests that adultism has three main expressions in society. A study by the Crisis Prevention Institute on the prevalence of adultism found an increasing number of local youth-serving organizations addressing the issue.[13] For instance, a local program (Youth Together) in Oakland, California, describes the impact of adultism, which "hinders the development of youth, in particular, their self-esteem and self-worth, ability to form positive relationships with caring adults, or even see adults as allies", on their website.[14]

Adultism has been used to describe the oppression of children and young people by adults, which is seen as having the same power dimension in the lives of young people as racism and sexism.[15] When used in this sense it is a generalization of paternalism, describing the force of all adults rather than only male adults, and may be witnessed in the infantilization of children and youth. Pedophobia (the fear of children) and ephebiphobia (the fear of youth) have been proposed as antecedents to adultism.[16]

Terms such as adult privilege, adultarchy, and adultcentrism have been proposed as descriptions of particular aspects or variants of adultism.[17]

National Youth Rights Association describes discrimination against youth as ageism, taking that word as any form of discrimination against anyone due to their age. Advocates of using the term 'ageism' for this issue also believe it makes common cause with older people fighting against their own form of age discrimination.[18] However, a national organization called Youth On Board counters this based on a different meaning of "ageism", arguing that "addressing adultist behavior by calling it ageism is discrimination against youth in itself."[19]

In his seminal 1978 article, Flasher says that adultism is born of the belief that children are inferior, and he says it can be manifested as excessive nurturing, possessiveness, or over-restrictiveness, all of which are consciously or unconsciously geared toward excessive control of a child.[20] Adultism has been associated with psychological projection and splitting, a process whereby 'the one with the power attributes his or her unconscious, unresolved sexual and aggressive material' to the child – 'both the dark and the light side...hence the divine child/deficit child'[21] split. Theologians Heather Eaton and Matthew Fox proposed, "Adultism derives from adults repressing the inner child."[22] John Holt stated, "An understanding of adultism might begin to explain what I mean when I say that much of what is known as children's art is an adult invention."[23] That perspective is seemingly supported by Maya Angelou, who remarked:

We are all creative, but by the time we are three or four years old, someone has knocked the creativity out of us. Some people shut up the kids who start to tell stories. Kids dance in their cribs, but someone will insist they sit still. By the time the creative people are ten or twelve, they want to be like everyone else.[24]

A 2006/2007 survey conducted by the Children's Rights Alliance for England and the National Children's Bureau asked 4,060 children and young people whether they have ever been treated unfairly based on various criteria (race, age, sex, sexual orientation, etc.).
A total of 43% of British youth surveyed reported experiencing discrimination based on their age, substantially more than other categories of discrimination like sex (27%), race (11%), or sexual orientation (6%).[25]

In addition to Fletcher,[4] other experts have identified multiple forms of adultism, offering a typology that includes the above categories of internalized adultism,[26] institutionalized adultism,[27] cultural adultism, and other forms.

In a publication published by the W. K. Kellogg Foundation, University of Michigan professor Barry Checkoway asserts that internalized adultism causes youth to "question their own legitimacy, doubt their ability to make a difference" and perpetuate a "culture of silence" among young people.[28]

"Adultism convinces us as children that children don't really count," reports an investigative study, and it "becomes extremely important to us [children] to have the approval of adults and be 'in good' with them, even if it means betraying our fellow children. This aspect of internalized adultism leads to such phenomena as tattling on our siblings or being the 'teacher's pet,' to name just two examples." Other examples of internalized adultism include many forms of violence imposed upon children and youth by adults who are reliving the violence they faced as young people, such as corporal punishment, sexual abuse, verbal abuse, and community incidents that include store policies prohibiting youth from visiting shops without adults, and police, teachers, or parents chasing young people from areas without just cause.[9]

Institutional adultism may be apparent in any instance of systemic bias, where formalized limitations or demands are placed on people simply because of their young age. Policies, laws, rules, organizational structures, and systematic procedures each serve as mechanisms to leverage, perpetuate, and instill adultism throughout society. These limitations are often reinforced through physical force, coercion or police actions and are often seen as double standards.[29] This treatment is increasingly seen as a form of gerontocracy.[30][31]

Institutions perpetuating adultism may include the fiduciary, legal, educational, communal, religious, and governmental sectors of a community. Social science literature has identified adultism as "within the context of the social inequality and the oppression of children, where children are denied human rights and are disproportionately victims of maltreatment and exploitation."[32] Institutional adultism may be present in a range of such settings, as well as in legal issues affecting adolescence and in total institutions.

Cultural adultism is a much more ambiguous, yet much more prevalent, form of "discrimination or intolerance towards youth".[36] Any restriction or exploitation of people because of their youth, as opposed to their ability, comprehension, or capacity, may be said to be adultist. These restrictions are often attributed to euphemisms afforded to adults on the basis of age alone, such as "better judgment" or "the wisdom of age". A parenting magazine editor comments, "Most of the time people talk differently to kids than to adults, and often they act differently, too."[37]

Discrimination on the basis of age is increasingly recognized as a form of bigotry in social and cultural settings around the world.
An increasing number of social institutions are acknowledging the positions of children and teenagers as anoppressedminority group.[38]Many youth are rallying against the adultist myths spread through mass media from the 1970s through the 1990s.[39][40] Research compiled from two sources (a Cornell University nationwide study, and a Harvard University study on youth) has shown that social stratification between age groups causesstereotypingand generalization; for instance, the media-perpetuated myth that all adolescents are immature, violent and rebellious.[41]Opponents of adultism contend that this has led to growing number of youth, academics, researchers, and other adults rallying against adultism and ageism, such as organizing education programs, protesting statements, and creating organizations devoted to publicizing the concept and addressing it.[42] Simultaneously, research shows that young people who struggle against adultism within community organizations have a high rate of impact upon said agencies, as well as their peers, the adults who work with them, and the larger community to which the organization belongs.[43] There may be many negative effects of adultism, includingephebiphobiaand a growinggeneration gap. A reactive social response to adultism takes the form of thechildren's rights movement, led by young people who strike against being exploited for their labor. Numerous popular outlets are employed to strike out against adultism, particularlymusicand movies. Additionally, many youth-led social change efforts have inherently responded to adultism, particularly those associated withyouth activismandstudent activism, each of which in their own respects have struggled with the effects of institutionalized and cultural adultism.[42] A growing number of governmental, academic, and educational institutions around the globe have created policy, conducted studies, and created publications that respond to many of the insinuations and implications of adultism. Much of popular researcherMargaret Mead's work can be said to be a response to adultism.[44]Current researchers whose work analyzes the effects of adultism include sociologistMike Males[45]and critical theoristHenry Giroux. The topic has recently been addressed inliberation psychologyliterature, as well.[46] Any inanimate or animate exhibition of adultism is said to be "adultist". This may include behaviors, policies, practices, institutions, or individuals. It is legal in most countries, towards people under 18. EducatorJohn Holtproposed that teaching adults about adultism is a vital step to addressing the effects of adultism,[47]and at least one organization[48]and one curriculum[49]do just that. Several educators have created curricula that seek to teach youth about adultism, as well.[50]Currently, organizations responding to the negative effects of adultism include the United Nations, which has conducted a great deal of research[51]in addition to recognizing the need tocounter adultismthrough policy and programs. TheCRChas particular Articles (5 and 12) which are specifically committed to combating adultism.[citation needed]The international organizationHuman Rights Watchhas done the same.[52] Common practice accepts the engagement ofyouth voiceand the formation ofyouth-adult partnershipsas essential steps to resisting adultism.[53] Some ways to challenge adultism also include youth-led programming and participating inyouth-led organizations. 
These are both ways of children stepping up and taking action to call out the bias towards adults. Youth-led programming allows the voices of the youth to be heard and taken into consideration.[54]Taking control of their autonomy can help children take control of their sexuality, as well. Moving away from an adultist framework leads to moving away from the idea that children are not capable of handling information about sex and their own sexuality. Accepting that children are ready to learn about themselves will decrease the amount of misinformation spread to them by their peers and allow them to receive accurate information from individuals educated on the topic.[55]
https://en.wikipedia.org/wiki/Adultism
Adolescence(fromLatinadolescere'to mature') is a transitional stage ofphysicalandpsychologicaldevelopmentthat generally occurs during the period frompubertytoadulthood(typically corresponding to theage of majority).[1][2]Adolescence is usually associated with theteenageyears,[3][4]but its physical, psychological or cultural expressions may begin earlier or end later. Puberty typically begins duringpreadolescence, particularly in females.[4][5]Physical growth (particularly in males) and cognitive development can extend past the teens. Age provides only a rough marker of adolescence, and scholars have not agreed upon a precise definition. Some definitions start as early as 10 and end as late as 30.[6][7][8]TheWorld Health Organizationdefinition officially designates adolescence as the phase of life from ages 10 to 19.[9] Puberty is a period of several years in which rapid physical growth and psychological changes occur, culminating in sexual maturity. The average age of onset of puberty is 10–11 for girls and 11–12 for boys.[10][11]Every person's individual timetable for puberty is influenced primarily byheredity, although environmental factors, such as diet and exercise, also exert some influences.[12][13]These factors can also contribute toprecociousanddelayed puberty.[14][13] Some of the most significant parts of pubertal development involve distinctive physiological changes in individuals' height, weight, body composition, andcirculatoryandrespiratorysystems.[15]These changes are largely influenced by hormonal activity.Hormonesplay an organizational role, priming the body to behave in a certain way once puberty begins,[16]and an active role, referring to changes in hormones during adolescence that trigger behavioral and physical changes.[17] Puberty occurs through a long process and begins with a surge in hormone production, which in turn causes a number of physical changes. It is the stage of life characterized by the appearance and development ofsecondary sex characteristics(for example, a deeper voice and largerAdam's applein boys, and development ofbreastsand more curved and prominenthipsin girls) and a strong shift in hormonal balance towards an adult state. This is triggered by thepituitary gland, which secretes a surge ofhormonalagents into the blood stream, initiating a chain reaction. The male and femalegonadsare thereby activated, which puts them into a state of rapid growth and development; the triggered gonads now commence mass production of hormones. The testes primarily releasetestosterone, and the ovaries predominantly dispenseestrogen. The production of these hormones increases gradually until sexual maturation is met. Some boys may developgynecomastiadue to an imbalance ofsex hormones, tissue responsiveness orobesity.[18] Facial hairin males normally appears in a specific order during puberty: The first facial hair to appear tends to grow at the corners of the upper lip, typically between 14 and 17 years of age.[19][20]It then spreads to form amoustacheover the entire upper lip. This is followed by the appearance of hair on the upper part of the cheeks, and the area under the lower lip.[19]The hair eventually spreads to the sides and lower border of the chin, and the rest of the lower face to form a full beard.[19]As with most human biological processes, this specific order may vary among some individuals. 
Facial hair is often present in late adolescence, around ages 17 and 18, but may not appear until significantly later.[20][21]Some men do not develop full facial hair for 10 years after puberty.[20]Facial hair continues to get coarser, much darker, and thicker for another 2–4 years after puberty.[20] The major landmark of puberty for males isspermarche, the firstejaculation, which occurs, on average, at age 13.[22]For females, it ismenarche, the onset of menstruation, which occurs, on average, between ages 12 and 13.[12][23][24][25]The age of menarche is influenced by heredity, but a girl's diet and lifestyle contribute as well.[12]Regardless of genes, a girl must have a certain proportion of body fat to attain menarche.[12]Consequently, girls who have a high-fat diet and who are not physically active begin menstruating earlier, on average, than girls whose diet contains less fat and whose activities involve fat reducing exercise (e.g. ballet and gymnastics).[12][13]Girls who experience malnutrition or are in societies in which children are expected to perform physical labor also begin menstruating at later ages.[12] The timing of puberty can have important psychological and social consequences. Early maturing boys are usually taller and stronger than their friends.[26]They have the advantage in capturing the attention of potential partners and in being picked first for sports. Pubescent boys often tend to have a good body image, are more confident, secure, and more independent.[27]Late maturing boys can be less confident because of poor body image when comparing themselves to already developed friends and peers. However, early puberty is not always positive for boys; early sexual maturation in boys can be accompanied by increased aggressiveness due to the surge of hormones that affect them.[27]Because they appear older than their peers, pubescent boys may face increased social pressure to conform to adult norms; society may view them as more emotionally advanced, despite the fact that theircognitiveandsocial developmentmay lag behind their appearance.[27]Studies have shown that early maturing boys are more likely to be sexually active and are more likely to participate in risky behaviors.[28] For girls, early maturation can sometimes lead to increased self-consciousness, a typical aspect in maturing females.[29]Because of their bodies' developing in advance, pubescent girls can become more insecure and dependent.[29]Consequently, girls that reach sexual maturation early are more likely than their peers to developeating disorders(such asanorexia nervosa). Nearly half of all American high school girls' diets are to lose weight.[29]In addition, girls may have to deal with sexual advances from older boys before they are emotionally and mentally mature.[30]In addition to having earlier sexual experiences and more unwanted pregnancies than late maturing girls, early maturing girls are more exposed toalcoholanddrug abuse.[31][32]Those who have had such experiences tend to not perform as well in school as their "inexperienced" peers.[33] Girls have usually reached full physical development around ages 15–17,[3][11][34]while boys usually complete puberty around ages 16–17.[11][34][35]Any increase in height beyond the post-pubertal age is uncommon. 
Girls attain reproductive maturity about four years after the first physical changes of puberty appear.[3] In contrast, boys develop more slowly but continue to grow for about six years after the first visible pubertal changes.[27][35]

The physical development of girls during their teenage years can be broken down into three distinct stages. At the start, which generally coincides with the beginning of rapid growth, there is the development of breast buds and pubic hair. The peak period of physical growth occurs approximately one year later, in concert with stage two of sexual maturity. Approximately 1 to 1.6 years after the onset of secondary sex characteristics, girls enter into the third stage, which typically includes menarche. By this time, they will have finished their growth spurt and experience a notable broadening of the hips as well as an adult fat distribution. Additionally, breast development is complete and hair in both the pubic region and armpits (axillary hair) will be darker and more widespread.

In comparison to girls, it can be tricky to define exactly when sexual development in boys begins. For boys, puberty typically takes around 5 years to finish, as opposed to roughly 3½ years for girls as measured to menarche; by that point, girls have already completed their growth spurt and show the changes in body shape and breast development described above.

In boys, four stages in development can be correlated with the curve of general body growth at adolescence. The initial sign of sexual maturation in boys usually is the "fat spurt". The maturing boy gains weight and becomes almost chubby, with a somewhat feminine fat distribution. This probably occurs because estrogen production by the Leydig cells in the testes is stimulated before the more abundant Sertoli cells begin to produce significant amounts of testosterone. During this stage, boys may appear obese and somewhat awkward physically. Approximately 1 year after the scrotum begins to increase in size, stage II can be seen. During this time, there is a redistribution of subcutaneous fat and the start of pubic hair growth. Following 8 to 12 months of the peak velocity in height gain, stage III ensues. At this time, axillary hair appears and facial hair appears on the upper lip only. A spurt in muscle growth also occurs, along with a continued decrease in subcutaneous fat and an obviously harder and more angular body form. Pubic hair distribution appears more adult but has not yet spread to the medial area of the thighs. The penis and scrotum are near adult size. Stage IV for boys, which occurs anywhere from 15 to 24 months after stage III, is difficult to pinpoint. At this time, the spurt of growth in height ends. There is facial hair on the chin and the upper lip, adult distribution and color of pubic and axillary hair, and a further increase in muscular strength.[36]

The adolescent growth spurt is a rapid increase in the individual's height and weight during puberty resulting from the simultaneous release of growth hormones, thyroid hormones, and androgens.[37]: 55–56 Males experience their growth spurt about two years later, on average, than females.
During their peak height velocity (the time of most rapid growth), adolescents grow at a growth rate nearly identical to that of a toddler—about 10.3 cm (4 inches) per year for males and 9 cm (3.5 inches) per year for females.[38]In addition to changes in height, adolescents also experience a significant increase in weight (Marshall, 1978). The weight gained during adolescence constitutes nearly half of one's adult body weight.[38]Teenage and early adult males may continue to gain natural muscle growth even after puberty.[27] The accelerated growth in different body parts happens at different times, but for all adolescents, it has a fairly regular sequence. The first places to grow are the extremities—the head, hands and feet—followed by the arms and legs, then the torso and shoulders.[39]This non-uniform growth is one reason why an adolescent body may seem out of proportion. During puberty, bones become harder and more brittle. At the conclusion of puberty, the ends of the long bones close during the process calledepiphysis. There can be ethnic differences in these skeletal changes. For example, in the United States,bone densityincreases significantly more among black than white adolescents, which might account for decreased likelihood of black women developingosteoporosisand having fewer bone fractures there.[40] Another set of significant physical changes during puberty happen in bodily distribution of fat and muscle. This process is different for females and males. Before puberty, there are nearly no sex differences in fat and muscle distribution; during puberty, boys grow muscle much faster than girls, although both sexes experience rapid muscle development. In contrast, though both sexes experience an increase in body fat, the increase is much more significant for girls. Frequently, the increase in fat for girls happens in their years just before puberty. The ratio between muscle and fat among post-pubertal boys is around three to one, while for girls it is about five to four. This may help explain sex differences in athletic performance.[41] Pubertal development also affectscirculatoryandrespiratorysystems as an adolescents' heart and lungs increase in both size and capacity. These changes lead to increased strength and tolerance for exercise. Sex differences are apparent as males tend to develop "larger hearts and lungs, higher systolic blood pressure, a lower resting heart rate, a greater capacity for carrying oxygen to the blood, a greater power for neutralizing the chemical products of muscular exercise, higher blood hemoglobin and more red blood cells".[42] Despite some genetic sex differences, environmental factors play a large role in biological changes during adolescence. For example, girls tend to reduce their physical activity in preadolescence[43][44]and may receive inadequate nutrition from diets that often lack important nutrients, such as iron.[45]These environmental influences, in turn, affect female physical development. Primary sex characteristics are those directly related to thesex organs. In males, the first stages of puberty involve growth of the testes and scrotum, followed by growth of the penis.[39]At the time that the penis develops, theseminal vesicles, theprostate, and thebulbourethral glandalso enlarge and develop. 
The first ejaculation of seminal fluid generally occurs about one year after the beginning of accelerated penis growth, although this is often determined culturally rather than biologically, since for many boys the first ejaculation occurs as a result of masturbation.[39] Boys are generally fertile before they have an adult appearance.[37]: 54

In females, changes in the primary sex characteristics involve growth of the uterus, vagina, and other aspects of the reproductive system. Menarche, the beginning of menstruation, is a relatively late development which follows a long series of hormonal changes.[46] Generally, a girl is not fully fertile until several years after menarche, as regular ovulation follows menarche by about two years.[47] Unlike males, therefore, females usually appear physically mature before they are capable of becoming pregnant.

Changes in secondary sex characteristics include every change that is not directly related to sexual reproduction. In males, these changes involve appearance of pubic, facial, and body hair, deepening of the voice, roughening of the skin around the upper arms and thighs, and increased development of the sweat glands. In females, secondary sex changes involve elevation of the breasts, widening of the hips, development of pubic and underarm hair, widening of the areolae, and elevation of the nipples.[37]: 57–58 The changes in secondary sex characteristics that take place during puberty are often referred to in terms of five Tanner stages,[48] named after the British pediatrician who devised the categorization system.

The human brain is not finished developing by the time a person reaches puberty, or even by the time puberty ends. The frontal lobe of the brain has been known to continue shaping itself well into one's 30s.[49] Neuroscientists often cannot agree precisely on when this developmental period ends or if there is an exact age for the end of brain development.[50] Continuing brain development below roughly age 30 has been implicated in behavioral and social immaturity. However, there has been no empirical study indicating a causal relationship between the development of the prefrontal cortex in adolescence and early adulthood and any irrational behaviors.[51] The brain reaches 90% of its adult size by six years of age.[52] Thus, the brain does not grow in size much during adolescence.

Over the course of adolescence, the amount of white matter in the brain increases linearly, while the amount of grey matter in the brain follows an inverted-U pattern.[53] Through a process called synaptic pruning, unnecessary neuronal connections in the brain are eliminated and the amount of grey matter is pared down. However, this does not mean that the brain loses functionality; rather, it becomes more efficient due to increased myelination (insulation of axons) and the reduction of unused pathways.[54]

The first areas of the brain to be pruned are those involving primary functions, such as motor and sensory areas. The areas of the brain involved in more complex processes lose matter later in development. These include the lateral and prefrontal cortices, among other regions.[55] Some of the most developmentally significant changes in the brain occur in the prefrontal cortex, which is involved in decision making and cognitive control, as well as other higher cognitive functions.
During adolescence, myelination and synaptic pruning in the prefrontal cortex increases, improving the efficiency of information processing, and neural connections between the prefrontal cortex and other regions of the brain are strengthened.[56]This leads to better evaluation of risks and rewards, as well as improved control over impulses. Specifically, developments in the dorsolateral prefrontal cortex are important for controlling impulses and planning ahead, while development in the ventromedial prefrontal cortex is important for decision making. Changes in the orbitofrontal cortex are important for evaluating rewards and risks. Threeneurotransmittersthat play important roles in adolescent brain development areglutamate,dopamineandserotonin. Glutamate is an excitatory neurotransmitter. During the synaptic pruning that occurs during adolescence, most of the neural connections that are pruned contain receptors for glutamate or other excitatory neurotransmitters.[57]Because of this, by early adulthood the synaptic balance in the brain is more inhibitory than excitatory. Dopamineis associated with pleasure and attuning to the environment during decision-making. During adolescence, dopamine levels in thelimbic systemincrease and input of dopamine to the prefrontal cortex increases.[58]The balance of excitatory to inhibitory neurotransmitters and increased dopamine activity in adolescence may have implications for adolescent risk-taking and vulnerability to boredom (seeCognitive developmentbelow). Serotoninis a neuromodulator involved in regulation of mood and behavior. Development in the limbic system plays an important role in determining rewards and punishments and processing emotional experience and social information. Changes in the levels of the neurotransmitters dopamine andserotoninin the limbic system make adolescents more emotional and more responsive to rewards and stress. The corresponding increase in emotional variability also can increase adolescents' vulnerability. The effect of serotonin is not limited to the limbic system: Several serotonin receptors have their gene expression change dramatically during adolescence, particularly in the human frontal and prefrontal cortex.[59] Adolescence is a time of rapid cognitive development.[60]Piagetdescribes adolescence as the stage of life in which the individual's thoughts start taking more of an abstract form and theegocentricthoughts decrease, allowing the individual to think and reason in a wider perspective.[61]A combination of behavioural andfMRIstudies have demonstrated development ofexecutive functions, that is, cognitive skills that enable the control and coordination of thoughts and behaviour, which are generally associated with theprefrontal cortex.[62]The thoughts, ideas and concepts developed at this period of life greatly influence one's future life, playing a major role in character and personality formation.[63] Biological changes in brain structure and connectivity within the brain interact with increased experience, knowledge, and changing social demands to produce rapid cognitive growth (seeChanges in the brainabove). The age at which particular changes take place varies between individuals, but the changes discussed below begin at puberty or shortly after that and some skills continue to develop as the adolescent ages. 
The dual systems model proposes a maturational imbalance between development of the socioemotional system and cognitive control systems in the brain that contributes to impulsivity and other behaviors characteristic of adolescence.[64] Some studies, like the ABCD Study, are researching the baseline of adolescent cognitive development.

There are at least two major approaches to understanding cognitive change during adolescence. One is the constructivist view of cognitive development. Based on the work of Piaget, it takes a quantitative, state-theory approach, hypothesizing that adolescents' cognitive improvement is relatively sudden and drastic. The second is the information-processing perspective, which derives from the study of artificial intelligence and attempts to explain cognitive development in terms of the growth of specific components of the thinking process.[citation needed]

By the time individuals have reached ages 12–14 or so,[65][66] their critical thinking and decision-making competency[67] are comparable to those of adults. These improvements occur in five areas during adolescence.

Studies newer than 2005 indicate that the brain continues to change in efficiency well past a person's twenties; a 'point of maturity' in the twenties is somewhat arbitrary, as many important parts of the brain are noted to be mature by 14 or 15, which makes 'maturity' hard to define and has often been disputed.[73] Prefrontal cortex pruning has been recorded to level off by age 14 or 15,[74] and has been seen to continue as late as into the sixth decade of life.[75] White matter is recorded to increase up until around the age of 45, and then it is lost via progressive aging.

Adolescents' thinking is less bound to concrete events than that of children: they can contemplate possibilities outside the realm of what currently exists. One manifestation of the adolescent's increased facility with thinking about possibilities is the improvement of skill in deductive reasoning, which leads to the development of hypothetical thinking. This provides the ability to plan ahead, see the future consequences of an action and to provide alternative explanations of events. It also makes adolescents more skilled debaters, as they can reason against a friend's or parent's assumptions. Adolescents also develop a more sophisticated understanding of probability.[citation needed]

The appearance of more systematic, abstract thinking is another notable aspect of cognitive development during adolescence. For example, adolescents find it easier than children to comprehend the sorts of higher-order abstract logic inherent in puns, proverbs, metaphors, and analogies. Their increased facility permits them to appreciate the ways in which language can be used to convey multiple messages, such as satire, metaphor, and sarcasm. (Children younger than age nine often cannot comprehend sarcasm at all.)[76] This also permits the application of advanced reasoning and logical processes to social and ideological matters such as interpersonal relationships, politics, philosophy, religion, morality, friendship, faith, fairness, and honesty.

A third gain in cognitive ability involves thinking about thinking itself, a process referred to as metacognition. It often involves monitoring one's own cognitive activity during the thinking process. Adolescents' improvements in knowledge of their own thinking patterns lead to better self-control and more effective studying.
It is also relevant in social cognition, resulting in increasedintrospection,self-consciousness, and intellectualization (in the sense of thought about one's own thoughts, rather than the Freudian definition as a defense mechanism). Adolescents are much better able than children to understand that people do not have complete control over their mental activity. Being able to introspect may lead to two forms of adolescent egocentrism, which results in two distinct problems in thinking: theimaginary audienceand thepersonal fable. These likely peak at age fifteen, along with self-consciousness in general.[77] Related to metacognition andabstract thought, perspective-taking involves a more sophisticatedtheory of mind.[78]Adolescents reach a stage of social perspective-taking in which they can understand how the thoughts or actions of one person can influence those of another person, even if they personally are not involved.[79] Compared to children, adolescents are more likely to question others' assertions, and less likely to accept facts as absolute truths. Through experience outside the family circle, they learn that rules they were taught as absolute are in fact relativistic. They begin to differentiate between rules instituted out of common sense—not touching a hot stove—and those that are based on culturally relative standards (codes of etiquette, not dating until a certain age), a delineation that younger children do not make. This can lead to a period of questioning authority in all domains.[80] Because most injuries sustained by adolescents are related to risky behavior (alcohol consumption and drug use, reckless or distracted driving,unprotected sex), a great deal of research has been done on the cognitive and emotional processes underlying adolescent risk-taking. In addressing this question, it is important to distinguish whether adolescents are more likely to engage in risky behaviors (prevalence), whether they make risk-related decisions similarly or differently than adults (cognitive processing perspective), or whether they use the same processes but value different things and thus arrive at different conclusions. The behavioral decision-making theory proposes that adolescents and adults both weigh the potential rewards and consequences of an action. However, research has shown that adolescents seem to give more weight to rewards, particularly social rewards, than do adults.[81] Research seems to favor the hypothesis that adolescents and adults think about risk in similar ways, but hold different values and thus come to different conclusions. Some have argued that there may be evolutionary benefits to an increased propensity for risk-taking in adolescence. For example, without a willingness to take risks, teenagers would not have the motivation or confidence necessary to leave their family of origin. In addition, from a population perspective, there is an advantage to having a group of individuals willing to take more risks and try new methods, counterbalancing the more conservative elements more typical of the received knowledge held by older adults.[citation needed] Risk-taking may also have reproductive advantages: adolescents have a newfound priority in sexual attraction and dating, and risk-taking is required to impress potential mates. Research also indicates that baselinesensation seekingmay affect risk-taking behavior throughout the lifespan.[82][83]Given the potential consequences, engaging in sexual behavior is somewhat risky, particularly for adolescents. 
Having unprotected sex, using poor birth control methods (e.g., withdrawal), having multiple sexual partners, and poor communication are some aspects of sexual behavior that increase individual or social risk. Aspects of adolescents' lives that are correlated withrisky sexual behaviorinclude higher rates of parental abuse, and lower rates of parental support and monitoring.[84] Related to their increased tendency for risk-taking, adolescents show impaired behavioral inhibition, including deficits inextinction learning.[85]This has important implications for engaging in risky behavior such asunsafe sexor illicit drug use, as adolescents are less likely to inhibit actions that may have negative outcomes in the future.[86]This phenomenon also has consequences for behavioral treatments based on the principle of extinction, such as cue exposure therapy for anxiety ordrug addiction.[87][88]It has been suggested that impaired inhibition, specifically extinction, may help to explain adolescent propensity to relapse to drug-seeking even following behavioral treatment for addiction.[89] The formal study of adolescent psychology began with the publication ofG. Stanley Hall'sAdolescencein 1904. Hall, who was the first president of theAmerican Psychological Association, defined adolescence to be the period of life from ages 14 to 24, and viewed it primarily as a time of internal turmoil and upheaval (sturm und drang).[90]This understanding ofyouthwas based on two then-new ways of understandinghuman behavior:Darwin's evolutionary theoryand Freud'spsychodynamic theory. He believed that adolescence was a representation of our human ancestors' phylogenetic shift from being primitive to being civilized. Hall's assertions stood relatively uncontested until the 1950s when psychologists such asErik EriksonandAnna Freudstarted to formulate their theories about adolescence. Freud believed that the psychological disturbances associated with youth were biologically based and culturally universal while Erikson focused on the dichotomy betweenidentity formationand role fulfillment.[91]Even with their different theories, these three psychologists agreed that adolescence was inherently a time of disturbance and psychological confusion. The less turbulent aspects of adolescence, such as peer relations and cultural influence, were left largely ignored until the 1980s. From the '50s until the '80s, the focus of the field was mainly on describing patterns of behavior as opposed to explaining them.[91] Jean Macfarlanefounded theUniversity of California, Berkeley's Institute of Human Development, formerly called the Institute of Child Welfare, in 1927.[92]The institute was instrumental in initiating studies of healthy development, in contrast to previous work that had been dominated by theories based on pathological personalities.[92]The studies looked at human development during theGreat DepressionandWorld War II, unique historical circumstances under which a generation of children grew up. The Oakland Growth Study, initiated by Harold Jones and Herbert Stolz in 1931, aimed to study the physical, intellectual, and social development of children in the Oakland area. 
Data collection began in 1932 and continued until 1981, allowing the researchers to gather longitudinal data on the individuals that extended past adolescence into adulthood.Jean Macfarlanelaunched the Berkeley Guidance Study, which examined the development of children in terms of their socioeconomic and family backgrounds.[93]These studies provided the background forGlen Elderin the 1960s to propose alife course perspectiveof adolescent development. Elder formulated several descriptive principles of adolescent development. The principle of historical time and place states that an individual's development is shaped by the period and location in which they grow up. The principle of the importance of timing in one's life refers to the different impact that life events have on development based on when in one's life they occur. The idea of linked lives states that one's development is shaped by the interconnected network of relationships of which one is a part and the principle ofhuman agencyasserts that one's life course is constructed via the choices and actions of an individual within the context of their historical period and social network.[94] In 1984, the Society for Research on Adolescence (SRA) became the first official organization dedicated to the study of adolescent psychology. Some of the issues first addressed by this group include: thenature versus nurturedebate as it pertains to adolescence; understanding the interactions between adolescents and their environment; and considering culture, social groups, and historical context when interpreting adolescent behavior.[91] Evolutionary biologists likeJeremy Griffithhave drawn parallels between adolescent psychology and the developmental evolution of modern humans from hominid ancestors as a manifestation ofontogeny recapitulating phylogeny.[95] Identity development is a stage in the adolescent life cycle.[96]For most, the search for identity begins in the adolescent years. During these years, adolescents are more open to 'trying on' different behaviours and appearances to discover who they are.[97]In an attempt to find their identity and discover who they are, adolescents are likely to cycle through a number of identities to find one that suits them best. Developing and maintaining identity (in adolescent years) is a difficult task due to multiple factors such as family life, environment, and social status.[96]Empirical studies suggest that this process might be more accurately described asidentity development, rather than formation, but confirms a normative process of change in both content and structure of one's thoughts about the self.[98]The two main aspects of identity development are self-clarity and self-esteem.[97]Since choices made during adolescent years can influence later life, high levels of self-awareness and self-control during mid-adolescence will lead to better decisions during the transition to adulthood.[99]Researchers have used three general approaches to understanding identity development: self-concept, sense of identity, and self-esteem. The years of adolescence create a more conscientious group of young adults. Adolescents pay close attention and give more time and effort to their appearance as their body goes through changes. Unlike children, teens put forth an effort to look presentable (1991).[4]The environment in which an adolescent grows up also plays an important role in their identity development. 
Studies done by theAmerican Psychological Associationhave shown that adolescents with a less privileged upbringing have a more difficult time developing their identity.[100] The idea of self-concept is known as the ability of a person to have opinions and beliefs that are defined confidently, consistent and stable.[101]Early in adolescence,cognitive developmentsresult in greater self-awareness, greater awareness of others and their thoughts and judgments, the ability to think about abstract, future possibilities, and the ability to consider multiple possibilities at once. As a result, adolescents experience a significant shift from the simple, concrete, and global self-descriptions typical of young children; as children, they defined themselves by physical traits whereas adolescents define themselves based on their values, thoughts, and opinions.[102] Adolescents can conceptualize multiple "possible selves" that they could become[103]and long-term possibilities and consequences of their choices.[104]Exploring these possibilities may result in abrupt changes in self-presentation as the adolescent chooses or rejects qualities and behaviors, trying to guide theactualself toward theidealself (who the adolescent wishes to be) and away from the feared self (who the adolescent does not want to be). For many, these distinctions are uncomfortable, but they also appear to motivate achievement through behavior consistent with the ideal and distinct from the feared possible selves.[103][105] Further distinctions in self-concept, called "differentiation," occur as the adolescent recognizes the contextual influences on their own behavior and the perceptions of others, and begin to qualify their traits when asked to describe themselves.[106]Differentiation appears fully developed by mid-adolescence.[107]Peaking in the 7th-9th grades, thepersonality traitsadolescents use to describe themselves refer to specific contexts, and therefore may contradict one another. The recognition of inconsistent content in the self-concept is a common source of distress in these years (seeCognitive dissonance),[108]but this distress may benefit adolescents by encouraging structural development. Egocentrismin adolescents forms a self-conscious desire to feel important in their peer groups and enjoy social acceptance.[109]Unlike the conflicting aspects of self-concept, identity represents a coherent sense of self stable across circumstances and including past experiences and future goals. Everyone has a self-concept, whereasErik Eriksonargued that not everyone fully achieves identity. Erikson's theory ofstages of developmentincludes theidentity crisisin which adolescents must explore different possibilities and integrate different parts of themselves before committing to their beliefs. He described the resolution of this process as a stage of "identity achievement" but also stressed that the identity challenge "is never fully resolved once and for all at one point in time".[110]Adolescents begin by defining themselves based on theircrowd membership. "Clothes help teens explore new identities, separate from parents, and bond with peers." Fashion has played a major role when it comes to teenagers "finding their selves"; Fashion is always evolving, which corresponds with the evolution of change in the personality of teenagers.[111]Adolescents attempt to define their identity by consciously styling themselves in different manners to find what best suits them. 
Trial and error in matching both their perceived image and the image others respond to and see allows the adolescent to grasp an understanding of who they are.[112] Just as fashion evolves to influence adolescents, so does the media. "Modern life takes place amidst a never-ending barrage of flesh on screens, pages, and billboards."[113]This barrage consciously or subconsciously registers in the mind, causing issues with self-image, a factor that contributes to an adolescent's sense of identity. Researcher James Marcia developed the current method for testing an individual's progress along these stages.[114][115]His questions are divided into three categories: occupation, ideology, and interpersonal relationships. Answers are scored based on the extent to which the individual has explored and the degree to which they have made commitments. The result is classification of the individual into (a) identity diffusion, in which all children begin; (b) identity foreclosure, in which commitments are made without the exploration of alternatives; (c) moratorium, or the process of exploration; or (d) identity achievement, in which moratorium has occurred and resulted in commitments.[116] Research since then reveals self-examination beginning early in adolescence, but identity achievement rarely occurring before age 18.[117]The freshman year of college influences identity development significantly, but may actually prolong psychosocial moratorium by encouraging reexamination of previous commitments and further exploration of alternate possibilities without encouraging resolution.[118]For the most part, evidence has supported Erikson's stages: each correlates with the personality traits he originally predicted.[116]Studies also confirm the impermanence of the stages; there is no final endpoint in identity development.[119] An adolescent's environment plays a huge role in their identity development.[100]While most adolescent studies are conducted on white, middle-class children, studies show that the more privileged an upbringing people have, the more successfully they develop their identity.[100]The formation of an adolescent's identity is a crucial period in their life. Demographic patterns suggest that the transition to adulthood now occurs over a longer span of years than was the case during the middle of the 20th century. Accordingly, youth, a period that spans late adolescence and early adulthood, has become a more prominent stage of the life course, and various factors have therefore become important during this development.[120]Many factors contribute to the developing social identity of an adolescent, from commitment, to coping devices,[121]to social media. All of these factors are affected by the environment an adolescent grows up in. A child from a more privileged upbringing is exposed to more opportunities and better situations in general. An adolescent from an inner city or a crime-driven neighborhood is more likely to be exposed to an environment that can be detrimental to their development. Adolescence is a sensitive period in the development process, and exposure to the wrong things at that time can have a major effect on future decisions.
While children who grow up in nice suburban communities are not exposed to bad environments, they are more likely to participate in activities that can benefit their identity and contribute to more successful identity development.[100] Sexual orientation has been defined as "an erotic inclination toward people of one or more genders, most often described as sexual or erotic attractions".[122]In recent years, psychologists have sought to understand how sexual orientation develops during adolescence. Some theorists believe that there are many different possible developmental paths one could take, and that the specific path an individual follows may be determined by their sex, orientation, and when they reached the onset of puberty.[122] In 1989, Troiden proposed a four-stage model for the development of homosexual sexual identity.[123]The first stage, known as sensitization, usually starts in childhood and is marked by the child's becoming aware of same-sex attractions. The second stage, identity confusion, tends to occur a few years later. In this stage, the youth is overwhelmed by feelings of inner turmoil regarding their sexual orientation, and begins to engage in sexual experiences with same-sex partners. In the third stage of identity assumption, which usually takes place a few years after the adolescent has left home, adolescents begin to come out to their family and close friends, and assume a self-definition as gay, lesbian, or bisexual.[124]In the final stage, known as commitment, the young adult adopts their sexual identity as a lifestyle. Therefore, this model estimates that the process of coming out begins in childhood and continues through the early to mid 20s. This model has been contested, and alternate ideas have been explored in recent years. In terms of sexual identity, adolescence is when most gay/lesbian and transgender adolescents begin to recognize and make sense of their feelings. Many adolescents may choose to come out during this period of their life once an identity has been formed; many others may go through a period of questioning or denial, which can include experimentation with both homosexual and heterosexual experiences.[125]A study of 194 lesbian, gay, and bisexual youths under the age of 21 found that having an awareness of one's sexual orientation occurred, on average, around age 10, but the process of coming out to peers and adults occurred around age 16 and 17, respectively.[126]Coming to terms with and creating a positive LGBT identity can be difficult for some youth for a variety of reasons. Peer pressure is a large factor when youth who are questioning their sexuality or gender identity are surrounded by heteronormative peers, and it can cause great distress due to a feeling of being different from everyone else. While coming out can also foster better psychological adjustment, the risks associated with it are real. Indeed, coming out in the midst of a heteronormative peer environment often comes with the risk of ostracism, hurtful jokes, and even violence.[125]Because of this, statistically the suicide rate amongst LGBT adolescents is up to four times higher than that of their heterosexual peers due to bullying and rejection from peers or family members.[127] The final major aspect of identity formation is self-esteem.
Self-esteem is defined as one's thoughts and feelings about one's self-concept and identity.[128]Most theories on self-esteem state that there is a strong desire, across all genders and ages, to maintain, protect, and enhance one's self-esteem.[101]Contrary to popular belief, there is no empirical evidence for a significant drop in self-esteem over the course of adolescence.[129]"Barometric self-esteem" fluctuates rapidly and can cause severe distress and anxiety, but baseline self-esteem remains highly stable across adolescence.[130]The validity of global self-esteem scales has been questioned, and many suggest that more specific scales might reveal more about the adolescent experience.[131]Girls are most likely to enjoy high self-esteem when engaged in supportive relationships with friends; the most important function of friendship to them is having someone who can provide social and moral support. Girls suffer from low self-esteem when they fail to win friends' approval or cannot find someone with whom to share common activities and common interests. In contrast, boys are more concerned with establishing and asserting their independence and defining their relation to authority.[132]As such, they are more likely to derive high self-esteem from their ability to successfully influence their friends; on the other hand, the lack of romantic competence, for example, failure to win or maintain the affection of the opposite or same sex (depending on sexual orientation), is the major contributor to low self-esteem in adolescent boys. Because both men and women tend to have low self-esteem after ending a romantic relationship, they are prone to other symptoms caused by this state. Depression and hopelessness are only two of the various symptoms, and it is said that women are twice as likely to experience depression and men are three to four times more likely to commit suicide (Mearns, 1991; Ustun & Sartorius, 1995).[133] The relationships adolescents have with their peers, family, and members of their social sphere play a vital role in their social development. As an adolescent's social sphere develops rapidly and they distinguish the differences between friends and acquaintances, they often become heavily emotionally invested in friends.[134]This is not harmful; however, if these friends expose an individual to potentially harmful situations, this is an aspect of peer pressure. Adolescence is a critical period in social development because adolescents can be easily influenced by the people they develop close relationships with. This is the first time individuals can truly make their own decisions, which also makes this a sensitive period. Relationships are vital in the social development of an adolescent due to the extreme influence peers can have over an individual. These relationships become significant because they begin to help the adolescent understand the concept of personalities, how they form, and why a person has that specific type of personality. "The use of psychological comparisons could serve both as an index of the growth of an implicit personality theory and as a component process accounting for its creation. In other words, by comparing one person's personality characteristics to another's, we would be setting up the framework for creating a general theory of personality (and, ...
such a theory would serve as a useful framework for coming to understand specific persons)."[135]This can be likened to the use of social comparison in developing one's identity and self-concept, which includes one's personality, and underscores the importance of communication, and thus relationships, in one's development. In social comparison we use reference groups, with respect to both psychological and identity development.[136]These reference groups are the peers of adolescents. This means that whom the teen chooses or accepts as friends and with whom they communicate on a frequent basis often makes up their reference groups, and these groups can therefore have a huge impact on who they become. Research shows that relationships have the largest effect on the social development of an individual. Adolescence marks a rapid change in one's role within a family. Young children tend to assert themselves forcefully, but are unable to demonstrate much influence over family decisions until early adolescence,[137]when they are increasingly viewed by parents as equals. The adolescent faces the task of increasing independence while preserving a caring relationship with his or her parents.[112]When children go through puberty, there is often a significant increase in parent–child conflict and a less cohesive familial bond. Arguments often concern minor issues of control, such as curfew, acceptable clothing, and the adolescent's right to privacy,[138]which adolescents may have previously viewed as issues over which their parents had complete authority.[139]Parent–adolescent disagreement also increases as friends demonstrate a greater impact on one another, new influences on the adolescent that may be in opposition to parents' values. Social media has also played an increasing role in adolescent and parent disagreements.[140]While parents did not have to worry about the threats of social media in the past, it has become a dangerous place for children. While adolescents strive for their freedoms, what their child is doing on social media sites is largely unknown to parents and remains a challenging subject, due to the increasing number of predators on these sites. Many parents have very little knowledge of social networking sites in the first place, and this further increases their mistrust. An important challenge for the parent–adolescent relationship is to understand how to enhance the opportunities of online communication while managing its risks.[101]Although conflicts between children and parents increase during adolescence, these are just relatively minor issues. Regarding their important life issues, most adolescents still share the same attitudes and values as their parents.[141] During childhood, siblings are a source of conflict and frustration as well as a support system.[142]Adolescence may affect this relationship differently, depending on sibling gender. In same-sex sibling pairs, intimacy increases during early adolescence, then remains stable. Mixed-sex sibling pairs act differently; siblings drift apart during early adolescent years, but experience an increase in intimacy starting at middle adolescence.[143]Sibling interactions are children's first relational experiences, the ones that shape their social and self-understanding for life.[144]Sustaining positive sibling relations can assist adolescents in a number of ways. Siblings are able to act as peers, and may increase one another's sociability and feelings of self-worth.
Older siblings can give guidance to younger siblings, although the impact of this can be either positive or negative depending on the activity of the older sibling. A potentially important influence on adolescence is a change in family dynamics, specifically divorce. With the divorce rate up to about 50%,[145]divorce is common and adds to the already great amount of change in adolescence. Custody disputes soon after a divorce often reflect a playing out of control battles and ambivalence between parents. Divorce usually results in less contact between the adolescent and their noncustodial parent.[146]In extreme cases of instability and abuse in homes, divorce can have a positive effect on families due to less conflict in the home. However, most research suggests a negative effect on adolescence as well as later development. A recent study found that, compared with peers who grow up in stable post-divorce families, children of divorce who experience additional family transitions during late adolescence make less progress in their math and social studies performance over time.[147]Another recent study put forth a new theory entitled the adolescent epistemological trauma theory,[148]which posited that traumatic life events such as parental divorce during the formative period of late adolescence portend lifelong effects on adult conflict behavior that can be mitigated by effective behavioral assessment and training.[148]A parental divorce during childhood or adolescence continues to have a negative effect when a person is in his or her twenties and early thirties. These negative effects include romantic relationships and conflict style, meaning that as adults, they are more likely to use the styles of avoidance and competing in conflict management.[149] Despite changing family roles during adolescence, the home environment and parents are still important for the behaviors and choices of adolescents.[150]Adolescents who have a good relationship with their parents are less likely to engage in various risk behaviors, such as smoking, drinking, fighting, or unprotected sexual intercourse.[150]In addition, parents influence the education of adolescents. A study conducted by Adalbjarnardottir and Blondal (2009) showed that adolescents at the age of 14 who identify their parents as authoritative figures are more likely to complete secondary education by the age of 22, as support and encouragement from an authoritative parent motivates the adolescent to complete schooling to avoid disappointing that parent.[151] Peer groups are essential to social and general development. Communication with peers increases significantly during adolescence, and peer relationships become more intense than in other stages[152]and more influential to the teen, affecting both the decisions and choices being made.[153]High-quality friendships may enhance children's development regardless of the characteristics of those friends.
As children begin to bond with various people and create friendships, this later helps them when they are adolescents and sets up the framework for adolescence and peer groups.[154]Peer groups are especially important during adolescence, a period of development characterized by a dramatic increase in time spent with peers[155]and a decrease in adult supervision.[156]Adolescents also associate with friends of the opposite sex much more than in childhood[157]and tend to identify with larger groups of peers based on shared characteristics.[158]It is also common for adolescents to use friends as coping devices in different situations.[159]A three-factor structure of dealing with friends, including avoidance, mastery, and nonchalance, has shown that adolescents use friends as coping devices with social stresses. Communication within peer groups allows adolescents to explore their feelings and identity as well as develop and evaluate their social skills. Peer groups offer members the opportunity to develop social skills such as empathy, sharing, and leadership. Adolescents choose peer groups based on characteristics similarly found in themselves.[112]By utilizing these relationships, adolescents become more accepting of who they are becoming. Group norms and values are incorporated into an adolescent's own self-concept.[153]Through developing new communication skills and reflecting upon those of their peers, as well as self-opinions and values, an adolescent can share and express emotions and other concerns without fear of rejection or judgment. Peer groups can have positive influences on an individual, such as on academic motivation and performance. However, while peers may facilitate social development for one another, they may also hinder it. Peers can have negative influences, such as encouraging experimentation with drugs, drinking, vandalism, and stealing through peer pressure.[160]Susceptibility to peer pressure increases during early adolescence, peaks around age 14, and declines thereafter.[161]Further evidence of peers hindering social development has been found in Spanish teenagers, where emotional (rather than solution-based) reactions to problems and emotional instability have been linked with physical aggression against peers.[162]Both physical and relational aggression are linked to a vast number of enduring psychological difficulties, especially depression, as is social rejection.[163]Because of this, bullied adolescents often develop problems that lead to further victimization.[164]Bullied adolescents are more likely to both continue to be bullied and to bully others in the future.[165]However, this relationship is less stable in cases of cyberbullying, a relatively new issue among adolescents. Adolescents tend to associate with "cliques" on a small scale and "crowds" on a larger scale. During early adolescence, adolescents often associate in cliques, exclusive, single-sex groups of peers with whom they are particularly close. Despite the common notion that cliques are an inherently negative influence, they may help adolescents become socially acclimated and form a stronger sense of identity. Within a clique of highly athletic male peers, for example, the clique may create a stronger sense of fidelity and competition. Cliques also have become somewhat of a "collective parent", i.e.
telling the adolescents what to do and not to do.[166]Towards late adolescence, cliques often merge into mixed-sex groups as teenagers begin romantically engaging with one another.[167]These small friend groups then break down further as socialization becomes more couple-oriented. On a larger scale, adolescents often associate with crowds, groups of individuals who share a common interest or activity. Often, crowd identities may be the basis for stereotyping young people, such as jocks or nerds. In large, multi-ethnic high schools, there are often ethnically determined crowds.[168]Adolescents use online technology to experiment with emerging identities and to broaden their peer groups, such as increasing the number of friends acquired on Facebook and other social media sites.[153]Some adolescents use these newer channels to enhance relationships with peers; however, there can be negative uses as well, such as cyberbullying, as mentioned previously, and negative impacts on the family.[169] Romantic relationships tend to increase in prevalence throughout adolescence. By age 15, 53% of adolescents have had a romantic relationship that lasted at least one month over the course of the previous 18 months.[170]In a 2008 study conducted by YouGov for Channel 4, 20% of 14–17-year-olds surveyed in the United Kingdom revealed that they had their first sexual experience at 13 or under.[171]A 2002 American study found that those aged 15–44 reported that the average age of first sexual intercourse was 17.0 for males and 17.3 for females.[172]The typical duration of relationships increases throughout the teenage years as well. This constant increase in the likelihood of a long-term relationship can be explained by sexual maturation and the development of the cognitive skills necessary to maintain a romantic bond (e.g. caregiving, appropriate attachment), although these skills are not strongly developed until late adolescence.[173]Long-term relationships allow adolescents to gain the skills necessary for high-quality relationships later in life[174]and develop feelings of self-worth. Overall, positive romantic relationships among adolescents can result in long-term benefits. High-quality romantic relationships are associated with higher commitment in early adulthood[175]and are positively associated with self-esteem, self-confidence, and social competence.[176][177]For example, an adolescent with positive self-confidence is likely to consider themselves a more successful partner, whereas negative experiences may lead to low confidence as a romantic partner.[178]Adolescents often date within their demographic with regard to race, ethnicity, popularity, and physical attractiveness.[179]However, there are traits in which certain individuals, particularly adolescent girls, seek diversity.
While most adolescents date people approximately their own age, boys typically date partners the same age or younger; girls typically date partners the same age or older.[170] Some researchers are now focusing on learning about how adolescents view their own relationships and sexuality; they want to move away from a research point of view that focuses on the problems associated with adolescent sexuality. College professor Lucia O'Sullivan and her colleagues found that there were no significant gender differences in the relationship events adolescent boys and girls from grades 7–12 reported.[180]Most teens said they had kissed their partners, held hands with them, thought of themselves as being a couple, and told people they were in a relationship. This means that private thoughts about the relationship as well as public recognition of the relationship were both important to the adolescents in the sample. Sexual events (such as sexual touching or sexual intercourse) were less common than romantic events (holding hands) and social events (being with one's partner in a group setting). The researchers state that these results are important because they focus on the more positive aspects of adolescents and their social and romantic interactions rather than on sexual behavior and its consequences.[180] Adolescence marks a time of sexual maturation, which manifests in social interactions as well. While adolescents may engage in casual sexual encounters (often referred to as hookups), most sexual experience during this period of development takes place within romantic relationships.[181]Adolescents can use technologies and social media to seek out romantic relationships, as they feel these are a safe place to try out dating and identity exploration. From these social media encounters, a further relationship may begin.[153]Kissing, hand holding, and hugging signify satisfaction and commitment. Among young adolescents, "heavy" sexual activity, marked by genital stimulation, is often associated with violence, depression, and poor relationship quality.[182][183]This effect does not hold true for sexual activity in late adolescence that takes place within a romantic relationship.[184]Some research suggests that there are genetic causes of early sexual activity that are also risk factors for delinquency, suggesting that there is a group who are at risk for both early sexual activity and emotional distress. For older adolescents, though, sexual activity in the context of romantic relationships was actually correlated with lower levels of deviant behavior after controlling for genetic risks, as opposed to sex outside of a relationship (hook-ups).[185] Dating violence can occur within adolescent relationships. When surveyed, 12–25% of adolescents reported having experienced physical violence in the context of a relationship, while a quarter to a third of adolescents reported having experienced psychological aggression. This reported aggression includes hitting, throwing things, or slaps, although most of this physical aggression does not result in a medical visit. Physical aggression in relationships tends to decline from high school through college and young adulthood. In heterosexual couples, there is no significant difference between the rates of male and female aggressors, unlike in adult relationships.[186][187][188] Female adolescents from minority populations are at increased risk for intimate partner violence (IPV).
Recent research findings suggest that a substantial portion of young urban females are at high risk for being victims of multiple forms of IPV. Practitioners diagnosing depression among urban minority teens should assess for both physical and non-physical forms of IPV, and early detection can help to identify youths in need of intervention and care.[189][190]Similarly to adult victims, adolescent victims do not readily disclose abuse, and may seek out medical care for problems not directly related to incidences of IPV. Therefore, screening should be a routine part of medical treatment for adolescents regardless of chief complaint. Many adults discount instances of IPV in adolescents or believe they do not occur because relationships at young ages are viewed as "puppy love"; however, it is crucial that adults take IPV in adolescents seriously even though policy often falls behind.[191] In contemporary society, adolescents also face some risks as their sexuality begins to transform. While some of these, such as emotional distress (fear of abuse or exploitation) and sexually transmitted infections/diseases (STIs/STDs), including HIV/AIDS, are not necessarily inherent to adolescence, others such as teenage pregnancy (through non-use or failure of contraceptives) are seen as social problems in most western societies. One in four sexually active teenagers will contract an STI.[192]Adolescents in the United States often choose "anything but intercourse" for sexual activity because they mistakenly believe it reduces the risk of STIs. Across the country, clinicians report rising diagnoses of herpes and human papillomavirus (HPV), which can cause genital warts and is now thought to affect 15 percent of the teen population. Girls 15 to 19 have higher rates of gonorrhea than any other age group. One-quarter of all new HIV cases occur in those under the age of 21.[192]Multrine also states in her article that, according to a March survey by the Kaiser Family Foundation, eighty-one percent of parents want schools to discuss the use of condoms and contraception with their children. They also believe students should be able to be tested for STIs. Furthermore, teachers want to address such topics with their students. But, although 9 in 10 sex education instructors across the country believe that students should be taught about contraceptives in school, over one quarter report receiving explicit instructions from school boards and administrators not to do so. According to anthropologist Margaret Mead, the turmoil found in adolescence in Western society has a cultural rather than a physical cause; she reported that societies where young women engaged in free sexual activity had no such adolescent turmoil. There are certain characteristics of adolescent development that are more rooted in culture than in human biology or cognitive structures. Culture has been defined as the "symbolic and behavioral inheritance received from the past that provides a community framework for what is valued".[193]Culture is learned and socially shared, and it affects all aspects of an individual's life.[194]Social responsibilities, sexual expression, and belief system development, for instance, are all things that are likely to vary by culture.
Furthermore, distinguishing characteristics of youth, including dress, music and other uses of media, employment, art, food and beverage choices, recreation, and language, all constitute a youth culture.[194]For these reasons, culture is a prevalent and powerful presence in the lives of adolescents, and therefore we cannot fully understand today's adolescents without studying and understanding their culture.[194]However, "culture" should not be seen as synonymous with nation or ethnicity. Many cultures are present within any given country and racial or socioeconomic group. Furthermore, to avoid ethnocentrism, researchers must be careful not to define the culture's role in adolescence in terms of their own cultural beliefs.[195] In his short book "The Teenage Consumer", published in July 1959, the British market research pioneer Mark Abrams identified the emergence of a new economic group of people aged 13–25. Compared to children, people in this age range had more money, more discretion on how they chose to spend it, and greater mobility through the advent of the motor car. Compared to adults, people in this age range had fewer responsibilities and therefore made different choices on how to spend their money. These unique characteristics of this new economic group presented challenges and opportunities to advertisers. Mark Abrams coined the term "teenager" to describe this group of consumers aged 13–25.[196] In Britain, teenagers first came to public attention during the Second World War, when there were fears of juvenile delinquency.[197]By the 1950s, the media presented teenagers in terms of generational rebellion. The exaggerated moral panic among politicians and the older generation was typically belied by the growth in intergenerational cooperation between parents and children. Many working-class parents, enjoying newfound economic security, eagerly took the opportunity to encourage their teens to enjoy more adventurous lives.[198]Schools were falsely portrayed as dangerous blackboard jungles under the control of rowdy kids.[199]The media distortions of the teens as too affluent, and as promiscuous, delinquent, counter-cultural rebels do not reflect the actual experiences of ordinary young adults, particularly young women.[200] The degree to which adolescents are perceived as autonomous beings varies widely by culture, as do the behaviors that represent this emerging autonomy. Psychologists have identified three main types of autonomy: emotional independence, behavioral autonomy, and cognitive autonomy.[201]Emotional autonomy is defined in terms of an adolescent's relationships with others, and often includes the development of more mature emotional connections with adults and peers.[201]Behavioral autonomy encompasses an adolescent's developing ability to regulate his or her own behavior, to act on personal decisions, and to self-govern.
Cultural differences are especially visible in this category because it concerns issues of dating, social time with peers, and time-management decisions.[201]Cognitive autonomy describes the capacity for an adolescent to partake in processes of independent reasoning and decision-making without excessive reliance on social validation.[201]Converging influences from adolescent cognitive development, expanding social relationships, an increasingly adultlike appearance, and the acceptance of more rights and responsibilities enhance feelings of autonomy for adolescents.[201]Proper development of autonomy has been tied to good mental health, high self-esteem, self-motivated tendencies, positive self-concepts, and self-initiating and regulating behaviors.[201]Furthermore, it has been found that adolescents' mental health is best when their feelings about autonomy match closely with those of their parents.[202] A questionnaire called the teen timetable has been used to measure the age at which individuals believe adolescents should be able to engage in behaviors associated with autonomy.[203]This questionnaire has been used to gauge differences in cultural perceptions of adolescent autonomy, finding, for instance, that White parents and adolescents tend to expect autonomy earlier than those of Asian descent.[203]It is, therefore, clear that cultural differences exist in perceptions of adolescent autonomy, and such differences have implications for the lifestyles and development of adolescents. In sub-Saharan African youth, the notions of individuality and freedom may not be useful in understanding adolescent development. Rather, African notions of childhood and adolescent development are relational and interdependent.[204] The lifestyle of an adolescent in a given culture is profoundly shaped by the roles and responsibilities he or she is expected to assume. The extent to which an adolescent is expected to share family responsibilities is one large determining factor in normative adolescent behavior. For instance, adolescents in certain cultures are expected to contribute significantly to household chores and responsibilities.[205]Household chores are frequently divided into self-care tasks and family-care tasks. However, specific household responsibilities for adolescents may vary by culture, family type, and adolescent age.[206]Some research has shown that adolescent participation in family work and routines has a positive influence on the development of an adolescent's feelings of self-worth, care, and concern for others.[205] In addition to the sharing of household chores, certain cultures expect adolescents to share in their family's financial responsibilities. 
According to family economic and financial education specialists, adolescents develop sound money management skills through the practices of saving and spending money, as well as through planning ahead for future economic goals.[207]Differences between families in the distribution of financial responsibilities or provision of allowance may reflect various social background circumstances and intrafamilial processes, which are further influenced by cultural norms and values, as well as by the business sector and market economy of a given society.[208]For instance, in many developing countries it is common for children to attend fewer years of formal schooling so that, when they reach adolescence, they can begin working.[209] While adolescence is a time frequently marked by participation in the workforce, the number of adolescents in the workforce is much lower now than in years past as a result of increased accessibility and perceived importance of formal higher education.[210]For example, half of all 16-year-olds in China were employed in 1980, whereas less than one fourth of this same cohort were employed in 1990.[210] Furthermore, the amount of time adolescents spend on work and leisure activities varies greatly by culture as a result of cultural norms and expectations, as well as various socioeconomic factors. American teenagers spend less time in school or working and more time on leisure activities—which include playing sports, socializing, and caring for their appearance—than do adolescents in many other countries.[211]These differences may be influenced by cultural values of education and the amount of responsibility adolescents are expected to assume in their family or community. Time management, financial roles, and social responsibilities of adolescents are therefore closely connected with the education sector and processes of career development for adolescents, as well as to cultural norms and social expectations. In many ways, adolescents' experiences with their assumed social roles and responsibilities determine the length and quality of their initial pathway into adult roles.[212] Adolescence is frequently characterized by a transformation of an adolescent's understanding of the world, the rational direction towards a life course, and the active seeking of new ideas rather than the unquestioning acceptance of adult authority.[213]An adolescent begins to develop a unique belief system through his or her interaction with social, familial, and cultural environments.[214]While organized religion is not necessarily a part of every adolescent's life experience, youth are still held responsible for forming a set of beliefs about themselves, the world around them, and whatever higher powers they may or may not believe in.[213]This process is often accompanied or aided by cultural traditions that intend to provide a meaningful transition to adulthood through a ceremony, ritual, confirmation, or rite of passage.[215] Many cultures define the transition into adultlike sexuality by specific biological or social milestones in an adolescent's life. For example, menarche (the first menstrual period of a female) or semenarche (the first ejaculation of a male) are frequent sexual defining points for many cultures. In addition to biological factors, an adolescent's sexual socialization is highly dependent upon whether their culture takes a restrictive or permissive attitude toward teen or premarital sexual activity.
In the United States specifically, adolescents are said to have "raging hormones" that drive their sexual desires. These sexual desires are then dramatized regarding teen sex and seen as "a site of danger and risk; that such danger and risk is a source of profound worry among adults".[216]There is little to no normalization regarding teenagers having sex in the U.S., which causes conflict in how adolescents are taught about sex education. There is a constant debate about whether abstinence-only sex education or comprehensive sex education should be taught in schools, and this stems back to whether or not the country it is being taught in is permissive or restrictive. Restrictive cultures overtly discourage sexual activity in unmarried adolescents or until an adolescent undergoes a formal rite of passage. These cultures may attempt to restrict sexual activity by separating males and females throughout their development, or through public shaming and physical punishment when sexual activity does occur.[167][217]In less restrictive cultures, there is more tolerance for displays of adolescent sexuality, or of the interaction between males and females in public and private spaces. Less restrictive cultures may tolerate some aspects of adolescent sexuality, while objecting to other aspects. For instance, some cultures find teenage sexual activity acceptable but teenage pregnancy highly undesirable. Other cultures do not object to teenage sexual activity or teenage pregnancy, as long as they occur after marriage.[218]In permissive societies, overt sexual behavior among unmarried teens is perceived as acceptable, and is sometimes even encouraged.[218]Regardless of whether a culture is restrictive or permissive, there are likely to be discrepancies in how females versus males are expected to express their sexuality. Cultures vary in how overt this double standard is—in some it is legally inscribed, while in others it is communicated through social convention.[219]Lesbian, gay, bisexual and transgender youth face much discrimination through bullying from those unlike them and may find telling others that they are gay to be a traumatic experience.[220]The range of sexual attitudes that a culture embraces could thus be seen to affect the beliefs, lifestyles, and societal perceptions of its adolescents. Adolescence is a period frequently marked by increased rights and privileges for individuals. While cultural variation exists for legal rights and their corresponding ages, considerable consistency is found across cultures. Furthermore, since the advent of the Convention on the Rights of the Child in 1989 (children here defined as under 18), almost every country in the world (except the U.S. and South Sudan) has legally committed to advancing an anti-discriminatory stance towards young people of all ages. This includes protecting children against unchecked child labor, enrollment in the military, prostitution, and pornography. In many societies, those who reach a certain age (often 18, though this varies) are considered to have reached the age of majority and are legally regarded as adults who are responsible for their actions. People below this age are considered minors or children. A person below the age of majority may gain adult rights through legal emancipation. The legal working age in Western countries is usually 14 to 16, depending on the number of hours and type of employment under consideration.
Many countries also specify a minimum school leaving age, at which a person is legally allowed to leave compulsory education. This age varies greatly cross-culturally, spanning from 10 to 18, which further reflects the diverse ways formal education is viewed in cultures around the world. In most democratic countries, a citizen is eligible to vote at age 18. In a minority of countries, the voting age is as low as 16 (for example, Brazil), and at one time was as high as 25 in Uzbekistan. The age of consent to sexual activity varies widely between jurisdictions, ranging from 12 to 20 years, as does the age at which people are allowed to marry.[221]Specific legal ages for adolescents that also vary by culture are enlisting in the military, gambling, and the purchase of alcohol, cigarettes, or items with parental advisory labels. The legal coming of age often does not correspond with the sudden realization of autonomy; many adolescents who have legally reached adult age are still dependent on their guardians or peers for emotional and financial support. Nonetheless, new legal privileges converge with shifting social expectations to usher in a phase of heightened independence or social responsibility for most legal adolescents. Following a steady decline beginning in the late 1990s up through the mid-2000s and a moderate increase in the early 2010s, illicit drug use among adolescents has roughly plateaued in the U.S. Aside from alcohol, marijuana is the most commonly used drug during the adolescent years. Data collected by the National Institute on Drug Abuse shows that between 2015 and 2018, past-year marijuana usage among 8th graders declined from 11.8% to 10.5%; among 10th grade students, usage rose from 25.4% to 27.5%; and among 12th graders, usage rose slightly from 34.9% to 35.9%.[222]Additionally, while the early 2010s saw a surge in the popularity of MDMA, usage has stabilized, with 2.2% of 12th graders using MDMA in the past year in the U.S.[222]The heightened usage of ecstasy most likely ties in at least to some degree with the rising popularity of rave culture. One significant contribution to the increase in teenage substance abuse is an increase in the availability of prescription medication. With an increase in the diagnosis of behavioral and attentional disorders for students, taking pharmaceutical drugs such as Vicodin and Adderall for pleasure has become a prevalent activity among adolescents: 9.9% of high school seniors report having abused prescription drugs within the past year.[222] In the U.S., teenage alcohol use rose in the late 2000s and is currently stable at a moderate level. Out of a polled body of U.S. students aged 12–18, 8.2% of 8th graders reported having consumed alcohol on at least one occasion within the previous month; for 10th graders, the number was 18.6%, and for 12th graders, 30.2%.[223]More drastically, cigarette smoking has become a far less prevalent activity among American middle- and high-school students; in fact, a greater number of teens now smoke marijuana than smoke cigarettes, with one recent study showing a respective 23.8% versus 43.6% of surveyed high school seniors.[223]Recent studies have shown that late-adolescent males are far more likely to smoke cigarettes than females. The study indicated that there was a discernible gender difference in the prevalence of smoking among the students.
The findings of the study show that more males than females began smoking when they were in primary and high school, whereas most females started smoking after high school.[224]This may be attributed to recently changing social and political views towards marijuana; issues such as medicinal use and legalization have tended towards painting the drug in a more positive light than historically, while cigarettes continue to be vilified due to associated health risks. Different drug habits often relate to one another in a highly significant manner. It has been demonstrated that adolescents who drink at least to some degree may be as much as sixteen times more likely than non-drinkers to use illicit drugs.[225] Peer acceptance and social norms gain a significantly greater hand in directing behavior at the onset of adolescence; as such, the alcohol and illegal drug habits of teens tend to be shaped largely by the substance use of friends and other classmates. In fact, studies suggest that more significantly than actual drug norms, an individual's perception of the illicit drug use by friends and peers is highly associated with his or her own habits in substance use during both middle and high school, a relationship that increases in strength over time.[226]Whereas social influences on alcohol use and marijuana use tend to work directly in the short term, peer and friend norms on smoking cigarettes in middle school have a profound effect on one's own likelihood to smoke cigarettes well into high school.[226]Perhaps the strong correlation between peer influence in middle school and cigarette smoking in high school may be explained by the addictive nature of cigarettes, which could lead many students to continue their smoking habits from middle school into late adolescence. Until mid-to-late adolescence, boys and girls show relatively little difference in drinking motives.[227]Distinctions between the reasons for alcohol consumption of males and females begin to emerge around ages 14–15; overall, boys tend to view drinking in a more social light than girls, who report on average a more frequent use of alcohol as a coping mechanism.[227]The latter effect appears to shift in late adolescence and the onset of early adulthood (20–21 years of age); however, despite this trend, age tends to bring a greater desire to drink for pleasure rather than coping in both boys and girls.[227] Drinking habits and the motives behind them often reflect certain aspects of an individual's personality; in fact, four dimensions of the Five-Factor Model of personality demonstrate associations with drinking motives (all but 'Openness'). Greater enhancement motives for alcohol consumption tend to reflect high levels of extraversion and sensation-seeking in individuals; such enjoyment motivation often also indicates low conscientiousness, manifesting in lowered inhibition and a greater tendency towards aggression. On the other hand, drinking to cope with negative emotional states correlates strongly with high neuroticism and low agreeableness.[227]Alcohol use as a negative emotion control mechanism often links with many other behavioral and emotional impairments, such as anxiety, depression, and low self-esteem.[227]
Surveys conducted in Argentina, Hong Kong, and Canada have each indicated that the most common reasons for drinking among adolescents relate to pleasure and recreation: 80% of Argentinian teens reported drinking for enjoyment, while only 7% drank to improve a bad mood.[227]The most prevalent answers among Canadian adolescents were to "get in a party mood," 18%; "because I enjoy it," 16%; and "to get drunk," 10%.[227]In Hong Kong, female participants most frequently reported drinking for social enjoyment, while males most frequently reported drinking to feel the effects of alcohol.[227] Much research has been conducted on the psychological ramifications of body image on adolescents. Modern-day teenagers are exposed to more media on a daily basis than any generation before them. As such, modern-day adolescents are exposed to many representations of ideal, societal beauty. The concept of a person being unhappy with their own image or appearance has been defined as "body dissatisfaction". In teenagers, body dissatisfaction is often associated with body mass, low self-esteem, and atypical eating patterns that can result in health procedures.[228][229]Scholars continue to debate the effects of media on body dissatisfaction in teens.[230][231] Because exposure to media has increased over the past decade, adolescents' use of computers, cell phones, stereos, and televisions to gain access to various mediums of popular culture has also increased. Almost all American households have at least one television, more than three-quarters of all adolescents' homes have access to the Internet, and more than 90% of American adolescents use the Internet at least occasionally.[232]As a result of the amount of time adolescents spend using these devices, their total media exposure is high. From 1996 to 2006, the amount of time that adolescents spent on the computer greatly increased.[233]Online activities with the highest rates of use among adolescents are video games (78% of adolescents), email (73%), instant messaging (68%), social networking sites (65%), news sources (63%), music (59%), and videos (57%). In the 2000s, social networking sites proliferated and a high proportion of adolescents used them. As of 2012, 73% of 12–17-year-olds reported having at least one social networking profile;[234]two-thirds (68%) of teens texted every day, half (51%) visited social networking sites daily, and 11% sent or received tweets at least once every day. More than a third (34%) of teens visited their main social networking site several times a day. One in four (23%) teens were "heavy" social media users, meaning they used at least two different types of social media every day.[235] Although research has been inconclusive, some findings have indicated that electronic communication negatively affects adolescents' social development, replaces face-to-face communication, impairs their social skills, and can sometimes lead to unsafe interaction with strangers. A 2015 review reported that "adolescents lack awareness of strategies to cope with cyberbullying, which has been consistently associated with an increased likelihood of depression."[236]Furthermore, in 2020, 32% of adolescent girls who use Instagram reported feeling worse about their body image after using the platform.[237]Studies have shown differences in the ways the internet negatively impacts adolescents' social functioning.
Online socializing tends to make girls particularly vulnerable, while socializing in Internet cafés seems only to affect boys' academic achievement. However, other research suggests that Internet communication brings friends closer and is beneficial for socially anxious teens, who find it easier to interact socially online.[238] A broad way of defining adolescence is the transition from childhood to adulthood. According to Hogan & Astone (1986), this transition can include markers such as leaving school, starting a full-time job, leaving the home of origin, getting married, and becoming a parent for the first time.[239]However, the time frame of this transition varies drastically by culture. In some countries, such as the United States, adolescence can last nearly a decade, but in others, the transition—often in the form of a ceremony—can last for only a few days.[240] Some examples of social and religious transition ceremonies that can be found in the U.S., as well as in other cultures around the world, are Confirmation, Bar and Bat Mitzvahs, Quinceañeras, sweet sixteens, cotillions, and débutante balls. In other countries, initiation ceremonies play an important role, marking the transition into adulthood or the entrance into adolescence. This transition may be accompanied by obvious physical changes, which can vary from a change in clothing to tattoos and scarification.[218]Furthermore, transitions into adulthood may also vary by gender, and specific rituals may be more common for males or for females. This illuminates the extent to which adolescence is, at least in part, a social construction; it takes shape differently depending on the cultural context, and may be enforced more by cultural practices or transitions than by universal chemical or biological physical changes. At the decision-making point of their lives, youth are susceptible to drug addiction, sexual abuse, peer pressure, violent crimes, and other illegal activities. Developmental Intervention Science (DIS) is a fusion of the literature of both developmental and intervention sciences. This approach conducts youth interventions that serve both the needs of the community and psychologically stranded youth by focusing on risky and inappropriate behaviors while promoting positive self-development and self-esteem among adolescents.[241] The concept of adolescence has been criticized by experts, such as Robert Epstein, who state that an undeveloped brain is not the main cause of teenagers' turmoil.[242][243]Some have criticized the concept of adolescence because it is a relatively recent phenomenon in human history created by modern society,[244][245][246][247]and have been highly critical of what they view as the infantilization of young adults in American society.[248]In an article for Scientific American, Robert Epstein and Jennifer Ong state that "American-style teen turmoil is absent in more than 100 cultures around the world, suggesting that such mayhem is not biologically inevitable.
Second, the brain itself changes in response to experiences, raising the question of whether adolescent brain characteristics are the cause of teen tumult or rather the result of lifestyle and experiences."[249]David Moshman has also stated, in regard to adolescence, that brain research "is crucial for a full picture, but it does not provide an ultimate explanation."[250] Other critics of the concept of adolescence point to individual differences in brain growth rate, citing that some (though not all) early teens still have infantile, undeveloped corpus callosums, and concluding that "the adult in *every* adolescent" is too generalizing. These people tend to support the notion that a more interconnected brain makes more precise distinctions (citing Pavlov's comparisons of conditioned reflexes in different species) and that there is a non-arbitrary threshold at which distinctions become sufficiently precise to correct assumptions afterward, as opposed to being ultimately dependent on exterior assumptions for communication. They argue that this threshold is the one at which an individual is objectively capable of speaking for himself or herself, as opposed to culturally arbitrary measures of "maturity" which often treat this ability as a sign of "immaturity" merely because it leads to questioning of authorities. These people also stress the low probability of the threshold being reached at a birthday, and instead advocate non-chronological emancipation at the threshold of afterward correction of assumptions.[251]They sometimes cite similarities between "adolescent" behavior and KZ syndrome (inmate behavior in adults in prison camps), such as aggression being explainable by oppression and "immature" financial or other risk behavior being explainable by a way out of captivity being worth more to captive people than any incremental improvement in captivity, and argue that this theory successfully predicted remaining "immature" behavior after reaching the age of majority by means of longer-term traumatization. In this context, they refer to the fallibility of official assumptions about what is good or bad for an individual, concluding that paternalistic "rights" may harm the individual. They also argue that since it never took many years to move from one group to another to avoid inbreeding in the Paleolithic, evolutionary psychology is unable to account for a long period of "immature" risk behavior.[252]
https://en.wikipedia.org/wiki/Adolescence
Age of candidacy is the minimum age at which a person can legally hold certain elected government offices. In many cases, it also determines the age at which a person may be eligible to stand for an election or be granted ballot access. International electoral standards, which are defined in international public human rights law, allow restricting candidacy on the basis of age. The interpretation of the International Covenant on Civil and Political Rights offered by the United Nations Human Rights Committee in General Comment 25 states: "Any conditions which apply to the exercise of the rights protected by article 25 (of the ICCPR) should be based on objective and reasonable criteria. For example, it may be reasonable to require a higher age for election or appointment to particular offices than for exercising the right to vote, which should be available to every adult citizen."[1] The first known example of a law enforcing an age of candidacy was the Lex Villia Annalis, a Roman law enacted in 180 BCE which set the minimum ages for senatorial magistrates.[206] In Australia a person must be aged 18 or over to stand for election to public office at the federal, state or local government level. Prior to 1973, the age of candidacy for the federal parliament was 21.[207] The youngest ever member of the House of Representatives was 20-year-old Wyatt Roy, elected in the 2010 federal election. In Austria, a person must be 18 years of age or older to stand in elections to the European Parliament or National Council.[16] The Diets of the regional Länder are able to set a minimum age lower than 18 for standing in elections to the Diet itself, as well as to municipal councils in the Land.[208] In presidential elections the candidacy age is 35. Any Belgian who has reached the age of 18 can stand for election to the Chamber of Representatives, can become a member of the Senate, or can be elected to one of the regional parliaments.[209] This is regulated in the Constitution (Art. 64) and in the Special Law on the Reform of the Institutions. According to the Constitution of Belize, a person must be at least 18 years old to be elected as a member of the House of Representatives and must be at least 30 to be Speaker of the House. A person must be at least 18 years old to be appointed to the Senate and must be at least 30 to be President or Vice-President of the Senate. As only members of the House of Representatives are eligible to be appointed prime minister, the Prime Minister must be at least 18 years old. A person must also be at least 18 years old to be elected to a village council.[28] The Brazilian Constitution (Article 14, Section 3 (VI)) defines 35 years as the minimum age for someone to be elected President, Vice-President or Senator; 30 years for state Governor or Vice-Governor; 21 for Federal or State Deputy, Mayor or Vice-Mayor; and 18 for City Council member.[33] In Canada, the constitution does not outline any age requirements to run for elected office, simply stating "Every citizen of Canada has the right to vote in an election of the members of the House of Commons or of a legislative assembly and to be qualified for membership therein."[210] However, under the current Canada Elections Act, to be eligible to run for elected office (municipal, provincial, federal) one must be at least 18 years old on the day of the election.[211] Prior to 1970, the age requirement was 21, along with the voting age.
To be appointed to the Senate (Upper House), one must be at least 30 years of age, under 75 years of age, must possess land worth at least $4,000 in the province for which they are appointed, and must own real and personal property worth at least $4,000, above their debts and liabilities.[212] In the province of Ontario, Sam Oosterhoff, a member of the Progressive Conservative Party of Ontario, was first elected at the age of 19 in a November 2016 by-election, the youngest Ontario MPP ever elected.[213] Pierre-Luc Dusseault (born May 31, 1991) is a Canadian politician who was elected to the House of Commons of Canada in the 2011 federal election at the age of 19, becoming the youngest Member of Parliament in the country's history. He was sworn into office two days after his 20th birthday. He was re-elected in 2015 but lost his seat in the 2019 Canadian federal election.[214] Article 36 of the 2016 Constitution of the Central African Republic requires that candidates for President must "be aged thirty-five (35) years at least [on] the day of the deposit of the dossier of the candidature".[40] In Chile the minimum age required to be elected President of the Republic is 35 years on the day of the election. Before the 2005 reforms the requirement was 40 years, and from 1925 to 1981 it was 30 years. For senators it is 35 years (between 1981 and 2005 it was 40 years) and for deputies it is 21 years (between 1925 and 1970 it was 35 years).[215] In China the minimum age to be elected as president or vice-president is 45.[216] All citizens who have reached the age of 18 have the right to vote and stand for election.[217] In Cyprus the minimum age to be elected president is 35 years. The minimum age to run for the House of Representatives was 25 years until the Constitution was amended in 2019 to lower the limit to 21.[218] In the Czech Republic, a person must be at least 18 years old to be elected in local elections. A person must be at least 21 years old to be elected to the lower house of the Czech Parliament or to the European Parliament, and 40 years old to be a member of the upper house (Senate) of the Parliament[53] or the President of the Czech Republic. In Denmark, any adult 18 years of age or older can become a candidate and be elected in any public election. In Estonia, any citizen 18 years of age or older can be elected in local elections, and 21 years or older in parliamentary elections. The minimum age for the President of Estonia is 40.[65] In France, any citizen 18 years of age or older can be elected to the lower house of Parliament, and 24 years or older for the Senate. The minimum age for the President of France is 18.[citation needed] In Germany, a citizen must be 18 or over to be elected at the national level, such as for the office of Chancellor, and the same age applies at the regional or local level. A person must be 40 or over to be President. In Greece, those aged 25 or over who hold Greek citizenship are eligible to stand and be elected to the Hellenic Parliament.[76] All those over 40 years old are eligible to stand for the presidency.
In Hong Kong a person must be at least 21 to be a candidate in a district council or Legislative Council election.[219][220] A person must be at least 40 to be a candidate in the Chief Executive election, and also at least 40 to be a candidate in the election for the President of the Legislative Council from among the members of the Legislative Council.[221] For the office of President, any Icelandic citizen who has reached the age of 35 and fulfils the requirements necessary to vote in elections to the Althing is eligible to be elected president.[222] In India a person must be at least: Calls to decrease the age of candidacy in India have been on the rise. Young India Foundation has been working on a campaign to decrease the age of candidacy in India for MPs and MLAs to better reflect the large young demographic of India.[223] In Indonesia a person must be at least: In Israel one must be at least 21 to become a member of the Knesset (Basic Law: The Knesset section 6(a)) or a municipality.[citation needed] When the Prime Minister was directly elected, one had to be a member of the Knesset aged at least 30 to be a candidate for prime minister.[citation needed] Every Israeli citizen (including minors) can be appointed as a Government Minister, or elected as President of Israel, but the latter role is mostly ceremonial and is elected by the Parliament.[citation needed] In Italy, a person must be at least 50 to be President of the Republic, 40 to be a Senator, and 25 to be a Deputy, as specified in the 1947 Constitution of Italy. 18 years of age is sufficient, however, to be elected a member of the Council of Regions, Provinces, and Municipalities (Communes). In Iran a person must be at least 21 years old to run for president.[87] The Iraqi constitution states that a person must be at least 40 years old to run for president[88] and 35 years old to be prime minister.[89] Until 2019, the electoral law set the age limit at 30 years old for candidates to run for the Council of Representatives.[224] However, the new Iraqi Council of Representatives Election Law (passed in 2019, yet to be enacted) lowered the age limit to 28.[225] The 1937 Constitution of Ireland requires the President to be at least 35 and members of the Oireachtas (legislature) to be 21.[91][92] Members of the European Parliament for Ireland must also be 21.[92][93] Members of local authorities must be 18, reduced from 21 in 1973.[92][94] The 1922–1937 Constitution of the Irish Free State required TDs (members of the Dáil, the lower house) to be 21,[226] whereas Senators had to be 35 (reduced to 30 in 1928).[95] At the 1987 general election, the High Court ruled that a candidate (Hugh Hall) who reached the minimum age after the date of nomination but before the date of the election was eligible.[227] The Thirty-fifth Amendment of the Constitution Bill 2015 proposed to lower the presidential age limit to 21.[228] However, this proposal was rejected by 73% of the voters. In Japan a person must be at least:[99] In Lithuania a person must be at least: In Luxembourg a person must be at least 18 years old to stand as a candidate to be a member of the Chamber of Deputies, the country's unicameral national legislature.[114] In Malaysia a citizen must be over 18 years of age to become a candidate and be elected to the Dewan Rakyat and Dewan Undangan Negeri, and a person must be over 30 to be a Senator under the constitution. In Mexico, a person must be at least 35 to be president, 25 to be a senator, or 21 to be a Congressional Deputy, as specified in the 1917 Constitution of Mexico.
In theNetherlands, any adult 18 years of age or older can become elected in any public election. To be a candidate the person has to reach this age during the time for which the elections are held. InNew Zealandthe minimum age to bePrime Minister of New Zealandis 18 years old. Citizens and permanent residents who are enrolled as an elector are eligible to be a candidate for election as aMember of Parliament.[citation needed] InNigeria, a person must be at least 35 years of age to be electedPresidentorVice President, 35 to be a senator, 30 to be a State Governor, and 25 to be a Representative in parliament or Member of the States' House of Assembly.[229] InNorth Korea, any person eligible to vote in elections to theSupreme People's Assemblyis also eligible to stand for candidacy. The age for both voting and candidacy is 17.[230] InNorway, any adult, aged 18 or over within the calendar year, can become a candidate and be elected in any public election. Palestinian parliamentary candidates must be at least 28 years old, while the presidential candidates must be at least 40 years old.[231] InPakistan, a person must be at least 45 years old to bePresident. A person must be at least 25 years old to be a member of the provincial assembly or national assembly.[232] In Russia a person must be at least 35 to run for president.[158] InSingaporea person must be at least 45 years old to run for president.[238]21 year-olds can stand in parliamentary elections. Section 47, Clause 1 of the 1996 Constitution of South Africa states that "Every citizen who is qualified to vote for the National Assembly is eligible to be a member of the Assembly", defaulting to Section 46 which "provides for a minimum voting age of 18 years" in National Assembly elections; Sections 106 and 105 provide the same for provincial legislatures. [173] Spainhas two legislative chambers of Parliament, a lower house and an upper house. These are theCongress of Deputies(lower house) and theSenate of Spain(upper house) respectively. The minimum age requirement to stand and to be elected to either house is 18 years of age.[174] InSweden, any citizen at least 18 years old, who resides, or who has resided in the realm can be elected to parliament.[240]Citizens of Sweden, the European Union, Norway or Iceland aged 18 and over may be elected to county or municipal council. Citizens of other countries may also be elected to council, provided they have resided in the realm for at least three years.[241] InSwitzerland, any citizen aged 18 or over can become a candidate and be elected in any federal election. In theRepublic of China(commonly known as Taiwan), the minimum age of candidacy is 23, unless otherwise specified in the Constitution or any relevant laws.[242]The Civil Servants Election and Recall Act specifies that candidates for township, city, and indigenous district chiefs must be at least 26, and candidates for municipality, county, and city governors must be at least 30.[243]The minimum age to be elected as president or vice-president is 40.[244] The14th Dalai Lamawas enthroned at the age of 4, and none ofhis predecessorshave been enthroned before age 4. The coming of age for the Dalai Lama is 18, when responsibilities are assumed. The1876 constitutionset the age for parliamentary elections as 30. This remained unchanged until 13 October 2006, when it was lowered to 25 through a constitutional amendment. In 2017, it was further lowered to 18, the same as thevoting age.[245]In presidential elections the candidacy age is 40. 
In theUnited Kingdom, a person must be aged 18 or over to stand inelectionsto all parliaments, assemblies, and councils within the UK,devolved, or local level. This age requirement also applies in elections to any individual elective public office; the main example is that of anelected mayor, whether ofLondonor alocal authority. There are no higher age requirements for particular positions in public office. Candidates are required to be aged 18 on both the day of nomination and the day of the poll.[citation needed] Previously, the requirement was that candidates be 21 years old. During the early 2000s, theBritish Youth Counciland other groups successfully campaigned to lower age of candidacy requirements in the United Kingdom.[246]The age of candidacy was reduced from 21 to 18 inEngland,WalesandScotlandon 1 January 2007,[247]when section 17 of theElectoral Administration Act 2006entered into force.[248] In theUnited States, a person must be aged 35 or over to serve as president. To be a senator, a person must be aged 30 or over. To be a Representative, a person must be aged 25 or older. This is specified in theU.S. Constitution. Most states in the U.S. also have age requirements for the offices of Governor, State Senator, and State Representative.[249]Some states have a minimum age requirement to hold any elected office (usually 21 or 18). Manyyouth rightsgroups view current age of candidacy requirements as unjustifiedage discrimination.[250]Occasionally people who are younger than the minimum age will run for an office in protest of the requirement or because they do not know that the requirement exists. On extremely rare occasions, young people have been elected to offices they do not qualify for and have been deemed ineligible to assume the office. In 1872,Victoria Woodhullran for President of the United States, although according to the Constitution she would have been too young to be President if elected.[251] In 1934,Rush Holtof West Virginia was elected to theSenate of the United Statesat the age of 29. Since theU.S. Constitutionrequires senators to be at least 30, Holt was forced to wait until his 30th birthday, six months after the start of the session, before being sworn in.[252] In 1954,Richard Fultonwon election to theTennessee Senate. Shortly after being sworn in, Fulton was ousted from office because he was 27 years old at the time. TheTennessee State Constitutionrequired that senators be at least 30.[253]Rather than hold a new election, the previous incumbent,Clifford Allen, was allowed to resume his office for another term. Fulton went on to win the next State Senate election in 1956 and was later elected to theU.S. House of Representativeswhere he served for 10 years. In 1964,Congressman Jed Johnson Jr.of Oklahoma was elected to the89th Congressin the 1964 election while still aged 24 years. However, he becameeligiblefor the House after turning 25 on his birthday, 27 December 1964, seven days before his swearing in, making him the youngestlegallyelected and seated member of the United States Congress ever.[254] In South Carolina, two Senators aged 24 were elected, but were too young according to the State Constitution: Mike Laughlin in 1969 andBryan Dorn(later a U.S. congressman) in 1941. They were seated anyway.[255] On several occasions, theSocialist Workers Party (USA)has nominated candidates too young to qualify for the offices they were running for. In 1972,Linda Jennessran as the SWP presidential candidate, although she was 31 at the time. Since the U.S. 
Constitution requires that the President and Vice President be at least 35 years old, Jenness was not able to receiveballot accessin several states in which she otherwise qualified.[256]Despite this handicap, Jenness still received 83,380 votes.[257]In 2004, the SWP nominatedArrin Hawkinsas the party's vice-presidential candidate, although she was 28 at the time. Hawkins was also unable to receive ballot access in several states due to her age.[258] In the United States, many groups have attempted to lower age of candidacy requirements in various states. In 1994,South Dakotavoters rejected a ballot measure that would have lowered the age requirements to serve as a State Senator or State Representative from 25 to 18. In 1998, however, they approved a similarballotmeasure that reduced the age requirements for those offices from 25 to 21.[259]In 2002,Oregonvoters rejected a ballot measure that would have reduced the age requirement to serve as a State Representative from 21 to 18. InVenezuela, a person must be at least 30 to bePresidentorVice President,[260]21 to be a deputy for theNational Assembly[261]and 25 to be the Governor of astate.[262]
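The U.S. federal thresholds described above (35 for President, 30 for Senator, 25 for Representative) amount to a simple rule, and the Rush Holt and Jed Johnson Jr. cases suggest that what matters in practice is the candidate's age on the day they take their seat rather than on election day. The sketch below is only an illustration of that check under that assumption; the function names, the office table, and the example dates are invented for the example and are not drawn from any statute or library.

```python
from datetime import date

# Minimum ages for U.S. federal offices as quoted in the text above.
# The dict and function names here are illustrative, not an official API.
MIN_AGE = {"president": 35, "senator": 30, "representative": 25}

def age_on(birth: date, on: date) -> int:
    """Full years completed between birth and the given date."""
    years = on.year - birth.year
    if (on.month, on.day) < (birth.month, birth.day):
        years -= 1
    return years

def eligible(office: str, birth: date, seating_day: date) -> bool:
    """True if the candidate meets the stated minimum age for the
    office on the (assumed) day they would be seated."""
    return age_on(birth, seating_day) >= MIN_AGE[office]

# Hypothetical candidate who turns 25 shortly before being sworn in,
# loosely modelled on the Jed Johnson Jr. case described above.
print(eligible("representative", date(1939, 12, 27), date(1965, 1, 3)))  # True
```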
https://en.wikipedia.org/wiki/Age_of_candidacy
The age of criminal responsibility is the age below which a child is deemed incapable of having committed a criminal offence. In legal terms, it is referred to as a defence/defense of infancy, which is a form of defense known as an excuse, so that defendants falling within the definition of an "infant" are excluded from criminal liability for their actions if, at the relevant time, they had not reached the age of criminal responsibility. After reaching the initial age, there may be levels of responsibility dictated by age and the type of offense committed.[1] Under the English common law the defense of infancy was expressed as a set of presumptions in a doctrine known as doli incapax.[2] A child under the age of seven was presumed incapable of committing a crime. The presumption was conclusive, prohibiting the prosecution from offering evidence that the child had the capacity to appreciate the nature and wrongfulness of what they had done. Children aged 7–13 were presumed incapable of committing a crime, but the presumption was rebuttable. The prosecution could overcome the presumption by proving that the child understood what they were doing and that it was wrong. In fact, capacity was a necessary element of the state's case (thus, the rule of sevens doctrine arose). If the state failed to offer sufficient evidence of capacity, the infant was entitled to have the charges dismissed at the close of the state's evidence. Doli incapax was abolished in England and Wales in 1998 for children over the age of 10,[3][4] but persists in other common law jurisdictions. The terminology regarding such a defense varies by jurisdiction and sphere. "Defense of infancy" is a mainly US term.[5] The "age of criminal responsibility" is used by most European countries, the UK,[6] Australia, New Zealand[7] and other Commonwealth of Nations countries.[8] Other instances of usage have included the terms age of accountability,[9] age of responsibility,[10] and age of liability.[11] The term minimum age of criminal responsibility (MACR) is commonly used in the literature.[12][7] The rationale behind age of accountability laws is the same as that behind the insanity defense: both the mentally disabled and the young are presumed to lack the capacity to appreciate the nature of their actions.[13] Governments enact laws to label certain types of activity as wrongful or illegal. Behavior of a more antisocial nature can be stigmatized in a more positive way to show society's disapproval through the use of the word criminal. In this context, laws tend to use the phrase "age of criminal responsibility" in two different ways:[14] This is an aspect of the public policy of parens patriae. In the criminal law, each state will consider the nature of its own society and the available evidence of the age at which antisocial behavior begins to manifest itself. Some societies will have qualities of indulgence toward the young and inexperienced, and will not wish them to be exposed to the criminal law system before all other avenues of response have been exhausted. Hence, some states have a policy of doli incapax (i.e. incapable of wrong) and exclude liability for all acts and omissions that would otherwise have been criminal, until a specified age is reached.[15] Hence, no matter what the child may have done, there cannot be a criminal prosecution. However, although no criminal liability is inferred, other aspects of law may be applied. For example, in the Nordic countries, an offense by a person under 15 years of age is considered mostly a symptom of problems in the child's development.
This will cause the social authorities to take appropriate administrative measures to secure the development of the child. Such measures may range from counseling to placement at a special care unit. Being non-judicial, the measures are not dependent on the severity of the offense committed but on the overall circumstances of the child.[14] The policy of treating minors as incapable of committing crimes does not necessarily reflect modern sensibilities. Thus, if the rationale of the excuse is that children below a certain age lack the capacity to form themens reaof an offense, this may no longer be a sustainable argument. Indeed, given the different speeds at which people may develop both physically and intellectually, any form of explicit age limit may be arbitrary and irrational. Yet, the sense that children do not deserve to be exposed to criminal punishment in the same way as adults remains strong. Children have not had experience of life, nor do they have the same mental and intellectual capacities as adults. Hence, it might be considered unfair to treat young children in the same way as adults.[14] InScotland, the age of criminal responsibility was raised from 8 to 12 by the implementation of the Age of Criminal Responsibility (Scotland) Act 2019,[16]which came into force on 31 March 2020.[17][18]InEngland and WalesandNorthern Ireland, the age of responsibility is 10 years, and in theNetherlandsandCanadathe age of responsibility is 12 years.Sweden,Finland, andNorwayall set the age at 15 years. In theUnited States, the minimum age for federal crimes is 11 years. State minimums vary, with 24 states having no defined minimum age, and defined minimums ranging from 7 years inFloridato 13 years inMarylandandNew Hampshire.[19] As the treaty parties of theRome Statuteof theInternational Criminal Courtcould not agree on a minimum age for criminal responsibility, they chose to solve the question procedurally and excluded the jurisdiction of the Court for persons under eighteen years.[citation needed] Some jurisdictions do not have a set fixed minimum age, but leave discretion toprosecutorsto argue or thejudgesto rule on whether thechildoradolescent("juvenile") defendant understood that what was being done was wrong. If the defendant did not understand the difference betweenrightandwrong, it may not be considered appropriate to treat such a person asculpable. Alternatively, the lack of real fault in the offender can be recognized by rulings that avoid criminalsentencesand/or address more practical matters ofparental responsibilityby adjusting the rights ofparentsto unsupervised custody, or by separate criminal proceedings against the parents for breach of theirdutiesas parents.[citation needed] The following are the minimum ages at which people may be charged with a criminal offence in each country: Juvenile offenders aged 14–17 are always held criminally responsible, but they are always tried as young/juvenile offenders, meaning generally more lenient sentences compared to adults. Also, juvenile offenders' photos and names usually cannot be released by the media, and access to the juvenile court list/courtroom is restricted to authorized people only. 
Nevertheless, juvenile offenders who are convicted still obtain a permanent (but usually sealable or annullable) criminal record as if they were adults (hence the criminal responsibility), but with generally reduced waiting times and often more lenient standards for convictions to become 'spent' or 'annulled', depending on the Commonwealth, State, or Territory legislation applicable. Full adult criminal responsibility, in terms of sentencing and eligibility for conviction annulment, applies at age 18 (21 in Victoria). Malaysia has a dual system of secular and Islamic law, which has resulted in a number of different minimum ages of responsibility depending on which branch of the law is applicable. For offences for which an adult would be sentenced to life imprisonment, a person between the ages of 14 and 18 would be sentenced to no more than 15 years of "strict imprisonment". For offences for which an adult would be sentenced to "severe detention", a person between the ages of 14 and 18 would be sentenced to no more than 12 years of "strict imprisonment". A person under the age of 18 can be sentenced to closed detention for a single offence if: (i) the facts establish that the person has committed a felony under the Penal Code or special penal laws; (ii) the relevant crime was classified as a misdemeanour, but involved violence or intimidation against persons or generated serious risk to life or physical safety; or (iii) the acts are classified as crimes committed in groups, organisations or associations. Persons aged 14 or 15 may be sentenced to a maximum of three years' detention and persons aged 16 or 17 may not be sentenced to more than six years' detention. A child aged 16 or over when an offence was committed may not be sentenced to more than four years in detention, and only where: (i) he or she committed an offence for which an adult could be sentenced to detention of more than three months; or (ii) he or she has committed an offence under Article 122 of the Criminal Code, related to intentionally inflicting a life-threatening injury; or (iii) he or she commits robbery as a member of a group formed for the purposes of carrying out repeated robberies or thefts under Article 140(3) of the Criminal Code; or (iv) the behaviour of the offender shows particular ruthlessness, or the behaviour or purpose of the act reveals a highly reprehensible state of mind. The age is 10 in England, Wales, and Northern Ireland. Usually persons aged 10–11 will only be imprisoned in very serious cases, such as murder. Moreover, the outcomes of youth (12–17) criminal proceedings are usually age-categorised (currently depending on whether the offender is under 12, under 14, under 16 or under 18, with older offenders facing more severe punishment, especially for serious crimes). The age is 12 in Scotland. Children under 12 cannot be convicted or receive a criminal record; for those aged 12 to 15, the decision whether to refer a case to a children's hearing is usually made by the Children's Reporter, which can lead to a criminal record, though a child could be prosecuted for a criminal offence if the offence is serious. In some countries, a juvenile court is a court of special jurisdiction charged with adjudicating cases involving crimes committed by those who have not yet reached a specific age. If convicted in a juvenile court, the offender is found "responsible" for their actions as opposed to "guilty" of a criminal offense. Sometimes, in some jurisdictions (such as the United States of America), a minor may be tried as an adult.
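The common-law doli incapax doctrine described earlier reduces to a three-band rule keyed to the child's age at the time of the act: below seven the presumption of incapacity was conclusive, from seven to thirteen it was rebuttable by the prosecution, and from fourteen the child was treated as capable. The Python sketch below only illustrates that historical rule as summarised above; the thresholds come from the text, the function name is invented for the example, and modern statutes (for example England and Wales since 1998) differ.

```python
def doli_incapax_band(age: int) -> str:
    """Classify a child's position under the historical common-law
    doli incapax presumptions summarised above (illustrative only;
    modern statutory schemes differ)."""
    if age < 7:
        return "conclusively presumed incapable of crime"
    elif age <= 13:
        return "presumed incapable, but the presumption is rebuttable"
    else:
        return "presumed capable of crime"

for a in (6, 10, 14):
    print(a, "->", doli_incapax_band(a))
```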
https://en.wikipedia.org/wiki/Age_of_criminal_responsibility
Legal capacity is a quality denoting either the legal aptitude of a person to have rights and liabilities (in this sense also called transaction capacity), or the personhood itself in regard to an entity other than a natural person (in this sense also called legal personality).[1] Capacity covers day-to-day decisions, including what to wear and what to buy, as well as life-changing decisions, such as whether to move into a care home or whether to have major surgery.[1] As an aspect of the social contract between a state and its citizens, the state adopts a role of protector to the weaker and more vulnerable members of society. In public policy terms, this is the policy of parens patriae. Similarly, the state has a direct social and economic interest in promoting trade, so it will define the forms of business enterprise that may operate within its territory, and lay down rules that will allow both the businesses and those that wish to contract with them a fair opportunity to gain value. This system worked well until social and commercial mobility increased. Now persons routinely trade and travel across state boundaries (both physically and electronically), so the need is to provide stability across state lines given that laws differ from one state to the next. Thus, once defined by the personal law, persons take their capacity with them like a passport, wherever and however they may travel. In this way, a person will not gain or lose capacity depending on the accident of the local laws, e.g. if A does not have capacity to marry her cousin under her personal law (a rule of consanguinity), she cannot evade that law by travelling to a state that does permit such a marriage (see nullity). In Saskatchewan, Canada, an exception to this law allows married persons to become the common-law spouse of another (or others) prior to divorcing the first spouse. This law is not honored in other Canadian provinces. Standardized classes of person have had their freedom restricted. These limitations are exceptions to the general policy of freedom of contract and the detailed human and civil rights that a person of ordinary capacity might enjoy. For example, freedom of movement may be modified, the right to vote may be withdrawn, etc. As societies have developed more equal treatment based on gender, race and ethnicity, many of the older incapacities have been removed. For example, English law used to treat married women as lacking the capacity to own property or act independently of their husbands (the last of these rules was repealed by the Domicile and Matrimonial Proceedings Act 1973, which removed the wife's domicile of dependency for those marrying after 1974, so that a husband and wife could have different domiciles). The definition of an infant or minor varies, each state reflecting local culture and prejudices in defining the age of majority, marriageable age, voting age, etc. In many jurisdictions, legal contracts in which (at least) one of the contracting parties is a minor are voidable by the minor. For a minor to undergo a medical procedure, consent is determined by the minor's parent(s) or legal guardian(s). The right to vote in the United States is currently set at 18 years, while the right to buy and consume alcohol is often set at 21 years by U.S. state law. Some laws, such as marriage laws, may differentiate between the sexes and allow women to marry at a younger age. There are instances in which a person may be able to gain capacity earlier than the prescribed time through a process of emancipation.
Conversely, many states allow the inexperience of childhood to be an excusing condition to criminal liability and set theage of criminal responsibilityto match the local experience of emerging behavioral problems (seedoli incapax). For sexual crimes, theage of consentdetermines the potential liability ofadultaccused. As an example of liability in contract, the law in most of Canada provides that an infant is not bound by the contracts he or she enters into except for the purchase of necessaries and for beneficial contracts of service. Infants must pay fair price only for necessary goods and services. However, theBritish ColumbiaInfants Act(RSBC1996 c.223)[2]declares all contracts, including necessities and beneficial contracts of service, are unenforceable against an infant. Only student loans and other contracts made specifically enforceable by statute will be binding on infants in that province. In contracts between an adult and an infant, adults are bound but infants may escape contracts at their option (i.e. the contract is voidable). Infants may ratify a contract on reaching age of majority. In the case of executed contracts, when the infant has obtained some benefit under the contract, he/she cannot avoid obligations unless what was obtained was of no value. Upon repudiation of a contract, either party can apply to the court. The court may order restitution, damages, or discharge the contract. All contracts involving the transfer of real estate are considered valid until ruled otherwise. Aminor(typically under 18) can disaffirm a contract made, no matter the case. However, the entire contract must be disaffirmed. Depending on the jurisdiction, the minor may be required to return any of the goods still in his possession. Also,bartertransactions such as purchasing aretailitem in exchange for a cash payment are generally recognized through alegal fictionnot to be contracts due to the absence of promises of future action. A minor may not disavow such a trade.[3] Generally, the courts base their determination on whether the minor, after reaching the age of majority, has had ample opportunity to consider the nature of the contractual obligations he or she entered into as a minor and the extent to which the adult party to the contract has performed.[4]As one court put it, "the purpose of the infancy doctrine is to protect 'minors from foolishly squandering their wealth through improvident contracts with crafty adults who would take advantage of them in the marketplace.'"[5] InSingapore, while individuals under the age of 21 are regarded as minors, sections 35 and 36 of the Civil Law Act 1909 provide that certain contracts entered into by minors aged 18 and above are to be treated as though they were adults.[6]Additionally, the Minors' Contracts Act 1987 as applicable in Singapore and inEngland and Walesprovides that a contract entered into by a minor is not automatically unenforceable and that a "court may, if it is just and equitable to do so, require the [minor] defendant to transfer to the plaintiff any property acquired by the defendant under the contract, or any property representing it".[7] If individuals find themselves in a situation where they can no longer pay their debts, they lose their status as credit-worthy and become bankrupt. States differ on the means whereby their outstanding liabilities can be treated as discharged and on the precise extent of the limits that are placed on their capacities during this time but, after discharge, they are returned to full capacity. 
In the United States, some states have spendthrift laws under which an irresponsible spender may be deemed to lack capacity to enter into contracts (in Europe, these are termed prodigality laws), and both sets of laws may be denied extraterritorial effect under public policy as imposing a potentially penal status on the individuals affected.[citation needed] During times of war or civil strife, a state will limit the ability of its citizens to offer help or assistance in any form to those who are acting against the interests of the state. Hence, all commercial and other contracts with the "enemy", including terrorists, would be considered void or suspended until a cessation of hostilities is agreed.[citation needed] Loss of mental capacity can occur in individuals who have an inherent physical condition that prevents them from achieving the normal levels of performance expected of persons of comparable age, or whose inability to match current levels of performance is caused by contracting an illness. Whatever the cause, if the resulting condition is such that individuals cannot care for themselves, or may act in ways that are against their interests, those persons are vulnerable through dependency and require the protection of the state against the risks of abuse or exploitation. Hence, any agreements that were made are voidable, and a court may declare that person a ward of the state and grant power of attorney to an appointed legal guardian. The UK's Mental Capacity Act 2005 (MCA) sets out a two-stage test of capacity: The MCA states that an individual is unable to make their own decision if they are unable to do at least one of four things: In England and Wales, this is a specific function of the Court of Protection, and all matters concerning persons who have lost, or expect soon to lose, mental capacity are regulated under the Mental Capacity Act 2005. This makes provision for lasting powers of attorney, under which decisions about the health, welfare, and financial assets of a person who has lost capacity may be dealt with in that person's interests. In Ireland, the Assisted Decision-Making (Capacity) Act was passed in 2015.[9] This Act addresses the capacity of people with intellectual disabilities. The general principles are set out in section 8 of the Act.
Under Singapore's Mental Capacity Act 2008, "a person lacks capacity in relation to a matter if at the material time the person is unable to make a decision for himself or herself in relation to the matter because of an impairment of, or a disturbance in the functioning of, the mind or brain".[10] Where an individual lacks capacity on grounds of mental illness or senility, a relative or other responsible person may obtain a lasting power of attorney to make decisions concerning the "personal welfare" of the person lacking capacity, the "property and [financial] affairs" of the person, or both.[11] Questions as to whether an individual has the capacity to make decisions, either generally or with regard to a particular matter or class of matters, are generally resolved by a judicial declaration, and the court making the declaration may appoint one or more individuals to act as deputies for the person lacking capacity.[12] This sort of problem sometimes arises when people suffer some form of medical problem such as unconsciousness, coma, extensive paralysis, or delirious states, from accidents or illnesses such as strokes, or often when older people become afflicted with some form of medical or mental disability such as Huntington's disease, Alzheimer's disease, Lewy body disease, or similar dementia. Such persons are often unable to consent to medical treatment or otherwise handle their financial and other personal matters. If the afflicted person has prepared documents beforehand about what to do in such cases, often in a revocable living trust or related documents, then the named legal guardian may be able to take over their financial and other affairs. If the afflicted person owns their property jointly with a spouse or other able person, the able person may be able to take over many of the routine financial affairs. Otherwise, it is often necessary to petition a court, such as a probate court, to find that the afflicted person lacks legal capacity and to allow a legal guardian to take over their financial and personal affairs. Procedures and court review have been established, depending on the area of jurisdiction, to prevent exploitation of the incapacitated person by the guardian. The guardian periodically provides a financial accounting for court review. In the criminal law, the traditional common-law M'Naghten Rules excused all persons from liability if they did not understand what they were doing or, if they did, did not know that it was wrong. The consequences of this excuse were that those accused were detained indefinitely or until the medical authorities certified that it was safe to release them back into the community. This consequence was felt to be too draconian, and so statutes have introduced new defenses that will limit or reduce the liability of those accused of committing offenses if they were suffering from a mental illness at the relevant time (see the insanity and mental disorder defenses). Although individuals may have consumed a sufficient quantity of intoxicant or drug to reduce or eliminate their ability to understand exactly what they are doing, such conditions are self-induced, and so the law does not generally allow any defense or excuse to be raised for actions taken while incapacitated. The most generous states[which?] do permit individuals to repudiate agreements as soon as they are sober, but the conditions for exercising this right are strict.[citation needed] There is a clear division between the approaches of states to the definition of partnerships.
One group of states treatsgeneralandlimitedpartnerships as aggregate. In terms of capacity, this means that they are no more than the sum of the natural persons who conduct the business. The other group of states allows partnerships to have a separate legal personality which changes the capacity of the "firm" and those who conduct its business and makes such partnerships more like corporations. The extent of ajuridical person's capacity depends on the law of the place of incorporation and the enabling provisions included in theconstitutive documentsof incorporation. The general rule is that anything not included in the corporation's capacity, whether expressly or by implication, isultra vires, i.e. "beyond the power" of the corporation, and so may be unenforceable by the corporation, but the rights and interests of innocent third parties dealing with the corporations are usually protected. In American law,limited liability companies(LLCs) are legal persons. Some legal scholars have argued that they can be used to givelegal capacity to software programs, includingartificial intelligence.[13][14] In some states,trade unionshave limited capacity unless any contract made relates to union activities. When a business entity becomesinsolvent, anadministrator, receiver, or other similar legal functionary may be appointed to determine whether the entity shall continue to trade or be sold so that the creditors may receive all or a proportion of the money owing to them. During this time, the capacity of the entity is limited so that its liabilities are not increased unreasonably and to the detriment of the existing creditors.
https://en.wikipedia.org/wiki/Capacity_(law)
The legal drinking age is the minimum age at which a person can legally consume alcoholic beverages. In some countries, the minimum age at which alcohol can legally be consumed differs from the age at which it can be purchased. These laws vary between countries and many laws have exemptions or special circumstances. Most laws apply only to drinking alcohol in public places, with alcohol consumption in the home being mostly unregulated (one of the exceptions being England and Wales, which have a minimum legal age of five for supervised consumption in private places). Some countries also have different age limits for different types of alcoholic drinks.[1] The majority of countries have a minimum legal drinking age of 18.[2] The most commonly cited reason for legal drinking age laws is alcohol's effect on the adolescent brain. Since the brain is still maturing, alcohol can have a negative effect on memory and long-term thinking. It can also cause liver failure and create a hormone imbalance in teens due to the constant changes and maturing of hormones during puberty.[3] Some countries have a minimum legal drinking age of 19 to prevent the flow of alcoholic beverages into high schools,[4] while others, like the United States, have a minimum legal purchasing age of 21 (18 in P.R. and USVI) in an effort to reduce drunk driving rates among teenagers and young adults.[5] There is also the concept of underage clubs, where individuals below the legal drinking age are catered for and served non-alcoholic beverages.[6][7] The most common minimum age to purchase alcohol in Africa is 18. However, Angola (except Luanda Province), Central African Republic, Comoros, Equatorial Guinea, Guinea-Bissau, and Mali have no laws on the books restricting the sale of alcohol to minors. In Libya, Somalia and Sudan the sale, production and consumption of alcohol is completely prohibited.[improper synthesis?] The Revised Family Code Proclamation No. 213 (2000) Article 215 defines a minor as anyone who has not attained the full age of 18. In 2019, the Ethiopian parliament passed a bill that bans a specific category of alcohol advertising in the media and also increases the age limit for purchasing alcohol from 18 to 21 years of age. A "young person" is defined as anyone under the age of 17 by the Children and Young Persons Act, 1949 Section 2. Children's Act Section 78 – It is prohibited for any person to sell, lend, give, supply, deliver or offer alcoholic beverages to any child under the age of 16 years, except upon production of a written order signed by the parent or guardian of the child known to such person. The police have the duty to seize any alcoholic beverage in the possession of a child under the age of 16 years without the written consent of the parents or legal guardian.[43] In Central America, the Caribbean, and South America the legal drinking age and legal purchase age vary from 0 to 20 years (see table below). In South America in particular, the legal purchase age is 18 years, with two exceptions: In North America the legal drinking age and legal purchase age vary from 18 to 21 years: In the late 20th century, much of North America changed its minimum legal drinking ages (MLDAs) as follows: In the 1970s, provincial and state policy makers in Canada and the United States moved to lower MLDAs (which were set at 21 years in most provinces/territories and states) to coincide with the jurisdictional age of majority — typically 18 years of age....
As a result, MLDAs were reduced in all Canadian provinces [and] inmore than half of US states. In Canada, however, two provinces, Ontario [in 1979] and Saskatchewan [in 1976], quickly raised their subsequent MLDAs from 18 to 19 years in response to a few studies demonstrating an association between the lowered drinking age and increases in alcohol-related harms to youth and young adults, including increases inmotor vehicle accidents(MVAs) andalcohol intoxicationamong high school students. Following MLDA reductions in the US, research in several states provided persuasive evidence of sharp increases in rates of fatal and nonfatal MVAs appearing immediately after the implementation of lower drinking ages. These scientific findings galvanized public pressure on lawmakers to raise MLDAs and, in response, the federal government introduced theNational Minimum Drinking Age Actof 1984, which imposed a reduction of highway funds for states if they did not increase their MLDA to 21 years. All states complied and implemented an MLDA of 21 years by 1988.[47] Law 259 Against the Sale and Consumption of Alcoholic Beverages (2012) Article 20 Law prohibiting minors entry to entertainment venuesArticle 1 prohibits those under the age of 16 from entering cinemas and theaters (except during children's programming), clubs, cafes, or venues licensed to sell alcohol beverages. Code of Children and Adolescents Decree 73 (1996) Somestatesdo not allow those under the legal drinking age to be present in liquor stores or inbars(usually, the difference between a bar and arestaurantis that food is served only in the latter). Only a few states prohibitminorsandyoung adultsfrom consuming alcohol in private settings. The National Institute on Alcohol Abuse and Alcoholism maintains a database that details state-level regulations on consumption and sale age limits.[84] In the United States, the minimum legal age to purchase alcoholic beverages has mainly been 21 years of age since shortly after the passage of theNational Minimum Drinking Age Actin 1984. The two exceptions are Puerto Rico and the Virgin Islands where the age is 18. The legal drinking age varies by state, and many states have no age requirements for supervised drinking with one's parents or legal guardians. Despite a rekindled national debate in 2008 on the established drinking age (initiated by several university presidents), aFairleigh Dickinson UniversityPublicMindpollfound in September 2008 that 76% ofNew Jerseyanssupported leaving the legal drinking age at 21 years.[88]No significant differences emerged when considering gender, political affiliation, or region. However, parents of younger children were more likely to support keeping the age at 21 (83%) than parents of college-age students (67%).[88] Seventeen states (Arkansas,California,Connecticut,Florida,Kentucky,Maryland,Massachusetts,Mississippi,Missouri,Nevada,New Hampshire,New Mexico,New York,Oklahoma,Rhode Island,South Carolina, andWyoming) and theDistrict of Columbiahave laws against possession of alcohol by minors, but they do not prohibit its consumption by minors. Fourteen states (Alaska,Colorado,Delaware,Illinois,Louisiana,Maine,Minnesota,Missouri,Montana,Ohio,Oregon,Texas,Wisconsin, andVirginia) specifically permit minors to drink alcohol given to them by their parents or by someone entrusted by their parents.[citation needed] Many states also permit the drinking of alcohol under the age of 21 for religious or health reasons. 
Puerto Rico, a territory of the United States, has maintained a drinking age of 18. United States customs laws stipulate that no person under the age of 21 may bring any type or quantity of alcohol into the country.[89] The draft law has not yet been enacted.[96] Ministry of Commerce Decree No. 25 (2006) 21 in Arunachal Pradesh, Assam, Chandigarh, Chhattisgarh, Dadra and Nagar Haveli and Daman and Diu, Delhi, Haryana, Jammu and Kashmir, Jharkhand, Kerala, Ladakh, Madhya Pradesh, Mizoram, Odisha, Tamil Nadu, Telangana, Tripura, Uttarakhand, Uttar Pradesh, and West Bengal.[104] 25 in Maharashtra,[105] Meghalaya and Punjab.[106] Consumption of alcohol is prohibited in the states of Bihar, Gujarat, Lakshadweep, Manipur, and Nagaland.[107] 18 in public spaces[115] According to a global school health study, 40% of minors over 13 drink alcohol and up to 25% buy it from stores.[117] Increased from 18 to 21 in December 2017, effective 16 October 2018. Anyone caught selling to persons under 21 can be fined up to RM10,000 and jailed up to 2 years.[118] Malaysian identity cards display the word "ISLAM" in the bottom right corner if the holder is Muslim (the space is left blank if the holder is non-Muslim), which allows enforcement of the religion-based sales restriction. Excise Act (1958) Art 40(7) It is also prohibited for minors to purchase, or attempt to purchase, alcohol from any licensed premises, for which minors can also be fined up to $10,000. However, the authorities rarely enforce this against minors. It is technically legal for minors to possess and consume alcohol at home and in public (not in any licensed premises), as there is no law prohibiting it. It is also technically legal for someone to purchase alcohol and pass it to minors outside the store or licensed premises.[127] The method of calculating the legal age for alcohol differs slightly from Korean age reckoning, in which one additional year is added to the person's age; this method disregards the month and day of birth and takes into account only the year.[128] On 28 June 2023, the law that requires measuring age in the Western way came into force. However, the previous system for determining the age to drink alcohol is maintained.[129] No person shall supply alcohol to anyone under the age of 18 (Art 91). Parents are required to forbid their children who have not reached age 20 from consuming alcoholic beverages.[133] Previously, expatriate non-Muslim residents had to request a liquor permit to purchase alcoholic beverages, but it was prohibited for such permit holders to provide drinks to others.[139] The legal age for drinking alcohol is 18 in Abu Dhabi (although a Ministry of Tourism by-law allows hotels to serve alcohol only to those over 21), and 21 in Dubai and the Northern Emirates (except Sharjah, where drinking alcohol is prohibited).[137] It is a punishable offence to drink, or to be under the influence of alcohol, in public.[137] Most countries in Europe have set 18 as the minimum age to purchase alcohol. Austria, Belgium, Denmark, Germany, Gibraltar, Liechtenstein, Luxembourg, Malta, Portugal and Switzerland (except Ticino) maintain a minimum purchase age below 18 years, permitting minors either full or limited access to alcohol. In 2005,[143] 2007[144] and 2015[145] harmonization at the European Union level toward a minimum purchase age of 18 was discussed, but not agreed.
Timeline of changes to drinking/purchase age or laws restricting the access of minors to alcohol: [country-by-country table omitted]. If a shop or bar fails to ask for an ID card and is identified as having sold alcohol to an underage person, it is subject to a fine. A national ID card, obtained in the local town hall, can serve as age verification.[182] This card is rarely used, though, since a passport or driver's license is more commonly used.[183] Both the legal drinking and purchasing age in the Faroe Islands is 18.[184] Police may search minors in public places and confiscate or destroy any alcoholic beverages in their possession. Incidents are reported to the legal guardian and child protective services, who may intervene with child welfare procedures. In addition, those aged 15 or above are subject to a fine.[188] In private, offering alcohol to a minor is considered a criminal offence if it results in drunkenness and the act can be deemed reprehensible as a whole, considering the minor's age, degree of maturity and other circumstances.[186] Drinking in public places, with the exception of designated drinking zones, is prohibited regardless of age.
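The South Korean passage above distinguishes three ways of counting age: the international count (full years completed since birth), traditional Korean age reckoning (one year higher than the year-based count), and the year-only count used for the alcohol rule, which ignores the month and day of birth. The sketch below only illustrates that arithmetic as described in the passage; the function names and the example dates are invented for the illustration, and the legal threshold itself is not restated here because the passage does not give it.

```python
from datetime import date

def international_age(birth: date, today: date) -> int:
    """Full years completed since birth (month and day taken into account)."""
    years = today.year - birth.year
    if (today.month, today.day) < (birth.month, birth.day):
        years -= 1
    return years

def year_only_age(birth: date, today: date) -> int:
    """Counting method described above for the alcohol rule:
    only the calendar year matters, not the month or day."""
    return today.year - birth.year

def korean_reckoned_age(birth: date, today: date) -> int:
    """Traditional Korean reckoning as described above:
    one year higher than the year-only count."""
    return year_only_age(birth, today) + 1

# Illustrative only: someone born late in the year.
b, t = date(2005, 11, 30), date(2024, 3, 1)
print(international_age(b, t), year_only_age(b, t), korean_reckoned_age(b, t))
# -> 18 19 20
```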
https://en.wikipedia.org/wiki/Legal_drinking_age
The smoking age is the minimum legal age required to purchase or use tobacco or cannabis products. Most countries have laws that forbid the sale of tobacco products to persons younger than certain ages, usually the age of majority. This article does not discuss laws that regulate electronic cigarettes. It is illegal to sell or give, directly or indirectly, any tobacco product to any under-aged person, and anyone caught doing so will be subject to harsh penalties. Anyone caught selling tobacco products to an under-aged person will be charged in court and can be fined up to 5,000 Singapore dollars for the first offence, and up to $10,000 for the second and subsequent offences. In addition, the store involved will have its tobacco license suspended for 6 months for the first offence, and permanently revoked for the second offence. However, if the store involved is caught selling to minors in school uniform, or to minors below the age of 12, the tobacco license will be permanently revoked even for its first offence. Anyone caught buying tobacco products for an under-aged person will be charged in court and can be fined up to 2,500 Singapore dollars for the first offence, and up to 5,000 for the second and subsequent offences. Anyone caught giving tobacco products to an under-aged person will be charged in court and can be fined up to 500 Singapore dollars for the first offence, and up to 1,000 Singapore dollars for the second and subsequent offences. It is illegal for minors to purchase, use or possess any tobacco product in public. Minors caught doing so are usually given a warning or a 30-dollar composition fine, with their school and parents informed and follow-up actions taken by the school. Minors caught more than once will have to attend not less than two smoking cessation counseling sessions to have their offences compounded. Minors who fail to comply with the above requirements, or who are caught four or more times, can be charged in court and be liable to a fine not exceeding 300 Singapore dollars upon conviction.[112] The minimum age was 16 years prior to 27 March 2002.[150] 16 (public possession de facto) (purchase de jure) No minimum age prior to 1908. Minimum age was 16 prior to 30 September 2007. Since 2012, various jurisdictions throughout the world have legalized cannabis for recreational use. In Mexico, Uruguay and cannabis-legal jurisdictions in the United States, the legal age to possess or purchase cannabis is identical to the tobacco purchase age (18 in Mexico and Uruguay and 21 in the United States). In Canada, the legal age to possess or purchase cannabis is 19 in all provinces and territories except Alberta (18) and Quebec (21). There are therefore three Canadian provinces (Manitoba, Quebec and Saskatchewan) and two territories (the Northwest Territories and Yukon) where the age to purchase tobacco is lower than the age to possess and purchase cannabis, and one province (Prince Edward Island) where the tobacco purchase age is higher. Prior to December 2019, when the United States raised its tobacco purchase age to 21 in all states and territories, several U.S. states had tobacco purchase ages lower than their cannabis possession and purchase ages.[224]
https://en.wikipedia.org/wiki/Legal_smoking_age
Thelegal working ageis the minimum age required by law in each country orjurisdictionfor a young person who has not yet reached theage of majorityto be allowed to work. Work that isdangerous, harmful to the healthor that may affect the morals or well-being ofminorsis generally subject to additional restrictions. Otherwise, most of the jurisdictions listed set the limit at 16, and a few at 18.
In the United States, the detailed rules are set state by state: states typically combine a general minimum working age with separate minimum ages for work involving the sale or service of alcohol and with work-hour restrictions for minors, and federal law creates requirements that are more stringent in many cases. Minors under the age of 18 are prohibited from working in/as: Occupations in or about Plants or Establishments Manufacturing or Storing Explosives or Articles Containing Explosive Components; Motor-vehicle Driver and outside helper on a motor vehicle; Coal Mine Occupations; Logging or Sawmill Operations; Operation of Power-Driven Woodworking machines; Exposure to Radioactive Substances; Power-driven hoisting apparatus, including forklifts; Operation of Power-Driven Metal Forming, punching, and shearing machines; Mining, other than coal mining; Operation of Power-driven bakery machines including vertical dough or batter mixers; Manufacturing bricks, tile, and kindred products; Wrecking, demolition, and shipbreaking operations; Roofing operations and all work on or about a roof; In, about or in connection with any establishment where alcoholic liquors are distilled, rectified, compounded, brewed, manufactured, bottled, sold for consumption or dispensed unless permitted by the rules and regulations of the Alcoholic Beverage Control Board (except they may be employed in places where the sale of alcoholic beverages by the package is merely incidental to the main business actually conducted); Pool or Billiard Room.
In Australia, the minimum working age is 15 in Western Australia for most jobs, with variations and restrictions applying for family businesses, entertainers/models, and newspaper delivery,[84]while New South Wales, South Australia, Tasmania and the Australian Capital Territory have no minimum working age.[85]
https://en.wikipedia.org/wiki/Legal_working_age
Marriageable ageis the minimumlegal ageofmarriage. Age and other prerequisites to marriage vary between jurisdictions, but in the vast majority of jurisdictions, the marriageable age as a right is set at theage of majority. Nevertheless, most jurisdictions allowmarriage at a younger agewith parental or judicial approval, especially ifthe female is pregnant. Among most indigenous cultures, people marry at fifteen, the age of sexual maturity for both the male and the female. In industrialized cultures, the age of marriage is most commonly 18 years old, but there are variations, and the marriageable age should not be confused with the age of majority or theage of consent, though they may be the same. The 55 parties to the 1962Convention on Consent to Marriage, Minimum Age for Marriage, and Registration of Marriageshave agreed to specify a minimum marriageable age by statute law, to override customary, religious, and tribal laws and traditions. When the marriageable age under alaw of a religious communityis lower than that under thelaw of the land, the state law prevails. However, some religious communities do not accept the supremacy of state law in this respect, which may lead to child marriage or forced marriage. The 123 parties to the 1956Supplementary Convention on the Abolition of Slaveryhave agreed to adopt a prescribed "suitable" minimum age for marriage. In many developing countries, the official age prescriptions stand as mere guidelines.UNICEF, the United Nations children's organization, regards a marriage of a minor (legalchild), a person below the adult age, aschild marriageand a violation of rights.[1] Until recently, the minimum marriageable age for females was lower in many jurisdictions than for males, on the premise that females mature at an earlier age than males. Such laws have been viewed by some as discriminatory, so that in many countries the marriageable age of females has been raised to equal that of males.[2] In ancient Greece, females married as young as 14 or 16.[3]In Spartan marriages, females were around 18 and males were around 25.[4] In theRoman Empire, the EmperorAugustusintroduced marriage legislation, theLex Papia Poppaea, which rewarded marriage and childbearing. The legislation also imposed penalties for both men and women who remained unmarried, or who married but for whatever reason failed to have children. 
For men it was between the ages of 25 and 60 while for women it was between the ages of 20 and 50.[5]Women who wereVestal Virginswere selected between the ages of 10 and 13 to serve as priestesses in the temple of goddess Vesta in the Roman Forum for 30 years, after which they could marry.[6] In Roman law the age of marriage was 12 years for females and 14 years for males, and age of betrothal was 7 years for both males and females.[7]The father had the right and duty to seek a good and useful match for his children.[8]To further the interests of their birth families, daughters of the elite would marry into respectable families.[9]If a daughter could prove the proposed husband to be of bad character, she could legitimately refuse the match.[9]Individuals remained under the authority of thepater familiasuntil his death, and the latter had the power to approve or rejectmarriagesfor his sons and daughters, but by the late antique period, Roman law permitted women over 25 to marry without parental consent.[10]: 29–37 Noblewomenwere known to marry as young as 12 years of age,[7]whereas women in the lowersocial classeswere more likely to marry slightly further into their teenage years.[11][12]43% of Pagan females married at 12–15 years and 42% ofChristianfemales married at 15–18 years.[13] Inlate antiquity, most Roman women married in their late teens to early twenties, butnoble womenmarried younger than those of the lower classes, as an aristocratic girl was expected to be a virgin until her first marriage.[11]In late antiquity, under Roman law, daughters inherited equally from their parents if no will was produced.[10]: 63In addition, Roman law recognized wives' property as legally separate from husbands' property,[10]: 133–154as did some legal systems in parts of Europe and colonial Latin America. In 380 C.E., theEmperor Theodosiusissued theEdict of Thessalonica, which madeNicene Christianitythe official religion of theRoman Empire. TheHoly SeeadaptedRoman lawintoCanon law.[14] After thefall of the Western Roman Empireand the rise of theHoly Roman Empire, manorialism also helped weaken the ties of kinship and thus the power ofclans. As early as the 9th century innorthwestern France, families that worked onmanorswere small, consisting of parents and children and occasionally a grandparent. The Roman Catholic Church and State had become allies in erasing the solidarity and thus the political power of the clans; the Roman Catholic Church sought to replacetraditional religion, whose vehicle was the kin group, and substitute the authority of theeldersof the kin group with that of a religious elder. At the same time, the king's rule was undermined by revolts by the most powerful kin groups, clans or sections, whose conspiracies and murders threatened the power of the state and also the demands by manorial lords for obedient, compliant workers.[15] As thepeasantsandserfslived and worked on farms that they rented from the lord of the manor, they also needed the permission of the lord to marry. Couples therefore had to comply with the lord of the manor and wait until a small farm became available before they could marry and thus produce children. Those who could and did delay marriage were presumably rewarded by the landlord, and those who did not delay were presumably denied that reward. 
For example, marriageable ages inMedieval Englandvaried depending on economic circumstances, with couples delaying marriage until their early twenties when times were bad, but might marry in their late teens after theBlack Death, when there was a severe labour shortage;[16]: 96by appearances, marriage of adolescents was not the norm in England.[16]: 98–100 In medievalWestern Europe, the rise ofCatholicismandmanorialismhad both created incentives to keep familiesnuclear, and thus the age of marriage increased; theWestern Churchinstituted marriage laws and practices that undermined large kinship groups. The Roman Catholic Church prohibitedconsanguineousmarriages, a marriage pattern that had been a means to maintainclans(and thus their power) throughout history.[17]TheRoman Catholic Churchcurtailed arranged marriages in which the bride did not clearly agree to the union.[18] In the 12th century, theRoman Catholic Churchdrastically changed legal standards for marital consent by allowing daughters over 12 years old and sons over 14 years old to marry without their parents' approval, which was previously required, even if their marriage was made clandestinely.[19]Parish studies have confirmed that in thelate medieval period, females did sometimes marry without their parents' approval in England.[20] In the 12th century,Canon lawjuristGratian, stated that consent for marriage could not take place before the age of 12 years old for females and 14 years old for males; also, consent for betrothal could not take place before the age of 7 years old for females and males, as that is the age of reason. TheChurch of England, after breaking away from theRoman Catholic Church, carried with it the same minimum age requirements. Age of consent for marriage of 12 years old for girls and of 14 years old for boys were written into English civil law.[14] The first recorded age-of-consent law, in England, dates back 800 years. The age of consent law in question has to do with the law of rape and not the law of marriage as sometimes misunderstood. In 1275, in England, as part of the rape law, theStatute of Westminster 1275, made it a misdemeanor to have sex with a "maiden within age", whether with or without her consent. The phrase "within age" was interpreted by jurist Sir Edward Coke as meaning the age of marriage, which at the time was 12 years old.[21]A 1576 law was created with more severe punishments for having sex with a girl for which the age of consent was set at 10 years old.[22]UnderEnglish common lawthe age of consent, as part of the law of rape, was 10 or 12 years old and rape was defined as forceful sexual intercourse with a woman against her will. To convict a man of rape, both force and lack of consent had to be proved, except in the case of a girl who is under the age of consent. Since the age of consent applied in all circumstances, not just in physical assaults, the law also made it impossible for an underage girl (under 12 years old) to consent to sexual activity. 
There was one exception: a man's acts with his wife (females over 12 years old), to which rape law did not apply.[23]JuristSir Matthew Halestated that both rape laws were valid at the same time.[24]In 1875, theOffence Against the Persons Actraised the age to 13 years in England; an act of sexual intercourse with a girl younger than 13 was a felony.[25] There were some fathers who arranged marriages for a son or a daughter before he or she reached the age of maturity, which issimilarto what some fathers in ancient Rome did.Consummationwould not take place until the age of maturity. Roman CatholicCanon lawdefines a marriage as consummated when the "spouses have performed between themselves in a human fashion a conjugal act which is suitable in itself for the procreation of offspring, to which marriage is ordered by its nature and by which the spouses become one flesh."[26]There are recorded marriages of two- and three-year-olds: in 1564, a three-year-old named John was married to a two-year-old named Jane in the Bishop's Court in Chester, England. The policy of theRoman Catholic Church, and later various protestant churches, of considering clandestine marriages and marriages made without parental consent to be valid was controversial, and in the 16th century both the French monarchy and the Lutheran Church sought to end these practices, with limited success.[27] In most ofNorthwestern Europe, marriages at very early ages were rare. One thousand marriage certificates from 1619 to 1660 in the Archdiocese ofCanterburyshow that only one bride was 13 years old, four were 15, twelve were 16, and seventeen were 17 years old; while the other 966 brides were at least 19 years old.[28] In England and Wales, theMarriage Act 1753required a marriage to be covered by a licence (requiring parental consent for those under 21) or the publication of bans (which parents of those under 21 could forbid). Additionally, theChurch of Englanddictated that both the bride and groom must be at least 21 years of age to marry without the consent of their families. In the certificates, the most common age for the brides is 22 years. For the grooms 24 years was the most common age, with average ages of 24 years for the brides and 27 for the grooms.[28]While European noblewomen often married early, they were a small minority of the population,[29]and the marriage certificates from Canterbury show that even among nobility it was very rare to marry women off at very early ages.[28] The minimum age requirements of 12 and 14 were eventually written into English civil law. By default, these provisions became the minimum marriageable ages in colonial America.[14]On the average, marriages occurred several years earlier in colonial America than in Europe, and much higher proportions of the population eventually got married. Community-based studies suggest an average age at marriage of about 20 years old for women in the early colonial period and about 26 years old for men.[30]In the late 19th century and throughout the 20th century, U.S. states began to slowly raise the minimum legal age at which individuals were allowed to marry. Age restrictions, as in most developed countries, have been revised upward so that they are now between 15 and 21 years of age.[14] Before 1929, the Scottish law adopted the Roman law in allowing a girl to marry at twelve years of age and a boy at fourteen, without any requirement for parental consent. 
However, in practice, marriages in Scotland at such young ages were almost unknown.[31] The highest average age at first marriage was in the Netherlands: on average 27 years for women and 30 years for men in both the rural and urban population from the late 1400s until the end of the Second World War, rising at times to 30 years for women and 32 years for men. On average, 25–30% of people in the Netherlands remained unmarried throughout their life between 1500 and 1950.[32]InAmsterdamthe mean age at first marriage for women fluctuated between 23.5 and 25 years old from the late 15th century until the 1660s, when it started to rise even further.[33] From early on the Roman Catholic Church promoted sexual abstinence over marriage, but marriage over sexual promiscuity. This meant that remaining unmarried became socially acceptable in Western Europe. In the Middle Ages marriage was often not recorded and therefore could depend on the word of the couple, who could either confirm or deny that it had taken place. A majority of unmarried women would be in the service of the church as nuns or as lay women. A vast number of women also provided for themselves in specialised professions until the financial freedoms of women were curtailed by the guilds in the late Middle Ages. This meant that until the late Middle Ages many women could also run businesses to sustain themselves outside of marriage.[32] After the 1400s the age at first marriage became better recorded and seems to have been influenced largely by the economic situation. In times of economic uncertainty both women and men tended to marry younger (between 20 and 25 years old for women), but the age gap was somewhat larger. A major factor was that by marrying their daughter off young the parents had one mouth fewer to feed, and the dowry was often lower for younger girls, who had learned fewer skills and built up less in savings. This also explains the larger age gap between husband and wife in economically harsher times: an older husband would already have established an income to sustain a wife and thus children. Though for political reasons the nobility often became engaged and married far younger than the general population, in many cases the actual consummation of the marriage was postponed until both marriage partners had reached a more mature age.[32] Another contributing factor to later marriage age is that in the Middle Ages a culture of nuclear family structures developed from the multiple generational extended family structures that were common in pre-Christian tribal societies in Western Europe. Both men and women would typically spend several years working as a maid, farmhand, labourer or apprentice in order to gain work experience, develop skills and save up money to sustain their own nuclear family, rather than continuing to live in a multigenerational household. This development raised the socially accepted first marriage age of women from puberty onset (12–14 years old) in the early Middle Ages up to their late teens and older by the late medieval period, and during the Renaissance up to their middle twenties on average. This development also brought the first marriage age of women and men far closer together. The great general wealth in the Netherlands from the spice trade also meant that women married later in life. The highest ages at marriage for both men and women, past 30 years old, are found in times of national financial prosperity.[32] Another contributing reason was that late marriage age was a recognised method of birth control. 
The later a woman married, the fewer children she would bear and the fewer children a couple had to raise. It was also generally recognised that giving birth at a very young age was detrimental to the woman's health and was therefore socially disapproved of. Social disapproval of a young marriage age for the woman and a large age gap between the marriage partners can still be recognised in sayings originating in those centuries. A well-known example from neighbouring Britain is the cautionary tale of the playRomeo and JulietbyWilliam Shakespeare, whose protagonists' young ages were considered scandalous at the time.[34][32] In France, until theFrench Revolution, the marriageable age was 12 years for females and 14 for males. Revolutionary legislation in 1792 increased the age to 13 years for females and 15 for males. Under theNapoleonic Codein 1804, the marriageable age was set at 15 years old for females and 18 years old for males.[35]In 2006, the marriageable age for females was increased to 18, the same as for males. In jurisdictions where the ages are not the same, the marriageable age for females is more commonly two or three years lower than that of males. In 17th century Poland, in the Warsaw parish of St John, the average age of women entering marriage was 20.1, and that of men was 23.7. In the second half of the eighteenth century, women in the parish of Holy Cross married at 21.8, while men married at 29.[36] In medievalEastern Europe, theSlavictraditions ofpatrilocalityand early and universal marriage (usually of a bride aged 13–15 years, withmenarcheoccurring on average at age 14) lingered;[37]the manorial system had yet to penetrate into Eastern Europe and generally had less effect on clan systems there. The bans oncross-cousinmarriages had also not been firmly enforced.[38] In Russia, before 1830 the age of consent for marriage was 15 years old for males and 13 years old for females[32](though 15 years old was preferred for females, so much so that it was written into the Law Code of 1649).[39]Teenage marriagewas practised forchastity. Both the female and the male teenager needed the consent of their parents to marry because they were under 20 years old, the age of majority. In 1830, the age of consent for marriage was raised to 18 years old for males and 16 years old for females.[32]Though 18 years old was preferred for females, the average age of marriage for females was around 19 years old.[40][41] Aztec family law generally followed customary law. Men got married between the ages of 20 and 22, and women generally got married at 15 to 18 years of age.[42] Maya family law appears to have been based on customary law. Maya men and women usually got married at around the age of 20, though women sometimes got married at the age of 16 or 17.[43] In the majority of countries, a right to marry at age 18 is enshrined along with all other rights and responsibilities ofadulthood. However, most of these countries allow those younger than that age to marry, usually with parental consent or judicial authorization. These exceptions vary considerably by country. TheUnited Nations Population Fundstated:[44] In 2010, 158 countries reported that 18 years was the minimum legal age for marriage for women without parental consent or approval by a pertinent authority. However, in 146 [of those] countries, state or customary law allows girls younger than 18 to marry with the consent of parents or other authorities; in 52 countries, girls under age 15 can marry with parental consent. 
In contrast, 18 is the legal age for marriage without consent among males in 180 countries. Additionally, in 105 countries, boys can marry with the consent of a parent or a pertinent authority, and in 23 countries, boys under age 15 can marry with parental consent. In recent years, many countries in the EU have tightened their marriage laws, either banning marriage under 18 completely, or requiring judicial approval for such marriages. Countries which have reformed their marriage laws in recent years include Sweden (2014), Denmark (2017), Germany (2017), Luxembourg (2014), Spain (2015), Netherlands (2015), Finland (2019) and Ireland (2019). Many developing countries have also enacted similar laws in recent years: Honduras (2017), Ecuador (2015), Costa Rica (2017), Panama (2015), Trinidad & Tobago (2017), Malawi (2017). The minimum age requirements of 12 years old for females and 14 years old for males were written into English civil law. By default, these provisions became the minimum marriageable ages in colonial America. ThisEnglish common lawinherited from the British remained in force in America unless a specific state law was enacted to replace them. In the United States, as in most developed countries, age restrictions have been revised upward so that they are now between 15 and 21 years of age.[14] In Western countries, marriages of teenagers have become rare in recent years, with their frequency declining during the past few decades. For instance, in Finland, where in the early 21st century underage youth could obtain a special judicial authorization to marry, there were only 30–40 such marriages per year during that period (with most of the spouses being aged 17), while in the early 1990s, more than 100 such marriages were registered each year. Since 1 June 2019, Finland has banned marriages of anyone under 18 with no exemptions.[45][46] Marriageable ageas a rightis usually the same as the age of majority, which is 18 years old in most countries. However, in some countries, the age of majority is under 18, while in others it is 19, 20 or 21 years. In Canada, for example, the age of majority is 19 inNova Scotia,New Brunswick,British Columbia,Newfoundland and Labrador,Northwest Territories,YukonandNunavut. Marriage under 19 years in these provinces requires parental or court consent (seeMarriage in Canada). In the United States, for example, the age of majority is 21 inMississippiand 19 inNebraska, and marriage below the age of majority in these states requires parental consent. In many jurisdictions of North America, married minors becomelegally emancipated.[47] Minors under 18 cannot marry in the states of New York, Pennsylvania, New Jersey, Delaware, Minnesota, Rhode Island, Connecticut, Massachusetts, Virginia, New Hampshire, Washington, Michigan and Vermont under any circumstance. This also holds true for the territories of the U.S. Virgin Islands and American Samoa. On 30 November 2022, the High Court of Jharkhand ruled that a Muslim woman can marry a person of her choice after attaining the age of 15.[169] For marriages under the age of 19, the revision stipulated that a court can grant permission only if there are urgent reasons as well as supporting evidence to back them. 
The law revision also stresses that the court must consider the spirit of preventing child marriage, as well as moral, religious, cultural, psychological, and health considerations before granting permission.[170] However, the next article allows persons between the ages of 16 and 18 to be married if they have been “commissioned the right of full legal capacity” in accordance with the Civil Code. Notwithstanding anything contained in clause (b) of sub-section (1), nothing shall bar the conclusion, or causing the conclusion of, a marriage within the relationship that is allowed to marry in accordance with the practices prevailing in their ethnic community or clan.[189] The marriageable ageas a rightis 18 years in all European countries, with the exception ofAndorraandScotlandwhere it is 16 (regardless of gender). Existingexceptionsto this general rule (usually requiring special judicial or parental consent) are discussed below. TheIstanbul convention, the first legally binding instrument in Europe in the field of violence against women and domestic violence,[217]only requires countries which ratify it to prohibitforced marriage(Article 37) and to ensure that forced marriages can be easily voided without further victimization (Article 32), but does not make any reference to a minimum age of marriage. In the United Kingdom, the marriageable age is 18 in England and Wales,[262]16 in Scotland,[263]and 16 with parental consent in Northern Ireland (with the court able to give consent in some cases).[264] Inancient Israel, men twenty years old and older would become warriors,[281]and when they married they would receive one year's leave of absence to be with their wives.[282] Rabbis estimated the age of maturity from about the beginning of the thirteenth year for women and about the beginning of the fourteenth year for men.[283] On the practice ofLevirate marriage, the Talmud advised against a large age gap between a man and his brother's widow.[284]A younger woman marrying a significantly older man, however, is especially problematic: marrying one's young daughter to an old man was declared by theSanhedrinas reprehensible as forcing her into prostitution.[285] InRabbinic Judaism, males cannot consent to marriage until they reach the age of 13 years and a day and have undergonepuberty, and females cannot consent to marriage until they reach the age of 12 years and a day and have undergonepuberty. Males and females are consideredminorsuntil the age of twenty. After twenty, males are not considered adults if they show signs of impotence. If males show no signs of puberty or do show impotence, they automatically become adults by age 35 and can marry.[286][287]Marriage involved a double ceremony, which included the formal betrothal and wedding rites.[288] The minimum age for marriage was 13 years old for males and 12 years old for females, but formal betrothal could take place before that and often did. The Talmud advises males to marry at 18 years old, or between 16 and 24 years old.[289] Aketannah(literally meaning "little [one]") was any girl between the age of 3 years and that of 12 years plus one day;[290]she was subject to her father's authority, and he could arrange a marriage for her without her agreement, and that marriage remains binding even after she reaches the age of maturity.[290]If a girl was orphaned of her father, or had been married by his authority and subsequently divorced, a marriage could be contracted for her by herself, her mother, or her brother in a quasi-binding fashion. 
Until the age of maturity, she could annul the marriage retroactively. After reaching the age of maturity, intercourse with her husband renders her officially married.[291][292] CatholicCanon lawadoptedRoman law, which set the minimum age of marriage at 12 years old for females and 14 years old for males. TheRoman Catholic Churchraised the minimum age of marriage to 14 years old for females and to 16 years old for males in 1917 and lowered the age of majority to 18 years old in 1983. TheCode of Canons of the Eastern Churchesstates the same requirements in canon 800. Büchler and Schlater state that "marriageable age according to classical Islamic law coincides with the occurrence of puberty. The notion of puberty refers to signs of physical maturity such as the emission of semen or the onset of menstruation".[299]Hanafi school of classical Islamic jurisprudence interpret the "age of marriage", in theQuran(24:59;65:4), as the beginning ofpuberty. Shafiʽi, Hanbali, Maliki, and Ja'fari schools of classical Islamic jurisprudence interpret the "age of marriage", in theQuran(24:59), as completion ofpuberty. For Shafiʽi, Hanbali, and Maliki schools of Islamic jurisprudence, in Sunni Islam, the condition for marriage is physical (bulugh) maturity and mental (rushd) maturity. In his Shafiʽi jurisprudential compilation,The Stocks of the Sojourner, Ahmad Ibn Naqib Al-Misri (died 1368 A.D.) writes: Guardians are, moreover, two types, a binder and a non-binder. The binder is the father and the grandfather, mainly as to the marriage of a virgin, and so is the master as to the marriage of his slave girl. The meaning of "binder" is that he may marry her off without her consent. The non-binder maynotmarry her off without her consent and permission. When virgin, though, the father or the grandfather may marry her off without her permission, but it is commendable to ask her, and her silence should signify acquiescence. The sane-minded non-virgin, however, maynotbe married off by anyone after maturity unless with her express consent, be it by the father, the grandfather, or anyone else. Before maturity, the non-virgin may not be married off at all.[300] Marriages are traditionally contracted by the father or guardian of the bride and her intended husband.[288] The 1917 codification of Islamic family law in theOttoman empiredistinguished between the age of competence for marriage, which was set at 18 years for boys and 17 years for girls, and the minimum age for marriage, which followed the traditional Hanafi minimum ages of 12 for boys and 9 for girls. Marriage below the age of competence was permissible only if proof of sexual maturity was accepted in court, while marriage under the minimum age was forbidden. During the 20th century, most countries in the Middle East followed the Ottoman precedent in defining the age of competence, while raising the minimum age to 15 or 16 for boys and 15–16 for girls. Marriage below the age of competence is subject to approval by a judge and the legal guardian of the child. 
Egypt diverged from this pattern by setting the age limits of 18 years for boys and 16 years for girls, without a distinction between competence for marriage and minimum age.[301] Many senior clerics in Saudi Arabia have opposed setting a minimum age for marriage, arguing that a girl reaches adulthood atpuberty.[302] However, in 2019 members of the Saudi Shoura Council approved fresh regulations onchild marriagethat seek to outlaw marrying off children under 15 and to require court approval for those under 18 years. The Chairman of the Human Rights Committee at the Shoura Council, Dr. Hadi Al-Yami, said that the introduced controls were based on in-depth studies presented to the body. He pointed out that the regulation, vetted by the Islamic Affairs Committee at the Shoura Council, has raised the age of marriage to 18 years and prohibited it for those under 15 years.[303] In the Baháʼí Faith's religious bookKitáb-i-Aqdas, the age of marriage is set at 15 years for both boys and girls. It is forbidden to become engaged before the age of 15 years.[304] TheDharmaśāstrasstate that females can marry only after they have reached puberty.[305]Furthermore, in 2022 an amendment to the legal age of marriage in India was proposed, which would increase the marriageable age for girls from 18 to 21 years.[167]
https://en.wikipedia.org/wiki/Marriageable_age
Secular coming-of-age ceremonies, sometimes calledcivil confirmations, are ceremonies arranged by organizations that aresecular, which is to say, not aligned to anyreligion. Their purpose is to prepareadolescentsfor their life asadults. Secularcoming of ageceremonies originated in the 19th century, when non-religious people wanted arite of passagecomparable to theChristianconfirmation. Nowadays, non-religious coming-of-age ceremonies are organized in several European countries; in almost every case these are connected withhumanist organisations. During the communist era, young people were given identity cards at the age of 15 in a collective ceremony. At the age of nineteen, boys were required to perform military service. Modern non-religious coming-of-age ceremonies originate inGermany, whereJugendweihe("youth consecration", today occasionally known asJugendfeier, 'youth ceremony') began in the 19th century. The activity was arranged by independentFreethinkerorganizations until 1954, when theCommunistparty ofEast Germanybanned it in its old form and changed it to promote Communist ideology. In theGDRJugendweihebecame, with the support of the state, the most popular form of coming-of-age ceremonies for the adolescents, replacing the Christian confirmation. After thereunification of Germany, theJugendweiheactivity regained its independence from Communism, but the non-religious rite of passage had become atradition, and thus approximately 60-70% of youngsters in the eastern states still participate in it. The age for participating in theJugendweiheis 13–14 years.[1] Before the ceremony the youngsters attend specially arranged events or a course, in which they work on topics likehistoryandmulticulturalism,cultureandcreativity,civil rightsand duties,natureandtechnology, professions and getting a job, as well aslifestylesand human relations.[2]Nowadays, there are many different groups organisingJugendweihen, but the most important ones areJugendweihe Deutschland e. V.,der Humanistische Verband Deutschland('the Humanist Association of Germany'),der Freidenkerverband('the Freethinker Association') anddie Arbeiterwohlfahrt('the Worker Welfare').[3] The first civil confirmation in theNordic countrieswas arranged inCopenhagen, Denmark, in 1915 byForeningen mod Kirkelig Konfirmation('Association Against Church Confirmation'). In 1924 the organisation changed its name toForeningen Borgerlig Konfirmation('The Association for Civil Confirmation').[4] Civil confirmation declined in the 1970s as central organized ceremonies, but instead they became increasingly more common as private celebrations and the civil confirmations are still common today in atheist families. They are also known as "nonfirmations", but are now rarely linked to any associations. InFinland, non-religious lower high school students planned a camp for a secular rite of passage as an alternative to the Christian confirmation. The firstPrometheus-leiri('Prometheus Camp') was held in 1989 by the Finnish Philosophy and Life Stance teachers' coalition. The following yearPrometheus-leirin tuki ry('Prometheus Camp Association') was founded for organising the week-longsummer camps. The ideology of the association is based on aHumanistworld view, but it is politically and religiously non-aligned. 
One of the main principles of the activity is tolerance towards other peoples'life stances.[5] The camp is primarily aimed at youngsters who do not belong to any religious denomination, but approximately 20% of yearly Prometheus Camp participants are members of some religious community, usually theEvangelical Lutheran Church of Finland, and also participate in a Christian confirmation. The usual age of participants in a Prometheus Camp is 14–15 years, but there are also "senior camps" for older youngsters. In recent years the yearly number of participants has been around 1000, which is approximately 1.5% of the age group. The themes in the Prometheus Camp are differences,prejudiceanddiscrimination;drugs,alcoholandaddiction;societyand making a difference in it; thefuture; world views, ideologies and religions;personal relationshipsandsexuality; and theenvironment. These topics are worked on in open discussions, debating, group work, small drama plays or playing games. Every camp is organised and led by a team of seven members: two adults and five youngsters. At the end of the camp, there is a Prometheus Ceremony, in which the participants perform a chronicle about their week for their friends and family. They also get a Prometheus diploma, a silver-coloured Prometheus medallion and a crown of leaves that is bound by the camp leaders. Weekend-long continuation camps are arranged in the autumn.[6]Annually, one Prometheus-camp has been arranged inEnglish, two inSwedishand approximately 65 inFinnish. InIcelandborgaraleg ferming('civil confirmations') are organised bySiðmennt, a Humanist association, as an alternative to the Christian confirmation for 13-year-olds. The program started in 1989. Before the civil confirmation, the youngsters take a preparation course aboutethics, personal relationships, human rights, equality,critical thinking, relations between the sexes, prevention ofsubstance abuse,skepticism, protecting the environment, getting along with parents, being a teenager in a consumer society, and what it means to be an adult and take responsibility for one's views and behavior. The course consists of 11 weekly group meetings, each lasting 80 minutes. Youngsters living outsideReykjavíkcan take the course in an intensive two-weekend version. The teachers of the course are usually philosophers. At the end of the course, there is a formal graduation ceremony in which the participants receive diplomas, and some of them performmusic,poetryandspeeches. There are also prominent members of Icelandic society giving speeches. An increasing number of youngsters have taken the course every year, with 577 taking the course for the confirmation in 2020, which accounts for 13% of the total age group.[7] Human-Etisk Forbund('The Norwegian Humanist Association') has arranged non-religious confirmation courses inNorwaysince 1951. During the last ten years, there has been rapid growth in the popularity of the course. In 2006, over 10,500 youngsters, approximately 17% of the age group, chose thehumanistisk konfirmasjonorborgerlig konfirmasjon('civil confirmation'). The course can be taken during the year of one's 15th birthday. Norwegians living abroad can take the course ascorrespondence courseby e-mail.[8][9] Humanistforbundet, not to be confused with HEF (Human-Etisk Forbund) has since 2006 arranged an alternative to HEF's confirmation. It is a non-religious civil confirmation based on academics. 
The program usually consists of several lectures by various prestigious, well-known and competent organisations like theRed Cross,UNICEFandDyrevernalliansen(a Norwegian animal welfare interest organisation). People likeThomas Hylland Eriksenhave also held lectures. The associationHumanisterna('The Humanists') started secular coming-of-age courses inSwedenin the 1990s in the form ofstudy circles, but they were soon replaced by a week-long camp where the subjects are dealt with through discussions, games, group work and other activities. In recent years there have been approximately 100 participants annually in theHumanistisk konfirmation('Humanist confirmation') camps. The camp's themes concern one's life stance, for examplehuman rights,equality,racism,gender roles,love, sexuality and lifestyles, but the topics under discussion depend on the participating youngsters' own choices. At the end of the camp, there is a festive ceremony in which the participants demonstrate to their families and relatives what they did during the week, e.g. through plays and songs. There are also speeches held by the organisers of the camp, the youngsters themselves, and invited speakers.[10] Edifices of theEthical movementin the United States perform secular coming-of-age ceremonies for 14-year-old members, in which, after spending a year performing community service activities and attending workshops regarding various topics concerning adulthood, the honoree and their parent(s) speak before the congregation about their growth over the year. Similar ceremonies are performed by congregations of theUnitarian Universalist AssociationandCanadian Unitarian Council.
https://en.wikipedia.org/wiki/Secular_coming-of-age_ceremony
A legalvoting ageis the minimum age that a person is allowed tovotein ademocratic process. Forgeneral electionsaround the world, theright to voteis restricted to adults, and most nations use 18 years of age as their voting age, while in other countries the voting age ranges between 16 and 21 (with the sole exception of theUnited Arab Emirates, where the voting age is 25). A nation's voting age may therefore coincide with the country'sage of majority, but in many cases the two are not tied. In 1890, theSouth African Republic, commonly known as the Transvaal Republic, set a voting age of 18 years.[1]The effort was, like later legislation expanding voting rightsfor womenandimpoverished whites, in part an attempt to skew the electorate further in favor ofAfrikanerinterests againstuitlanders. Prior to theSecond World Warof 1939–1945, the voting age in almost all countries was 21 years or higher. In1946 Czechoslovakiabecame the first state to reduce the voting age to 18 years,[2]and by 1968 a total of 17 countries had lowered their voting age, of which 8 were in Latin America and 8 were communist countries.[3] Australia, Japan, Sweden and Switzerland had lowered their voting age to 20 by the end of the 1960s.[4] Many major democratic countries, beginning in Western Europe and North America, reduced their voting ages to 18 years during the 1970s, starting with the United Kingdom (Representation of the People Act 1969),[4][5][6]Canada,West Germany(1970), the United States (26th Amendment, 1971), Australia (1974), France (1974), Sweden (1975) and others. It was argued that if young men could be drafted to go to war at 18, they should be able to vote at the age of 18.[7] In the late 20th and early 21st centuries voting ages were lowered to 18 in Japan,[8]India, Switzerland, Austria, the Maldives, and Morocco. By the end of the 20th century, 18 had become by far the most common voting age. However, a few countries maintain a voting age of 20 years or higher, and a few countries have a lower voting age of 16 or 17.[9] The vast majority of countries and territories have a minimum voting age of 18 as of October 2020.[10]According to data from the ACE Electoral Knowledge Network, 205 countries and territories have a minimum voting age of 18 for national elections out of 237 countries and territories the organization has data on as of October 2020.[10]As of the aforementioned date, 12 countries or territories have a minimum voting age of less than 18, with 3 countries or territories at 17 years old, and 9 countries or territories at 16 years old.[10]Sixteen is the lowest minimum age globally for national elections, while the highest is 25, which is only the case in theUnited Arab Emirates(UAE).[10]This age of 25 was also the case in Italy for Senate (upper house) elections until it was lowered to 18 in 2021.[11]Italy'slower houseof Parliament, the Chamber of Deputies, has had a minimum voting age of 18 since 1975, when it was lowered from 21.[12] Around 2000, a number of countries began to consider whether the voting age ought to be reduced further, with arguments most often being made in favor of a reduction to 16. In Brazil, the age was lowered to 16 in the 1988 Constitution, while the lower voting age took effect for the first time in the 1989 presidential election. 
The earliest moves in Europe came during the 1990s, when the voting age formunicipal electionsin someStates of Germanywas lowered to 16.Lower Saxonywas the first state to make such a reduction, in 1995, and four other states did likewise.[13] In 2007, Austria became the first country to allow 16- and 17-year-olds to vote in national elections, with the expanded franchise first being consummated in the2009 European Parliament election. A study of young voters' behavior on that occasion showed them to be as capable as older voters to articulate their beliefs and to make voting decisions appropriate for their preferences. Their knowledge of the political process was only insignificantly lower than in older cohorts, while trust in democracy and willingness to participate in the process were markedly higher.[14]Additionally, there was evidence found for the first time of a voting boost among young people age 16–25 in Austria.[15] During the 2000s several proposals for a reduced voting age were put forward inU.S. states, includingCalifornia,FloridaandAlaska,[16]but none were successful. In Oregon, Senate Joint Resolution 22 has been introduced to reduce the voting age from 18 to 16.[17]A national reduction was proposed in 2005 inCanada[18]and in theAustralianstate ofNew South Wales,[19]but these proposals were not adopted. In May 2009, Danish Member of ParliamentMogens Jensenpresented an initiative to theParliamentary Assembly of the Council of EuropeinStrasbourgto lower the voting age in Europe to 16.[20] Demands to reduce the voting age to 16 years were again brought forward by activists of theschool strike for climatemovement in several countries (including Germany and the UK).[21][22] After PremierDon Dunstanintroduced the Age of Majority (Reduction) Bill in October 1970, the voting age in South Australia was lowered from 21 to 18 in 1973. On 21 October 2019,GreensMPAdam Bandtintroduced a bill in the House of Representatives to lower the voting age to 16.[23] A report suggesting that consideration be given to reducing the voting age to 16 in theAustralian Capital TerritoryinCanberra, Australia was tabled in theterritorial legislatureon 26 September 2007 and defeated.[24] In 2015, federal Opposition LeaderBill Shortensaid that the voting age should be lowered to 16.[25] In 2007, Austria became the first member of theEuropean Unionto adopt a voting age of 16 for most purposes.[26][27]The voting age had been reduced in Austria from 19 to 18 at all levels in 1992. 
At that time a voting age of 16 was proposed by theGreen Party, but was not adopted.[28] The voting age for municipal elections in somestateswas lowered to 16 shortly after 2000.[13]Three states had made the reduction by 2003 (Burgenland,CarinthiaandStyria),[13]and in May 2003Viennabecame the fourth.[29]Salzburgfollowed suit,[30][31]and so by the start of 2005 the total had reached at least five states out of nine.[32]As a consequence of state law, reduction of the municipal voting age in the states of Burgenland, Salzburg and Vienna resulted in the reduction of the regional voting age in those states as well.[31] After the2006 election, the winningSPÖ-ÖVPcoalition announced on 12 January 2007 that one of its policies would be the reduction of the voting age to 16 for elections in all states and at all levels in Austria.[33]The policy was set in motion by a Government announcement on 14 March,[34]and a bill proposing an amendment to theConstitutionwas presented to thelegislatureon 2 May.[35][36]On 5 June theNational Councilapproved the proposal following a recommendation from its Constitution Committee.[26][28][37]During the passage of the bill through the chamber relatively little opposition was raised to the reduction, with four out of five parties explicitly supporting it; indeed, there was some dispute over which party had been the first to suggest the idea. Greater controversy surrounded the other provisions of the bill concerning theBriefwahl, orpostal vote, and the extension of the legislative period for the National Council from four to five years.[28]A further uncontroversial inclusion was a reduction in thecandidacy agefrom 19 to 18. TheFederal Councilapproved the Bill on 21 June, with no party voting against it.[38]The voting age was reduced when the Bill's provisions came into force on 1 July 2007.[39]Austria thus became the first member of the European Union, and the first of thedeveloped worlddemocracies, to adopt a voting age of 16 for all purposes.[26]Lowering the voting age encouraged political interest in young people in Austria. More sixteen- and seventeen-year-olds voted than eighteen-to-twenty-one-year-olds in Austria.[40] Brazillowered the voting age from 18 to 16 in the1988 constitution. The presidential election of 1989 was the first with the lower voting age. People between the ages 18 and 70 are required to vote. The person must be 16 full years old on the eve of the election (In years without election, the person must be 16 full years old on or before 31 December). If they turn 18 years old after the election, the vote is not compulsory. When they turn 18 years old before the election, the vote is compulsory. Canada lowered its federal voting age from 21 to 18 in 1970.[41][42]MostCanadian provincessoon followed suit, though several initially lowered their voting age to 19. 
It was not until 1992 that the last province,British Columbia, lowered its voting age to 18.[43]A further reduction to 16 was proposed federally in 2005, but was not adopted.[18][44]It was proposed again in 2011, but was not adopted.[45] In August 2018, inBritish Columbia, a group of 20 youth partnered withDogwood BCto launch a Vote16 campaign.[46]Currently, they have unanimous support from theUnion of BC Municipalities,[47]as well as endorsements from the province'sGreen Party of British ColumbiaandBritish Columbia New Democratic Partyrepresentatives.[48]The campaign is now waiting for it to be brought up in the legislative assembly by the NDP and for it to pass there.[46] In 2020, Canadian SenatorMarilou McPhedranintroduced a bill to lower the federal voting age from 18 to 16. She reintroduced it in November 2021 (Bill S-201), but it died on the floor when Parliament wasproroguedinJanuary 2025.[49][50][51]In December 2021, a group of young people filed a court challenge to lower the federal voting age from 18, arguing that the voting age is unconstitutional for violating multiple sections of theCanadian Charter of Rights and Freedoms.[42]Several weeks later,Taylor Bachrachof theNew Democratic Party(NDP) introduced a private member's bill to lower the voting age to 16.[44]The bill (C-210) was debated in May 2022.[52]The bill was defeated in its second reading with 245 members of parliament voting to oppose the bill and 77 voting to support it.[53] Internal elections run byCanadian political partieshave a lower voting age than that of general elections set by the government, typically allowing party members 14 and up to vote.[54][55][56][57][58][59] As stated in the Constitution of the Republic of Cuba, the voting age is 16 for men and women.[60] As part of their2021 coalition deal, theSPD,GreensandFDPagreed to lower the voting age for European elections to 16 within the course of the20th Bundestag. They successfully did so in time for the2024 European parliament elections. They also aimed to lower the voting age for elections to the German parliament. However, this would need a change of the constitution, which was blocked by the oppositionCDU.[61]Seven of the 16stateshave also lowered their voting age for state elections and 11 of the 16 have lowered it for local elections. The first proposal to lower the voting age to 16 years was submitted inparliamentin 2007. A bill to lower the voting age for municipal elections reached the final reading in 2018, but wasfilibusteredby opponents until the close of the parliamentary session.[62] On 28 October 2023, the municipalities ofVesturbyggðandTálknafjarðarhreppurheld a referendum on unification; the voting age in this referendum was lowered to 16.[63] Iran had been unique in awarding suffrage at 15, but raised the age to 18 in January 2007 despite the opposition of the Government.[64]In May 2007 the Iranian Cabinet proposed a bill to reverse the increase. Currently,Luxembourghascompulsory votingfrom the age of 18. Discussion about lowering the voting age to 16 was first introduced as part of a widerJune 2015 referendum. The broader principles of the referendum which concerned electoral reform were rejected by 81% of voters. Discussion specifically surrounding the lowering of the voting age to 16 received almost universal support in 2025.[65]Politically, only the ADR and CSV oppose the idea. 
On 20 November 2013, Malta lowered the voting age from 18 to 16 for local elections starting from 2015. The proposal had wide support from both the government and opposition, social scientists and youth organizations. On 5 March 2018, the Maltese Parliament unanimously voted in favor of amending the constitution, lowering the official voting age from 18 to 16 for general elections, European Parliament elections and referendums, making Malta the second state in theEUto lower its voting age to 16.[66] TheNew Zealand Green PartyMPSue Bradfordannounced on 21 June 2007 that she intended to introduce her Civics Education and Voting Age Bill on the next occasion upon which a place became available for the consideration of Members' Bills.[67]When this happened on 25 July, Bradford abandoned the idea, citing an adverse public reaction.[68]The Bill would have sought to reduce the voting age to 16 in New Zealand and makecivics educationpart of the compulsory curriculum in schools. On 21 November 2022, theSupreme Court of New Zealandruled inMake It 16 Incorporated v Attorney-Generalthat the voting age of 18 was "inconsistent with the bill of rights to be free from discrimination on the basis of age".[69]Prime MinisterJacinda Ardernsubsequently announced that a bill to lower the voting age to 16 would be debated in parliament, requiring a supermajority to pass.[70]This bill was subsequently withdrawn in January 2024, after theSixth National Government of New Zealandwas elected.[71] TheRepresentation of the People Act 1969lowered the voting age from 21 to 18 for elections to theHouse of Commonsof theParliament of the United Kingdom, making the United Kingdom the first major democracy to do so.[4][5][6]The1970 United Kingdom general electionwas the first in which this Act had effect. Men in military service who had turned 19 during the First World War were entitled to vote in 1918 under theRepresentation of the People Act 1918, even though the general voting age remained 21; the Act also allowed some women over the age of 30 to vote. TheRepresentation of the People (Equal Franchise) Act 1928brought the voting age for women down to 21.[72] The reduction of the voting age to 16 in the United Kingdom was first given serious consideration in 1999, when the House of Commons considered in Committee an amendment proposed bySimon Hughesto the Representation of the People Bill.[73]This was the first time the reduction of a voting age below 18 had ever been put to a vote in the Commons.[74]The Government opposed the amendment, and it was defeated by 434 votes to 36.[74] TheVotes at 16coalition, a group of political and charitable organizations supporting a reduction of the voting age to 16, was launched in 2003.[75]At that time aPrivate Member's Billwas also proposed in theHouse of LordsbyLord Lucas.[76] In 2004, theUK Electoral Commissionconducted a major consultation on the subject of the voting age andage of candidacy, and received a significant response. In its conclusions, it recommended that the voting age remain at 18.[77]In 2005, the House of Commons voted 136-128 (on afree vote) against a Private Member's Bill for a reduction in the voting age to 16 proposed byLiberal DemocratMPStephen Williams. Parliament chose not to include a provision reducing the voting age in theElectoral Administration Actduring its passage in 2006.
The report of thePower Inquiryin 2006 called for a reduction of the voting age, and of the candidacy age for the House of Commons, to 16.[78]On the same day, theChancellor of the Exchequer,Gordon Brown, indicated in an article inThe Guardianthat he favored a reduction provided it was made concurrently with effectivecitizenship education.[79] TheMinistry of Justicepublished in 2007 aGreen Paperentitled The Governance of Britain, in which it proposed the establishment of a "Youth Citizenship Commission".[80]The Commission would examine the case for lowering the voting age. On launching the paper in the House of Commons,Prime MinisterGordon Brown said: "Although the voting age has been 18 since 1969, it is right, as part of that debate, to examine, and hear from young people themselves, whether lowering that age would increase participation."[81] During the Youth Parliament debates in the House of Commons in 2009, Votes at 16 was debated and young people of that age group voted for it overwhelmingly as a campaign priority. In April 2015, Labour announced that it would support the policy if it won an overall majority in the2015 general election,[82]which it failed to do. In July 2024, however,Keir Starmer, then leader of the UKLabour Party, was elected asPrime Minister of the United Kingdom. In its 2024 manifesto, published in the run-up to the general election, Labour maintained this previous position, with Keir Starmer himself confirming that, if elected, he would lower the voting age from 18 to 16 in all elections. Prior to the 2024 election, the voting age in bothScotlandandWaleshad already been set at 16 by the relevant devolved governments (see details below). There was some criticism that the voting age was not reduced to 16 for thereferendum on membership of the European Union in 2016.[83][84] YouGov polling from 2018 shows that, whilst the public are still opposed, there is growing support for extending the franchise. As of May 2019, all the main parties, with the exception of theConservatives, back reducing the age to 16. Some have argued the Conservatives are hypocritical not to support this, as they allow 16-year-olds to vote in their leadership elections. It is also argued that the main parties' approaches are self-serving, as younger voters are thought more likely to support left-leaning parties and remaining in the EU, and less likely to support right-leaning parties and leaving the EU.[85] TheScottish National Partyconference voted unanimously on 27 October 2007 for a policy of reducing the voting age to 16 (theage of majorityin Scotland), as well as in favor of a campaign for the necessary power to be devolved to theScottish Parliament.[86] In September 2011, it was announced that the voting age was likely to be reduced from 18 to 16 for theScottish independence referendum.[87]This was approved by the Scottish Parliament in June 2013.[88] In June 2015, theScottish Parliamentvoted unanimously to reduce the voting age to 16 for elections for the Scottish Parliament and for Scottishlocal governmentelections. The voting age in Scotland remains 18 for UK general elections.[89] Major reforms were recommended in 2017 in the 'A Parliament That Works For Wales' report, by the expert panel on Assembly Electoral Reform led by ProfessorLaura McAllister.
It included increasing the size of the Assembly, adapting or changing the electoral system and reducing the voting age to 16.[90] TheWelsh Assembly's Commission, the corporate body, introduced a bill in 2019 to reduce the voting age to 16 and change the name to Senedd.[91]TheNational Assembly for Walespassed theSenedd and Election (Wales) Actlater that year.[92]A vote to remove this enfranchisement was defeated by 41 votes to 11. The2021 Senedd electionwas the first to include this enfranchisement, the largest extension of the franchise in Welsh politics since 1969.[93] TheWelsh Governmentalso legislated for the enfranchisement of 16- and 17-year-olds in the Local Government and Elections (Wales) Act, which received royal assent in 2021. The changes were in place for local Welsh elections in 2022. The voting age in Wales remains 18 for UK general elections.[94][95][96] The voting age in theBritish Overseas Territories(those parts of the British Realm that lie outside the archipelago of the British Isles, which, before 1983, were termedBritish colonies, and, from 1983 to 2002,British Dependent Territories) for national (i.e., "British") parliamentary elections is the same as in that part of the realm that lies within the British Isles, although, as no electoral district has ever been created for any British Overseas Territory, British nationals from the territories must first establish residency in an existing electoral district in order to exercise their voting rights in national elections. Local elected legislatures were established inVirginiain 1619 andBermuda(originally settled as part of Virginia) in 1620. Sovereignty remained with the national (British) government, with the British Parliament asserting its right to legislate for the colonies,[97]though in practice certain competencies were delegated by the British government to the local governments (varying depending upon the degree of representation in the local government of each colony). Since the 1960s, most of the remaining colonies have been given elected legislatures similar to Bermuda's (or the Councils that advise the appointed governors, originally made up only of appointees, now include elected members), with the enfranchisement for local elections determined by local legislation (subject, like all local legislation, to the approval of the national government). InAnguilla, Bermuda, theBritish Virgin Islands, theCayman Islands, theFalkland Islands,Gibraltar,Montserrat, thePitcairn Islands,Saint Helena(and presumablyAscension IslandandTristan da Cunha), andTurks and Caicos Islandsthe current voting ages for local elections are all 18. There are no permanent inhabitants, and no local legislatures, inBritish Antarctic Territory,British Indian Ocean Territory, andSouth Georgia and the South Sandwich Islands. Under the agreement withCyprusby which Britain retained the Sovereign Base Areas ofAkrotiri and Dhekelia, the British government agreed not to set up and administer "colonies" and not to allow new settlement of people in the Sovereign Base Areas other than for temporary purposes. There is no local legislature and consequently there are no local elections. As of 2025, the voting age in allBritish Crown Dependenciesis set at 16.[98][99][100][101][102] Moves to lower the voting age to 16 were first successful in three BritishCrown dependenciesfrom 2006 to 2008.
TheIsle of Manwas the first to amend previous legislation in 2006, when it reduced the voting age to 16 for itsgeneral elections, with the House of Keys approving the move by 19 votes to 4.[103] Jerseyfollowed suit in 2007, when it approved a reduction of the voting age to 16. TheStates of Jerseyvoted narrowly in favour, by 25 votes to 21,[104]and the legislative amendments were adopted.[105]The law was sanctioned byOrder in Council,[106][107]and was brought into force in time for thegeneral elections in late 2008.[108][109] In 2007, a proposal[110][111]for a reduction (in voting age to 16) made by the House Committee of theStates of Guernsey, and approved by the States' Policy Committee, was adopted by the assembly by 30 votes to 15.[111][112]An Order in Council sanctioned the law,[106]and it was registered at the Court of Guernsey. It came into force immediately, and the voting age was accordingly reduced in time for the2008 Guernsey general election.[113] In 2022, bothAlderneyandSarkpassed legislation which lowered the voting age to 16 for all elections going forward.[114][102] In the United States, the debate about lowering the voting age from 21 to 18 began duringWorld War IIand intensified during theVietnam War, when most of those subjected to the draft were too young to vote, and the image of young men being forced to risk their lives in the military without the privileges of voting successfully pressured legislators to lower the voting age nationally and in many states. By 1968, several states had lowered the voting age below 21 years: Alaska and Hawaii's minimum age was 20,[115]while Georgia[115]and Kentucky's was 18.[116]In 1970, the Supreme Court inOregon v. Mitchellruled that Congress had the right to regulate the minimum voting age in federal elections; however, it decided it could not regulate it at local and state level. TheTwenty-sixth Amendment to the United States Constitution(passed and ratified in 1971)[117]prevents states from setting a voting age higher than 18.[118]Except for the express limitations provided for in Amendments XIV, XV, XIX and XXVI, voter qualifications for House and Senate elections are largely delegated to the States under Article I, Section 2 and Amendment XVII of the United States Constitution, which respectively state that "The House of Representatives shall be composed of Members chosen every second Year by the People of the several States, andthe Electors in each State shall have the Qualifications requisite for Electors of the most numerous Branch of the State Legislature." and "The Senate of the United States shall be composed of two Senators from each State, elected by the people thereof, for six years; and each Senator shall have one vote. Theelectors in each State shall have the qualifications requisite for electors of the most numerous branch of the State legislatures."[119] Seventeen states permit 17-year-olds to vote inprimary electionsand caucuses if they will be 18 by election day: Colorado, Connecticut, Delaware, Illinois, Indiana, Kentucky, Maine, Maryland, Mississippi, Nebraska, New Mexico,[120]North Carolina, Ohio, South Carolina, Virginia, Vermont, and West Virginia. Iowa, Minnesota, and Nevada allow 17-year-olds to participate in all presidential caucuses, but may not vote in primary elections for other offices. Alaska, Hawaii, Idaho, Kansas, Washington, and Wyoming allow 17-year-olds to participate in only Democratic caucuses, but not in the Republican caucus.[121] All states allow someone not yet 18 to preregister to vote. 
Fifteen states (California, Colorado, Delaware, Florida, Hawaii, Louisiana, Maryland, Massachusetts, New York, North Carolina, Oregon, Rhode Island, Utah, Virginia, and Washington) and Washington, D.C., allow 16-year-olds to preregister. In Maine, Nevada, New Jersey, and West Virginia, 17-year-olds can preregister. Alaska allows a teen to preregister within 90 days of their 18th birthday. Georgia, Iowa, and Missouri allow 17.5-year-olds to preregister if they turn 18 before the next election. Texas allows someone 17 years and 10 months old to preregister. The remaining states, excepting North Dakota, do not specify an age for preregistration so long as the teen will be 18 by the next election (usually the next general election). North Dakota does not require voter registration.[122] On 3 April 2019,Andrew Yangbecame the first major presidential candidate to advocate for the United States to lower its voting age to 16.[123]At 16, Americans do not have hourly limits imposed on their work, and they pay taxes. According to Yang, their livelihoods are directly impacted by legislation, and they should therefore be allowed to vote for their representatives.[124] In 2018, a bill was proposed in theCouncil of the District of Columbiato lower the voting age to 16, which would have made the federal district the first jurisdiction to lower the voting age for federal-level elections.[125]In 2019, Washington, D.C. Council Member Charles Allen sponsored a debate on whether or not the city should lower the voting age to 16 for all elections, including the presidential election in the city. Allen gained considerable public support, although the measure to lower the voting age stalled.[126] In 2013, theCity of Takoma Park, Maryland, became the first place in the United States to lower its voting age to 16 for municipal elections and referendums.[127][128]As of 2024,Greenbelt,Hyattsville,Riverdale Park,Mount Rainier,SomersetandChevy Chasehad followed suit.[129][130] Starting in 2024, 16- and 17-year-olds can vote in school board races inBerkeley,[131]Oakland[131]andNewark.[132] In Massachusetts, the state has blocked efforts to lower the voting age for local elections to 16 inAshfield,Boston,Brookline,Cambridge,Concord,Harwich,Lowell,Northampton,Shelburne,Somerville, andWendell.[133][134] During the2024 Republican Party presidential primaries,Vivek Ramaswamyannounced that he favored raising the voting age to 25 in most circumstances. The policy change, which would have to be done through aconstitutional amendment, would allow citizens between 18 and 24 to vote only if they are enlisted in themilitary, work asfirst-responderpersonnel, or pass a civics test.[135] A request to lower the voting age to 16 was made during consideration of revisions to theConstitution of Venezuelain 2007.Cilia Flores, president of theNational Assembly, announced that the Mixed Committee for Constitutional Reform had found the idea acceptable. Following approval in the legislature,[136]the amendment formed part of the package of constitutional proposals, and was defeated in the2007 referendum. There are occasional calls for a maximum voting age, on the grounds that older people have less of a stake in the future of the country or jurisdiction.[137]In fact, however, the only jurisdiction with a maximum voting age is theVatican City State, whose sovereign (thePope) is elected by theCollege of Cardinals.
A Cardinal must be below the age of 80 on the date of the previous Pope's death or resignation in order to vote to elect a new Pope.[138] 18 is the most common voting age. In some countries and territories, 16- or 17-year-olds can vote in at least some elections. Examples of places with full enfranchisement for those aged 16 or 17 include Argentina, Austria, Brazil, Cuba, Ecuador, Nicaragua, East Timor, Greece, and Indonesia. The only known maximum voting age is in theHoly See, where the franchise for electing a new Pope in the Papal Conclave is restricted toCardinalsunder the age of 80. In Germany, the voting age is 16 for state elections in Baden-Württemberg,Berlin,Brandenburg,Bremen,Hamburg,Mecklenburg-VorpommernandSchleswig-Holstein, and 16 for municipal elections in Baden-Württemberg,Berlin,Brandenburg,Bremen,Hamburg,Mecklenburg-Vorpommern,Lower Saxony,North Rhine-Westphalia,Saxony-Anhalt,Schleswig-HolsteinandThuringia.[148]The voting age in Germany is also 16 in European elections.[149]In the United Kingdom, the voting age is 16 forScottish Parliamentelections, Scottishlocal governmentelections, and theScottish Independence Referendum,[89]and 16 forSenedd(Welsh Parliament) elections andWelsh local elections. In Ireland, theConstitutional Conventionwas asked in 2013 to consider reducing the voting age to 17 and recommended lowering it to 16.[263]Thethen governmentagreed to hold a referendum,[264]but in 2015 postponed it indefinitely to give priority to other referendums.[265]
https://en.wikipedia.org/wiki/Voting_age
Theyouth rightsmovement (also known asyouth liberation) seeks to grant therightstoyoung peoplethat are traditionally reserved foradults. This is closely akin to the notion ofevolving capacitieswithin thechildren's rightsmovement, but the youth rights movement differs from the children's rights movement in that the latter places emphasis on the welfare and protection of children through the actions and decisions of adults, while the youth rights movement seeks to grant youth the liberty to make their own decisions autonomously in the ways adults are permitted to, or to abolish the legal minimum ages at which such rights are acquired, such as theage of majorityand thevoting age.[1] Codified youth rights constitute one aspect of how youth are treated in society. Other aspects include social questions of how adults see and treat youth, and how open a society is toyouth participation.[2] Of primary importance to advocates of youth rights are historical perceptions of young people, considered to beoppressiveand informed bypaternalism,adultismandageismin general, as well as fears ofchildrenandyouth. These perceptions include the assumption that young people are incapable of making crucial decisions and need protecting from their tendency to act impulsively or irrationally.[3]Such perceptions can informlawsthroughout society, includingvoting age,child labor laws,the right to work,curfews,drinking age,smoking age,gambling age,age of consent,driving age,emancipation, medical autonomy,closed adoption,corporal punishment, theage of majority, andmilitary conscription. Restrictions on young people that are not applied to adults may be called status offenses and viewed as a form of unjustifieddiscrimination.[4] There are specific sets of issues addressing the rights of youth in schools, includingzero tolerance, "gulag schools",In loco parentis, andstudent rightsin general.Homeschooling,unschooling, andalternative schoolsare popular youth rights issues. A long-standing effort within the youth rights movement has focused oncivic engagement. Other issues include mandatoryallowance[5]andnon-authoritarian parenting.[6]There have been a number of historical campaigns to increaseyouth voting rightsby lowering thevoting ageand theage of candidacy. There are also efforts to get young people elected to prominent positions in local communities, including as members ofcity councilsand as mayors. For example, in the2011 Raleigh mayoral election17-year-old Seth Keel launched a campaign for Mayor despite the age requirement of 21.[7]Strategies for gaining youth rights that are frequently utilized by their advocates include developingyouth programsandorganizationsthat promoteyouth activism,youth participation,youth empowerment,youth voice,youth/adult partnerships,intergenerational equityandcivil disobediencebetween young people and adults. First emerging as a distinct movement in the 1930s, youth rights have long been concerned withcivil rightsandintergenerational equity. Tracing its roots toyouth activistsduring theGreat Depression, youth rights has influenced thecivil rights movement,opposition to the Vietnam War, and many other movements.
Since the advent of theInternet, the youth rights movement has been gaining predominance again.[citation needed] Some youth rights advocates use the argument offallibilityagainst the belief that others can know what is best or worst for an individual, and criticize the children's rights movement for assuming that exterior legislators, parents, authorities and so on can know what is for a minor's own good. These thinkers argue that the ability to correct what others think about one's ownwelfarein afalsificationist(as opposed topostmodernist) manner constitutes a non-arbitrary mental threshold at which an individual can speak for himself or herself independently of exterior assumptions, as opposed to arbitrary chronological minimum ages in legislation. They also criticize the carte blanche for arbitrary definitions of "maturity" implicit in children's rights laws such as "with rising age and maturity" for being part of the problem, and suggest the absolute threshold of conceptual after-correcture to remedy it.[8] These views are often supported by people with experience of the belief in absolutely gradual mental development being abused as an argument for "necessity" of arbitrary distinctions such asage of majoritywhich they perceive as oppressive (either currently oppressing or having formerly oppressed them, depending on age and jurisdiction), and instead cite types ofconnectionismthat allow forcritical phenomenaencompassing the entirebrain. These thinkers tend to stress that different individuals reach the critical threshold at somewhat different ages with no more than one in 365 (one in 366 in the case of leap years) chance of coinciding with a birthday, and that the relevant difference that it is acceptable to base different treatment on is only between individuals and not between jurisdictions. Generally, the importance of judging each individual by observable relevant behaviors and not by birth date is stressed by advocates of these views.[9] Children's rightscover all rights belonging to children. When individuals grow up, they are granted new rights (such as voting, consent, and driving) and duties (such as criminal responsibility and draft eligibility). There are differentminimumlimits of age at whichyouthare, situationally, not independent or deemed legally competent to make certain decisions or take certain actions. Several rights and responsibilities legally come with age; after youth reach these limits, they are free tovote, buy or consumealcoholic beverages, and drivecars, among other acts. The "youth rights movement", also described as "youth liberation", is a nascentgrass-roots movementwhose aim is to fight againstageismand for thecivil rightsof young people – those "under the age of majority", which is 18 in most countries. Some groups combatpedophobiaandephebiphobiathroughout society by promotingyouth voice,youth empowermentand, ultimately,intergenerational equitythroughyouth/adult partnerships.[10]Many advocates of youth rights distinguish their movement from thechildren's rightsmovement, which they argue advocates changes that are often restrictive towards children and youth.[11] International Youth Rights(IYR) is a student-run youth rights organization in China, with regional chapters across the country and abroad. Its aim is to make the voices of youth heard across the world and to give young people opportunities to carry out their own creative solutions to world issues.
TheEuropean Youth Forum(YFJ, from Youth Forum Jeunesse) is the platform of the National Youth Council and International Non-Governmental Youth Organisations in Europe. It strives for youth rights in International Institutions such as the European Union, the Council of Europe and the United Nations. The European Youth Forum works in the fields of youth policy and youth work development. It focuses its work on European youth policy matters, whilst through engagement on the global level it is enhancing the capacities of its members and promoting global interdependence. In its daily work the European Youth Forum represents the views and opinions of youth organisations in all relevant policy areas and promotes the cross-sectoral nature of youth policy towards a variety of institutional actors. The principles of equality and sustainable development are mainstreamed in the work of the European Youth Forum. Other International youth rights organizations includeArticle 12 in Scotlandand K.R.A.T.Z.A. in Germany. InMalta, the voting age has been lowered to 16 in 2018 to vote in national and European Parliament elections.[12] TheEuropean Youth Portalis the starting place for the European Union's youth policy, withErasmus+as one of its key initiatives. TheNational Youth Rights Associationis the primary youth rights organization for theyouths in the United States, with local chapters across the country. The organization known as Americans for a Society Free from Age Restrictions is also an important organization.The Freechild Projecthas gained a reputation for interjecting youth rights issues into organizations historically focused onyouth developmentandyouth servicethrough their consulting and training activities. TheGlobal Youth Action Networkengages young people around the world in advocating for youth rights, andPeacefireprovidestechnology-specific support for youth rights activists. Choose Responsibilityand their successor organization, theAmethyst Initiative, founded byJohn McCardell, Jr., exist to promote the discussion of the drinking age, specifically. Choose Responsibility focuses on promoting a legal drinking age of 18, but includes provisions such as education and licensing. The Amethyst Initiative, a collaboration of college presidents and other educators, focuses on discussion and examination of the drinking age, with specific attention paid to the culture of alcohol as it exists on college campuses and the negative impact of the drinking age on alcohol education and responsible drinking. Young India Foundation(YIF) is a youth-led youth rights organization in India, based in Gurgaon with regional chapters across India. Its aim is to make voices of youth be heard across India and seek representation for the 60% of India's demographic that is below the age of 25.[13]YIF is also the organization behind the age of candidacy campaign to bring down the age when a Member of Legislative Assembly or Member of Parliament can contest.[14] Youth rights, as a philosophy and as a movement, has been informed and is led by a variety of individuals and institutions across the United States and around the world. 
In the 1960s and 70s,John Holt,Richard Farson,Paul GoodmanandNeil Postmanwere well-regarded authors who spoke out about youth rights throughout society, including education, government, social services and popular citizenship.Shulamith Firestonealso wrote about youth rights issues in the second-wave feminist classicThe Dialectic of Sex.Alex Koroknay-Paliczhas become a vocal youth rights proponent, making regular appearances on television and in newspapers.Mike A. Malesis a prominentsociologistand researcher who has published several books regarding the rights of young people across the United States.Robert Epsteinis another prominent author who has called for greater rights and responsibilities for youth. Several organizational leaders, includingSarah Fitz-ClaridgeofTaking Children Seriously,Bennett HaseltonofPeacefireandAdam Fletcher (activist)ofThe Freechild Project, conduct local, national, and internationaloutreachfor youth and adults regarding youth rights.Giuseppe Porcaro, during his mandate as Secretary General of theEuropean Youth Forum, edited the second edition of the volume "The International Law of Youth Rights", published byBrill Publishers.
https://en.wikipedia.org/wiki/Youth_rights
Youth suffrageis theright to votefor young people. It forms part of the broaderuniversal suffrageandyouth rightsmovements. Mostdemocracieshave lowered thevoting ageto between 16 and 18, while some advocates for children's suffrage hope to remove age restrictions entirely.[1] According to advocates, the "one man, one vote" democratic ideal supports giving voting rights to as many people as possible in order for the wisdom of a more representative electorate to create better outcomes for society. Advocates suggest that setting avoting ageat or below 16, would accomplish that goal, while also creating a more ethical democracy for those who believe that those most impacted by government decisions (those with the longest life expectancy[2]) are given at least an equal say in decision-making. The idea ofpresumptive inclusionholds that individuals should be given the right to vote by default and only removed if the government can decisively prove why someone shouldn't have that right.[3][4][5]Erring on the side of over-inclusion also checks the temptations of those with power (or simplystatus quo bias) to exclude capable voters. The first reason for exclusion that is seen as legitimate by some democratic theorists is competence, while the second is connection to the community. Age-related debates fall under the question of competence.[3] Many countries don't require literacy in order to vote, validating the idea that attaining a certain level of education is not needed to understand how to cast a vote according to one's interest or beliefs. Inthe 1965 U.S. Voting Rights Actfor example, it was determined that a6th gradeeducation (typically achieved by age 12-13) provided "sufficient literacy, comprehension and intelligence to vote in any election."[6]If kids were given the same tests that adults whose brains are atypical must pass in order to vote, then many pre-adolescents would qualify as competent[3](see also:ableism,neurodiversity, andSuffrage for Americans with disabilities). 
Additionally, ballots cast by someone (i.e. kids) with little understanding might simply randomly allocate votes and have no impact on the outcome of the election.[3] Further, law professor Vivian Hamilton argues that in light of findings from research in developmental psychology and cognitive and social neuroscience, governments can "no longer justify the electoral exclusion of mid-adolescents by claiming that they lack the relevant competencies."[3] John Wallargues that, precisely because children and youth think differently than adults, they would make unique contributions to decisions around issues, bringing fresh perspectives and useful abilities such as compassion for suffering and even great wisdom.[7] As for knowledge around the political decisions at the ballot box, Daniel Hart argues that 16-year-olds have proved just as capable of evaluating the candidates that align with their values and interests as 18- and 19-year-olds (though they have less political knowledge than 30-year-olds).[8] Others dispute whether not having the average political knowledge of an 18-year-old is a good reason for exclusion, given the double standard that adults do not have to prove any level of political knowledge before voting.[3]Additionally, not every voter is expected to know about every issue, but the wisdom of the crowd from different expertise and life experiences is what contributes to a healthy and informed citizenry, including perspectives that are unique to those under 18.[3]Most people use heuristics (political party, endorsements, etc.) to decide who to vote for, and there is evidence that heuristics can be a more effective approach to voting rationally than a detailed issue-by-issue analysis of each candidate in each race.[3]Additionally, while prior knowledge and experience can provide greater understanding, it can also lead to less informed decision-making by closing an otherwise open mind.[3] Some scholars advocating for a further reduced voting age promote the idea that voting should always be optional below a certain age, so that those who feel they do not know enough yet are not forced to participate until they want to.[9] Disputes over youth suffrage have historically been linked to partisan efforts to restrict voting. The 1971 passage of the Twenty-Sixth Amendment to the U.S. Constitution, which gave young people the vote at age eighteen, spurred conflicts with regard towherestudents should vote.
Those who opposed allowing students to vote in their college towns argued that students should be forced to vote where their parents lived, and sometimes these efforts were specifically aimed at Black students.[10] Youthandstudent activistshave a long history of learning about and advocating for more inclusive futures, so young advocates have begun asking for the ability to vote on some or all issues.[11][12] Parents have not been shown to have influence over youth voting behavior in studies of countries where the vote has been given to 16-year-olds, just as similar fears did not materialize whenwomen were given the right to vote.[8][13]Likewise, peer pressure has been shown to have no greater influence on teens than on adults when it comes to voting.[14] John Wallargues that even if children chose to vote exactly as either their parents or their peers, it would not justify their disenfranchisement just as such behavior would not disqualify adults.[7] While teenagers can be more impulsive in certain 'hot' contexts until their early 20s,[15]in 'cool' contexts, such as a voting booth, there is no significant difference between a 16-year-old's ability to make careful, rational decisions and that of any other voter.[16]Others contend that governments should not withhold rights that young children can exercise, like voting, just because they have not yet received other rights that they cannot exercise, like driving.[17]Much of the development of the analytical part of the brain takes place between ages 14 and 16, which is why 16-year-olds are often given societal privileges, such as being able to work jobs or drive a car, that are more demanding than voting.[18]Under Roman law, the age minimum for full citizenship was 14 (for males), while in much of 9th-11th century France, Germany and Northern Europe the age of adulthood (largely for fighting in wars) was 15.[3] Scholars have found no negative effects from lowering the voting age in countries around the world, and in many places, positive ones like increased trust in institutions and a more favorable view of the lower voting age over time.[19]A study of five Latin American countries where the voting age was lowered to 16, for example, showed a significant association with trust in government and a marginal association with satisfaction.[20]In addition to the argument againsttaxation without representation, advocates note that governments derive their just authority from theconsent of the governed. To be legitimate, those who govern and those who legislate, the argument goes, must be elected by the people, not a special subset of the people. Scholars have found no negative effects from lowering the voting age below 18 in countries around the world, and in many places, positive ones like increased turnout and engagement.[19]Youth enfranchisement at a more stable life stage (before 18) has been shown to develop more robust and long-lasting voting habits,[21]leading to greater rates (~25% higher, according to one study) of voting in the future.[6]Studies in Norway,[22]Austria[23]and Scotland[24][25]found that allowing 16-year-olds to vote led those voters to have "substantially higher levels of engagement with representative democracy (through voting) as well as other forms of political participation". A study of preregistration (registering individuals before they are eligible to vote) in the U.S.
found that it was linked to higher youth turnout, and that politicians became more responsive to issues that the young have strong preferences on, such as higher education spending.[26]While some South American countries (Argentina,BrazilandEcuador) have lowered theirvoting ageto 16, they also havecompulsory votingstarting at 18, making it difficult to study turnout effects from the lower voting age.Indonesiaprovides a potential case-study for non-western democracies, though it has only lowered its voting age to 17.[27]Educating children for and about democracy would likely be longer lasting if thevoting agewere lowered or eliminated,[28]while just how skilled kids could become over the course of a few elections is unknowable since it has yet to be tried below the age of 16.[7] Sixteen is currently the lowestnational voting agein the world, and there seems to be a consensus in studies of elections that voters at 16 have proven to be substantially the same as voters at 18.[29][30]The majority of campaigns to lower the voting age worldwide (as of January 2023) seek avoting age of 16, with perhaps the most notable example being theEuropean Union's endorsement that its members lower their voting ages to 16.[31]In countries with bothcompulsory votingand avoting ageat 16 (Argentina, Brazil and Ecuador), the penalties for not voting start at 18. The United Nations defines "youth" as being from ages 15 to 24.[32]In the United States, Avi Hein andTa-Nehisi Coatescalled for lowering the voting age to 15.[33][34] Politics professorDavid Runcimanargues for lowering the voting age to 6, given that at that agechildrentend to be in school and have enough ability to read and fill out a multiple-choice ballot.[35][36] Youth councils (or children's parliaments)often include children starting at age 5, whichJohn Wallsubmits as evidence of their readiness for other civic roles such as voting (note: he advocates eliminating age requirements altogether).[31] Democratic schoolspractice and supportuniversal suffragein school, which allows a vote to every member of the school including students and staff. These schools hold that this feature is essential for students to be ready to move into society at large. TheSudbury Valley School, for example, allows all children ages 4 and up an equal say in its operation.[37][38] Some advocate for eliminating age as a factor in enfranchisement altogether, noting that in practice most very young children will not choose to vote, but that they should have the right to do so when they feel ready,[17]with some supporting aproxy voteto beawarded to their parentsuntil the child wants to vote.[31]Others cite how literacy tests were banned for adults and argue that, by the same reasoning, the voting age should be removed for young people as well.[39][40]
https://en.wikipedia.org/wiki/Youth_suffrage
Youthis the time oflifewhen one is young. The word, youth, can also mean the time betweenchildhoodandadulthood(maturity), but it can also refer to one's peak, in terms of health or the period of life known as being ayoung adult.[1][2]Youth is also defined as "the appearance, freshness, vigor, spirit, etc., characteristic of one, who is young".[3]Its definitions of a specific age range varies, as youth is not definedchronologicallyas a stage that can be tied to specific age ranges; nor can its end point be linked to specific activities, such as takingunpaid work, or havingsexual relations.[4][5] Youth is an experience that may shape an individual's level ofdependency, which can be marked in various ways according to differentculturalperspectives. Personalexperienceis marked by an individual's cultural norms ortraditions, while a youth's level of dependency means the extent to which they still rely on theirfamilyemotionallyandeconomically.[4] Around the world, theEnglishtermsyouth,adolescent,teenager,kid,youngsterandyoung personoften mean the same thing,[6]but they are occasionally differentiated.Youthcan be referred to as the time of life, when one is young. The meaning may in some instances also include childhood.[7][8]Youthalso identifies a particular mindset of attitude, as in "He is veryyouthful". For certain uses, such as employment statistics, the term also sometimes refers to individuals from the ages of up to 21.[9]However, the termadolescencerefers to a specific age range during a specific developmental period in a person's life, unlike youth, which is a socially constructed category.[4] TheUnited Nationsdefinesyouthas persons between the ages of roughly 12 and 24, with all UN statistics based on this range, the UN states education as a source for these statistics. The UN also recognizes that this varies without prejudice to other age groups listed by member states such as 18–30. A useful distinction within the UN itself can be made between teenagers (i.e. those between the ages of 13 and 19) and young adults (those between the ages of 20 and 24). While seeking to impose some uniformity on statistical approaches, the UN is aware of contradictions between approaches in its own statutes. Hence, under the 15–24 definition (introduced in 1981) children are defined as those under the age of (someone 12 and younger) while under the 1979 Convention on theRights of the Child, those under the age of 18 are regarded as children.[10]The UN also states they are aware that several definitions exist foryouth within UN entitiessuch asYouth Habitat 15–32,NCSL 12-24, and African Youth Charter 15–35. On November 11, 2020, theState Dumaof theRussian Federationapproved a project to raise the cap on the age of young people from 30 to 35 years (the range now extending from 14 to 35 years).[11] Although linked to biological processes of development and aging,youthis also defined as asocial positionthat reflects the meanings different cultures and societies give to individuals between childhood and adulthood. The term in itself when referred to in a manner of social position can beambiguouswhen applied to someone of an older age with very low social position; potentially when still dependent on their guardians.[12]Scholars argue that age-based definitions have not been consistent across cultures or times and that thus it is more accurate to focus on social processes in the transition to adult independence for defining youth.[13] Youth is the stage of constructing theself-concept. 
The self-concept of youth is influenced by variables such as peers, lifestyle, gender, and culture.[15]It is a time of a person's life when their choices are most likely to affect their future.[16][17] In much of sub-Saharan Africa, the term "youth" is associated with young men from 12 to 30 or 35 years of age.Youth in Nigeriaincludes all members of the Federal Republic of Nigeria aged 18–35.[18]Many African girls experience youth as a brief interlude between the onset ofpubertyandmarriageandmotherhood. But in urban settings, poor women are often considered youth much longer, even if they bear children outside of marriage. Varying culturally, the gender constructions of youth in Latin America and Southeast Asia differ from those of sub-Saharan Africa. In Vietnam, widespread notions of youth are sociopolitical constructions for both sexes between the ages of 15 and 35.[19] InBrazil, the termyouthrefers to people of both sexes from 15 to 29 years old. This age bracket reflects the influence on Brazilian law of international organizations like theWorld Health Organization(WHO). It is also shaped by the notion of adolescence that has entered everyday life in Brazil through a discourse on children's rights.[19] TheOECDdefinesyouthas "those between 15 and 29 years of age".[20][21] August 12 was declaredInternational Youth Dayby the United Nations. Children's rightscover all the rights that belong to children. When they grow up, they are granted new rights (like voting, consent, driving, etc.) and duties (criminal response, etc.). There are differentminimumlimits of age at which youth are not free, independent or legally competent to take some decisions or actions. Some of these limits are:voting age,age of candidacy,age of consent,age of majority,age of criminal responsibility,drinking age,driving age, etc. After youth reach these limits, they are free to vote, have sexual intercourse, buy or consume alcoholic beverages or drivecars, etc. Voting ageis theminimum ageestablished by law that a person must attain to be eligible tovote in a public election. Typically, the age is set at 18 years; however, ages as low as 16 and as high as 21 exist (see list below). Studies show that 21% of all 18-year-olds have experience with voting. This is an important right since, by voting, they can support politics selected by themselves and not only by people of older generations. Age of candidacyis the minimum age at which a person can legally qualify to hold certain elected government offices. In many cases, it also determines the age at which a person may beeligible to standfor an election or be grantedballot access. Theage of consentis the age at which a person is consideredlegally competentto consent tosexual acts, and is thus the minimum age of a person with whom another person is legally permitted to engage in sexual activity. The distinguishing aspect of the age of consent laws is that the person below the minimum age is regarded as the victim, and their sex partner as the offender. Thedefense of infancyis a form ofdefenseknown as anexcuseso thatdefendantsfalling within the definition of an "infant" are excluded fromcriminalliabilityfor theiractions, if at the relevant time, they had not reached an age of criminal responsibility. This implies that children lack the judgment that comes with age and experience to be held criminally responsible. After reaching the initial age, there may be levels of responsibility dictated by age and the type of offense committed. 
Thelegal drinking ageis the age at which a person can consume or purchasealcoholic beverages. These laws cover a wide range of issues and behaviors, addressing when and where alcohol can be consumed. In some countries, the minimum age at which alcohol can be legally consumed differs from the age at which it can be purchased. These laws vary among different countries and many laws have exemptions or special circumstances. Most laws apply only to drinking alcohol in public places, with alcohol consumption in the home being mostly unregulated (an exception being the UK, which has a minimum legal age of five for supervised consumption in private places). Some countries also have different age limits for different types of alcoholic drinks.[22] Driving ageis the age at which a person can apply for adriver's license. Countries with the lowest driving ages (below 17) areArgentina,Australia, Canada,El Salvador, Iceland,Israel,Macedonia,Malaysia,New Zealand, the Philippines,Saudi Arabia,Slovenia, the United Kingdom (Mainland) and the United States. The Canadian province of Alberta and several U.S. states permit driving from as young as 14.Nigerhas the highest minimum driving age in the world at 23. In India, driving is legal after getting a license at the age of 18. Thelegal working ageis the minimum age required bylawfor a person to work in eachcountryorjurisdiction. The threshold of adulthood, or "theage of majority" as recognized or declared in law in most countries, has been set at age 18. Some types of labor are commonly prohibited even for those above the working age, if they have not reached the age of majority. Activities that are dangerous, harmful to thehealthor that may affect themoralsofminorsfall into this category. Student rightsare thoserights, such as civil, constitutional, contractual and consumer rights, which protect students' freedoms and allowstudentsto make use of their educational investment. These include such things as the right to free speech and association, to due process, equality,autonomy, safety and privacy, and accountability in contracts and advertising, which regulate the treatment of students by teachers and administrators. The smoking age is the minimum age a person can buytobaccoand/or smoke in public. Most countries regulate this at the national level, while in some it is done by the state or province. Young people spend much of their lives in educational settings, and their experiences in schools, colleges and universities can shape much of their subsequent lives.[23]Research shows thatpovertyand income affect the likelihood of not completing high school. These factors also make it less likely that young people will attend a college or university.[24]In the United States, 12.3 percent of young people ages 16 to 24 are disconnected, meaning they are neither in school nor working.[25] The leading causes of morbidity and mortality among youth and adults stem from certain health-risk behaviors. These behaviors are often established during youth and extend into adulthood. Since the risk behaviors in adulthood and youth are interrelated, problems in adulthood are preventable by influencing youth behavior. A 2004 study of youthmortalityworldwide (with youth defined in the study as ages 10–24) found that 97% of deaths occurred in low to middle-income countries, with the majority in southeast Asia and sub-Saharan Africa.
Maternal conditions accounted for 15% of female deaths, whileHIV/AIDSandtuberculosiswere responsible for 11% of deaths; 14% of male and 5% of female deaths were attributed to traffic accidents, the largest cause overall. Violence accounted for 12% of male deaths.Suicidewas the cause of 6% of all deaths.[26] The U.S.Centers for Disease Control and Preventiondeveloped its Youth Risk Behavior Surveillance System (YRBSS) in 2003 to help assess risk behavior.[27]YRBSS monitors six categories of priority health-risk behaviors among youth and young adults. These are behaviors that contribute to unintentionalinjuriesandviolence; YRBSS includes a national school-based survey conducted by CDC as well as state and local school-based surveys conducted by education and health agencies.[28] Universal school-based interventions such as formal classroom curricula, behavioural management practices, role-play, and goal-setting may be effective in preventing tobacco use, alcohol use, illicit drug use, and antisocial behaviour, and in improving the physical activity of young people.[29] Type 1 diabetes(T1D) is anautoimmune diseasethat occurs whenpancreaticcells, also calledbeta cells, are destroyed by theimmune system. Beta cells are responsible for producinginsulin, which is required by the body to convert blood sugar into energy. Symptoms associated with T1D include frequent urination, increased hunger and thirst, weight loss, blurry vision, and tiredness.[30] Type 2 diabetes(T2D) is characterized by high blood sugar and insulin resistance. This is not an autoimmune disease and is mostly a result of obesity and lack of exercise. Exercise is a crucial addition to a child's everyday routine. It can improve overallpsychosocialwell-being and metabolic health and provide cardiovascular benefits. TheAmerican College of Sports Medicinerecommends at least 60 minutes of moderate- to vigorous-intensity activity each day. Recommended activities include running, bicycle riding and team sports. Furthermore, at least 3 days of bone- and muscle-strengthening activities should be incorporated.[31] In reality, a large percentage of the T1D youth population does not meet this guideline. Common barriers include fear ofhypoglycemia, loss of glucose stability, low fitness levels, insufficient or inadequate knowledge of strategies to prevent hypoglycemia, lack of time, and lack of confidence in the topic of exercise management in type 1 diabetes.[31] ManyHealth and Care Professions Councilmembers who work with children are not sufficiently educated about diabetes in children and the associated exercise recommendations. It is important for parents to educate themselves to support their children throughout this chapter of their lives. Structured exercises lasting longer than 60 minutes can reduceHbA1clevels and insulin dose per day. Moderate activities can increase cardiorespiratory fitness in children, which is crucial for their future health. Cardiorespiratory fitness reduces the risk of other diseases, such as microvascular complications andcardiovascular diseases.[32] To avoid developing type 2 diabetes, children are encouraged to keep theirBMIandadipose tissuepercentage at normal levels. Exercising regularly reduces insulin resistance, reduces blood glucose levels, and keeps an individual at a healthy weight, lowering the risk of a T2D diagnosis.[33] Obesitynow affects one in five children in the United States and is the country's most prevalent nutritional disease of children and adolescents.
Although obesity-associated morbidities occur more frequently in adults, significant consequences of obesity as well as the antecedents of adult disease occur in obese children and adolescents. Discrimination against overweight children begins early in childhood and becomes progressively institutionalized. Obese children may be taller than their non-overweight peers, in which case they are apt to be viewed as more mature. The inappropriate expectations that result may have an adverse effect on their socialization. Many of the cardiovascular consequences that characterize adult-onset obesity are preceded by abnormalities that begin in childhood.Hyperlipidemia,hypertension, and abnormalglucose toleranceoccur with increased frequency in obese children and adolescents. The relationship of cardiovascular risk factors to visceral fat independent of total body fat remains unclear.Sleep apnea,pseudotumor cerebri, andBlount's diseaserepresent major sources of morbidity for which rapid and sustained weight reduction is essential. Although several periods of increased risk appear in childhood, it is not clear whether obesity with onset early in childhood carries a greater risk of adult morbidity and mortality.[34] Bullyingamong school-aged youth is increasingly being recognized as an important problem affecting well-being and social functioning. While a certain amount of conflict and harassment is typical of youth peer relations, bullying presents a potentially more serious threat to healthy youth development. The definition of bullying is widely agreed upon in the literature.[35][36][37][38] The majority ofresearchon bullying has been conducted in Europe and Australia.[39]Considerable variability among countries in the prevalence of bullying has been reported. In an international survey of adolescent health-related behaviors, the percentage of students who reported being bullied at least once during the current term ranged from a low of 15% to 20% in some countries to a high of 70% in others.[40][41]Of particular concern is frequent bullying, typically defined as bullying that occurs once a week or more. The prevalence of frequent bullying reported internationally ranges from a low of 1.9% among one Irish sample to a high of 19% in a Malta study.[42][43][44][45][46][47] Research examining characteristics of youth involved in bullying has consistently found that both bullies and those bullied demonstrate poorerpsychosocialfunctioning than their non-involved peers. Youth who bully others tend to demonstrate higher levels of conduct problems and dislike of school, whereas youth who are bullied generally show higher levels ofinsecurity,anxiety,depression,loneliness,unhappiness, physical and mental symptoms, and lowself-esteem. Males who are bullied also tend to be physically weaker than males in general. The few studies that have examined the characteristics of youth who both bully and are bullied found that these individuals exhibit the poorest psychosocial functioning overall.[48][49][50][51] Globalizationand transnational flows have had tangible effects on sexual relations, identities, and subjectivities. In the wake of an increasingly globalized world order under waning Western dominance, ideologies of modernity, civilization, and programs for social improvement have shaped discourses onpopulation control, 'safe sex', and 'sexual rights'.[52]Sex educationprogrammes grounded in evidence-based approaches are a cornerstone in reducing adolescent sexual risk behaviours and promoting sexual health.
In addition to providing accurate information about the consequences of sexually transmitted infections (STIs) and early pregnancy, such programmes build life skills for interpersonal communication and decision making. Such programmes are most commonly implemented in schools, which reach large numbers of teenagers in areas where school enrollment rates are high. However, since not all young people are in school, sex education programmes have also been implemented in clinics, juvenile detention centers and youth-oriented community agencies. Notably, some programmes have been found to reduce risky sexual behaviours when implemented in both school and community settings with only minor modifications to the curricula.[53] The Sangguniang Kabataan ("Youth Council" in English), commonly known as SK, was a youth council in each barangay (village or district) in the Philippines, before being put "on hold", but not quite abolished, prior to the 2013 barangay elections.[54] The council represented teenagers from 15 to 17 years old who had resided in their barangay for at least six months and registered to vote. It was the local youth legislature in the village and therefore led the local youth programs and projects of the government. The Sangguniang Kabataan was an offshoot of the KB, or the Kabataang Barangay (Village Youth), which was abolished when the Local Government Code of 1991 was enacted. The vast majority of young people live in developing countries: according to the United Nations, globally around 85 per cent of 15- to 25-year-olds live in developing countries, a figure projected to grow to 89.5 per cent by 2025. Moreover, this majority is extremely diverse: some live in rural areas, but many inhabit the overcrowded metropolises of India, Mongolia and other parts of Asia, and of South America; some live traditional lives in tribal societies, while others participate in global youth culture in ghetto contexts.[55] Many young lives in developing countries are defined by poverty, some suffer from famine and a lack of clean water, and involvement in armed conflict is all too common. Health problems are rife, especially due to the prevalence of HIV/AIDS in certain regions. The United Nations estimates that 200 million young people live in poverty, 130 million are illiterate and 10 million live with HIV/AIDS.[55]
https://en.wikipedia.org/wiki/Youth
Inrobust statistics,robust regressionseeks to overcome some limitations of traditionalregression analysis. A regression analysis models the relationship between one or moreindependent variablesand adependent variable. Standard types of regression, such asordinary least squares, have favourable properties if their underlying assumptions are true, but can give misleading results otherwise (i.e. are notrobustto assumption violations). Robust regression methods are designed to limit the effect that violations of assumptions by the underlying data-generating process have on regression estimates. For example,least squaresestimates forregression modelsare highly sensitive tooutliers: an outlier with twice the error magnitude of a typical observation contributes four (two squared) times as much to the squared errorloss, and therefore has moreleverageover the regression estimates. TheHuber lossfunction is a robust alternative to standard square error loss that reduces outliers' contributions to the squared error loss, thereby limiting their impact on regression estimates. One instance in which robust estimation should be considered is when there is a strong suspicion ofheteroscedasticity. In thehomoscedasticmodel, it is assumed that the variance of the error term is constant for all values ofx. Heteroscedasticity allows the variance to be dependent onx, which is more accurate for many real scenarios. For example, the variance of expenditure is often larger for individuals with higher income than for individuals with lower incomes. Software packages usually default to a homoscedastic model, even though such a model may be less accurate than a heteroscedastic model. One simple approach (Tofallis, 2008) is to apply least squares to percentage errors, as this reduces the influence of the larger values of the dependent variable compared to ordinary least squares. Another common situation in which robust estimation is used occurs when the data contain outliers. In the presence of outliers that do not come from the same data-generating process as the rest of the data, least squares estimation isinefficientand can be biased. Because the least squares predictions are dragged towards the outliers, and because the variance of the estimates is artificially inflated, the result is that outliers can be masked. (In many situations, including some areas ofgeostatisticsand medical statistics, it is precisely the outliers that are of interest.) Although it is sometimes claimed that least squares (or classical statistical methods in general) are robust, they are only robust in the sense that thetype I error ratedoes not increase under violations of the model. In fact, the type I error rate tends to be lower than the nominal level when outliers are present, and there is often a dramatic increase in thetype II error rate. The reduction of the type I error rate has been labelled as theconservatismof classical methods. Despite their superior performance over least squares estimation in many situations, robust methods for regression are still not widely used. Several reasons may help explain their unpopularity (Hampel et al. 1986, 2005). One possible reason is that there are several competing methods[citation needed]and the field got off to many false starts. Also, robust estimates are much more computationally intensive than least squares estimation[citation needed]; in recent years, however, this objection has become less relevant, as computing power has increased greatly. 
Another reason may be that some popular statistical software packages failed to implement the methods (Stromberg, 2004). Perhaps the most important reason for the unpopularity of robust regression methods is that when the error variance is quite large or does not exist, for any given dataset, any estimate of the regression coefficients, robust or otherwise, will likely be practically worthless unless the sample is quite large. Although uptake of robust methods has been slow, modern mainstream statistics textbooks often include discussion of these methods (for example, the books by Seber and Lee, and by Faraway[vague]; for a good general description of how the various robust regression methods developed from one another see Andersen's book[vague]). Also, modern statistical software packages such as R, SAS, Statsmodels, Stata and S-PLUS include considerable functionality for robust estimation (see, for example, the books by Venables and Ripley, and by Maronna et al.[vague]). The simplest method of estimating parameters in a regression model that is less sensitive to outliers than the least squares estimates is to use least absolute deviations. Even then, gross outliers can still have a considerable impact on the model, motivating research into even more robust approaches. In 1964, Huber introduced M-estimation for regression. The M in M-estimation stands for "maximum likelihood type". The method is robust to outliers in the response variable, but turned out not to be resistant to outliers in the explanatory variables (leverage points). In fact, when there are outliers in the explanatory variables, the method has no advantage over least squares. In the 1980s, several alternatives to M-estimation were proposed as attempts to overcome the lack of resistance. See the book by Rousseeuw and Leroy[vague] for a very practical review. Least trimmed squares (LTS) is a viable alternative and is currently (2007) the preferred choice of Rousseeuw and Ryan (1997, 2008). The Theil–Sen estimator has a lower breakdown point than LTS but is statistically efficient and popular. Another proposed solution was S-estimation. This method finds a line (plane or hyperplane) that minimizes a robust estimate of the scale (from which the method gets the S in its name) of the residuals. This method is highly resistant to leverage points and is robust to outliers in the response. However, this method was also found to be inefficient. MM-estimation attempts to retain the robustness and resistance of S-estimation, whilst gaining the efficiency of M-estimation. The method proceeds by finding a highly robust and resistant S-estimate that minimizes an M-estimate of the scale of the residuals (the first M in the method's name). The estimated scale is then held constant whilst a nearby M-estimate of the parameters is located (the second M). Another approach to robust estimation of regression models is to replace the normal distribution with a heavy-tailed distribution. A t-distribution with 4–6 degrees of freedom has been reported to be a good choice in various practical situations. Bayesian robust regression, being fully parametric, relies heavily on such distributions. Under the assumption of t-distributed residuals, the distribution is a location-scale family. That is, x←(x−μ)/σ{\displaystyle x\leftarrow (x-\mu )/\sigma }. The degrees of freedom of the t-distribution is sometimes called the kurtosis parameter. Lange, Little and Taylor (1989) discuss this model in some depth from a non-Bayesian point of view. 
A Bayesian account appears in Gelman et al. (2003). An alternative parametric approach is to assume that the residuals follow amixtureof normal distributions (Daemi et al. 2019); in particular, acontaminated normal distributionin which the majority of observations are from a specified normal distribution, but a small proportion are from a normal distribution with much higher variance. That is, residuals have probability1−ε{\displaystyle 1-\varepsilon }of coming from a normal distribution with varianceσ2{\displaystyle \sigma ^{2}}, whereε{\displaystyle \varepsilon }is small, and probabilityε{\displaystyle \varepsilon }of coming from a normal distribution with variancecσ2{\displaystyle c\sigma ^{2}}for somec>1{\displaystyle c>1}: Typically,ε<0.1{\displaystyle \varepsilon <0.1}. This is sometimes called theε{\displaystyle \varepsilon }-contamination model. Parametric approaches have the advantage that likelihood theory provides an "off-the-shelf" approach to inference (although for mixture models such as theε{\displaystyle \varepsilon }-contamination model, the usual regularity conditions might not apply), and it is possible to build simulation models from the fit. However, such parametric models still assume that the underlying model is literally true. As such, they do not account for skewed residual distributions or finite observation precisions. Another robust method is the use ofunit weights(Wainer& Thissen, 1976), a method that can be applied when there are multiple predictors of a single outcome.Ernest Burgess(1928) used unit weights to predict success on parole. He scored 21 positive factors as present (e.g., "no prior arrest" = 1) or absent ("prior arrest" = 0), then summed to yield a predictor score, which was shown to be a useful predictor of parole success.Samuel S. Wilks(1938) showed that nearly all sets of regression weights sum to composites that are very highly correlated with one another, including unit weights, a result referred to asWilks' theorem(Ree, Carretta, & Earles, 1998).Robyn Dawes(1979) examined decision making in applied settings, showing that simple models with unit weights often outperformed human experts. Bobko, Roth, and Buster (2007) reviewed the literature on unit weights and concluded that decades of empirical studies show that unit weights perform similar to ordinary regression weights on cross validation. TheBUPAliver data have been studied by various authors, including Breiman (2001). The data can be found at theclassic data setspage, and there is some discussion in the article on theBox–Cox transformation. A plot of the logs of ALT versus the logs of γGT appears below. The two regression lines are those estimated by ordinary least squares (OLS) and by robust MM-estimation. The analysis was performed inRusing software made available by Venables and Ripley (2002). The two regression lines appear to be very similar (and this is not unusual in a data set of this size). However, the advantage of the robust approach comes to light when the estimates of residual scale are considered. For ordinary least squares, the estimate of scale is 0.420, compared to 0.373 for the robust method. Thus, the relative efficiency of ordinary least squares to MM-estimation in this example is 1.266. This inefficiency leads to loss of power in hypothesis tests and to unnecessarily wide confidence intervals on estimated parameters. 
Another consequence of the inefficiency of theordinary least squaresfit is that several outliers are masked because the estimate of residual scale is inflated; the scaled residuals are pushed closer to zero than when a more appropriate estimate of scale is used. The plots of the scaled residuals from the two models appear below. The variable on thexaxis is just the observation number as it appeared in the data set. Rousseeuw and Leroy (1986) contains many such plots. The horizontal reference lines are at 2 and −2, so that any observed scaled residual beyond these boundaries can be considered to be an outlier. Clearly, the least squares method leads to many interesting observations being masked. Whilst in one or two dimensions outlier detection using classical methods can be performed manually, with large data sets and in high dimensions the problem of masking can make identification of many outliers impossible. Robust methods automatically detect these observations, offering a serious advantage over classical methods when outliers are present.
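As a concrete illustration of the outlier sensitivity discussed above, the following minimal Python sketch compares an ordinary least squares fit with a Huber-type M-estimate computed by iteratively reweighted least squares. The simulated data, the tuning constant, and the MAD-based scale estimate are illustrative assumptions and are not taken from the BUPA example in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Straight-line data with one gross outlier (illustrative values, not the BUPA data).
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=x.size)
y[45] += 15.0                                # a single large outlier in the response

X = np.column_stack([np.ones_like(x), x])    # design matrix with an intercept column

# Ordinary least squares estimate.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

def huber_irls(X, y, delta=1.345, n_iter=50):
    """Huber-type M-estimation via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        # Robust residual scale via the median absolute deviation (MAD).
        scale = 1.4826 * np.median(np.abs(r - np.median(r)))
        u = r / (delta * scale)
        # Huber weights: 1 for small residuals, downweighted for large ones.
        w = np.where(np.abs(u) <= 1.0, 1.0, 1.0 / np.abs(u))
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

beta_huber = huber_irls(X, y)
print("OLS estimate:  ", beta_ols)      # visibly pulled toward the outlier
print("Huber estimate:", beta_huber)    # close to the generating values (2.0, 0.5)
```

With the single gross outlier included, the OLS coefficients are pulled noticeably away from the values used to generate the data, while the downweighted Huber fit stays close to them, which is the behaviour the robust approaches above are designed to achieve.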
https://en.wikipedia.org/wiki/Contaminated_normal_distribution
Inconvex geometryandvector algebra, aconvex combinationis alinear combinationofpoints(which can bevectors,scalars, or more generally points in anaffine space) where allcoefficientsarenon-negativeand sum to 1.[1]In other words, the operation is equivalent to a standardweighted average, but whose weights are expressed as a percent of the total weight, instead of as a fraction of thecountof the weights as in a standard weighted average. More formally, given a finite number of pointsx1,x2,…,xn{\displaystyle x_{1},x_{2},\dots ,x_{n}}in areal vector space, a convex combination of these points is a point of the form where the real numbersαi{\displaystyle \alpha _{i}}satisfyαi≥0{\displaystyle \alpha _{i}\geq 0}andα1+α2+⋯+αn=1.{\displaystyle \alpha _{1}+\alpha _{2}+\cdots +\alpha _{n}=1.}[1] As a particular example, every convex combination of two points lies on theline segmentbetween the points.[1] A set isconvexif it contains all convex combinations of its points. Theconvex hullof a given set of points is identical to the set of all their convex combinations.[1] There exist subsets of a vector space that are not closed under linear combinations but are closed under convex combinations. For example, the interval[0,1]{\displaystyle [0,1]}is convex but generates the real-number line under linear combinations. Another example is the convex set ofprobability distributions, as linear combinations preserve neither nonnegativity nor affinity (i.e., having total integral one).
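The definition above amounts to a weighted average with nonnegative coefficients that sum to one. A short sketch follows; the points and coefficients are arbitrary illustrative values.

```python
import numpy as np

# Three points in the plane and nonnegative coefficients that sum to 1 (illustrative).
points = np.array([[0.0, 0.0],
                   [4.0, 0.0],
                   [0.0, 3.0]])
alphas = np.array([0.2, 0.5, 0.3])

assert np.all(alphas >= 0) and np.isclose(alphas.sum(), 1.0)

# The convex combination sum_i alpha_i * x_i is just this weighted average,
# and it always lies inside the convex hull (here, the triangle) of the points.
combination = alphas @ points
print(combination)   # [2.0, 0.9]
```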
https://en.wikipedia.org/wiki/Convex_combination
Amixed Poisson distributionis aunivariatediscreteprobability distributionin stochastics. It results from assuming that the conditional distribution of a random variable, given the value of the rate parameter, is aPoisson distribution, and that therate parameteritself is considered as a random variable. Hence it is a special case of acompound probability distribution. Mixed Poisson distributions can be found inactuarial mathematicsas a general approach for the distribution of the number of claims and is also examined as anepidemiological model.[1]It should not be confused withcompound Poisson distributionorcompound Poisson process.[2] Arandom variableXsatisfies the mixed Poisson distribution with densityπ(λ) if it has the probability distribution[3] P⁡(X=k)=∫0∞λkk!e−λπ(λ)dλ.{\displaystyle \operatorname {P} (X=k)=\int _{0}^{\infty }{\frac {\lambda ^{k}}{k!}}e^{-\lambda }\,\,\pi (\lambda )\,d\lambda .} If we denote the probabilities of the Poisson distribution byqλ(k), then P⁡(X=k)=∫0∞qλ(k)π(λ)dλ.{\displaystyle \operatorname {P} (X=k)=\int _{0}^{\infty }q_{\lambda }(k)\,\,\pi (\lambda )\,d\lambda .} In the following letμπ=∫0∞λπ(λ)dλ{\displaystyle \mu _{\pi }=\int _{0}^{\infty }\lambda \,\,\pi (\lambda )\,d\lambda \,}be the expected value of the densityπ(λ){\displaystyle \pi (\lambda )\,}andσπ2=∫0∞(λ−μπ)2π(λ)dλ{\displaystyle \sigma _{\pi }^{2}=\int _{0}^{\infty }(\lambda -\mu _{\pi })^{2}\,\,\pi (\lambda )\,d\lambda \,}be the variance of the density. Theexpected valueof the mixed Poisson distribution is E⁡(X)=μπ.{\displaystyle \operatorname {E} (X)=\mu _{\pi }.} For thevarianceone gets[3] Var⁡(X)=μπ+σπ2.{\displaystyle \operatorname {Var} (X)=\mu _{\pi }+\sigma _{\pi }^{2}.} Theskewnesscan be represented as v⁡(X)=(μπ+σπ2)−3/2[∫0∞(λ−μπ)3π(λ)dλ+μπ].{\displaystyle \operatorname {v} (X)={\Bigl (}\mu _{\pi }+\sigma _{\pi }^{2}{\Bigr )}^{-3/2}\,{\Biggl [}\int _{0}^{\infty }(\lambda -\mu _{\pi })^{3}\,\pi (\lambda )\,d{\lambda }+\mu _{\pi }{\Biggr ]}.} Thecharacteristic functionhas the form φX(s)=Mπ(eis−1).{\displaystyle \varphi _{X}(s)=M_{\pi }(e^{is}-1).\,} WhereMπ{\displaystyle M_{\pi }}is themoment generating functionof the density. For theprobability generating function, one obtains[3] mX(s)=Mπ(s−1).{\displaystyle m_{X}(s)=M_{\pi }(s-1).\,} Themoment-generating functionof the mixed Poisson distribution is MX(s)=Mπ(es−1).{\displaystyle M_{X}(s)=M_{\pi }(e^{s}-1).\,} Theorem—Compounding aPoisson distributionwith rate parameter distributed according to agamma distributionyields anegative binomial distribution.[3] Letπ(λ)=(p1−p)rΓ(r)λr−1e−p1−pλ{\displaystyle \pi (\lambda )={\frac {({\frac {p}{1-p}})^{r}}{\Gamma (r)}}\lambda ^{r-1}e^{-{\frac {p}{1-p}}\lambda }}be a density of aΓ⁡(r,p1−p){\displaystyle \operatorname {\Gamma } \left(r,{\frac {p}{1-p}}\right)}distributed random variable. 
P⁡(X=k)=1k!∫0∞λke−λ(p1−p)rΓ(r)λr−1e−p1−pλdλ=pr(1−p)−rΓ(r)k!∫0∞λk+r−1e−λ11−pdλ=pr(1−p)−rΓ(r)k!(1−p)k+r∫0∞λk+r−1e−λdλ⏟=Γ(r+k)=Γ(r+k)Γ(r)k!(1−p)kpr{\displaystyle {\begin{aligned}\operatorname {P} (X=k)&={\frac {1}{k!}}\int _{0}^{\infty }\lambda ^{k}e^{-\lambda }{\frac {({\frac {p}{1-p}})^{r}}{\Gamma (r)}}\lambda ^{r-1}e^{-{\frac {p}{1-p}}\lambda }\,d\lambda \\&={\frac {p^{r}(1-p)^{-r}}{\Gamma (r)k!}}\int _{0}^{\infty }\lambda ^{k+r-1}e^{-\lambda {\frac {1}{1-p}}}\,d\lambda \\&={\frac {p^{r}(1-p)^{-r}}{\Gamma (r)k!}}(1-p)^{k+r}\underbrace {\int _{0}^{\infty }\lambda ^{k+r-1}e^{-\lambda }\,d\lambda } _{=\Gamma (r+k)}\\&={\frac {\Gamma (r+k)}{\Gamma (r)k!}}(1-p)^{k}p^{r}\end{aligned}}} Therefore we getX∼NegB⁡(r,p).{\displaystyle X\sim \operatorname {NegB} (r,p).} Theorem—Compounding aPoisson distributionwith rate parameter distributed according to anexponential distributionyields ageometric distribution. Letπ(λ)=1βe−λβ{\displaystyle \pi (\lambda )={\frac {1}{\beta }}e^{-{\frac {\lambda }{\beta }}}}be a density of aExp⁡(1β){\displaystyle \operatorname {Exp} \left({\frac {1}{\beta }}\right)}distributed random variable. Usingintegration by partsntimes yields:P⁡(X=k)=1k!∫0∞λke−λ1βe−λβdλ=1k!β∫0∞λke−λ(1+ββ)dλ=1k!β⋅k!(β1+β)k∫0∞e−λ(1+ββ)dλ=(β1+β)k(11+β){\displaystyle {\begin{aligned}\operatorname {P} (X=k)&={\frac {1}{k!}}\int _{0}^{\infty }\lambda ^{k}e^{-\lambda }{\frac {1}{\beta }}e^{-{\frac {\lambda }{\beta }}}\,d\lambda \\&={\frac {1}{k!\beta }}\int _{0}^{\infty }\lambda ^{k}e^{-\lambda \left({\frac {1+\beta }{\beta }}\right)}\,d\lambda \\&={\frac {1}{k!\beta }}\cdot k!\left({\frac {\beta }{1+\beta }}\right)^{k}\int _{0}^{\infty }e^{-\lambda \left({\frac {1+\beta }{\beta }}\right)}\,d\lambda \\&=\left({\frac {\beta }{1+\beta }}\right)^{k}\left({\frac {1}{1+\beta }}\right)\end{aligned}}}Therefore we getX∼Geo(11+β).{\displaystyle X\sim \operatorname {Geo\left({\frac {1}{1+\beta }}\right)} .}
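A short simulation can make both the moment formulas and the gamma-mixing theorem above concrete. The sketch below draws the rate parameter from a gamma density and then a Poisson count given that rate; the particular values of r and p are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(1)
n = 500_000

# Gamma mixing density with shape r and rate p/(1-p), as in the theorem
# (numpy's gamma uses the scale parametrization, so scale = (1-p)/p).
r, p = 3.0, 0.4
lam = rng.gamma(shape=r, scale=(1.0 - p) / p, size=n)   # rate parameter ~ pi(lambda)
x = rng.poisson(lam)                                    # Poisson count given the rate

# Moments: E(X) = mu_pi and Var(X) = mu_pi + sigma_pi^2.
mu_pi = r * (1.0 - p) / p
sigma2_pi = r * ((1.0 - p) / p) ** 2
print("mean:    ", x.mean(), " vs ", mu_pi)
print("variance:", x.var(), " vs ", mu_pi + sigma2_pi)

# Distribution: the counts should follow NegB(r, p).
for k in range(5):
    print(k, round((x == k).mean(), 4), round(nbinom.pmf(k, r, p), 4))
```

The empirical probabilities match the negative binomial pmf; taking r = 1 turns the gamma density into an exponential one and reproduces the geometric special case stated in the second theorem.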
https://en.wikipedia.org/wiki/Mixed_Poisson_distribution
Bayesian hierarchical modelling is a statistical model written in multiple levels (hierarchical form) that estimates the parameters of the posterior distribution using the Bayesian method.[1] The sub-models combine to form the hierarchical model, and Bayes' theorem is used to integrate them with the observed data and account for all the uncertainty that is present. The result of this integration is that it allows calculation of the posterior distribution of the prior, providing an updated probability estimate. Frequentist statistics may yield conclusions seemingly incompatible with those offered by Bayesian statistics due to the Bayesian treatment of the parameters as random variables and its use of subjective information in establishing assumptions on these parameters.[2] As the approaches answer different questions, the formal results are not technically contradictory, but the two approaches disagree over which answer is relevant to particular applications. Bayesians argue that relevant information regarding decision-making and updating beliefs cannot be ignored and that hierarchical modeling has the potential to overrule classical methods in applications where respondents give multiple observational data. Moreover, the model has proven to be robust, with the posterior distribution less sensitive to the more flexible hierarchical priors. Hierarchical modeling, as its name implies, retains the nested data structure, and is used when information is available at several different levels of observational units. For example, in epidemiological modeling to describe infection trajectories for multiple countries, observational units are countries, and each country has its own time-based profile of daily infected cases.[3] In decline curve analysis to describe oil or gas production decline curves for multiple wells, observational units are oil or gas wells in a reservoir region, and each well has its own time-based profile of oil or gas production rates (usually, barrels per month).[4] Hierarchical modeling is used to devise computation-based strategies for multiparameter problems.[5] Statistical methods and models commonly involve multiple parameters that can be regarded as related or connected in such a way that the problem implies a dependence of the joint probability model for these parameters.[6] Individual degrees of belief, expressed in the form of probabilities, come with uncertainty.[7] Amidst this is the change of the degrees of belief over time. As was stated by Professor José M. Bernardo and Professor Adrian F. Smith, “The actuality of the learning process consists in the evolution of individual and subjective beliefs about the reality.” These subjective probabilities are more directly involved in the mind than are physical probabilities.[7] Hence, it is with this need to update beliefs that Bayesians have formulated an alternative statistical model which takes into account the prior occurrence of a particular event.[8] The assumed occurrence of a real-world event will typically modify preferences between certain options. This is done by modifying the degrees of belief attached, by an individual, to the events defining the options.[9] Suppose in a study of the effectiveness of cardiac treatments, with the patients in hospital j having survival probability θj{\displaystyle \theta _{j}}, the survival probability will be updated with the occurrence of y, the event in which a controversial serum is created which, as believed by some, increases survival in cardiac patients. 
In order to make updated probability statements aboutθj{\displaystyle \theta _{j}}, given the occurrence of eventy, we must begin with a model providing ajoint probability distributionforθj{\displaystyle \theta _{j}}andy. This can be written as a product of the two distributions that are often referred to as the prior distributionP(θ){\displaystyle P(\theta )}and thesampling distributionP(y∣θ){\displaystyle P(y\mid \theta )}respectively: Using the basic property ofconditional probability, the posterior distribution will yield: This equation, showing the relationship between the conditional probability and the individual events, is known as Bayes' theorem. This simple expression encapsulates the technical core of Bayesian inference which aims to deconstruct the probability,P(θ∣y){\displaystyle P(\theta \mid y)}, relative to solvable subsets of its supportive evidence.[9] The usual starting point of a statistical analysis is the assumption that thenvaluesy1,y2,…,yn{\displaystyle y_{1},y_{2},\ldots ,y_{n}}are exchangeable. If no information – other than datay– is available to distinguish any of theθj{\displaystyle \theta _{j}}’s from any others, and no ordering or grouping of the parameters can be made, one must assume symmetry of prior distribution parameters.[10]This symmetry is represented probabilistically by exchangeability. Generally, it is useful and appropriate to model data from an exchangeable distribution asindependently and identically distributed, given some unknown parameter vectorθ{\displaystyle \theta }, with distributionP(θ){\displaystyle P(\theta )}. For a fixed numbern, the sety1,y2,…,yn{\displaystyle y_{1},y_{2},\ldots ,y_{n}}is exchangeable if the joint probabilityP(y1,y2,…,yn){\displaystyle P(y_{1},y_{2},\ldots ,y_{n})}is invariant underpermutationsof the indices. That is, for every permutationπ{\displaystyle \pi }or(π1,π2,…,πn){\displaystyle (\pi _{1},\pi _{2},\ldots ,\pi _{n})}of (1, 2, …,n),P(y1,y2,…,yn)=P(yπ1,yπ2,…,yπn).{\displaystyle P(y_{1},y_{2},\ldots ,y_{n})=P(y_{\pi _{1}},y_{\pi _{2}},\ldots ,y_{\pi _{n}}).}[11] The following is an exchangeable, but not independent and identical (iid), example: Consider an urn with a red ball and a blue ball inside, with probability12{\displaystyle {\frac {1}{2}}}of drawing either. Balls are drawn without replacement, i.e. after one ball is drawn from thenballs, there will ben− 1 remaining balls left for the next draw. The probability of selecting a red ball in the first draw and a blue ball in the second draw is equal to the probability of selecting a blue ball on the first draw and a red on the second, both of which are 1/2: This makesy1{\displaystyle y_{1}}andy2{\displaystyle y_{2}}exchangeable. But the probability of selecting a red ball on the second draw given that the red ball has already been selected in the first is 0. This is not equal to the probability that the red ball is selected in the second draw, which is 1/2: Thus,y1{\displaystyle y_{1}}andy2{\displaystyle y_{2}}are not independent. Ifx1,…,xn{\displaystyle x_{1},\ldots ,x_{n}}are independent and identically distributed, then they are exchangeable, but the converse is not necessarily true.[12] Infinite exchangeability is the property that every finite subset of an infinite sequencey1{\displaystyle y_{1}},y2,…{\displaystyle y_{2},\ldots }is exchangeable. 
For anyn, the sequencey1,y2,…,yn{\displaystyle y_{1},y_{2},\ldots ,y_{n}}is exchangeable.[12] Bayesian hierarchical modeling makes use of two important concepts in deriving the posterior distribution,[1]namely: Suppose a random variableYfollows a normal distribution with parameterθ{\displaystyle \theta }as themeanand 1 as thevariance, that isY∣θ∼N(θ,1){\displaystyle Y\mid \theta \sim N(\theta ,1)}. Thetilderelation∼{\displaystyle \sim }can be read as "has the distribution of" or "is distributed as". Suppose also that the parameterθ{\displaystyle \theta }has a distribution given by anormal distributionwith meanμ{\displaystyle \mu }and variance 1, i.e.θ∣μ∼N(μ,1){\displaystyle \theta \mid \mu \sim N(\mu ,1)}. Furthermore,μ{\displaystyle \mu }follows another distribution given, for example, by thestandard normal distribution,N(0,1){\displaystyle {\text{N}}(0,1)}. The parameterμ{\displaystyle \mu }is called the hyperparameter, while its distribution given byN(0,1){\displaystyle {\text{N}}(0,1)}is an example of a hyperprior distribution. The notation of the distribution ofYchanges as another parameter is added, i.e.Y∣θ,μ∼N(θ,1){\displaystyle Y\mid \theta ,\mu \sim N(\theta ,1)}. If there is another stage, say,μ{\displaystyle \mu }following another normal distribution with a mean ofβ{\displaystyle \beta }and a variance ofϵ{\displaystyle \epsilon }, thenμ∼N(β,ϵ){\displaystyle \mu \sim N(\beta ,\epsilon )},{\displaystyle {\mbox{ }}}β{\displaystyle \beta }andϵ{\displaystyle \epsilon }can also be called hyperparameters with hyperprior distributions.[6] Letyj{\displaystyle y_{j}}be an observation andθj{\displaystyle \theta _{j}}a parameter governing the data generating process foryj{\displaystyle y_{j}}. Assume further that the parametersθ1,θ2,…,θj{\displaystyle \theta _{1},\theta _{2},\ldots ,\theta _{j}}are generated exchangeably from a common population, with distribution governed by a hyperparameterϕ{\displaystyle \phi }.The Bayesian hierarchical model contains the following stages: The likelihood, as seen in stage I isP(yj∣θj,ϕ){\displaystyle P(y_{j}\mid \theta _{j},\phi )}, withP(θj,ϕ){\displaystyle P(\theta _{j},\phi )}as its prior distribution. Note that the likelihood depends onϕ{\displaystyle \phi }only throughθj{\displaystyle \theta _{j}}. The prior distribution from stage I can be broken down into: Withϕ{\displaystyle \phi }as its hyperparameter with hyperprior distribution,P(ϕ){\displaystyle P(\phi )}. Thus, the posterior distribution is proportional to: As an example, a teacher wants to estimate how well a student did on theSAT. The teacher uses the currentgrade point average(GPA) of the student for an estimate. Their current GPA, denoted byY{\displaystyle Y}, has a likelihood given by some probability function with parameterθ{\displaystyle \theta }, i.e.Y∣θ∼P(Y∣θ){\displaystyle Y\mid \theta \sim P(Y\mid \theta )}. This parameterθ{\displaystyle \theta }is the SAT score of the student. The SAT score is viewed as a sample coming from a common population distribution indexed by another parameterϕ{\displaystyle \phi }, which is the high school grade of the student (freshman, sophomore, junior or senior).[14]That is,θ∣ϕ∼P(θ∣ϕ){\displaystyle \theta \mid \phi \sim P(\theta \mid \phi )}. Moreover, the hyperparameterϕ{\displaystyle \phi }follows its own distribution given byP(ϕ){\displaystyle P(\phi )}, a hyperprior. 
These relationships can be used to calculate the likelihood of a specific SAT score relative to a particular GPA: All information in the problem will be used to solve for the posterior distribution. Instead of solving only using the prior distribution and the likelihood function, using hyperpriors allows a more nuanced distinction of relationships between given variables.[15] In general, the joint posterior distribution of interest in 2-stage hierarchical models is: For 3-stage hierarchical models, the posterior distribution is given by: A three stage version of Bayesian hierarchical modeling could be used to calculate probability at 1) an individual level, 2) at the level of population and 3) the prior, which is an assumed probability distribution that takes place before evidence is initially acquired: Stage 1: Individual-Level Model yij=f(tij;θ1i,θ2i,…,θli,…,θKi)+ϵij,ϵij∼N(0,σ2),i=1,…,N,j=1,…,Mi.{\displaystyle {y}_{ij}=f(t_{ij};\theta _{1i},\theta _{2i},\ldots ,\theta _{li},\ldots ,\theta _{Ki})+\epsilon _{ij},\quad \epsilon _{ij}\sim N(0,\sigma ^{2}),\quad i=1,\ldots ,N,\,j=1,\ldots ,M_{i}.} Stage 2: Population Model θli=αl+∑b=1Pβlbxib+ηli,ηli∼N(0,ωl2),i=1,…,N,l=1,…,K.{\displaystyle \theta _{li}=\alpha _{l}+\sum _{b=1}^{P}\beta _{lb}x_{ib}+\eta _{li},\quad \eta _{li}\sim N(0,\omega _{l}^{2}),\quad i=1,\ldots ,N,\,l=1,\ldots ,K.} Stage 3: Prior σ2∼π(σ2),αl∼π(αl),(βl1,…,βlb,…,βlP)∼π(βl1,…,βlb,…,βlP),ωl2∼π(ωl2),l=1,…,K.{\displaystyle \sigma ^{2}\sim \pi (\sigma ^{2}),\quad \alpha _{l}\sim \pi (\alpha _{l}),\quad (\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP})\sim \pi (\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP}),\quad \omega _{l}^{2}\sim \pi (\omega _{l}^{2}),\quad l=1,\ldots ,K.} Here,yij{\displaystyle y_{ij}}denotes the continuous response of thei{\displaystyle i}-th subject at the time pointtij{\displaystyle t_{ij}}, andxib{\displaystyle x_{ib}}is theb{\displaystyle b}-th covariate of thei{\displaystyle i}-th subject. Parameters involved in the model are written in Greek letters. The variablef(t;θ1,…,θK){\displaystyle f(t;\theta _{1},\ldots ,\theta _{K})}is a known function parameterized by theK{\displaystyle K}-dimensional vector(θ1,…,θK){\displaystyle (\theta _{1},\ldots ,\theta _{K})}. Typically,f{\displaystyle f}is a `nonlinear' function and describes the temporal trajectory of individuals. In the model,ϵij{\displaystyle \epsilon _{ij}}andηli{\displaystyle \eta _{li}}describe within-individual variability and between-individual variability, respectively. If the prior is not considered, the relationship reduces to a frequentist nonlinear mixed-effect model. 
A central task in the application of the Bayesian nonlinear mixed-effect models is to evaluate posterior density: π({θli}i=1,l=1N,K,σ2,{αl}l=1K,{βlb}l=1,b=1K,P,{ωl}l=1K|{yij}i=1,j=1N,Mi){\displaystyle \pi (\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K}|\{y_{ij}\}_{i=1,j=1}^{N,M_{i}})} ∝π({yij}i=1,j=1N,Mi,{θli}i=1,l=1N,K,σ2,{αl}l=1K,{βlb}l=1,b=1K,P,{ωl}l=1K){\displaystyle \propto \pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}},\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})} =π({yij}i=1,j=1N,Mi|{θli}i=1,l=1N,K,σ2)⏟Stage1:Individual−LevelModel×π({θli}i=1,l=1N,K|{αl}l=1K,{βlb}l=1,b=1K,P,{ωl}l=1K)⏟Stage2:PopulationModel×p(σ2,{αl}l=1K,{βlb}l=1,b=1K,P,{ωl}l=1K)⏟Stage3:Prior{\displaystyle =\underbrace {\pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}}|\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2})} _{Stage1:Individual-LevelModel}\times \underbrace {\pi (\{\theta _{li}\}_{i=1,l=1}^{N,K}|\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})} _{Stage2:PopulationModel}\times \underbrace {p(\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})} _{Stage3:Prior}} The panel on the right displays Bayesian research cycle using Bayesian nonlinear mixed-effects model.[16]A research cycle using the Bayesian nonlinear mixed-effects model comprises two steps: (a) standard research cycle and (b) Bayesian-specific workflow. A standard research cycle involves 1) literature review, 2) defining a problem and 3) specifying the research question and hypothesis. Bayesian-specific workflow stratifies this approach to include three sub-steps: (b)–(i) formalizing prior distributions based on background knowledge and prior elicitation; (b)–(ii) determining the likelihood function based on a nonlinear functionf{\displaystyle f}; and (b)–(iii) making a posterior inference. The resulting posterior inference can be used to start a new research cycle.
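As a minimal numerical illustration of the hyperprior example given earlier (Y | θ ~ N(θ, 1), θ | μ ~ N(μ, 1), μ ~ N(0, 1)), the sketch below simply simulates the three stages forward; the sample size and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Forward simulation of the three stages of the normal example above.
mu = rng.normal(0.0, 1.0, size=n)     # hyperprior:  mu ~ N(0, 1)
theta = rng.normal(mu, 1.0)           # prior:       theta | mu ~ N(mu, 1)
y = rng.normal(theta, 1.0)            # likelihood:  Y | theta ~ N(theta, 1)

# Marginally, Y has mean 0 and variance 1 + 1 + 1 = 3, because the unit
# variances contributed by the three stages add.
print("sample mean:    ", y.mean())
print("sample variance:", y.var())
```

Conditioning such draws on observed data, for example with a Markov chain Monte Carlo sampler, is what turns this forward model into the posterior inference described above.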
https://en.wikipedia.org/wiki/Bayesian_hierarchical_modeling
Inprobability theoryandstatistics, themarginal distributionof asubsetof acollectionofrandom variablesis theprobability distributionof the variables contained in the subset. It gives the probabilities of various values of the variables in the subset without reference to the values of the other variables. This contrasts with aconditional distribution, which gives the probabilities contingent upon the values of the other variables. Marginal variablesare those variables in the subset of variables being retained. These concepts are "marginal" because they can be found by summing values in a table along rows or columns, and writing the sum in the margins of the table.[1]The distribution of the marginal variables (the marginal distribution) is obtained bymarginalizing(that is, focusing on the sums in the margin) over the distribution of the variables being discarded, and the discarded variables are said to have beenmarginalized out. The context here is that the theoretical studies being undertaken, or thedata analysisbeing done, involves a wider set of random variables but that attention is being limited to a reduced number of those variables. In many applications, an analysis may start with a given collection of random variables, then first extend the set by defining new ones (such as the sum of the original random variables) and finally reduce the number by placing interest in the marginal distribution of a subset (such as the sum). Several different analyses may be done, each treating a different subset of variables as the marginal distribution. Given a knownjoint distributionof twodiscreterandom variables, say,XandY, the marginal distribution of either variable –Xfor example – is theprobability distributionofXwhen the values ofYare not taken into consideration. This can be calculated by summing thejoint probabilitydistribution over all values ofY. Naturally, the converse is also true: the marginal distribution can be obtained forYby summing over the separate values ofX. Amarginal probabilitycan always be written as anexpected value:pX(x)=∫ypX∣Y(x∣y)pY(y)dy=EY⁡[pX∣Y(x∣Y)].{\displaystyle p_{X}(x)=\int _{y}p_{X\mid Y}(x\mid y)\,p_{Y}(y)\,\mathrm {d} y=\operatorname {E} _{Y}[p_{X\mid Y}(x\mid Y)]\;.} Intuitively, the marginal probability ofXis computed by examining the conditional probability ofXgiven a particular value ofY, and then averaging this conditional probability over the distribution of all values ofY. This follows from the definition ofexpected value(after applying thelaw of the unconscious statistician)EY⁡[f(Y)]=∫yf(y)pY(y)dy.{\displaystyle \operatorname {E} _{Y}[f(Y)]=\int _{y}f(y)p_{Y}(y)\,\mathrm {d} y.} Therefore, marginalization provides the rule for the transformation of the probability distribution of a random variableYand another random variableX=g(Y):pX(x)=∫ypX∣Y(x∣y)pY(y)dy=∫yδ(x−g(y))pY(y)dy.{\displaystyle p_{X}(x)=\int _{y}p_{X\mid Y}(x\mid y)\,p_{Y}(y)\,\mathrm {d} y=\int _{y}\delta {\big (}x-g(y){\big )}\,p_{Y}(y)\,\mathrm {d} y.} Given twocontinuousrandom variablesXandYwhosejoint distributionis known, then the marginalprobability density functioncan be obtained by integrating thejoint probabilitydistribution,f, overY,and vice versa. That is wherex∈[a,b]{\displaystyle x\in [a,b]}, andy∈[c,d]{\displaystyle y\in [c,d]}. Finding the marginalcumulative distribution functionfrom the joint cumulative distribution function is easy. 
Recall that the joint cumulative distribution function is defined by F(x, y) = P(X ≤ x, Y ≤ y). If X and Y jointly take values on [a, b] × [c, d], then FX(x) = F(x, d) and FY(y) = F(b, y). If d is ∞, then this becomes a limit FX(x)=limy→∞F(x,y){\textstyle F_X(x)=\lim _{y\to \infty }F(x,y)}. Likewise for FY(y){\displaystyle F_{Y}(y)}. The marginal probability is the probability of a single event occurring, independent of other events. A conditional probability, on the other hand, is the probability that an event occurs given that another specific event has already occurred. This means that the calculation for one variable is dependent on another variable.[2] The conditional distribution of a variable given another variable is the joint distribution of both variables divided by the marginal distribution of the other variable.[3] That is, for discrete random variables, pY|X(y | x) = P(X = x, Y = y) / P(X = x). Suppose there is data from a classroom of 200 students on the amount of time studied (X) and the percentage of correct answers (Y).[4] Assuming that X and Y are discrete random variables, the joint distribution of X and Y can be described by listing all the possible values of p(xi, yj), as shown in Table 3. The marginal distribution can be used to determine how many students scored 20 or below: pY(y1)=PY(Y=y1)=∑i=14P(xi,y1)=2200+8200=10200{\displaystyle p_{Y}(y_{1})=P_{Y}(Y=y_{1})=\sum _{i=1}^{4}P(x_{i},y_{1})={\frac {2}{200}}+{\frac {8}{200}}={\frac {10}{200}}}, meaning 10 students or 5%. The conditional distribution can be used to determine the probability that a student who studied 60 minutes or more obtains a score of 20 or below: pY|X(y1|x4)=P(Y=y1|X=x4)=P(X=x4,Y=y1)P(X=x4)=8/20070/200=870=435{\displaystyle p_{Y|X}(y_{1}|x_{4})=P(Y=y_{1}|X=x_{4})={\frac {P(X=x_{4},Y=y_{1})}{P(X=x_{4})}}={\frac {8/200}{70/200}}={\frac {8}{70}}={\frac {4}{35}}}, meaning there is about an 11% probability of scoring 20 or below after having studied for at least 60 minutes. Suppose that the probability that a pedestrian will be hit by a car, while crossing the road at a pedestrian crossing, without paying attention to the traffic light, is to be computed. Let H be a discrete random variable taking one value from {Hit, Not Hit}. Let L (for traffic light) be a discrete random variable taking one value from {Red, Yellow, Green}. Realistically, H will be dependent on L. That is, P(H = Hit) will take different values depending on whether L is red, yellow or green (and likewise for P(H = Not Hit)). A person is, for example, far more likely to be hit by a car when trying to cross while the lights for perpendicular traffic are green than if they are red. In other words, for any given possible pair of values for H and L, one must consider the joint probability distribution of H and L to find the probability of that pair of events occurring together if the pedestrian ignores the state of the light. However, in trying to calculate the marginal probability P(H = Hit), what is being sought is the probability that H = Hit in the situation in which the particular value of L is unknown and in which the pedestrian ignores the state of the light. In general, a pedestrian can be hit if the lights are red OR if the lights are yellow OR if the lights are green. So, the answer for the marginal probability can be found by summing P(H | L) for all possible values of L, with each value of L weighted by its probability of occurring. Here is a table showing the conditional probabilities of being hit, depending on the state of the lights. (Note that the columns in this table must add up to 1 because the probability of being hit or not hit is 1 regardless of the state of the light.) To find the joint probability distribution, more data is required. 
For example, suppose P(L = red) = 0.2, P(L = yellow) = 0.1, and P(L = green) = 0.7. Multiplying each column in the conditional distribution by the probability of that column occurring results in the joint probability distribution of H and L, given in the central 2×3 block of entries. (Note that the cells in this 2×3 block add up to 1). The marginal probability P(H = Hit) is the sum 0.572 along the H = Hit row of this joint distribution table, as this is the probability of being hit when the lights are red OR yellow OR green. Similarly, the marginal probability that P(H = Not Hit) is the sum along the H = Not Hit row. Formultivariate distributions, formulae similar to those above apply with the symbolsXand/orYbeing interpreted as vectors. In particular, each summation or integration would be over all variables except those contained inX.[5] That means, IfX1,X2,…,Xnarediscreterandom variables, then the marginalprobability mass functionshould bepXi(k)=∑p(x1,x2,…,xi−1,k,xi+1,…,xn);{\displaystyle p_{X_{i}}(k)=\sum p(x_{1},x_{2},\dots ,x_{i-1},k,x_{i+1},\dots ,x_{n});}ifX1,X2,…,Xnarecontinuous random variables, then the marginalprobability density functionshould befXi(xi)=∫−∞∞∫−∞∞∫−∞∞⋯∫−∞∞f(x1,x2,…,xn)dx1dx2⋯dxi−1dxi+1⋯dxn.{\displaystyle f_{X_{i}}(x_{i})=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }\cdots \int _{-\infty }^{\infty }f(x_{1},x_{2},\dots ,x_{n})dx_{1}dx_{2}\cdots dx_{i-1}dx_{i+1}\cdots dx_{n}.}
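The pedestrian example can be reproduced in a few lines. Since the table of conditional probabilities is not shown in the text, the values of P(H = Hit | L) below are assumptions chosen only to be consistent with the quoted marginal probability of 0.572; the light probabilities 0.2, 0.1 and 0.7 are the ones given above.

```python
# Marginal distribution of the traffic light, as given in the example.
p_light = {"red": 0.2, "yellow": 0.1, "green": 0.7}

# Conditional probabilities P(H = Hit | L); assumed values consistent with
# the marginal probability 0.572 quoted in the text.
p_hit_given = {"red": 0.01, "yellow": 0.1, "green": 0.8}

# Joint distribution: P(H = Hit, L = l) = P(H = Hit | L = l) * P(L = l).
joint_hit = {l: p_hit_given[l] * p_light[l] for l in p_light}

# Marginalizing out L: sum the joint probabilities over the values of L.
p_hit = sum(joint_hit.values())
print("joint row for H = Hit:", joint_hit)         # {'red': 0.002, 'yellow': 0.01, 'green': 0.56}
print("P(H = Hit)     =", round(p_hit, 3))         # 0.572
print("P(H = Not Hit) =", round(1.0 - p_hit, 3))   # 0.428
```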
https://en.wikipedia.org/wiki/Marginal_distribution
Inprobability theoryandstatistics, the conditional probability distribution is a probability distribution that describes the probability of an outcome given the occurrence of a particular event. Given twojointly distributedrandom variablesX{\displaystyle X}andY{\displaystyle Y}, theconditional probability distributionofY{\displaystyle Y}givenX{\displaystyle X}is theprobability distributionofY{\displaystyle Y}whenX{\displaystyle X}is known to be a particular value; in some cases the conditional probabilities may be expressed as functions containing the unspecified valuex{\displaystyle x}ofX{\displaystyle X}as a parameter. When bothX{\displaystyle X}andY{\displaystyle Y}arecategorical variables, aconditional probability tableis typically used to represent the conditional probability. The conditional distribution contrasts with themarginal distributionof a random variable, which is its distribution without reference to the value of the other variable. If the conditional distribution ofY{\displaystyle Y}givenX{\displaystyle X}is acontinuous distribution, then itsprobability density functionis known as theconditional density function.[1]The properties of a conditional distribution, such as themoments, are often referred to by corresponding names such as theconditional meanandconditional variance. More generally, one can refer to the conditional distribution of a subset of a set of more than two variables; this conditional distribution is contingent on the values of all the remaining variables, and if more than one variable is included in the subset then this conditional distribution is the conditionaljoint distributionof the included variables. Fordiscrete random variables, the conditional probability mass function ofY{\displaystyle Y}givenX=x{\displaystyle X=x}can be written according to its definition as: Due to the occurrence ofP(X=x){\displaystyle P(X=x)}in the denominator, this is defined only for non-zero (hence strictly positive)P(X=x).{\displaystyle P(X=x).} The relation with the probability distribution ofX{\displaystyle X}givenY{\displaystyle Y}is: Consider the roll of a fair die and letX=1{\displaystyle X=1}if the number is even (i.e., 2, 4, or 6) andX=0{\displaystyle X=0}otherwise. Furthermore, letY=1{\displaystyle Y=1}if the number is prime (i.e., 2, 3, or 5) andY=0{\displaystyle Y=0}otherwise. Then the unconditional probability thatX=1{\displaystyle X=1}is 3/6 = 1/2 (since there are six possible rolls of the dice, of which three are even), whereas the probability thatX=1{\displaystyle X=1}conditional onY=1{\displaystyle Y=1}is 1/3 (since there are three possible prime number rolls—2, 3, and 5—of which one is even). Similarly forcontinuous random variables, the conditionalprobability density functionofY{\displaystyle Y}given the occurrence of the valuex{\displaystyle x}ofX{\displaystyle X}can be written as[2] wherefX,Y(x,y){\displaystyle f_{X,Y}(x,y)}gives thejoint densityofX{\displaystyle X}andY{\displaystyle Y}, whilefX(x){\displaystyle f_{X}(x)}gives themarginal densityforX{\displaystyle X}. Also in this case it is necessary thatfX(x)>0{\displaystyle f_{X}(x)>0}. The relation with the probability distribution ofX{\displaystyle X}givenY{\displaystyle Y}is given by: The concept of the conditional distribution of a continuous random variable is not as intuitive as it might seem:Borel's paradoxshows that conditional probability density functions need not be invariant under coordinate transformations. 
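The die example above can be checked by direct enumeration; the short sketch below codes X as the even indicator and Y as the prime indicator, exactly as in the text.

```python
from fractions import Fraction

faces = [1, 2, 3, 4, 5, 6]
even = {2, 4, 6}      # X = 1 on these faces
prime = {2, 3, 5}     # Y = 1 on these faces

# Unconditional probability that X = 1.
p_x1 = Fraction(len([f for f in faces if f in even]), len(faces))

# Conditional probability that X = 1 given Y = 1: restrict attention to the
# prime faces and count how many of them are even.
prime_faces = [f for f in faces if f in prime]
p_x1_given_y1 = Fraction(len([f for f in prime_faces if f in even]), len(prime_faces))

print("P(X = 1)         =", p_x1)            # 1/2
print("P(X = 1 | Y = 1) =", p_x1_given_y1)   # 1/3
```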
The graph shows abivariate normal joint densityfor random variablesX{\displaystyle X}andY{\displaystyle Y}. To see the distribution ofY{\displaystyle Y}conditional onX=70{\displaystyle X=70}, one can first visualize the lineX=70{\displaystyle X=70}in theX,Y{\displaystyle X,Y}plane, and then visualize the plane containing that line and perpendicular to theX,Y{\displaystyle X,Y}plane. The intersection of that plane with the joint normal density, once rescaled to give unit area under the intersection, is the relevant conditional density ofY{\displaystyle Y}. Y∣X=70∼N(μY+σYσXρ(70−μX),(1−ρ2)σY2).{\displaystyle Y\mid X=70\ \sim \ {\mathcal {N}}\left(\mu _{Y}+{\frac {\sigma _{Y}}{\sigma _{X}}}\rho (70-\mu _{X}),\,(1-\rho ^{2})\sigma _{Y}^{2}\right).} Random variablesX{\displaystyle X},Y{\displaystyle Y}areindependentif and only if the conditional distribution ofY{\displaystyle Y}givenX{\displaystyle X}is, for all possible realizations ofX{\displaystyle X}, equal to the unconditional distribution ofY{\displaystyle Y}. For discrete random variables this meansP(Y=y|X=x)=P(Y=y){\displaystyle P(Y=y|X=x)=P(Y=y)}for all possibley{\displaystyle y}andx{\displaystyle x}withP(X=x)>0{\displaystyle P(X=x)>0}. For continuous random variablesX{\displaystyle X}andY{\displaystyle Y}, having ajoint density function, it meansfY(y|X=x)=fY(y){\displaystyle f_{Y}(y|X=x)=f_{Y}(y)}for all possibley{\displaystyle y}andx{\displaystyle x}withfX(x)>0{\displaystyle f_{X}(x)>0}. Seen as a function ofy{\displaystyle y}for givenx{\displaystyle x},P(Y=y|X=x){\displaystyle P(Y=y|X=x)}is a probability mass function and so the sum over ally{\displaystyle y}(or integral if it is a conditional probability density) is 1. Seen as a function ofx{\displaystyle x}for giveny{\displaystyle y}, it is alikelihood function, so that the sum (or integral) over allx{\displaystyle x}need not be 1. Additionally, a marginal of a joint distribution can be expressed as the expectation of the corresponding conditional distribution. For instance,pX(x)=EY[pX|Y(x|Y)]{\displaystyle p_{X}(x)=E_{Y}[p_{X|Y}(x\ |\ Y)]}. Let(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}be a probability space,G⊆F{\displaystyle {\mathcal {G}}\subseteq {\mathcal {F}}}aσ{\displaystyle \sigma }-field inF{\displaystyle {\mathcal {F}}}. GivenA∈F{\displaystyle A\in {\mathcal {F}}}, theRadon-Nikodym theoremimplies that there is[3]aG{\displaystyle {\mathcal {G}}}-measurable random variableP(A∣G):Ω→R{\displaystyle P(A\mid {\mathcal {G}}):\Omega \to \mathbb {R} }, called theconditional probability, such that∫GP(A∣G)(ω)dP(ω)=P(A∩G){\displaystyle \int _{G}P(A\mid {\mathcal {G}})(\omega )dP(\omega )=P(A\cap G)}for everyG∈G{\displaystyle G\in {\mathcal {G}}}, and such a random variable is uniquely defined up to sets of probability zero. A conditional probability is calledregularifP⁡(⋅∣G)(ω){\displaystyle \operatorname {P} (\cdot \mid {\mathcal {G}})(\omega )}is aprobability measureon(Ω,F){\displaystyle (\Omega ,{\mathcal {F}})}for allω∈Ω{\displaystyle \omega \in \Omega }a.e. Special cases: LetX:Ω→E{\displaystyle X:\Omega \to E}be a(E,E){\displaystyle (E,{\mathcal {E}})}-valued random variable. 
For eachB∈E{\displaystyle B\in {\mathcal {E}}}, defineμX|G(B|G)=P(X−1(B)|G).{\displaystyle \mu _{X\,|\,{\mathcal {G}}}(B\,|\,{\mathcal {G}})=\mathrm {P} (X^{-1}(B)\,|\,{\mathcal {G}}).}For anyω∈Ω{\displaystyle \omega \in \Omega }, the functionμX|G(⋅|G)(ω):E→R{\displaystyle \mu _{X\,|{\mathcal {G}}}(\cdot \,|{\mathcal {G}})(\omega ):{\mathcal {E}}\to \mathbb {R} }is called theconditional probability distributionofX{\displaystyle X}givenG{\displaystyle {\mathcal {G}}}. If it is a probability measure on(E,E){\displaystyle (E,{\mathcal {E}})}, then it is calledregular. For a real-valued random variable (with respect to the Borelσ{\displaystyle \sigma }-fieldR1{\displaystyle {\mathcal {R}}^{1}}onR{\displaystyle \mathbb {R} }), every conditional probability distribution is regular.[4]In this case,E[X∣G]=∫−∞∞xμX∣G(dx,⋅){\displaystyle E[X\mid {\mathcal {G}}]=\int _{-\infty }^{\infty }x\,\mu _{X\mid {\mathcal {G}}}(dx,\cdot )}almost surely. For any eventA∈F{\displaystyle A\in {\mathcal {F}}}, define theindicator function: which is a random variable. Note that the expectation of this random variable is equal to the probability ofAitself: Given aσ{\displaystyle \sigma }-fieldG⊆F{\displaystyle {\mathcal {G}}\subseteq {\mathcal {F}}}, the conditional probabilityP⁡(A∣G){\displaystyle \operatorname {P} (A\mid {\mathcal {G}})}is a version of theconditional expectationof the indicator function forA{\displaystyle A}: An expectation of a random variable with respect to a regular conditional probability is equal to its conditional expectation. Consider the probability space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},\mathbb {P} )}and a sub-sigma fieldA⊂F{\displaystyle {\mathcal {A}}\subset {\mathcal {F}}}. The sub-sigma fieldA{\displaystyle {\mathcal {A}}}can be loosely interpreted as containing a subset of the information inF{\displaystyle {\mathcal {F}}}. For example, we might think ofP(B|A){\displaystyle \mathbb {P} (B|{\mathcal {A}})}as the probability of the eventB{\displaystyle B}given the information inA{\displaystyle {\mathcal {A}}}. Also recall that an eventB{\displaystyle B}is independent of a sub-sigma fieldA{\displaystyle {\mathcal {A}}}ifP(B|A)=P(B){\displaystyle \mathbb {P} (B|A)=\mathbb {P} (B)}for allA∈A{\displaystyle A\in {\mathcal {A}}}. It is incorrect to conclude in general that the information inA{\displaystyle {\mathcal {A}}}does not tell us anything about the probability of eventB{\displaystyle B}occurring. This can be shown with a counter-example: Consider a probability space on the unit interval,Ω=[0,1]{\displaystyle \Omega =[0,1]}. LetG{\displaystyle {\mathcal {G}}}be the sigma-field of all countable sets and sets whose complement is countable. So each set inG{\displaystyle {\mathcal {G}}}has measure0{\displaystyle 0}or1{\displaystyle 1}and so is independent of each event inF{\displaystyle {\mathcal {F}}}. However, notice thatG{\displaystyle {\mathcal {G}}}also contains all the singleton events inF{\displaystyle {\mathcal {F}}}(those sets which contain only a singleω∈Ω{\displaystyle \omega \in \Omega }). So knowing which of the events inG{\displaystyle {\mathcal {G}}}occurred is equivalent to knowing exactly whichω∈Ω{\displaystyle \omega \in \Omega }occurred! So in one sense,G{\displaystyle {\mathcal {G}}}contains no information aboutF{\displaystyle {\mathcal {F}}}(it is independent of it), and in another sense it contains all the information inF{\displaystyle {\mathcal {F}}}.[5][page needed]
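Returning to the bivariate normal example above, the stated conditional-distribution formula for Y given X = 70 can be checked by simulation. The parameter values below (means, standard deviations and correlation) are illustrative assumptions, since the text does not specify them.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed parameters of the bivariate normal (not given in the text).
mu_x, mu_y = 65.0, 170.0
sigma_x, sigma_y = 5.0, 20.0
rho = 0.6

cov = [[sigma_x**2, rho * sigma_x * sigma_y],
       [rho * sigma_x * sigma_y, sigma_y**2]]
x, y = rng.multivariate_normal([mu_x, mu_y], cov, size=2_000_000).T

# Empirical conditional distribution of Y given X close to 70, compared with
# the formula N(mu_Y + (sigma_Y/sigma_X) rho (70 - mu_X), (1 - rho^2) sigma_Y^2).
sel = np.abs(x - 70.0) < 0.05
cond_mean = mu_y + (sigma_y / sigma_x) * rho * (70.0 - mu_x)
cond_var = (1.0 - rho**2) * sigma_y**2

print("conditional mean:    ", y[sel].mean(), " formula:", cond_mean)
print("conditional variance:", y[sel].var(), " formula:", cond_var)
```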
https://en.wikipedia.org/wiki/Conditional_probability_distribution
Instatistics,overdispersionis the presence of greater variability (statistical dispersion) in a data set than would be expected based on a givenstatistical model. A common task in appliedstatisticsis choosing aparametric modelto fit a given set of empirical observations. This necessitates an assessment of thefitof the chosen model. It is usually possible to choose the model parameters in such a way that the theoreticalpopulation meanof the model is approximately equal to thesample mean. However, especially for simple models with few parameters, theoretical predictions may not match empirical observations for highermoments. When the observedvarianceis higher than the variance of a theoretical model,overdispersionhas occurred. Conversely,underdispersionmeans that there was less variation in the data than predicted. Overdispersion is a very common feature in applied data analysis because in practice, populations are frequentlyheterogeneous(non-uniform) contrary to the assumptions implicit within widely used simple parametric models. Overdispersion is often encountered when fitting very simple parametric models, such as those based on thePoisson distribution. The Poisson distribution has one free parameter and does not allow for the variance to be adjusted independently of the mean. The choice of a distribution from the Poisson family is often dictated by the nature of the empirical data. For example,Poisson regressionanalysis is commonly used to modelcount data. If overdispersion is a feature, an alternative model with additional free parameters may provide a better fit. In the case of count data, a Poissonmixture modellike thenegative binomial distributioncan be proposed instead, in which the mean of the Poisson distribution can itself be thought of as a random variable drawn – in this case – from thegamma distributionthereby introducing an additional free parameter (note the resulting negative binomial distribution is completely characterized by two parameters). As a more concrete example, it has been observed that the number of boys born to families does not conform faithfully to abinomial distributionas might be expected.[1]Instead, the sex ratios of families seem to skew toward either boys or girls (see, for example theTrivers–Willard hypothesisfor one possible explanation) i.e. there are more all-boy families, more all-girl families and not enough families close to the population 51:49 boy-to-girl mean ratio than expected from a binomial distribution, and the resulting empirical variance is larger than specified by a binomial model. In this case, thebeta-binomial modeldistribution is a popular and analytically tractable alternative model to the binomial distribution since it provides a better fit to the observed data.[2]To capture the heterogeneity of the families, one can think of the probability parameter of the binomial model (say, probability of being a boy) is itself a random variable (i.e.random effects model) drawn for each family from abeta distributionas the mixing distribution. The resultingcompound distribution(beta-binomial) has an additional free parameter. Another common model for overdispersion—when some of the observations are notBernoulli—arises from introducing anormal random variableinto alogistic model. Software is widely available for fitting this type ofmultilevel model. In this case, if the variance of the normal variable is zero, the model reduces to the standard (undispersed)logistic regression. 
This model has an additional free parameter, namely the variance of the normal variable. With respect to binomial random variables, the concept of overdispersion makes sense only if n>1 (i.e. overdispersion is nonsensical for Bernoulli random variables). As thenormal distribution(Gaussian) has variance as a parameter, any data with finite variance (including any finite data) can be modeled with a normal distribution with the exact variance – the normal distribution is a two-parameter model, with mean and variance. Thus, in the absence of an underlying model, there is no notion of data being overdispersed relative to the normal model, though the fit may be poor in other respects (such as the higher moments ofskew,kurtosis, etc.). However, in the case that the data is modeled by a normal distribution with an expected variation, it can be over- or under-dispersed relative to that prediction. For example, in astatistical survey, themargin of error(determined by sample size) predicts thesampling errorand hence dispersion of results on repeated surveys. If one performs ameta-analysisof repeated surveys of a fixed population (say with a given sample size, so margin of error is the same), one expects the results to fall on a normal distribution with standard deviation equal to the margin of error. However, in the presence ofstudy heterogeneitywhere studies have differentsampling bias, the distribution is instead acompound distributionand will be overdispersed relative to the predicted distribution. For example, given repeatedopinion pollsall with a margin of error of 3%, if they are conducted by different polling organizations, one expects the results to have standard deviation greater than 3%, due to pollster bias from different methodologies. Over- and underdispersion are terms which have been adopted in branches of thebiological sciences. Inparasitology, the term 'overdispersion' is generally used as defined here – meaning a distribution with a higher than expected variance. In some areas ofecology, however, meanings have been transposed, so that overdispersion is actually taken to mean more even (lower variance) than expected. This confusion has caused some ecologists to suggest that the terms 'aggregated', or 'contagious', would be better used in ecology for 'overdispersed'.[3]Such preferences are creeping intoparasitologytoo.[4]Generally this suggestion has not been heeded, and confusion persists in the literature. Furthermore, indemography, overdispersion is often evident in the analysis of death count data, but demographers prefer the term 'unobserved heterogeneity'.
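As a minimal illustration of detecting overdispersion in count data, the following sketch (my own code with synthetic data; the variance-to-mean comparison is a common diagnostic, not a procedure taken from this article) draws counts from a gamma-Poisson mixture and checks how far the sample variance exceeds the mean implied by a Poisson model:

```python
# A minimal sketch (not from the article): checking a count data set for
# overdispersion relative to a Poisson model, for which variance equals mean.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example data: counts drawn from a gamma-Poisson (negative
# binomial) mixture, so they are overdispersed relative to a plain Poisson.
rates = rng.gamma(shape=2.0, scale=2.5, size=1000)   # heterogeneous rates
counts = rng.poisson(rates)

mean = counts.mean()
var = counts.var(ddof=1)

# Dispersion index (variance-to-mean ratio): close to 1 for Poisson data,
# substantially > 1 indicates overdispersion, < 1 underdispersion.
dispersion_index = var / mean
print(f"mean={mean:.2f}  variance={var:.2f}  variance/mean={dispersion_index:.2f}")
```

For these synthetic data the ratio is well above 1, which is the signal that a model with an extra dispersion parameter (such as the negative binomial) may fit better than the Poisson.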
https://en.wikipedia.org/wiki/Overdispersion
Instatistics, anexpectation–maximization(EM)algorithmis aniterative methodto find (local)maximum likelihoodormaximum a posteriori(MAP) estimates ofparametersinstatistical models, where the model depends on unobservedlatent variables.[1]The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of thelog-likelihoodevaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on theEstep. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step. It can be used, for example, to estimate a mixture ofgaussians, or to solve the multiple linear regression problem.[2] The EM algorithm was explained and given its name in a classic 1977 paper byArthur Dempster,Nan Laird, andDonald Rubin.[3]They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies byCedric Smith.[4]Another was proposed byH.O. Hartleyin 1958, and Hartley and Hocking in 1977, from which many of the ideas in the Dempster–Laird–Rubin paper originated.[5]Another one by S.K Ng, Thriyambakam Krishnan and G.J McLachlan in 1977.[6]Hartley’s ideas can be broadened to any grouped discrete distribution. A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers,[7][8][9]following his collaboration withPer Martin-LöfandAnders Martin-Löf.[10][11][12][13][14]The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems. The Dempster–Laird–Rubin paper established the EM method as an important tool of statistical analysis. See also Meng and van Dyk (1997). The convergence analysis of the Dempster–Laird–Rubin algorithm was flawed and a correct convergence analysis was published byC. F. Jeff Wuin 1983.[15]Wu's proof established the EM method's convergence also outside of theexponential family, as claimed by Dempster–Laird–Rubin.[15] The EM algorithm is used to find (local)maximum likelihoodparameters of astatistical modelin cases where the equations cannot be solved directly. Typically these models involvelatent variablesin addition to unknownparametersand known data observations. That is, eithermissing valuesexist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, amixture modelcan be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs. Finding a maximum likelihood solution typically requires taking thederivativesof thelikelihood functionwith respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation. The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. 
One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It's not obvious that this will work, but it can be proven in this context. Additionally, it can be proven that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a local maximum or asaddle point.[15]In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also havesingularitiesin them, i.e., nonsensical maxima. For example, one of thesolutionsthat may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points. Given thestatistical modelwhich generates a setX{\displaystyle \mathbf {X} }of observed data, a set of unobserved latent data ormissing valuesZ{\displaystyle \mathbf {Z} }, and a vector of unknown parametersθ{\displaystyle {\boldsymbol {\theta }}}, along with alikelihood functionL(θ;X,Z)=p(X,Z∣θ){\displaystyle L({\boldsymbol {\theta }};\mathbf {X} ,\mathbf {Z} )=p(\mathbf {X} ,\mathbf {Z} \mid {\boldsymbol {\theta }})}, themaximum likelihood estimate(MLE) of the unknown parameters is determined by maximizing themarginal likelihoodof the observed data However, this quantity is often intractable sinceZ{\displaystyle \mathbf {Z} }is unobserved and the distribution ofZ{\displaystyle \mathbf {Z} }is unknown before attainingθ{\displaystyle {\boldsymbol {\theta }}}. The EM algorithm seeks to find the maximum likelihood estimate of the marginal likelihood by iteratively applying these two steps: More succinctly, we can write it as one equation:θ(t+1)=argmaxθEZ∼p(⋅|X,θ(t))⁡[log⁡p(X,Z|θ)]{\displaystyle {\boldsymbol {\theta }}^{(t+1)}={\underset {\boldsymbol {\theta }}{\operatorname {arg\,max} }}\operatorname {E} _{\mathbf {Z} \sim p(\cdot |\mathbf {X} ,{\boldsymbol {\theta }}^{(t)})}\left[\log p(\mathbf {X} ,\mathbf {Z} |{\boldsymbol {\theta }})\right]\,} The typical models to which EM is applied useZ{\displaystyle \mathbf {Z} }as a latent variable indicating membership in one of a set of groups: However, it is possible to apply EM to other sorts of models. The motivation is as follows. If the value of the parametersθ{\displaystyle {\boldsymbol {\theta }}}is known, usually the value of the latent variablesZ{\displaystyle \mathbf {Z} }can be found by maximizing the log-likelihood over all possible values ofZ{\displaystyle \mathbf {Z} }, either simply by iterating overZ{\displaystyle \mathbf {Z} }or through an algorithm such as theViterbi algorithmforhidden Markov models. Conversely, if we know the value of the latent variablesZ{\displaystyle \mathbf {Z} }, we can find an estimate of the parametersθ{\displaystyle {\boldsymbol {\theta }}}fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where bothθ{\displaystyle {\boldsymbol {\theta }}}andZ{\displaystyle \mathbf {Z} }are unknown: The algorithm as just described monotonically approaches a local minimum of the cost function. 
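To make the alternation concrete, here is a self-contained toy illustration (my own example, not taken from the text): two coins of unknown bias are flipped in sessions of ten flips, and the identity of the coin used in each session is the unobserved latent variable.

```python
# Toy illustration of the E/M alternation (hypothetical data, not from the text):
# two coins with unknown head-probabilities are flipped in sessions, but we do
# not observe which coin was used in each session (the latent variable Z).
import numpy as np
from scipy.stats import binom

heads = np.array([9, 8, 2, 7, 3])   # heads observed in 5 sessions of 10 flips
n_flips = 10

theta = np.array([0.6, 0.5])        # initial guesses for the two biases
for _ in range(50):
    # E step: posterior probability that each session came from coin 0 or coin 1,
    # assuming an equal prior probability for the two coins.
    like = np.stack([binom.pmf(heads, n_flips, t) for t in theta])   # shape (2, 5)
    resp = like / like.sum(axis=0)                                   # responsibilities

    # M step: re-estimate each coin's bias from the responsibility-weighted flips.
    theta = (resp * heads).sum(axis=1) / (resp * n_flips).sum(axis=1)

print("estimated biases:", theta)   # separates toward roughly 0.8 and 0.25 here
```

Each pass uses the current bias estimates to soft-assign sessions to coins, then re-estimates the biases from those assignments, exactly the alternation described above.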
Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to amaximum likelihood estimator. Formultimodal distributions, this means that an EM algorithm may converge to alocal maximumof the observed data likelihood function, depending on starting values. A variety of heuristic ormetaheuristicapproaches exist to escape a local maximum, such as random-restarthill climbing(starting with several different random initial estimatesθ(t){\displaystyle {\boldsymbol {\theta }}^{(t)}}), or applyingsimulated annealingmethods. EM is especially useful when the likelihood is anexponential family, see Sundberg (2019, Ch. 8) for a comprehensive treatment:[16]the E step becomes the sum of expectations ofsufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to deriveclosed-form expressionupdates for each step, using the Sundberg formula[17](proved and published by Rolf Sundberg, based on unpublished results ofPer Martin-LöfandAnders Martin-Löf).[8][9][11][12][13][14] The EM method was modified to computemaximum a posteriori(MAP) estimates forBayesian inferencein the original paper by Dempster, Laird, and Rubin. Other methods exist to find maximum likelihood estimates, such asgradient descent,conjugate gradient, or variants of theGauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function. Expectation-Maximization works to improveQ(θ∣θ(t)){\displaystyle Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})}rather than directly improvinglog⁡p(X∣θ){\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})}. Here it is shown that improvements to the former imply improvements to the latter.[18] For anyZ{\displaystyle \mathbf {Z} }with non-zero probabilityp(Z∣X,θ){\displaystyle p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }})}, we can write We take the expectation over possible values of the unknown dataZ{\displaystyle \mathbf {Z} }under the current parameter estimateθ(t){\displaystyle \theta ^{(t)}}by multiplying both sides byp(Z∣X,θ(t)){\displaystyle p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }}^{(t)})}and summing (or integrating) overZ{\displaystyle \mathbf {Z} }. The left-hand side is the expectation of a constant, so we get: whereH(θ∣θ(t)){\displaystyle H({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})}is defined by the negated sum it is replacing. This last equation holds for every value ofθ{\displaystyle {\boldsymbol {\theta }}}includingθ=θ(t){\displaystyle {\boldsymbol {\theta }}={\boldsymbol {\theta }}^{(t)}}, and subtracting this last equation from the previous equation gives However,Gibbs' inequalitytells us thatH(θ∣θ(t))≥H(θ(t)∣θ(t)){\displaystyle H({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})\geq H({\boldsymbol {\theta }}^{(t)}\mid {\boldsymbol {\theta }}^{(t)})}, so we can conclude that In words, choosingθ{\displaystyle {\boldsymbol {\theta }}}to improveQ(θ∣θ(t)){\displaystyle Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})}causeslog⁡p(X∣θ){\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})}to improve at least as much. The EM algorithm can be viewed as two alternating maximization steps, that is, as an example ofcoordinate descent.[19][20]Consider the function: whereqis an arbitrary probability distribution over the unobserved datazandH(q)is theentropyof the distributionq. 
This function can be written as wherepZ∣X(⋅∣x;θ){\displaystyle p_{Z\mid X}(\cdot \mid x;\theta )}is the conditional distribution of the unobserved data given the observed datax{\displaystyle x}andDKL{\displaystyle D_{KL}}is theKullback–Leibler divergence. Then the steps in the EM algorithm may be viewed as: AKalman filteris typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems. Filtering and smoothing EM algorithms arise by repeating this two-step procedure: Suppose that aKalman filteror minimum-variance smoother operates on measurements of a single-input-single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from themaximum likelihoodcalculation wherex^k{\displaystyle {\widehat {x}}_{k}}are scalar output estimates calculated by a filter or a smoother from N scalar measurementszk{\displaystyle z_{k}}. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by wherex^k{\displaystyle {\widehat {x}}_{k}}andx^k+1{\displaystyle {\widehat {x}}_{k+1}}are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via The convergence of parameter estimates such as those above is well studied.[26][27][28][29] A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those usingconjugate gradientand modifiedNewton's methods(Newton–Raphson).[30]Also, EM can be used with constrained estimation methods. TheParameter-expanded expectation maximization (PX-EM)algorithm often provides a speed-up by "us[ing] a `covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data".[31] Expectation conditional maximization (ECM)replaces each M step with a sequence of conditional maximization (CM) steps in which each parameterθiis maximized individually, conditionally on the other parameters remaining fixed.[32]It can itself be extended into theExpectation conditional maximization either (ECME)algorithm.[33] This idea is further extended in thegeneralized expectation maximization (GEM)algorithm, in which only an increase in the objective functionFis sought for both the E step and M step as described in theAs a maximization–maximization proceduresection.[19]GEM is further developed in a distributed environment and shows promising results.[34] It is also possible to consider the EM algorithm as a subclass of theMM(Majorize/Minimize or Minorize/Maximize, depending on context) algorithm,[35]and therefore use any machinery developed in the more general case. The Q-function used in the EM algorithm is based on the log likelihood. Therefore, it is regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be exactly expressed as equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step. Its maximization is a generalized M step.
This pair is called the α-EM algorithm[36]which contains the log-EM algorithm as its subclass. Thus, the α-EM algorithm byYasuo Matsuyamais an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the Hidden Markov model estimation algorithm α-HMM.[37] EM is a partially non-Bayesian, maximum likelihood method. Its final result gives aprobability distributionover the latent variables (in the Bayesian style) together with a point estimate forθ(either amaximum likelihood estimateor a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution overθand the latent variables. The Bayesian approach to inference is simply to treatθas another latent variable. In this paradigm, the distinction between the E and M steps disappears. If using the factorized Q approximation as described above (variational Bayes), solving can iterate over each latent variable (now includingθ) and optimize them one at a time. Now,ksteps per iteration are needed, wherekis the number of latent variables. Forgraphical modelsthis is easy to do as each variable's newQdepends only on itsMarkov blanket, so localmessage passingcan be used for efficient inference. Ininformation geometry, the E step and the M step are interpreted as projections under dualaffine connections, called the e-connection and the m-connection; theKullback–Leibler divergencecan also be understood in these terms. Letx=(x1,x2,…,xn){\displaystyle \mathbf {x} =(\mathbf {x} _{1},\mathbf {x} _{2},\ldots ,\mathbf {x} _{n})}be a sample ofn{\displaystyle n}independent observations from amixtureof twomultivariate normal distributionsof dimensiond{\displaystyle d}, and letz=(z1,z2,…,zn){\displaystyle \mathbf {z} =(z_{1},z_{2},\ldots ,z_{n})}be the latent variables that determine the component from which the observation originates.[20] where The aim is to estimate the unknown parameters representing themixingvalue between the Gaussians and the means and covariances of each: where the incomplete-data likelihood function is and the complete-data likelihood function is or whereI{\displaystyle \mathbb {I} }is anindicator functionandf{\displaystyle f}is theprobability density functionof a multivariate normal. In the last equality, for eachi, one indicatorI(zi=j){\displaystyle \mathbb {I} (z_{i}=j)}is equal to zero, and one indicator is equal to one. The inner sum thus reduces to one term. Given our current estimate of the parametersθ(t), the conditional distribution of theZiis determined byBayes' theoremto be the proportional height of the normaldensityweighted byτ: These are called the "membership probabilities", which are normally considered the output of the E step (although this is not the Q function of below). This E step corresponds with setting up this function for Q: The expectation oflog⁡L(θ;xi,Zi){\displaystyle \log L(\theta ;\mathbf {x} _{i},Z_{i})}inside the sum is taken with respect to the probability density functionP(Zi∣Xi=xi;θ(t)){\displaystyle P(Z_{i}\mid X_{i}=\mathbf {x} _{i};\theta ^{(t)})}, which might be different for eachxi{\displaystyle \mathbf {x} _{i}}of the training set. Everything in the E step is known before the step is taken exceptTj,i{\displaystyle T_{j,i}}, which is computed according to the equation at the beginning of the E step section. 
This full conditional expectation does not need to be calculated in one step, becauseτandμ/Σappear in separate linear terms and can thus be maximized independently. Q(θ∣θ(t)){\displaystyle Q(\theta \mid \theta ^{(t)})}being quadratic in form means that determining the maximizing values ofθ{\displaystyle \theta }is relatively straightforward. Also,τ{\displaystyle \tau },(μ1,Σ1){\displaystyle ({\boldsymbol {\mu }}_{1},\Sigma _{1})}and(μ2,Σ2){\displaystyle ({\boldsymbol {\mu }}_{2},\Sigma _{2})}may all be maximized independently since they all appear in separate linear terms. To begin, considerτ{\displaystyle \tau }, which has the constraintτ1+τ2=1{\displaystyle \tau _{1}+\tau _{2}=1}: This has the same form as the maximum likelihood estimate for thebinomial distribution, so For the next estimates of(μ1,Σ1){\displaystyle ({\boldsymbol {\mu }}_{1},\Sigma _{1})}: This has the same form as a weighted maximum likelihood estimate for a normal distribution, so and, by symmetry, Conclude the iterative process ifEZ∣θ(t),x[log⁡L(θ(t);x,Z)]≤EZ∣θ(t−1),x[log⁡L(θ(t−1);x,Z)]+ε{\displaystyle E_{Z\mid \theta ^{(t)},\mathbf {x} }[\log L(\theta ^{(t)};\mathbf {x} ,\mathbf {Z} )]\leq E_{Z\mid \theta ^{(t-1)},\mathbf {x} }[\log L(\theta ^{(t-1)};\mathbf {x} ,\mathbf {Z} )]+\varepsilon }forε{\displaystyle \varepsilon }below some preset threshold. The algorithm illustrated above can be generalized for mixtures of more than twomultivariate normal distributions. The EM algorithm has been implemented in the case where an underlyinglinear regressionmodel exists explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model.[38]Special cases of this model include censored or truncated observations from onenormal distribution.[38] EM typically converges to a local optimum, not necessarily the global optimum, with no bound on the convergence rate in general. It is possible that it can be arbitrarily poor in high dimensions and there can be an exponential number of local optima. Hence, a need exists for alternative methods for guaranteed learning, especially in the high-dimensional setting. Alternatives to EM exist with better guarantees for consistency, which are termedmoment-based approaches[39]or the so-calledspectral techniques.[40][41]Moment-based approaches to learning the parameters of a probabilistic model enjoy guarantees such as global convergence under certain conditions unlike EM which is often plagued by the issue of getting stuck in local optima. Algorithms with guarantees for learning can be derived for a number of important models such as mixture models, HMMs etc. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions.[citation needed]
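Returning to the Gaussian-mixture example above, the E and M updates can be sketched numerically. The following is a one-dimensional simplification of the two-component case (my own code with synthetic data; variable names are mine, and the multivariate version differs only in using vector means and covariance matrices):

```python
# One-dimensional simplification of the two-component Gaussian mixture EM
# updates described above (a sketch; parameter names are mine).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# synthetic data from a mixture of N(-2, 1) and N(3, 1.5**2)
x = np.concatenate([rng.normal(-2, 1.0, 300), rng.normal(3, 1.5, 700)])

tau = np.array([0.5, 0.5])      # mixing weights
mu = np.array([-1.0, 1.0])      # component means
sigma = np.array([1.0, 1.0])    # component standard deviations

for _ in range(100):
    # E step: membership probabilities T[j, i] = P(Z_i = j | x_i, current params)
    dens = np.stack([t * norm.pdf(x, m, s) for t, m, s in zip(tau, mu, sigma)])
    T = dens / dens.sum(axis=0)

    # M step: closed-form weighted maximum-likelihood updates
    Nj = T.sum(axis=1)
    tau = Nj / len(x)
    mu = (T * x).sum(axis=1) / Nj
    sigma = np.sqrt((T * (x - mu[:, None]) ** 2).sum(axis=1) / Nj)

print("weights:", tau, "means:", mu, "std devs:", sigma)
```

In practice one would also monitor the expected complete-data log-likelihood and stop once its increase falls below a preset threshold, as described above.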
https://en.wikipedia.org/wiki/EM-algorithm
Instatistics, themean integrated squared error (MISE)is used indensity estimation. The MISE of anestimateof an unknownprobability densityis given by[1] whereƒis the unknown density andƒnis its estimate based on asampleofnindependent and identically distributedrandom variables. Here, E denotes theexpected valuewith respect to that sample. The MISE is also known as theL2risk function.
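The displayed formula is not reproduced in this extract; for reference, a standard statement of the definition consistent with the surrounding description is:

```latex
\operatorname{MISE}(f_n)
  = \operatorname{E}\!\left[\int \bigl(f_n(x) - f(x)\bigr)^{2}\,dx\right]
  = \operatorname{E}\,\lVert f_n - f \rVert_{2}^{2}.
```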
https://en.wikipedia.org/wiki/Mean_integrated_squared_error
Ahistogramis a visual representation of thedistributionof quantitative data. To construct a histogram, the first step is to"bin" (or "bucket")the range of values— divide the entire range of values into a series of intervals—and then count how many values fall into each interval. The bins are usually specified as consecutive, non-overlappingintervalsof a variable. The bins (intervals) are adjacent and are typically (but not required to be) of equal size.[1] Histograms give a rough sense of the density of the underlying distribution of the data, and are often used fordensity estimation: estimating theprobability density functionof the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the lengths of the intervals on thex-axis are all 1, then a histogram is identical to arelative frequencyplot. Histograms are sometimes confused withbar charts. In a histogram, each bin is for a different range of values, so altogether the histogram illustrates the distribution of values. But in a bar chart, each bar is for a different category of observations (e.g., each bar might be for a different population), so altogether the bar chart can be used to compare different categories. Some authors recommend that bar charts always have gaps between the bars to clarify that they are not histograms.[2][3] The term "histogram" was first introduced byKarl Pearson, the founder of mathematicalstatistics, in lectures delivered in1892atUniversity College London. Pearson's term is sometimes incorrectly said to combine the Greek rootγραμμα(gramma) = "figure" or "drawing" with the rootἱστορία(historia) = "inquiry" or "history". Alternatively the rootἱστίον(histion) is also proposed, meaning "web" or "tissue" (as inhistology, the study of biological tissue). Both of theseetymologiesare incorrect, and in fact Pearson, who knew Ancient Greek well, derived the term from a different ifhomophonousGreek root,ἱστός= "something set upright", "mast", referring to the vertical bars in the graph. Pearson's new term was embedded in a series of other analogousneologisms, such as "stigmogram" and "radiogram".[4] Pearson himself noted in 1895 that although the term "histogram" was new, the type of graph it designates was "a common form of graphical representation".[5]In fact the technique of using a bar graph to represent statistical measurements was devised by the Scottisheconomist,William Playfair, in hisCommercial and political atlas(1786).[4] The words used to describe the patterns in a histogram are: "symmetric", "skewed left" or "right", "unimodal", "bimodal" or "multimodal". It is a good idea to plot the data using several different bin widths to learn more about it. TheU.S. Census Bureaufound that there were 124 million people who work outside of their homes.[6]Using their data on the time occupied by travel to work, the absolute number of people who responded with travel times "at least 30 but less than 35 minutes" is higher than the numbers for the categories above and below it.
This is likely due to people rounding their reported journey time.[citation needed]The problem of reporting values as somewhat arbitrarilyrounded numbersis a common phenomenon when collecting data from people.[citation needed] This histogram shows the number of cases perunit intervalas the height of each block, so that the area of each block is equal to the number of people in the survey who fall into its category. The area under thecurverepresents the total number of cases (124 million). This type of histogram shows absolute numbers, with Q in thousands. This histogram differs from the first only in theverticalscale. The area of each block is the fraction of the total that each category represents, and the total area of all the bars is equal to 1 (the fraction meaning "all"). The curve displayed is a simpledensity estimate. This version shows proportions, and is also known as a unit area histogram. In other words, a histogram represents a frequency distribution by means of rectangles whose widths represent class intervals and whose areas are proportional to the corresponding frequencies: the height of each is the average frequency density for the interval. The intervals are placed together in order to show that the data represented by the histogram, while exclusive, is also contiguous. (E.g., in a histogram it is possible to have two connecting intervals of 10.5–20.5 and 20.5–33.5, but not two connecting intervals of 10.5–20.5 and 22.5–32.5. Empty intervals are represented as empty and not skipped.)[7] The data used to construct a histogram are generated via a functionmithat counts the number of observations that fall into each of the disjoint categories (known asbins). Thus, if we letnbe the total number of observations andkbe the total number of bins, the histogram datamimeet the following conditions: A histogram can be thought of as a simplistickernel density estimation, which uses akernelto smooth frequencies over the bins. This yields asmootherprobability density function, which will in general more accurately reflect distribution of the underlying variable. The density estimate could be plotted as an alternative to the histogram, and is usually drawn as a curve rather than a set of boxes. Histograms are nevertheless preferred in applications, when their statistical properties need to be modeled. The correlated variation of a kernel density estimate is very difficult to describe mathematically, while it is simple for a histogram where each bin varies independently. An alternative to kernel density estimation is the average shifted histogram,[8]which is fast to compute and gives a smooth curve estimate of the density without using kernels. A cumulative histogram: a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogramMiof a histogrammjcan be defined as: There is no "best" number of bins, and different bin sizes can reveal different features of the data. Grouping data is at least as old asGraunt's work in the 17th century, but no systematic guidelines were given[9]untilSturges's work in 1926.[10] Using wider bins where the density of the underlying data points is low reduces noise due to sampling randomness; using narrower bins where the density is high (so the signal drowns the noise) gives greater precision to the density estimation. Thus varying the bin-width within a histogram can be beneficial. Nonetheless, equal-width bins are widely used. 
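As a small illustration of these counting conditions and of the cumulative histogram, here is a sketch (my own code, with hypothetical data and numpy assumed available):

```python
# Sketch: bin counts m_i summing to n, the cumulative histogram M_i, and the
# unit-area (density) version, computed with numpy on hypothetical data.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=30, scale=8, size=500)            # e.g. travel times in minutes

counts, edges = np.histogram(data, bins=10)             # m_1, ..., m_k and bin edges
assert counts.sum() == len(data)                        # the m_i sum to n

cumulative = np.cumsum(counts)                          # M_i = m_1 + ... + m_i
density, _ = np.histogram(data, bins=10, density=True)  # heights as frequency density

print(counts, cumulative)
print("total area:", np.sum(density * np.diff(edges)))  # normalised to ~1.0
```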
Some theoreticians have attempted to determine an optimal number of bins, but these methods generally make strong assumptions about the shape of the distribution. Depending on the actual data distribution and the goals of the analysis, different bin widths may be appropriate, so experimentation is usually needed to determine an appropriate width. There are, however, various useful guidelines and rules of thumb.[11] The number of binskcan be assigned directly or can be calculated from a suggested bin widthhask= ⌈(max x − min x)/h⌉, where the braces indicate theceiling function. The square-root choice,k= ⌈√n⌉, takes the square root of the number of data points in the sample and rounds to the nextinteger. This rule is suggested by a number of elementary statistics textbooks[12]and widely implemented in many software packages.[13] Sturges's rule[10]is derived from abinomial distributionand implicitly assumes an approximately normal distribution. Sturges's formula implicitly bases bin sizes on the range of the data, and can perform poorly ifn< 30, because the number of bins will be small—less than seven—and unlikely to show trends in the data well. On the other extreme, Sturges's formula may overestimate bin width for very large datasets, resulting in oversmoothed histograms.[14]It may also perform poorly if the data are not normally distributed. When compared to Scott's rule and the Terrell–Scott rule, two other widely accepted formulas for histogram bins, the output of Sturges's formula is closest whenn≈ 100.[14] TheRice rule[15]is presented as a simple alternative to Sturges's rule. Doane's formula[16]is a modification of Sturges's formula which attempts to improve its performance with non-normal data. whereg1{\displaystyle g_{1}}is the estimated 3rd-moment-skewnessof the distribution and Bin widthh{\displaystyle h}is given by whereσ^{\displaystyle {\hat {\sigma }}}is the samplestandard deviation.Scott's normal reference rule[17]is optimal for random samples of normally distributed data, in the sense that it minimizes the integrated mean squared error of the density estimate.[9]This is the default rule used in Microsoft Excel.[18] TheTerrell–Scott rule[14][19]is not a normal reference rule. It gives the minimum number of bins required for an asymptotically optimal histogram, where optimality is measured by the integrated mean squared error. The bound is derived by finding the 'smoothest' possible density, which turns out to be34(1−x2){\displaystyle {\frac {3}{4}}(1-x^{2})}. Any other density will require more bins, hence the above estimate is also referred to as the 'oversmoothed' rule. The similarity of the formulas and the fact that Terrell and Scott were at Rice University when they proposed it suggests that this is also the origin of the Rice rule. TheFreedman–Diaconis rulegives bin widthh{\displaystyle h}as:[20][9] which is based on theinterquartile range, denoted by IQR. It replaces 3.5σ of Scott's rule with 2 IQR, which is less sensitive than the standard deviation to outliers in data. This approach of minimizing integrated mean squared error from Scott's rule can be generalized beyond normal distributions, by using leave-one-out cross validation:[21][22] Here,Nk{\displaystyle N_{k}}is the number of datapoints in thekth bin, and choosing the value ofhthat minimizesJwill minimize integrated mean squared error.
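Several of the rules above have standard software implementations. The following sketch (assuming numpy's `histogram_bin_edges`, which exposes these named rules) compares the bin counts and widths they suggest for the same hypothetical data set:

```python
# Sketch: comparing the number of bins suggested by several of the rules above,
# using numpy's built-in implementations on hypothetical data.
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(1000)

for rule in ["sqrt", "sturges", "rice", "doane", "scott", "fd"]:
    edges = np.histogram_bin_edges(data, bins=rule)
    print(f"{rule:>8s}: {len(edges) - 1:3d} bins, width ~ {edges[1] - edges[0]:.3f}")
```

On roughly normal data the rules broadly agree; on skewed or heavy-tailed data the Freedman–Diaconis and Doane choices typically diverge from Sturges, which is one practical reason to try several widths, as recommended above.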
The choice is based on minimization of an estimatedL2risk function[23] wherem¯{\displaystyle \textstyle {\bar {m}}}andv{\displaystyle \textstyle v}are mean and biased variance of a histogram with bin-widthh{\displaystyle \textstyle h},m¯=1k∑i=1kmi{\displaystyle \textstyle {\bar {m}}={\frac {1}{k}}\sum _{i=1}^{k}m_{i}}andv=1k∑i=1k(mi−m¯)2{\displaystyle \textstyle v={\frac {1}{k}}\sum _{i=1}^{k}(m_{i}-{\bar {m}})^{2}}. Rather than choosing evenly spaced bins, for some applications it is preferable to vary the bin width. This avoids bins with low counts. A common case is to chooseequiprobable bins, where the number of samples in each bin is expected to be approximately equal. The bins may be chosen according to some known distribution or may be chosen based on the data so that each bin has≈n/k{\displaystyle \approx n/k}samples. When plotting the histogram, thefrequency densityis used for the dependent axis. While all bins have approximately equal area, the heights of the histogram approximate the density distribution. For equiprobable bins, the following rule for the number of bins is suggested:[24] This choice of bins is motivated by maximizing the power of aPearson chi-squared testtesting whether the bins do contain equal numbers of samples. More specifically, for a given confidence intervalα{\displaystyle \alpha }it is recommended to choose between 1/2 and 1 times the following equation:[25] WhereΦ−1{\displaystyle \Phi ^{-1}}is theprobitfunction. Following this rule forα=0.05{\displaystyle \alpha =0.05}would give between1.88n2/5{\displaystyle 1.88n^{2/5}}and3.77n2/5{\displaystyle 3.77n^{2/5}}; the coefficient of 2 is chosen as an easy-to-remember value from this broad optimum. A good reason why the number of bins should be proportional ton3{\displaystyle {\sqrt[{3}]{n}}}is the following: suppose that the data are obtained asn{\displaystyle n}independent realizations of a bounded probability distribution with smooth density. Then the histogram remains equally "rugged" asn{\displaystyle n}tends to infinity. Ifs{\displaystyle s}is the "width" of the distribution (e. g., the standard deviation or the inter-quartile range), then the number of units in a bin (the frequency) is of ordernh/s{\displaystyle nh/s}and therelativestandard error is of orders/(nh){\displaystyle {\sqrt {s/(nh)}}}. Compared to the next bin, the relative change of the frequency is of orderh/s{\displaystyle h/s}provided that the derivative of the density is non-zero. These two are of the same order ifh{\displaystyle h}is of orders/n3{\displaystyle s/{\sqrt[{3}]{n}}}, so thatk{\displaystyle k}is of ordern3{\displaystyle {\sqrt[{3}]{n}}}. This simple cubic root choice can also be applied to bins with non-constant widths.[citation needed]
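A sketch of equiprobable binning follows (my own code with hypothetical data): bin edges are placed at sample quantiles so that each bin receives roughly n/k points, the number of bins follows the 2n^(2/5) guideline above, and the frequency density is used for the heights.

```python
# Sketch: variable-width, equiprobable bins chosen from sample quantiles, with
# frequency density as the dependent axis so bin areas reflect probability mass.
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=1000)

k = int(round(2 * len(data) ** 0.4))                   # ~ 2 n^(2/5) bins, per the guideline
edges = np.quantile(data, np.linspace(0, 1, k + 1))    # each bin receives ~ n/k points

counts, _ = np.histogram(data, bins=edges)
freq_density = counts / (len(data) * np.diff(edges))   # heights approximate the density

print(k, counts[:5], freq_density[:5])
```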
https://en.wikipedia.org/wiki/Histogram
Kernel density estimationis anonparametrictechnique fordensity estimationi.e., estimation ofprobability density functions, which is one of the fundamental questions instatistics. It can be viewed as a generalisation ofhistogramdensity estimation with improved statistical properties. Apart from histograms, other types of density estimators includeparametric,spline,waveletandFourier series. Kernel density estimators were first introduced in the scientific literature forunivariatedata in the 1950s and 1960s[1][2]and subsequently have been widely adopted. It was soon recognised that analogous estimators for multivariate data would be an important addition tomultivariate statistics. Based on research carried out in the 1990s and 2000s,multivariate kernel density estimationhas reached a level of maturity comparable to its univariate counterparts.[3][4][5] We take an illustrativesyntheticbivariatedata set of 50 points to illustrate the construction of histograms. This requires the choice of an anchor point (the lower left corner of the histogram grid). For the histogram on the left, we choose (−1.5, −1.5): for the one on the right, we shift the anchor point by 0.125 in both directions to (−1.625, −1.625). Both histograms have a binwidth of 0.5, so any differences are due to the change in the anchor point only. The colour-coding indicates the number of data points which fall into a bin: 0=white, 1=pale yellow, 2=bright yellow, 3=orange, 4=red. The left histogram appears to indicate that the upper half has a higher density than the lower half, whereas the reverse is the case for the right-hand histogram, confirming that histograms are highly sensitive to the placement of the anchor point.[6] One possible solution to this anchor point placement problem is to remove the histogram binning grid completely. In the left figure below, a kernel (represented by the grey lines) is centred at each of the 50 data points above. The result of summing these kernels is given on the right figure, which is a kernel density estimate. The most striking difference between kernel density estimates and histograms is that the former are easier to interpret since they do not contain artifices induced by a binning grid. The coloured contours correspond to the smallest region which contains the respective probability mass: red = 25%, orange + red = 50%, yellow + orange + red = 75%, thus indicating that a single central region contains the highest density. The goal of density estimation is to take a finite sample of data and to make inferences about the underlying probability density function everywhere, including where no data are observed. In kernel density estimation, the contribution of each data point is smoothed out from a single point into a region of space surrounding it. Aggregating the individually smoothed contributions gives an overall picture of the structure of the data and its density function. In the details to follow, we show that this approach leads to a reasonable estimate of the underlying density function. The previous figure is a graphical representation of kernel density estimate, which we now define in an exact manner. Letx1,x2, ...,xnbe asampleofd-variaterandom vectorsdrawn from a common distribution described by thedensity functionƒ. 
The kernel density estimate is defined to be where The choice of the kernel functionKis not crucial to the accuracy of kernel density estimators, so we use the standardmultivariate normalkernel throughout:KH(x)=(2π)−d/2|H|−1/2e−12xTH−1x{\textstyle K_{\mathbf {H} }(\mathbf {x} )={(2\pi )^{-d/2}}\mathbf {|H|} ^{-1/2}e^{-{\frac {1}{2}}\mathbf {x^{T}} \mathbf {H^{-1}} \mathbf {x} }}, where H plays the role of thecovariance matrix. On the other hand, the choice of the bandwidth matrixHis the single most important factor affecting its accuracy since it controls the amount and orientation of smoothing induced.[3]: 36–39That the bandwidth matrix also induces an orientation is a basic difference between multivariate kernel density estimation from its univariate analogue since orientation is not defined for 1D kernels. This leads to the choice of the parametrisation of this bandwidth matrix. The three main parametrisation classes (in increasing order of complexity) areS, the class of positive scalars times the identity matrix;D, diagonal matrices with positive entries on the main diagonal; andF, symmetric positive definite matrices. TheSclass kernels have the same amount of smoothing applied in all coordinate directions,Dkernels allow different amounts of smoothing in each of the coordinates, andFkernels allow arbitrary amounts and orientation of the smoothing. HistoricallySandDkernels are the most widespread due to computational reasons, but research indicates that important gains in accuracy can be obtained using the more generalFclass kernels.[7][8] The most commonly used optimality criterion for selecting a bandwidth matrix is the MISE ormean integrated squared error This in general does not possess aclosed-form expression, so it is usual to use its asymptotic approximation (AMISE) as a proxy where The quality of the AMISE approximation to the MISE[3]: 97is given by whereoindicates the usualsmall o notation. Heuristically this statement implies that the AMISE is a 'good' approximation of the MISE as the sample sizen→ ∞. It can be shown that any reasonable bandwidth selectorHhasH=O(n−2/(d+4)) where thebig O notationis applied elementwise. Substituting this into the MISE formula yields that the optimal MISE isO(n−4/(d+4)).[3]: 99–100Thus asn→ ∞, the MISE → 0, i.e. the kernel density estimateconverges in mean squareand thus also in probability to the true densityf. These modes of convergence are confirmation of the statement in the motivation section that kernel methods lead to reasonable density estimators. An ideal optimal bandwidth selector is Since this ideal selector contains the unknown density functionƒ, it cannot be used directly. The many different varieties of data-based bandwidth selectors arise from the different estimators of the AMISE. We concentrate on two classes of selectors which have been shown to be the most widely applicable in practice: smoothed cross validation and plug-in selectors. The plug-in (PI) estimate of the AMISE is formed by replacingΨ4by its estimatorΨ^4{\displaystyle {\hat {\mathbf {\Psi } }}_{4}} whereΨ^4(G)=n−2∑i=1n∑j=1n[(vecD2)(vecT⁡D2)]KG(Xi−Xj){\displaystyle {\hat {\mathbf {\Psi } }}_{4}(\mathbf {G} )=n^{-2}\sum _{i=1}^{n}\sum _{j=1}^{n}[(\operatorname {vec} \,\operatorname {D} ^{2})(\operatorname {vec} ^{T}\operatorname {D} ^{2})]K_{\mathbf {G} }(\mathbf {X} _{i}-\mathbf {X} _{j})}. 
ThusH^PI=argminH∈FPI⁡(H){\displaystyle {\hat {\mathbf {H} }}_{\operatorname {PI} }=\operatorname {argmin} _{\mathbf {H} \in F}\,\operatorname {PI} (\mathbf {H} )}is the plug-in selector.[9][10]These references also contain algorithms on optimal estimation of the pilot bandwidth matrixGand establish thatH^PI{\displaystyle {\hat {\mathbf {H} }}_{\operatorname {PI} }}converges in probabilitytoHAMISE. Smoothed cross validation (SCV) is a subset of a larger class ofcross validationtechniques. The SCV estimator differs from the plug-in estimator in the second term ThusH^SCV=argminH∈FSCV⁡(H){\displaystyle {\hat {\mathbf {H} }}_{\operatorname {SCV} }=\operatorname {argmin} _{\mathbf {H} \in F}\,\operatorname {SCV} (\mathbf {H} )}is the SCV selector.[10][11]These references also contain algorithms on optimal estimation of the pilot bandwidth matrixGand establish thatH^SCV{\displaystyle {\hat {\mathbf {H} }}_{\operatorname {SCV} }}converges in probability toHAMISE. Silverman's rule of thumb suggests usingHii=(4d+2)1d+4n−1d+4σi{\displaystyle {\sqrt {\mathbf {H} _{ii}}}=\left({\frac {4}{d+2}}\right)^{\frac {1}{d+4}}n^{\frac {-1}{d+4}}\sigma _{i}}, whereσi{\displaystyle \sigma _{i}}is the standard deviation of the ith variable andd{\displaystyle d}is the number of dimensions, andHij=0,i≠j{\displaystyle \mathbf {H} _{ij}=0,i\neq j}. Scott's rule isHii=n−1d+4σi{\displaystyle {\sqrt {\mathbf {H} _{ii}}}=n^{\frac {-1}{d+4}}\sigma _{i}}. In the optimal bandwidth selection section, we introduced the MISE. Its construction relies on theexpected valueand thevarianceof the density estimator[3]: 97 where * is theconvolutionoperator between two functions, and For these two expressions to be well-defined, we require that all elements ofHtend to 0 and thatn−1|H|−1/2tends to 0 asntends to infinity. Assuming these two conditions, we see that the expected value tends to the true densityfi.e. the kernel density estimator is asymptoticallyunbiased; and that the variance tends to zero. Using the standard mean squared value decomposition we have that the MSE tends to 0, implying that the kernel density estimator is (mean square) consistent and hence converges in probability to the true densityf. The rate of convergence of the MSE to 0 is the necessarily the same as the MISE rate noted previouslyO(n−4/(d+4)), hence the convergence rate of the density estimator tofisOp(n−2/(d+4)) whereOpdenotesorder in probability. This establishes pointwise convergence. The functional convergence is established similarly by considering the behaviour of the MISE, and noting that under sufficient regularity, integration does not affect the convergence rates. For the data-based bandwidth selectors considered, the target is the AMISE bandwidth matrix. We say that a data-based selector converges to the AMISE selector at relative rateOp(n−α),α> 0 if It has been established that the plug-in and smoothed cross validation selectors (given a single pilot bandwidthG) both converge at a relative rate ofOp(n−2/(d+6))[10][12]i.e., both these data-based selectors are consistent estimators. Theks package[13]inRimplements the plug-in and smoothed cross validation selectors (amongst others). This dataset (included in the base distribution of R) contains 272 records with two measurements each: the duration time of an eruption (minutes) and the waiting time until the next eruption (minutes) of theOld Faithful Geyserin Yellowstone National Park, USA. 
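The R listing referred to in the next sentence is not reproduced in this extract. As a rough stand-in, the following Python sketch (my own code, not the ks package, and with synthetic Old-Faithful-like data rather than the real records) evaluates a bivariate kernel density estimate directly from the definition, with a full bandwidth matrix taken as given rather than selected from the data:

```python
# Sketch (not the original R/ks listing): evaluating a bivariate kernel density
# estimate with a full, pre-specified bandwidth matrix H, directly from the
# definition f_hat(x) = (1/n) * sum_i K_H(x - X_i) with a Gaussian kernel.
import numpy as np

def kde_gaussian(points, data, H):
    """points: (m, d) evaluation points; data: (n, d) sample; H: (d, d) bandwidth matrix."""
    d = data.shape[1]
    H_inv = np.linalg.inv(H)
    norm_const = (2 * np.pi) ** (-d / 2) * np.linalg.det(H) ** (-0.5)
    diffs = points[:, None, :] - data[None, :, :]             # (m, n, d)
    quad = np.einsum("mnd,de,mne->mn", diffs, H_inv, diffs)   # (x - X_i)^T H^{-1} (x - X_i)
    return norm_const * np.exp(-0.5 * quad).mean(axis=1)

# Synthetic (duration, waiting time) pairs standing in for the Old Faithful data;
# the bandwidth matrix is the plug-in matrix quoted in the text below.
rng = np.random.default_rng(0)
data = np.column_stack([rng.normal(3.5, 1.1, 272), rng.normal(71, 13, 272)])
H_pi = np.array([[0.052, 0.510],
                 [0.510, 8.882]])
grid = np.column_stack([np.linspace(1, 6, 5), np.linspace(40, 100, 5)])
print(kde_gaussian(grid, data, H_pi))
```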
The code fragment computes the kernel density estimate with the plug-in bandwidth matrixH^PI=[0.0520.5100.5108.882].{\displaystyle {\hat {\mathbf {H} }}_{\operatorname {PI} }={\begin{bmatrix}0.052&0.510\\0.510&8.882\end{bmatrix}}.}Again, the coloured contours correspond to the smallest region which contains the respective probability mass: red = 25%, orange + red = 50%, yellow + orange + red = 75%. To compute the SCV selector,Hpiis replaced withHscv. This is not displayed here since it is mostly similar to the plug-in estimate for this example. We consider estimating the density of the Gaussian mixture (4π)⁻¹exp(−½(x₁² + x₂²)) + (4π)⁻¹exp(−½((x₁ − 3.5)² + x₂²)), from 500 randomly generated points. We employ the Matlab routine for2-dimensional data. The routine is an automatic bandwidth selection method specifically designed for a second order Gaussian kernel.[14]The figure shows the joint density estimate that results from using the automatically selected bandwidth. Matlab script for the example: type the following commands in Matlab afterdownloadingand saving the function kde2d.m in the current directory. The MISE is the expected integratedL2distance between the density estimate and the true density functionf. It is the most widely used, mostly due to its tractability, and most software implements MISE-based bandwidth selectors. There are alternative optimality criteria, which attempt to cover cases where MISE is not an appropriate measure.[4]: 34–37, 78The equivalentL1measure, Mean Integrated Absolute Error, is Its mathematical analysis is considerably more difficult than the MISE ones. In practice, the gain appears not to be significant.[15]TheL∞norm is the Mean Uniform Absolute Error which has been investigated only briefly.[16]Likelihood error criteria include those based on the MeanKullback–Leibler divergence and the MeanHellinger distance The KL can be estimated using a cross-validation method, although KL cross-validation selectors can be sub-optimal even if they remainconsistentfor bounded density functions.[17]MH selectors have been briefly examined in the literature.[18] All these optimality criteria are distance based measures, and do not always correspond to more intuitive notions of closeness, so more visual criteria have been developed in response to this concern.[19] Recent research has shown that the kernel and its bandwidth can both be optimally and objectively chosen from the input data itself without making any assumptions about the form of the distribution.[20]The resulting kernel density estimate converges rapidly to the true probability distribution as samples are added: at a rate close to then−1{\displaystyle n^{-1}}expected for parametric estimators.[20][21][22]This kernel estimator works for univariate and multivariate samples alike.
The optimal kernel is defined in Fourier space—as the optimal damping functionψh^(t→){\displaystyle {\hat {\psi _{h}}}({\vec {t}})}(the Fourier transform of the kernelK^(x→){\displaystyle {\hat {K}}({\vec {x}})})-- in terms of the Fourier transform of the dataφ^(t→){\displaystyle {\hat {\varphi }}({\vec {t}})}, theempirical characteristic function(seeKernel density estimation): ψh^(t→)≡N2(N−1)[1+1−4(N−1)N2|φ^(t→)|2IA→(t→)]{\displaystyle {\hat {\psi _{h}}}({\vec {t}})\equiv {\frac {N}{2(N-1)}}\left[1+{\sqrt {1-{\frac {4(N-1)}{N^{2}|{\hat {\varphi }}({\vec {t}})|^{2}}}}}I_{\vec {A}}({\vec {t}})\right]}[22] f^(x)=1(2π)d∫φ^(t→)ψh(t→)e−it→⋅x→dt→{\displaystyle {\hat {f}}(x)={\frac {1}{(2\pi )^{d}}}\int {\hat {\varphi }}({\vec {t}})\psi _{h}({\vec {t}})e^{-i{\vec {t}}\cdot {\vec {x}}}d{\vec {t}}} where,Nis the number of data points,dis the number of dimensions (variables), andIA→(t→){\displaystyle I_{\vec {A}}({\vec {t}})}is a filter that is equal to 1 for 'accepted frequencies' and 0 otherwise. There are various ways to define this filter function, and a simple one that works for univariate or multivariate samples is called the 'lowest contiguous hypervolume filter';IA→(t→){\displaystyle I_{\vec {A}}({\vec {t}})}is chosen such that the only accepted frequencies are a contiguous subset of frequencies surrounding the origin for which|φ^(t→)|2≥4(N−1)N−2{\displaystyle |{\hat {\varphi }}({\vec {t}})|^{2}\geq 4(N-1)N^{-2}}(see[22]for a discussion of this and other filter functions). Note that direct calculation of theempirical characteristic function(ECF) is slow, since it essentially involves a direct Fourier transform of the data samples. However, it has been found that the ECF can be approximated accurately using anon-uniform fast Fourier transform(nuFFT) method,[21][22]which increases the calculation speed by several orders of magnitude (depending on the dimensionality of the problem). The combination of this objective KDE method and the nuFFT-based ECF approximation has been referred to asfastKDEin the literature.[22]
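As a one-dimensional sketch of these formulas (my own code, not the fastKDE package: the empirical characteristic function is computed by a direct sum rather than the nuFFT, and the contiguous low-frequency filter is implemented in the simplest possible way):

```python
# Sketch: self-tuning estimator in 1D. Compute the empirical characteristic
# function phi, apply the damping function psi with a contiguous low-frequency
# filter, then invert numerically to obtain the density estimate.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 500)
N = len(x)

t = np.linspace(-20, 20, 2001)                      # frequency grid
phi = np.exp(1j * np.outer(t, x)).mean(axis=1)      # empirical characteristic function

# filter: contiguous run of frequencies around t = 0 with |phi|^2 >= 4(N-1)/N^2
thresh = 4 * (N - 1) / N**2
ok = np.abs(phi) ** 2 >= thresh
centre = len(t) // 2
lo = hi = centre
while lo - 1 >= 0 and ok[lo - 1]:
    lo -= 1
while hi + 1 < len(t) and ok[hi + 1]:
    hi += 1
I = np.zeros_like(ok)
I[lo:hi + 1] = True

psi = np.zeros_like(phi)
inside = 1 - 4 * (N - 1) / (N**2 * np.abs(phi[I]) ** 2)
psi[I] = N / (2 * (N - 1)) * (1 + np.sqrt(np.clip(inside, 0, None)))

# inverse Fourier transform on a few x values (simple numerical integration)
xs = np.linspace(-4, 4, 9)
dt = t[1] - t[0]
f_hat = np.real([(phi * psi * np.exp(-1j * t * xx)).sum() * dt for xx in xs]) / (2 * np.pi)
print(f_hat)   # should roughly track the standard normal density at xs
```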
https://en.wikipedia.org/wiki/Multivariate_kernel_density_estimation
Inmachine learning, thekernel embedding of distributions(also called thekernel meanormean map) comprises a class ofnonparametricmethods in which aprobability distributionis represented as an element of areproducing kernel Hilbert space(RKHS).[1]A generalization of the individual data-point feature mapping done in classicalkernel methods, the embedding of distributions into infinite-dimensional feature spaces can preserve all of the statistical features of arbitrary distributions, while allowing one to compare and manipulate distributions using Hilbert space operations such asinner products, distances,projections,linear transformations, andspectral analysis.[2]Thislearningframework is very general and can be applied to distributions over any spaceΩ{\displaystyle \Omega }on which a sensiblekernel function(measuring similarity between elements ofΩ{\displaystyle \Omega }) may be defined. For example, various kernels have been proposed for learning from data which are:vectorsinRd{\displaystyle \mathbb {R} ^{d}}, discrete classes/categories,strings,graphs/networks, images,time series,manifolds,dynamical systems, and other structured objects.[3][4]The theory behind kernel embeddings of distributions has been primarily developed byAlex Smola,Le Song,Arthur Gretton, andBernhard Schölkopf. A review of recent works on kernel embedding of distributions can be found in.[5] The analysis of distributions is fundamental inmachine learningandstatistics, and many algorithms in these fields rely on information theoretic approaches such asentropy,mutual information, orKullback–Leibler divergence. However, to estimate these quantities, one must first either perform density estimation, or employ sophisticated space-partitioning/bias-correction strategies which are typically infeasible for high-dimensional data.[6]Commonly, methods for modeling complex distributions rely on parametric assumptions that may be unfounded or computationally challenging (e.g.Gaussian mixture models), while nonparametric methods likekernel density estimation(Note: the smoothing kernels in this context have a different interpretation than the kernels discussed here) orcharacteristic functionrepresentation (via theFourier transformof the distribution) break down in high-dimensional settings.[2] Methods based on the kernel embedding of distributions sidestep these problems and also possess the following advantages:[6] Thus, learning via the kernel embedding of distributions offers a principled drop-in replacement for information theoretic approaches and is a framework which not only subsumes many popular methods in machine learning and statistics as special cases, but also can lead to entirely new learning algorithms. LetX{\displaystyle X}denote a random variable with domainΩ{\displaystyle \Omega }and distributionP{\displaystyle P}. 
Given a symmetric,positive-definite kernelk:Ω×Ω→R{\displaystyle k:\Omega \times \Omega \rightarrow \mathbb {R} }theMoore–Aronszajn theoremasserts the existence of a unique RKHSH{\displaystyle {\mathcal {H}}}onΩ{\displaystyle \Omega }(aHilbert spaceof functionsf:Ω→R{\displaystyle f:\Omega \to \mathbb {R} }equipped with an inner product⟨⋅,⋅⟩H{\displaystyle \langle \cdot ,\cdot \rangle _{\mathcal {H}}}and a norm‖⋅‖H{\displaystyle \|\cdot \|_{\mathcal {H}}}) for whichk{\displaystyle k}is a reproducing kernel, i.e., in which the elementk(x,⋅){\displaystyle k(x,\cdot )}satisfies the reproducing property One may alternatively considerx↦k(x,⋅){\displaystyle x\mapsto k(x,\cdot )}as an implicit feature mappingφ:Ω→H{\displaystyle \varphi :\Omega \rightarrow {\mathcal {H}}}(which is therefore also called the feature space), so thatk(x,x′)=⟨φ(x),φ(x′)⟩H{\displaystyle k(x,x')=\langle \varphi (x),\varphi (x')\rangle _{\mathcal {H}}}can be viewed as a measure of similarity between pointsx,x′∈Ω.{\displaystyle x,x'\in \Omega .}While thesimilarity measureis linear in the feature space, it may be highly nonlinear in the original space depending on the choice of kernel. The kernel embedding of the distributionP{\displaystyle P}inH{\displaystyle {\mathcal {H}}}(also called thekernel meanormean map) is given by:[1] IfP{\displaystyle P}allows a square integrable densityp{\displaystyle p}, thenμX=Ekp{\displaystyle \mu _{X}={\mathcal {E}}_{k}p}, whereEk{\displaystyle {\mathcal {E}}_{k}}is theHilbert–Schmidt integral operator. A kernel ischaracteristicif the mean embeddingμ:{family of distributions overΩ}→H{\displaystyle \mu :\{{\text{family of distributions over }}\Omega \}\to {\mathcal {H}}}is injective.[7]Each distribution can thus be uniquely represented in the RKHS and all statistical features of distributions are preserved by the kernel embedding if a characteristic kernel is used. Givenn{\displaystyle n}training examples{x1,…,xn}{\displaystyle \{x_{1},\ldots ,x_{n}\}}drawnindependently and identically distributed(i.i.d.) fromP,{\displaystyle P,}the kernel embedding ofP{\displaystyle P}can be empirically estimated as IfY{\displaystyle Y}denotes another random variable (for simplicity, assume the co-domain ofY{\displaystyle Y}is alsoΩ{\displaystyle \Omega }with the same kernelk{\displaystyle k}which satisfies⟨φ(x)⊗φ(y),φ(x′)⊗φ(y′)⟩=k(x,x′)k(y,y′){\displaystyle \langle \varphi (x)\otimes \varphi (y),\varphi (x')\otimes \varphi (y')\rangle =k(x,x')k(y,y')}), then thejoint distributionP(x,y)){\displaystyle P(x,y))}can be mapped into atensor productfeature spaceH⊗H{\displaystyle {\mathcal {H}}\otimes {\mathcal {H}}}via[2] By the equivalence between atensorand alinear map, this joint embedding may be interpreted as an uncenteredcross-covarianceoperatorCXY:H→H{\displaystyle {\mathcal {C}}_{XY}:{\mathcal {H}}\to {\mathcal {H}}}from which the cross-covariance of functionsf,g∈H{\displaystyle f,g\in {\mathcal {H}}}can be computed as[8] Givenn{\displaystyle n}pairs of training examples{(x1,y1),…,(xn,yn)}{\displaystyle \{(x_{1},y_{1}),\dots ,(x_{n},y_{n})\}}drawn i.i.d. fromP{\displaystyle P}, we can also empirically estimate the joint distribution kernel embedding via Given aconditional distributionP(y∣x),{\displaystyle P(y\mid x),}one can define the corresponding RKHS embedding as[2] Note that the embedding ofP(y∣x){\displaystyle P(y\mid x)}thus defines a family of points in the RKHS indexed by the valuesx{\displaystyle x}taken by conditioning variableX{\displaystyle X}. 
By fixingX{\displaystyle X}to a particular value, we obtain a single element inH{\displaystyle {\mathcal {H}}}, and thus it is natural to define the operator which given the feature mapping ofx{\displaystyle x}outputs the conditional embedding ofY{\displaystyle Y}givenX=x.{\displaystyle X=x.}Assuming that for allg∈H:E[g(Y)∣X]∈H,{\displaystyle g\in {\mathcal {H}}:\mathbb {E} [g(Y)\mid X]\in {\mathcal {H}},}it can be shown that[8] This assumption is always true for finite domains with characteristic kernels, but may not necessarily hold for continuous domains.[2]Nevertheless, even in cases where the assumption fails,CY∣Xφ(x){\displaystyle {\mathcal {C}}_{Y\mid X}\varphi (x)}may still be used to approximate the conditional kernel embeddingμY∣x,{\displaystyle \mu _{Y\mid x},}and in practice, the inversion operator is replaced with a regularized version of itself(CXX+λI)−1{\displaystyle ({\mathcal {C}}_{XX}+\lambda \mathbf {I} )^{-1}}(whereI{\displaystyle \mathbf {I} }denotes theidentity matrix). Given training examples{(x1,y1),…,(xn,yn)},{\displaystyle \{(x_{1},y_{1}),\dots ,(x_{n},y_{n})\},}the empirical kernel conditional embedding operator may be estimated as[2] whereΦ=(φ(y1),…,φ(yn)),Υ=(φ(x1),…,φ(xn)){\displaystyle {\boldsymbol {\Phi }}=\left(\varphi (y_{1}),\dots ,\varphi (y_{n})\right),{\boldsymbol {\Upsilon }}=\left(\varphi (x_{1}),\dots ,\varphi (x_{n})\right)}are implicitly formed feature matrices,K=ΥTΥ{\displaystyle \mathbf {K} ={\boldsymbol {\Upsilon }}^{T}{\boldsymbol {\Upsilon }}}is the Gram matrix for samples ofX{\displaystyle X}, andλ{\displaystyle \lambda }is aregularizationparameter needed to avoidoverfitting. Thus, the empirical estimate of the kernel conditional embedding is given by a weighted sum of samples ofY{\displaystyle Y}in the feature space: whereβ(x)=(K+λI)−1Kx{\displaystyle {\boldsymbol {\beta }}(x)=(\mathbf {K} +\lambda \mathbf {I} )^{-1}\mathbf {K} _{x}}andKx=(k(x1,x),…,k(xn,x))T{\displaystyle \mathbf {K} _{x}=\left(k(x_{1},x),\dots ,k(x_{n},x)\right)^{T}} This section illustrates how basic probabilistic rules may be reformulated as (multi)linear algebraic operations in the kernel embedding framework and is primarily based on the work of Song et al.[2][8]The following notation is adopted: In practice, all embeddings are empirically estimated from data{(x1,y1),…,(xn,yn)}{\displaystyle \{(x_{1},y_{1}),\dots ,(x_{n},y_{n})\}}and it assumed that a set of samples{y~1,…,y~n~}{\displaystyle \{{\widetilde {y}}_{1},\ldots ,{\widetilde {y}}_{\widetilde {n}}\}}may be used to estimate the kernel embedding of the prior distributionπ(Y){\displaystyle \pi (Y)}. 
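A sketch of the empirical conditional embedding derived above, before moving to the probabilistic rules that follow (illustrative code; the Gaussian kernel, the toy data and the value of λ are choices made here): the weights β(x) = (K + λI)^{-1} K_x turn the conditional embedding into a weighted combination of the training outputs, so conditional expectations of RKHS functions g are approximated by Σ_i β_i(x) g(y_i).

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def conditional_embedding_weights(X, x_query, lam=0.1, sigma=1.0):
    """beta(x) = (K + lam I)^{-1} K_x, the weights of the empirical conditional
    embedding  mu_hat_{Y|x} = sum_i beta_i(x) phi(y_i)."""
    K = gaussian_gram(X, X, sigma)
    K_x = gaussian_gram(X, x_query, sigma)               # shape (n, n_query)
    return np.linalg.solve(K + lam * np.eye(len(X)), K_x)

# toy regression data: Y = sin(X) + noise
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(300, 1))
Y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

beta = conditional_embedding_weights(X, np.array([[1.0]]))
# approximate E[g(Y) | X = 1] for g(y) = y (used here purely as an illustration)
print(float(beta[:, 0] @ Y[:, 0]), "vs sin(1) =", round(np.sin(1.0), 3))
```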
In probability theory, the marginal distribution ofX{\displaystyle X}can be computed by integrating outY{\displaystyle Y}from the joint density (including the prior distribution onY{\displaystyle Y}) The analog of this rule in the kernel embedding framework states thatμXπ,{\displaystyle \mu _{X}^{\pi },}the RKHS embedding ofQ(X){\displaystyle Q(X)}, can be computed via whereμYπ{\displaystyle \mu _{Y}^{\pi }}is the kernel embedding ofπ(Y).{\displaystyle \pi (Y).}In practical implementations, the kernel sum rule takes the following form where is the empirical kernel embedding of the prior distribution,α=(α1,…,αn~)T,{\displaystyle {\boldsymbol {\alpha }}=(\alpha _{1},\ldots ,\alpha _{\widetilde {n}})^{T},}Υ=(φ(x1),…,φ(xn)){\displaystyle {\boldsymbol {\Upsilon }}=\left(\varphi (x_{1}),\ldots ,\varphi (x_{n})\right)}, andG,G~{\displaystyle \mathbf {G} ,{\widetilde {\mathbf {G} }}}are Gram matrices with entriesGij=k(yi,yj),G~ij=k(yi,y~j){\displaystyle \mathbf {G} _{ij}=k(y_{i},y_{j}),{\widetilde {\mathbf {G} }}_{ij}=k(y_{i},{\widetilde {y}}_{j})}respectively. In probability theory, a joint distribution can be factorized into a product between conditional and marginal distributions The analog of this rule in the kernel embedding framework states thatCXYπ,{\displaystyle {\mathcal {C}}_{XY}^{\pi },}the joint embedding ofQ(X,Y),{\displaystyle Q(X,Y),}can be factorized as a composition of conditional embedding operator with the auto-covariance operator associated withπ(Y){\displaystyle \pi (Y)} where In practical implementations, the kernel chain rule takes the following form In probability theory, a posterior distribution can be expressed in terms of a prior distribution and a likelihood function as The analog of this rule in the kernel embedding framework expresses the kernel embedding of the conditional distribution in terms of conditional embedding operators which are modified by the prior distribution where from the chain rule: In practical implementations, the kernel Bayes' rule takes the following form where Two regularization parameters are used in this framework:λ{\displaystyle \lambda }for the estimation ofC^YXπ,C^XXπ=ΥDΥT{\displaystyle {\widehat {\mathcal {C}}}_{YX}^{\pi },{\widehat {\mathcal {C}}}_{XX}^{\pi }={\boldsymbol {\Upsilon }}\mathbf {D} {\boldsymbol {\Upsilon }}^{T}}andλ~{\displaystyle {\widetilde {\lambda }}}for the estimation of the final conditional embedding operator The latter regularization is done on square ofC^XXπ{\displaystyle {\widehat {\mathcal {C}}}_{XX}^{\pi }}becauseD{\displaystyle D}may not bepositive definite. Themaximum mean discrepancy (MMD)is a distance-measure between distributionsP(X){\displaystyle P(X)}andQ(Y){\displaystyle Q(Y)}which is defined as the distance between their embeddings in the RKHS[6] While most distance-measures between distributions such as the widely usedKullback–Leibler divergenceeither require density estimation (either parametrically or nonparametrically) or space partitioning/bias correction strategies,[6]the MMD is easily estimated as an empirical mean which is concentrated around the true value of the MMD. The characterization of this distance as themaximum mean discrepancyrefers to the fact that computing the MMD is equivalent to finding the RKHS function that maximizes the difference in expectations between the two probability distributions a form ofintegral probability metric. 
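The MMD has a particularly simple empirical estimate, sketched below (a standard biased, V-statistic form with a Gaussian kernel; the kernel choice and sample sizes are illustrative): expanding the squared RKHS norm gives MMD² = E[k(x, x′)] + E[k(y, y′)] − 2E[k(x, y)], and each term is replaced by a sample average.

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    """Biased (V-statistic) estimate of MMD^2 = ||mu_P - mu_Q||_H^2."""
    Kxx = gaussian_gram(X, X, sigma)
    Kyy = gaussian_gram(Y, Y, sigma)
    Kxy = gaussian_gram(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(400, 1))
Y_same = rng.normal(0.0, 1.0, size=(400, 1))      # same distribution as X
Y_shift = rng.normal(0.7, 1.0, size=(400, 1))     # shifted distribution

print(mmd2_biased(X, Y_same))    # close to zero
print(mmd2_biased(X, Y_shift))   # clearly positive
```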
Givenntraining examples fromP(X){\displaystyle P(X)}andmsamples fromQ(Y){\displaystyle Q(Y)}, one can formulate a test statistic based on the empirical estimate of the MMD to obtain atwo-sample test[15]of the null hypothesis that both samples stem from the same distribution (i.e.P=Q{\displaystyle P=Q}) against the broad alternativeP≠Q{\displaystyle P\neq Q}. Although learning algorithms in the kernel embedding framework circumvent the need for intermediate density estimation, one may nonetheless use the empirical embedding to perform density estimation based onnsamples drawn from an underlying distributionPX∗{\displaystyle P_{X}^{*}}. This can be done by solving the following optimization problem[6][16] where the maximization is done over the entire space of distributions onΩ.{\displaystyle \Omega .}Here,μX[PX]{\displaystyle \mu _{X}[P_{X}]}is the kernel embedding of the proposed densityPX{\displaystyle P_{X}}andH{\displaystyle H}is an entropy-like quantity (e.g.Entropy,KL divergence,Bregman divergence). The distribution which solves this optimization may be interpreted as a compromise between fitting the empirical kernel means of the samples well, while still allocating a substantial portion of the probability mass to all regions of the probability space (much of which may not be represented in the training examples). In practice, a good approximate solution of the difficult optimization may be found by restricting the space of candidate densities to a mixture ofMcandidate distributions with regularized mixing proportions. Connections between the ideas underlyingGaussian processesandconditional random fieldsmay be drawn with the estimation of conditional probability distributions in this fashion, if one views the feature mappings associated with the kernel as sufficient statistics in generalized (possibly infinite-dimensional)exponential families.[6] A measure of the statistical dependence between random variablesX{\displaystyle X}andY{\displaystyle Y}(from any domains on which sensible kernels can be defined) can be formulated based on the Hilbert–Schmidt Independence Criterion[17] and can be used as a principled replacement formutual information,Pearson correlationor any other dependence measure used in learning algorithms. Most notably, HSIC can detect arbitrary dependencies (when a characteristic kernel is used in the embeddings, HSIC is zero if and only if the variables areindependent), and can be used to measure dependence between different types of data (e.g. images and text captions). Givenni.i.d. samples of each random variable, a simple parameter-freeunbiasedestimator of HSIC which exhibitsconcentrationabout the true value can be computed inO(n(df2+dg2)){\displaystyle O(n(d_{f}^{2}+d_{g}^{2}))}time,[6]where the Gram matrices of the two datasets are approximated usingAAT,BBT{\displaystyle \mathbf {A} \mathbf {A} ^{T},\mathbf {B} \mathbf {B} ^{T}}withA∈Rn×df,B∈Rn×dg{\displaystyle \mathbf {A} \in \mathbb {R} ^{n\times d_{f}},\mathbf {B} \in \mathbb {R} ^{n\times d_{g}}}. The desirable properties of HSIC have led to the formulation of numerous algorithms which utilize this dependence measure for a variety of common machine learning tasks such as:feature selection(BAHSIC[18]),clustering(CLUHSIC[19]), anddimensionality reduction(MUHSIC[20]). HSIC can be extended to measure the dependence of multiple random variables. 
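For the two-variable case, a commonly used biased empirical estimate of HSIC is the normalised trace of the product of the two centred Gram matrices; the sketch below uses Gaussian kernels and invented toy data, and the normalisation 1/(n−1)² is one standard convention.

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def hsic_biased(X, Y, sigma=1.0):
    """Biased empirical HSIC: tr(K H L H) / (n - 1)^2, where H centres the Gram
    matrices; with characteristic kernels the population value is zero iff
    X and Y are independent."""
    n = len(X)
    K = gaussian_gram(X, X, sigma)
    L = gaussian_gram(Y, Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 1))
Y_dep = np.sin(3.0 * X) + 0.1 * rng.normal(size=X.shape)   # nonlinearly dependent on X
Y_ind = rng.normal(size=(300, 1))                          # independent of X

print(hsic_biased(X, Y_dep))   # clearly larger
print(hsic_biased(X, Y_ind))   # near zero
```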
The question of when HSIC captures independence in this case has recently been studied:[21]for more than two variables Belief propagationis a fundamental algorithm for inference ingraphical modelsin which nodes repeatedly pass and receive messages corresponding to the evaluation of conditional expectations. In the kernel embedding framework, the messages may be represented as RKHS functions and the conditional distribution embeddings can be applied to efficiently compute message updates. Givennsamples of random variables represented by nodes in aMarkov random field, the incoming message to nodetfrom nodeucan be expressed as if it assumed to lie in the RKHS. Thekernel belief propagation updatemessage fromtto nodesis then given by[2] where⊙{\displaystyle \odot }denotes the element-wise vector product,N(t)∖s{\displaystyle N(t)\backslash s}is the set of nodes connected totexcluding nodes,βut=(βut1,…,βutn){\displaystyle {\boldsymbol {\beta }}_{ut}=\left(\beta _{ut}^{1},\dots ,\beta _{ut}^{n}\right)},Kt,Ks{\displaystyle \mathbf {K} _{t},\mathbf {K} _{s}}are the Gram matrices of the samples from variablesXt,Xs{\displaystyle X_{t},X_{s}}, respectively, andΥs=(φ(xs1),…,φ(xsn)){\displaystyle {\boldsymbol {\Upsilon }}_{s}=\left(\varphi (x_{s}^{1}),\dots ,\varphi (x_{s}^{n})\right)}is the feature matrix for the samples fromXs{\displaystyle X_{s}}. Thus, if the incoming messages to nodetare linear combinations of feature mapped samples fromXt{\displaystyle X_{t}}, then the outgoing message from this node is also a linear combination of feature mapped samples fromXs{\displaystyle X_{s}}. This RKHS function representation of message-passing updates therefore produces an efficient belief propagation algorithm in which thepotentialsare nonparametric functions inferred from the data so that arbitrary statistical relationships may be modeled.[2] In thehidden Markov model(HMM), two key quantities of interest are the transition probabilities between hidden statesP(St∣St−1){\displaystyle P(S^{t}\mid S^{t-1})}and the emission probabilitiesP(Ot∣St){\displaystyle P(O^{t}\mid S^{t})}for observations. Using the kernel conditional distribution embedding framework, these quantities may be expressed in terms of samples from the HMM. A serious limitation of the embedding methods in this domain is the need for training samples containing hidden states, as otherwise inference with arbitrary distributions in the HMM is not possible. One common use of HMMs isfilteringin which the goal is to estimate posterior distribution over the hidden statest{\displaystyle s^{t}}at time steptgiven a history of previous observationsht=(o1,…,ot){\displaystyle h^{t}=(o^{1},\dots ,o^{t})}from the system. In filtering, abelief stateP(St+1∣ht+1){\displaystyle P(S^{t+1}\mid h^{t+1})}is recursively maintained via a prediction step (where updatesP(St+1∣ht)=E[P(St+1∣St)∣ht]{\displaystyle P(S^{t+1}\mid h^{t})=\mathbb {E} [P(S^{t+1}\mid S^{t})\mid h^{t}]}are computed by marginalizing out the previous hidden state) followed by a conditioning step (where updatesP(St+1∣ht,ot+1)∝P(ot+1∣St+1)P(St+1∣ht){\displaystyle P(S^{t+1}\mid h^{t},o^{t+1})\propto P(o^{t+1}\mid S^{t+1})P(S^{t+1}\mid h^{t})}are computed by applying Bayes' rule to condition on a new observation).[2]The RKHS embedding of the belief state at timet+1can be recursively expressed as by computing the embeddings of the prediction step via thekernel sum ruleand the embedding of the conditioning step viakernel Bayes' rule. 
Assuming a training sample(s~1,…,s~T,o~1,…,o~T){\displaystyle ({\widetilde {s}}^{1},\dots ,{\widetilde {s}}^{T},{\widetilde {o}}^{1},\dots ,{\widetilde {o}}^{T})}is given, one can in practice estimate and filtering with kernel embeddings is thus implemented recursively using the following updates for the weightsα=(α1,…,αT){\displaystyle {\boldsymbol {\alpha }}=(\alpha _{1},\dots ,\alpha _{T})}[2] whereG,K{\displaystyle \mathbf {G} ,\mathbf {K} }denote the Gram matrices ofs~1,…,s~T{\displaystyle {\widetilde {s}}^{1},\dots ,{\widetilde {s}}^{T}}ando~1,…,o~T{\displaystyle {\widetilde {o}}^{1},\dots ,{\widetilde {o}}^{T}}respectively,G~{\displaystyle {\widetilde {\mathbf {G} }}}is a transfer Gram matrix defined asG~ij=k(s~i,s~j+1),{\displaystyle {\widetilde {\mathbf {G} }}_{ij}=k({\widetilde {s}}_{i},{\widetilde {s}}_{j+1}),}andKot+1=(k(o~1,ot+1),…,k(o~T,ot+1))T.{\displaystyle \mathbf {K} _{o^{t+1}}=(k({\widetilde {o}}^{1},o^{t+1}),\dots ,k({\widetilde {o}}^{T},o^{t+1}))^{T}.} Thesupport measure machine(SMM) is a generalization of thesupport vector machine(SVM) in which the training examples are probability distributions paired with labels{Pi,yi}i=1n,yi∈{+1,−1}{\displaystyle \{P_{i},y_{i}\}_{i=1}^{n},\ y_{i}\in \{+1,-1\}}.[22]SMMs solve the standard SVMdual optimization problemusing the followingexpected kernel which is computable in closed form for many common specific distributionsPi{\displaystyle P_{i}}(such as the Gaussian distribution) combined with popular embedding kernelsk{\displaystyle k}(e.g. the Gaussian kernel or polynomial kernel), or can be accurately empirically estimated from i.i.d. samples{xi}i=1n∼P(X),{zj}j=1m∼Q(Z){\displaystyle \{x_{i}\}_{i=1}^{n}\sim P(X),\{z_{j}\}_{j=1}^{m}\sim Q(Z)}via Under certain choices of the embedding kernelk{\displaystyle k}, the SMM applied to training examples{Pi,yi}i=1n{\displaystyle \{P_{i},y_{i}\}_{i=1}^{n}}is equivalent to a SVM trained on samples{xi,yi}i=1n{\displaystyle \{x_{i},y_{i}\}_{i=1}^{n}}, and thus the SMM can be viewed as aflexibleSVM in which a different data-dependent kernel (specified by the assumed form of the distributionPi{\displaystyle P_{i}}) may be placed on each training point.[22] The goal ofdomain adaptationis the formulation of learning algorithms which generalize well when the training and test data have different distributions. Given training examples{(xitr,yitr)}i=1n{\displaystyle \{(x_{i}^{\text{tr}},y_{i}^{\text{tr}})\}_{i=1}^{n}}and a test set{(xjte,yjte)}j=1m{\displaystyle \{(x_{j}^{\text{te}},y_{j}^{\text{te}})\}_{j=1}^{m}}where theyjte{\displaystyle y_{j}^{\text{te}}}are unknown, three types of differences are commonly assumed between the distribution of the training examplesPtr(X,Y){\displaystyle P^{\text{tr}}(X,Y)}and the test distributionPte(X,Y){\displaystyle P^{\text{te}}(X,Y)}:[23][24] By utilizing the kernel embedding of marginal and conditional distributions, practical approaches to deal with the presence of these types of differences between training and test domains can be formulated. 
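Returning briefly to the SMM above before continuing with the domain-adaptation setting: the expected kernel it requires can be estimated from two bags of samples as a simple average of base-kernel evaluations, ⟨μ_P, μ_Q⟩ ≈ (1/(nm)) Σ_i Σ_j k(x_i, z_j). The sketch below (illustrative data and names) builds such a bag-level kernel matrix, which could then be handed to any standard SVM solver.

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def expected_kernel(X, Z, sigma=1.0):
    """Empirical estimate of E_{x~P, z~Q}[k(x, z)] = <mu_P, mu_Q>_H."""
    return gaussian_gram(X, Z, sigma).mean()

# three bags of samples standing in for the distributions P_i
rng = np.random.default_rng(0)
bags = [rng.normal(loc=m, size=(60, 2)) for m in (0.0, 0.1, 2.0)]

K_bags = np.array([[expected_kernel(a, b) for b in bags] for a in bags])
print(np.round(K_bags, 3))   # bags 0 and 1 look similar to each other, bag 2 does not
```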
Covariate shift may be accounted for by reweighting examples via estimates of the ratioPte(X)/Ptr(X){\displaystyle P^{\text{te}}(X)/P^{\text{tr}}(X)}obtained directly from the kernel embeddings of the marginal distributions ofX{\displaystyle X}in each domain without any need for explicit estimation of the distributions.[24]Target shift, which cannot be similarly dealt with since no samples fromY{\displaystyle Y}are available in the test domain, is accounted for by weighting training examples using the vectorβ∗(ytr){\displaystyle {\boldsymbol {\beta }}^{*}(\mathbf {y} ^{\text{tr}})}which solves the following optimization problem (where in practice, empirical approximations must be used)[23] To deal with location scale conditional shift, one can perform a LS transformation of the training points to obtain new transformed training dataXnew=Xtr⊙W+B{\displaystyle \mathbf {X} ^{\text{new}}=\mathbf {X} ^{\text{tr}}\odot \mathbf {W} +\mathbf {B} }(where⊙{\displaystyle \odot }denotes the element-wise vector product). To ensure similar distributions between the new transformed training samples and the test data,W,B{\displaystyle \mathbf {W} ,\mathbf {B} }are estimated by minimizing the following empirical kernel embedding distance[23] In general, the kernel embedding methods for dealing with LS conditional shift and target shift may be combined to find a reweighted transformation of the training data which mimics the test distribution, and these methods may perform well even in the presence of conditional shifts other than location-scale changes.[23] GivenNsets of training examples sampled i.i.d. from distributionsP(1)(X,Y),P(2)(X,Y),…,P(N)(X,Y){\displaystyle P^{(1)}(X,Y),P^{(2)}(X,Y),\ldots ,P^{(N)}(X,Y)}, the goal ofdomain generalizationis to formulate learning algorithms which perform well on test examples sampled from a previously unseen domainP∗(X,Y){\displaystyle P^{*}(X,Y)}where no data from the test domain is available at training time. If conditional distributionsP(Y∣X){\displaystyle P(Y\mid X)}are assumed to be relatively similar across all domains, then a learner capable of domain generalization must estimate a functional relationship between the variables which is robust to changes in the marginalsP(X){\displaystyle P(X)}. Based on kernel embeddings of these distributions, Domain Invariant Component Analysis (DICA) is a method which determines the transformation of the training data that minimizes the difference between marginal distributions while preserving a common conditional distribution shared between all training domains.[25]DICA thus extractsinvariants, features that transfer across domains, and may be viewed as a generalization of many popular dimension-reduction methods such askernel principal component analysis, transfer component analysis, and covariance operator inverse regression.[25] Defining a probability distributionP{\displaystyle {\mathcal {P}}}on the RKHSH{\displaystyle {\mathcal {H}}}with DICA measures dissimilarity between domains viadistributional variancewhich is computed as where soG{\displaystyle \mathbf {G} }is aN×N{\displaystyle N\times N}Gram matrix over the distributions from which the training data are sampled. Finding anorthogonal transformonto a low-dimensionalsubspaceB(in the feature space) which minimizes the distributional variance, DICA simultaneously ensures thatBaligns with thebasesof acentral subspaceCfor whichY{\displaystyle Y}becomes independent ofX{\displaystyle X}givenCTX{\displaystyle C^{T}X}across all domains. 
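For the covariate-shift case described above, one simple way to obtain importance weights from the embeddings alone is unconstrained kernel mean matching: choose weights β so that the weighted mean embedding of the training inputs matches the mean embedding of the test inputs. The sketch below is a deliberately simplified, non-authoritative variant of the cited approaches (which add constraints such as non-negativity and bounded weights); it is included only to make the reweighting idea concrete before the discussion of DICA continues.

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def kmm_weights(X_tr, X_te, lam=0.1, sigma=1.0):
    """Unconstrained kernel mean matching: minimise
    || (1/n) sum_i beta_i phi(x_i^tr) - (1/m) sum_j phi(x_j^te) ||_H^2 plus a ridge
    term, which gives beta = (n/m) (K_tr + lam I)^{-1} K_cross 1."""
    n, m = len(X_tr), len(X_te)
    K_tr = gaussian_gram(X_tr, X_tr, sigma)
    K_cross = gaussian_gram(X_tr, X_te, sigma)
    beta = (n / m) * np.linalg.solve(K_tr + lam * np.eye(n), K_cross @ np.ones(m))
    return np.clip(beta, 0.0, None)     # heuristic: discard negative weights

rng = np.random.default_rng(0)
X_tr = rng.normal(0.0, 1.0, size=(400, 1))   # training inputs
X_te = rng.normal(1.0, 1.0, size=(400, 1))   # covariate-shifted test inputs

beta = kmm_weights(X_tr, X_te)
print(round(beta[X_tr[:, 0] > 1].mean(), 2),     # larger weights where test mass lies
      round(beta[X_tr[:, 0] < -1].mean(), 2))    # smaller weights elsewhere
```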
In the absence of target valuesY{\displaystyle Y}, an unsupervised version of DICA may be formulated which finds a low-dimensional subspace that minimizes distributional variance while simultaneously maximizing the variance ofX{\displaystyle X}(in the feature space) across all domains (rather than preserving a central subspace).[25] In distribution regression, the goal is to regress from probability distributions to reals (or vectors). Many importantmachine learningand statistical tasks fit into this framework, includingmulti-instance learning, andpoint estimationproblems without analytical solution (such ashyperparameterorentropy estimation). In practice only samples from sampled distributions are observable, and the estimates have to rely on similarities computed betweensets of points. Distribution regression has been successfully applied for example in supervised entropy learning, and aerosol prediction using multispectral satellite images.[26] Given({Xi,n}n=1Ni,yi)i=1ℓ{\displaystyle {\left(\{X_{i,n}\}_{n=1}^{N_{i}},y_{i}\right)}_{i=1}^{\ell }}training data, where theXi^:={Xi,n}n=1Ni{\displaystyle {\hat {X_{i}}}:=\{X_{i,n}\}_{n=1}^{N_{i}}}bag contains samples from a probability distributionXi{\displaystyle X_{i}}and theith{\displaystyle i^{\text{th}}}output label isyi∈R{\displaystyle y_{i}\in \mathbb {R} }, one can tackle the distribution regression task by taking the embeddings of the distributions, and learning the regressor from the embeddings to the outputs. In other words, one can consider the following kernelridge regressionproblem(λ>0){\displaystyle (\lambda >0)} where with ak{\displaystyle k}kernel on the domain ofXi{\displaystyle X_{i}}-s(k:Ω×Ω→R){\displaystyle (k:\Omega \times \Omega \to \mathbb {R} )},K{\displaystyle K}is a kernel on the embedded distributions, andH(K){\displaystyle {\mathcal {H}}(K)}is the RKHS determined byK{\displaystyle K}. Examples forK{\displaystyle K}include the linear kernel[K(μP,μQ)=⟨μP,μQ⟩H(k)]{\displaystyle \left[K(\mu _{P},\mu _{Q})=\langle \mu _{P},\mu _{Q}\rangle _{{\mathcal {H}}(k)}\right]}, the Gaussian kernel[K(μP,μQ)=e−‖μP−μQ‖H(k)2/(2σ2)]{\displaystyle \left[K(\mu _{P},\mu _{Q})=e^{-\left\|\mu _{P}-\mu _{Q}\right\|_{H(k)}^{2}/(2\sigma ^{2})}\right]}, the exponential kernel[K(μP,μQ)=e−‖μP−μQ‖H(k)/(2σ2)]{\displaystyle \left[K(\mu _{P},\mu _{Q})=e^{-\left\|\mu _{P}-\mu _{Q}\right\|_{H(k)}/(2\sigma ^{2})}\right]}, the Cauchy kernel[K(μP,μQ)=(1+‖μP−μQ‖H(k)2/σ2)−1]{\displaystyle \left[K(\mu _{P},\mu _{Q})=\left(1+\left\|\mu _{P}-\mu _{Q}\right\|_{H(k)}^{2}/\sigma ^{2}\right)^{-1}\right]}, the generalized t-student kernel[K(μP,μQ)=(1+‖μP−μQ‖H(k)σ)−1,(σ≤2)]{\displaystyle \left[K(\mu _{P},\mu _{Q})=\left(1+\left\|\mu _{P}-\mu _{Q}\right\|_{H(k)}^{\sigma }\right)^{-1},(\sigma \leq 2)\right]}, or the inverse multiquadrics kernel[K(μP,μQ)=(‖μP−μQ‖H(k)2+σ2)−12]{\displaystyle \left[K(\mu _{P},\mu _{Q})=\left(\left\|\mu _{P}-\mu _{Q}\right\|_{H(k)}^{2}+\sigma ^{2}\right)^{-{\frac {1}{2}}}\right]}. The prediction on a new distribution(X^){\displaystyle ({\hat {X}})}takes the simple, analytical form wherek=[K(μX^i,μX^)]∈R1×ℓ{\displaystyle \mathbf {k} ={\big [}K{\big (}\mu _{{\hat {X}}_{i}},\mu _{\hat {X}}{\big )}{\big ]}\in \mathbb {R} ^{1\times \ell }},G=[Gij]∈Rℓ×ℓ{\displaystyle \mathbf {G} =[G_{ij}]\in \mathbb {R} ^{\ell \times \ell }},Gij=K(μX^i,μX^j)∈R{\displaystyle G_{ij}=K{\big (}\mu _{{\hat {X}}_{i}},\mu _{{\hat {X}}_{j}}{\big )}\in \mathbb {R} },y=[y1;…;yℓ]∈Rℓ{\displaystyle \mathbf {y} =[y_{1};\ldots ;y_{\ell }]\in \mathbb {R} ^{\ell }}. 
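A compact sketch of the two-stage estimator just described, using the linear kernel K(μ_P, μ_Q) = ⟨μ_P, μ_Q⟩, whose empirical value between two bags is simply the average of base-kernel evaluations across the bags. The ridge solution and the scaling of the regulariser below follow the standard kernel ridge regression form and are assumptions of this illustration rather than a statement of the cited estimator's exact constants.

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def bag_kernel(bag_a, bag_b, sigma=1.0):
    """Linear kernel between mean embeddings: <mu_P, mu_Q> ~ mean_{i,j} k(a_i, b_j)."""
    return gaussian_gram(bag_a, bag_b, sigma).mean()

# toy task: each bag is a Gaussian sample and its label is the (unknown) mean
rng = np.random.default_rng(0)
means = rng.uniform(-2.0, 2.0, size=25)
bags = [rng.normal(m, 1.0, size=(60, 1)) for m in means]
y = means

G = np.array([[bag_kernel(a, b) for b in bags] for a in bags])
lam = 1e-3
alpha = np.linalg.solve(G + lam * len(bags) * np.eye(len(bags)), y)

test_bag = rng.normal(1.5, 1.0, size=(60, 1))
k_vec = np.array([bag_kernel(test_bag, b) for b in bags])
print(float(k_vec @ alpha))   # roughly recovers the held-out bag's mean (about 1.5)
```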
Under mild regularity conditions this estimator can be shown to be consistent and it can achieve the one-stage sampled (as if one had access to the true Xi{\displaystyle X_{i}}-s) minimax optimal rate.[26] In the objective function J{\displaystyle J}, the yi{\displaystyle y_{i}}-s are real numbers; the results can also be extended to the case when the yi{\displaystyle y_{i}}-s are d{\displaystyle d}-dimensional vectors, or more generally elements of a separable Hilbert space, using operator-valued K{\displaystyle K} kernels. In this simple example, which is taken from Song et al.,[2] X,Y{\displaystyle X,Y} are assumed to be discrete random variables which take values in the set {1,…,K}{\displaystyle \{1,\ldots ,K\}} and the kernel is chosen to be the Kronecker delta function, so k(x,x′)=δ(x,x′){\displaystyle k(x,x')=\delta (x,x')}. The feature map corresponding to this kernel is the standard basis vector φ(x)=ex{\displaystyle \varphi (x)=\mathbf {e} _{x}}. The kernel embedding of such a distribution is thus the vector of marginal probabilities, μX=(P(X=1),…,P(X=K))T{\displaystyle \mu _{X}=(P(X=1),\ldots ,P(X=K))^{T}}, while the embedding of a joint distribution is, in this setting, the K×K{\displaystyle K\times K} matrix specifying the joint probability table, with entries (CXY)st=P(X=s,Y=t){\displaystyle ({\mathcal {C}}_{XY})_{st}=P(X=s,Y=t)}. When P(X=s)>0{\displaystyle P(X=s)>0} for all s∈{1,…,K}{\displaystyle s\in \{1,\ldots ,K\}}, the conditional distribution embedding operator is, in this setting, the conditional probability table with entries (CY∣X)ts=P(Y=t∣X=s){\displaystyle ({\mathcal {C}}_{Y\mid X})_{ts}=P(Y=t\mid X=s)}. Thus, the embedding of the conditional distribution under a fixed value of X{\displaystyle X} is the corresponding column of this table, μY∣x=(P(Y=1∣X=x),…,P(Y=K∣X=x))T{\displaystyle \mu _{Y\mid x}=(P(Y=1\mid X=x),\ldots ,P(Y=K\mid X=x))^{T}}. In this discrete-valued setting with the Kronecker delta kernel, the kernel sum rule reduces to the ordinary matrix-vector product of the conditional probability table with the vector of prior marginals, and the kernel chain rule reduces to the product of the conditional probability table with the diagonal matrix built from the prior marginals, i.e., to the usual rules of discrete probability.
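These correspondences can be made concrete with a short script (an illustrative sketch; the probability table below is invented for the example): the mean embedding is a marginal probability vector, the joint embedding is a joint probability table, the conditional embedding operator is a conditional probability table, and the kernel sum rule is a matrix-vector product.

```python
import numpy as np

# joint probability table  C_XY[s, t] = P(X = s, Y = t)  (invented numbers)
C_XY = np.array([[0.10, 0.05, 0.05],
                 [0.05, 0.30, 0.05],
                 [0.05, 0.05, 0.30]])

mu_X = C_XY.sum(axis=1)          # mean embedding of P(X): the marginal vector
mu_Y = C_XY.sum(axis=0)          # mean embedding of P(Y)

# conditional embedding operator = conditional probability table P(X = s | Y = t)
C_X_given_Y = C_XY / mu_Y[None, :]

# kernel sum rule under a new prior pi(Y): marginal embedding of Q(X)
pi_Y = np.array([0.6, 0.2, 0.2])
mu_X_pi = C_X_given_Y @ pi_Y
print(mu_X_pi, mu_X_pi.sum())    # a valid marginal: the entries sum to 1

# conditional embedding of Y given X = x is a column of P(Y = t | X = s)
C_Y_given_X = (C_XY / mu_X[:, None]).T
print(C_Y_given_X[:, 0])         # mu_{Y | X = 0}
```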
https://en.wikipedia.org/wiki/Kernel_embedding_of_distributions
Instatistical classification, two main approaches are called thegenerativeapproach and thediscriminativeapproach. These computeclassifiersby different approaches, differing in the degree ofstatistical modelling. Terminology is inconsistent,[a]but three major types can be distinguished:[1] The distinction between these last two classes is not consistently made;[5]Jebara (2004)refers to these three classes asgenerative learning,conditional learning, anddiscriminative learning, butNg & Jordan (2002)only distinguish two classes, calling them generative classifiers (joint distribution) and discriminative classifiers (conditional distribution or no distribution), not distinguishing between the latter two classes.[6]Analogously, a classifier based on a generative model is a generative classifier, while a classifier based on a discriminative model is a discriminative classifier, though this term also refers to classifiers that are not based on a model. Standard examples of each, all of which arelinear classifiers, are: In application to classification, one wishes to go from an observationxto a labely(or probability distribution on labels). One can compute this directly, without using a probability distribution (distribution-free classifier); one can estimate the probability of a label given an observation,P(Y|X=x){\displaystyle P(Y|X=x)}(discriminative model), and base classification on that; or one can estimate the joint distributionP(X,Y){\displaystyle P(X,Y)}(generative model), from that compute the conditional probabilityP(Y|X=x){\displaystyle P(Y|X=x)}, and then base classification on that. These are increasingly indirect, but increasingly probabilistic, allowing moredomain knowledgeand probability theory to be applied. In practice different approaches are used, depending on the particular problem, and hybrids can combine strengths of multiple approaches. An alternative division defines these symmetrically as: Regardless of precise definition, the terminology is constitutional because a generative model can be used to "generate" random instances (outcomes), either of an observation and target(x,y){\displaystyle (x,y)}, or of an observationxgiven a target valuey,[3]while a discriminative model or discriminative classifier (without a model) can be used to "discriminate" the value of the target variableY, given an observationx.[4]The difference between "discriminate" (distinguish) and "classify" is subtle, and these are not consistently distinguished. (The term "discriminative classifier" becomes apleonasmwhen "discrimination" is equivalent to "classification".) The term "generative model" is also used to describe models that generate instances of output variables in a way that has no clear relationship to probability distributions over potential samples of input variables.Generative adversarial networksare examples of this class of generative models, and are judged primarily by the similarity of particular outputs to potential inputs. Such models are not classifiers. In application to classification, the observableXis frequently acontinuous variable, the targetYis generally adiscrete variableconsisting of a finite set of labels, and the conditional probabilityP(Y∣X){\displaystyle P(Y\mid X)}can also be interpreted as a (non-deterministic)target functionf:X→Y{\displaystyle f\colon X\to Y}, consideringXas inputs andYas outputs. Given a finite set of labels, the two definitions of "generative model" are closely related. 
A model of the conditional distributionP(X∣Y=y){\displaystyle P(X\mid Y=y)}is a model of the distribution of each label, and a model of the joint distribution is equivalent to a model of the distribution of label valuesP(Y){\displaystyle P(Y)}, together with the distribution of observations given a label,P(X∣Y){\displaystyle P(X\mid Y)}; symbolically,P(X,Y)=P(X∣Y)P(Y).{\displaystyle P(X,Y)=P(X\mid Y)P(Y).}Thus, while a model of the joint probability distribution is more informative than a model of the distribution of label (but without their relative frequencies), it is a relatively small step, hence these are not always distinguished. Given a model of the joint distribution,P(X,Y){\displaystyle P(X,Y)}, the distribution of the individual variables can be computed as themarginal distributionsP(X)=∑yP(X,Y=y){\displaystyle P(X)=\sum _{y}P(X,Y=y)}andP(Y)=∫xP(Y,X=x){\displaystyle P(Y)=\int _{x}P(Y,X=x)}(consideringXas continuous, hence integrating over it, andYas discrete, hence summing over it), and either conditional distribution can be computed from the definition ofconditional probability:P(X∣Y)=P(X,Y)/P(Y){\displaystyle P(X\mid Y)=P(X,Y)/P(Y)}andP(Y∣X)=P(X,Y)/P(X){\displaystyle P(Y\mid X)=P(X,Y)/P(X)}. Given a model of one conditional probability, and estimatedprobability distributionsfor the variablesXandY, denotedP(X){\displaystyle P(X)}andP(Y){\displaystyle P(Y)}, one can estimate the opposite conditional probability usingBayes' rule: For example, given a generative model forP(X∣Y){\displaystyle P(X\mid Y)}, one can estimate: and given a discriminative model forP(Y∣X){\displaystyle P(Y\mid X)}, one can estimate: Note that Bayes' rule (computing one conditional probability in terms of the other) and the definition of conditional probability (computing conditional probability in terms of the joint distribution) are frequently conflated as well. A generative algorithm models how the data was generated in order to categorize a signal. It asks the question: based on my generation assumptions, which category is most likely to generate this signal? A discriminative algorithm does not care about how the data was generated, it simply categorizes a given signal. So, discriminative algorithms try to learnp(y|x){\displaystyle p(y|x)}directly from the data and then try to classify data. On the other hand, generative algorithms try to learnp(x,y){\displaystyle p(x,y)}which can be transformed intop(y|x){\displaystyle p(y|x)}later to classify the data. One of the advantages of generative algorithms is that you can usep(x,y){\displaystyle p(x,y)}to generate new data similar to existing data. On the other hand, it has been proved that some discriminative algorithms give better performance than some generative algorithms in classification tasks.[7] Despite the fact that discriminative models do not need to model the distribution of the observed variables, they cannot generally express complex relationships between the observed and target variables. But in general, they don't necessarily perform better than generative models atclassificationandregressiontasks. The two classes are seen as complementary or as different views of the same procedure.[8] With the rise ofdeep learning, a new family of methods, called deep generative models (DGMs),[9][10]is formed through the combination of generative models and deep neural networks. 
An increase in the scale of the neural networks is typically accompanied by an increase in the scale of the training data, both of which are required for good performance.[11] Popular DGMs include variational autoencoders (VAEs), generative adversarial networks (GANs), and auto-regressive models. Recently, there has been a trend to build very large deep generative models.[9] For example, GPT-3 and its precursor GPT-2[12] are auto-regressive neural language models that contain billions of parameters; BigGAN[13] and VQ-VAE,[14] which are used for image generation, can have hundreds of millions of parameters; and Jukebox is a very large generative model for musical audio that contains billions of parameters.[15] Several types of generative models exist. If the observed data are truly sampled from the generative model, then fitting the parameters of the generative model to maximize the data likelihood is a common method. However, since most statistical models are only approximations to the true distribution, if the model's application is to infer about a subset of variables conditional on known values of others, then it can be argued that the approximation makes more assumptions than are necessary to solve the problem at hand. In such cases, it can be more accurate to model the conditional density functions directly using a discriminative model (see below), although application-specific details will ultimately dictate which approach is most suitable in any particular case. Suppose the input data is x∈{1,2}{\displaystyle x\in \{1,2\}}, the set of labels for x{\displaystyle x} is y∈{0,1}{\displaystyle y\in \{0,1\}}, and there are the following 4 data points: (x,y)={(1,0),(1,1),(2,0),(2,1)}{\displaystyle (x,y)=\{(1,0),(1,1),(2,0),(2,1)\}}. For the above data, estimating the joint probability distribution p(x,y){\displaystyle p(x,y)} from the empirical measure gives p(x,y) = 1/4 for each of the four pairs, while the conditional distribution p(y|x){\displaystyle p(y|x)} is p(y=0|x) = p(y=1|x) = 1/2 for both x = 1 and x = 2. Shannon (1948) gives an example in which a table of frequencies of English word pairs is used to generate a sentence beginning with "representing and speedily is an good"; this is not proper English, but it increasingly approximates proper English as the table is moved from word pairs to word triplets, etc.
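The four-point example above can be reproduced directly with a few lines of code (an illustrative script, not from the cited sources): counting gives the empirical joint p(x, y), normalising by the marginal gives p(y | x), and Bayes' rule recovers the same conditional from the generative direction.

```python
from collections import Counter

data = [(1, 0), (1, 1), (2, 0), (2, 1)]
n = len(data)

# empirical joint distribution p(x, y): 1/4 for every pair
p_xy = {pair: c / n for pair, c in Counter(data).items()}

# marginals
p_x = {x: sum(v for (xx, _), v in p_xy.items() if xx == x) for x in (1, 2)}
p_y = {y: sum(v for (_, yy), v in p_xy.items() if yy == y) for y in (0, 1)}

# discriminative direction: p(y | x) = p(x, y) / p(x)  -> 1/2 everywhere
p_y_given_x = {(y, x): p_xy[(x, y)] / p_x[x] for (x, y) in p_xy}

# generative direction and Bayes' rule: p(y | x) = p(x | y) p(y) / p(x)
p_x_given_y = {(x, y): p_xy[(x, y)] / p_y[y] for (x, y) in p_xy}
via_bayes = {(y, x): p_x_given_y[(x, y)] * p_y[y] / p_x[x] for (x, y) in p_xy}

print(p_xy)                        # every pair has probability 0.25
print(p_y_given_x)                 # every conditional value is 0.5
print(via_bayes == p_y_given_x)    # True: both routes agree
```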
https://en.wikipedia.org/wiki/Generative_model
Thepandemonium effectis a problem that may appear when high-resolution detectors (usually germaniumsemiconductor detectors) are used inbeta decaystudies. It can affect the correct determination of the feeding to the different levels of thedaughter nucleus. It was first introduced in 1977.[1] Typically, when a parent nucleus beta-decays into its daughter, there is some final energy available which is shared between the final products of the decay. This is called theQvalueof the beta decay (Qβ). The daughter nucleus doesn't necessarily end up in theground stateafter the decay, this only happens when the other products have taken all the available energy with them (usually askinetic energy). So, in general, the daughter nucleus keeps an amount of the available energy as excitation energy and ends up in anexcited stateassociated to some energy level, as seen in the picture. The daughter nucleus can only stay in that excited state for a small amount of time[2](the half life of the level) after which it suffers a series of gamma transitions to its lower energy levels. These transitions allow the daughter nucleus to emit the excitation energy as one or moregamma raysuntil it reaches its ground state, thus getting rid of all the excitation energy that it kept from the decay. According to this, the energy levels of the daughter nucleus can be populated in two ways: The totalgamma raysemitted by an energy level (IT) should be equal to the sum of these two contributions, that is, direct beta feeding (Iβ) plus upper-level gamma de-excitations (ΣIi). The beta feeding Iβ(that is, how many times a level is populated by direct feeding from the parent) can not be measured directly. Since the only magnitude that can be measured are the gamma intensities ΣIiand IT(that is, the amount of gammas emitted by the daughter with a certain energy), the beta feeding has to be extracted indirectly by subtracting the contribution from gamma de-excitations of higher energy levels (ΣIi) to the total gamma intensity that leaves the level (IT), that is: The pandemonium effect appears when the daughter nucleus has a largeQvalue, allowing the access to manynuclear configurations, which translates in many excitation-energy levels available. This means that the total beta feeding will be fragmented, because it will spread over all the available levels (with a certain distribution given by the strength, the level densities, theselection rules, etc.). Then, the gamma intensity emitted from the less populated levels will be weak, and it will be weaker as we go to higher energies where the level density can be huge. Also, the energy of the gammas de-excitating this high-density-level region can be high. Measuring these gamma rays with high-resolution detectors may present two problems: These two effects reduce how much of the beta feeding to the higher energy levels of the daughter nucleus is detected, so less ΣIiis subtracted from the IT, and the energy levels are incorrectly assigned more Iβthan present: When this happens, the low-lying energy levels are the more affected ones. Some of the level schemes of nuclei that appear in the nuclear databases[3]suffer from this Pandemonium effect and are not reliable until better measurements are made in the future. To avoid the pandemonium effect, a detector that solves the problems that high-resolution detectors present should be used. It needs to have an efficiency close to 100% and a good efficiency for gamma rays of huge energies. 
One possible solution is to use a calorimeter like the total absorption spectrometer (TAS), which is made of a scintillator material. It has been shown[4] that even with a high-efficiency array of germanium detectors in a close geometry (for example, the cluster cube array), about 57% of the total B(GT) observed with the TAS technique is lost. The calculation of the beta feeding Iβ is important for different applications, such as the calculation of the residual heat in nuclear reactors or nuclear structure studies.
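The bookkeeping behind the feeding determination, and the way the pandemonium effect distorts it, can be illustrated with a toy level scheme (all intensities below are invented for illustration): the apparent beta feeding of a level is Iβ = IT − ΣIi, so any high-energy gamma intensity missed by the detector is silently reassigned as extra feeding to lower-lying levels.

```python
# Toy daughter nucleus with three excited levels (indices 1..3; level 0 is the
# ground state).  gammas[(i, j)] = intensity of the gamma transition i -> j.
true_gammas = {(3, 1): 30.0, (3, 0): 10.0, (2, 0): 20.0, (1, 0): 50.0}

def apparent_beta_feeding(gammas, levels=(1, 2, 3)):
    """I_beta(level) = I_T(level) - sum of gamma intensities feeding it from above."""
    feeding = {}
    for lv in levels:
        I_T = sum(v for (i, _), v in gammas.items() if i == lv)    # leaving the level
        I_in = sum(v for (_, j), v in gammas.items() if j == lv)   # arriving from above
        feeding[lv] = I_T - I_in
    return feeding

print(apparent_beta_feeding(true_gammas))
# {1: 20.0, 2: 20.0, 3: 40.0}  -- the true feeding pattern

# Pandemonium: the weak, high-energy gammas de-exciting level 3 go undetected
observed = {k: v for k, v in true_gammas.items() if k[0] != 3}
print(apparent_beta_feeding(observed))
# {1: 50.0, 2: 20.0, 3: 0.0}  -- level 1 is wrongly assigned extra feeding
```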
https://en.wikipedia.org/wiki/Pandemonium_effect
Ascintillation counteris an instrument for detecting and measuringionizing radiationby using theexcitationeffect of incident radiation on ascintillatingmaterial, and detecting the resultant light pulses. It consists of ascintillatorwhich generates photons in response to incident radiation, a sensitivephotodetector(usually aphotomultipliertube (PMT), acharge-coupled device(CCD) camera, or aphotodiode), which converts the light to an electrical signal and electronics to process this signal. Scintillation counters are widely used in radiation protection, assay of radioactive materials and physics research because they can be made inexpensively yet with goodquantum efficiency, and can measure both the intensity and theenergyof incident radiation. The first electronic scintillation counter was invented in 1944 bySir Samuel Curran[1][2]whilst he was working on theManhattan Projectat theUniversity of California at Berkeley. There was a requirement to measure the radiation from small quantities of uranium, and his innovation was to use one of the newly available highly sensitivephotomultipliertubes made by theRadio Corporation of Americato accurately count the flashes of light from a scintillator subjected to radiation. This built upon the work of earlier researchers such asAntoine Henri Becquerel, who discoveredradioactivitywhilst working on thephosphorescenceof uranium salts in 1896. Previously, scintillation events had to be laboriously detected by eye, using aspinthariscope(a simple microscope) to observe light flashes in the scintillator. The first commercial liquid scintillation counter was made by Lyle E. Packard and sold to Argonne Cancer Research Hospital at the University of Chicago in 1953. The production model was designed especially fortritiumandcarbon-14which were used in metabolic studiesin vivoandin vitro.[3] When an ionizing particle passes into the scintillator material, atoms are excited along a track. For charged particles the track is the path of the particle itself. For gamma rays (uncharged), their energy is converted to an energetic electron via either thephotoelectric effect,Compton scatteringorpair production. The chemistry of atomic de-excitation in the scintillator produces a multitude of low-energy photons, typically near the blue end of the visible spectrum. The quantity is proportional to the energy deposited by the ionizing particle. These can be directed to the photocathode of a photomultiplier tube which emits at most one electron for each arriving photon due to thephotoelectric effect. This group of primary electrons is electrostatically accelerated and focused by an electrical potential so that they strike the first dynode of the tube. The impact of a single electron on the dynode releases a number of secondary electrons which are in turn accelerated to strike the second dynode. Each subsequent dynode impact releases further electrons, and so there is a current amplifying effect at each dynode stage. Each stage is at a higher potential than the previous to provide the accelerating field. The resultant output signal at the anode is a measurable pulse for each group of photons from an original ionizing event in the scintillator that arrived at the photocathode and carries information about the energy of the original incident radiation. When it is fed to acharge amplifierwhich integrates the energy information, an output pulse is obtained which is proportional to the energy of the particle exciting the scintillator. 
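The cascade described above is easy to quantify with a back-of-the-envelope calculation (the numbers below are typical textbook orders of magnitude chosen for illustration, not values from the source): if each dynode releases on average δ secondary electrons per incident electron, an N-stage tube has a gain of roughly δ^N, and the anode charge scales with the number of scintillation photons reaching the photocathode, hence with the energy deposited in the scintillator.

```python
# Rough pulse-size estimate for a scintillation counter (illustrative numbers only).
e = 1.602e-19                  # electron charge, in coulombs

deposited_energy_keV = 662     # e.g. a fully absorbed Cs-137 gamma ray
photons_per_keV = 40           # order of magnitude for a NaI(Tl)-type scintillator
light_collection = 0.5         # fraction of photons reaching the photocathode
quantum_efficiency = 0.25      # photoelectrons produced per arriving photon

photoelectrons = (deposited_energy_keV * photons_per_keV
                  * light_collection * quantum_efficiency)

delta, stages = 4, 10          # secondary electrons per dynode, number of dynodes
gain = delta ** stages         # overall multiplication of the dynode chain

anode_charge = photoelectrons * gain * e
print(f"{photoelectrons:.0f} photoelectrons, gain {gain:.1e}, "
      f"anode charge {anode_charge:.2e} C")
```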
The number of such pulses per unit time also gives information about the intensity of the radiation. In some applications individual pulses are not counted, but rather only the average current at the anode is used as a measure of radiation intensity. The scintillator must be shielded from all ambient light so that external photons do not swamp the ionization events caused by incident radiation. To achieve this a thin opaque foil, such as aluminized mylar, is often used, though it must have a low enough mass to minimize undue attenuation of the incident radiation being measured. The article on thephotomultipliertube carries a detailed description of the tube's operation. The scintillator consists of a transparentcrystal, usually a phosphor, plastic (usually containinganthracene) ororganic liquid(seeliquid scintillation counting) that fluoresces when struck byionizing radiation. Cesium iodide(CsI) in crystalline form is used as the scintillator for the detection of protons and alpha particles.Sodium iodide(NaI) containing a small amount ofthalliumis used as a scintillator for the detection of gamma waves andzinc sulfide(ZnS) is widely used as a detector of alpha particles.Zinc sulfideis the materialRutherfordused to perform his scattering experiment.Lithium iodide(LiI) is used in neutron detectors. The quantum efficiency of agamma-raydetector (per unit volume) depends upon thedensityofelectronsin the detector, and certain scintillating materials, such assodium iodideandbismuth germanate, achieve high electron densities as a result of the highatomic numbersof some of the elements of which they are composed. However,detectors based on semiconductors, notably hyperpuregermanium, have better intrinsic energy resolution than scintillators, and are preferred where feasible forgamma-ray spectrometry. In the case ofneutrondetectors, high efficiency is gained through the use of scintillating materials rich inhydrogenthatscatterneutrons efficiently.Liquid scintillation countersare an efficient and practical means of quantifyingbeta radiation. Scintillation counters are used to measure radiation in a variety of applications including hand heldradiation survey meters, personnel andenvironmental monitoringforradioactive contamination, medical imaging, radiometric assay, nuclear security and nuclear plant safety. Several products have been introduced in the market utilising scintillation counters for detection of potentially dangerous gamma-emitting materials during transport. These include scintillation counters designed for freight terminals, border security, ports, weigh bridge applications, scrap metal yards and contamination monitoring of nuclear waste. There are variants of scintillation counters mounted on pick-up trucks and helicopters for rapid response in case of a security situation due todirty bombsorradioactive waste.[4][failed verification][5][failed verification]Hand-held units are also commonly used.[6] In theUnited Kingdom, theHealth and Safety Executive, or HSE, has issued a user guidance note on selecting the correct radiation measurement instrument for the application concerned. This covers all radiation instrument technologies, and is a useful comparative guide to the use of scintillation detectors.[7] Radioactive contaminationmonitors, for area or personal surveys require a large detection area to ensure efficient and rapid coverage of monitored surfaces. For this a thin scintillator with a large area window and an integrated photomultiplier tube is ideally suited. 
They find wide application in the field of radioactive contamination monitoring of personnel and the environment. Detectors are designed to have one or two scintillation materials, depending on the application. "Single phosphor" detectors are used for either alpha or beta, and "Dual phosphor" detectors are used to detect both.[8] A scintillator such as zinc sulphide is used for alpha particle detection, whilst plastic scintillators are used for beta detection. The resultant scintillation energies can be discriminated so that alpha and beta counts can be measured separately with the same detector,[8]This technique is used in both hand-held and fixed monitoring equipment, and such instruments are relatively inexpensive compared with the gas proportional detector. Scintillation materials are used for ambient gamma dose measurement, though a different construction is used to detect contamination, as no thin window is required. Scintillators often convert a singlephotonof high energyradiationinto a high number of lower-energy photons, where the number of photons permegaelectronvoltof input energy is fairly constant. By measuring the intensity of the flash (the number of the photons produced by thex-rayor gamma photon) it is therefore possible to discern the original photon's energy. The spectrometer consists of a suitablescintillatorcrystal, aphotomultipliertube, and a circuit for measuring the height of the pulses produced by the photomultiplier. The pulses are counted and sorted by their height, producing a x-y plot of scintillator flashbrightnessvs number of the flashes, which approximates the energy spectrum of the incident radiation, with some additional artifacts. A monochromatic gamma radiation produces a photopeak at its energy. The detector also shows response at the lower energies, caused byCompton scattering, two smaller escape peaks at energies 0.511 and 1.022 MeV below the photopeak for the creation of electron-positron pairs when one or both annihilation photons escape, and abackscatterpeak. Higher energies can be measured when two or more photons strike the detector almost simultaneously (pile-up, within the time resolution of thedata acquisitionchain), appearing as sum peaks with energies up to the value of two or more photopeaks added[8]
https://en.wikipedia.org/wiki/Scintillation_counter
Apreferential attachment processis any of a class of processes in which some quantity, typically some form of wealth or credit, is distributed among a number of individuals or objects according to how much they already have, so that those who are already wealthy receive more than those who are not. "Preferential attachment" is only the most recent of many names that have been given to such processes. They are also referred to under the namesYule process,cumulative advantage,the rich get richer, and theMatthew effect. They are also related toGibrat's law. The principal reason for scientific interest in preferential attachment is that it can, under suitable circumstances, generatepower lawdistributions.[1]If preferential attachment is non-linear, measured distributions may deviate from a power law.[2]These mechanisms may generate distributions which are approximately power law over transient periods.[3][4] A preferential attachment process is astochasticurn process, meaning a process in which discrete units of wealth, usually called "balls", are added in a random or partly random fashion to a set of objects or containers, usually called "urns". A preferential attachment process is an urn process in which additional balls are added continuously to the system and are distributed among the urns as an increasing function of the number of balls the urns already have. In the most commonly studied examples, the number of urns also increases continuously, although this is not a necessary condition for preferential attachment and examples have been studied with constant or even decreasing numbers of urns. A classic example of a preferential attachment process is the growth in the number ofspeciespergenusin some highertaxonof biotic organisms.[5]New genera ("urns") are added to a taxon whenever a newly appearing species is considered sufficiently different from its predecessors that it does not belong in any of the current genera. New species ("balls") are added as old onesspeciate(i.e., split in two) and, assuming that new species belong to the same genus as their parent (except for those that start new genera), the probability that a new species is added to a genus will be proportional to the number of species the genus already has. This process, first studied by British statisticianUdny Yule, is alinearpreferential attachment process, since the rate at which genera accrue new species is linear in the number they already have. Linear preferential attachment processes in which the number of urns increases are known to produce a distribution of balls over the urns following the so-calledYule distribution. In the most general form of the process, balls are added to the system at an overall rate ofmnew balls for each new urn. Each newly created urn starts out withk0balls and further balls are added to urns at a rate proportional to the numberkthat they already have plus a constanta> −k0. 
With these definitions, the fraction P(k) of urns having k balls in the limit of long time is given by[6] P(k) = B(k+a, γ)/B(k0+a, γ−1),{\displaystyle P(k)={\mathrm {B} (k+a,\gamma ) \over \mathrm {B} (k_{0}+a,\gamma -1)},} for k ≥ k0 (and zero otherwise), where B(x,y) is the Euler beta function: B(x,y) = Γ(x)Γ(y)/Γ(x+y),{\displaystyle \mathrm {B} (x,y)={\Gamma (x)\Gamma (y) \over \Gamma (x+y)},} with Γ(x) being the standard gamma function, and γ = 2 + (k0+a)/m.{\displaystyle \gamma =2+{k_{0}+a \over m}.} The beta function behaves asymptotically as B(x,y) ~ x^(−y) for large x and fixed y, which implies that for large values of k we have P(k) ∝ k^(−γ).{\displaystyle P(k)\propto k^{-\gamma }.} In other words, the preferential attachment process generates a "long-tailed" distribution following a Pareto distribution or power law in its tail. This is the primary reason for the historical interest in preferential attachment: the species distribution and many other phenomena are observed empirically to follow power laws and the preferential attachment process is a leading candidate mechanism to explain this behavior. Preferential attachment is considered a possible candidate for, among other things, the distribution of the sizes of cities,[7] the wealth of extremely wealthy individuals,[7] the number of citations received by learned publications,[8] and the number of links to pages on the World Wide Web.[1] The general model described here includes many other specific models as special cases. In the species/genus example above, for instance, each genus starts out with a single species (k0 = 1) and gains new species in direct proportion to the number it already has (a = 0), and hence P(k) = B(k,γ)/B(k0,γ − 1) with γ = 2 + 1/m. Similarly the Price model for scientific citations[8] corresponds to the case k0 = 0, a = 1 and the widely studied Barabási-Albert model[1] corresponds to k0 = m, a = 0. Preferential attachment is sometimes referred to as the Matthew effect, but the two are not precisely equivalent. The Matthew effect, first discussed by Robert K. Merton,[9] is named for a passage in the biblical Gospel of Matthew: "For everyone who has will be given more, and he will have an abundance. Whoever does not have, even what he has will be taken from him." (Matthew 25:29, New International Version.) The preferential attachment process does not incorporate the taking away part. This point may be moot, however, since the scientific insight behind the Matthew effect is in any case entirely different. Qualitatively it is intended to describe not a mechanical multiplicative effect like preferential attachment but a specific human behavior in which people are more likely to give credit to the famous than to the little known. The classic example of the Matthew effect is a scientific discovery made simultaneously by two different people, one well known and the other little known. It is claimed that under these circumstances people tend more often to credit the discovery to the well-known scientist. Thus the real-world phenomenon the Matthew effect is intended to describe is quite distinct from (though certainly related to) preferential attachment. The first rigorous consideration of preferential attachment seems to be that of Udny Yule in 1925, who used it to explain the power-law distribution of the number of species per genus of flowering plants.[5] The process is sometimes called a "Yule process" in his honor.
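The closed-form distribution above is straightforward to evaluate, and it can be checked against a direct simulation of the urn process. The sketch below (illustrative parameter choices and function names) does both for the classic case k0 = 1, a = 0, m = 1, for which γ = 3 and P(1) = 2/3.

```python
import numpy as np
from scipy.special import betaln

def yule_pmf(k, k0=1, a=0.0, m=1.0):
    """P(k) = B(k + a, gamma) / B(k0 + a, gamma - 1), with gamma = 2 + (k0 + a) / m."""
    gamma = 2.0 + (k0 + a) / m
    return np.exp(betaln(k + a, gamma) - betaln(k0 + a, gamma - 1.0))

def simulate_urns(new_urns=5000, k0=1, m=1, a=0.0, seed=0):
    """Linear preferential attachment: each arriving urn starts with k0 balls and
    brings m further balls, which are assigned to existing urns with probability
    proportional to (k_i + a)."""
    rng = np.random.default_rng(seed)
    counts = [k0]
    for _ in range(new_urns):
        for _ in range(m):
            w = np.asarray(counts, dtype=float) + a
            i = rng.choice(len(counts), p=w / w.sum())
            counts[i] += 1
        counts.append(k0)
    return np.asarray(counts)

counts = simulate_urns()
k = np.arange(1, 8)
print(np.round(yule_pmf(k), 4))                          # theoretical P(k)
print(np.round([(counts == kk).mean() for kk in k], 4))  # simulated frequencies
```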
Yule was able to show that the process gave rise to a distribution with a power-law tail, but the details of his proof are, by today's standards, contorted and difficult, since the modern tools of stochastic process theory did not yet exist and he was forced to use more cumbersome methods of proof. Most modern treatments of preferential attachment make use of themaster equationmethod, whose use in this context was pioneered bySimonin 1955, in work on the distribution of sizes of cities and other phenomena.[7] The first application of preferential attachment to learned citations was given byPricein 1976.[8](He referred to the process as a "cumulative advantage" process.) His was also the first application of the process to the growth of a network, producing what would now be called ascale-free network. It is in the context of network growth that the process is most frequently studied today. Price also promoted preferential attachment as a possible explanation for power laws in many other phenomena, includingLotka's lawof scientific productivity andBradford's lawof journal use. The application of preferential attachment to the growth of the World Wide Web was proposed byBarabási and Albertin 1999.[1]Barabási and Albert also coined the name "preferential attachment" by which the process is best known today[10]and suggested that the process might apply to the growth of other networks as well. For growing networks, the precise functional form of preferential attachment can be estimated bymaximum likelihood estimation.[11]
https://en.wikipedia.org/wiki/Cumulative_advantage
TheMatthew effect, sometimes called theMatthew principleorcumulative advantage,[1]is the tendency of individuals to accrue social or economic success in proportion to their initial level of popularity, friends, and wealth. It is sometimes summarized by the adage or platitude "the rich get richer and the poor get poorer".[2][3]Also termed the "Matthew effect of accumulated advantage", taking its name from theParable of the Talentsin the biblicalGospel of Matthew, it was coined by sociologistsRobert K. MertonandHarriet Zuckermanin 1968.[4][5] Early studies of Matthew effects were primarily concerned with the inequality in the way scientists were recognized for their work. However, Norman W. Storer, of Columbia University, led a new wave of research. He believed he discovered that the inequality that existed in the social sciences also existed in other institutions.[6] Later, innetwork science, a form of the Matthew effect was discovered in internet networks and calledpreferential attachment. The mathematics used for this network analysis of the internet was later reapplied to the Matthew effect in general, whereby wealth or credit is distributed among individuals according to how much they already have. This has the net effect of making it increasingly difficult for low ranked individuals to increase their totals because they have fewer resources to risk over time, and increasingly easy for high rank individuals to preserve a large total because they have a large amount to risk.[7] The concept is named according to two of theparables of Jesusin thesynoptic Gospels(Table 2, of theEusebian Canons). The concept concludes both synoptic versions of theparable of the talents: For to every one who has will more be given, and he will have abundance; but from him who has not, even what he has will be taken away. I tell you, that to every one who has will more be given; but from him who has not, even what he has will be taken away. The concept concludes two of the three synoptic versions of the parable of thelamp under a bushel(absent in the version of Matthew): For to him who has will more be given; and from him who has not, even what he has will be taken away. Take heed then how you hear; for to him who has will more be given, and from him who has not, even what he thinks that he has will be taken away. The concept is presented again in Matthew outside of a parable duringChrist's explanation to his disciples of the purpose of parables: And he answered them, "To you it has been given to know the secrets of the kingdom of heaven, but to them it has not been given. For to him who has will more be given, and he will have abundance; but from him who has not, even what he has will be taken away." Prior to being called "The Matthew effect",Udny Yule, in 1925, noticed the effect in flower populations, which in population growth studies is called theYule processin his honor. In thesociology of science, the first description of the Matthew effect was given byPricein 1976.[8](He referred to the process as a "cumulative advantage" process.) His was also the first application of the process to the growth of a network, producing what would now be called ascale-free network. It is in the context of network growth that the process is most frequently studied today. Price also promoted preferential attachment as a possible explanation for power laws in many other phenomena, includingLotka's lawof scientific productivity andBradford's lawof journal use. "Matthew effect" was a term coined byRobert K. 
MertonandHarriet Anne Zuckermanto describe how, among other things, eminent scientists will often get more credit than a comparatively unknown researcher, even if their work is similar; it also means that credit will usually be given to researchers who are already famous.[4][5]For example, a prize will almost always be awarded to the most senior researcher involved in a project, even if all the work was done by agraduate student. This was later formulated byStephen StiglerasStigler's law of eponymy– "No scientific discovery is named after its original discoverer" – with Stigler explicitly naming Merton as the true discoverer, making his "law" an example of itself. Merton and Zuckerman further argued that in the scientific community the Matthew effect reaches beyond simple reputation to influence the wider communication system, playing a part in social selection processes and resulting in a concentration of resources and talent. They gave as an example the disproportionate visibility given to articles from acknowledged authors, at the expense of equally valid or superior articles written by unknown authors. They also noted that the concentration of attention on eminent individuals can lead to an increase in their self-assurance, pushing them to perform research in important but risky problem areas.[4] The Matthew Effect also relates to broader patterns of scientific productivity, which can be explained by additional sociological concepts in science, such as the sacred spark, cumulative advantage, and search costs minimization by journal editors. The sacred spark paradigm suggests that scientists differ in their initial abilities, talent, skills, persistence, work habits, etc. that provide particular individuals with an early advantage. These factors have a multiplicative effect which helps these scholars succeed later. The cumulative advantage model argues that an initial success helps a researcher gain access to resources (e.g., teaching release, best graduate students, funding, facilities, etc.), which in turn results in further success. Search costs minimization by journal editors takes place when editors try to save time and effort by consciously or subconsciously selecting articles from well-known scholars. Whereas the exact mechanism underlying these phenomena is yet unknown, it is documented that a minority of all academics produce the most research output and attract the most citations.[9] In addition to its influence on recognition and productivity, the Matthew Effect can also be observed in the distribution of scientific resources, such as funding. A large Matthew effect was discovered in a study of science funding in the Netherlands, where winners just above the funding threshold were found to accumulate more than twice as much funding during the subsequent eight years as non-winners with near-identical review scores that fell just below the threshold.[10] In education, the term "Matthew effect" has been adopted by psychologistKeith Stanovich[11]and popularised by education theoristAnthony Kellyto describe a phenomenon observed in research on how new readers acquire the skills to read. Effectively, early success in acquiring reading skills usually leads to later successes in reading as the learner grows, while failing to learn to read before the third or fourth year of schooling may be indicative of lifelong problems in learning new skills.[12] This is because children who fall behind in reading would read less, increasing the gap between them and their peers. 
Later, when students need to "read to learn" (where before they were learning to read), their reading difficulty creates difficulty in most other subjects. In this way they fall further and further behind in school, dropping out at a much higher rate than their peers.[13]This effect has been used in legal cases, such asBrody v. Dare County Board of Education.[14]Such cases argue that early education intervention is essential fordisabledchildren, and that failing to do so negatively impacts those children.[15] A 2014 review of Matthew effect in education found mixed empirical evidence, where Matthew effect tends to describe the development of primary school skills, while a compensatory pattern was found for skills with ceiling effects.[16]A 2016 study on reading comprehension assessments for 99 thousand students found a pattern of stable differences, with some narrowing of the gap for students with learning disabilities.[17] Innetwork science, the Matthew effect was noticed aspreferential attachmentof earlier nodes in a network, which explains that these nodes tend to attract more links early on.[18] The application of preferential attachment to the growth of the World Wide Web was proposed byBarabási and Albertin 1999.[19]Barabási and Albert also coined the name "preferential attachment", and suggested that the process might apply to the growth of other networks as well. For growing networks, the precise functional form of preferential attachment can be estimated bymaximum likelihood estimation.[20] Due to preferential attachment, Matjaž Perc writes "a node that acquires more connections than another one will increase its connectivity at a higher rate, and thus an initial difference in the connectivity between two nodes will increase further as the network grows, while the degree of individual nodes will grow proportional with the square root of time."[7]The Matthew Effect therefore explains the growth of some nodes in vast networks such as the Internet.[21] A model for career progress quantitatively incorporates the Matthew Effect in order to predict the distribution of individual career length in competitive professions. The model predictions are validated by analyzing the empirical distributions of career length for careers in science and professional sports (e.g.Major League Baseball).[22]As a result, the disparity between the large number of short careers and the relatively small number of extremely long careers can be explained by the "rich-get-richer" mechanism, which in this framework, provides more experienced and more reputable individuals with a competitive advantage in obtaining new career opportunities. Bask (2024) reviewed theoretical research on academic career progression and found that Feichtinger et al. developed a model where a researcher’s reputation grows through scientific effort but declines without continual activity[23]Their model incorporates the Matthew effect, in that researchers with high initial reputations benefit more from their efforts, while those with low reputations may see theirs diminish even with similar effort. 
They showed that if a researcher starts with low reputation, their career is likely to decline and eventually end, whereas researchers starting with high reputation may either sustain a successful career or face early exit depending on their effort over time.[23] Experiments manipulating download counts or bestseller lists for books and music have shown that consumer activity follows the apparent popularity.[24][25][26] Social influence often induces a rich-get-richer phenomenon where popular products tend to become even more popular.[27]An example of the Matthew Effect's role in social influence is an experiment by Salganik, Dodds, and Watts in which they created an experimental virtual market named MUSICLAB. In MUSICLAB, people could listen to music and choose to download the songs they enjoyed the most. The songs on offer were unknown songs produced by unknown bands. Two groups were tested: one group was given no additional information about the songs, while the other was told the popularity of each song and the number of times it had previously been downloaded.[28]The group that could see which songs were most popular and most downloaded was biased toward choosing those songs as well. The songs that were most popular and most downloaded stayed at the top of the list and consistently received the most plays. In summary, performance rankings had the largest effect, boosting expected downloads the most, while download rankings had a noticeable but weaker effect.[29]Abeliuk et al. (2016) also showed that when "performance rankings" are used, a monopoly is created for the most popular songs.[30] The ideas of this theory were developed by Kenneth Ferraro and colleagues as an integrative ormiddle-range theory. Originally specified in fiveaxiomsand nineteen propositions, cumulative inequality theory incorporates elements from a number of existing theories and perspectives, several of which concern the study of society. In recent years, Ferraro and several other researchers have been testing and elaborating elements of the theory on a variety of topics to provide evidence for the theoretical framework; some applications of the theory in sociological studies are described below. The theory holds that "social systems generate inequality, which is manifested over the life course via demographic and developmental processes."[31] McDonough, Worts, Booker, et al. (2015), for example, studied the role of cumulative disadvantage in the generation of health inequality among mothers in Britain and the United States. The study examined "if adverse circumstances early in the life course cumulate as health harming biographical patterns across working and family caregiving years."[32]It also examined whether institutional context moderated the cumulative effects of micro-level processes. The results showed that existing health disparities of women in midlife, during their working and family-rearing years, were intensified by cumulative disadvantages caused by adversities in early life. Thus, the accumulation of disadvantage had negative consequences for women's occupational experiences and family life. McLean (2010), on the other hand, studied U.S. combat and non-combat veterans through the lens of cumulative disadvantage. He found that accumulated disadvantages caused by disability and unemployment were more likely to affect the lives of combat veterans than those of non-combat veterans.
Combat veterans suffered physical and emotional trauma with a disabling effect that impeded their ability to obtain employment. The research is important for implementing social policies that assist United States veterans in finding and retaining employment suited to their personal circumstances.[citation needed] Woolredge, Frank, Coulette, et al. (2016) studied the prison sentencing of racial groups, specifically African American males with prior felony convictions. They examined how pre-trial processes affect trial outcomes. They determined that cumulative disadvantage existed for African American males and young men, as measured by bail amounts, pre-trial detention, prison sentencing, and the absence of reductions in sentence length. The research aims to encourage changes in the justice system that reduce incarceration rates of African American males by lowering bail amounts and pre-trial imprisonment. Further studies are important for decreasing the incarceration of minority groups and for creating an unbiased justice system.[citation needed] Additionally, Ferraro & Moore (2003) have applied the theory to the study of long-term consequences of early obesity for midlife health and socioeconomic attainment. The study shows that obesity experienced in early life leads to lower-body disability as well as elevated health risks.[33]Moreover, the research mentions a risk that has drawn attention in recent years: being overweight is tied to negative stigma (DeJong 1980), which has influenced fair labor market positioning[34]and wages.[35] Lastly, Crystal, Shea, & Reyes (2016) studied the effects of cumulative advantage in increasing within-cohort economic inequality across different periods of time. The study used economic measures such as annual wealth value and household size, and inequality within age cohorts was analyzed using the Gini coefficient. The study covered the period from 1980 to 2010. The results showed that individuals aged 65 and over had higher rates of inequality, and that inequality increased significantly for baby boomers and during economic recessions and times of war. The research aims to estimate the possible impacts of Social Security changes on older adults in American society. In conclusion, cumulative inequality (or cumulative disadvantage) theory is being applied to a broad range of topics that affect public policy and the understanding of individuals' roles within society, with further applications expected in the coming years. The concept of cumulative advantage, based on Merton and Zuckerman's Matthew Effect, has been widely applied to the study oflife courseinequality.[36][37]Dannefer (2003) argued that inequalities in resources, health, and social status systematically widen over time, shaped by social institutions, economic structures, and psychosocial factors like perceived agency and self-efficacy. Early advantages or disadvantages become amplified, producing growing disparities as individuals age. Pallas (2009) further highlighted how cumulative advantage involves shifts between different types of capital, such as human, economic, and symbolic, complicating efforts to measure inequality over time.[38] Research has expanded cumulative advantage beyond aging to domains such as education, work, health, and wealth.[37]In education, early academic differences lead to greater access to opportunities and resources, compounding over time.
In the workforce, initial job placements and early career achievements create divergent paths in earnings and occupational mobility. Family background and neighborhood contexts also play a role, reinforcing early disparities across the life course.[37] Open Scienceis "the movement to make scientific research (including publications, data, physical samples, and software) and its dissemination accessible to all levels of society, amateur or professional". One of its key motivations is increasing equity in scientific endeavors. However, Ross-Hellauer et al. (2022) argue that Open Science's ambition to reduce inequalities in academia may inadvertently perpetuate or exacerbate existing disparities caused by cumulative advantage.[39]As Open Science progresses, it faces the challenge of balancing its goals of openness and accessibility with the risk that its practices could reinforce the privileges of the more advantaged, particularly in terms of access to knowledge, technology, and funding. The authors make this critique to urge professionals to reflect "upon the ways in which implementation may run counter to ideals".[39]
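The preferential attachment process described earlier in this article can be illustrated with a short simulation. The following sketch is a minimal Python illustration, not taken from any of the cited studies; the function name and parameters are hypothetical. Each new node links to an existing node chosen with probability proportional to that node's current degree, so early nodes keep pulling ahead of later arrivals.

import random

def preferential_attachment(n_nodes, seed=42):
    """Grow a network one node at a time; each newcomer attaches to one
    existing node chosen with probability proportional to its degree."""
    rng = random.Random(seed)
    degree = [1, 1]      # start from a single edge between nodes 0 and 1
    endpoints = [0, 1]   # each node appears here once per link, so sampling is degree-weighted
    for new_node in range(2, n_nodes):
        target = rng.choice(endpoints)
        degree[target] += 1
        degree.append(1)
        endpoints.extend([target, new_node])
    return degree

degrees = preferential_attachment(50_000)
print("earliest node's degree:", degrees[0])   # typically a few hundred links
print("latest node's degree:", degrees[-1])    # almost always 1
print("maximum degree:", max(degrees))         # a heavy-tailed, scale-free pattern

Consistent with the description quoted from Perc above, a node's degree in this process grows roughly with the square root of the time it has spent in the network, so the earliest nodes end up with far more links than later arrivals.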
https://en.wikipedia.org/wiki/Cumulative_inequality_theory
Dominant narratives, sometimes calleddominant cultural narratives, are frequently-repeated stories that are shared in society through various social and cultural institutions.[1]The term is most frequently used inpedagogy, the study of education. Dominant narratives are often discussed in tandem withcounternarratives. This term has been described as an "invisible hand" that guides reality and perceived reality.[2]Dominant cultureis defined as the majority cultural practices of a society.[3] Dominant narrative is similar in some ways to the ideas ofmetanarrativeorgrand narrative. SociologistJudith Lorberdefines and describes "A-category" members as those that occupy the dominant group in different aspects of life.[4] Dominant narratives are generally characterized as coming from, or being supported by,privilegedorpowerfulgroups.[5]According to political scientist Ronald R. Krebs, dominant narratives are maintained through public support because "even those who disagree with their premises typically abstain from publicly challenging them, for fear of being ignored or castigated."[6]Scholars have usedcritical discourse analysisto study dominant narratives, with the goal of disrupting the narratives.[7]InK–12economics educationin the United States,neoclassical economicsis considered a dominant narrative.[8] According to psychologistRobyn Fivush, counternarratives "use the dominant narrative as a starting point, agreeing on many of the main facts" while changing thesubjectiveperspective.[9]
https://en.wikipedia.org/wiki/Dominant_narrative
Doomscrollingordoomsurfingis the act of spending an excessive amount of time reading large quantities ofnews, particularly negative news, on thewebandsocial media.[1][2]The concept was coined around 2020, particularly in the context of theCOVID-19 pandemic. Surveys and studies suggest doomscrolling is most prevalent among young people.[3][4]It can be considered a form ofinternet addiction disorder. In 2019, a study by theNational Academy of Sciencesfound that doomscrolling can be linked to a decline in mental and physical health.[5]Numerous reasons for doomscrolling have been cited, includingnegativity bias,fear of missing out, increasedanxiety, and attempts at gaining control over uncertainty. The practice of doomscrolling can be compared to an older phenomenon from the 1970s called themean world syndrome, described as "the belief that the world is a more dangerous place to live in than it actually is as a result of long-term exposure to violence-related content on television".[6]Studies show that seeing upsetting news leads people to seek out more information on the topic, creating a self-perpetuating cycle.[7] In common parlance, the word "doom" connotesdarknessandevil, referring to one'sfate(cf.damnation).[8]In the internet's infancy, "surfing" was a common verb used in reference to browsing the internet; similarly, the word "scrolling" refers to sliding through online content.[8]After three years on theMerriam-Webster"watching" list, "doomscrolling" was recognized as an official word in September 2023.[9]Dictionary.comchose it as the top monthly trend in August 2020.[10]TheMacquarie Dictionarynamed doomscrolling as the 2020 Committee's Choice Word of the Year.[11] According to the Wall Street Journal, the term was first used in 2018.[12]The term continued to gain traction in the early 2020s[1][13]through events such as theCOVID-19 pandemic, theGeorge Floyd protests, the2020 U.S. presidential election, thestorming of the U.S. Capitolin 2021, and theRussian invasion of Ukrainesince 2022,[14]all of which have been noted to have exacerbated the practice of doomscrolling.[8][15][16]Doomscrolling became widespread among users ofTwitterduring the COVID-19 pandemic,[17]and has also been discussed in relation to theclimate crisis.[18]A 2024 survey conducted byMorning Consult concluded that approximately 31% of American adults doomscroll on a regular basis. The share is higher among younger adults, with millennials at 46% and Gen Z adults at 51%.[3] Infinite scrolling is a design approach which loads content continuously as the user scrolls down. It eliminates the need for pagination, thereby encouraging doomscrolling behaviours. The feature allows a social media user to "infinitely scroll", as the software continuously loads new content and displays an endless stream of information.
Consequently, this feature can exacerbate doomscrolling as it removes natural stopping points that a user might pause at.[19]The concept of infinite scrolling is sometimes attributed toAza Raskinby the elimination ofpaginationof web pages, in favor of continuously loading content as the user scrolls down the page.[20]Raskin later expressed regret at the invention, describing it as "one of the first products designed to not simply help a user, but to deliberately keep them online for as long as possible".[21]Usability research suggests infinite scrolling can present an accessibility issue.[20]The lack of stopping cues has been described as a pathway to bothproblematic smartphone useandproblematic social media use.[22][23] Social media companies play a significant role in the perpetuation of doomscrolling by leveraging algorithms designed to maximize user engagement. These algorithms prioritize content that is emotionally stimulating, often favoring negative news and sensationalized headlines to keep users scrolling. The business models of most social media platforms rely heavily on user engagement, which means that the longer people stay on their platforms, the more advertisements they see, and the more data is collected on their behavior. This creates a cycle where emotionally charged content—often involving negative or anxiety-inducing information—is repeatedly pushed to users, encouraging them to keep scrolling and consuming more content. Despite the well-documented negative effects of doomscrolling on mental health, social media companies are incentivized to maintain user engagement through these methods, making it challenging for individuals to break free from the habit.[24] The act of doomscrolling can be attributed to the naturalnegativity biaspeople have when consuming information.[13]Negativity biasis the idea that negative events have a larger impact on one's mental well-being than good ones.[25]Jeffrey Hall, a professor of communication studies at theUniversity of Kansasin Lawrence, notes that due to an individual's regular state of contentment, potential threats provoke one's attention.[26]One psychiatrist at theOhio State University Wexner Medical Centernotes that humans are "all hardwired to see the negative and be drawn to the negative because it can harm [them] physically."[27]He cites evolution as the reason for why humans seek out such negatives: if one's ancestors, for example, discovered how an ancient creature could injure them, they could avoid that fate.[28] As opposed to primitive humans, however, most people in modern times do not realize that they are even seeking negative information. Social media algorithms heed the content users engage in and display posts similar in nature, which can aid in the act of doomscrolling.[26]As per the clinic director of thePerelman School of Medicine's Center for the Treatment and Study of Anxiety: "People have a question, they want an answer, and assume getting it will make them feel better ... You keep scrolling and scrolling. Many think that will be helpful, but they end up feeling worse afterward."[28] Doomscrolling can also be explained by the fear of missing out, a common fear that causes people to take part in activities that may not be explicitly beneficial to them, but which they fear "missing out on".[29]This fear is also applied within the world of news, and social media. 
A research study conducted byStatistain 2013 found that more than half of Americans experienced FOMO on social media; further studies found FOMO affected 67% of Italian users in 2017, and 59% of Polish teenagers in 2021.[30] Thus, Bethany Teachman, a professor of psychology at theUniversity of Virginia, states that FOMO is likely to be correlated with doomscrolling due to the person's fear of missing out on crucial negative information.[31] Obsessively consuming negative news online can additionally be partially attributed to a person's psychological need for control. As stated earlier, the COVID-19 pandemic coincided with the popularity of doomscrolling. A likely reasoning behind this is that during uncertain times, people are likely to engage in doomscrolling as a way to help them gather information and a sense of mastery over the situation. This is done by people to reinforce their belief that staying informed, and in control will provide them with protection from grim situations.[32]However, while attempting to seize control, more often than not as a result of doomscrolling, individuals develop more anxiety towards the situation rather than lessen it.[33] Doomscrolling, the compulsion to engross oneself in negative news, may be the result of an evolutionary mechanism where humans are "wired to screen for and anticipate danger".[34]By frequently monitoring events surrounding negative headlines, staying informed may grant the feeling of being better prepared; however, prolonged scrolling may also lead to worsened mood and mental health as personal fears are heightened.[34] Theinferior frontal gyrus(IFG) plays an important role in information processing and integrating new information into beliefs about reality.[34][35]In the IFG, the brain "selectively filters bad news" when presented with new information as it updates beliefs.[34]When a person engages in doomscrolling, the brain may feel under threat and shut off its "bad news filter" in response.[34] In a study where researchers manipulated the left IFG usingtranscranial magnetic stimulation(TMS), patients were more likely to incorporate negative information when updating beliefs.[35]This suggests that the left IFG may be responsible for inhibiting bad news from altering personal beliefs; when participants were presented with favorable information and received TMS, the brain still updated beliefs in response to the positive news.[35]The study also suggests that the brain selectively filters information and updates beliefs in a way that reduces stress and anxiety by processing good news with higher regard (seeoptimistic bias).[35]Increased doomscrolling exposes the brain to greater quantities of unfavorable news and may restrict the brain's ability to embrace good news and discount bad news;[35]this can result innegative emotionsthat make one feel anxious, depressed, and isolated.[28] Health professionals have advised that doomscrolling can negatively impact existing mental health issues.[34][36][37]While the overall impact that doomscrolling has on people may vary,[38]it can often make one feel anxious, stressed, fearful, depressed, and isolated.[34] Professors of psychology at theUniversity of Sussexconducted a study in which participants watched television news consisting of "positive-, neutral-, and negative valenced material".[39][40]The study revealed that participants who watched the negative news programs showed an increase in anxiety, sadness, and catastrophic tendencies regarding personal worries.[39] A study conducted by 
psychology researchers in conjunction with theHuffington Postfound that participants who watched three minutes of negative news in the morning were 27% more likely to have reported experiencing a bad day six to eight hours later.[40]Comparatively, the group who watched solutions-focused news stories reported a good day 88% of the time.[40] A common method of studying doomscrolling is the questionnaire developed by Yurii B. Melnyk and Anatoliy V. Stadnik. Consisting of 12 items, the questionnaire is based on four criteria: addiction, rigidity, mental health, and reflection. The authors of the methodology also provide guidance on interpreting the severity levels of doomscrolling symptoms[41]. Some people have beguncopingwith the abundance of negative news stories by avoiding news altogether. A study from 2017 to 2022 showed that news avoidance is increasing, and that 38% of people admitted to sometimes or often actively avoiding the news in 2022, up from 29% in 2017.[42]Some journalists have admitted to avoiding the news; journalistAmanda Ripleywrote that "people producing the news themselves are struggling, and while they aren't likely to admit it, it is warping the coverage."[43]She also identified ways she believes could help fix the problem, such as intentionally adding more hope, agency, and dignity into stories so readers don't feel the helplessness which leads them to tune out entirely.[43] In 2024, a study by theUniversity of Oxford'sReuters Institute for the Study of Journalismindicated that an increasing number of people are avoiding the news.[44]In 2023, 39% of people worldwide reported actively avoiding the news, up from 29% in 2017. The study suggests that conflicts in Ukraine and the Middle East may be contributing factors to this trend. In the UK, interest in news has nearly halved since 2015.[45]
https://en.wikipedia.org/wiki/Doomscrolling
Egotismis defined as the drive to maintain and enhance favorable views of oneself and generally features an inflated opinion of one's personal features andimportancedistinguished by a person's amplified vision of one's self and self-importance. It often includes intellectual, physical, social, and other overestimations.[1]The egotist has an overwhelming sense of the centrality of the "me" regarding their personal qualities.[2] Egotism is closely related to an egocentric love for one's imagined self ornarcissism.[3]Egotists have a strong tendency to talk about themselves in a self-promoting fashion, and they may well be arrogant and boastful with agrandiosesense of their own importance.[4]Their inability to recognise the accomplishments of others[5]leaves them profoundly self-promoting; while sensitivity to criticism may lead, on the egotist's part, tonarcissistic rageat a sense of insult.[6] Egotism differs from bothaltruism– or behaviour motivated by the concern for others rather than for oneself – and fromegoism, the constant pursuit of one's self-interest. Various forms of "empirical egoism" have been considered consistent with egotism, but do not – which is also the case with egoism in general – necessitate having an inflated sense of self.[7] In developmental terms, two different paths can be taken to reach egotism – one being individual, and the other being cultural. With respect to the developing individual, a movement takes place from egocentricity to sociality during the process of growing up.[8]It is normal for an infant to have an inflated sense of egotism.[9]The over-evaluation of one's own ego[10]regularly appears in childish forms of love.[11] Optimal development allows a gradual decrease into a more realistic view of one's own place in the world.[12]A less optimal adjustment may later lead to what has been called defensive egotism, serving to overcompensate for a fragile concept of self.[13]Robin Skynnerhowever considered that in the main growing up leads to a state where "your ego is still there, but it's taking its proper limited place among all the other egos".[14] However, alongside such a positive trajectory of diminishingindividualegotism, a rather different arc of development can be noted in cultural terms, linked to what has been seen as the increasing infantilism of post-modern society.[15]Whereas in the nineteenth century egotism was still widely regarded as a traditional vice – forNathaniel Hawthorneegotism was a sort of diseased self-contemplation[16]–Romanticismhad already set in motion a countervailing current, whatRichard Eldridgedescribed as a kind of "cultural egotism, substituting the individual imagination for vanishing social tradition".[17]The romantic idea of the self-creating individual – of a self-authorizing, artistic egotism[18]– then took on broader social dimensions in the following century.Keatsmight still attackWordsworthfor the regressive nature of his retreat into the egotistical sublime;[19]but by the close of the twentieth century egotism had been naturalized much more widely by theMe generationinto theCulture of Narcissism. 
In the 21st century, romantic egotism has been seen as feeding into techno-capitalism in two complementary ways:[20]on the one hand, through the self-centred consumer, focused on their own self-fashioning through brand 'identity'; on the other through the equally egotistical voices of 'authentic' protest, as they rage against the machine, only to produce new commodity forms that serve to fuel the system for further consumption. There is a question mark over the relationship between sexuality and egotism.Sigmund Freudpopularly made the claim that intimacy can transform the egotist,[21]giving a new sense of humility in relation to others.[22] At the same time, it is very apparent that egotism can readily show itself in sexual ways[23]and indeed arguably one's whole sexuality may function in the service of egotistical needs.[24] Leo Tolstoy, used the termaduyevschina(after the protagonist Aduyev ofGoncharov's first novel,A Common Story) to describe social egotism as the inability of some people to see beyond their immediate interests.[25] The term egotism is derived from the Greek ("εγώ") and subsequently its Latinised ego (ego), meaning "self" or "I," and-ism, used to denote a system of belief. As such, the term shares early etymology withegoism. Egotism differs frompride. Although they share the state of mind of an individual, ego is defined by a person's self-perception.[citation needed]That is how the particular individual thinks, feels and distinguishes him/herself from others. Pride may be equated to the feeling one experiences as the direct result of one's accomplishment or success.[26]
https://en.wikipedia.org/wiki/Egotism
Anempathy gap, sometimes referred to as anempathy bias, is a breakdown or reduction inempathy(the ability to recognize, understand, and share another's thoughts and feelings) where it might otherwise be expected to occur. Empathy gaps may occur due to a failure in the process of empathizing[1]or as a consequence of stable personality characteristics,[2][3][4]and may reflect either a lack of ability or motivation to empathize. Empathy gaps can be interpersonal (toward others) or intrapersonal (toward the self, e.g. when predicting one's own future preferences). A great deal of social psychological research has focused on intergroup empathy gaps, their underlying psychological and neural mechanisms, and their implications for downstream behavior (e.g. prejudice toward outgroup members). Failures in cognitive empathy (also referred to asperspective-taking) may sometimes result from a lack of ability. For example, young children often engage in failures of perspective-taking (e.g., on false belief tasks) due to underdeveloped social cognitive abilities.[5]Neurodivergent individualsoftenface difficultiesinferring others' emotional and cognitive states, though thedouble empathy problemproposes that the problem is mutual, occurring as well in non-neurodivergent individuals' struggle to understand and relate to neurodivergent people.[6]Failures in cognitive empathy may also result from cognitive biases that impair one's ability to understand another's perspective (for example, see the related concept ofnaive realism.)[7] One's ability to perspective-take may be limited by one's current emotional state. For example, behavioral economics research has described a number of failures in empathy that occur due to emotional influences on perspective-taking when people make social predictions. People may either fail to accurately predict one's own preferences and decisions (intrapersonal empathy gaps), or to consider how others' preferences might differ from one's own (interpersonal empathy gaps).[8]For example, people not owning a certain good underestimate their attachment to that good were they to own it.[9] In other circumstances, failures in cognitive empathy may occur due to a lack of motivation.[10]For example, people are less likely to take the perspective of outgroup members with whom they disagree. Affective (i.e. emotional) empathy gaps may describe instances in which an observer and target do not experience similar emotions,[11]or when an observer does not experience anticipated emotional responses toward a target, such as sympathy and compassion.[12] Certain affective empathy gaps may be driven by a limited ability to share another's emotions. For example,psychopathyis characterized byimpairmentsin emotional empathy.[13] Individuals may be motivated to avoid empathizing with others' emotions due to the emotional costs of doing so. For example, according to C. D. Batson's model of empathy, empathizing with others may either result in empathic concern (i.e. feelings of warmth and concern for another) or personal distress (i.e. when another's distress causes distress for the self).[14]A trait-level tendency to experience personal distress (vs. empathic concern) may motivate individuals to avoid situations which would require them to empathize with others, and indeed predicts reduced helping behavior. 
Humans are less likely to helpoutgroupmembers in need, as compared to ingroup members.[15]People are also less likely to value outgroup members' lives as highly as those of ingroup members.[16]These effects are indicative of aningroup empathy bias,in which people empathize more with ingroup (vs. outgroup) members. Intergroup empathy gaps are often affective or cognitive in nature, but also extend to other domains such aspain. For example, a great deal of research has demonstrated that people show reduced responses (e.g. neural activity) when observing outgroup (vs. ingroup) members in pain.[17][18][19][20]These effects may occur for real-world social groups such as members of different races. In one study utilizing aminimal groups paradigm(in which groups are randomly assigned, ostensibly based on an arbitrary distinction), individuals also judged the perceived pain of ingroup members to be more painful than that of outgroup members.[21] Perhaps the most well-known "counter-empathic" emotion—i.e., an emotion that reflects an empathy gap for the target—isschadenfreude, or the experience of pleasure when observing or learning about another's suffering or misfortune.[22]Schadenfreude frequently occurs in intergroup contexts.[23][24]In fact, the two factors that most strongly predict schadenfreude are identification with one's group and the presence of competition between groups in conflict.[25][26]Competition may be explicit; for example, one study found that soccer fans were less likely to help an injured stranger wearing a rival team shirt than someone wearing an ingroup team shirt.[27]However, schadenfreude may also be directed toward members of groups associated with high-status, competitive stereotypes.[28]These findings correspond with thestereotype content model, which proposes that such groups elicit envy, thereby precipitating schadenfreude. Stress related to the experience of empathy may causeempathic distress fatigueandoccupational burnout,[29]particularly among those in the medical profession. Expressing empathy is an important component of patient-centered care, and can be expressed through behaviors such as concern, attentiveness, sharing emotions, vulnerability, understanding, dialogue, reflection, and authenticity.[30]However, expressing empathy can be cognitively and emotionally demanding for providers.[31]Physicians who lack proper support may experience depression and burnout, particularly in the face of the extended or frequent experiences of personal distress. Within the domain of social psychology, "empathy gaps" typically describe breakdowns in empathy toward others (interpersonal empathy gaps). However, research in behavioral economics has also identified a number of intrapersonal empathy gaps (i.e. toward one's self). For example, "hot-cold empathy gaps" describe a breakdown in empathy for one's future self—specifically, a failure to anticipate how one's future affective states will affect one's preferences.[32]Such failures can negatively impactdecision-making, particularly in regards to health outcomes. Hot-cold empathy gaps are related to the psychological concepts ofaffective forecastingandtemporal discounting. Both affective and cognitive empathy gaps can occur due to a breakdown in the process ofmentalizingothers' states. For example, breakdowns in mentalizing may include but are not limited to: Neural evidence also supports the key role of mentalizing in supporting empathic responses, particularly in an intergroup context. 
For example, a meta-analysis of neuroimaging studies of intergroup social cognition found that thinking about ingroup members (in comparison to outgroup members) was more frequently related to brain regions known to underlie mentalizing.[35] Gender differences in the experience of empathy have been a subject of debate. In particular, scientists have sought to determine whether observedgender differences in empathyare due to variance in ability, motivation, or both between men and women. Research to date raises the possibility that gender norms regarding the experience and expression of empathy may decrease men's willingness to empathize with others, and therefore their tendency to engage in empathy. A number of studies, primarily utilizing self-report, have found gender differences in men's and women's empathy. A 1977 review of nine studies found women to be more empathic than men on average.[36]A 1983 review found a similar result, although differences in scores were stronger forself-report,as compared to observational, measures.[37]In recent decades, a number of studies utilizing self-reported empathy have shown gender differences in empathy.[38][39][40]According to the results of a nationally representative survey, men reported less willingness to give money or volunteer time to a poverty relief organization as compared to women, a finding mediated by men's lower self-reported feelings of empathic concern toward others.[41] However, more recent work has found little evidence that gender differences in self-reported empathy are related to neurophysiological measures (hemodynamic responsesand pupil dilation).[42]This finding raises the possibility that self-reported empathy may not be driven by biological differences in responses, but rather gender differences in willingness to report empathy. Specifically, women may be more likely to report experiencing empathy because it is more gender-normative for women than men.[43]In support of this idea, a study found that manipulating the perceived gender normativity of empathy eliminated gender differences in men and women's self-reported empathy. Specifically, assigning male and female participants to read a narrative describing fictitious neurological research evidence which claimed that males score higher on measures of empathy eliminated the gender gap in self-reported empathy.[44] Psychological research has identified a number of trait differences associated with reduced empathic responses, including but not limited to: According to theperception–action-modelof empathy,[51]perception–action-coupling(i.e., the vicarious activation of the neural system for action during the perception of action) allows humans to understand others' actions, intentions, and emotions. According to this theory, when a "subject" individual observes an "object" individual, the object's physical movements and facial expressions activate corresponding neural mechanisms in the subject.[52]That is, by neurally simulating the object's observed states, the subject also experiences these states, the basis of empathy. Themirror neuron system[53]has been proposed as a neural mechanism supportingperception-action couplingandempathy, although such claims remain a subject of scientific debate. 
Although the exact (if any) role of mirror neurons in supporting empathy is unclear, evidence suggests that neural simulation (i.e., recreating neural states associated with a process observed in another) may generally support a variety of psychological processes in humans, including disgust,[54]pain,[55]touch,[56]and facial expressions.[57] Reduced neural simulation of responses to suffering may account in part for observed empathy gaps, particularly in an intergroup context. This possibility is supported by research demonstrating that people show reduced neural activity when they witness ethnic outgroup (vs. ingroup) members in physical or emotional pain.[17][18]In one study, Chinese and Caucasian participants viewed videos of Chinese and Caucasian targets, who displayed neutral facial expressions as they received either painful or non-painful stimulation to their cheeks.[17]Witnessing racial ingroup faces receive painful stimulation increased activity in the dorsal anterior cingulate cortex and anterior insula (two regions which generally activate during the experience of pain). However, these responses were diminished toward outgroup members in pain. These results were replicated among White-Italian and Black-African participants.[19]Additionally, EEG work has shown reduced neural simulation of movement (in primary motor cortex) for outgroup members, compared to in-group members.[20]This effect was magnified by prejudice and toward disliked groups (i.e. South-Asians, Blacks, and East Asians). A great deal of social neuroscience research has been conducted to investigate the social functions of the hormoneoxytocin,[58]including its role in empathy. Generally speaking, oxytocin is associated with cooperation between individuals (in both humans and non-human animals). However, these effects interact with group membership in intergroup settings: oxytocin is associated with increased bonding with ingroup, but not outgroup, members, and may thereby contribute to ingroup favoritism and intergroup empathy bias.[59]However, in one study of Israelis and Palestinians, intranasal oxytocin administration improved opposing partisans' empathy for outgroup members by increasing the salience of their pain.[60] In addition to temporary changes in oxytocin levels, the effect of oxytocin on empathic responses may also be shaped by an oxytocin receptor gene polymorphism,[61]such that certain individuals may differ in the extent to which oxytocin promotes ingroup favoritism. A number of studies have been conducted to identify the neural regions implicated in intergroup empathy biases.[62][33][63]This work has highlighted candidate regions supporting psychological processes such as mentalizing for ingroup members, deindividuation of outgroup members, and the pleasure associated with the experience of schadenfreude. A meta-analysis of 50 fMRI studies of intergroup social cognition found more consistent activation indorsomedial prefrontal cortex(dmPFC) during ingroup (vs. outgroup) social cognition.[35]dmPFC has previously been linked to the ability to infer others' mental states,[64][65][66]which suggests that individuals may be more likely to engage in mentalizing for ingroup (as compared to outgroup) members. dmPFC activity has also been linked to prosocial behavior;[67][68]thus, dmPFC's association with cognition about ingroup members suggests a potential neurocognitive mechanism underlying ingroup favoritism.
Activation patterns in theanterior insula(AI) have been observed when thinking about both ingroup and outgroup members. For example, greater activity in the anterior insula has been observed when participants view ingroup members on a sports team receiving pain, compared to outgroup members receiving pain.[69][70]In contrast, the meta-analysis referenced previously[35]found that anterior insula activation was more reliably related to social cognition about outgroup members. These seemingly divergent results may be due in part to functional differences between anatomic subregions of the anterior insula. Meta-analyses have identified two distinct subregions of the anterior insula: ventral AI, which is linked to emotional and visceral experiences (e.g. subjective arousal); and dorsal AI, which has been associated with exogenous attention processes such as attention orientation, salience detection, and task performance monitoring.[71][72][73]Therefore, anterior insula activation may occur more often when thinking about outgroup members because doing so is more attentionally demanding than thinking about ingroup members.[35] Lateralizationof function within the anterior insula may also help account for divergent results, due to differences in connectivity between left and right AI. The right anterior insula has greater connectivity with regions supporting attentional orientation and arousal (e.g. postcentral gyrus and supramarginal gyrus), compared to the left anterior insula, which has greater connectivity with regions involved in perspective-taking and cognitive motor control (e.g. dmPFC and superior frontal gyrus).[74]The previously referenced meta-analysis found right lateralization of anterior insula for outgroup compared to ingroup processing.[35]These findings raise the possibility that when thinking about outgroup members, individuals may use their attention to focus on targets' salient outgroup status, as opposed to thinking about the outgroup member as an individual. In contrast, the meta-analysis found left lateralization of anterior insula activity for thinking about ingroup compared to outgroup members. This finding suggests that the left anterior insula may help support perspective-taking and mentalizing about ingroup members, and thinking about them in an individuated way. However, these possibilities are speculative and lateralization may vary due to characteristics such as age, gender, and other individual differences, which should be accounted for in future research.[75][74] A number of fMRI studies have attempted to identify the neural activation patterns underlying the experience of intergroup schadenfreude, particularly toward outgroup members in pain. These studies have found increased activation in theventral striatum, a region related to reward processing and pleasure.[76] Breakdowns in empathy may reduce helping behavior,[77][78]a phenomenon illustrated by theidentifiable victim effect.
Specifically, humans are less likely to assist others who are not identifiable on an individual level.[79]A related concept is psychological distance: people are less likely to help those who feel more psychologically distant from them.[80] Reduced empathy for outgroup members is associated with a lower willingness to entertain another's point of view, a greater likelihood of ignoring a customer's complaints, a lower likelihood of helping others during a natural disaster, and a greater chance of opposing social programs designed to benefit disadvantaged individuals.[81][71] Empathy gaps may contribute to prejudicial attitudes and behavior. However, training people in perspective-taking, for example by providing instructions about how to take an outgroup member's perspective, has been shown to increase intergroup helping and the recognition of group disparities.[82]Perspective-taking interventions are more likely to be effective when a multicultural approach is used (i.e., an approach that appreciates intergroup differences), as opposed to a "colorblind" approach (e.g. an approach that attempts to emphasize a shared group identity).[82][83][84]
https://en.wikipedia.org/wiki/Empathy_gap
Famous for being famousis aparadoxicalterm, often usedpejoratively, for someone who attainscelebritystatus for no clearly identifiable reason—as opposed to fame based onachievement,skill, ortalent—and appears to generate their own fame, or someone who achieves fame through a family or relationship association with an existing celebrity.[1] The term originates from an analysis of the media-dominated world calledThe Image: A Guide to Pseudo-events in America(1962), by historian and social theoristDaniel J. Boorstin.[2]In it, he defined the celebrity as "a person who is known for his well-knownness".[3]He further argued that the graphic revolution in journalism and other forms of communication had severed fame from greatness, and that this severance hastened the decay of fame into mere notoriety. Over the years, the phrase has been glossed as "a celebrity is someone who is famous for being famous".[2] The British journalistMalcolm Muggeridgemay have been the first to use the actual phrase in the introduction to his bookMuggeridge Through The Microphone(1967) in which he wrote: In the past if someone was famous or notorious, it was for something—as a writer or an actor or a criminal; for some talent or distinction or abomination. Today one is famous for being famous. People who come up to one in the street or in public places to claim recognition nearly always say: "I've seen you on the telly!"[4] Neal Gablermore recently refined the definition of celebrity to distinguish those who have gained recognition for having done virtually nothing of significance—a phenomenon he dubbed the "Zsa Zsa Factor" in honor ofZsa Zsa Gabor, who parlayed her marriage to actorGeorge Sandersinto a brief movie career and the movie career into a much more enduring celebrity.[5]He goes on to define the celebrity as "human entertainment", by which he means a person who provides entertainment by the very process of living.[5] This topic is also known in German-speaking countries. Terms like "Schickeria" or "Adabei" characterize the media, which on the one hand are also understood critically but on the other hand are an important editorial topic that electronic quality media do not want to do without today for commercial reasons. People's reporting is fundamentally an important area of journalism that functions according to its own rules, especially in the print medium, and according to journalistNorman Schenzis characterized as "We no longer just write about an event, we tell stories".[6][7][8] The Washington PostwriterAmy Argetsingercoined the termfamesqueto define actors, singers, or athletes whose fame is mostly (if not entirely) due to one's physical attractiveness and/or personal life, rather than actual talent and (if any) successful career accomplishments. Argetsinger argued, "The famesque of 2009 are descended from that dawn-of-TV creation, the Famous for Being Famous. Turn on a talk show orHollywood Squaresand there'd beZsa Zsa Gabor,Joyce Brothers,Charles Nelson Reilly, so friendly and familiar and—what was it they did again?" She also used actressSienna Milleras a modern-day example; "Miller became famesque by datingJude Law. . .and then really famesque when he cheated on her with the nanny—to the point that she was the one who madeBalthazar Gettyfamesque (even though he's the one with the hit TV series,Brothers & Sisters) when he reportedly ran off from his wife with her for a while."[9] Celebutanteis aportmanteauof the words "celebrity" and "debutante". The male equivalent is sometimes spelledcelebutant. 
The term has been used to describe heiresses likeParis HiltonandNicole Richieinentertainment journalism.[10]More recently, the term and descriptions similar to the term have been applied to theKardashian-Jenner family. During an interview in 2011 with some of the Kardashians, interviewerBarbara Walterssaid, "You are all often described as 'famous for being famous'. You don't really act, you don’t sing, you don’t dance. You don't have any - forgive me - any talent."[11]Later in 2016,Timedescribed the Kardashian-Jenner family as ubiquitous celebutantes for being the highest earning reality stars.[12] The term has been traced back to a 1939Walter Winchellsociety column in which he used the word to describe prominent society debutanteBrenda Frazier, who was a traditional "high-society" debutante from a noted family, but whose debut attracted an unprecedented wave of media attention.[10][13]The word appeared again in a 1985Newsweekarticle aboutNew York City's clubland celebrities, focusing on the lifestyles of writerJames St. James,Lisa EdelsteinandDianne Brill, who was crowned "Queen of the Night" byAndy Warhol.[10][14]
https://en.wikipedia.org/wiki/Famous_for_being_famous
First World privilegerefers to the advantages accrued by an individual by virtue of being a national of aFirst World country. First-World privilege is often explicitly maintained by legal means such asimmigrationlaws andtrade barriers.[1]Further, very few nations have laws that prevent explicit discrimination on the basis of nationality for access to employment, promotions, education, scholarships, etc.[2]The laws of many nations actively encouragediscriminationagainst foreign nationals, for employment and educational purposes, via stringent immigration requirements, exorbitant fees, devaluation of educational qualifications, and scholarship quotas that usually favor citizens from developed nations.[3] First World nations usually have mutual trade and immigration arrangements and treaties that limit the discrimination faced by First-World nationals regarding employment, education and business in other First World countries. According to First World privilege theory, the existence of discriminatory laws and barriers across the world on balance systematically favors the employment, business opportunities, access to education and health care, and ultimately the welfare of citizens of First World nations, at the expense of the welfare of people in developing nations.[3] In general, the term "privilege" when referring to social inequality has been criticized for not distinguishing between "spared injustice" and "unjust enrichment".[4]
https://en.wikipedia.org/wiki/First_World_privilege
The Kardashian Index (K-Index), named after media personality Kim Kardashian, is a satirical measure of the discrepancy between a scientist's social media profile and publication record.[1][2] Proposed by Neil Hall in 2014, the measure compares the number of followers a research scientist has on Twitter to the number of citations they have for their peer-reviewed work. The relationship between the expected number of Twitter followers and the number of citations $C$ is described as $F(C) = 43.3\,C^{0.32}$, which is derived from the Twitter accounts and citation counts of a "randomish selection of 40 scientists" in 2014.[1] The Kardashian Index is then calculated as $\text{K-index} = F_a / F(C)$, where $F_a$ is the actual number of Twitter followers of a researcher and $F(C)$ is the number that researcher would be expected to have, given their citations. A high K-index indicates overblown scientific fame, while a low K-index suggests that a scientist is being undervalued. According to Hall, researchers whose K-index exceeds 5 can be considered "Science Kardashians". Hall wrote:[1] I propose that all scientists calculate their own K-index on an annual basis and include it in their Twitter profile. Not only does this help others decide how much weight they should give to someone's 140 character wisdom, it can also be an incentive – if your K-index gets above 5, then it's time to get off Twitter and write those papers. Hall also added "a serious note" about the gender disparity in his sample: of 14 female scientists, 11 had lower than predicted K-indices, and only one of the high-index scientists was female.[1] On February 11, 2022, on Twitter, Neil Hall stated that he intended the Kardashian Index to be a "dig at metrics not Kardashians" and that "the entire premise is satire".[3] Many jocular indices of scientific productivity were proposed in the immediate aftermath of the publication of the K-Index paper.[2] The Tesla Index measured the social isolation of scientists relative to their productivity, named after Nikola Tesla, whose work was hugely influential while he remained a social recluse.[4] People tweeted suggestions hashtagged #alternatesciencemetrics.[2][5] In 2022, John Ioannidis authored a paper in The BMJ arguing that signatories of the Great Barrington Declaration about how to deal with the COVID-19 pandemic were shunned as a fringe minority by those in favor of the John Snow Memorandum. According to him, the latter used their large numbers of followers on Twitter and other social media, as well as op-eds, to shape a scientific "groupthink" against the former, who had less influence.[6] The version of the index that Ioannidis used relied on Scopus citations instead of Google Scholar citations, since many of the signatories had no Google Scholar pages.[7] The K-index assumes that the number of citations of a given scientist is a reasonable proxy for their scientific value. This assumption has been criticized.[8][9] The proposal of the K-Index has also been interpreted as a criticism of the assumption that scientists should have a social media impact at all, when in reality social media footprint has no demonstrated correlation with scientific quality or scientific impact.[10]
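To make the formulas above concrete, the following is a minimal Python sketch of the calculation as defined in Hall's paper; the follower and citation counts in the example at the bottom are invented purely for illustration.

```python
def expected_followers(citations: float) -> float:
    """Expected Twitter followers F(C) = 43.3 * C^0.32, per Hall (2014)."""
    return 43.3 * citations ** 0.32


def kardashian_index(actual_followers: float, citations: float) -> float:
    """K-index = actual followers divided by the followers expected for that citation count."""
    return actual_followers / expected_followers(citations)


if __name__ == "__main__":
    # Invented example: a researcher with 2,000 citations and 25,000 followers.
    k = kardashian_index(actual_followers=25_000, citations=2_000)
    print(f"K-index: {k:.1f}")  # above 5 would make them a "Science Kardashian"
```

With these invented inputs the sketch reports a K-index of roughly 50, far above Hall's satirical threshold of 5.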
https://en.wikipedia.org/wiki/Kardashian_index
The knowledge gap hypothesis is a mass communication theory created by Philip J. Tichenor, George A. Donohue, and Clarice N. Olien in 1970.[1] The theory concerns how members of society process information from mass media differently depending on education level and socioeconomic status (SES). Because a gap in knowledge already exists between groups in a population, mass media amplifies this gap further. Research on the knowledge gap hypothesis covers the theoretical concepts the hypothesis builds upon, its historical background, its operationalization and the means by which it is measured, narrative reviews and meta-analytic support drawing on data from multiple studies, the new communication technologies that have affected the hypothesis, including the digital divide, and the existing critiques and scholarly debates surrounding it. The knowledge gap hypothesis has been implicit throughout the mass communication literature. Research published as early as the 1920s had already begun to examine the influence of individual characteristics on people's media content preferences. In 1929, William S. Gray and Ruth Munroe, authors of The Reading Interests and Habits of Adults, examined the educational advantages of adults that influenced their reading habits. The well-educated reader grasped the subject matter of newspaper articles more quickly and moved on to other types of reading material that fit their interests, while the less educated reader spent more time with a newspaper article because it took that person longer to comprehend the topic.[2] In 1940, Paul Lazarsfeld, head of the Office of Radio Research at Columbia University, set out to examine whether (1) the total amount of time that people listened to the radio and (2) the type of content they listened to correlated with their socioeconomic status. Not only did Lazarsfeld's data indicate that people of lower socioeconomic status tended to listen to more radio programming, they were also less likely to listen to "serious" radio content.[3] In 1950, Shirley A. Star, a professor in the University of Chicago's sociology department, and Helen MacGill Hughes, a sociologist at the same university, wrote "Report on an Educational Campaign: The Cincinnati Plan for the United Nations", which found that while the campaign was successful in reaching better-educated people, those with less education virtually ignored it. Additionally, after realizing that the highly educated people reached by the campaign also tended to be more interested in the topic, Star and Hughes suggested that knowledge, education, and interest may be interdependent.[4] In 1965, Philip Tichenor wrote his doctoral dissertation, Communication and Knowledge of Science in the Adult Population of the US, which served as a source for some of the information used and analyzed in the later article in which the term "knowledge gap hypothesis" was coined.[5] In 1970, Philip J. Tichenor, George A. Donohue, and Clarice N. Olien (later known as the Minnesota Team) published the original article, "Mass Media Flow and Differential Growth in Knowledge", which proposes the hypothesis and applies it to social and public-affairs information of general relevance, and less so to "audience-specific topics such as stock market quotations, society news, sports and lawn and garden care" (Tichenor, Donohue, & Olien, 1970, p.
160).[6] In 1983, Gaziano published a review of 58 studies on SES-based knowledge inequities, which emphasizes how variations in media exposure, knowledge definitions, and population differences contribute to inconsistent findings on knowledge gaps.[7] Tichenor, Donohue, and Olien suggest five factors that explain why the knowledge gap should exist. According to the authors Jack Rosenberry and Lauren A. Vicker, "A hypothesis is basically a research question: the researcher needs to ask questions and answer them in order to formulate theory. The term "hypothesis" also can be used to describe a theory that is still in the development stage or that has not been fully researched and verified. Because of the somewhat contradictory nature of the research findings, the knowledge gap has not yet achieved theory status and is still known as a hypothesis."[9] Since the 1970s, many policy makers and social scientists have been concerned with how community members acquire information via mass media, and over the years extensive research has taken different approaches to operationalizing and testing the knowledge gap hypothesis. Cecilie Gaziano, a researcher in communication and media, quantitative social research, and social stratification, wrote Forecast 2000: Widening Knowledge Gaps to update her 1983 analysis of knowledge gap studies.[11] Gaziano discusses the connection between education and the income disparities between the "haves" and "have-nots." She conducted two narrative reviews, one of 58 articles with relevant data in 1983[12] and the other of 39 additional studies in 1997.[11] The interconnections among income, education, and occupation have been factors in the knowledge gap throughout history, and major economic events have produced economic gaps that interact with it. Hwang and Jeong (2009) conducted a meta-analysis of 46 knowledge gap studies. Consistent with Gaziano's results, Hwang and Jeong found persistent knowledge gaps across time.[13] Gaziano writes, "the most consistent result is the presence of knowledge differentials, regardless of topic, methodological, or theoretical variations, study excellence, or other variables and conditions" (1997, p. 240). Evidence from several decades, Gaziano concludes, underscores the enduring character of knowledge gaps and indicates that they transcend topics and research settings. Gaziano also explains the conceptual framework of knowledge barriers and the critical conceptual issues involved in measuring them. Jeffrey Mondak and Mary Anderson (2004) released a statistical analysis of the knowledge gap hypothesis, finding that while increased media exposure can enhance political knowledge, pre-existing socioeconomic and gender disparities often determine who benefits the most, reinforcing rather than reducing knowledge inequities.[14] "All analyses point to a common conclusion: approximately 50% of the gender gap is illusory, reflecting response patterns that work to the collective advantage of male respondents."[14] The internet has changed how people engage with media. Internet-based media must be accessed with digital devices and an internet connection.
In the United States, there is a concern about thedigital dividebecause not all Americans have access to the internet and devices. With the hope that Internet would close the knowledge gap, it has exposed the following inequities: access, motivation and cognitive ability. The following research displays the link between access to internet and socioeconomic status, SES. According to a Pew Research Center survey of U.S. adults conducted Jan. 25-Feb. 8, 2021, Emily Vogels, a research associate focusing on internet and technology, wrote, "More than 30 years after the debut of the World Wide Web, internet use, broadband adoption and smartphone ownership have grown rapidly for all Americans – including those who are less well-off financially. However, the digital lives of Americans with lower and higher incomes remain markedly different."[15] "Americans with higher household incomes are also more likely to have multiple devices that enable them to go online. Roughly six-in-ten adults living in households earning $100,000 or more a year (63%) report having home broadband services, a smartphone, a desktop or laptop computer and a tablet, compared with 23% of those living in lower-income households."[15] Emily Vogels, continues, "The digital divide has been a central topic in tech circles for decades, with researchers, advocates and policymakers examining this issue. However, this topic has gained special attention during the coronavirus outbreak as much of daily life (such as work and school) moved online, leaving families with lower incomes more likely to face obstacles in navigating this increasing digital environment. For example, in April 2020, 59% of parents with lower incomes who had children in schools that were remote due to the pandemic said their children would likely face at least one of three digital obstacles to their schooling, such as a lack of reliable internet at home, no computer at home, or needing to use a smartphone to complete schoolwork."[15] The framework of the hypothesis was widely criticized throughout mass communications studies. In 1977, Ettema and Kline moved the lens of focus of the Knowledge gap hypothesis from deficits of knowledge acquisition to differences in acquiring knowledge. Central to their argument was the aspect of motivation that people of different SES would demonstrate to learn new information. Ettema and Kline concluded that the less education and knowledge held by people of lower SES was functional, thus enough for them.[16] In 1980, Dervin started questioning the traditional source-receiver model of mass communication, as concentrating on receivers’ failure to get and interpret information is “blaming the victim.”[17] In 2003, Everett Rogers renamed the Knowledge gap hypothesis to the Communication Effects Gap hypothesis, as the existing gap was attributed to miscommunication and had nothing to do with receivers of information. Further debates surrounded the Knowledge Gap Hypothesis regarding the definition of the hypothesis in the textbook as it seemed unattractive to people of different SESs. The idea of posing open-ended questions was introduced to let responders answer the questions more profoundly. However, Gaziano states that gaps in knowledge were still found, and according to Hwang and Jeong (2009), they resulted in smaller gaps compared to other methods of analyzing the hypothesis.[18][19]
https://en.wikipedia.org/wiki/Knowledge_gap_hypothesis
Thelaw of trivialityisC. Northcote Parkinson's 1957 argument that people within an organization commonly give disproportionate weight to trivial issues.[1]Parkinson provides the example of a fictional committee whose job was to approve the plans for anuclear power plantspending the majority of its time on discussions about relatively minor but easy-to-grasp issues, such as what materials to use for the staff bicycle shed, while neglecting the proposed design of the plant itself, which is far more important and a far more difficult and complex task. The law has been applied tosoftware developmentand other activities.[2]The termsbicycle-shed effect,bike-shed effect, andbike-sheddingwere coined based on Parkinson's example; it was popularized in theBerkeley Software Distributioncommunity by the Danish software developerPoul-Henning Kampin 1999[3]and, due to that, has since become popular within the field of software development generally. The concept was first presented as a corollary of his broader "Parkinson's law" spoof of management. He dramatizes this "law of triviality" with the example of a committee's deliberations on an atomic reactor, contrasting it to deliberations on a bicycle shed. As he put it: "The time spent on any item of the agenda will be in inverse proportion to the sum [of money] involved." A reactor is so vastly expensive and complicated that an average person cannot understand it (seeambiguity aversion), so one assumes that those who work on it understand it. However, everyone can visualize a cheap, simple bicycle shed, so planning one can result in endless discussions because everyone involved wants to implement their own proposal and demonstrate personal contribution.[4] After a suggestion of building something new for the community, like a bike shed, problems arise when everyone involved argues about the details. This is a metaphor indicating that it is not necessary to argue about every little feature based simply on having the knowledge to do so. Some people have commented that the amount of noise generated by a change is inversely proportional to the complexity of the change.[3] Behavioral research has produced evidence which confirms theories proposed by the law of triviality. People tend to spend more time on small decisions than they should, and less time on big decisions than they should. A simple explanation is that during the process of making a decision, one has to assess whether enough information has been collected to make the decision. If people make mistakes about whether they have enough information, then they will tend to feel overwhelmed by large and complex matters and stop collecting information too early to adequately inform their big decisions. The reason is that big decisions require collecting information for a long time and working hard to understand its complex ramifications. This leaves more of an opportunity to make a mistake (and stop) before getting enough information. Conversely, for small decisions, where people should devote little attention and act without hesitation, they may inefficiently continue to ponder for too long, partly because they are better able to understand the subject.[5] There are several other principles, well known in specific problem domains, which express a similar sentiment. Sayre's lawis a more general principle, which holds (among other formulations) that "In any dispute, the intensity of feeling is inversely proportional to the value of the issues at stake"; many formulations of the principle focus onacademia. 
Wadler's law, named for computer scientistPhilip Wadler,[6]is a principle which asserts that the bulk of discussion onprogramming-language designcenters onsyntax(which, for purposes of the argument, is considered a solved problem), as opposed tosemantics.[7]
https://en.wikipedia.org/wiki/Law_of_triviality
TheOrtega hypothesisholds that average or mediocrescientistscontribute substantially to the advancement ofscience.[1]According to this hypothesis, scientific progress occurs mainly by the accumulation of a mass of modest, narrowly specialized intellectual contributions. On this view, major breakthroughs draw heavily upon a large body of minor and little-known work, without which the major advances could not happen.[2] The Ortega hypothesis is widely held,[2]but a number of systematic studies of scientificcitationshave favored the opposing "Newton hypothesis", which says that scientific progress is mostly the work of a relatively small number of great scientists (afterIsaac Newton's statement that he "stood on the shoulders of giants").[1]The most important papers mostly cite other important papers by a small number of outstanding scientists, suggesting that the breakthroughs do not actually draw heavily on a large body of minor work.[2]Rather, the pattern of citations suggests that most minor work draws heavily on a small number of outstanding papers and outstanding scientists. Even minor papers by the most eminent scientists are cited much more than papers by relatively unknown scientists; and these elite scientists are clustered mostly in a small group of elite departments and universities.[2]The same pattern of disproportionate citation of a small number of scholars appears in fields as diverse as physics andcriminology.[3] The matter is not settled. No research has established that citation counts reflect the real influence or worth of scientific work. So, the apparent disproof of the Ortega hypothesis may be an artifact of inappropriately chosen data.[4]Stratification within the social networks of scientists may skew the citation statistics.[5]Many authors cite research papers without actually reading them or being influenced by them.[6]Experimental results in physics make heavy use of techniques and devices that have been honed by many previous inventors and researchers, but these are seldom cited in reports on those results.[7][8]Theoretical papers have the broadest relevance to future research, while reports of experimental results have a narrower relevance but form the basis of the theories. This suggests that citation counts merely favor theoretical results.[7] The name of the hypothesis refers toJosé Ortega y Gasset, who wrote inThe Revolt of the Massesthat "astoundingly mediocre" men of narrow specialties do most of the work of experimental science.[9]Ortega most likely would have disagreed with the hypothesis that has been named after him, as he held not that scientific progress is driven mainly by the accumulation of small works by mediocrities, but that scientificgeniusescreate a framework within which intellectually commonplace people can work successfully. For example, Ortega thought thatAlbert Einsteindrew upon the ideas ofImmanuel KantandErnst Machto form his own synthesis, and that Einstein did not draw upon masses of tiny results produced systematically by mediocrities. According to Ortega, science is mostly the work of geniuses, and geniuses mostly build on each other's work, but in some fields there is a real need for systematic laboratory work that could be done by almost anyone.[10] The "Ortega hypothesis" derives only from this last element of Ortega's theory, not the main thrust of it. 
Ortega characterized this type of research as "mechanical work of the mind" that does not require special talent or even much understanding of the results, performed by people who specialize in one narrow corner of one science and hold no curiosity beyond it.[10]
https://en.wikipedia.org/wiki/Ortega_hypothesis
Overconsumptiondescribes a situation whereconsumersoveruse their availablegoods and servicesto where they can't, or don't want to, replenish or reuse them.[1]Inmicroeconomics, this is the point where themarginal costof a consumer is greater than theirmarginal utility. The term overconsumption is quite controversial and does not necessarily have a single unifying definition.[2]When used to refer to natural resources to the point where theenvironmentis negatively affected, it is synonymous with the termoverexploitation. However, when used in the broader economic sense, overconsumption can refer to all types of goods and services, including artificial ones, e.g., "the overconsumption ofalcoholcan lead toalcohol poisoning."[3][4]Overconsumption is driven by several factors of the currentglobal economy, including forces likeconsumerism,planned obsolescence,economic materialism, and other unsustainable business models, and can be contrasted withsustainable consumption. Defining the amount of anatural resourcerequired to be consumed for it to count as "overconsumption" is challenging because defining a sustainable capacity of the system requires accounting for many variables. A system's total capacity occurs at regional and worldwide levels, which means that specific regions may have higher consumption levels of certain resources than others due to greater resources without overconsuming a resource. A long-term pattern of overconsumption in any region or ecological system can cause a reduction in natural resources, often resulting inenvironmental degradation. However, this is only when applying the word toenvironmental impacts. When used in an economic sense, this point is defined as when the marginal cost of a consumer is equal to their marginal utility.Gossen's law of diminishing utilitystates that at this point, the consumer realizes the cost of consuming/purchasing another item/good is not worth the amount ofutility(also known as happiness or satisfaction from the good) they'd receive, and therefore is not conducive to the consumer's wellbeing.[5] When used in the environmental sense, the discussion of overconsumption often parallels population size,growth, andhuman development: more people demanding a higher quality of living requires greater extraction of resources, which causes subsequentenvironmental degradation, such asclimate changeandbiodiversity loss.[6][7][8][9][10]Currently, the inhabitants of high-wealth, "developed" nations consume resources at a rate almost 32 times greater than those of the developing world, making up most of the human population (7.9 billion people).[11]However, the developing world is a growing consumer market. These nations are quickly gaining more purchasing power. The Global South, which includes cities in Asia, America, and Africa, is expected to account for 56% of consumption growth by 2030,[12]meaning that if current trends continue, relative consumption rates will shift more into these developing countries, whereas developed countries would start to plateau.Sustainable Development Goal 12, "responsible consumption and production," is the main international policy tool with goals to abate the impact of overconsumption. If everyone consumed resources at the US level, you will need another four or five Earths. Economic growthis sometimes seen as a driver for overconsumption due to a growing economy requiring compounding amounts of resource input to sustain the growth. China is an example where this phenomenon has been observed readily. 
China’s GDP increased massively from 1978, and energy consumption has increased by 6-fold.[14]By 1983, China’s consumption surpassed the biocapacity of their natural resources, leading to overconsumption.[15]In the last 30–40 years, China has seen significant increases in its pollution,land degradation, and non-renewable resource depletion, which aligns with its considerable economic growth.[16]It is unknown if other rapidly developing nations will see similar trends in resource overconsumption. TheWorldwatch Institutesaid China and India, with their booming economies, along with the United States, are the three planetary forces that are shaping the globalbiosphere.[17]TheState of the World2005 report said the two countries' higheconomic growthexposed the reality of severe pollution. The report states that The world's ecological capacity is simply insufficient to satisfy the ambitions of China, India, Japan, Europe, and the United States as well as the aspirations of the rest of the world in a sustainable way. In 2019, awarningon theclimate crisissigned by 11,000 scientists from over 150 nations said economic growth is the driving force behind the "excessive extraction of materials andoverexploitationof ecosystems" and that this "must be quickly curtailed to maintain long-term sustainability of the biosphere."[18][19]Also in 2019, theGlobal Assessment Report on Biodiversity and Ecosystem Servicespublished by theUnited Nations'Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, which found that up to one million species of plants and animals are at risk ofextinctionfrom human activity,[20]asserted that A key element of more sustainable future policies is the evolution of global financial and economic systems to build a global sustainable economy, steering away from the current limited paradigm of economic growth.[21] In addition,globalizationhas amplified resource overuse as developing economies serve as manufacturing hubs for wealthier nations. This results in an "outsourcing" of pollution and resource depletion, with developed countries benefiting from consumption while production-related ecological damage accumulates elsewhere.[22] Philip Cafaro, professor of philosophy at the School of Global Environmental Sustainability atColorado State University, wrote in 2022 that a scientific consensus has emerged which demonstrates that humanity is on the precipice of unleashing a majorextinction event, and that a major driver of this is a "rapidly growing human economy."[23] While often seen as a solution, technology can paradoxically contribute to increased resource use. TheJevons Paradoxsuggests that improvements in energy efficiency lead to greater overall consumption rather than reduced demand.[24]For example, while China has invested heavily inrenewable energy, overall energy demand continues to rise due toeconomic expansion, offsetting sustainability gains.[25]Furthermore, culturally, economic growth has fosteredmaterialismandconsumerismas indicators of success, further exacerbating overconsumption. Advertising, planned obsolescence, and fast economic cycles create a continuous push for higher consumption, making it challenging to curb unsustainable resource use.[26] Thus, whileeconomic growthis often seen as a marker of progress, it comes at a significant environmental cost. Without structural changes in globalmonetary policies, consumer behavior, and production models, overconsumption will likely continue accelerating alongsideeconomic expansion. 
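Returning briefly to the microeconomic definition given at the start of this article, a small worked example may help; the utility and cost functions below are invented solely for illustration and are not drawn from any cited source.

```latex
% Illustrative example of the marginal-cost / marginal-utility threshold.
% Suppose a consumer's marginal utility from the q-th unit of a good is
% MU(q) = 10 - q (diminishing utility, in the spirit of Gossen's law),
% while the marginal cost of each unit is constant at MC = 4.
%
%   MU(q) > MC  for q < 6   (each extra unit is still worth its cost)
%   MU(q) = MC  at  q = 6   (the consumer's optimal stopping point)
%   MU(q) < MC  for q > 6   (further consumption is overconsumption
%                            in the economic sense used above)
\[
  MU(q) = 10 - q, \qquad MC = 4, \qquad
  q^{*} = 6 \;\text{ since }\; MU(q^{*}) = MC .
\]
```

Beyond the sixth unit in this invented example, each additional unit costs more than the satisfaction it yields, which is the economic sense of overconsumption referred to above.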
Consumerismis a social and economic order that encourages the acquisition of goods and services in ever-increasing amounts. There is a spectrum of goods and services that the world population constantly consumes. These range from food and beverage, clothing and footwear, housing, energy, technology, transportation, education, health and personal care, financial services, and other utilities.Consumerism likewise refers to a preoccupation with purchasing goods that are not necessary for personal or family survival, and a value system that makes this preoccupation an important component of individual and social evaluation.[27]When the resources required to produce these goods and services are depleted beyond a reasonable level, it can be considered to be overconsumption. Third World countries, here referred to as "developing countries", have certain general characteristics, such as relatively low per capita economic structures, occupational concentrations in agriculture and animal husbandry, high levels of urbanization, high population growth rates, and low levels of education.[28]Despite the diversity of the socio-cultural environment of consumers in developing countries, they face similar economic problems. The socio-economic environment is transitional, with self-sufficient consumers at one end and urban elites with purchasing power who can enjoy a Western lifestyle at the other.[29] Because developing nations are rising quickly into the consumer class, the trends happening in these nations are of special interest. One prominent example is China's economic reforms in the late 1970s, in which the previously economically isolated state opened to foreign investment. Some argue that this economic revolution's outcome was most significantly influenced by foreign consumer economies,[30]while others focus on China's internal market developments.[31]Regardless, there as been much interest placed upon China's economic, political, and social shift towards consumerism. According to the World Bank, the highest shares of consumption, regardless of income lie in food, beverage, clothing, and footwear.[32]As of 2015, the top five consumer markets in the world were the United States, Japan, Germany, China, and France.[33] Planned and perceived obsolescenceis an important factor that explains why some overconsumption of consumer products exists.[34]This factor of the production revolves around designing products with the intent to be discarded after a short period of time. Perceived obsolescence is prevalent within the fashion and technology industries. Through this technique, products are made obsolete and replaced on a semi-regular basis. Frequent new launches of technology or fashion lines can be seen as a form of marketing-induced perceived obsolescence. Products designed to break after a certain period of time or use would be considered to be planned obsolescence.[35]The dark side of this vicious cycle is that we have no choice but to keep replacing certain products, which results in a huge amount of waste, known as e-waste.[36] According to a 2020 paper written by a team of scientists titled "Scientists' warning on affluence", the entrenchment of "capitalist, growth-driven economic systems" since World War II gave rise to increasing affluence along with "enormous increases in inequality, financial instability, resource consumption and environmental pressures on vital earth support systems." And the world's wealthiest citizens, referred to as "super-affluent consumers . . . 
which overlap with powerful fractions of the capitalist class," are the most responsible for environmental impacts through their consumption patterns worldwide. Anysustainablesocial and environmental pathways must include transcending paradigms fixated on economic growth and also reducing, not simply "greening", the overconsumption of the super-affluent, the authors contend, and propose adopting either reformist policies which can be implemented within a capitalist framework such as wealth redistribution through taxation (in particulareco-taxes), green investments,basic income guaranteesand reduced work hours to accomplish this, or looking to more radical approaches associated withdegrowth,eco-socialismandeco-anarchism, which would "entail a shift beyond capitalism and/or current centralised states."[37][38] In other words, the concentration of wealth allows the affluent to shape policies that maintain consumption-driven economies, limiting systemic change.[39]A 2020 Oxfam-SEI report found that the top 10% of earners contribute over half of globalcarbon emissions, while the wealthiest 1% emit more than double the poorest 50% combined.[40]Whilegreen technologiesoffer solutions, theJevons Paradoxsuggests efficiency gains often lead to increased overall consumption rather than reductions.[41]Thus, tackling affluence-driven overconsumption requires progressive taxation on high-carbon activities, curbing luxury emissions, and shifting economic priorities from GDP growth to sustainability.[42]Without intervention, extreme resource use by the wealthiest will continue to undermine global sustainability efforts. A fundamental effect of overconsumption is a reduction in the planet'scarrying capacity. Excessive unsustainable consumption will exceed the long-term carrying capacity of its environment (ecological overshoot) and subsequently cause resource depletion,environmental degradationand reducedecosystem health. In 2020 multinational team of scientists published a study, saying that overconsumption is the biggest threat to sustainability. According to the study, a drastic lifestyle change is necessary for solving the ecological crisis. According to one of the authors Julia Steinberger: “To protect ourselves from the worsening climate crisis, we must reduce inequality and challenge the notion that riches, and those who possess them, are inherently good.” The research was published on the site of theWorld Economic Forum. The leader of the forum professorKlaus Schwab, calls for a "great reset of capitalism".[43] A 2020 study published inScientific Reports, in which bothpopulation growthanddeforestationwere used as proxies for total resource consumption, warns that if consumption continues at the current rate for the next several decades, it can trigger a full or almost fullextinction of humanity. The study says that "while violent events, such as global war or natural catastrophic events, are of immediate concern to everyone, a relatively slow consumption of the planetary resources may be not perceived as strongly as a mortal danger for the human civilization." 
To avoid it humanity should pass from a civilization dominated by the economy to a "cultural society" that "privileges the interest of the ecosystem above the individual interest of its components, but eventually in accordance with the overall communal interest."[44][45] The scale of modern life's overconsumption can lead to a decline in economy and an increase in financial instability.[46]Some argue that overconsumption enables the existence of an "overclass", while others disagree with the role of overconsumption in class inequality.[47]Population, Development, and Poverty all coincide with overconsumption; how they interplay with each other is complex.[48]Because of this complexity it is difficult to determine the role of consumption in terms of economic inequality. In the long term, these effects can lead to increased conflict over dwindling resources[49]and in the worst case aMalthusian catastrophe.Lester Brownof theEarth Policy Institute, has said: "It would take 1.5 Earths to sustain our present level of consumption. Environmentally, the world is in an overshoot mode."[50] As of 2012, theUnited Statesalone was using 30% of the world's resources and if everyone were to consume at that rate, we would need 3-5 planets to sustain this type of living. Resources are quickly becoming depleted, with about ⅓ already gone. With new consumer markets rising in the developing countries which account for a much higher percentage of the world's population, this number can only rise.[51]According toSierra Club’s Dave Tilford, "With less than 5 percent of world population, the U.S. uses one-third of the world’s paper, a quarter of the world’s oil, 23 percent of the coal, 27 percent of the aluminum, and 19 percent of the copper."[52]According to BBC, aWorld Bankstudy has found that "Americans produce 16.5 tonnes ofcarbon dioxideper capita every year. By comparison, only 0.1 tonnes of the greenhouse gas is generated inEthiopiaper inhabitant."[53] A 2021 study published inFrontiers in Conservation Scienceposits that aggregate consumption growth will continue into the near future and perhaps beyond, largely due to increasing affluence and population growth. The authors argue that "there is no way—ethically or otherwise (barring extreme and unprecedented increases in human mortality)—to avoid rising human numbers and the accompanying overconsumption", although they do say that the negative impacts of overconsumption can perhaps be diminished by implementing human rights policies to lower fertility rates and decelerate current consumption patterns.[54] A report from the Lancet Commission says the same. The experts write: "Until now, undernutrition and obesity have been seen as polar opposites of either too few or too many calories," "In reality, they are both driven by the same unhealthy, inequitablefood systems, underpinned by the same political economy that is single-focused on economic growth, and ignores the negative health and equity outcomes. Climate change has the same story of profits and power,".[55]Obesity was a medical problem for people who overconsumed food and worked too little already in ancient Rome, and its impact slowly grew through history.[56]As to 2012, mortality fromobesitywas 3 times larger than from hunger,[57]reaching 2.8 million people per year by 2017[58] Just as overconsumption of food has led to widespread health crises such asobesityandmetabolic diseases, the overconsumption offossil fuelshas created an equally dire threat to both human health and the environment. 
Both forms of overconsumption stem from economic models that prioritize growth and short-term gains over long-term sustainability.[59] While industrialized food systems have fueled rising obesity rates, the relentless burning of fossil fuels, especially coal, has exacerbated air pollution, climate change, and public health risks on a global scale.[60] The overconsumption of fossil fuels, particularly coal, has profound implications for both environmental and human health. Burning fossil fuels releases a variety of harmful pollutants, including sulfur dioxide (SO₂), nitrogen oxides (NOₓ), particulate matter (PM), and carbon dioxide (CO₂).[61] These emissions contribute to environmental issues such as acid rain, smog, and climate change, while also posing significant health risks. In addition, exposure to fine particulate matter (PM2.5) from fossil fuel combustion is associated with respiratory diseases, cardiovascular diseases, and premature mortality. A 2021 study estimated that fossil fuel-related air pollution is responsible for over 10 million premature deaths annually worldwide.[62] Coal-fired power plants are particularly detrimental, emitting toxic substances that adversely affect human health. Communities near these plants experience higher rates of asthma, lung disease, and other health issues.[63] Workers at extraction sites and refineries face particularly severe occupational health risks, including end-stage respiratory diseases such as black lung disease, silicosis, chronic obstructive pulmonary disease, mesothelioma and other cancers, as well as safety risks from industrial fires and explosions.[64] In China, the extensive use of coal has led to severe air pollution, yet China's coal-fired power generation capacity is growing rapidly,[65] resulting in significant public health challenges. The country's reliance on coal-fired power plants has been linked to increased respiratory illnesses and premature deaths. Thus, addressing the health impacts of fossil fuel overconsumption necessitates a transition to cleaner energy sources, implementation of stricter emission regulations, and promotion of sustainable practices to reduce reliance on fossil fuels.[66] In 2010, the International Resource Panel published the first global scientific assessment of the impacts of consumption and production.[67] The study found that the most critical impacts are related to ecosystem health, human health and resource depletion. From a production perspective, it found that fossil-fuel combustion processes, agriculture and fisheries have the most important impacts. Meanwhile, from a final consumption perspective, it found that household consumption related to mobility, shelter, food, and energy-using products causes the majority of life-cycle impacts of consumption. According to the IPCC Fifth Assessment Report, under current policy human consumption in 2100 will be seven times greater than it was in 2010.[68] The idea of overconsumption is also strongly tied to the idea of an ecological footprint. The term "ecological footprint" refers to the "resource accounting framework for measuring human demand on the biosphere." Currently, China, for instance, has a per person ecological footprint roughly half the size of the US, yet has a population that is more than four times the size of the US.
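As a rough, purely illustrative sketch of how footprint accounting combines per-capita footprints with population size, here is a minimal Python example; the figures below are placeholder assumptions chosen only to mirror the approximate ratios just described, not measured values.

```python
# Illustrative sketch of ecological-footprint accounting.
# The figures below are placeholder assumptions that only mirror the
# approximate ratios described above (China ~half the US per-capita
# footprint, ~4x the US population); they are not measured data.

US_PER_CAPITA_FOOTPRINT = 8.0   # global hectares per person (assumed)
US_POPULATION = 330_000_000     # people (assumed round figure)

CHINA_PER_CAPITA_FOOTPRINT = US_PER_CAPITA_FOOTPRINT / 2   # roughly half of US
CHINA_POPULATION = US_POPULATION * 4                       # roughly 4x of US

us_total = US_PER_CAPITA_FOOTPRINT * US_POPULATION
china_total = CHINA_PER_CAPITA_FOOTPRINT * CHINA_POPULATION

print(f"US total footprint:    {us_total:.2e} gha")
print(f"China total footprint: {china_total:.2e} gha")
print(f"Ratio (China/US):      {china_total / us_total:.1f}")  # prints 2.0

# If China's per-capita footprint rose to the US level under these
# placeholder numbers, China's total would double again relative to this
# baseline; the text that follows cites the world-level estimate.
```

Under these placeholder ratios, China's total footprint comes out at about twice that of the US, consistent with the per-capita and population comparison above; the doubling estimate discussed next concerns what would happen if per-capita consumption also converged.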
It is estimated that if China developed to the level of the United States, world consumption rates would roughly double.[70] Other metrics have been developed to reflect different factors in calculating a country's carbon footprint. These include carbon intensity, which tracks carbon dioxide emissions per unit of GDP, of which China had 0.37 kilograms and the US had 0.25 kilograms in 2018, as well as consumption-based emissions, which attribute carbon emissions to the country in which a product is consumed rather than the country in which it is produced. Accounting in this way also puts China at a higher share of emissions, 25%, compared with 16% for the US.[71] Humanity's growing demand for livestock and other domestic animals has added to overshoot through domestic animal breeding, keeping, and consumption, especially environmentally destructive industrial livestock production.[citation needed] Globalization and modernization have brought Western consumer cultures to countries like China and India, including meat-intensive diets which are supplanting traditional plant-based diets. Between 166 billion and more than 200 billion land and aquatic animals are consumed annually by a global population of over 7 billion.[72][73] A 2018 study published in Science postulates that meat consumption is set to increase as the result of human population growth and rising affluence, which will increase greenhouse gas emissions and further reduce biodiversity.[74][75] According to a 2018 study published in Nature, meat consumption needs to be reduced by up to 90% in order to make agriculture sustainable.[76] With the development of consumerism and growing demands for consumption, the concept of climate debt has arisen. The term refers to the idea that larger countries have, in broad terms, caused more damage to the environment than their share of the world would suggest. These larger countries are often centered around consumer societies and, as a result, are the biggest producers and consumers, meaning that they contribute to pollution from the beginning of a product's life (production) all the way to its end (consumption and disposal). Additionally, such societies are built upon economic growth, which begets more pollution from consumerism. The term also encapsulates the related idea that developing countries are the places most affected by climate change, both in the effects they experience and in their ability to respond to and recover from those effects. In total, some argue that there is a severely one-sided disparity between the nations that create most forms of pollution and those that create very little, and that it runs opposite to the disparity between the nations most affected by pollution and those best supplied to handle it. Proposed action on repaying climate debt includes reducing emissions from the more developed, consumerist countries that are the biggest carbon emitters, including efforts to understand and strictly limit the extent to which they should reasonably emit greenhouse gases in relation to their geographical and political boundaries.
Additionally, action is proposed to be taken by supporting the affected underdeveloped countries by financial, industrial, and environmentally cleansing means.[77] 56% of respondents to a 2022 climate survey support a carbon budget system to limit the most climate-damaging consumption (62% of those under 30).[78] The most obvious solution to the issue of overconsumption is to simply slow the rate at which materials are becomingdepleted. From a capitalistic point of view, less consumption has negative effects on economies and so instead, countries must look to curb consumption rates but also allow for new industries, such asrenewable energyandrecyclingtechnologies, to flourish and deflect some of the economic burdens. Some movements think that a reduction in consumption in some cases can benefit the economy and society. They think that a fundamental shift in the global economy may be necessary to account for the current change that is taking place or that will need to take place. Movements and lifestyle choices related to stopping overconsumption include:anti-consumerism,freeganism,green economics,ecological economics,degrowth,frugality,downshifting,simple living,minimalism, theslow movement, and thrifting.[81][82] Many consider the final target of the movements as arriving to asteady-state economyin which the rate of consumption is optimal for health and environment.[83] Recent grassroots movements have been coming up with creative ways to decrease the number of goods we consume.The Freecycle Networkis a network of people in one's community that are willing to trade goods for other goods or services. It is a new take on thrifting while still being beneficial to both parties.[84] Other researchers and movements such asthe Zeitgeist Movementsuggest a new socioeconomic model which, through a structural increase ofefficiency, collaboration and locality in production as well as effectivesharing, increasedmodularity, sustainability and optimal design of products, are expected to reduce resource-consumption.[85]Solutions offered include consumers using market forces to influence businesses towards more sustainable manufacturing and products.[86] Another way to reduce consumption is to slow population growth by improving family planning services worldwide. In developing countries, more than 200 million women do not have adequate access.[87]Women's empowerment in these countries will also result in smaller families. Reducing resource consumption requires a fundamental shift away from selfish, consumer-oriented values towards pro-social values that motivate people to work towards limiting consumption in order to achieve environmental sustainability and promote the development and acceptance of economic and social policies aimed at curbing consumption levels.[88] Mindful consumption encourages individuals to moderate excessive acquisition and repetitive consumption by aligning their behavior with broader social and environmental goals. Emphasizing socially relevant benefits can help highlight the sustainable purpose of these services, thereby curbing overconsumption.[89]
https://en.wikipedia.org/wiki/Overconsumption
Social privilegeis an advantage or entitlement that benefits individuals belonging to certain groups, often to the detriment of others. Privileged groups can be advantaged based onsocial class,wealth,education,caste,age,height,skin color,physical fitness,nationality,geographic location,cultural differences,ethnicorracialcategory,gender,gender identity,neurodiversity,physical disability,sexual orientation,religion, and other differentiating factors.[1][2]Individuals can be privileged in one area, such as education, and not privileged in another area, such as health. The amount of privilege any individual has may change over time, such as when a person becomes disabled, or when a child becomes a young adult. The concept of privilege is generally considered to be a theoretical concept used in a variety of subjects and often linked to social inequality.[2]Privilege is also linked to social and cultural forms of power.[2]It began as an academic concept, but has since been invoked more widely, outside of academia.[3]This subject is based on the interactions of different forms of privilege within certain situations.[4]It can be understood as the inverse ofsocial inequality, in that it focuses on how power structures in society aid societally privileged people, as opposed to how those structures oppress others.[4] Arguably[further explanation needed], the history of privilege as a concept dates back to AmericansociologistandhistorianW. E. B. Du Bois's 1903 bookThe Souls of Black Folk. Here, he wrote that althoughAfrican Americanswere observant ofwhite Americansand conscious ofracial discrimination, white Americans did not think much about African-Americans, nor about the effects of racial discrimination.[5][6][7]In 1935, Du Bois wrote about what he called the "wages of whiteness" held by white Americans. He wrote that these included courtesy and deference, unimpeded admittance to all public functions, lenient treatment in court, and access to the best schools.[8] Early concepts that would lead to the term White Privilege were developed by the Weather Underground in the 1960s.[9][10]In 1988, Americanfeministand anti-racism activistPeggy McIntoshpublished "White Privilege and Male Privilege: A Personal Account of Coming to See Correspondences through Work in Women's Studies". Here, McIntosh documented forty-six privileges which she, as a white person, experienced in the United States. As an example, "I can be sure that if I need legal or medical help, my race will not work against me", and "I do not have to educate my children to be aware ofsystemic racismfor their own daily physical protection". McIntosh describedwhite privilegeas an "invisible package of unearned assets" whichwhite peopledo not want to acknowledge, and which leads to them being confident, comfortable, and oblivious about racial issues, while non-white people become unconfident, uncomfortable, and alienated.[11]McIntosh's essay has been credited for stimulating academic interest in privilege, which has been extensively studied in the decades since.[12] Historically, academic study of social inequality focused mainly on the ways in which minority groups were discriminated against, and ignored the privileges accorded to dominant social groups. 
That changed in the late 1980s, when researchers began studying the concept of privilege.[12] Privilege, as understood and described by researchers, is a function of multiple variables of varying importance, such asrace,age,gender,sexual orientation,gender identity,neurology,citizenship,religion,physical ability,health,level of education, and others. Race and gender tend to have the highest impacts given that one is born with these characteristics and they are immediately visible. However, religion, sexuality and physical ability are also highly relevant.[4]Some such as social class are relatively stable and others, such as age, wealth, religion and attractiveness, will or may change over time.[13]Some attributes of privilege are at least partly determined by the individual, such as level of education, whereas others such as race or class background are entirely involuntary. American sociologistMichael S. Kimmeluses the metaphor of a wind to explain the concept. He explains that when you walk into the wind you have to struggle for each step that you take. When you walk with the wind, you do not feel the wind at all but you still move faster than you would otherwise. The wind is social privilege and if it flows with you, it simply propels you forward with little effort of your own.[4] In the context of the theory, privileged people are considered to be "the norm", and, as such, gain invisibility and ease in society, with others being cast as inferior variants.[14]Privileged people see themselves reflected throughout society both in mass media and face-to-face in their encounters with teachers, workplace managers and other authorities, which researchers argue leads to a sense of entitlement and the assumption that the privileged person will succeed in life, as well as protecting the privileged person from worry that they may face discrimination from people in positions of authority.[15] Some academics, such as Peggy McIntosh, highlight a pattern where those who benefit from a type of privilege are unwilling to acknowledge it.[16][17][18]The argument may follow that such a denial constitutes a further injustice against those who do not benefit from the same form of privilege.Derald Wing Suehas referred to such denial as a form of "microaggression" or microinvalidation that negates the experiences of people who do not have privilege and minimizes the impediments they face.[19] McIntosh wrote that most people are reluctant to acknowledge their privilege, and instead look for ways to justify or minimize the effects of privilege stating that their privilege was fully earned. They justify this by acknowledging the acts of individuals of unearned dominance, but deny that privilege is institutionalized as well as embedded throughout our society. She wrote that those who believe privilege is systemic may nonetheless deny having personally benefited from it, and may oppose efforts to dismantle it.[11]According to researchers[who?], privileged individuals resist acknowledging their privileges because doing so would require them to acknowledge that whatever success they have achieved did not result solely through their own efforts. 
Instead it was partly due to a system that has developed to support them.[19]The concept of privilege calls into question the idea that society is ameritocracy, which researchers[who?]have argued is particularly unsettling for Americans for whom belief that they live in a meritocracy is a deeply held cultural value, and one that researchers commonly characterize as amyth.[14][20][21][22] InThe Gendered Society, Michael Kimmel wrote that when people at all levels of privilege do not feel personally powerful, arguments that they have benefited from unearned advantages seem unpersuasive.[21][further explanation needed] Catherine D'IgnazioandLauren Kleinin their bookData Feminism[23]used the termprivilege hazardwhen referring to the phenomenon where individuals in privileged positions remain unaware of their inherent advantages. This lack of awareness perpetuates societal inequalities and obstructs efforts to advocate for marginalized groups.[24]Privilege hazard is cited by other authors to acknowledge their positionality and risk of misinterpretating others' experiences.[25]Authors such asFelicia Pratto, Andrew Stewart,Peggy McIntoshand Taylor Phillips have contributed to this discourse by examining various forms of privilege hazards, including group dominance, white, male and class privilege. This exploration sheds light on how privilege manifests in different societal spheres and its implications for marginalized communities. In their exploration of Data Feminism,[23]Catherine D'IgnazioandLauren Kleindefine "privilege hazard" as the potential risks arising when privileged individuals, equipped with access to resources and data, attempt to address issues faced by marginalized groups. Relying solely on data may reinforce existing power dynamics. Software and data developers with privilege hazard may misinterpret data from contexts they don't understand.[26]The consequences may further marginalizing disadvantaged communities. To counter this, they advocate for an inclusive approach to data practices that centers on marginalized voices, aiming for a more equitable and just data ecosystem. The continuous presence of privilege hazard is evident in the concept of group dominance, wherein one social group holds significant advantages over others, leading to the consolidation of power and resources.Prattoand Stewart's research emphasizes that dominant groups often lack awareness of their privileged identities, viewing them as normal rather than as privileges.[27]Kaidi Wu andDavid Dunningdelve intohypocognition[28]within group dominance privilege, highlighting how individuals from dominant groups may struggle to grasp the difficulties faced by minorities due to lack of exposure. Racismis the belief that groups of humans possess different behavioral traits corresponding to physical appearance and can be divided based on the superiority of one race over another. This can result in particular ethnic and cultural groups having privileged access to a multitude of resources and opportunities, including education and work positions. Educational racism has been entrenched in American society since the creation of the United States of America. A system of laws in the 18th and 19th century known as theBlack Codes, criminalized the access to education for black people. Until the introduction of theThirteenth Amendment to the United States Constitution, theFourteenth Amendment to the United States Constitutionand theCivil Rights Act of 1866, seeking out an education was punishable by the law for them. 
This thus served to keep African Americans illiterate and only value them as a workforce. However, even after these institutional and legal changes, African Americans were still targeted by educational racism in the form ofschool segregation in the United States. In the 20th century the fight against educational racism reached its climax with the landmark Supreme Court caseBrown v. Board of Education.[29] Educational racism also took other forms throughout history such as the creation ofCanadian Indian residential school systemin 1831, which forcefully integrated indigenous children into schools aimed at erasing their ethnic, linguistic and cultural specificities in order to assimilate them into a white settler society. Until the last residential school closed in 1996, Canada had an educational system which specifically harmed and targeted indigenous children. An estimated 6,000 children died under that system.[30] Nowadays the opportunity gap pinpoints how educational racism is present in societies. The term refers to "the ways in which race, ethnicity, socioeconomic status, English proficiency, community wealth, familial situations, or other factors contribute to or perpetuate lower educational aspirations, achievement, and attainment for certain groups of students."[31]In other words, it is "the disparity in access to quality schools and the resources needed for all children to be academically successful."[32]Concretely this can be seen in the United States by considering how, according to the Schott Foundation's Opportunity to Learn Index, "students from historically disadvantaged families have just a 51 percent Opportunity to Learn when compared to White, non-Latino students."[32] According to McKinley et al. Students of color are pushed toward academic failure and continued social disenfranchisement. Racist policies and beliefs, in part, explain why children and young adults from racially marginalized groups fail to achieve academically at the same rate as their White peers.[33] Heterosexual privilegecan be defined as "the rights and unearned advantages bestowed on heterosexuals in society".[34]There are both institutional and cultural forces encouraging heterosexuality in society.[34]Sexual orientationis a repeated romantic, sexual or emotional attraction to one or multiple genders. There are a variety of categories includingheterosexual,gay,lesbian, andbisexual.[35]Heterosexual is considered the normative form of sexual orientation.[1] Heterosexual privilege is based in the existence ofhomophobiain society, particularly at the individual level. Between 2014 and 2018, 849 sexual orientation related hate crimes were committed inCanada.[36]Despite the fact thatCanadalegalizedsame-sex marriagein 2005 and has enshrined the protection of the human rights of all people of all sexual orientations, there is still societal bias against those who do not conform to heterosexuality.[37][38] Beyond this, institutions such as marriage stop homosexual partners from accessing each other's health insurance, tax benefits or adopting a child together.[34]Same sex marriage is legal in only 27 countries, mostly in the northern hemisphere.[39]This results in an inability for non-heterosexual couples to benefit from the institutional structures that are based on heterosexuality, resulting in privilege for those who are heterosexual. Peggy McIntosh and scholars likeBrian Loweryand Taylor Phillips discuss white privilege, highlighting the unseen benefits white individuals enjoy due to their race. 
McIntosh describes it as aninvisible knapsackof unearned advantages, leading to limited perspectives and empathy towards marginalized groups.[40]Taylor Phillips and Brian Lowery's research further elaborates on how whites tend to hide their privilege from themselves, maintaining thestatus quoand hindering progress towardequity. Male privilege encompasses the advantages men experience solely due to their gender.Peggy McIntoshnotes that males are conditioned not to recognize their privilege, leading to obliviousness and perpetuation of the privilege hazard.[40]Real-life examples, such as unequal distribution of household chores, illustrate howmale privilegeremains invisible to men due to societal norms.Tal Peretz expands on McIntosh's concept, questioning if men tend to overlook or critically examine their privilege.[41] Class privilegerefers to the benefits individuals enjoy based on their social or economic status. Taylor Phillips andBrian Lowery's study[42]reveals that when confronted with their privilege, individuals tend to defend themselves, attributing success to personal efforts rather than acknowledging systemic advantages. This defensive response shields individuals from accepting their unearned advantages, representing a form ofprivilege hazard. Shai Davaidai and Jacklyn Stein's works delve into perceptions of wealth and poverty, highlighting the impact of environments on individuals' views of their circumstances.[43][44] Privilege theory argues that each individual is embedded in a matrix of categories and contexts, and will be in some ways privileged and other ways disadvantaged, with privileged attributes lessening disadvantage and membership in a disadvantaged group lessening the benefits of privilege.[17]This can be further supported by the idea ofintersectionality, which was coined by Kimberle Crenshaw in 1989.[45]When applying intersectionality to the concept of social privilege, it can be understood as the way one form of privilege can be mitigated by other areas in which a person lacks privilege, for example, a black man who hasmale privilegebut no white privilege.[46]It is also argued that members of privileged social identity groups often do not recognize their advantages.[47] Intersections of forms of identity can either enhance privilege or decrease its effects.[48]Psychological analysis has found that people tend to frame their lives on different elements of their identity and therefore frame their lives through the privilege they do or do not have.[49]However, this analysis also found that this framing was stronger amongst certain nationalities, suggesting that identity and privilege may be more central in certain countries.[49]Often people construct themselves in relation to the majority, so ties to identity and therefore degrees of privilege can be stronger for more marginalized groups. Forms of privilege one might have can actually be decreased by the presence of other factors. For example, the feminization of a gay man may reduce his male privilege in addition to already lacking heterosexual privilege.[46]When acknowledging privilege, multifaceted situations must be understood individually. Privilege is a nuanced notion and an intersectional understanding helps bridge gaps in the original analysis. The concept of privilege has been criticized for ignoring relative differences among groups. 
For example, Lawrence Blum argued that in American culture there are status differences amongChinese,Japanese,Indians,Koreans, andCambodians, and amongAfrican Americans, black immigrants from theCaribbean, and black immigrants fromAfrica.[50] Blum agreed that privilege exists and is systemic yet nonetheless criticized the label itself, saying that the word "privilege" implies luxuries rather than rights, and arguing that some benefits of privilege such as unimpeded access toeducationandhousingwould be better understood as rights; Blum suggested that privilege theory should distinguish between "spared injustice" and "unjust enrichment" as some effects of being privileged are the former and others the latter. Blum also argued that privilege can end up homogenising both privileged and non-privileged groups when in fact it needs to take account the role of interacting effects and an individual's multiple group identities.[50]"White privilege", Michael Monahan argued, would be more accurately described as the advantages gained by whites through historical disenfranchisement of non-whites rather than something that gives whites privilege above and beyond normal human status.[51] Psychologist Erin Cooley reported in a study published in 2019 that reading about white privilege decreased social liberals' sympathy for poor whites and increased their will to punish/blame but did not increase their sympathy for poor blacks.[52] The existence of privilege across various categories leads to variation in experiences within specific privileged groups, raising concerns about the legitimacy of privilege hazard. Jamie Abrams' article[53]challenges the notion of privilege, discussing how efforts solely focused on highlighting male privilege may inadvertently reinforce existing cultural norms and fail to foster inclusivity. This perspective underscores the complexity of addressing systemic privilege, emphasizing the need to reshapesocietal normsand institutional structures.Herb Goldberg'sbook sheds light on how the idea ofmale privilegeand power has hurt men's personalself-realization.[54]
https://en.wikipedia.org/wiki/Privilege_hazard
Rational expectations is an economic theory that seeks to infer the macroeconomic consequences of individuals' decisions based on all available knowledge. It assumes that individuals' actions are based on the best available economic theory and information. The concept of rational expectations was first introduced by John F. Muth in his paper "Rational Expectations and the Theory of Price Movements", published in 1961. Robert Lucas and Thomas Sargent further developed the theory in the 1970s and 1980s; their contributions became seminal works on the topic and were widely used in microeconomics.[1]

Significant Findings

Muth's work introduces the concept of rational expectations and discusses its implications for economic theory. He argues that individuals are rational and use all available information to make unbiased, informed predictions about the future. This means that individuals do not make systematic errors in their predictions and that their predictions are not biased by past errors. Muth's paper also discusses the implications of rational expectations for economic theory. One key implication is that government policies, such as changes in monetary or fiscal policy, may not be as effective if individuals' expectations are not considered. For example, if individuals expect inflation to increase, they may anticipate that the central bank will raise interest rates to combat inflation, which could lead to higher borrowing costs and slower economic growth. Similarly, if individuals expect a recession, they may reduce their spending and investment, which could lead to a self-fulfilling prophecy.[2]

Lucas' paper "Expectations and the Neutrality of Money" expands on Muth's work and sheds light on the relationship between rational expectations and monetary policy. The paper argues that when individuals hold rational expectations, changes in the money supply do not have real effects on the economy and the neutrality of money holds. Lucas presents a theoretical model that incorporates rational expectations into an analysis of the effects of changes in the money supply. The model suggests that individuals adjust their expectations in response to changes in the money supply, which eliminates the effect on real variables such as output and employment. He argues that a stable monetary policy that is consistent with individuals' rational expectations will be more effective in promoting economic stability than attempts to manipulate the money supply.[3]

In 1973, Thomas J. Sargent published the article "Rational Expectations, the Real Rate of Interest, and the Natural Rate of Unemployment", which was an important contribution to the development and application of the concept of rational expectations in economic theory and policy. By assuming individuals are forward-looking and rational, Sargent argues that rational expectations can help explain fluctuations in key economic variables such as the real interest rate and the natural rate of unemployment. He also suggests that the concept of the natural rate of unemployment can be used to help policymakers set macroeconomic policy. This concept suggests that there is a trade-off between unemployment and inflation in the short run, but in the long run the economy will return to the natural rate of unemployment, which is determined by structural factors such as the skills of the labour force and the efficiency of the labour market.
Sargent argues that policymakers should take this concept into account when setting macroeconomic policy, as policies that try to push unemployment below the natural rate will only lead to higher inflation in the long run.[4]

The key idea of rational expectations is that individuals make decisions based on all available information, including their own expectations about future events. This implies that individuals are rational and use all available information to make decisions. Another important idea is that individuals adjust their expectations in response to new information. In this way, individuals are assumed to be forward-looking and able to adapt to changing circumstances. They will learn from past trends and experiences to make their best guess of the future.[1]

It is assumed that individuals' predicted outcomes do not differ systematically from the market equilibrium, given that they do not make systematic errors when predicting the future. In an economic model, this is typically captured by assuming that the expected value of a variable is equal to the expected value predicted by the model. For example, suppose that $P$ is the equilibrium price in a simple market, determined by supply and demand. The theory of rational expectations implies that the actual price will only deviate from the expectation if there is an 'information shock' caused by information unforeseeable at the time expectations were formed. In other words, ex ante the price is anticipated to equal its rational expectation:

$P = P^{*} + \epsilon$

where $P^{*}$ is the rational expectation and $\epsilon$ is the random error term, which has an expected value of zero and is independent of $P^{*}$.

If rational expectations are applied to the Phillips curve analysis, the distinction between the long and the short run is completely negated: there is no Phillips curve, and no exploitable trade-off between the inflation rate and the unemployment rate. The mathematical derivation is as follows. Rational expectation is consistent with objective mathematical expectation:

$E\dot{P}_{t} = \dot{P}_{t} + \varepsilon_{t}$

Mathematical derivation (1)

We denote the unemployment rate by $u_{t}$. Assuming that the actual process is known, the rate of inflation ($\dot{P}_{t}$) depends on previous monetary changes ($\dot{M}_{t-1}$) and changes in short-term variables such as $X$ (for example, oil prices):

(1) $\dot{P}_{t} = q\dot{M}_{t-1} + z\dot{X}_{t-1} + \varepsilon_{t}$

Taking expected values,

(2) $E\dot{P}_{t} = q\dot{M}_{t-1} + z\dot{X}_{t-1}$

On the other hand, the inflation rate is related to unemployment by the Phillips curve:

(3) $\dot{P}_{t} = \alpha - \beta u_{t} + \gamma E_{t-1}(\dot{P}_{t})$, with $\gamma = 1$

Equating (1) and (3):

(4) $\alpha - \beta u_{t} + q\dot{M}_{t-1} + z\dot{X}_{t-1} = q\dot{M}_{t-1} + z\dot{X}_{t-1} + \varepsilon_{t}$

Cancelling terms and rearranging gives

(5) $u_{t} = \dfrac{\alpha - \varepsilon_{t}}{\beta}$

Thus, even in the short run, there is no exploitable trade-off between inflation and unemployment. Random shocks, which are completely unpredictable, are the only reason why the unemployment rate deviates from the natural rate.
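As a sanity check on derivation (1), the short simulation below generates the inflation process of equation (1), forms the rational expectation of equation (2), and backs out unemployment from the Phillips curve (3) with $\gamma = 1$. The parameter values are illustrative assumptions, not figures taken from the works cited above.

```python
# Minimal numerical check of derivation (1); all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, q, z = 5.0, 2.0, 0.8, 0.3   # assumed Phillips-curve and inflation-process coefficients
T = 10
M_growth = rng.uniform(1.0, 3.0, T)      # previous-period money growth, fully observed
X_growth = rng.uniform(-1.0, 1.0, T)     # previous-period short-term variable (e.g. oil prices)
eps = rng.normal(0.0, 0.5, T)            # unpredictable shocks with zero mean

for t in range(T):
    expected_inflation = q * M_growth[t] + z * X_growth[t]      # equation (2)
    inflation = expected_inflation + eps[t]                     # equation (1)
    # Phillips curve (3) with gamma = 1, solved for unemployment:
    u = (alpha - (inflation - expected_inflation)) / beta
    assert np.isclose(u, (alpha - eps[t]) / beta)               # equation (5)

print("Unemployment depends only on the unpredictable shock, as in equation (5).")
```

However anticipated money growth is chosen, the assertion holds: only the shock term moves unemployment away from $\alpha/\beta$, which is the point of the derivation.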
Mathematical derivation (2)

Even if the actual rate of inflation depends on current monetary changes, the public can form rational expectations as long as they know how monetary policy is being decided:

(1) $\dot{P}_{t} = q\dot{M}_{t} + z\dot{X}_{t-1} + \varepsilon_{t}$

Denote the (unanticipated) change due to monetary policy by $\mu_{t}$:

(2) $\dot{M}_{t} = g\dot{M}_{t-1} + \mu_{t}$

We then substitute (2) into (1):

(3) $\dot{P}_{t} = qg\dot{M}_{t-1} + z\dot{X}_{t-1} + q\mu_{t} + \varepsilon_{t}$

Taking the expected value at time $t-1$,

(4) $E_{t-1}\dot{P}_{t} = qg\dot{M}_{t-1} + z\dot{X}_{t-1}$

Using the Phillips curve relation, cancelling terms on both sides and rearranging gives

(5) $u_{t} = \dfrac{\alpha - q\mu_{t} - \varepsilon_{t}}{\beta}$

The conclusion is essentially the same: only shocks that are completely unpredictable (here $\varepsilon_{t}$ and the unanticipated policy component $\mu_{t}$) can cause the unemployment rate to deviate from the natural rate.

Rational expectations theories were developed in response to perceived flaws in theories based on adaptive expectations. Under adaptive expectations, expectations of the future value of an economic variable are based on its past values. For example, individuals are assumed to predict inflation by looking at historical inflation data. Under adaptive expectations, if the economy suffers from a prolonged period of rising inflation, people are assumed to always underestimate inflation. Many economists regarded this as an unrealistic and irrational assumption, since rational individuals will learn from past experiences and trends and adjust their predictions accordingly. (A small simulation contrasting the two forecasting rules appears after the criticisms below.)

The rational expectations hypothesis has been used to support conclusions about economic policymaking. An example is the policy ineffectiveness proposition developed by Thomas Sargent and Neil Wallace. If the Federal Reserve attempts to lower unemployment through expansionary monetary policy, economic agents will anticipate the effects of the change of policy and raise their expectations of future inflation accordingly. This counteracts the expansionary effect of the increased money supply, suggesting that the government can only increase the inflation rate, not employment. If agents do not form rational expectations, or if prices are not completely flexible, then even discretionary and completely anticipated policy actions can trigger real changes.[5]

While the rational expectations theory has been widely influential in macroeconomic analysis, it has also been subject to criticism:

Unrealistic assumptions: The theory implies that individuals are at a fixed point where their expectations about aggregate economic variables are on average correct. This is unlikely to be the case, due to the limited information available and human error.[6]

Limited empirical support: While there is some evidence that individuals do incorporate expectations into their decision-making, it is unclear whether they do so in the way predicted by the rational expectations theory.[6]

Misspecification of models: The rational expectations theory assumes that individuals have a common understanding of the model used to make predictions.
However, if the model is misspecified, this can lead to incorrect predictions.[7]

Inability to explain certain phenomena: The theory is also criticized for its inability to explain certain phenomena, such as 'irrational' bubbles and crashes in financial markets.[8]

Lack of attention to distributional effects: Critics argue that the rational expectations theory focuses too much on aggregate outcomes and does not pay enough attention to the distributional effects of economic policies.[6]
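As promised above, here is a small simulation contrasting an adaptive forecaster with a rational one during a prolonged rise in inflation. The drifting inflation process and all numbers are illustrative assumptions, not part of the sources cited in this article.

```python
# During a prolonged rise in inflation (a drift of +0.2 per period), an adaptive
# forecaster who repeats last period's inflation underestimates it every period on
# average, while a forecaster who knows the process does not. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
T, drift = 5000, 0.2
shocks = rng.normal(0.0, 1.0, T)
inflation = np.zeros(T)
adaptive_err, rational_err = [], []

for t in range(1, T):
    adaptive_forecast = inflation[t - 1]            # backward-looking: repeat the last observation
    rational_forecast = inflation[t - 1] + drift    # knows the true drift; remaining error is pure noise
    inflation[t] = inflation[t - 1] + drift + shocks[t]
    adaptive_err.append(inflation[t] - adaptive_forecast)
    rational_err.append(inflation[t] - rational_forecast)

print("mean adaptive error :", np.mean(adaptive_err))   # about +0.2: systematic underestimation
print("mean rational error :", np.mean(rational_err))   # about 0: no systematic error
```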
https://en.wikipedia.org/wiki/Rational_expectations
Social invisibility is the condition in which a group of people is separated from or systematically ignored by the majority of a society. As a result, those who are marginalized feel neglected or invisible in society. The term can apply to the disadvantaged, residents of elderly homes, children in orphanages, homeless people, or anyone who experiences a sense of being ignored or separated from society as a whole.[1][2][3][4]

The subjective experience of being unseen by others in a social environment is social invisibility. A sense of disconnectedness from the surrounding world is often experienced by invisible people. This disconnectedness can lead to absorbed coping and breakdowns, based on the asymmetrical relationship between someone made invisible and others.[5]

Among African-American men, invisibility can often take the form of a psychological process that both deals with the stress of racialized invisibility and shapes the choices made in becoming visible within a social framework that predetermines these choices. In order to become visible and gain acceptance, an African-American man has to avoid adopting behavior that made him invisible in the first place, which intensifies the stress already brought on through racism.[6]

Although social invisibility is usually considered a form of marginalization of certain individuals and groups, in recent debates some scholars have also insisted on the function of invisibility as a strategy for evading identification and categorization. In the wake of authors like Edouard Glissant and his defense of a "right to opacity", it has been argued that "tactical invisibility" can serve as a means of resistance in a world of data surveillance.[7]
https://en.wikipedia.org/wiki/Social_invisibility
In probability theory and statistics, aMarkov chainorMarkov processis astochastic processdescribing asequenceof possible events in which theprobabilityof each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairsnow." Acountably infinitesequence, in which the chain moves state at discrete time steps, gives adiscrete-time Markov chain(DTMC). Acontinuous-timeprocess is called acontinuous-time Markov chain(CTMC). Markov processes are named in honor of theRussianmathematicianAndrey Markov. Markov chains have many applications asstatistical modelsof real-world processes.[1]They provide the basis for general stochastic simulation methods known asMarkov chain Monte Carlo, which are used for simulating sampling from complexprobability distributions, and have found application in areas includingBayesian statistics,biology,chemistry,economics,finance,information theory,physics,signal processing, andspeech processing.[1][2][3] The adjectivesMarkovianandMarkovare used to describe something that is related to a Markov process.[4] A Markov process is astochastic processthat satisfies theMarkov property(sometimes characterized as "memorylessness"). In simpler terms, it is a process for which predictions can be made regarding future outcomes based solely on its present state and—most importantly—such predictions are just as good as the ones that could be made knowing the process's full history.[5]In other words,conditionalon the present state of the system, its future and past states areindependent. A Markov chain is a type of Markov process that has either a discretestate spaceor a discrete index set (often representing time), but the precise definition of a Markov chain varies.[6]For example, it is common to define a Markov chain as a Markov process in eitherdiscrete or continuous timewith a countable state space (thus regardless of the nature of time),[7][8][9][10]but it is also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space).[6] The system'sstate spaceand time parameter index need to be specified. The following table gives an overview of the different instances of Markov processes for different levels of state space generality and for discrete time v. continuous time: Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. Usually the term "Markov chain" is reserved for a process with a discrete set of times, that is, adiscrete-time Markov chain (DTMC),[11]but a few authors use the term "Markov process" to refer to acontinuous-time Markov chain (CTMC)without explicit mention.[12][13][14]In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (seeMarkov model). Moreover, the time index need not necessarily be real-valued; like with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general state space continuous-time Markov chain is general to such a degree that it has no designated term. 
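To ground the terminology, the sketch below encodes a small discrete-time Markov chain as a transition matrix and simulates it step by step. The two weather states and their probabilities are invented for illustration.

```python
# A two-state discrete-time Markov chain encoded as a transition matrix.
# States and probabilities are invented for illustration.
import numpy as np

states = ["sunny", "rainy"]
P = np.array([[0.9, 0.1],    # P[i, j] = probability of moving from state i to state j
              [0.5, 0.5]])

rng = np.random.default_rng(42)
state = 0                    # start in "sunny"
path = [states[state]]
for _ in range(10):
    state = rng.choice(2, p=P[state])   # the next state depends only on the current state
    path.append(states[state])
print(" -> ".join(path))
```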
While the time parameter is usually discrete, thestate spaceof a Markov chain does not have any generally agreed-on restrictions: the term may refer to a process on an arbitrary state space.[15]However, many applications of Markov chains employ finite orcountably infinitestate spaces, which have a more straightforward statistical analysis. Besides time-index and state-space parameters, there are many other variations, extensions and generalizations (seeVariations). For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise. The changes of state of the system are called transitions. The probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, atransition matrixdescribing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate. A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are theintegersornatural numbers, and the random process is a mapping of these to states. The Markov property states that theconditional probability distributionfor the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps. Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. In many applications, it is these statistical properties that are important. Andrey Markovstudied Markov processes in the early 20th century, publishing his first paper on the topic in 1906.[16][17][18]Markov Processes in continuous time were discovered long before his work in the early 20th century in the form of thePoisson process.[19][20][21]Markov was interested in studying an extension of independent random sequences, motivated by a disagreement withPavel Nekrasovwho claimed independence was necessary for theweak law of large numbersto hold.[22]In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, so proving a weak law of large numbers without the independence assumption,[16][17][18]which had been commonly regarded as a requirement for such mathematical laws to hold.[18]Markov later used Markov chains to study the distribution of vowels inEugene Onegin, written byAlexander Pushkin, and proved acentral limit theoremfor such chains.[16] In 1912Henri Poincaréstudied Markov chains onfinite groupswith an aim to study card shuffling. 
Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov.[16][17] After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Irénée-Jules Bienaymé.[23] Starting in 1928, Maurice Fréchet became interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains.[16][24]

Andrey Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes.[25][26] Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian movement.[25][27] He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes.[25][28] Independent of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement.[29] The differential equations are now called the Kolmogorov equations[30] or the Kolmogorov–Chapman equations.[31] Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in the 1930s, and then later Eugene Dynkin, starting in the 1950s.[26]

Suppose that there is a coin purse containing five coins worth 25¢, five coins worth 10¢ and five coins worth 5¢, and one by one, coins are randomly drawn from the purse and are set on a table. If $X_{n}$ represents the total value of the coins set on the table after $n$ draws, with $X_{0} = 0$, then the sequence $\{X_{n} : n \in \mathbb{N}\}$ is not a Markov process.

To see why this is the case, suppose that in the first six draws, all five nickels and a quarter are drawn. Thus $X_{6} = \$0.50$. If we know not just $X_{6}$, but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel; so we can determine that $X_{7} \geq \$0.60$ with probability 1. But if we do not know the earlier values, then based only on the value $X_{6}$ we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses about $X_{7}$ are impacted by our knowledge of values prior to $X_{6}$.

However, it is possible to model this scenario as a Markov process. Instead of defining $X_{n}$ to represent the total value of the coins on the table, we could define $X_{n}$ to represent the count of the various coin types on the table. For instance, $X_{6} = 1,0,5$ could be defined to represent the state where there is one quarter, zero dimes, and five nickels on the table after 6 one-by-one draws. This new model could be represented by $6 \times 6 \times 6 = 216$ possible states, where each state represents the number of coins of each type (from 0 to 5) that are on the table. (Not all of these states are reachable within 6 draws.) Suppose that the first draw results in state $X_{1} = 0,1,0$.
The probability of achieving $X_{2}$ now depends on $X_{1}$; for example, the state $X_{2} = 1,0,1$ is not possible. After the second draw, the third draw depends on which coins have so far been drawn, but no longer only on the coins that were drawn for the first state (since probabilistically important information has since been added to the scenario). In this way, the likelihood of the state $X_{n} = i,j,k$ depends exclusively on the outcome of the state $X_{n-1} = \ell,m,p$.

A discrete-time Markov chain is a sequence of random variables $X_{1}, X_{2}, X_{3}, \ldots$ with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states:

$\Pr(X_{n+1} = x \mid X_{1} = x_{1}, X_{2} = x_{2}, \ldots, X_{n} = x_{n}) = \Pr(X_{n+1} = x \mid X_{n} = x_{n})$

The possible values of $X_{i}$ form a countable set $S$ called the state space of the chain.

A continuous-time Markov chain $(X_{t})_{t \geq 0}$ is defined by a finite or countable state space $S$, a transition rate matrix $Q$ with dimensions equal to that of the state space, and an initial probability distribution defined on the state space. For $i \neq j$, the elements $q_{ij}$ are non-negative and describe the rate of the process transitions from state $i$ to state $j$. The elements $q_{ii}$ are chosen such that each row of the transition rate matrix sums to zero, while the row-sums of a probability transition matrix in a (discrete) Markov chain are all equal to one. There are three equivalent definitions of the process.[40]

Let $X_{t}$ be the random variable describing the state of the process at time $t$, and assume the process is in a state $i$ at time $t$. Then, knowing $X_{t} = i$, $X_{t+h} = j$ is independent of previous values $(X_{s} : s < t)$, and as $h \to 0$ for all $j$ and for all $t$,

$\Pr(X(t+h) = j \mid X(t) = i) = \delta_{ij} + q_{ij}h + o(h),$

where $\delta_{ij}$ is the Kronecker delta, using the little-o notation. The $q_{ij}$ can be seen as measuring how quickly the transition from $i$ to $j$ happens.

Define a discrete-time Markov chain $Y_{n}$ to describe the $n$th jump of the process and variables $S_{1}, S_{2}, S_{3}, \ldots$ to describe holding times in each of the states, where $S_{i}$ follows the exponential distribution with rate parameter $-q_{Y_{i}Y_{i}}$. For any value $n = 0, 1, 2, 3, \ldots$, times indexed up to this value of $n$: $t_{0}, t_{1}, t_{2}, \ldots$, and all states recorded at these times $i_{0}, i_{1}, i_{2}, i_{3}, \ldots$, it holds that

$\Pr(X_{t_{n+1}} = i_{n+1} \mid X_{t_{0}} = i_{0}, \ldots, X_{t_{n}} = i_{n}) = p_{i_{n}i_{n+1}}(t_{n+1} - t_{n})$

where $p_{ij}$ is the solution of the forward equation (a first-order differential equation)

$P'(t) = P(t)Q$

with initial condition $P(0)$ equal to the identity matrix.

If the state space is finite, the transition probability distribution can be represented by a matrix, called the transition matrix, with the $(i,j)$th element of $P$ equal to

$p_{ij} = \Pr(X_{n+1} = j \mid X_{n} = i).$

Since each row of $P$ sums to one and all elements are non-negative, $P$ is a right stochastic matrix.

A stationary distribution $\pi$ is a (row) vector whose entries are non-negative and sum to 1, and which is unchanged by the operation of the transition matrix $P$ on it; it is therefore defined by

$\pi = \pi P.$

By comparing this definition with that of an eigenvector we see that the two concepts are related and that $\pi$ is a normalized ($\sum_{i} \pi_{i} = 1$) multiple of a left eigenvector $e$ of the transition matrix $P$ with an eigenvalue of 1. If there is more than one unit eigenvector, then a weighted sum of the corresponding stationary states is also a stationary state. But for a Markov chain one is usually more interested in a stationary state that is the limit of the sequence of distributions for some initial distribution.
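As a computational complement to the definition just given, the sketch below finds the stationary distribution of a small invented transition matrix as a normalized left eigenvector with eigenvalue 1 and compares it with long-run simulated frequencies.

```python
# Compute the stationary distribution of a small transition matrix as a normalized
# left eigenvector with eigenvalue 1, then check it against long-run simulated
# state frequencies. The matrix is an invented example.
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# Left eigenvectors of P are right eigenvectors of P.T.
eigvals, eigvecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigvals - 1.0))     # pick the eigenvalue closest to 1
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()                       # normalize so the entries sum to 1
print("stationary distribution:", pi)
print("pi P                   :", pi @ P)   # equals pi, up to rounding

# Empirical check: fraction of time spent in each state over a long simulation.
rng = np.random.default_rng(0)
state, counts = 0, np.zeros(3)
for _ in range(200_000):
    state = rng.choice(3, p=P[state])
    counts[state] += 1
print("simulated frequencies  :", counts / counts.sum())
```

For an irreducible, aperiodic chain, repeatedly multiplying any initial distribution by $P$ (power iteration) converges to the same vector, which is the convergence behaviour discussed next.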
The values of a stationary distribution $\pi_{i}$ are associated with the state space of $P$, and its eigenvectors have their relative proportions preserved. Since the components of $\pi$ are positive and the constraint that their sum is unity can be rewritten as $\sum_{i} 1 \cdot \pi_{i} = 1$, we see that the dot product of $\pi$ with a vector whose components are all 1 is unity and that $\pi$ lies on a simplex.

If the Markov chain is time-homogeneous, then the transition matrix $P$ is the same after each step, so the $k$-step transition probability can be computed as the $k$-th power of the transition matrix, $P^{k}$.

If the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution $\pi$.[41] Additionally, in this case $P^{k}$ converges to a rank-one matrix in which each row is the stationary distribution $\pi$:

$\lim_{k \to \infty} P^{k} = \mathbf{1}\pi$

where $\mathbf{1}$ is the column vector with all entries equal to 1. This is stated by the Perron–Frobenius theorem. If, by whatever means, $\lim_{k \to \infty} P^{k}$ is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below.

For some stochastic matrices $P$, the limit $\lim_{k \to \infty} P^{k}$ does not exist while the stationary distribution does, as shown by this example:

$P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad P^{2k} = I, \qquad P^{2k+1} = P$

(This example illustrates a periodic Markov chain.)

Because there are a number of different special cases to consider, the process of finding this limit, if it exists, can be a lengthy task. However, there are many techniques that can assist in finding this limit. Let $P$ be an $n \times n$ matrix, and define $Q = \lim_{k \to \infty} P^{k}$.

It is always true that

$QP = Q.$

Subtracting $Q$ from both sides and factoring then yields

$Q(P - I_{n}) = 0_{n,n},$

where $I_{n}$ is the identity matrix of size $n$, and $0_{n,n}$ is the zero matrix of size $n \times n$. Multiplying together stochastic matrices always yields another stochastic matrix, so $Q$ must be a stochastic matrix (see the definition above). It is sometimes sufficient to use the matrix equation above and the fact that $Q$ is a stochastic matrix to solve for $Q$. Including the fact that the sum of each of the rows in $P$ is 1, there are $n+1$ equations for determining $n$ unknowns, so it is computationally easier if on the one hand one selects one row in $Q$ and substitutes each of its elements by one, and on the other hand one substitutes the corresponding element (the one in the same column) in the vector $\mathbf{0}$, and next left-multiplies this latter vector by the inverse of the transformed former matrix to find $Q$.

Here is one method for doing so: first, define the function $f(A)$ to return the matrix $A$ with its right-most column replaced with all 1's. If $[f(P - I_{n})]^{-1}$ exists, then[42][41]

$Q = f(0_{n,n})\,[f(P - I_{n})]^{-1}.$

One thing to notice is that if $P$ has an element $P_{i,i}$ on its main diagonal that is equal to 1 and the $i$th row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powers $P^{k}$. Hence, the $i$th row or column of $Q$ will have the 1 and the 0's in the same positions as in $P$.

As stated earlier, from the equation $\pi = \pi P$ (if it exists), the stationary (or steady state) distribution $\pi$ is a left eigenvector of the row stochastic matrix $P$. Then, assuming that $P$ is diagonalizable, or equivalently that $P$ has $n$ linearly independent eigenvectors, the speed of convergence is elaborated as follows.
(For non-diagonalizable, that is, defective matrices, one may start with the Jordan normal form of $P$ and proceed with a somewhat more involved set of arguments in a similar way.[43])

Let $U$ be the matrix of eigenvectors (each normalized to having an L2 norm equal to 1), where each column is a left eigenvector of $P$, and let $\Sigma$ be the diagonal matrix of left eigenvalues of $P$, that is, $\Sigma = \operatorname{diag}(\lambda_{1}, \lambda_{2}, \lambda_{3}, \ldots, \lambda_{n})$. Then, since each column $u_{i}$ of $U$ is a left eigenvector of $P$ with eigenvalue $\lambda_{i}$, eigendecomposition gives

$U^{\mathsf{T}} P = \Sigma U^{\mathsf{T}}.$

Let the eigenvalues be enumerated such that

$1 = |\lambda_{1}| > |\lambda_{2}| \geq |\lambda_{3}| \geq \cdots \geq |\lambda_{n}|.$

Since $P$ is a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector are unique too (because there is no other $\pi$ which solves the stationary distribution equation above). Let $u_{i}$ be the $i$-th column of the matrix $U$, that is, $u_{i}$ is the left eigenvector of $P$ corresponding to $\lambda_{i}$. Also let $x$ be a length-$n$ row vector that represents a valid probability distribution; since the eigenvectors $u_{i}$ span $\mathbb{R}^{n}$, we can write

$x^{\mathsf{T}} = \sum_{i=1}^{n} a_{i} u_{i}.$

If we multiply $x$ with $P$ from the right and continue this operation with the results, in the end we get the stationary distribution $\pi$; in other words, $\pi = a_{1}u_{1}$ is the limit of $xP^{k}$ as $k \to \infty$. That means

$xP^{k} = a_{1}\lambda_{1}^{k}u_{1}^{\mathsf{T}} + a_{2}\lambda_{2}^{k}u_{2}^{\mathsf{T}} + \cdots + a_{n}\lambda_{n}^{k}u_{n}^{\mathsf{T}}.$

Since $\pi$ is parallel to $u_{1}$ (normalized by the L2 norm) and $\pi^{(k)} = xP^{k}$ is a probability vector, $\pi^{(k)}$ approaches $a_{1}u_{1} = \pi$ as $k \to \infty$ exponentially fast, at a rate governed by $\lambda_{2}/\lambda_{1}$. This follows because $|\lambda_{2}| \geq \cdots \geq |\lambda_{n}|$, hence $\lambda_{2}/\lambda_{1}$ is the dominant term. The smaller the ratio is, the faster the convergence is.[44] Random noise in the state distribution $\pi$ can also speed up this convergence to the stationary distribution.[45]

Many results for Markov chains with finite state space can be generalized to chains with uncountable state space through Harris chains. The use of Markov chains in Markov chain Monte Carlo methods covers cases where the process follows a continuous state space.

"Locally interacting Markov chains" are Markov chains with an evolution that takes into account the state of other Markov chains. This corresponds to the situation when the state space has a (Cartesian-) product form. See interacting particle system and stochastic cellular automata (probabilistic cellular automata). See for instance Interaction of Markov Processes[46] or.[47]

Two states are said to communicate with each other if both are reachable from one another by a sequence of transitions that have positive probability. This is an equivalence relation which yields a set of communicating classes. A class is closed if the probability of leaving the class is zero. A Markov chain is irreducible if there is one communicating class, the state space.

A state $i$ has period $k$ if $k$ is the greatest common divisor of the number of transitions by which $i$ can be reached, starting from $i$. That is,

$k = \gcd\{\, n > 0 : \Pr(X_{n} = i \mid X_{0} = i) > 0 \,\}.$

The state is periodic if $k > 1$; otherwise $k = 1$ and the state is aperiodic.

A state $i$ is said to be transient if, starting from $i$, there is a non-zero probability that the chain will never return to $i$. It is called recurrent (or persistent) otherwise.[48] For a recurrent state $i$, the mean hitting time is defined as

$M_{i} = E[T_{i}] = \sum_{n=1}^{\infty} n \cdot f_{ii}^{(n)},$

where $T_{i}$ is the first return time to $i$ and $f_{ii}^{(n)}$ is the probability of first returning to $i$ after exactly $n$ steps. State $i$ is positive recurrent if $M_{i}$ is finite and null recurrent otherwise. Periodicity, transience, recurrence and positive and null recurrence are class properties; that is, if one state has the property then all states in its communicating class have the property.[49]

A state $i$ is called absorbing if there are no outgoing transitions from the state.
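The classification just described can be checked mechanically for a small chain. The sketch below finds communicating classes by mutual reachability and estimates each state's period as the gcd of its observed return times, checking matrix powers only up to a modest bound, which is sufficient for this invented three-state example.

```python
# Classify the states of a small chain: find communicating classes via mutual
# reachability, and estimate each state's period as the gcd of its return times.
# The transition matrix is invented: states 0 and 1 cycle deterministically,
# while state 2 eventually drains into that cycle and never returns.
import numpy as np
from math import gcd
from functools import reduce

P = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5]])

n = P.shape[0]
reach = (P > 0).astype(int)
for _ in range(n):                                  # build up reachability over longer paths
    reach = ((reach + reach @ reach) > 0).astype(int)

def communicating_class(i):
    return {j for j in range(n) if (reach[i, j] and reach[j, i]) or j == i}

for i in range(n):
    returns = [k for k in range(1, 20) if np.linalg.matrix_power(P, k)[i, i] > 1e-12]
    period = reduce(gcd, returns) if returns else None
    print(f"state {i}: class {sorted(communicating_class(i))}, period {period}")
```

Running this prints that states 0 and 1 form one communicating class with period 2 (a periodic class), while state 2 sits in its own class with period 1 and is transient, since the chain can leave it but never come back.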
Since periodicity is a class property, if a Markov chain is irreducible, then all its states have the same period. In particular, if one state is aperiodic, then the whole Markov chain is aperiodic.[50]

If a finite Markov chain is irreducible, then all states are positive recurrent, and it has a unique stationary distribution given by $\pi_{i} = 1/E[T_{i}]$.

A state $i$ is said to be ergodic if it is aperiodic and positive recurrent. In other words, a state $i$ is ergodic if it is recurrent, has a period of 1, and has finite mean recurrence time. If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic. Equivalently, there exists some integer $k$ such that all entries of $M^{k}$ are positive.

It can be shown that a finite state irreducible Markov chain is ergodic if it has an aperiodic state. More generally, a Markov chain is ergodic if there is a number $N$ such that any state can be reached from any other state in any number of steps less than or equal to $N$. In the case of a fully connected transition matrix, where all transitions have a non-zero probability, this condition is fulfilled with $N = 1$.

A Markov chain with more than one state and just one outgoing transition per state is either not irreducible or not aperiodic, hence cannot be ergodic. Some authors call any irreducible, positive recurrent Markov chain ergodic, even periodic ones.[51] In fact, merely irreducible Markov chains correspond to ergodic processes, defined according to ergodic theory.[52]

Some authors call a matrix primitive if there exists some integer $k$ such that all entries of $M^{k}$ are positive.[53] Some authors call it regular.[54]

The index of primitivity, or exponent, of a regular matrix is the smallest $k$ such that all entries of $M^{k}$ are positive. The exponent is purely a graph-theoretic property, since it depends only on whether each entry of $M$ is zero or positive, and therefore can be found on a directed graph with $\operatorname{sign}(M)$ as its adjacency matrix. There are several combinatorial results about the exponent when there are finitely many states; for example, if $n$ is the number of states, Wielandt's bound states that the exponent is at most $(n-1)^{2} + 1$.[55]

If a Markov chain has a stationary distribution, then it can be converted to a measure-preserving dynamical system: let the probability space be $\Omega = \Sigma^{\mathbb{N}}$, where $\Sigma$ is the set of all states for the Markov chain. Let the sigma-algebra on the probability space be generated by the cylinder sets. Let the probability measure be generated by the stationary distribution and the Markov chain transition. Let $T : \Omega \to \Omega$ be the shift operator: $T(X_{0}, X_{1}, \dots) = (X_{1}, \dots)$. Similarly we can construct such a dynamical system with $\Omega = \Sigma^{\mathbb{Z}}$ instead.[57]

Since irreducible Markov chains with finite state spaces have a unique stationary distribution, the above construction is unambiguous for irreducible Markov chains.

In ergodic theory, a measure-preserving dynamical system is called ergodic if, for any measurable subset $S$, $T^{-1}(S) = S$ implies $S = \emptyset$ or $S = \Omega$ (up to a null set). The terminology is inconsistent.
Given a Markov chain with a stationary distribution that is strictly positive on all states, the Markov chain is irreducible if its corresponding measure-preserving dynamical system is ergodic.[52]

In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the "current" and "future" states. For example, let $X$ be a non-Markovian process. Then define a process $Y$ such that each state of $Y$ represents a time-interval of states of $X$; mathematically,

$Y(t) = \{\, X(s) : s \in [a(t), b(t)] \,\}$

for suitable interval endpoints $a(t) \leq b(t)$. If $Y$ has the Markov property, then it is a Markovian representation of $X$. An example of a non-Markovian process with a Markovian representation is an autoregressive time series of order greater than one.[58]

The hitting time is the time, starting in a given set of states, until the chain arrives in a given state or set of states. The distribution of such a time period has a phase type distribution. The simplest such distribution is that of a single exponentially distributed transition.

For a subset of states $A \subseteq S$, the vector $k^{A}$ of hitting times (where element $k_{i}^{A}$ represents the expected value, starting in state $i$, of the time until the chain enters one of the states in the set $A$) is the minimal non-negative solution to[59]

$k_{i}^{A} = 0$ for $i \in A$, and $-\sum_{j \in S} q_{ij} k_{j}^{A} = 1$ for $i \notin A$.

For a CTMC $X_{t}$, the time-reversed process is defined to be $\hat{X}_{t} = X_{T-t}$. By Kelly's lemma this process has the same stationary distribution as the forward process. A chain is said to be reversible if the reversed process is the same as the forward process. Kolmogorov's criterion states that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions.

One method of finding the stationary probability distribution, $\pi$, of an ergodic continuous-time Markov chain, $Q$, is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, $S$, is denoted by $s_{ij}$, and represents the conditional probability of transitioning from state $i$ into state $j$. These conditional probabilities may be found by

$s_{ij} = \dfrac{q_{ij}}{\sum_{k \neq i} q_{ik}}$ for $i \neq j$, and $s_{ii} = 0$.

From this, $S$ may be written as

$S = I - \left(\operatorname{diag}(Q)\right)^{-1} Q,$

where $I$ is the identity matrix and $\operatorname{diag}(Q)$ is the diagonal matrix formed by selecting the main diagonal from the matrix $Q$ and setting all other elements to zero.

To find the stationary probability distribution vector, we must next find $\varphi$ such that

$\varphi S = \varphi,$

with $\varphi$ being a row vector such that all elements in $\varphi$ are greater than 0 and $\|\varphi\|_{1} = 1$. From this, $\pi$ may be found by weighting $\varphi$ by the expected holding times, $\pi_{i} \propto \varphi_{i}/(-q_{ii})$, and normalising so that the entries sum to 1. ($S$ may be periodic, even if $Q$ is not. Once $\pi$ is found, it must be normalized to a unit vector.)

Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton: the (discrete-time) Markov chain formed by observing $X(t)$ at intervals of δ units of time. The random variables $X(0)$, $X(\delta)$, $X(2\delta)$, ... give the sequence of states visited by the δ-skeleton.

Markov models are used to model changing systems.
There are 4 main types of models, that generalize Markov chains depending on whether every sequential state is observable or not, and whether the system is to be adjusted on the basis of observations made: ABernoulli schemeis a special case of a Markov chain where the transition probability matrix has identical rows, which means that the next state is independent of even the current state (in addition to being independent of the past states). A Bernoulli scheme with only two possible states is known as aBernoulli process. Note, however, by theOrnstein isomorphism theorem, that every aperiodic and irreducible Markov chain is isomorphic to a Bernoulli scheme;[60]thus, one might equally claim that Markov chains are a "special case" of Bernoulli schemes. The isomorphism generally requires a complicated recoding. The isomorphism theorem is even a bit stronger: it states thatanystationary stochastic processis isomorphic to a Bernoulli scheme; the Markov chain is just one such example. When the Markov matrix is replaced by theadjacency matrixof afinite graph, the resulting shift is termed atopological Markov chainor asubshift of finite type.[60]A Markov matrix that is compatible with the adjacency matrix can then provide ameasureon the subshift. Many chaoticdynamical systemsare isomorphic to topological Markov chains; examples includediffeomorphismsofclosed manifolds, theProuhet–Thue–Morse system, theChacon system,sofic systems,context-free systemsandblock-coding systems.[60] Markov chains have been employed in a wide range of topics across the natural and social sciences, and in technological applications. They have been used for forecasting in several areas: for example, price trends,[61]wind power,[62]stochastic terrorism,[63][64]andsolar irradiance.[65]The Markov chain forecasting models utilize a variety of settings, from discretizing the time series,[62]to hidden Markov models combined with wavelets,[61]and the Markov chain mixture distribution model (MCM).[65] Markovian systems appear extensively inthermodynamicsandstatistical mechanics, whenever probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed that the dynamics are time-invariant, and that no relevant history need be considered which is not already included in the state description.[66][67]For example, a thermodynamic state operates under a probability distribution that is difficult or expensive to acquire. Therefore, Markov Chain Monte Carlo method can be used to draw samples randomly from a black-box to approximate the probability distribution of attributes over a range of objects.[67] Markov chains are used inlattice QCDsimulations.[68] A reaction network is a chemical system involving multiple reactions and chemical species. The simplest stochastic models of such networks treat the system as a continuous time Markov chain with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain.[69]Markov chains and continuous-time Markov processes are useful in chemistry when physical systems closely approximate the Markov property. For example, imagine a large numbernof molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. 
The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the number of molecules in state A or B at a time isntimes the probability a given molecule is in that state. The classical model of enzyme activity,Michaelis–Menten kinetics, can be viewed as a Markov chain, where at each time step the reaction proceeds in some direction. While Michaelis-Menten is fairly straightforward, far more complicated reaction networks can also be modeled with Markov chains.[70] An algorithm based on a Markov chain was also used to focus the fragment-based growth of chemicalsin silicotowards a desired class of compounds such as drugs or natural products.[71]As a molecule is grown, a fragment is selected from the nascent molecule as the "current" state. It is not aware of its past (that is, it is not aware of what is already bonded to it). It then transitions to the next state when a fragment is attached to it. The transition probabilities are trained on databases of authentic classes of compounds.[72] Also, the growth (and composition) ofcopolymersmay be modeled using Markov chains. Based on the reactivity ratios of the monomers that make up the growing polymer chain, the chain's composition may be calculated (for example, whether monomers tend to add in alternating fashion or in long runs of the same monomer). Due tosteric effects, second-order Markov effects may also play a role in the growth of some polymer chains. Similarly, it has been suggested that the crystallization and growth of some epitaxialsuperlatticeoxide materials can be accurately described by Markov chains.[73] Markov chains are used in various areas of biology. Notable examples include: Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of conjoining Markov chains to form a "Markov blanket", arranging these chains in several recursive layers ("wafering") and producing more efficient test sets—samples—as a replacement for exhaustive testing.[citation needed] Solar irradiancevariability assessments are useful forsolar powerapplications. Solar irradiance variability at any location over time is mainly a consequence of the deterministic variability of the sun's path across the sky dome and the variability in cloudiness. The variability of accessible solar irradiance on Earth's surface has been modeled using Markov chains,[76][77][78][79]also including modeling the two states of clear and cloudiness as a two-state Markov chain.[80][81] Hidden Markov modelshave been used inautomatic speech recognitionsystems.[82] Markov chains are used throughout information processing.Claude Shannon's famous 1948 paperA Mathematical Theory of Communication, which in a single step created the field ofinformation theory, opens by introducing the concept ofentropyby modeling texts in a natural language (such as English) as generated by an ergodic Markov process, where each letter may depend statistically on previous letters.[83]Such idealized models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effectivedata compressionthroughentropy encodingtechniques such asarithmetic coding. They also allow effectivestate estimationandpattern recognition. Markov chains also play an important role inreinforcement learning. 
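To make the entropy idea above concrete, the sketch below computes the entropy rate of a stationary two-state Markov source, $H = -\sum_{i}\pi_{i}\sum_{j}P_{ij}\log_{2}P_{ij}$, in bits per symbol; the transition matrix is an invented example.

```python
# Entropy rate of a stationary two-state Markov source, in bits per symbol:
# H = -sum_i pi_i * sum_j P_ij * log2(P_ij). The transition matrix is invented.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()

entropy_rate = -sum(pi[i] * P[i, j] * np.log2(P[i, j])
                    for i in range(2) for j in range(2) if P[i, j] > 0)
print(f"entropy rate: {entropy_rate:.4f} bits/symbol")  # well below 1 bit, reflecting the memory
```

The result is noticeably smaller than the 1 bit per symbol of an unbiased memoryless binary source, which is exactly the redundancy that entropy coding can exploit.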
Markov chains are also the basis for hidden Markov models, which are an important tool in such diverse fields as telephone networks (which use the Viterbi algorithm for error correction), speech recognition and bioinformatics (such as in rearrangements detection[84]).

The LZMA lossless data compression algorithm combines Markov chains with Lempel–Ziv compression to achieve very high compression ratios.

Markov chains are the basis for the analytical treatment of queues (queueing theory). Agner Krarup Erlang initiated the subject in 1917.[85] This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources (such as bandwidth).[86]

Numerous queueing models use continuous-time Markov chains. For example, an M/M/1 queue is a CTMC on the non-negative integers where upward transitions from $i$ to $i+1$ occur at rate $\lambda$ according to a Poisson process and describe job arrivals, while transitions from $i$ to $i-1$ (for $i \geq 1$) occur at rate $\mu$ (job service times are exponentially distributed) and describe completed services (departures) from the queue.

The PageRank of a webpage as used by Google is defined by a Markov chain.[87][88][89] It is the probability of being at page $i$ in the stationary distribution of the following Markov chain on all (known) webpages. If $N$ is the number of known webpages, and a page $i$ has $k_{i}$ outgoing links, then the chain moves from page $i$ with transition probability $\frac{\alpha}{k_{i}} + \frac{1-\alpha}{N}$ to each page that $i$ links to, and with probability $\frac{1-\alpha}{N}$ to each page that $i$ does not link to. The parameter $\alpha$, the probability of following a link rather than jumping to a random page, is conventionally taken to be about 0.85.[90]

Markov models have also been used to analyze the web navigation behavior of users. A user's web link transition on a particular website can be modeled using first- or second-order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user.[citation needed]

Markov chain methods have also become very important for generating sequences of random numbers that accurately reflect very complicated desired probability distributions, via a process called Markov chain Monte Carlo (MCMC). In recent years this has revolutionized the practicability of Bayesian inference methods, allowing a wide range of posterior distributions to be simulated and their parameters found numerically.[citation needed]

In 1971 a Naval Postgraduate School Master's thesis proposed to model a variety of combat between adversaries as a Markov chain "with states reflecting the control, maneuver, target acquisition, and target destruction actions of a weapons system" and discussed the parallels between the resulting Markov chain and Lanchester's laws.[91]

In 1975 Duncan and Siverson remarked that Markov chains could be used to model conflict between state actors, and thought that their analysis would help understand "the behavior of social and political organizations in situations of conflict."[92]

Markov chains are used in finance and economics to model a variety of different phenomena, including the distribution of income, the size distribution of firms, asset prices and market crashes. D. G. Champernowne built a Markov chain model of the distribution of income in 1953.[93] Herbert A.
Simonand co-author Charles Bonini used a Markov chain model to derive a stationary Yule distribution of firm sizes.[94]Louis Bachelierwas the first to observe that stock prices followed a random walk.[95]The random walk was later seen as evidence in favor of theefficient-market hypothesisand random walk models were popular in the literature of the 1960s.[96]Regime-switching models of business cycles were popularized byJames D. Hamilton(1989), who used a Markov chain to model switches between periods of high and low GDP growth (or, alternatively, economic expansions and recessions).[97]A more recent example is theMarkov switching multifractalmodel ofLaurent E. Calvetand Adlai J. Fisher, which builds upon the convenience of earlier regime-switching models.[98][99]It uses an arbitrarily large Markov chain to drive the level of volatility of asset returns. Dynamic macroeconomics makes heavy use of Markov chains. An example is using Markov chains to exogenously model prices of equity (stock) in ageneral equilibriumsetting.[100] Credit rating agenciesproduce annual tables of the transition probabilities for bonds of different credit ratings.[101] Markov chains are generally used in describingpath-dependentarguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due toKarl Marx'sDas Kapital, tyingeconomic developmentto the rise ofcapitalism. In current research, it is common to use a Markov chain to model how once a country reaches a specific level of economic development, the configuration of structural factors, such as size of themiddle class, the ratio of urban to rural residence, the rate ofpoliticalmobilization, etc., will generate a higher probability of transitioning fromauthoritariantodemocratic regime.[102] Markov chains are employed inalgorithmic music composition, particularly insoftwaresuch asCsound,Max, andSuperCollider. In a first-order chain, the states of the system become note or pitch values, and aprobability vectorfor each note is constructed, completing a transition probability matrix (see below). An algorithm is constructed to produce output note values based on the transition matrix weightings, which could beMIDInote values, frequency (Hz), or any other desirable metric.[103] A second-order Markov chain can be introduced by considering the current stateandalso the previous state, as indicated in the second table. Higher,nth-order chains tend to "group" particular notes together, while 'breaking off' into other patterns and sequences occasionally. These higher-order chains tend to generate results with a sense ofphrasalstructure, rather than the 'aimless wandering' produced by a first-order system.[104] Markov chains can be used structurally, as in Xenakis's Analogique A and B.[105]Markov chains are also used in systems which use a Markov model to react interactively to music input.[106] Usually musical systems need to enforce specific control constraints on the finite-length sequences they generate, but control constraints are not compatible with Markov models, since they induce long-range dependencies that violate the Markov hypothesis of limited memory. In order to overcome this limitation, a new approach has been proposed.[107] Markov chains can be used to model many games of chance. The children's gamesSnakes and Laddersand "Hi Ho! Cherry-O", for example, are represented exactly by Markov chains. 
At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares).

Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half-inning of a baseball game fits a Markov chain state space when the number of runners and outs is considered. During any at-bat, there are 24 possible combinations of number of outs and position of the runners. Mark Pankin shows that Markov chain models can be used to evaluate runs created for individual players as well as for teams.[108] He also discusses various kinds of strategies and play conditions: how Markov chain models have been used to analyze statistics for game situations such as bunting and base stealing, and differences when playing on grass vs. AstroTurf.[109]

Markov processes can also be used to generate superficially real-looking text given a sample document. Markov processes are used in a variety of recreational "parody generator" software (see dissociated press, Jeff Harrison,[110] Mark V. Shaney,[111][112] and Academias Neutronium). Several open-source text generation libraries using Markov chains exist.
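A minimal sketch of the kind of Markov text generator mentioned above (second-order, word-level) might look as follows; the sample sentence and function names are invented for illustration and are not taken from any of the cited programs.

```python
import random
from collections import defaultdict

def train(words, order=2):
    """Map each tuple of `order` consecutive words to the words observed to follow it."""
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=20, seed=None):
    rng = random.Random(seed)
    state = rng.choice(list(model))          # start from a random observed state
    out = list(state)
    for _ in range(length):
        followers = model.get(state)
        if not followers:                    # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
        state = tuple(out[-len(state):])     # slide the window forward
    return " ".join(out)

sample = "the cat sat on the mat and the cat ate the rat on the mat".split()
print(generate(train(sample, order=2), length=20, seed=1))
```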
https://en.wikipedia.org/wiki/Markov_chains
In mathematics, the transfer operator encodes information about an iterated map and is frequently used to study the behavior of dynamical systems, statistical mechanics, quantum chaos and fractals. In all usual cases, the largest eigenvalue is 1, and the corresponding eigenvector is the invariant measure of the system. The transfer operator is sometimes called the Ruelle operator, after David Ruelle, or the Perron–Frobenius operator or Ruelle–Perron–Frobenius operator, in reference to the applicability of the Perron–Frobenius theorem to the determination of the eigenvalues of the operator.

The iterated function to be studied is a map f: X → X for an arbitrary set X. The transfer operator is defined as an operator L acting on the space of functions {Φ: X → C} as

(LΦ)(x) = Σ_{y ∈ f^(−1)(x)} g(y) Φ(y),

where g: X → C is an auxiliary valuation function. When f has a Jacobian determinant |J|, then g is usually taken to be g = 1/|J|. The above definition of the transfer operator can be shown to be the point-set limit of the measure-theoretic pushforward of g: in essence, the transfer operator is the direct image functor in the category of measurable spaces. The left-adjoint of the Perron–Frobenius operator is the Koopman operator or composition operator. The general setting is provided by the Borel functional calculus.

As a general rule, the transfer operator can usually be interpreted as a (left-) shift operator acting on a shift space. The most commonly studied shifts are the subshifts of finite type. The adjoint to the transfer operator can likewise usually be interpreted as a right-shift. Particularly well studied right-shifts include the Jacobi operator and the Hessenberg matrix, both of which generate systems of orthogonal polynomials via a right-shift.

Whereas the iteration of a function f naturally leads to a study of the orbits of points of X under iteration (the study of point dynamics), the transfer operator defines how (smooth) maps evolve under iteration. Thus, transfer operators typically appear in physics problems, such as quantum chaos and statistical mechanics, where attention is focused on the time evolution of smooth functions. In turn, this has medical applications to rational drug design, through the field of molecular dynamics.

It is often the case that the transfer operator is positive and has discrete positive real-valued eigenvalues, with the largest eigenvalue being equal to one. For this reason, the transfer operator is sometimes called the Frobenius–Perron operator. The eigenfunctions of the transfer operator are usually fractals. When the logarithm of the transfer operator corresponds to a quantum Hamiltonian, the eigenvalues will typically be very closely spaced, and thus even a very narrow and carefully selected ensemble of quantum states will encompass a large number of very different fractal eigenstates with non-zero support over the entire volume. This can be used to explain many results from classical statistical mechanics, including the irreversibility of time and the increase of entropy.

The transfer operator of the Bernoulli map b(x) = 2x − ⌊2x⌋ is exactly solvable and is a classic example of deterministic chaos; the discrete eigenvalues correspond to the Bernoulli polynomials. This operator also has a continuous spectrum consisting of the Hurwitz zeta function.
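A numerical sketch: one common way to approximate a transfer operator on a computer is Ulam's method, which the text above does not prescribe but which fits the Bernoulli (doubling) map just mentioned. Under that assumption, the code below discretizes [0, 1] into bins, estimates the bin-to-bin transition matrix by sampling, and recovers a leading eigenvalue close to 1 whose left eigenvector approximates the (uniform) invariant density of the doubling map.

```python
import numpy as np

def ulam_matrix(f, n=200, samples=200, seed=0):
    """Ulam-type discretization: P[i, j] ~ fraction of bin i that f maps into bin j."""
    rng = np.random.default_rng(seed)
    P = np.zeros((n, n))
    for i in range(n):
        x = (i + rng.random(samples)) / n             # sample points inside bin i
        j = np.minimum((f(x) * n).astype(int), n - 1)  # bins the images land in
        np.add.at(P[i], j, 1.0 / samples)
    return P                                           # row-stochastic approximation

doubling = lambda x: (2.0 * x) % 1.0
P = ulam_matrix(doubling)

vals, vecs = np.linalg.eig(P.T)                        # left eigenvectors of P
k = np.argmax(vals.real)
density = np.abs(vecs[:, k].real)
density /= density.mean()                              # a uniform density would be identically 1
print("leading eigenvalue:", round(vals[k].real, 4))   # close to 1.0
print("invariant density range:", density.min().round(3), density.max().round(3))
```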
The transfer operator of the Gauss map h(x) = 1/x − ⌊1/x⌋ is called the Gauss–Kuzmin–Wirsing (GKW) operator. The theory of the GKW operator dates back to a hypothesis by Gauss on continued fractions and is closely related to the Riemann zeta function.
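As a complementary check (not taken from the source), one can iterate the Gauss map numerically and compare the empirical distribution of orbits with the density 1/((1 + x) ln 2) of the Gauss measure, which is the invariant density associated with the leading eigenvalue of the GKW operator.

```python
import numpy as np

rng = np.random.default_rng(0)
gauss_map = lambda x: (1.0 / x) % 1.0

x = rng.random(20000) * 0.98 + 0.01                  # start away from 0, where 1/x blows up
orbit = []
for _ in range(100):
    x = gauss_map(x)
    x = np.where(x < 1e-12, rng.random(x.size), x)   # re-seed the rare points that hit 0 exactly
    orbit.append(x)
orbit = np.concatenate(orbit)

hist, edges = np.histogram(orbit, bins=10, range=(0.0, 1.0), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
gauss_density = 1.0 / (np.log(2.0) * (1.0 + mid))    # density of the Gauss measure
print(np.round(hist, 3))                             # empirical orbit distribution
print(np.round(gauss_density, 3))                    # should be close to the line above
```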
https://en.wikipedia.org/wiki/Transfer_operator
Inmatrix theory, thePerron–Frobenius theorem, proved byOskar Perron(1907) andGeorg Frobenius(1912), asserts that areal square matrixwith positive entries has a uniqueeigenvalueof largest magnitude and that eigenvalue is real. The correspondingeigenvectorcan be chosen to have strictly positive components, and also asserts a similar statement for certain classes ofnonnegative matrices. This theorem has important applications to probability theory (ergodicityofMarkov chains); to the theory ofdynamical systems(subshifts of finite type); to economics (Okishio's theorem,[1]Hawkins–Simon condition[2]); to demography (Leslie population age distribution model);[3]to social networks (DeGroot learning process); to Internet search engines (PageRank);[4]and even to ranking of American football teams.[5]The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors isEdmund Landau.[6][7] Letpositiveandnon-negativerespectively describematriceswith exclusivelypositivereal numbers as elements and matrices with exclusively non-negative real numbers as elements. Theeigenvaluesof a realsquare matrixAarecomplex numbersthat make up thespectrumof the matrix. Theexponential growth rateof the matrix powersAkask→ ∞ is controlled by the eigenvalue ofAwith the largestabsolute value(modulus). The Perron–Frobenius theorem describes the properties of the leading eigenvalue and of the corresponding eigenvectors whenAis a non-negative real square matrix. Early results were due toOskar Perron(1907) and concerned positive matrices. Later,Georg Frobenius(1912) found their extension to certain classes of non-negative matrices. LetA=(aij){\displaystyle A=(a_{ij})}be ann×n{\displaystyle n\times n}positive matrix:aij>0{\displaystyle a_{ij}>0}for1≤i,j≤n{\displaystyle 1\leq i,j\leq n}. Then the following statements hold. All of these properties extend beyond strictly positive matrices toprimitive matrices(see below). Facts 1–7 can be found in Meyer[12]chapter 8claims 8.2.11–15 page 667 and exercises 8.2.5,7,9 pages 668–669. The left and right eigenvectorswandvare sometimes normalized so that the sum of their components is equal to 1; in this case, they are sometimes calledstochastic eigenvectors. Often they are normalized so that the right eigenvectorvsums to one, whilewTv=1{\displaystyle w^{T}v=1}. There is an extension to matrices with non-negative entries. Since any non-negative matrix can be obtained as a limit of positive matrices, one obtains the existence of an eigenvector with non-negative components; the corresponding eigenvalue will be non-negative and greater thanor equal, in absolute value, to all other eigenvalues.[13][14]However, for the exampleA=(0110){\displaystyle A=\left({\begin{smallmatrix}0&1\\1&0\end{smallmatrix}}\right)}, the maximum eigenvaluer= 1 has the same absolute value as the other eigenvalue −1; while forA=(0100){\displaystyle A=\left({\begin{smallmatrix}0&1\\0&0\end{smallmatrix}}\right)}, the maximum eigenvalue isr= 0, which is not a simple root of the characteristic polynomial, and the corresponding eigenvector (1, 0) is not strictly positive. However, Frobenius found a special subclass of non-negative matrices —irreduciblematrices — for which a non-trivial generalization is possible. 
For such a matrix, although the eigenvalues attaining the maximal absolute value might not be unique, their structure is under control: they have the formωr{\displaystyle \omega r}, wherer{\displaystyle r}is a real strictly positive eigenvalue, andω{\displaystyle \omega }ranges over the complexh'throots of 1for some positive integerhcalled theperiodof the matrix. The eigenvector corresponding tor{\displaystyle r}has strictly positive components (in contrast with the general case of non-negative matrices, where components are only non-negative). Also all such eigenvalues are simple roots of the characteristic polynomial. Further properties are described below. LetAbe an×nsquare matrix overfieldF. The matrixAisirreducibleif any of the following equivalent properties holds. Definition 1 :Adoes not have non-trivial invariantcoordinatesubspaces. Here a non-trivial coordinate subspace means alinear subspacespanned by anyproper subsetof standard basis vectors ofFn. More explicitly, for any linear subspace spanned by standard basis vectorsei1, ...,eik, 0 <k<nits image under the action ofAis not contained in the same subspace. Definition 2:Acannot be conjugated into block upper triangular form by apermutation matrixP: whereEandGare non-trivial (i.e. of size greater than zero) square matrices. Definition 3:One can associate with a matrixAa certaindirected graphGA. It hasnvertices labeled 1,...,n, and there is an edge from vertexito vertexjprecisely whenaij≠ 0. Then the matrixAis irreducible if and only if its associated graphGAisstrongly connected. IfFis the field of real or complex numbers, then we also have the following condition. Definition 4:Thegroup representationof(R,+){\displaystyle (\mathbb {R} ,+)}onRn{\displaystyle \mathbb {R} ^{n}}or(C,+){\displaystyle (\mathbb {C} ,+)}onCn{\displaystyle \mathbb {C} ^{n}}given byt↦exp⁡(tA){\displaystyle t\mapsto \exp(tA)}has no non-trivial invariant coordinate subspaces. (By comparison, this would be anirreducible representationif there were no non-trivial invariant subspaces at all, not only considering coordinate subspaces.) A matrix isreducibleif it is not irreducible. A real matrixAisprimitiveif it is non-negative and itsmth power is positive for some natural numberm(i.e. all entries ofAmare positive). LetAbe real and non-negative. Fix an indexiand define theperiod of indexito be thegreatest common divisorof all natural numbersmsuch that (Am)ii> 0. WhenAis irreducible, the period of every index is the same and is called theperiod ofA.In fact, whenAis irreducible, the period can be defined as the greatest common divisor of the lengths of the closed directed paths inGA(see Kitchens[15]page 16). The period is also called the index of imprimitivity (Meyer[12]page 674) or the order of cyclicity. If the period is 1,Aisaperiodic. It can be proved that primitive matrices are the same as irreducible aperiodic non-negative matrices. All statements of the Perron–Frobenius theorem for positive matrices remain true for primitive matrices. The same statements also hold for a non-negative irreducible matrix, except that it may possess several eigenvalues whose absolute value is equal to its spectral radius, so the statements need to be correspondingly modified. In fact the number of such eigenvalues is equal to the period. Results for non-negative matrices were first obtained by Frobenius in 1912. LetA{\displaystyle A}be an irreducible non-negativeN×N{\displaystyle N\times N}matrix with periodh{\displaystyle h}andspectral radiusρ(A)=r{\displaystyle \rho (A)=r}. 
Then the following statements hold. whereO{\displaystyle O}denotes a zero matrix and the blocks along the main diagonal are square matrices. The exampleA=(001001110){\displaystyle A=\left({\begin{smallmatrix}0&0&1\\0&0&1\\1&1&0\end{smallmatrix}}\right)}shows that the (square) zero-matrices along the diagonal may be of different sizes, the blocksAjneed not be square, andhneed not dividen. LetAbe an irreducible non-negative matrix, then: A matrixAis primitive provided it is non-negative andAmis positive for somem, and henceAkis positive for allk ≥ m. To check primitivity, one needs a bound on how large the minimal suchmcan be, depending on the size ofA:[24] Numerous books have been written on the subject of non-negative matrices, and Perron–Frobenius theory is invariably a central feature. The following examples given below only scratch the surface of its vast application domain. The Perron–Frobenius theorem does not apply directly to non-negative matrices. Nevertheless, any reducible square matrixAmay be written in upper-triangular block form (known as thenormal form of a reducible matrix)[25] wherePis a permutation matrix and eachBiis a square matrix that is either irreducible or zero. Now ifAis non-negative then so too is each block ofPAP−1, moreover the spectrum ofAis just the union of the spectra of theBi. The invertibility ofAcan also be studied. The inverse ofPAP−1(if it exists) must have diagonal blocks of the formBi−1so if anyBiisn't invertible then neither isPAP−1orA. Conversely letDbe the block-diagonal matrix corresponding toPAP−1, in other wordsPAP−1with the asterisks zeroised. If eachBiis invertible then so isDandD−1(PAP−1) is equal to the identity plus a nilpotent matrix. But such a matrix is always invertible (ifNk= 0 the inverse of 1 −Nis 1 +N+N2+ ... +Nk−1) soPAP−1andAare both invertible. Therefore, many of the spectral properties ofAmay be deduced by applying the theorem to the irreducibleBi. For example, the Perron root is the maximum of the ρ(Bi). While there will still be eigenvectors with non-negative components it is quite possible that none of these will be positive. A row (column)stochastic matrixis a square matrix each of whose rows (columns) consists of non-negative real numbers whose sum is unity. The theorem cannot be applied directly to such matrices because they need not be irreducible. IfAis row-stochastic then the column vector with each entry 1 is an eigenvector corresponding to the eigenvalue 1, which is also ρ(A) by the remark above. It might not be the only eigenvalue on the unit circle: and the associated eigenspace can be multi-dimensional. IfAis row-stochastic and irreducible then the Perron projection is also row-stochastic and all its rows are equal. The theorem has particular use inalgebraic graph theory. The "underlying graph" of a nonnegativen-square matrix is the graph with vertices numbered 1, ...,nand arcijif and only ifAij≠ 0. If the underlying graph of such a matrix is strongly connected, then the matrix is irreducible, and thus the theorem applies. In particular, theadjacency matrixof astrongly connected graphis irreducible.[26][27] The theorem has a natural interpretation in the theory of finiteMarkov chains(where it is the matrix-theoretic equivalent of the convergence of an irreducible finite Markov chain to its stationary distribution, formulated in terms of the transition matrix of the chain; see, for example, the article on thesubshift of finite type). 
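To make the Markov-chain reading concrete, here is a small sketch; the matrix entries are arbitrary, chosen only so that the matrix is positive and row-stochastic. The all-ones vector is a right eigenvector with eigenvalue 1 = ρ(A), and iterating a probability vector against A converges to the positive left Perron eigenvector, i.e. the stationary distribution of the chain.

```python
import numpy as np

# An arbitrary positive (hence irreducible and aperiodic) row-stochastic matrix.
A = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])

# Right Perron eigenvector: the all-ones vector, with eigenvalue 1 = rho(A).
print(A @ np.ones(3))                     # [1. 1. 1.]

# Left Perron eigenvector = stationary distribution, found here by repeated multiplication.
pi = np.ones(3) / 3
for _ in range(200):
    pi = pi @ A
pi /= pi.sum()
print(pi, pi @ A)                         # pi is fixed: pi @ A equals pi
```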
More generally, it can be extended to the case of non-negativecompact operators, which, in many ways, resemble finite-dimensional matrices. These are commonly studied in physics, under the name oftransfer operators, or sometimesRuelle–Perron–Frobenius operators(afterDavid Ruelle). In this case, the leading eigenvalue corresponds to thethermodynamic equilibriumof adynamical system, and the lesser eigenvalues to the decay modes of a system that is not in equilibrium. Thus, the theory offers a way of discovering thearrow of timein what would otherwise appear to be reversible, deterministic dynamical processes, when examined from the point of view ofpoint-set topology.[28] A common thread in many proofs is theBrouwer fixed point theorem. Another popular method is that of Wielandt (1950). He used theCollatz–Wielandt formula described above to extend and clarify Frobenius's work.[29]Another proof is based on thespectral theory[30]from which part of the arguments are borrowed. IfAis a positive (or more generally primitive) matrix, then there exists a real positive eigenvaluer(Perron–Frobenius eigenvalue or Perron root), which is strictly greater in absolute value than all other eigenvalues, henceris thespectral radiusofA. This statement does not hold for general non-negative irreducible matrices, which haveheigenvalues with the same absolute eigenvalue asr, wherehis the period ofA. LetAbe a positive matrix, assume that its spectral radius ρ(A) = 1 (otherwise considerA/ρ(A)). Hence, there exists an eigenvalue λ on the unit circle, and all the other eigenvalues are less or equal 1 in absolute value. Suppose that another eigenvalue λ ≠ 1 also falls on the unit circle. Then there exists a positive integermsuch thatAmis a positive matrix and the real part of λmis negative. Let ε be half the smallest diagonal entry ofAmand setT=Am−εIwhich is yet another positive matrix. Moreover, ifAx=λxthenAmx=λmxthusλm−εis an eigenvalue ofT. Because of the choice ofmthis point lies outside the unit disk consequentlyρ(T) > 1. On the other hand, all the entries inTare positive and less than or equal to those inAmso byGelfand's formulaρ(T) ≤ρ(Am) ≤ρ(A)m= 1. This contradiction means that λ=1 and there can be no other eigenvalues on the unit circle. Absolutely the same arguments can be applied to the case of primitive matrices; we just need to mention the following simple lemma, which clarifies the properties of primitive matrices. Given a non-negativeA, assume there existsm, such thatAmis positive, thenAm+1,Am+2,Am+3,... are all positive. Am+1=AAm, so it can have zero element only if some row ofAis entirely zero, but in this case the same row ofAmwill be zero. Applying the same arguments as above for primitive matrices, prove the main claim. For a positive (or more generally irreducible non-negative) matrixAthe dominanteigenvectoris real and strictly positive (for non-negativeArespectively non-negative.) This can be established using thepower method, which states that for a sufficiently generic (in the sense below) matrixAthe sequence of vectorsbk+1=Abk/ |Abk| converges to theeigenvectorwith the maximumeigenvalue. (The initial vectorb0can be chosen arbitrarily except for some measure zero set). Starting with a non-negative vectorb0produces the sequence of non-negative vectorsbk. Hence the limiting vector is also non-negative. By the power method this limiting vector is the dominant eigenvector forA, proving the assertion. The corresponding eigenvalue is non-negative. The proof requires two additional arguments. 
First, the power method converges for matrices which do not have several eigenvalues of the same absolute value as the maximal one. The previous section's argument guarantees this. Second, strict positivity of all components of the eigenvector must be ensured for the case of irreducible matrices. This follows from the following fact, which is of independent interest: a non-negative eigenvector of an irreducible non-negative matrix with at least one strictly positive component is in fact strictly positive, and its eigenvalue is strictly positive.

Proof. One of the definitions of irreducibility for non-negative matrices is that for all indices i, j there exists m such that (A^m)_{ij} is strictly positive. Given a non-negative eigenvector v with at least one strictly positive component, say the i-th, the corresponding eigenvalue is strictly positive: indeed, given n such that (A^n)_{ii} > 0, one has r^n v_i = (A^n v)_i ≥ (A^n)_{ii} v_i > 0, hence r is strictly positive. The eigenvector is strictly positive as well: given m such that (A^m)_{ji} > 0, one has r^m v_j = (A^m v)_j ≥ (A^m)_{ji} v_i > 0, hence v_j is strictly positive for every j.

This section proves that the Perron–Frobenius eigenvalue is a simple root of the characteristic polynomial of the matrix, so the eigenspace associated with the Perron–Frobenius eigenvalue r is one-dimensional. The arguments here are close to those in Meyer.[12]

Let v be a strictly positive eigenvector corresponding to r and let w be another eigenvector with the same eigenvalue. (The vectors v and w can be chosen to be real, because A and r are both real, so the null space of A − r has a basis consisting of real vectors.) Assume at least one of the components of w is positive (otherwise multiply w by −1). Take the maximal possible α such that u = v − αw is non-negative; then one of the components of u is zero, otherwise α is not maximal. The vector u is an eigenvector and it is non-negative, so by the lemma described in the previous section non-negativity implies strict positivity for any eigenvector. On the other hand, at least one component of u is zero. The contradiction implies that w does not exist.

Claim: there is no Jordan block corresponding to the Perron–Frobenius eigenvalue r or to any other eigenvalue of the same absolute value. If there were a Jordan block, the infinity norm ‖(A/r)^k‖∞ would tend to infinity as k → ∞, but that contradicts the existence of the positive eigenvector. Assume r = 1 (otherwise consider A/r). Letting v be a Perron–Frobenius strictly positive eigenvector, so Av = v, then

‖v‖∞ = ‖A^k v‖∞ ≥ ‖A^k‖∞ · min_i(v_i),  hence  ‖A^k‖∞ ≤ ‖v‖∞ / min_i(v_i).

So ‖A^k‖∞ is bounded for all k. This gives another proof that there are no eigenvalues of greater absolute value than the Perron–Frobenius one. It also contradicts the existence of a Jordan block for any eigenvalue of absolute value equal to 1 (in particular for the Perron–Frobenius one), because the existence of a Jordan block implies that ‖A^k‖∞ is unbounded. Indeed, a two-by-two Jordan block J with eigenvalue λ satisfies J^k = (λ^k, kλ^(k−1); 0, λ^k), hence ‖J^k‖∞ = |λ|^k + k|λ|^(k−1) = k + 1 when |λ| = 1, which tends to infinity with k. Since J^k = C^(−1) A^k C, one has ‖A^k‖∞ ≥ ‖J^k‖∞ / (‖C‖∞ ‖C^(−1)‖∞), so ‖A^k‖∞ also tends to infinity. The resulting contradiction implies that there are no Jordan blocks for the corresponding eigenvalues.

Combining the two claims above reveals that the Perron–Frobenius eigenvalue r is a simple root of the characteristic polynomial. In the case of non-primitive matrices, there exist other eigenvalues with the same absolute value as r. The same claim is true for them, but requires more work.
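The argument above can be checked numerically. The sketch below uses an arbitrary positive matrix (not from the source), runs the power method from a generic starting vector, and confirms that the limit is strictly positive and that the Perron root strictly dominates the other eigenvalues in absolute value.

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 4.0]])            # arbitrary positive matrix

b = np.random.rand(3)                      # generic non-negative starting vector
for _ in range(100):                       # power iteration
    b = A @ b
    b /= np.linalg.norm(b)

r = b @ A @ b / (b @ b)                    # Rayleigh quotient -> Perron root
eigs = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
print("Perron root:", r)
print("eigenvector is strictly positive:", bool(np.all(b > 0)))
print("Perron root strictly dominates:", bool(eigs[0] - eigs[1] > 0))
```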
Given positive (or more generally irreducible non-negative matrix)A, the Perron–Frobenius eigenvector is the only (up to multiplication by constant) non-negative eigenvector forA. Other eigenvectors must contain negative or complex components since eigenvectors for different eigenvalues are orthogonal in some sense, but two positive eigenvectors cannot be orthogonal, so they must correspond to the same eigenvalue, but the eigenspace for the Perron–Frobenius is one-dimensional. Assuming there exists an eigenpair (λ,y) forA, such that vectoryis positive, and given (r,x), wherex– is the left Perron–Frobenius eigenvector forA(i.e. eigenvector forAT), thenrxTy= (xTA)y=xT(Ay) =λxTy, alsoxTy> 0, so one has:r=λ. Since the eigenspace for the Perron–Frobenius eigenvalueris one-dimensional, non-negative eigenvectoryis a multiple of the Perron–Frobenius one.[31] Given a positive (or more generally irreducible non-negative matrix)A, one defines the functionfon the set of all non-negative non-zero vectorsxsuch thatf(x)is the minimum value of [Ax]i/xitaken over all thoseisuch thatxi≠ 0. Thenfis a real-valued function, whosemaximumis the Perron–Frobenius eigenvaluer. For the proof we denote the maximum offby the valueR. The proof requires to showR = r. Inserting the Perron-Frobenius eigenvectorvintof, we obtainf(v) = rand concluder ≤ R. For the opposite inequality, we consider an arbitrary nonnegative vectorxand letξ=f(x). The definition offgives0 ≤ ξx ≤ Ax(componentwise). Now, we use the positive right eigenvectorwforAfor the Perron-Frobenius eigenvaluer, thenξ wTx = wTξx ≤ wT(Ax) = (wTA)x = r wTx. Hencef(x) = ξ ≤ r, which impliesR ≤ r.[32] LetAbe a positive (or more generally, primitive) matrix, and letrbe its Perron–Frobenius eigenvalue. HencePis aspectral projectionfor the Perron–Frobenius eigenvaluer, and is called the Perron projection. The above assertion is not true for general non-negative irreducible matrices. Actually the claims above (except claim 5) are valid for any matrixMsuch that there exists an eigenvaluerwhich is strictly greater than the other eigenvalues in absolute value and is the simple root of the characteristicpolynomial. (These requirements hold for primitive matrices as above). Given thatMis diagonalizable,Mis conjugate to a diagonal matrix with eigenvaluesr1, ... ,rnon the diagonal (denoter1=r). The matrixMk/rkwill be conjugate (1, (r2/r)k, ... , (rn/r)k), which tends to (1,0,0,...,0), fork → ∞, so the limit exists. The same method works for generalM(without assuming thatMis diagonalizable). The projection and commutativity properties are elementary corollaries of the definition:MMk/rk=Mk/rkM;P2= limM2k/r2k=P. The third fact is also elementary:M(Pu) =MlimMk/rku= limrMk+1/rk+1u, so taking the limit yieldsM(Pu) =r(Pu), so image ofPlies in ther-eigenspace forM, which is one-dimensional by the assumptions. Denoting byv,r-eigenvector forM(bywforMT). Columns ofPare multiples ofv, because the image ofPis spanned by it. Respectively, rows ofw. SoPtakes a form(a v wT), for somea. Hence its trace equals to(a wTv). Trace of projector equals the dimension of its image. It was proved before that it is not more than one-dimensional. From the definition one sees thatPacts identically on ther-eigenvector forM. So it is one-dimensional. So choosing (wTv) = 1, impliesP=vwT. 
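A short numerical illustration of the projection P = v w^T just derived (the 2×2 matrix is arbitrary): for a positive matrix, (A/r)^k converges to the outer product of the right and left Perron eigenvectors once they are normalized so that w^T v = 1.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])                        # arbitrary positive matrix

vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
r = vals[i].real                                  # Perron root
v = np.abs(vecs[:, i].real)                       # right Perron eigenvector

valsT, vecsT = np.linalg.eig(A.T)
j = np.argmax(valsT.real)
w = np.abs(vecsT[:, j].real)                      # left Perron eigenvector
w /= w @ v                                        # normalize so that w^T v = 1

P_limit = np.linalg.matrix_power(A / r, 50)       # (A/r)^k for large k
print(np.round(P_limit, 6))
print(np.round(np.outer(v, w), 6))                # v w^T matches the limit above
```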
For any non-negative matrixAits Perron–Frobenius eigenvaluersatisfies the inequality: This is not specific to non-negative matrices: for any matrixAwith an eigenvalueλ{\displaystyle \scriptstyle \lambda }it is true that|λ|≤maxi∑j|aij|{\displaystyle \scriptstyle |\lambda |\;\leq \;\max _{i}\sum _{j}|a_{ij}|}. This is an immediate corollary of theGershgorin circle theorem. However another proof is more direct: Anymatrix induced normsatisfies the inequality‖A‖≥|λ|{\displaystyle \scriptstyle \|A\|\geq |\lambda |}for any eigenvalueλ{\displaystyle \scriptstyle \lambda }because, ifx{\displaystyle \scriptstyle x}is a corresponding eigenvector,‖A‖≥|Ax|/|x|=|λx|/|x|=|λ|{\displaystyle \scriptstyle \|A\|\geq |Ax|/|x|=|\lambda x|/|x|=|\lambda |}. Theinfinity normof a matrix is the maximum of row sums:‖A‖∞=max1≤i≤m∑j=1n|aij|.{\displaystyle \scriptstyle \left\|A\right\|_{\infty }=\max \limits _{1\leq i\leq m}\sum _{j=1}^{n}|a_{ij}|.}Hence the desired inequality is exactly‖A‖∞≥|λ|{\displaystyle \scriptstyle \|A\|_{\infty }\geq |\lambda |}applied to the non-negative matrixA. Another inequality is: This fact is specific to non-negative matrices; for general matrices there is nothing similar. Given thatAis positive (not just non-negative), then there exists a positive eigenvectorwsuch thatAw=rwand the smallest component ofw(saywi) is 1. Thenr= (Aw)i≥ the sum of the numbers in rowiofA. Thus the minimum row sum gives a lower bound forrand this observation can be extended to all non-negative matrices by continuity. Another way to argue it is via theCollatz-Wielandt formula. One takes the vectorx= (1, 1, ..., 1) and immediately obtains the inequality. The proof now proceeds usingspectral decomposition. The trick here is to split the Perron root from the other eigenvalues. The spectral projection associated with the Perron root is called the Perron projection and it enjoys the following property: The Perron projection of an irreducible non-negative square matrix is a positive matrix. Perron's findings and also (1)–(5) of the theorem are corollaries of this result. The key point is that a positive projection always has rank one. This means that ifAis an irreducible non-negative square matrix then the algebraic and geometric multiplicities of its Perron root are both one. Also ifPis its Perron projection thenAP=PA= ρ(A)Pso every column ofPis a positive right eigenvector ofAand every row is a positive left eigenvector. Moreover, ifAx= λxthenPAx= λPx= ρ(A)Pxwhich meansPx= 0 if λ ≠ ρ(A). Thus the only positive eigenvectors are those associated with ρ(A). IfAis a primitive matrix with ρ(A) = 1 then it can be decomposed asP⊕ (1 −P)Aso thatAn=P+ (1 −P)An. Asnincreases the second of these terms decays to zero leavingPas the limit ofAnasn→ ∞. The power method is a convenient way to compute the Perron projection of a primitive matrix. Ifvandware the positive row and column vectors that it generates then the Perron projection is justwv/vw. The spectral projections aren't neatly blocked as in the Jordan form. Here they are overlaid and each generally has complex entries extending to all four corners of the square matrix. Nevertheless, they retain their mutual orthogonality which is what facilitates the decomposition. The analysis whenAis irreducible and non-negative is broadly similar. The Perron projection is still positive but there may now be other eigenvalues of modulus ρ(A) that negate use of the power method and prevent the powers of (1 −P)Adecaying as in the primitive case whenever ρ(A) = 1. 
So we consider theperipheral projection, which is the spectral projection ofAcorresponding to all the eigenvalues that have modulusρ(A). It may then be shown that the peripheral projection of an irreducible non-negative square matrix is a non-negative matrix with a positive diagonal. Suppose in addition that ρ(A) = 1 andAhasheigenvalues on the unit circle. IfPis the peripheral projection then the matrixR=AP=PAis non-negative and irreducible,Rh=P, and the cyclic groupP,R,R2, ....,Rh−1represents the harmonics ofA. The spectral projection ofAat the eigenvalue λ on the unit circle is given by the formulah−1∑1hλ−kRk{\displaystyle \scriptstyle h^{-1}\sum _{1}^{h}\lambda ^{-k}R^{k}}. All of these projections (including the Perron projection) have the same positive diagonal, moreover choosing any one of them and then taking the modulus of every entry invariably yields the Perron projection. Some donkey work is still needed in order to establish the cyclic properties (6)–(8) but it's essentially just a matter of turning the handle. The spectral decomposition ofAis given byA=R⊕ (1 −P)Aso the difference betweenAnandRnisAn−Rn= (1 −P)Anrepresenting the transients ofAnwhich eventually decay to zero.Pmay be computed as the limit ofAnhasn→ ∞. The matricesL=(100100111){\displaystyle \left({\begin{smallmatrix}1&0&0\\1&0&0\\1&1&1\end{smallmatrix}}\right)},P=(100100−111){\displaystyle \left({\begin{smallmatrix}1&0&0\\1&0&0\\\!-1&1&1\end{smallmatrix}}\right)},T=(011101110){\displaystyle \left({\begin{smallmatrix}0&1&1\\1&0&1\\1&1&0\end{smallmatrix}}\right)},M=(0100010000000100000100100){\displaystyle \left({\begin{smallmatrix}0&1&0&0&0\\1&0&0&0&0\\0&0&0&1&0\\0&0&0&0&1\\0&0&1&0&0\end{smallmatrix}}\right)}provide simple examples of what can go wrong if the necessary conditions are not met. It is easily seen that the Perron and peripheral projections ofLare both equal toP, thus when the original matrix is reducible the projections may lose non-negativity and there is no chance of expressing them as limits of its powers. The matrixTis an example of a primitive matrix with zero diagonal. If the diagonal of an irreducible non-negative square matrix is non-zero then the matrix must be primitive but this example demonstrates that the converse is false.Mis an example of a matrix with several missing spectral teeth. If ω = eiπ/3then ω6= 1 and the eigenvalues ofMare {1,ω2,ω3=-1,ω4} with a dimension 2 eigenspace for +1 so ω and ω5are both absent. More precisely, sinceMis block-diagonal cyclic, then the eigenvalues are {1,-1} for the first block, and {1,ω2,ω4} for the lower one[citation needed] A problem that causes confusion is a lack of standardisation in the definitions. For example, some authors use the termsstrictly positiveandpositiveto mean > 0 and ≥ 0 respectively. In this articlepositivemeans > 0 andnon-negativemeans ≥ 0. Another vexed area concernsdecomposabilityandreducibility:irreducibleis an overloaded term. For avoidance of doubt a non-zero non-negative square matrixAsuch that 1 +Ais primitive is sometimes said to beconnected. Then irreducible non-negative square matrices and connected matrices are synonymous.[33] The nonnegative eigenvector is often normalized so that the sum of its components is equal to unity; in this case, the eigenvector is the vector of aprobability distributionand is sometimes called astochastic eigenvector. Perron–Frobenius eigenvalueanddominant eigenvalueare alternative names for the Perron root. Spectral projections are also known asspectral projectorsandspectral idempotents. 
The period is sometimes referred to as the index of imprimitivity or the order of cyclicity.
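The example matrices T and M quoted above can be checked directly. The sketch below verifies that T is primitive (T² is already entrywise positive despite the zero diagonal) and computes the per-index periods of the block-diagonal cyclic matrix M using the gcd definition given earlier; the helper name period_of_index is invented for illustration.

```python
import numpy as np
from math import gcd
from functools import reduce

T = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])       # primitive despite its zero diagonal
M = np.array([[0, 1, 0, 0, 0], [1, 0, 0, 0, 0],
              [0, 0, 0, 1, 0], [0, 0, 0, 0, 1],
              [0, 0, 1, 0, 0]])                        # block-diagonal cyclic example from the text

def period_of_index(A, i, max_m=30):
    """gcd of all m <= max_m with (A^m)_{ii} > 0."""
    ms = [m for m in range(1, max_m + 1)
          if np.linalg.matrix_power(A, m)[i, i] > 0]
    return reduce(gcd, ms) if ms else 0

print("T^2 > 0 everywhere:", bool(np.all(np.linalg.matrix_power(T, 2) > 0)))   # True -> primitive
print("periods of M's indices:", [period_of_index(M, i) for i in range(5)])    # [2, 2, 3, 3, 3]
```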
https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem
Asearch engineis asoftware systemthat provideshyperlinkstoweb pagesand other relevant information onthe Webin response to a user'squery. The userinputsa query within aweb browseror amobile app, and thesearch resultsare often a list of hyperlinks, accompanied by textual summaries and images. Users also have the option of limiting the search to a specific type of results, such as images, videos, or news. For a search provider, itsengineis part of adistributed computingsystem that can encompass manydata centersthroughout the world. The speed and accuracy of an engine's response to a query is based on a complex system ofindexingthat is continuously updated by automatedweb crawlers. This can includedata miningthefilesanddatabasesstored onweb servers, but some content isnot accessibleto crawlers. There have been many search engines since the dawn of the Web in the 1990s, butGoogle Searchbecame the dominant one in the 2000s and has remained so. It currently has a 90% global market share.[1][2]The business ofwebsitesimproving their visibility insearch results, known asmarketingandoptimization, has thus largely focused on Google. In 1945,Vannevar Bushdescribed an information retrieval system that would allow a user to access a great expanse of information, all at a single desk.[3]He called it amemex. He described the system in an article titled "As We May Think" that was published inThe Atlantic Monthly.[4]The memex was intended to give a user the capability to overcome the ever-increasing difficulty of locating information in ever-growing centralized indices of scientific work. Vannevar Bush envisioned libraries of research with connected annotations, which are similar to modernhyperlinks.[5] Link analysiseventually became a crucial component of search engines through algorithms such asHyper SearchandPageRank.[6][7] The first internet search engines predate the debut of the Web in December 1990:WHOISuser search dates back to 1982,[8]and theKnowbot Information Servicemulti-network user search was first implemented in 1989.[9]The first well documented search engine that searched content files, namelyFTPfiles, wasArchie, which debuted on 10 September 1990.[10] Prior to September 1993, theWorld Wide Webwas entirely indexed by hand. There was a list ofwebserversedited byTim Berners-Leeand hosted on theCERNwebserver. One snapshot of the list in 1992 remains,[11]but as more and more web servers went online the central list could no longer keep up. On theNCSAsite, new servers were announced under the title "What's New!".[12] The first tool used for searching content (as opposed to users) on theInternetwasArchie.[13]The name stands for "archive" without the "v".[14]It was created byAlan Emtage,[14][15][16][17]computer sciencestudent atMcGill UniversityinMontreal, Quebec, Canada. The program downloaded the directory listings of all the files located on public anonymous FTP (File Transfer Protocol) sites, creating a searchabledatabaseof file names; however,Archie Search Enginedid not index the contents of these sites since the amount of data was so limited it could be readily searched manually. The rise ofGopher(created in 1991 byMark McCahillat theUniversity of Minnesota) led to two new search programs,VeronicaandJughead. Like Archie, they searched the file names and titles stored in Gopher index systems. Veronica (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives) provided a keyword search of most Gopher menu titles in the entire Gopher listings. 
Jughead (Jonzy's Universal Gopher Hierarchy Excavation And Display) was a tool for obtaining menu information from specific Gopher servers. While the name of the search engine "Archie Search Engine" was not a reference to theArchie comic bookseries, "Veronica" and "Jughead" are characters in the series, thus referencing their predecessor. In the summer of 1993, no search engine existed for the web, though numerous specialized catalogs were maintained by hand.Oscar Nierstraszat theUniversity of Genevawrote a series ofPerlscripts that periodically mirrored these pages and rewrote them into a standard format. This formed the basis forW3Catalog, the web's first primitive search engine, released on September 2, 1993.[18] In June 1993, Matthew Gray, then atMIT, produced what was probably the firstweb robot, thePerl-basedWorld Wide Web Wanderer, and used it to generate an index called "Wandex". The purpose of the Wanderer was to measure the size of the World Wide Web, which it did until late 1995. The web's second search engineAliwebappeared in November 1993. Aliweb did not use aweb robot, but instead depended on being notified bywebsite administratorsof the existence at each site of an index file in a particular format. JumpStation(created in December 1993[19]byJonathon Fletcher) used aweb robotto find web pages and to build its index, and used aweb formas the interface to its query program. It was thus the firstWWWresource-discovery tool to combine the three essential features of a web search engine (crawling, indexing, and searching) as described below. Because of the limited resources available on the platform it ran on, its indexing and hence searching were limited to the titles and headings found in theweb pagesthe crawler encountered. One of the first "all text" crawler-based search engines wasWebCrawler, which came out in 1994. Unlike its predecessors, it allowed users to search for any word in anyweb page, which has become the standard for all major search engines since. It was also the search engine that was widely known by the public. Also, in 1994,Lycos(which started atCarnegie Mellon University) was launched and became a major commercial endeavor. The first popular search engine on the Web wasYahoo! Search.[20]The first product fromYahoo!, founded byJerry YangandDavid Filoin January 1994, was aWeb directorycalledYahoo! Directory. In 1995, a search function was added, allowing users to search Yahoo! Directory.[21][22]It became one of the most popular ways for people to find web pages of interest, but its search function operated on its web directory, rather than its full-text copies of web pages. Soon after, a number of search engines appeared and vied for popularity. These includedMagellan,Excite,Infoseek,Inktomi,Northern Light, andAltaVista. Information seekers could also browse the directory instead of doing a keyword-based search. In 1996,Robin Lideveloped theRankDexsite-scoringalgorithmfor search engines results page ranking[23][24][25]and received a US patent for the technology.[26]It was the first search engine that usedhyperlinksto measure the quality of websites it was indexing,[27]predating the very similar algorithm patent filed byGoogletwo years later in 1998.[28]Larry Pagereferenced Li's work in some of his U.S. patents for PageRank.[29]Li later used his Rankdex technology for theBaidusearch engine, which was founded by him in China and launched in 2000. 
In 1996,Netscapewas looking to give a single search engine an exclusive deal as the featured search engine on Netscape's web browser. There was so much interest that instead, Netscape struck deals with five of the major search engines: for $5 million a year, each search engine would be in rotation on the Netscape search engine page. The five engines were Yahoo!, Magellan, Lycos, Infoseek, and Excite.[30][31] Googleadopted the idea of selling search terms in 1998 from a small search engine company namedgoto.com. This move had a significant effect on the search engine business, which went from struggling to one of the most profitable businesses in the Internet.[32][33] Search engines were also known as some of the brightest stars in the Internet investing frenzy that occurred in the late 1990s.[34]Several companies entered the market spectacularly, receiving record gains during theirinitial public offerings. Some have taken down their public search engine and are marketing enterprise-only editions, such as Northern Light. Many search engine companies were caught up in thedot-com bubble, a speculation-driven market boom that peaked in March 2000. Around 2000,Google's search enginerose to prominence.[35]The company achieved better results for many searches with an algorithm calledPageRank, as was explained in the paperAnatomy of a Search Enginewritten bySergey BrinandLarry Page, the later founders of Google.[7]Thisiterative algorithmranks web pages based on the number and PageRank of other web sites and pages that link there, on the premise that good or desirable pages are linked to more than others. Larry Page's patent for PageRank citesRobin Li's earlierRankDexpatent as an influence.[29][25]Google also maintained a minimalist interface to its search engine. In contrast, many of its competitors embedded a search engine in aweb portal. In fact, the Google search engine became so popular that spoof engines emerged such asMystery Seeker. By 2000,Yahoo!was providing search services based on Inktomi's search engine. Yahoo! acquired Inktomi in 2002, andOverture(which ownedAlltheWeband AltaVista) in 2003. Yahoo! switched to Google's search engine until 2004, when it launched its own search engine based on the combined technologies of its acquisitions. Microsoftfirst launched MSN Search in the fall of 1998 using search results from Inktomi. In early 1999, the site began to display listings fromLooksmart, blended with results from Inktomi. For a short time in 1999, MSN Search used results from AltaVista instead. In 2004,Microsoftbegan a transition to its own search technology, powered by its ownweb crawler(calledmsnbot). Microsoft's rebranded search engine,Bing, was launched on June 1, 2009. On July 29, 2009, Yahoo! and Microsoft finalized a deal in whichYahoo! Searchwould be powered by Microsoft Bing technology. As of 2019,[update]active search engine crawlers include those of Google,Sogou, Baidu, Bing,Gigablast,Mojeek,DuckDuckGoandYandex. A search engine maintains the following processes in near real time:[36] Web search engines get their information byweb crawlingfrom site to site. The "spider" checks for the standard filenamerobots.txt, addressed to it. The robots.txt file contains directives for search spiders, telling it which pages to crawl and which pages not to crawl. 
After checking for robots.txt and either finding it or not, the spider sends certain information back to beindexeddepending on many factors, such as the titles, page content,JavaScript,Cascading Style Sheets(CSS), headings, or itsmetadatain HTMLmeta tags. After a certain number of pages crawled, amount of data indexed, or time spent on the website, the spider stops crawling and moves on. "[N]o web crawler may actually crawl the entire reachable web. Due to infinite websites, spider traps, spam, and other exigencies of the real web, crawlers instead apply a crawl policy to determine when the crawling of a site should be deemed sufficient. Some websites are crawled exhaustively, while others are crawled only partially".[38] Indexing means associating words and other definable tokens found on web pages to their domain names andHTML-based fields. The associations are stored in a public database and accessible through web search queries. A query from a user can be a single word, multiple words or a sentence. The index helps find information relating to the query as quickly as possible.[37]Some of the techniques for indexing, andcachingare trade secrets, whereas web crawling is a straightforward process of visiting all sites on a systematic basis. Between visits by thespider, thecachedversion of the page (some or all the content needed to render it) stored in the search engine working memory is quickly sent to an inquirer. If a visit is overdue, the search engine can just act as aweb proxyinstead. In this case, the page may differ from the search terms indexed.[37]The cached page holds the appearance of the version whose words were previously indexed, so a cached version of a page can be useful to the website when the actual page has been lost, but this problem is also considered a mild form oflinkrot. Typically when a user enters aqueryinto a search engine it is a fewkeywords.[39]Theindexalready has the names of the sites containing the keywords, and these are instantly obtained from the index. The real processing load is in generating the web pages that are the search results list: Every page in the entire list must beweightedaccording to information in the indexes.[37]Then the top search result item requires the lookup, reconstruction, and markup of thesnippetsshowing the context of the keywords matched. These are only part of the processing each search results web page requires, and further pages (next to the top) require more of this post-processing. Beyond simple keyword lookups, search engines offer their ownGUI- or command-driven operators and search parameters to refine the search results. These provide the necessary controls for the user engaged in the feedback loop users create byfilteringandweightingwhile refining the search results, given the initial pages of the first search results. For example, from 2007 the Google.com search engine has allowed one tofilterby date by clicking "Show search tools" in the leftmost column of the initial search results page, and then selecting the desired date range.[40]It is also possible toweightby date because each page has a modification time. Most search engines support the use of theBoolean operatorsAND, OR and NOT to help end users refine thesearch query. Boolean operators are for literal searches that allow the user to refine and extend the terms of the search. The engine looks for the words or phrases exactly as entered. 
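A minimal sketch of the indexing and Boolean-AND retrieval steps described above (the document texts and URLs are invented placeholders): each token is associated with the set of pages containing it, and a multi-keyword query is answered by intersecting those posting sets.

```python
from collections import defaultdict

docs = {                                   # hypothetical page texts keyed by URL
    "a.example/1": "markov chains model random processes",
    "a.example/2": "search engines index web pages",
    "b.example/3": "web crawlers feed the search index",
}

# Indexing: associate each token with the set of documents that contain it.
index = defaultdict(set)
for url, text in docs.items():
    for token in text.lower().split():
        index[token].add(url)

def search(query):
    """Boolean AND over the query keywords: intersect the posting sets."""
    postings = [index.get(tok, set()) for tok in query.lower().split()]
    return set.intersection(*postings) if postings else set()

print(search("search index"))              # pages containing both "search" and "index"
```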
Some search engines provide an advanced feature calledproximity search, which allows users to define the distance between keywords.[37]There is alsoconcept-based searchingwhere the research involves using statistical analysis on pages containing the words or phrases you search for. The usefulness of a search engine depends on therelevanceof theresult setit gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods torankthe results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another.[37]The methods also change over time as Internet usage changes and new techniques evolve. There are two main types of search engine that have evolved: one is a system of predefined and hierarchically ordered keywords that humans have programmed extensively. The other is a system that generates an "inverted index" by analyzing texts it locates. This first form relies much more heavily on the computer itself to do the bulk of the work. Most Web search engines are commercial ventures supported byadvertisingrevenue and thus some of them allow advertisers tohave their listings ranked higherin search results for a fee. Search engines that do not accept money for their search results make money by runningsearch related adsalongside the regular search engine results. The search engines make money every time someone clicks on one of these ads.[41] Local searchis the process that optimizes the efforts of local businesses. They focus on change to make sure all searches are consistent. It is important because many people determine where they plan to go and what to buy based on their searches.[42] As of January 2022,[update]Googleis by far the world's most used search engine, with a market share of 90%, and the world's other most used search engines wereBingat 4%,Yandexat 2%,Yahoo!at 1%. Other search engines not listed have less than a 3% market share.[2]In 2024, Google's dominance was ruled an illegal monopoly in a case brought by the US Department of Justice.[43] In Russia,Yandexhas a market share of 62.6%, compared to Google's 28.3%. Yandex is the second most used search engine on smartphones in Asia and Europe.[44]In China, Baidu is the most popular search engine.[45]South Korea-based search portalNaveris used for 62.8% of online searches in the country.[46]Yahoo! JapanandYahoo! Taiwanare the most popular choices for Internet searches in Japan and Taiwan, respectively.[47]China is one of few countries where Google is not in the top three web search engines for market share. Google was previously more popular in China, but withdrew significantly after a disagreement with the government over censorship and a cyberattack. Bing, however, is in the top three web search engines with a market share of 14.95%. Baidu is top with 49.1% of the market share.[48][failed verification] Most countries' markets in the European Union are dominated by Google, except for theCzech Republic, whereSeznamis a strong competitor.[49] The search engineQwantis based inParis,France, where it attracts most of its 50 million monthly registered users from. 
Although search engines are programmed to rank websites based on some combination of their popularity and relevancy, empirical studies indicate various political, economic, and social biases in the information they provide[50][51]and the underlying assumptions about the technology.[52]These biases can be a direct result of economic and commercial processes (e.g., companies that advertise with a search engine can become also more popular in itsorganic searchresults), and political processes (e.g., the removal of search results to comply with local laws).[53]For example, Google will not surface certainneo-Naziwebsites in France and Germany, whereHolocaust denialis illegal. Biases can also be a result of social processes, as search engine algorithms are frequently designed to exclude non-normative viewpoints in favor of more "popular" results.[54]Indexing algorithms of major search engines skew towards coverage of U.S.-based sites, rather than websites from non-U.S. countries.[51] Google Bombingis one example of an attempt to manipulate search results for political, social or commercial reasons. Several scholars have studied the cultural changes triggered by search engines,[55]and the representation of certain controversial topics in their results, such asterrorism in Ireland,[56]climate change denial,[57]andconspiracy theories.[58] There has been concern raised that search engines such as Google and Bing provide customized results based on the user's activity history, leading to what has been termed echo chambers orfilter bubblesbyEli Pariserin 2011.[59]The argument is that search engines and social media platforms usealgorithmsto selectively guess what information a user would like to see, based on information about the user (such as location, past click behaviour and search history). As a result, websites tend to show only information that agrees with the user's past viewpoint. According toEli Pariserusers get less exposure to conflicting viewpoints and are isolated intellectually in their own informational bubble. Since this problem has been identified, competing search engines have emerged that seek to avoid this problem by not tracking or "bubbling" users, such asDuckDuckGo. However many scholars have questioned Pariser's view, finding that there is little evidence for the filter bubble.[60][61][62]On the contrary, a number of studies trying to verify the existence of filter bubbles have found only minor levels of personalisation in search,[62]that most people encounter a range of views when browsing online, and that Google news tends to promote mainstream established news outlets.[63][61] The global growth of the Internet and electronic media in theArabandMuslimworld during the last decade has encouraged Islamic adherents inthe Middle EastandAsian sub-continent, to attempt their own search engines, their own filtered search portals that would enable users to performsafe searches. More than usualsafe searchfilters, these Islamic web portals categorizing websites into being either "halal" or "haram", based on interpretation ofSharia law.ImHalalcame online in September 2011.Halalgooglingcame online in July 2013. 
These useharamfilters on the collections fromGoogleandBing(and others).[64] While lack of investment and slow pace in technologies in the Muslim world has hindered progress and thwarted success of an Islamic search engine, targeting as the main consumers Islamic adherents, projects likeMuxlim(a Muslim lifestyle site) received millions of dollars from investors like Rite Internet Ventures, and it also faltered. Other religion-oriented search engines are Jewogle, the Jewish version of Google,[65]and Christian search engine SeekFind.org. SeekFind filters sites that attack or degrade their faith.[66] Web search engine submission is a process in which a webmaster submits a website directly to a search engine. While search engine submission is sometimes presented as a way to promote a website, it generally is not necessary because the major search engines use web crawlers that will eventually find most web sites on the Internet without assistance. They can either submit one web page at a time, or they can submit the entire site using asitemap, but it is normally only necessary to submit thehome pageof a web site as search engines are able to crawl a well designed website. There are two remaining reasons to submit a web site or web page to a search engine: to add an entirely new web site without waiting for a search engine to discover it, and to have a web site's record updated after a substantial redesign. Some search engine submission software not only submits websites to multiple search engines, but also adds links to websites from their own pages. This could appear helpful in increasing a website'sranking, because external links are one of the most important factors determining a website's ranking. However, John Mueller ofGooglehas stated that this "can lead to a tremendous number of unnatural links for your site" with a negative impact on site ranking.[67] In comparison to search engines, a social bookmarking system has several advantages over traditional automated resource location and classification software, such assearch enginespiders. All tag-based classification of Internet resources (such as web sites) is done by human beings, who understand the content of the resource, as opposed to software, which algorithmically attempts to determine the meaning and quality of a resource. Also, people can find andbookmark web pagesthat have not yet been noticed or indexed by web spiders.[68]Additionally, a social bookmarking system can rank a resource based on how many times it has been bookmarked by users, which may be a more usefulmetricforend-usersthan systems that rank resources based on the number of external links pointing to it. However, both types of ranking are vulnerable to fraud, (seeGaming the system), and both need technical countermeasures to try to deal with this. The first web search engine wasArchie, created in 1990[69]byAlan Emtage, a student atMcGill Universityin Montreal. The author originally wanted to call the program "archives", but had to shorten it to comply with the Unix world standard of assigning programs and files short, cryptic names such as grep, cat, troff, sed, awk, perl, and so on. The primary method of storing and retrieving files was via theFile Transfer Protocol(FTP). This was (and still is) a system that specified a common way for computers to exchange files over the Internet. It works like this: Some administrator decides that he wants to make files available from his computer. He sets up a program on his computer, called an FTP server. 
When someone on the Internet wants to retrieve a file from this computer, he or she connects to it via another program called an FTP client. Any FTP client program can connect with any FTP server program as long as the client and server programs both fully follow the specifications set forth in the FTP protocol. Initially, anyone who wanted to share a file had to set up an FTP server in order to make the file available to others. Later, "anonymous" FTP sites became repositories for files, allowing all users to post and retrieve them. Even with archive sites, many important files were still scattered on small FTP servers. These files could be located only by the Internet equivalent of word of mouth: somebody would post an e-mail to a message list or a discussion forum announcing the availability of a file. Archie changed all that. It combined a script-based data gatherer, which fetched site listings of anonymous FTP files, with a regular expression matcher for retrieving file names matching a user query. In other words, Archie's gatherer scoured FTP sites across the Internet and indexed all of the files it found. Its regular expression matcher provided users with access to its database.[70] In 1993, the University of Nevada System Computing Services group developed Veronica.[69] It was created as a type of searching device similar to Archie but for Gopher files. Another Gopher search service, called Jughead, appeared a little later, probably for the sole purpose of rounding out the comic-strip triumvirate. Jughead is an acronym for Jonzy's Universal Gopher Hierarchy Excavation and Display, although, like Veronica, it is probably safe to assume that the creator backed into the acronym. Jughead's functionality was pretty much identical to Veronica's, although it appears to be a little rougher around the edges.[70] The World Wide Web Wanderer, developed by Matthew Gray in 1993,[71] was the first robot on the Web and was designed to track the Web's growth. Initially, the Wanderer counted only Web servers, but shortly after its introduction, it started to capture URLs as it went along. The database of captured URLs became the Wandex, the first web database. Matthew Gray's Wanderer created quite a controversy at the time, partially because early versions of the software ran rampant through the Net and caused a noticeable netwide performance degradation. This degradation occurred because the Wanderer would access the same page hundreds of times a day. The Wanderer soon amended its ways, but the controversy over whether robots were good or bad for the Internet remained. In response to the Wanderer, Martijn Koster created Archie-Like Indexing of the Web, or ALIWEB, in October 1993. As the name implies, ALIWEB was the HTTP equivalent of Archie, and because of this, it is still unique in many ways. ALIWEB does not have a web-searching robot. Instead, webmasters of participating sites post their own index information for each page they want listed. The advantage of this method is that users get to describe their own site, and a robot does not run about eating up Net bandwidth. The disadvantages of ALIWEB are more of a problem today. The primary disadvantage is that a special indexing file must be submitted. Most users do not understand how to create such a file, and therefore they do not submit their pages. This leads to a relatively small database, which means that users are less likely to search ALIWEB than one of the large bot-based sites. 
This Catch-22 has been somewhat offset by incorporating other databases into the ALIWEB search, but it still does not have the mass appeal of search engines such as Yahoo! or Lycos.[70] Excite, initially called Architext, was started by six Stanford undergraduates in February 1993. Their idea was to use statistical analysis of word relationships in order to provide more efficient searches through the large amount of information on the Internet. Their project was fully funded by mid-1993. Once funding was secured, they released a version of their search software for webmasters to use on their own web sites. At the time, the software was called Architext, but it now goes by the name of Excite for Web Servers.[70] Excite was the first serious commercial search engine, launching in 1995.[72] It was developed at Stanford and was purchased for $6.5 billion by @Home. In 2001 Excite and @Home went bankrupt and InfoSpace bought Excite for $10 million. Some of the first analysis of web searching was conducted on search logs from Excite.[73][39] In April 1994, two Stanford University Ph.D. candidates, David Filo and Jerry Yang, created some pages that became rather popular. They called the collection of pages Yahoo! Their official explanation for the name choice was that they considered themselves to be a pair of yahoos. As the number of links grew and their pages began to receive thousands of hits a day, the team created ways to better organize the data. In order to aid in data retrieval, Yahoo! (www.yahoo.com) became a searchable directory. The search feature was a simple database search engine. Because Yahoo! entries were entered and categorized manually, Yahoo! was not really classified as a search engine; instead, it was generally considered to be a searchable directory. Yahoo! has since automated some aspects of the gathering and classification process, blurring the distinction between engine and directory. The Wanderer captured only URLs, which made it difficult to find things that were not explicitly described by their URL. Because URLs are rather cryptic to begin with, this did not help the average user. Searching Yahoo! or the Galaxy was much more effective because they contained additional descriptive information about the indexed sites. At Carnegie Mellon University during July 1994, Michael Mauldin, on leave from CMU, developed the Lycos search engine. Search engines on the web are sites that provide the facility to search content stored on other sites. Various search engines work in different ways, but they all perform three basic tasks.[74] The process begins when a user enters a query statement into the system through the interface provided. There are basically three types of search engines: those that are powered by robots (called crawlers, ants or spiders), those that are powered by human submissions, and those that are a hybrid of the two. Crawler-based search engines use automated software agents (called crawlers) that visit a Web site, read the information on the actual site, read the site's meta tags, and also follow the links that the site connects to, performing indexing on all linked Web sites as well. The crawler returns all of that information to a central repository, where the data is indexed. The crawler periodically returns to the sites to check for any information that has changed; the frequency with which this happens is determined by the administrators of the search engine. 
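To make the gather-and-index cycle concrete, the following is a minimal sketch of how a crawler's output might be turned into an inverted index and then queried. It is an illustration of the general technique described above, not the implementation of any particular search engine; the URLs and page texts are invented, and the "crawled" pages are supplied as in-memory strings rather than fetched over HTTP.

```cpp
// Minimal sketch: building and querying an inverted index from crawled pages.
// Illustrative only; a real crawler would fetch pages over HTTP, follow links,
// and normalize the text far more carefully.
#include <iostream>
#include <map>
#include <set>
#include <sstream>
#include <string>

int main() {
    // Hypothetical crawl results: URL -> page text.
    std::map<std::string, std::string> pages = {
        {"http://example.com/a", "collective wisdom and search engines"},
        {"http://example.com/b", "search engines index the web"},
        {"http://example.com/c", "wisdom of crowds"}
    };

    // Inverted index: term -> set of URLs containing that term.
    std::map<std::string, std::set<std::string>> index;
    for (const auto& [url, text] : pages) {
        std::istringstream words(text);
        std::string term;
        while (words >> term) {
            index[term].insert(url);   // the indexing step
        }
    }

    // Query step: the user searches the index, not the live Web.
    const std::string query = "search";
    for (const std::string& url : index[query]) {
        std::cout << url << " matches \"" << query << "\"\n";
    }
    return 0;
}
```

The key point the sketch illustrates is the separation between crawling (gathering text), indexing (building the term-to-URL map), and querying (looking terms up in that map), which is the three-task division described above.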
Human-powered search engines rely on humans to submit information that is subsequently indexed and catalogued. Only information that is submitted is put into the index. In both cases, when you query a search engine to locate information, you are actually searching through the index that the search engine has created; you are not actually searching the Web. These indices are giant databases of information that is collected, stored and subsequently searched. This explains why a search on a commercial search engine, such as Yahoo! or Google, will sometimes return results that are, in fact, dead links. Since the search results are based on the index, if the index has not been updated since a Web page became invalid, the search engine treats the page as still being an active link even though it no longer is. It will remain that way until the index is updated. So why will the same search on different search engines produce different results? Part of the answer is that not all indices are going to be exactly the same; it depends on what the spiders find or what the humans submitted. But more importantly, not every search engine uses the same algorithm to search through the indices. The algorithm is what the search engine uses to determine the relevance of the information in the index to what the user is searching for. One of the elements that a search engine algorithm scans for is the frequency and location of keywords on a Web page. Those with higher frequency are typically considered more relevant. But search engine technology is becoming sophisticated in its attempt to discourage what is known as keyword stuffing, or spamdexing. Another common element that algorithms analyze is the way that pages link to other pages in the Web. By analyzing how pages link to each other, an engine can both determine what a page is about (if the keywords of the linked pages are similar to the keywords on the original page) and whether that page is considered "important" and deserving of a boost in ranking. Just as the technology is becoming increasingly sophisticated at ignoring keyword stuffing, it is also becoming more savvy to webmasters who build artificial links into their sites in order to build an artificial ranking. Modern web search engines are highly intricate software systems that employ technology that has evolved over the years. There are a number of sub-categories of search engine software that are separately applicable to specific 'browsing' needs. These include web search engines (e.g. Google), database or structured data search engines (e.g. Dieselpoint), and mixed search engines or enterprise search. The more prevalent search engines, such as Google and Yahoo!, utilize hundreds of thousands of computers to process trillions of web pages in order to return reasonably relevant results. Due to this high volume of queries and text processing, the software is required to run in a highly distributed environment with a high degree of redundancy. Another category of search engines is scientific search engines, which search scientific literature. The best known example is Google Scholar. Researchers are working on improving search engine technology by making engines understand the content of articles, such as extracting theoretical constructs or key research findings.[75]
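The link analysis mentioned above can be illustrated with a simplified iterative "importance" score in the spirit of PageRank: pages that are linked to by important pages become important themselves. The sketch below is a toy version under invented assumptions (a three-page link graph and a conventional damping factor of 0.85); it is not the ranking algorithm of Google or any other production engine.

```cpp
// Toy PageRank-style iteration over a hypothetical link graph.
// Not the ranking algorithm of any real search engine.
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Hypothetical link graph: page -> pages it links to.
    std::map<std::string, std::vector<std::string>> links = {
        {"A", {"B", "C"}},
        {"B", {"C"}},
        {"C", {"A"}}
    };

    const double damping = 0.85;                       // conventional damping factor
    const double n = static_cast<double>(links.size());

    // Start every page with an equal share of importance.
    std::map<std::string, double> rank;
    for (const auto& entry : links) rank[entry.first] = 1.0 / n;

    // Repeatedly redistribute each page's score along its outgoing links.
    for (int iter = 0; iter < 20; ++iter) {
        std::map<std::string, double> next;
        for (const auto& entry : links) next[entry.first] = (1.0 - damping) / n;
        for (const auto& entry : links) {
            const double share = rank[entry.first] / entry.second.size();
            for (const std::string& target : entry.second)
                next[target] += damping * share;
        }
        rank = next;
    }

    for (const auto& entry : rank)
        std::cout << entry.first << ": " << entry.second << "\n";
    return 0;
}
```

Even this toy version shows why building artificial inbound links is attractive to manipulators, and why engines have added countermeasures against link spam.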
https://en.wikipedia.org/wiki/Web_search_engines
Eigen is a high-level C++ library of template headers for linear algebra, matrix and vector operations, geometrical transformations, numerical solvers and related algorithms. Eigen is open-source software licensed under the Mozilla Public License 2.0 since version 3.1.1. Earlier versions were licensed under the GNU Lesser General Public License.[2] Version 1.0 was released in December 2006.[3] Eigen is implemented using the expression templates metaprogramming technique, meaning it builds expression trees at compile time and generates custom code to evaluate these. Using expression templates and a cost model of floating point operations, the library performs its own loop unrolling and vectorization.[4] Eigen itself can provide BLAS and a subset of LAPACK interfaces.[5]
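A small usage sketch, assuming Eigen's dense-module headers are available on the include path, shows the style of the API; the matrix and vector values are arbitrary examples.

```cpp
// Minimal Eigen example: build a 3x3 matrix, solve Ax = b, and multiply.
// Expressions such as A * x are assembled lazily via expression templates
// and evaluated when assigned or printed.
#include <iostream>
#include <Eigen/Dense>

int main() {
    Eigen::Matrix3d A;
    A << 2, -1,  0,
        -1,  2, -1,
         0, -1,  2;                     // arbitrary symmetric example matrix

    Eigen::Vector3d b(1.0, 0.0, 1.0);   // arbitrary right-hand side

    // Solve the linear system A x = b with a rank-revealing QR decomposition.
    Eigen::Vector3d x = A.colPivHouseholderQr().solve(b);

    std::cout << "x =\n" << x << "\n";
    std::cout << "A * x =\n" << A * x << "\n";   // should reproduce b
    return 0;
}
```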
https://en.wikipedia.org/wiki/Eigen_(C%2B%2B_library)
The sandbox effect (or sandboxing) is a theory about the way Google ranks web pages in its index. It is the subject of much debate: its existence has been written about[1][2] since 2004,[3] but not confirmed, with several statements to the contrary.[4] According to the theory of the sandbox effect, links that would normally be weighted by Google's ranking algorithm, but that do not improve the position of a webpage in Google's index, could be subjected to filtering to prevent their full impact. Some observations have suggested that two important factors causing this filter are the active age of a domain and the competitiveness of the keywords used in links. The active age of a domain[5] should not be confused with the date of registration on a domain's WHOIS record; it instead refers to the time when Google first indexed pages on the domain. Keyword competitiveness refers to the search frequency of a word on Google search, with observation suggesting that the higher the search frequency of a word, the more likely the sandbox filter effect will come into play. While the presence of the Google sandbox has long been debated, Google has made no direct disclosure. As the sandbox effect almost certainly refers to a set of filters in play for anti-spam purposes, it is unlikely Google would ever provide details on the matter. In one instance, however, Google's John Mueller[6] did mention that "it can take a bit of time for search engines to catch up with your content, and to learn to treat it appropriately. It's one thing to have a fantastic website. Still, search engines generally need a bit more to be able to confirm that and to rank your site — your content — appropriately".[7] This could be understood as the cause of the sandbox effect. Google has long been aware that its historical use of links as a "vote" for ranking web documents can be subject to manipulation, and stated as much in its original IPO documentation. Over the years, Google has filed a number of patents that seek to qualify or minimise the impact of such manipulation, which Google terms "link spam". Link spam is primarily driven by search engine optimizers, who attempt to manipulate Google's page ranking by creating many inbound links to a new website from other websites they own. Some SEO experts also claim that the sandbox only applies to highly competitive or broad keyword phrases and can be counteracted by targeting narrow or so-called long-tail phrases.[8] Google has been updating its algorithm for as long as it has been fighting the manipulation of organic search results. However, until May 10, 2012, when Google launched the Google Penguin update, many people wrongly believed that low-quality backlinks would not negatively affect a site's ranking; Google had been applying such link-based penalties[9] for many years but had not made public how the company approached and dealt with what it called "link spam". Since then, there has been a much wider acknowledgment of the dangers of bad SEO and of forensic analysis of backlinks to ensure there are no harmful links. As a result, the algorithm penalized some of Google's own products too. A well-known example is Google Chrome, which was penalized for purchasing links to boost the web browser's results. Penalties are generally caused by manipulative backlinks intended to favor particular companies in the search results; by adding such links, companies break Google's terms and conditions. 
When Google discovers such links, it imposes penalties to discourage other companies from following this practice and to remove any gains that may have been enjoyed from such links. Google also penalizes those who took part in the manipulation and helped other companies by linking to them. These types of companies are often low-quality directories that list a link to a company website with manipulative anchor text for a fee. Google argues that such pages offer no value to the Internet and are often deindexed. Such links are often referred to as paid links. Paid links are links that people place on their site for a fee, believing that this will positively impact the search results. The practice of paid links was prevalent before the Penguin update, when companies believed they could add any type of link with impunity, since Google had previously claimed that it ignored such links instead of penalizing websites. To comply with Google's current terms of service, it is imperative to apply the nofollow attribute to paid advertisement links. Businesses that buy backlinks from low-quality sites attract Google penalties. Comment links are links left in the comments of articles that are impossible to remove; as this practice became widespread, Google launched a feature to help curb it. The nofollow tag tells search engines not to trust such links. Blog networks are groups of sometimes thousands of blogs that appear unconnected, which link out to those prepared to pay for such links. Google has typically targeted blog networks and, once it detects them, has penalized thousands of sites that gained benefits. Google has encouraged companies to reform their bad practices and, as a result, demands that efforts be made to remove manipulative links. Google launched the Disavow tool on 16 October 2012 so that people could report the bad links they had. The Disavow tool was launched mainly in response to many reports of negative SEO, where companies were being targeted with manipulative links by competitors who knew full well that they would be penalized.[citation needed] There has been some controversy[10] over whether the Disavow tool has any effect when manipulation has taken place over many years. At the same time, some anecdotal case studies have been presented[11] which suggest that the tool is effective and that former ranking positions can be restored. Negative SEO started to occur following the Penguin update, when it became common knowledge that Google would apply penalties for manipulative links. Such practices led companies to diligently monitor their backlinks to ensure they are not being targeted by hostile competitors through negative SEO services.[12][13] In the US and UK, these types of activities by competitors attempting to sabotage a website's rankings are considered to be illegal. A "reverse sandbox" effect is also claimed to exist, whereby new pages with good content, but without inbound links, are temporarily increased in rank, much as the "New Releases" in a book store are displayed more prominently, to encourage the organic building of the World Wide Web.[4][25] David George disputes the claim that Google applies sandboxing to all new websites, saying that the claim "doesn't seem to be borne out by experience". He states that he created a new website in October 2004 and had it ranked in the top 20 Google results for a target keyword within one month. 
He asserts that "no one knows for sure if the Google sandbox exists", and comments that it "seems to fit the observations and experiments of many search engine optimizers". He theorizes that the sandbox "has introduced some hysteresis into the system to restore a bit of sanity to Google's results".[4] In an interview with the Search Engine Roundtable website, Matt Cutts is reported to have said that some things in the algorithm may be perceived as a sandbox that does not apply to all industries.[26] Jaimie Sirovich and Cristian Darie, authors of Professional Search Engine Optimization with PHP, state that they believe that, while Google does not actually have an explicit "sandbox", the effect itself (however caused) is real.[25]
https://en.wikipedia.org/wiki/Google_penalty
Google Pigeon is the code name[1] given to one of Google's local search algorithm updates, released on July 24, 2014.[2] It is aimed at increasing the ranking of local listings in a search. The changes also affect the search results shown in Google Maps along with the regular Google search results. At its initial release, the update applied to US English searches and was intended to be rolled out to other languages and locations shortly afterwards. The update provides results based on the user's location and the listings available in the local directory. The purpose of Pigeon is to give preference to local search results. On the day of release, it received mixed responses from webmasters: some complained about their rankings decreasing, whereas others reported improvements in their search rankings.[3] As webmasters understood it, the update made location and distance key parts of the search strategy, with local directory listings gaining preference in web results. To improve the quality of local searches and provide more relevant results to the user, Google relies on factors such as location and distance. This update alters the local listings in the search results, and local directory sites are given preference.
https://en.wikipedia.org/wiki/Google_Pigeon
The Google Knowledge Graph is a knowledge base from which Google serves relevant information in an infobox beside its search results. This allows the user to see the answer at a glance, as an instant answer. The data is generated automatically from a variety of sources, covering places, people, businesses, and more.[1][2] The information covered by Google's Knowledge Graph grew quickly after launch, tripling in size within seven months (covering 570 million entities and 18 billion facts[3]). By mid-2016, Google reported that it held 70 billion facts[4] and answered "roughly one-third" of the 100 billion monthly searches it handled. By May 2020, this had grown to 500 billion facts on 5 billion entities.[5] There is no official documentation of how the Google Knowledge Graph is implemented.[6] According to Google, its information is retrieved from many sources, including the CIA World Factbook and Wikipedia.[7] It is used to answer direct spoken questions in Google Assistant[8][9] and Google Home voice queries.[10] It has been criticized for providing answers with neither source attribution nor citations.[11] Google announced its Knowledge Graph on May 16, 2012, as a way to significantly enhance the value of information returned by Google searches.[7] Initially available only in English, it was expanded in December 2012 to Spanish, French, German, Portuguese, Japanese, Russian and Italian.[12] Bengali support was added in March 2017.[13] The Knowledge Graph was powered in part by Freebase.[7] In August 2014, New Scientist reported that Google had launched a Knowledge Vault project.[14] After publication, Google reached out to Search Engine Land to explain that Knowledge Vault was a research report, not an active Google service. Search Engine Land expressed indications that Google was experimenting with "numerous models" for gathering meaning from text.[15] Google's Knowledge Vault was meant to deal with facts, automatically gathering and merging information from across the Internet into a knowledge base capable of answering direct questions, such as "Where was Madonna born?" In a 2014 report, the Vault was reported to have collected over 1.6 billion facts, 271 million of which were considered "confident facts" deemed to be more than 90% true. It was reported to be different from the Knowledge Graph in that it gathered information automatically instead of relying on crowd-sourced facts compiled by humans.[15] A Google Knowledge Panel,[16] which is part of Google search engine result pages, presents an overview of entities such as individuals, organizations, locations, or objects directly within the search interface. This feature uses data from the Google Knowledge Graph,[17] an extensive database that organizes and interconnects information about entities, enhancing the retrieval and presentation of relevant content to users. By May 2016, knowledge boxes were appearing for "roughly one-third" of the 100 billion monthly searches the company processed.[11] Dario Taraborelli, head of research at the Wikimedia Foundation, told The Washington Post that Google's omission of sources in its knowledge boxes "undermines people's ability to verify information and, ultimately, to develop well-informed opinions". 
The publication also reported that the boxes are "frequently unattributed", such as a knowledge box on the age of actress Betty White, which is "as unsourced and absolute as if handed down by God".[11] According to The Register in 2014, the display of direct answers in knowledge panels alongside Google search results caused significant readership declines for Wikipedia, from which the panels obtained some of their information.[18] Also in 2014, The Daily Dot noted that "Wikipedia still has no real competitor as far as actual content is concerned. All that's up for grabs are traffic stats. And as a nonprofit, traffic numbers don't equate into revenue in the same way they do for a commercial media site". After the article's publication, a spokesperson for the Wikimedia Foundation, which operates Wikipedia, stated that it "welcomes" the knowledge panel functionality, that it was "looking into" the traffic drops, and that "We've also not noticed a significant drop in search engine referrals. We also have a continuing dialog with staff from Google working on the Knowledge Panel".[19] In his 2020 book, Dariusz Jemielniak noted that because most Google users do not realize that many of the answers to their questions appearing in the Knowledge Graph come from Wikipedia, the feature reduces Wikipedia's popularity and, in turn, limits the site's ability to raise new funds and attract new volunteers.[20] The algorithm has been criticized for presenting biased or inaccurate information, usually because it sources information from websites with high search engine optimization. It was noted in 2014 that while there was a Knowledge Graph entry for most major historical or pseudo-historical religious figures such as Moses, Muhammad and Gautama Buddha, there was none for Jesus, the central figure of Christianity.[21][22] On June 3, 2021, a knowledge box identified Kannada as the ugliest language in India, prompting outrage from the Kannada-language community; the state of Karnataka, where most Kannada speakers live, also threatened to sue Google for damaging the public image of the language. Google promptly changed the featured snippet for the search query and issued a formal apology.[23][24]
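As a rough illustration of how a knowledge base can answer a direct question such as "Where was Madonna born?", the sketch below stores facts as subject-predicate-object triples and looks one up. The triples and field names are invented for the example and do not reflect Google's internal representation of the Knowledge Graph or Knowledge Vault.

```cpp
// Toy knowledge-base sketch: facts stored as (subject, predicate) -> object.
// Invented example data; not Google's actual Knowledge Graph representation.
#include <iostream>
#include <map>
#include <string>
#include <utility>

int main() {
    std::map<std::pair<std::string, std::string>, std::string> facts = {
        {{"Madonna", "birthplace"}, "Bay City, Michigan"},
        {{"Madonna", "occupation"}, "singer"},
        {{"Eiffel Tower", "location"}, "Paris"}
    };

    // A direct question is reduced to a (subject, predicate) lookup.
    auto it = facts.find({"Madonna", "birthplace"});
    if (it != facts.end())
        std::cout << "Where was Madonna born? " << it->second << "\n";
    else
        std::cout << "No answer in the knowledge base.\n";
    return 0;
}
```

The gap between such a lookup and a trustworthy answer is precisely where the sourcing and attribution criticisms discussed above arise: the stored object is only as reliable as the process that put it there.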
https://en.wikipedia.org/wiki/Google_Knowledge_Graph