Clandestine HUMINT asset recruiting, also known as agent cultivation, refers to the recruitment of human agents, commonly known as spies, who work for a foreign government, or within a host country's government or other target of intelligence interest for the gathering of human intelligence. The work of detecting and "doubling" spies who betray their oaths to work on behalf of a foreign intelligence agency is an important part of counterintelligence.
The term spy refers to human agents that are recruited by case officers of a foreign intelligence agency.
Acquiring information may not involve collecting secret documents, but something as simple as observing the number and types of ships in a port. Even though such information might be readily visible, the laws of many countries would consider reporting it to a foreign power as espionage. Other asset roles include support functions such as communications, forgery, disguise, etc.
According to Victor Suvorov, a former Soviet military intelligence officer, a Soviet officer may be under diplomatic or nonofficial cover, handling two kinds of agent: basic and supplementary.[1]
Basic agents can be formed into groups with leaders, or report directly to the controller. Basic agents include information providers, perhaps through espionage or expertise about some local subject. Also in the basic group are "executive agents", who will kill or commit sabotage, and recruiting agents. In US practice, recruiting agents are called access agents.
Both operation leaders and the supplementary group serve in clandestine support roles to the basic agents. They may be clandestine officers of the FIS, such as Rudolf Abel, recruited in the target country, or recruited in a third country. One of the supplementary functions is communications, which includes clandestine radio transmission, dead drops, couriers, and finding places for secure radio transmissions. Other supplementary functions include people who can "legalise" agents with cover jobs, and specialists who can forge documents or obtain illicit copies from the actual source. Safe houses, safe mail drops, and safe telephones are other supplementary functions.
The recruiting process typically begins with "spotting". Spotting is the identification of targets—people—who appear to have access to information, or who are attractive for some support role. The individual may be "developed" over a period of time before the approach is made, or the approach may be made "cold". Alternatively, the potential agent may approach the agency; many intelligence assets were not recruited but were "walk-ins" or "write-ins" who offered information to foreign intelligence services. Background research is conducted on the potential agent to identify any ties to a foreign intelligence agency and to select the most promising candidates and approach method.
Obvious candidates are people with whom staff officers under diplomatic cover, or officers under nonofficial cover, have routine contact. Other candidates may be identified through access agents, existing agents, or information that suggests they may be susceptible to compromise.
Surveillance of targets (e.g., military or other establishments, open source or compromised reference documents) sometimes reveals people with potential access to information, but no clear means of approaching them. With this group, a secondary survey is in order. Headquarters may be able to suggest an approach, perhaps through a third country or through resources not known to the field station.[2]
Recruiting people that may have access to the activities of non-state groups will be much harder, since these groups tend to be much more clandestine, and often operate on a cell system composed of relatives and people who have known one another for years. Access agents may be especially important here, and it may be worth the effort to spot potential access agents.
The deliberate spotting process complements the more obvious candidates, who are also more likely to be compromised by counterintelligence, such as walk-ins and write-ins.
According to Suvorov, the Soviet GRU was not always able to collect enough material, principally from open sources, to understand the candidate's motivations and vulnerabilities.[3] It was GRU doctrine, therefore, to use every meeting to continue to elicit this information. Other intelligence authorities also see eliciting information as a continuing process.[4]
Continued meetings that provide both substantive intelligence and knowledge about the asset are not incompatible with security. Agent handlers still observe all the rules of clandestinity[clarification needed] while developing the agent relationship. Knowledge of meetings, and indeed knowledge of the existence of the asset, must be on a strict need-to-know basis.[3]
After the selection of a candidate for recruitment, the first stage, tracing and cultivating, commences. Details are collected about the candidate, details which may be obtained through reference books, telephone directories, the press, and other recruited agents. The motives that will be used in the actual recruitment of the person are further defined and cultivated, and weaknesses are exacerbated.[5]
Especially when the case officer is of a different culture than the one whose people he is targeting, the recruitment process does not necessarily begin with a person who has the desired information. Instead, the first recruit may be someone well-connected in the local culture. Such a person may be called a principal agent or an access agent, who may simply arrange introductions, or actually run the operations of subagents.[6] Some agents of this type may be able to help in the pre-recruitment stages of assessment and development, or may only be involved in finding possible assets.
Indeed, an access agent may arrange introductions without being completely witting that the purpose of meeting the target is to find people who will participate in espionage. A well-respected technical professional, or a professor, often will make introductions within their field. Such introductions are perfectly reasonable in non-espionage contexts, such as looking for jobs or people to fill them. The process of personnel recruiting for industry is not completely dissimilar from recruiting spies. Both may use personal networks, and, in industrialized countries, computer-assisted personal "networking" (for example, through websites such as LinkedIn).
A professional intelligence officer may very well obtain contacts through counterparts in the intelligence services of allied countries. The other service may arrange direct contact and then drop out of the process, or may jointly operate an asset, as in the joint U.S.–UK operation with Oleg Penkovsky. The allied officer may not actually provide access to his assets, but will convey information requests and responses. One example is the CIA learning from the Malaysian service about an al-Qaeda meeting in Kuala Lumpur, something that would have been impossible for a lone CIA case officer to discover.[6]
HUMINT collectors often have analysts in their own organizations with a sophisticated understanding of the people, with specialized knowledge in targeted countries, industries, or other groups. The analyst may or may not know details of the target's personality.[6]
Even when no personal details are available, the recruiter, in an intelligence service, may have additional resources to use before the first contact. OSINT research can find the publications of a professional, but also social interests. With due regard to the risks and resources required, SIGINT can tap phone lines or intercept other communications that will give the recruiter more information about the target.
This step differs from the next one, assessment of potential recruits, in that it is not focused on the recruit himself or herself, the probability of recruitment, etc.; rather, when there is more than one possible recruit and a finite amount of case officer time, the discussion here gives criteria to select the most important targets.
One analysis by a U.S. clandestine service officer with knowledge of European practices lists five descending priorities for the period between 1957 and 1962.[7] He developed these in the context of being stationed in Europe during the Cold War, with a goal of acquiring HUMINT recruits in the Soviet Union and its satellite nations. His operations were focused on people with certain general characteristics, and among this group priorities were ranked in descending order.
In deciding whether to recruit a prospect, there needs to be a process to make sure that the person is not actively working for the adversary's counterintelligence, is not under surveillance by them, and does not present other risks that would make recruitment unwise. The assessment process applies both to walk-ins and targeted recruits, but additional assessment needs to apply to the walk-in, who is the most likely to be someone sent by a counterintelligence service.
Major intelligence services are very cautious about walk-ins or write-ins, but some of the most important known assets were walk-ins, such as Oleg Penkovsky, or write-ins (using intelligence tradecraft) such as Robert Hanssen. Most walk-ins, though, are rejected.
The U.S. Army defines three classes of people who may present themselves.[8]
A Soviet response, to someone visiting the embassy, was "This is a diplomatic representation and not an espionage centre. Be so kind as to leave the building or we will call the police". According to Suvorov, the police are usually not called but the embassy staff chase the would-be agent out quickly.[3]
Cautious handling of walk-ins was not exclusively a Soviet concern.[8] U.S. Army procedure is to have military intelligence (MI) or military police (MP) personnel handle all aspects of walk-ins. Under U.S. Army Regulations, military police are not intended to do interrogations, which are the responsibility of military intelligence personnel.[9]
While serious discussion with the contact will be done by counter-intelligence specialists, information about the walk-in must be restricted to people with a "need-to-know." "The information the walk-in provides must be guarded and classified appropriately".[8]
While the interviewers will actually be from MI, the walk-in will never be told the identities of intelligence personnel. If they ask to see an intelligence representative, they will be told none is available. Both these measures are intended to prevent hostile intelligence services learning about the structure or procedure of US intelligence/counterintelligence personnel.
The Soviets showed caution equivalent to the United States. Soviet GRU doctrine was that, in general, a walk-in is considered only when they can show some evidence of access to valuable material. The best way of doing this is actually bringing a sample of information. Suvorov observes that "this is perhaps the only way to convince the GRU that they can trust the person".
Suvorov describes an ideal; the one-time head of the GRU, Gen. Ivan Serov, was willing to explore potential agents who gave plausible information about themselves and their access, or a "write-in" whose potential could be verified before a meeting.[10]
It has been the general experience of intelligence agencies that potential recruits, recruits early in the development process, and those currently reporting to a local case officer, still need to be checked against master biographical and other files, which will help spot foreign counterintelligence. This counterintelligence interest could be from their own or third countries.
Once an individual is seen to have potential access to information, the recruiting officer or staff considers why a presumably loyal and trusted individual might be willing to betray his own side. A basic set of reasons are the classic MICE motivations, with further insights in attitudes predisposing to cooperation.
Headquarters may have access to information that a field office does not, such as being able to access credit records to identify financial stress, through a cutout that hides the request as having come from country B's service. Another area where a central office can help is to correlate possible penetration attempts by an individual who approaches one's own, or allied, intelligence services in different locations, as, for example, embassies in different cities.
There are both local and headquarters-based means of validation. The case officer should compare information provided by the agent with locally known facts, from both overt and covert sources. Some services, especially the Russian/Soviet, may not have formal or extensive OSINT, so the case officer may need to check such things (or set up checkable requests) within the station/residency.
Some definite warning signs may come from local or headquarters reporting.
An access agent does not have significant access to material of intelligence value, but has contact with those who do. Such a person could be as simple as the barber outside a military base, or as complex as a mid- or high-level staff officer of an organization or think tank (i.e., outside the government) who deals on a regular basis with both hostile and friendly government personnel whose access to sensitive material may warrant their individual or joint recruitment, on the recommendation of the access agent, by the intelligence apparatus for which the access agent is actually working.[2]
Within private and public organizations that handle sensitive material, human resource workers, receptionists, and other seemingly low-level personnel know a great deal about the people with sensitive access. Certain employees, such as guards and janitors, who have no formal access, still may be able to gain access to secured rooms and containers; there is a blurry area between an access agent who might let a collector into an area, and a support employee who can collect information that he may or may not understand.
U.S. intelligence services, for example, are concerned when their own personnel could be subject to sexual blackmail. This applied to any homosexual relationship until the mid-1990s, and also applied to heterosexual relationships with most foreign nationals.[11] See honeypots in espionage fiction for fictional examples. In some cases, especially when the national was a citizen of a friendly nation, the relationship needed to be reported. Failure to do so, even with a friendly nation, could result in dismissal.
One former CIA officer said that while sexual entrapment was not generally a good tool to recruit a foreign official, it was sometimes employed successfully to solve short-term problems. Seduction is a classic technique; "swallow" was the KGB tradecraft term for women, and "raven" the term for men, trained to seduce intelligence targets.[12]
During the Cold War, the KGB (and allied services, including the East German Stasi under Markus Wolf, and the Cuban Intelligence Directorate [formerly known as Dirección General de Inteligencia or DGI]) frequently sought to entrap CIA officers. The KGB believed that Americans were sex-obsessed materialists, and that U.S. spies could easily be entrapped by sexual lures.[citation needed] The best-known incident, however, was that of Clayton Lonetree, a Marine guard supervisor at the Moscow embassy, who was seduced by a "swallow" who was a translator at the Embassy of the United States in Moscow. Once the seduction took place, she put him in touch with a KGB handler. The espionage continued after his transfer to Vienna, although he eventually turned himself in.
The Soviets used sex not only for direct recruitment, but as a contingency for a possible future need of kompromat on an American officer. The CIA itself made limited use of sexual recruitment against foreign intelligence services. "Coercive recruitment generally didn't work. We found that offers of money and freedom worked better".[11] If the Agency found a Soviet intelligence officer had a girlfriend, they would try to recruit the girlfriend as an access agent. Once the CIA personnel had access to the Soviet officer, they might attempt to double him.
A number of people are known to have been trapped by sexual means.
Yet other factors may apply. True friendship or romance may draw others to become involved with a current agent. John Anthony Walker, who spied for money, recruited friends and relatives. Rosario Ames, wife of Aldrich Ames, was brought into her husband's activities.
Katrina Leung is one of the more complex cases. She came to the United States on a Taiwanese passport and became involved with a PRC activist, on whom she was recruited to report to the FBI. She seduced her FBI case officer, and eventually was recruited by the FBI as a "dangle" to PRC targets, specifically in the Chinese Ministry of State Security (MSS). Her first reports were independently confirmed by the CIA. Later, however, she was found to be passing FBI documents to the MSS, while still reporting to the FBI. While she was allowed, at first, to continue, on the belief that the information she provided to the United States was more important than that which she was giving to the PRC, she was eventually arrested and charged with a relatively low-level crime. Eventually, that charge was dismissed for reasons of prosecutorial misconduct, although a subsequent U.S. government appeal resulted in a plea bargain. Her true loyalty was never made public, but, at various times, she appears to have been a dangled mole and a doubled agent, as well as possibly a PRC access agent to FBI personnel.
Development, the preparation for actual recruiting, involves a direct approach by a case officer who has some existing access to the potential recruit, an indirect approach through an access agent or proprietary, or, when there is reason to take the risk, a "cold" approach. Before the direct recruitment, there may be a delicate period of development.
The case officer, possibly through an access agent, works on establishing a relationship. This phase, in which the potential agent is called a developmental, has not yet reached the recruiting pitch. After the cultivation stage, overt contact is established with the candidate under the guise of an official meeting. After the acquaintanceship has ripened and official meetings evolve into personal meetings, the developmental stage begins.[5]
Suvorov describes the "crash approach" as the most demanding form of recruitment, which is to be done only if the local rezident, or chief of the GRU unit, convinces GRU headquarters that the risk is worthwhile.[3] "Quite a few examples are known of recruitment at the first meeting, of course following the secret cultivation which has gone on for many months".
"The crash approach, or 'love at first sight' in GRU jargon, has a number of irrefutable advantages. Contact with the future agent takes place only once, instead of at meetings over many months, as is the case with the gradual approach. After the first contact the newly recruited agent will himself take action on his own security. He will never talk to his wife, or tell her that he has a charming friend in the Soviet military attaché who is also very interested in stamp collecting."
Telling one's wife or colleagues about a charming Soviet friend can compromise the entire development.[3] The United States is also aware that the Soviet developmental process is, preferably, gradual. "The developmental stage cements the relationship and encourages loyalty to it. The hostile intelligence officer may then, through friendly persuasion, ask for a very innocent and insignificant favor from the candidate and pay him generously for it, thus placing the candidate in a position of obligation. During this stage the future agent becomes accustomed to being asked favors and fulfilling them accurately. The future agent's ambitions, financial and work problems, hobbies, etc., are continuously assessed by an intelligence team to exacerbate weaknesses. The future agent's professional, social, and private personalities are soon stripped away".[5]
It is the goal of the case officer, with appropriate support from his organization, to learn vulnerabilities, build trust, and solve problems for the developmental. These are all preparatory steps to asking, perhaps subtly, the developmental to betray his own side.
Information requests begin innocently, usually asking for public information, but are designed to put the developmental on the path to betrayal. At first, any requests for documents are for open ones, which the case officer gives some pretext for not obtaining himself. A creative case officer may then ask for a technically restricted, but still fairly innocent document, such as an unclassified telephone directory.
The interaction becomes more sensitive, especially when the case officer asks for something technically classified, but with an explanation that lets the potential recruit rationalize that he is not really betraying any trust. During this time, the case officer is building psychological control. In some cases, it may be possible to get useful information without ever asking the developmental to betray his country. These cases may mean the role of the recruit is not to be as a direct agent, but perhaps as an access or a support agent. The recruit may not even be witting of his relationship to a FIS.
Finally, the relationship will move from clandestine to overt, when the foreign service has significant compromising information on the asset. If the asset had been motivated by money, he will find the tasks given him may become more challenging, but the payments reduce, because he is no longer in a position to negotiate.[5]
The traditional openness of the scientific community can be exploited to obtain information from an individual with access to commercially, scientifically, or militarily valuable material.[17] "There is another important source, peculiar to scientific intelligence. This is the body of experts in particular fields of science available in one's own country for consultation. The phenomena of nature are independent of political boundaries, and the experts are in the position of agents spying on these phenomena insofar as they throw light on the feasibility of a suspected enemy development".
A 1998 document describes typical foreign intelligence recruitment against citizens with access to sensitive technology.[5] "Hostile intelligence services begin the agent recruitment process by scrupulously collecting information on persons who are connected to industry, RDT&E laboratories, government institution staffs, military bases, and design organizations". A candidate for recruitment usually fulfills several criteria.
If the decision is made to make a formal recruitment, the case officer gets the developmental accustomed to meeting in more obscure places and at more unusual times. These can have the function of countersurveillance, during which additional officers may be watching the meeting, and the travel to it, for evidence of "country A" counterintelligence interest. Without the developmental fully realizing it, he is being drawn into increasingly treasonous activity, which would be harder and harder to explain were he caught. It is considered important for the case officer to offer money, perhaps dismissed as covering expenses, but really as a means of compromising the developmental.
Eventually, especially if the recruit is ideologically sympathetic to Country B, the case officer makes the direct recruitment pitch. This may not work, and, in fact, may induce rage. The case officer has to be prepared for physical defense and escape if this becomes necessary. If the case officer can produce compromising photographs, receipts, etc., even an originally ideological subject now moves into the realm of compromise.
As part of technical intelligence gathering, key business personnel, or even a business itself, might be recruited. Both the KGB and GRU used this route. The KGB service for this was Line X, which reported to Directorate T of the KGB First Chief Directorate.
The GRU ran recruitments at industry trade shows.[3] Suvorov explains that recruitment was extremely effective with small firms: "The owner of a small firm, even a very successful one, is always at great risk, always keen to strengthen his situation...In any case, if he sells his product he can hide the fact from the authorities. It is equally easy for him to hide the money he has received". The businessman, however, may forget that while he might not report the cash transaction to his own government, the GRU certainly has recorded the act of payment, and can use it for subsequent blackmail.
Suvorov explained that while the most strategic information appears to be associated with major firms, there are several reasons why an approach to a smaller company is the place to begin.
While usual practice is to recruit residents of the country being targeted, there have been instances where the FIS brought illegal agents into that country. Common practice was to have them enter via a third country, and perhaps claim to be immigrants from a fourth.
The introduction of an "illegal" might be due to the need to bring a specialist in to carry out some part of the operation. The Soviet KGB had a Department V, staffed with officers qualified to kill or to carry out sabotage.[18]
Specialists of the FIS might also need legalization to carry out clandestine intelligence collection. The United States has had a group of CIA and NSA specialists who would emplace technical sensors, ranging from telephone taps to specialized devices for measuring weapons tests, into the target country.
Obtaining the documentation and other resources is the role of legalizing agents and documentalists. Candidates for this category of agent are sought among officials of the police and passport departments, consular clerks, customs and immigration officials, and small employers of labor. Agent legalizers are subjected to especially thorough vetting, because the fate of illegals is entrusted to them. When a Soviet illegal arrives in a country, the task of the legalizing agent is to ensure the issue of documents by making the necessary entries in the registration books and to ensure that the illegal is in possession of the necessary documentation.[1]
A common technique was to find birth records of someone known later to have died. The "documentation agent" who did this would not have contact with illegals.[1]
Source: https://en.wikipedia.org/wiki/Recruitment_of_spies
Coded anti-piracy (CAP) is an anti-copyright infringement technology which marks each film print of a motion picture with a distinguishing pattern of dots, used as a forensic identifier to identify the source of illegal copies.
They are not to be confused with cue marks, which are black or white circles usually in the upper right-hand corner of the frame. A cue mark is used to signal the projectionist that a particular reel of a film is ending, as most films come to theaters on several reels of celluloid.
CAP coding is a multi-dot pattern that is printed in several frames of a film print of a theatrically exhibited motion picture. It is sometimes accompanied by text code printed on the edge of a motion picture print, outside the visible picture area.
The dots are arranged in a unique pattern as identification of the particular print of a movie, and are added during manufacture. The marks are not present on the original film negative; they are produced either by physical imprint on the final film print or by digitally postprocessing a digitally distributed film. This enables codes to be customized on a per-copy basis so that they can be used to trace the print to the theaters that played that particular print and to trace any bootleg copies however they were made – be they telecined, cammed, or telesynced.
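The exact dot layouts used by Kodak and Deluxe are not public; the following is a minimal, hypothetical Python sketch of how a per-print identifier could be mapped to a reproducible set of dot positions within a frame and later matched against dots recovered from a bootleg copy. The grid size, dot count, and function names are assumptions for illustration, not the actual CAP scheme.

import hashlib

GRID_W, GRID_H = 64, 36   # hypothetical coarse grid of candidate dot positions
NUM_DOTS = 12             # hypothetical number of dots per frame

def dot_pattern(print_id: str) -> set[tuple[int, int]]:
    """Derive a reproducible set of (x, y) dot positions from a print identifier."""
    dots: set[tuple[int, int]] = set()
    counter = 0
    while len(dots) < NUM_DOTS:
        digest = hashlib.sha256(f"{print_id}:{counter}".encode()).digest()
        dots.add((digest[0] % GRID_W, digest[1] % GRID_H))
        counter += 1
    return dots

def identify_print(recovered_dots: set[tuple[int, int]], known_prints: list[str]) -> str | None:
    """Return the known print whose expected pattern best matches dots found in a bootleg."""
    best_id, best_overlap = None, 0
    for print_id in known_prints:
        overlap = len(dot_pattern(print_id) & recovered_dots)
        if overlap > best_overlap:
            best_id, best_overlap = print_id, overlap
    return best_id

Under these assumptions, a distributor would record the pattern for each struck print and call identify_print on dot positions recovered from a camcorded or telecined copy.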
The original style of CAP code, developed in 1982 by Kodak along with the Motion Picture Association, is a series of very small dots printed in the picture area of a film print.
The original instance of CAP developed by Kodak is a technology for watermarking film prints to trace copies of a print, whether legal or not.
A newer and more common variation has been developed by Deluxe Media. It makes use of more visible dots, and was developed to thwart film copying from theatergoers with camcorders, or prints that have been illicitly telecined to videotape or DVD.
Deluxe's version has been given the pejorative name of "crap code" by filmgoers.[who?] The term "crap code" was coined on a movie projectionists' discussion forum, due to its quite intrusive nature when viewing. These dots are usually placed on bright areas of a film frame, so they can be more easily identified, and are a reddish-brown color.
A different marking system is CineFence, introduced by Philips in 2006 and commercially available in 2008.
The Digital Cinema System Specification by Digital Cinema Initiatives mandates forensic marking of digital film;[1] CineFence is the first marking system that complies with this standard.
CineFence claims to be imperceptible to the viewer, but robust to copying and encoding,[1] and encodes 35 bits per 5 minutes.[2]
Source: https://en.wikipedia.org/wiki/Coded_anti-piracy
Fictitious or fake entries are deliberately incorrect entries in reference works such as dictionaries, encyclopedias, maps, and directories, added by the editors as copyright traps to reveal subsequent plagiarism or copyright infringement. There are more specific terms for particular kinds of fictitious entry, such as Mountweazel, trap street, paper town, phantom settlement, and nihilartikel.[1]
The neologism Mountweazel was coined by The New Yorker writer Henry Alford in an article that mentioned a fictitious biographical entry intentionally placed as a copyright trap in the 1975 New Columbia Encyclopedia.[2][3] The entry described Lillian Virginia Mountweazel as a fountain designer turned photographer, who died in an explosion while on assignment for Combustibles magazine. Allegedly, she was widely known for her photo-essays of unusual subject matter, including New York City buses, the cemeteries of Paris, and rural American mailboxes. According to the encyclopedia's editor, it is a tradition for encyclopedias to put a fake entry to trap competitors for plagiarism.[4] The surname came to be associated with all such fictitious entries.[5][6]
The term nihilartikel, combining the Latin nihil ("nothing") and German Artikel ("article"), is sometimes used.[1]
By including a trivial piece of false information in a larger work, it is easier to demonstrate subsequent plagiarism if the fictitious entry is copied along with other material. An admission of this motive appears in the preface to Chambers' 1964 mathematical tables: "those [errors] that are known to exist form an uncomfortable trap for any would-be plagiarist".[7] Similarly, trap streets may be included in a map, or invented phone numbers in a telephone directory.
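As a minimal illustration of this idea (not drawn from any particular publisher's practice), the Python sketch below plants a few hypothetical trap entries in a directory and then checks whether a competitor's listing contains them; the names and phone numbers are invented for the example.

# Hypothetical trap entries planted in one's own directory (fabricated for illustration).
TRAP_ENTRIES = {
    "Lillian V. Mountweazel": "555-0142",
    "Agloe Hardware": "555-0173",
}

def find_copied_traps(competitor_listing: dict[str, str]) -> list[str]:
    """Return the names of planted trap entries that also appear in a competitor's data."""
    return [
        name for name, number in TRAP_ENTRIES.items()
        if competitor_listing.get(name) == number
    ]

# A match on even one trap entry is strong evidence of copying, since the entry
# describes no real person or business and could not have been gathered independently.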
Fictitious entries may be used to demonstrate copying, but to prove legal infringement, the material must also be shown to be eligible for copyright (see Feist v. Rural, Fred Worth lawsuit or Nester's Map & Guide Corp. v. Hagstrom Map Co., 796 F.Supp. 729, E.D.N.Y., 1992).[8]
Fictitious entries on maps may be called phantom settlements, trap streets,[14] paper towns, cartographer's follies, or other names. They are intended to help reveal copyright infringements.[15] They are not to be confused with paper streets, which are streets which are planned but as of the printing of the map have not yet been built.
Some publications such as those published by Harvard biologist John Bohannon are used to detect lack of academic scrutiny, editorial oversight, fraud, or data dredging on the part of authors or their publishers. Trap publications may be used by publishers to immediately reject articles citing them, or by academics to detect journals of ill repute (those that would publish them or works citing them).
A survey of food tastes by the US Army in the 1970s included "funistrada", "buttered ermal" and "braised trake" to control for inattentive answers.[27]
In 1985, the fictitious town of Ripton, Massachusetts, was "created" in an effort to protest the ignorance of state officials about rural areas. The town received a budget appropriation and several grants before the hoax was revealed.[28]
Fictitious entries may be used to demonstrate copying, but to prove legal infringement, the material must also be shown to be eligible for copyright. However, due to the Feist v. Rural decision that "information alone without a minimum of original creativity cannot be protected by copyright", there are very few cases where copyright has been proven, and many are dismissed.
Often there will be errors in maps, dictionaries, and other publications that are not deliberate and thus are not fictitious entries. For example, within dictionaries there are such mistakes known as ghost words, "words which have no real existence [...] being mere coinages due to the blunders of printers or scribes, or to the perfervid imaginations of ignorant or blundering editors."[38]
Source: https://en.wikipedia.org/wiki/Fictitious_entry
Printer tracking dots, also known as printer steganography, DocuColor tracking dots, yellow dots, secret dots, or a machine identification code (MIC), are a digital watermark which many color laser printers and photocopiers produce on every printed page, identifying the specific device that was used to print the document. Developed by Xerox and Canon in the mid-1980s, the existence of these tracking codes became public only in 2004.
In the mid-1980s, Xerox pioneered an encoding mechanism for a unique number represented by tiny dots spread over the entire print area, and first deployed this scheme in its DocuColor line of printers. Xerox developed this surreptitious tracking code "to assuage fears that their color copiers could be used to counterfeit bills"[1] and received U.S. Patent No. 5515451 describing the use of the yellow dots to identify the source of a copied or printed document.[2][3] The scheme was then widely deployed in other printers, including those made by other manufacturers.
The public first became aware of the tracking scheme in October 2004, when Dutch authorities used it to track counterfeiters who had used a Canon color laser printer.[4] In November 2004, PC World reported the machine identification code had been used for decades in some printers, allowing law enforcement to identify and track counterfeiters.[1] The Central Bank Counterfeit Deterrence Group (CBCDG) has denied that it developed the feature.[2]
In 2005, the civil liberties activist group Electronic Frontier Foundation (EFF) encouraged the public to send in sample printouts and subsequently decoded the pattern.[5] The pattern has been demonstrated on a wide range of printers from different manufacturers and models.[6] The EFF stated in 2015 that the documents that they previously received through a Freedom of Information Act request[7] suggested that all major manufacturers of color laser printers entered a secret agreement with governments to ensure that the output of those printers is forensically traceable.[6]
"Although we still don't know if this is correct, or how subsequent generations of forensic tracking technologies might work, it is probably safest to assume that all modern color laser printers do include some form of tracking information that associates documents with the printer's serial number. (If any manufacturer wishes to go on record with a statement to the contrary, we'll be happy to publish that here.)"
In 2007, the European Parliament was asked about the question of invasion of privacy.[8][2]
The pattern consists of a dot-matrix spread of yellow dots, which can barely be seen with the naked eye. The dots have a diameter of one-tenth millimetre (0.004 in) and a spacing of about one millimetre (0.04 in). Their arrangement encodes the serial number of the device, date and time of the printing, and is repeated several times across the printing area in case of errors. For example, if the code consists of 8 × 16 dots in a square or hexagonal pattern, it spreads over a surface of about 4 square centimetres (0.62 sq in) and appears on a sheet of A4-size paper about 150 times. Thus, it can be analyzed even if only fragments or excerpts are available. Some printers arrange yellow dots in seemingly random point clouds.
According to the Chaos Computer Club in 2005, color printers leave the code in a matrix of 32 × 16 dots and thus can store 64 bytes of data (64 × 8 bits).
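The actual per-manufacturer encodings differ (the TU Dresden study mentioned below found four distinct schemes); as a purely illustrative sketch of the bit budget, the following Python code packs a serial number and a timestamp into a 16 × 32 bit matrix of the kind described above. The field sizes and layout are assumptions, not any vendor's real format, and real devices add redundancy and repeat the block across the page.

import datetime

ROWS, COLS = 16, 32  # 16 x 32 positions = 512 bits = 64 bytes

def encode_matrix(serial: int, when: datetime.datetime) -> list[list[int]]:
    """Pack a printer serial number and a timestamp into a 16x32 bit matrix (illustrative)."""
    payload = serial.to_bytes(8, "big") + int(when.timestamp()).to_bytes(8, "big")
    payload = payload.ljust(ROWS * COLS // 8, b"\x00")   # pad to 64 bytes
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    return [bits[r * COLS:(r + 1) * COLS] for r in range(ROWS)]

def decode_matrix(matrix: list[list[int]]) -> tuple[int, datetime.datetime]:
    """Recover the serial number and timestamp from the bit matrix."""
    bits = [b for row in matrix for b in row]
    data = bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
        for n in range(0, len(bits), 8)
    )
    serial = int.from_bytes(data[:8], "big")
    when = datetime.datetime.fromtimestamp(int.from_bytes(data[8:16], "big"))
    return serial, when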
As of 2011, Xerox was one of the few manufacturers to draw attention to the marked pages, stating in a product description, "The digital color printing system is equipped with an anti-counterfeit identification and banknote recognition system according to the requirements of numerous governments. Each copy shall be marked with a label which, if necessary, allows identification of the printing system with which it was created. This code is not visible under normal conditions."[10]
In 2018, scientists at the TU Dresden analyzed the patterns of 106 printer models from 18 manufacturers and found four different encoding schemes.[11]
The dots can be made visible by printing or copying a page and subsequently scanning a small section with a high-resolution scanner. The yellow color channel can then be enhanced with an image processing program to make the dots of the identification code clearly visible. Under good lighting conditions, a magnifying glass may be enough to see the pattern. Under UV light, the yellow dots are clearly recognizable.[12]
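A minimal sketch of that enhancement step, assuming the Pillow and NumPy libraries and a scanned image file named scan.png (both are assumptions, not part of the original description): yellow areas have high red and green values but a low blue value, so a simple "yellowness" map makes the dots stand out.

import numpy as np
from PIL import Image

# Load a high-resolution scan and split it into RGB channels.
img = np.asarray(Image.open("scan.png").convert("RGB")).astype(int)
r, g, b = img[..., 0], img[..., 1], img[..., 2]

# Yellow = strong red and green, weak blue; boost that difference.
yellowness = np.clip(((r + g) // 2 - b) * 8, 0, 255).astype(np.uint8)

# Invert so the faint yellow dots appear as dark marks on a light background.
Image.fromarray(255 - yellowness).save("dots_enhanced.png")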
Using this steganographic process, high-quality copies of an original (e.g. a banknote) under blue light can be made identifiable. Using this process, even shredded prints can be identified: the 2011 "Shredder Challenge" initiated by DARPA was solved by a team called "All Your Shreds Are Belong To U.S." consisting of Otávio Good and two colleagues.[13][14]
Both journalists and security experts have suggested that The Intercept's handling of the leaks by whistleblower Reality Winner, which included publishing secret NSA documents unredacted and including the printer tracking dots, was used to identify Winner as the leaker, leading to her arrest in 2017 and conviction.[15][16]
Copies or printouts of documents with confidential personal information, for example health care information, account statements, tax declarations or balance sheets, can be traced to the owner of the printer, and the inception date of the documents can be revealed. This traceability is unknown to many users and inaccessible, as manufacturers do not publicize the code that produces these patterns. It is unclear which data may be unintentionally passed on with a copy or printout. In particular, there are no mentions of the technique in the support materials of most affected printers. In 2005, the Electronic Frontier Foundation (EFF) sought a decoding method and made available a Python script for analysis.[17]
In 2018, scientists from TU Dresden developed and published a tool to extract and analyze the steganographic codes of a given color printer and subsequently to anonymize prints from that printer. The anonymization works by printing additional yellow dots on top of the printer's tracking dots.[18][11][19] The scientists made the software available to support whistleblowers in their efforts to publicize grievances.[20]
Other methods of identification are not as easily recognizable as yellow dots. For example, a modulation of laser intensity and a variation of shades of grey in texts are feasible. As of 2006, it was unknown whether manufacturers were also using these techniques.[21]
Source: https://en.wikipedia.org/wiki/Machine_Identification_Code
Traitor tracingschemes help trace the source of leaks when secret or proprietary data is sold to many customers.
In a traitor tracing scheme, each customer is given a different personal decryption key.
(Traitor tracing schemes are often combined withconditional accesssystems so that, once the traitor tracing algorithm identifies a personal decryption key associated with the leak, the content distributor can revoke that personal decryption key, allowing honest customers to continue to watch pay television while the traitor and all the unauthorized users using the traitor's personal decryption key are cut off.)
Traitor tracing schemes are used in pay television to discourage pirate decryption – to discourage legitimate subscribers from giving away decryption keys.[1][2][3][4][5] Traitor tracing schemes are ineffective if the traitor rebroadcasts the entire (decrypted) original content.
There are other kinds of schemes that discourage pirate rebroadcast – i.e., that discourage legitimate subscribers from giving away decrypted original content. These other schemes use tamper-resistant digital watermarking to generate different versions of the original content. Traitor tracing key assignment schemes can be translated into such digital watermarking schemes.[6][7][8]
Traitor tracing is a copyright infringement detection system which works by tracing the source of leaked files rather than by direct copy protection. The method is that the distributor adds a unique salt to each copy given out. When a copy of it is leaked to the public, the distributor can check the value on it and trace it back to the "leak".
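A minimal Python sketch of this idea, under the assumption that the content is a plain text file and that the per-customer random token (the "salt") can simply be appended as a trailing comment line; real systems embed the marker far more robustly.

import secrets

SALT_DB: dict[str, str] = {}  # salt -> customer, kept by the distributor

def make_copy(content: str, customer: str) -> str:
    """Issue a customer-specific copy carrying a unique, recorded salt."""
    salt = secrets.token_hex(16)
    SALT_DB[salt] = customer
    return content + f"\n# distribution-id: {salt}\n"

def trace_leak(leaked_copy: str) -> str | None:
    """Identify which customer's copy was leaked, if its salt is still present."""
    for line in leaked_copy.splitlines():
        if line.startswith("# distribution-id: "):
            return SALT_DB.get(line.removeprefix("# distribution-id: ").strip())
    return None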
The main concept is that each licensee (the user) is given a unique key which unlocks the software or allows the media to be decrypted.
If the key is made public, the content owner then knows exactly who did it from their database of assigned codes.
A major attack on this strategy is the key generator (keygen). By reverse engineering the software, the code used to recognise a valid key can be characterised and then a program to spit out valid keys on command can be made.
The practice of traitor tracing is most often implemented with computer software, and evolved from the previous method of activation codes. In this model, each box of software ships with a unique activation number on a sticker or label that can only be read after the package is opened, separate from the CD-ROM or DVD-ROM. This number is an encoded serial number, expanded to a usually large number or string of letters, digits, and hyphens. When the software is being installed, or the first time it is run, the user is prompted to type in the license code. This code is then decoded back to its base serial number; the additional information removed by this process is used to verify the authenticity of the serial number. If the user mistypes a single character in what is sometimes a very long code, the software will refuse to install and require the number to be retyped until it is correct.
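The sketch below shows, in broad strokes, how such an activation code might be built: a base serial number plus a short keyed checksum, formatted into groups. The HMAC secret, group format, and lengths are illustrative assumptions rather than any vendor's actual scheme.

import hashlib
import hmac

VENDOR_SECRET = b"example-only-secret"  # assumption: held by the vendor

def make_activation_code(serial: int) -> str:
    """Expand a base serial number into a grouped code with a verification suffix."""
    body = f"{serial:010d}"
    check = hmac.new(VENDOR_SECRET, body.encode(), hashlib.sha256).hexdigest()[:6].upper()
    raw = body + check
    return "-".join(raw[i:i + 4] for i in range(0, len(raw), 4))

def decode_activation_code(code: str) -> int | None:
    """Return the base serial number if the checksum verifies, else None."""
    raw = code.replace("-", "")
    body, check = raw[:10], raw[10:]
    expected = hmac.new(VENDOR_SECRET, body.encode(), hashlib.sha256).hexdigest()[:6].upper()
    return int(body) if hmac.compare_digest(check, expected) else None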
This activation code is generated during the packaging phase of manufacture, so that every user is receiving the same software but a different activation code. If a user performs a "casual copy" of the software for a friend, that friend must have the license code as well as the software to install it on their system. Since the software itself cannot determine that it is a copy, this is a way to beat this basic system.
With the expansion of computer networking, two additional levels of software protection have evolved, "network registration" and "online registration".
Software that employs this additional security keeps a copy of the actual serial number being used in the license code. When it is active, it is broadcasting this number on a clandestine channel on the local network. If the software has been installed on another computer on that same network, using the same license code, when the second copy is run it will detect its serial number in use on the network and typically will refuse to run. It may also cause the other copy of itself already in use to close. This prevents a small business from buying one copy of expensive software and installing it on several of the computers at their location, provided they are networked.
The process of online registration is very similar to activation codes, but adds an additional step. Most modern companies are now not only internally networked, but are also connected to the internet. This allows the software manufacturers to add an additional check to their system during the installation process. When the user enters a valid license code, the software does not immediately install. Instead, it uses the active internet connection to contact a server being operated by the software manufacturer. The license code is transmitted to the server, and it waits for the server to tell it whether the install should be permitted. The server maintains a database of all the serial numbers that have been used to install their software. If a single serial number is used on a number of machines (a typical limit would be five machines) then the server tells the software that it is likely a copy and to abort the installation. The users are usually presented with a dialog instructing them to contact the manufacturer.
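A sketch of the server-side logic only, assuming an in-memory counter and the five-machine limit mentioned above; a production service would use persistent storage, machine fingerprints, and an authenticated network protocol.

from collections import defaultdict

MAX_ACTIVATIONS = 5                      # typical per-serial install limit
activations: dict[str, int] = defaultdict(int)

def handle_activation_request(serial: str) -> bool:
    """Record one install attempt and say whether the install may proceed."""
    if activations[serial] >= MAX_ACTIVATIONS:
        return False                     # likely a shared or pirated serial: refuse
    activations[serial] += 1
    return True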
Websites offering subscriber downloads may embed a digital watermark in the download, usually in a way that is not readily apparent to the user. For example, an identification number may be embedded in an image, or in metadata such as the date of a file. It is also possible to watermark multiple copies of a file with a unique watermark per recipient before sending them. In this case the embedded identification number can be the ID of the recipient.
Some software that implements online registration extends this with a process commonly known as "phoning home". In this case, the software, either each time it is used or at some preset interval such as monthly, makes another connection back to the registration server. It does this to check in with the server to see if the serial number it is using has been determined to be one that is being used to install in many places. Serial numbers that have been identified as "pirated" (illegally distributed) are added to a blacklist on the server, a process referred to as being "burned". Burned serial numbers cannot be used to install or activate the product. Serial number lists are available on the internet that include a large number of valid registration codes for many software titles. It is common for software manufacturers to seek out these lists and invalidate the serial numbers that appear on these lists. This discourages individuals from giving out their registration codes for fear that this code will later be invalidated, disabling the original install of the software the next time that it "phones home".
Some of the more expensive software requires the user to send personal information to the software vendor before receiving the activation code. The activation code is usually a large sequence of numbers and letters, and encodes information including the license serial number, information to ensure the code is valid, and also includes the ability to verify the personal information the user sent to the software vendor. In this way, the user's name or business name must be entered along with the registration code. The registration code will not be accepted by the software unless the user types in the business name exactly as submitted to the software vendor. The business name is usually displayed by the software on its opening banner whenever the software is used. If the customer gives away his activation code it will be useless without his business name, and anyone that uses the activation code must enter it in during the activation process, leaving the original buyer's business name on the banner of the software. This makes it very easy to "trace the traitor" and find any customers who originally gave out their activation codes. Since giving away the registration code is a violation of the license agreement, the software vendor may invalidate the user's serial number (disabling that user's software in the process) and may take legal action. This does raise privacy concerns in some areas.
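One way such a code can be cryptographically bound to the buyer's business name, so that the code is useless without the exact registered name, is sketched below; the MAC construction, key, and truncation length are illustrative assumptions only.

import hashlib
import hmac

VENDOR_KEY = b"example-only-vendor-key"  # assumption: known only to the vendor

def issue_code(business_name: str, serial: int) -> str:
    """Bind the activation code to both the serial number and the business name."""
    msg = f"{business_name.strip().upper()}|{serial}".encode()
    return hmac.new(VENDOR_KEY, msg, hashlib.sha256).hexdigest()[:20].upper()

def verify_code(business_name: str, serial: int, code: str) -> bool:
    """Accept the code only if the name is typed exactly as registered with the vendor."""
    return hmac.compare_digest(issue_code(business_name, serial), code.upper())

Note that verifying such a code offline means shipping some verification secret, or a public-key signature instead of an HMAC, with the product; the sketch glosses over that trade-off.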
Source: https://en.wikipedia.org/wiki/Traitor_tracing
A watermark is an identifying image or pattern in paper that appears as various shades of lightness/darkness when viewed by transmitted light (or when viewed by reflected light, atop a dark background), caused by thickness or density variations in the paper.[1] Watermarks have been used on postage stamps, currency, and other government documents to discourage counterfeiting. There are two main ways of producing watermarks in paper: the dandy roll process, and the more complex cylinder mould process.
Watermarks vary greatly in their visibility; while some are obvious on casual inspection, others require some study to pick out. Various aids have been developed, such as watermark fluid that wets the paper without damaging it. A watermark is very useful in the examination of paper because it can be used for dating documents and artworks, identifying sizes, mill trademarks and locations, and determining the quality of a sheet of paper. The word is also used for digital practices that share similarities with physical watermarks. In one case, overprint on computer-printed output may be used to identify output from an unlicensed trial version of a program. In another instance, identifying codes can be encoded as a digital watermark for a music, video, picture, or other file. An artist may also add an identifying digital signature, graphic, or logo to their digital artworks as an identifier or anti-counterfeit measure.
Watermarks were first introduced in Fabriano, Italy, in 1282.[2] At the time, watermarks were created by changing the thickness of paper during a stage in the manufacturing process when it was still wet.
Traditionally, a watermark was made by impressing a water-coated metal stamp onto the paper during manufacturing. The invention of the dandy roll in 1826 by John Marshall revolutionised the watermark process and made it easier for producers to watermark their paper.
The dandy roll is a light roller covered by material similar to window screen that is embossed with a pattern. Faint lines are made by laid wires that run parallel to the axis of the dandy roll, and the bold lines are made by chain wires that run around the circumference to secure the laid wires to the roll from the outside. Because the chain wires are located on the outside of the laid wires, they have a greater influence on the impression in the pulp, hence their bolder appearance than the laid wire lines.
This embossing is transferred to the pulp fibres, compressing and reducing their thickness in that area. Because the patterned portion of the page is thinner, it transmits more light through and therefore has a lighter appearance than the surrounding paper. If these lines are distinct and parallel, and/or there is a watermark, then the paper is termed laid paper. If the lines appear as a mesh or are indiscernible, and/or there is no watermark, then it is called wove paper. This method is called line drawing watermarks.
Another type of watermark is called the cylinder mould watermark. It is a shaded watermark, first used in 1848, that incorporates tonal depth and creates a greyscale image. Instead of using a wire covering for the dandy roll, the shaded watermark is created by areas of relief on the roll's own surface. Once dry, the paper may then be rolled again to produce a watermark of even thickness but with varying density. The resulting watermark is generally much clearer and more detailed than those made by the dandy roll process, and as such, cylinder mould watermark paper is the preferred type of watermarked paper for banknotes, passports, motor vehicle titles, and other documents where it is an important anti-counterfeiting measure.
In philately, the watermark is a key feature of a stamp, and often constitutes the difference between a common and a rare stamp. Collectors who encounter two otherwise identical stamps with different watermarks consider each stamp to be a separate identifiable issue.[3] The "classic" stamp watermark is a small crown or other national symbol, appearing either once on each stamp or as a continuous pattern. Watermarks were nearly universal on stamps in the 19th and early 20th centuries, but generally fell out of use, although some countries continue to use them.[4]
Some types of embossing, such as that used to make the "cross on oval" design on early stamps of Switzerland, resemble a watermark in that the paper is thinner, but can be distinguished by having sharper edges than is usual for a normal watermark. Stamp paper watermarks also show various designs, letters, numbers and pictorial elements.
The process of bringing out the stamp watermark is fairly simple. Sometimes a watermark in stamp paper can be seen just by looking at the unprinted back side of a stamp. More often, the collector must use a few basic items to get a good look at the watermark. For example, watermark fluid may be applied to the back of a stamp to temporarily reveal the watermark.[4]
Even using the simple watermarking method described, it can be difficult to distinguish some watermarks. Watermarks on stamps printed in yellow and orange can be particularly difficult to see. A few mechanical devices are also used by collectors to detect watermarks on stamps, such as the Morley-Bright watermark detector and the more expensive Safe Signoscope.[5] Such devices can be very useful, for they can be used without the application of watermark fluid and also allow the collector to look at the watermark for a longer period of time to more easily detect it.
Source: https://en.wikipedia.org/wiki/Watermark
Honeypots are security devices whose value lies in being probed and compromised. Traditional honeypots are servers (or devices that expose server services) that wait passively to be attacked. Client honeypots are active security devices in search of malicious servers that attack clients. The client honeypot poses as a client and interacts with the server to examine whether an attack has occurred. Often the focus of client honeypots is on web browsers, but any client that interacts with servers can be part of a client honeypot (for example ftp, email, ssh, etc.).
There are several terms that are used to describe client honeypots. Besides client honeypot, which is the generic classification, honeyclient is the other term that is generally used and accepted. However, there is a subtlety here, as "honeyclient" is actually a homograph that could also refer to the first known open source client honeypot implementation (see below), although this should be clear from the context.
A client honeypot is composed of three components. The first component, a queuer, is responsible for creating a list of servers for the client to visit. This list can be created, for example, through crawling. The second component is the client itself, which is able to make requests to the servers identified by the queuer. After the interaction with the server has taken place, the third component, an analysis engine, is responsible for determining whether an attack has taken place on the client honeypot.
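A minimal sketch of that three-part structure in Python, using a fixed URL list as the queuer, urllib as a stand-in for a real (deliberately vulnerable) client, and a stubbed analysis step; the URLs and the trivial analysis rule are assumptions for illustration only.

import urllib.request

def queuer() -> list[str]:
    """Queuer: produce the list of servers to visit (fixed here; real queuers crawl)."""
    return ["http://example.com/", "http://example.org/"]

def client(url: str) -> bytes:
    """Client: interact with the candidate server, as a vulnerable browser would."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read()

def analysis_engine(url: str, response: bytes) -> bool:
    """Analysis engine: decide whether the interaction looks like an attack (stub rule)."""
    body = response.lower()
    return b"<script" in body and b"unescape(" in body

for url in queuer():
    try:
        flagged = analysis_engine(url, client(url))
    except OSError:
        continue                      # unreachable server: skip
    print(url, "suspicious" if flagged else "clean")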
In addition to these components, client honeypots are usually equipped with some sort of containment strategy to prevent successful attacks from spreading beyond the client honeypot. This is usually achieved through the use of firewalls and virtual machine sandboxes.
Analogous to traditional server honeypots, client honeypots are mainly classified by their interaction level, high or low, which denotes the level of functional interaction the server can utilize on the client honeypot. In addition, there are also newer hybrid approaches which use both high and low interaction detection techniques.
High interaction client honeypots are fully functional systems comparable to real systems with real clients. As such, no functional limitations (besides the containment strategy) exist on high interaction client honeypots. Attacks on high interaction client honeypots are detected via inspection of the state of the system after a server has been interacted with. The detection of changes to the client honeypot may indicate the occurrence of an attack that has exploited a vulnerability of the client. An example of such a change is the presence of a new or altered file.
High interaction client honeypots are very effective at detecting unknown attacks on clients. However, the tradeoff for this accuracy is a performance hit from the amount of system state that has to be monitored to make an attack assessment. Also, this detection mechanism is prone to various forms of evasion by the exploit. For example, an attack could delay the exploit from immediately triggering (time bombs) or could trigger upon a particular set of conditions or actions (logic bombs). Since no immediate, detectable state change occurred, the client honeypot is likely to incorrectly classify the server as safe even though it did successfully perform its attack on the client. Finally, if the client honeypots are running in virtual machines, then an exploit may try to detect the presence of the virtual environment and cease from triggering or behave differently.
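The state-inspection idea can be illustrated with a simple before/after snapshot of a directory tree; the real systems described below also watch processes and registry keys, and often work event-based rather than by polling. The monitored path and the choice of SHA-256 are assumptions of this sketch.

import hashlib
import os

def snapshot(root: str) -> dict[str, str]:
    """Map every file under root to a SHA-256 digest of its contents."""
    state = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    state[path] = hashlib.sha256(fh.read()).hexdigest()
            except OSError:
                continue              # file vanished or unreadable: ignore
    return state

def state_changes(before: dict[str, str], after: dict[str, str]) -> list[str]:
    """Report new or altered files, the kind of change a high interaction honeypot flags."""
    return [path for path, digest in after.items() if before.get(path) != digest]

# Usage: take a snapshot, let the client visit the suspect server, snapshot again,
# and treat any reported change outside expected locations as a possible compromise.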
Capture[1]is a high interaction client honeypot developed by researchers at Victoria University of Wellington, NZ. Capture differs from existing client honeypots in several ways. First, it is designed to be fast: state changes are detected using an event-based model, allowing the system to react to them as they occur. Second, Capture is designed to be scalable: a central Capture server is able to control numerous clients across a network. Third, Capture is intended as a framework that allows different clients to be utilized. The initial version of Capture supported Internet Explorer, but the current version supports all major browsers (Internet Explorer, Firefox, Opera, Safari) as well as other HTTP-aware client applications, such as office applications and media players.
HoneyClient[2]is a web browser based (IE/FireFox) high interaction client honeypot designed by Kathy Wang in 2004 and subsequently developed atMITRE. It was the first open source client honeypot and is a mix of Perl, C++, and Ruby. HoneyClient is state-based and detects attacks on Windows clients by monitoring files, process events, and registry entries. It has integrated the Capture-HPC real-time integrity checker to perform this detection. HoneyClient also contains a crawler, so it can be seeded with a list of initial URLs from which to start and can then continue to traverse web sites in search of client-side malware.
HoneyMonkey[3]is a web browser based (IE) high interaction client honeypot implemented by Microsoft in 2005. It is not available for download. HoneyMonkey is state based and detects attacks on clients by monitoring files, registry, and processes. A unique characteristic of HoneyMonkey is its layered approach to interacting with servers in order to identify zero-day exploits. HoneyMonkey initially crawls the web with a vulnerable configuration. Once an attack has been identified, the server is reexamined with a fully patched configuration. If the attack is still detected, one can conclude that the attack utilizes an exploit for which no patch has been publicly released yet and therefore is quite dangerous.
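The layered approach described for HoneyMonkey amounts to a simple decision procedure, sketched below under the assumption that a visit_and_check callback performs a full honeypot pass at a given patch level; the callback is a placeholder, not part of any published HoneyMonkey interface.

```python
# Sketch of layered re-examination: first visit with a vulnerable (unpatched)
# configuration, then re-check any detected attack against a fully patched
# configuration to flag possible zero-day exploits.
def classify(url, visit_and_check):
    """visit_and_check(url, patched) -> bool stands in for one honeypot pass."""
    if not visit_and_check(url, patched=False):
        return "no attack observed"
    if visit_and_check(url, patched=True):
        return "possible zero-day (attack succeeds even when fully patched)"
    return "exploit for a known, already patched vulnerability"

# Dummy checker that reports an attack for one URL regardless of patch level,
# so that URL ends up classified as a possible zero-day:
print(classify("http://example.test/bad",
               lambda url, patched: url.endswith("/bad")))
```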
Shelia[4]is a high interaction client honeypot developed by Joan Robert Rocaspana at Vrije Universiteit Amsterdam. It integrates with an email reader and processes each email it receives (URLs & attachments). Depending on the type of URL or attachment received, it opens a different client application (e.g. browser, office application, etc.). It monitors whether executable instructions are executed in the data area of memory (which would indicate that a buffer overflow exploit has been triggered). With this approach, Shelia is not only able to detect exploits, but can actually prevent them from triggering.
The Spycrawler[5], developed at the University of Washington by Moshchuk et al. in 2005, is another browser based (Mozilla) high interaction client honeypot. This client honeypot is not available for download. The Spycrawler is state based and detects attacks on clients by monitoring files, processes, the registry, and browser crashes. The Spycrawler's detection mechanism is event based. Further, it accelerates the passage of time in the virtual machine the Spycrawler is operating in to overcome (or rather reduce the impact of) time bombs.
WEF[6]is an implementation of automatic drive-by-download detection in a virtualized environment, developed by Thomas Müller, Benjamin Mack and Mehmet Arziman, three students from the Hochschule der Medien (HdM), Stuttgart, during the summer term of 2006. WEF can be used as an active HoneyNet with a complete virtualization architecture underneath for rollbacks of compromised virtualized machines.
Low interaction client honeypots differ from high interaction client honeypots in that they do not utilize an entire real system, but rather use lightweight or simulated clients to interact with the server (in the browser world, they are similar to web crawlers). Responses from servers are examined directly to assess whether an attack has taken place. This could be done, for example, by examining the response for the presence of malicious strings.
Low interaction client honeypots are easier to deploy and operate than high interaction client honeypots and also perform better. However, they are likely to have a lower detection rate since attacks have to be known to the client honeypot in order for it to detect them; new attacks are likely to go unnoticed. They also suffer from the problem of evasion by exploits, which may be exacerbated due to their simplicity, thus making it easier for an exploit to detect the presence of the client honeypot.
HoneyC[7]is a low interaction client honeypot developed at Victoria University of Wellington by Christian Seifert in 2006. HoneyC is a platform independent open source framework written in Ruby. It currently concentrates on driving a web browser simulator to interact with servers. Malicious servers are detected by statically examining the web server's response for malicious strings through the use of Snort signatures.
Monkey-Spider[8]is a low-interaction client honeypot initially developed at the University of Mannheim by Ali Ikinci. Monkey-Spider is a crawler based client honeypot initially utilizing anti-virus solutions to detect malware. It is claimed to be fast and expandable with other detection mechanisms. The work has started as a diploma thesis and is continued and released as Free Software under theGPL.
PhoneyC[9]is a low-interaction client honeypot developed by Jose Nazario. PhoneyC mimics legitimate web browsers and can understand dynamic content by de-obfuscating malicious content for detection. Furthermore, PhoneyC emulates specific vulnerabilities to pinpoint the attack vector. PhoneyC is a modular framework that enables the study of malicious HTTP pages and understands modern vulnerabilities and attacker techniques.
SpyBye[10]is a low interaction client honeypot developed byNiels Provos. SpyBye allows a web master to determine whether a web site is malicious by a set of heuristics and scanning of content against the ClamAV engine.
Thug[11]is a low-interaction client honeypot developed by Angelo Dell'Aera. Thug emulates the behaviour of a web browser and is focused on the detection of malicious web pages. The tool uses the Google V8 JavaScript engine and implements its own Document Object Model (DOM). The most important and unique features of Thug are its ActiveX controls handling module (vulnerability module) and its static and dynamic analysis capabilities (using an Abstract Syntax Tree and the Libemu shellcode analyser). Thug is written in Python and released under the GNU General Public License.
YALIH (Yet Another Low Interaction Honeyclient)[12]is a low interaction client honeypot developed by Masood Mansoori from the honeynet chapter of the Victoria University of Wellington, New Zealand, and designed to detect malicious websites through signature and pattern matching techniques. YALIH can collect suspicious URLs from malicious website databases, the Bing API, and inbox and SPAM folders via the POP3 and IMAP protocols. It can perform JavaScript extraction, de-obfuscation and de-minification of scripts embedded within a website, and can emulate referrers and browser agents as well as handle redirection, cookies and sessions. Its visitor agent is capable of fetching a website from multiple locations to bypass geo-location and IP cloaking attacks. YALIH can also generate automated signatures to detect variations of an attack. YALIH is available as an open source project.
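As a rough sketch of the mailbox-harvesting idea (not YALIH's actual code), candidate URLs can be pulled from a spam folder over IMAP using Python's standard imaplib and email modules; the host, credentials, folder name and URL regex below are placeholders, and the regex is deliberately simplistic.

```python
# Collect candidate URLs from a mail folder over IMAP. Connection details
# and the folder name are placeholders; real harvesting would also handle
# POP3, HTML parts and obfuscated links.
import email
import imaplib
import re

URL_RE = re.compile(rb'https?://[^\s"\'<>]+')

def collect_urls(host, user, password, folder="Junk"):
    urls = set()
    with imaplib.IMAP4_SSL(host) as conn:
        conn.login(user, password)
        conn.select(folder, readonly=True)
        _, data = conn.search(None, "ALL")
        for num in data[0].split():
            _, msg_data = conn.fetch(num.decode(), "(RFC822)")
            message = email.message_from_bytes(msg_data[0][1])
            for part in message.walk():
                payload = part.get_payload(decode=True)
                if payload:
                    urls.update(m.decode(errors="replace")
                                for m in URL_RE.findall(payload))
    return urls
```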
miniC[13]is a low interaction client honeypot based on the wget retriever and the Yara engine. It is designed to be light, fast and suitable for the retrieval of large numbers of websites. miniC allows the user to set and simulate the referrer, user-agent, accept_language and a few other variables. miniC was designed at the New Zealand Honeynet chapter of the Victoria University of Wellington.
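A comparable fetch-and-scan step might look like the following sketch, which simulates a browser identity via request headers and scans the response with a Yara rule. It assumes the third-party yara-python package is installed; the header values and the rule are toy examples and are not taken from miniC itself.

```python
# Fetch a page while simulating referrer, user-agent and accept-language,
# then scan the body with a (toy) Yara rule. Requires the yara-python package.
import urllib.request
import yara

RULE = yara.compile(source=r'''
rule toy_obfuscated_js
{
    strings:
        $a = "eval(unescape(" nocase
        $b = "document.write(unescape(" nocase
    condition:
        any of them
}
''')

def fetch_and_scan(url):
    req = urllib.request.Request(url, headers={
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "Referer": "https://www.google.com/",
        "Accept-Language": "en-NZ,en;q=0.9",
    })
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read()
    return RULE.match(data=body)  # non-empty list of matches => suspicious

# matches = fetch_and_scan("http://example.test/")
```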
Hybrid client honeypots combine both low and high interaction client honeypots to gain from the advantages of both approaches.
The HoneySpider[14]network is a hybrid client honeypot developed as a joint venture betweenNASK/CERT Polska,GOVCERT.NL[nl][1]andSURFnet.[2]The project's goal is to develop a complete client honeypot system, based on existing client honeypot solutions and a crawler designed especially for the bulk processing of URLs.
|
https://en.wikipedia.org/wiki/Client_honeypot_/_honeyclient
|
Incartography, atrap streetis afictitious entryin the form of a misrepresented street on a map, often outside the area the map nominally covers, for the purpose of "trapping" potential plagiarists of the map who, if caught, would be unable to explain the inclusion of the "trap street" on their map as innocent. On maps that are not of streets, other "trap" features (such asnonexistent towns, or mountains with the wrong elevations) may be inserted or altered for the same purpose.[1]
Trap streets are often nonexistent streets, but sometimes, rather than actually depicting a street where none exists, a map will misrepresent the nature of a street in a fashion that can still be used to detectcopyright violatorsbut is less likely to interfere with navigation. For instance, a map might add nonexistent bends to a street, or depict a major street as a narrow lane, without changing its location or its connections to other streets, or the trap street might be placed in an obscure location of a map that is unlikely to be referenced.
Trap streets are rarely acknowledged by publishers. One exception is a popular driver's atlas for the city ofAthens,Greece, which has a warning inside its front cover that potential copyright violators should beware of trap streets.[2]
Trap streets are not copyrightable under the federal law of theUnited States. InNester's Map & Guide Corp. v. Hagstrom Map Co.(1992),[3][4]aUnited Statesfederal court found that copyright traps are not themselves protectable bycopyright. There, the court stated: "[t]o treat 'false' facts interspersed among actual facts and represented as actual facts as fiction would mean that no one could ever reproduce or copy actual facts without risk of reproducing a false fact and thereby violating a copyright ... If such were the law, information could never be reproduced or widely disseminated." (Id. at 733)
In a 2001 case,The Automobile Associationin theUnited Kingdomagreed to settle a case for £20,000,000 when it was caught copyingOrdnance Surveymaps. In this case, the identifying "fingerprints" were not deliberate errors but rather stylistic features such as the width of roads.[5]
In another case, theSingapore Land AuthoritysuedVirtual Map, an online publisher of maps, for infringing on its copyright. The Singapore Land Authority stated in its case that there were deliberate errors in maps they had provided to Virtual Map years earlier. Virtual Map denied this and insisted that it had done its owncartography.[6]
The 1979 science fiction novelThe Ultimate EnemybyFred Saberhagenincludes the short story "The Annihilation of Angkor Apeiron" in which a salesman allows a draft of a newEncyclopedia Galacticato be captured by alien war machines. It leads them to believe there is a nearby planet ripe for attack, but the planet is actually a copyright trap and the aliens are led away from inhabited worlds, saving millions of lives.
The 2010 novelKrakenbyChina Miévillefeatures the trap streets of theLondon A-Zbeing places where the magical denizens of the city can exist without risk of being disturbed by normal folk.
A 2013 film,Trap Street, inverts the usual meaning of atrap street, becoming a real street which is deliberately obscured or removed from a map—and anyone who attempts to identify it by placing it on public record is then "trapped".[7]
The 2015Doctor Whoepisode "Face the Raven" features a hidden street where alien asylum seekers have taken shelter. Due to a psychic field that subconsciously makes observers ignore it, outsiders consider it a trap street when they see it on maps. One scene involves the characterClara Oswalddiscussing the definition of "trap street". The episode's working title was also "Trap Street".
|
https://en.wikipedia.org/wiki/Trap_street
|
Internet background noise(IBN, also known asInternet background radiation, by analogy with naturalbackground radiation) consists of datapacketson theInternetwhich are addressed toIP addressesorportswhere there is no network device set up to receive them.Network telescopesobserve the Internet background radiation.
These packets often contain unsolicited commercial ornetwork control messages,backscatters,port scans, andwormactivities.
Smaller devices such asDSL modemsmay have ahard-codedIP addressto look up the correct time using theNetwork Time Protocol. If, for some reason, the hard-coded NTP server is no longer available, faulty software might retry failed requests up to every second, which, if many devices are affected, generates asignificant amount of unnecessary request traffic.
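A toy calculation illustrates why such a bug matters at scale: a naive once-per-second retry loop emits tens of thousands of unanswered packets per device per day, whereas a capped exponential backoff (the usual fix, assumed here purely for contrast) reduces that to a few dozen. The numbers are illustrative only.

```python
# Compare a naive 1-second retry loop against capped exponential backoff
# over one day, counting the failed requests each strategy would emit.
def packets_sent(seconds, backoff=False):
    sent, wait, elapsed = 0, 1, 0
    while elapsed < seconds:
        sent += 1                       # one failed time-server request
        elapsed += wait
        if backoff:
            wait = min(wait * 2, 3600)  # double the delay, cap at one hour
    return sent

print(packets_sent(86400))                # naive retry: 86,400 packets per day
print(packets_sent(86400, backoff=True))  # backoff: roughly 35 packets per day
```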
In the first 10 years of the Internet, there was very little background noise but with its commercialization in the 1990s the noise factor became a permanent feature.
TheConfickerworm was responsible in 2010[1]for a large amount of background noise generated by viruses looking for new victims. In addition to malicious activities, misconfigured hardware and leaks from private networks are also sources of background noise.[2]
It was estimated in the early 2000s that adial-up modemuser lost about 20 bits per second of theirbandwidthto unsolicited traffic.[3]During that decade, the amount of background noise for an IPv4 /8 address block (which contains 16.7 million addresses) increased from 1 to 50Mbit/s(0.125 to 6.25MB/s).[4]By November 2010, it was estimated that 5.5 gigabits (687.5 megabytes) of background noise were being generated every second.[4]The newerIPv6protocol, which has a much larger address space, made it more difficult for viruses toscan portsand also limited the impact of misconfigured equipment.[4]
Internet background noise has been used to detect significant changes in Internet traffic and connectivity during the2011 political unrestfrom IP address blocks that weregeolocatedto Libya.[5]
Backscatteris a term coined byVern Paxsonto describe Internet background noise resulting from aDDoSattack using multiple spoofed addresses.[6]This noise is used bynetwork telescopesto indirectly observe large scale attacks in real time.
|
https://en.wikipedia.org/wiki/Internet_background_noise
|
Active measures(Russian:активные мероприятия,romanized:aktivnye meropriyatiya) is a term used to describepolitical warfareconducted by theSoviet Unionand theRussian Federation. The term, which dates back to the 1920s, includes operations such asespionage,propaganda,sabotageandassassination, based on foreign policy objectives of the Soviet and Russian governments.[1][2][3]Active measures have continued to be used by the administration ofVladimir Putin.[4][5]
Active measures were conducted by theSovietandRussian security servicesandsecret policeorganizations (Cheka,OGPU,NKVD,KGB, andFSB) to influence the course of world events, in addition tocollecting intelligenceand producing revised assessments of it. Active measures range "frommedia manipulationstospecial actionsinvolving various degrees of violence". Beginning in the 1920s, they were used both abroad and domestically.[3]
Active measures include the establishment and support of internationalfront organizations(e.g., theWorld Peace Council); foreigncommunist,socialistandoppositionparties; andwars of national liberationin theThird World. They also included supporting underground, revolutionary,insurgency,criminal, andterroristgroups. The programs also focused oncounterfeitingofficial documents,assassinations, andpolitical repression, such as penetration into churches and persecution of politicaldissidents. The intelligence agencies ofEastern Blocstates also contributed to the program, providing operatives and intelligence for assassinations and other types ofcovert operations.[3]
Retired KGB Major GeneralOleg Kalugin, former head of Foreign Counter Intelligence for the KGB (1973–1979), described active measures as "the heart and soul of theSoviet intelligence":[6]
Not intelligence collection, butsubversion: active measures to weaken the West, to drive wedges in the Western community alliances of all sorts, particularlyNATO, to sow discord among allies, to weaken theUnited Statesin the eyes of the people ofEurope,Asia,Africa,Latin America, and thus to prepare ground in case the war really occurs.[6]
According to theMitrokhin Archives, active measures was taught in theAndropov Instituteof the KGB situated atForeign Intelligence Service(SVR) headquarters inYasenevo Districtof Moscow. The head of the "active measures department" wasYuri Modin, former controller of theCambridge Fivespy ring.[3]
DefectorIon Mihai Pacepaclaimed thatJoseph Stalincoined the termdisinformationin 1923 by giving it aFrenchsounding name in order to deceive other nations into believing it was a practice invented inFrance. The noundisinformationdoes not originate from Russia, it is a translation of the French worddésinformation.[7][8]
Soviet secret services have been described as "the primary instructors of guerrillas worldwide".[9][10][11]According toIon Mihai Pacepa, KGB GeneralAleksandr Sakharovskyonce said: "In today's world, when nuclear arms have made military force obsolete, terrorism should become our main weapon."[12]He also claimed that "Airplane hijacking is my own invention". In 1969 alone, 82 planes were hijacked worldwide by the KGB-financedPLO.[12]
Lt. GeneralIon Mihai Pacepastated that operation "SIG" ("ZionistGovernments"), devised in 1972, intended to turn the whole Islamic world againstIsraeland theUnited States. KGB ChairmanYuri Andropovallegedly explained to Pacepa that
a billion adversaries could inflict far greater damage on America than could a few millions. We needed to instill aNazi-style hatred for theJewsthroughout the Islamic world, and to turn this weapon of the emotions into a terrorist bloodbath againstIsraeland its main supporter, the United States[12]
After World War II, Soviet security organizations played a key role in installingpuppet communist governmentsinEastern Europe, thePeople's Republic of China,North Korea, and laterAfghanistan. Their strategy included masspolitical repressionsand establishment of subordinate secret services in all occupied countries.[13][14]
Some of the active measures were undertaken by the Soviet secret services against their own governments or communist rulers. Russian historiansAnton Antonov-OvseenkoandEdvard Radzinskysuggested thatJoseph Stalinwas killed by associates ofNKVDchiefLavrentiy Beria, based on the interviews of a former Stalin bodyguard and circumstantial evidence.[15]According toYevgenia Albats' allegations,Chief of the KGBVladimir Semichastnywas among the plotters againstNikita Khrushchevin 1964, which led to the latter's downfall.[16]
KGB ChairmanYuri Andropovreportedly struggled for power withLeonid Brezhnev.[17]TheSoviet coup attempt of 1991againstMikhail Gorbachevwas organized by KGB ChairmanVladimir Kryuchkovand other hardliners.[16]Gen. Viktor Barannikov, then the former State Security head, became one of the leaders of the uprising againstBoris Yeltsinduring theRussian constitutional crisis of 1993.[16]
The current Russianintelligence service, theSVR, allegedly works to undermine governments of former Sovietsatellite stateslikePoland, theBaltic states,[18]andGeorgia.[19]During the2006 Georgian-Russian espionage controversy, several Russian GRU case officers were accused by Georgian authorities of preparations to commit sabotage and terrorist acts.[citation needed]
The highest-ranking Soviet Bloc intelligence defector, Lt. Gen.Ion Mihai Pacepaclaimed to have had a conversation withNicolae Ceaușescu, who told him about "ten international leaders the Kremlin killed or tried to kill":László RajkandImre Nagyfrom Hungary;Lucrețiu PătrășcanuandGheorghe Gheorghiu-Dejfrom Romania;Rudolf SlánskýandJan MasarykfromCzechoslovakia; theShah of Iran;Muhammad Zia-ul-Haq, President ofPakistan;Palmiro Togliattifrom Italy;John F. Kennedy; andMao Zedong. Pacepa also discussed a KGB plot to kill Mao Zedong with the help ofLin Biaoorganized by the Soviet intelligence agencies and alleged that "among the leaders of Moscow's satellite intelligence services there was unanimous agreement that the KGB had been involved in the assassination of President Kennedy."[20]
The second President ofAfghanistan,Hafizullah Amin, was killed by the KGB'sAlpha GroupinOperation Storm-333before the full-scaleSoviet invasion of Afghanistanin 1979. Presidents of the unrecognizedChechen Republic of Ichkeriaestablished by Chechen separatists, includingDzhokhar Dudaev,Zelimkhan Yandarbiev,Aslan Maskhadov, andAbdul-Khalim Saidullaev, were killed by theFSBand affiliated forces.
Other widely publicized cases are the murders of Russian communistLeon Trotskyby theNKVDand of Bulgarian writerGeorgi Markov, an assassination attributed to the Bulgarian secret service acting with KGB assistance.
There were also allegations that the KGB was behind theassassination attempt against Pope John Paul IIin 1981. The ItalianMitrokhin Commission, headed by senatorPaolo Guzzanti(Forza Italia), worked on the Mitrokhin Archives from 2003 to March 2006. The Mitrokhin Commission received criticism during and after its existence.[21]It was closed in March 2006 without any proof brought to its various controversial allegations, including the claim thatRomano Prodi, former Prime Minister of Italy and formerPresident of the European Commission, was the "KGB's man in Europe." One of Guzzanti's informers,Mario Scaramella, was arrested for defamation and arms trading at the end of 2006.[22]
In "Operation Trust" (1921–1926), theState Political Directorate(OGPU) set up a fake anti-Bolshevikunderground organization, "Monarchist Union of Central Russia".[23]The main success of this operation was luringBoris SavinkovandSidney Reillyinto the Soviet Union, where they were arrested and executed.
TheIslamicanti-SovietBasmachi movementinCentral Asiaposed an early threat to the Bolshevik movement. The movement's roots lay in theanti-conscription violence of 1916that erupted when the Russian Empire began to draft Muslims for army service inWorld War I.[24]In the months following theOctober Revolutionof 1917, theBolsheviksseized power in many parts of the Russian Empire and theRussian Civil Warbegan.TurkestaniMuslim political movements attempted to form an autonomous government in the city ofKokand, in theFergana Valley. The Bolsheviks launched an assault on Kokand in February 1918 and carried out a general massacre of up to 25,000 people.[citation needed]The massacre rallied support to the Basmachi, who waged aguerrillaand conventional war that seized control of large parts of the Fergana Valley and much ofTurkestan.[25][26]The group's notable leaders wereEnver Pashaand, later,Ibrahim Bek. Soviet Russia responded by deploying special Soviet military detachments that masqueraded asBasmachiforces and received support from British and Turkish intelligence services. The operations of these detachments facilitated the collapse of the Basmachi movement and the assassination of Pasha.[27][28]
Following World War II, various partisan organizations in the Baltic states, Poland and Western Ukraine fought for independence of their countries, which were underSoviet occupation, against Soviet forces. ManyNKVDagents were sent to join and penetrate the independence movements. Puppet rebel forces were also created by the NKVD and permitted to attack local Soviet authorities to gain credibility and exfiltrate senior NKVD agents to the West.[29]
According toStanislav Lunev,GRUalone spent more than $1 billion for thepeace movementsagainst theVietnam War, which was a "hugely successful campaign and well worth the cost".[9]Lunev claimed that "the GRU and the KGB helped to fund just about everyantiwar movementand organization in America and abroad".[9]
By the 1980s, the US intelligence community was skeptical of claims that attemptedSoviet influence on the peace movementhad a direct influence on the non-aligned part of the movement.[30]However, the KGB's widespread attempts at influence in the United States,Switzerland, andDenmarktargeting the peace movement were known, and the World Peace Council was categorized as acommunist frontorganization by the CIA.[30]
TheWorld Peace Councilwas established on the orders of the Communist Party of the USSR in the late 1940s, and for over forty years carried out campaigns against western, mainly American, military action. Many organisations controlled or influenced by Communists affiliated themselves with it. According toOleg Kalugin,
... the Soviet intelligence [was] really unparalleled. ... The [KGB] programs—which would run all sorts of congresses, peace congresses, youth congresses, festivals, women's movements, trade union movements, campaigns against U.S. missiles in Europe, campaigns against neutron weapons, allegations that AIDS ... was invented by the CIA ... all sorts of forgeries and faked material—[were] targeted at politicians, the academic community, at [the] public at large. ...[6]
It has been widely claimed that the Soviet Union organised and financed western peace movements; for example, ex-KGB agentSergei Tretyakovclaimed that in the early 1980s the KGB wanted to prevent the United States from deploying nuclear missiles inWestern Europeas a counterweight to Soviet missiles inEastern Europe,[31]and that they used theSoviet Peace Committeeto organize and finance anti-American demonstrations in western Europe.[32][33][34]The Soviet Union first deployed theRSD-10 Pioneer(calledSS-20 Saberin the West) in its European territories in March 1976, a mobile, concealableintermediate-range ballistic missile(IRBM) with amultiple independently targetable reentry vehicle(MIRV) containing three nuclear 150-kilotonwarheads.[35]The SS-20's range of 4,700–5,000 kilometers (2,900–3,100 mi) was great enough to reach Western Europe from well within Soviet territory; the range was just below the 5,500 km (3,400 mi) minimum range for anintercontinental ballistic missile(ICBM) under theStrategic Arms Limitation Talks II (SALT II) Treaty.[36][37][38]Tretyakov further stated that "[t]he KGB was responsible for creating the entirenuclear winterstory to stop thePershing IImissiles,"[32]and that they fed misinformation to western peace groups and thereby influenced a key scientific paper on the topic by western scientists.[39]
According to intelligence historianChristopher Andrew, the KGB in Britain was unable to infiltrate major figures in theCND, and the Soviets relied on influencing "less influential contacts" which were more receptive to the Moscow line. Andrew wrote thatMI5"found no evidence that KGB funding to the British peace movement went beyond occasional payment of fares and expenses to individuals."[40]
Some of the active measures by the USSR against theUnited Stateswere exposed in theMitrokhin Archive.[3]
In 1974, according to KGB statistics, over 250 active measures were targeted against the CIA alone, leading to denunciations of Agency abuses, both real and (more frequently) imaginary,[49]in media, parliamentary debates, demonstrations and speeches by leading politicians around the world.[45]
Soviet intelligence, as part of active measures, frequently spreaddisinformationto distort their adversaries' decision-making. However, sometimes this information filtered back through the KGB's own contacts, leading to distorted reports.[50]Lawrence Bittmanalso addressed Soviet intelligence blowback inThe KGB and Soviet Disinformation, stating that "There are, of course, instances in which the operator is partially or completely exposed and subjected to countermeasures taken by the government of the target country."[51]
Active measures have continued in the post-SovietRussian Federationand are in many ways based on Cold War schematics.[1]After theannexation of Crimea, Kremlin-controlled media spread disinformation about Ukraine's government. In July 2014,Malaysia Airlines flight MH17was shot down by a Russian missile over eastern Ukraine, killing all 298 passengers. Kremlin-controlled media and online agents spread disinformation, claiming Ukraine had shot down the airplane.[52]
Russia's alleged disinformation campaign, its involvement inthe UK's withdrawal from the EU, itsinterference in the 2016 United States presidential election, and its alleged support of far-left and documented support of far-right movements in the West have been compared to the Soviet Union's active measures in that they aim to "disrupt and discredit Western democracies".[53][54]
In testimony before theUnited States Senate Intelligence Committeehearing on the US policy response to Russian interference in the 2016 elections,Victoria Nuland, former US Ambassador toNATO, referred to herself as "a regular target of Russian active measures."[55][56]
The introduction of the Internet, and specifically social media, offered new opportunities for active measures. The Kremlin-affiliatedInternet Research Agency, also referred to as the Information Warfare Branch, was established in 2013.[57]This agency is devoted to spreading disinformation through the Internet, the most well-known and prominent operation being its part in the interference in the 2016 US presidential election.[58]According to theHouse Intelligence Committee, by 2018, organic content created by the Russian IRA reached at least 126 million US Facebook users, while its politically divisive ads reached 11.4 million US Facebook users. Tweets by the IRA reached approximately 288 million American users. According to committee chairAdam Schiff, "[The Russian] social media campaign was designed to further a broader Kremlin objective: sowing discord in the U.S. by inflaming passions on a range of divisive issues. The Russians did so by weaving together fake accounts, pages, and communities to push politicized content and videos, and to mobilize real Americans to signonline petitionsand join rallies and protests."[59]
|
https://en.wikipedia.org/wiki/Active_measures
|
Afalse flagoperation is an act committed with the intent of disguising the actual source of responsibility and pinning blame on another party. The term "false flag" originated in the 16th century as an expression meaning an intentional misrepresentation of someone's allegiance.[1][2]The term was originally used to describe aruseinnaval warfarewhereby a vessel flew the flag of a neutral or enemy country in order to hide its true identity.[1][2][3]The tactic was initially used bypiratesandprivateersto deceive other ships into allowing them to move closer before attacking them. It later was deemed an acceptable practice during naval warfare according to international maritime laws, provided the attacking vessel displayed its true flag before commencing an attack.[4][5][6]
The term today extends to include countries that organize attacks on themselves and make the attacks appear to be by enemy nations or terrorists, thus giving the nation that was supposedly attacked apretextfor domestic repression or foreign military aggression[7](as well as to engender sympathy). Similarly deceptive activities carried out during peacetime by individuals or nongovernmental organizations have been called false-flag operations, but the more common legal term is a "frameup", "stitch up", or "setup".
In land warfare, such operations are generally deemed acceptable under certain circumstances, such as todeceive enemies, provided the deception is notperfidiousand that all such deceptions are discarded before opening fire upon the enemy. Similarly, innaval warfaresuch a deception is considered permissible, provided the false flag is lowered and the true flag raised before engaging in battle.[8]Auxiliary cruisersoperated in such a fashion in both World Wars, as didQ-ships, while merchant vessels were encouraged to use false flags for protection. Such masquerades promoted confusion not just of the enemy but of historical accounts. In 1914, theBattle of Trindadewas fought between the British auxiliary cruiserRMSCarmaniaand the German auxiliary cruiserSMSCap Trafalgar, which had been altered to look likeCarmania. (Contrary to some accounts, theCarmaniahad not been altered to resemble theCap Trafalgar.)
Another notable example was theWorld War IIGerman commerce raiderKormoran, whichsurprised and sankthe Australian light cruiserHMASSydneyin 1941 while disguised as a Dutch merchant ship, causing the greatest loss of life on an Australian warship. WhileKormoranwas fatally damaged in the engagement and its crew captured, the outcome represented a considerable psychological victory for the Germans.[9]
The British used aKriegsmarineensignin theSt Nazaire Raidand captured a Germancodebook. The old destroyerCampbeltown, which the British planned to sacrifice in the operation, was provided with cosmetic modifications that involved cutting the ship's funnels andchamferingthe edges to resemble a GermanType 23 torpedo boat. By this ruse the British got within two miles (3 km) of the harbour before the defences responded, where the explosive-riggedCampbeltownand commandos successfully disabled or destroyed the key dock structures of the port.[10][11]
Between December 1922 and February 1923, a commission of jurists atthe Haguedrafted a set of rules concerning the Control of Wireless Telegraphy in Time of War and Air Warfare. They included:[12]
This draft was never adopted as a legally binding treaty, but theInternational Committee of the Red Crossstates in its introduction on the draft: "To a great extent, [the draft rules] correspond to the customary rules and general principles underlying treaties on the law of war on land and at sea",[13]and as such these two non-controversial articles were already part of customary law.[14]
In land warfare, the use of a false flag is similar to that of naval warfare: the trial ofWaffen SSofficerOtto Skorzeny—who planned and commandedOperation Greif—by a U.S.military tribunalat theDachau trialsincluded a finding that Skorzeny was not guilty of a crime by ordering his men into action in American uniforms. He had relayed to his men the warning of German legal experts: if they fought in American uniforms, they would be breaking thelaws of war; however, they probably were not doing so simply by wearing the American uniforms. During the trial, a number of arguments were advanced to substantiate this position and the German and U.S. military seem to have been in agreement.
In the transcript of the trial,[15]it is mentioned that Paragraph 43 of theField Manualpublished by theWar Department,United States Army, on 1 October 1940, under the entryRules of Land Warfarestates: "National flags, insignias and uniforms as a ruse – in practice it has been authorized to make use of these as a ruse. The foregoing rule (Article 23 of the Annex of theIV Hague Convention), does not prohibit such use, but does prohibit their improper use. It is certainly forbidden to make use of them during a combat. Before opening fire upon the enemy, they must be discarded."
In 1788, the head tailor at theRoyal Swedish Operareceived an order to sew a number of Russian military uniforms. These were then used by Swedes to stage an attack onPuumala, a Swedish outpost on the Russo-Swedish border, on 27 June 1788. This caused an outrage inStockholmand impressed theRiksdag of the Estates, the Swedish national assembly, who until then had refused to agree to an offensive war against Russia. The Puumala incident allowed KingGustav IIIof Sweden, who lacked the constitutional authority to initiate unprovoked hostilities without the Estates' consent, to launch theRusso-Swedish War (1788–1790).[16]
On July 13, 1870,Otto von Bismarckpublished theEms Dispatch, an internal message from KingWilhelm Ito Bismarck regarding certain demands made by the French ambassador. In the version purposefully released to the public, Bismarck instead made it sound like the King had gravely disrespected the ambassador – a ploy to trick EmperorNapoleon IIIinto declaring war on theNorth German Confederation, with the end goal of unifying the northern and southern German states. This ploy would be successful, as Napoleon III would declare war six days later; and six months later, the Confederation would win andunify the German states.
In September 1931,Seishirō Itagakiand otherJapanesemid- to junior-grade officers, without the knowledge of the Tokyo government, fabricated a pretext for invadingManchuriaby blowing up a section of railway. Though the explosion was too weak to disrupt operations on the rail line, the Japanese nevertheless used theMukden incidentto seize Manchuria and create apuppet governmentin the form of the nominally independent state ofManchukuo.[17]
TheGleiwitz incidentin 1939 involvedReinhard Heydrichfabricating evidence of aPolishattack againstGermanyto mobilize German public opinion for war and to justify thewar against Poland.Alfred Naujockswas a key organiser of the operation under orders from Heydrich. It led to the deaths ofNazi concentration campvictims who were dressed as German soldiers and then shot by theGestapoto make it seem that they had been shot by Polish soldiers. This, along with other false flag operations inOperation Himmler, would be used to mobilize support from the German population for the start ofWorld War II in Europe.[18]
The operation failed to convince international public opinion of the German claims, and both Britain and France – Poland's allies – declared war two days after Germany invaded Poland.[19]
On 26 November 1939, the Soviet armyshelled Mainila, a Russian village near the Finnish border. Soviet authorities blamedFinlandfor the attack and used the incident as a pretext to invade Finland, starting theWinter War, four days later.[20][21]
Operation Northwoods, a 1962 plot proposed but never executed by theU.S. Department of Defensefor a war withCuba, involved scenarios such as fabricating the hijacking or shooting down of passenger and military planes, sinking a U.S. ship in the vicinity of Cuba, burning crops, sinking a boat filled with Cuban refugees, attacks by alleged Cuban infiltrators inside the United States, and harassment of U.S. aircraft and shipping, and the destruction of aerial drones by aircraft disguised as Cuban MiGs.[22]These actions would be blamed on Cuba, and would be a pretext for an invasion of Cuba and the overthrow ofFidel Castro's communist government. It was authorised by theJoint Chiefs of Staff, but then rejected by PresidentJohn F. Kennedy. The surprise discovery of the documents relating to Operation Northwoods was a result of the comprehensive search for records related to theassassination of President John F. Kennedyby theAssassination Records Review Boardin the mid-1990s.[23]Information about Operation Northwoods was later publicized byJames Bamford.[24]
In January and February 2022, U.S. officials warned that Russian operatives were planning a false flag operation in Ukraine in order to justify a military intervention.[25]In the days leading up to theRussian invasion of Ukraine on 24 February, the Russian government intensified itsdisinformation campaign, with Russian state media promoting false flags on a nearly hourly basis purporting to show Ukrainian forces attacking Russia, in a bid to justify an invasion of Ukraine.[26][27]Many of the disinformation videos were poor and amateur in quality, with mismatchingmetadatashowing incorrect dates,[27]and evidence fromBellingcatresearchers and other independent journalists showed that the claimed attacks, explosions, and evacuations inDonbaswere staged by Russia.[26][27][28][29][30]
On 4 April 1953, theCIAwas ordered to undermine the government of Iran over a four-month period, as a precursor to overthrowing Prime MinisterMohammad Mosaddegh. One tactic used to undermine Mosaddegh was to carry out false flag attacks "on mosques and key public figures", to be blamed on Iranian communists loyal to the government.[31]
The CIA operation was code-namedTPAJAX. The tactic of a "directed campaign of bombings by Iranians posing as members of the Communist party" involved the bombing of "at least one" well known Muslim's house by CIA agents posing as Communists.[32]The CIA determined that the tactic of false flag attacks added to the "positive outcome" ofTPAJAX.[31]
However, as "the C.I.A. burned nearly all of its files on its role in the 1953 coup in Iran", the true extent of the tactic has been difficult for historians to discern.[33]
In the summer of 1954, a group of Egyptian Jews recruited by Israeli army intelligence were caught with plans to bomb American, British, and Egyptian civil targets in Egypt. The bombs were timed to detonate several hours after closing time. The bombings were to be blamed on theMuslim Brotherhood, EgyptianCommunists, "unspecified malcontents", or "local nationalists", with the aim of creating a climate of sufficient violence and instability to induce the British government to refrain from evacuating its troops occupying Egypt'sSuez Canalzone, a move that would emboldenEgyptian PresidentNasseragainst Israel. However, the plot was exposed before launch: Egyptian authorities tailed an operative to his target, arrested him, and later searched his apartment, where the entire plan, including the names of other agents and explosive materials, was found. The exposé caused a scandal in Israel, where the operation had been unknown to Prime MinisterMoshe Sharett, with Israeli officials blaming one another for the operation and the Israeli defense minister,Pinhas Lavon, resigning under pressure. Later, two investigative committees found that Lavon was unaware of the operation.[34][35][36]
Due to its deceptive nature a false flag operation can fail in such a manner as to implicate the perpetrator rather than the intended victim.
A notable example is an April 2022FSBoperation in which would-be Ukrainian assassins of Russian propagandistVladimir Solovyovwere filmed while being arrested. The footage published by the FSB was, however, found to implicate the FSB as having staged the arrest. Together with weapons, drugs, Ukrainian passports, and Nazi memorabilia, the footage also prominently showed three expansion packs forThe Sims 3video game. Investigative journalistEliot Higginsinterpreted this to mean that the arrest was in fact staged, with its organizers misunderstanding an instruction "to get 3SIMs". Further lending credence to the arrest being staged was footage of a note with a Russian phrase that in fact read "signature unclear". This was again interpreted as a misunderstood instruction, this time taken too literally. The FSB subsequently published a version of the footage with the Sims games blurred out.[37][38][39]
Pseudo-operations are those in which forces of one power disguise themselves as enemy forces. For example, a state power may disguise teams of operatives as insurgents and, with the aid of defectors, infiltrate insurgent areas.[40]The aim of such pseudo-operations may be to gather short- or long-termintelligenceor to engage in active operations, in particularassassinationsof important enemies. However, they usually involve both, as the risks of exposure rapidly increase with time and intelligence gathering eventually leads to violent confrontation. Pseudo-operations may be directed by military or police forces, or both. Police forces are usually best suited to intelligence tasks; however, military provide the structure needed to back up such pseudo-ops with military response forces. According to US military expert Lawrence Cline (2005), "the teams typically have been controlled by police services, but this largely was due to the weaknesses in the respective military intelligence systems."[41]
TheState Political Directorate(OGPU) of theSoviet Unionset up such an operation from 1921 to 1926. DuringOperation Trust, they used loose networks ofWhite Armysupporters and extended them, creating the pseudo-"Monarchist Union of Central Russia" (MUCR) in order to help the OGPU identify real monarchists and anti-Bolsheviks.[42]
An example of a successful assassination wasUnited States MarineSergeantHerman H. Hannekenleading a patrol of hisHaitianGendarmeriedisguised as enemyguerrillasin 1919. The patrol successfully passed several enemy checkpoints in order to assassinate the guerilla leaderCharlemagne PéraltenearGrande-Rivière-du-Nord. Hanneken was awarded theMedal of Honor[43]and was commissioned a Second Lieutenant for his deed.[citation needed]
During theMau Mau uprisingin the 1950s, captured Mau Mau members who switched sides and specially trained British troops initiated the pseudo-gang concept to successfully counter Mau Mau. In 1960,Frank Kitson, who was later involved in theNorthern Irish conflict, publishedGangs and Counter-gangs, an account of his experiences with the technique inKenya. Information included how to counter gangs and measures of deception, including the use of defectors, which brought the issue a wider audience.[citation needed]
Another example of combined police and military oversight of pseudo-operations include theSelous Scoutsin the former countryRhodesia(nowZimbabwe), governed bywhite minority ruleuntil 1980. The Selous Scouts were formed at the beginning ofOperation Hurricane, in November 1973, by Major (later Lieutenant Colonel)Ronald Reid-Daly. As with all Special Forces in Rhodesia, by 1977, they were controlled by COMOPS (Commander, Combined Operations) Commander Lieutenant GeneralPeter Walls. The Selous Scouts were originally composed of 120 members, with all officers being white and the highest rank initially available for black soldiers beingcolour sergeant. They succeeded in turning approximately 800 insurgents who were then paid by Special Branch, ultimately reaching the number of 1,500 members. Engaging mainly in long-range reconnaissance and surveillance missions, they increasingly turned to offensive actions, including the attempted assassination ofZimbabwe People's Revolutionary ArmyleaderJoshua NkomoinZambia. This mission was finally aborted by the Selous Scouts, and attempted again, unsuccessfully, by theRhodesian Special Air Service.[44]
Some offensive operations attracted international condemnation, in particular the Selous Scouts' raid on aZimbabwe African National Liberation Army(ZANLA) camp at Nyadzonya Pungwe,Mozambiquein August 1976. ZANLA was then led byJosiah Tongogara. Using Rhodesian trucks and armored cars disguised as Mozambique military vehicles, 84 scouts killed 1,284 people in the camp, registered as arefugee campby theUnited Nations(UN). Even according to Reid-Daly, most of those killed were unarmed guerrillas standing in formation for a parade. The camp hospital was also set ablaze by the rounds fired by the Scouts, killing all patients.[45]According to David Martin and Phyllis Johnson, who visited the camp shortly before the raid, it was only a refugee camp that did not host any guerrillas. It was staged for UN approval.[46]
According to a 1978 study by the Directorate of Military Intelligence, 68% of all insurgent deaths inside Rhodesia could be attributed to the Selous Scouts, who were disbanded in 1980.[47]
If the action is a police action, then these tactics would fall within the laws of the state initiating the pseudo, but if such actions are taken in acivil waror during abelligerent military occupationthen those who participate in such actions would not beprivileged belligerents. The principle ofplausible deniabilityis usually applied for pseudo-teams. (See the above sectionLaws of war)[clarification needed]. Some false flag operations have been described by Lawrence E. Cline, a retiredUS Army intelligenceofficer, as pseudo-operations, or "the use of organized teams which are disguised as guerrilla groups for long- or short-term penetration ofinsurgent-controlled areas".[40]
"Pseudo-operations should be distinguished," notes Cline, "from the more common police or intelligenceinfiltrationof guerrilla or criminal organizations. In the latter case, infiltration is normally done by individuals. Pseudo teams, on the other hand, are formed as needed from organized units, usually military orparamilitary. The use of pseudo teams has been a hallmark of a number of foreigncounterinsurgencycampaigns."[40]
Similar false flag tactics were also employed during theAlgerian Civil War, starting in the middle of 1994.Death squadscomposed ofDépartement du Renseignement et de la Sécurité(DRS) security forces disguised themselves as Islamist terrorists and committed false flag terror attacks. Such groups included theOrganisation of Young Free Algerians(OJAL) or the Secret Organisation for the Safeguard of the Algerian Republic (OSSRA).[48]According toRoger Faligotand Pascal Kropp (1999), the OJAL was reminiscent of "the Organization of the French Algerian Resistance (ORAF), a group of counter-terrorists created in December 1956 by theDirection de la surveillance du territoire(Territorial Surveillance Directorate, or DST) whose mission was to carry out terrorist attacks with the aim of quashing any hopes of political compromise".[49]
Inespionage, the term "false flag" describes the recruiting of agents by operatives posing as representatives of a cause the prospective agents are sympathetic to, or even the agents' own government. For example, during theCold War, several femaleWest Germancivil servants were tricked into stealing classified documents by agents of theEast GermanStasiintelligence service pretending to be members of West German peace advocacy groups (theStasiagents were also described as "Romeos", indicating that they also used their sex appeal to manipulate their targets, making this operation a combination of the false flag and "honey trap" techniques).[50]
According to ex-KGB defectorJack Barsky, "Many a right-wing radical had given information to the Soviets under a 'false flag', thinking they were working with a Western ally, such as Israel, when in fact their contact was a KGB operative."[51]
False flag operations are also utilized bynon-state actorsandterroristorganizations. During theIndian security forces siege prior to the storming of the Golden Temple,Babbar Khalsamilitants allegedly infiltrated buildings between CRPF lines and the positions of pro-Bhindranwalemilitants and fired in both directions in the hope of provoking firefights. This was allegedly done because Babbar Khalsa leader Bibi Amarjit Kaur blamed Bhindranwale for the death of her husband, Fauja Singh, during the1978 Sikh-Nirankari clash.[52]
On October 5, 1987,LTTEfighters infiltrated betweenIPKFandSri Lankan armypositions in theKankesanturaiarea and provoked a firefight between the two forces as part of the revenge operations in retaliation for the suicide in custody of 15 LTTE leaders who were about to be handed into Sri Lankan custody.[53]
The term is popular amongconspiracy theorypromoters in referring tocovert operationsof various governments and claimedcabals.[54]According toColumbia Journalism Review, this usage mostly "migrated to the right", however because some historical false flag incidents occurred, historians should not fully cede the usage of the term to conspiracy theorists. Perlman says "The real danger is if we use the nonattributive 'false flags' as shorthand for conspiracy theories, without explaining what they are and who is promoting them." At the same time, Perlman writes that "people yelling that any attack attributed to someone on 'their side' was committed by 'the other side' drown out the voices of reason."[2]
Political campaigning has a long history of this tactic in various forms, including in person, print media and electronically in recent years. This can involve when supporters of one candidate pose as supporters of another, or act as "straw men" for their preferred candidate to debate against. This can happen with or without the candidate's knowledge. TheCanuck letteris an example of one candidate's creating a false document and attributing it as coming from another candidate in order to discredit that candidate.[citation needed]
In 2006, individuals practicing false flag behavior were discovered and "outed" inNew Hampshire[55][56]andNew Jersey[57]afterblogcomments claiming to be from supporters of a political candidate were traced to theIP addressof paid staffers for that candidate's opponent.
On 19 February 2011, Indiana Deputy Prosecutor Carlos Lam sent a private email to Wisconsin GovernorScott Walkersuggesting that he run a "'false flag' operation" to counter theprotestsagainst Walker's proposed restrictions on public employees'collective bargainingrights:
If you could employ an associate who pretends to be sympathetic to the unions' cause to physically attack you (or even use a firearm against you), you could discredit the unions... Employing a false flag operation would assist in undercutting any support the media may be creating in favor of the unions.[58][59]
The press had acquired a court order to access all of Walker's emails and Lam's email was exposed. At first, Lam vehemently denied it, but eventually admitted it and resigned.[59]
Some conservative commentators suggested thatpipe bombs that were sent to prominent Democrats prior to the 2018 mid-term electionswere part of a false flag effort to discredit Republicans and supporters of then-President Donald Trump.[60]Cesar Sayoc, motivated by his belief that Democrats were "evil", was later convicted of mailing the devices to Trump's critics.[61]
On the internet, aconcern trollis a false flagpseudonymcreated by a user whose actualpoint of viewis opposed to the one that the troll claims to hold. The concern troll posts in web forums devoted to its declared point of view and attempts to sway the group's actions or opinions while claiming toshare their goals, but with professed "concerns". The goal is to sowfear, uncertainty, and doubtwithin the group often by appealing tooutrage culture.[62]This is a particular case ofsockpuppetingandsafe-baiting.
During the2025 Canadian federal electioncampaign,Liberal Party of Canadastrategists were exposed after their false flag operation failed. ACBC Newsjournalist who was speaking with Liberal staff at a bar inOttawalearned how "Stop the Steal" buttons had been placed at aConservative Party of Canadaevent. The operatives hoped attendees would wear them, which would allow Liberals to publicly conflate Conservative supporters and leaderPierre PoilievrewithDonald J. Trump. After the false flag mission was reported by the journalist, Liberal leaderMark Carneyreassigned those involved.[63][64]
Proponents of political or religious ideologies will sometimes use false flag tactics. This can be done to discredit or implicate rival groups, create the appearance of enemies when none exist, or create the illusion of organized and directed persecution. This can be used to gain attention and sympathy from outsiders, in particular the media, or to convince others within the group that their beliefs are under attack and in need of protection.
In retaliation for writingThe Scandal of Scientology, some members of the Church ofScientologystole stationery from authorPaulette Cooper's home and then used that stationery to forge bomb threats and have them mailed to a Scientology office. TheGuardian's Officealso had a plan for further operations to discredit Cooper known asOperation Freakout, but several Scientology operatives were arrested in a separate investigation and the plan was exposed.[65]
According toPolitiFact, some false flag conspiracy theories (such as claims that mass shootings are hoaxes) are themselves spread byastroturfing, which is an attempt to create false impression of popularity in a belief.[66]
|
https://en.wikipedia.org/wiki/False_flag
|
TheHundred Flowers Campaign, also termed theHundred Flowers Movement(Chinese:百花齐放;pinyin:Bǎihuā Qífàng) and theDouble Hundred Movement(双百方针;Shuāngbǎi Fāngzhēn), was a period from 1956 to 1957 in thePeople's Republic of Chinaduring which theChinese Communist Party(CCP), led byMao Zedong, proposed to "let one hundred flowers bloom in social science and arts and let one hundred points of view be expressed in the field of science."[1][2]It was a campaign that allowed citizens to offer criticism and advice to the government and the party;[3]hence it was intended to serve an antibureaucratic purpose, at least on the Maoists' part.[4]The campaign resulted in a groundswell of criticism aimed at the Party and its policies by those outside its rank and represented a brief period of relaxation in ideological and cultural control.[5]
The movement was in part a response to tensions between the CCP and Chinese intellectuals.[6]Mao had realized that the CCP's control over intellectual life was stifling potentially useful new ideas. He was also worried about the emergence of new party elites who could threaten his position.[3]He sought to use the movement to restrain the new forces within the party. However, criticism quickly grew out of hand and posed a threat to the communist regime. The liberalization was short-lived: a crackdown followed, continuing through 1957 and into 1959 and developing into anAnti-Rightist Campaignagainst those who were critical of the regime and its ideology. Citizens were rounded up in waves by the hundreds of thousands, publicly criticized duringstruggle sessions, and condemned to prison camps for re-education through labor or execution.[7]The ideological crackdown re-imposedMaoistorthodoxy in public expression and catalyzed the Anti-Rightist Movement.
The name of the movement consists of two parts. The first part, "Let a hundred flowers bloom", originated from the novel Flowers in the Mirror by the Qing author Li Ruzhen; the second part, "Let a hundred schools of thought contend", comes from the Treatise on Literature of the Book of Han, authored by the Chinese historian Ban Gu:
百花齊放，百家爭鳴 (Bǎihuā qífàng, bǎijiā zhēngmíng)
Let a hundred flowers bloom; let a hundred schools of thought contend.
The slogan was first used by Mao Zedong on May 2, 1956, during a public speech. The name was used to arouse the interest of China's intellectuals, referring to the Warring States period when numerous schools of thought competed for ideological, not military, supremacy. Historically, Confucianism, Chinese Buddhism and Taoism had gained prominence, and socialism would now face its test. At the time, the movement was opposed by even some of Mao's most devout followers, as well as some within the academic circle, most notably Guo Moruo.[8][9][10]
In March 1951, it was proposed that the Peking Opera Research Institute be expanded and established as the Chinese Opera Research Institute, and Mao was invited to inscribe a dedication for it. Meanwhile, some argued that Peking Opera was outdated and that revolutionary opera should be promoted instead. In late March, Mao inscribed a dedication for the establishment of the Chinese Opera Research Institute: "Let a hundred flowers bloom; weed through the old to bring forth the new." In 1953, Chen Boda, who was in charge of the Committee for the Study of Chinese Historical Issues, sought Mao's guidance on its working principles, to which Mao responded with four characters: "Let a hundred schools of thought contend." It was not until April 28, 1956, in his concluding speech at an expanded meeting of the Politburo of the Chinese Communist Party, that he used the phrase "Let a hundred flowers bloom, let a hundred schools of thought contend" for the first time.[11] It has been suggested that the launch of the campaign was delayed by the shocking impact of the speech denouncing Stalin delivered by Nikita Khrushchev at the Twentieth Soviet Party Congress in February 1956.[12]
In the opening stage of the movement, during March and April, the issues discussed were relatively minor and unimportant in the grand scheme. Emphasis was placed on drawing a distinction between "friend and foe".[9] Intellectuals approached the campaign with suspicion, due to a lack of guidelines on what speech was acceptable; some also suspected that the campaign was bait, and that disallowed speech would get them in trouble.[9] As a result, the Central Government did not receive much criticism, although there was a significant rise in letters of conservative advice. Premier Zhou Enlai received some of these letters and once again realized that, although the campaign had gained notable publicity, it was not progressing as had been hoped. Zhou approached Mao about the situation, stating that more encouragement was needed from the central bureaucracy to lead the intellectuals into further discussion. Mao Zedong found the concept interesting and superseded Zhou to take control. Guo Moruo declared that the contending of diverse schools should be guided by the central aim of building a socialist society.[9]
The idea was to have intellectuals discuss the country's problems in order to promote new forms of art and new cultural institutions. Mao also saw this as a chance to promote socialism, believing that after discussion it would be apparent that socialist ideology was dominant over capitalism, even amongst non-communist Chinese, and would thus propel the development and spread of the goals of socialism. To this end, in an attempt to reduce hesitancy, intellectuals were invited to forums in which they were allowed to ask exploratory questions, slowly discovering what was deemed acceptable speech. During this time, criticisms were often indirect and lauded the Hundred Flowers campaign itself. In Leknor's research, it is argued that the campaign inverted the conventional relationship between communication and power: the privilege reserved for those in power was not the right to speak up and be heard, but the right to keep one's voice out of the unfolding campaign. In other words, students were pressured by teachers to speak out, and inferiors were asked to speak by superiors.[13]
Criticisms became more specific in May, citing the regimentation of education, thought reforms of previous years that were described as "painful", and the lack of employment prospects for those who had gone to American and British schools. Additionally, some recanted their self-criticism and confessions from previous years.[9]
By the spring of 1957, Mao had announced that criticism was "preferred" and had begun to mount pressure on those who did not submit healthy criticism of policy to the Central Government. Intellectuals responded immediately, voicing concerns without any taboo. In the period from 1 May to 7 June that year, millions of letters poured into the Premier's Office and other authorities.
From May to June 1957, newspapers published a huge range of critical articles.[14] The majority of these critiques argued that the Party had become less revolutionary and more bureaucratic.[14] Nonetheless, most of the commentary was premised on complete acceptance of socialism and the legitimacy of the Communist Party and focused on making the existing socialist system work better.[14]
Criticism increasingly arose from Chinese citizens of varying backgrounds. Peasants criticized the effectiveness of the cooperatives and demanded the repossession of their land.[15] Workers argued that the wage system was irrational and complained about the requirement to work overtime without pay.[15] Some individuals even argued that the people were better off under the administration of the KMT.[15] Some ethnic minorities within China even advocated separating from the nation to form independent states.[15]
Others spoke out by putting up posters around campuses, rallying in the streets, holding meetings for CCP members, and publishing magazine articles. A journalist wrote that the party had become alienated from the masses and that its members had become "flatterers, sycophants, and yes-men."[15] One professor mentioned that Marx and Lenin had repeatedly revised their theories and suggested that the two would be displeased if they had seen how strictly the CCP leaders were applying doctrine.[15] Notably, students at Peking University created a "Democratic Wall" on which they criticized the CCP with posters and letters.[16]
They protested CCP control over intellectuals, the harshness of previous mass campaigns such as that against counter-revolutionaries, the slavish following of Soviet models, the low standards of living in China, the proscription of foreign literature, economic corruption among party cadres, and the fact that 'Party members [enjoyed] many privileges which make them a race apart'.[16]
On June 8, 1957, the major party newspaper, the People's Daily, published an editorial that signaled the conclusion of the Hundred Flowers Campaign.[17] The editorial asserted that "rightists" had exploited the newfound freedom to attack the party and undermine the revolution. This, the editorial claimed, amounted to a hostile struggle "between the enemy and the people", indicating the beginning of a crackdown that later became the Anti-Rightist Campaign, led by then-General Secretary Deng Xiaoping.[17] Mao announced that "poisonous weeds" had grown amongst the "fragrant flowers" of the campaign, further terminology that signified the impending crackdown.[14]
In a revised version of On the Correct Handling of Contradictions Among the People, an essay aimed at reviving the Hundred Flowers campaign and published on June 19, 1957, Mao Zedong clarified the distinction between "beautiful flowers" and "poisonous weeds".[18]
In July 1957, Mao ordered a halt to the campaign. Unexpected demands for power sharing led to the abrupt change of policy.[20] By that time, Mao had witnessed Nikita Khrushchev denouncing Joseph Stalin and the Hungarian Revolution of 1956, events which he felt were threatening. In essence, Mao was threatened by the intellectuals' efforts to reclaim their position as loyal guardians of the proper moral framework for the political system.[6]
The campaign made a lasting impact on Mao's ideological outlook. Mao, historically known as more ideological and theoretical than pragmatic and practical, continued to attempt to solidify socialist ideals in future movements in a more pragmatic manner, and in the case of the Cultural Revolution, employed more violent means. Another consequence of the Hundred Flowers Campaign was that it discouraged dissent and made intellectuals reluctant to criticize Mao and his party in the future. The Anti-Rightist Movement that shortly followed, and was caused by the Hundred Flowers Campaign, resulted in the persecution of intellectuals, officials, students, artists, and dissidents labeled "rightists".[21] The campaign led to a loss of individual rights, especially for Chinese intellectuals educated in Western centers of learning. The campaign was conducted indiscriminately, as numerous individuals were labeled as "rightists" on the basis of anonymous denunciations. Local officials across the country were even assigned quotas for the number of "rightists" they needed to identify and denounce within their units. In the summer and early fall of 1957, roughly four hundred thousand urban residents, including many intellectuals, were branded as rightists and either sent to penal camps or forced into labor in the countryside.[22] While the party attempted to improve relations with intellectuals at the end of the Great Leap Forward, the Cultural Revolution obliterated any semblance of intellectual influence and prestige; "very few, if any, intellectuals survived the Cultural Revolution without having suffered physical and psychological abuse".[23]
The Hundred Flowers Movement was the first of its kind in the history of the People's Republic of China in that the government opened up to ideological criticisms from the general public. Although its true nature has always been questioned by historians, it can be generally concluded that the events that took place alarmed the central communist leadership. The movement also represented a pattern that has emerged from Chinese history wherein free thought is promoted by the government, and then suppressed by it. A similar surge in ideological thought would not occur again until the late 1980s, leading up to the 1989 Tiananmen Square protests and massacre. The latter surge, however, did not receive the same amount of government backing and encouragement.
Another important issue of the campaign was the tension that surfaced between the political center and national minorities. With criticism allowed, some minority activists made public their protest against "Han chauvinism", which they saw in the informal approach of party officials toward local specifics.[24]
The prominent party figures' attitudes toward the campaign are also a prime example of divided opinion at the leadership level within the party on the issue of corruption among party officials. As Lieberthal puts it, "The Chairman…in the Hundred Flowers campaign and in the Cultural Revolution, proved willing to bring in non-party people as part of his effort to curb officiousness by cadres. Other leaders, such as Liu Shaoqi, opposed "rectifying" the party by going outside of its ranks."[23]
Historians debate whether Mao's motivations for launching the campaign were genuine. Some find it possible that Mao originally had pure intentions, but later decided to utilize the opportunity to destroy criticism. Historian Jonathan Spence suggests that the campaign was the culmination of a muddled and convoluted dispute within the Party regarding how to address dissent.[25]
Authors Clive James and Jung Chang posit that the campaign was, from the start, a ruse intended to expose rightists and counter-revolutionaries, and that Mao Zedong persecuted those whose views differed from those of the Party. The first part of the phrase from which the campaign takes its name is often remembered as "let a hundred flowers bloom." This is used to refer to an orchestrated campaign to flush out dissidents by encouraging them to show themselves as critical of the regime and then imprisoning them, according to Chang and James.
In Mao: The Unknown Story by Jung Chang and Jon Halliday, Chang asserts that "Mao was setting a trap, and...was inviting people to speak out so that he could use what they said as an excuse to victimise them."[26] Prominent critic Harry Wu, who as a teenager was a victim, later wrote that he "could only assume that Mao never meant what he said, that he was setting a trap for millions."[27]
One supposedly authentic letter written by Mao indicates that the campaign was a ploy for entrapment from the beginning. Circulated to higher party cadres in mid-May 1957, the letter stated:
Things are just beginning to change. The rightist offensive has not yet reached its peak. [The rightists] are still very enthusiastic. We want to let them rage for a while and climb to the very summit.[15]
Mao's personal physician, Li Zhisui, suggested that:[28]
[The campaign was] a gamble, based on a calculation that genuine counter-revolutionaries were few, that rebels like Hu Feng had been permanently intimidated into silence, and that other intellectuals would follow Mao's lead, speaking out only against the people and practices Mao himself most wanted to subject to reform.
Indeed, Mao responded to the accusation in a People's Daily editorial of July 1, 1957:
Some say this is a conspiracy. We say this is an open strategy. Because we informed the enemy in advance: only by allowing the monsters and demons to come out of their lairs can we exterminate them; only by letting the poisonous weeds emerge from the ground can we easily uproot them. Don't farmers weed several times a year? Weeds, once removed, can still be used as fertilizer. Class enemies will inevitably seek opportunities to express themselves. They are unwilling to accept the downfall of the nation and the rise of communism.[29]
Professor Lin Chun characterizes as a "conspiracy theory" the depiction of the Hundred Flowers campaign as a calculated trap. In her analysis, this depiction is disputed by empirical research from archival sources and oral histories. She writes that many interpretations of the Hundred Flowers campaign "underestimate the fear on the part of Mao and party leadership over an escalating atmosphere of anticommunism within the communist world in the aftermath of the East European uprisings."[20]
Author Christine Vidal similarly rejects the idea of the campaign as being initially calculated to lure dissidents for later repression, stating that "the repression was not the initial aim of Mao and of his Hundred Flowers policy."[30]
The party's internal attitude towards the campaign can be found in Resolution on Certain Questions in the History of Our Party since the Founding of the People's Republic of China:
Owing to the serious implementation of the correct policies of the Party's Eighth National Congress, 1957 was, in terms of economic work, one of the most effective years since the founding of the country. That year, the entire Party launched the Rectification Campaign, mobilizing the masses to criticize the Party and offer it suggestions. This was a normal step in promoting socialist democracy. During the rectification process, a very small number of bourgeois rightists took the opportunity to advocate so-called "big revelations and big debates", launching a brazen attack on the Party and the new socialist system and attempting to replace the leadership of the Communist Party. It was entirely correct and necessary to firmly counter this attack. However, the Anti-Rightist Campaign was seriously overexpanded, misclassifying a group of intellectuals, patriots, and Party cadres as "rightists", with unfortunate consequences.[31]
|
https://en.wikipedia.org/wiki/Hundred_Flowers_Campaign
|
The Inner Line (Russian: Внутренняя Линия) was a secret counter-intelligence branch of the Russian All-Military Union (ROVS), the leading Russian White émigré organization. General Alexander Kutepov is credited with setting it up in the mid-1920s.[1][2] An alternative account sees the Inner Line as a group secretly established by Soviet intelligence within the ROVS.[3][4]
Whatever its origin, the Inner Line became subject to severe penetration by OGPU/NKVD.[5] It was seriously discredited after Soviet agents kidnapped the ROVS chairman General Yevgeny Miller in 1937, and following the subsequent disappearance of Nikolai Skoblin (Miller's aide and Inner Line senior operative), who, as a covert NKVD agent, lured Miller into the abduction operation.
|
https://en.wikipedia.org/wiki/Inner_Line
|
Political warfare is the use of hostile political means to compel an opponent to do one's will. The term political describes the calculated interaction between a government and a target audience, including another state's government, military, and/or general population. Governments use a variety of techniques to coerce certain actions, thereby gaining relative advantage over an opponent. The techniques include propaganda and psychological operations ("PsyOps"), which serve national and military objectives respectively. Propaganda has many aspects and a hostile and coercive political purpose. Psychological operations are for strategic and tactical military objectives and may be intended for hostile military and civilian populations.[1]
Political warfare's coercive nature leads to weakening or destroying an opponent's political, social, or societal will, and forcing a course of action favorable to a state's interest. Political war may be combined with violence, economic pressure, subversion, and diplomacy, but its chief aspect is "the use of words, images and ideas".[2] The creation, deployment, and continuation of these coercive methods are a function of statecraft for nations and serve as a potential substitute for more direct military action.[3] For instance, methods like economic sanctions or embargoes are intended to inflict the necessary economic damage to force political change. The methods and techniques used in political war depend on the state's political vision and composition. Conduct will differ according to whether the state is totalitarian, authoritarian, or democratic.[4]
The ultimate goal of political warfare is to alter an opponent's opinions and actions in favour of one state's interests without utilizing military power. This type of organized persuasion or coercion also has the practical purpose of saving lives by eschewing the use of violence in order to further political goals. Thus, political warfare also involves "the art of heartening friends and disheartening enemies, of gaining help for one's cause and causing the abandonment of the enemies'".[5]: 151 Generally, political warfare is distinguished by its hostile intent and its potential for escalation, but the loss of life is an accepted consequence.
Political warfare utilizes all instruments short of war available to a nation to achieve its national objectives. The best tool of political warfare is "effective policy forcefully explained",[6] or more directly, "overt policy forcefully backed".[7] But political warfare is used, as one leading thinker on the topic has explained, "when public relations statements and gentle, public diplomacy-style persuasion—the policies of 'soft power'—fail to win the needed sentiments and actions" around the world.[8]
The major way political warfare is waged is through propaganda. The essence of these operations can be either overt or covert. White propaganda is maximally overt: there is attribution to a promoter; the attributed promoter and the actual promoter are one and the same; and no attempt is made to hide the fact that a viewpoint or "line" is being promoted. Most television advertisements are white propaganda turned to commercial ends. Grey propaganda ranges in overtness from maximal to a slightly lesser degree: as in white propaganda, there is honest attribution to a source; but it differs from white propaganda by being less forthright either about the link between the source and the line being promoted or about its status as propaganda in the first place. Grey propaganda has alternatively been defined as the "semiofficial amplification of a government's voice";[6] guerrilla advertising uses the tools of grey propaganda to sell products and services, while in public service, examples include Radio Free Europe and Radio Liberty (established during Cold War I). Black propaganda is covert: this may be limited to the anonymous dissemination of internally consistent talking points (differing from white propaganda only in its lack of credited authorship), but it may also be the impersonation of a widely trusted organization (through the use of its branding, corporate style guide, distinctive turns of phrase, etc.) under a false flag, or a strategy as complex as the use of a botnet to inundate a social network with self-contradictory disinformation as if through a firehose of falsehood, amplifying the botnet's posts with so-called Likes and Retweets, and frustrating genuine users' bona fide searches for pertinent information by diminishing the signal-to-noise ratio. What unifies these disparate strategies implementing black propaganda is that, in all cases, it "appears to come from a disinterested source when in fact it does not".[9]
There are various channels which can be used to transmit propaganda. Sophisticated use of technology allows an organization to disseminate information to a vast number of people. The most basic channel is the spoken word, which can include live speeches or radio and television broadcasts. Overt or covert radio broadcasting can be an especially useful tool. The printed word is also very powerful, including pamphlets, leaflets, books, magazines, political cartoons, and planted newspaper articles (clandestine or otherwise). Subversion, agents of influence, spies, journalists, and "useful idiots" can all be used as powerful tools in political warfare.[10]
Political warfare also includes aggressive activities by one actor to offensively gain relative advantage or control over another. Between nation states, it can end in the seizure of power or in the open assimilation of the victimized state into the political system or power complex of the aggressor. This aggressor-victim relationship has also been seen between rivals within a state and may involve tactics like assassination, paramilitary activity, sabotage, coup d'état, insurgency, revolution, guerrilla warfare, and civil war.
A coup utilizes political resources to gain support within the existing state and neutralize or immobilize those who are capable of rallying against the coup. A successful coup occurs rapidly and, after taking over the government, stabilizes the situation by controlling communications and mobility. Furthermore, a new government must gain acceptance from the public, military, and administrative structures by reducing the sense of insecurity. Ultimately, the new government will seek legitimacy in the eyes of its own people as well as seek foreign recognition.[17] The coup d'état can be led by national forces or involve foreign influence, similar to foreign liberation or infiltration.[12]
The history of political warfare can be traced to antiquity. The Chinese general and strategist Sun Tzu captures its essence in the ancient Chinese military strategy book, The Art of War: "So to win a hundred victories in a hundred battles is not the highest excellence; the highest excellence is to subdue the enemy's army without fighting at all...The expert in using the military subdues the enemy's forces without going to battle, takes the enemy's walled cities without launching an attack, and crushes the enemy's state without protracted war."[23]
There are abundant examples of political warfare in antiquity. In ancient Greece, a famous example is that of the Trojan Horse, which used deception for tactical military objectives. Propaganda was commonly utilized, including Greek rhetoric and theatre, which used words and images to influence populations throughout the Hellenic world. This practice has left a lasting legacy of speech as a mechanism of political power, greater than force in solving disputes and inducing submission.[24] During this same period, Alexander the Great used coinage imprinted with his own image, indirectly forcing conquered nations to accept his legitimacy as national ruler and to unite disparate nations together under his dominion.[25]
Ancient Rome utilized political warfare similar to that of the Greeks, including rhetoric, as displayed by Cicero; and art, as seen in coinage, statues, architecture, engineering, and mosaics. All of these elements were intended to portray Rome's imperial dominance over its subject nations and the superior nature of Roman society.[26] Following a religious vision, the emperor Constantine I in 330 CE bound the Roman state to the universal Christian Church. In doing so, he linked "religious commitment with imperial ambition" in a way that proved to be quite successful and powerful.[27] One long-lasting symbol of this is the Chi Rho, which is formed from the first two letters of the Greek name for Christ. This symbol was used for over one thousand years by Constantine's successors as a symbol of "imperial majesty and divine authority"[28] and remains a powerful symbol within Christianity.
Since the founding of the People's Republic of China (PRC) in 1949, the Chinese Communist Party (CCP) has centered much of its political warfare efforts within the United Front Work Department.[29][30][31] The CCP conception of political warfare includes the "three warfares" of public opinion warfare, psychological warfare, and legal warfare, among others.[32][33] Political warfare encompasses influence operations such as the doctrine of China's peaceful rise.[34]
Taiwan remains a major target of PRC political warfare efforts.[35][36] The PRC's political warfare campaign aims to isolate Taiwan from the international community and interfere in Taiwan's democratic system and institutions.[30] India has also become a target of increasing importance for PRC influence operations.[37]
The Israel Defense Forces (IDF) was an early adopter of social media platforms to promote a positive image of itself by posting sexy images of female Israeli soldiers, ASMR media involving firearms, as well as more traditional content about humanitarianism and national glory. It operates multiple accounts that have a large following and actively recruits influencers on Facebook, Instagram, Telegram, TikTok, Twitter, and YouTube. The IDF's online presence in English is about twice as large as its activity in Hebrew and mainly targets young Jewish people in the United States. Besides positive self-portrayals, other objectives included shaping the narrative during its media blockade of the Gaza War (2008–2009), preparing information ahead of IDF operations in order to be the first out with the desired story, and quickly responding to unexpected incidents.[38][39] In 2021 the IDF awarded a military police officer, who had more followers than the IDF spokesperson or the prime minister, for promoting Israel on TikTok.[40] The IDF's social media campaign has led to backlashes, such as when it implied Iranian children were terrorists. It has also been welcomed by supporters for luring out "antisemites [who] need blocking".[41]
Throughout the Cold War, the Soviet Union was committed to political warfare on classic totalitarian lines and continued to utilize propaganda towards internal and external audiences.[42] "Active measures" (Russian: Активные мероприятия) was a Russian term to describe its political warfare activities both at home and abroad in support of Soviet domestic and foreign policy. Soviet efforts took many forms, ranging from propaganda, forgeries, and general disinformation to assassinations. The measures aimed to damage the enemy's image, create confusion, mould public opinion, and exploit existing strains in international relations.[43] The Soviet Union dedicated vast resources and attention to these active measures, believing that "mass production of active measures would have a significant cumulative effect over a period of several decades".[44] Soviet active measures were notorious for targeting the intended audience's public attitudes, including prejudices, beliefs, and suspicions deeply rooted in local history. Soviet campaigns fed disinformation that was psychologically consistent with the audience.[45] Examples of Soviet active measures include:
Communist strategy and tactics continually focused on revolutionary objectives: "for them the real war is the political warfare waged daily under the guise of peace",[49] the purpose of which is to "disorient and disarm the opposition...to induce the desire to surrender in opposing peoples...to corrode the entire moral, political, and economic infrastructure of a nation".[50] Lenin's mastery of "politics and struggle" remained an objective for the Soviet Union and other global communist regimes, such as the People's Republic of China.
The Soviet Union remains a comprehensive example of an aggressive nation which expanded its empire through covert infiltration and direct military involvement.[51] Following World War II, the Soviet Union believed European economies would disintegrate, leaving social and economic chaos and allowing for Soviet expansion into new territories. The Soviets quickly deployed organizational weapons such as non-political front groups, sponsored 'spontaneous' mass appeals, and puppet politicians. While many of these countries' political and social structures were in post-war disarray, the Soviet Union's proxy communist parties were well-organized and able to take control of these weak, newly formed Eastern European governments.[52] Moreover, the clandestine operations of the Soviet intelligence services and the occupying forces of the Soviet military further infiltrated the political and social spheres of the new satellites.[53] Conversely, in 1979, the Soviet Union was unable to successfully penetrate Afghan society after supporting a coup which brought a new Marxist government to power. While Soviet units were already in Kabul, Afghanistan, at the time of the coup, additional Soviet troops arrived to reinforce them and seize important provincial cities, bringing the total number of Soviet troops inside Afghanistan to 125,000–140,000. The Soviets were unprepared for the Afghan resistance, which employed classic guerrilla tactics with foreign support. In 1989, Soviet forces withdrew from Afghanistan, having been unable to infiltrate Afghan society or immobilize the resistance.[54]
The Republic of China Government in Taiwan recognized that its Communist adversary had astutely employed political warfare to capitalize upon Kuomintang weaknesses over the years since Sun Yat-sen first mounted his revolution in the 1920s, and Chiang Kai-shek's regime had come to embrace a political warfare philosophy both as a defensive necessity and as the best foundation for consolidating its power in pursuit of its optimistic goal of "retaking the mainland". Both the Nationalist and Communist Chinese political warfare doctrines stem from the same historical antecedents at the Whampoa Military Academy in 1924 under Soviet tutelage.[55]
The Nationalist Chinese experience with political warfare can be treated in a much more tangible way than merely tracing doctrinal development. In the Taiwan of the 1970s, the concept was virtually synonymous with the General Political Warfare Department of the Ministry of National Defense, which authored political doctrine and was the culmination of a series of organizational manifestations of its application.[55]
In the 21st century, political warfare is primarily the responsibility of the Political Warfare Bureau.[56]
American foreign policy demonstrates a tendency to move towards political warfare in times of tension and perceived threat, and toward public diplomacy in times of improved relations and peace. American use of political warfare depends on its central political vision of the world and its subsequent foreign policy objectives.[57] After World War II, the threat of Soviet expansion brought two new aims for American political warfare.
President Harry S. Truman established a government political warfare capability in the National Security Act of 1947. The act created the U.S. National Security Council, which became the infrastructure necessary to apply military power to political purposes.[59]
The Truman Doctrine was the post-WWII basis for American political warfare operations, on which the United States government went further to formulate an active, defensive strategy to contain the Soviet threat.[58] On 4 May 1948, George F. Kennan, the father of the containment policy, wrote the Policy Planning Staff memorandum titled "The Inauguration of Organized Political Warfare". This National Security Council (NSC) memo established a directorate of political warfare operations, under the control of the NSC, known as the Consultative (or Evaluation) Board of the National Security Council. This directorate fell under the authority of the Secretary of State, while the Board had complete authority over covert political operations. It recognized political warfare as one instrument in the United States' grand strategy. Kennan defined 'political warfare' as "the employment of all means at a nation's command, short of war, to achieve its national objectives. Such actions are both overt and covert. They range from such overt actions as political alliances, economic measures (such as ERP – the Marshall Plan), and 'white' propaganda to such covert operations as clandestine support of 'friendly' foreign elements, 'black' psychological warfare and even encouragement of underground resistance in hostile states."
The memo further defined four projects that were activated by the Board to combat growing Communist influence abroad.
The United States used gray and black propaganda research, broadcasting, and print media operations during the Cold War to achieve its political warfare goals. These operations were conducted against Eastern European targets from Western Europe by two public-private organizations supported partly by the Central Intelligence Agency and the NSC, and partly by private corporations. These organizations were Free Europe, which was launched in 1949 and targeted Eastern Europe, and the American Committee for Liberation (AmComLib), created in 1951 to broadcast information into Soviet Russia. Both were renamed shortly thereafter and combined as Radio Free Europe/Radio Liberty (RFE/RL).[60] Many RFE/RL recruits came either from European emigrant families who were strongly anti-Communist or from US government agencies, most notably the CIA. Officially, "the US government denied any responsibility for the radios and took care to conceal the channels of funding, personnel recruitment, and policy influence. Obviously, the major support was American, but it was plausibly not official American and it could be excluded from diplomatic intercourse and international legal complication."[61] RFE/RL was considered to be a gray operation until its existence was publicly acknowledged by "activists" in the United States during the late 1960s. The goal of the radios was to present the truth to suppressed peoples behind the Iron Curtain "to aid in rebuilding a lively and diversified intellectual life in Europe which could ... defeat Soviet ... incursions on their freedom".[62]
In addition, Voice of America (VOA) started broadcasting to Soviet citizens in 1947, under the pretext of countering "more harmful instances of Soviet propaganda directed against American leaders and policies" on the part of the internal Soviet Russian-language media.[63] The Soviet Union responded by initiating electronic jamming of VOA broadcasts on April 24, 1949.[63]
In the fall of 1950, a group of scholars including physicists, historians and psychologists from Harvard University, the Massachusetts Institute of Technology and the RAND Corporation undertook a research study of psychological warfare for the Department of State.[64] The Project Troy Report to the Secretary of State, presented on 1 February 1951, made various proposals for political warfare, including possible methods of minimizing the effects of Soviet jamming of Voice of America broadcasts.[65] It can be assumed that the Truman administration tried to implement plans established by Project Troy in the project Overload and Delay.[66] The purpose of the latter was to break the Stalinist system by increasing the number of input points in the system and by creating complex and unpredictable situations requiring action.[66]
An overt, non-governmental form of political warfare during the Cold War emerged after President Ronald Reagan's 8 June 1982 speech to the British Parliament. In his speech, Reagan appealed for a "global crusade for democracy",[67] and as a result, the National Endowment for Democracy (NED) was created in December 1983. The NED was a non-governmental organization (NGO) based on four fundamental foundations.
The NED "funded programs in support of candidates acceptable to the US in elections in Grenada, Panama, El Salvador, and Guatemala throughout 1984 and 1985 in order to prevent communist victories, and create stable pro-US governments".[67]It was also active in Europe, funding groups to carry promote pro-North Atlantic Treaty Organization (NATO) propaganda in Britain, as well as a "right wing French student organisation ... linked to fascist paramilitaries". Other notable efforts included anti-Sandinista propaganda and opposition efforts in Nicaragua as well as anti-Communist propaganda and opposition efforts in support of the Solidarity movement in Poland between 1984 and 1990.[67]According to a 1991 interview inThe Washington Postwith one of the creators of the NED,Allen Weinstein, "a lot of what we (NED) do today was done covertly 25 years ago by the CIA".[68]
From 2010 to 2012, the United States operated ZunZuneo, a social media service similar to Twitter, in an attempt to instigate uprisings against the Cuban government in the long run. The program was initiated and overseen by the United States Agency for International Development (USAID), but its involvement was concealed behind international front companies. The plan was to draw in users, chiefly young people, with free text messages and non-controversial content until a critical mass was reached, after which more political messaging would be introduced and full opposition members could sign up. A large user base would also make it difficult for the Cuban government to shut down the service, due to popular demand and a loss of revenue. During its development phase before the official launch, engineers working on the project leveraged the user data they had on Cubans and prototype software to gather demographic and other intelligence, such as public opinions about opposition music acts in the lead-up to the 2009 Paz Sin Fronteras II concert. At its peak, more than 40,000 unsuspecting Cubans interacted on the platform. As the number of users grew, USAID struggled to cover the operating costs, especially the text messaging fees paid to Cuba's mobile network providers. The project reached out to potential investors such as Twitter co-founder Jack Dorsey and interviewed industry executives to lead a front company that could make the program more commercially viable. These efforts ultimately failed, leaving many Cubans mystified after the service suddenly stopped working.[69]
According to former U.S. officials who spoke to Reuters, the Trump administration authorized a covert influence campaign across social media in China in order to turn public opinion against the Chinese government. The operation began in 2019 and involved Central Intelligence Agency agents using bogus internet identities. They promoted negative narratives about CCP general secretary Xi Jinping's government and leaked disparaging information to international news outlets. In addition to China, the campaign also targeted countries in Southeast Asia, Africa and the South Pacific. The CIA used the same method during the Cold War, when it planted daily articles against the former Soviet Union, but this risks backfiring through counter-accusations by Beijing and endangers Chinese dissidents as well as journalists, who could be falsely accused of being spies.[70]
|
https://en.wikipedia.org/wiki/Political_warfare
|
Roman Vatslavovich Malinovsky (Russian: Рома́н Ва́цлавович Малино́вский; 18 March 1876 – 5 November 1918) was a prominent Bolshevik politician before the Russian revolution, while at the same time working as the best-paid agent for the Okhrana, the Tsarist secret police. They codenamed him 'Portnoi' (the tailor).
He was a brilliant orator, tall, red-haired, yellow-eyed and pockmarked,[1] "robust, ruddy complexioned, vigorous, excitable, a heavy drinker, a gifted leader of men."[2]
Malinovsky was born in Plotsk province, Poland, at the time part of the Russian Empire. His parents were ethnic Polish peasants, who died while he was still a child. He was jailed for several robberies from 1894 to 1899, for which he spent three years in prison, and was also charged with attempted rape. In 1902, he enlisted in the prestigious Izmaylovsky Regiment by impersonating a cousin with the same name.[3] Malinovsky began as an Okhrana agent within the regiment, reporting on fellow soldiers and officers. He was discharged from the army at the end of the Russo-Japanese War and relocated to Saint Petersburg.
In 1906, he found a job as a lathe operator and joined the Petersburg Metalworkers' Union and the Russian Social Democratic Labour Party (RSDLP). Initially, he was inclined to support the Mensheviks, who believed in trade union autonomy, rather than the Bolshevik faction, who sought to control the union. He was arrested five times as a union activist, but his Okhrana handlers arranged each time for him to be released without arousing suspicion.[4] Exiled from St Petersburg in 1910, he moved to Moscow. Here, for the first time, he was awarded a regular salary as a police informer, to supplement his wages as a metal turner, and was instructed by the Okhrana Director S. P. Beletsky to ensure that the different factions of the RSDLP never reunited. Malinovsky, therefore, joined the Bolsheviks. In January 1912, he travelled to Prague, where Vladimir Lenin had organised a conference to finalise the break with the Mensheviks and create a separate Bolshevik organisation. He made such a good impression on Lenin that he was elected to the Central Committee, and chosen to represent the Bolsheviks in the forthcoming elections to the Fourth Duma, to which he was elected as its most prominent working-class deputy in November 1912. He was simultaneously the Okhrana's best-paid agent, earning 8,000 rubles a year, 1,000 more than the Director of the Imperial Police.[5] He led the six-member Bolshevik group (two of whom were Okhrana agents) and was deputy chairman of the Social Democrats in the Duma. As a secret agent, he helped send several important Bolsheviks (like Sergo Ordzhonikidze, Joseph Stalin, and Yakov Sverdlov) into Siberian exile.
In November 1912, he visited Lenin in Kraków and was urged not to unite with the Mensheviks. Malinovsky ignored that advice by reading a conciliatory speech in the Duma, to throw any suspicion off himself.[6] On 28 December 1912, he attended a Central Committee meeting in Vienna. He persuaded Lenin to appoint an Okhrana agent, Miron Chernomazov, as editor of Pravda, as opposed to Stalin's candidate Stepan Shahumyan. The tsarist regime was determined to keep the RSDLP split, meaning that conciliators and pro-party groups were targeted for sabotage, while liquidators and recallists were encouraged.
When Menshevik leader Julius Martov first denounced Malinovsky as a spy in January 1913, Lenin refused to believe him and stood by Malinovsky. The accusing article was signed Ts, short for Tsederbaum, Martov's real name. Stalin threatened Martov's sister and brother-in-law, Lydia and Fedor Dan, by saying they would regret it if the Mensheviks denounced Malinovsky.[7]
Malinovsky's efforts helped the Okhrana arrest Sergo Ordzhonikidze (14 April 1912), Yakov Sverdlov (10 February 1913) and Stalin (23 February 1913). The latter was arrested at a Bolshevik fundraising ball, which Malinovsky had persuaded him to attend by lending him a suit and silk cravat. Malinovsky was talking to Stalin when the detectives took him, and even shouted that he would free him.[8]
In July 1913, he betrayed a plan for Sverdlov and Stalin to escape, warning the police chief in Turukhansk. He was then the only Bolshevik leader not in foreign or Siberian exile. Soon after this foiled escape plan, Stalin came over to Martov's view and strongly suspected Malinovsky of being an Okhrana spy, which was confirmed years later, fuelling Stalin's future distrust of his comrades.
On 8 May 1914, he was forced to resign from the Duma after Russia's recently promoted Deputy Minister for the Interior, General Vladimir Dzhunkovsky, decided that having a police agent in such a prominent position might cause a scandal.[9] He was given a payoff of 6,000 roubles and ordered to leave the country. He joined Lenin in Kraków, where a Bolshevik commission looked into rumours that he was a police spy. Despite testimony from Nikolai Bukharin and Elena Troyanovskaya, who both suspected that they had been betrayed to the police by Malinovsky when they were arrested in Moscow, in 1910 and 1912 respectively, the commission accepted Malinovsky's story that he had been forced to resign when the police blackmailed him by threatening to publicise the old charge of attempted rape.[10] When World War I broke out, he was interned in a POW camp by the Germans. Lenin, still standing by him, sent him clothes. He said: "If he is a provocateur, the police gained less from it than our Party did", a reference to Malinovsky's strong anti-Menshevism. Eventually, Lenin changed his mind: "What a swine: shooting's too good for him!"[11]
In 1918, he tried to join the Petrograd Soviet, but Grigory Zinoviev recognized him. In November, after a brief trial, Malinovsky was executed by a firing squad.
According to the British historian Simon Sebag Montefiore, Malinovsky's successful infiltration of the Bolsheviks helped fuel the paranoia of the Soviets (and, more specifically, Stalin) that eventually gave way to the Great Terror.
According to the transcribed recollections of Nikolay Vladimirovich Veselago, a former Okhrana officer and relative of the director of the Russian police department Stepan Petrovich Beletsky, both Malinovsky and Stalin reported on Lenin as well as on each other, although Stalin was unaware that Malinovsky was also a penetration agent.[12][13][14]
|
https://en.wikipedia.org/wiki/Roman_Malinovsky
|
In law enforcement, a sting operation is a deceptive operation designed to catch a person attempting to commit a crime. A typical sting will have an undercover law enforcement officer, detective, or co-operative member of the public play a role as criminal partner or potential victim and go along with a suspect's actions to gather evidence of the suspect's wrongdoing. Mass media journalists have used sting operations to record video and broadcast it to expose criminal activity.[1]
Sting operations are common in many countries, such as the United States,[2] but they are not permitted in some countries, such as Sweden.[3] There are prohibitions on conducting certain types of sting operations, such as in the Philippines, where it is illegal for law enforcers to pose as drug dealers to apprehend buyers of illegal drugs.[4] In countries like France, Germany, and Italy, sting operations are relatively rare.[5]
|
https://en.wikipedia.org/wiki/Sting_operation
|
"Syndicate–2"was adisinformation operationdeveloped and carried out by theState Political Directorate, aimed at eliminatingSavinkov's anti–Soviet underground.
The interest of the famous terrorist Boris Savinkov in taking part in underground anti-Soviet activities prompted the extraordinary commissioners to develop a plan to involve him in such activities under the supervision of the security services, in order to eliminate the entire underground network. Such a plan was developed in the Counterintelligence Department of the State Political Administration under the People's Commissariat of Internal Affairs of the Russian Socialist Federative Soviet Republic, created in 1922. On May 12, 1922, a circular letter "On Savinkov's Organization" was issued (it appeared on the fourth day of the department's existence and became its first published circular letter). This letter addressed a new method of counterintelligence work – the creation of "legended" (fictitious front) organizations. Operation Syndicate-2 was carried out in parallel with the similar Operation Trust, aimed at liquidating the monarchist underground. Operation Syndicate-2 involved the head of the Counterintelligence Division Artur Artuzov, Deputy Chief Roman Pilar, Assistant Chief Sergei Puzitskiy and the personnel of the 6th Division of the Counterintelligence Division: Chief Ignatiy Sosnovskiy, Assistant Chief Nikolai Demidenko, secret officer Andrey Fedorov, authorized officer Grigory Syroezhkin, Semyon Gendin, and Jan Krikman, assistant to the authorized officer of the Counterintelligence Department of the Plenipotentiary Representation of the State Political Administration for the Western Territory.[1]
After the failure of the resistance in Poland and a series of failures in the anti-Soviet field, Boris Savinkov decided to single-handedly organize uprisings and terrorist acts in Russia, reviving the People's Union for the Defense of the Motherland and Freedom. While in Paris, in the summer of 1922, he sent his adjutant Leonid Sheshenya to Russia, where he was detained by Soviet border guards[2] while crossing the border from Poland.[3] Through Sheshenya (who, under the threat of being shot for participating in Balakhovich's formations, agreed to cooperate with the United State Political Directorate), the extraordinary commissioners managed to uncover two agents – Mikhail Zekunov and V. Gerasimov, who turned out to be the leader of an underground organization.[4] Also, on the basis of Sheshenya's testimony, the cells of the People's Union for the Defense of the Motherland and Freedom in the Western Territory were liquidated.[5]
In the Counterintelligence Department, a project was developed according to which secret officer Andrei Fedorov would go abroad under the guise of Andrei Mukhin, a member of the Central Committee of the Liberal Democrats Party, in order to convince Savinkov of the existence of a capable underground organization in the Soviet Union and persuade him to cooperate. In addition, the extraordinary commissioners managed to recruit Zekunov, arrested in September 1922, who, after a month of briefing, was sent to Poland, where he met with Sheshenya's relative Ivan Fomichev, a member of the People's Union for the Defense of the Motherland and Freedom. Fomichev sent Zekunov to Warsaw, where he reported to Savinkov's resident, Dmitry Filosofov, that Sheshenya had come into contact with a large counter-revolutionary organization in the Soviet state, and handed over a letter from Sheshenya addressed to Savinkov. In June 1923, Fedorov went to Poland. In Vilno, he met with Ivan Fomichev, with whom he travelled on to Warsaw. Fedorov demanded a meeting with Savinkov, which was denied (as the extraordinary commissioners had envisaged); instead, Filosofov talked with him. Filosofov was suspicious of Fedorov, but listened to his statement and decided to send Fomichev to the Soviet Union for reconnaissance, which he informed Savinkov about in a letter. Savinkov approved his resident's decision.[4]
Fomichev, upon arriving in Moscow, was first set up by the extraordinary commissioners with a real counter-revolutionary, Professor Isachenko, who headed a monarchist organization, in the hope that the political opponents would quarrel and Fomichev would get the impression that the only force worth cooperating with was the "Liberal Democrats". And so it happened: after this conversation, Fomichev got to a meeting of the joint center of the "Liberal Democrats" and the Savinkovites, where he made a proposal for cooperation (Professor Isachenko was sent to the Internal Prison of the State Political Administration on Lubyanka and, apparently, was shot). The "Liberal Democrats" accepted the proposal to work together, but set as a condition political consultations with Savinkov personally. Filosofov received the information brought back by Fomichev with such enthusiasm that he even forgot to inform Savinkov himself, who learned about the result of the trip by accident. Savinkov was very angry at this behavior of his residents and threatened to remove all local leaders of the Union. He deliberated, comparing all the known facts and studying the programme documents of the Liberal Democrats Party, which had been drawn up with the participation of Artuzov, Puzitsky and Menzhinsky.[4]
Meanwhile, on June 11, 1923, Fedorov went from Warsaw to Paris to meet with Savinkov.[3] Savinkov was still not sure that the Liberal Democrats were not a provocation by the extraordinary commissioners, and decided to send one of his closest associates, Sergei Pavlovsky, who suspected that the organization was a provocation, to test Fedorov. However, the check failed: Fedorov did not succumb to the provocation, and achieved an audience with Savinkov by playing out a scene of a quarrel, full of resentment and disappointment in Savinkov and his associates. Savinkov calmed Fedorov and sent Pavlovsky to the Soviet Union (without giving him details of the leadership of the Liberal Democrats).[4] In addition, Fomichev and Fedorov contacted Polish intelligence, passed on some false information (prepared by the State Political Administration) and agreed on permanent cooperation.[5]
Pavlovsky arrived in Poland in August 1923 and illegally crossed the border into the Soviet Union on August 17,[3] killing a Red Army soldier. Instead of immediately proceeding to check the activities of Sheshenya, Zekunov and the others, he organized in Belarus an armed group from among the members of the People's Union for the Defense of the Motherland and Freedom, with which he began carrying out expropriations of banks and mail trains and the murder of communists. On September 16,[3] he moved to Moscow, where two days later, during a meeting with Sheshenya and the leaders of the Liberal Democrats, he was arrested and taken to the internal prison of the State Political Administration. There he was presented with a list of his most serious crimes and it was made clear to him that he would only be able to avoid execution by cooperating with the State Political Administration. He was asked to write a letter to Filosofov, and he agreed; he had an arrangement with Savinkov that if he left any sentence in the letter without a full stop, it would be a sign that he had been arrested. But the attempt to send a letter with this prearranged sign failed: Pavlovsky, with his persistent interest in whether the extraordinary commissioners were not afraid that Savinkov would learn about the arrest of his assistant, aroused their suspicions, and the secret device was unravelled by the cipher clerks. He was forced to rewrite the letter. Savinkov, having received a letter without the secret sign, trusted Pavlovsky and wrote a message to the Liberal Democrats in which he expressed a desire to come to Russia. Savinkov's wish was that Pavlovsky should come to Europe for him, but the extraordinary commissioners could not let Pavlovsky go, and they invented the legend that he had allegedly gone to the south of Russia to carry out an expropriation and had been wounded and was bedridden. Such news confused Savinkov, planting suspicions in his mind, but they were set aside, since Savinkov was driven by the fear of missing the right moment for active action. In addition, the extraordinary commissioners organized meetings of Fomichev with the "leaders of anti-Soviet groups" in Rostov-on-Don and Mineralnye Vody, who were played by officers of the Counterintelligence Department Ibragim Abyssalov and Ignatiy Sosnovsky. In June 1924, Fomichev arrived in Paris and convinced Savinkov of the need for a trip.[5]
In August 1924, Savinkov returned to the Soviet Union, accompanied by Alexander Dikhoff, his wife Lyubov, Fomichev and Fedorov.[4] Fedorov separated from the group in Vilno, promising to meet them on Soviet territory. On August 15, they crossed the border through a passage prepared by the United State Political Administration. They reached Minsk, where Savinkov and Alexander and Lyubov Dikhoff, betrayed by Andrei Fedorov, were arrested in a safe house on August 16, 1924, and sent to Moscow.[3] On August 18, they were placed in the inner prison of the United State Political Administration.[1]
At the trial Savinkov confessed to everything and made a particular point of the claim that "all his life he worked only for the people and in their name". Cooperating with the investigation, he presented in court the version invented by the extraordinary commissioners in order to keep the details of Operation Syndicate–2 secret, and stated that he repented of his crimes and admitted that "all his political activities since the October Socialist Revolution were a mistake and delusion". On August 29, 1924 the Supreme Court sentenced Savinkov to death with confiscation of property, since for the cumulative crimes he had earned five years in prison and five death sentences. However, the court petitioned for a mitigation of the sentence in view of the convicted man's admission of guilt and his readiness to make amends to the Soviet authorities. The petition of the Military Collegium of the Supreme Court was approved by the Presidium of the Central Executive Committee of the Soviet Union, and the death penalty was commuted to ten years' imprisonment.[4]
While held in the internal prison of the United State Political Administration on Lubyanka, in unprecedented conditions (his mistress Lyubov Dikhoff periodically lived with him in his cell, which had a carpet and furniture; he was allowed to write memoirs, some of which were even published, and he was paid a fee for them), Savinkov kept a diary in which he continued to insist on the fictional version of his motives for crossing the border. On the morning of May 7, 1925, Savinkov wrote a letter to Felix Dzerzhinsky asking him to explain why he was being kept in prison rather than shot or allowed to work for the Soviet regime. Dzerzhinsky did not answer in writing, ordering only that Savinkov be told he had begun talking about freedom too early. That evening, United State Political Administration officers Speransky, Puzitsky and Syroezhkin took Savinkov for a walk in Tsaritsinsky Park; three hours later they returned to Lubyanka, to Pillar's office on the fifth floor, to wait for the guards. At some point Puzitsky left the office, in which remained Speransky, sitting on the sofa, Syroezhkin, sitting at the table, and Savinkov.
Researchers have not reached a consensus about what happened next. The official version states that Savinkov paced the room and suddenly jumped out of the window into the courtyard. However, the investigator who conducted the official inquiry noted that Savinkov had been sitting at a round table opposite one of the extraordinary commissioners. Boris Gudz, a close friend of Syroezhkin who was in the next room at that moment, said in the 1990s that Savinkov walked around the room and dived head-first through the window; Syroezhkin managed to catch him by the leg but could not hold on, as one of his hands was injured. The first report of Savinkov's suicide, written in the United State Political Administration, edited by Felix Dzerzhinsky and approved by Joseph Stalin, was published on May 13 in the newspaper Pravda. The suicide version was circulated by the Soviet press and by part of the émigré community. Alexander Solzhenitsyn was among the first to express doubts about the official version, in The Gulag Archipelago. He wrote that in a Kolyma camp the former extraordinary commissioner Arthur Prubel, as he lay dying, told someone that he was one of four people who had thrown Savinkov out of the window. Some modern historians are also inclined to believe that Savinkov was killed.[4]
In the course of Operation Syndicate–2, most of the "Savinkovites" conducting clandestine work on the territory of the Soviet Union were identified, and the People's Union for the Defense of the Motherland and Freedom was finally crushed: cells of the Union were liquidated in Smolensk, Bryansk, Vitebsk and Gomel Provinces[5] and on the territory of the Petrograd Military District, along with 23 Savinkovite residencies in Moscow, Samara, Saratov, Kharkov, Kiev, Tula and Odessa. There were several major trials, including the "Case of Forty–Four", the "Case of Twelve" and the "Case of Forty–Three".[2] Agents of the People's Union for the Defense of the Motherland and Freedom (Veselov, Gorelov, Nagel–Neiman, Rosselevich, the organizers of terrorist acts V. I. Svezhevsky and Mikhail Gnilorybov, and others) were arrested and convicted.[5] Alexander and Lyubov Dikhoff were amnestied and lived in Moscow until 1936, when they were sentenced to five years in a forced labor camp as "socially dangerous elements" and ended up in Kolyma. There, in 1939, Alexander Arkadyevich was shot. Lyubov Efimovna survived, settled in exile in Magadan and worked as a librarian; she died in Mariupol in 1969. Pavlovsky was shot in 1924; Sheshenya worked for the United State Political Administration and was shot in 1937; Fomichev was released, lived in a village, and was shot in 1929.[3][5]
Vyacheslav Menzhinsky, Roman Pillar, Sergei Puzitsky, Nikolai Demidenko, Andrey Fedorov and Grigory Syroezhkin were awarded the Order of the Red Banner. Artur Artuzov, Ignatiy Sosnovsky, Semyon Gendin and I. P. Krikman received the gratitude of the government of the Soviet Union.[5]
Subsequently, almost all the extraordinary commissioners who took part in the operation were shot during the Stalinist Purges:[3] Andrei Fedorov, who had served in the organs of the All–Russian Extraordinary Commission since 1920 and was one of the main participants in the Syndicate–2 operation and the capture of Boris Savinkov (acting under the guise of the officer Mukhin), was shot on September 20, 1937 in Moscow. Roman Pillar, Sergei Puzitsky, Artur Artuzov and Ignatiy Sosnovsky were shot in 1937, Grigory Syroezhkin and Semyon Gendin in 1939.
The operation formed the basis of the 1968 novel "Retribution" by the writer Vasily Ardamatsky. In the same year, director Vladimir Chebotaryov shot the film "Crash" from a script based on the novel. In 1981, director Mark Orlov filmed a remake as the six–part television movie "Syndicate–2". An operation similar to Syndicate–2 also lies at the heart of the 2014 television series Wolf Sun, which chronicles the activities of Soviet intelligence in Poland in the 1920s.
|
https://en.wikipedia.org/wiki/Syndicate%E2%80%932
|
TheTagantsev conspiracy(or the case of thePetrograd Military Organization) was a non-existentmonarchistconspiracy fabricated by theSoviet secret policein 1921 to both decimate and terrorize potentialSoviet dissidentsagainst the rulingBolshevikregime.[1]As its result, more than 800 people, mostly from scientific and artistic communities in Petrograd (modern-daySaint Petersburg), were arrested on false terrorism charges, out of which 98 were executed and many were sent to labour camps. Among the executed was the poetNikolay Gumilev, the co-founder of the influentialAcmeist movement.
In 1992, all those convicted in the Petrograd Combat Organization (PBO) case were rehabilitated and the case was declared fabricated. In the 1990s, however, documents purporting to confirm the existence of the organization were introduced into scholarly circulation.
The affair was named after Vladimir Nikolaevich Tagantsev, a geographer and member of theRussian Academy of Sciences, who was arrested, tortured, and tricked into disclosing hundreds of names of people who did not like the Bolshevik regime. Among the security officers that manufactured the case wasYakov Agranov, who later became one of the chief organizers ofStalinist show trialsand theGreat Purgein the 1930s. The case was officially declared fabricated and its victimsrehabilitatedby Russian authorities in 1992.[2]
On December 5, 1920, all departments of the Soviet secret policeChekareceived a top secret order fromFeliks Dzerzhinskyto start creatingfalse flagWhite Armyorganizations, "underground and terrorist groups" to facilitate finding "foreign agents on our territory". This was planned partially as a provocation, in order to identify potentially disloyal citizens who might wish to join the Bolsheviks' enemies.[2][3]
A few months later, in February 1921, the Kronstadt rebellion began. This was a left-wing uprising against the Bolshevik regime by soldiers and sailors. Additionally, the Bolsheviks understood that the majority of the intelligentsia did not support them. On March 8, the Council of People's Commissars (Sovnarkom) sent a letter to the People's Commissariat for Education (Narkompros) asking it to identify a group of unreliable intellectuals who could be the target of future repressions.[2]
On June 4, Bolshevik leaderVladimir Leninreceived a telegram fromLeonid Krasinabout a convention of monarchists,cadetsand right-wing members of theSocialist-Revolutionary Partyin Paris who anticipated an uprising against Bolsheviks in Petrograd. Lenin sent a telegram to Cheka co-founderJózef Unszlichtstating that he did not trust the Cheka in Petrograd any longer. In the telegram, he issued an order to urgently send "the most experienced Chekists to Piter" and find the conspirators. This was a signal for the Cheka in Petrograd to fabricate the case.[2]
On May 31, Yuri German, a former officer of the Imperial Russian Army and a Finnish spy, was killed while crossing the Finnish border. He was carrying a notebook with numerous addresses, one of which belonged to Professor Vladimir Tagantsev, whom Cheka agents had previously identified as a "disloyal person". Tagantsev and other people whose addresses were found in the notebook were arrested. Over the next month the Cheka investigators worked hard to manufacture the case, but without much success: those arrested refused to admit any guilt. After intense interrogation, Tagantsev tried to hang himself on June 21. In addition, it was difficult to tie together so many completely unrelated people.[2]
On June 25, two investigators of the Petrograd Cheka, Gubin and Popov, prepared a report according to which the "organization included only Tagantsev and a few couriers and supporters," the conspirators planned "to establish a common language between the intelligentsia and the masses," "terror was not their intention," and German delivered news about current events to Tagantsev from abroad. According to the report, "the organization of Tagantsev had no connection and received no support from the Finnish or other counter-intelligence organizations." The report also noted that "Tagantsev is a cabinet scientist who thought about his organization theoretically" and "was incapable of doing practical work." After this report, the names of Gubin and Popov disappeared from the case, meaning they had been replaced by other investigators.[4]
After this initial failure, Yakov Agranov was appointed to lead the case. He arrested more people and took Tagantsev for interrogation after keeping him in solitary confinement for 45 days. Agranov gave an ultimatum: if Tagantsev did not confess, he and all the other hostages would be executed within three hours; if he agreed to cooperate, no one would be harmed. According to publications by Russian emigrants, the agreement was even signed on paper and personally guaranteed by Vyacheslav Menzhinsky.[2] After signing the agreement, Tagantsev named several hundred people who had criticized the Bolshevik regime. All of them were arrested on July 31 and during the first days of August.
According to the official version invented by Agranov, the leaders of the conspiracy included Tagantsev, the Finnish spy German, and a former colonel of the Russian army, Vyacheslav Shvedov, who acted under the pseudonym "Vyacheslavsky" and shot two Chekists during his arrest. To make the conspiracy look bigger, the investigators included many completely unrelated people: former aristocrats, smugglers, suspicious persons and the wives of those already arrested. The newspaper The Petrograd Pravda published a report by the Petrograd Cheka claiming that Tagantsev's military organization had planned to burn factories, kill people and commit other acts of terrorism using weapons and dynamite.
The most famous victim of the case was the poetNikolay Gumilev. Gumilev was arrested by Cheka on August 3. He admitted that he thought about joining the Kronstadt rebellion if it were to spread to Petrograd and talked about this with Vyacheslavsky. Gumilev was executed by a firing squad, together with 60 other people on August 24 in theKovalevsky Forest.[5]Thirty-seven others were shot on October 3.[6][7]Agranov commented about the operation: "Seventy percent of the Petrograd intelligentsia had one leg in the enemy camp. We had to burn that leg."[8]
The action reportedly failed to terrify the population. According to academician Vladimir Vernadsky, the case "had a shocking effect and produced not a feeling of fear, but of hatred and contempt" against the Bolsheviks.[7] After the Tagantsev case, Lenin decided that it would be easier to exile undesired intellectuals.[7]
|
https://en.wikipedia.org/wiki/Tagantsev_conspiracy
|
Variousanti-spam techniquesare used to preventemail spam(unsolicited bulk email).
No technique is a complete solution to the spam problem, and each hastrade-offsbetween incorrectly rejecting legitimate email (false positives) as opposed to not rejecting all spam email (false negatives) – and the associated costs in time, effort, and cost of wrongfully obstructing good mail.[1]
Anti-spam techniques can be broken into four broad categories: those that require actions by individuals, those that can be automated by email administrators, those that can be automated by email senders and those employed by researchers and law enforcement officials.
There are a number of techniques that individuals can use to restrict the availability of their email addresses, with the goal of reducing their chance of receiving spam.
Sharing an email address only among a limited group of correspondents is one way to limit the chance that the address will be "harvested" and targeted by spam. Similarly, when forwarding messages to a number of recipients who don't know one another, recipient addresses can be put in the "bcc: field" so that each recipient does not get a list of the other recipients' email addresses.
Email addresses posted on webpages, Usenet or chat rooms are vulnerable to e-mail address harvesting.[2] Address munging is the practice of disguising an e-mail address to prevent it from being automatically collected in this way, while still allowing a human reader to reconstruct the original: an email address such as "no-one@example.com" might be written as "no-one at example dot com", for instance. A related technique is to display all or part of the email address as an image, or as jumbled text with the order of characters restored using CSS.
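The textual munging described above is easy to script. The following sketch is illustrative only; the helper names are invented for the example and not taken from any standard library.

```python
import re

def munge(address: str) -> str:
    """Disguise an email address so simple harvesters miss it,
    while a human can still reconstruct it."""
    return address.replace("@", " at ").replace(".", " dot ")

def unmunge(text: str) -> str:
    """Reverse the textual munging applied by munge()."""
    return re.sub(r"\s+dot\s+", ".", re.sub(r"\s+at\s+", "@", text))

print(munge("no-one@example.com"))            # no-one at example dot com
print(unmunge("no-one at example dot com"))   # no-one@example.com
```

Any fixed scheme like this can of course be reversed by a harvester that knows it, which is why image-based or CSS-based obfuscation is sometimes preferred.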
A common piece of advice is not to reply to spam messages,[3] as spammers may simply regard responses as confirmation that an email address is valid. Similarly, many spam messages contain web links or addresses which the user is directed to follow to be removed from the spammer's mailing list; these should be treated as dangerous. In any case, sender addresses are often forged in spam messages, so responding to spam may result in failed deliveries or may reach completely innocent third parties.
Businesses and individuals sometimes avoid publicising an email address by asking for contact to come via a "contact form" on a webpage – which then typically forwards the information via email. Such forms, however, are sometimes inconvenient to users, as they are not able to use their preferred email client, risk entering a faulty reply address, and are typically not notified about delivery problems. Further, contact forms have the drawback that they require a website with the appropriate technology.
In some cases contact forms also send the message to the email address given by the user. This allows the contact form to be used for sending spam, which may incur email deliverability problems from the site once the spam is reported and the sending IP is blacklisted.
Many modern mail programs incorporateweb browserfunctionality, such as the display ofHTML, URLs, and images.
Avoiding or disabling this feature does not help avoid spam. It may, however, be useful to avoid some problems if a user opens a spam message: offensive images, obfuscated hyperlinks, being tracked byweb bugs, being targeted byJavaScriptor attacks upon security vulnerabilities in the HTML renderer. Mail clients which do not automatically download and display HTML, images or attachments have fewer risks, as do clients who have been configured to not display these by default.
An email user may sometimes need to give an address to a site without complete assurance that the site owner will not use it for sending spam. One way to mitigate the risk is to provide adisposableemail address — an address which the user can disable or abandon which forwards email to a real account. A number of services provide disposable address forwarding. Addresses can be manually disabled, can expire after a given time interval, or can expire after a certain number of messages have been forwarded.
Disposable email addresses can be used by users to track whether a site owner has disclosed an address, or had asecurity breach.[4]
Systems that use "ham passwords" ask unrecognised senders to include in their email a password that demonstrates that the email message is a "ham" (not spam) message. Typically the email address and ham password would be described on a web page, and the ham password would be included in the subject line of an email message (or appended to the "username" part of the email address using the "plus addressing" technique). Ham passwords are often combined with filtering systems which let through only those messages that have identified themselves as "ham".[5]
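As a rough illustration of how a filter might honour a ham password, the sketch below assumes the password has been published as "opensesame" and may arrive either in the subject line or as a plus-addressed tag; the function name and message fields are invented for the example.

```python
HAM_PASSWORD = "opensesame"  # assumed published password; illustrative only

def is_ham_by_password(subject: str, rcpt_to: str) -> bool:
    """Return True if the message carries the published ham password,
    either in the subject line or in the plus-addressed local part."""
    if HAM_PASSWORD.lower() in subject.lower():
        return True
    local_part = rcpt_to.split("@", 1)[0]
    if "+" in local_part:
        tag = local_part.split("+", 1)[1]
        return tag.lower() == HAM_PASSWORD.lower()
    return False

print(is_ham_by_password("Hello [opensesame]", "user@example.com"))    # True
print(is_ham_by_password("Hello", "user+opensesame@example.com"))      # True
print(is_ham_by_password("Cheap pills", "user@example.com"))           # False
```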
Tracking down a spammer's ISP and reporting the offense can lead to the spammer's service being terminated[6]and criminal prosecution.[7]Unfortunately, it can be difficult to track down the spammer, and while there are some online tools such asSpamCopand Network Abuse Clearinghouse to assist, they are not always accurate. Historically, reporting spam in this way has not played a large part in abating spam, since the spammers simply move their operation to another URL, ISP or network of IP addresses.
In many countries consumers may also report unwanted and deceptive commercial email to the authorities, e.g. in the US to theUS Federal Trade Commission(FTC),[8]or similar agencies in other countries.[9]
There are now a large number of applications, appliances, services, and software systems that email administrators can use to reduce the load of spam on their systems and mailboxes. In general these attempt to reject (or "block") the majority of spam email outright at the SMTP connection stage. If they do accept a message, they will typically analyze the content further, and may decide to "quarantine" any message categorised as spam.
A number of systems have been developed that allow domain name owners to identify email as authorized. Many of these systems use the DNS to list sites authorized to send email on their behalf. After many other proposals,SPF,DKIMandDMARCare all now widely supported with growing adoption.[10][11][12]While not directly attacking spam, these systems make it much harder tospoof addresses, a common technique of spammers - but also used inphishing, and other types of fraud via email.
A method which may be used by internet service providers, by specialized services or enterprises to combat spam is to require unknown senders to pass various tests before their messages are delivered. These strategies are termed "challenge/response systems".
Checksum-based filtering exploits the fact that spam messages are sent in bulk, that is, that they will be identical apart from small variations. Checksum-based filters strip out everything that might vary between messages, reduce what remains to a checksum, and look that checksum up in a database such as the Distributed Checksum Clearinghouse, which collects the checksums of messages that email recipients consider to be spam (some people have a button on their email client which they can click to nominate a message as spam); if the checksum is in the database, the message is likely to be spam. To avoid detection in this way, spammers will sometimes insert unique invisible gibberish known as hashbusters into the middle of each of their messages, so that each message has a unique checksum.
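The following sketch illustrates the general idea of checksum-based filtering, not the actual algorithm used by DCC or any other service: the body is normalised to strip the small variations spammers insert, hashed, and compared against a locally maintained set of checksums that recipients have reported as spam.

```python
import hashlib
import re

# Hypothetical local store of checksums already reported as spam; a real
# deployment would query a shared service such as DCC instead.
KNOWN_SPAM_CHECKSUMS = set()

def body_checksum(body: str) -> str:
    """Reduce a message body to a checksum that ignores small per-recipient
    variations such as case, digits and whitespace."""
    normalized = body.lower()
    normalized = re.sub(r"\d+", "", normalized)   # drop numeric tokens that often vary
    normalized = re.sub(r"\s+", " ", normalized)  # collapse whitespace differences
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def looks_like_bulk_spam(body: str) -> bool:
    return body_checksum(body) in KNOWN_SPAM_CHECKSUMS

# A recipient reporting a message as spam adds its checksum to the store:
KNOWN_SPAM_CHECKSUMS.add(body_checksum("Buy   CHEAP pills 123 now"))
print(looks_like_bulk_spam("Buy cheap pills 456 NOW"))  # True: same normalized body
```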
Some email servers expect to never communicate with particular countries from which they receive a great deal of spam. Therefore, they use country-based filtering – a technique that blocks email from certain countries. This technique is based on country of origin determined by the sender's IP address rather than any trait of the sender.
There are a large number of free and commercial DNS-based blacklists, or DNSBLs, which allow a mail server to quickly look up the IP address of an incoming mail connection and reject it if it is listed there. Administrators can choose from scores of DNSBLs, each of which reflects different policies: some list sites known to emit spam; others list open mail relays or proxies; others list ISPs known to support spam.
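A DNSBL query is an ordinary DNS lookup: the connecting address's octets are reversed, the list's zone is appended, and any A-record answer means the address is listed. The sketch below uses the real zone zen.spamhaus.org purely as an example; the function name and response string are illustrative.

```python
import socket

def is_listed(ip: str, dnsbl_zone: str = "zen.spamhaus.org") -> bool:
    """Check an IPv4 address against a DNSBL.

    The octets are reversed and the list's zone appended, e.g.
    192.0.2.1 -> 1.2.0.192.zen.spamhaus.org; any A-record answer
    means the address is listed."""
    query = ".".join(reversed(ip.split("."))) + "." + dnsbl_zone
    try:
        socket.gethostbyname(query)
        return True            # an answer means "listed"
    except socket.gaierror:
        return False           # NXDOMAIN means "not listed"

# Typical use at SMTP connection time:
if is_listed("192.0.2.1"):
    print("550 rejected: sending host is on a block list")
```

Note that major public DNSBLs typically refuse or limit queries arriving via large shared resolvers, so production deployments usually query through a local resolver or a commercial data feed.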
Most spam/phishing messages contain a URL that they entice victims into clicking on. Thus, a popular technique since the early 2000s consists of extracting URLs from messages and looking them up in databases such as Spamhaus' Domain Block List (DBL), SURBL, and URIBL.[13]
Many spammers use poorly written software or are unable to comply with the standards because they do not have legitimate control of the computer they are using to send spam (zombie computer). By setting tighter limits on the deviation from RFC standards that theMTAwill accept, a mail administrator can reduce spam significantly - but this also runs the risk of rejecting mail from older or poorly written or configured servers.
Greeting delay– A sending server is required to wait until it has received the SMTP greeting banner before it sends any data. A deliberate pause can be introduced by receiving servers to allow them to detect and deny any spam-sending applications that do not wait to receive this banner.
Temporary rejection – The greylisting technique is built on the fact that the SMTP protocol allows for temporary rejection of incoming messages. Greylisting temporarily rejects all messages from unknown senders or mail servers, using the standard 4xx error codes.[14] All compliant MTAs will proceed to retry delivery later, but many spammers and spambots will not. The downside is that all legitimate messages from first-time senders will experience a delay in delivery (a sketch of the triplet bookkeeping involved appears after this list).
HELO/EHLO checking – RFC 5321 says that an SMTP server "MAY verify that the domain name argument in the EHLO command actually corresponds to the IP address of the client. However, if the verification fails, the server MUST NOT refuse to accept a message on that basis." Systems can, however, be configured to reject connections from hosts whose HELO/EHLO argument is clearly bogus, for example one that is not a fully qualified domain name, that is a bare IP address not enclosed in square brackets, or that claims to be the receiving server's own hostname.
Invalid pipelining– Several SMTP commands are allowed to be placed in one network packet and "pipelined". For example, if an email is sent with a CC: header, several SMTP "RCPT TO" commands might be placed in a single packet instead of one packet per "RCPT TO" command. The SMTP protocol, however, requires that errors be checked and everything is synchronized at certain points. Many spammers will send everything in a single packet since they do not care about errors and it is more efficient. Some MTAs will detect this invalid pipelining and reject email sent this way.
Nolisting– The email servers for any given domain are specified in a prioritized list, via theMX records. Thenolistingtechnique is simply the adding of an MX record pointing to a non-existent server as the "primary" (i.e. that with the lowest preference value) – which means that an initial mail contact will always fail. Many spam sources do not retry on failure, so the spammer will move on to the next victim; legitimate email servers should retry the next higher numbered MX, and normal email will be delivered with only a brief delay.
Quit detection– An SMTP connection should always be closed with a QUIT command. Many spammers skip this step because their spam has already been sent and taking the time to properly close the connection takes time and bandwidth. Some MTAs are capable of detecting whether or not the connection is closed correctly and use this as a measure of how trustworthy the other system is.
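As a concrete illustration of the temporary rejection (greylisting) item above, here is a minimal sketch of the triplet bookkeeping, assuming an in-memory store and a fixed retry window; the function name, storage and response texts are invented for the example, and real implementations persist the triplets and whitelist hosts that retry correctly.

```python
import time

GREYLIST = {}        # (client_ip, sender, recipient) -> first-seen timestamp
RETRY_AFTER = 300    # seconds a new triplet must wait before acceptance

def check_greylist(client_ip: str, sender: str, recipient: str) -> str:
    """Return an SMTP-style response for the greylisting decision."""
    triplet = (client_ip, sender, recipient)
    now = time.time()
    first_seen = GREYLIST.get(triplet)
    if first_seen is None:
        GREYLIST[triplet] = now
        return "450 4.2.0 Greylisted, please retry later"   # temporary rejection
    if now - first_seen < RETRY_AFTER:
        return "450 4.2.0 Greylisted, please retry later"
    return "250 2.0.0 OK"    # the sender retried after the delay: accept

print(check_greylist("192.0.2.1", "a@example.org", "b@example.net"))  # 450 ...
```

A compliant MTA will retry after the 4xx response and be accepted on the second attempt; most spambots never come back.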
Another approach is simply creating an imitation MTA that gives the appearance of being an open mail relay, or an imitation TCP/IP proxy server that gives the appearance of being an open proxy. Spammers who probe systems for open relays and proxies will find such a host and attempt to send mail through it, wasting their time and resources, and potentially, revealing information about themselves and the origin of the spam they are sending to the entity that operates the honeypot. Such a system may simply discard the spam attempts, submit them toDNSBLs, or store them for analysis by the entity operating the honeypot that may enable identification of the spammer for blocking.
SpamAssassin, Policyd-weight and others use some or all of the various tests for spam, and assign a numerical score to each test. Each message is scanned for these patterns, and the applicable scores tallied up. If the total is above a fixed value, the message is rejected or flagged as spam. By ensuring that no single spam test by itself can flag a message as spam, the false positive rate can be greatly reduced.
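A toy version of such a scoring scheme might look like the sketch below; the rules, weights and threshold are invented for illustration and bear no relation to SpamAssassin's actual rule set.

```python
import re

# Illustrative rules and weights; not any real filter's rule set.
RULES = [
    (re.compile(r"viagra", re.I),        2.5),
    (re.compile(r"100% free", re.I),     1.8),
    (re.compile(r"[A-Z]{10,}"),          1.0),   # long runs of capital letters
    (re.compile(r"^Subject:\s*$", re.M), 0.5),   # empty subject line
]
SPAM_THRESHOLD = 5.0

def score(message: str) -> float:
    """Sum the weights of every rule that matches the raw message."""
    return sum(weight for pattern, weight in RULES if pattern.search(message))

def classify(message: str) -> str:
    return "spam" if score(message) >= SPAM_THRESHOLD else "ham"

msg = "Subject: 100% FREE VIAGRA!!!\n\nCONGRATULATIONS, you have been selected"
print(score(msg), classify(msg))   # score above the threshold, classified as spam
```

Because no single rule carries enough weight to cross the threshold on its own, one unlucky match in a legitimate message does not cause a false positive.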
Outbound spam protection involves scanning email traffic as it exits a network, identifying spam messages and then taking an action such as blocking the message or shutting off the source of the traffic. While the primary impact ofspamis on spam recipients, sending networks also experience financial costs, such as wasted bandwidth, and the risk of having their IP addresses blocked by receiving networks.
Outbound spam protection not only stops spam, but also lets system administrators track down spam sources on their network and remediate them – for example, clearing malware from machines which have become infected with avirusor are participating in abotnet.
The PTR DNS records in the reverse DNS can be used for a number of things, including: checking that a PTR record exists at all for the connecting IP address; performing forward-confirmed reverse DNS (FCrDNS), i.e. verifying that the hostname in the PTR record resolves back to the original IP address; and spotting generic, ISP-assigned hostnames typical of dynamic or residential address space, which rarely belong to legitimate mail servers.
Content filtering techniques rely on the specification of lists of words or regular expressions disallowed in mail messages. Thus, if a site receives spam advertising "herbal Viagra", the administrator might place this phrase in the filter configuration. The mail server would then reject any message containing the phrase.
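In its simplest form such a content filter is just a list of patterns applied to the message body, as in this sketch; the phrases and the SMTP reply text are illustrative.

```python
import re

# Administrator-maintained list of disallowed phrases (illustrative).
BLOCKED_PATTERNS = [
    re.compile(r"herbal\s+viagra", re.IGNORECASE),
    re.compile(r"work\s+from\s+home\s+and\s+earn", re.IGNORECASE),
]

def content_filter(body: str) -> bool:
    """Return True if the message body matches any disallowed pattern."""
    return any(p.search(body) for p in BLOCKED_PATTERNS)

if content_filter("Try our new Herbal   Viagra today!"):
    print("550 5.7.1 Message rejected by content filter")
```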
Header filtering looks at the header of the email which contains information about the origin, destination and content of the message. Although spammers will oftenspooffields in the header in order to hide their identity, or to try to make the email look more legitimate than it is, many of these spoofing methods can be detected, and any violation of, e.g.,RFC5322,7208, standards on how the header is to be formed can also serve as a basis for rejecting the message.
Since a large percentage of spam has forged and invalid sender ("from") addresses, some spam can be detected by checking that this "from" address is valid. A mail server can try to verify the sender address by making an SMTP connection back to the mail exchanger for the address, as if it were creating a bounce, but stopping just before any email is sent.
Callback verification has various drawbacks: (1) since nearly all spam has forged return addresses, nearly all callbacks are to innocent third-party mail servers that are unrelated to the spam; (2) if the spammer uses a trap address as the sender address and the receiving MTA makes the callback using that trap address in a MAIL FROM command, the receiving MTA's IP address will be blacklisted; (3) finally, the standard VRFY and EXPN commands[16] used to verify an address have been so exploited by spammers that few mail administrators enable them, leaving the receiving SMTP server no effective way to validate the sender's email address.[17]
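For illustration, a callback check might be sketched as follows, using the third-party dnspython package for the MX lookup and the standard smtplib module for the SMTP dialogue; the HELO name is a placeholder, and, as the drawbacks above suggest, this is shown to explain the mechanism rather than to recommend it.

```python
import smtplib
import dns.resolver   # third-party package "dnspython" (assumed installed)

def callback_verify(sender: str, helo_name: str = "mail.example.org") -> bool:
    """Connect to the sender domain's lowest-preference MX and test whether
    it would accept mail for the sender address, stopping before DATA."""
    domain = sender.rsplit("@", 1)[-1]
    try:
        mx_records = sorted(dns.resolver.resolve(domain, "MX"),
                            key=lambda r: r.preference)
        mx_host = str(mx_records[0].exchange).rstrip(".")
        with smtplib.SMTP(mx_host, 25, timeout=10) as smtp:
            smtp.helo(helo_name)
            smtp.mail("")                 # null reverse-path, as a bounce would use
            code, _ = smtp.rcpt(sender)   # 250 here means the address was accepted
        return 200 <= code < 300
    except Exception:
        return False                      # DNS failure, timeout, rejection, ...

print(callback_verify("postmaster@example.com"))
```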
SMTP proxies allow spam to be combated in real time, combining controls on sender behaviour with immediate feedback to legitimate users, and eliminating the need for quarantine.
Spamtrapping is the seeding of an email address so that spammers can find it, but normal users cannot. If the email address is used, then the sender must be a spammer and is blacklisted.
As an example, if the email address "spamtrap@example.org" is placed in the source HTML of a web site in a way that it is not displayed on the web page, human visitors to the website will never see it. Spammers, on the other hand, use web page scrapers and bots to harvest email addresses from HTML source code, so they will find this address. When the spammer later sends a message to the address, the spamtrap operator knows it is highly likely to be a spammer and can take appropriate action.
Statistical, or Bayesian, filtering once set up requires no administrative maintenance per se: instead, users mark messages asspamornonspamand the filtering software learns from these judgements. Thus, it is matched to theend user'sneeds, and as long as users consistently mark/tag the emails, can respond quickly to changes in spam content. Statistical filters typically also look at message headers, considering not just the content but also peculiarities of the transport mechanism of the email.
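A toy naive-Bayes filter conveys the idea: user judgements update per-class word counts, and new messages are scored from those counts. Real filters such as Bogofilter use more careful tokenisation, smoothing and storage; everything below is simplified for illustration.

```python
import math
import re
from collections import Counter

spam_words, ham_words = Counter(), Counter()
spam_msgs = ham_msgs = 0

def tokens(text: str):
    return re.findall(r"[a-z']+", text.lower())

def train(text: str, is_spam: bool):
    """Called whenever the user marks a message as spam or non-spam."""
    global spam_msgs, ham_msgs
    if is_spam:
        spam_words.update(tokens(text)); spam_msgs += 1
    else:
        ham_words.update(tokens(text)); ham_msgs += 1

def spam_probability(text: str) -> float:
    """Naive-Bayes estimate with add-one smoothing (simplified)."""
    log_spam = math.log((spam_msgs + 1) / (spam_msgs + ham_msgs + 2))
    log_ham = math.log((ham_msgs + 1) / (spam_msgs + ham_msgs + 2))
    vocab = set(spam_words) | set(ham_words)
    for w in tokens(text):
        log_spam += math.log((spam_words[w] + 1) / (sum(spam_words.values()) + len(vocab) + 1))
        log_ham += math.log((ham_words[w] + 1) / (sum(ham_words.values()) + len(vocab) + 1))
    return 1 / (1 + math.exp(log_ham - log_spam))

train("cheap pills buy now", True)
train("meeting notes attached, see agenda", False)
print(spam_probability("buy cheap pills"))          # clearly higher
print(spam_probability("agenda for the meeting"))   # clearly lower
```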
Software programs that implement statistical filtering includeBogofilter,DSPAM,SpamBayes,ASSP,CRM114, the email programsMozillaandMozilla Thunderbird,Mailwasher, and later revisions ofSpamAssassin.
Atarpitis any server software which intentionally responds extremely slowly to client commands. By running a tarpit which treats acceptable mail normally and known spam slowly or which appears to be an open mail relay, a site can slow down the rate at which spammers can inject messages into the mail facility. Depending on the server and internet speed, a tarpit can slow an attack by a factor of around 500.[18]Many systems will simply disconnect if the server doesn't respond quickly, which will eliminate the spam. However, a few legitimate email systems will also not deal correctly with these delays. The fundamental idea is to slow the attack so that the perpetrator has to waste time without any significant success.[19]
An organization can successfully deploy a tarpit if it is able to define the range of addresses, protocols, and ports for deception.[20]The process involves a router passing the supported traffic to the appropriate server while those sent by other contacts are sent to the tarpit.[20]Examples of tarpits include the Labrea tarpit, Honeyd,[21]SMTP tarpits, and IP-level tarpits.
Measures to protect against spam can cause collateral damage. This includes legitimate messages being wrongly rejected or discarded (false positives), legitimate senders being blocked because they share an IP address, mail server or network with spammers, and the extra time and resources that users and administrators must spend dealing with misclassified mail.
There are a variety of techniques that email senders use to try to make sure that they do not send spam. Failure to control the amount of spam sent, as judged by email receivers, can often cause even legitimate email to be blocked and for the sender to be put onDNSBLs.
Since spammers' accounts are frequently disabled due to violations of abuse policies, they are constantly trying to create new accounts. Due to the damage done to an ISP's reputation when it is the source of spam, many ISPs and web email providers use CAPTCHAs on new accounts to verify that it is a real human registering the account, and not an automated spamming system. They can also verify that credit cards are not stolen before accepting new customers, check the Spamhaus Project ROKSO list, and do other background checks.
A malicious person can easily attempt to subscribe another user to amailing list— to harass them, or to make the company or organisation appear to be spamming. To prevent this, all modern mailing list management programs (such asGNU Mailman,LISTSERV,Majordomo, andqmail's ezmlm) support "confirmed opt-in" by default. Whenever an email address is presented for subscription to the list, the software will send a confirmation message to that address. The confirmation message contains no advertising content, so it is not construed to be spam itself, and the address is not added to the live mail list unless the recipient responds to the confirmation message.
Email senders typically now do the same type of anti-spam checks on email coming from their users and customers as for inward email coming from the rest of the Internet. This protects their reputation, which could otherwise be harmed in the case of infection by spam-sending malware.
If a receiving server initially fully accepts an email, and only later determines that the message is spam or to a non-existent recipient, it will generate abounce messageback to the supposed sender. However, if (as is often the case with spam), the sender information on the incoming email was forged to be that of an unrelated third party then this bounce message isbackscatter spam. For this reason it is generally preferable for most rejection of incoming email to happen during the SMTP connection stage, with a 5xx error code, while the sending server is still connected. In this case then thesendingserver will report the problem to the real sender cleanly.
Firewallsandrouterscan be programmed to not allowSMTPtraffic (TCP port 25) from machines on the network that are not supposed to runMail Transfer Agentsor send email.[22]This practice is somewhat controversial when ISPs block home users, especially if the ISPs do not allow the blocking to be turned off upon request. Email can still be sent from these computers to designatedsmart hostsvia port 25 and to other smart hosts via the email submission port 587.
Network address translation can be used to intercept all port 25 (SMTP) traffic and direct it to a mail server that enforces rate limiting and egress spam filtering. This is commonly done in hotels,[23] but it can cause email privacy problems, as well as making it impossible to use STARTTLS and SMTP-AUTH if the port 587 submission port isn't used.
Machines that suddenly start sending lots of email may well have becomezombie computers. By limiting the rate that email can be sent around what is typical for the computer in question, legitimate email can still be sent, but large spam runs can be slowed down until manual investigation can be done.[24]
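A sliding-window rate limit per sending machine is one simple way to implement this; in the sketch below the hourly quota is an assumed value and the function name is illustrative.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600
MAX_PER_WINDOW = 100           # assumed "typical" hourly volume for one machine

sent_log = defaultdict(deque)  # client IP -> timestamps of recent submissions

def allow_submission(client_ip: str) -> bool:
    """Permit the message unless the client has exceeded its hourly quota;
    over-quota clients are deferred pending manual investigation."""
    now = time.time()
    log = sent_log[client_ip]
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()                 # discard entries outside the window
    if len(log) >= MAX_PER_WINDOW:
        return False                  # defer: possible zombie machine
    log.append(now)
    return True

if not allow_submission("10.0.0.23"):
    print("450 4.7.1 Rate limit exceeded, try again later")
```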
By monitoring spam reports from places such as SpamCop, AOL's feedback loop, the Network Abuse Clearinghouse and the domain's abuse@ mailbox, ISPs can often learn of problems before they seriously damage the ISP's reputation and its mail servers are blacklisted.
Both malicious software and human spam senders often use forged FROM addresses when sending spam messages. Control may be enforced on SMTP servers to ensure senders can only use their correct email address in the FROM field of outgoing messages. In an email users database each user has a record with an email address. The SMTP server must check if the email address in the FROM field of an outgoing message is the same address that belongs to the user's credentials, supplied for SMTP authentication. If the FROM field is forged, an SMTP error will be returned to the email client (e.g. "You do not own the email address you are trying to send from").
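Enforcing this check amounts to comparing the parsed From: address against the addresses registered to the authenticated account, as in this sketch; the account table and function name are invented for the example.

```python
from email.utils import parseaddr

# Illustrative account table: SMTP AUTH username -> registered address(es).
USER_ADDRESSES = {
    "alice": {"alice@example.com", "a.smith@example.com"},
    "bob": {"bob@example.com"},
}

def from_header_allowed(auth_user: str, from_header: str) -> bool:
    """Accept the message only if the From: address belongs to the
    authenticated account."""
    _, address = parseaddr(from_header)
    return address.lower() in USER_ADDRESSES.get(auth_user, set())

print(from_header_allowed("alice", "Alice Smith <a.smith@example.com>"))  # True
print(from_header_allowed("bob", "Alice Smith <alice@example.com>"))      # False -> reject
```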
Most ISPs andwebmailproviders have either anAcceptable Use Policy(AUP) or aTerms of Service(TOS) agreement that discourages spammers from using their system and allows the spammer to be terminated quickly for violations.
From 2000 onwards, many countries enacted specific legislation to criminalize spamming, and appropriatelegislationandenforcementcan have a significant impact on spamming activity.[25]Where legislation provides specific text that bulk emailers must include, this also makes "legitimate" bulk email easier to identify.
Increasingly, anti-spam efforts have led to co-ordination between law enforcement, researchers, major consumer financial service companies andInternet service providersin monitoring and tracking email spam,identity theftandphishingactivities and gathering evidence for criminal cases.[26]
Analysis of the sites beingspamvertisedby a given piece of spam can often be followed up with domain registrars with good results.[27]
Several approaches have been proposed to improve the email system.
Since spamming is facilitated by the fact that large volumes of email are very inexpensive to send, one proposed set of solutions would require that senders pay some cost in order to send email, making it prohibitively expensive for spammers. Anti-spam activistDaniel Balsamattempts to make spamming less profitable by bringing lawsuits against spammers.[28]
Artificial intelligence techniques can be deployed for filtering spam emails, such as artificial neural network algorithms and Bayesian filters. These methods use probabilistic methods to train the networks, for example by examining the concentration or frequency of words seen in spam versus legitimate email content.[29]
Channel email is a new proposal for sending email that attempts to distribute anti-spam activities by forcing verification (probably usingbounce messagesso back-scatter does not occur) when the first email is sent for new contacts.
Spam is the subject of several research conferences, including:
|
https://en.wikipedia.org/wiki/Anti-spam_techniques_(e-mail)
|
Smtp-sink is a utility program in the Postfix mail software package that implements a "black hole" function. It listens on the named host (or address) and port. It accepts Simple Mail Transfer Protocol (SMTP) messages from the network and discards them. The purpose is to support measurement of client performance. It is not SMTP protocol compliant.
Connections can be accepted onIPv4orIPv6endpoints, or onUNIX-domain sockets. IPv4 and IPv6 are the default. This program is the complement of the smtp-source(1) program.[1]
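The same black-hole behaviour can be sketched in a few lines of Python; this is an illustration of the concept only, not the Postfix smtp-sink implementation, and like smtp-sink it is deliberately not a compliant SMTP server.

```python
import socketserver

class BlackHoleSMTP(socketserver.StreamRequestHandler):
    """Accept SMTP transactions and silently discard every message."""

    def send(self, line: str):
        self.wfile.write((line + "\r\n").encode("ascii"))

    def handle(self):
        self.send("220 blackhole ESMTP")
        in_data = False
        for raw in self.rfile:
            line = raw.decode("ascii", "replace").rstrip("\r\n")
            if in_data:
                if line == ".":              # end of message body: discard it
                    in_data = False
                    self.send("250 OK (discarded)")
                continue
            verb = line.split(" ", 1)[0].upper()
            if verb in ("HELO", "EHLO", "MAIL", "RCPT", "NOOP", "RSET"):
                self.send("250 OK")
            elif verb == "DATA":
                in_data = True
                self.send("354 End data with <CR><LF>.<CR><LF>")
            elif verb == "QUIT":
                self.send("221 Bye")
                return
            else:
                self.send("502 Command not implemented")

if __name__ == "__main__":
    # Listen on localhost:2525 and swallow everything sent to it.
    with socketserver.ThreadingTCPServer(("127.0.0.1", 2525), BlackHoleSMTP) as srv:
        srv.serve_forever()
```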
|
https://en.wikipedia.org/wiki/Mail-sink
|
Incognitive linguistics,conceptual metaphor, orcognitive metaphor, refers to the understanding of one idea, orconceptual domain, in terms of another. An example of this is the understanding ofquantityin terms ofdirectionality(e.g. "the price of peace isrising") or the understanding of time in terms of money (e.g. "Ispenttime at work today").
A conceptual domain can be any mental organization of human experience. The regularity with which different languages employ the same metaphors, often perceptually based, has led to the hypothesis that the mapping between conceptual domains corresponds to neural mappings in the brain.[1][2]This theory gained wide attention in the 1990s and early 2000s, although some researchers question its empirical accuracy.[3]
The conceptual metaphor theory proposed byGeorge Lakoffand his colleagues arose from linguistics, but became of interest tocognitive scientistsdue to its claims about the mind, the brain and their connections to the body. There is empirical evidence that supports the claim that at least some metaphors are conceptual.[4]However, the empirical evidence for some aspects of the theory has been mixed. It is generally agreed that metaphors form an important part of human verbal conceptualization, but there is disagreement about the more specific claims conceptual metaphor theory makes about metaphor comprehension. For instance, metaphoric expressions of the formX is a Y(e.g.My job is a jail) may not activate conceptual mappings in the same way that other metaphoric expressions do. Furthermore, evidence suggests that the links between the body and conceptual metaphor, while present, may not be as extreme as some conceptual metaphor theorists have suggested.[5]
Furthermore, certain claims from early conceptual metaphor theory have not been borne out. For instance, Lakoff asserted that human metaphorical thinking seems to work effortlessly,[6]but psychological research on comprehension (as opposed, for example, to invention) has found that metaphors are actually more difficult to process than non-metaphoric expressions.[citation needed]Furthermore, when metaphors lose their novelty and become conventionalized, they eventually lose their status as metaphors and become processed like ordinary words (an instance ofgrammaticalization).[citation needed]Therefore, the role of the conceptual metaphor in processing human thinking is more limited than what was claimed by some linguistic theories.[5][need quotation to verify]
The idea of conceptual metaphors as being the basis of rational thinking, and a detailed examination of the underlying processes, was first extensively explored by George Lakoff andMark Johnsonin their workMetaphors We Live Byin 1980. Since then, the field of metaphor studies within the larger discipline ofcognitive linguisticshas increasingly developed, with several annual academic conferences, scholarly societies, and research labs contributing to the subject area. Some researchers, such as Gerard Steen, have worked to develop empirical investigative tools for metaphor research, including theMetaphor Identification Procedure, or MIP.[7]In Psychology,Raymond W. Gibbs, Jr., has investigated conceptual metaphor andembodimentthrough a number of psychological experiments. Othercognitive scientists, for exampleGilles Fauconnier, study subjects similar to conceptual metaphor under the labels "analogy", "conceptual blending" and "ideasthesia".
Conceptual metaphors are useful for understanding complex ideas in simple terms and therefore are frequently used to give insight to abstract theories and models. For example, the conceptual metaphor of viewing communication as aconduitis one large theory explained with a metaphor. So not only is our everyday communication shaped by the language of conceptual metaphors, but so is the very way we understand scholarly theories. These metaphors are prevalent in communication and we do not just use them in language; we actually perceive and act in accordance with the metaphors.
In the Western philosophical tradition, Aristotle is often situated as the first commentator on the nature of metaphor, writing in the Poetics, "A 'metaphorical term' involves the transferred use of a term that properly belongs to something else,"[8] and elsewhere, in the Rhetoric, he says that metaphors make learning pleasant: "To learn easily is naturally pleasant to all people, and words signify something, so whatever words create knowledge in us are the pleasantest."[9] Aristotle's writings on metaphor constitute a "substitution view" of metaphor, wherein a metaphor is simply a decorative word or phrase substituted for a more ordinary one. This has sometimes been called the "Traditional View of Metaphor"[10] and at other times the "Classical Theory of Metaphor".[11] Later, in the first century A.D., the Roman rhetorician Quintilian built upon Aristotle's earlier work on metaphor by focusing more on the comparative function of metaphorical language. In his work Institutio Oratoria, Quintilian states, "In totum autem metaphora brevior est similitudo" ("on the whole, metaphor is a shorter form of simile").[12] Other philosophers throughout history have lent their perspectives to the discussion of metaphor as well. Friedrich Nietzsche, for example, claimed that language as a whole did not portray reality but instead made a series of bold metaphors. Nietzsche believed that each step of cognition, the transfer of real-world information to nerve stimuli, the culmination of nerve stimuli into mental images, the translation of mental images to words, was metaphorical.[13] Modern interpretations of these early theories have also been intensely debated. Janet Soskice, Professor of Philosophical Theology at the University of Cambridge, writes in summary that "it is certain that we shall taste the freshness of their insights only if we free them from the obligation to answer questions that were never theirs to ask".[10] George Lakoff and Mark Johnson, although originally taking a hard-line interpretation of these early authors,[11][14] later concede that Aristotle was working within a different philosophical framework from what we engage with today and that critical interpretations should take this into account.[15]
In his 2007 bookThe Stuff of Thought, cognitive scientistSteven Pinkerlays out several useful classifications for the study of conceptual metaphor. Pinker first contrasts two perspectives on metaphor, what he calls the killjoy theory and the messianic theory. The killjoy theory categorizes metaphors as "dead", that is it asserts that modern day speakers are not aware of the comparison made between source and target domains in the everyday metaphors they use. For example, many are not cognizant that the phrase "to come to a head" refers to the accumulation of pus in a pimple. In contrast, the messianic theory correlates more closely with Lakoff and Johnson's idea of a conceptual metaphor. This view states that users of metaphors are aware of how the metaphor maps onto the domains and use them to relate shared perceptual experiences to more complex thoughts.[16]
Another important distinction made by Pinker is that between literary, or poetic, metaphors and conceptual, or generative, metaphors. Poetic metaphors are used for a variety of reasons but ultimately highlight similarities or incongruities in an expressive manner; Pinker's example of this is the classic Shakespearean line "Juliet is the sun". These metaphors can often appear convoluted or unclear without deeper context. Conceptual metaphors, by contrast, result from some inherent relation between two domains. These metaphors, so ingrained that they are considered clichés, are nonetheless able to generate endless new metaphors.[16] For example, returning to the conceptual metaphor ARGUMENT IS WAR, one can build many new metaphors such as "I shot him down" or "he blew my argument to pieces".
Pinker himself settles on a moderate view that falls in between the messianic and killjoy theories on metaphor. Perhaps most interestingly, while Pinker concedes that metaphor is a useful way to combat the limited ability of language to express thought, he postulates that a higher level of abstract thought must still be present. Otherwise, Pinker points out, how could we engage in critique of metaphors or employ metaphors for comedic effect?[16]
Major criticisms of work done on conceptual metaphor stem from the way many researchers conduct their research. Many study metaphors in a "top-down" direction, looking first at a few examples to suggest conceptual metaphors, then examining the structure of those metaphors. Researchers would look at their own lexicon, dictionaries, thesauri, and other corpora to study metaphors in language. Critics say this ignored the way language was actually used and focused too much on the hypothetical metaphors, so many irregularities were overlooked in favor of postulating universal conceptual metaphors.[17]In 2007, Pragglejaz Group came up with a methodology for identifying metaphorical expressions as a response to these criticisms.[18]
There are two main roles for the conceptual domains posited in conceptual metaphors: the source domain, the conceptual domain from which we draw metaphorical expressions (e.g. journeys, war, money), and the target domain, the more abstract conceptual domain that we try to understand (e.g. love, argument, time).
Amappingis the way in which a source domain tracks onto and describes aspects of the target domain. Mappings describe the mental organization of information in domains, the underlying phenomenon that drives metaphorical usage in language. This conceptualization relates closely toimage schemas, mental representations used in reasoning, through the extension of spatial and physical laws to more complex situations.[19]
A primary tenet of this theory is that metaphors are matter of thought and not merely of language: hence, the termconceptual metaphor. The metaphor may seem to consist of words or other linguistic expressions that come from the terminology of the more concrete conceptual domain, but conceptual metaphors underlie a system of related metaphorical expressions that appear on the linguistic surface. Similarly, the mappings of a conceptual metaphor are themselves motivated byimage schemaswhich are pre-linguistic schemas concerning space, time, moving, controlling, and other core elements of embodied human experience.
Conceptual metaphors typically employ a more abstract concept as target and a more concrete or physical concept as their source. For instance, metaphors such as 'the days [the more abstract or target concept] ahead' or 'giving my time' rely on more concrete concepts, thus expressing time as a path into physical space, or as a substance that can be handled and offered as a gift. Different conceptual metaphors tend to be invoked when the speaker is trying to make a case for a certain point of view or course of action. For instance, one might associate "the days ahead" with leadership, whereas the phrase "giving my time" carries stronger connotations of bargaining. Selection of such metaphors tends to be directed by a subconscious or implicit habit in the mind of the person employing them.
The principle of unidirectionality states that the metaphorical process typically goes from the more concrete to the more abstract, and not the other way around. Accordingly, abstract concepts are understood in terms of prototype concrete processes. The term "concrete," in this theory, has been further specified by Lakoff and Johnson as more closely related to the developmental, physical neural, and interactive body (seeembodied philosophy). One manifestation of this view is found in thecognitive science of mathematics, where it is proposed that mathematics itself, the most widely accepted means of abstraction in the human community, is largely metaphorically constructed, and thereby reflects acognitive biasunique to humans that uses embodied prototypical processes (e.g. counting, moving along a path) that are understood by all human beings through their experiences.
Theconduit metaphoris a dominant class of figurative expressions used when discussing communication itself (metalanguage). It operates whenever people speak or write as if they "insert" theirmental contents(feelings, meanings, thoughts, concepts, etc.) into "containers" (words, phrases, sentences, etc.) whose contents are then "extracted" by listeners and readers. Thus, language is viewed as a "conduit" conveying mental content between people.
The conduit metaphor was defined and described by the linguist Michael J. Reddy, whose proposal of this conceptual metaphor refocused debate within and outside the linguistic community on the importance of metaphorical language.[20]
In their 1980 work, Lakoff and Johnson closely examined a collection of basic conceptual metaphors, including LOVE IS A JOURNEY and SOCIAL ORGANIZATIONS ARE PLANTS.
The latter half of each of these phrases invokes certain assumptions about concrete experience and requires the reader or listener to apply them to the preceding abstract concepts of love or organizing in order to understand the sentence in which the conceptual metaphor is used.
There are numerous ways in which conceptual metaphors shape human perception and communication, especially in mass media and in public policy. Recent experiments by Thibodeau and Boroditsky substantiate this line of thought, termed "framing". In the experiments, conceptual metaphors that compared crime to either a beast or a disease had drastic effects on public policy opinions.[21]
Conceptual metaphors are commonplace in language. George Lakoff and Mark Johnson suggest that metaphors may unconsciously shape the way we think and act in their founding work,Metaphors We Live By(1980). For example, take the commonly used conceptual metaphor,ARGUMENT IS WAR.[22]This metaphor shapes our language in the way we view argument as a battle to be won. It is not uncommon to hear someone say "He won that argument" or "I attacked every weak point in his argument". The very way argument is conceptualized is shaped by this metaphor of arguments being a war. Argument can be seen in other ways than a battle, but we use this concept to shape the way we think of argument and the way we go about arguing. The same applies for the other conceptual metaphors.
Similarly,Colin Murray Turbaynesuggested in hisThe Myth of Metaphor(1962) that ancient "dead metaphors" have also influenced the evolution over time of modern scientific theories in a subtle manner. As examples of mankind's victimization by dead metaphors, Turbayne points to the incorporation of mechanistic metaphors first developed byIsaac NewtonandRené Descartesinto modern theories developed by philosophers including:Immanuel Kant,George BerkeleyandDavid Hume.[23][24][25]In hisMetaphors for the Mind: The Creative Mind and Its Origins(1991), he also points to the manner in which metaphors first found in Plato'sTimaeushave exerted a profound influence upon the development of modern theories of both thought and language in general.[26][27]
Lakoff and Johnson focus on English, and cognitive scholars writing in English have tended not to investigate the discourse of foreign languages in any great detail to determine the creative ways in which individuals negotiate, resist, and consolidate conceptual metaphors.Andrew Goatlyin his bookWashing the Brain(2007)[28]considers ideological conceptual metaphors as well as Chinese conceptual metaphors.
James W. Underhill, a modern Humboldtian scholar, attempts to reestablishWilhelm von Humboldt's concern for the different ways languages frame reality, and the strategies individuals adopt in creatively resisting and modifying existing patterns of thought. Taking on board the Lakoff-Johnson paradigm of conceptual metaphor, he investigates the way in which Czech communists appropriated the concept of the people, the state and struggle, and the way German Communists harnessed concepts of eternity and purity. He also reminds us that, as Klemperer demonstrates, resisting patterns of thought means engaging in conceptual metaphors and refusing the logic that ideologies impose upon them. In multilingual studies (based on Czech, German, French & English), Underhill considers how different cultures reformulate key concepts such as truth, love, hate and war.[29]
George Lakoffmakes similar claims on the overlap of conceptual metaphors, culture, and society in his bookMoral Politicsand his later book on framing,Don't Think of an Elephant!.Lakoff claims that the public political arena in America reflects a basic conceptual metaphor of 'the family.' Accordingly, people understand political leaders in terms of 'strict father' and 'nurturant mother' roles. Two basic views ofpolitical economyarise from this desire to see the nation-state act 'more like a father' or 'more like a mother.' He further amplified these views in his latest book,The Political Mind.
Urban theorist and ethicistJane Jacobsmade this distinction in less gender-driven terms by differentiating between a 'Guardian Ethic' and a 'Trader Ethic'.[30]She states that guarding and trading are two concrete activities that human beings must learn to apply metaphorically to all choices in later life. In a society where guarding children is the primary female duty and trading in a market economy is the primary male duty, Lakoff posits that children assign the 'guardian' and 'trader' roles to their mothers and fathers, respectively.
Lakoff, Johnson, and Pinker are among the many cognitive scientists that devote a significant amount of time to current events and political theory, suggesting that respected linguists and theorists of conceptual metaphor may tend to channel their theories into political realms.
Critics of this ethics-driven approach to language tend to accept that idioms reflect underlying conceptual metaphors, but argue that actual grammar, and the more basic cross-cultural concepts of scientific method and mathematical practice, tend to minimize the impact of metaphors. Such critics tend to see Lakoff and Jacobs as 'left-wing figures', and would not accept their politics as any kind of crusade against an ontology embedded in language and culture, but rather as an idiosyncratic pastime, not part of the science of linguistics nor of much use. Still others, such as Deleuze and Guattari, Michel Foucault and, more recently, Manuel de Landa, would criticize both of these positions for mutually constituting the same old ontological ideology that would try to separate two parts of a whole that is greater than the sum of its parts.
Lakoff's 1987 work,Women, Fire, and Dangerous Things,answered some of these criticisms before they were even made: he explores the effects of cognitive metaphors (both culturally specific and human-universal) on the grammar per se of several languages, and the evidence of the limitations of the classical logical-positivist orAnglo-American Schoolphilosophical concept of the category usually used to explain or describe the scientific method. Lakoff's reliance on empirical scientific evidence,i.e.specificallyfalsifiablepredictions, in the 1987 work and inPhilosophy in the Flesh(1999) suggests that the cognitive-metaphor position has no objections to the scientific method, but instead considers the scientific method a finely developed reasoning system used to discover phenomena which are subsequently understood in terms of new conceptual metaphors (such as the metaphor of fluid motion for conducted electricity, which is described in terms of "current" "flowing" against "impedance," or the gravitational metaphor for static-electric phenomena, or the "planetary orbit" model of the atomic nucleus and electrons, as used byNiels Bohr).
Further, partly in response to such criticisms, Lakoff and Rafael E. Núñez, in 2000, proposed a cognitive science of mathematics that would explain mathematics as a consequence of, not an alternative to, the human reliance on conceptual metaphor to understand abstraction in terms of basic experiential concretes.
The Linguistic Society of America has argued that "the most recent linguistic approach to literature is that of cognitive metaphor, which claims that metaphor is not a mode of language, but a mode of thought. Metaphors project structures from source domains of schematized bodily or enculturated experience into abstract target domains. We conceive the abstract idea of life in terms of our experiences of a journey, a year, or a day. We do not understand Robert Frost's 'Stopping by Woods on a Snowy Evening' to be about a horse-and-wagon journey but about life. We understand Emily Dickinson's 'Because I could not stop for Death' as a poem about the end of the human life span, not a trip in a carriage. This work is redefining the critical notion of imagery. Perhaps for this reason, cognitive metaphor has significant promise for some kind of rapprochement between linguistics and literary study."[31]
Teaching thinking by analogy (metaphor) is one of the main themes of The Private Eye Project. The idea of encouraging use of conceptual metaphors can also be seen in other educational programs touting the cultivation of "critical thinking skills".
The work of political scientist Rūta Kazlauskaitė examines metaphorical models in school-history knowledge of the controversial Polish-Lithuanian past. On the basis of Lakoff and Johnson's conceptual metaphor theory, she shows how the implicit metaphorical models of everyday experience, which inform the abstract conceptualization of the past, truth, objectivity, knowledge, and multiperspectivity in the school textbooks, obstruct an understanding of the divergent narratives of past experience.[32]
There is some evidence that an understanding of underlying conceptual metaphors can aid the retention of vocabulary for people learning a foreign language.[33] To improve learners' awareness of conceptual metaphor, one monolingual learner's dictionary, the Macmillan English Dictionary, has introduced 50 or so 'metaphor boxes'[34] covering the most salient Lakoffian metaphors in English.[35][36] For example, the dictionary entry for conversation includes a box with the heading: 'A conversation is like a journey, with the speakers going from one place to another', followed by vocabulary items (words and phrases) which embody this metaphorical schema.[37] Language teaching experts are beginning to explore the relevance of conceptual metaphor to how learners learn and what teachers do in the classroom.[38]
A recent study showed a natural tendency to systematically map an abstract dimension, such as social status, in our closest non-linguistic relatives, the chimpanzees.[39] Discrimination performance between familiar conspecific faces was systematically modulated by the spatial location and the social status of the presented individuals, leading to facilitation or deterioration of discrimination. High-ranked individuals presented at a spatially higher position and low-ranked individuals presented at a lower position facilitated discrimination, while high-ranked individuals at lower positions and low-ranked individuals at higher positions impaired it. This suggests that the tendency had already evolved in the common ancestors of humans and chimpanzees, is not uniquely human, and reflects a conceptual metaphorical mapping that predates language.
|
https://en.wikipedia.org/wiki/Conceptual_metaphor
|
Horatio Alger Jr. (/ˈældʒər/; January 13, 1832 – July 18, 1899) was an American author who wrote young adult novels about impoverished boys and their rise from humble backgrounds to middle-class security and comfort through good works. His writings were characterized by the "rags-to-riches" narrative, which had a formative effect on the United States from 1868 through to his death in 1899.
Alger secured his literary niche in 1868 with the publication of his fourth book, Ragged Dick, the story of a poor bootblack's rise to middle-class respectability. This novel was a huge success. His many books that followed were essentially variations on Ragged Dick and featured stock characters: the valiant, hardworking, honest youth; the noble mysterious stranger; the snobbish youth; and the evil, greedy squire. In the 1870s, Alger's fiction was growing stale. His publisher suggested he tour the Western United States for fresh material to incorporate into his fiction. Alger took a trip to California, but the trip had little effect on his writing: he remained mired in the staid theme of "poor boy makes good". The backdrops of these novels, however, became the Western United States rather than the urban environments of the Northeastern United States.
Alger was born on January 13, 1832, in Chelsea, Massachusetts, the son of Horatio Alger Sr., a Unitarian minister, and Olive Augusta Fenno.[1][2]
He had many connections with the New England Puritan aristocracy of the early 19th century. He was a descendant of Pilgrim Fathers Robert Cushman, Thomas Cushman, and William Bassett. He was also a descendant of Sylvanus Lazell, a Minuteman and brigadier general in the War of 1812, and Edmund Lazell, a member of the Constitutional Convention in 1788.[3]
Alger's siblings Olive Augusta and James were born in 1833 and 1836, respectively. A disabled sister, Annie, was born in 1840, and a brother, Francis, in 1842.[4] Alger was a precocious boy afflicted with myopia and asthma,[5][6] but Alger Sr. decided early that his eldest son would one day enter the ministry. To that end, Alger's father tutored him in classical studies and allowed him to observe the responsibilities of ministering to parishioners.[7]
Alger began attending Chelsea Grammar School in 1842,[8] but by December 1844 his father's financial troubles had worsened considerably. In search of a better salary, he moved the family to Marlborough, Massachusetts, an agricultural town 25 miles west of Boston, where he was installed as pastor of the Second Congregational Society in January 1845 with a salary sufficient to meet his needs.[9] Alger attended Gates Academy, a local preparatory school,[8] and completed his studies at age 15.[10] He published his earliest literary works in local newspapers.[10]
In July 1848, Alger passed the Harvard entrance examinations[10] and was admitted to the class of 1852.[4] The 14-member, full-time Harvard faculty included Louis Agassiz and Asa Gray (sciences), Cornelius Conway Felton (classics), James Walker (religion and philosophy), and Henry Wadsworth Longfellow (belles-lettres). Edward Everett served as president.[11] Alger's classmate Joseph Hodges Choate described Harvard at this time as "provincial and local because its scope and outlook hardly extended beyond the boundaries of New England; besides which it was very denominational, being held exclusively in the hands of Unitarians".[11]
Alger thrived in the highly disciplined and regimented Harvard environment, winning scholastic and other prestigious awards.[12] His genteel poverty and less-than-aristocratic heritage, however, barred him from membership in the Hasty Pudding Club and the Porcellian Club.[13] In 1849, he became a professional writer when he sold two essays and a poem to the Pictorial National Library, a Boston magazine.[14] He began reading Walter Scott, James Fenimore Cooper, Herman Melville, and other modern writers of fiction and cultivated a lifelong love for Longfellow, whose verse he sometimes employed as a model for his own. He was chosen Class Odist and graduated with Phi Beta Kappa Society honors in 1852, eighth in a class of 88.[15]
Alger had no job prospects following graduation and returned home. He continued to write, submitting his work to religious and literary magazines with varying success.[16] He briefly attended Harvard Divinity School in 1853, possibly to be reunited with a romantic interest,[17] but he left in November 1853 to take a job as an assistant editor at the Boston Daily Advertiser.[18] He loathed editing and quit in 1854 to teach at The Grange, a boys' boarding school in Rhode Island. When The Grange suspended operations in 1856, Alger found employment directing the 1856 summer session at Deerfield Academy.[19][20]
His first book, Bertha's Christmas Vision: An Autumn Sheaf, a collection of short pieces, was published in 1856, and his second book, Nothing to Do: A Tilt at Our Best Society, a lengthy satirical poem, was published in 1857.[21] He attended Harvard Divinity School from 1857 to 1860 and, upon graduation, toured Europe.[22] In the spring of 1861, he returned to a nation in the throes of the Civil War.[23] Exempted from military service for health reasons in July 1863, he wrote in support of the Union cause and associated with New England intellectuals. He was elected an officer in the New England Historic Genealogical Society in 1863.[24]
His first novel, Marie Bertrand: The Felon's Daughter, was serialized in the New York Weekly in 1864, and his first boys' book, Frank's Campaign, was published by A. K. Loring in Boston the same year.[25] Alger initially wrote for adult magazines, including Harper's Magazine and Frank Leslie's Illustrated Newspaper, but a friendship with William Taylor Adams, a boys' author, led him to write for the young.[26]
On December 8, 1864, Alger was enlisted as a pastor with the First Unitarian Church and Society of Brewster, Massachusetts.[27] Between ministerial duties, he organized games and amusements for boys in the parish, railed against smoking and drinking, and organized and served as president of the local chapter of the Cadets for Temperance.[28][29] He submitted stories to The Student and Schoolmate, a boys' monthly magazine of moral writings, edited by William Taylor Adams and published in Boston by Joseph H. Allen.[26][30] In September 1865, his second boys' book, Paul Prescott's Charge, was published and received favorable reviews.[30][31][32]
Early in 1866, a church committee of men was formed to investigate reports that Alger had sexually molested boys. Church officials reported to the hierarchy in Boston that Alger had been charged with "the abominable and revolting crime of gross familiarity with boys".[33][a] Alger denied nothing, admitted he had been imprudent, considered his association with the church dissolved, and left town.[35][36] Alger sent Unitarian officials in Boston a letter of remorse, and his father assured them his son would never seek another post in the church. The officials were satisfied and decided no further action would be taken.[37]
In 1866, Alger relocated to New York City, where he studied the condition of the street boys and found in them an abundance of interesting material for stories.[38] He abandoned forever any thought of a career in the church and focused instead on his writing. He wrote "Friar Anselmo" at this time, a poem that tells of a sinning cleric's atonement through good deeds. He became interested in the welfare of the thousands of vagrant children who flooded New York City following the Civil War. He attended a children's church service at Five Points, which led to "John Maynard", a ballad about an actual shipwreck on Lake Erie, which brought Alger not only the respect of the literati but a letter from Longfellow. He published two poorly received adult novels, Helen Ford and Timothy Crump's Ward. He fared better with stories for boys published in Student and Schoolmate and a third boys' book, Charlie Codman's Cruise.[39]
In January 1867, the first of 12 installments of Ragged Dick appeared in Student and Schoolmate. The story, about a poor bootblack's rise to middle-class respectability, was a huge success. It was expanded and published as a novel in 1868.[40] It proved to be his best-selling work. After Ragged Dick he wrote almost entirely for boys,[41] and he signed a contract with publisher Loring for a Ragged Dick Series.[42]
In spite of the series' success, Alger was on financially uncertain ground and tutored the five sons of the international banker Joseph Seligman. He wrote serials for Young Israel[43] and lived in the Seligman home until 1876.[44] In 1875, Alger produced the serial Shifting for Himself and Sam's Chance, a sequel to The Young Outlaw.[45] It was evident in these books that Alger had grown stale. Profits suffered, and he headed West for new material at Loring's behest, arriving in California in February 1877.[44][46] He enjoyed a reunion with his brother James in San Francisco and returned to New York late in 1877 on a schooner that sailed around Cape Horn.[44][47] He wrote a few lackluster books in the following years, rehashing his established themes, but this time the tales were set against a Western background rather than an urban one.[48]
In New York, Alger continued to tutor the town's aristocratic youth and to rehabilitate boys from the streets.[49] He was writing both urban and Western-themed tales; in 1879, for example, he published The District Messenger Boy and The Young Miner.[50] In 1877, Alger's fiction became a target of librarians concerned about sensational juvenile fiction.[44] An effort was made to remove his works from public collections, but the effort was only partially successful and was ultimately defeated by the renewed interest in his work after his death.[51]
In 1881, Alger informally adopted Charlie Davis, a street boy, and another, John Downie, in 1883; they lived in Alger's apartment.[44] In 1881, he wrote a biography of President James A. Garfield[44] but filled the work with contrived conversations and boyish excitements rather than facts. The book sold well. Alger was commissioned to write a biography of Abraham Lincoln, but again it was Alger the boys' novelist opting for thrills rather than facts.[52]
In 1882, Alger's father died. Alger continued to produce stories of honest boys outwitting evil, greedy squires and malicious youths. His work appeared in hardcover and paperback, and decades-old poems were published in anthologies. He led a busy life with street boys, Harvard classmates, and the social elite. In Massachusetts, he was regarded with the same reverence as Harriet Beecher Stowe.
In the last two decades of the 19th century, the quality of Alger's books deteriorated, and his boys' works became nothing more than reruns of the plots and themes of his past.[53] The times had changed, boys expected more, and a streak of violence entered Alger's work. In The Young Bank Messenger, for example, a woman is throttled and threatened with death—something that never occurred in his earlier work.[54]
He attended the theater and Harvard reunions, read literary magazines, and wrote a poem at Longfellow's death in 1882.[55] His last novel for adults, The Disagreeable Woman, was published under the pseudonym Julian Starr.[55] He took pleasure in the successes of the boys he had informally adopted over the years, retained his interest in reform, accepted speaking engagements, and read portions of Ragged Dick to boys' assemblies.[56]
His popularity—and income—dwindled in the 1890s. In 1896, he had what he called a "nervous breakdown"; he relocated permanently to his sister's home in South Natick, Massachusetts.[56]
He suffered from bronchitis and asthma for two years. He died on July 18, 1899, at the home of his sister.[57][58] His death was barely noticed.[59][60] He is buried in the family lot at Glenwood Cemetery, South Natick, Massachusetts.[61]
Before his death, Alger asked Edward Stratemeyer to complete his unfinished works.[59] In 1901, Young Captain Jack was completed by Stratemeyer and promoted as Alger's last work.[58] Alger once estimated that he earned only $100,000 between 1866 and 1896;[60] at his death he had little money, leaving only small sums to family and friends. His literary work was bequeathed to his niece, to two boys he had casually adopted, and to his sister Olive Augusta, who destroyed his manuscripts and his letters, according to his wishes.[58][62]
Alger's works received favorable comments and experienced a resurgence following his death; by 1926, his books had sold around 20 million copies in the United States.[63] In 1926, however, reader interest plummeted, and his major publisher ceased printing the books altogether. Surveys in 1932 and 1947 revealed very few children had read or even heard of Alger.[64] The first Alger biography was a heavily fictionalized account published in 1928 by Herbert R. Mayes, who later admitted the work was a fraud.[65][66]
Since 1947, the Horatio Alger Association of Distinguished Americans has bestowed an annual award on "outstanding individuals in our society who have succeeded in the face of adversity" and scholarships "to encourage young people to pursue their dreams with determination and perseverance".[67]
In Maya Angelou's 1969 autobiography, I Know Why the Caged Bird Sings, she describes her childhood belief that Alger was "the greatest writer in the world" and her envy that all his protagonists were boys.[68]
In 1982, to mark his 150th birthday, the Children's Aid Society held a celebration. Helen M. Gray, the executive director of the Horatio Alger Association of Distinguished Americans, presented a selection of Alger's books to Philip Coltoff, the Children's Aid Society executive director.[69]
A 1982 musical, Shine!, was based on Alger's work, particularly Ragged Dick and Silas Snobden's Office Boy.[70][71]
In 2015, many of Alger's books were published as illustrated paperbacks and ebooks under the title "Stories of Success" by Horatio Alger. In addition, Alger's books were offered as dramatic audiobooks by the same publisher.[72]
Alger scholar Gary Scharnhorst describes Alger's style as "anachronistic", "often laughable", "distinctive", and "distinguished by the quality of its literary allusions". Ranging from the Bible and William Shakespeare (half of Alger's books contain Shakespearean references) to John Milton and Cicero, the allusions he employed were a testament to his erudition. Scharnhorst credits these allusions with distinguishing Alger's novels from pulp fiction.[73]
Scharnhorst describes six major themes in Alger's boys' books. The first, the Rise to Respectability, he observes, is evident in both his early and his late books, notably Ragged Dick, whose impoverished young hero declares, "I mean to turn over a new leaf, and try to grow up 'spectable." His virtuous life wins him not riches but, more realistically, a comfortable clerical position and salary.[74] The second major theme is Character Strengthened Through Adversity. In Strong and Steady and Shifting for Himself, for example, the affluent heroes are reduced to poverty and forced to meet the demands of their new circumstances. Alger occasionally cited the young Abe Lincoln as a representative of this theme for his readers. The third theme is Beauty versus Money, which became central to Alger's adult fiction. Characters fall in love and marry on the basis of their character, talents, or intellect rather than the size of their bank accounts. In The Train Boy, for example, a wealthy heiress chooses to marry a talented but struggling artist, and in The Erie Train Boy a poor woman wins her true love despite the machinations of a rich, depraved suitor.[75] Other major themes include the Old World versus the New.
All of Alger's novels have similar plots: a boy struggles to escape poverty through hard work and clean living. However, it is not always the hard work and clean living that rescue the boy from his situation, but rather a wealthy older gentleman, who admires the boy as a result of some extraordinary act of bravery or honesty that the boy has performed.[76]For example, the boy rescues a child from an overturned carriage or finds and returns the man's stolen watch. Often the older man takes the boy into his home as a ward or companion and helps him find a better job, sometimes replacing a less honest or less industrious boy.
According to Scharnhorst, Alger's father was "an impoverished man" who defaulted on his debts in 1844. His properties around Chelsea were seized and assigned to a local squire who held the mortgages. Scharnhorst speculates this episode in Alger's childhood accounts for the recurrent theme in his boys' books of heroes threatened with eviction or foreclosure and may account for Alger's "consistent espousal of environmental reform proposals". Scharnhorst writes, "Financially insecure throughout his life, the younger Alger may have been active in reform organizations such as those for temperance and children's aid as a means of resolving his status-anxiety and establish his genteel credentials for leadership."[77]
Alger scholar Edwin P. Hoyt notes that Alger's morality "coarsened" around 1880, possibly influenced by the Western tales he was writing, because "the most dreadful things were now almost casually proposed and explored".[50]Although he continued to write for boys, Alger explored subjects like violence and "openness in the relations between the sexes and generations"; Hoyt attributes this shift to the decline of Puritan ethics in America.[78]
Scholar John Geck notes that Alger relied on "formulas for experience rather than shrewd analysis of human behavior", and that these formulas were "culturally centered" and "strongly didactic". Although the frontier society was a thing of the past during Alger's career, Geck contends that "the idea of the frontier, even in urban slums, provides a kind of fairy tale orientation in which a Jack mentality can be both celebrated and critiqued". He claims that Alger's intended audience was youths whose "motivations for action are effectively shaped by the lessons they learn".
Geck notes that the perception of the "pluck" characteristic of an Alger hero has changed over the decades. During the Jazz Age and the Great Depression, "the Horatio Alger plot was viewed from the perspective of Progressivism as a staunch defense of laissez-faire capitalism, yet at the same time criticizing the cutthroat business techniques and offering hope to a suffering young generation during the Great Depression". By the Atomic Age, however, "Alger's hero was no longer a poor boy who, through determination and providence rose to middle-class respectability. He was instead the crafty street urchin who through quick wits and luck rose from impoverishment to riches".
Geck observes that Alger's themes have been transformed in modern America from their original meanings into a "male Cinderella" myth and are an Americanization of the traditional Jack tales. Each story has its clever hero, its "fairy godmother", and obstacles and hindrances to the hero's rise. "However", he writes, "the true Americanization of this fairy tale occurs in its subversion of this claiming of nobility; rather, the Alger hero achieves the American Dream in its nascent form, he gains a position of middle-class respectability that promises to lead wherever his motivation may take him". The reader may speculate what Cinderella achieved as Queen and what an Alger hero attained once his middle-class status was stabilized, and "[i]t is this commonality that fixes Horatio Alger firmly in the ranks of modern adaptors of the Cinderella myth".[79]
Scharnhorst writes that Alger "exercised a certain discretion in discussing his probable homosexuality" and was known to have mentioned his sexuality only once after the Brewster incident. In 1870, Henry James Sr. wrote that Alger "talks freely about his own late insanity—which he in fact appears to enjoy as a subject of conversation". Although Alger was willing to speak to James, his sexuality was a closely guarded secret. According to Scharnhorst, Alger made veiled references to homosexuality in his boys' books, and these references, Scharnhorst speculates, indicate Alger was "insecure with his sexual orientation". Alger wrote, for example, that it was difficult to distinguish whether Tattered Tom was a boy or a girl, and in other instances he introduces foppish, effeminate, lisping "stereotypical homosexuals" who are treated with scorn and pity by others. In Silas Snobden's Office Boy, a kidnapped boy disguised as a girl is threatened with being sent to the "insane asylum" if he should reveal his actual sex. Scharnhorst believes Alger's desire to atone for his "secret sin" may have "spurred him to identify his own charitable acts of writing didactic books for boys with the acts of the charitable patrons in his books who wish to atone for a secret sin in their past by aiding the hero". Scharnhorst points out that the patron in Try and Trust, for example, conceals a "sad secret" from which he is redeemed only after saving the hero's life.[80]
Alan Trachtenberg, in his introduction to the Signet Classic edition of Ragged Dick (1990), points out that Alger had tremendous sympathy for boys and discovered a calling for himself in the composition of boys' books. "He learned to consult the boy in himself", Trachtenberg writes, "to transmute and recast himself—his genteel culture, his liberal patrician sympathy for underdogs, his shaky economic status as an author, and not least, his dangerous erotic attraction to boys—into his juvenile fiction".[81] He observes that it is impossible to know whether Alger lived the life of a secret homosexual, "[b]ut there are hints that the male companionship he describes as a refuge from the streets—the cozy domestic arrangements between Dick and Fosdick, for example—may also be an erotic relationship". Trachtenberg observes that nothing prurient occurs in Ragged Dick but believes the few instances in Alger's work of two boys touching or a man and a boy touching "might arouse erotic wishes in readers prepared to entertain such fantasies". Such images, Trachtenberg believes, may imply "a positive view of homoeroticism as an alternative way of life, of living by sympathy rather than aggression". Trachtenberg concludes, "in Ragged Dick we see Alger plotting domestic romance, complete with a surrogate marriage of two homeless boys, as the setting for his formulaic metamorphosis of an outcast street boy into a self-respecting citizen".[82]
|
https://en.wikipedia.org/wiki/Horatio_Alger_myth
|
Neurath's boat (or Neurath's ship) is a simile used in anti-foundational accounts of knowledge, especially in the philosophy of science. It was first formulated by Otto Neurath. It is based in part on the Ship of Theseus, which, however, is standardly used to illustrate other philosophical questions, to do with problems of identity.[1] It was popularised by Willard Van Orman Quine in Word and Object (1960).
Neurath used the simile on several occasions,[1][2] the first being in Neurath's text "Problems in War Economics" (1913). In "Anti-Spengler" (1921) Neurath wrote:
We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom. Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as support. In this way, by using the old beams and driftwood the ship can be shaped entirely anew, but only by gradual reconstruction.[2]
Neurath's non-foundational analogy of reconstructing a ship piecemeal at sea contrasts with Descartes' much earlier foundationalist analogy—in Discourse on the Method (1637) and Meditations on First Philosophy (1641)—of demolishing a building all at once and rebuilding from the ground up.[3] Neurath himself pointed out this contrast.[2][4]
The boat was replaced by a raft in discussions by some philosophers, such as Paul Lorenzen in 1968,[5] Susan Haack in 1974,[6] and Ernest Sosa in 1980.[7] Lorenzen's use of the simile of the raft was a kind of foundationalist modification of Neurath's original, disagreeing with Neurath by asserting that it is possible to jump into the water and to build a new raft while swimming, i.e., to "start from scratch" to build a new system of knowledge.[5][8]
Prior to Neurath's simile, Charles Sanders Peirce had used, for a similar purpose, the metaphor of walking on a bog: one takes another step only when the ground beneath one's feet begins to give way.[9]
Keith Stanovich, in his book The Robot's Rebellion, refers to it as a Neurathian bootstrap, using bootstrapping as an analogy to the recursive nature of revising one's beliefs.[10] A "rotten plank" on the ship, for instance, might represent a meme virus or a junk meme (i.e., a meme that is either maladaptive to the individual or serves no beneficial purpose for the realization of an individual's life goals). It may be impossible to bring the ship to shore for repairs, so one may stand on planks that are not rotten in order to repair or replace the ones that are. At a later time, the planks previously used for support may be tested by standing on other planks that are not rotten:
We can conduct certain tests assuming that certain memeplexes (e.g., science, logic, rationality) are foundational, but at a later time we might want to bring these latter memeplexes into question too. The more comprehensively we have tested our interlocking memeplexes, the more confident we can be that we have not let a meme virus enter into our mindware….[10]: 181
In this way, people might proceed to examine and revise their beliefs so as to become more rational.[10]: 92
|
https://en.wikipedia.org/wiki/Neurathian_bootstrap
|
Robert Anson Heinlein (/ˈhaɪnlaɪn/ HYNE-lyne;[2][3][4] July 7, 1907 – May 8, 1988) was an American science fiction author, aeronautical engineer, and naval officer. Sometimes called the "dean of science fiction writers",[5] he was among the first to emphasize scientific accuracy in his fiction, and was thus a pioneer of the subgenre of hard science fiction. His published works, both fiction and non-fiction, express admiration for competence and emphasize the value of critical thinking.[6] His plots often posed provocative situations which challenged conventional social mores.[7] His work continues to have an influence on the science-fiction genre, and on modern culture more generally.
Heinlein became one of the first American science-fiction writers to break into mainstream magazines such as The Saturday Evening Post in the late 1940s. He was one of the best-selling science-fiction novelists for many decades, and he, Isaac Asimov, and Arthur C. Clarke are often considered the "Big Three" of English-language science fiction authors.[8][9][10] Notable Heinlein works include Stranger in a Strange Land,[11] Starship Troopers (which helped mold the space marine and mecha archetypes) and The Moon Is a Harsh Mistress.[12] His work sometimes had controversial aspects, such as plural marriage in The Moon Is a Harsh Mistress, militarism in Starship Troopers, and technologically competent women characters who were formidable,[13] yet often stereotypically feminine—such as Friday.
Heinlein used his science fiction as a way to explore provocative social and political ideas and to speculate how progress in science and engineering might shape the future of politics, race, religion, and sex.
Within the framework of his science-fiction stories, Heinlein repeatedly addressed certain social themes: the importance of individual liberty and self-reliance, the nature of sexual relationships, the obligation individuals owe to their societies, the influence of organized religion on culture and government, and the tendency of society to repress nonconformist thought. He also speculated on the influence of space travel on human cultural practices.
Heinlein was heavily influenced by the visionary writers and philosophers of his day. William H. Patterson Jr., writing in Robert A. Heinlein: In Dialogue with His Century, states that by 1930, Heinlein was a progressive liberal who had spent some time in the sexually open climate of New York's Jazz Age Greenwich Village. Heinlein believed that some level of socialism was inevitable and was already occurring in America. He absorbed the social concepts of writers such as H. G. Wells and Upton Sinclair, adopted many of the progressive social beliefs of his day, and projected them forward.[14] In later years, he began to espouse conservative views and to believe that a strong world government was the only way to avoid mutual nuclear annihilation.[15]
Heinlein was named the first Science Fiction Writers Grand Master in 1974.[16] Four of his novels won Hugo Awards. In addition, fifty years after publication, seven of his works were awarded "Retro Hugos"—awards given retrospectively for works that were published before the Hugo Awards came into existence.[17] In his fiction, Heinlein coined terms that have become part of the English language, including grok, waldo and speculative fiction, as well as popularizing existing terms like "TANSTAAFL", "pay it forward", and "space marine". He also anticipated mechanical computer-aided design with "Drafting Dan" in his novel The Door into Summer and described a modern version of a waterbed in his novel Stranger in a Strange Land.
Heinlein, born on July 7, 1907, to Rex Ivar Heinlein (an accountant) and Bam Lyle Heinlein, in Butler, Missouri, was the third of seven children. He was a sixth-generation German-American; a family tradition had it that Heinleins fought in every American war, starting with the War of Independence.[18]
He spent his childhood in Kansas City, Missouri.[19] The outlook and values of this time and place (in his own words, "The Bible Belt") had an influence on his fiction, especially in his later works, as he drew heavily upon his childhood in establishing the setting and cultural atmosphere in works like Time Enough for Love and To Sail Beyond the Sunset. The 1910 appearance of Halley's Comet inspired the young child's lifelong interest in astronomy.[20]
In January 1924, the sixteen-year-old Heinlein lied about his age to enlist in Company C, 110th Engineer Regiment, of the Missouri National Guard in Kansas City. His family could not afford to send Heinlein to college, so he sought an appointment to a military academy.[21] When Heinlein graduated from Kansas City Central High School in 1924, he was initially prevented from attending the United States Naval Academy at Annapolis because his older brother Rex was a student there, and at the time regulations discouraged multiple family members from attending the academy simultaneously. He instead matriculated at Kansas City Community College and began vigorously petitioning Missouri Senator James A. Reed for an appointment to the Naval Academy. In part due to the influence of the Pendergast machine, the Naval Academy admitted him in June 1925.[12] Heinlein received his discharge from the Missouri National Guard as a staff sergeant. Reed later told Heinlein that he had received 100 letters of recommendation for nomination to the Naval Academy, 50 for other candidates and 50 for Heinlein.[21]
Heinlein's experience in the U.S. Navy exerted a strong influence on his character and writing. In 1929, he graduated from the Naval Academy with the equivalent of a bachelor of arts in engineering.[22] (At that time, the Academy did not confer degrees.) He ranked fifth in his class academically but had a class standing of 20th of 243 due to disciplinary demerits. The U.S. Navy commissioned him as an ensign shortly after his graduation. He advanced to lieutenant junior grade in 1931 while serving aboard the new aircraft carrier USS Lexington, where he worked in radio communications, a technology then still in its early stages. The captain of this carrier, Ernest J. King, later served as the Chief of Naval Operations and Commander-in-Chief, U.S. Fleet during World War II. Military historians frequently interviewed Heinlein during his later years and asked him about Captain King and his service as the commander of the U.S. Navy's first modern aircraft carrier. Heinlein also served as gunnery officer aboard the destroyer USS Roper in 1933 and 1934, reaching the rank of lieutenant.[23] His brother, Lawrence Heinlein, served in the U.S. Army, the U.S. Air Force, and the Missouri National Guard, reaching the rank of major general in the National Guard.[24]
In 1929, Heinlein married Elinor Curry of Kansas City.[25] However, their marriage lasted only about one year.[3] His second marriage, to Leslyn MacDonald (1904–1981) in 1932, lasted 15 years. MacDonald was, according to the testimony of Heinlein's Navy friend, Rear Admiral Cal Laning, "astonishingly intelligent, widely read, and extremely liberal, though a registered Republican",[26] while Isaac Asimov later recalled that Heinlein was, at the time, "a flaming liberal".[27] (See section: Politics of Robert Heinlein.)
At the Philadelphia Naval Shipyard, Heinlein met and befriended a chemical engineer named Virginia "Ginny" Gerstenfeld. After the war, her engagement having fallen through, she attended UCLA for doctoral studies in chemistry, and while there reconnected with Heinlein. As his second wife's alcoholism gradually spun out of control,[28] Heinlein moved out and the couple filed for divorce. Heinlein's friendship with Virginia turned into a relationship, and on October 21, 1948—shortly after the decree nisi came through—they married in the town of Raton, New Mexico. Soon thereafter, they set up housekeeping in the Broadmoor district of Colorado Springs, Colorado, in a house that Heinlein and his wife designed. As the area was newly developed, they were allowed to choose their own house number, 1776 Mesa Avenue.[29] The design of the house was featured in Popular Mechanics.[30] They remained married until Heinlein's death. In 1965, after various chronic health problems of Virginia's were traced back to altitude sickness, they moved to Santa Cruz, California, which is at sea level. Robert and Virginia designed and built a new residence, circular in shape, in the adjacent village of Bonny Doon.[31][32]
Ginny undoubtedly served as a model for many of his intelligent, fiercely independent female characters.[33][34] She was a chemist and rocket test engineer, and held a higher rank in the Navy than Heinlein himself. She was also an accomplished college athlete, earning four varsity letters.[1] In 1953–1954, the Heinleins voyaged around the world (mostly via ocean liners and cargo liners, as Ginny detested flying), which Heinlein described in Tramp Royale. The trip provided background material for science fiction novels set aboard spaceships on long voyages, such as Podkayne of Mars, Friday and Job: A Comedy of Justice, the latter initially being set on a cruise much as detailed in Tramp Royale. Ginny acted as the first reader of his manuscripts. Isaac Asimov believed that Heinlein made a swing to the right politically at the same time he married Ginny.
In 1934, Heinlein was discharged from the Navy owing to pulmonary tuberculosis. During a lengthy hospitalization, and inspired by his own experience while bed-ridden, he developed a design for a waterbed.[35]
After his discharge, Heinlein attended a few weeks of graduate classes in mathematics and physics at the University of California, Los Angeles (UCLA), but he soon quit, either because of his ill health or because of a desire to enter politics.[36]
Heinlein supported himself at several occupations, including real estate sales and silver mining, but for some years found money in short supply. He was active in Upton Sinclair's socialist End Poverty in California movement (EPIC) in the early 1930s. He was deputy publisher of the EPIC News, which Heinlein noted "recalled a mayor, kicked out a district attorney, replaced the governor with one of our choice."[37] When Sinclair gained the Democratic nomination for Governor of California in 1934, Heinlein worked actively in the campaign. Heinlein himself ran for the California State Assembly in 1938, but was unsuccessful: running as a left-wing Democrat in a conservative district, he never made it past the Democratic primary.[38]
While not destitute after the campaign—he had a small disability pension from the Navy—Heinlein turned to writing to pay off his mortgage. His first published story, "Life-Line", was printed in the August 1939 issue of Astounding Science Fiction.[39] Originally written for a contest, it sold to Astounding for significantly more than the contest's first-prize payoff. Another Future History story, "Misfit", followed in November.[39] Heinlein's talent was recognized from his first story,[40] and he was quickly acknowledged as a leader of the new movement toward "social" science fiction. In California he hosted the Mañana Literary Society, a 1940–41 series of informal gatherings of new authors.[41] He was the guest of honor at Denvention, the 1941 Worldcon, held in Denver. During World War II, Heinlein was employed by the Navy as a civilian aeronautical engineer at the Navy Aircraft Materials Center at the Philadelphia Naval Shipyard in Pennsylvania.[42] Heinlein recruited Isaac Asimov and L. Sprague de Camp to also work there.[35] While at the Philadelphia Naval Shipyard, Asimov, Heinlein, and de Camp brainstormed unconventional approaches to countering kamikaze attacks, such as using sound to detect approaching planes.[43]
As the war wound down in 1945, Heinlein began to re-evaluate his career. The atomic bombings of Hiroshima and Nagasaki, along with the outbreak of the Cold War, galvanized him to write nonfiction on political topics. In addition, he wanted to break into better-paying markets. He published four influential short stories for The Saturday Evening Post magazine, leading off, in February 1947, with "The Green Hills of Earth". That made him the first science fiction writer to break out of the "pulp ghetto". In 1950, the movie Destination Moon—the documentary-like film for which he had written the story and scenario, co-written the script, and invented many of the effects—won an Academy Award for special effects.
Heinlein created SF stories with social commentary about relationships. In The Puppet Masters, a 1951 alien invasion novel, the point-of-view character Sam persuades fellow operative Mary to marry him. When they go to the county clerk, they are offered a variety of marriage possibilities: "Term, renewable or lifetime", as short as six months or as long as forever.[44]
He also embarked on a series of juvenile novels for the Charles Scribner's Sons publishing company that ran from 1947 through 1959, at the rate of one book each autumn, in time for Christmas presents to teenagers. He wrote for Boys' Life in 1952 as well.
Heinlein used topical materials throughout his juvenile series beginning in 1947, but in 1958 he interrupted work on The Heretic (the working title of Stranger in a Strange Land) to write and publish a book exploring ideas of civic virtue, initially serialized as Starship Soldiers. In 1959, his novel (now entitled Starship Troopers) was considered by the editors and owners of Scribner's to be too controversial for one of its prestige lines, and it was rejected.[45] Heinlein found another publisher (Putnam), feeling himself released from the constraints of writing novels for children. He had told an interviewer that he did not want to do stories that merely added to categories defined by other works; rather, he wanted to do his own work, stating: "I want to do my own stuff, my own way".[46] He would go on to write a series of challenging books that redrew the boundaries of science fiction, including Stranger in a Strange Land (1961) and The Moon Is a Harsh Mistress (1966).
Beginning in 1970, Heinlein had a series of health crises, broken by strenuous periods of activity in his hobby of stonemasonry: in a private correspondence, he referred to that as his "usual and favorite occupation between books".[47] The decade began with a life-threatening attack of peritonitis, recovery from which required more than two years, and treatment of which required multiple transfusions of Heinlein's rare blood type, A2 negative. As soon as he was well enough to write again, he began work on Time Enough for Love (1973), which introduced many of the themes found in his later fiction.
In the mid-1970s, Heinlein wrote two articles for the Britannica Compton Yearbook.[48] He and Ginny crisscrossed the country helping to reorganize blood donation in the United States in an effort to assist the system which had saved his life. At science fiction conventions, fans seeking his autograph would be asked to co-sign with Heinlein a beautifully embellished pledge form he supplied, stating that the recipient agreed to donate blood. He was the guest of honor at the Worldcon for the third time in 1976, at MidAmeriCon in Kansas City, Missouri. At that Worldcon, Heinlein hosted a blood drive and donors' reception to thank all those who had helped save lives.
Beginning in 1977, and including an episode while vacationing in Tahiti in early 1978, he had episodes of reversible neurologic dysfunction due to transient ischemic attacks.[49] Over the next few months, he became more and more exhausted, and his health again began to decline. The problem was determined to be a blocked carotid artery, and he had one of the earliest known carotid bypass operations to correct it.
In 1980, Robert Heinlein was a member of the Citizen's Advisory Council on National Space Policy, chaired by Jerry Pournelle, which met at the home of SF writer Larry Niven to write space policy papers for the incoming Reagan administration. Members included such aerospace industry leaders as former astronaut Buzz Aldrin, General Daniel O. Graham, aerospace engineer Max Hunter and North American Rockwell VP for Space Shuttle development George Merrick. Policy recommendations from the Council included ballistic missile defense concepts which were later transformed into what was called the Strategic Defense Initiative. Heinlein assisted with the Council's contribution to Reagan's spring 1983 SDI speech. Asked to appear before a Joint Committee of the United States Congress that year, he testified on his belief that spin-offs from space technology were benefiting the infirm and the elderly.
Heinlein's surgical treatment re-energized him, and he wrote five novels from 1980 until he died in his sleep from emphysema and heart failure on May 8, 1988.
Spider Robinson later wrote the novel Variable Star based on an outline and notes created by Heinlein in 1955.[50] Heinlein's posthumously published nonfiction includes a selection of correspondence and notes edited into a somewhat autobiographical examination of his career, published in 1989 under the title Grumbles from the Grave by his wife, Virginia; his book on practical politics written in 1946 and published as Take Back Your Government in 1992; and a travelogue of their first around-the-world tour in 1954, Tramp Royale. The novel Podkayne of Mars, which had been edited against Heinlein's wishes in its original release, was reissued with the original ending. Stranger in a Strange Land was originally published in a shorter form, but both the long and short versions are now simultaneously available in print.
Heinlein's archive is housed by the Special Collections department of McHenry Library at the University of California at Santa Cruz. The collection includes manuscript drafts, correspondence, photographs and artifacts. A substantial portion of the archive has been digitized, and it is available online through the Robert A. and Virginia Heinlein Archives.[51]
Heinlein published 32 novels, 59 short stories, and 16 collections during his life. Nine films, two television series, several episodes of a radio series, and a board game have been derived more or less directly from his work. He wrote a screenplay for one of the films. Heinlein edited an anthology of other writers' SF short stories.
Three nonfiction books and two poems have been published posthumously. For Us, the Living: A Comedy of Customs was published in 2003;[52] Variable Star, written by Spider Robinson based on an extensive outline by Heinlein, was published in September 2006. Four collections have been published posthumously.[39]
Heinlein began his career as a writer of stories for Astounding Science Fiction magazine, which was edited by John Campbell. The science fiction writer Frederik Pohl has described Heinlein as "that greatest of Campbell-era sf writers".[53] Isaac Asimov said that, from the time of his first story, the science fiction world accepted that Heinlein was the best science fiction writer in existence, adding that he would hold this title through his lifetime.[54]
Alexei and Cory Panshin noted that Heinlein's impact was immediately felt. In 1940, the year after selling "Life-Line" to Campbell, he wrote three short novels, four novelettes, and seven short stories. They went on to say that "No one ever dominated the science fiction field as Bob did in the first few years of his career."[55] Alexei expresses awe at Heinlein's ability to show readers a world so drastically different from the one we live in now, yet with so many similarities. He says that "We find ourselves not only in a world other than our own, but identifying with a living, breathing individual who is operating within its context, and thinking and acting according to its terms."[56]
The first novel that Heinlein wrote, For Us, the Living: A Comedy of Customs (1939), did not see print during his lifetime, but Robert James tracked down the manuscript and it was published in 2003. Though some regard it as a failure as a novel,[19] considering it little more than a disguised lecture on Heinlein's social theories, some readers took a very different view. In a review of it, John Clute wrote:
I'm not about to suggest that if Heinlein had been able to publish [such works] openly in the pages of Astounding in 1939, SF would have gotten the future right; I would suggest, however, that if Heinlein, and his colleagues, had been able to publish adult SF in Astounding and its fellow journals, then SF might not have done such a grotesquely poor job of prefiguring something of the flavor of actually living here at the onset of 2004.[57]
For Us, the Living was intriguing as a window into the development of Heinlein's radical ideas about man as a social animal, including his interest in free love. The root of many themes found in his later stories can be found in this book. It also contained a large amount of material that could be considered background for his other novels. This included a detailed description of the protagonist's treatment to avoid being banished to Coventry (a lawless land in the Heinlein mythos where unrepentant law-breakers are exiled).[58]
It appears that Heinlein at least attempted to live in a manner consistent with these ideals, even in the 1930s, and had an open relationship in his marriage to his second wife, Leslyn. He was also a nudist;[3] nudism and body taboos are frequently discussed in his work. At the height of the Cold War, he built a bomb shelter under his house, like the one featured in Farnham's Freehold.[3]
After For Us, the Living, Heinlein began selling (to magazines) first short stories, then novels, set in a Future History, complete with a timeline of significant political, cultural, and technological changes. A chart of the Future History was published in the May 1941 issue of Astounding. Over time, Heinlein wrote many novels and short stories that deviated freely from the Future History on some points, while maintaining consistency in some other areas. The Future History was eventually overtaken by actual events. These discrepancies were explained, after a fashion, in his later World as Myth stories.
Heinlein's first novel published as a book, Rocket Ship Galileo, was initially rejected because going to the Moon was considered too far-fetched, but he soon found a publisher, Scribner's, that began publishing a Heinlein juvenile once a year for the Christmas season.[59] Eight of these books were illustrated by Clifford Geary in a distinctive white-on-black scratchboard style.[60] Some representative novels of this type are Have Space Suit—Will Travel, Farmer in the Sky, and Starman Jones. Many of these were first published in serial form under other titles, e.g., Farmer in the Sky was published as Satellite Scout in the Boy Scout magazine Boys' Life. There has been speculation that Heinlein's intense obsession with his privacy was due at least in part to the apparent contradiction between his unconventional private life and his career as an author of books for children. However, For Us, the Living explicitly discusses the political importance Heinlein attached to privacy as a matter of principle.[63]
The novels that Heinlein wrote for a young audience are commonly called "the Heinlein juveniles", and they feature a mixture of adolescent and adult themes. Many of the issues that he takes on in these books have to do with the kinds of problems that adolescents experience. His protagonists are usually intelligent teenagers who have to make their way in the adult society they see around them. On the surface, they are simple tales of adventure, achievement, and dealing with stupid teachers and jealous peers. Heinlein was a vocal proponent of the notion that juvenile readers were far more sophisticated and able to handle more complex or difficult themes than most people realized. His juvenile stories often had a maturity to them that made them readable for adults. Red Planet, for example, portrays some subversive themes, including a revolution in which young students are involved; his editor demanded substantial changes in this book's discussion of topics such as the use of weapons by children and the misidentified sex of the Martian character. Heinlein was always aware of the editorial limitations put in place by the editors of his novels and stories, and while he observed those restrictions on the surface, he was often successful in introducing ideas not often seen in other authors' juvenile SF.
In 1957, James Blish wrote that one reason for Heinlein's success "has been the high grade of machinery which goes, today as always, into his story-telling. Heinlein seems to have known from the beginning, as if instinctively, technical lessons about fiction which other writers must learn the hard way (or often enough, never learn). He does not always operate the machinery to the best advantage, but he always seems to be aware of it."[64]
Heinlein decisively ended his juvenile novels with Starship Troopers (1959), a controversial work and his personal riposte to leftists calling for President Dwight D. Eisenhower to stop nuclear testing in 1958. "The 'Patrick Henry' ad shocked 'em", he wrote many years later of the campaign. "Starship Troopers outraged 'em."[65] Starship Troopers is a coming-of-age story about duty, citizenship, and the role of the military in society.[66] The book portrays a society in which suffrage is earned by demonstrated willingness to place society's interests before one's own, at least for a short time and often under onerous circumstances, in government service; in the case of the protagonist, this was military service.
Later, in Expanded Universe, Heinlein said that it was his intention in the novel that service could include positions outside strictly military functions, such as teachers, police officers, and other government positions. This is presented in the novel as an outgrowth of the failure of unearned-suffrage government and as a very successful arrangement. In addition, the franchise was only awarded after leaving the assigned service; thus those serving their terms—in the military, or any other service—were excluded from exercising any franchise. Career military were completely disenfranchised until retirement.
From about 1961 (Stranger in a Strange Land) to 1973 (Time Enough for Love), Heinlein explored some of his most important themes, such as individualism, libertarianism, and free expression of physical and emotional love. Three novels from this period, Stranger in a Strange Land, The Moon Is a Harsh Mistress, and Time Enough for Love, won the Libertarian Futurist Society's Prometheus Hall of Fame Award, designed to honor classic libertarian fiction.[67] Jeff Riggenbach described The Moon Is a Harsh Mistress as "unquestionably one of the three or four most influential libertarian novels of the last century".[68]
Heinlein did not publish Stranger in a Strange Land until some time after it was written, and the themes of free love and radical individualism are prominently featured in his long-unpublished first novel, For Us, the Living: A Comedy of Customs.
The Moon Is a Harsh Mistress tells of a war of independence waged by the Lunar penal colonies, with significant comments from a major character, Professor La Paz, regarding the threat posed by government to individual freedom.
Although Heinlein had previously written a few short stories in the fantasy genre, during this period he wrote his first fantasy novel, Glory Road. In Stranger in a Strange Land and I Will Fear No Evil, he began to mix hard science with fantasy, mysticism, and satire of organized religion. Critics William H. Patterson Jr. and Andrew Thornton believe that this is simply an expression of Heinlein's longstanding philosophical opposition to positivism.[69] Heinlein stated that he was influenced by James Branch Cabell in taking this new literary direction. The penultimate novel of this period, I Will Fear No Evil, is according to critic James Gifford "almost universally regarded as a literary failure",[70] and he attributes its shortcomings to Heinlein's near-death from peritonitis.
After a seven-year hiatus brought on by poor health, Heinlein produced five new novels in the period from 1980 (The Number of the Beast) to 1987 (To Sail Beyond the Sunset). These books have a thread of common characters and time and place. They most explicitly communicated Heinlein's philosophies and beliefs, and many long, didactic passages of dialog and exposition deal with government, sex, and religion. These novels are controversial among his readers and one critic,David Langford, has written about them very negatively.[71]Heinlein's four Hugo awards were all for books written before this period.
Most of the novels from this period are recognized by critics as forming an offshoot from the Future History series and are referred to by the termWorld as Myth.[72]
The tendency toward authorial self-reference begun inStranger in a Strange LandandTime Enough for Lovebecomes even more evident in novels such asThe Cat Who Walks Through Walls, whose first-person protagonist is a disabled military veteran who becomes a writer, and finds love with a female character.[73]
The 1982 novel Friday, a more conventional adventure story (borrowing a character and backstory from the earlier short story Gulf, and containing suggestions of a connection to The Puppet Masters), continued Heinlein's theme of what he saw as the ongoing disintegration of Earth's society, to the point where the title character is strongly encouraged to seek a new life off-planet. It concludes on a traditional Heinlein note, as in The Moon Is a Harsh Mistress or Time Enough for Love: freedom is to be found on the frontiers.
The 1984 novelJob: A Comedy of Justiceis a sharp satire of organized religion. Heinlein himself was agnostic.[74][75]
Several Heinlein works have been published since his death, including the aforementionedFor Us, the Livingas well as 1989'sGrumbles from the Grave, a collection of letters between Heinlein and his editors and agent; 1992'sTramp Royale, a travelogue of a southern hemisphere tour the Heinleins took in the 1950s;Take Back Your Government, a how-to book about participatory democracy written in 1946 and reflecting his experience as an organizer with theEPIC campaign of 1934and the movement's aftermath as an important factor in California politics before the Second World War; and a tribute volume calledRequiem: Collected Works and Tributes to the Grand Master, containing some additional short works previously unpublished in book form.Off the Main Sequence, published in 2005, includes three short stories never before collected in any Heinlein book (Heinlein called them "stinkeroos").
Spider Robinson, a colleague, friend, and admirer of Heinlein,[76]wroteVariable Star, based on an outline and notes for a novel that Heinlein prepared in 1955. The novel was published as a collaboration, with Heinlein's name above Robinson's on the cover, in 2006.
A complete collection of Heinlein's published work has been published[77]by the Heinlein Prize Trust as the "Virginia Edition", after his wife. See the Complete Works section ofRobert A. Heinlein bibliographyfor details.
On February 1, 2019, Phoenix Pick announced that, in collaboration with the Heinlein Prize Trust, it had reconstructed the full text of an unpublished Heinlein novel. The reconstructed novel, entitled The Pursuit of the Pankera: A Parallel Novel about Parallel Universes,[78] is an alternative version of The Number of the Beast: the first third is mostly the same as the first third of The Number of the Beast, but the remainder deviates entirely, with a completely different story line. The novel pays homage to Edgar Rice Burroughs and E. E. "Doc" Smith, and was edited by Patrick Lobrutto. Some reviewers describe it as more in line with the style of a traditional Heinlein novel than The Number of the Beast,[79] and some considered it superior to the original.[80] Both The Pursuit of the Pankera and a new edition of The Number of the Beast[81] were published in March 2020; the new edition shares the subtitle of The Pursuit of the Pankera and is thus entitled The Number of the Beast: A Parallel Novel about Parallel Universes.[82][83]
Heinlein contributed to the final draft of the script forDestination Moon(1950) and served as a technical adviser for the film.[84]Heinlein also shared screenwriting credit forProject Moonbase(1953).
The primary influence on Heinlein's writing style may have been Rudyard Kipling. Kipling provides the first known modern example of "indirect exposition", a writing technique for which Heinlein later became famous.[85] In his essay "On the Writing of Speculative Fiction", Heinlein quotes Kipling:
There are nine-and-sixty ways
Of constructing tribal lays
And every single one of them is right
Stranger in a Strange Landoriginated as a modernized version of Kipling'sThe Jungle Book. His wife suggested that the child be raised by Martians instead of wolves. Likewise,Citizen of the Galaxycan be seen as a reboot of Kipling's novelKim.[86]
TheStarship Troopersidea of needing to serve in the military in order to vote can be found in Kipling's "The Army of a Dream":
But as a little detail we never mention, if we don't volunteer in some corps or other—as combatants if we're fit, as non-combatants if we ain't—till we're thirty-five—we don't vote, and we don't get poor-relief, and the women don't love us.
Poul Anderson once said of Kipling's science fiction story "As Easy as A.B.C.", "a wonderful science fiction yarn, showing the same eye for detail that would later distinguish the work of Robert Heinlein".
Heinlein described himself as also being influenced byGeorge Bernard Shaw, having read most of his plays.[87]Shaw is an example of an earlier author who used thecompetent man, a favorite Heinlein archetype.[88]He denied, though, any direct influence ofBack to MethuselahonMethuselah's Children.
Heinlein's books probe a range of ideas about a range of topics such as sexuality, race, politics, and the military. Many were seen as radical or as ahead of their time in their social criticism. His books have inspired considerable debate about the specifics, and the evolution, of Heinlein's own opinions, and have earned him both lavish praise and a degree of criticism. He has also been accused of contradicting himself on various philosophical questions.[89]
Brian Dohertycites William Patterson, saying that the best way to gain an understanding of Heinlein is as a "full-service iconoclast, the unique individual who decides that things do not have to be, and won't continue, as they are". He says this vision is "at the heart of Heinlein, science fiction, libertarianism, and America. Heinlein imagined how everything about the human world, from our sexual mores to our religion to our automobiles to our government to our plans for cultural survival, might be flawed, even fatally so."[90]
The criticElizabeth Anne Hull, for her part, has praised Heinlein for his interest in exploring fundamental life questions, especially questions about "political power—our responsibilities to one another" and about "personal freedom, particularly sexual freedom".[91]
Edward R. Murrow hosted a series on CBS Radio called This I Believe, which solicited an entry from Heinlein in 1952, titled "Our Noble, Essential Decency". In it, Heinlein broke with the usual pattern, stating that he believed in his neighbors (some of whom he named and described), in his community, and in towns across America that share the same sense of good will and intentions as his own, going on to apply this same philosophy to the United States and to humanity in general.
I believe in my fellow citizens. Our headlines are splashed with crime. Yet for every criminal, there are ten thousand honest, decent, kindly men. If it were not so, no child would live to grow up. Business could not go on from day to day. Decency is not news. It is buried in the obituaries, but it is a force stronger than crime.
Heinlein's political positions shifted throughout his life. Heinlein's early political leanings wereliberal.[92]In 1934, he worked actively for theDemocraticcampaign ofUpton SinclairforGovernor of California. After Sinclair lost, Heinlein became an anti-communist Democratic activist. He made an unsuccessful bid for aCalifornia State Assemblyseat in 1938.[92]Heinlein's first novel,For Us, the Living(written 1939), consists largely of speeches advocating theSocial Creditphilosophy, and the early story "Misfit" (1939) deals with an organization—"The Cosmic Construction Corps"—that seems to beFranklin D. Roosevelt'sCivilian Conservation Corpstranslated into outer space.[93]
Of this time in his life, Heinlein later said:
At the time I wroteMethuselah's ChildrenI was still politically quite naïve and still had hopes that various libertarian notions could be put over by political processes... It [now] seems to me that every time we manage to establish one freedom, they take another one away. Maybe two. And that seems to me characteristic of a society as it gets older, and more crowded, and higher taxes, and more laws.[87]
Heinlein's fiction of the 1940s and 1950s, however, began to espouseconservativeviews. After 1945, he came to believe that a strongworld governmentwas the only way to avoidmutual nuclear annihilation.[94]His 1949 novelSpace Cadetdescribes a future scenario where a military-controlled global government enforces world peace. Heinlein ceased considering himself a Democrat in 1954.[92]
The Heinleins formed thePatrick Henry Leaguein 1958, and they worked in the 1964Barry Goldwaterpresidential campaign.[27]
When Robert A. Heinlein opened hisColorado Springsnewspaper on April 5, 1958, he read a full-page ad demanding that the Eisenhower Administration stop testing nuclear weapons. The science fiction author was flabbergasted. He called for the formation of the Patrick Henry League and spent the next several weeks writing and publishing his own polemic that lambasted "Communist-line goals concealed in idealistic-sounding nonsense" and urged Americans not to become "soft-headed".[65]
Heinlein's response ad was entitled "Who Are the Heirs of Patrick Henry?". It started with the famous Henry quotation: "Is life so dear, or peace so sweet, as to be purchased at the price of chains and slavery? Forbid it, Almighty God! I know not what course others may take, but as for me, give me liberty, or give me death!!" It then went on to admit that there was some risk to nuclear testing (albeit less than the "willfully distorted" claims of the test ban advocates), and risk of nuclear war, but that "The alternative is surrender. We accept the risks." Heinlein was among those who in 1968 signed a pro–Vietnam Warad inGalaxy Science Fiction.[95]
Heinlein always considered himself a libertarian; in a letter to Judith Merril in 1967 (never sent) he said, "As for libertarian, I've been one all my life, a radical one. You might use the term 'philosophical anarchist' or 'autarchist' about me, but 'libertarian' is easier to define and fits well enough."[96]
Stranger in a Strange Landwas embraced by the 1960scounterculture, and libertarians have found inspiration inThe Moon Is a Harsh Mistress. Both groups found resonance with his themes of personal freedom in both thought and action.[68]
Heinlein grew up in the era ofracial segregation in the United Statesand wrote some of his most influential fiction at the height of theCivil Rights Movement. He explicitly made the case for using his fiction not only to predict the future but also to educate his readers about the value ofracial equalityand the importance of racial tolerance.[97]His early novels were ahead of their time both in their explicit rejection of racism and in their inclusion of protagonists of color. In the context of science fiction before the 1960s, the mere existence of characters of color was a remarkable novelty, with green occurring more often than brown.[98]For example, his 1948 novelSpace Cadetexplicitly uses aliens as a metaphor for minorities. The 1947 story "Jerry Was a Man" uses enslaved genetically modified chimpanzees as a symbol for Black Americans fighting for civil rights.[99]In his novelThe Star Beast, thede factoforeign minister of the Terran government is an undersecretary, a Mr. Kiku, who is from Africa.[100]Heinlein explicitly states his skin is "ebony black" and that Kiku is in anarranged marriagethat is happy.[101]
In a number of his stories, Heinlein challenges his readers' possible racial preconceptions by introducing a strong, sympathetic character, only to reveal much later that he or she is of African or other ancestry. In several cases, the covers of the books show characters as being light-skinned when the text states or at least implies that they are dark-skinned or of African ancestry.[104]Heinlein repeatedly denounced racism in his nonfiction works, including numerous examples inExpanded Universe.
Heinlein reveals inStarship Troopersthat the novel's protagonist and narrator,Johnny Rico, the formerly disaffected scion of a wealthy family, isFilipino, actually named "Juan Rico" and speaksTagalogin addition to English.
Race was a central theme in some of Heinlein's fiction. The most prominent example isFarnham's Freehold, which casts awhitefamily into a future in which white people are the slaves of cannibalistic black rulers. In the 1941 novelSixth Column(also known asThe Day After Tomorrow), a white resistance movement in the United States defends itself against an invasion by an Asian fascist state (the "Pan-Asians") using a "super-science" technology that allows ray weapons to be tuned to specific races. The idea for the story was pushed on Heinlein by editorJohn W. Campbelland the story itself was based on a then-unpublished story by Campbell, and Heinlein wrote later that he had "had to re-slant it to remove racist aspects of the original story line" and that he did not "consider it to be an artistic success".[105][106]However, the novel prompted a heated debate in the scientific community regarding the plausibility of developingethnic bioweapons.[107]John Hickman, writing in theEuropean Journal of American Studies, identifies examples of anti–East Asian racism in some of Heinlein's works, particularlySixth Column.[108]
Heinlein summed up his attitude toward people of any race in his essay "Our Noble, Essential Decency" thus:
And finally, I believe in my whole race—yellow, white, black, red, brown—in the honesty, courage, intelligence, durability, and goodness of the overwhelming majority of my brothers and sisters everywhere on this planet. I am proud to be a human being.
In keeping with his belief inindividualism, his work for adults—and sometimes even his work for juveniles—often portrays both the oppressors and the oppressed with considerable ambiguity. Heinlein believed that individualism was incompatible with ignorance. He believed that an appropriate level of adult competence was achieved through a wide-ranging education, whether this occurred in a classroom or not. In his juvenile novels, more than once a character looks with disdain at a student's choice of classwork, saying, "Why didn't you study something useful?"[109]InTime Enough for Love,Lazarus Longgives a long list of capabilities that anyone should have, concluding, "Specialization is for insects." The ability of the individual to create himself is explored in stories such asI Will Fear No Evil, "'—All You Zombies—'", and "By His Bootstraps".
Heinlein claimed to have writtenStarship Troopersin response to "calls for the unilateral ending of nuclear testing by the United States".[110]Heinlein suggests in the book that the Bugs are a good example of Communism being something that humans cannot successfully adhere to, since humans are strongly defined individuals, whereas the Bugs, being a collective, can all contribute to the whole without consideration of individual desire.[111]
A common theme in Heinlein's writing is his frequent use of the "competent man", astock characterwho exhibits a very wide range of abilities and knowledge, making him a form ofpolymath. This trope was notably common in 1950s U.S. science fiction.[112]While Heinlein was not the first to use such a character type, the heroes and heroines of his fiction (withJubal Harshawbeing a prime example) generally have a wide range of abilities, and one of Heinlein's characters,Lazarus Long, gives a wide summary of requirements:
A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.
Predecessors of Heinlein's competent heroes include the protagonists ofGeorge Bernard Shaw, like Henry Higgins inPygmalionand Caesar inCaesar and Cleopatra, as well as the citizen soldiers inRudyard Kipling's "The Army of a Dream".
For Heinlein, personal liberation includedsexual liberation, andfree lovewas a major subject of his writing starting in 1939, withFor Us, the Living. During his early period, Heinlein's writing for younger readers needed to take account of both editorial perceptions of sexuality in his novels, and potential perceptions among the buying public; as critic William H. Patterson has put it, his dilemma was "to sort out what was really objectionable from what was only excessive over-sensitivity to imaginary librarians".[115]
By his middle period, sexual freedom and the elimination of sexual jealousy became a major theme; for instance, inStranger in a Strange Land(1961), the progressively minded but sexually conservative reporter, Ben Caxton, acts as adramatic foilfor the less parochial characters,Jubal Harshawand Valentine Michael Smith (Mike). Another of the main characters, Jill, is homophobic, and says that "nine times out of ten, if a girl gets raped it's partly her own fault."[116]
According to Gary Westfahl,
Heinlein is a problematic case for feminists; on the one hand, his works often feature strong female characters and vigorous statements that women are equal to or even superior to men; but these characters and statements often reflect hopelessly stereotypical attitudes about typical female attributes. It is disconcerting, for example, that inExpanded UniverseHeinlein calls for a society where all lawyers and politicians are women, essentially on the grounds that they possess a mysterious feminine practicality that men cannot duplicate.[117]
In books written as early as 1956, Heinlein dealt with incest and the sexual nature of children. Many of his books includingTime for the Stars,Glory Road,Time Enough for Love, andThe Number of the Beastdealt explicitly or implicitly with incest, sexual feelings and relations between adults, children, or both.[118]The treatment of these themes includes the romantic relationship and eventual marriage of two characters inThe Door into Summerwho met when one was a 30-year-old engineer and the other was an 11-year-old girl, and who eventually married when time-travel rendered the girl an adult while the engineer aged minimally, or the more overt intra-familial incest inTo Sail Beyond the SunsetandTime Enough for Love. Heinlein often posed situations where the nominal purpose of sexual taboos was irrelevant to a particular situation, due to future advances in technology. For example, inTime Enough for LoveHeinlein describes a brother and sister (Joe and Llita) who were mirror twins, being complementary diploids with entirely disjoint genomes, and thus not at increased risk for unfavorable gene duplication due toconsanguinity. In this instance, Llita and Joe were props used to explore the concept of incest, where the usual objection to incest—heightened risk of genetic defect in their children—was not a consideration.[119]Peers such asL. Sprague de CampandDamon Knighthave commented critically on Heinlein's portrayal of incest and pedophilia in a lighthearted and even approving manner.[118]Diane Parkin-Speer suggests that Heinlein's intent seems more to provoke the reader and to question sexual norms than to promote any particular sexual agenda.[120]
InTo Sail Beyond the Sunset, Heinlein has the main character,Maureen, state that the purpose ofmetaphysicsis to ask questions: "Why are we here?" "Where are we going after we die?" (and so on); and that you are not allowed to answer the questions.Askingthe questions is the point of metaphysics, butansweringthem is not, because once you answer this kind of question, you cross the line into religion. Maureen does not state a reason for this; she simply remarks that such questions are "beautiful" but lack answers. Maureen's son/lover Lazarus Long makes a related remark inTime Enough for Love. In order for us to answer the "big questions" about the universe, Lazarus states at one point, it would be necessary to standoutsidethe universe.
During the 1930s and 1940s, Heinlein was deeply interested inAlfred Korzybski'sgeneral semanticsand attended a number of seminars on the subject. His views onepistemologyseem to have flowed from that interest, and his fictional characters continue to express Korzybskian views to the very end of his writing career. Many of his stories, such asGulf,If This Goes On—, andStranger in a Strange Land, depend strongly on the premise, related to the well-knownSapir–Whorf hypothesis, that by using a correctlydesigned language, one can change or improve oneself mentally, or even realize untapped potential (as in the case of Joe inGulf—whose last name may be Greene, Gilead or Briggs).[121]
WhenAyn Rand's novelThe Fountainheadwas published, Heinlein was very favorably impressed, as quoted in "Grumbles ..." and mentioned John Galt—the hero in Rand'sAtlas Shrugged—as a heroic archetype inThe Moon Is a Harsh Mistress. He was also strongly affected by the religious philosopherP. D. Ouspensky.[19]Freudianismandpsychoanalysiswere at the height of their influence during the peak of Heinlein's career, and stories such asTime for the Starsindulged in psychological theorizing.
However, he was skeptical about Freudianism, especially after a struggle with an editor who insisted on reading Freudian sexual symbolism into hisjuvenile novels. Heinlein was fascinated by thesocial creditmovement in the 1930s. This is shown inBeyond This Horizonand in his 1938 novelFor Us, the Living: A Comedy of Customs, which was finally published in 2003, long after his death.
The phrase "pay it forward", though it was already in occasional use as a quotation, was popularized by Heinlein in his bookBetween Planets,[122]published in 1951:
The banker reached into the folds of his gown, pulled out a single credit note. "But eat first—a full belly steadies the judgment. Do me the honor of accepting this as our welcome to the newcomer."
His pride said no; his stomach said YES! Don took it and said, "Uh, thanks! That's awfully kind of you. I'll pay it back, first chance."
"Instead, pay it forward to some other brother who needs it."
He returned to this idea in a number of other stories, sometimes framing it simply as repaying a debt by helping others, as in one of his last works,Job: A Comedy of Justice.
Heinlein was a mentor toRay Bradbury, giving him help and quite possibly passing on the concept, made famous by the publication of a letter from him to Heinlein thanking him.[123]In Bradbury's novelDandelion Wine, published in 1957, when the main character Douglas Spaulding is reflecting on his life being saved by Mr. Jonas, the Junkman:
How do I thank Mr. Jonas, he wondered, for what he's done? How do I thank him, how pay him back? No way, no way at all. You just can't pay. What then? What? Pass it on somehow, he thought, pass it on to someone else. Keep the chain moving. Look around, find someone, and pass it on. That was the only way…
Bradbury has also advised that writers he has helped thank him by helping other writers.[124]
Heinlein both preached and practiced this philosophy; now theHeinlein Society, a humanitarian organization founded in his name, does so, attributing the philosophy to its various efforts, including Heinlein for Heroes, the Heinlein Society Scholarship Program, and Heinlein Society blood drives.[125]Author Spider Robinson made repeated reference to the doctrine, attributing it to his spiritual mentor Heinlein.[126]
Heinlein is usually identified, along withIsaac AsimovandArthur C. Clarke, as one of the three masters of science fiction to arise in the so-calledGolden Age of science fiction, associated withJohn W. Campbelland his magazineAstounding.[127]In the 1950s he was a leader in bringing science fiction out of the low-paying and less prestigious "pulpghetto". Most of his works, including short stories, have been continuously in print in many languages since their initial appearance and are still available as new paperbacks decades after his death.
He was at the top of his form during, and himself helped to initiate, the trend towardsocial science fiction, which went along with a general maturing of the genre away fromspace operato a more literary approach touching on such adult issues as politics andhuman sexuality. In reaction to this trend,hard science fictionbegan to be distinguished as a separate subgenre, but paradoxically Heinlein is also considered a seminal figure in hard science fiction, due to his extensive knowledge of engineering and the careful scientific research demonstrated in his stories. Heinlein himself stated—with obvious pride—that in the days before pocket calculators, he and his wife Virginia once worked for several days on a mathematical equation describing an Earth–Mars rocket orbit, which was then subsumed in a single sentence of the novelSpace Cadet.
Heinlein is often credited with bringing serious writing techniques to the genre of science fiction. When writing about fictional worlds, earlier authors were often constrained by the reader's existing knowledge of a typical "space opera" setting, so the same starships, death rays, and horrifying rubbery aliens became ubiquitous. Departing from these conventions required long expositions about the setting of the story, at a time when word count was at a premium in SF.
But Heinlein utilized a technique called "indirect exposition", perhaps first introduced byRudyard Kiplingin his own science fiction venture, theAerial Board of Controlstories. Kipling had picked this up during his time inIndia, using it to avoid bogging down his stories set in India with explanations for his English readers.[128]This technique — mentioning details in a way that lets the reader infer more about the universe than is actually spelled out[129]— became a trademark rhetorical technique of both Heinlein and writers influenced by him. Heinlein was significantly influenced by Kipling beyond this, for example quoting him in "On the Writing of Speculative Fiction".[130]
Likewise, Heinlein's name is often associated with thecompetent hero, a character archetype who, though he or she may have flaws and limitations, is a strong, accomplished person able to overcome any soluble problem set in their path. They tend to feel confident overall, have a broad life experience and set of skills, and not give up when the going gets tough. This style influenced not only the writing style of a generation of authors, but even their personal character.Harlan Ellisononce said, "Very early in life when I read Robert Heinlein I got the thread that runs through his stories—the notion of the competent man ... I've always held that as my ideal. I've tried to be a very competent man."[131]
When fellow writers or fans wrote Heinlein asking for writing advice, he famously gave out his own list of rules for becoming a successful writer: you must write; you must finish what you write; you must refrain from rewriting, except to editorial order; you must put the work on the market; and you must keep it on the market until it is sold.
About which he said:
The above five rules really have more to do with how to write speculative fiction than anything said above them. But they are amazingly hard to follow—which is why there are so few professional writers and so many aspirants, and which is why I am not afraid to give away the racket![132]
Heinlein later published an entire article, "On the Writing of Speculative Fiction", which included his rules, and from which the above quote is taken. When he says "anything said above them", he refers to his other guidelines. For example, he describes most stories as fitting into one of a handful of basic categories: the gadget story, the boy-meets-girl story, "The Little Tailor", and the man-who-learned-better.
In the article, Heinlein proposes that most stories fit into either the gadget story or the human interest story, which is itself subdivided into the three latter categories. He also creditsL. Ron Hubbardas having identified "The Man-Who-Learned-Better".
Heinlein has had a pervasive influence on other science fiction writers. In a 1953 poll of leading science fiction authors, he was cited more frequently as an influence than any other modern writer.[133]Critic James Gifford writes that
Although many other writers have exceeded Heinlein's output, few can claim to match his broad and seminal influence. Scores of science fiction writers from the prewar Golden Age through the present day loudly and enthusiastically credit Heinlein for blazing the trails of their own careers, and shaping their styles and stories.
Heinlein gave Larry Niven and Jerry Pournelle extensive advice on a draft manuscript ofThe Mote in God's Eye.[134]He contributed a cover blurb: "Possibly the finest science fiction novel I have ever read." In their novelFootfall, Niven and Pournelle included Robert A. Heinlein as a character under the name "Bob Anson." Anson in the novel is a respected and well-known science-fiction author. WriterDavid Gerrold, responsible for creating the tribbles inStar Trek, also credited Heinlein as the inspiration for hisDingilliadseries of novels.Gregory Benfordrefers to his novelJupiter Projectas a Heinlein tribute. Similarly,Charles Strosssays his Hugo Award-nominated novelSaturn's Childrenis "a space opera and late-period Robert A. Heinlein tribute",[135]referring to Heinlein'sFriday.[136]The theme and plot of Kameron Hurley's novelThe Light Brigadeclearly echo those of Heinlein'sStarship Troopers.[137]
Even outside the science fiction community, several words and phrases coined or adopted by Heinlein have passed into common English usage, notably "grok", "waldo", and "TANSTAAFL" ("there ain't no such thing as a free lunch").
In 1962,Oberon Zell-Ravenheart(then still using his birth name, Tim Zell) founded theChurch of All Worlds, aNeopaganreligious organization modeled in many ways (including its name) after the treatment of religion in the novelStranger in a Strange Land. This spiritual path included several ideas from the book, including non-mainstream family structures, social libertarianism, water-sharing rituals, an acceptance of all religious paths by a single tradition, and the use of several terms such as "grok", "Thou art God", and "Never Thirst". Though Heinlein was neither a member nor a promoter of the Church, there was a frequent exchange of correspondence between Zell and Heinlein, and he was a paid subscriber to their magazine,Green Egg. This Church still exists as a501(C)(3)religious organization incorporated in California, with membership worldwide, and it remains an active part of the neopagan community today.[139]Zell-Ravenheart's wife,Morning Glorycoined the termpolyamoryin 1990,[140]another movement that includes Heinlein concepts among its roots.
Heinlein was influential in makingspace explorationseem to the public more like a practical possibility. His stories in publications such asThe Saturday Evening Posttook a matter-of-fact approach to their outer-space setting, rather than the "gee whiz" tone that had previously been common. The documentary-like filmDestination Moonadvocated aSpace Racewith an unspecified foreign power almost a decade before such an idea became commonplace, and was promoted by an unprecedented publicity campaign in print publications. Many of the astronauts and others working in the U.S. space program grew up on a diet of the Heinleinjuveniles, best evidenced by the naming of a crater on Mars after him, and a tribute interspersed by theApollo 15astronauts into their radio conversations while on the moon.[141]
Heinlein was also a guest commentator (along with fellow SF authorArthur C. Clarke) forWalter Cronkite's coverage of theApollo 11Moon landing.[142]He remarked to Cronkite during the landing that, "This is the greatest event in human history, up to this time. This is—today is New Year's Day of the Year One."[143]
Heinlein has inspired many transformational figures in business and technology includingLee Felsenstein, the designer of the first mass-produced portable computer,[144]Marc Andreessen,[145]co-author of the first widely-used web browser, andElon Musk, CEO ofTeslaand founder ofSpaceX.[146]
The Heinlein Society was founded byVirginia Heinleinon behalf of her husband, to "pay forward" the legacy of the writer to future generations of "Heinlein's Children". The foundation runs programs in support of that goal, including Heinlein for Heroes, the Heinlein Society Scholarship Program, and blood drives.
The Heinlein society also established theRobert A. Heinlein Awardin 2003 "for outstanding published works in science fiction and technical writings to inspire the human exploration of space".[147][148]
In his lifetime, Heinlein received fourHugo Awards, forDouble Star,Starship Troopers,Stranger in a Strange Land, andThe Moon Is a Harsh Mistress, and was nominated for fourNebula Awards, forThe Moon Is a Harsh Mistress,Friday,Time Enough for Love, andJob: A Comedy of Justice.[149]He was also given seven Retro-Hugos: two for best novel:Beyond This HorizonandFarmer in the Sky; three for best novella:If This Goes On...,Waldo, andThe Man Who Sold the Moon; one for best novelette: "The Roads Must Roll"; and one for best dramatic presentation: "Destination Moon".[150][151][152]
Heinlein was also nominated for sixHugo Awardsfor the worksHave Space Suit: Will Travel,Glory Road,Time Enough for Love,Friday,Job: A Comedy of JusticeandGrumbles from the Grave, as well as sixRetro Hugo AwardsforMagic, Inc., "Requiem", "Coventry", "Blowups Happen", "Goldfish Bowl", and "The Unpleasant Profession of Jonathan Hoag".
Heinlein won theLocus Awardfor "All-Time Favorite Author" in 1973, and for "All-Time Best Author" in 1988.[153][154]
TheScience Fiction Writers of Americanamed Heinlein its firstGrand Masterin 1974, with the award presented in 1975. Officers and past presidents of the association select a living writer for lifetime achievement (now annually, and includingfantasyliterature).[16][17]
In 1977, Heinlein was awarded theInkpot Award,[155]and in 1985, he was awarded theEisner Awards"Bob Clampett Humanitarian Award".[156]
Main-beltasteroid6312 Robheinlein(1990 RH4), discovered on September 14, 1990, byH. E. Holtat Palomar, was named after him.[157]
In 1994 theInternational Astronomical UnionnamedHeinlein crateron Mars in his honor.[158][159]
TheScience Fiction and Fantasy Hall of Fameinducted Heinlein in 1998.[160]
In 2001 the United States Naval Academy created the Robert A. Heinlein Chair in Aerospace Engineering.[161]
Heinlein was the Ghost of Honor at the 2008World Science Fiction Conventionin Denver, Colorado, which held several panels on his works. Nearly seventy years earlier, he had been a Guest of Honor at the same convention.[162]
In 2016, after an intensive online campaign to win a vote for the opening, Heinlein was inducted into theHall of Famous Missourians.[163]His bronze bust, created by Kansas City sculptorE. Spencer Schubert, is on permanent display in theMissouri State CapitolinJefferson City.[164]
The Libertarian Futurist Society has honored eight of Heinlein's novels and two short stories with theirHall of Fameaward.[165]The first two were given during his lifetime forThe Moon Is a Harsh MistressandStranger in a Strange Land. Five more were awarded posthumously forRed Planet,Methuselah's Children,Time Enough for Love, and the short stories "Requiem" and "Coventry".
|
https://en.wikipedia.org/wiki/Robert_A._Heinlein
|
"By His Bootstraps" is a 20,000 wordscience fictionnovellaby American writerRobert A. Heinlein. It plays with some of the inherentparadoxesthat would be caused bytime travel.
The story was published in the October 1941 issue ofAstounding Science Fictionunder thepen nameAnson MacDonald; the same issue has "Common Sense" under Heinlein's name.[1]"By His Bootstraps" was reprinted in Heinlein's 1959 collectionThe Menace From Earth, and in several subsequent anthologies,[2]and is now available in at least two audio editions. Under the title "The Time Gate", it was also included in a 1958 Crest paperback anthology,Race to the Stars.
In 1952, Bob Wilson locks himself in his room to finish hisgraduate thesison a mathematical aspect ofmetaphysics, using the concept of time travel as a case in point. Bob does not care much at this point whether his thesis (that time travel is impossible) is valid; he is desperate for sleep and just wants to get it done and typed up by the deadline the next day to become an academic, since he thinks academia beats working for a living. Suddenly, although Bob had locked himself alone in his room, someone says, “Don’t bother with it. It's a lot of utterhogwashanyhow." The interloper, who looks strangely familiar, and to whom Bob takes a dislike, calls himself "Joe", and explains that he has come from the future through a Time Gate, a circle about 6 ft (1.8 m) in diameter in the air behind Joe. Joe tells Bob that great opportunities await him through the Gate and thousands of years in his future. By way of demonstration, Joe tosses Bob's hat into the Gate. It disappears.
Bob is reluctant. Joe plies him withdrink, which Joe (a stranger, from Bob's point of view) inexplicably retrieves from its hiding place in Bob's apartment, and Bob becomes intoxicated. Bob's talk with Joe is interrupted by odd phone calls, first from a man who sounds familiar, and then from his sometime girlfriend, who gets upset when Bob says he hasn't seen her recently. Finally, Joe is about to manhandle Bob through the Gate when another man appears, one who looks very much like Joe. The newcomer does not want Bob to go. During the ensuing fight, Bob gets punched, sending him through the Gate.
He recovers his senses in a strange place. A somewhat older-looking, bearded man explains that he is some 30,000 years in the future. The man, calling himself Diktor,[3]treats Bob to a sumptuous breakfast served by beautiful women, one of whom Bob speaks of admiringly. Diktor immediately gives that woman to Bob as a slave. Diktor explains that humans in the future are handsome, cultured in a primitive fashion, but much more docile and good natured than their ancestors. Analien race, the High Ones, built the Gate and refashioned humanity into compliant slaves, but the High Ones are gone now, leaving a world where a 20th-century "go-getter" can make himself king.
Diktor asks him to go back through the Gate and bring back the man he finds on the other side. Bob agrees. Stepping through, he finds himself back in his own room, watching himself typing his thesis. Without much memory of what happened before, he reenacts the scene, this time from the other point of view, and calling himself "Joe" so as not to confuse his earlier self. Just as he is about to shove Bob through the Gate, another version of himself shows up. The fight happens as before, and Bob goes through the Gate.
His future self claims that Diktor is just trying to tangle them up so badly that they can never get untangled, but Joe goes through and meets Diktor again. Diktor gives him a list of things to buy in his own time and bring back. A little annoyed by Diktor's manner, Bob argues with him, but eventually returns to the past, back in his room once again.
He lives through the same scene for the third time, then realizes that he is now free of Diktor. Bob ponders the nature of the 20th-century society he lives in, finding it seedy and depressing. He is sure he no longer has time to finish his thesis, but it is obviously incorrect anyway, so he decides he will go back to the future through the Time Gate. While setting the Gate, he finds two things beside the controls: his hat, and a notebook containing translations between English words and the language of Diktor's slaves. He returns to his own time and collects the items on Diktor's list, which seem to be things a 20th-century man could find useful in making himself king in the future, intentionally writing abad checkfor the purchases, after persuading the cashier that the check is good. He then visits a woman he had been dating, but has begun to dislike, and has his way with her, smugly intending to never see her again, and leaving his hat in her apartment. He phones his past self as a prank, quickly hanging up. After returning to the future, he adjusts the Gate to send himself back to a point ten years earlier, to give himself time to establish himself as the local chieftain. Thus he hopes to preempt Diktor's influence, charting his own course instead.
He sets himself up as chief, taking precautions against the arrival of Diktor. He adopts the name Diktor, which is simply the local word for "chief." He experiments with the Time Gate, hoping to see the High Ones. Once, he does catch a glimpse of one and has a brief mental contact with it. The experience is so traumatizing that he runs away screaming, for the creature feels such sadness and other deep emotions that a 20th-century go-getter like Bob cannot bear it. He forces himself to return long enough to shut down the Gate, then stays away from it for more than two years. He does not notice that his hair has begun to whiten prematurely, as a result of the stress and shock. Having worn out the notebook through long use, he copies its text into a new, identical, one.
One day, upon setting the Gate to view his old room in the past, he sees three versions of himself in a familiar arrangement. Shortly, his earliest self comes through. The circle has closed.Heis Diktor—the only Diktor there ever was. Wondering who actually compiled the notebook, Diktor prepares to brief Bob, who has to orchestrate events to ensure his own past.
Floyd C. GaleofGalaxy Science Fictionsaid of "By His Bootstraps", "In 18 years I haven't seen its equal" as a temporal-paradox story.[4]PhilosopherDavid Lewisconsidered "By His Bootstraps" and"'—All You Zombies—'" to be examples of "perfectly consistent" time travel stories.[5]Stating that it and other Heinlein time-travel stories "force the reader into contemplations of the nature of causality and the arrow of time",Carl Saganlisted "By His Bootstraps" as an example of how science fiction "can convey bits and pieces, hints and phrases, of knowledge unknown or inaccessible to the reader".[6]
|
https://en.wikipedia.org/wiki/By_His_Bootstraps
|
Rugged individualism, derived fromindividualism, is a term that indicates that an individual is self-reliant and independent from outside (usually government or some other form of collective) assistance or support. While the term is often associated with the notion oflaissez-faireand associated adherents, it was actuallycoinedby United States presidentHerbert Hoover.[1][2]
American rugged individualism has its origins in the American frontier experience. Throughout its evolution, theAmerican frontierwas generally sparsely populated and had very little infrastructure in place. Under such conditions, individuals had to provide for themselves to survive. This kind of environment forced people to work in isolation from the larger community and may have altered attitudes at the frontier in favor of individualistic thought over collectivism.[3]
Through the mid-twentieth century, the concept was championed by Hoover's former Secretary of the Interior and long-time president ofStanford University,Ray Lyman Wilbur, who wrote: "It is common talk that every individual is entitled to economic security. The only animals and birds I know that have economic security are those that have been domesticated—and the economic security they have is controlled by the barbed-wire fence, the butcher's knife and the desire of others. They are milked, skinned, egged or eaten up by their protectors."[4]
Martin Luther King Jr.notably remarked on the term in his speech "The Other America" on March 10, 1968: "This country hassocialism for the rich, rugged individualism for the poor."[5]Bernie Sandersreferenced King's quote in a 2019 speech.[6]
The ideal of rugged individualism continues to be a part of American thought. In 2016, a poll byPew Researchfound that 57% of Americans did not believe that success in life was determined by forces outside of their control. Additionally, the same poll found that 58% of Americans valued a non-interventionist government over one that actively worked to further the needs of society.[7]
Academics interviewed in the 2020 bookRugged Individualism and the Misunderstanding of American Inequality, co-written byNoam Chomsky, largely found that the continued belief in this brand of individualism is a strong factor in American policies surrounding social spending and welfare. Americans who more strongly believe in the values espoused by rugged individualism tend to view those who seek government assistance as being responsible for their position, leading to decreased support for welfare programs and increased support for stricter criteria for receiving government help.[8]The influence of American individualistic thought extends to government regulation as well. Areas of the country which were part of the American frontier for longer, and were therefore more influenced by the frontier experience, were found to be more likely to be supportive ofRepublicancandidates, who often vote against regulations such asgun control, minimum wage increases, and environmental regulation.[3]
A 2021 research article posits that American “rugged individualism” hampered social distancing and mask use during theCOVID-19 pandemic.[9]
|
https://en.wikipedia.org/wiki/Rugged_individualism
|
Instatistics, thejackknife(jackknife cross-validation) is across-validationtechnique and, therefore, a form ofresampling.
It is especially useful for bias and variance estimation. The jackknife pre-dates other common resampling methods such as the bootstrap. Given a sample of size $n$, a jackknife estimator can be built by aggregating the parameter estimates from each subsample of size $(n-1)$ obtained by omitting one observation.[1]
The jackknife technique was developed byMaurice Quenouille(1924–1973) from 1949 and refined in 1956.John Tukeyexpanded on the technique in 1958 and proposed the name "jackknife" because, like a physicaljack-knife(a compact folding knife), it is arough-and-readytool that can improvise a solution for a variety of problems even though specific problems may be more efficiently solved with a purpose-designed tool.[2]
The jackknife is a linear approximation of thebootstrap.[2]
The jackknifeestimatorof a parameter is found by systematically leaving out each observation from a dataset and calculating the parameter estimate over the remaining observations and then aggregating these calculations.
For example, if the parameter to be estimated is the population mean of a random variable $x$, then for a given set of i.i.d. observations $x_1,\ldots,x_n$ the natural estimator is the sample mean:
$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i = \frac{1}{n}\sum_{i\in[n]} x_i,$$
where the last sum uses another way to indicate that the index $i$ runs over the set $[n]=\{1,\ldots,n\}$.
Then we proceed as follows: for each $i\in[n]$ we compute the mean $\bar{x}_{(i)}$ of the jackknife subsample consisting of all but the $i$-th data point, called the $i$-th jackknife replicate:
$$\bar{x}_{(i)} = \frac{1}{n-1}\sum_{j\in[n],\,j\neq i} x_j, \qquad i=1,\ldots,n.$$
It may help to think of these $n$ jackknife replicates $\bar{x}_{(1)},\ldots,\bar{x}_{(n)}$ as approximating the distribution of the sample mean $\bar{x}$, with a larger $n$ improving the approximation. Finally, to obtain the jackknife estimator, the $n$ jackknife replicates are averaged:
$$\bar{x}_{\mathrm{jack}} = \frac{1}{n}\sum_{i=1}^{n}\bar{x}_{(i)}.$$
One may ask about the bias and the variance of $\bar{x}_{\mathrm{jack}}$. From the definition of $\bar{x}_{\mathrm{jack}}$ as the average of the jackknife replicates one could try to calculate these explicitly. The bias is a trivial calculation, but the variance of $\bar{x}_{\mathrm{jack}}$ is more involved since the jackknife replicates are not independent.
For the special case of the mean, one can show explicitly that the jackknife estimate equals the usual estimate:
$$\bar{x}_{\mathrm{jack}} = \frac{1}{n}\sum_{i=1}^{n}\bar{x}_{(i)} = \bar{x}.$$
This establishes the identity $\bar{x}_{\mathrm{jack}} = \bar{x}$. Then taking expectations we get $E[\bar{x}_{\mathrm{jack}}] = E[\bar{x}] = E[x]$, so $\bar{x}_{\mathrm{jack}}$ is unbiased, while taking variance we get $V[\bar{x}_{\mathrm{jack}}] = V[\bar{x}] = V[x]/n$. However, these properties do not generally hold for parameters other than the mean.
This simple example for the case of mean estimation is just to illustrate the construction of a jackknife estimator, while the real subtleties (and the usefulness) emerge for the case of estimating other parameters, such as higher moments than the mean or other functionals of the distribution.
$\bar{x}_{\mathrm{jack}}$ could be used to construct an empirical estimate of the bias of $\bar{x}$, namely $\widehat{\operatorname{bias}}(\bar{x})_{\mathrm{jack}} = c(\bar{x}_{\mathrm{jack}} - \bar{x})$ with some suitable factor $c>0$, although in this case we know that $\bar{x}_{\mathrm{jack}} = \bar{x}$, so this construction does not add any meaningful knowledge; it does, however, give the correct estimate of the bias (which is zero).
A jackknife estimate of the variance of $\bar{x}$ can be calculated from the variance of the jackknife replicates $\bar{x}_{(i)}$:[3][4]
$$\widehat{\operatorname{var}}(\bar{x})_{\mathrm{jack}} = \frac{n-1}{n}\sum_{i=1}^{n}\left(\bar{x}_{(i)}-\bar{x}_{\mathrm{jack}}\right)^{2} = \frac{1}{n(n-1)}\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{2}.$$
The left equality defines the estimator $\widehat{\operatorname{var}}(\bar{x})_{\mathrm{jack}}$ and the right equality is an identity that can be verified directly. Then taking expectations we get $E[\widehat{\operatorname{var}}(\bar{x})_{\mathrm{jack}}] = V[x]/n = V[\bar{x}]$, so this is an unbiased estimator of the variance of $\bar{x}$.
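As a brief, informal illustration of the construction just described (my own sketch, not taken from the cited sources), the jackknife replicates, estimator, and variance estimate for a sample mean can be computed in a few lines of R; the data and variable names below are arbitrary.

set.seed(1)
x <- rnorm(20, mean = 5, sd = 2)               # toy sample; any numeric vector works
n <- length(x)
# i-th jackknife replicate: the sample mean with the i-th observation left out
x_jack_rep <- sapply(seq_len(n), function(i) mean(x[-i]))
x_jack   <- mean(x_jack_rep)                   # jackknife estimator (equals mean(x) here)
var_jack <- (n - 1) / n * sum((x_jack_rep - x_jack)^2)   # jackknife variance estimate
c(mean = mean(x), jackknife = x_jack, var_jack = var_jack, classical = var(x) / n)

For the mean, the last two numbers agree exactly, reflecting the identity noted above.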
The jackknife technique can be used to estimate (and correct) the bias of an estimator calculated over the entire sample.
Suppose $\theta$ is the target parameter of interest, which is assumed to be some functional of the distribution of $x$. Based on a finite set of observations $x_1,\ldots,x_n$, which is assumed to consist of i.i.d. copies of $x$, the estimator $\hat{\theta}$ is constructed:
$$\hat{\theta} = f_n(x_1,\ldots,x_n).$$
The value of $\hat{\theta}$ is sample-dependent, so it will change from one random sample to another.
By definition, the bias of $\hat{\theta}$ is as follows:
$$\operatorname{bias}(\hat{\theta}) = E[\hat{\theta}] - \theta.$$
One may wish to compute several values of $\hat{\theta}$ from several samples and average them, to calculate an empirical approximation of $E[\hat{\theta}]$, but this is impossible when there are no "other samples": the entire set of available observations $x_1,\ldots,x_n$ was used to calculate $\hat{\theta}$. In this kind of situation the jackknife resampling technique may be of help.
We construct the jackknife replicates
$$\hat{\theta}_{(1)},\ \hat{\theta}_{(2)},\ \ldots,\ \hat{\theta}_{(n)},$$
where each replicate is a "leave-one-out" estimate based on the jackknife subsample consisting of all but one of the data points:
$$\hat{\theta}_{(i)} = f_{n-1}(x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{n}), \qquad i=1,\ldots,n.$$
Then we define their average:
$$\hat{\theta}_{\mathrm{jack}} = \frac{1}{n}\sum_{i=1}^{n}\hat{\theta}_{(i)}.$$
The jackknife estimate of the bias of $\hat{\theta}$ is given by
$$\widehat{\operatorname{bias}}(\hat{\theta})_{\mathrm{jack}} = (n-1)\left(\hat{\theta}_{\mathrm{jack}} - \hat{\theta}\right),$$
and the resulting bias-corrected jackknife estimate of $\theta$ is given by
$$\hat{\theta}_{\mathrm{jack}}^{*} = \hat{\theta} - \widehat{\operatorname{bias}}(\hat{\theta})_{\mathrm{jack}} = n\hat{\theta} - (n-1)\hat{\theta}_{\mathrm{jack}}.$$
This removes the bias in the special case that the bias is $O(n^{-1})$ and reduces it to $O(n^{-2})$ in other cases.[2]
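A small worked sketch may make the bias-correction recipe concrete. The R example below is my own illustration, not from the cited sources: it applies the recipe to the plug-in variance estimator, which divides by $n$ and therefore has bias $-\sigma^2/n$, i.e. of order $O(n^{-1})$.

set.seed(2)
x <- rexp(15, rate = 1)
n <- length(x)
theta_hat <- function(z) mean((z - mean(z))^2)   # plug-in variance (divides by n, biased)
theta_full <- theta_hat(x)
theta_rep  <- sapply(seq_len(n), function(i) theta_hat(x[-i]))  # leave-one-out replicates
theta_jack <- mean(theta_rep)
bias_jack  <- (n - 1) * (theta_jack - theta_full)      # jackknife bias estimate
theta_corr <- theta_full - bias_jack                   # = n*theta_full - (n-1)*theta_jack
c(plug_in = theta_full, corrected = theta_corr, unbiased = var(x))

For this particular estimator the correction reproduces the usual unbiased sample variance var(x); for general estimators it only reduces the order of the bias, as stated above.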
The jackknife technique can also be used to estimate the variance of an estimator calculated over the entire sample.
|
https://en.wikipedia.org/wiki/Jackknife_(statistics)
|
The Nelson–Aalen estimator is a non-parametric estimator of the cumulative hazard rate function in the case of censored or incomplete data.[1] It is used in survival theory, reliability engineering and life insurance to estimate the cumulative number of expected events. An "event" can be the failure of a non-repairable component, the death of a human being, or any occurrence for which the experimental unit remains in the "failed" state (e.g., death) from that point on. The estimator is given by
$$\tilde{H}(t) = \sum_{t_i \le t} \frac{d_i}{n_i},$$
with $d_i$ the number of events at time $t_i$ and $n_i$ the total number of individuals at risk at $t_i$.[2]
The curvature of the Nelson–Aalen estimator gives an idea of the hazard rate shape. A concave shape is an indicator forinfant mortalitywhile a convex shape indicateswear out mortality.
It can be used for example when testing the homogeneity ofPoisson processes.[3]
It was constructed byWayne NelsonandOdd Aalen.[4][5][6]The Nelson-Aalen estimator is directly related to theKaplan-Meier estimatorand both maximize theempirical likelihood.[7]
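As an informal sketch (not drawn from the cited references), the estimator can be computed directly from its definition in a few lines of R; the follow-up times and censoring indicators below are invented for illustration.

time   <- c(3, 5, 5, 7, 9, 12, 12, 14, 16, 20)   # follow-up times (toy data)
status <- c(1, 1, 0, 1, 1, 1,  0,  1,  0,  1)    # 1 = event, 0 = censored
event_times <- sort(unique(time[status == 1]))
d <- sapply(event_times, function(t) sum(time == t & status == 1))  # d_i: events at t_i
r <- sapply(event_times, function(t) sum(time >= t))                # n_i: at risk at t_i
data.frame(time = event_times, d = d, n_risk = r, cum_hazard = cumsum(d / r))

Plotting cum_hazard against time and inspecting its curvature gives the concave/convex diagnostic for infant versus wear-out mortality described above.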
|
https://en.wikipedia.org/wiki/Nelson%E2%80%93Aalen_estimator
|
Survival analysisis a branch ofstatisticsfor analyzing the expected duration of time until one event occurs, such as death inbiological organismsand failure in mechanical systems.[1]This topic is calledreliability theory,reliability analysisorreliability engineeringinengineering,duration analysisorduration modellingineconomics, andevent history analysisinsociology. Survival analysis attempts to answer certain questions, such as what is the proportion of a population which will survive past a certain time? Of those that survive, at what rate will they die or fail? Can multiple causes of death or failure be taken into account? How do particular circumstances or characteristics increase or decrease the probability ofsurvival?
To answer such questions, it is necessary to define "lifetime". In the case of biological survival,deathis unambiguous, but for mechanical reliability,failuremay not be well-defined, for there may well be mechanical systems in which failure is partial, a matter of degree, or not otherwise localized intime. Even in biological problems, some events (for example,heart attackor other organ failure) may have the same ambiguity. Thetheoryoutlined below assumes well-defined events at specific times; other cases may be better treated by models which explicitly account for ambiguous events.
More generally, survival analysis involves the modelling of time to event data; in this context, death or failure is considered an "event" in the survival analysis literature – traditionally only a single event occurs for each subject, after which the organism or mechanism is dead or broken.Recurring eventorrepeated eventmodels relax that assumption. The study of recurring events is relevant insystems reliability, and in many areas of social sciences and medical research.
Survival analysis is used in several ways: to describe the survival times of members of a group, to compare the survival times of two or more groups, and to describe the effect of categorical or quantitative variables on survival.
A number of terms, such as "event", "censoring", and "survival function", are commonly used in survival analyses and are introduced in context in the examples below.
This example uses theAcute Myelogenous Leukemiasurvival data set "aml" from the "survival" package in R. The data set is from Miller (1997)[2]and the question is whether the standard course of chemotherapy should be extended ('maintained') for additional cycles.
The aml data set sorted by survival time is shown in the box.
The last observation (11), at 161 weeks, is censored. Censoring indicates that the patient did not have an event (no recurrence of aml cancer). Another subject, observation 3, was censored at 13 weeks (indicated by status=0). This subject was in the study for only 13 weeks, and the aml cancer did not recur during those 13 weeks. It is possible that this patient was enrolled near the end of the study, so that they could be observed for only 13 weeks. It is also possible that the patient was enrolled early in the study, but was lost to follow up or withdrew from the study. The table shows that other subjects were censored at 16, 28, and 45 weeks (observations 17, 6, and 9 with status=0). The remaining subjects all experienced events (recurrence of aml cancer) while in the study. The question of interest is whether recurrence occurs later in maintained patients than in non-maintained patients.
Thesurvival functionS(t) is the probability that a subject survives longer than timet.S(t) is theoretically a smooth curve, but it is usually estimated using theKaplan–Meier(KM) curve. The graph shows the KM plot for the aml data.
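A minimal R sketch of how such a curve is typically obtained, assuming the "survival" package and its built-in aml data (columns time, status and x):

library(survival)

km.fit <- survfit(Surv(time, status) ~ 1, data = aml)   # overall KM estimate
plot(km.fit, xlab = "Weeks", ylab = "Proportion surviving")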
Alife tablesummarizes survival data in terms of the number of events and the proportion surviving at each event time point. The life table for the aml data, created using the Rsoftware, is shown.
The life table summarizes the events and the proportion surviving at each event time point, together with the number of subjects still at risk and standard errors for the survival estimates.
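A life table of this kind can be produced by summarizing the fitted Kaplan–Meier object; this is a sketch, and the exact column labels may differ between versions of the survival package.

library(survival)
km.fit <- survfit(Surv(time, status) ~ 1, data = aml)
summary(km.fit)   # time, n.risk, n.event, survival estimate, std. error, 95% CI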
Thelog-rank testcompares the survival times of two or more groups. This example uses a log-rank test for a difference in survival in the maintained versus non-maintained treatment groups in the aml data. The graph shows KM plots for the aml data broken out by treatment group, which is indicated by the variable "x" in the data.
The null hypothesis for a log-rank test is that the groups have the same survival. The expected number of subjects surviving at each time point in each group is adjusted for the number of subjects at risk in the groups at each event time. The log-rank test determines if the observed number of events in each group is significantly different from the expected number. The formal test is based on a chi-squared statistic. When the log-rank statistic is large, it is evidence for a difference in the survival times between the groups. The log-rank statistic approximately has aChi-squared distributionwith one degree of freedom, and thep-valueis calculated using theChi-squared test.
For the example data, the log-rank test for difference in survival gives a p-value of p=0.0653, indicating that the treatment groups do not differ significantly in survival, assuming an alpha level of 0.05. The sample size of 23 subjects is modest, so there is littlepowerto detect differences between the treatment groups. The chi-squared test is based on asymptotic approximation, so the p-value should be regarded with caution for smallsample sizes.
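A sketch of the corresponding R call, using survdiff() from the survival package with the treatment indicator x in the aml data:

library(survival)
survdiff(Surv(time, status) ~ x, data = aml)   # chi-squared statistic on 1 df, p ≈ 0.065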
Kaplan–Meier curves and log-rank tests are most useful when the predictor variable is categorical (e.g., drug vs. placebo), or takes a small number of values (e.g., drug doses 0, 20, 50, and 100 mg/day) that can be treated as categorical. The log-rank test and KM curves don't work easily with quantitative predictors such as gene expression, white blood count, or age. For quantitative predictor variables, an alternative method isCox proportional hazards regressionanalysis. Cox PH models work also with categorical predictor variables, which are encoded as {0,1} indicator or dummy variables. The log-rank test is a special case of a Cox PH analysis, and can be performed using Cox PH software.
This example uses the melanoma data set from Dalgaard Chapter 14.[3]
Data are in the R package ISwR. The Cox proportional hazards regression usingR gives the results shown in the box.
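A sketch of a Cox regression that produces output of this kind. It assumes the Dalgaard/ISwR coding in which the data frame is melanom and status == 1 indicates death from melanoma (see the package documentation); adjust the event coding if it differs.

library(survival)
library(ISwR)

fit.sex <- coxph(Surv(days, status == 1) ~ sex, data = melanom)
summary(fit.sex)   # coefficient, hazard ratio exp(coef), confidence interval, overall tests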
The Cox regression results are interpreted as follows.
The summary output also gives upper and lower 95% confidence intervals for the hazard ratio: lower 95% bound = 1.15; upper 95% bound = 3.26.
Finally, the output gives p-values for three alternative tests for overall significance of the model:
These three tests are asymptotically equivalent. For large enough N, they will give similar results. For small N, they may differ somewhat. The last row, "Score (logrank) test", is the result for the log-rank test, with p=0.011; it agrees with the stand-alone log-rank test because the log-rank test is a special case of a Cox PH regression. The likelihood ratio test has better behavior for small sample sizes, so it is generally preferred.
The Cox model extends the log-rank test by allowing the inclusion of additional covariates.[4]This example uses the melanoma data set where the predictor variables include a continuous covariate, the thickness of the tumor (variable name = "thick").
In the histograms, the thickness values arepositively skewedand do not have aGaussian-like,Symmetric probability distribution. Regression models, including the Cox model, generally give more reliable results with normally-distributed variables.[citation needed]For this example we may use alogarithmictransform. The log of the thickness of the tumor looks to be more normally distributed, so the Cox models will use log thickness. The Cox PH analysis gives the results in the box.
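A sketch of the extended model with log tumor thickness added, under the same melanom/ISwR assumptions as in the previous sketch:

library(survival)
library(ISwR)

fit.thick <- coxph(Surv(days, status == 1) ~ sex + log(thick), data = melanom)
summary(fit.thick)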
The p-value for all three overall tests (likelihood, Wald, and score) are significant, indicating that the model is significant. The p-value for log(thick) is 6.9e-07, with a hazard ratio HR = exp(coef) = 2.18, indicating a strong relationship between the thickness of the tumor and increased risk of death.
By contrast, the p-value for sex is now p=0.088. The hazard ratio HR = exp(coef) = 1.58, with a 95% confidence interval of 0.934 to 2.68. Because the confidence interval for HR includes 1, these results indicate that sex makes a smaller contribution to the difference in the HR after controlling for the thickness of the tumor, showing only a trend toward significance. Examination of graphs of log(thickness) by sex and a t-test of log(thickness) by sex both indicate that there is a significant difference between men and women in the thickness of the tumor when they first see the clinician.
The Cox model assumes that the hazards are proportional. The proportional hazard assumption may be tested using the Rfunction cox.zph(). A p-value which is less than 0.05 indicates that the hazards are not proportional. For the melanoma data we obtain p=0.222. Hence, we cannot reject the null hypothesis of the hazards being proportional. Additional tests and graphs for examining a Cox model are described in the textbooks cited.
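A sketch of that check, refitting the model from the previous sketch and then testing the proportional-hazards assumption:

library(survival)
library(ISwR)

fit.thick <- coxph(Surv(days, status == 1) ~ sex + log(thick), data = melanom)
cox.zph(fit.thick)   # per-term and global p-values; small values suggest non-proportional hazards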
Cox models can be extended to deal with variations on the simple analysis.
The Cox PH regression model is a linear model. It is similar to linear regression and logistic regression. Specifically, these methods assume that a single line, curve, plane, or surface is sufficient to separate groups (alive, dead) or to estimate a quantitative response (survival time).
In some cases alternative partitions give more accurate classification or quantitative estimates. One set of alternative methods are tree-structured survival models,[5][6][7]including survival random forests.[8]Tree-structured survival models may give more accurate predictions than Cox models. Examining both types of models for a given data set is a reasonable strategy.
This example of a survival tree analysis uses the Rpackage "rpart".[9]The example is based on 146 stageC prostate cancer patients in the data set stagec in rpart. Rpart and the stagec example are described in Atkinson and Therneau (1997),[10]which is also distributed as a vignette of the rpart package.[9]
The variables in stagec are described in Atkinson and Therneau (1997) and in the rpart documentation.
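A sketch of a survival tree fit with rpart; the covariate list follows the stagec example in the rpart documentation and may need adjusting.

library(rpart)
library(survival)

tree <- rpart(Surv(pgtime, pgstat) ~ age + eet + g2 + grade + gleason + ploidy,
              data = stagec)
plot(tree, uniform = TRUE)
text(tree, use.n = TRUE)   # label nodes with events / number of subjects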
The survival tree produced by the analysis is shown in the figure.
Each branch in the tree indicates a split on the value of a variable. For example, the root of the tree splits subjects with grade < 2.5 versus subjects with grade 2.5 or greater. The terminal nodes indicate the number of subjects in the node, the number of subjects who have events, and the relative event rate compared to the root. In the node on the far left, the values 1/33 indicate that one of the 33 subjects in the node had an event, and that the relative event rate is 0.122. In the node on the far right bottom, the values 11/15 indicate that 11 of 15 subjects in the node had an event, and the relative event rate is 2.7.
An alternative to building a single survival tree is to build many survival trees, where each tree is constructed using a sample of the data, and average the trees to predict survival.[8]This is the method underlying the survival random forest models. Survival random forest analysis is available in the Rpackage "randomForestSRC".[11]
The randomForestSRC package includes an example survival random forest analysis using the data set pbc. This data is from the Mayo Clinic Primary Biliary Cirrhosis (PBC) trial of the liver conducted between 1974 and 1984. In the example, the random forest survival model gives more accurate predictions of survival than the Cox PH model. The prediction errors are estimated bybootstrap re-sampling.
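A sketch following the example shipped with randomForestSRC; variable names and defaults may differ between package versions.

library(randomForestSRC)
library(survival)

data(pbc, package = "randomForestSRC")
pbc.complete <- na.omit(pbc)                                # drop incomplete rows for simplicity
rf <- rfsrc(Surv(days, status) ~ ., data = pbc.complete)    # survival random forest
print(rf)                                                   # includes out-of-bag error estimate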
Recent advancements in deep representation learning have been extended to survival estimation. The DeepSurv[12]model proposes to replace the log-linear parameterization of the CoxPH model with a multi-layer perceptron. Further extensions like Deep Survival Machines[13]and Deep Cox Mixtures[14]involve the use of latent variable mixture models to model the time-to-event distribution as a mixture of parametric or semi-parametric distributions while jointly learning representations of the input covariates. Deep learning approaches have shown superior performance especially on complex input data modalities such as images and clinical time-series.
The object of primary interest is thesurvival function, conventionally denotedS, which is defined as
S(t)=Pr(T>t){\displaystyle S(t)=\Pr(T>t)}wheretis some time,Tis arandom variabledenoting the time of death, and "Pr" stands forprobability. That is, the survival function is the probability that the time of death is later than some specified timet.
The survival function is also called thesurvivor functionorsurvivorship functionin problems of biological survival, and thereliability functionin mechanical survival problems. In the latter case, the reliability function is denotedR(t).
Usually one assumesS(0) = 1, although it could be less than 1if there is the possibility of immediate death or failure.
The survival function must be non-increasing:S(u) ≤S(t) ifu≥t. This property follows directly becauseT>uimpliesT>t. This reflects the notion that survival to a later age is possible only if all younger ages are attained. Given this property, the lifetime distribution function and event density (Fandfbelow) are well-defined.
The survival function is usually assumed to approach zero as age increases without bound (i.e.,S(t) → 0 ast→ ∞), although the limit could be greater than zero if eternal life is possible. For instance, we could apply survival analysis to a mixture of stable and unstablecarbon isotopes; unstable isotopes would decay sooner or later, but the stable isotopes would last indefinitely.
Related quantities are defined in terms of the survival function.
Thelifetime distribution function, conventionally denotedF, is defined as the complement of the survival function,
F(t)=Pr(T≤t)=1−S(t).{\displaystyle F(t)=\Pr(T\leq t)=1-S(t).}IfFisdifferentiablethen the derivative, which is the density function of the lifetime distribution, is conventionally denotedf,
f(t)=F′(t)=ddtF(t).{\displaystyle f(t)=F'(t)={\frac {d}{dt}}F(t).}The functionfis sometimes called theevent density; it is the rate of death or failure events per unit time.
The survival function can be expressed in terms ofprobability distributionandprobability density functions
S(t)=Pr(T>t)=∫t∞f(u)du=1−F(t).{\displaystyle S(t)=\Pr(T>t)=\int _{t}^{\infty }f(u)\,du=1-F(t).}Similarly, a survival event density function can be defined as
s(t)=S′(t)=ddtS(t)=ddt∫t∞f(u)du=ddt[1−F(t)]=−f(t).{\displaystyle s(t)=S'(t)={\frac {d}{dt}}S(t)={\frac {d}{dt}}\int _{t}^{\infty }f(u)\,du={\frac {d}{dt}}[1-F(t)]=-f(t).}In other fields, such as statistical physics, the survival event density function is known as thefirst passage timedensity.
Thehazard functionh{\displaystyle h}is defined as the event rate at timet,{\displaystyle t,}conditional on survival at timet.{\displaystyle t.}
Synonyms forhazard functionin different fields include hazard rate,force of mortality(demographyandactuarial science, denoted byμ{\displaystyle \mu }), force of failure, orfailure rate(engineering, denotedλ{\displaystyle \lambda }). For example, in actuarial science,μ(x){\displaystyle \mu (x)}denotes rate of death for people agedx{\displaystyle x}, whereas inreliability engineeringλ(t){\displaystyle \lambda (t)}denotes rate of failure of components after operation for timet{\displaystyle t}.
Suppose that an item has survived for a timet{\displaystyle t}and we desire the probability that it will not survive for an additional timedt{\displaystyle dt}:
h(t)=limdt→0Pr(t≤T<t+dt)dt⋅S(t)=f(t)S(t)=−S′(t)S(t).{\displaystyle h(t)=\lim _{dt\rightarrow 0}{\frac {\Pr(t\leq T<t+dt)}{dt\cdot S(t)}}={\frac {f(t)}{S(t)}}=-{\frac {S'(t)}{S(t)}}.}
Any functionh{\displaystyle h}is a hazard function if and only if it is non-negative and its integral over[0,∞){\displaystyle [0,\infty )}is infinite, as detailed below.
In fact, the hazard rate is usually more informative about the underlying mechanism of failure than the other representations of a lifetime distribution.
The hazard function must be non-negative,λ(t)≥0{\displaystyle \lambda (t)\geq 0}, and its integral over[0,∞]{\displaystyle [0,\infty ]}must be infinite, but is not otherwise constrained; it may be increasing or decreasing, non-monotonic, or discontinuous. An example is thebathtub curvehazard function, which is large for small values oft{\displaystyle t}, decreasing to some minimum, and thereafter increasing again; this can model the property of some mechanical systems to either fail soon after operation, or much later, as the system ages.
The hazard function can alternatively be represented in terms of thecumulative hazard function, conventionally denotedΛ{\displaystyle \Lambda }orH{\displaystyle H}:
Λ(t)=−logS(t){\displaystyle \,\Lambda (t)=-\log S(t)}so transposing signs and exponentiating
S(t)=exp(−Λ(t)){\displaystyle \,S(t)=\exp(-\Lambda (t))}or differentiating (with the chain rule)
ddtΛ(t)=−S′(t)S(t)=λ(t).{\displaystyle {\frac {d}{dt}}\Lambda (t)=-{\frac {S'(t)}{S(t)}}=\lambda (t).}The name "cumulative hazard function" is derived from the fact that
Λ(t)=∫0tλ(u)du{\displaystyle \Lambda (t)=\int _{0}^{t}\lambda (u)\,du}which is the "accumulation" of the hazard over time.
From the definition ofΛ(t){\displaystyle \Lambda (t)}, we see that it increases without bound asttends to infinity (assuming thatS(t){\displaystyle S(t)}tends to zero). This implies thatλ(t){\displaystyle \lambda (t)}must not decrease too quickly, since, by definition, the cumulative hazard has to diverge. For example,exp(−t){\displaystyle \exp(-t)}is not the hazard function of any survival distribution, because its integral converges to 1.
The survival functionS(t){\displaystyle S(t)}, the cumulative hazard functionΛ(t){\displaystyle \Lambda (t)}, the densityf(t){\displaystyle f(t)}, the hazard functionλ(t){\displaystyle \lambda (t)}, and the lifetime distribution functionF(t){\displaystyle F(t)}are related throughS(t)=exp[−Λ(t)]=f(t)λ(t)=1−F(t),t>0.{\displaystyle S(t)=\exp[-\Lambda (t)]={\frac {f(t)}{\lambda (t)}}=1-F(t),\quad t>0.}
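These identities can be checked numerically for any standard distribution; a sketch for a Weibull distribution with arbitrarily chosen shape 1.5 and scale 10:

t      <- seq(0.5, 20, by = 0.5)
shape  <- 1.5; scale <- 10

S      <- pweibull(t, shape, scale, lower.tail = FALSE)   # survival S(t)
f      <- dweibull(t, shape, scale)                       # density f(t)
lambda <- f / S                                           # hazard
Lambda <- -log(S)                                         # cumulative hazard

max(abs(S - exp(-Lambda)))                     # ~ 0
max(abs(S - f / lambda))                       # ~ 0
max(abs(S - (1 - pweibull(t, shape, scale))))  # ~ 0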
Future lifetimeat a given timet0{\displaystyle t_{0}}is the time remaining until death, given survival to aget0{\displaystyle t_{0}}. Thus, it isT−t0{\displaystyle T-t_{0}}in the present notation. Theexpected future lifetimeis theexpected valueof future lifetime. The probability of death at or before aget0+t{\displaystyle t_{0}+t}, given survival until aget0{\displaystyle t_{0}}, is just
P(T≤t0+t∣T>t0)=P(t0<T≤t0+t)P(T>t0)=F(t0+t)−F(t0)S(t0).{\displaystyle P(T\leq t_{0}+t\mid T>t_{0})={\frac {P(t_{0}<T\leq t_{0}+t)}{P(T>t_{0})}}={\frac {F(t_{0}+t)-F(t_{0})}{S(t_{0})}}.}Therefore, the probability density of future lifetime is
ddtF(t0+t)−F(t0)S(t0)=f(t0+t)S(t0){\displaystyle {\frac {d}{dt}}{\frac {F(t_{0}+t)-F(t_{0})}{S(t_{0})}}={\frac {f(t_{0}+t)}{S(t_{0})}}}and the expected future lifetime is
1S(t0)∫0∞tf(t0+t)dt=1S(t0)∫t0∞S(t)dt,{\displaystyle {\frac {1}{S(t_{0})}}\int _{0}^{\infty }t\,f(t_{0}+t)\,dt={\frac {1}{S(t_{0})}}\int _{t_{0}}^{\infty }S(t)\,dt,}where the second expression is obtained usingintegration by parts.
Fort0=0{\displaystyle t_{0}=0}, that is, at birth, this reduces to the expected lifetime.
In reliability problems, the expected lifetime is called themean time to failure, and the expected future lifetime is called themean residual lifetime.
As the probability of an individual surviving until agetor later isS(t), by definition, the expected number of survivors at agetout of an initialpopulationofnnewborns isn×S(t), assuming the same survival function for all individuals. Thus the expected proportion of survivors isS(t).
If the survival of different individuals is independent, the number of survivors at agethas abinomial distributionwith parametersnandS(t), and thevarianceof the proportion of survivors isS(t) × (1-S(t))/n.
The age at which a specified proportion of survivors remain can be found by solving the equationS(t) =qfort, whereqis thequantilein question. Typically one is interested in themedianlifetime, for whichq= 1/2, or other quantiles such asq= 0.90 orq= 0.99.
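A sketch of solving S(t) = q numerically with uniroot(), here for the median (q = 1/2) of a Weibull survival function with the same arbitrary parameters as in the earlier sketch:

S <- function(t) pweibull(t, shape = 1.5, scale = 10, lower.tail = FALSE)

uniroot(function(t) S(t) - 0.5, interval = c(1e-6, 1e3))$root   # numerical median lifetime
qweibull(0.5, shape = 1.5, scale = 10)                          # closed-form check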
Censoringis a form of missing data problem in which time to event is not observed for reasons such as termination of study before all recruited subjects have shown the event of interest or the subject has left the study prior to experiencing an event. Censoring is common in survival analysis.
If only the lower limitlfor the true event timeTis known such thatT>l, this is calledright censoring. Right censoring will occur, for example, for those subjects whose birth date is known but who are still alive when they arelost to follow-upor when the study ends. We generally encounter right-censored data.
If the event of interest has already happened before the subject is included in the study but it is not known when it occurred, the data is said to beleft-censored.[15]When it can only be said that the event happened between two observations or examinations, this isinterval censoring.
Left censoring occurs for example when a permanent tooth has already emerged prior to the start of a dental study that aims to estimate its emergence distribution. In the same study, an emergence time is interval-censored when the permanent tooth is present in the mouth at the current examination but not yet at the previous examination. Interval censoring often occurs in HIV/AIDS studies. Indeed, time to HIV seroconversion can be determined only by a laboratory assessment which is usually initiated after a visit to the physician. Then one can only conclude that HIV seroconversion has happened between two examinations. The same is true for the diagnosis of AIDS, which is based on clinical symptoms and needs to be confirmed by a medical examination.
It may also happen that subjects with a lifetime less than some threshold may not be observed at all: this is calledtruncation. Note that truncation is different from left censoring, since for a left censored datum, we know the subject exists, but for a truncated datum, we may be completely unaware of the subject. Truncation is also common. In a so-calleddelayed entrystudy, subjects are not observed at all until they have reached a certain age. For example, people may not be observed until they have reached the age to enter school. Any deceased subjects in the pre-school age group would be unknown. Left-truncated data are common inactuarial workforlife insuranceandpensions.[16]
Left-censored data can occur when a person's survival time becomes incomplete on the left side of the follow-up period for the person. For example, in an epidemiological example, we may monitor a patient for an infectious disorder starting from the time when he or she is tested positive for the infection. Although we may know the right-hand side of the duration of interest, we may never know the exact time of exposure to the infectious agent.[17]
Survival models can be usefully viewed as ordinary regression models in which the response variable is time. However, computing the likelihood function (needed for fitting parameters or making other kinds of inferences) is complicated by the censoring. Thelikelihood functionfor a survival model, in the presence of censored data, is formulated as follows. By definition the likelihood function is theconditional probabilityof the data given the parameters of the model.
It is customary to assume that the data are independent given the parameters. Then the likelihood function is the product of the likelihood of each datum. It is convenient to partition the data into four categories: uncensored, left censored, right censored, and interval censored. These are denoted "unc.", "l.c.", "r.c.", and "i.c." in the equation below.
L(θ)=∏Ti∈unc.Pr(T=Ti∣θ)∏i∈l.c.Pr(T<Ti∣θ)∏i∈r.c.Pr(T>Ti∣θ)∏i∈i.c.Pr(Ti,l<T<Ti,r∣θ).{\displaystyle L(\theta )=\prod _{T_{i}\in unc.}\Pr(T=T_{i}\mid \theta )\prod _{i\in l.c.}\Pr(T<T_{i}\mid \theta )\prod _{i\in r.c.}\Pr(T>T_{i}\mid \theta )\prod _{i\in i.c.}\Pr(T_{i,l}<T<T_{i,r}\mid \theta ).}For uncensored data, withTi{\displaystyle T_{i}}equal to the age at death, we have
Pr(T=Ti∣θ)=f(Ti∣θ).{\displaystyle \Pr(T=T_{i}\mid \theta )=f(T_{i}\mid \theta ).}For left-censored data, such that the age at death is known to be less thanTi{\displaystyle T_{i}}, we have
Pr(T<Ti∣θ)=F(Ti∣θ)=1−S(Ti∣θ).{\displaystyle \Pr(T<T_{i}\mid \theta )=F(T_{i}\mid \theta )=1-S(T_{i}\mid \theta ).}For right-censored data, such that the age at death is known to be greater thanTi{\displaystyle T_{i}}, we have
Pr(T>Ti∣θ)=1−F(Ti∣θ)=S(Ti∣θ).{\displaystyle \Pr(T>T_{i}\mid \theta )=1-F(T_{i}\mid \theta )=S(T_{i}\mid \theta ).}For an interval censored datum, such that the age at death is known to be less thanTi,r{\displaystyle T_{i,r}}and greater thanTi,l{\displaystyle T_{i,l}}, we have
Pr(Ti,l<T<Ti,r∣θ)=S(Ti,l∣θ)−S(Ti,r∣θ).{\displaystyle \Pr(T_{i,l}<T<T_{i,r}\mid \theta )=S(T_{i,l}\mid \theta )-S(T_{i,r}\mid \theta ).}An important application where interval-censored data arises is current status data, where an eventTi{\displaystyle T_{i}}is known not to have occurred before an observation time and to have occurred before the next observation time.
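A sketch of how this likelihood is used in practice: simulated right-censored data from a Weibull model, with density terms for observed events and survival terms for censored observations, maximized numerically. The data and parameter values are hypothetical.

set.seed(1)
T.true <- rweibull(200, shape = 1.5, scale = 10)   # latent event times
C      <- runif(200, 0, 15)                        # censoring times
u      <- pmin(T.true, C)                          # observed times
d      <- as.numeric(T.true <= C)                  # 1 = event, 0 = right-censored

negloglik <- function(par) {
  shape <- exp(par[1]); scale <- exp(par[2])       # log-parameterization keeps both positive
  -sum(d * dweibull(u, shape, scale, log = TRUE) +
       (1 - d) * pweibull(u, shape, scale, lower.tail = FALSE, log.p = TRUE))
}
fit <- optim(c(0, 2), negloglik)
exp(fit$par)                                       # estimated shape and scale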
TheKaplan–Meier estimatorcan be used to estimate the survival function. TheNelson–Aalen estimatorcan be used to provide anon-parametricestimate of the cumulative hazard rate function. These estimators require lifetime data. Periodic case (cohort) and death (and recovery) counts are statistically sufficient to make nonparametric maximum likelihood and least squares estimates of survival functions, without lifetime data.
While many parametric models assume continuous time, discrete-time survival models can be mapped to a binary classification problem. In a discrete-time survival model the survival period is artificially resampled in intervals where for each interval a binary target indicator is recorded if the event takes place in a certain time horizon.[18]If a binary classifier (potentially enhanced with a different likelihood to take more structure of the problem into account) iscalibrated, then the classifier score is the hazard function (i.e. the conditional probability of failure).[18]
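A sketch of this mapping: each subject is expanded into one row per interval survived (a "person-period" data set) with a binary indicator for whether the event occurred in that interval, and a logistic regression estimates the discrete-time hazard. The data are hypothetical; the interval could also be entered as a factor for a nonparametric baseline hazard.

time   <- c(3, 5, 2, 6, 4)   # hypothetical follow-up measured in whole intervals
status <- c(1, 0, 1, 1, 0)   # 1 = event in last interval, 0 = censored

person.period <- do.call(rbind, lapply(seq_along(time), function(i) {
  data.frame(id       = i,
             interval = seq_len(time[i]),
             event    = c(rep(0, time[i] - 1), status[i]))
}))

fit <- glm(event ~ interval, family = binomial, data = person.period)
head(predict(fit, type = "response"))   # fitted conditional failure probabilities (discrete hazard)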
Discrete-time survival models are connected toempirical likelihood.[19][20]
The goodness of fit of survival models can be assessed usingscoring rules.[21]
The textbook by Kleinbaum has examples of survival analyses using SAS, R, and other packages.[22]The textbooks by Brostrom,[23]Dalgaard[3]and Tableman and Kim[24]give examples of survival analyses using R (or using S, and which run in R).
|
https://en.wikipedia.org/wiki/Survival_analysis#Discrete-time_survival_models
|
Instatistics,censoringis a condition in which thevalueof ameasurementorobservationis only partially known.
For example, suppose a study is conducted to measure the impact of a drug onmortality rate. In such a study, it may be known that an individual's age at death isat least75 years (but may be more). Such a situation could occur if the individual withdrew from the study at age 75, or if the individual is currently alive at the age of 75.
Censoring also occurs when a value occurs outside the range of ameasuring instrument. For example, a bathroom scale might only measure up to 140 kg. If a 160 kg individual is weighed using the scale, the observer would only know that the individual's weight is at least 140 kg.
The problem of censored data, in which the observed value of some variable is partially known, is related to the problem ofmissing data, where the observed value of some variable is unknown.
Censoring should not be confused with the related idea oftruncation. With censoring, observations result either in knowing the exact value that applies, or in knowing that the value lies within aninterval. With truncation, observations never result in values outside a given range: values in the population outside the range are never seen or never recorded if they are seen. Note that in statistics, truncation is not the same asrounding.
Interval censoring can occur when observing a value requires follow-ups or inspections. Left and right censoring are special cases of interval censoring, with the beginning of the interval at zero or the end at infinity, respectively.
Estimation methodsfor using left-censored data vary, and not all methods of estimation may be applicable to, or the most reliable, for all data sets.[1]
A common misconception with time interval data is to class asleft censoredintervals when the start time is unknown. In these cases, we have a lower bound on the timeinterval; thus, the data isright censored(despite the fact that the missing start point is to the left of the known interval when viewed as a timeline!).
Special techniques may be used to handle censored data. Tests with specific failure times are coded as actual failures; censored data are coded for the type of censoring and the known interval or limit. Special software programs (oftenreliabilityoriented) can conduct amaximum likelihood estimationfor summary statistics, confidence intervals, etc.
One of the earliest attempts to analyse a statistical problem involving censored data wasDaniel Bernoulli's 1766 analysis ofsmallpoxmorbidity and mortality data to demonstrate the efficacy ofvaccination.[2]An early paper to use theKaplan–Meier estimatorfor estimating censored costs was Quesenberry et al. (1989);[3]however, this approach was later shown by Lin et al.[4]to be invalid unless all patients accumulate costs at a common deterministic rate over time, and they proposed an alternative estimation technique known as the Lin estimator.[5]
Reliabilitytesting often consists of conducting a test on an item (under specified conditions) to determine the time it takes for a failure to occur.
An analysis of the data from replicate tests includes both the times-to-failure for the items that failed and the time-of-test-termination for those that did not fail.
An earlier model forcensored regression, thetobit model, was proposed byJames Tobinin 1958.[6]
Thelikelihoodis the probability or probability density of what was observed, viewed as a function of parameters in an assumed model. To incorporate censored data points in the likelihood, each censored point is represented by the probability of the censoring event as a function of the model parameters, i.e. by a function of the CDF(s) instead of the density or probability mass.
The most general censoring case is interval censoring:Pr(a<x⩽b)=F(b)−F(a){\displaystyle Pr(a<x\leqslant b)=F(b)-F(a)}, whereF(x){\displaystyle F(x)}is the CDF of the probability distribution, and the two special cases are left censoring,Pr(x⩽b)=F(b){\displaystyle Pr(x\leqslant b)=F(b)}, and right censoring,Pr(a<x)=1−F(a){\displaystyle Pr(a<x)=1-F(a)}.
For continuous probability distributions:Pr(a<x⩽b)=Pr(a<x<b){\displaystyle Pr(a<x\leqslant b)=Pr(a<x<b)}
Suppose we are interested in survival times,T1,T2,...,Tn{\displaystyle T_{1},T_{2},...,T_{n}}, but we don't observeTi{\displaystyle T_{i}}for alli{\displaystyle i}. Instead, we observe the follow-up timeui=min(Ti,Ui){\displaystyle u_{i}=\min(T_{i},U_{i})}and the event indicatorδi{\displaystyle \delta _{i}}, which equals 1 ifTi≤Ui{\displaystyle T_{i}\leq U_{i}}and 0 otherwise.
WhenTi>Ui,Ui{\displaystyle T_{i}>U_{i},U_{i}}is called thecensoring time.[7]
If the censoring times are all known constants, then the likelihood isL=∏if(ui)δiS(ui)1−δi,{\displaystyle L=\prod _{i}f(u_{i})^{\delta _{i}}\,S(u_{i})^{1-\delta _{i}},}
wheref(ui){\displaystyle f(u_{i})}= the probability density function evaluated atui{\displaystyle u_{i}},
andS(ui){\displaystyle S(u_{i})}= the probability thatTi{\displaystyle T_{i}}is greater thanui{\displaystyle u_{i}}, called thesurvival function.
This can be simplified by defining thehazard function, the instantaneous force of mortality, asλ(ui)=f(ui)S(ui){\displaystyle \lambda (u_{i})={\frac {f(u_{i})}{S(u_{i})}}}
sof(ui)=λ(ui)S(ui).{\displaystyle f(u_{i})=\lambda (u_{i})\,S(u_{i}).}
ThenL=∏iλ(ui)δiS(ui).{\displaystyle L=\prod _{i}\lambda (u_{i})^{\delta _{i}}\,S(u_{i}).}
For theexponential distribution, this becomes even simpler, because the hazard rate,λ{\displaystyle \lambda }, is constant, andS(u)=exp(−λu){\displaystyle S(u)=\exp(-\lambda u)}. Then:L(λ)=λkexp(−λ∑ui),{\displaystyle L(\lambda )=\lambda ^{k}\exp \left(-\lambda \sum u_{i}\right),}
wherek=∑δi{\displaystyle k=\sum {\delta _{i}}}.
From this we easily computeλ^{\displaystyle {\hat {\lambda }}}, themaximum likelihood estimate (MLE)ofλ{\displaystyle \lambda }, as follows. The log-likelihood islog⁡L(λ)=klog⁡λ−λ∑ui.{\displaystyle \log L(\lambda )=k\log \lambda -\lambda \sum u_{i}.}
Thenddλlog⁡L(λ)=kλ−∑ui.{\displaystyle {\frac {d}{d\lambda }}\log L(\lambda )={\frac {k}{\lambda }}-\sum u_{i}.}
We set this to 0 and solve forλ{\displaystyle \lambda }to get:λ^=k∑ui.{\displaystyle {\hat {\lambda }}={\frac {k}{\sum u_{i}}}.}
Equivalently, themean time to failureis:1λ^=∑uik.{\displaystyle {\frac {1}{\,{\hat {\lambda }}\,}}={\frac {\sum u_{i}}{k}}.}
This differs from the standard MLE for theexponential distributionin that censored observations contribute only to the numerator (through their observation times) and not to the countk{\displaystyle k}in the denominator.
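A quick numerical check of this closed form, using simulated exponential failure times with a fixed censoring time (all values hypothetical):

set.seed(2)
T.true <- rexp(500, rate = 0.2)        # latent failure times
u      <- pmin(T.true, 4)              # observe each unit for at most 4 time units
delta  <- as.numeric(T.true <= 4)      # 1 = failure observed, 0 = censored

sum(delta) / sum(u)                    # closed-form MLE k / sum(u_i), close to 0.2
optimize(function(l) sum(delta) * log(l) - l * sum(u),
         interval = c(1e-6, 5), maximum = TRUE)$maximum   # numerical check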
|
https://en.wikipedia.org/wiki/Censoring_(statistics)
|
Indata analysisinvolving geographical locations,geo-imputationorgeographical imputationmethods are steps taken to replacemissing valuesfor exact locations with approximate locations derived from associated data. They assign a reasonable location or geographic-based attribute (e.g.,census tract) to a person by using both the demographic characteristics of the person and the population characteristics from a larger geographic aggregate area in which the person was geocoded (e.g., postal delivery area or county). For example, if a person's census tract was known and no other address information was available, then geo-imputation methods could be used to probabilistically assign that person to a smaller geographic area, such as a census block group.[1]
|
https://en.wikipedia.org/wiki/Geo-imputation
|
Instatistics,maximum likelihood estimation(MLE) is a method ofestimatingtheparametersof an assumedprobability distribution, given some observed data. This is achieved bymaximizingalikelihood functionso that, under the assumedstatistical model, theobserved datais most probable. Thepointin theparameter spacethat maximizes the likelihood function is called the maximum likelihood estimate.[1]The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means ofstatistical inference.[2][3][4]
If the likelihood function isdifferentiable, thederivative testfor finding maxima can be applied. In some cases, the first-order conditions of the likelihood function can be solved analytically; for instance, theordinary least squaresestimator for alinear regressionmodel maximizes the likelihood when the random errors are assumed to havenormaldistributions with the same variance.[5]
From the perspective ofBayesian inference, MLE is generally equivalent tomaximum a posteriori (MAP) estimationwith aprior distributionthat isuniformin the region of interest. Infrequentist inference, MLE is a special case of anextremum estimator, with the objective function being the likelihood.
We model a set of observations as a randomsamplefrom an unknownjoint probability distributionwhich is expressed in terms of a set ofparameters. The goal of maximum likelihood estimation is to determine the parameters for which the observed data have the highest joint probability. We write the parameters governing the joint distribution as a vectorθ=[θ1,θ2,…,θk]T{\displaystyle \;\theta =\left[\theta _{1},\,\theta _{2},\,\ldots ,\,\theta _{k}\right]^{\mathsf {T}}\;}so that this distribution falls within aparametric family{f(⋅;θ)∣θ∈Θ},{\displaystyle \;\{f(\cdot \,;\theta )\mid \theta \in \Theta \}\;,}whereΘ{\displaystyle \,\Theta \,}is called theparameter space, a finite-dimensional subset ofEuclidean space. Evaluating the joint density at the observed data sampley=(y1,y2,…,yn){\displaystyle \;\mathbf {y} =(y_{1},y_{2},\ldots ,y_{n})\;}gives a real-valued function,Ln(θ)=Ln(θ;y)=fn(y;θ),{\displaystyle {\mathcal {L}}_{n}(\theta )={\mathcal {L}}_{n}(\theta ;\mathbf {y} )=f_{n}(\mathbf {y} ;\theta )\;,}which is called thelikelihood function. Forindependent random variables,fn(y;θ){\displaystyle f_{n}(\mathbf {y} ;\theta )}will be the product of univariatedensity functions:fn(y;θ)=∏k=1nfkunivar(yk;θ).{\displaystyle f_{n}(\mathbf {y} ;\theta )=\prod _{k=1}^{n}\,f_{k}^{\mathsf {univar}}(y_{k};\theta )~.}
The goal of maximum likelihood estimation is to find the values of the model parameters that maximize the likelihood function over the parameter space,[6]that is:θ^=argmaxθ∈ΘLn(θ;y).{\displaystyle {\hat {\theta }}={\underset {\theta \in \Theta }{\operatorname {arg\;max} }}\,{\mathcal {L}}_{n}(\theta \,;\mathbf {y} )~.}
Intuitively, this selects the parameter values that make the observed data most probable. The specific valueθ^=θ^n(y)∈Θ{\displaystyle ~{\hat {\theta }}={\hat {\theta }}_{n}(\mathbf {y} )\in \Theta ~}that maximizes the likelihood functionLn{\displaystyle \,{\mathcal {L}}_{n}\,}is called the maximum likelihood estimate. Further, if the functionθ^n:Rn→Θ{\displaystyle \;{\hat {\theta }}_{n}:\mathbb {R} ^{n}\to \Theta \;}so defined ismeasurable, then it is called the maximum likelihoodestimator. It is generally a function defined over thesample space, i.e. taking a given sample as its argument. Asufficient but not necessarycondition for its existence is for the likelihood function to becontinuousover a parameter spaceΘ{\displaystyle \,\Theta \,}that iscompact.[7]For anopenΘ{\displaystyle \,\Theta \,}the likelihood function may increase without ever reaching a supremum value.
In practice, it is often convenient to work with thenatural logarithmof the likelihood function, called thelog-likelihood:ℓ(θ;y)=lnLn(θ;y).{\displaystyle \ell (\theta \,;\mathbf {y} )=\ln {\mathcal {L}}_{n}(\theta \,;\mathbf {y} )~.}Since the logarithm is amonotonic function, the maximum ofℓ(θ;y){\displaystyle \;\ell (\theta \,;\mathbf {y} )\;}occurs at the same value ofθ{\displaystyle \theta }as does the maximum ofLn.{\displaystyle \,{\mathcal {L}}_{n}~.}[8]Ifℓ(θ;y){\displaystyle \ell (\theta \,;\mathbf {y} )}isdifferentiableinΘ,{\displaystyle \,\Theta \,,}sufficient conditionsfor the occurrence of a maximum (or a minimum) are∂ℓ∂θ1=0,∂ℓ∂θ2=0,…,∂ℓ∂θk=0,{\displaystyle {\frac {\partial \ell }{\partial \theta _{1}}}=0,\quad {\frac {\partial \ell }{\partial \theta _{2}}}=0,\quad \ldots ,\quad {\frac {\partial \ell }{\partial \theta _{k}}}=0~,}known as the likelihood equations. For some models, these equations can be explicitly solved forθ^,{\displaystyle \,{\widehat {\theta \,}}\,,}but in general no closed-form solution to the maximization problem is known or available, and an MLE can only be found vianumerical optimization. Another problem is that in finite samples, there may exist multiplerootsfor the likelihood equations.[9]Whether the identified rootθ^{\displaystyle \,{\widehat {\theta \,}}\,}of the likelihood equations is indeed a (local) maximum depends on whether the matrix of second-order partial and cross-partial derivatives, the so-calledHessian matrix
H(θ^)=[∂2ℓ∂θ12|θ=θ^∂2ℓ∂θ1∂θ2|θ=θ^…∂2ℓ∂θ1∂θk|θ=θ^∂2ℓ∂θ2∂θ1|θ=θ^∂2ℓ∂θ22|θ=θ^…∂2ℓ∂θ2∂θk|θ=θ^⋮⋮⋱⋮∂2ℓ∂θk∂θ1|θ=θ^∂2ℓ∂θk∂θ2|θ=θ^…∂2ℓ∂θk2|θ=θ^],{\displaystyle \mathbf {H} \left({\widehat {\theta \,}}\right)={\begin{bmatrix}\left.{\frac {\partial ^{2}\ell }{\partial \theta _{1}^{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\left.{\frac {\partial ^{2}\ell }{\partial \theta _{1}\,\partial \theta _{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\dots &\left.{\frac {\partial ^{2}\ell }{\partial \theta _{1}\,\partial \theta _{k}}}\right|_{\theta ={\widehat {\theta \,}}}\\\left.{\frac {\partial ^{2}\ell }{\partial \theta _{2}\,\partial \theta _{1}}}\right|_{\theta ={\widehat {\theta \,}}}&\left.{\frac {\partial ^{2}\ell }{\partial \theta _{2}^{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\dots &\left.{\frac {\partial ^{2}\ell }{\partial \theta _{2}\,\partial \theta _{k}}}\right|_{\theta ={\widehat {\theta \,}}}\\\vdots &\vdots &\ddots &\vdots \\\left.{\frac {\partial ^{2}\ell }{\partial \theta _{k}\,\partial \theta _{1}}}\right|_{\theta ={\widehat {\theta \,}}}&\left.{\frac {\partial ^{2}\ell }{\partial \theta _{k}\,\partial \theta _{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\dots &\left.{\frac {\partial ^{2}\ell }{\partial \theta _{k}^{2}}}\right|_{\theta ={\widehat {\theta \,}}}\end{bmatrix}}~,}
isnegative semi-definiteatθ^{\displaystyle {\widehat {\theta \,}}}, as this indicates localconcavity. Conveniently, most commonprobability distributions– in particular theexponential family– arelogarithmically concave.[10][11]
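When the likelihood equations have no closed-form solution, the log-likelihood is maximized numerically. A sketch for a gamma distribution, whose shape parameter has no closed-form MLE, with simulated data and arbitrary true parameters:

set.seed(3)
x <- rgamma(1000, shape = 2.5, rate = 0.8)

negloglik <- function(par) {
  shape <- exp(par[1]); rate <- exp(par[2])   # keep parameters positive
  -sum(dgamma(x, shape = shape, rate = rate, log = TRUE))
}
fit <- optim(c(0, 0), negloglik, hessian = TRUE)
exp(fit$par)         # MLE of shape and rate
solve(fit$hessian)   # approximate covariance matrix of the log-parameters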
While the domain of the likelihood function—theparameter space—is generally a finite-dimensional subset ofEuclidean space, additionalrestrictionssometimes need to be incorporated into the estimation process. The parameter space can be expressed asΘ={θ:θ∈Rk,h(θ)=0},{\displaystyle \Theta =\left\{\theta :\theta \in \mathbb {R} ^{k},\;h(\theta )=0\right\}~,}
whereh(θ)=[h1(θ),h2(θ),…,hr(θ)]{\displaystyle \;h(\theta )=\left[h_{1}(\theta ),h_{2}(\theta ),\ldots ,h_{r}(\theta )\right]\;}is avector-valued functionmappingRk{\displaystyle \,\mathbb {R} ^{k}\,}intoRr.{\displaystyle \;\mathbb {R} ^{r}~.}Estimating the true parameterθ{\displaystyle \theta }belonging toΘ{\displaystyle \Theta }then, as a practical matter, means to find the maximum of the likelihood function subject to theconstrainth(θ)=0.{\displaystyle ~h(\theta )=0~.}
Theoretically, the most natural approach to thisconstrained optimizationproblem is the method of substitution, that is "filling out" the restrictionsh1,h2,…,hr{\displaystyle \;h_{1},h_{2},\ldots ,h_{r}\;}to a seth1,h2,…,hr,hr+1,…,hk{\displaystyle \;h_{1},h_{2},\ldots ,h_{r},h_{r+1},\ldots ,h_{k}\;}in such a way thath∗=[h1,h2,…,hk]{\displaystyle \;h^{\ast }=\left[h_{1},h_{2},\ldots ,h_{k}\right]\;}is aone-to-one functionfromRk{\displaystyle \mathbb {R} ^{k}}to itself, and reparameterize the likelihood function by settingϕi=hi(θ1,θ2,…,θk).{\displaystyle \;\phi _{i}=h_{i}(\theta _{1},\theta _{2},\ldots ,\theta _{k})~.}[12]Because of the equivariance of the maximum likelihood estimator, the properties of the MLE apply to the restricted estimates also.[13]For instance, in amultivariate normal distributionthecovariance matrixΣ{\displaystyle \,\Sigma \,}must bepositive-definite; this restriction can be imposed by replacingΣ=ΓTΓ,{\displaystyle \;\Sigma =\Gamma ^{\mathsf {T}}\Gamma \;,}whereΓ{\displaystyle \Gamma }is a realupper triangular matrixandΓT{\displaystyle \Gamma ^{\mathsf {T}}}is itstranspose.[14]
In practice, restrictions are usually imposed using the method of Lagrange which, given the constraints as defined above, leads to therestricted likelihood equations∂ℓ∂θ−∂h(θ)T∂θλ=0{\displaystyle {\frac {\partial \ell }{\partial \theta }}-{\frac {\partial h(\theta )^{\mathsf {T}}}{\partial \theta }}\lambda =0}andh(θ)=0,{\displaystyle h(\theta )=0\;,}
whereλ=[λ1,λ2,…,λr]T{\displaystyle ~\lambda =\left[\lambda _{1},\lambda _{2},\ldots ,\lambda _{r}\right]^{\mathsf {T}}~}is a column-vector ofLagrange multipliersand∂h(θ)T∂θ{\displaystyle \;{\frac {\partial h(\theta )^{\mathsf {T}}}{\partial \theta }}\;}is thek × rJacobian matrixof partial derivatives.[12]Naturally, if the constraints are not binding at the maximum, the Lagrange multipliers should be zero.[15]This in turn allows for a statistical test of the "validity" of the constraint, known as theLagrange multiplier test.
Nonparametric maximum likelihood estimation can be performed using theempirical likelihood.
A maximum likelihood estimator is anextremum estimatorobtained by maximizing, as a function ofθ, theobjective functionℓ^(θ;x){\displaystyle {\widehat {\ell \,}}(\theta \,;x)}. If the data areindependent and identically distributed, then we haveℓ^(θ;x)=∑i=1nlnf(xi∣θ),{\displaystyle {\widehat {\ell \,}}(\theta \,;x)=\sum _{i=1}^{n}\ln f(x_{i}\mid \theta ),}this being the sample analogue of the expected log-likelihoodℓ(θ)=E[lnf(xi∣θ)]{\displaystyle \ell (\theta )=\operatorname {\mathbb {E} } [\,\ln f(x_{i}\mid \theta )\,]}, where this expectation is taken with respect to the true density.
Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may have greater concentration around the true parameter-value.[16]However, like other estimation methods, maximum likelihood estimation possesses a number of attractivelimiting properties: As the sample size increases to infinity, sequences of maximum likelihood estimators have these properties:
Under the conditions outlined below, the maximum likelihood estimator isconsistent. The consistency means that if the data were generated byf(⋅;θ0){\displaystyle f(\cdot \,;\theta _{0})}and we have a sufficiently large number of observationsn, then it is possible to find the value ofθ0with arbitrary precision. In mathematical terms this means that asngoes to infinity the estimatorθ^{\displaystyle {\widehat {\theta \,}}}converges in probabilityto its true value:θ^mle→pθ0.{\displaystyle {\widehat {\theta \,}}_{\mathrm {mle} }\ {\xrightarrow {\text{p}}}\ \theta _{0}.}
Under slightly stronger conditions, the estimator convergesalmost surely(orstrongly):θ^mle→a.s.θ0.{\displaystyle {\widehat {\theta \,}}_{\mathrm {mle} }\ {\xrightarrow {\text{a.s.}}}\ \theta _{0}.}
In practical applications, data is never generated byf(⋅;θ0){\displaystyle f(\cdot \,;\theta _{0})}. Rather,f(⋅;θ0){\displaystyle f(\cdot \,;\theta _{0})}is a model, often in idealized form, of the process that generated the data. It is a common aphorism in statistics thatall models are wrong. Thus, true consistency does not occur in practical applications. Nevertheless, consistency is often considered to be a desirable property for an estimator to have.
To establish consistency, the following conditions are sufficient.[17]
θ≠θ0⇔f(⋅∣θ)≠f(⋅∣θ0).{\displaystyle \theta \neq \theta _{0}\quad \Leftrightarrow \quad f(\cdot \mid \theta )\neq f(\cdot \mid \theta _{0}).}In other words, different parameter valuesθcorrespond to different distributions within the model. If this condition did not hold, there would be some valueθ1such thatθ0andθ1generate an identical distribution of the observable data. Then we would not be able to distinguish between these two parameters even with an infinite amount of data—these parameters would have beenobservationally equivalent.
The identification condition establishes that the log-likelihood has a unique global maximum. Compactness implies that the likelihood cannot approach the maximum value arbitrarily close at some other point (as demonstrated for example in the picture on the right).
Compactness is only a sufficient condition and not a necessary condition. Compactness can be replaced by some other conditions, such as:
P[lnf(x∣θ)∈C0(Θ)]=1.{\displaystyle \operatorname {\mathbb {P} } {\Bigl [}\;\ln f(x\mid \theta )\;\in \;C^{0}(\Theta )\;{\Bigr ]}=1.}
The dominance condition can be employed in the case ofi.i.d.observations. In the non-i.i.d. case, the uniform convergence in probability can be checked by showing that the sequenceℓ^(θ∣x){\displaystyle {\widehat {\ell \,}}(\theta \mid x)}isstochastically equicontinuous.
If one wants to demonstrate that the ML estimatorθ^{\displaystyle {\widehat {\theta \,}}}converges toθ0almost surely, then a stronger condition of uniform convergence almost surely has to be imposed:supθ∈Θ‖ℓ^(θ∣x)−ℓ(θ)‖→a.s.0.{\displaystyle \sup _{\theta \in \Theta }\left\|\;{\widehat {\ell \,}}(\theta \mid x)-\ell (\theta )\;\right\|\ \xrightarrow {\text{a.s.}} \ 0.}
Additionally, if (as assumed above) the data were generated byf(⋅;θ0){\displaystyle f(\cdot \,;\theta _{0})}, then under certain conditions, it can also be shown that the maximum likelihood estimatorconverges in distributionto a normal distribution. Specifically,[18]n(θ^mle−θ0)→dN(0,I−1){\displaystyle {\sqrt {n}}\left({\widehat {\theta \,}}_{\mathrm {mle} }-\theta _{0}\right)\ \xrightarrow {d} \ {\mathcal {N}}\left(0,\,I^{-1}\right)}whereIis theFisher information matrix.
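A small simulation sketch of this asymptotic normality for the exponential rate, whose MLE is 1/mean(x) and whose Fisher information is 1/λ², so the centered and scaled estimator should be approximately N(0, λ0²):

set.seed(4)
lambda0 <- 2; n <- 5000
z <- replicate(2000, {
  x <- rexp(n, rate = lambda0)
  sqrt(n) * (1 / mean(x) - lambda0)   # centered and scaled MLE
})
c(mean(z), sd(z))   # mean near 0, standard deviation near lambda0 = 2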
The maximum likelihood estimator selects the parameter value which gives the observed data the largest possible probability (or probability density, in the continuous case). If the parameter consists of a number of components, then we define their separate maximum likelihood estimators, as the corresponding component of the MLE of the complete parameter. Consistent with this, ifθ^{\displaystyle {\widehat {\theta \,}}}is the MLE forθ{\displaystyle \theta }, and ifg(θ){\displaystyle g(\theta )}is any transformation ofθ{\displaystyle \theta }, then the MLE forα=g(θ){\displaystyle \alpha =g(\theta )}is by definition[19]
α^=g(θ^).{\displaystyle {\widehat {\alpha }}=g(\,{\widehat {\theta \,}}\,).\,}
It maximizes the so-calledprofile likelihood:
L¯(α)=supθ:α=g(θ)L(θ).{\displaystyle {\bar {L}}(\alpha )=\sup _{\theta :\alpha =g(\theta )}L(\theta ).\,}
The MLE is also equivariant with respect to certain transformations of the data. Ify=g(x){\displaystyle y=g(x)}whereg{\displaystyle g}is one to one and does not depend on the parameters to be estimated, then the density functions satisfy
fY(y)=fX(g−1(y))|(g−1(y))′|{\displaystyle f_{Y}(y)=f_{X}(g^{-1}(y))\,|(g^{-1}(y))^{\prime }|}
and hence the likelihood functions forX{\displaystyle X}andY{\displaystyle Y}differ only by a factor that does not depend on the model parameters.
For example, the MLE parameters of the log-normal distribution are the same as those of the normal distribution fitted to the logarithm of the data. In fact, in the log-normal case ifX∼N(0,1){\displaystyle X\sim {\mathcal {N}}(0,1)}, thenY=g(X)=eX{\displaystyle Y=g(X)=e^{X}}follows alog-normal distribution. The density of Y follows withfX{\displaystyle f_{X}}standardNormalandg−1(y)=log(y){\displaystyle g^{-1}(y)=\log(y)},|(g−1(y))′|=1y{\displaystyle |(g^{-1}(y))^{\prime }|={\frac {1}{y}}}fory>0{\displaystyle y>0}.
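A sketch of this equivalence using MASS::fitdistr with simulated data (the density names follow current MASS documentation):

library(MASS)
set.seed(5)
x <- rlnorm(500, meanlog = 1, sdlog = 0.5)

fitdistr(x, "lognormal")     # MLE of meanlog and sdlog
fitdistr(log(x), "normal")   # MLE of mean and sd of log(x): the same estimates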
As assumed above, if the data were generated byf(⋅;θ0),{\displaystyle ~f(\cdot \,;\theta _{0})~,}then under certain conditions, it can also be shown that the maximum likelihood estimatorconverges in distributionto a normal distribution. It is√n-consistent and asymptotically efficient, meaning that it reaches theCramér–Rao bound. Specifically,[18]
n(θ^mle−θ0)→dN(0,I−1),{\displaystyle {\sqrt {n\,}}\,\left({\widehat {\theta \,}}_{\text{mle}}-\theta _{0}\right)\ \ \xrightarrow {d} \ \ {\mathcal {N}}\left(0,\ {\mathcal {I}}^{-1}\right)~,}whereI{\displaystyle ~{\mathcal {I}}~}is theFisher information matrix:Ijk=E[−∂2lnfθ0(Xt)∂θj∂θk].{\displaystyle {\mathcal {I}}_{jk}=\operatorname {\mathbb {E} } \,{\biggl [}\;-{\frac {\partial ^{2}\ln f_{\theta _{0}}(X_{t})}{\partial \theta _{j}\,\partial \theta _{k}}}\;{\biggr ]}~.}
In particular, it means that thebiasof the maximum likelihood estimator is equal to zero up to the order1/√n.
However, when we consider the higher-order terms in theexpansionof the distribution of this estimator, it turns out thatθmlehas bias of order1⁄n. This bias is equal to (componentwise)[20]
bh≡E[(θ^mle−θ0)h]=1n∑i,j,k=1mIhiIjk(12Kijk+Jj,ik){\displaystyle b_{h}\;\equiv \;\operatorname {\mathbb {E} } {\biggl [}\;\left({\widehat {\theta }}_{\mathrm {mle} }-\theta _{0}\right)_{h}\;{\biggr ]}\;=\;{\frac {1}{\,n\,}}\,\sum _{i,j,k=1}^{m}\;{\mathcal {I}}^{hi}\;{\mathcal {I}}^{jk}\left({\frac {1}{\,2\,}}\,K_{ijk}\;+\;J_{j,ik}\right)}
whereIjk{\displaystyle {\mathcal {I}}^{jk}}(with superscripts) denotes the (j,k)-th component of theinverseFisher information matrixI−1{\displaystyle {\mathcal {I}}^{-1}}, and
12Kijk+Jj,ik=E[12∂3lnfθ0(Xt)∂θi∂θj∂θk+∂lnfθ0(Xt)∂θj∂2lnfθ0(Xt)∂θi∂θk].{\displaystyle {\frac {1}{\,2\,}}\,K_{ijk}\;+\;J_{j,ik}\;=\;\operatorname {\mathbb {E} } \,{\biggl [}\;{\frac {1}{2}}{\frac {\partial ^{3}\ln f_{\theta _{0}}(X_{t})}{\partial \theta _{i}\;\partial \theta _{j}\;\partial \theta _{k}}}+{\frac {\;\partial \ln f_{\theta _{0}}(X_{t})\;}{\partial \theta _{j}}}\,{\frac {\;\partial ^{2}\ln f_{\theta _{0}}(X_{t})\;}{\partial \theta _{i}\,\partial \theta _{k}}}\;{\biggr ]}~.}
Using these formulae it is possible to estimate the second-order bias of the maximum likelihood estimator, andcorrectfor that bias by subtracting it:θ^mle∗=θ^mle−b^.{\displaystyle {\widehat {\theta \,}}_{\text{mle}}^{*}={\widehat {\theta \,}}_{\text{mle}}-{\widehat {b\,}}~.}This estimator is unbiased up to the terms of order1/n, and is called thebias-corrected maximum likelihood estimator.
This bias-corrected estimator issecond-order efficient(at least within the curved exponential family), meaning that it has minimal mean squared error among all second-order bias-corrected estimators, up to the terms of the order1/n2. It is possible to continue this process, that is to derive the third-order bias-correction term, and so on. However, the maximum likelihood estimator isnotthird-order efficient.[21]
A maximum likelihood estimator coincides with themost probableBayesian estimatorgiven auniformprior distributionon theparameters. Indeed, themaximum a posteriori estimateis the parameterθthat maximizes the probability ofθgiven the data, given by Bayes' theorem:
P(θ∣x1,x2,…,xn)=f(x1,x2,…,xn∣θ)P(θ)P(x1,x2,…,xn){\displaystyle \operatorname {\mathbb {P} } (\theta \mid x_{1},x_{2},\ldots ,x_{n})={\frac {f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )\operatorname {\mathbb {P} } (\theta )}{\operatorname {\mathbb {P} } (x_{1},x_{2},\ldots ,x_{n})}}}
whereP(θ){\displaystyle \operatorname {\mathbb {P} } (\theta )}is the prior distribution for the parameterθand whereP(x1,x2,…,xn){\displaystyle \operatorname {\mathbb {P} } (x_{1},x_{2},\ldots ,x_{n})}is the probability of the data averaged over all parameters. Since the denominator is independent ofθ, the Bayesian estimator is obtained by maximizingf(x1,x2,…,xn∣θ)P(θ){\displaystyle f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )\operatorname {\mathbb {P} } (\theta )}with respect toθ. If we further assume that the priorP(θ){\displaystyle \operatorname {\mathbb {P} } (\theta )}is a uniform distribution, the Bayesian estimator is obtained by maximizing the likelihood functionf(x1,x2,…,xn∣θ){\displaystyle f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )}. Thus the Bayesian estimator coincides with the maximum likelihood estimator for a uniform prior distributionP(θ){\displaystyle \operatorname {\mathbb {P} } (\theta )}.
In many practical applications inmachine learning, maximum-likelihood estimation is used as the model for parameter estimation.
The Bayesian Decision theory is about designing a classifier that minimizes total expected risk, especially, when the costs (the loss function) associated with different decisions are equal, the classifier is minimizing the error over the whole distribution.[22]
Thus, the Bayes Decision Rule is stated as "decidew1{\displaystyle \;w_{1}\;}ifP(w1∣x)>P(w2∣x);{\displaystyle ~\operatorname {\mathbb {P} } (w_{1}\mid x)>\operatorname {\mathbb {P} } (w_{2}\mid x)~;}otherwise decidew2{\displaystyle \;w_{2}\;}",
wherew1,w2{\displaystyle \;w_{1}\,,w_{2}\;}are predictions of different classes. From a perspective of minimizing error, it can also be stated asw=argminw∫−∞∞P(error∣x)P(x)dx{\displaystyle w={\underset {w}{\operatorname {arg\;min} }}\;\int _{-\infty }^{\infty }\operatorname {\mathbb {P} } ({\text{ error}}\mid x)\operatorname {\mathbb {P} } (x)\,\operatorname {d} x~}whereP(error∣x)=P(w1∣x){\displaystyle \operatorname {\mathbb {P} } ({\text{ error}}\mid x)=\operatorname {\mathbb {P} } (w_{1}\mid x)~}if we decidew2{\displaystyle \;w_{2}\;}andP(error∣x)=P(w2∣x){\displaystyle \;\operatorname {\mathbb {P} } ({\text{ error}}\mid x)=\operatorname {\mathbb {P} } (w_{2}\mid x)\;}if we decidew1.{\displaystyle \;w_{1}\;.}
By applyingBayes' theoremP(wi∣x)=P(x∣wi)P(wi)P(x){\displaystyle \operatorname {\mathbb {P} } (w_{i}\mid x)={\frac {\operatorname {\mathbb {P} } (x\mid w_{i})\operatorname {\mathbb {P} } (w_{i})}{\operatorname {\mathbb {P} } (x)}}},
and if we further assume the zero-or-one loss function, which is a same loss for all errors, the Bayes Decision rule can be reformulated as:hBayes=argmaxw[P(x∣w)P(w)],{\displaystyle h_{\text{Bayes}}={\underset {w}{\operatorname {arg\;max} }}\,{\bigl [}\,\operatorname {\mathbb {P} } (x\mid w)\,\operatorname {\mathbb {P} } (w)\,{\bigr ]}\;,}wherehBayes{\displaystyle h_{\text{Bayes}}}is the prediction andP(w){\displaystyle \;\operatorname {\mathbb {P} } (w)\;}is theprior probability.
Findingθ^{\displaystyle {\hat {\theta }}}that maximizes the likelihood is asymptotically equivalent to finding theθ^{\displaystyle {\hat {\theta }}}that defines a probability distribution (Qθ^{\displaystyle Q_{\hat {\theta }}}) that has a minimal distance, in terms ofKullback–Leibler divergence, to the real probability distribution from which our data were generated (i.e., generated byPθ0{\displaystyle P_{\theta _{0}}}).[23]In an ideal world, P and Q are the same (and the only thing unknown isθ{\displaystyle \theta }that defines P), but even if they are not and the model we use is misspecified, still the MLE will give us the "closest" distribution (within the restriction of a model Q that depends onθ^{\displaystyle {\hat {\theta }}}) to the real distributionPθ0{\displaystyle P_{\theta _{0}}}.[24]
For simplicity of notation, let's assume that P=Q. Let there beni.i.ddata samplesy=(y1,y2,…,yn){\displaystyle \mathbf {y} =(y_{1},y_{2},\ldots ,y_{n})}from some probabilityy∼Pθ0{\displaystyle y\sim P_{\theta _{0}}}, that we try to estimate by findingθ^{\displaystyle {\hat {\theta }}}that will maximize the likelihood usingPθ{\displaystyle P_{\theta }}, then:θ^=argmaxθLPθ(y)=argmaxθPθ(y)=argmaxθP(y∣θ)=argmaxθ∏i=1nP(yi∣θ)=argmaxθ∑i=1nlogP(yi∣θ)=argmaxθ(∑i=1nlogP(yi∣θ)−∑i=1nlogP(yi∣θ0))=argmaxθ∑i=1n(logP(yi∣θ)−logP(yi∣θ0))=argmaxθ∑i=1nlogP(yi∣θ)P(yi∣θ0)=argminθ∑i=1nlogP(yi∣θ0)P(yi∣θ)=argminθ1n∑i=1nlogP(yi∣θ0)P(yi∣θ)=argminθ1n∑i=1nhθ(yi)⟶n→∞argminθE[hθ(y)]=argminθ∫Pθ0(y)hθ(y)dy=argminθ∫Pθ0(y)logP(y∣θ0)P(y∣θ)dy=argminθDKL(Pθ0∥Pθ){\displaystyle {\begin{aligned}{\hat {\theta }}&={\underset {\theta }{\operatorname {arg\,max} }}\,L_{P_{\theta }}(\mathbf {y} )={\underset {\theta }{\operatorname {arg\,max} }}\,P_{\theta }(\mathbf {y} )={\underset {\theta }{\operatorname {arg\,max} }}\,P(\mathbf {y} \mid \theta )\\&={\underset {\theta }{\operatorname {arg\,max} }}\,\prod _{i=1}^{n}P(y_{i}\mid \theta )={\underset {\theta }{\operatorname {arg\,max} }}\,\sum _{i=1}^{n}\log P(y_{i}\mid \theta )\\&={\underset {\theta }{\operatorname {arg\,max} }}\,\left(\sum _{i=1}^{n}\log P(y_{i}\mid \theta )-\sum _{i=1}^{n}\log P(y_{i}\mid \theta _{0})\right)={\underset {\theta }{\operatorname {arg\,max} }}\,\sum _{i=1}^{n}\left(\log P(y_{i}\mid \theta )-\log P(y_{i}\mid \theta _{0})\right)\\&={\underset {\theta }{\operatorname {arg\,max} }}\,\sum _{i=1}^{n}\log {\frac {P(y_{i}\mid \theta )}{P(y_{i}\mid \theta _{0})}}={\underset {\theta }{\operatorname {arg\,min} }}\,\sum _{i=1}^{n}\log {\frac {P(y_{i}\mid \theta _{0})}{P(y_{i}\mid \theta )}}={\underset {\theta }{\operatorname {arg\,min} }}\,{\frac {1}{n}}\sum _{i=1}^{n}\log {\frac {P(y_{i}\mid \theta _{0})}{P(y_{i}\mid \theta )}}\\&={\underset {\theta }{\operatorname {arg\,min} }}\,{\frac {1}{n}}\sum _{i=1}^{n}h_{\theta }(y_{i})\quad {\underset {n\to \infty }{\longrightarrow }}\quad {\underset {\theta }{\operatorname {arg\,min} }}\,E[h_{\theta }(y)]\\&={\underset {\theta }{\operatorname {arg\,min} }}\,\int P_{\theta _{0}}(y)h_{\theta }(y)dy={\underset {\theta }{\operatorname {arg\,min} }}\,\int P_{\theta _{0}}(y)\log {\frac {P(y\mid \theta _{0})}{P(y\mid \theta )}}dy\\&={\underset {\theta }{\operatorname {arg\,min} }}\,D_{\text{KL}}(P_{\theta _{0}}\parallel P_{\theta })\end{aligned}}}
Wherehθ(x)=logP(x∣θ0)P(x∣θ){\displaystyle h_{\theta }(x)=\log {\frac {P(x\mid \theta _{0})}{P(x\mid \theta )}}}. Usinghhelps see how we are using thelaw of large numbersto move from the average ofh(x) to theexpectancyof it using thelaw of the unconscious statistician. The first several transitions have to do with laws oflogarithmand that findingθ^{\displaystyle {\hat {\theta }}}that maximizes some function will also be the one that maximizes some monotonic transformation of that function (i.e.: adding/multiplying by a constant).
Sincecross entropyis justShannon's entropyplus KL divergence, and since the entropy ofPθ0{\displaystyle P_{\theta _{0}}}is constant, then the MLE is also asymptotically minimizing cross entropy.[25]
Consider a case wherentickets numbered from 1 tonare placed in a box and one is selected at random (seeuniform distribution); thus, the sample size is 1. Ifnis unknown, then the maximum likelihood estimatorn^{\displaystyle {\widehat {n}}}ofnis the numbermon the drawn ticket. (The likelihood is 0 forn<m,1⁄nforn≥m, and this is greatest whenn=m. Note that the maximum likelihood estimate ofnoccurs at the lower extreme of possible values {m,m+ 1, ...}, rather than somewhere in the "middle" of the range of possible values, which would result in less bias.) Theexpected valueof the numbermon the drawn ticket, and therefore the expected value ofn^{\displaystyle {\widehat {n}}}, is (n+ 1)/2. As a result, with a sample size of 1, the maximum likelihood estimator fornwill systematically underestimatenby (n− 1)/2.
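A short simulation of this example: repeated single draws from tickets 1..n show that the MLE (the drawn number itself) averages (n + 1)/2, illustrating the systematic underestimation.

n <- 100
m <- sample.int(n, size = 100000, replace = TRUE)   # many independent single draws
mean(m)   # close to (n + 1) / 2 = 50.5, far below n = 100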
Suppose one wishes to determine just how biased an unfair coin is. Call the probability of tossing a 'head' p. The goal then becomes to determine p.
Suppose the coin is tossed 80 times: i.e. the sample might be something like x1 = H, x2 = T, ..., x80 = T, and the count of the number of heads "H" is observed.
The probability of tossing tails is 1 − p (so here p is θ above). Suppose the outcome is 49 heads and 31 tails, and suppose the coin was taken from a box containing three coins: one which gives heads with probability p = 1⁄3, one which gives heads with probability p = 1⁄2 and another which gives heads with probability p = 2⁄3. The coins have lost their labels, so which one it was is unknown. Using maximum likelihood estimation, the coin that has the largest likelihood can be found, given the data that were observed. By using the probability mass function of the binomial distribution with sample size equal to 80 and number of successes equal to 49, but for different values of p (the "probability of success"), the likelihood function (defined below) takes one of three values:
{\displaystyle {\begin{aligned}\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {1}{3}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {1}{3}})^{49}(1-{\tfrac {1}{3}})^{31}\approx 0.000,\\[6pt]\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {1}{2}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {1}{2}})^{49}(1-{\tfrac {1}{2}})^{31}\approx 0.012,\\[6pt]\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {2}{3}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {2}{3}})^{49}(1-{\tfrac {2}{3}})^{31}\approx 0.054~.\end{aligned}}}
The likelihood is maximized when p = 2⁄3, and so this is the maximum likelihood estimate for p.
Now suppose that there was only one coin but its p could have been any value 0 ≤ p ≤ 1. The likelihood function to be maximised is {\displaystyle L(p)=f_{D}(\mathrm {H} =49\mid p)={\binom {80}{49}}p^{49}(1-p)^{31}~,}
and the maximisation is over all possible values 0 ≤ p ≤ 1.
One way to maximize this function is by differentiating with respect to p and setting to zero:
{\displaystyle {\begin{aligned}0&={\frac {\partial }{\partial p}}\left({\binom {80}{49}}p^{49}(1-p)^{31}\right)~,\\[8pt]0&=49p^{48}(1-p)^{31}-31p^{49}(1-p)^{30}\\[8pt]&=p^{48}(1-p)^{30}\left[49(1-p)-31p\right]\\[8pt]&=p^{48}(1-p)^{30}\left[49-80p\right]~.\end{aligned}}}
This is a product of three terms. The first term is 0 when p = 0. The second is 0 when p = 1. The third is zero when p = 49⁄80. The solution that maximizes the likelihood is clearly p = 49⁄80 (since p = 0 and p = 1 result in a likelihood of 0). Thus the maximum likelihood estimator for p is 49⁄80.
This result is easily generalized by substituting a letter such as s in the place of 49 to represent the observed number of 'successes' of our Bernoulli trials, and a letter such as n in the place of 80 to represent the number of Bernoulli trials. Exactly the same calculation yields s⁄n, which is the maximum likelihood estimator for any sequence of n Bernoulli trials resulting in s 'successes'.
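The following short Python check (reusing the 49-heads-in-80-tosses numbers from the example) reproduces both the three-coin likelihood comparison and the closed-form estimate s⁄n.

```python
# Quick check of the coin example: discrete comparison over three candidate
# values of p, followed by the closed-form MLE s / n for a Bernoulli sequence.
from scipy.stats import binom

n_tosses, heads = 80, 49
for p in (1/3, 1/2, 2/3):
    print(f"P[H=49 | p={p:.3f}] = {binom.pmf(heads, n_tosses, p):.4f}")

p_hat = heads / n_tosses              # maximizes the continuous likelihood
print("closed-form MLE p_hat =", p_hat)   # 49/80 = 0.6125
```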
For the normal distribution {\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})} which has probability density function
{\displaystyle f(x\mid \mu ,\sigma ^{2})={\frac {1}{{\sqrt {2\pi \sigma ^{2}}}\ }}\exp \left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right),}
the corresponding probability density function for a sample of n independent identically distributed normal random variables (the likelihood) is
{\displaystyle f(x_{1},\ldots ,x_{n}\mid \mu ,\sigma ^{2})=\prod _{i=1}^{n}f(x_{i}\mid \mu ,\sigma ^{2})=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left(-{\frac {\sum _{i=1}^{n}(x_{i}-\mu )^{2}}{2\sigma ^{2}}}\right).}
This family of distributions has two parameters: θ = (μ, σ²); so we maximize the likelihood, {\displaystyle {\mathcal {L}}(\mu ,\sigma ^{2})=f(x_{1},\ldots ,x_{n}\mid \mu ,\sigma ^{2})}, over both parameters simultaneously, or if possible, individually.
Since the logarithm is a continuous, strictly increasing function over the range of the likelihood, the values that maximize the likelihood will also maximize its logarithm (the log-likelihood itself is not necessarily strictly increasing in the parameters). The log-likelihood can be written as follows:
log(L(μ,σ2))=−n2log(2πσ2)−12σ2∑i=1n(xi−μ)2{\displaystyle \log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=-{\frac {\,n\,}{2}}\log(2\pi \sigma ^{2})-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}(\,x_{i}-\mu \,)^{2}}
(Note: the log-likelihood is closely related toinformation entropyandFisher information.)
We now compute the derivatives of this log-likelihood as follows.
0=∂∂μlog(L(μ,σ2))=0−−2n(x¯−μ)2σ2.{\displaystyle {\begin{aligned}0&={\frac {\partial }{\partial \mu }}\log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=0-{\frac {\;-2n({\bar {x}}-\mu )\;}{2\sigma ^{2}}}.\end{aligned}}}wherex¯{\displaystyle {\bar {x}}}is thesample mean. This is solved by
μ^=x¯=∑i=1nxin.{\displaystyle {\widehat {\mu }}={\bar {x}}=\sum _{i=1}^{n}{\frac {\,x_{i}\,}{n}}.}
This is indeed the maximum of the function, since it is the only turning point inμand the second derivative is strictly less than zero. Itsexpected valueis equal to the parameterμof the given distribution,
E[μ^]=μ,{\displaystyle \operatorname {\mathbb {E} } {\bigl [}\;{\widehat {\mu }}\;{\bigr ]}=\mu ,\,}
which means that the maximum likelihood estimatorμ^{\displaystyle {\widehat {\mu }}}is unbiased.
Similarly we differentiate the log-likelihood with respect toσand equate to zero:
0=∂∂σlog(L(μ,σ2))=−nσ+1σ3∑i=1n(xi−μ)2.{\displaystyle {\begin{aligned}0&={\frac {\partial }{\partial \sigma }}\log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=-{\frac {\,n\,}{\sigma }}+{\frac {1}{\sigma ^{3}}}\sum _{i=1}^{n}(\,x_{i}-\mu \,)^{2}.\end{aligned}}}
which is solved by
σ^2=1n∑i=1n(xi−μ)2.{\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-\mu )^{2}.}
Inserting the estimateμ=μ^{\displaystyle \mu ={\widehat {\mu }}}we obtain
σ^2=1n∑i=1n(xi−x¯)2=1n∑i=1nxi2−1n2∑i=1n∑j=1nxixj.{\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}^{2}-{\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}x_{i}x_{j}.}
To calculate its expected value, it is convenient to rewrite the expression in terms of zero-mean random variables (statistical error)δi≡μ−xi{\displaystyle \delta _{i}\equiv \mu -x_{i}}. Expressing the estimate in these variables yields
σ^2=1n∑i=1n(μ−δi)2−1n2∑i=1n∑j=1n(μ−δi)(μ−δj).{\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(\mu -\delta _{i})^{2}-{\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}(\mu -\delta _{i})(\mu -\delta _{j}).}
Simplifying the expression above, utilizing the facts thatE[δi]=0{\displaystyle \operatorname {\mathbb {E} } {\bigl [}\;\delta _{i}\;{\bigr ]}=0}andE[δi2]=σ2{\displaystyle \operatorname {E} {\bigl [}\;\delta _{i}^{2}\;{\bigr ]}=\sigma ^{2}}, allows us to obtain
E[σ^2]=n−1nσ2.{\displaystyle \operatorname {\mathbb {E} } {\bigl [}\;{\widehat {\sigma }}^{2}\;{\bigr ]}={\frac {\,n-1\,}{n}}\sigma ^{2}.}
This means that the estimatorσ^2{\displaystyle {\widehat {\sigma }}^{2}}is biased forσ2{\displaystyle \sigma ^{2}}. It can also be shown thatσ^{\displaystyle {\widehat {\sigma }}}is biased forσ{\displaystyle \sigma }, but that bothσ^2{\displaystyle {\widehat {\sigma }}^{2}}andσ^{\displaystyle {\widehat {\sigma }}}are consistent.
Formally we say that themaximum likelihood estimatorforθ=(μ,σ2){\displaystyle \theta =(\mu ,\sigma ^{2})}is
θ^=(μ^,σ^2).{\displaystyle {\widehat {\theta \,}}=\left({\widehat {\mu }},{\widehat {\sigma }}^{2}\right).}
In this case the MLEs could be obtained individually. In general this may not be the case, and the MLEs would have to be obtained simultaneously.
The normal log-likelihood at its maximum takes a particularly simple form:
log(L(μ^,σ^))=−n2(log(2πσ^2)+1){\displaystyle \log {\Bigl (}{\mathcal {L}}({\widehat {\mu }},{\widehat {\sigma }}){\Bigr )}={\frac {\,-n\;\;}{2}}{\bigl (}\,\log(2\pi {\widehat {\sigma }}^{2})+1\,{\bigr )}}
This maximum log-likelihood can be shown to be the same for more generalleast squares, even fornon-linear least squares. This is often used in determining likelihood-based approximateconfidence intervalsandconfidence regions, which are generally more accurate than those using the asymptotic normality discussed above.
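As an illustrative simulation of these results (the parameter values and sample size are arbitrary), the sketch below confirms that the sample mean is unbiased for μ while the MLE of the variance, which divides by n, has expectation close to (n − 1)/n · σ².

```python
# Illustrative simulation of the normal-distribution MLEs: mu_hat is unbiased,
# while sigma2_hat (dividing by n) has expectation (n - 1)/n * sigma^2.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 5.0, 2.0, 10, 100_000
x = rng.normal(mu, sigma, size=(reps, n))

mu_hat = x.mean(axis=1)
sigma2_hat = ((x - mu_hat[:, None]) ** 2).mean(axis=1)   # divide by n, not n - 1

print("mean of mu_hat       :", mu_hat.mean())            # ~ 5.0
print("mean of sigma2_hat   :", sigma2_hat.mean())        # ~ (n-1)/n * 4 = 3.6
print("predicted expectation:", (n - 1) / n * sigma ** 2)
```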
It may be the case that variables are correlated, or more generally, not independent. Two random variables {\displaystyle y_{1}} and {\displaystyle y_{2}} are independent if and only if their joint probability density function is the product of the individual probability density functions, i.e.
f(y1,y2)=f(y1)f(y2){\displaystyle f(y_{1},y_{2})=f(y_{1})f(y_{2})\,}
Suppose one constructs an order-nGaussian vector out of random variables(y1,…,yn){\displaystyle (y_{1},\ldots ,y_{n})}, where each variable has means given by(μ1,…,μn){\displaystyle (\mu _{1},\ldots ,\mu _{n})}. Furthermore, let thecovariance matrixbe denoted byΣ{\displaystyle {\mathit {\Sigma }}}. The joint probability density function of thesenrandom variables then follows amultivariate normal distributiongiven by:
f(y1,…,yn)=1(2π)n/2det(Σ)exp(−12[y1−μ1,…,yn−μn]Σ−1[y1−μ1,…,yn−μn]T){\displaystyle f(y_{1},\ldots ,y_{n})={\frac {1}{(2\pi )^{n/2}{\sqrt {\det({\mathit {\Sigma }})}}}}\exp \left(-{\frac {1}{2}}\left[y_{1}-\mu _{1},\ldots ,y_{n}-\mu _{n}\right]{\mathit {\Sigma }}^{-1}\left[y_{1}-\mu _{1},\ldots ,y_{n}-\mu _{n}\right]^{\mathrm {T} }\right)}
In thebivariatecase, the joint probability density function is given by:
f(y1,y2)=12πσ1σ21−ρ2exp[−12(1−ρ2)((y1−μ1)2σ12−2ρ(y1−μ1)(y2−μ2)σ1σ2+(y2−μ2)2σ22)]{\displaystyle f(y_{1},y_{2})={\frac {1}{2\pi \sigma _{1}\sigma _{2}{\sqrt {1-\rho ^{2}}}}}\exp \left[-{\frac {1}{2(1-\rho ^{2})}}\left({\frac {(y_{1}-\mu _{1})^{2}}{\sigma _{1}^{2}}}-{\frac {2\rho (y_{1}-\mu _{1})(y_{2}-\mu _{2})}{\sigma _{1}\sigma _{2}}}+{\frac {(y_{2}-\mu _{2})^{2}}{\sigma _{2}^{2}}}\right)\right]}
In this and other cases where a joint density function exists, the likelihood function is defined as above, in the section "principles," using this density.
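A minimal sketch of evaluating such a joint density (the mean vector, covariance matrix, and observation below are toy values) codes the multivariate normal log-density by hand and checks it against SciPy's implementation.

```python
# Minimal sketch: multivariate normal log-density from the formula above,
# checked against scipy's implementation. All numbers are illustrative.
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([1.0, -0.5])
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])
y = np.array([0.8, 0.1])

d = y - mu
n = len(y)
log_f = -0.5 * (n * np.log(2 * np.pi) + np.log(np.linalg.det(Sigma))
                + d @ np.linalg.inv(Sigma) @ d)

print("hand-coded log-density:", log_f)
print("scipy log-density     :", multivariate_normal(mean=mu, cov=Sigma).logpdf(y))
```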
{\displaystyle X_{1},\ X_{2},\ldots ,\ X_{m}} are counts in cells/boxes 1 up to m; each box has a different probability (think of the boxes being bigger or smaller), and we fix the number of balls that fall to be {\displaystyle n}: {\displaystyle x_{1}+x_{2}+\cdots +x_{m}=n}. The probability of each box is {\displaystyle p_{i}}, with the constraint {\displaystyle p_{1}+p_{2}+\cdots +p_{m}=1}. This is a case in which the {\displaystyle X_{i}}s are not independent. The joint probability of a vector {\displaystyle x_{1},\ x_{2},\ldots ,x_{m}} is called the multinomial and has the form:
f(x1,x2,…,xm∣p1,p2,…,pm)=n!∏xi!∏pixi=(nx1,x2,…,xm)p1x1p2x2⋯pmxm{\displaystyle f(x_{1},x_{2},\ldots ,x_{m}\mid p_{1},p_{2},\ldots ,p_{m})={\frac {n!}{\prod x_{i}!}}\prod p_{i}^{x_{i}}={\binom {n}{x_{1},x_{2},\ldots ,x_{m}}}p_{1}^{x_{1}}p_{2}^{x_{2}}\cdots p_{m}^{x_{m}}}
Each box taken separately against all the other boxes is a binomial; the multinomial is an extension of this.
The log-likelihood of this is:
ℓ(p1,p2,…,pm)=logn!−∑i=1mlogxi!+∑i=1mxilogpi{\displaystyle \ell (p_{1},p_{2},\ldots ,p_{m})=\log n!-\sum _{i=1}^{m}\log x_{i}!+\sum _{i=1}^{m}x_{i}\log p_{i}}
The constraint has to be taken into account, for example by using a Lagrange multiplier:
L(p1,p2,…,pm,λ)=ℓ(p1,p2,…,pm)+λ(1−∑i=1mpi){\displaystyle L(p_{1},p_{2},\ldots ,p_{m},\lambda )=\ell (p_{1},p_{2},\ldots ,p_{m})+\lambda \left(1-\sum _{i=1}^{m}p_{i}\right)}
Setting all of these derivatives to 0 yields the most natural estimate
p^i=xin{\displaystyle {\hat {p}}_{i}={\frac {x_{i}}{n}}}
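A small numerical check (the cell counts are invented for illustration) that p̂_i = x_i/n maximizes the multinomial log-likelihood: random alternative probability vectors on the simplex all give a lower value.

```python
# Small check that p_hat = x / n maximizes the multinomial log-likelihood;
# the counts below are made up for illustration.
import numpy as np
from scipy.stats import multinomial

x = np.array([12, 30, 58])            # observed cell counts
n = x.sum()
p_hat = x / n

rng = np.random.default_rng(3)
others = rng.dirichlet(np.ones(len(x)), size=1000)   # random points on the simplex
best_other = max(multinomial.logpmf(x, n, p) for p in others)

print("log-likelihood at p_hat      :", multinomial.logpmf(x, n, p_hat))
print("best log-likelihood elsewhere:", best_other)    # strictly lower
```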
Maximizing the log-likelihood, with or without constraints, may have no closed-form solution; in that case, iterative procedures have to be used.
Except for special cases, the likelihood equations∂ℓ(θ;y)∂θ=0{\displaystyle {\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}=0}
cannot be solved explicitly for an estimatorθ^=θ^(y){\displaystyle {\widehat {\theta }}={\widehat {\theta }}(\mathbf {y} )}. Instead, they need to be solvediteratively: starting from an initial guess ofθ{\displaystyle \theta }(sayθ^1{\displaystyle {\widehat {\theta }}_{1}}), one seeks to obtain a convergent sequence{θ^r}{\displaystyle \left\{{\widehat {\theta }}_{r}\right\}}. Many methods for this kind ofoptimization problemare available,[26][27]but the most commonly used ones are algorithms based on an updating formula of the formθ^r+1=θ^r+ηrdr(θ^){\displaystyle {\widehat {\theta }}_{r+1}={\widehat {\theta }}_{r}+\eta _{r}\mathbf {d} _{r}\left({\widehat {\theta }}\right)}
where the vectordr(θ^){\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)}indicates thedescent directionof therth "step," and the scalarηr{\displaystyle \eta _{r}}captures the "step length,"[28][29]also known as thelearning rate.[30]
For the gradient descent method (here applied as gradient ascent, since this is a maximization problem and the sign in front of the gradient is flipped), one takes {\displaystyle \eta _{r}\in \mathbb {R} ^{+}} small enough for convergence and {\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)=\nabla \ell \left({\widehat {\theta }}_{r};\mathbf {y} \right)}.
The gradient method requires calculating the gradient at the rth iteration, but not the inverse of the matrix of second-order derivatives, i.e., the Hessian matrix. Therefore, each iteration is computationally cheaper than a Newton–Raphson step.
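A bare-bones gradient-ascent sketch (the model, step size, and data are chosen only for illustration) applies this update to the normal log-likelihood with known σ, where the gradient with respect to μ is Σ(x_i − μ)/σ², so the iterates approach the sample mean.

```python
# Gradient-ascent sketch for the normal log-likelihood over mu with known sigma.
# Step size and data are illustrative; the iterates approach the sample mean.
import numpy as np

rng = np.random.default_rng(4)
sigma = 1.5
x = rng.normal(3.0, sigma, size=500)

mu_r = 0.0                      # initial guess theta_hat_1
eta = 0.001                     # fixed learning rate, small enough to converge
for _ in range(200):
    grad = np.sum(x - mu_r) / sigma ** 2
    mu_r += eta * grad          # ascent step: theta_{r+1} = theta_r + eta * d_r

print("gradient-ascent estimate:", mu_r)
print("sample mean (exact MLE) :", x.mean())
```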
For the Newton–Raphson method, {\displaystyle \eta _{r}=1} and {\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)=-\mathbf {H} _{r}^{-1}\left({\widehat {\theta }}\right)\mathbf {s} _{r}\left({\widehat {\theta }}\right)}
where {\displaystyle \mathbf {s} _{r}({\widehat {\theta }})} is the score and {\displaystyle \mathbf {H} _{r}^{-1}\left({\widehat {\theta }}\right)} is the inverse of the Hessian matrix of the log-likelihood function, both evaluated at the rth iteration.[31][32] But because the calculation of the Hessian matrix is computationally costly, numerous alternatives have been proposed. The popular Berndt–Hall–Hall–Hausman algorithm approximates the Hessian with the outer product of the expected gradient, such that
dr(θ^)=−[1n∑t=1n∂ℓ(θ;y)∂θ(∂ℓ(θ;y)∂θ)T]−1sr(θ^){\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)=-\left[{\frac {1}{n}}\sum _{t=1}^{n}{\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}\left({\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}\right)^{\mathsf {T}}\right]^{-1}\mathbf {s} _{r}\left({\widehat {\theta }}\right)}
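For comparison, the following sketch applies the plain Newton–Raphson update described above (η_r = 1 with the exact Hessian, not the BHHH approximation) to the Bernoulli log-likelihood from the coin example; the starting value is arbitrary and the iteration converges to s/n = 49/80.

```python
# Newton-Raphson sketch for the Bernoulli log-likelihood, reusing the coin-toss
# numbers: theta_{r+1} = theta_r - H^{-1} * score, with eta_r = 1.
n_tosses, s = 80, 49

p = 0.10                                                    # poor starting value
for _ in range(20):
    score = s / p - (n_tosses - s) / (1 - p)                # dl/dp
    hessian = -s / p ** 2 - (n_tosses - s) / (1 - p) ** 2   # d2l/dp2 (negative)
    p = p - score / hessian                                 # Newton step

print("Newton-Raphson estimate:", p)                        # ~ 0.6125 = 49/80
```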
Other quasi-Newton methods use more elaborate secant updates to approximate the Hessian matrix.
The Davidon–Fletcher–Powell (DFP) formula finds a solution that is symmetric, positive-definite and closest to the current approximate value of the second-order derivative: {\displaystyle \mathbf {H} _{k+1}=\left(I-\gamma _{k}y_{k}s_{k}^{\mathsf {T}}\right)\mathbf {H} _{k}\left(I-\gamma _{k}s_{k}y_{k}^{\mathsf {T}}\right)+\gamma _{k}y_{k}y_{k}^{\mathsf {T}},}
where
yk=∇ℓ(xk+sk)−∇ℓ(xk),{\displaystyle y_{k}=\nabla \ell (x_{k}+s_{k})-\nabla \ell (x_{k}),}γk=1ykTsk,{\displaystyle \gamma _{k}={\frac {1}{y_{k}^{T}s_{k}}},}sk=xk+1−xk.{\displaystyle s_{k}=x_{k+1}-x_{k}.}
BFGS also gives a solution that is symmetric and positive-definite:
Bk+1=Bk+ykykTykTsk−BkskskTBkTskTBksk,{\displaystyle B_{k+1}=B_{k}+{\frac {y_{k}y_{k}^{\mathsf {T}}}{y_{k}^{\mathsf {T}}s_{k}}}-{\frac {B_{k}s_{k}s_{k}^{\mathsf {T}}B_{k}^{\mathsf {T}}}{s_{k}^{\mathsf {T}}B_{k}s_{k}}}\ ,}
where
yk=∇ℓ(xk+sk)−∇ℓ(xk),{\displaystyle y_{k}=\nabla \ell (x_{k}+s_{k})-\nabla \ell (x_{k}),}sk=xk+1−xk.{\displaystyle s_{k}=x_{k+1}-x_{k}.}
The BFGS method is not guaranteed to converge unless the function has a quadratic Taylor expansion near an optimum. However, BFGS can have acceptable performance even for non-smooth optimization instances.
Another popular method is to replace the Hessian with theFisher information matrix,I(θ)=E[Hr(θ^)]{\displaystyle {\mathcal {I}}(\theta )=\operatorname {\mathbb {E} } \left[\mathbf {H} _{r}\left({\widehat {\theta }}\right)\right]}, giving us the Fisher scoring algorithm. This procedure is standard in the estimation of many methods, such asgeneralized linear models.
Although popular, quasi-Newton methods may converge to astationary pointthat is not necessarily a local or global maximum,[33]but rather a local minimum or asaddle point. Therefore, it is important to assess the validity of the obtained solution to the likelihood equations, by verifying that the Hessian, evaluated at the solution, is bothnegative definiteandwell-conditioned.[34]
Early users of maximum likelihood includeCarl Friedrich Gauss,Pierre-Simon Laplace,Thorvald N. Thiele, andFrancis Ysidro Edgeworth.[35][36]It wasRonald Fisherhowever, between 1912 and 1922, who singlehandedly created the modern version of the method.[37][38]
Maximum-likelihood estimation finally transcendedheuristicjustification in a proof published bySamuel S. Wilksin 1938, now calledWilks' theorem.[39]The theorem shows that the error in the logarithm of likelihood values for estimates from multiple independent observations is asymptoticallyχ2-distributed, which enables convenient determination of aconfidence regionaround any estimate of the parameters. The only difficult part of Wilks' proof depends on the expected value of theFisher informationmatrix, which is provided by a theorem proven by Fisher.[40]Wilks continued to improve on the generality of the theorem throughout his life, with his most general proof published in 1962.[41]
Reviews of the development of maximum likelihood estimation have been provided by a number of authors.[42][43][44][45][46][47][48][49]
|
https://en.wikipedia.org/wiki/Full_information_maximum_likelihood
|
In statistical models applied to psychometrics, congeneric reliability {\displaystyle \rho _{C}} ("rho C")[1] is a single-administration test-score reliability coefficient (i.e., the reliability of persons over items, holding occasion fixed), commonly referred to as composite reliability, construct reliability, or coefficient omega. {\displaystyle \rho _{C}} is a structural equation model (SEM)-based reliability coefficient obtained from a unidimensional model. {\displaystyle \rho _{C}} is the second most commonly used reliability coefficient after tau-equivalent reliability ({\displaystyle \rho _{T}}; also known as Cronbach's alpha), and is often recommended as its alternative.
A quantity similar (but not mathematically equivalent) to congeneric reliability first appears in the appendix to McDonald's 1970 paper onfactor analysis, labeledθ{\displaystyle \theta }.[2]In McDonald's work, the new quantity is primarily a mathematical convenience: a well-behaved intermediate thatseparatestwo values.[3][4]Seemingly unaware of McDonald's work, Jöreskog first analyzed a quantity equivalent to congeneric reliability in a paper the following year.[4][5]Jöreskog defined congeneric reliability (now labeled ρ) withcoordinate-freenotation,[5]and three years later, Werts gave the modern, coordinatized formula for the same.[6]Both of the latter two papers named the new quantity simply "reliability".[5][6]The modern name originates with Jöreskog's name for the model whence he derivedρC{\displaystyle \rho _{C}}: a "congeneric model".[1][7][8]
Applied statisticians have subsequently coined many names forρC{\displaystyle {\rho }_{C}}. "Composite reliability" emphasizes thatρC{\displaystyle {\rho }_{C}}measures thestatistical reliabilityof composite scores.[1][9]As psychology calls "constructs" anylatent characteristicsonly measurable through composite scores,[10]ρC{\displaystyle {\rho }_{C}}has also been called "construct reliability".[11]Following McDonald's more recent expository work ontesting theory, some SEM-based reliability coefficients, including congeneric reliability, are referred to as "reliability coefficientω{\displaystyle \omega }", often without a definition.[1][12][13]
Congeneric reliability applies todatasetsofvectors: each rowXin the dataset is a listXiof numerical scores corresponding to one individual. The congeneric model supposes that there is a single underlying property ("factor") of the individualF, such that each numerical scoreXiis a noisy measurement ofF. Moreover, that the relationship betweenXandFisapproximately linear: there exist (non-random) vectorsλandμsuch thatXi=λiF+μi+Ei,{\displaystyle X_{i}=\lambda _{i}F+\mu _{i}+E_{i}{\text{,}}}whereEiis astatistically independentnoise term.[5]
In this context,λiis often referred to as thefactor loadingon itemi.
Because λ and μ are free parameters, the model exhibits affine invariance, and F may be normalized to mean 0 and variance 1 without loss of generality. The fraction of variance explained in item Xi by F is then simply {\displaystyle \rho _{i}={\frac {\lambda _{i}^{2}}{\lambda _{i}^{2}+\mathbb {V} [E_{i}]}}{\text{.}}} More generally, given any covector w, the proportion of variance in wX explained by F is {\displaystyle \rho ={\frac {(w\lambda )^{2}}{(w\lambda )^{2}+\mathbb {E} [(wE)^{2}]}}{\text{,}}} which is maximized when w ∝ 𝔼[EE*]^{−1}λ.[5]
ρC is this proportion of explained variance in the case where w ∝ [1 1 ... 1] (all components of X equally important): {\displaystyle \rho _{C}={\frac {\left(\sum _{i=1}^{k}\lambda _{i}\right)^{2}}{\left(\sum _{i=1}^{k}\lambda _{i}\right)^{2}+\sum _{i=1}^{k}\sigma _{E_{i}}^{2}}}}
In applied work, the factor loadings and error variances are estimated by fitting the unidimensional (congeneric) model to the data, and the resulting value of {\displaystyle \rho _{C}} can be compared with the value obtained by applying tau-equivalent reliability to the same data.
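As a hedged numerical sketch of the formula (the factor loadings and error variances below are invented, not estimates from any real dataset), the snippet computes ρ_C and, for comparison, the tau-equivalent reliability implied by the same model.

```python
# Sketch of congeneric reliability from hypothetical loadings and error variances,
# plus the tau-equivalent (Cronbach's alpha style) value implied by the same model.
import numpy as np

lam = np.array([0.7, 0.8, 0.6, 0.9])          # hypothetical factor loadings
err_var = np.array([0.51, 0.36, 0.64, 0.19])  # hypothetical error variances

rho_c = lam.sum() ** 2 / (lam.sum() ** 2 + err_var.sum())
print("congeneric reliability rho_C:", round(rho_c, 3))

# Model-implied covariance matrix of the items (factor variance fixed to 1)
cov = np.outer(lam, lam) + np.diag(err_var)
k = len(lam)
alpha = k / (k - 1) * (1 - np.trace(cov) / cov.sum())
print("tau-equivalent reliability  :", round(alpha, 3))   # slightly lower here
```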
Tau-equivalent reliability(ρT{\displaystyle \rho _{T}}), which has traditionally been called "Cronbach'sα{\displaystyle \alpha }", assumes that all factor loadings are equal (i.e.λ1=λ2=...=λk{\displaystyle \lambda _{1}=\lambda _{2}=...=\lambda _{k}}). In reality, this is rarely the case and, thus, it systematically underestimates the reliability. In contrast, congeneric reliability (ρC{\displaystyle \rho _{C}}) explicitly acknowledges the existence of different factor loadings. According to Bagozzi & Yi (1988),ρC{\displaystyle \rho _{C}}should have a value of at least around 0.6.[14]Often, higher values are desirable. However, such values should not be misunderstood as strict cutoff boundaries between "good" and "bad".[15]Moreover,ρC{\displaystyle \rho _{C}}values close to 1 might indicate that items are too similar. Another property of a "good" measurement model besides reliability isconstruct validity.
A related coefficient isaverage variance extracted.
|
https://en.wikipedia.org/wiki/Congeneric_reliability
|
Instatistics,consistencyof procedures, such as computingconfidence intervalsor conductinghypothesis tests, is a desired property of their behaviour as the number of items in the data set to which they are applied increases indefinitely. In particular, consistency requires that as the dataset size increases, the outcome of the procedure approaches the correct outcome.[1]Use of the term in statistics derives from SirRonald Fisherin 1922.[2]
Use of the termsconsistencyandconsistentin statistics is restricted to cases where essentially the same procedure can be applied to any number of data items. In complicated applications of statistics, there may be several ways in which the number of data items may grow. For example, records for rainfall within an area might increase in three ways: records for additional time periods; records for additional sites with a fixed area; records for extra sites obtained by extending the size of the area. In such cases, the property of consistency may be limited to one or more of the possible ways a sample size can grow.
Aconsistent estimatoris one for which, when the estimate is considered as arandom variableindexed by the numbernof items in the data set, asnincreases the estimatesconverge in probabilityto the value that the estimator is designed to estimate.
An estimator that hasFisher consistencyis one for which, if the estimator were applied to the entire population rather than a sample, the true value of the estimated parameter would be obtained.
Aconsistent testis one for which thepowerof the test for a fixed untrue hypothesis increases to one as the number of data items increases.[1]
Instatistical classification, a consistent classifier is one for which the probability of correct classification, given a training set, approaches, as the size of the training set increases, the best probability theoretically possible if the population distributions were fully known.
An estimator or test may be consistent without being unbiased.[3] A classic example is the sample standard deviation, which is a biased estimator but converges to the population standard deviation almost surely by the law of large numbers. Phrased differently, unbiasedness is not a requirement for consistency, so biased estimators and tests may be used in practice with the expectation that the outcomes are reliable, especially when the sample size is large (recall the definition of consistency). In contrast, an estimator or test that is not consistent may be difficult to justify in practice, since gathering additional data does not carry the asymptotic guarantee of improving the quality of the outcome.
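An illustrative simulation of "biased but consistent" (all numbers are assumed): the maximum-likelihood variance estimator divides by n and is therefore biased in small samples, yet it approaches the true variance as the sample size grows.

```python
# Illustrative simulation: the variance estimator that divides by n is biased
# for small n but consistent, approaching the true value as n grows.
import numpy as np

rng = np.random.default_rng(5)
sigma = 3.0
for n in (10, 100, 10_000, 1_000_000):
    x = rng.normal(0.0, sigma, size=n)
    var_mle = ((x - x.mean()) ** 2).mean()    # divides by n
    print(f"n={n:>9,}  var_mle={var_mle:7.3f}  (true {sigma ** 2})")
```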
|
https://en.wikipedia.org/wiki/Consistency_(statistics)
|
Instatistics,homogeneityand its opposite,heterogeneity, arise in describing the properties of adataset, or several datasets. They relate to the validity of the often convenient assumption that the statistical properties of any one part of an overall dataset are the same as any other part. Inmeta-analysis, which combines the data from several studies, homogeneity measures the differences or similarities between the several studies (see alsoStudy heterogeneity).
Homogeneity can be studied to several degrees of complexity. For example, considerations ofhomoscedasticityexamine how much thevariabilityof data-values changes throughout a dataset. However, questions of homogeneity apply to all aspects of thestatistical distributions, including thelocation parameter. Thus, a more detailed study would examine changes to the whole of themarginal distribution. An intermediate-level study might move from looking at the variability to studying changes in theskewness. In addition to these, questions of homogeneity apply also to thejoint distributions.
The concept of homogeneity can be applied in many different ways and, for certain types of statistical analysis, it is used to look for further properties that might need to be treated as varying within a dataset once some initial types of non-homogeneity have been dealt with.
Instatistics, asequenceofrandom variablesishomoscedastic(/ˌhoʊmoʊskəˈdæstɪk/) if all its random variables have the same finitevariance; this is also known as homogeneity of variance. The complementary notion is called heteroscedasticity, also known as heterogeneity of variance. The spellingshomoskedasticityandheteroskedasticityare also frequently used. “Skedasticity” comes from the Ancient Greek word “skedánnymi”, meaning “to scatter”.[1][2][3]Assuming a variable is homoscedastic when in reality it is heteroscedastic (/ˌhɛtəroʊskəˈdæstɪk/) results inunbiasedbutinefficientpoint estimatesand in biased estimates ofstandard errors, and may result in overestimating thegoodness of fitas measured by thePearson coefficient.
The existence of heteroscedasticity is a major concern inregression analysisand theanalysis of variance, as it invalidatesstatistical tests of significancethat assume that themodelling errorsall have the same variance. While theordinary least squaresestimator is still unbiased in the presence of heteroscedasticity, it is inefficient and inference based on the assumption of homoskedasticity is misleading. In that case,generalized least squares(GLS) was frequently used in the past.[4][5]Nowadays, standard practice in econometrics is to includeHeteroskedasticity-consistent standard errorsinstead of using GLS, as GLS can exhibit strong bias in small samples if the actualskedastic functionis unknown.[6]
Because heteroscedasticity concernsexpectationsof the secondmomentof the errors, its presence is referred to asmisspecificationof the second order.[7]
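A rough illustration of heteroscedasticity (simulated data and an informal split-sample check, not a formal test): the error variance grows with the explanatory variable, so residuals from an ordinary least squares fit are far more spread out in the upper half of the data.

```python
# Informal illustration of heteroscedasticity: error standard deviation grows
# with x, so OLS residual variance differs sharply between the two halves.
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(1.0, 10.0, 500)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.3 * x)   # error sd proportional to x

slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)

half = len(x) // 2
print("residual variance, low x :", resid[:half].var())
print("residual variance, high x:", resid[half:].var())   # noticeably larger
```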
Differences in the typical values across the dataset might initially be dealt with by constructing a regression model using certain explanatory variables to relate variations in the typical value to known quantities. There should then be a later stage of analysis to examine whether the errors in the predictions from the regression behave in the same way across the dataset. Thus the question becomes one of the homogeneity of the distribution of the residuals, as the explanatory variables change. Seeregression analysis.
The initial stages in the analysis of a time series may involve plotting values against time to examine homogeneity of the series in various ways: stability across time as opposed to a trend; stability of local fluctuations over time.
Inhydrology, data-series across a number of sites composed of annual values of the within-year annual maximum river-flow are analysed. A common model is that the distributions of these values are the same for all sites apart from a simple scaling factor, so that the location and scale are linked in a simple way. There can then be questions of examining the homogeneity across sites of the distribution of the scaled values.
Inmeteorology, weather datasets are acquired over many years of record and, as part of this, measurements at certain stations may cease occasionally while, at around the same time, measurements may start at nearby locations. There are then questions as to whether, if the records are combined to form a single longer set of records, those records can be considered homogeneous over time. An example of homogeneity testing of wind speed and direction data can be found in Romanićet al., 2015.[9]
Simple populations surveys may start from the idea that responses will be homogeneous across the whole of a population. Assessing the homogeneity of the population would involve looking to see whether the responses of certain identifiablesubpopulationsdiffer from those of others. For example, car-owners may differ from non-car-owners, or there may be differences between different age-groups.
A test for homogeneity, in the sense of exact equivalence of statistical distributions, can be based on anE-statistic. Alocation testtests the simpler hypothesis that distributions have the samelocation parameter.
|
https://en.wikipedia.org/wiki/Homogeneity_(statistics)
|
Repeatabilityortest–retest reliability[1]is the closeness of the agreement between the results of successivemeasurementsof the samemeasure, when carried out under the same conditions of measurement.[2]In other words, the measurements are taken by a single person orinstrumenton the same item, under the same conditions, and in a short period of time. A less-than-perfect test–retest reliability causestest–retest variability. Suchvariabilitycan be caused by, for example,intra-individual variabilityandinter-observer variability. A measurement may be said to berepeatablewhen this variation is smaller than a predetermined acceptance criterion.
Test–retest variability is practically used, for example, inmedical monitoringof conditions. In these situations, there is often a predetermined "critical difference", and for differences in monitored values that are smaller than this critical difference, the possibility of variability as a sole cause of the difference may be considered in addition to, for example, changes in diseases or treatments.[3]
The following conditions need to be fulfilled in the establishment of repeatability:[2][4]
Repeatability methods were developed by Bland and Altman (1986).[5]
If thecorrelationbetween separate administrations of the test is high (e.g. 0.7 or higher as inthis Cronbach's alpha-internal consistency-table[6]), then it has good test–retest reliability.
The repeatability coefficient is a precision measure which represents the value below which theabsolute differencebetween two repeated test results may be expected to lie with a probability of 95%.[citation needed]
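One common way to compute a repeatability coefficient is 1.96·√2 times the within-subject standard deviation estimated from paired measurements, in the spirit of the Bland–Altman approach; the sketch below uses invented paired test–retest values.

```python
# Sketch of a repeatability coefficient as 1.96 * sqrt(2) * within-subject SD,
# with the within-subject SD estimated from paired differences. Data are invented.
import numpy as np

test1 = np.array([10.1, 12.3, 9.8, 11.5, 10.9])
test2 = np.array([10.4, 12.0, 10.1, 11.2, 11.3])

diff = test1 - test2
s_w = np.sqrt(np.mean(diff ** 2) / 2.0)        # within-subject SD from paired diffs
repeatability = 1.96 * np.sqrt(2.0) * s_w

print("within-subject SD        :", round(s_w, 3))
print("repeatability coefficient:", round(repeatability, 3))
```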
Thestandard deviationunder repeatability conditions is part ofprecisionandaccuracy.[citation needed]
An attribute agreement analysis is designed to simultaneously evaluate the impact of repeatability andreproducibilityon accuracy. It allows the analyst to examine the responses from multiple reviewers as they look at several scenarios multiple times. It produces statistics that evaluate the ability of the appraisers to agree with themselves (repeatability), with each other (reproducibility), and with a known master or correct value (overall accuracy) for each characteristic – over and over again.[7]
Because the same test is administered twice and every test is parallel with itself, differences between scores on the test and scores on the retest should be due solely to measurement error. This sort of argument is quite probably true for many physical measurements. However, this argument is often inappropriate for psychological measurement, because it is often impossible to consider the second administration of a test a parallel measure to the first.[8]
The second administration of a psychological test might yield systematically different scores than the first administration due to the following reasons:[8]
|
https://en.wikipedia.org/wiki/Test-retest_reliability
|
Instatisticsandresearch,internal consistencyis typically a measure based on thecorrelationsbetween different items on the same test (or the same subscale on a larger test). It measures whether several items that propose to measure the same generalconstructproduce similar scores. For example, if a respondent expressed agreement with the statements "I like to ride bicycles" and "I've enjoyed riding bicycles in the past", and disagreement with the statement "I hate bicycles", this would be indicative of good internal consistency of the test.
Internal consistency is usually measured with Cronbach's alpha, astatisticcalculated from the pairwise correlations between items. Internal consistency ranges between negative infinity and one. Coefficient alpha will be negative whenever there is greater within-subject variability than between-subject variability.[1]
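A minimal sketch of computing Cronbach's alpha from an item-score matrix (the scores are invented) uses the usual k/(k − 1)·(1 − Σ item variances / variance of total score) formula.

```python
# Minimal sketch: Cronbach's alpha from a respondents-by-items score matrix.
import numpy as np

scores = np.array([[4, 5, 4],
                   [3, 3, 2],
                   [5, 5, 5],
                   [2, 3, 3],
                   [4, 4, 5]], dtype=float)   # rows = respondents, columns = items

k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1)
total_var = scores.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)
print("Cronbach's alpha:", round(alpha, 3))
```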
A commonly accepted rule of thumb grades internal consistency by ranges of Cronbach's alpha, with higher values indicating greater internal consistency.[2]
Very high reliabilities (0.95 or higher) are not necessarily desirable, as this indicates that the items may be redundant.[3]The goal in designing a reliable instrument is for scores on similar items to be related (internally consistent), but for each to contribute some unique information as well. Note further that Cronbach's alpha is necessarily higher for tests measuring more narrow constructs, and lower when more generic, broad constructs are measured. This phenomenon, along with a number of other reasons, argue against using objective cut-off values for internal consistency measures.[4]Alpha is also afunctionof the number of items, so shorter scales will often have lower reliability estimates yet still be preferable in many situations because they are lower burden.
An alternative way of thinking about internal consistency is that it is the extent to which all of the items of a test measure the samelatent variable. The advantage of this perspective over the notion of a high average correlation among the items of a test – the perspective underlying Cronbach's alpha – is that the average item correlation is affected byskewness(in the distribution of item correlations) just as any otheraverageis. Thus, whereas themodalitem correlation is zero when the items of a test measure several unrelated latent variables, the average item correlation in such cases will be greater than zero. Thus, whereas the ideal of measurement is for all items of a test to measure the same latent variable, alpha has been demonstrated many times to attain quite high values even when the set of items measures several unrelated latent variables.[5][6][7][8][9][10][11]The hierarchical "coefficient omega" may be a more appropriate index of the extent to which all of the items in a test measure the same latent variable.[12][13]Several different measures of internal consistency are reviewed by Revelle & Zinbarg (2009).[14][15]
|
https://en.wikipedia.org/wiki/Internal_consistency
|
Level of measurementorscale of measureis a classification that describes the nature of information within the values assigned tovariables.[1]PsychologistStanley Smith Stevensdeveloped the best-known classification with four levels, or scales, of measurement:nominal,ordinal,interval, andratio.[1][2]This framework of distinguishing levels of measurement originated in psychology and has since had a complex history, being adopted and extended in some disciplines and by some scholars, and criticized or rejected by others.[3]Other classifications include those by Mosteller andTukey,[4]and by Chrisman.[5]
Stevens proposed his typology in a 1946Sciencearticle titled "On the theory of scales of measurement".[2]In that article, Stevens claimed that allmeasurementin science was conducted using four different types of scales that he called "nominal", "ordinal", "interval", and "ratio", unifying both "qualitative" (which are described by his "nominal" type) and "quantitative" (to a different degree, all the rest of his scales). The concept of scale types later received the mathematical rigour that it lacked at its inception with the work of mathematical psychologists Theodore Alper (1985, 1987), Louis Narens (1981a, b), andR. Duncan Luce(1986, 1987, 2001). As Luce (1997, p. 395) wrote:
S. S. Stevens (1946, 1951, 1975) claimed that what counted was having an interval or ratio scale. Subsequent research has given meaning to this assertion, but given his attempts to invoke scale type ideas it is doubtful if he understood it himself ...no measurement theorist I know accepts Stevens's broad definition of measurement ...in our view, the only sensible meaning for 'rule' is empirically testable laws about the attribute.
A nominal scale consists only of a number of distinct classes or categories, for example: [Cat, Dog, Rabbit]. Unlike the other scales, no kind of relationship between the classes can be relied upon. Thus measuring with the nominal scale is equivalent toclassifying.
Nominal measurement may differentiate between items or subjects based only on their names or (meta-)categories and other qualitative classifications they belong to. Thus it has been argued that evendichotomousdata relies on aconstructivist epistemology. In this case, discovery of an exception to a classification can be viewed as progress.
Numbers may be used to represent the variables but the numbers do not have numerical value or relationship: for example, aglobally unique identifier.
Examples of these classifications include gender, nationality, ethnicity, language, genre, style, biological species, and form.[6][7]In a university one could also use residence hall or department affiliation as examples. Other concrete examples are
Nominal scales were often called qualitative scales, and measurements made on qualitative scales were called qualitative data. However, the rise of qualitative research has made this usage confusing. If numbers are assigned as labels in nominal measurement, they have no specific numerical value or meaning. No form of arithmetic computation (+, −, ×, etc.) may be performed on nominal measures. The nominal level is the lowest measurement level used from a statistical point of view.
Equalityand other operations that can be defined in terms of equality, such asinequalityandset membership, are the onlynon-trivialoperationsthat generically apply to objects of the nominal type.
Themode, i.e. themost commonitem, is allowed as the measure ofcentral tendencyfor the nominal type. On the other hand, themedian, i.e. themiddle-rankeditem, makes no sense for the nominal type of data since ranking is meaningless for the nominal type.[8]
The ordinal type allows forrank order(1st, 2nd, 3rd, etc.) by which data can be sorted but still does not allow for a relativedegree of differencebetween them. Examples include, on one hand,dichotomousdata with dichotomous (or dichotomized) values such as "sick" vs. "healthy" when measuring health, "guilty" vs. "not-guilty" when making judgments in courts, "wrong/false" vs. "right/true" when measuringtruth value, and, on the other hand,non-dichotomousdata consisting of a spectrum of values, such as "completely agree", "mostly agree", "mostly disagree", "completely disagree" when measuringopinion.
The ordinal scale places events in order, but there is no attempt to make the intervals of the scale equal in terms of some rule. Rank orders represent ordinal scales and are frequently used in research relating to qualitative phenomena. A student's rank in his graduation class involves the use of an ordinal scale. One has to be very careful in making a statement about scores based on ordinal scales. For instance, if Devi's position in his class is 10th and Ganga's position is 40th, it cannot be said that Devi's position is four times as good as that of Ganga.
Ordinal scales only permit the ranking of items from highest to lowest. Ordinal measures have no absolute values, and the real differences between adjacent ranks may not be equal. All that can be said is that one person is higher or lower on the scale than another, but more precise comparisons cannot be made. Thus, the use of an ordinal scale implies a statement of "greater than" or "less than" (an equality statement is also acceptable) without our being able to state how much greater or less. The real difference between ranks 1 and 2, for instance, may be more or less than the difference between ranks 5 and 6. Since the numbers of this scale have only a rank meaning, the appropriate measure of central tendency is the median. A percentile or quartile measure is used for measuring dispersion. Correlations are restricted to various rank order methods. Measures of statistical significance are restricted to the non-parametric methods (R. M. Kothari, 2004).
Themedian, i.e.middle-ranked, item is allowed as the measure ofcentral tendency; however, the mean (or average) as the measure ofcentral tendencyis not allowed. Themodeis allowed.
In 1946, Stevens observed that psychological measurement, such as measurement of opinions, usually operates on ordinal scales; thus means and standard deviations have novalidity, but they can be used to get ideas for how to improveoperationalizationof variables used inquestionnaires. Mostpsychologicaldata collected bypsychometricinstruments and tests, measuringcognitiveand other abilities, are ordinal, although some theoreticians have argued they can be treated as interval or ratio scales. However, there is littleprima facieevidence to suggest that such attributes are anything more than ordinal (Cliff, 1996; Cliff & Keats, 2003; Michell, 2008).[9]In particular,[10]IQ scores reflect an ordinal scale, in which all scores are meaningful for comparison only.[11][12][13]There is no absolute zero, and a 10-point difference may carry different meanings at different points of the scale.[14][15]
The interval type allows for defining thedegree of differencebetween measurements, but not the ratio between measurements. Examples includetemperature scaleswith theCelsius scale, which has two defined points (the freezing and boiling point of water at specific conditions) and then separated into 100 intervals,datewhen measured from an arbitrary epoch (such as AD),locationin Cartesian coordinates, anddirectionmeasured in degrees from true or magnetic north. Ratios are not meaningful since 20 °C cannot be said to be "twice as hot" as 10 °C (unlike temperature inkelvins), nor can multiplication/division be carried out between any two dates directly. However,ratios of differencescan be expressed; for example, one difference can be twice another; for example, the ten-degree difference between 15 °C and 25 °C is twice the five-degree difference between 17 °C and 22 °C. Interval type variables are sometimes also called "scaled variables", but the formal mathematical term is anaffine space(in this case anaffine line).
Themode,median, andarithmetic meanare allowed to measure central tendency of interval variables, while measures of statistical dispersion includerangeandstandard deviation. Since one can only divide bydifferences, one cannot define measures that require some ratios, such as thecoefficient of variation. More subtly, while one can definemomentsabout theorigin, only central moments are meaningful, since the choice of origin is arbitrary. One can definestandardized moments, since ratios of differences are meaningful, but one cannot define the coefficient of variation, since the mean is a moment about the origin, unlike the standard deviation, which is (the square root of) a central moment.
The ratio type takes its name from the fact that measurement is the estimation of the ratio between a magnitude of a continuous quantity and aunit of measurementof the same kind (Michell, 1997, 1999). Most measurement in the physical sciences and engineering is done on ratio scales. Examples includemass,length,duration,plane angle,energyandelectric charge. In contrast to interval scales, ratios can be compared usingdivision. Very informally, many ratio scales can be described as specifying "how much" of something (i.e. an amount or magnitude). Ratio scales are often used to express anorder of magnitudesuch as for temperature inOrders of magnitude (temperature).
Thegeometric meanand theharmonic meanare allowed to measure the central tendency, in addition to the mode, median, and arithmetic mean. Thestudentized rangeand thecoefficient of variationare allowed to measure statistical dispersion. All statistical measures are allowed because all necessary mathematical operations are defined for the ratio scale.
While Stevens's typology is widely adopted, it is still being challenged by other theoreticians, particularly in the cases of the nominal and ordinal types (Michell, 1986).[16]Duncan (1986), for example, objected to the use of the wordmeasurementin relation to the nominal type and Luce (1997) disagreed with Stevens's definition of measurement.
On the other hand, Stevens (1975) said of his own definition of measurement that "the assignment can be any consistent rule. The only rule not allowed would be random assignment, for randomness amounts in effect to a nonrule". Hand says, "Basic psychology texts often begin with Stevens's framework and the ideas are ubiquitous. Indeed, the essential soundness of his hierarchy has been established for representational measurement by mathematicians, determining the invariance properties of mappings from empirical systems to real number continua. Certainly the ideas have been revised, extended, and elaborated, but the remarkable thing is his insight given the relatively limited formal apparatus available to him and how many decades have passed since he coined them."[17]
The use of the mean as a measure of the central tendency for the ordinal type is still debatable among those who accept Stevens's typology. Many behavioural scientists use the mean for ordinal data anyway. This is often justified on the basis that the ordinal type in behavioural science is in fact somewhere between the true ordinal and interval types; although the interval difference between two ordinal ranks is not constant, it is often of the same order of magnitude.
For example, applications of measurement models in educational contexts often indicate that total scores have a fairly linear relationship with measurements across the range of an assessment. Thus, some argue that so long as the unknown interval difference between ordinal scale ranks is not too variable, interval scale statistics such as means can meaningfully be used on ordinal scale variables. Statistical analysis software such asSPSSrequires the user to select the appropriate measurement class for each variable. This ensures that subsequent user errors cannot inadvertently perform meaningless analyses (for example correlation analysis with a variable on a nominal level).
L. L. Thurstonemade progress toward developing a justification for obtaining the interval type, based on thelaw of comparative judgment. A common application of the law is theanalytic hierarchy process. Further progress was made byGeorg Rasch(1960), who developed the probabilisticRasch modelthat provides a theoretical basis and justification for obtaining interval-level measurements from counts of observations such as total scores on assessments.
Typologies aside from Stevens's typology have been proposed. For instance,MostellerandTukey(1977) and Nelder (1990)[18]described continuous counts, continuous ratios, count ratios, and categorical modes of data. See also Chrisman (1998), van den Berg (1991).[19]
Mosteller and Tukey[4]noted that the four levels are not exhaustive and proposed seven instead:
For example, percentages (a variation on fractions in the Mosteller–Tukey framework) do not fit well into Stevens's framework: No transformation is fully admissible.[16]
Nicholas R. Chrisman[5]introduced an expanded list of levels of measurement to account for various measurements that do not necessarily fit with the traditional notions of levels of measurement. Measurements bound to a range and repeating (like degrees in a circle, clock time, etc.), graded membership categories, and other types of measurement do not fit to Stevens's original work, leading to the introduction of six new levels of measurement, for a total of ten:
While some claim that the extended levels of measurement are rarely used outside of academic geography,[20]graded membership is central tofuzzy set theory, while absolute measurements include probabilities and the plausibility and ignorance inDempster–Shafer theory. Cyclical ratio measurements include angles and times. Counts appear to be ratio measurements, but the scale is not arbitrary and fractional counts are commonly meaningless. Log-interval measurements are commonly displayed in stock market graphics. All these types of measurements are commonly used outside academic geography, and do not fit well to Stevens's original work.
The theory of scale types is the intellectual handmaiden to Stevens's "operational theory of measurement", which was to become definitive within psychology and thebehavioral sciences,[citation needed]despite Michell's characterization as its being quite at odds with measurement in the natural sciences (Michell, 1999). Essentially, the operational theory of measurement was a reaction to the conclusions of a committee established in 1932 by theBritish Association for the Advancement of Scienceto investigate the possibility of genuine scientific measurement in the psychological and behavioral sciences. This committee, which became known as theFerguson committee, published a Final Report (Ferguson, et al., 1940, p. 245) in which Stevens'ssonescale (Stevens & Davis, 1938) was an object of criticism:
…any law purporting to express a quantitative relation between sensation intensity and stimulus intensity is not merely false but is in fact meaningless unless and until a meaning can be given to the concept of addition as applied to sensation.
That is, if Stevens'ssonescale genuinely measured the intensity of auditory sensations, then evidence for such sensations as being quantitative attributes needed to be produced. The evidence needed was the presence ofadditive structure—a concept comprehensively treated by the German mathematicianOtto Hölder(Hölder, 1901). Given that the physicist and measurement theoristNorman Robert Campbelldominated the Ferguson committee's deliberations, the committee concluded that measurement in the social sciences was impossible due to the lack ofconcatenationoperations. This conclusion was later rendered false by the discovery of thetheory of conjoint measurementby Debreu (1960) and independently by Luce & Tukey (1964). However, Stevens's reaction was not to conduct experiments to test for the presence of additive structure in sensations, but instead to render the conclusions of the Ferguson committee null and void by proposing a new theory of measurement:
Paraphrasing N. R. Campbell (Final Report, p. 340), we may say that measurement, in the broadest sense, is defined as the assignment of numerals to objects and events according to rules (Stevens, 1946, p. 677).
Stevens was greatly influenced by the ideas of another Harvard academic,[21]theNobel laureatephysicistPercy Bridgman(1927), whose doctrine ofoperationalismStevens used to define measurement. In Stevens's definition, for example, it is the use of a tape measure that defines length (the object of measurement) as being measurable (and so by implication quantitative). Critics of operationalism object that it confuses the relations between two objects or events for properties of one of those of objects or events (Moyer, 1981a, b; Rogers, 1989).[22][23]
The Canadian measurement theorist William Rozeboom was an early and trenchant critic of Stevens's theory of scale types.[24]
Another issue is that the same variable may be a different scale type depending on how it is measured and on the goals of the analysis. For example, hair color is usually thought of as a nominal variable, since it has no apparent ordering.[25]However, it is possible to order colors (including hair colors) in various ways, including by hue; this is known ascolorimetry. Hue is an interval level variable.
|
https://en.wikipedia.org/wiki/Levels_of_measurement
|
Reliability engineeringis a sub-discipline ofsystems engineeringthat emphasizes the ability of equipment to function without failure. Reliability is defined as the probability that a product, system, or service will perform its intended function adequately for a specified period of time, OR will operate in a defined environment without failure.[1]Reliability is closely related toavailability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
Thereliability functionis theoretically defined as theprobabilityof success. In practice, it is calculated using different techniques, and its value ranges between 0 and 1, where 0 indicates no probability of success while 1 indicates definite success. This probability is estimated from detailed (physics of failure) analysis, previous data sets, or through reliability testing and reliability modeling.Availability,testability,maintainability, andmaintenanceare often defined as a part of "reliability engineering" in reliability programs. Reliability often plays a key role in thecost-effectivenessof systems.
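As a brief numerical sketch (the failure rate is an arbitrary assumption), the snippet below evaluates the reliability function under the common constant-failure-rate exponential model, R(t) = exp(−λt), for which the mean time between failures is 1/λ.

```python
# Sketch of a reliability function under the constant-failure-rate (exponential)
# model: R(t) = exp(-lambda * t), mean time between failures = 1 / lambda.
import math

failure_rate = 1e-4            # failures per hour (assumed)
mtbf = 1 / failure_rate        # 10,000 hours
print("MTBF:", mtbf, "hours")

for t in (100, 1_000, 10_000):
    reliability = math.exp(-failure_rate * t)
    print(f"R({t:>6} h) = {reliability:.3f}")
```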
Reliability engineering deals with the prediction, prevention, and management of high levels of "lifetime" engineeringuncertaintyandrisksof failure. Althoughstochasticparameters define and affect reliability, reliability is not only achieved by mathematics and statistics.[2][3]"Nearly all teaching and literature on the subject emphasize these aspects and ignore the reality that the ranges of uncertainty involved largely invalidate quantitative methods forpredictionand measurement."[4]For example, it is easy to represent "probability of failure" as a symbol or value in an equation, but it is almost impossible to predict its true magnitude in practice, which is massivelymultivariate, so having the equation for reliability does not begin to equal having an accurate predictive measurement of reliability.
Reliability engineering relates closely to Quality Engineering,safety engineering, andsystem safety, in that they use common methods for their analysis and may require input from each other. It can be said that a system must be reliably safe.
Reliability engineering focuses on the costs of failure caused by system downtime, cost of spares, repair equipment, personnel, and cost of warranty claims.[5]
The wordreliabilitycan be traced back to 1816 and is first attested to the poetSamuel Taylor Coleridge.[6]Before World War II the term was linked mostly torepeatability; a test (in any type of science) was considered "reliable" if the same results would be obtained repeatedly. In the 1920s, product improvement through the use ofstatistical process controlwas promoted by Dr.Walter A. ShewhartatBell Labs,[7]around the time thatWaloddi Weibullwas working on statistical models for fatigue. The development of reliability engineering was here on a parallel path with quality. The modern use of the word reliability was defined by the U.S. military in the 1940s, characterizing a product that would operate when expected and for a specified period.
In World War II, many reliability issues were due to the inherent unreliability of electronic equipment available at the time, and to fatigue issues. In 1945, M.A. Miner published a seminal paper titled "Cumulative Damage in Fatigue" in an ASME journal. A main application for reliability engineering in the military was for the vacuum tube as used in radar systems and other electronics, for which reliability proved to be very problematic and costly. TheIEEEformed the Reliability Society in 1948. In 1950, theUnited States Department of Defenseformed a group called the "Advisory Group on the Reliability of Electronic Equipment" (AGREE) to investigate reliability methods for military equipment.[8]This group recommended three main ways of working:
In the 1960s, more emphasis was given to reliability testing on component and system levels. The famous military standard MIL-STD-781 was created at that time. Around this period also the much-used predecessor to military handbook 217 was published byRCAand was used for the prediction of failure rates of electronic components. The emphasis on component reliability and empirical research (e.g. Mil Std 217) alone slowly decreased. More pragmatic approaches, as used in the consumer industries, were being used. In the 1980s, televisions were increasingly made up of solid-state semiconductors. Automobiles rapidly increased their use of semiconductors with a variety of microcomputers under the hood and in the dash. Large air conditioning systems developed electronic controllers, as did microwave ovens and a variety of other appliances. Communications systems began to adopt electronics to replace older mechanical switching systems.Bellcoreissued the first consumer prediction methodology for telecommunications, andSAEdeveloped a similar document SAE870050 for automotive applications. The nature of predictions evolved during the decade, and it became apparent that die complexity wasn't the only factor that determined failure rates for integrated circuits (ICs).
Kam Wong published a paper questioning the bathtub curve[9]—see alsoreliability-centered maintenance. During this decade, the failure rate of many components dropped by a factor of 10. Software became important to the reliability of systems. By the 1990s, the pace of IC development was picking up. Wider use of stand-alone microcomputers was common, and the PC market helped keep IC densities following Moore's law and doubling about every 18 months. Reliability engineering was now changing as it moved towards understanding thephysics of failure. Failure rates for components kept dropping, but system-level issues became more prominent.Systems thinkinghas become more and more important. For software, the CMM model (Capability Maturity Model) was developed, which gave a more qualitative approach to reliability. ISO 9000 added reliability measures as part of the design and development portion of certification. The expansion of the World Wide Web created new challenges of security and trust. The older problem of too little reliable information available had now been replaced by too much information of questionable value. Consumer reliability problems could now be discussed online in real-time using data. New technologies such as micro-electromechanical systems (MEMS), handheldGPS, and hand-held devices that combine cell phones and computers all represent challenges to maintaining reliability. Product development time continued to shorten through this decade and what had been done in three years was being done in 18 months. This meant that reliability tools and tasks had to be more closely tied to the development process itself. In many ways, reliability has become part of everyday life and consumer expectations.
Reliability is the probability of a product performing its intended function under specified operating conditions in a manner that meets or exceeds customer expectations.[10]
The objectives of reliability engineering, in decreasing order of priority, are:[11]
The reason for the priority emphasis is that it is by far the most effective way of working, in terms of minimizing costs and generating reliable products. The primary skills that are required, therefore, are the ability to understand and anticipate the possible causes of failures, and knowledge of how to prevent them. It is also necessary to know the methods that can be used for analyzing designs and data.
Reliability engineering for "complex systems" requires a different, more elaborate systems approach than for non-complex systems. Reliability engineering may in that case involve:
Effective reliability engineering requires understanding of the basics offailure mechanismsfor which experience, broad engineering skills and good knowledge from many different special fields of engineering are required,[12]for example:
Reliability may be defined in the following ways:
Many engineering techniques are used in reliabilityrisk assessments, such as reliability block diagrams,hazard analysis,failure mode and effects analysis(FMEA),[13]fault tree analysis(FTA),Reliability Centered Maintenance, (probabilistic) load and material stress and wear calculations, (probabilistic) fatigue and creep analysis, human error analysis, manufacturing defect analysis, reliability testing, etc. These analyses must be done properly and with much attention to detail to be effective. Because of the large number of reliability techniques, their expense, and the varying degrees of reliability required for different situations, most projects develop a reliability program plan to specify the reliability tasks (statement of work(SoW) requirements) that will be performed for that specific system.
Consistent with the creation ofsafety cases, for example perARP4761, the goal of reliability assessments is to provide a robust set of qualitative and quantitative evidence that the use of a component or system will not be associated with unacceptable risk. The basic steps to take[14]are to:
Theriskhere is the combination of probability and severity of the failure incident (scenario) occurring. The severity can be looked at from a system safety or a system availability point of view. Reliability for safety can be thought of as a very different focus from reliability for system availability. Availability and safety can exist in dynamic tension as keeping a system too available can be unsafe. Forcing an engineering system into a safe state too quickly can force false alarms that impede the availability of the system.
In ade minimisdefinition, the severity of failures includes the cost of spare parts, man-hours, logistics, damage (secondary failures), and downtime of machines which may cause production loss. A more complete definition of failure also can mean injury, dismemberment, and death of people within the system (witness mine accidents, industrial accidents, space shuttle failures) and the same to innocent bystanders (witness the citizenry of cities like Bhopal, Love Canal, Chernobyl, or Sendai, and other victims of the 2011 Tōhoku earthquake and tsunami)—in this case, reliability engineering becomes system safety. What is acceptable is determined by the managing authority or customers or the affected communities. Residual risk is the risk that is left over after all reliability activities have finished, and includes the unidentified risk—and is therefore not completely quantifiable.
Measures for dealing with the complexity of technical systems, such as improved design and materials, planned inspections, fool-proof design, and backup redundancy, decrease risk but increase cost. Risk can be decreased to ALARA (as low as reasonably achievable) or ALAPA (as low as practically achievable) levels.
Implementing a reliability program is not simply a software purchase; it is not just a checklist of items that must be completed that ensure one has reliable products and processes. A reliability program is a complex learning and knowledge-based system unique to one's products and processes. It is supported by leadership, built on the skills that one develops within a team, integrated into business processes, and executed by following proven standard work practices.[15]
A reliability program plan is used to document exactly what "best practices" (tasks, methods, tools, analysis, and tests) are required for a particular (sub)system, as well as clarify customer requirements for reliability assessment. For large-scale complex systems, the reliability program plan should be a separatedocument. Resource determination for manpower and budgets for testing and other tasks is critical for a successful program. In general, the amount of work required for an effective program for complex systems is large.
A reliability program plan is essential for achieving high levels of reliability, testability,maintainability, and the resulting systemavailability, and is developed early during system development and refined over the system's life cycle. It specifies not only what the reliability engineer does, but also the tasks performed by otherstakeholders. An effective reliability program plan must be approved by top program management, which is responsible for the allocation of sufficient resources for its implementation.
A reliability program plan may also be used to evaluate and improve the availability of a system by the strategy of focusing on increasing testability & maintainability and not on reliability. Improving maintainability is generally easier than improving reliability. Maintainability estimates (repair rates) are also generally more accurate. However, because the uncertainties in the reliability estimates are in most cases very large, they are likely to dominate the availability calculation (prediction uncertainty problem), even when maintainability levels are very high. When reliability is not under control, more complicated issues may arise, like manpower (maintainers/customer service capability) shortages, spare part availability, logistic delays, lack of repair facilities, extensive retrofit and complex configuration management costs, and others. The problem of unreliability may be increased also due to the "domino effect" of maintenance-induced failures after repairs. Focusing only on maintainability is therefore not enough. If failures are prevented, none of the other issues are of any importance, and therefore reliability is generally regarded as the most important part of availability. Reliability needs to be evaluated and improved related to both availability and thetotal cost of ownership(TCO) due to the cost of spare parts, maintenance man-hours, transport costs, storage costs, part obsolete risks, etc. But, as GM and Toyota have belatedly discovered, TCO also includes the downstream liability costs when reliability calculations have not sufficiently or accurately addressed customers' bodily risks. Often a trade-off is needed between the two. There might be a maximum ratio between availability and cost of ownership. The testability of a system should also be addressed in the plan, as this is the link between reliability and maintainability. The maintenance strategy can influence the reliability of a system (e.g., by preventive and/orpredictive maintenance), although it can never bring it above the inherent reliability.
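As a rough, hedged illustration of the availability trade-off discussed above, steady-state (inherent) availability is often approximated as MTBF divided by (MTBF + MTTR); the function name and the MTBF/MTTR figures below are assumptions made for this sketch, not values taken from any standard or from this article.

```python
def steady_state_availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Approximate inherent availability of a repairable system: A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Two hypothetical design options:
print(steady_state_availability(mtbf_hours=1000, mttr_hours=10))  # ~0.990 (more reliable)
print(steady_state_availability(mtbf_hours=500, mttr_hours=2))    # ~0.996 (easier to repair)
```

The second option reaches higher availability despite the lower MTBF, which mirrors the point above that improving maintainability is often easier than improving reliability; it does nothing, however, about the large uncertainty that typically surrounds the MTBF estimate itself.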
The reliability plan should clearly provide a strategy for availability control. Whether only availability or also cost of ownership is more important depends on the use of the system. For example, a system that is a critical link in a production system—e.g., a big oil platform—is normally allowed to have a very high cost of ownership if that cost translates to even a minor increase in availability, as the unavailability of the platform results in a massive loss of revenue which can easily exceed the high cost of ownership. A proper reliability plan should always address RAMT analysis in its total context. RAMT stands for reliability, availability, maintainability/maintenance, and testability in the context of the customer's needs.
For any system, one of the first tasks of reliability engineering is to adequately specify the reliability and maintainability requirements allocated from the overallavailabilityneeds and, more importantly, derived from proper design failure analysis or preliminary prototype test results. Clear requirements (able to be designed to) should constrain the designers from designing particular unreliable items/constructions/interfaces/systems. Setting only availability, reliability, testability, or maintainability targets (e.g., max. failure rates) is not appropriate. This is a widespread misunderstanding in reliability requirements engineering. Reliability requirements address the system itself, including test and assessment requirements, and associated tasks and documentation. Reliability requirements are included in the appropriate system or subsystem requirements specifications, test plans, and contract statements. The creation of proper lower-level requirements is critical.[16]The provision of only quantitative minimum targets (e.g.,Mean Time Between Failure(MTBF) values or failure rates) is not sufficient for different reasons. One reason is that a full validation (related to correctness and verifiability in time) of a quantitative reliability allocation (requirement spec) on lower levels for complex systems often cannot be made as a consequence of (1) the fact that the requirements are probabilistic, (2) the extremely high level of uncertainties involved in showing compliance with all these probabilistic requirements, and (3) the fact that reliability is a function of time, and accurate estimates of a (probabilistic) reliability number per item are available only very late in the project, sometimes even after many years of in-service use. Compare this problem with the continuous (re-)balancing of, for example, lower-level-system mass requirements in the development of an aircraft, which is already often a big undertaking. Notice that in this case, masses differ by only a few percent, are not a function of time, and the data is non-probabilistic and already available in CAD models. In the case of reliability, the levels of unreliability (failure rates) may change by whole decades (factors of 10) as a result of very minor deviations in design, process, or anything else.[17]The information is often not available without huge uncertainties within the development phase. This makes this allocation problem almost impossible to do in a useful, practical, valid manner that does not result in massive over- or under-specification. A pragmatic approach is therefore needed—for example: the use of general levels/classes of quantitative requirements depending only on severity of failure effects. Also, the validation of results is a far more subjective task than for any other type of requirement. (Quantitative) reliability parameters—in terms of MTBF—are by far the most uncertain design parameters in any design.
Furthermore, reliability design requirements should drive a (system or part) design to incorporate features that prevent failures from occurring, or limit consequences from failure in the first place. Not only would it aid in some predictions, this effort would keep from distracting the engineering effort into a kind of accounting work. A design requirement should be precise enough so that a designer can "design to" it and can also prove—through analysis or testing—that the requirement has been achieved, and, if possible, within some a stated confidence. Any type of reliability requirement should be detailed and could be derived from failure analysis (Finite-Element Stress and Fatigue analysis, Reliability Hazard Analysis, FTA, FMEA, Human Factor Analysis, Functional Hazard Analysis, etc.) or any type of reliability testing. Also, requirements are needed for verification tests (e.g., required overload stresses) and test time needed. To derive these requirements in an effective manner, asystems engineering-based risk assessment and mitigation logic should be used. Robust hazard log systems must be created that contain detailed information on why and how systems could or have failed. Requirements are to be derived and tracked in this way. These practical design requirements shall drive the design and not be used only for verification purposes. These requirements (often design constraints) are in this way derived from failure analysis or preliminary tests. Understanding of this difference compared to only purely quantitative (logistic) requirement specification (e.g., Failure Rate / MTBF target) is paramount in the development of successful (complex) systems.[18]
The maintainability requirements address the costs of repairs as well as repair time. Testability (not to be confused with test requirements) requirements provide the link between reliability and maintainability and should address detectability of failure modes (on a particular system level), isolation levels, and the creation of diagnostics (procedures).
As indicated above, reliability engineers should also address requirements for various reliability tasks and documentation during system development, testing, production, and operation. These requirements are generally specified in the contract statement of work and depend on how much leeway the customer wishes to provide to the contractor. Reliability tasks include various analyses, planning, and failure reporting. Task selection depends on the criticality of the system as well as cost. A safety-critical system may require a formal failure reporting and review process throughout development, whereas a non-critical system may rely on final test reports. The most common reliability program tasks are documented in reliability program standards, such as MIL-STD-785 and IEEE 1332. Failure reporting analysis and corrective action systems are a common approach for product/process reliability monitoring.
In practice, most failures can be traced back to some type ofhuman error, for example in:
However, humans are also very good at detecting such failures, correcting them, and improvising when abnormal situations occur. Therefore, policies that completely rule out human actions in design and production processes to improve reliability may not be effective. Some tasks are better performed by humans and some are better performed by machines.[19]
Furthermore, human errors in management; the organization of data and information; or the misuse or abuse of items, may also contribute to unreliability. This is the core reason why high levels of reliability for complex systems can only be achieved by following a robustsystems engineeringprocess with proper planning and execution of the validation and verification tasks. This also includes the careful organization of data and information sharing and creating a "reliability culture", in the same way, that having a "safety culture" is paramount in the development of safety-critical systems.
Reliability prediction combines:
For existing systems, it is arguable that any attempt by a responsible program to correct the root cause of discovered failures may render the initial MTBF estimate invalid, as new assumptions (themselves subject to high error levels) of the effect of this correction must be made. Another practical issue is the general unavailability of detailed failure data, with those available often featuring inconsistent filtering of failure (feedback) data, and ignoring statistical errors (which are very high for rare events like reliability-related failures). Very clear guidelines must be present to count and compare failures related to different types of root causes (e.g. manufacturing-, maintenance-, transport-, system-induced or inherent design failures). Comparing different types of causes may lead to incorrect estimations and incorrect business decisions about the focus of improvement.
Performing a proper quantitative reliability prediction for systems can be difficult and very expensive if done by testing. At the individual part-level, reliability results can often be obtained with comparatively high confidence, as testing of many sample parts might be possible using the available testing budget. Unfortunately, these tests may lack validity at a system-level due to assumptions made at part-level testing. Several authors have emphasized the importance of initial part- or system-level testing until failure, and of learning from such failures to improve the system or part. The general conclusion is drawn that an accurate and absolute prediction – by either field-data comparison or testing – of reliability is in most cases not possible. An exception might be failures due to wear-out problems such as fatigue failures. In the introduction of MIL-STD-785 it is written that reliability prediction should be used with great caution, if not used solely for comparison in trade-off studies.
Design for Reliability (DfR) is a process that encompasses tools and procedures to ensure that a product meets its reliability requirements, under its use environment, for the duration of its lifetime. DfR is implemented in the design stage of a product to proactively improve product reliability.[21]DfR is often used as part of an overallDesign for Excellence (DfX)strategy.
Reliability design begins with the development of a (system)model. Reliability and availability models useblock diagramsandFault Tree Analysisto provide a graphical means of evaluating the relationships between different parts of the system. These models may incorporate predictions based on failure rates taken from historical data. While the (input data) predictions are often not accurate in an absolute sense, they are valuable to assess relative differences in design alternatives. Maintainability parameters, for exampleMean time to repair(MTTR), can also be used as inputs for such models.
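As a minimal sketch of how a reliability block diagram might be evaluated, assuming independent blocks whose mission reliabilities are known or assumed (the component values below are invented for illustration):

```python
from math import prod

def series(reliabilities):
    """All blocks must work: R = R1 * R2 * ... * Rn."""
    return prod(reliabilities)

def parallel(reliabilities):
    """At least one block must work: R = 1 - (1 - R1)(1 - R2)...(1 - Rn)."""
    return 1 - prod(1 - r for r in reliabilities)

# Hypothetical system: one block in series with a duplicated (parallel) block.
r_system = series([0.99, parallel([0.95, 0.95])])
print(round(r_system, 4))  # 0.9875
```

Exactly as the text cautions, the absolute number is only as good as the assumed inputs; the value of such a model lies mainly in comparing design alternatives.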
The most important fundamental initiating causes and failure mechanisms are to be identified and analyzed with engineering tools. A diverse set of practical guidance as to performance and reliability should be provided to designers so that they can generate low-stressed designs and products that protect, or are protected against, damage and excessive wear. Proper validation of input loads (requirements) may be needed, in addition to verification for reliability "performance" by testing.
One of the most important design techniques isredundancy. This means that if one part of the system fails, there is an alternate success path, such as a backup system. The reason why this is the ultimate design choice is related to the fact that high-confidence reliability evidence for new parts or systems is often not available, or is extremely expensive to obtain. By combining redundancy, together with a high level of failure monitoring, and the avoidance of common cause failures; even a system with relatively poor single-channel (part) reliability, can be made highly reliable at a system level (up to mission critical reliability). No testing of reliability has to be required for this. In conjunction with redundancy, the use of dissimilar designs or manufacturing processes (e.g. via different suppliers of similar parts) for single independent channels, can provide less sensitivity to quality issues (e.g. early childhood failures at a single supplier), allowing very-high levels of reliability to be achieved at all moments of the development cycle (from early life to long-term). Redundancy can also be applied in systems engineering by double checking requirements, data, designs, calculations, software, and tests to overcome systematic failures.
Another effective way to deal with reliability issues is to perform analysis that predicts degradation, enabling the prevention of unscheduled downtime events / failures.RCM(Reliability Centered Maintenance) programs can be used for this.
For electronic assemblies, there has been an increasing shift towards a different approach calledphysics of failure. This technique relies on understanding the physical static and dynamic failure mechanisms. It accounts for variation in load, strength, and stress that lead to failure with a high level of detail, made possible with the use of modernfinite element method(FEM) software programs that can handle complex geometries and mechanisms such as creep, stress relaxation, fatigue, and probabilistic design (Monte Carlo Methods/DOE). The material or component can be re-designed to reduce the probability of failure and to make it more robust against such variations. Another common design technique is componentderating: i.e. selecting components whose specifications significantly exceed the expected stress levels, such as using heavier gauge electrical wire than might normally be specified for the expectedelectric current.
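As a hedged sketch of the probabilistic side of such an approach, the snippet below uses a simple Monte Carlo simulation to estimate the probability that a randomly varying applied stress exceeds a randomly varying strength (stress-strength interference). The normal distributions and their parameters are assumptions chosen purely for illustration, not properties of any particular material or component.

```python
import random

def interference_probability(n_samples: int = 100_000, seed: int = 1) -> float:
    """Monte Carlo estimate of P(stress > strength) for normally distributed
    stress and strength (stress-strength interference)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        stress = rng.gauss(300.0, 30.0)    # assumed load distribution (e.g. MPa)
        strength = rng.gauss(400.0, 40.0)  # assumed strength distribution (e.g. MPa)
        if stress > strength:
            failures += 1
    return failures / n_samples

print(interference_probability())  # roughly 0.02-0.03 for these assumed parameters
```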
Many of the tasks, techniques, and analyses used in Reliability Engineering are specific to particular industries and applications, but can commonly include:
Results from these methods are presented during reviews of part or system design, and logistics. Reliability is just one requirement among many for a complex part or system. Engineering trade-off studies are used to determine theoptimumbalance between reliability requirements and other constraints.
Reliability engineers, whether using quantitative or qualitative methods to describe a failure or hazard, rely on language to pinpoint the risks and enable issues to be solved. The language used must help create an orderly description of the function/item/system and its complex surrounding as it relates to the failure of these functions/items/systems. Systems engineering is very much about finding the correct words to describe the problem (and related risks), so that they can be readily solved via engineering solutions. Jack Ring said that a systems engineer's job is to "language the project." (Ring et al. 2000)[23]For part/system failures, reliability engineers should concentrate more on the "why and how", rather that predicting "when". Understanding "why" a failure has occurred (e.g. due to over-stressed components or manufacturing issues) is far more likely to lead to improvement in the designs and processes used[4]than quantifying "when" a failure is likely to occur (e.g. via determining MTBF). To do this, first the reliability hazards relating to the part/system need to be classified and ordered (based on some form of qualitative and quantitative logic if possible) to allow for more efficient assessment and eventual improvement. This is partly done in pure language andpropositionlogic, but also based on experience with similar items. This can for example be seen in descriptions of events infault tree analysis,FMEAanalysis, and hazard (tracking) logs. In this sense language and proper grammar (part of qualitative analysis) plays an important role in reliability engineering, just like it does insafety engineeringor in-general withinsystems engineering.
Correct use of language can also be key to identifying or reducing the risks ofhuman error, which are often the root cause of many failures. This can include proper instructions in maintenance manuals, operation manuals, emergency procedures, and others to prevent systematic human errors that may result in system failures. These should be written by trained or experienced technical authors using so-called simplified English orSimplified Technical English, where words and structure are specifically chosen and created so as to reduce ambiguity or risk of confusion (e.g. an instruction such as "replace the old part" could ambiguously refer to swapping a worn-out part with a non-worn-out part, or to replacing a part with one using a more recent and hopefully improved design).
Reliability modeling is the process of predicting or understanding the reliability of a component or system prior to its implementation. Two types of analysis that are often used to model a complete system'savailabilitybehavior including effects from logistics issues like spare part provisioning, transport and manpower are fault tree analysis andreliability block diagrams. At a component level, the same types of analyses can be used together with others. The input for the models can come from many sources including testing; prior operational experience; field data; as well as data handbooks from similar or related industries. Regardless of source, all model input data must be used with great caution, as predictions are only valid in cases where the same product was used in the same context. As such, predictions are often only used to help compare alternatives.
For part level predictions, two separate fields of investigation are common:
Reliability is defined as theprobabilitythat a device will perform its intended function during a specified period of time under stated conditions. Mathematically, this may be expressed as,
$R(t) = \Pr\{T > t\} = \int_{t}^{\infty} f(x)\,dx,$

where $f(x)$ is the failure probability density function and $t$ is the length of the period of time (which is assumed to start from time zero).
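For common parametric lifetime models, the integral above has a closed form. The sketch below evaluates R(t) for an exponential (constant failure rate) model and a two-parameter Weibull model; the parameter values are illustrative assumptions only.

```python
from math import exp

def reliability_exponential(t: float, failure_rate: float) -> float:
    """R(t) = exp(-lambda * t) for a constant failure rate lambda."""
    return exp(-failure_rate * t)

def reliability_weibull(t: float, eta: float, beta: float) -> float:
    """R(t) = exp(-(t / eta)**beta), with characteristic life eta and shape beta."""
    return exp(-((t / eta) ** beta))

print(reliability_exponential(t=1000, failure_rate=1e-4))  # ~0.905
print(reliability_weibull(t=1000, eta=2000, beta=1.5))     # ~0.702
```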
There are a few key elements of this definition:
Quantitative requirements are specified using reliabilityparameters. The most common reliability parameter is themean time to failure(MTTF), which can also be specified as thefailure rate(this is expressed as a frequency or conditional probability density function (PDF)) or the number of failures during a given period. These parameters may be useful for higher system levels and systems that are operated frequently (i.e. vehicles, machinery, and electronic equipment). Reliability increases as the MTTF increases. The MTTF is usually specified in hours, but can also be used with other units of measurement, such as miles or cycles. Using MTTF values on lower system levels can be very misleading, especially if they do not specify the associated Failures Modes and Mechanisms (The F in MTTF).[17]
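A minimal sketch of how a crude point estimate of MTBF and the corresponding failure rate might be formed from field data, bearing in mind the caveat above that such figures say nothing about the underlying failure modes and mechanisms; the fleet hours and failure count are invented.

```python
def mtbf_point_estimate(total_operating_hours: float, n_failures: int) -> float:
    """Crude point estimate: accumulated operating time divided by observed failures."""
    if n_failures == 0:
        raise ValueError("No failures observed; a confidence-bound method is needed instead.")
    return total_operating_hours / n_failures

fleet_hours = 250_000.0  # assumed accumulated fleet operating hours
failures = 12            # assumed number of relevant failures after data filtering

mtbf = mtbf_point_estimate(fleet_hours, failures)
print(f"MTBF ~ {mtbf:.0f} h, failure rate ~ {1 / mtbf:.2e} per hour")
# MTBF ~ 20833 h, failure rate ~ 4.80e-05 per hour
```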
In other cases, reliability is specified as the probability of mission success. For example, reliability of a scheduled aircraft flight can be specified as a dimensionless probability or a percentage, as often used insystem safetyengineering.
A special case of mission success is the single-shot device or system. These are devices or systems that remain relatively dormant and only operate once. Examples include automobileairbags, thermalbatteriesandmissiles. Single-shot reliability is specified as a probability of one-time success or is subsumed into a related parameter. Single-shot missile reliability may be specified as a requirement for the probability of a hit. For such systems, theprobability of failure on demand(PFD) is the reliability measure – this is actually an "unavailability" number. The PFD is derived from failure rate (a frequency of occurrence) and mission time for non-repairable systems.
For repairable systems, it is obtained from failure rate, mean-time-to-repair (MTTR), and test interval. This measure may not be unique for a given system as this measure depends on the kind of demand. In addition to system level requirements, reliability requirements may be specified for critical subsystems. In most cases, reliability parameters are specified with appropriate statisticalconfidence intervals.
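One commonly quoted low-demand approximation, used for example in functional-safety practice, is PFD_avg ≈ λ_DU · TI / 2 for a single non-redundant channel with dangerous-undetected failure rate λ_DU and proof-test interval TI. The sketch below applies that approximation with invented numbers; it is an illustration of the idea rather than a method prescribed by this article.

```python
def pfd_avg_single_channel(lambda_du_per_hour: float, test_interval_hours: float) -> float:
    """Average probability of failure on demand for one channel,
    using the low-demand approximation PFD_avg ~ lambda_DU * TI / 2."""
    return lambda_du_per_hour * test_interval_hours / 2.0

# Assumed dangerous-undetected failure rate and a yearly proof test:
print(pfd_avg_single_channel(lambda_du_per_hour=2e-6, test_interval_hours=8760))  # ~8.8e-3
```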
The purpose ofreliability testingorreliability verificationis to discover potential problems with the design as early as possible and, ultimately, provide confidence that the system meets its reliability requirements. The reliability of the product in all environments, such as expected use, transportation, or storage during the specified lifespan, should be considered.[10]Reliability testing exposes the product to natural or artificial environmental conditions in order to evaluate its performance under the conditions of actual use, transportation, and storage, and to analyze the degree and mechanism of influence of environmental factors.[24]Environmental test equipment is used to simulate high temperature, low temperature, high humidity, and temperature cycling, accelerating the product's response to its use environment and verifying whether it reaches the expected quality in R&D, design, and manufacturing.[25]
Reliability verification is also called reliability testing; it refers to the use of modeling, statistics, and other methods to evaluate the reliability of the product based on its life span and expected performance.[26]Most products on the market require reliability testing, for example automotive components,integrated circuits, heavy machinery used to mine natural resources, and aircraft automation software.[27][28]
Reliability testing may be performed at several levels and there are different types of testing. Complex systems may be tested at component, circuit board, unit, assembly, subsystem and system levels.[29](The test level nomenclature varies among applications.) For example, performingenvironmental stress screeningtests at lower levels, such as piece parts or small assemblies, catches problems before they cause failures at higher levels. Testing proceeds during each level of integration through full-up system testing, developmental testing, and operational testing, thereby reducing program risk. However, testing does not mitigate unreliability risk.
With each test both statisticaltype I and type II errorscould be made, depending on sample size, test time, assumptions and the needed discrimination ratio. There is risk of incorrectly rejecting a good design (type I error) and the risk of incorrectly accepting a bad design (type II error).
It is not always feasible to test all system requirements. Some systems are prohibitively expensive to test; somefailure modesmay take years to observe; some complex interactions result in a huge number of possible test cases; and some tests require the use of limited test ranges or other resources. In such cases, different approaches to testing can be used, such as (highly) accelerated life testing,design of experiments, andsimulations.
The desired level of statistical confidence also plays a role in reliability testing. Statistical confidence is increased by increasing either the test time or the number of items tested. Reliability test plans are designed to achieve the specified reliability at the specifiedconfidence levelwith the minimum number of test units and test time. Different test plans result in different levels of risk to the producer and consumer. The desired reliability, statistical confidence, and risk levels for each side influence the ultimate test plan. The customer and developer should agree in advance on how reliability requirements will be tested.
A key aspect of reliability testing is to define "failure". Although this may seem obvious, there are many situations where it is not clear whether a failure is really the fault of the system. Variations in test conditions, operator differences, weather and unexpected situations create differences between the customer and the system developer. One strategy to address this issue is to use a scoring conference process. A scoring conference includes representatives from the customer, the developer, the test organization, the reliability organization, and sometimes independent observers. The scoring conference process is defined in the statement of work. Each test case is considered by the group and "scored" as a success or failure. This scoring is the official result used by the reliability engineer.
As part of the requirements phase, the reliability engineer develops a test strategy with the customer. The test strategy makes trade-offs between the needs of the reliability organization, which wants as much data as possible, and constraints such as cost, schedule and available resources. Test plans and procedures are developed for each reliability test, and results are documented.
Reliability testing is common in the Photonics industry. Examples of reliability tests of lasers are life test andburn-in. These tests consist of the highly accelerated aging, under controlled conditions, of a group of lasers. The data collected from these life tests are used to predict laser life expectancy under the intended operating characteristics.[30]
The criteria to test depend on the product or process being tested; mainly, there are five components that are most common:[31][32]
The product life span can be split into different periods for analysis. Useful life is the estimated economic life of the product, defined as the time for which it can be used before the cost of repair no longer justifies its continued use. Warranty life is the period within which the product should perform its function. Design life is set during the design of the product, when the designer takes into consideration the lifetimes of competitive products and customer expectations and ensures that the product does not result in customer dissatisfaction.[34][35]
Reliability test requirements can follow from any analysis for which the first estimate of failure probability, failure mode or effect needs to be justified. Evidence can be generated with some level of confidence by testing. With software-based systems, the probability is a mix of software and hardware-based failures. Testing reliability requirements is problematic for several reasons. A single test is in most cases insufficient to generate enough statistical data. Multiple tests or long-duration tests are usually very expensive. Some tests are simply impractical, and environmental conditions can be hard to predict over a system's life-cycle.
Reliability engineering is used to design a realistic and affordable test program that provides empirical evidence that the system meets its reliability requirements. Statisticalconfidence levelsare used to address some of these concerns. A certain parameter is expressed along with a corresponding confidence level: for example, anMTBFof 1000 hours at 90% confidence level. From this specification, the reliability engineer can, for example, design a test with explicit criteria for the number of hours and number of failures until the requirement is met or failed. Different sorts of tests are possible.
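As a sketch of how such a test might be sized, assume an exponential (constant failure rate) lifetime model; demonstrating a target MTBF m at confidence level C with zero failures allowed then requires a total accumulated test time of T = −m·ln(1 − C). The calculation below reuses the MTBF-of-1000-hours-at-90%-confidence example from the text; the zero-failure criterion is an assumption of this sketch.

```python
from math import log

def zero_failure_test_hours(mtbf_target_hours: float, confidence: float) -> float:
    """Total test time needed to demonstrate the target MTBF at the given confidence
    level with zero failures, assuming an exponential lifetime model: T = -m * ln(1 - C)."""
    return -mtbf_target_hours * log(1.0 - confidence)

print(round(zero_failure_test_hours(1000, 0.90)))  # ~2303 unit-hours with no failures allowed
```

Allowing failures during the test, or demanding a higher confidence level, pushes the required test time up quickly, which is one concrete way the trade-off between confidence, cost, and risk discussed below plays out.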
The combination of required reliability level and required confidence level greatly affects the development cost and the risk to both the customer and producer. Care is needed to select the best combination of requirements—e.g. cost-effectiveness. Reliability testing may be performed at various levels, such as component,subsystemandsystem. Also, many factors must be addressed during testing and operation, such as extreme temperature and humidity, shock, vibration, or other environmental factors (like loss of signal, cooling or power; or other catastrophes such as fire, floods, excessive heat, physical or security violations or other myriad forms of damage or degradation). For systems that must last many years, accelerated life tests may be needed.
A systematic approach to reliability testing is to first determine a reliability goal, and then perform tests that are linked to performance requirements in order to determine the reliability of the product.[36]In modern industries, a reliability verification test should clearly establish how it relates to the product's overall reliability performance and how individual tests impact warranty cost and customer satisfaction.[37]
The purpose ofaccelerated life testing (ALT test)is to induce field failure in the laboratory at a much faster rate by providing a harsher, but nonetheless representative, environment. In such a test, the product is expected to fail in the lab just as it would have failed in the field—but in much less time.
The main objective of an accelerated test is either of the following:
An accelerated testing program can be broken down into the following steps:
Common ways to determine a life stress relationship are:
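One widely used life-stress relationship for temperature-driven failure mechanisms is the Arrhenius model, in which the acceleration factor between use and stress temperatures is AF = exp[(Ea/k)·(1/T_use − 1/T_stress)]. The sketch below applies it with an assumed activation energy and assumed temperatures; treat it as an illustration of the idea, not as validated test-planning guidance.

```python
from math import exp

BOLTZMANN_EV_PER_K = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration_factor(t_use_c: float, t_stress_c: float,
                                  activation_energy_ev: float) -> float:
    """Acceleration factor of a temperature-accelerated test relative to use conditions."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return exp((activation_energy_ev / BOLTZMANN_EV_PER_K) * (1 / t_use_k - 1 / t_stress_k))

# Assumed 0.7 eV activation energy, 55 C use temperature, 125 C stress temperature:
print(round(arrhenius_acceleration_factor(55, 125, 0.7)))  # ~78: one test hour ~ 78 field hours
```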
Software reliability is a special aspect of reliability engineering. It focuses on foundations and techniques to make software more reliable, i.e., resilient to faults. System reliability, by definition, includes all parts of the system, including hardware, software, supporting infrastructure (including critical external interfaces), operators and procedures. Traditionally, reliability engineering focuses on critical hardware parts of the system. Since the widespread use of digitalintegrated circuittechnology, software has become an increasingly critical part of most electronics and, hence, nearly all present day systems. Therefore, software reliability has gained prominence within the field of system reliability.
There are significant differences, however, in how software and hardware behave.
Most hardware unreliability is the result of a component or material failure that results in the system not performing its intended function. Repairing or replacing the hardware component restores the system to its original operating state.
However, software does not fail in the same sense that hardware fails. Instead, software unreliability is the result of unanticipated results of software operations. Even relatively small software programs can have astronomically largecombinationsof inputs and states that are infeasible to exhaustively test. Restoring software to its original state only works until the same combination of inputs and states results in the same unintended result. Software reliability engineering must take this into account.
Despite this difference in the source of failure between software and hardware, severalsoftware reliability modelsbased on statistics have been proposed to quantify what we experience with software: the longer software is run, the higher the probability that it will eventually be used in an untested manner and exhibit a latent defect that results in a failure (Shooman1987), (Musa 2005), (Denney 2005).
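As a hedged sketch of what such a statistical model looks like, the snippet below evaluates the Goel-Okumoto NHPP software reliability growth model, in which the expected cumulative number of failures by execution time t is m(t) = a(1 − e^(−bt)) and the failure intensity is λ(t) = a·b·e^(−bt). In practice the parameters a and b are fitted to observed failure data; the values used here are invented.

```python
from math import exp

def expected_failures(t: float, a: float, b: float) -> float:
    """Goel-Okumoto NHPP: expected cumulative failures by execution time t."""
    return a * (1.0 - exp(-b * t))

def failure_intensity(t: float, a: float, b: float) -> float:
    """Instantaneous failure intensity lambda(t) = a * b * exp(-b * t)."""
    return a * b * exp(-b * t)

# Assumed fitted parameters: about 120 latent faults, decay rate 0.02 per test hour.
a, b = 120.0, 0.02
print(round(expected_failures(100, a, b), 1))  # ~103.8 failures expected by t = 100
print(round(failure_intensity(100, a, b), 3))  # ~0.325 failures per hour of remaining intensity
```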
As with hardware, software reliability depends on good requirements, design and implementation. Software reliability engineering relies heavily on a disciplinedsoftware engineeringprocess to anticipate and design againstunintended consequences. There is more overlap between softwarequality engineeringand software reliability engineering than between hardware quality and reliability. A good software development plan is a key aspect of the software reliability program. The software development plan describes the design and coding standards,peer reviews,unit tests,configuration management,software metricsand software models to be used during software development.
A common reliability metric is the number of software faults per line of code (FLOC), usually expressed as faults per thousand lines of code. This metric, along with software execution time, is key to most software reliability models and estimates. The theory is that the software reliability increases as the number of faults (or fault density) decreases. Establishing a direct connection between fault density and mean-time-between-failure is difficult, however, because of the way software faults are distributed in the code, their severity, and the probability of the combination of inputs necessary to encounter the fault. Nevertheless, fault density serves as a useful indicator for the reliability engineer. Other software metrics, such as complexity, are also used. This metric remains controversial, since changes in software development and verification practices can have dramatic impact on overall defect rates.
Software testingis an important aspect of software reliability. Even the best software development process results in some software faults that are nearly undetectable until tested. Software is tested at several levels, starting with individualunits, throughintegrationand full-upsystem testing. In all phases of testing, software faults are discovered, corrected, and re-tested. Reliability estimates are updated based on the fault density and other metrics. At a system level, mean-time-between-failure data can be collected and used to estimate reliability. Unlike hardware, performing exactly the same test on exactly the same software configuration does not provide increased statistical confidence. Instead, software reliability uses different metrics, such ascode coverage.
The Software Engineering Institute'scapability maturity modelis a common means of assessing the overall software development process for reliability and quality purposes.
Structural reliabilityor the reliability of structures is the application of reliability theory to the behavior ofstructures. It is used in both the design and maintenance of different types of structures including concrete and steel structures.[38][39]In structural reliability studies both loads and resistances are modeled as probabilistic variables. Using this approach the probability of failure of a structure is calculated.
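For the special case of independent, normally distributed resistance R and load S, the probability of failure has a simple first-order closed form via the reliability index β = (μ_R − μ_S) / sqrt(σ_R² + σ_S²), with P_f = Φ(−β). The sketch below uses assumed means and standard deviations and is only an illustration of the probabilistic treatment of loads and resistances, not a design calculation.

```python
from math import erf, sqrt

def standard_normal_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def failure_probability(mu_r: float, sigma_r: float, mu_s: float, sigma_s: float) -> float:
    """P(R < S) for independent normal resistance R and load S (first-order result)."""
    beta = (mu_r - mu_s) / sqrt(sigma_r ** 2 + sigma_s ** 2)
    return standard_normal_cdf(-beta)

# Assumed resistance ~ N(500, 50) and load ~ N(300, 40), e.g. in kN:
print(failure_probability(500, 50, 300, 40))  # ~9e-4 (reliability index beta ~ 3.1)
```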
Reliability for safety and reliability for availability are often closely related. Lost availability of an engineering system can cost money. If a subway system is unavailable the subway operator will lose money for each hour the system is down. The subway operator will lose more money if safety is compromised. The definition of reliability is tied to a probability of not encountering a failure. A failure can cause loss of safety, loss of availability or both. It is undesirable to lose safety or availability in a critical system.
Reliability engineering is concerned with overall minimisation of failures that could lead to financial losses for the responsible entity, whereassafety engineeringfocuses on minimising a specific set of failure types that in general could lead to loss of life, injury or damage to equipment.
Reliability hazards could transform into incidents leading to a loss of revenue for the company or the customer, for example due to direct and indirect costs associated with: loss of production due to system unavailability; unexpected high or low demands for spares; repair costs; man-hours; re-designs or interruptions to normal production.[40]
Safety engineering is often highly specific, relating only to certain tightly regulated industries, applications, or areas. It primarily focuses on system safety hazards that could lead to severe accidents including: loss of life; destruction of equipment; or environmental damage. As such, the related system functional reliability requirements are often extremely high. Although it deals with unwanted failures in the same sense as reliability engineering, it, however, has less of a focus on direct costs, and is not concerned with post-failure repair actions. Another difference is the level of impact of failures on society, leading to a tendency for strict control by governments or regulatory bodies (e.g. nuclear, aerospace, defense, rail and oil industries).[40]
Safety can be increased using a 2oo2 cross checked redundant system. Availability can be increased by using "1oo2" (1 out of 2) redundancy at a part or system level. If both redundant elements disagree the more permissive element will maximize availability. A 1oo2 system should never be relied on for safety. Fault-tolerant systems often rely on additional redundancy (e.g.2oo3 voting logic) where multiple redundant elements must agree on a potentially unsafe action before it is performed. This increases both availability and safety at a system level. This is common practice in aerospace systems that need continued availability and do not have afail-safemode. For example, aircraft may use triple modular redundancy forflight computersand control surfaces (including occasionally different modes of operation e.g. electrical/mechanical/hydraulic) as these need to always be operational, due to the fact that there are no "safe" default positions for control surfaces such as rudders or ailerons when the aircraft is flying.
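A minimal sketch comparing the voting arrangements mentioned above, assuming identical and independent channels that each work over the mission with probability R (an assumption chosen for illustration): a "k-out-of-n" system is treated here simply as working when at least k of its n channels work, which matches the availability-oriented view in the paragraph above.

```python
from math import comb

def k_out_of_n(k: int, n: int, r: float) -> float:
    """Probability that at least k of n identical, independent channels
    (each working with probability r) are working."""
    return sum(comb(n, i) * r ** i * (1 - r) ** (n - i) for i in range(k, n + 1))

r = 0.99  # assumed single-channel reliability over the mission
for name, (k, n) in {"1oo1": (1, 1), "1oo2": (1, 2), "2oo2": (2, 2), "2oo3": (2, 3)}.items():
    print(name, round(k_out_of_n(k, n, r), 6))
# 1oo1 0.99, 1oo2 0.9999, 2oo2 0.9801, 2oo3 0.999702
```

As the next paragraph explains, these mission-success figures say nothing about basic reliability: the 2oo3 arrangement still contains three channels that can each fail and generate maintenance cost.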
The above example of a 2oo3 fault tolerant system increases both mission reliability and safety. However, the "basic" reliability of the system will in this case still be lower than that of a non-redundant (1oo1) or 2oo2 system. Basic reliability engineering covers all failures, including those that might not result in system failure, but do result in additional cost due to: maintenance repair actions; logistics; spare parts etc. For example, the replacement or repair of one faulty channel in a 2oo3 voting system (the system is still operating, although with one failed channel it has actually become a 2oo2 system) contributes to basic unreliability but not mission unreliability. As an example, the failure of the tail-light of an aircraft will not prevent the plane from flying (and so is not considered a mission failure), but it does need to be remedied (with a related cost, and so does contribute to the basic unreliability levels).
When using fault tolerant (redundant) systems or systems that are equipped with protection functions, detectability of failures and avoidance of common cause failures becomes paramount for safe functioning and/or mission reliability.
Quality often focuses on manufacturing defects during the warranty phase. Reliability looks at the failure intensity over the whole life of a product or engineering system from commissioning to decommissioning.Six Sigmahas its roots in statistical control in quality of manufacturing. Reliability engineering is a specialty part of systems engineering. The systems engineering process is a discovery process that is often unlike a manufacturing process. A manufacturing process is often focused on repetitive activities that achieve high quality outputs with minimum cost and time.[41]
The everyday usage term "quality of a product" is loosely taken to mean its inherent degree of excellence. In industry, a more precise definition of quality as "conformance to requirements or specifications at the start of use" is used. Assuming the final product specification adequately captures the original requirements and customer/system needs, the quality level can be measured as the fraction of product units shipped that meet specifications.[42]Manufactured goods quality often focuses on the number of warranty claims during the warranty period.
Quality is a snapshot at the start of life through the warranty period and is related to the control of lower-level product specifications. This includes time-zero defects i.e. where manufacturing mistakes escaped final Quality Control. In theory the quality level might be described by a single fraction of defective products. Reliability, as a part of systems engineering, acts as more of an ongoing assessment of failure rates over many years. Theoretically, all items will fail over an infinite period of time.[43]Defects that appear over time are referred to as reliability fallout. To describe reliability fallout a probability model that describes the fraction fallout over time is needed. This is known as the life distribution model.[42]Some of these reliability issues may be due to inherent design issues, which may exist even though the product conforms to specifications. Even items that are produced perfectly will fail over time due to one or more failure mechanisms (e.g. due to human error or mechanical, electrical, and chemical factors). These reliability issues can also be influenced by acceptable levels of variation during initial production.
Quality and reliability are, therefore, related to manufacturing. Reliability is more targeted towards clients who are focused on failures throughout the whole life of the product such as the military, airlines or railroads. Items that do not conform to product specification will generally do worse in terms of reliability (having a lower MTTF), but this does not always have to be the case. The full mathematical quantification (in statistical models) of this combined relation is in general very difficult or even practically impossible. In cases where manufacturing variances can be effectively reduced, six sigma tools have been shown to be useful to find optimal process solutions which can increase quality and reliability. Six Sigma may also help to design products that are more robust to manufacturing induced failures and infant mortality defects in engineering systems and manufactured product.
In contrast with Six Sigma, reliability engineering solutions are generally found by focusing on reliability testing and system design. Solutions are found in different ways, such as by simplifying a system to allow more of the mechanisms of failure involved to be understood; performing detailed calculations of material stress levels allowing suitable safety factors to be determined; finding possible abnormal system load conditions and using this to increase robustness of a design to manufacturing variance related failure mechanisms. Furthermore, reliability engineering uses system-level solutions, like designing redundant and fault-tolerant systems for situations with high availability needs (seeReliability engineering vs Safety engineeringabove).
Note: A "defect" in six-sigma/quality literature is not the same as a "failure" (Field failure | e.g. fractured item) in reliability. A six-sigma/quality defect refers generally to non-conformance with a requirement (e.g. basic functionality or a key dimension). Items can, however, fail over time, even if these requirements are all fulfilled. Quality is generally not concerned with asking the crucial question "are the requirements actually correct?", whereas reliability is.
Once systems or parts are being produced, reliability engineering attempts to monitor, assess, and correct deficiencies. Monitoring includes electronic and visual surveillance of critical parameters identified during the fault tree analysis design stage. Data collection is highly dependent on the nature of the system. Most large organizations havequality controlgroups that collect failure data on vehicles, equipment and machinery. Consumer product failures are often tracked by the number of returns. For systems in dormant storage or on standby, it is necessary to establish a formal surveillance program to inspect and test random samples. Any changes to the system, such as field upgrades or recall repairs, require additional reliability testing to ensure the reliability of the modification. Since it is not possible to anticipate all the failure modes of a given system, especially ones with a human element, failures will occur. The reliability program also includes a systematicroot cause analysisthat identifies the causal relationships involved in the failure such that effective corrective actions may be implemented. When possible, system failures and corrective actions are reported to the reliability engineering organization.
Some of the most common methods to apply to a reliability operational assessment arefailure reporting, analysis, and corrective action systems(FRACAS). This systematic approach develops a reliability, safety, and logistics assessment based on failure/incident reporting, management, analysis, and corrective/preventive actions. Organizations today are adopting this method and utilizing commercial systems (such as Web-based FRACAS applications) that enable them to create a failure/incident data repository from which statistics can be derived to view accurate and genuine reliability, safety, and quality metrics.
It is extremely important for an organization to adopt a common FRACAS system for all end items. Also, it should allow test results to be captured in a practical way. Failure to adopt one easy-to-use (in terms of ease of data-entry for field engineers and repair shop engineers) and easy-to-maintain integrated system is likely to result in a failure of the FRACAS program itself.
Some of the common outputs from a FRACAS system include Field MTBF, MTTR, spares consumption, reliability growth, failure/incidents distribution by type, location, part no., serial no., and symptom.
The use of past data to predict the reliability of new comparable systems/items can be misleading as reliability is a function of the context of use and can be affected by small changes in design/manufacturing.
Systems of any significant complexity are developed by organizations of people, such as a commercialcompanyor agovernmentagency. The reliability engineering organization must be consistent with the company'sorganizational structure. For small, non-critical systems, reliability engineering may be informal. As complexity grows, the need arises for a formal reliability function. Because reliability is important to the customer, the customer may even specify certain aspects of the reliability organization.
There are several common types of reliability organizations. The project manager or chief engineer may employ one or more reliability engineers directly. In larger organizations, there is usually a product assurance orspecialty engineeringorganization, which may include reliability,maintainability,quality, safety,human factors,logistics, etc. In such case, the reliability engineer reports to the product assurance manager or specialty engineering manager.
In some cases, a company may wish to establish an independent reliability organization. This is desirable to ensure that system reliability work, which is often expensive and time-consuming, is not unduly slighted due to budget and schedule pressures. In such cases, the reliability engineer works for the project day-to-day but is actually employed and paid by a separate organization within the company.
Because reliability engineering is critical to early system design, it has become common for reliability engineers, however the organization is structured, to work as part of an integrated product team.
Some universities offer graduate degrees in reliability engineering. Other reliability professionals typically have a physics degree from a university or college program. Many engineering programs offer reliability courses, and some universities have entire reliability engineering programs. A reliability engineer may be registered as a professional engineer by a state or province, but this is not universally required, and not all reliability professionals are licensed engineers. Reliability engineers are required in systems where public safety is at risk. There are many professional conferences and industry training programs available for reliability engineers. Several professional organizations exist for reliability engineers, including the American Society for Quality Reliability Division (ASQ-RD),[44] the IEEE Reliability Society, the American Society for Quality (ASQ),[45] and the Society of Reliability Engineers (SRE).[46]
SAE JA1000/1 Reliability Program Standard Implementation Guide (http://standards.sae.org/ja1000/1_199903/)
In the UK, there are more up to date standards maintained under the sponsorship of UK MOD as Defence Standards. The relevant Standards include:
DEF STAN 00-40 Reliability and Maintainability (R&M)
DEF STAN 00-42 Reliability and Maintainability Assurance Guides
DEF STAN 00-43 Reliability and Maintainability Assurance Activity
DEF STAN 00-44 Reliability and Maintainability Data Collection and Classification
DEF STAN 00-45 Issue 1: Reliability Centered Maintenance
DEF STAN 00-49 Issue 1: Reliability and Maintainability MOD Guide to Terminology Definitions
These can be obtained fromDSTAN. There are also many commercial standards, produced by many organisations including the SAE, MSG, ARP, and IEE.
|
https://en.wikipedia.org/wiki/Reliability_theory
|
Accuracy and precisionare two measures ofobservational error.Accuracyis how close a given set ofmeasurements(observationsor readings) are to theirtrue value.Precisionis how close the measurements are to each other.
TheInternational Organization for Standardization(ISO) defines a related measure:[1]trueness, "the closeness of agreement between thearithmetic meanof a large number of test results and the true or accepted reference value."
While precision is a description of random errors (a measure of statistical variability), accuracy has two different definitions: more commonly, it is a description of only systematic errors (a measure of statistical bias of a given measure of central tendency, such as the mean), which ISO calls trueness; alternatively, ISO defines accuracy as describing a combination of both types of observational error (random and systematic), so that high accuracy requires both high precision and high trueness.
In simpler terms, given astatistical sampleor set of data points from repeated measurements of the same quantity, the sample or set can be said to beaccurateif theiraverageis close to the true value of the quantity being measured, while the set can be said to bepreciseif theirstandard deviationis relatively small.
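A minimal sketch of this distinction, with invented readings: accuracy is assessed from the offset of the sample mean relative to the true value, and precision from the sample standard deviation.

```python
from statistics import mean, stdev

true_value = 10.00                             # known reference value
readings = [10.02, 9.98, 10.05, 9.97, 10.03]   # repeated measurements (hypothetical)

bias = mean(readings) - true_value   # closeness of the average to the true value (accuracy/trueness)
spread = stdev(readings)             # closeness of the readings to each other (precision)

print(f"bias = {bias:+.3f}, standard deviation = {spread:.3f}")
```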
In the fields ofscienceandengineering, the accuracy of ameasurementsystem is the degree of closeness of measurements of aquantityto that quantity's truevalue.[3]The precision of a measurement system, related toreproducibilityandrepeatability, is the degree to which repeated measurements under unchanged conditions show the sameresults.[3][4]Although the two words precision and accuracy can besynonymousincolloquialuse, they are deliberately contrasted in the context of thescientific method.
The field ofstatistics, where the interpretation of measurements plays a central role, prefers to use the termsbiasandvariabilityinstead of accuracy and precision: bias is the amount of inaccuracy and variability is the amount of imprecision.
A measurement system can be accurate but not precise, precise but not accurate, neither, or both. For example, if an experiment contains asystematic error, then increasing thesample sizegenerally increases precision but does not improve accuracy. The result would be a consistent yet inaccurate string of results from the flawed experiment. Eliminating the systematic error improves accuracy but does not change precision.
A measurement system is consideredvalidif it is bothaccurateandprecise. Related terms includebias(non-randomor directed effects caused by a factor or factors unrelated to theindependent variable) anderror(random variability).
The terminology is also applied to indirect measurements—that is, values obtained by a computational procedure from observed data.
In addition to accuracy and precision, measurements may also have ameasurement resolution, which is the smallest change in the underlying physical quantity that produces a response in the measurement.
Innumerical analysis, accuracy is also the nearness of a calculation to the true value; while precision is the resolution of the representation, typically defined by the number of decimal or binary digits.
In military terms, accuracy refers primarily to the accuracy of fire (justesse de tir), the precision of fire expressed by the closeness of a grouping of shots at and around the centre of the target.[5]
A shift in the meaning of these terms appeared with the publication of the ISO 5725 series of standards in 1994, which is also reflected in the 2008 issue of the BIPMInternational Vocabulary of Metrology(VIM), items 2.13 and 2.14.[3]
According to ISO 5725-1,[1]the general term "accuracy" is used to describe the closeness of a measurement to the true value. When the term is applied to sets of measurements of the samemeasurand, it involves a component of random error and a component of systematic error. In this case trueness is the closeness of the mean of a set of measurement results to the actual (true) value, that is the systematic error, and precision is the closeness of agreement among a set of results, that is the random error.
ISO 5725-1 and VIM also avoid the use of the term "bias", previously specified in BS 5497-1,[6]because it has different connotations outside the fields of science and engineering, as in medicine and law.
In industrial instrumentation, accuracy is the measurement tolerance, or transmission of the instrument and defines the limits of the errors made when the instrument is used in normal operating conditions.[7]
Ideally a measurement device is both accurate and precise, with measurements all close to and tightly clustered around the true value. The accuracy and precision of a measurement process is usually established by repeatedly measuring sometraceablereferencestandard. Such standards are defined in theInternational System of Units(abbreviated SI from French:Système international d'unités) and maintained by nationalstandards organizationssuch as theNational Institute of Standards and Technologyin the United States.
This also applies when measurements are repeated and averaged. In that case, the termstandard erroris properly applied: the precision of the average is equal to the known standard deviation of the process divided by the square root of the number of measurements averaged. Further, thecentral limit theoremshows that theprobability distributionof the averaged measurements will be closer to a normal distribution than that of individual measurements.
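Written as a formula, with σ the known standard deviation of the measurement process and n the number of measurements averaged, the standard error of the mean is:

$\mathrm{SE} = \dfrac{\sigma}{\sqrt{n}}$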
With regard to accuracy we can distinguish the difference between the mean of the measurements and the reference value (the bias; establishing and correcting for bias is necessary for calibration), and the combined effect of bias and precision.
A common convention in science and engineering is to express accuracy and/or precision implicitly by means ofsignificant figures. Where not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. For instance, a recording of 843.6 m, or 843.0 m, or 800.0 m would imply a margin of 0.05 m (the last significant place is the tenths place), while a recording of 843 m would imply a margin of error of 0.5 m (the last significant digits are the units).
A reading of 8,000 m, with trailing zeros and no decimal point, is ambiguous; the trailing zeros may or may not be intended as significant figures. To avoid this ambiguity, the number could be represented in scientific notation: 8.0 × 103m indicates that the first zero is significant (hence a margin of 50 m) while 8.000 × 103m indicates that all three zeros are significant, giving a margin of 0.5 m. Similarly, one can use a multiple of the basic measurement unit: 8.0 km is equivalent to 8.0 × 103m. It indicates a margin of 0.05 km (50 m). However, reliance on this convention can lead tofalse precisionerrors when accepting data from sources that do not obey it. For example, a source reporting a number like 153,753 with precision +/- 5,000 looks like it has precision +/- 0.5. Under the convention it would have been rounded to 150,000.
Alternatively, in a scientific context, if it is desired to indicate the margin of error with more precision, one can use a notation such as 7.54398(23) × 10−10m, meaning a range of between 7.54375 and 7.54421 × 10−10m.
Precision includes repeatability (the variation arising when all efforts are made to keep conditions constant by using the same instrument and operator, and repeating measurements over a short time period) and reproducibility (the variation arising when using the same measurement process with different instruments and operators, and over longer time periods).
In engineering, precision is often taken as three times the standard deviation of the measurements taken, representing the range within which 99.73% of the measurements can be expected to fall.[8] For example, an ergonomist measuring the human body can be confident that 99.73% of their extracted measurements fall within ± 0.7 cm (if using the GRYPHON processing system) or ± 13 cm (if using unprocessed data).[9]
Accuracyis also used as a statistical measure of how well abinary classificationtest correctly identifies or excludes a condition. That is, the accuracy is the proportion of correct predictions (bothtrue positivesandtrue negatives) among the total number of cases examined.[10]As such, it compares estimates ofpre- and post-test probability. To make the context clear by the semantics, it is often referred to as the "Rand accuracy" or "Rand index".[11][12][13]It is a parameter of the test.
The formula for quantifying binary accuracy is:

$\text{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN}$

where TP = true positives, FP = false positives, TN = true negatives, and FN = false negatives.
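A minimal sketch of this calculation; the confusion-matrix counts below are invented for illustration.

```python
# Counts from a hypothetical binary classifier evaluation.
tp, tn, fp, fn = 45, 40, 5, 10

accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"accuracy = {accuracy:.2%}")   # 85.00%
```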
In this context, the concepts of trueness and precision as defined by ISO 5725-1 are not applicable. One reason is that there is not a single “true value” of a quantity, but rather two possible true values for every case, while accuracy is an average across all cases and therefore takes into account both values. However, the termprecisionis used in this context to mean a different metric originating from the field of information retrieval (see below).
When computing accuracy in multiclass classification, accuracy is simply the fraction of correct classifications:[14][15]

$\text{Accuracy} = \dfrac{\text{correct classifications}}{\text{all classifications}}$

This is usually expressed as a percentage. For example, if a classifier makes ten predictions and nine of them are correct, the accuracy is 90%.
Accuracy is sometimes also viewed as amicro metric, to underline that it tends to be greatly affected by the particular class prevalence in a dataset and the classifier's biases.[14]
Furthermore, it is also called top-1 accuracy to distinguish it from top-5 accuracy, common inconvolutional neural networkevaluation. To evaluate top-5 accuracy, the classifier must provide relative likelihoods for each class. When these are sorted, a classification is considered correct if the correct classification falls anywhere within the top 5 predictions made by the network. Top-5 accuracy was popularized by theImageNetchallenge. It is usually higher than top-1 accuracy, as any correct predictions in the 2nd through 5th positions will not improve the top-1 score, but do improve the top-5 score.
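The sketch below illustrates top-k evaluation under these rules, assuming the classifier returns a score per class; the class names and scores are hypothetical.

```python
def top_k_accuracy(score_lists, true_labels, k=5):
    """Fraction of samples whose true label appears among the k highest-scored classes."""
    hits = 0
    for scores, truth in zip(score_lists, true_labels):
        # scores: dict mapping class name -> predicted score/likelihood
        top_k = sorted(scores, key=scores.get, reverse=True)[:k]
        hits += truth in top_k
    return hits / len(true_labels)

scores = [{"cat": 0.40, "dog": 0.25, "fox": 0.15, "cow": 0.10, "owl": 0.06, "ant": 0.04}]
print(top_k_accuracy(scores, ["owl"], k=5))  # 1.0: "owl" is ranked 5th, within the top 5
print(top_k_accuracy(scores, ["ant"], k=5))  # 0.0: "ant" is ranked 6th, outside the top 5
```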
Inpsychometricsandpsychophysics, the termaccuracyis interchangeably used withvalidityandconstant error.Precisionis a synonym forreliabilityandvariable error. The validity of a measurement instrument or psychological test is established through experiment or correlation with behavior. Reliability is established with a variety of statistical techniques, classically through an internal consistency test likeCronbach's alphato ensure sets of related questions have related responses, and then comparison of those related question between reference and target population.[citation needed]
Inlogic simulation, a common mistake in evaluation of accurate models is to compare alogic simulation modelto atransistorcircuit simulation model. This is a comparison of differences in precision, not accuracy. Precision is measured with respect to detail and accuracy is measured with respect to reality.[16][17]
Information retrieval systems, such as databases and web search engines, are evaluated by many different metrics, some of which are derived from the confusion matrix, which divides results into true positives (documents correctly retrieved), true negatives (documents correctly not retrieved), false positives (documents incorrectly retrieved), and false negatives (documents incorrectly not retrieved). Commonly used metrics include the notions of precision and recall. In this context, precision is defined as the fraction of documents correctly retrieved compared to the documents retrieved (true positives divided by true positives plus false positives), using a set of ground truth relevant results selected by humans. Recall is defined as the fraction of documents correctly retrieved compared to the relevant documents (true positives divided by true positives plus false negatives). Less commonly, the metric of accuracy is used; it is defined as the fraction of documents correctly classified relative to all documents (true positives plus true negatives divided by true positives plus true negatives plus false positives plus false negatives).
None of these metrics take into account the ranking of results. Ranking is very important for web search engines because readers seldom go past the first page of results, and there are too many documents on the web to manually classify all of them as to whether they should be included or excluded from a given search. Adding a cutoff at a particular number of results takes ranking into account to some degree. The measureprecision at k, for example, is a measure of precision looking only at the top ten (k=10) search results. More sophisticated metrics, such asdiscounted cumulative gain, take into account each individual ranking, and are more commonly used where this is important.
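The sketch below computes these retrieval metrics for a single query, given a ranked result list and a human-judged set of relevant documents; the document identifiers are hypothetical.

```python
def precision_recall_at_k(ranked_results, relevant, k=None):
    """Precision and recall of the (optionally truncated) result list."""
    retrieved = ranked_results[:k] if k else ranked_results
    true_positives = sum(1 for doc in retrieved if doc in relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

ranked = ["d3", "d7", "d1", "d9", "d2"]   # engine output, best match first
relevant = {"d1", "d2", "d5"}             # ground-truth relevant documents

print(precision_recall_at_k(ranked, relevant))        # over all retrieved results
print(precision_recall_at_k(ranked, relevant, k=3))   # precision and recall at k = 3
```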
In cognitive systems, accuracy and precision is used to characterize and measure results of a cognitive process performed by biological or artificial entities where a cognitive process is a transformation of data, information, knowledge, or wisdom to a higher-valued form. (DIKW Pyramid) Sometimes, a cognitive process produces exactly the intended or desired output but sometimes produces output far from the intended or desired. Furthermore, repetitions of a cognitive process do not always produce the same output.Cognitive accuracy(CA) is the propensity of a cognitive process to produce the intended or desired output.Cognitive precision(CP) is the propensity of a cognitive process to produce the same output.[18][19][20]To measureaugmented cognitionin human/cog ensembles, where one or more humans work collaboratively with one or more cognitive systems (cogs), increases in cognitive accuracy and cognitive precision assist in measuring the degree ofcognitive augmentation.
|
https://en.wikipedia.org/wiki/Accuracy
|
ANOVA gaugerepeatabilityandreproducibilityis ameasurement systems analysistechnique that uses ananalysis of variance(ANOVA)random effects modelto assess a measurement system.
The evaluation of a measurement system isnotlimited togaugebut to all types ofmeasuring instruments,test methods, and other measurement systems.
ANOVA Gage R&R measures the amount of variability induced in measurements by the measurement system itself, and compares it to the total variability observed to determine the viability of the measurement system. There are several factors affecting a measurement system, including the measuring instrument (the gauge) itself, the operators (appraisers) who use it, the test method, the specification, and the parts or specimens being measured.
There are two important aspects of a Gage R&R: repeatability, the variation in measurements obtained when one operator measures the same part repeatedly with the same gauge, and reproducibility, the variation obtained when different operators measure the same part with the same gauge.
It is important to understand the difference betweenaccuracy and precisionto understand the purpose of Gage R&R. Gage R&R addresses only the precision of a measurement system. It is common to examine theP/T ratiowhich is the ratio of the precision of a measurement system to the (total) tolerance of the manufacturing process of which it is a part. If the P/T ratio is low, the impact on product quality of variation due to the measurement system is small. If the P/T ratio is larger, it means the measurement system is "eating up" a large fraction of the tolerance, in that the parts that do not have sufficient tolerance may be measured as acceptable by the measurement system. Generally, a P/T ratio less than 0.1 indicates that the measurement system can reliably determine whether any given part meets the tolerance specification.[2]A P/T ratio greater than 0.3 suggests that unacceptable parts will be measured as acceptable (or vice versa) by the measurement system, making the system inappropriate for the process for which it is being used.[2]
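A minimal sketch of the P/T calculation, assuming the common convention of taking six standard deviations of the measurement system as its "precision" (some references use 5.15σ instead); the numbers are invented.

```python
sigma_measurement = 0.02             # std. dev. attributable to the measurement system (from a gauge R&R study)
lower_spec, upper_spec = 9.5, 10.5   # tolerance limits of the manufactured characteristic

pt_ratio = (6 * sigma_measurement) / (upper_spec - lower_spec)
print(f"P/T = {pt_ratio:.2f}")       # < 0.1 generally acceptable, > 0.3 generally unacceptable
```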
ANOVA Gage R&R is an important tool within theSix Sigmamethodology, and it is also a requirement for aproduction part approval process(PPAP) documentation package.[3]Examples of Gage R&R studies can be found in part 1 of Czitrom & Spagon.[4]
There is not a universal criterion of minimum sample requirements for the GRR matrix, it being a matter for the Quality Engineer to assess risks depending on how critical the measurement is and how costly they are. The "10×2×2" (ten parts, two operators, two repetitions) is an acceptable sampling for some studies, although it has very few degrees of freedom for the operator component. Several methods of determining thesample sizeand degree ofreplicationare used.
In one common crossed study, 10 parts might each be measured two times by two different operators. The ANOVA then allows the individual sources of variation in the measurement data to be identified; the part-to-part variation, the repeatability of the measurements, the variation due to different operators; and the variation due to part by operator interaction.
The calculation of variance components and standard deviations using ANOVA is equivalent to calculating variance and standard deviation for a single variable but it enables multiple sources of variation to be individually quantified which are simultaneously influencing a single data set. When calculating the variance for a data set the sum of the squared differences between each measurement and the mean is calculated and then divided by the degrees of freedom (n– 1). The sums of the squared differences are calculated for measurements of the same part, by the same operator, etc., as given by the below equations for the part (SSPart), the operator (SSOp), repeatability (SSRep) and total variation (SSTotal).
$SS_{\text{Part}} = n_{\text{Op}}\,n_{\text{Rep}}\sum_{i=1}^{n_{\text{Part}}}(\bar{x}_{i\cdot\cdot}-\bar{x})^{2}$

$SS_{\text{Op}} = n_{\text{Part}}\,n_{\text{Rep}}\sum_{j=1}^{n_{\text{Op}}}(\bar{x}_{\cdot j\cdot}-\bar{x})^{2}$

$SS_{\text{Rep}} = \sum_{i=1}^{n_{\text{Part}}}\sum_{j=1}^{n_{\text{Op}}}\sum_{k=1}^{n_{\text{Rep}}}(x_{ijk}-\bar{x}_{ij\cdot})^{2}$

$SS_{\text{Total}} = \sum_{i=1}^{n_{\text{Part}}}\sum_{j=1}^{n_{\text{Op}}}\sum_{k=1}^{n_{\text{Rep}}}(x_{ijk}-\bar{x})^{2}$

where $n_{\text{Op}}$ is the number of operators, $n_{\text{Rep}}$ is the number of replicate measurements of each part by each operator, $n_{\text{Part}}$ is the number of parts, $\bar{x}$ is the grand mean, $\bar{x}_{i\cdot\cdot}$ is the mean for each part, $\bar{x}_{\cdot j\cdot}$ is the mean for each operator, $x_{ijk}$ is each individual observation, and $\bar{x}_{ij\cdot}$ is the mean for each part-by-operator combination. When following the spreadsheet method of calculation, the $n$ terms are not explicitly required, since each squared difference is automatically repeated across the rows for the number of measurements meeting each condition.
The sum of the squared differences for part-by-operator interaction (SSPart·Op) is the residual variation, given by

$SS_{\text{Part}\cdot\text{Op}} = SS_{\text{Total}} - \left(SS_{\text{Part}} + SS_{\text{Op}} + SS_{\text{Rep}}\right)$
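The following sketch computes these sums of squares for a small crossed study, following the equations above. The data array is invented; indices are part i, operator j, replicate k.

```python
from statistics import mean

# data[i][j] is the list of replicate measurements of part i by operator j (hypothetical values).
data = [
    [[10.1, 10.2], [10.0, 10.1]],
    [[ 9.8,  9.9], [ 9.7,  9.9]],
    [[10.4, 10.5], [10.6, 10.4]],
]
n_part, n_op, n_rep = len(data), len(data[0]), len(data[0][0])

grand = mean(x for part in data for op in part for x in op)                  # grand mean
part_means = [mean(x for op in part for x in op) for part in data]          # mean per part
op_means = [mean(x for part in data for x in part[j]) for j in range(n_op)] # mean per operator
cell_means = [[mean(data[i][j]) for j in range(n_op)] for i in range(n_part)]

ss_total = sum((x - grand) ** 2 for part in data for op in part for x in op)
ss_part = n_op * n_rep * sum((m - grand) ** 2 for m in part_means)
ss_op = n_part * n_rep * sum((m - grand) ** 2 for m in op_means)
ss_rep = sum((x - cell_means[i][j]) ** 2
             for i in range(n_part) for j in range(n_op) for x in data[i][j])
ss_part_op = ss_total - (ss_part + ss_op + ss_rep)   # part-by-operator interaction (residual)
```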
|
https://en.wikipedia.org/wiki/ANOVA_gauge_R%26R
|
In logic,contingencyis the feature of a statement making it neither necessary nor impossible.[1][2]Contingency is a fundamental concept ofmodal logic. Modal logic concerns the manner, ormode, in which statements are true. Contingency is one of three basic modes alongside necessity and possibility. In modal logic, a contingent statement stands in the modal realm between what is necessary and what is impossible, never crossing into the territory of either status. Contingent and necessary statements form the complete set of possible statements. While this definition is widely accepted, the precise distinction (or lack thereof) between what is contingent and what is necessary has been challenged since antiquity.
In logic, a thing is considered to be possible when it is true in at least onepossible world. This means there is a way to imagine a world in which a statement is true and in which its truth does not contradict any other truth in that world. If it were impossible, there would be no way to conceive such a world: the truth of any impossible statement must contradict some other fact in that world. Contingency isnot impossible, so a contingent statement is therefore one which is true in at least one possible world. But contingency is alsonot necessary, so a contingent statement is false in at least one possible world.αWhile contingent statements are false in at least one possible world, possible statements are not also defined this way. Since necessary statements are a kind of possible statement (e.g. 2=2 is possible and necessary), then to define possible statements as 'false in some possible world' is to affect the definition of necessary statements. Since necessary statements are never false in any possible world, then some possible statements are never false in any possible world. So the idea that a statement might ever be false and yet remain an unrealizedpossibilityis entirely reserved to contingent statements alone. While all contingent statements are possible, not all possible statements are contingent.[3]The truth of a contingent statement is consistent with all other truths in a given world, but not necessarily so. They are always possible in every imaginable world but not always trueβin every imaginable world.
This distinction begins to reveal the ordinary English meaning of the word "contingency", in which the truth of one thing depends on the truth of another. On the one hand, the mathematical idea thata sum of two and two is fouris always possible and always true, which makes it necessary and therefore not contingent. This mathematical truth does not depend on any other truth, it is true by definition. On the other hand, since a contingent statement is always possible but not necessarily true, we can always conceive it to be false in a world in which it is also always logically achievable. In such a world, the contingent idea is never necessarily false since this would make it impossible in that world. But if it's false and yet still possible, this means the truths or facts in that world would have to change in order for the contingent truth to becomeactualized. When a statement's truth depends on this kind of change, it is contingent: possible but dependent on whatever facts are actually taking place in a given world.
Some philosophical distinctions are used to examine the line between contingent and necessary statements. These includeanalyticandepistemicdistinctions as well as the modal distinctions already noted. But there is not always agreement about exactly what these distinctions mean or how they are used. Philosophers such asJaakko Hintikkaand Arthur Pap consider the concept of analytic truths, for example (as distinct from synthetic ones) to be ambiguous since in practice they are defined or used in different ways.[4][5]And whileSaul Kripkestipulates that analytic statements are always necessary anda priori,[6]Edward Zaltaclaims that there are examples in which analytic statements are not necessary.[7]Kripke uses the example of a meter stick to support the idea that somea prioritruths are contingent.[8]
InTime and Modality,A. N. Priorargues that a cross-examination between the basic principles of modal logic and those ofquantificational logicseems to require that "whatever exists exists necessarily". He says this threatens the definition of contingent statements as non-necessary things when one generically intuits that some of what exists does so contingently, rather than necessarily.[9]Harry Deutsch acknowledged Prior's concern and outlines rudimentary notes about a "Logic for Contingent Beings."[10]Deutsch believes that the solution to Prior's concern begins by removing the assumption that logical statements are necessary. He believes the statement format, "If all objects are physical, and Φ exists, then Φ is physical," is logically true by form but is not necessarily true if Φrigidly designates, for example, a specific person who is not alive.[11]
In chapter 9 ofDe Interpretatione,Aristotleobserves an apparent paradox in the nature of contingency. He considers that while the truth values of contingent past- and present-tense statements can be expressed in pairs ofcontradictionsto represent their truth or falsity, this may not be the case of contingent future-tense statements. Aristotle asserts that if this were the case for future contingent statements as well, some of themwould be necessarily true, a fact which seems to contradict their contingency.[12]Aristotle's intention with these claims breaks down into two primary readings of his work. The first view, considered notably by Boethius,[13]supposes that Aristotle's intentions were to argue against this logical determinism only by claiming future contingent statements are neither true nor false.[14][15][16]This reading of Aristotle regards future contingents as simply disqualified from possessing any truth value at all until they areactualized. The opposing view, with an early version from Cicero,[17]is that Aristotle was not attempting to disqualify assertoric statements about future contingents from being either true or false, but that their truth value was indeterminant.[18][19][20]This latter reading takes future contingents to possess a truth value, one which is necessary but which is unknown. This view understands Aristotle to be saying that while some event's occurrence at a specified time was necessary, a fact of necessity which could not have been known to us, its occurrence at simply any time was not necessary.
Medieval thinkers studied logical contingency as a way to analyze the relationship between Early Modern conceptions of God and the modal status of the worldquaHis creation.[21]Early Modern writers studied contingency against the freedom of theChristian Trinitynot to create the universe or set in order a series of natural events.
In the 16th century,European Reformed Scholasticismsubscribed toJohn Duns Scotus'idea of synchronic contingency, which attempted to remove perceived contradictions between necessity, human freedom and the free will of God to create the world. In the 17th Century, Baruch Spinoza in hisEthicsstates that a thing is called contingent when "we do not know whether the essence does or does not involve a contradiction, or of which, knowing that it does not involve a contradiction, we are still in doubt concerning the existence, because the order of causes escape us".[22]Further, he states, "It is in the nature of reason to perceive things under a certain form of eternity as necessary and it is only through our imagination that we consider things, whether in respect to the future or the past, as contingent".[23]
The eighteenth-century philosopher Jonathan Edwards in his workA Careful and Strict Enquiry into the Modern Prevailing Notions of that Freedom of Will which is supposed to be Essential to Moral Agency, Virtue and Vice, Reward and Punishment, Praise and Blame(1754), reviewed the relationships between action, determinism, and personal culpability. Edwards begins his argument by establishing the ways in which necessary statements are made in logic. He identifies three ways necessary statements can be made for which only the third kind can legitimately be used to make necessary claims about the future. This third way of making necessary statements involves conditional or consequential necessity, such that if a contingent outcome could be caused by something that was necessary, then this contingent outcome could be considered necessary itself "by a necessity of consequence".[24]Prior interprets[25]Edwards by supposing that any necessary consequence of any already necessary truth would "also 'always have existed,' so that it is only by a necessary connexion (sic) with 'what has already come to pass' that what is still merely future can be necessary."[26]Further, inPast, Present, and Future, Prior attributes an argument against the incompatibility of God's foreknowledge or foreordaining with future contingency to Edward'sEnquiry.[27]
|
https://en.wikipedia.org/wiki/Contingency_(philosophy)
|
Corroborating evidence, also referred to as corroboration, is a type of evidence used in law.
Corroborating evidence tends to support a proposition that is already supported by some initial evidence, therefore confirming the proposition. For example, W, a witness, testifies that she saw X drive his automobile into a green car. Meanwhile, Y, another witness,corroboratesthe proposition by testifying that when he examined X's car, later that day, he noticed green paint on its fender. There can also be corroborating evidence related to a certain source, such as what makes an author think a certain way due to the evidence that was supplied by witnesses or objects.[1]
Another type of corroborating evidence comes from using theBaconian method, i.e., themethod of agreement,method of difference, andmethod of concomitant variations.
These methods are followed inexperimental design. They were codified byFrancis Bacon, and developed further byJohn Stuart Milland consist of controlling severalvariables, in turn, to establish which variables arecausallyconnected. These principles are widely used intuitively in various kinds of proofs, demonstrations, and investigations, in addition to being fundamental to experimental design.
In law, corroboration refers to the requirement in some jurisdictions, such as inScots law, that any evidence adduced be backed up by at least one other source (seeCorroboration in Scots law).
For example, a defendant may say, "It was like what he/she (a witness) said, but...". This is corroborative evidence from the defendant that the evidence the witness gave is true and correct.
Corroboration is not needed in certain instances. For example, there are certain statutory exceptions. In theEducation (Scotland) Act, it is only necessary to produce a register as proof of lack of attendance. No further evidence is needed.
Perjury
See section 13 of thePerjury Act 1911.
Speeding offences
See section 89(2) of theRoad Traffic Regulation Act 1984.
Sexual offences
See section 32 of theCriminal Justice and Public Order Act 1994.
Confessions by mentally handicapped persons
See section 77 of thePolice and Criminal Evidence Act 1984.
Evidence of children
See section 34 of theCriminal Justice Act 1988.
Evidence of accomplices
See section 32 of theCriminal Justice and Public Order Act 1994.
|
https://en.wikipedia.org/wiki/Corroboration
|
Reproducible builds, also known asdeterministic compilation, is a process ofcompilingsoftware which ensures the resultingbinary codecan bereproduced.Source codecompiled using deterministic compilation will always output the same binary.[1][2][3]
Reproducible builds can act as part of achain of trust;[1]the source code can be signed, and deterministic compilation can prove that the binary was compiled from trusted source code. Verified reproducible builds provide a strong countermeasure against attacks where binaries do not match their source code, e.g., because an attacker has inserted malicious code into a binary. This is a relevant attack; attackers sometimes attack binaries but not the source code, e.g., because they can only change the distributed binary or to evade detection since it is the source code that developers normally review and modify. In a survey of 17 experts, reproducible builds had a very high utility rating from 58.8% participants, but also a high-cost rating from 70.6%.[4]Various efforts are being made to modify software development tools to reduce these costs.
For the compilation process to be deterministic, the input to the compiler must be the same, regardless of the build environment used. This typically involves normalizingvariablesthat may change, such as order of input files,timestamps,locales, andpaths.
Additionally, the compilers must not introduce non-determinism themselves. This sometimes happens when using hash tables with a random hash seed value. It can also happen when using the address of variables because that varies fromaddress space layout randomization(ASLR).
Build systems, such asBazeland Gitian,[5]can be used to automate deterministic build processes.
TheGNU Projectused reproducible builds in the early 1990s. Changelogs from 1992 indicate the ongoing effort.[6]
One of the older[7]projects to promote reproducible builds is theBitcoinproject withGitian. Later, in 2013, theTor (anonymity network)project started using Gitian for their reproducible builds.[8]
From 2011 a reproducible Java build system was developed for a decentralized peer-to-peer FOSS project: DirectDemocracyP2P.[9]The concepts of the system's application to automated updates recommendation support was first presented in April 2013 at Decentralized Coordination.[10][11]A treatise focusing on the implementation details of the reproducible Java compilation tool itself was published in 2015.[12]
In July 2013, the Debian project started implementing reproducible builds across its entire package archive.[13][14] By July 2017, more than 90% of the packages in the repository had been shown to build reproducibly.[15]
In November 2018, the Reproducible Builds project joined theSoftware Freedom Conservancy.[16]
F-Droiduses reproducible builds to provide a guarantee that the distributed APKs use the claimedfree source code.[17]
TheTailsportable operating system uses reproducible builds and explains to others how to verify their distribution.[18]
NixOSclaims 100% reproducible build in June 2021 for their minimal ISO releases.[19]
As of May 2020[update],Arch Linuxis working on making all official packages reproducible.[20]
As of March 2025[update]Debian live images for bookworm are reproducible.[21]
According to the Reproducible Builds project, timestamps are "the biggest source of reproducibility issues. Many build tools record the current date and time... and most archive formats will happily record modification times on top of their own timestamps."[22] They recommend that "it is better to use a date that is relevant to the source code instead of the build: old software can always be built later" if it is reproducible. They identify several ways to modify build processes to do this, such as honoring the SOURCE_DATE_EPOCH environment variable so that embedded timestamps are taken from a source-derived date (for example, the latest commit or changelog entry) rather than from the build time, and post-processing build outputs to clamp or strip recorded modification times.
In some cases other changes must be made to make a build process reproducible. For example, some data structures do not guarantee a stable order in each execution. A typical solution is to modify the build process to specify a sorted output from those structures.[23]
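A minimal sketch of both kinds of fix, not taken from any particular build system: the archive below is written with a sorted, deterministic member order, and file metadata is clamped to the SOURCE_DATE_EPOCH value so the output does not depend on build time, file ownership, or directory traversal order. The paths and environment handling are illustrative.

```python
import os
import tarfile

# Source-derived timestamp (e.g., last commit date); defaults to epoch 0 if unset.
SOURCE_DATE_EPOCH = int(os.environ.get("SOURCE_DATE_EPOCH", "0"))

def normalize(info: tarfile.TarInfo) -> tarfile.TarInfo:
    # Clamp metadata that would otherwise vary between builds.
    info.mtime = SOURCE_DATE_EPOCH
    info.uid = info.gid = 0
    info.uname = info.gname = ""
    return info

def deterministic_tar(source_dir: str, output: str) -> None:
    # Uncompressed tar avoids gzip's own embedded timestamp.
    with tarfile.open(output, "w") as tar:
        for root, dirs, files in os.walk(source_dir):
            dirs.sort()                      # fix traversal order
            for name in sorted(files):       # fix member order
                path = os.path.join(root, name)
                arcname = os.path.relpath(path, source_dir)
                tar.add(path, arcname=arcname, recursive=False, filter=normalize)
```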
|
https://en.wikipedia.org/wiki/Reproducible_builds
|
Ahypothesis(pl.:hypotheses) is a proposedexplanationfor aphenomenon. Ascientifichypothesis must be based onobservationsand make atestableandreproduciblepredictionaboutreality, in a process beginning with an educated guess or thought.
If a hypothesis is repeatedly independently demonstrated byexperimentto be true, it becomes ascientific theory.[1][2]In colloquial usage, the words "hypothesis" and "theory" are often used interchangeably, but this is incorrect in the context of science.
Aworking hypothesisis a provisionally-accepted hypothesis used for the purpose of pursuing further progress inresearch. Working hypotheses are frequently discarded, and often proposed with knowledge (andwarning) that they are incomplete and thus false, with the intent of moving research in at least somewhat the right direction, especially when scientists are stuck on an issue andbrainstormingideas.
A different meaning of the termhypothesisis used informal logic, to denote theantecedentof aproposition; thus in the proposition "IfP, thenQ",Pdenotes the hypothesis (or antecedent);Qcan be called aconsequent.Pis theassumptionin a (possiblycounterfactual)What Ifquestion. The adjectivehypothetical, meaning "having the nature of a hypothesis", or "being assumed to exist as an immediate consequence of a hypothesis", can refer to any of these meanings of the term "hypothesis".
In its ancient usage,hypothesisreferred to a summary of theplotof aclassical drama. The English wordhypothesiscomes from theancient Greekwordὑπόθεσις(hypothesis), whose literal or etymological sense is "putting or placing under" and hence in extended use has many other meanings including "supposition".[1][3][4][5]
InPlato'sMeno(86e–87b),Socratesdissectsvirtuewith a method which he says is used by mathematicians,[6]that of "investigating from a hypothesis".[7]In this sense, 'hypothesis' refers to a clever idea or a short cut, or a convenient mathematical approach that simplifies cumbersomecalculations.[8]CardinalRobert Bellarminegave a famous example of this usage in the warning issued toGalileoin the early 17th century: that he must not treat the motion of the Earth as a reality, but merely as a hypothesis.[9]
In common usage in the 21st century, ahypothesisrefers to a provisional idea whose merit requires evaluation. For proper evaluation, the framer of a hypothesis needs to define specifics in operational terms. A hypothesis requires more work by the researcher in order to either confirm or disprove it. In due course, a confirmed hypothesis may become part of a theory or occasionally may grow to become a theory itself. Normally, scientific hypotheses have the form of amathematical model.[10]Sometimes, but not always, one can also formulate them asexistential statements, stating that some particular instance of the phenomenon under examination has some characteristic and causal explanations, which have the general form ofuniversal statements, stating that every instance of the phenomenon has a particular characteristic.[clarification needed]
In entrepreneurial setting, a hypothesis is used to formulate provisional ideas about the attributes of products or business models. The formulated hypothesis is then evaluated, where the hypothesis is proven to be either "true" or "false" through averifiability- orfalsifiability-orientedexperiment.[11][12]
Any useful hypothesis will enablepredictionsbyreasoning(includingdeductive reasoning). It might predict the outcome of anexperimentin alaboratorysetting or the observation of a phenomenon innature. The prediction may also invoke statistics and only talk about probabilities.Karl Popper, following others, has argued that a hypothesis must befalsifiable, and that one cannot regard a proposition or theory as scientific if it does not admit the possibility of being shown to be false. Other philosophers of science have rejected the criterion of falsifiability or supplemented it with other criteria, such as verifiability (e.g.,verificationism) or coherence (e.g.,confirmation holism). Thescientific methodinvolves experimentation to test the ability of some hypothesis to adequately answer the question under investigation. In contrast, unfettered observation is not as likely to raise unexplained issues or open questions in science, as would the formulation of acrucial experimentto test the hypothesis. Athought experimentmight also be used to test the hypothesis.
In framing a hypothesis, the investigator must not currently know the outcome of a test; the question must remain reasonably open to continuing investigation. Only in such cases does the experiment, test or study potentially increase the probability of showing the truth of a hypothesis.[13]: pp17, 49–50 If the researcher already knows the outcome, it counts as a "consequence", and the researcher should have already considered this while formulating the hypothesis. If one cannot assess the predictions by observation or by experience, the hypothesis needs to be tested by others providing observations. For example, a new technology or theory might make the necessary experiments feasible.
A trial solution to a problem is commonly referred to as a hypothesis—or, often, as an "educated guess"[14][2]—because it provides a suggested outcome based on the evidence. However, some scientists reject the term "educated guess" as incorrect. Experimenters may test and reject several hypotheses before solving the problem.
According to Schick and Vaughn,[15] researchers weighing up alternative hypotheses may take into consideration the hypothesis's testability (its compatibility with falsifiability), its parsimony (as in applying "Occam's razor" and avoiding the postulation of excessive numbers of entities), its scope (the apparent application of the hypothesis to multiple cases of phenomena), its fruitfulness (the prospect that it may explain further phenomena in the future), and its conservatism (the degree of "fit" with existing recognized knowledge systems).
Aworking hypothesisis a hypothesis that is provisionally accepted as a basis for further research[16]in the hope that a tenable theory will be produced, even if the hypothesis ultimately fails.[17]Like all hypotheses, a working hypothesis is constructed as a statement of expectations, which can be linked to theexploratory researchpurpose in empirical investigation. Working hypotheses are often used as aconceptual frameworkin qualitative research.[18][19]
The provisional nature of working hypotheses makes them useful as an organizing device in applied research. Here they act like a useful guide to address problems that are still in a formative phase.[20]
In recent years, philosophers of science have tried to integrate the various approaches to evaluating hypotheses, and the scientific method in general, to form a more complete system that integrates the individual concerns of each approach. Notably,Imre LakatosandPaul Feyerabend, Karl Popper's colleague and student, respectively, have produced novel attempts at such a synthesis.
Conceptsin Hempel'sdeductive-nomological modelplay a key role in the development and testing of hypotheses. Most formal hypotheses connect concepts by specifying the expected relationships betweenpropositions. When a set of hypotheses are grouped together, they become a type ofconceptual framework. When aconceptual frameworkis complex and incorporates causality or explanation, it is generally referred to as a theory. According to noted philosopher of scienceCarl Gustav Hempel,
Hempel provides a useful metaphor that describes the relationship between aconceptual frameworkand the framework as it is observed and perhaps tested (interpreted framework). "The whole system floats, as it were, above the plane of observation and is anchored to it by rules of interpretation. These might be viewed as strings which are not part of the network but link certain points of the latter with specific places in the plane of observation. By virtue of those interpretative connections, the network can function as a scientific theory."[21]: 36Hypotheses with concepts anchored in the plane of observation are ready to be tested. In "actual scientific practice the process of framing a theoretical structure and of interpreting it are not always sharply separated, since the intended interpretation usually guides the construction of the theoretician".[21]: 33It is, however, "possible and indeed desirable, for the purposes of logical clarification, to separate the two steps conceptually".[21]: 33
When a possiblecorrelationor similar relation between phenomena is investigated, such as whether a proposed remedy is effective in treating a disease, the hypothesis that a relation exists cannot be examined the same way one might examine a proposed new law of nature. In such an investigation, if the tested remedy shows no effect in a few cases, these do not necessarily falsify the hypothesis. Instead,statistical testsare used to determine how likely it is that the overall effect would be observed if the hypothesized relation does not exist. If that likelihood is sufficiently small (e.g., less than 1%), the existence of a relation may be assumed. Otherwise, any observed effect may be due to pure chance.
In statistical hypothesis testing, two hypotheses are compared. These are called thenull hypothesisand thealternative hypothesis. The null hypothesis is the hypothesis that states that there is no relation between the phenomena whose relation is under investigation, or at least not of the form given by the alternative hypothesis. The alternative hypothesis, as the name suggests, is the alternative to the null hypothesis: it states that thereissome kind of relation. The alternative hypothesis may take several forms, depending on the nature of the hypothesized relation; in particular, it can be two-sided (for example: there issomeeffect, in a yet unknown direction) or one-sided (the direction of the hypothesized relation, positive or negative, is fixed in advance).[22]
Conventional significance levels for testing hypotheses (acceptable probabilities of wrongly rejecting a true null hypothesis) are .10, .05, and .01. The significance level for deciding whether the null hypothesis is rejected and the alternative hypothesis is accepted must be determined in advance, before the observations are collected or inspected. If these criteria are determined later, when the data to be tested are already known, the test is invalid.[23]
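As an illustration of this procedure, the sketch below runs a simple two-sided permutation test of a null hypothesis of "no difference in means" and compares the resulting p-value against a significance level fixed in advance. The data and the 0.05 level are arbitrary choices for the example.

```python
import random
from statistics import mean

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test for a difference in means.

    Null hypothesis: both samples come from the same distribution, so the
    observed difference in means could arise by chance alone.
    """
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations   # estimated p-value

ALPHA = 0.05   # significance level chosen before looking at the data
p_value = permutation_test([5.1, 4.9, 5.3, 5.0], [5.6, 5.8, 5.4, 5.7])
reject_null = p_value < ALPHA
print(f"p = {p_value:.4f}, reject null: {reject_null}")
```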
The above procedure is actually dependent on the number of the participants (units orsample size) that are included in the study. For instance, to avoid having the sample size be too small to reject a null hypothesis, it is recommended that one specify a sufficient sample size from the beginning. It is advisable to define a small, medium and large effect size for each of a number of important statistical tests which are used to test the hypotheses.[24]
Mount HypothesisinAntarcticais named in appreciation of the role of hypotheses in scientific research.
Several hypotheses have been put forth in different subject areas.
hypothesis[...]—Working hypothesis, a hypothesis suggested or supported in some measure by features of observed facts, from which consequences may be deduced which can be tested by experiment and special observations, and which it is proposed to subject to an extended course of such investigation, with the hope that, even should the hypothesis thus be overthrown, such research may lead to a tenable theory.
|
https://en.wikipedia.org/wiki/Hypothesis
|
Pathological scienceis an area of research where "people are tricked into false results ... by subjective effects,wishful thinkingor threshold interactions."[1][2]The term was first used byIrving Langmuir,Nobel Prize-winningchemist, during a 1953colloquiumat theKnolls Research Laboratory.[3]Langmuir said a pathological science is an area of research that simply will not "go away"—long after it was given up on as "false" by the majority of scientists in the field. He called pathological science "the science of things that aren't so."[4][5]
In his 2002 book,Undead Science, sociology and anthropology Professor Bart Simon lists it among practices that are falsely perceived or presented to be science, "categories ... such as ...pseudoscience,amateur science, deviant or fraudulent science, bad science,junk science, pathological science,cargo cult science, andvoodoo science."[6]Examples of pathological science include theMartian canals,N-rays,polywater, andcold fusion. The theories and conclusions behind all of these examples are currently rejected or disregarded by the majority of scientists.
Pathological science, as defined by Langmuir, is a psychological process in which a scientist, originally conforming to the scientific method, unconsciously veers from that method and begins a pathological process of wishful data interpretation (see the observer-expectancy effect and cognitive bias). Some characteristics of pathological science are:
The maximum effect that is observed is produced by a causative agent of barely detectable intensity, and the magnitude of the effect is substantially independent of the intensity of the cause.
The effect is of a magnitude that remains close to the limit of detectability, or many measurements are necessary because of the very low statistical significance of the results.
There are claims of great accuracy.
Fantastic theories contrary to experience are suggested.
Criticisms are met by ad hoc excuses.
The ratio of supporters to critics rises and then falls gradually to oblivion.
Langmuir never intended the term to be rigorously defined; it was simply the title of his talk on some examples of "weird science". As with any attempt to define the scientific endeavor, examples and counterexamples can always be found.
Langmuir's discussion ofN-rayshas led to their traditional characterization as an instance of pathological science.[7]
In 1903,Prosper-René Blondlotwas working onX-rays(as were other physicists of the era) and noticed a new visible radiation that could penetratealuminium. He devised experiments in which a barely visible object was illuminated by these N-rays, and thus became "more visible". Blondlot claimed that N-rays were causing a small visual reaction, too small to be seen under normal illumination, but just visible when most normal light sources were removed and the target was just barely visible to begin with.
N-rays became the topic of some debate within the science community. After a time, American physicistRobert W. Wooddecided to visit Blondlot's lab, which had moved on to the physical characterization of N-rays. An experiment passed the rays from a 2 mm slit through an aluminumprism, from which he was measuring theindex of refractionto a precision that required measurements accurate to within 0.01 mm. Wood asked how it was possible that he could measure something to 0.01 mm from a 2 mm source, a physical impossibility in the propagation of any kind of wave. Blondlot replied, "That's one of the fascinating things about the N-rays. They don't follow the ordinary laws of science that you ordinarily think of." Wood then asked to see the experiments being run as usual, which took place in a room required to be very dark so the target was barely visible. Blondlot repeated his most recent experiments and got the same results—despite the fact that Wood had reached over and covertly sabotaged the N-ray apparatus by removing the prism.[1][8]
Langmuir offered additional examples of what he regarded as pathological science in his original speech:[9] the Davis–Barnes effect, mitogenetic rays, the Allison effect, extrasensory perception, and flying saucers.
A 1985 version[citation needed] of Langmuir's speech offered more examples, although at least one of these (polywater) occurred entirely after Langmuir's death in 1957.
Since Langmuir's original talk, a number of newer examples of what appear to be pathological science have appeared.Denis Rousseau, one of the main debunkers of polywater, gave an update of Langmuir in 1992, and he specifically cited as examples the cases of polywater,Martin Fleischmann'scold fusion andJacques Benveniste's"infinite dilution".[20]
Polywaterwas a form of water which appeared to have a much higherboiling pointand much lowerfreezing pointthan normal water. During the 1960s, a number of articles were published on the subject, and research on polywater was done around the world with mixed results. Eventually it was determined that some of the properties of polywater could be explained by biological contamination. When more rigorous cleaning ofglasswareandexperimental controlswere introduced, polywater could no longer be produced. It took several years for the concept of polywater to die in spite of the later negative results.
In 1989,Martin FleischmannandStanley Ponsannounced the discovery of a simple and cheap procedure to obtain room-temperaturenuclear fusion. Although there were multiple instances where successful results were reported, they lacked consistency and hence cold fusion came to be considered to be an example of pathological science.[21]Two panels convened by theUS Department of Energy, one in 1989 and a second in 2004, did not recommend a dedicated federal program for cold fusion research. A small number of researchers continue working in the field.
Jacques Benveniste was a Frenchimmunologistwho in 1988 published a paper in the prestigious scientific journalNaturedescribing the action of high dilutions ofanti-IgE antibodyon thedegranulationof humanbasophils, findings which seemed to support the concept ofhomeopathy. Biologists were puzzled by Benveniste's results, as only molecules of water, and no molecules of the original antibody, remained in these high dilutions. Benveniste concluded that the configuration of molecules in water was biologically active. Subsequent investigations have not supported Benveniste's findings.
|
https://en.wikipedia.org/wiki/Pathological_science
|
Pseudoscienceconsists of statements,beliefs, or practices that claim to be bothscientificand factual but are incompatible with thescientific method.[Note 1]Pseudoscience is often characterized by contradictory, exaggerated orunfalsifiable claims; reliance onconfirmation biasrather than rigorous attempts at refutation; lack of openness toevaluation by other experts; absence of systematic practices when developinghypotheses; and continued adherence long after the pseudoscientific hypotheses have been experimentally discredited.[4]It is not the same asjunk science.[7]
Thedemarcation between science and pseudosciencehasscientific,philosophical, andpoliticalimplications.[8]Philosophers debate the nature of science and the general criteria for drawing the line betweenscientific theoriesand pseudoscientific beliefs, but there is widespread agreement "thatcreationism,astrology,homeopathy,Kirlian photography,dowsing,ufology,ancient astronaut theory,Holocaust denialism,Velikovskian catastrophism, andclimate change denialismare pseudosciences."[9]There are implications forhealth care, the use ofexpert testimony, and weighingenvironmental policies.[9]Recent empirical research has shown that individuals who indulge in pseudoscientific beliefs generally show lower evidential criteria, meaning they often require significantly less evidence before coming to conclusions. This can be coined as a 'jump-to-conclusions' bias that can increase the spread of pseudoscientific beliefs.[10]Addressing pseudoscience is part ofscience educationand developing scientific literacy.[11][12]
Pseudoscience can have dangerous effects. For example, pseudoscientificanti-vaccine activismand promotion of homeopathic remedies as alternative disease treatments can result in people forgoing important medical treatments with demonstrable health benefits, leading to ill-health and deaths.[13][14][15]Furthermore, people who refuse legitimate medical treatments for contagious diseases may put others at risk. Pseudoscientific theories aboutracialand ethnic classifications have led toracismandgenocide.
The termpseudoscienceis often consideredpejorative, particularly by its purveyors, because it suggests something is being presented as science inaccurately or even deceptively. Therefore, practitioners and advocates of pseudoscience frequently dispute the characterization.[4][16]
The wordpseudoscienceis derived from the Greek rootpseudomeaning "false"[17][18]and the English wordscience, from the Latin wordscientia, meaning "knowledge". Although the term has been in use since at least the late 18th century (e.g., in 1796 byJames Pettit Andrewsin reference toalchemy[19][20]), the concept of pseudoscience as distinct from real or proper science seems to have become more widespread during the mid-19th century. Among the earliest uses of "pseudo-science" was in an 1844 article in theNorthern Journal of Medicine, issue 387:
That opposite kind of innovation which pronounces what has been recognized as a branch of science, to have been a pseudo-science, composed merely of so-called facts, connected together by misapprehensions under the disguise of principles.
An earlier use of the term was in 1843 by the French physiologistFrançois Magendie, that refers tophrenologyas "a pseudo-science of the present day".[3][21][22]During the 20th century, the word was used pejoratively to describe explanations of phenomena which were claimed to be scientific, but which were not in fact supported by reliable experimental evidence.
From time to time, however, the usage of the word occurred in a more formal, technical manner in response to a perceived threat to individual and institutional security in a social and cultural setting.[25]
Pseudoscience is differentiated from science because – although it usually claims to be science – pseudoscience does not adhere to scientific standards, such as thescientific method,falsifiability of claims, andMertonian norms.
A number of basic principles are accepted by scientists as standards for determining whether a body of knowledge, method, or practice is scientific. Experimental results should bereproducibleandverifiedby other researchers.[26]These principles are intended to ensure experiments can be reproduced measurably given the same conditions, allowing further investigation to determine whether ahypothesisortheoryrelated to givenphenomenaisvalidand reliable. Standards require the scientific method to be applied throughout, andbiasto be controlled for or eliminated throughrandomization, fair sampling procedures,blindingof studies, and other methods. All gathered data, including the experimental or environmental conditions, are expected to be documented for scrutiny and made available forpeer review, allowing further experiments or studies to be conducted to confirm or falsify results. Statistical quantification ofsignificance,confidence, anderror[27]are also important tools for the scientific method.
During the mid-20th century, the philosopherKarl Popperemphasized the criterion offalsifiabilityto distinguishsciencefromnon-science.[28]Statements,hypotheses, ortheorieshave falsifiability or refutability if there is the inherent possibility that they can be provenfalse, that is, if it is possible to conceive of an observation or an argument that negates them. Popper usedastrologyandpsychoanalysisas examples of pseudoscience and Einstein'stheory of relativityas an example of science. He subdivided non-science into philosophical, mathematical, mythological, religious and metaphysical formulations on one hand, and pseudoscientific formulations on the other.[29]
Another example which shows the distinct need for a claim to be falsifiable was stated in Carl Sagan's publication The Demon-Haunted World when he discusses an invisible dragon that he has in his garage. The point is made that there is no physical test to refute the claim of the presence of this dragon. Whatever test one thinks can be devised, there is a reason why it does not apply to the invisible dragon, so one can never prove that the initial claim is wrong. Sagan concludes: "Now, what's the difference between an invisible, incorporeal, floating dragon who spits heatless fire and no dragon at all?". He states that "your inability to invalidate my hypothesis is not at all the same thing as proving it true",[30] once again explaining that even if such a claim were true, it would be outside the realm of scientific inquiry.
In 1942, Robert K. Merton identified a set of five "norms" which characterize real science; if any of the norms were violated, Merton considered the enterprise to be non-science.
In 1978,Paul Thagardproposed that pseudoscience is primarily distinguishable from science when it is less progressive than alternative theories over a long period of time, and its proponents fail to acknowledge or address problems with the theory.[32]In 1983,Mario Bungesuggested the categories of "belief fields" and "research fields" to help distinguish between pseudoscience and science, where the former is primarily personal and subjective and the latter involves a certain systematic method.[33]The 2018 book aboutscientific skepticismbySteven Novella, et al.The Skeptics' Guide to the Universelists hostility to criticism as one of the major features of pseudoscience.[34]
Larry Laudanhas suggested pseudoscience has no scientific meaning and is mostly used to describe human emotions: "If we would stand up and be counted on the side of reason, we ought to drop terms like 'pseudo-science' and 'unscientific' from our vocabulary; they are just hollow phrases which do only emotive work for us".[35]Likewise,Richard McNallystates, "The term 'pseudoscience' has become little more than an inflammatory buzzword for quickly dismissing one's opponents in media sound-bites" and "When therapeutic entrepreneurs make claims on behalf of their interventions, we should not waste our time trying to determine whether their interventions qualify as pseudoscientific. Rather, we should ask them: How do you know that your intervention works? What is your evidence?"[36]
For philosophers Silvio Funtowicz and Jerome R. Ravetz "pseudo-science may be defined as one where the uncertainty of its inputs must be suppressed, lest they render its outputs totally indeterminate". The definition, in the book Uncertainty and Quality in Science for Policy,[37] alludes to the loss of craft skills in handling quantitative information, and to the bad practice of achieving precision in prediction (inference) only at the expense of ignoring uncertainty in the inputs used to formulate the prediction. This use of the term is common among practitioners of post-normal science. Understood in this way, pseudoscience can be fought using good practices to assess uncertainty in quantitative information, such as NUSAP and – in the case of mathematical modelling – sensitivity auditing.
The history of pseudoscience is the study of pseudoscientific theories over time. A pseudoscience is a set of ideas that presents itself as science, while it does not meet the criteria to be properly called such.[38][39]
Distinguishing between proper science and pseudoscience is sometimes difficult.[40]One proposal for demarcation between the two is the falsification criterion, attributed most notably to the philosopherKarl Popper.[41]In thehistory of scienceand thehistory of pseudoscienceit can be especially difficult to separate the two, because some sciences developed from pseudosciences. An example of this transformation is the science ofchemistry, which traces its origins to the pseudoscientific orpre-scientificstudy ofalchemy.
The vast diversity in pseudosciences further complicates the history of science. Some modern pseudosciences, such asastrologyandacupuncture, originated before the scientific era. Others developed as part of an ideology, such asLysenkoism, or as a response to perceived threats to an ideology. Examples of this ideological process arecreation scienceandintelligent design, which were developed in response to the scientific theory ofevolution.[42]
A topic, practice, or body of knowledge might reasonably be termed pseudoscientific when it is presented as consistent with the norms of scientific research, but it demonstrably fails to meet these norms.[43][44]
TheMinistry of AYUSHin the Government of India is purposed with developing education, research and propagation of indigenous alternative medicine systems in India. The ministry has faced significant criticism for funding systems that lackbiological plausibilityand are either untested or conclusively proven as ineffective. Quality of research has been poor, and drugs have been launched without any rigorous pharmacological studies and meaningfulclinical trials on Ayurvedaor other alternative healthcare systems.[69][70]There is no credible efficacy or scientific basis of any of these forms of treatment.[71]
In his bookThe Demon-Haunted World, Carl Sagan discusses thegovernment of Chinaand theChinese Communist Party's concern about Western pseudoscience developments and certain ancient Chinese practices in China. He sees pseudoscience occurring in the United States as part of a worldwide trend and suggests its causes, dangers, diagnosis and treatment may be universal.[72]
A large percentage of the United States population lacks scientific literacy, not adequately understanding scientific principles andmethod.[Note 6][Note 7][75][Note 8]In theJournal of College Science Teaching, Art Hobson writes, "Pseudoscientific beliefs are surprisingly widespread in our culture even among public school science teachers and newspaper editors, and are closely related to scientific illiteracy."[77]However, a 10,000-student study in the same journal concluded there was no strong correlation between science knowledge and belief in pseudoscience.[78]
During 2006, the U.S.National Science Foundation(NSF) issued an executive summary of a paper on science and engineering which briefly discussed the prevalence of pseudoscience in modern times. It said, "belief in pseudoscience is widespread" and, referencing aGallup Poll,[79][80]stated that belief in the 10 commonly believed examples of paranormal phenomena listed in the poll were "pseudoscientific beliefs".[81]The items were "extrasensory perception (ESP), thathouses can be haunted,ghosts,telepathy,clairvoyance, astrology, that people canmentally communicate with the dead,witches,reincarnation, andchannelling".[81]Such beliefs in pseudoscience represent a lack of knowledge of how science works. Thescientific communitymay attempt to communicate information about science out of concern for the public's susceptibility to unproven claims.[81]The NSF stated that pseudoscientific beliefs in the U.S. became more widespread during the 1990s, peaked about 2001, and then decreased slightly since with pseudoscientific beliefs remaining common. According to the NSF report, there is a lack of knowledge of pseudoscientific issues in society and pseudoscientific practices are commonly followed.[82]Surveys indicate about a third of adult Americans consider astrology to be scientific.[83][84][85]
In Russia, in the late 20th and early 21st century, significant budgetary funds were spent on programs for the experimental study of "torsion fields",[86]the extraction of energy from granite,[87]the study of "cold nuclear fusion", andastrologicalandextrasensory"research" by theMinistry of Defense, theMinistry of Emergency Situations, theMinistry of Internal Affairs, and theState Duma[86](seeMilitary Unit 10003). In 2006, Deputy Chairman of theSecurity Council of the Russian FederationNikolai Spasskypublished an article inRossiyskaya Gazeta, where among the priority areas for the development of theRussian energy sector, the task of extractingenergy from a vacuumwas in the first place.[88]TheClean Waterproject was adopted as aUnited Russiaparty project; in the version submitted to the government, the program budget for 2010–2017 exceeded $14 billion.[89][88]
There have been many connections between pseudoscientific writers and researchers and their anti-semitic, racist andneo-Nazibackgrounds. They often use pseudoscience to reinforce their beliefs. One of the most predominant pseudoscientific writers isFrank Collin, a self-proclaimed Nazi who goes by Frank Joseph in his writings.[90]The majority of his works include the topics ofAtlantis, extraterrestrial encounters, andLemuriaas well as other ancient civilizations, often withwhite supremacistundertones. For example, he posited that European peoples migrated to North America beforeColumbus, and that all Native American civilizations were initiated by descendants ofwhite people.[91]
The Alt-Right's use of pseudoscience as a basis for its ideologies is not a new issue. The entire foundation of anti-semitism is based on pseudoscience, or scientific racism. In an article from Newsweek, Sander Gilman describes the pseudoscience community's anti-semitic views: "Jews as they appear in this world of pseudoscience are an invented group of ill, stupid or stupidly smart people who use science to their own nefarious ends. Other groups, too, are painted similarly in 'race science', as it used to call itself: African-Americans, the Irish, the Chinese and, well, any and all groups that you want to prove inferior to yourself".[92] Neo-Nazis and white supremacists often try to support their claims with studies that "prove" that their claims are more than just harmful stereotypes. For example, Bret Stephens published a column in The New York Times in which he claimed that Ashkenazi Jews had the highest IQ among any ethnic group.[93] However, the scientific methodology and conclusions of the article Stephens cited have been called into question repeatedly since its publication. At least one of that study's authors has been identified by the Southern Poverty Law Center as a white nationalist.[94]
The journalNaturehas published a number of editorials in the last few years warning researchers about extremists looking to abuse their work, particularly population geneticists and those working with ancientDNA. One article inNature, titled "Racism in Science: The Taint That Lingers" notes that early-twentieth-centuryeugenicpseudoscience has been used to influence public policy, such as theImmigration Act of 1924in the United States, which sought to prevent immigration from Asia and parts of Europe.[95]
In a 1981 report, Singer and Benassi wrote that pseudoscientific beliefs have their origins in at least four sources.[96]
A 1990 study by Eve and Dunn supported the findings of Singer and Benassi and found pseudoscientific belief being promoted by high school life science and biology teachers.[97]
The psychology of pseudoscience attempts to explore and analyze pseudoscientific thinking by clarifying what distinguishes scientific from pseudoscientific thought. The human proclivity for seeking confirmation rather than refutation (confirmation bias),[98] the tendency to hold comforting beliefs, and the tendency to overgeneralize have been proposed as reasons for pseudoscientific thinking. According to Beyerstein, humans are prone to associations based on resemblances only, and often prone to misattribution in cause-effect thinking.[99]
Michael Shermer's theory of belief-dependent realism is driven by the belief that the brain is essentially a "belief engine" which scans data perceived by the senses and looks for patterns and meaning. There is also the tendency for the brain to createcognitive biases, as a result of inferences and assumptions made without logic and based on instinct – usually resulting in patterns in cognition. These tendencies ofpatternicityand agenticity are also driven "by a meta-bias called thebias blind spot, or the tendency to recognize the power of cognitive biases in other people but to be blind to their influence on our own beliefs".[100]Lindeman states that social motives (i.e., "to comprehend self and the world, to have a sense of control over outcomes, to belong, to find the world benevolent and to maintain one's self-esteem") are often "more easily" fulfilled by pseudoscience than by scientific information. Furthermore, pseudoscientific explanations are generally not analyzed rationally, but instead experientially. Operating within a different set of rules compared to rational thinking, experiential thinking regards an explanation as valid if the explanation is "personally functional, satisfying and sufficient", offering a description of the world that may be more personal than can be provided by science and reducing the amount of potential work involved in understanding complex events and outcomes.[101]
Anyone searching for psychological help that is based in science should seek a licensed therapist whose techniques are not based in pseudoscience. Hupp and Santa Maria provide a complete explanation of what that person should look for.[102]
There is a trend to believe in pseudoscience more thanscientific evidence.[103]Some people believe the prevalence of pseudoscientific beliefs is due to widespreadscientific illiteracy.[104]Individuals lacking scientific literacy are more susceptible to wishful thinking, since they are likely to turn to immediate gratification powered by System 1, our default operating system which requires little to no effort. This system encourages one toaccept the conclusions they believe, and reject the ones they do not. Further analysis of complex pseudoscientific phenomena require System 2, which follows rules, compares objects along multiple dimensions and weighs options. These two systems have several other differences which are further discussed in thedual-process theory.[105]The scientific and secular systems of morality and meaning are generally unsatisfying to most people. Humans are, by nature, a forward-minded species pursuing greater avenues of happiness and satisfaction, but we are all too frequently willing to grasp at unrealistic promises of a better life.[106]
Psychology has much to say about pseudoscientific thinking, because it is the illusory perception of causality and effectiveness held by numerous individuals that needs to be illuminated. Research suggests that illusory thinking occurs in most people when they are exposed to certain circumstances, such as reading a book, an advertisement or the testimony of others; such illusions are the basis of pseudoscientific beliefs. It is assumed that illusions are not unusual and that, given the right conditions, they can occur systematically even in normal emotional situations. One of the things pseudoscience believers complain about most is that academic science usually treats them as fools. Minimizing these illusions in the real world is not simple.[107] To this aim, designing evidence-based educational programs can be effective in helping people identify and reduce their own illusions.[107]
Philosophers classify types of knowledge. In English, the word science is used to indicate specifically the natural sciences and related fields, which are called the social sciences.[108] Different philosophers of science may disagree on the exact limits – for example, is mathematics a formal science that is closer to the empirical ones, or is pure mathematics closer to the philosophical study of logic and therefore not a science?[109] – but all agree that ideas which are not scientific are non-scientific. The large category of non-science includes all matters outside the natural and social sciences, such as the study of history, metaphysics, religion, art, and the humanities.[108] Dividing the category again, unscientific claims are a subset of the large category of non-scientific claims. This category specifically includes all matters that are directly opposed to good science.[108] Un-science includes both "bad science" (such as an error made in a good-faith attempt at learning something about the natural world) and pseudoscience. Thus pseudoscience is a subset of un-science, and un-science, in turn, is a subset of non-science.[108]
Science is also distinguishable from revelation, theology, or spirituality in that it offers insight into the physical world obtained by empirical research and testing.[110][111]The most notable disputes concern theevolutionof living organisms, the idea of common descent, the geologic history of the Earth, the formation of theSolar System, and the origin of the universe.[112]Systems of belief that derive from divine or inspired knowledge are not considered pseudoscience if they do not claim either to be scientific or to overturn well-established science. Moreover, some specific religious claims, such asthe power of intercessory prayer to heal the sick, although they may be based on untestable beliefs, can be tested by the scientific method.
Some statements and common beliefs ofpopular sciencemay not meet the criteria of science. "Pop" science may blur the divide between science and pseudoscience among the general public, and may also involvescience fiction.[113]Indeed, pop science is disseminated to, and can also easily emanate from, persons not accountable to scientific methodology and expert peer review.
If claims of a given field can be tested experimentally and standards are upheld, it is not pseudoscience, regardless of how odd, astonishing, or counterintuitive those claims are. If claims made are inconsistent with existing experimental results or established theory, but the method is sound, caution should be used, since science consists of testing hypotheses which may turn out to be false. In such a case, the work may be better described as ideas that are "not yet generally accepted".Protoscienceis a term sometimes used to describe a hypothesis that has not yet been tested adequately by the scientific method, but which is otherwise consistent with existing science or which, where inconsistent, offers reasonable account of the inconsistency. It may also describe the transition from a body of practical knowledge into a scientific field.[28]
Karl Popper stated it is insufficient to distinguish science from pseudoscience, or frommetaphysics(such as the philosophical question of whatexistencemeans), by the criterion of rigorous adherence to theempirical method, which is essentially inductive, based on observation or experimentation.[46]He proposed a method to distinguish between genuine empirical, nonempirical or even pseudoempirical methods. The latter case was exemplified by astrology, which appeals to observation and experimentation. While it hadempirical evidencebased on observation, onhoroscopesandbiographies, it crucially failed to use acceptable scientific standards.[46]Popper proposed falsifiability as an important criterion in distinguishing science from pseudoscience.
To demonstrate this point, Popper[46] gave two cases of human behavior and typical explanations from Sigmund Freud and Alfred Adler's theories: "that of a man who pushes a child into the water with the intention of drowning it; and that of a man who sacrifices his life in an attempt to save the child."[46] From Freud's perspective, the first man would have suffered from psychological repression, probably originating from an Oedipus complex, whereas the second man had attained sublimation. From Adler's perspective, the first and second man each suffered from feelings of inferiority and had to prove himself, which drove him to commit the crime or, in the second case, drove him to rescue the child. Popper was not able to find any counterexamples of human behavior in which the behavior could not be explained in the terms of Adler's or Freud's theory. Popper argued[46] that the fact that the observation always fitted or confirmed the theory, rather than being its strength, was actually its weakness. In contrast, Popper[46] gave the example of Einstein's gravitational theory, which predicted "light must be attracted by heavy bodies (such as the Sun), precisely as material bodies were attracted."[46] Following from this, stars closer to the Sun would appear to have moved a small distance away from the Sun, and away from each other. This prediction was particularly striking to Popper because it involved considerable risk. The brightness of the Sun prevented this effect from being observed under normal circumstances, so photographs had to be taken during an eclipse and compared to photographs taken at night. Popper states, "If observation shows that the predicted effect is definitely absent, then the theory is simply refuted."[46] Popper summed up his criterion for the scientific status of a theory as depending on its falsifiability, refutability, or testability.
Paul R. Thagardused astrology as a case study to distinguish science from pseudoscience and proposed principles and criteria to delineate them.[114]First, astrology has not progressed in that it has not been updated nor added any explanatory power sincePtolemy. Second, it has ignored outstanding problems such as theprecession of equinoxesin astronomy. Third, alternative theories ofpersonalityand behavior have grown progressively to encompass explanations of phenomena which astrology statically attributes to heavenly forces. Fourth, astrologers have remained uninterested in furthering the theory to deal with outstanding problems or in critically evaluating the theory in relation to other theories. Thagard intended this criterion to be extended to areas other than astrology. He believed it would delineate as pseudoscientific such practices aswitchcraftandpyramidology, while leavingphysics,chemistry,astronomy,geoscience,biology, andarchaeologyin the realm of science.[114]
In thephilosophyand history of science,Imre Lakatosstresses the social and political importance of the demarcation problem, the normative methodological problem of distinguishing between science and pseudoscience. His distinctive historical analysis of scientific methodology based on research programmes suggests: "scientists regard the successful theoretical prediction of stunning novel facts – such as the return of Halley's comet or the gravitational bending of light rays – as what demarcates good scientific theories from pseudo-scientific and degenerate theories, and in spite of all scientific theories being forever confronted by 'an ocean of counterexamples'".[8]Lakatos offers a "novelfallibilistanalysis of the development of Newton's celestial dynamics, [his] favourite historical example of his methodology" and argues in light of this historical turn, that his account answers for certain inadequacies in those ofKarl Popperand Thomas Kuhn.[8]"Nonetheless, Lakatos did recognize the force of Kuhn's historical criticism of Popper – all important theories have been surrounded by an 'ocean of anomalies', which on a falsificationist view would require the rejection of the theory outright...Lakatos sought to reconcile therationalismof Popperian falsificationism with what seemed to be its own refutation by history".[115]
Many philosophers have tried to solve the problem of demarcation in the following terms: a statement constitutes knowledge if sufficiently many people believe it sufficiently strongly. But the history of thought shows us that many people were totally committed to absurd beliefs. If the strengths of beliefs were a hallmark of knowledge, we should have to rank some tales about demons, angels, devils, and of heaven and hell as knowledge. Scientists, on the other hand, are very sceptical even of their best theories. Newton's is the most powerful theory science has yet produced, but Newton himself never believed that bodies attract each other at a distance. So no degree of commitment to beliefs makes them knowledge. Indeed, the hallmark of scientific behaviour is a certain scepticism even towards one's most cherished theories. Blind commitment to a theory is not an intellectual virtue: it is an intellectual crime. Thus a statement may be pseudoscientific even if it is eminently 'plausible' and everybody believes in it, and it may be scientifically valuable even if it is unbelievable and nobody believes in it. A theory may even be of supreme scientific value even if no one understands it, let alone believes in it.[8]
The boundary between science and pseudoscience is disputed and difficult to determine analytically, even after more than a century of study by philosophers of science andscientists, and despite some basic agreements on the fundamentals of the scientific method.[43][116][117]The concept of pseudoscience rests on an understanding that the scientific method has been misrepresented or misapplied with respect to a given theory, but many philosophers of science maintain that different kinds of methods are held as appropriate across different fields and different eras of human history. According to Lakatos, the typical descriptive unit of great scientific achievements is not an isolated hypothesis but "a powerful problem-solving machinery, which, with the help of sophisticated mathematical techniques, digests anomalies and even turns them into positive evidence".[8]
To Popper, pseudoscience uses induction to generate theories, and only performs experiments to seek to verify them. To Popper, falsifiability is what determines the scientific status of a theory. Taking a historical approach, Kuhn observed that scientists did not follow Popper's rule, and might ignore falsifying data, unless overwhelming. To Kuhn, puzzle-solving within a paradigm is science. Lakatos attempted to resolve this debate, by suggesting history shows that science occurs in research programmes, competing according to how progressive they are. The leading idea of a programme could evolve, driven by its heuristic to make predictions that can be supported by evidence. Feyerabend claimed that Lakatos was selective in his examples, and the whole history of science shows there is no universal rule of scientific method, and imposing one on the scientific community impedes progress.[118]
Laudan maintained that the demarcation between science and non-science was a pseudo-problem, preferring to focus on the more general distinction between reliable and unreliable knowledge.[119]
[Feyerabend] regards Lakatos's view as being closet anarchism disguised as methodological rationalism. Feyerabend's claim was not that standard methodological rules should never be obeyed, but rather that sometimes progress is made by abandoning them. In the absence of a generally accepted rule, there is a need for alternative methods of persuasion. According to Feyerabend, Galileo employed stylistic and rhetorical techniques to convince his reader, while he also wrote in Italian rather than Latin and directed his arguments to those already temperamentally inclined to accept them.[115]
The demarcation problem between science and pseudoscience brings up debate in the realms of science,philosophyandpolitics.Imre Lakatos, for instance, points out that theCommunist Party of the Soviet Unionat one point declared thatMendelian geneticswas pseudoscientific and had its advocates, including well-established scientists such asNikolai Vavilov, sent to aGulagand that the "liberal Establishment of the West" denies freedom of speech to topics it regards as pseudoscience, particularly where they run up against social mores.[8]
Something becomes pseudoscientific when science cannot be separated fromideology, scientists misrepresent scientific findings to promote or draw attention for publicity, when politicians, journalists and a nation's intellectual elitedistort the facts of science for short-term political gain, or when powerful individuals of the public conflate causation and cofactors by clever wordplay. These ideas reduce the authority, value, integrity and independence of science insociety.[120]
Distinguishing science from pseudoscience has practical implications in the case ofhealth care, expert testimony,environmental policies, andscience education. Treatments with a patina of scientific authority which have not actually been subjected to actual scientific testing may be ineffective, expensive and dangerous to patients and confuse health providers, insurers, government decision makers and the public as to what treatments are appropriate. Claims advanced by pseudoscience may result in government officials and educators making bad decisions in selecting curricula.[Note 9]
The extent to which students acquire a range of social andcognitivethinking skills related to the proper usage of science and technology determines whether they are scientifically literate. Education in the sciences encounters new dimensions with the changing landscape ofscience and technology, a fast-changing culture and a knowledge-driven era. A reinvention of the school science curriculum is one that shapes students to contend with its changing influence on human welfare. Scientific literacy, which allows a person to distinguish science from pseudosciences such as astrology, is among the attributes that enable students to adapt to the changing world. Its characteristics are embedded in a curriculum where students are engaged in resolving problems, conducting investigations, or developing projects.[11]
Alan J. Friedmanmentions why most scientists avoid educating about pseudoscience, including that paying undue attention to pseudoscience could dignify it.[121]
On the other hand,Robert L. Parkemphasizes how pseudoscience can be a threat to society and considers that scientists have a responsibility to teach how to distinguish science from pseudoscience.[122]
Pseudosciences such as homeopathy, even if generally benign, are used by charlatans. This poses a serious issue because it enables incompetent practitioners to administer health care. True-believing zealots may pose a more serious threat than typical con men because of their devotion to homeopathy's ideology. Irrational health care is not harmless, and it is careless to create patient confidence in pseudomedicine.[123]
On 8 December 2016, journalist Michael V. LeVine pointed out the dangers posed by theNatural Newswebsite: "Snake-oil salesmen have pushed false cures since the dawn of medicine, and now websites likeNatural Newsflood social media with dangerous anti-pharmaceutical, anti-vaccination and anti-GMO pseudoscience that puts millions at risk of contracting preventable illnesses."[124]
Theanti-vaccine movementhas persuaded large numbers of parents not to vaccinate their children, citing pseudoscientific research that linkschildhood vaccines with the onset of autism.[125]These include the study byAndrew Wakefield, which claimed that a combination ofgastrointestinal diseaseanddevelopmental regression, which are often seen in children withASD, occurred within two weeks of receiving vaccines.[126][127]The study was eventually retracted by its publisher, and Wakefield was stripped of his license to practice medicine.[125]
Alkaline water is water that has a pH higher than 7, purported to provide numerous health benefits, though with no empirical backing. A practitioner known as Robert O. Young, who promoted alkaline water and an "alkaline diet", was sent to jail for three years in 2017 for practicing medicine without a license.[128]
|
https://en.wikipedia.org/wiki/Pseudoscience
|
ReScience Cis a journal created in 2015 by Nicolas Rougier and Konrad Hinsen with the aim of publishing researchers' attempts to replicate computations made by other authors, using independently written,free and open-source software(FOSS), with an open process ofpeer review.[1]The journal states that requiring the replication software to be free and open-source ensures thereproducibilityof the original research.[3]
ReScience Cwas created in 2015 by Nicolas Rougier and Konrad Hinsen in the context of thereplication crisisof the early 2010s, in which concern about difficulty in replicating (different data or details of method) or reproducing (same data, same method) peer-reviewed, published research papers was widely discussed.[4]ReScience C's scope is computational research, with the motivation that journals rarely require the provision of source code, and when source code is provided, it is rarely checked against the results claimed in the research article.[5]
The scope ofReScience Cis mainly focussed on researchers' attempts to replicate computations made by other authors, using independently written,free and open-source software(FOSS).[1]Articles are submitted using the "issues" feature of agitrepository run byGitHub, together with other online archiving services, includingZenodoandSoftware Heritage.Peer reviewtakes place publicly in the same "issues" online format.[2]
In 2020, Nature reported on the results of ReScience C's "Ten Years' Reproducibility Challenge", in which scientists were asked to try reproducing the results from peer-reviewed articles that they had published at least ten years earlier, using the same data and software if possible, updated to a modern software environment and free licensing.[1] As of 24 August 2020, out of 35 researchers who had proposed to reproduce the results of 43 of their old articles, 28 reports had been written and 13 had been accepted after peer review and published, among which 11 documented successful reproductions.[1]
|
https://en.wikipedia.org/wiki/ReScience_C
|
In academic publishing, a retraction is a mechanism by which a published paper in an academic journal is flagged as being so seriously flawed that its results and conclusions can no longer be relied upon. Retracted articles are not removed from the published literature but are marked as retracted. In some cases it may be necessary to remove an article from publication, such as when the article is clearly defamatory, violates personal privacy, is the subject of a court order, or might pose a serious health risk to the general public.[1]
Although the majority of retractions are linked toscientific misconduct,[2]they are often cited as evidence of the self-correcting nature of science.[3]However, some scholars argue this view is misleading, describing it as a myth.[4]
A retraction may be initiated by the editors of a journal, or by the author(s) of the papers (or their institution). Retractions are typically accompanied by a retraction notice written by the editors or authors explaining the reason for the retraction. Such notices may also include a note from the authors with apologies for the previous error and/or expressions of gratitude to persons who disclosed the error to the author.[5]Retractions must not be confused withsmall correctionsin published articles.
There have been numerous examples of retracted scientific publications.Retraction Watchprovides updates on new retractions, and discusses general issues in relation to retractions.[6][7]
An early example of a retraction in a scholarly, peer-reviewed publication can be traced to"A Retraction, by Mr. Benjamin Wilson, F.R.S. of his former Opinion, concerning the Explication of the Leyden Experiment,"published in thePhilosophical Transactions of the Royal Societyon 24 June 1756. This is recognized as the earliest recorded retraction in scientific publishing.[8]In it,Benjamin Wilson, a British painter and scientist, formally withdrew his previous explanation of theLeyden jarexperiment, a foundational study in the field of electricity. He acknowledged that subsequent discoveries, particularly those byBenjamin Franklin, had shown his original interpretation to be incorrect.[9]
A 2011 paper in theJournal of Medical Ethicsattempted to quantify retraction rates inPubMedover time to determine if the rate was increasing, even while taking into account the increased number of overall publications occurring each year.[10]The author found that the rate of increase in retractions was greater than the rate of increase in publications. Moreover, the author notes the following:
"It is particularly striking that the number of papers retracted for fraud increased more than sevenfold in the 6 years between 2004 and 2009. During the same period, the number of papers retracted for a scientific mistake did not even double..." (p. 251).[10]
Although the author suggests that his findings may indeed indicate a recent increase in scientific fraud, he also acknowledges other possibilities. For example, increased rates of fraud in recent years may simply indicate that journals are doing a better job of policing the scientific literature than they have in the past. Furthermore, because retractions occur for a very small percentage of overall publications (fewer than 1 in 1,000 articles[11][12]), a few scientists who are willing to commit large amounts of fraud can highly impact retraction rates. For example, the author points out thatJan Hendrik Schönfabricated results in 15 retracted papers in the dataset he reviewed, all of which were retracted in 2002 and 2003, "so he alone was responsible for 56% of papers retracted for fraud in 2002—2003" (p 252).[10]
During the COVID-19 pandemic, academia saw a rapid increase in fast-track peer-reviewed articles dealing with SARS-CoV-2.[13] As a result, a number of papers were retracted in what has been called a "retraction tsunami",[14] owing to quality and/or data issues, leading many experts to question not only the quality of peer review but also the standards of retraction practices.[15]
Retracted studies may continue to be cited. This may happen in cases where scholars are unaware of the retraction, in particular when the retraction occurs long after the original publication.[16]
The number of journal articles being retracted had risen from about 1,600 in 2013 to 10,000 in 2023. Most of the retractions in 2023 were contributed byHindawijournals.[17]The significant number of retractions involving Chinese co-authors—over 17,000 since 2021, including 8,000 from Hindawi journals—has led China to launch a nationwide audit addressing retractions and research misconduct.[18]Retractions are alsomeasuredamong highly cited researchers.[19]
A small percentage of papers are retracted because of unintentional errors in the authors' work. Rather than removing the entire article, retraction with replacement has become a practice that helps authors avoid being seen as dishonest for mistakes that were not made purposefully.[20] This method allows the authors to fix the mistakes in the original paper and submit an edited version to take its place. The journal can decide to retract the original paper and then publish the corrected version online, usually with a notice such as "Retraction and Replacement" or "Correction" on the article page. For example, JAMA will post the edited version with a retraction and replacement notice, along with a link to the original article, while Research Evaluation will use the term "correction", with a link on the updated article referring to the old one.
Self-retraction is a request from an author and/or co-authors to retract their own work. Self-retraction is recommended because if a paper is instead retracted by the journal, investigations may begin that can affect the authors' reputations. Retracting one's own work on one's own terms shows more integrity and honesty, as the authors own up to their mistakes, just as the authors mentioned in The Wall Street Journal have done. Scientists have at times been asked to retract work even though it is exact and bold; the root cause of the problem should be looked into to avoid such retractions.[21] A system to distinguish "good" papers from "bad" ones would be beneficial to researchers and might save the reputations of scientists and researchers. Most researchers publish honest work, and simple mistakes are sometimes overlooked by the peer review process. Retraction should not be used for simple spelling errors, but for inaccurate, skewed, or fraudulent data. For example, new technologies are being developed today in a culture of transparency, providing the opportunity to record and flag false claims.[21] Another proposed solution is a dedicated term or label for such cases, since retraction notices currently look identical and are therefore classified together in databases;[21] a common database in which researchers can evaluate their own work could also help reduce retractions.
|
https://en.wikipedia.org/wiki/Retraction_in_academic_publishing
|
Testability is a primary aspect of science[1] and the scientific method. There are two components to testability: the logical possibility that the hypothesis could be refuted, and the practical feasibility of carrying out an observation or experiment that could refute it.
In short, a hypothesis is testable if there is a possibility of deciding whether it is true or false based on experimentation by anyone. This allows anyone to decide whether a theory can be supported or refuted by data. However, the interpretation of experimental data may also be inconclusive or uncertain. Karl Popper introduced the concept that scientific knowledge has the property of falsifiability, as published in The Logic of Scientific Discovery.[2]
|
https://en.wikipedia.org/wiki/Testability
|
In statistical inference, the concept of a confidence distribution (CD) has often been loosely referred to as a distribution function on the parameter space that can represent confidence intervals of all levels for a parameter of interest. Historically, it has typically been constructed by inverting the upper limits of lower-sided confidence intervals of all levels. It was also commonly associated with a fiducial[1] interpretation (fiducial distribution), although it is a purely frequentist concept.[2] A confidence distribution is not a probability distribution function of the parameter of interest, but may still be a function useful for making inferences.[3]
In recent years, there has been a surge of renewed interest in confidence distributions.[3]In the more recent developments, the concept of confidence distribution has emerged as a purelyfrequentistconcept, without any fiducial interpretation or reasoning. Conceptually, a confidence distribution is no different from apoint estimatoror an interval estimator (confidence interval), but it uses a sample-dependent distribution function on the parameter space (instead of a point or an interval) to estimate the parameter of interest.
A simple example of a confidence distribution, that has been broadly used in statistical practice, is a bootstrap distribution.[4] The development and interpretation of a bootstrap distribution does not involve any fiducial reasoning; the same is true for the concept of a confidence distribution. But the notion of confidence distribution is much broader than that of a bootstrap distribution. In particular, recent research suggests that it encompasses and unifies a wide range of examples, from regular parametric cases (including most examples of the classical development of Fisher's fiducial distribution) to bootstrap distributions, p-value functions,[5] normalized likelihood functions and, in some cases, Bayesian priors and Bayesian posteriors.[6]
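As a minimal illustrative sketch of the bootstrap case (in Python; the sample, the number of resamples B, and all variable names are hypothetical choices made for this example, not anything prescribed by the cited sources), the empirical distribution of the resampled estimator can serve as an approximate confidence distribution for the population mean:

import numpy as np

rng = np.random.default_rng(0)

# A hypothetical observed sample (stand-in data for illustration).
x = rng.normal(loc=2.0, scale=1.5, size=50)

# Bootstrap distribution of the sample mean: resample with replacement
# and recompute the estimator B times.
B = 5000
boot_means = np.array([rng.choice(x, size=x.size, replace=True).mean()
                       for _ in range(B)])

# The empirical CDF of the bootstrap means acts as an approximate
# confidence distribution H_n(theta) for the population mean.
def H_boot(theta):
    return np.mean(boot_means <= theta)

# CD-based summaries: median as a point estimate, central 95% interval.
point_estimate = np.median(boot_means)
ci_95 = np.quantile(boot_means, [0.025, 0.975])
print(point_estimate, ci_95)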
Just as a Bayesian posterior distribution contains a wealth of information for any type ofBayesian inference, a confidence distribution contains a wealth of information for constructing almost all types of frequentist inferences, includingpoint estimates,confidence intervals, critical values,statistical powerand p-values,[7]among others. Some recent developments have highlighted the promising potentials of the CD concept, as an effective inferential tool.[3]
Neyman (1937)[8] introduced the idea of "confidence" in his seminal paper on confidence intervals, which clarified the frequentist repetition property. According to Fraser,[9] the seed (idea) of the confidence distribution can even be traced back to Bayes (1763)[10] and Fisher (1930),[1] although the phrase itself seems to have first been used in Cox (1958).[11] Some researchers view the confidence distribution as "the Neymanian interpretation of Fisher's fiducial distributions",[12] which was "furiously disputed by Fisher".[13] It is also believed that these "unproductive disputes" and Fisher's "stubborn insistence"[13] might be the reason that the concept of confidence distribution has long been misconstrued as a fiducial concept and has not been fully developed under the frequentist framework.[6][14] Indeed, the confidence distribution is a purely frequentist concept with a purely frequentist interpretation, although it also has ties to Bayesian and fiducial inference concepts.
Classically, a confidence distribution is defined by inverting the upper limits of a series of lower-sided confidence intervals.[15][16][page needed] In particular,
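in one common formulation (given here in outline; the notation in the cited sources may differ), let ξ_n(α) = ξ_n(X_n, α) denote the 100α% upper confidence limit for θ, so that (−∞, ξ_n(α)] is a one-sided 100α% confidence interval, and suppose ξ_n(α) is continuous and increasing in α. The classical confidence distribution is then obtained by inverting this family of upper limits:

H_n(t) = ξ_n^{−1}(t),  or equivalently  H_n(ξ_n(α)) = α  for every α ∈ (0, 1).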
Efron stated that this distribution "assigns probability 0.05 to θ lying between the upper endpoints of the 0.90 and 0.95 confidence interval, etc." and "it has powerful intuitive appeal".[16] In the classical literature,[3] the confidence distribution function is interpreted as a distribution function of the parameter θ, which is impossible unless fiducial reasoning is involved since, in a frequentist setting, the parameters are fixed and nonrandom.
To interpret the CD function entirely from a frequentist viewpoint and not interpret it as a distribution function of a (fixed/nonrandom) parameter is one of the major departures of recent development relative to the classical approach. The nice thing about treating confidence distributions as a purely frequentist concept (similar to a point estimator) is that it is now free from those restrictive, if not controversial, constraints set forth by Fisher on fiducial distributions.[6][14]
The following definition applies;[12][17][18] Θ is the parameter space of the unknown parameter of interest θ, and χ is the sample space corresponding to data X_n = {X_1, ..., X_n}:
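in outline (the exact wording varies across the cited sources), a function H_n(·) = H_n(X_n, ·), defined on χ × Θ and taking values in [0, 1], is called a confidence distribution (CD) for θ if it satisfies two requirements: (R1) for each given sample X_n ∈ χ, H_n(·) is a continuous cumulative distribution function on Θ; and (R2) at the true parameter value θ = θ_0, H_n(θ_0) = H_n(X_n, θ_0), viewed as a function of the sample X_n, follows the uniform distribution U[0, 1].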
Also, the function H is an asymptotic CD (aCD) if the U[0, 1] requirement holds only asymptotically and the continuity requirement on H_n(·) is dropped.
In nontechnical terms, a confidence distribution is a function of both the parameter and the random sample, with two requirements. The first requirement (R1) simply requires that a CD should be a distribution on the parameter space. The second requirement (R2) sets a restriction on the function so that inferences (point estimators, confidence intervals and hypothesis testing, etc.) based on the confidence distribution have desired frequentist properties. This is similar to the restrictions in point estimation to ensure certain desired properties, such as unbiasedness, consistency, efficiency, etc.[6][19]
A confidence distribution derived by inverting the upper limits of confidence intervals (classical definition) also satisfies the requirements in the above definition and this version of the definition is consistent with the classical definition.[18]
Unlike the classical fiducial inference, more than one confidence distribution may be available to estimate a parameter under any specific setting. Also, unlike the classical fiducial inference, optimality is not part of the requirement. Depending on the setting and the criterion used, sometimes there is a unique "best" (in terms of optimality) confidence distribution. But sometimes there is no optimal confidence distribution available or, in some extreme cases, we may not even be able to find a meaningful confidence distribution. This is not different from the practice of point estimation.
A confidence distribution[20] C for a parameter γ in a measurable space is a distribution estimator with C(A_p) = p for a family of confidence regions A_p for γ with level p, for all levels 0 < p < 1. The family of confidence regions is not unique.[21] If A_p only exists for p ∈ I ⊂ (0, 1), then C is a confidence distribution with level set I. Both C and all A_p are measurable functions of the data. This implies that C is a random measure and A_p is a random set. If the defining requirement P(γ ∈ A_p) ≥ p holds with equality, then the confidence distribution is by definition exact. If, additionally, γ is a real parameter, then the measure-theoretic definition coincides with the above classical definition.
Suppose a normal sample X_i ~ N(μ, σ²), i = 1, 2, ..., n is given.
(1) Variance σ² is known
Let Φ be the cumulative distribution function of the standard normal distribution, and F_{t_{n−1}} the cumulative distribution function of the Student t_{n−1} distribution. Both the functions H_Φ(μ) and H_t(μ) given by

H_Φ(μ) = Φ( √n (μ − X̄) / σ )   and   H_t(μ) = F_{t_{n−1}}( √n (μ − X̄) / s )
satisfy the two requirements in the CD definition, and they are confidence distribution functions for μ.[3] Furthermore, the function

H_A(μ) = Φ( √n (μ − X̄) / s )
satisfies the definition of an asymptotic confidence distribution when n → ∞, and it is an asymptotic confidence distribution for μ. The uses of H_t(μ) and H_A(μ) are equivalent to stating that we use N(X̄, σ²) and N(X̄, s²) to estimate μ, respectively.
(2) Variance σ² is unknown
For the parameter μ, since H_Φ(μ) involves the unknown parameter σ and violates the two requirements in the CD definition, it is no longer a "distribution estimator" or a confidence distribution for μ.[3] However, H_t(μ) is still a CD for μ, and H_A(μ) is an aCD for μ.
For the parameter σ², the sample-dependent cumulative distribution function

H_{χ²}(σ²) = 1 − F_{χ²_{n−1}}( (n − 1) s² / σ² )
is a confidence distribution function for σ².[6] Here, F_{χ²_{n−1}} is the cumulative distribution function of the χ²_{n−1} distribution.
In the case when the variance σ² is known, H_Φ(μ) = Φ( √n (μ − X̄) / σ ) is optimal in terms of producing the shortest confidence intervals at any given level. In the case when the variance σ² is unknown, H_t(μ) = F_{t_{n−1}}( √n (μ − X̄) / s ) is an optimal confidence distribution for μ.
Let ρ denote the correlation coefficient of a bivariate normal population. It is well known that Fisher's z, defined by the Fisher transformation

z = (1/2) ln( (1 + r) / (1 − r) ),
has the limiting distribution N( (1/2) ln( (1 + ρ)/(1 − ρ) ), 1/(n − 3) ) with a fast rate of convergence, where r is the sample correlation and n is the sample size.
The function

H_n(ρ) = Φ( √(n − 3) ( (1/2) ln( (1 + ρ)/(1 − ρ) ) − (1/2) ln( (1 + r)/(1 − r) ) ) )
is an asymptotic confidence distribution for ρ.[22]
An exact confidence density for ρ is[23][24]
\pi(\rho \mid r) = \frac{\nu(\nu-1)\Gamma(\nu-1)}{\sqrt{2\pi}\,\Gamma(\nu+\frac{1}{2})}\,(1-r^{2})^{\frac{\nu-1}{2}}\cdot(1-\rho^{2})^{\frac{\nu-2}{2}}\cdot(1-r\rho)^{\frac{1-2\nu}{2}}\,F\!\left(\frac{3}{2},-\frac{1}{2};\,\nu+\frac{1}{2};\,\frac{1+r\rho}{2}\right)
where F is the Gaussian hypergeometric function and ν = n − 1 > 1. This is also the posterior density of a Bayes matching prior for the five parameters in the binormal distribution.[25]
The very last formula in the classical book by Fisher gives
\pi(\rho \mid r) = \frac{(1-r^{2})^{\frac{\nu-1}{2}}\cdot(1-\rho^{2})^{\frac{\nu-2}{2}}}{\pi(\nu-2)!}\,\partial_{\rho r}^{\,\nu-2}\left\{\frac{\theta-\frac{1}{2}\sin 2\theta}{\sin^{3}\theta}\right\}
where cos θ = −ρr and 0 < θ < π. This formula was derived by C. R. Rao.[26]
Let data be generated by Y = γ + U, where γ is an unknown vector in the plane and U has a binormal and known distribution in the plane. The distribution of Γ^y = y − U defines a confidence distribution for γ. The confidence regions A_p can be chosen as the interiors of ellipses centered at γ with axes given by the eigenvectors of the covariance matrix of Γ^y. The confidence distribution is in this case binormal with mean γ, and the confidence regions can be chosen in many other ways.[21] The confidence distribution coincides in this case with the Bayesian posterior using the right Haar prior.[27] The argument generalizes to the case of an unknown mean γ in an infinite-dimensional Hilbert space, but in this case the confidence distribution is not a Bayesian posterior.[28]
From the CD definition, it is evident that the intervals (−∞, H_n^{−1}(1 − α)], [H_n^{−1}(α), ∞) and [H_n^{−1}(α/2), H_n^{−1}(1 − α/2)] provide 100(1 − α)%-level confidence intervals of different kinds, for θ, for any α ∈ (0, 1). Also [H_n^{−1}(α₁), H_n^{−1}(1 − α₂)] is a level 100(1 − α₁ − α₂)% confidence interval for the parameter θ for any α₁ > 0, α₂ > 0 and α₁ + α₂ < 1. Here, H_n^{−1}(β) is the 100β% quantile of H_n(θ), or it solves for θ in the equation H_n(θ) = β. The same holds for an aCD, where the confidence level is achieved in the limit. Some authors have proposed using these intervals for graphically viewing what parameter values are consistent with the data, rather than for coverage or performance purposes.[29][30]
Point estimators can also be constructed given a confidence distribution estimator for the parameter of interest. For example, given H_n(θ), the CD for a parameter θ, natural choices of point estimators include the median M_n = H_n^{−1}(1/2), the mean θ̄_n = ∫_{−∞}^{∞} t dH_n(t), and the maximum point of the CD density,

θ̂_n = arg max_θ h_n(θ),  where  h_n(θ) = dH_n(θ)/dθ.
Under some modest conditions, among other properties, one can prove that these point estimators are all consistent.[6][22]Certain confidence distributions can give optimal frequentist estimators.[28]
One can derive a p-value for a test, either one-sided or two-sided, concerning the parameter θ, from its confidence distribution H_n(θ).[6][22] Denote by p_s(C) = H_n(C) = ∫_C dH(θ) the probability mass of a set C under the confidence distribution function. This p_s(C) is called "support" in the CD inference and is also known as "belief" in the fiducial literature.[31] We have
(1) For the one-sided test K₀: θ ∈ C vs. K₁: θ ∈ Cᶜ, where C is of the type (−∞, b] or [b, ∞), one can show from the CD definition that sup_{θ∈C} P_θ( p_s(C) ≤ α ) = α. Thus, p_s(C) = H_n(C) is the corresponding p-value of the test.
(2) For the singleton test K₀: θ = b vs. K₁: θ ≠ b, one can show from the CD definition that P_{θ=b}( 2 min{p_s(C_lo), p_s(C_up)} ≤ α ) = α. Thus, 2 min{p_s(C_lo), p_s(C_up)} = 2 min{H_n(b), 1 − H_n(b)} is the corresponding p-value of the test. Here, C_lo = (−∞, b] and C_up = [b, ∞).
See Figure 1 from Xie and Singh (2011)[6]for a graphical illustration of the CD inference.
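To make the recipes above concrete, the following is a small illustrative sketch in Python for the normal-sample example discussed earlier (the data, the 95% level, and the null value b are arbitrary choices for this illustration; scipy's Student-t functions stand in for F_{t_{n−1}} and its quantiles):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=2.0, size=30)   # hypothetical sample
n, xbar, s = x.size, x.mean(), x.std(ddof=1)

# Confidence distribution H_t(mu) = F_{t_{n-1}}( sqrt(n) * (mu - xbar) / s )
def H(mu):
    return stats.t.cdf(np.sqrt(n) * (mu - xbar) / s, df=n - 1)

# Quantile (inverse CD): H^{-1}(beta) = xbar + s/sqrt(n) * t_{n-1}^{-1}(beta)
def H_inv(beta):
    return xbar + s / np.sqrt(n) * stats.t.ppf(beta, df=n - 1)

alpha = 0.05
ci = (H_inv(alpha / 2), H_inv(1 - alpha / 2))   # central 95% confidence interval
median_estimate = H_inv(0.5)                    # CD-median point estimator (equals xbar here)

# Two-sided p-value for K0: mu = b, using 2 * min{ H(b), 1 - H(b) }
b = 9.0
p_value = 2 * min(H(b), 1 - H(b))

print(ci, median_estimate, p_value)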
A few statistical programs have implemented the ability to construct and graph confidence distributions.
R, via the concurve,[32][33] pvaluefunctions,[34] and episheet[35] packages
Excel, via episheet[36]
Stata, via concurve[32]
|
https://en.wikipedia.org/wiki/Confidence_distribution
|
Nonparametric statistics is a type of statistical analysis that makes minimal assumptions about the underlying distribution of the data being studied. Often these models are infinite-dimensional, rather than finite-dimensional, as in parametric statistics.[1] Nonparametric statistics can be used for descriptive statistics or statistical inference. Nonparametric tests are often used when the assumptions of parametric tests are evidently violated.[2]
The term "nonparametric statistics" has been defined imprecisely in the following two ways, among others:
The first meaning ofnonparametricinvolves techniques that do not rely on data belonging to any particular parametric family of probability distributions.
These include, among others:
An example isOrder statistics, which are based onordinal rankingof observations.
The discussion following is taken fromKendall's Advanced Theory of Statistics.[3]
Statistical hypotheses concern the behavior of observable random variables.... For example, the hypothesis (a) that a normal distribution has a specified mean and variance is statistical; so is the hypothesis (b) that it has a given mean but unspecified variance; so is the hypothesis (c) that a distribution is of normal form with both mean and variance unspecified; finally, so is the hypothesis (d) that two unspecified continuous distributions are identical.
It will have been noticed that in the examples (a) and (b) the distribution underlying the observations was taken to be of a certain form (the normal) and the hypothesis was concerned entirely with the value of one or both of its parameters. Such a hypothesis, for obvious reasons, is calledparametric.
Hypothesis (c) was of a different nature, as no parameter values are specified in the statement of the hypothesis; we might reasonably call such a hypothesisnon-parametric. Hypothesis (d) is also non-parametric but, in addition, it does not even specify the underlying form of the distribution and may now be reasonably termeddistribution-free. Notwithstanding these distinctions, the statistical literature now commonly applies the label "non-parametric" to test procedures that we have just termed "distribution-free", thereby losing a useful classification.
The second meaning ofnon-parametricinvolves techniques that do not assume that thestructureof a model is fixed. Typically, the model grows in size to accommodate the complexity of the data. In these techniques, individual variables are typically assumed to belong to parametric distributions, and assumptions about the types of associations among variables are also made. These techniques include, among others:
Non-parametric methods are widely used for studying populations that have a ranked order (such as movie reviews receiving one to five "stars"). The use of non-parametric methods may be necessary when data have arankingbut no clearnumericalinterpretation, such as when assessingpreferences. In terms oflevels of measurement, non-parametric methods result inordinal data.
As non-parametric methods make fewer assumptions, their applicability is much more general than the corresponding parametric methods. In particular, they may be applied in situations where less is known about the application in question. Also, due to the reliance on fewer assumptions, non-parametric methods are morerobust.
Non-parametric methods are sometimes considered simpler to use and more robust than parametric methods, even when the assumptions of parametric methods are justified. This is due to their more general nature, which may make them less susceptible to misuse and misunderstanding. Non-parametric methods can be considered a conservative choice, as they will work even when their assumptions are not met, whereas parametric methods can produce misleading results when their assumptions are violated.
The wider applicability and increasedrobustnessof non-parametric tests comes at a cost: in cases where a parametric test's assumptions are met, non-parametric tests have lessstatistical power. In other words, a larger sample size can be required to draw conclusions with the same degree of confidence.
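This trade-off can be seen in a small simulation; the sketch below (simulation settings assumed, not from the sources above) estimates the rejection rate of a two-sample t-test and of the nonparametric Mann–Whitney test under a modest mean shift between two normal samples, where the parametric assumptions hold exactly.

# Illustrative power comparison (simulation settings assumed): with exactly
# normal data and a modest mean shift, the t-test typically rejects somewhat
# more often than the nonparametric Mann-Whitney test at the same level.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(42)
alpha, n, shift, n_sim = 0.05, 30, 0.5, 2000
t_rej = mw_rej = 0
for _ in range(n_sim):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(shift, 1.0, n)
    if ttest_ind(a, b).pvalue < alpha:
        t_rej += 1
    if mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
        mw_rej += 1
print("empirical power, t-test:      ", t_rej / n_sim)
print("empirical power, Mann-Whitney:", mw_rej / n_sim)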
Non-parametric modelsdiffer fromparametricmodels in that the model structure is not specifieda prioribut is instead determined from data. The termnon-parametricis not meant to imply that such models completely lack parameters but that the number and nature of the parameters are flexible and not fixed in advance.
Non-parametric(ordistribution-free)inferential statistical methodsare mathematical procedures for statistical hypothesis testing which, unlikeparametric statistics, make no assumptions about theprobability distributionsof the variables being assessed. The most frequently used tests include
Early nonparametric statistics include themedian(13th century or earlier, use in estimation byEdward Wright, 1599; seeMedian § History) and thesign testbyJohn Arbuthnot(1710) in analyzing thehuman sex ratioat birth (seeSign test § History).[5][6]
|
https://en.wikipedia.org/wiki/Nonparametric_statistics
|
Pseudoreplication(sometimesunit of analysis error[1]) has many definitions. Pseudoreplication was originally defined in 1984 byStuart H. Hurlbert[2]as the use of inferential statistics to test for treatment effects with data from experiments where either treatments are not replicated (though samples may be) or replicates are not statistically independent. Subsequently, Millar and Anderson[3]identified it as a special case of inadequate specification of random factors where both random and fixed factors are present. It is sometimes narrowly interpreted as an inflation of the number of samples or replicates which are not statistically independent.[4]This definition omits the confounding of unit and treatment effects in a misspecifiedF-ratio. In practice, incorrect F-ratios for statistical tests of fixed effects often arise from a default F-ratio that is formed over the error rather than the mixed term.
Lazic[5]defined pseudoreplication as a problem of correlated samples (e.g. fromlongitudinal studies) where correlation is not taken into account when computing the confidence interval for the sample mean. For the effect of serial or temporal correlation also seeMarkov chain central limit theorem.
The problem of inadequate specification arises when treatments are assigned to units that are subsampled and the treatmentF-ratioin an analysis of variance (ANOVA) table is formed with respect to the residual mean square rather than with respect to the among unit mean square. The F-ratio relative to the within unit mean square is vulnerable to theconfoundingof treatment and unit effects, especially when experimental unit number is small (e.g. four tank units, two tanks treated, two not treated, several subsamples per tank). The problem is eliminated by forming the F-ratio relative to the correct mean square in the ANOVA table (tank by treatment MS in the example above), where this is possible. The problem is addressed by the use of mixed models.[3]
Hurlbert reported "pseudoreplication" in 48% of the studies he examined that used inferential statistics.[2]Several studies examining scientific papers published up to 2016 similarly found that about half of the papers were suspected of pseudoreplication.[4]When time and resources limit the number ofexperimental units, and unit effects cannot be eliminated statistically by testing over the unit variance, it is important to use other sources of information to evaluate the degree to which an F-ratio is confounded by unit effects.
Replicationincreases the precision of an estimate, while randomization addresses the broader applicability of a sample to a population. Replication must be appropriate: replication at the experimental unit level must be considered, in addition to replication within units.
Statistical tests(e.g.t-testand the related ANOVA family of tests) rely on appropriate replication to estimatestatistical significance. Tests based on the t and F distributions assume homogeneous, normal, and independent errors. Correlated errors can lead to false precision and p-values that are too small.[6]
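A small simulation makes the point concrete. In the sketch below (tank numbers, variances and subsample sizes are assumed for illustration) the treatment has no effect, yet a t-test that treats subsamples within tanks as independent replicates rejects far more often than the nominal 5%, while a t-test on tank means does not.

# Illustrative simulation: treating subsamples within tanks as independent
# replicates inflates the false-positive rate; testing on tank means keeps it
# near the nominal level.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_sim, tanks_per_group, subsamples = 2000, 2, 10
tank_sd, error_sd, alpha = 1.0, 1.0, 0.05
naive_fp = unit_fp = 0
for _ in range(n_sim):
    # No true treatment effect: only random tank effects plus within-tank noise.
    treated = (rng.normal(0, tank_sd, tanks_per_group)[:, None]
               + rng.normal(0, error_sd, (tanks_per_group, subsamples)))
    control = (rng.normal(0, tank_sd, tanks_per_group)[:, None]
               + rng.normal(0, error_sd, (tanks_per_group, subsamples)))
    if ttest_ind(treated.ravel(), control.ravel()).pvalue < alpha:
        naive_fp += 1        # pseudoreplicated test on all subsamples
    if ttest_ind(treated.mean(axis=1), control.mean(axis=1)).pvalue < alpha:
        unit_fp += 1         # test on experimental-unit (tank) means
print("false-positive rate, subsamples as replicates:", naive_fp / n_sim)
print("false-positive rate, tank means:              ", unit_fp / n_sim)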
Hurlbert (1984) defined four types of pseudoreplication.
|
https://en.wikipedia.org/wiki/Pseudoreplication
|
Non-uniform random variate generationorpseudo-random number samplingis thenumericalpractice of generatingpseudo-random numbers(PRN) that follow a givenprobability distribution.
Methods are typically based on the availability of auniformly distributedPRN generator. Computational algorithms are then used to manipulate a singlerandom variate,X, or often several such variates, into a new random variateYsuch that these values have the required distribution.
The first methods were developed forMonte-Carlo simulationsin theManhattan project,[citation needed]published byJohn von Neumannin the early 1950s.[1]
For adiscrete probability distributionwith a finite numbernof indices at which theprobability mass functionftakes non-zero values, the basic sampling algorithm is straightforward. The interval [0, 1) is divided intonintervals [0,f(1)), [f(1),f(1) +f(2)), ... The width of intervaliequals the probabilityf(i).
One draws a uniformly distributed pseudo-random numberX, and searches for the indexiof the corresponding interval. The indexidetermined in this way will have the distributionf(i).
Formalizing this idea becomes easier by using the cumulative distribution functionF(i)=∑j=1if(j).{\displaystyle F(i)=\sum _{j=1}^{i}f(j).}
It is convenient to setF(0) = 0. The n intervals are then simply [F(0),F(1)), [F(1),F(2)), ..., [F(n− 1),F(n)). The main computational task is then to determineifor whichF(i− 1) ≤X<F(i).
This can be done by different algorithms:
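For example, a binary search over the cumulative sums gives logarithmic-time lookups; a minimal sketch (the probability table is assumed for illustration):

# Minimal sketch of the inversion method for a discrete distribution,
# using binary search over the cumulative sums.
import bisect
import random

f = [0.1, 0.5, 0.1, 0.3]                  # probability mass function f(1..n)
F, total = [], 0.0
for p in f:                               # cumulative sums F(1), ..., F(n)
    total += p
    F.append(total)
F[-1] = 1.0                               # guard against floating-point rounding

def sample():
    """Draw X ~ U[0, 1) and return the index i with F(i - 1) <= X < F(i)."""
    x = random.random()
    return bisect.bisect_right(F, x) + 1  # 1-based index

counts = [0] * len(f)
for _ in range(100_000):
    counts[sample() - 1] += 1
print([c / 100_000 for c in counts])      # should be close to f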
Generic methods for generatingindependentsamples:
Generic methods for generatingcorrelatedsamples (often necessary for unusually-shaped or high-dimensional distributions):
For generating anormal distribution:
For generating aPoisson distribution:
GNU Scientific Libraryhas a section entitled "Random Number Distributions" with routines for sampling under more than twenty different distributions.[5]
|
https://en.wikipedia.org/wiki/Non-uniform_random_variate_generation
|
Arandom permutationis asequencewhere any order of its items is equally likely atrandom, that is, it is apermutation-valuedrandom variableof a set of objects. The use of random permutations is common ingames of chanceand inrandomized algorithmsincoding theory,cryptography, andsimulation. A good example of a random permutation is the fairshufflingof a standarddeck of cards: this is ideally a random permutation of the 52 cards.
One algorithm for generating a random permutation of a set of sizenuniformly at random, i.e., such that each of then!permutationsis equally likely to appear, is to generate asequenceby uniformly randomly selecting an integer between 1 andn(inclusive), sequentially and without replacementntimes, and then to interpret this sequence (x1, ...,xn) as the permutation
shown here intwo-line notation.
An inefficient brute-force method for sampling without replacement could select from the numbers between 1 andnat every step, retrying the selection whenever the random number picked is a repeat of a number already selected until selecting a number that has not yet been selected. The expected number of tries per step scales with the inverse of the fraction of numbers not yet selected, and the overall number of tries grows as the sum of those inverses (about n ln n for large n), making this an inefficient approach.
Such retries can be avoided using an algorithm where, on eachith step whenx1, ...,xi− 1have already been chosen, one chooses a uniformly random numberjfrom between 1 andn−i+ 1 (inclusive) and setsxiequal to thejth largest of the numbers that have not yet been selected. This selects uniformly randomly among the remaining numbers at every step without retries.
A simplealgorithmto generate a permutation ofnitems uniformly at random without retries, known as theFisher–Yates shuffle, is to start with any permutation (for example, theidentity permutation), and then go through the positions 0 throughn− 2 (we use a convention where the first element has index 0, and the last element has indexn− 1), and for each positioniswapthe element currently there with a randomly chosen element from positionsithroughn− 1 (the end), inclusive. Any permutation ofnelements will be produced by this algorithm with probability exactly 1/n!, thus yielding a uniform distribution of the permutations.
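A short sketch of the procedure just described (illustrative; Python's standard library function random.shuffle implements the same algorithm):

# Minimal sketch of the Fisher-Yates shuffle described above: swap position i
# with a uniformly chosen position from i through n - 1 (inclusive).
import random

def fisher_yates_shuffle(items):
    """Permute the list in place, uniformly over all n! permutations."""
    n = len(items)
    for i in range(n - 1):                 # positions 0 through n - 2
        j = random.randint(i, n - 1)       # uniform over i, ..., n - 1 inclusive
        items[i], items[j] = items[j], items[i]
    return items

print(fisher_yates_shuffle(list(range(10))))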
If theuniform()function is implemented simply asrandom() % (m)then there will be a bias in the distribution of permutations if the number of return values ofrandom()is not a multiple of m. However, this effect is small if the number of return values ofrandom()is orders of magnitude greater than m.
As with all computational implementations of random processes, the quality of the distribution generated by an implementation of a randomized algorithm such as the Fisher-Yates shuffle, i.e., how close the actually generated distribution is to the desired distribution, will depend on the quality of underlying sources of randomness in the implementation such aspseudorandom number generatorsorhardware random number generators. There are manyrandomness testsfor random permutations, such as the "overlapping permutations" test of theDiehard tests. A typical form of such tests is to take somepermutation statisticfor which the distribution is theoretically known and then test whether the distribution of that statistic on a set of randomly generated permutations from an implementation closely approximates the distribution of that statistic from the true distribution.
Theprobability distributionfor the number offixed pointsof a uniformly distributed random permutation ofnelements approaches aPoisson distributionwithexpected value1 asngrows.[1]The firstnmomentsof this distribution are exactly those of the Poisson distribution. In particular, the probability that a random permutation has no fixed points (i.e., that the permutation is aderangement) approaches 1/easnincreases.
|
https://en.wikipedia.org/wiki/Random_permutation
|
Surrogate data testing[1](or themethod of surrogate data) is a statisticalproof by contradictiontechnique similar topermutation tests[2]andparametric bootstrapping. It is used to detectnon-linearityin atime series.[3]The technique involves specifying anull hypothesisH0{\displaystyle H_{0}}describing alinear processand then generating severalsurrogate datasets according toH0{\displaystyle H_{0}}usingMonte Carlomethods. A discriminating statistic is then calculated for the original time series and all the surrogate set. If the value of the statistic is significantly different for the original series than for the surrogate set, the null hypothesis is rejected and non-linearity assumed.[3]
The particular surrogate data testing method to be used is directly related to the null hypothesis. Usually this is similar to the following:The data is a realization of a stationary linear system, whose output has been possibly measured by a monotonically increasing possibly nonlinear (but static) function.[1]Herelinearmeans that each value is linearly dependent on past values or on present and past values of some independent identically distributed (i.i.d.) process, usually also Gaussian. This is equivalent to saying that the process isARMAtype. In case of fluxes (continuous mappings), linearity of system means that it can be expressed by a linear differential equation. In this hypothesis, thestaticmeasurement function is one which depends only on the present value of its argument, not on past ones.
Many algorithms to generate surrogate data have been proposed. They are usually classified in two groups:[4]
The last surrogate data methods do not depend on a particular model, nor on any parameters, thus they are non-parametric methods. These surrogate data methods are usually based on preserving the linear structure of the original series (for instance, by preserving theautocorrelation function, or equivalently theperiodogram, an estimate of the sample spectrum).[5]Among constrained realizations methods, the most widely used (and thus could be called theclassical methods) are:
Many other surrogate data methods have been proposed, some based on optimizations to achieve an autocorrelation close to the original one,[9][10][11]some based on wavelet transform[12][13][14]and some capable of dealing with some types of non-stationary data.[15][16][17]
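As a concrete illustration of the constrained-realization idea, the sketch below generates a Fourier-transform (phase-randomization) surrogate, which preserves the periodogram of the original series while scrambling its phases; the example series and random seed are assumed.

# Illustrative sketch of a Fourier-transform (phase-randomization) surrogate:
# the amplitude spectrum of the original series is preserved, the phases are random.
import numpy as np

def ft_surrogate(x, rng=None):
    """Return a surrogate series with the same amplitude spectrum as x."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spectrum))
    randomized = np.abs(spectrum) * np.exp(1j * phases)
    randomized[0] = spectrum[0]            # keep the mean (DC) component
    if n % 2 == 0:
        randomized[-1] = spectrum[-1]      # keep the Nyquist component real
    return np.fft.irfft(randomized, n=n)

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.3 * rng.normal(size=500)
surrogate = ft_surrogate(x, rng)           # same linear structure, scrambled phases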
The above mentioned techniques are called linear surrogate methods, because they are based on a linear process and address a linear null hypothesis.[9]Broadly speaking, these methods are useful for data showing irregular fluctuations (short-term variabilities) and data with such a behaviour abound in the real world. However, we often observe data with obvious periodicity, for example, annual sunspot numbers, electrocardiogram (ECG) and so on. Time series exhibiting strong periodicities are clearly not consistent with the linear null hypotheses. To tackle this case, some algorithms and null hypotheses have been proposed.[18][19][20]
|
https://en.wikipedia.org/wiki/Surrogate_data_testing
|
Inmachine learning,ensemble averagingis the process of creating multiple models (typicallyartificial neural networks) and combining them to produce a desired output, as opposed to creating just one model. Ensembles of models often outperform individual models, as the various errors of the ensemble constituents "average out".[citation needed]
Ensemble averaging is one of the simplest types ofcommittee machines. Along withboosting, it is one of the two major types of static committee machines.[1]In contrast to standard neural network design, in which many networks are generated but only one is kept, ensemble averaging keeps the less satisfactory networks, but with less weight assigned to their outputs.[2]The theory of ensemble averaging relies on two properties of artificial neural networks:[3]
This is known as thebias–variance tradeoff. Ensemble averaging creates a group of networks, each with low bias and high variance, and combines them to form a new network which should theoretically exhibit low bias and low variance. Hence, this can be thought of as a resolution of the bias–variance tradeoff.[4]The idea of combining experts can be traced back toPierre-Simon Laplace.[5]
The theory mentioned above gives an obvious strategy: create a set of experts with low bias and high variance, and average them. Generally, what this means is to create a set of experts with varying parameters; frequently, these are the initial synaptic weights of a neural network, although other factors (such as learning rate, momentum, etc.) may also be varied. Some authors recommend against varying weight decay and early stopping.[3]The steps are therefore:
Alternatively,domain knowledgemay be used to generate severalclassesof experts. An expert from each class is trained, and then combined.
A more complex version of ensemble average views the final result not as a mere average of all the experts, but rather as a weighted sum. If each expert isyi{\displaystyle y_{i}}, then the overall resulty~{\displaystyle {\tilde {y}}}can be defined as:
whereα{\displaystyle \mathbf {\alpha } }is a set of weights. The optimization problem of finding alpha is readily solved through neural networks, hence a "meta-network" where each "neuron" is in fact an entire neural network can be trained, and the synaptic weights of the final network are the weights applied to each expert. This is known as alinear combination of experts.[2]
It can be seen that most forms of neural network are some subset of a linear combination: the standard neural net (where only one expert is used) is simply a linear combination with allαj=0{\displaystyle \alpha _{j}=0}and oneαk=1{\displaystyle \alpha _{k}=1}. A raw average is where allαj{\displaystyle \alpha _{j}}are equal to some constant value, namely one over the total number of experts.[2]
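A toy sketch of the distinction follows. The data, the three stand-in "experts" and the fitting of the combination weights by ordinary least squares are all assumptions made for illustration; in the scheme described above the weights would instead be learned by a meta-network.

# Toy sketch (assumed data and stand-in experts): raw ensemble average versus a
# linear combination of experts with weights fitted by ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + 0.1 * rng.normal(size=200)       # regression target

# Three crude "experts" standing in for independently trained networks.
experts = np.stack([3 * x, 3 * x - (3 * x) ** 3 / 6, np.tanh(3 * x)], axis=1)

raw_average = experts.mean(axis=1)                   # all alpha_j = 1 / (number of experts)
alpha, *_ = np.linalg.lstsq(experts, y, rcond=None)  # fitted combination weights
weighted = experts @ alpha

print("raw-average MSE:", np.mean((raw_average - y) ** 2))
print("weighted MSE:   ", np.mean((weighted - y) ** 2))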
A more recent ensemble averaging method is negative correlation learning,[6]proposed by Y. Liu and X. Yao. This method has been widely used inevolutionary computing.
|
https://en.wikipedia.org/wiki/Ensemble_averaging_(machine_learning)
|
Bayesian structural time series(BSTS) model is astatisticaltechnique used forfeature selection, time series forecasting,nowcasting, inferring causal impact and other applications. The model is designed to work withtime seriesdata.
The model has also promising application in the field of analyticalmarketing. In particular, it can be used in order to assess how much different marketing campaigns have contributed to the change in web search volumes, product sales, brand popularity and other relevant indicators.Difference-in-differencesmodels[1]andinterrupted time seriesdesigns[2]are alternatives to this approach. "In contrast to classical difference-in-differences schemes, state-space models make it possible to (i) infer the temporal evolution of attributable impact, (ii) incorporate empirical priors on the parameters in a fully Bayesian treatment, and (iii) flexibly accommodate multiple sources of variation, including the time-varying influence of contemporaneous covariates, i.e., synthetic controls."[1]
The model consists of three main components:
The model can be used to infer causal impact by comparing its counterfactual prediction with the observed data.[1]
A possible drawback of the model is its relatively complicated mathematical underpinning and its difficult implementation as a computer program. However, the programming languageRhas ready-to-use packages for calculating the BSTS model,[3][4]which do not require a strong mathematical background from the researcher.
|
https://en.wikipedia.org/wiki/Bayesian_structural_time_series
|
Mixture of experts(MoE) is amachine learningtechnique where multiple expertnetworks(learners) are used to divide a problem space into homogeneous regions.[1]MoE represents a form ofensemble learning.[2]They were also calledcommittee machines.[3]
MoE always has the following components, but they are implemented and combined differently according to the problem being solved:
Both the experts and the weighting function are trained by minimizing someloss function, generally viagradient descent. There is much freedom in choosing the precise form of experts, the weighting function, and the loss function.
Themeta-pi network, reported by Hampshire and Waibel,[4]usesf(x)=∑iw(x)ifi(x){\displaystyle f(x)=\sum _{i}w(x)_{i}f_{i}(x)}as the output. The model is trained by performing gradient descent on the mean-squared error lossL:=1N∑k‖yk−f(xk)‖2{\displaystyle L:={\frac {1}{N}}\sum _{k}\|y_{k}-f(x_{k})\|^{2}}. The experts may be arbitrary functions.
In their original publication, they were solving the problem of classifyingphonemesin speech signal from 6 different Japanese speakers, 2 females and 4 males. They trained 6 experts, each being a "time-delayed neural network"[5](essentially a multilayeredconvolution networkover themel spectrogram). They found that the resulting mixture of experts dedicated 5 experts for 5 of the speakers, but the 6th (male) speaker does not have a dedicated expert, instead his voice was classified by a linear combination of the experts for the other 3 male speakers.
Theadaptive mixtures of local experts[6][7]uses aGaussian mixture model. Each expert simply predicts a Gaussian distribution, and totally ignores the input. Specifically, thei{\displaystyle i}-th expert predicts that the output isy∼N(μi,I){\displaystyle y\sim N(\mu _{i},I)}, whereμi{\displaystyle \mu _{i}}is a learnable parameter. The weighting function is a linear-softmax function:w(x)i=ekiTx+bi∑jekjTx+bj{\displaystyle w(x)_{i}={\frac {e^{k_{i}^{T}x+b_{i}}}{\sum _{j}e^{k_{j}^{T}x+b_{j}}}}}The mixture of experts predict that the output is distributed according to the probability density function:fθ(y|x)=ln[∑iekiTx+bi∑jekjTx+bjN(y|μi,I)]=ln[(2π)−d/2∑iekiTx+bi∑jekjTx+bje−12‖y−μi‖2]{\displaystyle f_{\theta }(y|x)=\ln \left[\sum _{i}{\frac {e^{k_{i}^{T}x+b_{i}}}{\sum _{j}e^{k_{j}^{T}x+b_{j}}}}N(y|\mu _{i},I)\right]=\ln \left[(2\pi )^{-d/2}\sum _{i}{\frac {e^{k_{i}^{T}x+b_{i}}}{\sum _{j}e^{k_{j}^{T}x+b_{j}}}}e^{-{\frac {1}{2}}\|y-\mu _{i}\|^{2}}\right]}It is trained by maximal likelihood estimation, that is, gradient ascent onf(y|x){\displaystyle f(y|x)}. The gradient for thei{\displaystyle i}-th expert is
∇μifθ(y|x)=w(x)iN(y|μi,I)∑jw(x)jN(y|μj,I)(y−μi){\displaystyle \nabla _{\mu _{i}}f_{\theta }(y|x)={\frac {w(x)_{i}N(y|\mu _{i},I)}{\sum _{j}w(x)_{j}N(y|\mu _{j},I)}}\;(y-\mu _{i})}
and the gradient for the weighting function is∇[ki,bi]fθ(y|x)=[x1]w(x)i∑jw(x)jN(y|μj,I)(fi(x)−fθ(y|x)){\displaystyle \nabla _{[k_{i},b_{i}]}f_{\theta }(y|x)={\begin{bmatrix}x\\1\end{bmatrix}}{\frac {w(x)_{i}}{\sum _{j}w(x)_{j}N(y|\mu _{j},I)}}(f_{i}(x)-f_{\theta }(y|x))}
For each input-output pair(x,y){\displaystyle (x,y)}, the weighting function is changed to increase the weight on all experts that performed above average, and decrease the weight on all experts that performed below average. This encourages the weighting function to learn to select only the experts that make the right predictions for each input.
Thei{\displaystyle i}-th expert is changed to make its prediction closer toy{\displaystyle y}, but the amount of change is proportional tow(x)iN(y|μi,I){\displaystyle w(x)_{i}N(y|\mu _{i},I)}. This has a Bayesian interpretation. Given inputx{\displaystyle x}, theprior probabilitythat experti{\displaystyle i}is the right one isw(x)i{\displaystyle w(x)_{i}}, andN(y|μi,I){\displaystyle N(y|\mu _{i},I)}is thelikelihoodof evidencey{\displaystyle y}. So,w(x)iN(y|μi,I)∑jw(x)jN(y|μj,I){\displaystyle {\frac {w(x)_{i}N(y|\mu _{i},I)}{\sum _{j}w(x)_{j}N(y|\mu _{j},I)}}}is theposterior probabilityfor experti{\displaystyle i}, and so the rate of change for thei{\displaystyle i}-th expert is proportional to its posterior probability.
In words, the experts that, in hindsight, seemed like the good experts to consult, are asked to learn on the example. The experts that, in hindsight, were not, are left alone.
The combined effect is that the experts become specialized: Suppose two experts are both good at predicting a certain kind of input, but one is slightly better, then the weighting function would eventually learn to favor the better one. After that happens, the lesser expert is unable to obtain a high gradient signal, and becomes even worse at predicting such kind of input. Conversely, the lesser expert can become better at predicting other kinds of input, and increasingly pulled away into another region. This has a positive feedback effect, causing each expert to move apart from the rest and take care of a local region alone (thus the name "localexperts").
Hierarchical mixtures of experts[8][9]uses multiple levels of gating in a tree. Each gating is a probability distribution over the next level of gatings, and the experts are on the leaf nodes of the tree. They are similar todecision trees.
For example, a 2-level hierarchical MoE would have a first order gating functionwi{\displaystyle w_{i}}, and second order gating functionswj|i{\displaystyle w_{j|i}}and expertsfj|i{\displaystyle f_{j|i}}. The total prediction is then∑iwi(x)∑jwj|i(x)fj|i(x){\displaystyle \sum _{i}w_{i}(x)\sum _{j}w_{j|i}(x)f_{j|i}(x)}.
The mixture of experts, being similar to the gaussian mixture model, can also be trained by theexpectation-maximization algorithm, just like gaussian mixture models. Specifically, during the expectation step, the "burden" for explaining each data point is assigned over the experts, and during the maximization step, the experts are trained to improve the explanations they got a high burden for, while the gate is trained to improve its burden assignment. This can converge faster than gradient ascent on the log-likelihood.[9][10]
The choice of gating function is often softmax. Other than that, gating may usegaussian distributions[11]andexponential families.[10]
Instead of performing a weighted sum of all the experts, in hard MoE,[12]only the highest ranked expert is chosen. That is,f(x)=fargmaxiwi(x)(x){\displaystyle f(x)=f_{\arg \max _{i}w_{i}(x)}(x)}. This can accelerate training and inference time.[13]
The experts can use more general forms of multivariant gaussian distributions. For example,[8]proposedfi(y|x)=N(y|Aix+bi,Σi){\displaystyle f_{i}(y|x)=N(y|A_{i}x+b_{i},\Sigma _{i})}, whereAi,bi,Σi{\displaystyle A_{i},b_{i},\Sigma _{i}}are learnable parameters. In words, each expert learns to do linear regression, with a learnable uncertainty estimate.
One can use different experts than gaussian distributions. For example, one can useLaplace distribution,[14]orStudent's t-distribution.[15]For binary classification, it also proposedlogistic regressionexperts, withfi(y|x)={11+eβiTx+βi,0,y=01−11+eβiTx+βi,0,y=1{\displaystyle f_{i}(y|x)={\begin{cases}{\frac {1}{1+e^{\beta _{i}^{T}x+\beta _{i,0}}}},&y=0\\1-{\frac {1}{1+e^{\beta _{i}^{T}x+\beta _{i,0}}}},&y=1\end{cases}}}whereβi,βi,0{\displaystyle \beta _{i},\beta _{i,0}}are learnable parameters. This is later generalized for multi-class classification, withmultinomial logistic regressionexperts.[16]
One paper proposedmixture of softmaxesfor autoregressive language modelling.[17]Specifically, consider a language model that given a previous textc{\displaystyle c}, predicts the next wordx{\displaystyle x}. The network encodes the text into a vectorvc{\displaystyle v_{c}}, and predicts the probability distribution of the next word asSoftmax(vcW){\displaystyle \mathrm {Softmax} (v_{c}W)}for an embedding matrixW{\displaystyle W}. In mixture of softmaxes, the model outputs multiple vectorsvc,1,…,vc,n{\displaystyle v_{c,1},\dots ,v_{c,n}}, and predict the next word as∑i=1npiSoftmax(vc,iWi){\displaystyle \sum _{i=1}^{n}p_{i}\;\mathrm {Softmax} (v_{c,i}W_{i})}, wherepi{\displaystyle p_{i}}is a probability distribution by a linear-softmax operation on the activations of the hidden neurons within the model. The original paper demonstrated its effectiveness forrecurrent neural networks. This was later found to work for Transformers as well.[18]
The previous section described MoE as it was used before the era ofdeep learning. After deep learning, MoE found applications in running the largest models, as a simple way to performconditional computation: only parts of the model are used, the parts chosen according to what the input is.[19]
The earliest paper that applies MoE to deep learning dates back to 2013,[20]which proposed to use a different gating network at each layer in a deep neural network. Specifically, each gating is a linear-ReLU-linear-softmax network, and each expert is a linear-ReLU network. Since the output from the gating is notsparse, all expert outputs are needed, and no conditional computation is performed.
The key goal when using MoE in deep learning is to reduce computing cost. Consequently, for each query, only a small subset of the experts should be queried. This makes MoE in deep learning different from classical MoE. In classical MoE, the output for each query is a weighted sum ofallexperts' outputs. In deep learning MoE, the output for each query can only involve a few experts' outputs. Consequently, the key design choice in MoE becomes routing: given a batch of queries, how to route the queries to the best experts.
Thesparsely-gated MoE layer,[21]published by researchers fromGoogle Brain, usesfeedforward networksas experts, and linear-softmax gating. Similar to the previously proposed hard MoE, they achieve sparsity by a weighted sum of only the top-k experts, instead of the weighted sum of all of them. Specifically, in a MoE layer, there arefeedforward networksf1,...,fn{\displaystyle f_{1},...,f_{n}}, and a gating networkw{\displaystyle w}. The gating network is defined byw(x)=softmax(topk(Wx+noise)){\displaystyle w(x)=\mathrm {softmax} (\mathrm {top} _{k}(Wx+{\text{noise}}))}, wheretopk{\displaystyle \mathrm {top} _{k}}is a function that keeps the top-k entries of a vector the same, but sets all other entries to−∞{\displaystyle -\infty }. The addition of noise helps with load balancing.
The choice ofk{\displaystyle k}is a hyperparameter that is chosen according to application. Typical values arek=1,2{\displaystyle k=1,2}. Thek=1{\displaystyle k=1}version is also called the Switch Transformer. The original Switch Transformer was applied to aT5 language model.[22]
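A minimal numpy sketch of this gating function for a single token (dimensions, weights and noise scale are assumed; practical implementations operate on batches inside a deep-learning framework):

# Minimal sketch of noisy top-k gating for one token:
# w(x) = softmax(top_k(Wx + noise)), with the non-top-k entries set to -inf.
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_model, k = 8, 16, 2
W = rng.normal(size=(n_experts, d_model))       # gating weight matrix
x = rng.normal(size=d_model)                    # one token representation

logits = W @ x + rng.normal(size=n_experts)     # noisy gating logits
top_k = np.argsort(logits)[-k:]                 # indices of the k largest logits
masked = np.full(n_experts, -np.inf)
masked[top_k] = logits[top_k]                   # keep top-k, set the rest to -inf
gate = np.exp(masked - logits[top_k].max())
gate /= gate.sum()                              # softmax; exactly zero off the top-k
print(gate)                                     # sparse mixing weights w(x)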
As a demonstration, they trained a series of models for machine translation with alternating layers of MoE andLSTM, and compared them with deep LSTM models.[23]Table 3 of that report shows that the MoE models used less inference-time compute, despite having 30x more parameters.
Vanilla MoE tend to have issues ofload balancing: some experts are consulted often, while other experts rarely or not at all. To encourage the gate to select each expert with equal frequency (proper load balancing) within each batch, each MoE layer has two auxiliary loss functions. This is improved by Switch Transformer[22]into a singleauxiliary lossfunction. Specifically, letn{\displaystyle n}be the number of experts, then for a given batch of queries{x1,x2,...,xT}{\displaystyle \{x_{1},x_{2},...,x_{T}\}}, the auxiliary loss for the batch isn∑i=1nfiPi{\displaystyle n\sum _{i=1}^{n}f_{i}P_{i}}Here,fi=1T#(queries sent to experti){\displaystyle f_{i}={\frac {1}{T}}\#({\text{queries sent to expert }}i)}is the fraction of tokens that chose experti{\displaystyle i}, andPi=1T∑j=1Twi(xj)∑i′∈expertswi′(xj){\displaystyle P_{i}={\frac {1}{T}}\sum _{j=1}^{T}{\frac {w_{i}(x_{j})}{\sum _{i'\in {\text{experts}}}w_{i'}(x_{j})}}}is the fraction of weight on experti{\displaystyle i}. This loss is minimized at1{\displaystyle 1}, precisely when every expert has equal weight1/n{\displaystyle 1/n}in all situations.
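The auxiliary loss is straightforward to compute from the per-token gate weights; a sketch with an assumed batch of gate probabilities (it evaluates to exactly 1 under perfect balance):

# Minimal sketch of the auxiliary load-balancing loss n * sum_i f_i * P_i for
# an assumed batch of gate probabilities.
import numpy as np

rng = np.random.default_rng(0)
n_experts, n_tokens = 4, 32
gate_probs = rng.dirichlet(np.ones(n_experts), size=n_tokens)   # w_i(x_j), rows sum to 1

chosen = gate_probs.argmax(axis=1)                        # expert each token is routed to
f = np.bincount(chosen, minlength=n_experts) / n_tokens   # fraction of tokens per expert
P = gate_probs.mean(axis=0)                               # mean gate weight per expert
aux_loss = n_experts * np.sum(f * P)
print(aux_loss)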
Researchers atDeepSeekdesigned a variant of MoE, with "shared experts" that are always queried, and "routed experts" that might not be. They found that standard load balancing encourages the experts to be equally consulted, but this then causes experts to replicate the same core capacity, such as English grammar. They proposed the shared experts to learn core capacities that are often used, and let the routed experts to learn the peripheral capacities that are rarely used.[25]
They also proposed "auxiliary-loss-free load balancing strategy", which does not use auxiliary loss. Instead, each experti{\displaystyle i}has an extra "expert bias"bi{\displaystyle b_{i}}. If an expert is being neglected, then their bias increases, and vice versa. During token assignment, each token picks the top-k experts, but with the bias added in. That is:[26]f(x)=∑iis in the top-k of{w(x)j+bj}jw(x)ifi(x){\displaystyle f(x)=\sum _{i{\text{ is in the top-k of }}\{w(x)_{j}+b_{j}\}_{j}}w(x)_{i}f_{i}(x)}Note that the expert bias matters for picking the experts, but not in adding up the responses from the experts.
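A sketch of the idea for one batch of tokens (the gate weights, step size and exact bias-update rule below are assumed for illustration):

# Sketch of auxiliary-loss-free routing: the bias b_i affects which experts are
# picked but not the combination weights.
import numpy as np

rng = np.random.default_rng(0)
n_experts, k, n_tokens = 6, 2, 32
logits = rng.normal(size=(n_tokens, n_experts))
w = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # w(x)_i per token
bias = np.zeros(n_experts)                                      # per-expert bias b_i

top_k = np.argsort(w + bias, axis=1)[:, -k:]            # selection uses w(x)_i + b_i
combine_weights = np.take_along_axis(w, top_k, axis=1)  # responses use w(x)_i only

# Neglected experts get their bias increased, overloaded ones decreased
# (step size and exact rule assumed).
load = np.bincount(top_k.ravel(), minlength=n_experts)
bias += 0.01 * np.sign(load.mean() - load)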
Suppose there aren{\displaystyle n}experts in a layer. For a given batch of queries{x1,x2,...,xT}{\displaystyle \{x_{1},x_{2},...,x_{T}\}}, each query is routed to one or more experts. For example, if each query is routed to one expert as in Switch Transformers, and if the experts are load-balanced, then each expert should expect on averageT/n{\displaystyle T/n}queries in a batch. In practice, the experts cannot expect perfect load balancing: in some batches, one expert might be underworked, while in other batches, it would be overworked.
Since the inputs cannot move through the layer until every expert in the layer has finished the queries it is assigned, load balancing is important. Thecapacity factoris sometimes used to enforce a hard constraint on load balancing. Each expert is only allowed to process up toc⋅T/n{\displaystyle c\cdot T/n}queries in a batch. The ST-MoE report foundc∈[1.25,2]{\displaystyle c\in [1.25,2]}to work well in practice.[27]
In the original sparsely-gated MoE, only the top-k experts are queried, and their outputs are weighted-summed. There are other methods.[27]Generally speaking, routing is anassignment problem: How to assign tokens to experts, such that a variety of constraints are followed (such as throughput, load balancing, etc.)? There are typically three classes of routing algorithm: the experts choose the tokens ("expert choice"),[28]the tokens choose the experts (the original sparsely-gated MoE), and a global assigner matching experts and tokens.[29]
During inference, the MoE works over a large batch of tokens at any time. If the tokens were to choose the experts, then some experts might receive too few tokens, while a few experts get so many tokens that it exceeds their maximum batch size, so they would have to ignore some of the tokens. Similarly, if the experts were to choose the tokens, then some tokens might not be picked by any expert. This is the "token drop" problem. Dropping a token is not necessarily a serious problem, since in Transformers, due toresidual connections, if a token is "dropped", it does not disappear. Instead, its vector representation simply passes through the feedforward layer without change.[29]
Other approaches include solving it as aconstrained linear programmingproblem,[30]usingreinforcement learningto train the routing algorithm (since picking an expert is a discrete action, like in RL).[31]The token-expert match may involve no learning ("static routing"): It can be done by a deterministichash function[32]or a random number generator.[33]
MoE layers are used in the largesttransformer models, for which learning and inferring over the full model is too costly. They are typically sparsely-gated, with sparsity 1 or 2. In Transformer models, the MoE layers are often used to select thefeedforward layers(typically a linear-ReLU-linear network), appearing in each Transformer block after the multiheaded attention. This is because the feedforward layers take up an increasing portion of the computing cost as models grow larger. For example, in the Palm-540B model, 90% of parameters are in its feedforward layers.[34]
A trained Transformer can be converted to a MoE by duplicating its feedforward layers, with randomly initialized gating, then trained further. This is a technique called "sparse upcycling".[35]
There are a large number of design choices involved in Transformer MoE that affect the training stability and final performance. The OLMoE report describes these in some detail.[36]
As of 2023, models large enough to use MoE tend to belarge language models, where each expert has on the order of 10 billion parameters. Other than language models, Vision MoE[37]is a Transformer model with MoE layers. They demonstrated it by training a model with 15 billion parameters. MoE Transformer has also been applied fordiffusion models.[38]
A series of large language models fromGoogleused MoE. GShard[39]uses MoE with up to top-2 experts per layer. Specifically, the top-1 expert is always selected, and the second-ranked expert is selected with probability proportional to that expert's weight according to the gating function. Later, GLaM[40]demonstrated a language model with 1.2 trillion parameters, each MoE layer using top-2 out of 64 experts. Switch Transformers[22]use top-1 in all MoE layers.
The NLLB-200 byMeta AIis a machine translation model for 200 languages.[41]Each MoE layer uses a hierarchical MoE with two levels. On the first level, the gating function chooses to use either a "shared" feedforward layer, or to use the experts. If using the experts, then another gating function computes the weights and chooses the top-2 experts.[42]
MoE large language models can be adapted for downstream tasks byinstruction tuning.[43]
In December 2023,Mistral AIreleased Mixtral 8x7B under Apache 2.0 license. It is a MoE language model with 46.7B parameters, 8 experts, and sparsity 2. They also released a version finetuned for instruction following.[44][45]
In March 2024, Databricks releasedDBRX. It is a MoE language model with 132B parameters, 16 experts, and sparsity 4. They also released a version finetuned for instruction following.[46][47]
|
https://en.wikipedia.org/wiki/Mixture_of_experts
|
CatBoost[6]is anopen-sourcesoftware librarydeveloped byYandex. It provides agradient boostingframework which, among other features, attempts to solve for categorical features using a permutation-driven alternative to the classical algorithm.[7]It works onLinux,Windows,macOS, and is available inPython,[8]R,[9]and models built using CatBoost can be used for predictions inC++,Java,[10]C#,Rust,Core ML,ONNX, andPMML. The source code is licensed underApache Licenseand available on GitHub.[6]
InfoWorldmagazine included the library in its "best machine learning tools" award for 2017,[11]along withTensorFlow,PyTorch,XGBoostand 8 other libraries.
Kagglelisted CatBoost as one of the most frequently used machine learning (ML) frameworks in the world. It was listed as the top-8 most frequently used ML framework in the 2020 survey[12]and as the top-7 most frequently used ML framework in the 2021 survey.[13]
As of April 2022, CatBoost is installed about 100,000 times per day from thePyPIrepository.[14]
CatBoost has gained popularity compared to other gradient boosting algorithms primarily due to the following features[15]
In 2009 Andrey Gulin developedMatrixNet, a proprietary gradient boosting library that was used in Yandex to rank search results.
Since 2009 MatrixNet has been used in different projects in Yandex, including recommendation systems and weather prediction.
In 2014–2015 Andrey Gulin and a team of researchers started a new project called Tensornet, aimed at solving the problem of "how to work withcategorical data". It resulted in several proprietary gradient boosting libraries with different approaches to handling categorical data.
In 2016 Machine Learning Infrastructure team led by Anna Dorogush started working on Gradient Boosting in Yandex, including Matrixnet and Tensornet. They implemented and open-sourced the next version of Gradient Boosting library called CatBoost, which has support of categorical and text data, GPU training, model analysis, visualization tools.
CatBoost was open-sourced in July 2017 and is under active development in Yandex and the open-source community.
|
https://en.wikipedia.org/wiki/Catboost
|
LightGBM, short forLight Gradient-Boosting Machine, is afree and open-sourcedistributedgradient-boostingframework formachine learning, originally developed byMicrosoft.[4][5]It is based ondecision treealgorithms and used forranking,classificationand other machine learning tasks. The development focus is on performance and scalability.
The LightGBM framework supports different algorithms including GBT,GBDT,GBRT,GBM,MART[6][7]andRF.[8]LightGBM has many ofXGBoost's advantages, including sparse optimization, parallel training, multiple loss functions, regularization, bagging, and early stopping. A major difference between the two lies in the construction of trees. LightGBM does not grow a tree level-wise — row by row — as most other implementations do.[9]Instead it grows trees leaf-wise. It will choose the leaf with max delta loss to grow.[10]Besides, LightGBM does not use the widely used sorted-based decision tree learning algorithm, which searches the best split point on sorted feature values,[11]asXGBoostor other implementations do. Instead, LightGBM implements a highly optimized histogram-based decision tree learning algorithm, which yields great advantages on both efficiency and memory consumption.[12]The LightGBM algorithm utilizes two novel techniques called Gradient-Based One-Side Sampling (GOSS) and Exclusive Feature Bundling (EFB) which allow the algorithm to run faster while maintaining a high level of accuracy.[13]
LightGBM works onLinux,Windows, andmacOSand supportsC++,Python,[14]R, andC#.[15]The source code is licensed underMIT Licenseand available onGitHub.[16]
When usinggradient descent, one thinks about the space of possible configurations of the model as a valley, in which the lowest part of the valley is the model which most closely fits the data. In this metaphor, one walks in different directions to learn how much lower the valley becomes.
Typically, in gradient descent, one uses the whole set of data to calculate the valley's slopes. However, this commonly-used method assumes that every data point is equally informative.
By contrast, Gradient-Based One-Side Sampling (GOSS), a method first developed forgradient-boosted decision trees, does not rely on the assumption that all data are equally informative. Instead, it treats data points with smaller gradients (shallower slopes) as less informative by randomly dropping them. This is intended to filter out data which may have been influenced by noise, allowing the model to more accurately model the underlying relationships in the data.[13]
Exclusive feature bundling (EFB) is a near-lossless method to reduce the number of effective features. In a sparse feature space many features are nearly exclusive, implying they rarely take nonzero values simultaneously. One-hot encoded features are a perfect example of exclusive features. EFB bundles these features, reducing dimensionality to improve efficiency while maintaining a high level of accuracy. The bundling of exclusive features into a single feature is called an exclusive feature bundle.[13]
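As a usage illustration, the sketch below fits LightGBM's scikit-learn-style wrapper to a synthetic binary-classification problem; the data and the hyperparameter values are assumed examples, not recommendations.

# Usage sketch on synthetic data.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic binary target

model = lgb.LGBMClassifier(
    n_estimators=200,
    num_leaves=31,        # leaf-wise growth is controlled by the number of leaves
    learning_rate=0.05,
)
model.fit(X, y)
print(model.predict(X[:5]))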
|
https://en.wikipedia.org/wiki/LightGBM
|
Instatistics,cumulative distribution function(CDF)-based nonparametric confidence intervalsare a general class ofconfidence intervalsaroundstatistical functionalsof a distribution. To calculate these confidence intervals, all that is required is anindependently and identically distributed(iid) sample from the distribution and known bounds on the support of the distribution. The latter requirement simply means that all the nonzero probability mass of the distribution must be contained in some known interval[a,b]{\displaystyle [a,b]}.
The intuition behind the CDF-based approach is that bounds on the CDF of a distribution can be translated into bounds on statistical functionals of that distribution. Given an upper and lower bound on the CDF, the approach involves finding the CDFs within the bounds that maximize and minimize the statistical functional of interest.
Unlike approaches that make asymptotic assumptions, includingbootstrap approachesand those that rely on thecentral limit theorem, CDF-based bounds are valid for finite sample sizes. And unlike bounds based on inequalities such asHoeffding'sandMcDiarmid'sinequalities, CDF-based bounds use properties of the entire sample and thus often produce significantly tighter bounds.
When producing bounds on the CDF, we must differentiate betweenpointwise and simultaneous bands.
A pointwise CDF bound is one which only guarantees acoverage probabilityof1−α{\displaystyle 1-\alpha }at any individual point of the empirical CDF. Because of the relaxed guarantee, these intervals can be much smaller.
One method of generating them is based on the binomial distribution. Consider a single point of a CDF with valueF(xi){\displaystyle F(x_{i})}; the empirical CDF at that point will be distributed according to a binomial proportion withp=F(xi){\displaystyle p=F(x_{i})}andn{\displaystyle n}set equal to the number of samples in the empirical distribution. Thus, any of the methods available for generating aBinomial proportion confidence intervalcan be used to generate a CDF bound as well.
CDF-based confidence intervals require a probabilistic bound on the CDF of the distribution from which the sample was generated. A variety of methods exist for generating confidence intervals for the CDF of a distribution,F{\displaystyle F}, given an i.i.d. sample drawn from the distribution. These methods are all based on theempirical distribution function(empirical CDF). Given an i.i.d. sample of sizen,x1,…,xn∼F{\displaystyle x_{1},\ldots ,x_{n}\sim F}, the empirical CDF is defined to beF^n(x)=1n∑i=1n1{xi≤x},{\displaystyle {\hat {F}}_{n}(x)={\frac {1}{n}}\sum _{i=1}^{n}1\{x_{i}\leq x\},}
where1{A}{\displaystyle 1\{A\}}is the indicator of event A. TheDvoretzky–Kiefer–Wolfowitz inequality,[1]whose tight constant was determined by Massart,[2]places a confidence interval around theKolmogorov–Smirnov statisticbetween the CDF and the empirical CDF. Given an i.i.d. sample of sizenfromF{\displaystyle F}, the bound states thatP(supx|F^n(x)−F(x)|>ε)≤2e−2nε2{\displaystyle P\left(\sup _{x}|{\hat {F}}_{n}(x)-F(x)|>\varepsilon \right)\leq 2e^{-2n\varepsilon ^{2}}}for everyε>0{\displaystyle \varepsilon >0}.
This can be viewed as a confidence envelope that runs parallel to, and is equally above and below, the empirical CDF.
The interval that contains the true CDF,F(x){\displaystyle F(x)}, with probability1−α{\displaystyle 1-\alpha }is often specified as[F^n(x)−ε,F^n(x)+ε]{\displaystyle [{\hat {F}}_{n}(x)-\varepsilon ,\,{\hat {F}}_{n}(x)+\varepsilon ]}withε=ln⁡(2/α)2n{\displaystyle \varepsilon ={\sqrt {\tfrac {\ln(2/\alpha )}{2n}}}}, truncated to lie within[0,1]{\displaystyle [0,1]}.
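A sketch of this band for an assumed sample (the Beta(2, 5) data and the 95% level are illustrative):

# Sketch of a simultaneous 1 - alpha DKW/Massart band around the empirical CDF.
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.beta(2, 5, size=200))          # hypothetical i.i.d. sample
n, alpha = len(x), 0.05
eps = np.sqrt(np.log(2 / alpha) / (2 * n))     # band half-width from Massart's bound

ecdf = np.arange(1, n + 1) / n                 # empirical CDF at the sorted points
lower = np.clip(ecdf - eps, 0, 1)              # lower envelope L(x)
upper = np.clip(ecdf + eps, 0, 1)              # upper envelope U(x)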
The equally spaced confidence interval around the empirical CDF allows for different rates of violations across the support of the distribution. In particular, it is more common for a CDF to be outside of the CDF bound estimated using the Dvoretzky–Kiefer–Wolfowitz inequality near the median of the distribution than near the endpoints of the distribution. In contrast, the order statistics-based bound introduced by Learned-Miller and DeStefano[3]allows for an equal rate of violation across all of the order statistics. This in turn results in a bound that is tighter near the ends of the support of the distribution and looser in the middle of the support. Other types of bounds can be generated by varying the rate of violation for the order statistics. For example, if a tighter bound on the distribution is desired on the upper portion of the support, a higher rate of violation can be allowed at the upper portion of the support at the expense of having a lower rate of violation, and thus a looser bound, for the lower portion of the support.
Assume without loss of generality that the support of the distribution is contained in[0,1].{\displaystyle [0,1].}Given a confidence envelope for the CDF ofF{\displaystyle F}it is easy to derive a corresponding confidence interval for the mean ofF{\displaystyle F}. It can be shown[4]that the CDF that maximizes the mean is the one that runs along the lower confidence envelope,L(x){\displaystyle L(x)}, and the CDF that minimizes the mean is the one that runs along the upper envelope,U(x){\displaystyle U(x)}. Using the identityE[X]=∫01(1−F(x))dx{\displaystyle E[X]=\int _{0}^{1}(1-F(x))\,\mathrm {d} x}for a random variable supported on[0,1]{\displaystyle [0,1]}, the confidence interval for the mean can be computed as[∫01(1−U(x))dx,∫01(1−L(x))dx].{\displaystyle \left[\int _{0}^{1}(1-U(x))\,\mathrm {d} x,\;\int _{0}^{1}(1-L(x))\,\mathrm {d} x\right].}
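A sketch of the computation for an assumed sample supported on [0, 1] (the envelope is evaluated on a grid, so the integrals are approximated numerically):

# Sketch: bounds on the mean from a DKW envelope, using the identity
# E[X] = integral over [0, 1] of (1 - F(x)) dx.
import numpy as np

rng = np.random.default_rng(0)
x = rng.beta(2, 5, size=200)                       # hypothetical sample in [0, 1]
n, alpha = len(x), 0.05
eps = np.sqrt(np.log(2 / alpha) / (2 * n))

grid = np.linspace(0, 1, 2001)
ecdf = (x[:, None] <= grid).mean(axis=0)           # empirical CDF on the grid
L = np.clip(ecdf - eps, 0, 1)                      # lower envelope
U = np.clip(ecdf + eps, 0, 1)                      # upper envelope

mean_upper = np.trapz(1 - L, grid)                 # CDF along L maximizes the mean
mean_lower = np.trapz(1 - U, grid)                 # CDF along U minimizes the mean
print(mean_lower, x.mean(), mean_upper)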
Assume without loss of generality that the support of the distribution of interest,F{\displaystyle F}, is contained in[0,1]{\displaystyle [0,1]}. Given a confidence envelope forF{\displaystyle F}, it can be shown[5]that the CDF within the envelope that minimizes the variance begins on the lower envelope, has a jump discontinuity to the upper envelope, and then continues along the upper envelope. Further, it can be shown that this variance-minimizing CDF, F', must satisfy the constraint that the jump discontinuity occurs atE[F′]{\displaystyle E[F']}. The variance maximizing CDF begins on the upper envelope, horizontally transitions to the lower envelope, then continues along the lower envelope. Explicit algorithms for calculating these variance-maximizing and minimizing CDFs are given by Romano and Wolf.[5]
The CDF-based framework for generating confidence intervals is very general and can be applied to a variety of other statistical functionals including
|
https://en.wikipedia.org/wiki/CDF-based_nonparametric_confidence_interval
|
Parametric statisticsis a branch of statistics which leverages models based on a fixed (finite) set ofparameters.[1]Conversely,nonparametric statisticsdoes not assume explicit (finite-parametric) mathematical forms for distributions when modeling data. It may nevertheless make some assumptions about that distribution, such as continuity or symmetry, or even assume an explicit mathematical shape while modeling a distributional parameter that is not itself finite-parametric.
Most well-known statistical methods are parametric.[2]Regarding nonparametric (and semiparametric) models,Sir David Coxhas said, "These typically involve fewer assumptions of structure and distributional form but usually contain strong assumptions about independencies".[3]
Thenormal family of distributionsall have the same general shape and areparameterizedbymeanandstandard deviation. That means that if the mean and standard deviation are known and if the distribution is normal, the probability of any future observation lying in a given range is known.
Suppose that we have a sample of 99 test scores with a mean of 100 and a standard deviation of 1. If we assume all 99 test scores are random observations from a normal distribution, then we predict there is a 1% chance that the 100th test score will be higher than 102.33 (that is, the mean plus 2.33 standard deviations), assuming that the 100th test score comes from the same distribution as the others. Parametric statistical methods are used to compute the 2.33 value above, given 99independentobservations from the same normal distribution.
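The 2.33 figure is simply the 99th percentile of the standard normal distribution; a one-line check using the numbers from the example:

# The 2.33 value is the 99th percentile of the standard normal distribution.
from scipy.stats import norm

mean, sd = 100, 1
threshold = mean + norm.ppf(0.99) * sd   # norm.ppf(0.99) is about 2.326
print(threshold)                         # about 102.33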
Anon-parametricestimate of the same thing is the maximum of the first 99 scores. We don't need to assume anything about the distribution of test scores to reason that before we gave the test it was equally likely that the highest score would be any of the first 100. Thus there is a 1% chance that the 100th score is higher than any of the 99 that preceded it.
Parametric statistics was mentioned byR. A. Fisherin his workStatistical Methods for Research Workersin 1925, which created the foundation for modern statistics.
|
https://en.wikipedia.org/wiki/Parametric_statistics
|
Instatistics, asemiparametric modelis astatistical modelthat hasparametricandnonparametriccomponents.
A statistical model is aparameterized familyof distributions:{Pθ:θ∈Θ}{\displaystyle \{P_{\theta }:\theta \in \Theta \}}indexed by aparameterθ{\displaystyle \theta }.
It may appear at first that semiparametric models include nonparametric models, since they have an infinite-dimensional as well as a finite-dimensional component. However, a semiparametric model is considered to be "smaller" than a completely nonparametric model because we are often interested only in the finite-dimensional component ofθ{\displaystyle \theta }. That is, the infinite-dimensional component is regarded as anuisance parameter.[2]In nonparametric models, by contrast, the primary interest is in estimating the infinite-dimensional parameter. Thus the estimation task is statistically harder in nonparametric models.
These models often usesmoothingorkernels.
A well-known example of a semiparametric model is theCox proportional hazards model.[3]If we are interested in studying the timeT{\displaystyle T}to an event such as death due to cancer or failure of a light bulb, the Cox model specifies the following distribution function forT{\displaystyle T}:F(t∣x)=1−exp⁡(−∫0tλ0(u)eβ⊤xdu),{\displaystyle F(t\mid x)=1-\exp \left(-\int _{0}^{t}\lambda _{0}(u)e^{\beta ^{\top }x}\,\mathrm {d} u\right),}
wherex{\displaystyle x}is the covariate vector, andβ{\displaystyle \beta }andλ0(u){\displaystyle \lambda _{0}(u)}are unknown parameters.θ=(β,λ0(u)){\displaystyle \theta =(\beta ,\lambda _{0}(u))}. Hereβ{\displaystyle \beta }is finite-dimensional and is of interest;λ0(u){\displaystyle \lambda _{0}(u)}is an unknown non-negative function of time (known as the baseline hazard function) and is often anuisance parameter. The set of possible candidates forλ0(u){\displaystyle \lambda _{0}(u)}is infinite-dimensional.
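As an illustration of fitting such a model in practice, the sketch below uses the lifelines Python package on synthetic data (the data-generating choices and column names are assumed); the regression coefficient beta is estimated while the baseline hazard is left unspecified.

# Illustrative sketch on synthetic data using lifelines, one of several
# implementations of the Cox proportional hazards model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, true_beta = 500, 0.7
x1 = rng.normal(size=n)
t = rng.exponential(scale=np.exp(-true_beta * x1))   # hazard proportional to exp(beta * x1)
censor = rng.exponential(scale=2.0, size=n)
df = pd.DataFrame({
    "T": np.minimum(t, censor),                       # observed time
    "E": (t <= censor).astype(int),                   # event indicator
    "x1": x1,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")
cph.print_summary()                                   # estimated coefficient for x1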
|
https://en.wikipedia.org/wiki/Semiparametric_model
|
Theapproximate counting algorithmallows the counting of a large number of events using a small amount of memory. Invented in 1977 byRobert MorrisofBell Labs, it usesprobabilistic techniquesto increment thecounter. It was fully analyzed in the early 1980s byPhilippe FlajoletofINRIARocquencourt, who coined the nameapproximate counting, and strongly contributed to its recognition among the research community. When focused on high quality of approximation and low probability of failure,Nelsonand Yu showed that a very slight modification to the Morris Counter is asymptotically optimal amongst all algorithms for the problem.[1]The algorithm is considered one of the precursors ofstreaming algorithms, and the more general problem of determining the frequency moments of a data stream has been central to the field.
Using Morris' algorithm, the counter represents an "order of magnitudeestimate" of the actual count. The approximation is mathematicallyunbiased.
To increment the counter, apseudo-randomevent is used, such that the incrementing is a probabilistic event. To save space, only the exponent is kept. For example, in base 2, the counter can estimate the count to be 1, 2, 4, 8, 16, 32, and all of thepowers of two. The memory requirement is simply to hold theexponent.
As an example, to increment from 4 to 8, a pseudo-random number would be generated such that the probability the counter is increased is 0.25. Otherwise, the counter remains at 4.
The table below illustrates some of the potential values of the counter:
If the counter holds the value of 101, which equates to an exponent of 5 (the decimal equivalent of 101), then the estimated count is25{\displaystyle 2^{5}}, or 32. There is a fairly low probability that the actual count of increment events was 5 (11024=1×12×14×18×116{\displaystyle {\frac {1}{1024}}=1\times {\frac {1}{2}}\times {\frac {1}{4}}\times {\frac {1}{8}}\times {\frac {1}{16}}}). The actual count of increment events is likely to be "around 32", but it could be arbitrarily high (with decreasing probabilities for actual counts above 39).
While using powers of 2 as counter values is memory efficient, arbitrary values tend to create a dynamic error range, and the smaller values will have a greater error ratio than bigger values. Other methods of selecting counter values consider parameters such as memory availability, desired error ratio, or counting range to provide an optimal set of values.[2]
However, when several counters share the same values, values are optimized according to the counter with the largest counting range, and produce sub-optimal accuracy for smaller counters. Mitigation is achieved by maintaining Independent Counter Estimation buckets,[3]which restrict the effect of a larger counter to the other counters in the bucket.
The algorithm can be implemented by hand. When incrementing the counter, flip a coin a number of times corresponding to the counter's current value. If it comes up heads each time, then increment the counter. Otherwise, do not increment it.
This can be easily achieved on a computer. Letc{\displaystyle c}be the current value of the counter. Generatec{\displaystyle c}pseudo-random bits, take thelogical ANDof all of them, and add the result to the counter. Since the result is zero if any of those pseudo-random bits is zero, the counter is incremented with probability2−c{\displaystyle 2^{-c}}. This procedure is executed each time a request is made to increment the counter.
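A minimal Python sketch of this increment rule, assuming base 2 and using the standard random module; the function names are illustrative, not part of the original description.

```python
import random

def increment(exponent):
    """Advance a Morris counter that currently stores `exponent`.

    The counter is incremented with probability 2**-exponent, realised here by
    AND-ing `exponent` pseudo-random bits: the AND is all-ones only when every
    bit is 1, which happens with probability 2**-exponent.
    """
    if exponent == 0:
        return 1                                  # probability 2**0 = 1: always count the first event
    all_ones = (1 << exponent) - 1
    if random.getrandbits(exponent) == all_ones:
        return exponent + 1
    return exponent

def estimate(exponent):
    """Following the description above, the estimated count is 2**exponent."""
    return 2 ** exponent
```

Calling increment repeatedly and reading off 2**exponent reproduces the behaviour of the example above: moving from an exponent of 2 (estimate 4) to an exponent of 3 (estimate 8) happens with probability 0.25.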
The algorithm is useful in examining large data streams for patterns. This is particularly useful in applications ofdata compression, sight and sound recognition, and otherartificial intelligenceapplications.
|
https://en.wikipedia.org/wiki/Approximate_counting_algorithm
|
Atlantic City algorithmis aprobabilisticpolynomial timealgorithm(PP Complexity Class) that answers correctly at least 75% of the time (or, in some versions, some other value greater than 50%). The term "Atlantic City" was first introduced in 1982 byJ. Finnin an unpublished manuscript entitledComparison of probabilistic tests for primality.[1]
Two other common classes of probabilistic algorithms areMonte Carlo algorithmsandLas Vegas algorithms. Monte Carlo algorithms are always fast, but only probably correct. On the other hand, Las Vegas algorithms are always correct, but only probably fast. The Atlantic City algorithms, which are bounded probabilistic polynomial time algorithms, are probably correct and probably fast.[2]
|
https://en.wikipedia.org/wiki/Atlantic_City_algorithm
|
Incomputer science,bogosort[1][2](also known aspermutation sortandstupid sort[3]) is asorting algorithmbased on thegenerate and testparadigm. The function successively generatespermutationsof its input until it finds one that is sorted. It is not considered useful for sorting, but may be used for educational purposes, to contrast it with more efficient algorithms. The algorithm's name is aportmanteauof the wordsbogusandsort.[4]
Two versions of this algorithm exist: a deterministic version that enumerates all permutations until it hits a sorted one,[2][5]and arandomizedversion that randomly permutes its input and checks whether it is sorted. An analogy for the working of the latter version is to sort adeck of cardsby throwing the deck into the air, picking the cards up at random, and repeating the process until the deck is sorted. In a worst-case scenario with this version, the random source is of low quality and happens to make the sorted permutation unlikely to occur.
The following is a description of the randomized algorithm inpseudocode:
An implementation inC:
An implementation inPython:
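A Python sketch consistent with the description below, using the illustrative helpers generate_random_array, is_sorted and bogosort:

```python
import random

def generate_random_array(length=10):
    """Create random_array: natural numbers drawn from 1 to 100."""
    return [random.randint(1, 100) for _ in range(length)]

def is_sorted(data):
    """Check whether the list is in non-decreasing order."""
    return all(data[i] <= data[i + 1] for i in range(len(data) - 1))

def bogosort(data):
    """Randomly shuffle the list until it happens to be sorted."""
    while not is_sorted(data):
        random.shuffle(data)
    return data

random_array = generate_random_array()
print(bogosort(random_array))
```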
This code creates a random array, random_array, in generate_random_array, which is then sorted by repeated shuffling in bogosort. All data in the array are natural numbers from 1 to 100.
If all elements to be sorted are distinct, the expected number of comparisons performed in the average case by randomized bogosort isasymptotically equivalent to(e− 1)n!, and the expected number of swaps in the average case equals(n− 1)n!.[1]The expected number of swaps grows faster than the expected number of comparisons, because if the elements are not in order, this will usually be discovered after only a few comparisons, no matter how many elements there are; but the work of shuffling the collection is proportional to its size. In the worst case, the number of comparisons and swaps are both unbounded, for the same reason that a tossed coin might turn up heads any number of times in a row.
The best case occurs if the list as given is already sorted; in this case the expected number of comparisons isn− 1, and no swaps at all are carried out.[1]
For any collection of fixed size, the expected running time of the algorithm is finite for much the same reason that theinfinite monkey theoremholds: there is some probability of getting the right permutation, so given an unbounded number of tries it willalmost surelyeventually be chosen.
|
https://en.wikipedia.org/wiki/Bogosort
|
HyperLogLogis an algorithm for thecount-distinct problem, approximating the number of distinct elements in amultiset.[1]Calculating theexactcardinalityof the distinct elements of a multiset requires an amount of memory proportional to the cardinality, which is impractical for very large data sets. Probabilistic cardinality estimators, such as the HyperLogLog algorithm, use significantly less memory than this, but can only approximate the cardinality. The HyperLogLog algorithm is able to estimate cardinalities of > 109with a typical accuracy (standard error) of 2%, using 1.5 kB of memory.[1]HyperLogLog is an extension of the earlier LogLog algorithm,[2]itself deriving from the 1984Flajolet–Martin algorithm.[3]
In the original paper by Flajoletet al.[1]and in related literature on thecount-distinct problem, the term "cardinality" is used to mean the number of distinct elements in a data stream with repeated elements. However in the theory ofmultisetsthe term refers to the sum of multiplicities of each member of a multiset. This article chooses to use Flajolet's definition for consistency with the sources.
The basis of the HyperLogLog algorithm is the observation that the cardinality of a multiset of uniformly distributed random numbers can be estimated by calculating the maximum number of leading zeros in the binary representation of each number in the set. If the maximum number of leading zeros observed isn, an estimate for the number of distinct elements in the set is 2n.[1]
In the HyperLogLog algorithm, ahash functionis applied to each element in the original multiset to obtain a multiset of uniformly distributed random numbers with the same cardinality as the original multiset. The cardinality of this randomly distributed set can then be estimated using the algorithm above.
The simple estimate of cardinality obtained using the algorithm above has the disadvantage of a largevariance. In the HyperLogLog algorithm, the variance is minimised by splitting the multiset into numerous subsets, calculating the maximum number of leading zeros in the numbers in each of these subsets, and using aharmonic meanto combine these estimates for each subset into an estimate of the cardinality of the whole set.[4]
The HyperLogLog has three main operations:addto add a new element to the set,countto obtain the cardinality of the set andmergeto obtain the union of two sets. Some derived operations can be computed using theinclusion–exclusion principlelike thecardinality of the intersectionor thecardinality of the differencebetween two HyperLogLogs combining the merge and count operations.
The data of the HyperLogLog is stored in an arrayMofmcounters (or "registers") that are initialized to 0. ArrayMinitialized from a multisetSis calledHyperLogLogsketch of S.
The add operation consists of computing the hash of the input datavwith a hash functionh, getting the firstbbits (wherebislog2(m){\textstyle \log _{2}(m)}), and adding 1 to them to obtain the address of the register to modify. With the remaining bits computeρ(w){\textstyle \rho (w)}which returns the position of the leftmost 1, where leftmost position is 1 (in other words: number of leading zeros plus 1). The new value of the register will be the maximum between the current value of the register andρ(w){\textstyle \rho (w)}.
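A rough Python sketch of the add operation; the use of SHA-1 as the hash function, a 64-bit hash width, and 0-based register indexing (in place of the text's 1-based "adding 1") are illustrative assumptions of this sketch.

```python
import hashlib

def rho(w, max_bits):
    """Position of the leftmost 1-bit of w within a max_bits-bit word
    (leftmost position is 1, i.e. number of leading zeros plus 1)."""
    if w == 0:
        return max_bits + 1
    return max_bits - w.bit_length() + 1

def hll_add(M, v, b=8, hash_bits=64):
    """Add value v to the sketch M, a list of m = 2**b registers."""
    h = int.from_bytes(hashlib.sha1(str(v).encode()).digest()[:8], "big")
    j = h >> (hash_bits - b)                  # first b bits give the register index
    w = h & ((1 << (hash_bits - b)) - 1)      # remaining bits
    M[j] = max(M[j], rho(w, hash_bits - b))   # keep the largest rho seen for this register
```

With b = 8 there are m = 256 registers, so the sketch M would be initialized as [0] * 256.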
The count algorithm consists in computing the harmonic mean of themregisters, and using a constant to derive an estimateE{\textstyle E}of the count:
The intuition is that, withnbeing the unknown cardinality ofM, each subsetMj{\textstyle M_{j}}will have aboutn/m{\textstyle n/m}elements. Thenmaxx∈Mjρ(x){\textstyle \max _{x\in M_{j}}\rho (x)}should be close tolog2(n/m){\textstyle \log _{2}(n/m)}. The harmonic mean of 2 raised to these quantities ismZ{\textstyle mZ}, which should be nearn/m{\textstyle n/m}. Thus,m2Z{\textstyle m^{2}Z}should be approximatelyn.
Finally, the constantαm{\textstyle \alpha _{m}}is introduced to correct a systematic multiplicative bias present inm2Z{\textstyle m^{2}Z}due to hash collisions.
The constantαm{\textstyle \alpha _{m}}is not simple to calculate, and can be approximated with the formula[1]
The HyperLogLog technique, though, is biased for small cardinalities below a threshold of52m{\textstyle {\frac {5}{2}}m}. The original paper proposes using a different algorithm for small cardinalities known as Linear Counting.[5]In the case where the estimate provided above is less than the thresholdE<52m{\textstyle E<{\frac {5}{2}}m}, the alternative calculation can be used:
Additionally, for very large cardinalities approaching the limit of the size of the registers (E>23230{\textstyle E>{\frac {2^{32}}{30}}}for 32-bit registers), the cardinality can be estimated with:
With the above corrections for lower and upper bounds, the error can be estimated asσ=1.04/m{\textstyle \sigma =1.04/{\sqrt {m}}}.
The merge operation for two HLLs (hll1,hll2{\textstyle {\mathit {hll}}_{1},{\mathit {hll}}_{2}}) consists in obtaining the maximum for each pair of registersj:1..m{\textstyle j:1..m}
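Continuing the same illustrative setting, a Python sketch of the count and merge operations; the approximation of the constantαmand the small-range linear-counting correction follow the corrections referred to above and should be read as assumptions of this sketch rather than as the exact constants of the original paper.

```python
import math

def hll_count(M):
    """Estimate the cardinality from the registers of a sketch M."""
    m = len(M)
    alpha = 0.7213 / (1 + 1.079 / m)       # common approximation of alpha_m for large m
    Z = 1.0 / sum(2.0 ** -r for r in M)    # inverse of the indicator sum
    E = alpha * m * m * Z                  # raw estimate alpha_m * m^2 * Z
    if E <= 2.5 * m:                       # small-range correction: linear counting
        zeros = M.count(0)
        if zeros:
            E = m * math.log(m / zeros)
    return E

def hll_merge(M1, M2):
    """Merge two sketches built with the same parameters: register-wise maximum."""
    return [max(a, b) for a, b in zip(M1, M2)]
```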
To analyze the complexity, the data streaming(ϵ,δ){\displaystyle (\epsilon ,\delta )}model[6]is used, which analyzes the space necessary to get a1±ϵ{\displaystyle 1\pm \epsilon }approximation with a fixed success probability1−δ{\displaystyle 1-\delta }. The relative error of HLL is1.04/m{\displaystyle 1.04/{\sqrt {m}}}and it needsO(ϵ−2loglogn+logn){\displaystyle O(\epsilon ^{-2}\log \log n+\log n)}space, wherenis the set cardinality andmis the number of registers (usually less than one byte size).
Theaddoperation depends on the size of the output of the hash function. As this size is fixed, we can consider the running time for the add operation to beO(1){\displaystyle O(1)}.
Thecountandmergeoperations depend on the number of registersmand have a theoretical cost ofO(m){\displaystyle O(m)}. In some implementations (Redis)[7]the number of registers is fixed and the cost is considered to beO(1){\displaystyle O(1)}in the documentation.
The HyperLogLog++ algorithm proposes several improvements in the HyperLogLog algorithm to reduce memory requirements and increase accuracy in some ranges of cardinalities:[6]
When the data arrives in a single stream, the Historic Inverse Probability or martingale estimator[8][9]significantly improves the accuracy of the HLL sketch and uses 36% less memory to achieve a given error level. This estimator is provably optimal for any duplicate insensitive approximate distinct counting sketch on a single stream.
The single stream scenario also leads to variants in the HLL sketch construction.
HLL-TailCut+ uses 45% less memory than the original HLL sketch but at the cost of being dependent on the data insertion order and not being able to merge sketches.[10]
|
https://en.wikipedia.org/wiki/HyperLogLog
|
Incomputer scienceandgraph theory,Karger's algorithmis arandomized algorithmto compute aminimum cutof a connectedgraph. It was invented byDavid Kargerand first published in 1993.[1]
The idea of the algorithm is based on the concept ofcontraction of an edge(u,v){\displaystyle (u,v)}in an undirected graphG=(V,E){\displaystyle G=(V,E)}. Informally speaking, the contraction of an edge merges the nodesu{\displaystyle u}andv{\displaystyle v}into one, reducing the total number of nodes of the graph by one. All other edges connecting eitheru{\displaystyle u}orv{\displaystyle v}are "reattached" to the merged node, effectively producing amultigraph. Karger's basic algorithm iteratively contracts randomly chosen edges until only two nodes remain; those nodes represent acutin the original graph. By iterating this basic algorithm a sufficient number of times, a minimum cut can be foundwith high probability.
Acut(S,T){\displaystyle (S,T)}in an undirected graphG=(V,E){\displaystyle G=(V,E)}is a partition of the verticesV{\displaystyle V}into two non-empty, disjoint setsS∪T=V{\displaystyle S\cup T=V}. Thecutsetof a cut consists of the edges{uv∈E:u∈S,v∈T}{\displaystyle \{\,uv\in E\colon u\in S,v\in T\,\}}between the two parts. Thesize(orweight) of a cut in an unweighted graph is the cardinality of the cutset, i.e., the number of edges between the two parts,
There are2|V|{\displaystyle 2^{|V|}}ways of choosing for each vertex whether it belongs toS{\displaystyle S}or toT{\displaystyle T}, but two of these choices makeS{\displaystyle S}orT{\displaystyle T}empty and do not give rise to cuts. Among the remaining choices, swapping the roles ofS{\displaystyle S}andT{\displaystyle T}does not change the cut, so each cut is counted twice; therefore, there are2|V|−1−1{\displaystyle 2^{|V|-1}-1}distinct cuts.
Theminimum cut problemis to find a cut of smallest size among these cuts.
For weighted graphs with positive edge weightsw:E→R+{\displaystyle w\colon E\rightarrow \mathbf {R} ^{+}}the weight of the cut is the sum of the weights of edges between vertices in each part
which agrees with the unweighted definition forw=1{\displaystyle w=1}.
A cut is sometimes called a “global cut” to distinguish it from an “s{\displaystyle s}-t{\displaystyle t}cut” for a given pair of vertices, which has the additional requirement thats∈S{\displaystyle s\in S}andt∈T{\displaystyle t\in T}. Every global cut is ans{\displaystyle s}-t{\displaystyle t}cut for somes,t∈V{\displaystyle s,t\in V}. Thus, the minimum cut problem can be solved inpolynomial timeby iterating over all choices ofs,t∈V{\displaystyle s,t\in V}and solving the resulting minimums{\displaystyle s}-t{\displaystyle t}cut problem using themax-flow min-cut theoremand a polynomial time algorithm formaximum flow, such as thepush-relabel algorithm, though this approach is not optimal. Better deterministic algorithms for the global minimum cut problem include theStoer–Wagner algorithm, which has a running time ofO(mn+n2logn){\displaystyle O(mn+n^{2}\log n)}.[2]
The fundamental operation of Karger’s algorithm is a form ofedge contraction. The result of contracting the edgee={u,v}{\displaystyle e=\{u,v\}}is a new nodeuv{\displaystyle uv}. Every edge{w,u}{\displaystyle \{w,u\}}or{w,v}{\displaystyle \{w,v\}}forw∉{u,v}{\displaystyle w\notin \{u,v\}}to the endpoints of the contracted edge is replaced by an edge{w,uv}{\displaystyle \{w,uv\}}to the new node. Finally, the contracted nodesu{\displaystyle u}andv{\displaystyle v}with all their incident edges are removed. In particular, the resulting graph contains no self-loops. The result of contracting edgee{\displaystyle e}is denotedG/e{\displaystyle G/e}.
The contraction algorithm repeatedly contracts random edges in the graph, until only two nodes remain, at which point there is only a single cut.
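A compact Python sketch of one run of the contraction algorithm on a connected multigraph given as an edge list; the union-find bookkeeping used to track merged super-nodes is one illustrative implementation choice among those discussed below.

```python
import random

def karger_min_cut(edges, num_vertices):
    """One run of the contraction algorithm.

    `edges` is a list of (u, v) pairs on vertices 0..num_vertices-1 of a
    connected multigraph. Returns the size of the cut found, which is a
    minimum cut only with the probability analyzed below.
    """
    leader = list(range(num_vertices))        # each vertex starts as its own super-node

    def find(x):                              # union-find representative with path halving
        while leader[x] != x:
            leader[x] = leader[leader[x]]
            x = leader[x]
        return x

    remaining = num_vertices
    while remaining > 2:
        u, v = random.choice(edges)           # pick a random original edge
        ru, rv = find(u), find(v)
        if ru != rv:                          # edges that became self-loops are skipped,
            leader[ru] = rv                   # so contractions are uniform over surviving edges
            remaining -= 1

    # The cut consists of the edges whose endpoints lie in different super-nodes.
    return sum(1 for u, v in edges if find(u) != find(v))
```

Repeating karger_min_cut many times and keeping the smallest value returned yields a minimum cut with the probability analyzed below.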
The key idea of the algorithm is that it is far more likely for non min-cut edges than min-cut edges to be randomly selected and lost to contraction, since min-cut edges are usually vastly outnumbered by non min-cut edges. Consequently, it is plausible that the min-cut edges will survive all the edge contractions, and the algorithm will correctly identify the minimum cut.
When the graph is represented usingadjacency listsor anadjacency matrix, a single edge contraction operation can be implemented with a linear number of updates to the data structure, for a total running time ofO(|V|2){\displaystyle O(|V|^{2})}. Alternatively, the procedure can be viewed as an execution ofKruskal’s algorithmfor constructing theminimum spanning treein a graph where the edges have weightsw(ei)=π(i){\displaystyle w(e_{i})=\pi (i)}according to a random permutationπ{\displaystyle \pi }. Removing the heaviest edge of this tree results in two components that describe a cut. In this way, the contraction procedure can be implemented likeKruskal’s algorithmin timeO(|E|log|E|){\displaystyle O(|E|\log |E|)}.
The best known implementations useO(|E|){\displaystyle O(|E|)}time and space, orO(|E|log|E|){\displaystyle O(|E|\log |E|)}time andO(|V|){\displaystyle O(|V|)}space, respectively.[1]
In a graphG=(V,E){\displaystyle G=(V,E)}withn=|V|{\displaystyle n=|V|}vertices, the contraction algorithm returns a minimum cut with polynomially small probability(n2)−1{\displaystyle {\binom {n}{2}}^{-1}}. Recall that every graph has2n−1−1{\displaystyle 2^{n-1}-1}cuts (by the discussion in the previous section), among which at most(n2){\displaystyle {\tbinom {n}{2}}}can be minimum cuts. Therefore, the success probability for this algorithm is much better than the probability for picking a cut at random, which is at most(n2)2n−1−1{\displaystyle {\frac {\tbinom {n}{2}}{2^{n-1}-1}}}.
For instance, thecycle graphonn{\displaystyle n}vertices has exactly(n2){\displaystyle {\binom {n}{2}}}minimum cuts, given by every choice of 2 edges. The contraction procedure finds each of these with equal probability.
To further establish the lower bound on the success probability, letC{\displaystyle C}denote the edges of a specific minimum cut of sizek{\displaystyle k}. The contraction algorithm returnsC{\displaystyle C}if none of the random edges deleted by the algorithm belongs to the cutsetC{\displaystyle C}. In particular, the first edge contraction avoidsC{\displaystyle C}, which happens with probability1−k/|E|{\displaystyle 1-k/|E|}. The minimumdegreeofG{\displaystyle G}is at leastk{\displaystyle k}(otherwise a minimum degree vertex would induce a smaller cut where one of the two partitions contains only the minimum degree vertex), so|E|⩾nk/2{\displaystyle |E|\geqslant nk/2}. Thus, the probability that the contraction algorithm picks an edge fromC{\displaystyle C}is
The probabilitypn{\displaystyle p_{n}}that the contraction algorithm on ann{\displaystyle n}-vertex graph avoidsC{\displaystyle C}satisfies the recurrencepn⩾(1−2n)pn−1{\displaystyle p_{n}\geqslant \left(1-{\frac {2}{n}}\right)p_{n-1}}, withp2=1{\displaystyle p_{2}=1}, which can be expanded as
By repeating the contraction algorithmT=(n2)lnn{\displaystyle T={\binom {n}{2}}\ln n}times with independent random choices and returning the smallest cut, the probability of not finding a minimum cut is
The total running time forT{\displaystyle T}repetitions for a graph withn{\displaystyle n}vertices andm{\displaystyle m}edges isO(Tm)=O(n2mlogn){\displaystyle O(Tm)=O(n^{2}m\log n)}.
An extension of Karger’s algorithm due toDavid KargerandClifford Steinachieves an order of magnitude improvement.[3]
The basic idea is to perform the contraction procedure until the graph reachest{\displaystyle t}vertices.
The probabilitypn,t{\displaystyle p_{n,t}}that this contraction procedure avoids a specific cutC{\displaystyle C}in ann{\displaystyle n}-vertex graph is
pn,t≥∏i=0n−t−1(1−2n−i)=(t2)/(n2).{\displaystyle p_{n,t}\geq \prod _{i=0}^{n-t-1}{\Bigl (}1-{\frac {2}{n-i}}{\Bigr )}={\binom {t}{2}}{\Bigg /}{\binom {n}{2}}\,.}
This expression is approximatelyt2/n2{\displaystyle t^{2}/n^{2}}and becomes less than12{\displaystyle {\frac {1}{2}}}aroundt=n/2{\displaystyle t=n/{\sqrt {2}}}. In particular, the probability that an edge fromC{\displaystyle C}is contracted grows towards the end. This motivates the idea of switching to a slower algorithm after a certain number of contraction steps.
The contraction parametert{\displaystyle t}is chosen so that each call to contract has probability at least 1/2 of success (that is, of avoiding the contraction of an edge from a specific cutsetC{\displaystyle C}). This allows the successful part of the recursion tree to be modeled as arandom binary treegenerated by a criticalGalton–Watson process, and to be analyzed accordingly.[3]
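A Python sketch of the resulting recursive procedure (often called fastmincut); the edge-list representation, the contraction helper, and the small base case handled by plain contraction rather than exhaustive search are illustrative simplifications.

```python
import math
import random

def contract(edges, num_vertices, t):
    """Contract random edges of a connected multigraph until t super-nodes remain.
    Returns the surviving edges relabelled to vertices 0..t-1."""
    leader = list(range(num_vertices))

    def find(x):
        while leader[x] != x:
            leader[x] = leader[leader[x]]
            x = leader[x]
        return x

    remaining = num_vertices
    while remaining > t:
        u, v = random.choice(edges)
        ru, rv = find(u), find(v)
        if ru != rv:                          # edges that became self-loops are skipped
            leader[ru] = rv
            remaining -= 1

    roots = sorted({find(x) for x in range(num_vertices)})
    relabel = {r: i for i, r in enumerate(roots)}
    return [(relabel[find(u)], relabel[find(v)])
            for u, v in edges if find(u) != find(v)]

def fastmincut(edges, num_vertices):
    """Karger–Stein recursion: contract to about n/sqrt(2) vertices twice,
    recurse on both reduced graphs, and keep the smaller cut found."""
    if num_vertices <= 6:
        return len(contract(edges, num_vertices, 2))
    t = math.ceil(1 + num_vertices / math.sqrt(2))
    return min(fastmincut(contract(edges, num_vertices, t), t),
               fastmincut(contract(edges, num_vertices, t), t))
```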
The probabilityP(n){\displaystyle P(n)}that this random tree of successful calls contains a long-enough path to reach the base of the recursion and findC{\displaystyle C}is given by the recurrence relation
with solutionP(n)=Ω(1logn){\displaystyle P(n)=\Omega \left({\frac {1}{\log n}}\right)}. The running time of fastmincut satisfies
with solutionT(n)=O(n2logn){\displaystyle T(n)=O(n^{2}\log n)}. To achieve error probabilityO(1/n){\displaystyle O(1/n)}, the algorithm can be repeatedO(logn/P(n)){\displaystyle O(\log n/P(n))}times, for an overall running time ofT(n)⋅lognP(n)=O(n2log3n){\displaystyle T(n)\cdot {\frac {\log n}{P(n)}}=O(n^{2}\log ^{3}n)}. This is an order of magnitude improvement over Karger’s original algorithm.[3]
To determine a min-cut, one has to touch every edge in the graph at least once, which isΘ(n2){\displaystyle \Theta (n^{2})}time in adense graph. The Karger–Stein min-cut algorithm has a running time ofO(n2lnO(1)n){\displaystyle O(n^{2}\ln ^{O(1)}n)}, which is very close to that.
|
https://en.wikipedia.org/wiki/Karger%27s_algorithm
|
Incomputing, aLas Vegas algorithmis arandomized algorithmthat always givescorrectresults; that is, it always produces the correct result or it informs about the failure. However, the runtime of a Las Vegas algorithm differs depending on the input. The usual definition of a Las Vegas algorithm includes the restriction that theexpectedruntime be finite, where the expectation is carried out over the space of random information, or entropy, used in the algorithm. An alternative definition requires that a Las Vegas algorithm always terminates (iseffective), but may output asymbol not part of the solution spaceto indicate failure in finding a solution.[1]The nature of Las Vegas algorithms makes them suitable in situations where the number of possible solutions is limited, and where verifying the correctness of a candidate solution is relatively easy while finding a solution is complex.
Systematic search methods for computationally hard problems, such as some variants of theDavis–Putnam algorithmfor propositional satisfiability (SAT), also utilize non-deterministic decisions, and can thus also be considered Las Vegas algorithms.[2]
Las Vegas algorithms were introduced byLászló Babaiin 1979, in the context of thegraph isomorphism problem, as a dual toMonte Carlo algorithms.[3]Babai[4]introduced the term "Las Vegas algorithm" alongside an example involving coin flips: the algorithm depends on a series of independent coin flips, and there is a small chance of failure (no result). However, in contrast to Monte Carlo algorithms, the Las Vegas algorithm can guarantee the correctness of any reported result.
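As a concrete illustration, the following Python sketch repeatedly draws a random indexkinto an arrayAuntil the entry at that index equals 1; the function name and the use of the random module are illustrative choices.

```python
import random

def find_index_of_one(A):
    """Return an index k with A[k] == 1, retrying random indices until one is found."""
    while True:
        k = random.randrange(len(A))   # the random choice that makes the runtime vary
        if A[k] == 1:
            return k
```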
As mentioned above, Las Vegas algorithms always return correct results. The code above illustrates this property. A variablekis generated randomly; afterkis generated,kis used to index the arrayA. If this index contains the value 1, thenkis returned; otherwise, the algorithm repeats this process until it finds 1. Although this Las Vegas algorithm is guaranteed to find the correct answer, it does not have a fixed runtime; due to the randomization (the random choice ofkin the code above), it is possible for arbitrarily much time to elapse before the algorithm terminates.
This section provides the conditions that characterize an algorithm's being of Las Vegas type.
An algorithm A is a Las Vegas algorithm for problem class X, if[5]
There are three notions ofcompletenessfor Las Vegas algorithms:
Let P(RTA,x≤ t) denote the probability that A finds a solution for a soluble instance x within time t; then A is complete exactly if for each x there exists some tmax such that P(RTA,x≤ tmax) = 1.
Approximate completeness is primarily of theoretical interest, as the time limits for finding solutions are usually too large to be of practical use.
Las Vegas algorithms have different criteria for the evaluation based on the problem setting. These criteria are divided into three categories with different time limits since Las Vegas algorithms do not have set time complexity. Here are some possible application scenarios:
(Type 1 and Type 2 are special cases of Type 3.)
For Type 1 where there is no time limit, the average run-time can represent the run-time behavior. This is not the same case for Type 2.
Here,P(RT≤tmax), the probability of finding a solution within the time limittmax, describes its run-time behavior.
In case of Type 3, its run-time behavior can only be represented by the run-time distribution functionrtd:R→ [0,1] defined asrtd(t) =P(RT≤t) or its approximation.
The run-time distribution (RTD) is the distinctive way to describe the run-time behavior of a Las Vegas algorithm.
With this data, we can easily get other criteria such as the mean run-time, standard deviation, median, percentiles, or success probabilitiesP(RT≤t) for arbitrary time-limitst.
Las Vegas algorithms arise frequently insearch problems. For example, one looking for some information online might search related websites for the desired information. The time complexity thus ranges from getting "lucky" and finding the content immediately, to being "unlucky" and spending large amounts of time. Once the right website is found, then there is no possibility of error.[6]
A simple example is randomizedquicksort, where the pivot is chosen randomly and divides the elements into three partitions: elements less than the pivot, elements equal to the pivot, and elements greater than the pivot. The randomized quicksort always generates the solution, which in this case is the sorted array. Unfortunately, the time complexity is not as obvious: it turns out that the runtime depends on which element is picked as the pivot.
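A short Python sketch of this randomized quicksort with three-way partitioning (the function name is illustrative):

```python
import random

def random_quicksort(data):
    """Sort a list by recursively partitioning around a randomly chosen pivot."""
    if len(data) <= 1:
        return data
    pivot = random.choice(data)
    less = [x for x in data if x < pivot]
    equal = [x for x in data if x == pivot]
    greater = [x for x in data if x > pivot]
    return random_quicksort(less) + equal + random_quicksort(greater)
```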
The runtime of quicksort depends heavily on how well the pivot is selected. If a value of pivot is either too big or small, the partition will be unbalanced, resulting in a poor runtime efficiency. However, if the value of pivot is near the middle of the array, then the split will be reasonably well balanced, yielding a faster runtime. Since the pivot is randomly picked, the running time will be good most of the time and bad occasionally.
The average-case runtime is harder to determine, since the analysis depends not on the input distribution but on the random choices that the algorithm makes. The average runtime of quicksort is computed over all possible random choices that the algorithm might make when choosing the pivot.
Although the worst-case runtime is Θ(n2), the average-case runtime is Θ(nlogn). It turns out that the worst-case does not happen often. For large values ofn, the runtime is Θ(nlogn) with a high probability.
Note that the probability that the pivot is the middle element each time is only one out ofnchoices, which is very rare. However, the runtime stays the same even when the split is 10%–90% instead of 50%–50%, because the depth of the recursion tree is stillO(logn) withO(n) work done at each level of recursion.
Theeight queens problemis usually solved with a backtracking algorithm. However, a Las Vegas algorithm can be applied; in fact, it is more efficient than backtracking.
Place 8 queens on a chessboard so that no one attacks another. Remember that a queen attacks other pieces on the same row, column and diagonals.
Assume thatkrows, 0 ≤k≤ 8, are successfully occupied by queens.
Ifk= 8, then stop with success. Otherwise, proceed to occupy rowk+ 1.
Calculate all positions on this row not attacked by existing queens. If there are none, then fail. Otherwise, pick one at random, incrementkand repeat.
Note that the algorithm simply fails if a queen cannot be placed. But the process can be repeated, and each repetition will generate a different arrangement.[7]
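A Python sketch of this Las Vegas procedure; representing a placement as a list of column indices, one per occupied row, is an illustrative choice.

```python
import random

def place_queens(n=8):
    """Try to place n queens row by row, choosing a random safe column each time.

    Returns the list of column positions on success, or None on failure,
    in which case the whole attempt can simply be repeated.
    """
    cols = []                                       # cols[r] is the column of the queen in row r
    for row in range(n):
        safe = [c for c in range(n)
                if all(c != pc and abs(c - pc) != row - pr
                       for pr, pc in enumerate(cols))]
        if not safe:
            return None                             # fail: no unattacked position on this row
        cols.append(random.choice(safe))
    return cols

def eight_queens():
    """Repeat the random placement until it succeeds (a Las Vegas algorithm)."""
    while True:
        placement = place_queens()
        if placement is not None:
            return placement
```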
Thecomplexity classofdecision problemsthat have Las Vegas algorithms withexpectedpolynomial runtime isZPP.
It turns out that ZPP = RP ∩ co-RP,
which is intimately connected with the way Las Vegas algorithms are sometimes constructed. Namely the classRPconsists of all decision problems for which a randomized polynomial-time algorithm exists that always answers correctly when the correct answer is "no", but is allowed to be wrong with a certain probability bounded away from one when the answer is "yes". When such an algorithm exists for both a problem and its complement (with the answers "yes" and "no" swapped), the two algorithms can be run simultaneously and repeatedly: run each for a constant number of steps, taking turns, until one of them returns a definitive answer. This is the standard way to construct a Las Vegas algorithm that runs in expected polynomial time. Note that in general there is no worst case upper bound on the run time of a Las Vegas algorithm.
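In sketch form, the construction reads as follows in Python, where rp_algorithm and co_rp_algorithm stand for the two assumed one-sided-error routines (each flipping its own coins internally); this is an illustration of the alternation idea, not a concrete decision procedure.

```python
def zpp_decide(x, rp_algorithm, co_rp_algorithm):
    """Las Vegas decision procedure built from an RP algorithm for a problem
    and an RP algorithm for its complement.

    rp_algorithm never answers True incorrectly; co_rp_algorithm never answers
    False incorrectly. Alternating runs until one of them gives a definitive
    answer therefore always yields a correct answer.
    """
    while True:
        if rp_algorithm(x):          # a True answer from the RP side is always correct
            return True
        if not co_rp_algorithm(x):   # a False answer from the co-RP side is always correct
            return False
        # neither run was conclusive; repeat with fresh randomness
```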
In order to make a Las Vegas algorithm optimal, the expected run time should be minimized. This can be done by:
The existence of the optimal strategy might be a fascinating theoretical observation. However, it is not practical in real life because it is not easy to obtain the distribution ofTA(x). Furthermore, there is no point in running the experiment repeatedly to obtain information about the distribution, since most of the time the answer is needed only once for anyx.[8]
Las Vegas algorithms can be contrasted withMonte Carlo algorithms, in which the resources used are bounded but the answer may be incorrect with a certain (typically small)probability. A Las Vegas algorithm can be converted into a Monte Carlo algorithm by running it for set time and generating a random answer when it fails to terminate. By an application ofMarkov's inequality, we can set the bound on the probability that the Las Vegas algorithm would go over the fixed limit.
Here is a table comparing Las Vegas and Monte Carlo algorithms:[9]
If a deterministic way to test for correctness is available, then it is possible to turn a Monte Carlo algorithm into a Las Vegas algorithm. However, it is hard to convert a Monte Carlo algorithm to a Las Vegas algorithm without a way to test the algorithm. On the other hand, changing a Las Vegas algorithm to a Monte Carlo algorithm is easy. This can be done by running the Las Vegas algorithm for a specific period of time given by a confidence parameter. If the algorithm finds the solution within that time, then it is a success; if not, the output can simply be "sorry".
This is an example of Las Vegas and Monte Carlo algorithms for comparison:[10]
Assume that there is an array whose length n is even. Half of the entries in the array are 0's and the remaining half are 1's. The goal here is to find an index that contains a 1.
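A Python sketch of the two variants being compared; the 300-iteration cap for the Monte Carlo version matches the discussion below, and the function names are illustrative.

```python
import random

def find_one_las_vegas(A):
    """Keep probing random indices until a 1 is found: always correct,
    but the running time is random."""
    while True:
        k = random.randrange(len(A))
        if A[k] == 1:
            return k

def find_one_monte_carlo(A, limit=300):
    """Probe at most `limit` random indices: the running time is bounded,
    but the returned index may not actually hold a 1."""
    k = random.randrange(len(A))
    tries = 1
    while A[k] != 1 and tries < limit:
        k = random.randrange(len(A))
        tries += 1
    return k
```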
Since the Las Vegas version does not end until it finds a 1 in the array, it gambles with the run-time but not with correctness. The Monte Carlo version, on the other hand, runs at most 300 iterations, so it cannot be known in advance whether it will find a "1" in the array within those 300 iterations; it might find the solution or it might not. Therefore, unlike Las Vegas, Monte Carlo gambles with correctness but not with run-time.
|
https://en.wikipedia.org/wiki/Las_Vegas_algorithm
|
Incomputing, aMonte Carlo algorithmis arandomized algorithmwhose output may be incorrect with a certain (typically small)probability. Two examples of such algorithms are theKarger–Stein algorithm[1]and the Monte Carlo algorithm forminimum feedback arc set.[2]
The name refers to theMonte Carlo casinoin thePrincipality of Monaco, which is well-known around the world as an icon of gambling. The term "Monte Carlo" was first introduced in 1947 byNicholas Metropolis.[3]
Las Vegas algorithmsare adualof Monte Carlo algorithms and never return an incorrect answer. However, they may make random choices as part of their work. As a result, the time taken might vary between runs, even with the same input.
If there is a procedure for verifying whether the answer given by a Monte Carlo algorithm is correct, and the probability of a correct answer is bounded above zero, then with probability one, running the algorithm repeatedly while testing the answers will eventually give a correct answer. Whether this process is a Las Vegas algorithm depends on whether halting with probability one is considered to satisfy the definition.
While the answer returned by adeterministic algorithmis always expected to be correct, this is not the case for Monte Carlo algorithms. Fordecision problems, these algorithms are generally classified as eitherfalse-biased ortrue-biased. Afalse-biased Monte Carlo algorithm is always correct when it returnsfalse; atrue-biased algorithm is always correct when it returnstrue. While this describes algorithms withone-sided errors, others might have no bias; these are said to havetwo-sided errors. The answer they provide (eithertrueorfalse) will be incorrect, or correct, with some bounded probability.
For instance, theSolovay–Strassen primality testis used to determine whether a given number is aprime number. It always answerstruefor prime number inputs; for composite inputs, it answersfalsewith probability at least1⁄2andtruewith probability less than1⁄2. Thus,falseanswers from the algorithm are certain to be correct, whereas thetrueanswers remain uncertain; this is said to be a1⁄2-correct false-biased algorithm.
For a Monte Carlo algorithm with one-sided errors, the failure probability can be reduced (and the success probability amplified) by running the algorithmktimes. Consider again the Solovay–Strassen algorithm which is1⁄2-correct false-biased. One may run this algorithm multiple times returning afalseanswer if it reaches afalseresponse withinkiterations, and otherwise returningtrue. Thus, if the number is prime then the answer is always correct, and if the number is composite then the answer is correct with probability at least 1−(1−1⁄2)k= 1−2−k.
For Monte Carlo decision algorithms with two-sided error, the failure probability may again be reduced by running the algorithmktimes and returning themajority functionof the answers.
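Both amplification schemes can be sketched in Python as follows, where one_sided_test stands for a 1/2-correct false-biased routine such as the Solovay–Strassen test and two_sided_test for a decision routine with bounded two-sided error; both routines are assumptions of the sketch.

```python
def amplify_false_biased(x, one_sided_test, k):
    """Run a false-biased test k times; any false answer is certainly correct,
    so answer True only if all k runs answered True."""
    return all(one_sided_test(x) for _ in range(k))

def amplify_two_sided(x, two_sided_test, k):
    """Run a two-sided-error test k times and return the majority answer."""
    trues = sum(1 for _ in range(k) if two_sided_test(x))
    return trues * 2 > k
```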
Thecomplexity classBPPdescribesdecision problemsthat can be solved by polynomial-time Monte Carlo algorithms with a bounded probability of two-sided errors, and the complexity classRPdescribes problems that can be solved by a Monte Carlo algorithm with a bounded probability of one-sided error: if the correct answer isfalse, the algorithm always says so, but it may answerfalseincorrectly for some instances where the correct answer istrue.[4]In contrast, the complexity classZPPdescribes problems solvable by polynomial expected time Las Vegas algorithms.ZPP ⊆ RP ⊆ BPP, but it is not known whether any of these complexity classes is distinct from each other; that is, Monte Carlo algorithms may have more computational power than Las Vegas algorithms, but this has not been proven.[4]Another complexity class,PP, describes decision problems with a polynomial-time Monte Carlo algorithm that is more accurate than flipping a coin but where the error probability cannot necessarily be bounded away from1⁄2.[4]
Randomized algorithms are primarily divided into two main types, Monte Carlo and Las Vegas; however, these represent only the top of the hierarchy and can be further categorized.[4]
"Both Las Vegas and Monte Carlo are dealing with decisions, i.e., problems in theirdecision version."[4]"This however should not give a wrong impression and confine these algorithms to such problems—both types ofrandomized algorithmscan be used on numerical problems as well, problems where the output is not simple ‘yes’/‘no’, but where one needs to receive a result that is numerical in nature."[4]
(Table not reproduced: it classifies randomized algorithms into Las Vegas algorithms, including Sherwood probabilistic algorithms with a stronger bound than regular Las Vegas algorithms, and Monte Carlo algorithms, whose probability of error must be bounded, typically below12{\displaystyle {\tfrac {1}{2}}}, so as not to inhibit the usefulness of the algorithm.)
The previous table represents a general framework for Monte Carlo and Las Vegas randomized algorithms.[4]Instead of the mathematical symbol<{\displaystyle <}one could use≤{\displaystyle \leq }, thus making the probabilities in the worst case equal.[4]
Well-known Monte Carlo algorithms include the Solovay–Strassen primality test, theBaillie–PSW primality test, theMiller–Rabin primality test, and certain fast variants of theSchreier–Sims algorithmincomputational group theory.
For algorithms that are a part ofStochastic Optimization(SO) group of algorithms, where probability is not known in advance and is empirically determined, it is sometimes possible to merge Monte Carlo and such an algorithm "to have both probability bound calculated in advance and a Stochastic Optimization component."[4]"Example of such an algorithm isAnt InspiredMonte Carlo."[4][5]In this way, "drawback of SO has been mitigated, and a confidence in a solution has been established."[4][5]
|
https://en.wikipedia.org/wiki/Monte_Carlo_algorithm
|
Principle of deferred decisionsis a technique used in analysis ofrandomized algorithms.
Arandomized algorithmmakes a set of random choices. Theserandomchoices may be intricately related, making the algorithm difficult to analyze. In many of these cases thePrinciple of Deferred Decisionsis used. The idea behind the principle is that the entire set of random choices is not made in advance, but rather fixed only as the choices are revealed to the algorithm.
The principle can be used, for example, to evaluate the probability of winning a game played through adeck of cards: instead of fixing the shuffled order in advance, each card is treated as random only at the moment it is revealed, with the iteration ending after the 52nd card, and the game terminating once the fourth card from the group labeled "K" is drawn.[citation needed]
|
https://en.wikipedia.org/wiki/Principle_of_deferred_decision
|
Inanalysis of algorithms,probabilistic analysis of algorithmsis an approach to estimate thecomputational complexityof analgorithmor a computational problem. It starts from an assumption about a probabilistic distribution of the set of all possible inputs. This assumption is then used to design an efficient algorithm or to derive the complexity of a known algorithm.
This approach is not the same as that ofprobabilistic algorithms, but the two may be combined.
For non-probabilistic, more specificallydeterministic, algorithms, the most common types of complexity estimates are theaverage-case complexityand the almost-always complexity. To obtain the average-case complexity, given an input distribution, the expected time of an algorithm is evaluated, whereas for the almost-always complexity estimate, it is shown that the algorithm admits a given complexity bound thatalmost surelyholds.
In probabilistic analysis of probabilistic (randomized) algorithms, the distributions or average of all possible choices in randomized steps is also taken into account, in addition to the input distributions.
|
https://en.wikipedia.org/wiki/Probabilistic_analysis_of_algorithms
|
Theprobabilistic roadmap[1]planner is amotion planningalgorithm in robotics, which solves the problem of determining a path between a starting configuration of the robot and a goal configuration while avoiding collisions.
The basic idea behind PRM is to take random samples from theconfiguration spaceof the robot, testing them for whether they are in the free space, and use a local planner to attempt to connect these configurations to other nearby configurations. The starting and goal configurations are added in, and agraph search algorithmis applied to the resultinggraphto determine a path between the starting and goal configurations.
The probabilistic roadmap planner consists of two phases: a construction and a query phase. In the construction phase, a roadmap (graph) is built, approximating the motions that can be made in the environment. First, a random configuration is created. Then, it is connected to some neighbors, typically either theknearest neighbors or all neighbors less than some predetermined distance. Configurations and connections are added to the graph until the roadmap is dense enough. In the query phase, the start and goal configurations are connected to the graph, and the path is obtained by aDijkstra's shortest pathquery.
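An illustrative Python sketch of the two phases for a point robot in the unit square; the collision test in_free_space, the straight-line local planner segment_is_free, the k-nearest connection rule, and the assumption that the start and goal configurations have already been added as roadmap nodes are all simplifications of this sketch.

```python
import heapq
import math
import random

def build_roadmap(num_samples, k, in_free_space, segment_is_free):
    """Construction phase: sample free configurations and connect each to its
    k nearest neighbours whenever the straight-line local planner succeeds."""
    nodes = []
    while len(nodes) < num_samples:
        q = (random.random(), random.random())       # a random configuration
        if in_free_space(q):
            nodes.append(q)
    edges = {i: [] for i in range(len(nodes))}
    for i, q in enumerate(nodes):
        nearest = sorted(range(len(nodes)), key=lambda j: math.dist(q, nodes[j]))[1:k + 1]
        for j in nearest:
            if segment_is_free(q, nodes[j]):
                d = math.dist(q, nodes[j])
                edges[i].append((j, d))
                edges[j].append((i, d))
    return nodes, edges

def shortest_path(edges, start, goal):
    """Query phase: Dijkstra's shortest path over the roadmap graph."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, w in edges[u]:
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(queue, (d + w, v))
    if goal != start and goal not in prev:
        return None                                   # goal not reachable in the roadmap
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return list(reversed(path))
```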
Given certain relatively weak conditions on the shape of the free space, PRM is provably probabilistically complete, meaning that as the number of sampled points increases without bound, the probability that the algorithm will not find a path if one exists approaches zero. The rate of convergence depends on certain visibility properties of the free space, where visibility is determined by the local planner. Roughly, if each point can "see" a large fraction of the space, and also if a large fraction of each subset of the space can "see" a large fraction of its complement, then the planner will find a path quickly.
The invention of the PRM method is credited toLydia E. Kavraki.[2][3]There are many variants on the basic PRM method, some quite sophisticated, that vary the sampling strategy and connection strategy to achieve faster performance. See e.g.Geraerts & Overmars (2002)[4]for a discussion.
|
https://en.wikipedia.org/wiki/Probabilistic_roadmap
|
Incomputational complexity theory,Yao's principle(also calledYao's minimax principleorYao's lemma) relates the performance ofrandomized algorithmsto deterministic (non-random) algorithms. It states that, for certain classes of algorithms, and certain measures of the performance of the algorithms, the following two quantities are equal: the expected performance of the best randomized algorithm on its worst-case input, and the expected performance of the best deterministic algorithm against the worst-case probability distribution on inputs.
Yao's principle is often used to prove limitations on the performance of randomized algorithms, by finding a probability distribution on inputs that is difficult for deterministic algorithms, and inferring that randomized algorithms have the same limitation on their worst case performance.[1]
This principle is named afterAndrew Yao, who first proposed it in a 1977 paper.[2]It is closely related to theminimax theoremin the theory ofzero-sum games, and to theduality theory of linear programs.
Consider an arbitrary real valued cost measurec(A,x){\displaystyle c(A,x)}of an algorithmA{\displaystyle A}on an inputx{\displaystyle x}, such as its running time, for which we want to study theexpected valueover randomized algorithms and random inputs. Consider, also, afinite setA{\displaystyle {\mathcal {A}}}of deterministic algorithms (made finite, for instance, by limiting the algorithms to a specific input size), and a finite setX{\displaystyle {\mathcal {X}}}of inputs to these algorithms. LetR{\displaystyle {\mathcal {R}}}denote the class of randomized algorithms obtained from probability distributions over the deterministic behaviors inA{\displaystyle {\mathcal {A}}}, and letD{\displaystyle {\mathcal {D}}}denote the class of probability distributions on inputs inX{\displaystyle {\mathcal {X}}}. Then, Yao's principle states that:[1]
maxD∈DminA∈AEx∼D[c(A,x)]=minR∈Rmaxx∈XE[c(R,x)].{\displaystyle \max _{D\in {\mathcal {D}}}\min _{A\in {\mathcal {A}}}\mathbb {E} _{x\sim D}[c(A,x)]=\min _{R\in {\mathcal {R}}}\max _{x\in {\mathcal {X}}}\mathbb {E} [c(R,x)].}
Here,E{\displaystyle \mathbb {E} }is notation for the expected value, andx∼D{\displaystyle x\sim D}means thatx{\displaystyle x}is a random variable distributed according toD{\displaystyle D}. Finiteness ofA{\displaystyle {\mathcal {A}}}andX{\displaystyle {\mathcal {X}}}allowsD{\displaystyle {\mathcal {D}}}andR{\displaystyle {\mathcal {R}}}to be interpreted assimplicesofprobability vectors,[3]whosecompactnessimplies that the minima and maxima in these formulas exist.[4]
Another version of Yao's principle weakens it from an equality to an inequality, but at the same time generalizes it by relaxing the requirement that the algorithms and inputs come from a finite set. The direction of the inequality allows it to be used when a specific input distribution has been shown to be hard for deterministic algorithms, converting it into alower boundon the cost of all randomized algorithms. In this version, for every inputdistributionD∈D{\displaystyle D\in {\mathcal {D}}},and for every randomizedalgorithmR{\displaystyle R}inR{\displaystyle {\mathcal {R}}},[1]minA∈AEx∼D[c(A,x)]≤maxx∈XE[c(R,x)].{\displaystyle \min _{A\in {\mathcal {A}}}\mathbb {E} _{x\sim D}[c(A,x)]\leq \max _{x\in {\mathcal {X}}}\mathbb {E} [c(R,x)].}That is, the best possible deterministic performance against distributionD{\displaystyle D}is alower boundfor the performance of each randomized algorithmR{\displaystyle R}against its worst-case input. This version of Yao's principle can be proven through the chain of inequalitiesminA∈AEx∼D[c(A,x)]≤Ex∼D[c(R,x)]≤maxx∈XE[c(R,x)],{\displaystyle \min _{A\in {\mathcal {A}}}\mathbb {E} _{x\sim D}[c(A,x)]\leq \mathbb {E} _{x\sim D}[c(R,x)]\leq \max _{x\in {\mathcal {X}}}\mathbb {E} [c(R,x)],}each of which can be shown using onlylinearity of expectationand the principle thatmin≤E≤max{\displaystyle \min \leq \mathbb {E} \leq \max }for all distributions. By avoiding maximization and minimization overD{\displaystyle {\mathcal {D}}}andR{\displaystyle {\mathcal {R}}}, this version of Yao's principle can apply in some cases whereX{\displaystyle {\mathcal {X}}}orA{\displaystyle {\mathcal {A}}}are not finite.[5]Although this direction of inequality is the direction needed for proving lower bounds on randomized algorithms, the equality version of Yao's principle, when it is available, can also be useful in these proofs. The equality of the principle implies that there is no loss of generality in using the principle to prove lower bounds: whatever the actual best randomized algorithm might be, there is some input distribution through which a matching lower bound on its complexity can be proven.[6]
When the costc{\displaystyle c}denotes the running time of an algorithm, Yao's principle states that the best possible running time of a deterministic algorithm, on a hard input distribution, gives a lower bound for theexpected timeof anyLas Vegas algorithmon its worst-case input. Here, a Las Vegas algorithm is a randomized algorithm whose runtime may vary, but for which the result is always correct.[7][8]For example, this form of Yao's principle has been used to prove the optimality of certainMonte Carlo tree searchalgorithms for the exact evaluation ofgame trees.[8]
The time complexity ofcomparison-based sortingandselection algorithmsis often studied using the number of comparisons between pairs of data elements as a proxy for the total time. When these problems are considered over a fixed set of elements, their inputs can be expressed aspermutationsand a deterministic algorithm can be expressed as adecision tree. In this way both the inputs and the algorithms form finite sets as Yao's principle requires. A symmetrization argument identifies the hardest input distributions: they are therandom permutations, the distributions onn{\displaystyle n}distinct elements for which allpermutationsare equally likely. This is because, if any other distribution were hardest, averaging it with all permutations of the same hard distribution would be equally hard, and would produce the distribution for a random permutation. Yao's principle extends lower bounds for the average case number of comparisons made by deterministic algorithms, for random permutations, to the worst case analysis of randomized comparison algorithms.[2]
An example given by Yao is the analysis of algorithms for finding thek{\displaystyle k}th largest of a given set ofn{\displaystyle n}values, the selection problem.[2]Subsequent to Yao's work, Walter Cunto andIan Munroshowed that, for random permutations, any deterministic algorithm must perform at leastn+min(k,n−k)−O(1){\displaystyle n+\min(k,n-k)-O(1)}expected comparisons.[9]By Yao's principle, the same number of comparisons must be made by randomized algorithms on their worst-case input.[10]TheFloyd–Rivest algorithmcomes withinO(nlogn){\displaystyle O({\sqrt {n\log n}})}comparisons of this bound.[11]
Another of the original applications by Yao of his principle was to theevasiveness of graph properties, the number of tests of the adjacency of pairs of vertices needed to determine whether a graph has a given property, when the only access to the graph is through such tests.[2]Richard M. Karpconjectured that every randomized algorithm for every nontrivial monotone graph property (a property that remains true for every subgraph of a graph with the property) requires a quadratic number of tests, but only weaker bounds have been proven.[12]
As Yao stated, for graph properties that are true of the empty graph but false for some other graph onn{\displaystyle n}vertices with only a bounded numbers{\displaystyle s}of edges, a randomized algorithm must probe a quadratic number of pairs of vertices. For instance, for the property of being aplanar graph,s=9{\displaystyle s=9}because the 9-edgeutility graphis non-planar. More precisely, Yao states that for these properties, at least(12−p)1s(n2){\displaystyle \left({\tfrac {1}{2}}-p\right){\tfrac {1}{s}}{\tbinom {n}{2}}}tests are needed, for everyε>0{\displaystyle \varepsilon >0}, for a randomized algorithm to have probability at mostp{\displaystyle p}of making a mistake. Yao also used this method to show that quadratically many queries are needed for the properties of containing a giventreeorcliqueas a subgraph, of containing aperfect matching, and of containing aHamiltonian cycle, for small enough constant error probabilities.[2]
Inblack-box optimization, the problem is to determine the minimum or maximum value of a function, from a given class of functions, accessible only through calls to the function on arguments from some finite domain. In this case, the cost to be optimized is the number of calls. Yao's principle has been described as "the only method available for proving lower bounds for all randomized search heuristics for selected classes of problems".[13]Results that can be proven in this way include the following:
Incommunication complexity, an algorithm describes acommunication protocolbetween two or more parties, and its cost may be the number of bits or messages transmitted between the parties. In this case, Yao's principle describes an equality between theaverage-case complexityof deterministic communication protocols, on an input distribution that is the worst case for the problem, and the expected communication complexity of randomized protocols on their worst-case inputs.[6][14]
An example described byAvi Wigderson(based on a paper by Manu Viola) is the communication complexity for two parties, each holdingn{\displaystyle n}-bit input values, to determine which value is larger. For deterministic communication protocols, nothing better thann{\displaystyle n}bits of communication is possible, easily achieved by one party sending their whole input to the other. However, parties with a shared source of randomness and a fixed error probability can exchange 1-bithash functionsofprefixesof the input to perform a noisybinary searchfor the first position where their inputs differ, achievingO(logn){\displaystyle O(\log n)}bits of communication. This is within a constant factor of optimal, as can be shown via Yao's principle with an input distribution that chooses the position of the first difference uniformly at random, and then chooses random strings for the shared prefix up to that position and the rest of the inputs after that position.[6][15]
Yao's principle has also been applied to thecompetitive ratioofonline algorithms. An online algorithm must respond to a sequence of requests, without knowledge of future requests, incurring some cost or profit per request depending on its choices. The competitive ratio is the ratio of its cost or profit to the value that could be achieved by anoffline algorithmwith access to knowledge of all future requests, for a worst-case request sequence that causes this ratio to be as far from one as possible. Here, one must be careful to formulate the ratio with the algorithm's performance in the numerator and the optimal performance of an offline algorithm in the denominator, so that the cost measure can be formulated as an expected value rather than as thereciprocalof an expected value.[5]
An example given byBorodin & El-Yaniv (2005)concernspage replacement algorithms, which respond to requests forpagesof computer memory by using acacheofk{\displaystyle k}pages, for a given parameterk{\displaystyle k}. If a request matches a cached page, it costs nothing; otherwise one of the cached pages must be replaced by the requested page, at a cost of onepage fault. A difficult distribution of request sequences for this model can be generated by choosing each request uniformly at random from a pool ofk+1{\displaystyle k+1}pages. Any deterministic online algorithm hasnk+1{\displaystyle {\tfrac {n}{k+1}}}expected page faults, overn{\displaystyle n}requests. Instead, an offline algorithm can divide the request sequence into phases within which onlyk{\displaystyle k}pages are used, incurring only one fault at the start of a phase to replace the one page that is unused within the phase. As an instance of thecoupon collector's problem, the expected requests per phase is(k+1)Hk{\displaystyle (k+1)H_{k}}, whereHk=1+12+⋯+1k{\displaystyle H_{k}=1+{\tfrac {1}{2}}+\cdots +{\tfrac {1}{k}}}is thek{\displaystyle k}thharmonic number. Byrenewal theory, the offline algorithm incursn(k+1)Hk+o(n){\displaystyle {\tfrac {n}{(k+1)H_{k}}}+o(n)}page faults with high probability, so the competitive ratio of any deterministic algorithm against this input distribution is at leastHk{\displaystyle H_{k}}. By Yao's principle,Hk{\displaystyle H_{k}}also lower bounds the competitive ratio of any randomized page replacement algorithm against a request sequence chosen by anoblivious adversaryto be a worst case for the algorithm but without knowledge of the algorithm's random choices.[16]
For online problems in a general class related to theski rental problem, Seiden has proposed a cookbook method for deriving optimally hard input distributions, based on certain parameters of the problem.[17]
Yao's principle may be interpreted ingame theoreticterms, via a two-playerzero-sum gamein which one player,Alice, selects a deterministic algorithm, the other player, Bob, selects an input, and the payoff is the cost of the selected algorithm on the selected input. Any randomized algorithmR{\displaystyle R}may be interpreted as a randomized choice among deterministic algorithms, and thus as amixed strategyfor Alice. Similarly, a non-random algorithm may be thought of as apure strategyfor Alice. In any two-player zero-sum game, if one player chooses a mixed strategy, then the other player has an optimal pure strategy against it. By theminimax theoremofJohn von Neumann, there exists a game valuec{\displaystyle c}, and mixed strategies for each player, such that the players can guarantee expected valuec{\displaystyle c}or better by playing those strategies, and such that the optimal pure strategy against either mixed strategy produces expected value exactlyc{\displaystyle c}. Thus, the minimax mixed strategy for Alice, set against the best opposing pure strategy for Bob, produces the same expected game valuec{\displaystyle c}as the minimax mixed strategy for Bob, set against the best opposing pure strategy for Alice. This equality of expected game values, for the game described above, is Yao's principle in its form as an equality.[5]Yao's 1977 paper, originally formulating Yao's principle, proved it in this way.[2]
The optimal mixed strategy for Alice (a randomized algorithm) and the optimal mixed strategy for Bob (a hard input distribution) may each be computed using a linear program that has one player's probabilities as its variables, with a constraint on the game value for each choice of the other player. The two linear programs obtained in this way for each player aredual linear programs, whose equality is an instance of linear programming duality.[3]However, although linear programs may be solved inpolynomial time, the numbers of variables and constraints in these linear programs (numbers of possible algorithms and inputs) are typically too large to list explicitly. Therefore, formulating and solving these programs to find these optimal strategies is often impractical.[13][14]
ForMonte Carlo algorithms, algorithms that use a fixed amount of computational resources but that may produce an erroneous result, a form of Yao's principle applies to the probability of an error, the error rate of an algorithm. Choosing the hardest possible input distribution, and the algorithm that achieves the lowest error rate against that distribution, gives the same error rate as choosing an optimal algorithm and its worst case input distribution. However, the hard input distributions found in this way are not robust to changes in the parameters used when applying this principle. If an input distribution requires high complexity to achieve a certain error rate, it may nevertheless have unexpectedly low complexity for a different error rate. Ben-David and Blais show that, forBoolean functionsunder many natural measures of computational complexity, there exists an input distribution that is simultaneously hard for all error rates.[18]
Variants of Yao's principle have also been considered forquantum computing. In place of randomized algorithms, one may consider quantum algorithms that have a good probability of computing the correct value for every input (probability at least23{\displaystyle {\tfrac {2}{3}}}); this condition together withpolynomial timedefines the complexity classBQP. It does not make sense to ask for deterministic quantum algorithms, but instead one may consider algorithms that, for a given input distribution, have probability 1 of computing a correct answer, either in aweaksense that the inputs for which this is true have probability≥23{\displaystyle \geq {\tfrac {2}{3}}}, or in astrongsense in which, in addition, the algorithm must have probability 0 or 1 of generating any particular answer on the remaining inputs. For any Boolean function, the minimum complexity of a quantum algorithm that is correct with probability≥23{\displaystyle \geq {\tfrac {2}{3}}}against its worst-case input is less than or equal to the minimum complexity that can be attained, for a hard input distribution, by the best weak or strong quantum algorithm against that distribution. The weak form of this inequality is within a constant factor of being an equality, but the strong form is not.[19]
|
https://en.wikipedia.org/wiki/Randomized_algorithms_as_zero-sum_games
|
Inmachine learning, aneural network(alsoartificial neural networkorneural net, abbreviatedANNorNN) is a computational model inspired by the structure and functions of biological neural networks.[1][2]
A neural network consists of connected units or nodes calledartificial neurons, which loosely model theneuronsin the brain. Artificial neuron models that mimic biological neurons more closely have also been recently investigated and shown to significantly improve performance. These are connected byedges, which model thesynapsesin the brain. Each artificial neuron receives signals from connected neurons, then processes them and sends a signal to other connected neurons. The "signal" is areal number, and the output of each neuron is computed by some non-linear function of the sum of its inputs, called theactivation function. The strength of the signal at each connection is determined by aweight, which adjusts during the learning process.
Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (theinput layer) to the last layer (theoutput layer), possibly passing through multiple intermediate layers (hidden layers). A network is typically called a deep neural network if it has at least two hidden layers.[3]
Artificial neural networks are used for various tasks, includingpredictive modeling,adaptive control, and solving problems inartificial intelligence. They can learn from experience, and can derive conclusions from a complex and seemingly unrelated set of information.
Neural networks are typically trained throughempirical risk minimization. This method is based on the idea of optimizing the network's parameters to minimize the difference, or empirical risk, between the predicted output and the actual target values in a given dataset.[4]Gradient-based methods such asbackpropagationare usually used to estimate the parameters of the network.[4]During the training phase, ANNs learn fromlabeledtraining data by iteratively updating their parameters to minimize a definedloss function.[5]This method allows the network to generalize to unseen data.
Today's deep neural networks are based on early work instatisticsover 200 years ago. The simplest kind offeedforward neural network(FNN) is a linear network, which consists of a single layer of output nodes with linear activation functions; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated at each node. Themean squared errorsbetween these calculated outputs and the given target values are minimized by creating an adjustment to the weights. This technique has been known for over two centuries as themethod of least squaresorlinear regression. It was used as a means of finding a good rough linear fit to a set of points byLegendre(1805) andGauss(1795) for the prediction of planetary movement.[7][8][9][10][11]
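As a minimal illustration (with synthetic data and made-up coefficients), the following sketch treats a single-layer linear network as exactly this least-squares problem: the weights minimizing the mean squared error are obtained from a standard least-squares solve.

import numpy as np

# A single-layer linear "network": outputs are a weighted sum of the inputs.
# Minimizing the mean squared error over the training set is exactly the
# classical method of least squares (linear regression).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # 100 samples, 3 input features
true_w, true_b = np.array([1.5, -2.0, 0.5]), 0.3
y = X @ true_w + true_b + 0.01 * rng.normal(size=100)

Xb = np.c_[X, np.ones(len(X))]                # append a constant column for the bias
w = np.linalg.lstsq(Xb, y, rcond=None)[0]     # least-squares weights = trained network
print("recovered weights:", np.round(w[:-1], 3), "bias:", round(w[-1], 3))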
Historically, digital computers such as thevon Neumann modeloperate via the execution of explicit instructions with access to memory by a number of processors. Some neural networks, on the other hand, originated from efforts to model information processing in biological systems through the framework ofconnectionism. Unlike the von Neumann model, connectionist computing does not separate memory and processing.
Warren McCullochandWalter Pitts[12](1943) considered a non-learning computational model for neural networks.[13]This model paved the way for research to split into two approaches. One approach focused on biological processes while the other focused on the application of neural networks to artificial intelligence.
In the late 1940s,D. O. Hebb[14]proposed a learninghypothesisbased on the mechanism ofneural plasticitythat became known asHebbian learning. It was used in many early neural networks, such as Rosenblatt'sperceptronand theHopfield network. Farley andClark[15](1954) used computational machines to simulate a Hebbian network. Other neural network computational machines were created byRochester, Holland, Habit and Duda (1956).[16]
In 1958, psychologistFrank Rosenblattdescribed the perceptron, one of the first implemented artificial neural networks,[17][18][19][20]funded by the United StatesOffice of Naval Research.[21]R. D. Joseph (1960)[22]mentions an even earlier perceptron-like device by Farley and Clark:[10]"Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject."
The perceptron raised public excitement for research in Artificial Neural Networks, causing the US government to drastically increase funding. This contributed to "the Golden Age of AI" fueled by the optimistic claims made by computer scientists regarding the ability of perceptrons to emulate human intelligence.[23]
The first perceptrons did not have adaptive hidden units. However, Joseph (1960)[22]also discussedmultilayer perceptronswith an adaptive hidden layer. Rosenblatt (1962)[24]: section 16cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e.,deep learning.
Fundamental research was conducted on ANNs in the 1960s and 1970s. The first working deep learning algorithm was theGroup method of data handling, a method to train arbitrarily deep neural networks, published byAlexey Ivakhnenkoand Lapa in theSoviet Union(1965). They regarded it as a form of polynomial regression,[25]or a generalization of Rosenblatt's perceptron.[26]A 1971 paper described a deep network with eight layers trained by this method,[27]which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates."[10]
The first deep learningmultilayer perceptrontrained bystochastic gradient descent[28]was published in 1967 byShun'ichi Amari.[29]In computer experiments conducted by Amari's student Saito, a five layer MLP with two modifiable layers learnedinternal representationsto classify non-linearly separable pattern classes.[10]Subsequent developments in hardware and hyperparameter tuning have made end-to-end stochastic gradient descent the currently dominant training technique.
In 1969,Kunihiko Fukushimaintroduced theReLU(rectified linear unit) activation function.[10][30][31]The rectifier has become the most popular activation function for deep learning.[32]
Nevertheless, research stagnated in the United States following the work ofMinskyandPapert(1969),[33]who emphasized that basic perceptrons were incapable of processing the exclusive-or circuit. This insight was irrelevant for the deep networks of Ivakhnenko (1965) and Amari (1967).
In 1976, transfer learning was introduced in neural network learning.[34][35]
Deep learning architectures forconvolutional neural networks(CNNs) with convolutional layers and downsampling layers and weight replication began with theNeocognitronintroduced by Kunihiko Fukushima in 1979, though not trained by backpropagation.[36][37][38]
Backpropagationis an efficient application of thechain rulederived byGottfried Wilhelm Leibnizin 1673[39]to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt,[24]but he did not know how to implement this, althoughHenry J. Kelleyhad a continuous precursor of backpropagation in 1960 in the context ofcontrol theory.[40]In 1970,Seppo Linnainmaapublished the modern form of backpropagation in his Master'sthesis(1970).[41][42][10]G.M. Ostrovski et al. republished it in 1971.[43][44]Paul Werbosapplied backpropagation to neural networks in 1982[45][46](his 1974 PhD thesis, reprinted in a 1994 book,[47]did not yet describe the algorithm[44]). In 1986,David E. Rumelhartet al. popularised backpropagation but did not cite the original work.[48]
Kunihiko Fukushima'sconvolutional neural network(CNN) architecture of 1979[36]also introducedmax pooling,[49]a popular downsampling procedure for CNNs. CNNs have become an essential tool forcomputer vision.
Thetime delay neural network(TDNN) was introduced in 1987 byAlex Waibelto apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation.[50][51]In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition.[52]In 1989,Yann LeCunet al. created a CNN calledLeNetforrecognizing handwritten ZIP codeson mail. Training required 3 days.[53]In 1990, Wei Zhang implemented a CNN onoptical computinghardware.[54]In 1991, a CNN was applied to medical image object segmentation[55]and breast cancer detection in mammograms.[56]LeNet-5 (1998), a 7-level CNN by Yann LeCun et al., that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32×32 pixel images.[57]
From 1988 onward,[58][59]the use of neural networks transformed the field ofprotein structure prediction, in particular when the first cascading networks were trained onprofiles(matrices) produced by multiplesequence alignments.[60]
One origin of RNN wasstatistical mechanics. In 1972,Shun'ichi Amariproposed to modify the weights of anIsing modelbyHebbian learningrule as a model ofassociative memory, adding in the component of learning.[61]This was popularized as the Hopfield network byJohn Hopfield(1982).[62]Another origin of RNN was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901,Cajalobserved "recurrent semicircles" in thecerebellar cortex.[63]Hebbconsidered "reverberating circuit" as an explanation for short-term memory.[64]The McCulloch and Pitts paper (1943) considered neural networks that contain cycles, and noted that the current activity of such networks can be affected by activity indefinitely far in the past.[12]
In 1982, a recurrent neural network with an array architecture (rather than a multilayer perceptron architecture), namely a Crossbar Adaptive Array,[65][66]used direct recurrent connections from the output to the supervisor (teaching) inputs. In addition to computing actions (decisions), it computed internal state evaluations (emotions) of the consequence situations. Eliminating the external supervisor, it introduced the self-learning method in neural networks.
In cognitive psychology, the journal American Psychologist carried a debate in the early 1980s on the relation between cognition and emotion. Zajonc (1980) argued that emotion is computed first and is independent of cognition, while Lazarus (1982) argued that cognition is computed first and is inseparable from emotion.[67][68]In 1982, the Crossbar Adaptive Array gave a neural network model of the cognition-emotion relation.[65][69]It was an example of an AI system, a recurrent neural network, contributing to an issue that was simultaneously being addressed by cognitive psychology.
Two early influential works were theJordan network(1986) and theElman network(1990), which applied RNN to studycognitive psychology.
In the 1980s, backpropagation did not work well for deep RNNs. To overcome this problem, in 1991,Jürgen Schmidhuberproposed the "neural sequence chunker" or "neural history compressor"[70][71]which introduced the important concepts of self-supervised pre-training (the "P" inChatGPT) and neuralknowledge distillation.[10]In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequentlayersin an RNN unfolded in time.[72]
In 1991,Sepp Hochreiter's diploma thesis[73]identified and analyzed thevanishing gradient problem[73][74]and proposed recurrentresidualconnections to solve it. He and Schmidhuber introducedlong short-term memory(LSTM), which set accuracy records in multiple application domains.[75][76]This was not yet the modern version of LSTM, which required the forget gate introduced in 1999.[77]LSTM has since become the default choice of RNN architecture.
During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed byTerry Sejnowski,Peter Dayan,Geoffrey Hinton, etc., including theBoltzmann machine,[78]restricted Boltzmann machine,[79]Helmholtz machine,[80]and thewake-sleep algorithm.[81]These were designed for unsupervised learning of deep generative models.
Between 2009 and 2012, ANNs began winning prizes in image recognition contests, approaching human level performance on various tasks, initially inpattern recognitionandhandwriting recognition.[82][83]In 2011, a CNN namedDanNet[84][85]by Dan Ciresan, Ueli Meier, Jonathan Masci,Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3.[38]It then won more contests.[86][87]They also showed howmax-poolingCNNs on GPU improved performance significantly.[88]
In October 2012,AlexNetbyAlex Krizhevsky,Ilya Sutskever, and Geoffrey Hinton[89]won the large-scaleImageNet competitionby a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network byKaren SimonyanandAndrew Zisserman[90]and Google'sInceptionv3.[91]
In 2012,NgandDeancreated a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images.[92]Unsupervised pre-training and increased computing power fromGPUsanddistributed computingallowed the use of larger networks, particularly in image and visual recognition problems, which became known as "deep learning".[5]
Radial basis functionand wavelet networks were introduced in 2013. These can be shown to offer best approximation properties and have been applied innonlinear system identificationand classification applications.[93]
Generative adversarial network(GAN) (Ian Goodfellowet al., 2014)[94]became state of the art in generative modeling during 2014–2018 period. The GAN principle was originally published in 1991 by Jürgen Schmidhuber who called it "artificial curiosity": two neural networks contest with each other in the form of azero-sum game, where one network's gain is the other network's loss.[95][96]The first network is agenerative modelthat models aprobability distributionover output patterns. The second network learns bygradient descentto predict the reactions of the environment to these patterns. Excellent image quality is achieved byNvidia'sStyleGAN(2018)[97]based on the Progressive GAN by Tero Karras et al.[98]Here, the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GAN reached popular success, and provoked discussions concerningdeepfakes.[99]Diffusion models(2015)[100]eclipsed GANs in generative modeling since then, with systems such asDALL·E 2(2022) andStable Diffusion(2022).
In 2014, the state of the art was training "very deep neural network" with 20 to 30 layers.[101]Stacking too many layers led to a steep reduction intrainingaccuracy,[102]known as the "degradation" problem.[103]In 2015, two techniques were developed to train very deep networks: thehighway networkwas published in May 2015,[104]and the residual neural network (ResNet) in December 2015.[105][106]ResNet behaves like an open-gated Highway Net.
During the 2010s, theseq2seqmodel was developed, and attention mechanisms were added. It led to the modern Transformer architecture in 2017 inAttention Is All You Need.[107]It requires computation time that is quadratic in the size of the context window. Jürgen Schmidhuber's fast weight controller (1992)[108]scales linearly and was later shown to be equivalent to the unnormalized linear Transformer.[109][110][10]Transformers have increasingly become the model of choice fornatural language processing.[111]Many modernlarge language modelssuch asChatGPT,GPT-4, andBERTuse this architecture.
ANNs began as an attempt to exploit the architecture of the human brain to perform tasks that conventional algorithms had little success with. They soon reoriented towards improving empirical results, abandoning attempts to remain true to their biological precursors. ANNs have the ability to learn and model non-linearities and complex relationships. This is achieved by neurons being connected in various patterns, allowing the output of some neurons to become the input of others. The network forms adirected,weighted graph.[112]
An artificial neural network consists of simulated neurons. Each neuron is connected to other nodes via links, analogous to the biological axon-synapse-dendrite connection. Nodes receive data over their incoming links, operate on it, and pass results along their outgoing links. Each link has a weight that determines the strength of one node's influence on another,[113]so the weights modulate the signals passed between neurons.
ANNs are composed ofartificial neuronswhich are conceptually derived from biologicalneurons. Each artificial neuron has inputs and produces a single output which can be sent to multiple other neurons.[114]The inputs can be the feature values of a sample of external data, such as images or documents, or they can be the outputs of other neurons. The outputs of the finaloutput neuronsof the neural net accomplish the task, such as recognizing an object in an image.[citation needed]
The output of a neuron is computed from the weighted sum of all its inputs, using the weights of the connections from the inputs to the neuron, plus a bias term added to this sum.[115]This weighted sum is sometimes called the activation. It is then passed through a (usually nonlinear) activation function to produce the output. The initial inputs are external data, such as images and documents. The ultimate outputs accomplish the task, such as recognizing an object in an image.[116]
The neurons are typically organized into multiple layers, especially in deep learning. Neurons of one layer connect only to neurons of the immediately preceding and immediately following layers. The layer that receives external data is theinput layer. The layer that produces the ultimate result is theoutput layer. In between them are zero or morehidden layers. Single layer and unlayered networks are also used. Between two layers, multiple connection patterns are possible. They can be 'fully connected', with every neuron in one layer connecting to every neuron in the next layer. They can bepooling, where a group of neurons in one layer connects to a single neuron in the next layer, thereby reducing the number of neurons in that layer.[117]Neurons with only such connections form adirected acyclic graphand are known asfeedforward networks.[118]Alternatively, networks that allow connections between neurons in the same or previous layers are known asrecurrent networks.[119]
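A minimal sketch of the computations described in the two preceding paragraphs (the layer sizes, random weights, and choice of tanh as the activation function are illustrative, not prescriptive): each neuron forms the weighted sum of its inputs plus a bias and passes it through an activation function, and the neurons are organized into fully connected layers through which signals flow forward.

import numpy as np

def feedforward(x, layers):
    # Each layer is a (weights, biases) pair. For every neuron, W @ x + b is the
    # weighted sum of its inputs plus the bias (the "activation"), which is then
    # passed through a nonlinear activation function (here tanh).
    for W, b in layers:
        x = np.tanh(W @ x + b)
    return x

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),   # input (3 features) -> hidden layer of 4 neurons
          (rng.normal(size=(2, 4)), np.zeros(2))]   # hidden layer -> output layer of 2 neurons
print(feedforward(np.array([0.1, -0.5, 0.9]), layers))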
Ahyperparameteris a constantparameterwhose value is set before the learning process begins. The values of parameters are derived via learning. Examples of hyperparameters includelearning rate, the number of hidden layers and batch size.[citation needed]The values of some hyperparameters can be dependent on those of other hyperparameters. For example, the size of some layers can depend on the overall number of layers.[citation needed]
Learning is the adaptation of the network to better handle a task by considering sample observations. Learning involves adjusting the weights (and optional thresholds) of the network to improve the accuracy of the result. This is done by minimizing the observed errors. Learning is complete when examining additional observations does not usefully reduce the error rate. Even after learning, the error rate typically does not reach 0. If after learning, the error rate is too high, the network typically must be redesigned. Practically this is done by defining acost functionthat is evaluated periodically during learning. As long as its output continues to decline, learning continues. The cost is frequently defined as astatisticwhose value can only be approximated. The outputs are actually numbers, so when the error is low, the difference between the output (almost certainly a cat) and the correct answer (cat) is small. Learning attempts to reduce the total of the differences across the observations. Most learning models can be viewed as a straightforward application ofoptimizationtheory andstatistical estimation.[112][120]
The learning rate defines the size of the corrective steps that the model takes to adjust for errors in each observation.[121]A high learning rate shortens the training time, but with lower ultimate accuracy, while a lower learning rate takes longer, but with the potential for greater accuracy. Optimizations such asQuickpropare primarily aimed at speeding up error minimization, while other improvements mainly try to increase reliability. In order to avoidoscillationinside the network such as alternating connection weights, and to improve the rate of convergence, refinements use anadaptive learning ratethat increases or decreases as appropriate.[122]The concept of momentum allows the balance between the gradient and the previous change to be weighted such that the weight adjustment depends to some degree on the previous change. A momentum close to 0 emphasizes the gradient, while a value close to 1 emphasizes the last change.[citation needed]
While it is possible to define a cost functionad hoc, frequently the choice is determined by the function's desirable properties (such asconvexity) because it arises from the model (e.g. in a probabilistic model, the model'sposterior probabilitycan be used as an inverse cost).[citation needed]
Backpropagation is a method used to adjust the connection weights to compensate for each error found during learning. The error amount is effectively divided among the connections. Technically, backpropagation calculates thegradient(the derivative) of thecost functionassociated with a given state with respect to the weights. The weight updates can be done via stochastic gradient descent or other methods, such asextreme learning machines,[123]"no-prop" networks,[124]training without backtracking,[125]"weightless" networks,[126][127]andnon-connectionist neural networks.[citation needed]
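The following sketch uses a toy two-layer network with a squared-error cost; all sizes, the tanh activation, and the learning rate are illustrative and not tied to any particular published formulation. It shows the chain-rule computation: the forward pass evaluates the network, the backward pass propagates the error to obtain the gradient of the cost with respect to every weight, and a gradient-descent step updates the weights.

import numpy as np

rng = np.random.default_rng(0)
x, target = rng.normal(size=3), np.array([1.0, 0.0])

# A tiny two-layer network: hidden layer with tanh activation, linear output layer.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

for step in range(200):
    # Forward pass.
    h = np.tanh(W1 @ x + b1)
    y = W2 @ h + b2
    loss = 0.5 * np.sum((y - target) ** 2)        # cost function (squared error)

    # Backward pass: the chain rule gives the gradient of the cost w.r.t. each weight.
    dy = y - target                               # dL/dy
    dW2, db2 = np.outer(dy, h), dy                # gradients for the output layer
    dh = W2.T @ dy                                # error shared out over the connections
    dpre = dh * (1.0 - h ** 2)                    # through the tanh nonlinearity
    dW1, db1 = np.outer(dpre, x), dpre            # gradients for the hidden layer

    # Gradient-descent weight update.
    lr = 0.1
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", round(float(loss), 6))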
Machine learning is commonly separated into three main learning paradigms:supervised learning,[128]unsupervised learning[129]andreinforcement learning.[130]Each corresponds to a particular learning task.
Supervised learninguses a set of paired inputs and desired outputs. The learning task is to produce the desired output for each input. In this case, the cost function is related to eliminating incorrect deductions.[131]A commonly used cost is themean-squared error, which tries to minimize the average squared error between the network's output and the desired output. Tasks suited for supervised learning arepattern recognition(also known as classification) andregression(also known as function approximation). Supervised learning is also applicable to sequential data (e.g., for handwriting, speech andgesture recognition). This can be thought of as learning with a "teacher", in the form of a function that provides continuous feedback on the quality of solutions obtained thus far.
Inunsupervised learning, input data is given along with the cost function, some function of the datax{\displaystyle \textstyle x}and the network's output. The cost function is dependent on the task (the model domain) and anya prioriassumptions (the implicit properties of the model, its parameters and the observed variables). As a trivial example, consider the modelf(x)=a{\displaystyle \textstyle f(x)=a}wherea{\displaystyle \textstyle a}is a constant and the costC=E[(x−f(x))2]{\displaystyle \textstyle C=E[(x-f(x))^{2}]}. Minimizing this cost produces a value ofa{\displaystyle \textstyle a}that is equal to the mean of the data. The cost function can be much more complicated. Its form depends on the application: for example, incompressionit could be related to themutual informationbetweenx{\displaystyle \textstyle x}andf(x){\displaystyle \textstyle f(x)}, whereas in statistical modeling, it could be related to theposterior probabilityof the model given the data (note that in both of those examples, those quantities would be maximized rather than minimized). Tasks that fall within the paradigm of unsupervised learning are in generalestimationproblems; the applications includeclustering, the estimation ofstatistical distributions,compressionandfiltering.
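The trivial example above can be verified numerically. In the sketch below (synthetic data; the learning rate and step count are illustrative), gradient descent on the cost C = E[(x − a)²] drives the parameter a to the sample mean of the data.

import numpy as np

# For the trivial model f(x) = a with cost C = E[(x - f(x))^2], gradient descent
# on a converges to the mean of the data, as stated above.
rng = np.random.default_rng(0)
data = rng.normal(loc=3.7, scale=1.0, size=10_000)

a, lr = 0.0, 0.1
for _ in range(500):
    grad = -2.0 * np.mean(data - a)    # dC/da
    a -= lr * grad

print("learned a:", round(a, 4), " sample mean:", round(float(data.mean()), 4))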
In applications such as playing video games, an actor takes a string of actions, receiving a generally unpredictable response from the environment after each one. The goal is to win the game, i.e., generate the most positive (lowest cost) responses. Inreinforcement learning, the aim is to weight the network (devise a policy) to perform actions that minimize long-term (expected cumulative) cost. At each point in time the agent performs an action and the environment generates an observation and aninstantaneouscost, according to some (usually unknown) rules. The rules and the long-term cost usually only can be estimated. At any juncture, the agent decides whether to explore new actions to uncover their costs or to exploit prior learning to proceed more quickly.
Formally the environment is modeled as aMarkov decision process(MDP) with statess1,...,sn∈S{\displaystyle \textstyle {s_{1},...,s_{n}}\in S}and actionsa1,...,am∈A{\displaystyle \textstyle {a_{1},...,a_{m}}\in A}. Because the state transitions are not known, probability distributions are used instead: the instantaneous cost distributionP(ct|st){\displaystyle \textstyle P(c_{t}|s_{t})}, the observation distributionP(xt|st){\displaystyle \textstyle P(x_{t}|s_{t})}and the transition distributionP(st+1|st,at){\displaystyle \textstyle P(s_{t+1}|s_{t},a_{t})}, while a policy is defined as the conditional distribution over actions given the observations. Taken together, the two define aMarkov chain(MC). The aim is to discover the lowest-cost MC.
ANNs serve as the learning component in such applications.[132][133]Dynamic programmingcoupled with ANNs (givingneurodynamicprogramming)[134]has been applied to problems such as those involved invehicle routing,[135]video games,natural resource management[136][137]andmedicine[138]because of ANNs' ability to mitigate losses of accuracy even when reducing thediscretizationgrid density for numerically approximating the solution of control problems. Tasks that fall within the paradigm of reinforcement learning are control problems,gamesand other sequential decision making tasks.
Self-learning in neural networks was introduced in 1982 along with a neural network capable of self-learning namedcrossbar adaptive array(CAA).[139]It is a system with only one input, situation s, and only one output, action (or behavior) a. It has neither external advice input nor external reinforcement input from the environment. The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about encountered situations. The system is driven by the interaction between cognition and emotion.[140]Given the memory matrix, W =||w(a,s)||, the crossbar self-learning algorithm in each iteration performs the following computation:
The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments: the behavioral environment, in which it behaves, and the genetic environment, from which it initially, and only once, receives initial emotions about the situations to be encountered in the behavioral environment. Having received the genome vector (species vector) from the genetic environment, the CAA learns a goal-seeking behavior in the behavioral environment, which contains both desirable and undesirable situations.[141]
Neuroevolutioncan create neural network topologies and weights usingevolutionary computation. It is competitive with sophisticated gradient descent approaches.[142][143]One advantage of neuroevolution is that it may be less prone to get caught in "dead ends".[144]
Stochastic neural networksoriginating fromSherrington–Kirkpatrick modelsare a type of artificial neural network built by introducing random variations into the network, either by giving the network's artificial neuronsstochastictransfer functions[citation needed], or by giving them stochastic weights. This makes them useful tools foroptimizationproblems, since the random fluctuations help the network escape fromlocal minima.[145]Stochastic neural networks trained using aBayesianapproach are known asBayesian neural networks.[146]
Topological deep learning, first introduced in 2017,[147]is an emerging approach inmachine learningthat integrates topology with deep neural networks to address highly intricate and high-order data. Initially rooted inalgebraic topology, TDL has since evolved into a versatile framework incorporating tools from other mathematical disciplines, such asdifferential topologyandgeometric topology. As a successful example of mathematical deep learning, TDL continues to inspire advancements in mathematicalartificial intelligence, fostering a mutually beneficial relationship between AI andmathematics.
In aBayesianframework, a distribution over the set of allowed models is chosen to minimize the cost.Evolutionary methods,[148]gene expression programming,[149]simulated annealing,[150]expectation–maximization,non-parametric methodsandparticle swarm optimization[151]are other learning algorithms. Convergent recursion is a learning algorithm forcerebellar model articulation controller(CMAC) neural networks.[152][153]
Two modes of learning are available: stochastic and batch. In stochastic learning, each input creates a weight adjustment. In batch learning weights are adjusted based on a batch of inputs, accumulating errors over the batch. Stochastic learning introduces "noise" into the process, using the local gradient calculated from one data point; this reduces the chance of the network getting stuck in local minima. However, batch learning typically yields a faster, more stable descent to a local minimum, since each update is performed in the direction of the batch's average error. A common compromise is to use "mini-batches", small batches with samples in each batch selected stochastically from the entire data set.
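The three modes differ only in how many samples contribute to each weight adjustment. The sketch below (a linear model with synthetic, noise-free data; the learning rate and epoch count are illustrative) uses one generic loop in which a batch size of 1 gives stochastic learning, a batch size equal to the whole dataset gives batch learning, and anything in between gives mini-batch learning with stochastically selected batches.

import numpy as np

def train(X, y, batch_size, lr=0.05, epochs=200, seed=0):
    # batch_size=1 -> stochastic learning, batch_size=len(X) -> batch learning,
    # anything in between -> mini-batch learning.
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        order = rng.permutation(len(X))            # samples drawn stochastically
        for start in range(0, len(X), batch_size):
            idx = order[start:start + batch_size]
            err = X[idx] @ w - y[idx]              # errors accumulated over the batch
            w -= lr * (X[idx].T @ err) / len(idx)  # one weight adjustment per batch
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(512, 3)); y = X @ np.array([2.0, -1.0, 0.5])
print("stochastic:", np.round(train(X, y, batch_size=1), 3))
print("mini-batch:", np.round(train(X, y, batch_size=32), 3))
print("full batch:", np.round(train(X, y, batch_size=len(X)), 3))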
ANNs have evolved into a broad family of techniques that have advanced the state of the art across multiple domains. The simplest types have one or more static components, including number of units, number of layers, unit weights andtopology. Dynamic types allow one or more of these to evolve via learning. The latter is much more complicated but can shorten learning periods and produce better results. Some types allow/require learning to be "supervised" by the operator, while others operate independently. Some types operate purely in hardware, while others are purely software and run on general purpose computers.
Some of the main breakthroughs include:
Using artificial neural networks requires an understanding of their characteristics.
Neural architecture search(NAS) uses machine learning to automate ANN design. Various approaches to NAS have designed networks that compare well with hand-designed systems. The basic search algorithm is to propose a candidate model, evaluate it against a dataset, and use the results as feedback to teach the NAS network.[165]Available systems includeAutoMLand AutoKeras.[166]In practice, libraries such as scikit-learn provide simple multilayer perceptron implementations, while deeper networks are typically built with frameworks such as TensorFlow or Keras.
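As a hedged illustration of the kind of hand-designed network that such systems search over, the sketch below builds a small fully connected network in Keras on synthetic data; the layer sizes, activations, and training settings are arbitrary choices made for the example, not recommendations.

import numpy as np
from tensorflow import keras

# Illustrative only: a small hand-designed deep network on synthetic data.
x = np.random.rand(1000, 20).astype("float32")        # 1000 samples, 20 features
y = (x.sum(axis=1) > 10).astype("float32")            # a synthetic binary label

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),         # hidden layer 1
    keras.layers.Dense(32, activation="relu"),         # hidden layer 2
    keras.layers.Dense(1, activation="sigmoid"),       # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(x, y, verbose=0))                 # [loss, accuracy]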
Hyperparameters must also be defined as part of the design (they are not learned), governing matters such as how many neurons are in each layer, learning rate, step, stride, depth, receptive field and padding (for CNNs), etc.[167]
Because of their ability to reproduce and model nonlinear processes, artificial neural networks have found applications in many disciplines. These include:
ANNs have been used to diagnose several types of cancers[185][186]and to distinguish highly invasive cancer cell lines from less invasive lines using only cell shape information.[187][188]
ANNs have been used to accelerate reliability analysis of infrastructures subject to natural disasters[189][190]and to predict foundation settlements.[191]ANNs can also be useful for flood mitigation, through modelling of rainfall-runoff.[192]ANNs have also been used for building black-box models ingeoscience:hydrology,[193][194]ocean modelling andcoastal engineering,[195][196]andgeomorphology.[197]ANNs have been employed incybersecurity, with the objective of discriminating between legitimate activities and malicious ones. For example, machine learning has been used for classifying Android malware,[198]for identifying domains belonging to threat actors and for detecting URLs posing a security risk.[199]Research is underway on ANN systems designed for penetration testing, for detecting botnets,[200]credit card fraud[201]and network intrusions.
ANNs have been proposed as a tool to solvepartial differential equationsin physics[202][203][204]and simulate the properties of many-bodyopen quantum systems.[205][206][207][208]In brain research, ANNs have been used to study the short-term behavior ofindividual neurons,[209]the dynamics of neural circuitry arising from interactions between individual neurons, and how behavior can arise from abstract neural modules that represent complete subsystems. Studies have considered long- and short-term plasticity of neural systems and their relation to learning and memory, from the individual neuron to the system level.
It is possible to create a profile of a user's interests from pictures, using artificial neural networks trained for object recognition.[210]
Beyond their traditional applications, artificial neural networks are increasingly being utilized in interdisciplinary research, such as materials science. For instance, graph neural networks (GNNs) have demonstrated their capability in scaling deep learning for the discovery of new stable materials by efficiently predicting the total energy of crystals. This application underscores the adaptability and potential of ANNs in tackling complex problems beyond the realms of predictive modeling and artificial intelligence, opening new pathways for scientific discovery and innovation.[211]
Themultilayer perceptronis auniversal functionapproximator, as proven by theuniversal approximation theorem. However, the proof is not constructive regarding the number of neurons required, the network topology, the weights and the learning parameters.
A specific recurrent architecture withrational-valued weights (as opposed to full precision real number-valued weights) has the power of auniversal Turing machine,[212]using a finite number of neurons and standard linear connections. Further, the use ofirrationalvalues for weights results in a machine withsuper-Turingpower.[213][214][failed verification]
A model's "capacity" property corresponds to its ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity.
Two notions of capacity are commonly used: the information capacity and the VC dimension. The information capacity of a perceptron is discussed intensively in Sir David MacKay's book,[215]which summarizes work by Thomas Cover.[216]The capacity of a network of standard neurons (not convolutional) can be derived by four rules[217]that follow from understanding a neuron as an electrical element. The information capacity captures the functions modelable by the network given any data as input. The second notion is the VC dimension, which uses the principles ofmeasure theoryand finds the maximum capacity under the best possible circumstances, that is, with input data given in a specific form. As noted in MacKay's book,[215]the VC dimension for arbitrary inputs is half the information capacity of a perceptron. The VC dimension for arbitrary points is sometimes referred to as memory capacity.[218]
Models may not consistently converge on a single solution, firstly because local minima may exist, depending on the cost function and the model. Secondly, the optimization method used might not be guaranteed to converge when it begins far from any local minimum. Thirdly, for sufficiently large data or parameters, some methods become impractical.
Another issue worth mentioning is that training may pass through a saddle point, which may steer convergence in the wrong direction.
The convergence behavior of certain types of ANN architectures is better understood than that of others. As the width of the network approaches infinity, the ANN is well described by its first-order Taylor expansion throughout training, and so inherits the convergence behavior ofaffine models.[219][220]Another example is that when parameters are small, ANNs are often observed to fit target functions from low to high frequencies. This behavior is referred to as the spectral bias, or frequency principle, of neural networks.[221][222][223][224]This phenomenon is the opposite of the behavior of some well-studied iterative numerical schemes such as theJacobi method. Deeper neural networks have been observed to be more biased towards low-frequency functions.[225]
Applications whose goal is to create a system that generalizes well to unseen examples face the possibility of over-training. This arises in convoluted or over-specified systems when the network capacity significantly exceeds the number of free parameters actually needed. Two approaches address over-training. The first is to usecross-validationand similar techniques to check for the presence of over-training and to select hyperparameters that minimize the generalization error.
The second is to use some form ofregularization. This concept emerges in a probabilistic (Bayesian) framework, where regularization can be performed by selecting a larger prior probability over simpler models; but also in statistical learning theory, where the goal is to minimize over two quantities: the 'empirical risk' and the 'structural risk', which roughly corresponds to the error over the training set and the predicted error in unseen data due to overfitting.
Supervised neural networks that use amean squared error(MSE) cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate theconfidence intervalof network output, assuming anormal distribution. A confidence analysis made this way is statistically valid as long as the outputprobability distributionstays the same and the network is not modified.
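A sketch of that procedure with made-up validation numbers: the validation MSE is used as the variance estimate, and a standard-normal quantile turns it into a confidence interval around a new network output.

import numpy as np

# Validation-set MSE as a variance estimate (all numbers are made up).
val_targets = np.array([1.2, 0.8, 1.5, 0.9, 1.1])
val_outputs = np.array([1.1, 0.9, 1.4, 1.0, 1.0])   # network outputs on the validation set
mse = np.mean((val_outputs - val_targets) ** 2)

prediction = 1.3          # network output on a new input
z = 1.96                  # 97.5% standard-normal quantile, for a 95% two-sided interval
print(f"95% confidence interval: {prediction:.2f} +/- {z * np.sqrt(mse):.2f}")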
By assigning asoftmax activation function, a generalization of thelogistic function, on the output layer of the neural network (or a softmax component in a component-based network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is useful in classification as it gives a certainty measure on classifications.
The softmax activation function is y_i = exp(x_i) / Σ_j exp(x_j), where x_i is the total input to output unit i, y_i is its output, and the sum runs over all output units; the outputs are positive and sum to one.
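A short numerical sketch of the softmax computation (the max-subtraction step is a common implementation detail for numerical stability, not part of the definition):

import numpy as np

def softmax(x):
    # Subtracting the maximum does not change the result, because softmax is
    # invariant to shifting all inputs by a constant; it only avoids overflow.
    e = np.exp(x - np.max(x))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])             # raw outputs of the output layer
print(softmax(scores), softmax(scores).sum())  # probabilities summing to 1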
A common criticism of neural networks, particularly in robotics, is that they require too many training samples for real-world operation.[226]Any learning machine needs sufficient representative examples in order to capture the underlying structure that allows it to generalize to new cases. Potential solutions include randomly shuffling training examples, using a numerical optimization algorithm that does not take overly large steps when changing the network connections following an example, grouping examples in so-called mini-batches, and/or introducing a recursive least squares algorithm forCMAC.[152]Dean Pomerleau used a neural network to train a robotic vehicle to drive on multiple types of roads (single lane, multi-lane, dirt, etc.); a large amount of his research was devoted to extrapolating multiple training scenarios from a single training experience, and to preserving past training diversity so that the system does not become overtrained (if, for example, it is presented with a series of right turns, it should not learn to always turn right).[227]
A central claim[citation needed]of ANNs is that they embody new and powerful general principles for processing information. These principles are ill-defined. It is often claimed[by whom?]that they areemergentfrom the network itself. This allows simple statistical association (the basic function of artificial neural networks) to be described as learning or recognition. In 1997,Alexander Dewdney, a formerScientific Americancolumnist, commented that as a result, artificial neural networks have a "something-for-nothing quality, one that imparts a peculiar aura of laziness and a distinct lack of curiosity about just how good these computing systems are. No human hand (or mind) intervenes; solutions are found as if by magic; and no one, it seems, has learned anything".[228]One response to Dewdney is that neural networks have been successfully used to handle many complex and diverse tasks, ranging from autonomously flying aircraft[229]to detecting credit card fraud to mastering the game ofGo.
Technology writer Roger Bridgman commented:
Neural networks, for instance, are in the dock not only because they have been hyped to high heaven, (what hasn't?) but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be "an opaque, unreadable table...valueless as a scientific resource".
In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having.[230]
Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Moreover, recent emphasis on theexplainabilityof AI has contributed towards the development of methods, notably those based onattentionmechanisms, for visualizing and explaining learned neural networks. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering generic principles that allow a learning machine to be successful. For example, Bengio and LeCun (2007) wrote an article regarding local vs non-local learning, as well as shallow vs deep architecture.[231]
Biological brains use both shallow and deep circuits as reported by brain anatomy,[232]displaying a wide variety of invariance. Weng[233]argued that the brain self-wires largely according to signal statistics and therefore, a serial cascade cannot catch all major statistical dependencies.
Large and effective neural networks require considerable computing resources.[234]While the brain has hardware tailored to the task of processing signals through agraphof neurons, simulating even a simplified neuron onvon Neumann architecturemay consume vast amounts ofmemoryand storage. Furthermore, the designer often needs to transmit signals through many of these connections and their associated neurons – which require enormousCPUpower and time.[citation needed]
Some argue that the resurgence of neural networks in the twenty-first century is largely attributable to advances in hardware: from 1991 to 2015, computing power, especially as delivered byGPGPUs(onGPUs), has increased around a million-fold, making the standard backpropagation algorithm feasible for training networks that are several layers deeper than before.[38]The use of accelerators such asFPGAsand GPUs can reduce training times from months to days.[234][235]
Neuromorphic engineeringor aphysical neural networkaddresses the hardware difficulty directly, by constructing non-von-Neumann chips to directly implement neural networks in circuitry. Another type of chip optimized for neural network processing is called aTensor Processing Unit, or TPU.[236]
Advocates ofhybridmodels (combining neural networks and symbolic approaches) say that such a mixture can better capture the mechanisms of the human mind.[238][239]
Neural networks are dependent on the quality of the data they are trained on, thus low quality data with imbalanced representativeness can lead to the model learning and perpetuating societal biases.[240][241]These inherited biases become especially critical when the ANNs are integrated into real-world scenarios where the training data may be imbalanced due to the scarcity of data for a specific race, gender or other attribute.[240]This imbalance can result in the model having inadequate representation and understanding of underrepresented groups, leading to discriminatory outcomes that exacerbate societal inequalities, especially in applications likefacial recognition, hiring processes, andlaw enforcement.[241][242]For example, in 2018,Amazonhad to scrap a recruiting tool because the model favored men over women for jobs in software engineering due to the higher number of male workers in the field.[242]The program would penalize any resume with the word "woman" or the name of any women's college. However, the use ofsynthetic datacan help reduce dataset bias and increase representation in datasets.[243]
Artificial neural networks (ANNs) have undergone significant advancements, particularly in their ability to model complex systems, handle large data sets, and adapt to various types of applications. Their evolution over the past few decades has been marked by a broad range of applications in fields such as image processing, speech recognition, natural language processing, finance, and medicine.[citation needed]
In the realm of image processing, ANNs are employed in tasks such as image classification, object recognition, and image segmentation. For instance, deep convolutional neural networks (CNNs) have been important in handwritten digit recognition, achieving state-of-the-art performance.[244]This demonstrates the ability of ANNs to effectively process and interpret complex visual information, leading to advancements in fields ranging from automated surveillance to medical imaging.[244]
By modeling speech signals, ANNs are used for tasks like speaker identification and speech-to-text conversion. Deep neural network architectures have introduced significant improvements in large vocabulary continuous speech recognition, outperforming traditional techniques.[244][245]These advancements have enabled the development of more accurate and efficient voice-activated systems, enhancing user interfaces in technology products.[citation needed]
In natural language processing, ANNs are used for tasks such as text classification, sentiment analysis, and machine translation. They have enabled the development of models that can accurately translate between languages, understand the context and sentiment in textual data, and categorize text based on content.[244][245]This has implications for automated customer service, content moderation, and language understanding technologies.[citation needed]
In the domain of control systems, ANNs are used to model dynamic systems for tasks such as system identification, control design, and optimization. For instance, deep feedforward neural networks are important in system identification and control applications.[citation needed]
ANNs are used forstock market predictionandcredit scoring.
ANNs require high-quality data and careful tuning, and their "black-box" nature can pose challenges in interpretation. Nevertheless, ongoing advancements suggest that ANNs continue to play a role in finance, offering valuable insights and enhancingrisk management strategies.[citation needed]
ANNs are able to process and analyze vast medical datasets. They enhance diagnostic accuracy, especially by interpreting complexmedical imagingfor early disease detection, and by predicting patient outcomes for personalized treatment planning.[245]In drug discovery, ANNs speed up the identification of potential drug candidates and predict their efficacy and safety, significantly reducing development time and costs.[244]Additionally, their application in personalized medicine and healthcare data analysis allows tailored therapies and efficient patient care management.[245]Ongoing research is aimed at addressing remaining challenges such as data privacy and model interpretability, as well as expanding the scope of ANN applications in medicine.[citation needed]
ANNs such as generative adversarial networks (GAN) andtransformersare used for content creation across numerous industries.[246]This is because deep learning models are able to learn the style of an artist or musician from huge datasets and generate completely new artworks and music compositions. For instance,DALL-Eis a deep neural network trained on 650 million pairs of images and texts across the internet that can create artworks based on text entered by the user.[247]In the field of music, transformers are used to create original music for commercials and documentaries through companies such asAIVAandJukedeck.[248]In the marketing industry generative models are used to create personalized advertisements for consumers.[246]Additionally, major film companies are partnering with technology companies to analyze the financial success of a film, such as the partnership between Warner Bros and technology company Cinelytic established in 2020.[249]Furthermore, neural networks have found uses in video game creation, where Non Player Characters (NPCs) can make decisions based on all the characters currently in the game.[250]
|
https://en.wikipedia.org/wiki/Neural_network_(machine_learning)
|
Incomputer science, asuffix automatonis an efficientdata structurefor representing thesubstring indexof a given string which allows the storage, processing, and retrieval of compressed information about all itssubstrings. The suffix automaton of a stringS{\displaystyle S}is the smallestdirected acyclic graphwith a dedicated initial vertex and a set of "final" vertices, such that paths from the initial vertex to final vertices represent the suffixes of the string.
In terms ofautomata theory, a suffix automaton is theminimalpartialdeterministic finite automatonthat recognizes the set ofsuffixesof a givenstringS=s1s2…sn{\displaystyle S=s_{1}s_{2}\dots s_{n}}. Thestate graphof a suffix automaton is called a directed acyclic word graph (DAWG), a term that is also sometimes used for anydeterministic acyclic finite state automaton.
Suffix automata were introduced in 1983 by a group of scientists from theUniversity of Denverand theUniversity of Colorado Boulder. They suggested alinear timeonline algorithmfor its construction and showed that the suffix automaton of a stringS{\displaystyle S}having length at least two characters has at most2|S|−1{\textstyle 2|S|-1}states and at most3|S|−4{\textstyle 3|S|-4}transitions. Further works have shown a close connection between suffix automata andsuffix trees, and have outlined several generalizations of suffix automata, such as compacted suffix automaton obtained by compression of nodes with a single outgoing arc.
Suffix automata provide efficient solutions to problems such assubstring searchand computation of thelongest common substringof two or more strings.
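The following Python sketch implements the standard online construction of the suffix automaton and uses it for substring search; it follows common expositions of the algorithm rather than any single paper, and the class and method names are chosen here for illustration.

class SuffixAutomaton:
    # Online construction of the suffix automaton; each state stores the length
    # of its longest word, its suffix link, and its outgoing transitions.

    def __init__(self, s=""):
        self.len = [0]
        self.link = [-1]
        self.next = [{}]
        self.last = 0          # state recognizing the whole string read so far
        for ch in s:
            self.extend(ch)

    def extend(self, c):
        cur = len(self.len)
        self.len.append(self.len[self.last] + 1)
        self.link.append(-1)
        self.next.append({})
        p = self.last
        while p != -1 and c not in self.next[p]:
            self.next[p][c] = cur
            p = self.link[p]
        if p == -1:
            self.link[cur] = 0
        else:
            q = self.next[p][c]
            if self.len[p] + 1 == self.len[q]:
                self.link[cur] = q
            else:
                # Clone q so that lengths along suffix links stay consistent.
                clone = len(self.len)
                self.len.append(self.len[p] + 1)
                self.link.append(self.link[q])
                self.next.append(dict(self.next[q]))
                while p != -1 and self.next[p].get(c) == q:
                    self.next[p][c] = clone
                    p = self.link[p]
                self.link[q] = clone
                self.link[cur] = clone
        self.last = cur

    def is_substring(self, pattern):
        # A word is a substring of s iff it can be read from the initial state.
        state = 0
        for ch in pattern:
            if ch not in self.next[state]:
                return False
            state = self.next[state][ch]
        return True

sa = SuffixAutomaton("abacaba")
print(sa.is_substring("acab"), sa.is_substring("abc"))   # True False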
The concept of suffix automaton was introduced in 1983[1]by a group of scientists fromUniversity of DenverandUniversity of Colorado Boulderconsisting of Anselm Blumer, Janet Blumer,Andrzej Ehrenfeucht,David Hausslerand Ross McConnell, although similar concepts had earlier been studied alongside suffix trees in the works of Peter Weiner,[2]Vaughan Pratt[3]andAnatol Slissenko.[4]In their initial work, Blumeret al. showed a suffix automaton built for the stringS{\displaystyle S}of length greater than1{\displaystyle 1}has at most2|S|−1{\displaystyle 2|S|-1}states and at most3|S|−4{\displaystyle 3|S|-4}transitions, and suggested a linearalgorithmfor automaton construction.[5]
In 1983, Mu-Tian Chen and Joel Seiferas independently showed that Weiner's 1973 suffix-tree construction algorithm[2]while building a suffix tree of the stringS{\displaystyle S}constructs a suffix automaton of the reversed stringSR{\textstyle S^{R}}as an auxiliary structure.[6]In 1987, Blumeret al. applied the compressing technique used in suffix trees to a suffix automaton and invented the compacted suffix automaton, which is also called the compacted directed acyclic word graph (CDAWG).[7]In 1997,Maxime Crochemoreand Renaud Vérin developed a linear algorithm for direct CDAWG construction.[1]In 2001, Shunsuke Inenagaet al. developed an algorithm for construction of CDAWG for a set of words given by atrie.[8]
Usually when speaking about suffix automata and related concepts, some notions fromformal language theoryandautomata theoryare used, in particular:[9]
Formally, a deterministic finite automaton is determined by a 5-tupleA=(Σ,Q,q0,F,δ){\displaystyle {\mathcal {A}}=(\Sigma ,Q,q_{0},F,\delta )}, where Σ is the alphabet, Q is the set of states, q0 ∈ Q is the initial state, F ⊆ Q is the set of final ("accepting") states, and δ : Q × Σ → Q is the (possibly partial) transition function.[10]
Most commonly, a deterministic finite automaton is represented as adirected graph("diagram") in which the vertices correspond to the states, the arcs are labeled with characters and correspond to transitions, the initial state is indicated by a special entry arrow, and the final states are specially marked.[10]
In terms of its diagram, the automaton recognizes the wordω=ω1ω2…ωm{\displaystyle \omega =\omega _{1}\omega _{2}\dots \omega _{m}}only if there is a path from the initial vertexq0{\displaystyle q_{0}}to some final vertexq∈F{\displaystyle q\in F}such that the concatenation of characters on this path formsω{\displaystyle \omega }. The set of words recognized by an automaton forms a language that is said to be recognized by the automaton. In these terms, the language recognized by a suffix automaton ofS{\displaystyle S}is the language of its (possibly empty) suffixes.[9]
"Right context" of the wordω{\displaystyle \omega }with respect to languageL{\displaystyle L}is a set[ω]R={α:ωα∈L}{\displaystyle [\omega ]_{R}=\{\alpha :\omega \alpha \in L\}}that is a set of wordsα{\displaystyle \alpha }such that their concatenation withω{\displaystyle \omega }forms a word fromL{\displaystyle L}. Right contexts induce a naturalequivalence relation[α]R=[β]R{\displaystyle [\alpha ]_{R}=[\beta ]_{R}}on the set of all words. If languageL{\displaystyle L}is recognized by some deterministic finite automaton, there exists unique up toisomorphismautomaton that recognizes the same language and has the minimum possible number of states. Such an automaton is called aminimal automatonfor the given languageL{\displaystyle L}.Myhill–Nerode theoremallows it to define it explicitly in terms of right contexts:[11][12]
Theorem—The minimal automaton recognizing a languageL{\displaystyle L}over the alphabetΣ{\displaystyle \Sigma }may be explicitly defined in the following way: its states are the distinct nonempty right contexts[ω]R{\displaystyle [\omega ]_{R}}of words over the alphabet, its initial state is the right context of the empty word, its final states are the right contexts of the words ofL{\displaystyle L}, and its transition function takes the state[ω]R{\displaystyle [\omega ]_{R}}by a characterc{\displaystyle c}to the state[ωc]R{\displaystyle [\omega c]_{R}}.
In these terms, a "suffix automaton" is the minimal deterministic finite automaton recognizing the language of suffixes of the wordS=s1s2…sn{\displaystyle S=s_{1}s_{2}\dots s_{n}}. The right context of the wordω{\displaystyle \omega }with respect to this language consists of wordsα{\displaystyle \alpha }, such thatωα{\displaystyle \omega \alpha }is a suffix ofS{\displaystyle S}. It allows to formulate the following lemma defining abijectionbetween the right context of the word and the set of right positions of its occurrences inS{\displaystyle S}:[13][14]
Theorem—Let endpos(ω) = {r : ω = sl…sr} be the set of right positions of occurrences of ω in S.
There is the following bijection between endpos(ω) and [ω]R:
For example, for the word S = abacaba and its subword ω = ab, it holds that endpos(ab) = {2, 6} and [ab]R = {a, acaba}. Informally, [ab]R is formed by the words that follow occurrences of ab up to the end of S, and endpos(ab) is formed by the right positions of those occurrences. In this example, the element x = 2 ∈ endpos(ab) corresponds to the word s3s4s5s6s7 = acaba ∈ [ab]R, while the word a ∈ [ab]R corresponds to the element 7 − |a| = 6 ∈ endpos(ab).
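This correspondence can be checked directly with a few lines of Python; the snippet below only illustrates the definitions (the helper names are not from the article):

    S = "abacaba"
    w = "ab"

    # right positions (1-based) of the occurrences of w in S
    endpos = [i + len(w) for i in range(len(S)) if S.startswith(w, i)]

    # right context: what follows each occurrence of w up to the end of S
    right_context = [S[r:] for r in endpos]

    print(endpos)         # [2, 6]
    print(right_context)  # ['acaba', 'a']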
This implies several structural properties of suffix automaton states. Let |α| ≤ |β|; then:[14]
Any state q = [α]R of the suffix automaton recognizes some continuous chain of nested suffixes of the longest word recognized by this state.[14]
The "left extension" ←γ of a string γ is the longest string ω that has the same right context as γ. The length |←γ| of the longest string recognized by q = [γ]R is denoted by len(q). The following holds:[15]
Theorem—The left extension of γ may be represented as ←γ = βγ, where β is the longest word such that every occurrence of γ in S is preceded by β.
The "suffix link" link(q) of the state q = [α]R is a pointer to the state p that contains the longest suffix of α that is not recognized by q.
In these terms, it can be said that q = [α]R recognizes exactly those suffixes of ←α that are longer than len(link(q)) and not longer than len(q). The following also holds:[15]
Theorem—Suffix links form atreeT(V,E){\displaystyle {\mathcal {T}}(V,E)}that may be defined explicitly in the following way:
A "prefix tree" (or "trie") is a rooted directed tree in which arcs are labeled by characters in such a way that no vertex v of the tree has two outgoing arcs labeled with the same character. Some vertices of the trie are marked as final. The trie is said to recognize the set of words defined by the paths from its root to the final vertices. In this way prefix trees are a special kind of deterministic finite automaton, if one regards the root as the initial vertex.[16] The "suffix trie" of the word S is a prefix tree recognizing the set of its suffixes. A "suffix tree" is a tree obtained from a suffix trie via a compaction procedure, during which consecutive edges are merged if the degree of the vertex between them is equal to two.[15]
By its definition, a suffix automaton can be obtained via minimization of the suffix trie. It may be shown that a compacted suffix automaton is obtained both by minimization of the suffix tree (if one assumes each string on an edge of the suffix tree is a solid character from the alphabet) and by compaction of the suffix automaton.[17] Besides this connection between the suffix tree and the suffix automaton of the same string, there is also a connection between the suffix automaton of the string S = s1s2…sn and the suffix tree of the reversed string S^R = snsn−1…s1.[18]
Similarly to right contexts one may introduce "left contexts" [ω]L = {β ∈ Σ* : βω ∈ L}, "right extensions" →ω corresponding to the longest string having the same left context as ω, and the equivalence relation [α]L = [β]L. If one considers right extensions with respect to the language L of prefixes of the string S, the following may be obtained:[15]
Theorem—The suffix tree of the string S may be defined explicitly in the following way:
Here the triplet (v1, ω, v2) ∈ E means that there is an edge from v1 to v2 with the string ω written on it. This implies that the suffix link tree of the string S and the suffix tree of the string S^R are isomorphic:[18]
Similarly to the case of left extensions, the following lemma holds for right extensions:[15]
Theorem—The right extension of the string γ may be represented as →γ = γα, where α is the longest word such that every occurrence of γ in S is succeeded by α.
A suffix automaton of a string S of length n > 1 has at most 2n − 1 states and at most 3n − 4 transitions. These bounds are reached on the strings abb…bb = ab^(n−1) and abb…bc = ab^(n−2)c respectively.[13] This may be stated more precisely as |δ| ≤ |Q| + n − 2, where |δ| and |Q| are the numbers of transitions and states in the automaton respectively.[14]
Initially the automaton consists of a single state corresponding to the empty word; then the characters of the string are added one by one and the automaton is rebuilt incrementally at each step.[19]
After a new character is appended to the string, some equivalence classes are altered. Let [α]Rω be the right context of α with respect to the language of suffixes of ω. Then the transition from [α]Rω to [α]Rωx after x is appended to ω is defined by the following lemma:[14]
Theorem—Let α, ω ∈ Σ* be words over Σ and let x ∈ Σ be a character from this alphabet. Then there is the following correspondence between [α]Rω and [α]Rωx:
After adding x to the current word ω, the right context of α may change significantly only if α is a suffix of ωx. This implies that the equivalence relation ≡Rωx is a refinement of ≡Rω. In other words, if [α]Rωx = [β]Rωx, then [α]Rω = [β]Rω. After the addition of a new character, at most two equivalence classes of ≡Rω will be split, and each of them may split into at most two new classes. First, the equivalence class corresponding to the empty right context is always split into two equivalence classes, one of them corresponding to ωx itself and having {ε} as its right context. This new equivalence class contains exactly ωx and all of its suffixes that did not occur in ω, as the right context of such words was empty before and now contains only the empty word.[14]
Given the correspondence between states of the suffix automaton and vertices of the suffix tree, it is possible to find out the second state that may possibly split after a new character is appended. The transition from ω to ωx corresponds to the transition from ω^R to xω^R in the reversed string. In terms of suffix trees it corresponds to the insertion of the new longest suffix xω^R into the suffix tree of ω^R. At most two new vertices may be formed after this insertion: one of them corresponding to xω^R, while the other corresponds to its direct ancestor if there was branching. Returning to suffix automata, this means that the first new state recognizes ωx and the second one (if there is a second new state) is its suffix link. It may be stated as a lemma:[14]
Theorem—Let ω ∈ Σ* and x ∈ Σ be a word and a character over Σ. Also let α be the longest suffix of ωx that occurs in ω, and let β = ←α. Then for any substrings u, v of ω the following holds:
This implies that if α = β (for example, when x did not occur in ω at all and α = β = ε), then only the equivalence class corresponding to the empty right context is split.[14]
Besides suffix links it is also necessary to define the final states of the automaton. It follows from the structural properties that all suffixes of a word α recognized by q = [α]R are recognized by some vertex on the suffix path (q, link(q), link²(q), …) of q. Namely, suffixes with length greater than len(link(q)) lie in q, suffixes with length greater than len(link(link(q))) but not greater than len(link(q)) lie in link(q), and so on. Thus, if the state recognizing ω is denoted by last, then all final states (that is, states recognizing suffixes of ω) form the sequence (last, link(last), link²(last), …).[19]
After the character x is appended to ω, the possible new states of the suffix automaton are [ωx]Rωx and [α]Rωx. The suffix link from [ωx]Rωx goes to [α]Rωx, and from [α]Rωx it goes to link([α]Rω). Words from [ωx]Rωx occur in ωx only as its suffixes, therefore there should be no transitions at all from [ωx]Rωx, while transitions to it should go from suffixes of ω having length at least |α| and be marked with the character x. The state [α]Rωx is formed by a subset of [α]Rω, thus the transitions from [α]Rωx should be the same as those from [α]Rω. Meanwhile, transitions leading to [α]Rωx should go from suffixes of ω having length less than |α| and at least len(link([α]Rω)), as such transitions led to [α]Rω before and corresponded to the seceded part of this state. The states corresponding to these suffixes may be determined via a traversal of the suffix link path of [ω]Rω.[19]
The theoretical results above lead to the following algorithm, which takes a character x and rebuilds the suffix automaton of ω into the suffix automaton of ωx:[19]
The whole procedure is described by the following pseudo-code:[19]
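The referenced pseudocode is not reproduced in this extract; the following is a minimal Python sketch of the same incremental procedure, assuming dictionary-based transition tables (the names add_letter, len, link and last follow the text, while SuffixAutomaton, new_state and build are illustrative):

    class SuffixAutomaton:
        def __init__(self):
            # state 0 is q0, recognizing only the empty word
            self.len = [0]          # len(q): length of the longest word in state q
            self.link = [-1]        # link(q): suffix link of state q
            self.delta = [{}]       # delta[q]: transitions of state q (char -> state)
            self.last = 0           # state recognizing the whole current word

        def new_state(self, length, link, transitions):
            self.len.append(length)
            self.link.append(link)
            self.delta.append(dict(transitions))
            return len(self.len) - 1

        def add_letter(self, x):
            cur = self.new_state(self.len[self.last] + 1, -1, {})   # recognizes ωx
            p = self.last
            # add x-transitions from suffixes of ω that do not yet have one
            while p != -1 and x not in self.delta[p]:
                self.delta[p][x] = cur
                p = self.link[p]
            if p == -1:
                self.link[cur] = 0                  # α is empty
            else:
                q = self.delta[p][x]
                if self.len[q] == self.len[p] + 1:
                    self.link[cur] = q              # no split needed (α = β)
                else:
                    # split q: the clone keeps q's transitions and suffix link
                    clone = self.new_state(self.len[p] + 1, self.link[q], self.delta[q])
                    while p != -1 and self.delta[p].get(x) == q:
                        self.delta[p][x] = clone
                        p = self.link[p]
                    self.link[q] = clone
                    self.link[cur] = clone
            self.last = cur

    def build(s):
        sa = SuffixAutomaton()
        for ch in s:
            sa.add_letter(ch)
        return sa    # final states are last, link(last), link(link(last)), ...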
Here q0 is the initial state of the automaton and new_state() is a function creating a new state for it. It is assumed that last, len, link and δ are stored as global variables.[19]
The complexity of the algorithm may vary depending on the underlying structure used to store the transitions of the automaton. It may be implemented in O(n log |Σ|) with O(n) memory overhead, or in O(n) with O(n|Σ|) memory overhead if one assumes that memory allocation is done in O(1). To obtain such complexity, one has to use the methods of amortized analysis. The value of len(p) strictly decreases with each iteration of the cycle, while it may increase by at most one after the first iteration of the cycle on the next add_letter call. The overall value of len(p) never exceeds n and is only increased by one between iterations of appending new letters, which suggests that the total complexity is at most linear as well. The linearity of the second cycle is shown in a similar way.[19]
The suffix automaton is closely related to other suffix structures and substring indices. Given a suffix automaton of a specific string, one may construct its suffix tree via compaction and recursive traversal in linear time.[20] Similar transforms are possible in both directions to switch between the suffix automaton of S and the suffix tree of the reversed string S^R.[18] Other than this, several generalizations were developed to construct an automaton for a set of strings given by a trie,[8] a compacted suffix automaton (CDAWG),[7] to maintain the structure of the automaton on a sliding window,[21] and to construct it in a bidirectional way, supporting the insertion of characters at both the beginning and the end of the string.[22]
As was already mentioned above, a compacted suffix automaton is obtained both by compaction of a regular suffix automaton (by removing states which are non-final and have exactly one outgoing arc) and by minimization of a suffix tree. Similarly to the regular suffix automaton, the states of a compacted suffix automaton may be defined in an explicit manner. A two-way extension ↔γ of a word γ is the longest word ω = βγα such that every occurrence of γ in S is preceded by β and succeeded by α. In terms of left and right extensions this means that the two-way extension is the left extension of the right extension or, equivalently, the right extension of the left extension, that is ↔γ = ←(→γ) = →(←γ). In terms of two-way extensions the compacted automaton is defined as follows:[15]
Theorem—The compacted suffix automaton of the word S is defined by a pair (V, E), where:
Two-way extensions induce an equivalence relation ↔α = ↔β which defines the set of words recognized by the same state of the compacted automaton. This equivalence relation is a transitive closure of the relation defined by (→α = →β) ∨ (←α = ←β), which highlights the fact that a compacted automaton may be obtained both by gluing suffix tree vertices equivalent via the ←α = ←β relation (minimization of the suffix tree) and by gluing suffix automaton states equivalent via the →α = →β relation (compaction of the suffix automaton).[23] If words α and β have the same right extensions, and words β and γ have the same left extensions, then cumulatively all the strings α, β and γ have the same two-way extensions. At the same time it may happen that neither the left nor the right extensions of α and γ coincide. As an example one may take S = β = ab, α = a and γ = b, for which the left and right extensions are as follows: →α = →β = ab = ←β = ←γ, but →γ = b and ←α = a. That being said, while the equivalence relations of one-way extensions were formed by some continuous chain of nested prefixes or suffixes, the equivalence relations of bidirectional extensions are more complex, and the only thing one may conclude for sure is that strings with the same two-way extension are substrings of the longest string having that two-way extension, though it may even happen that they have no non-empty substring in common. The total number of equivalence classes for this relation does not exceed n + 1, which implies that the compacted suffix automaton of a string of length n has at most n + 1 states. The number of transitions in such an automaton is at most 2n − 2.[15]
Consider a set of words T = {S1, S2, …, Sk}. It is possible to construct a generalization of the suffix automaton that recognizes the language formed by the suffixes of all words from the set. The constraints on the number of states and transitions in such an automaton stay the same as for a single-word automaton if one puts n = |S1| + |S2| + ⋯ + |Sk|.[23] The algorithm is similar to the construction of the single-word automaton, except that instead of the last state, the function add_letter works with the state corresponding to the word ωi, assuming the transition from the set of words {ω1, …, ωi, …, ωk} to the set {ω1, …, ωix, …, ωk}.[24][25]
This idea is further generalized to the case when T is not given explicitly but instead is given by a prefix tree with Q vertices. Mohri et al. showed that such an automaton has at most 2Q − 2 states and may be constructed in time linear in its size. At the same time, the number of transitions in such an automaton may reach O(Q|Σ|): for example, for the set of words T = {σ1, aσ1, a²σ1, …, a^nσ1, a^nσ2, …, a^nσk} over the alphabet Σ = {a, σ1, …, σk}, the total length of the words is O(n² + nk), the number of vertices in the corresponding suffix trie is O(n + k), and the corresponding suffix automaton is formed of O(n + k) states and O(nk) transitions. The algorithm suggested by Mohri mainly repeats the generic algorithm for building the automaton of several strings, but instead of growing words one by one, it traverses the trie in breadth-first search order and appends new characters as it meets them in the traversal, which guarantees amortized linear complexity.[26]
Some compression algorithms, such as LZ77 and RLE, may benefit from storing a suffix automaton or similar structure not for the whole string but only for its last k characters while the string is updated. This is because the data being compressed is usually large and using O(n) memory is undesirable. In 1985, Janet Blumer developed an algorithm to maintain a suffix automaton on a sliding window of size k in O(nk) worst-case time and O(n log k) on average, assuming characters are distributed independently and uniformly. She also showed that the O(nk) complexity cannot be improved: if one considers words construed as a concatenation of several (ab)^m c (ab)^m d words, where k = 6m + 2, then the number of states for the window of size k would frequently change with jumps of order m, which renders even a theoretical improvement over O(nk) for regular suffix automata impossible.[27]
The same should be true for the suffix tree, because its vertices correspond to states of the suffix automaton of the reversed string, but this problem may be resolved by not explicitly storing every vertex corresponding to a suffix of the whole string, thus storing only vertices with at least two outgoing edges. A variation of McCreight's suffix tree construction algorithm for this task was suggested in 1989 by Edward Fiala and Daniel Greene;[28] several years later a similar result was obtained with a variation of Ukkonen's algorithm by Jesper Larsson.[29][30] The existence of such an algorithm for the compacted suffix automaton, which absorbs some properties of both suffix trees and suffix automata, was an open question for a long time until Martin Senft and Tomasz Dvorak discovered in 2008 that it is impossible if the alphabet's size is at least two.[31]
One way to overcome this obstacle is to allow the window width to vary slightly while staying O(k). This may be achieved by an approximate algorithm suggested by Inenaga et al. in 2004. The window for which the suffix automaton is built in this algorithm is not guaranteed to be of length k, but it is guaranteed to be at least k and at most 2k + 1, while providing linear overall complexity of the algorithm.[32]
The suffix automaton of the string S may be used to solve problems such as:[33][34]
It is assumed here that T is given as input after the suffix automaton of S has been constructed.[33]
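For instance, checking whether a pattern T occurs in S as a substring amounts to walking the transitions of the automaton from the initial state. A small sketch using the SuffixAutomaton class from the construction sketch above (the helper name occurs is illustrative):

    def occurs(sa, t):
        """Return True if t occurs in S as a substring, by following transitions from q0."""
        q = 0
        for ch in t:
            if ch not in sa.delta[q]:
                return False
            q = sa.delta[q][ch]
        return True

    sa = build("abacaba")
    print(occurs(sa, "acab"), occurs(sa, "abb"))   # True False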
Suffix automata are also used in data compression,[35] music retrieval[36][37] and matching on genome sequences.[38]
|
https://en.wikipedia.org/wiki/Suffix_automaton
|
Incomputer science, ahashed array tree(HAT) is adynamic arraydata-structure published by Edward Sitarski in 1996,[1]maintaining an array of separate memory fragments (or "leaves") to store the data elements, unlike simple dynamic arrays which maintain their data in one contiguous memory area. Its primary objective is to reduce the amount of element copying due to automatic array resizing operations, and to improve memory usage patterns.
Whereas simple dynamic arrays based ongeometric expansionwaste linear (Ω(n)) space, wherenis the number of elements in thearray, hashed array trees waste only orderO(√n) storage space. An optimization of the algorithm allows elimination of data copying completely, at a cost of increasing the wasted space.
It can performaccessin constant (O(1)) time, though slightly slower than simple dynamic arrays. The algorithm has O(1) amortized performance when appending a series of objects to the end of a hashed array tree. Contrary to its name, it does not usehash functions.
As defined by Sitarski, a hashed array tree has a top-level directory containing apower of twonumber of leaf arrays. All leaf arrays are the same size as the top-level directory. This structure superficially resembles ahash tablewith array-based collision chains, which is the basis for the namehashed array tree. A full hashed array tree can holdm2elements, wheremis the size of the top-level directory.[1]The use of powers of two enables faster physical addressing through bit operations instead of arithmetic operations ofquotientandremainder[1]and ensures the O(1) amortized performance of append operation in the presence of occasional global array copy while expanding.
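As a concrete illustration of the power-of-two addressing, a flat index can be split into a directory slot and a leaf offset with two bit operations. The sketch below is only illustrative (the names locate and HashedArrayTree are not from Sitarski's paper) and assumes leaf size equal to directory size, both 2**k:

    def locate(i, k):
        """Map a flat index i to (leaf, offset) for leaf size m = 2**k."""
        return i >> k, i & ((1 << k) - 1)    # quotient and remainder via bit operations

    class HashedArrayTree:
        def __init__(self, k=2):
            self.k = k                             # leaf size = directory size = 2**k
            self.directory = [None] * (1 << k)     # top-level array of leaf references
            self.size = 0                          # number of stored elements

        def get(self, i):
            leaf, off = locate(i, self.k)
            return self.directory[leaf][off]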
In the usual dynamic array geometric expansion scheme, the array is reallocated as a whole sequential chunk of memory with the new size double its current size (and the whole data is then moved to the new location). This ensures O(1) amortized operations at a cost of O(n) wasted space, as the enlarged array is filled to half of its new capacity.
When a hashed array tree is full, its directory and leaves must be restructured to twice their prior size to accommodate additional append operations. The data held in the old structure is then moved into the new locations. Only one new leaf is then allocated and added into the top array, which thus becomes filled to only a quarter of its new capacity. All the extra leaves are not allocated yet, and will only be allocated when needed, thus wasting only O(√n) of storage.[2]
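Continuing the sketch above, appending with lazy leaf allocation and a doubling restructure step might look like the following; this is an illustration of the scheme just described, not Sitarski's exact procedure:

    class GrowableHAT(HashedArrayTree):
        def append(self, v):
            m = 1 << self.k
            if self.size == m * m:                   # completely full: restructure
                self._grow()
                m = 1 << self.k
            leaf, off = locate(self.size, self.k)
            if self.directory[leaf] is None:
                self.directory[leaf] = [None] * m    # allocate a leaf only when needed
            self.directory[leaf][off] = v
            self.size += 1

        def _grow(self):
            # double both the leaf size and the directory size, then re-insert the data
            old = [self.get(i) for i in range(self.size)]
            self.k += 1
            self.directory = [None] * (1 << self.k)
            self.size = 0
            for v in old:
                self.append(v)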
There are multiple alternatives for reducing size: when a hashed array tree is one eighth full, it can be restructured to a smaller, half-full hashed array tree; another option is only freeing unused leaf arrays, without resizing the leaves. Further optimizations include adding new leaves without resizing while growing the directory array as needed, possibly through geometric expansion. This eliminates the need for data copying completely, at the cost of making the wasted space O(n) with a small constant, and of only performing restructuring when a set threshold overhead is reached.[1]
Brodnik et al.[5]presented adynamic arrayalgorithm with a similar space wastage profile to hashed array trees. Brodnik's implementation retains previously allocated leaf arrays, with a more complicated address calculation function as compared to hashed array trees.
|
https://en.wikipedia.org/wiki/Hashed_array_tree
|
In computer science, atrie(/ˈtraɪ/,/ˈtriː/ⓘ), also known as adigital treeorprefix tree,[1]is a specializedsearch treedata structure used to store and retrieve strings from a dictionary or set. Unlike abinary search tree, nodes in a trie do not store their associated key. Instead, each node'spositionwithin the trie determines its associated key, with the connections between nodes defined by individualcharactersrather than the entire key.
Tries are particularly effective for tasks such as autocomplete, spell checking, and IP routing, offering advantages overhash tablesdue to their prefix-based organization and lack of hash collisions. Every child node shares a commonprefixwith its parent node, and the root node represents theempty string. While basic trie implementations can be memory-intensive, various optimization techniques such as compression and bitwise representations have been developed to improve their efficiency. A notable optimization is theradix tree, which provides more efficient prefix-based storage.
While tries commonly store character strings, they can be adapted to work with any ordered sequence of elements, such aspermutationsof digits or shapes. A notable variant is thebitwise trie, which uses individualbitsfrom fixed-length binary data (such asintegersormemory addresses) as keys.
The idea of a trie for representing a set of strings was first abstractly described byAxel Thuein 1912.[2][3]Tries were first described in a computer context by René de la Briandais in 1959.[4][3][5]: 336
The idea was independently described in 1960 byEdward Fredkin,[6]who coined the termtrie, pronouncing it/ˈtriː/(as "tree"), after the middle syllable ofretrieval.[7][8]However, other authors pronounce it/ˈtraɪ/(as "try"), in an attempt to distinguish it verbally from "tree".[7][8][3]
Tries are a form of string-indexed look-up data structure, which is used to store a dictionary list of words that can be searched on in a manner that allows for efficient generation ofcompletion lists.[9][10]: 1A prefix trie is anordered treedata structure used in the representation of a set of strings over a finite alphabet set, which allows efficient storage of words with common prefixes.[1]
Tries can be effective for string-searching applications such as predictive text, approximate string matching, and spell checking in comparison to binary search trees.[11][8][12]: 358 A trie can be seen as a tree-shaped deterministic finite automaton.[13]
Tries support various operations: insertion, deletion, and lookup of a string key. Tries are composed of nodes that contain links, which either point to other suffix child nodes ornull. As for every tree, each node but the root is pointed to by only one other node, called itsparent. Each node contains as many links as the number of characters in the applicablealphabet(although tries tend to have a substantial number of null links). In some cases, the alphabet used is simply that of thecharacter encoding—resulting in, for example, a size of 256 in the case of (unsigned)ASCII.[14]: 732
The null links within the children of a node emphasize the following characteristics:[14]: 734[5]: 336
A basic structure type of nodes in the trie is as follows; a Node may contain an optional Value, which is associated with each key and stored in the node for the last character of the string, known as a terminal node.
Searching for a value in a trie is guided by the characters in the search string key, as each node in the trie contains a corresponding link to each possible character in the given string. Thus, following the string within the trie yields the associated value for the given string key. A null link during the search indicates the inexistence of the key.[14]: 732-733
The following pseudocode implements the search procedure for a given stringkeyin a rooted triex.[15]: 135
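The pseudocode itself is not reproduced in this extract; a minimal Python sketch of the same lookup, assuming the array-of-links node layout described above (the names Node and find are illustrative), could be:

    class Node:
        """Trie node with one link per character of the alphabet (here 256, for ASCII)."""
        def __init__(self, alphabet_size=256):
            self.children = [None] * alphabet_size
            self.value = None                # non-null only at terminal nodes

    def find(x, key):
        """Follow key character by character from the root x; a null link means a miss."""
        for ch in key:
            if x is None:
                return None
            x = x.children[ord(ch)]
        return x.value if x is not None else None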
In the above pseudocode, x and key correspond to the pointer to the trie's root node and the string key respectively. The search operation, in a standard trie, takes O(dm) time, where m is the size of the string parameter key and d corresponds to the alphabet size.[16]: 754 Binary search trees, on the other hand, take O(m log n) time in the worst case, since the search depends on the height of the tree (log n) of the BST (in the case of balanced trees), where n and m are the number of keys and the length of the keys respectively.[12]: 358
The trie occupies less space in comparison with a BST in the case of a large number of short strings, since nodes share common initial string subsequences and store the keys implicitly.[12]: 358The terminal node of the tree contains a non-null value, and it is a searchhitif the associated value is found in the trie, and searchmissif it is not.[14]: 733
Insertion into a trie is guided by using the character sets as indexes into the children array until the last character of the string key is reached.[14]: 733-734 Each node in the trie corresponds to one call of the radix sorting routine, as the trie structure reflects the execution pattern of top-down radix sort.[15]: 135
If a null link is encountered prior to reaching the last character of the string key, a new node is created (line 3).[14]: 745The value of the terminal node is assigned to the input value; therefore, if the former was non-null at the time of insertion, it is substituted with the new value.
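The insertion pseudocode referenced above is likewise not reproduced here; a sketch consistent with the description, reusing the Node type from the search sketch, might be:

    def insert(x, key, value):
        """Insert value under key into the trie rooted at x, creating nodes as needed."""
        for ch in key:
            i = ord(ch)
            if x.children[i] is None:        # null link encountered: create a new node
                x.children[i] = Node()
            x = x.children[i]
        x.value = value                      # overwrite any previous value at the terminal node

    root = Node()
    insert(root, "sea", 1)
    insert(root, "shell", 2)
    print(find(root, "sea"), find(root, "she"))   # 1 None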
Deletion of a key–value pair from a trie involves finding the terminal node with the corresponding string key and marking the terminal indicator and value as false and null respectively.[14]: 740
The following is a recursive procedure for removing a string key from a rooted trie (x).
The procedure begins by examining the key; null denotes the arrival at a terminal node or the end of the string key. If the node is terminal and has no children, it is removed from the trie (line 14). However, reaching the end of the string key without the node being terminal indicates that the key does not exist, thus the procedure does not modify the trie. The recursion proceeds by incrementing key's index.
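The removal pseudocode is not shown in this extract; a recursive sketch matching the description (again reusing the Node type above, with illustrative names) could be:

    def delete(x, key, d=0):
        """Recursively remove key from the trie rooted at x; return the pruned subtree."""
        if x is None:
            return None
        if d == len(key):
            x.value = None                               # unmark the terminal node
        else:
            i = ord(key[d])
            x.children[i] = delete(x.children[i], key, d + 1)
        # drop this node if it stores no value and all of its links are null
        if x.value is None and all(c is None for c in x.children):
            return None
        return x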
A trie can be used to replace ahash table, over which it has the following advantages:[12]: 358
However, tries are less efficient than a hash table when the data is directly accessed on asecondary storage devicesuch as a hard disk drive that has higherrandom accesstime than themain memory.[6]Tries are also disadvantageous when the key value cannot be easily represented as string, such asfloating point numberswhere multiple representations are possible (e.g. 1 is equivalent to 1.0, +1.0, 1.00, etc.),[12]: 359however it can be unambiguously represented as abinary numberinIEEE 754, in comparison totwo's complementformat.[17]
Tries can be represented in several ways, corresponding to different trade-offs between memory use and speed of the operations.[5]: 341 Using a vector of pointers for representing a trie consumes enormous space; however, memory space can be reduced at the expense of running time if a singly linked list is used for each node vector, as most entries of the vector contain nil.[3]: 495
Techniques such asalphabet reductionmay reduce the large space requirements by reinterpreting the original string as a longer string over a smaller alphabet i.e. a string ofnbytes can alternatively be regarded as a string of2nfour-bit unitsand stored in a trie with 16 instead of 256 pointers per node. Although this can reduce memory usage by up to a factor of eight, lookups need to visit twice as many nodes in the worst case.[5]: 347–352Other techniques include storing a vector of 256 ASCII pointers as a bitmap of 256 bits representing ASCII alphabet, which reduces the size of individual nodes dramatically.[18]
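A tiny illustration of this alphabet reduction, re-reading a byte string as a string of 4-bit units so that each node needs 16 links instead of 256 (the helper name to_nibbles is illustrative):

    def to_nibbles(key: bytes):
        """Split each byte into its high and low 4-bit halves."""
        out = []
        for b in key:
            out.append(b >> 4)       # high nibble
            out.append(b & 0x0F)     # low nibble
        return out

    print(to_nibbles(b"ab"))         # [6, 1, 6, 2] for the bytes 0x61, 0x62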
Bitwise tries are used to address the enormous space requirement for trie nodes in naive simple pointer-vector implementations. Each character in the string key set is represented via individual bits, which are used to traverse the trie over a string key. The implementations for these types of trie use vectorized CPU instructions to find the first set bit in a fixed-length key input (e.g. GCC's __builtin_clz() intrinsic function). Accordingly, the set bit is used to index the first item, or child node, in the 32- or 64-entry based bitwise tree. Search then proceeds by testing each subsequent bit in the key.[19]
This procedure is alsocache-localandhighly parallelizabledue toregisterindependency, and thus performant onout-of-order executionCPUs.[19]
A radix tree, also known as a compressed trie, is a space-optimized variant of a trie in which any node with only one child gets merged with its parent; the elimination of branches of nodes with a single child results in better metrics in both space and time.[20][21]: 452 This works best when the trie remains static and the set of keys stored is very sparse within its representation space.[22]: 3–16
One more approach is to "pack" the trie: a space-efficient implementation of a sparse packed trie, applied to automatic hyphenation, in which the descendants of each node may be interleaved in memory.[8]
Patricia trees are a particular implementation of the compressed binary trie that uses the binary encoding of the string keys in its representation.[23][15]: 140 Every node in a Patricia tree contains an index, known as a "skip number", that stores the node's branching index to avoid empty subtrees during traversal.[15]: 140-141 A naive implementation of a trie consumes immense storage due to the larger number of leaf nodes caused by a sparse distribution of keys; Patricia trees can be efficient for such cases.[15]: 142[24]: 3
A representation of a Patricia tree is shown to the right. Each index value adjacent to the nodes represents the "skip number"—the index of the bit with which branching is to be decided.[24]: 3The skip number 1 at node 0 corresponds to the position 1 in the binary encoded ASCII where the leftmost bit differed in the key setX.[24]: 3-4The skip number is crucial for search, insertion, and deletion of nodes in the Patricia tree, and abit maskingoperation is performed during every iteration.[15]: 143
Trie data structures are commonly used in predictive text or autocomplete dictionaries, and approximate matching algorithms.[11] Tries enable faster searches and occupy less space, especially when the set contains a large number of short strings, and are thus used in spell checking, hyphenation applications and longest prefix match algorithms.[8][12]: 358 However, if storing dictionary words is all that is required (i.e. there is no need to store metadata associated with each word), a minimal deterministic acyclic finite state automaton (DAFSA) or radix tree would use less storage space than a trie. This is because DAFSAs and radix trees can compress identical branches from the trie which correspond to the same suffixes (or parts) of different words being stored. String dictionaries are also utilized in natural language processing, such as finding the lexicon of a text corpus.[25]: 73
Lexicographic sortingof a set of string keys can be implemented by building a trie for the given keys and traversing the tree inpre-orderfashion;[26]this is also a form ofradix sort.[27]Tries are also fundamental data structures forburstsort, which is notable for being the fastest string sorting algorithm as of 2007,[28]accomplished by its efficient use of CPUcache.[29]
A special kind of trie, called asuffix tree, can be used to index allsuffixesin a text to carry out fast full-text searches.[30]
A specialized kind of trie, called a compressed trie, is used in web search engines for storing the indexes: a collection of all searchable words.[31] Each terminal node is associated with a list of URLs, called an occurrence list, to pages that match the keyword. The trie is stored in the main memory, whereas the occurrence lists are kept in external storage, frequently in large clusters, or the in-memory index points to documents stored in an external location.[32]
Tries are used in bioinformatics, notably in sequence alignment software applications such as BLAST, which indexes all the different substrings of length k (called k-mers) of a text by storing the positions of their occurrences in a compressed trie over sequence databases.[25]: 75
Compressed variants of tries, such as databases for managingForwarding Information Base(FIB), are used in storingIP address prefixeswithinroutersandbridgesfor prefix-based lookup to resolvemask-basedoperations inIP routing.[25]: 75
|
https://en.wikipedia.org/wiki/Prefix_tree
|
Indistributed data storage, aP-Gridis a self-organizing structuredpeer-to-peersystem, which can accommodate arbitrary key distributions (and hence support lexicographic key ordering and range queries), still providing storageload-balancingand efficient search by using randomized routing.
P-Grid abstracts a trie and resolves queries based on prefix matching. The actual topology has no hierarchy. Queries are resolved by matching prefixes. This also determines the choice of routing table entries. Each peer, for each level of the trie, autonomously maintains routing entries chosen randomly from the complementary sub-trees.[2] In fact, multiple entries are maintained for each level at each peer to provide fault tolerance (as well as, potentially, for query-load management). For diverse reasons including fault tolerance and load balancing, multiple peers are responsible for each leaf node in the P-Grid tree. These are called replicas. The replica peers maintain an independent replica sub-network and use gossip-based communication to keep the replica group up to date.[3] The redundancy in both the replication of key-space partitions and the routing network together is called structural replication. The figure above shows how a query is resolved by forwarding it based on prefix matching.[citation needed]
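A conceptual sketch of this prefix routing is given below; the field names and the way routing entries are chosen are simplifying assumptions for illustration, not P-Grid's actual data structures:

    def route(peer_path, routing_table, key):
        """Return the peer to forward key to, or None if this peer is responsible.

        routing_table[l] holds a peer whose path agrees with peer_path on the
        first l bits and differs at bit l (the complementary subtree at level l).
        """
        for level, (own_bit, key_bit) in enumerate(zip(peer_path, key)):
            if own_bit != key_bit:
                # the key belongs to the complementary subtree at this level
                return routing_table[level]
        return None  # the key falls under this peer's own partition

    # Example: a peer responsible for the prefix "00"
    table = {0: "peer-in-1xx", 1: "peer-in-01x"}
    print(route("00", table, "101"))   # forwarded to "peer-in-1xx"
    print(route("00", table, "001"))   # None: handled locally (or by its replicas)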
P-Grid partitions the key-space at a granularity adaptive to the load at that part of the key-space. Consequently, it is possible to realize a P-Grid overlay network in which each peer has a similar storage load even for non-uniform load distributions. This network probably provides search of keys as efficient as that of traditional distributed hash tables (DHTs). Note that, in contrast to P-Grid, DHTs work efficiently only for uniform load distributions.[4]
Hence we can use a lexicographic order preserving function to generate the keys, and still realize a load-balanced P-Grid network which supports efficient search of exact keys. Moreover, because of the preservation of lexicographic ordering, range queries can be done efficiently and precisely on P-Grid. The trie-structure of P-Grid allows different range query strategies, processed serially or in parallel, trading off message overheads and query resolution latency.[5]Simple vector-based data storage architectural frameworks are also subject to variable query limitations within the P-Grid environment.[6]
|
https://en.wikipedia.org/wiki/P-Grid
|
Incomputer science, theCommentz-Walter algorithmis astring searching algorithminvented byBeate Commentz-Walter.[1]Like theAho–Corasick string matching algorithm, it can search for multiple patterns at once. It combines ideas from Aho–Corasick with the fast matching of theBoyer–Moore string-search algorithm. For a text of lengthnand maximum pattern length ofm, its worst-case running time isO(mn), though the average case is often much better.[2]
GNUgreponce implemented a string matching algorithm very similar to Commentz-Walter.[3]
The paper on the algorithm was first published byBeate Commentz-Walterin 1979 through the Saarland University and typed by "R. Scherner".[1]The paper detailed two differing algorithms she claimed combined the idea of theAho-CorasickandBoyer-Moorealgorithms, which she called algorithms B and B1. The paper mostly focuses on algorithm B, however.
The Commentz-Walter algorithm combines two known algorithms in order to attempt to better address the multi-pattern matching problem. These two algorithms are theBoyer-Moore, which addresses single pattern matching using filtering, and theAho-Corasick. To do this, the algorithm implements a suffix automaton to search through patterns within an input string, while also using reverse patterns, unlike in theAho-Corasick.[4]
Commentz-Walter has two phases it must go through, these being a pre-computing phase and a matching phase. For the first phase, the Commentz-Walter algorithm uses a reversed pattern to build a pattern tree, this is considered the pre-computing phase. The second phase, known as the matching phase, takes into account the other two algorithms. Using theBoyer-Moore’s technique of shifting and theAho-Corasick's technique of finite automata, the Commentz-Walter algorithm can begin matching.[4]
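The pre-computing phase can be illustrated with a short sketch that builds a trie of the reversed patterns; the shift tables that Commentz-Walter also pre-computes are omitted here for brevity, and the helper names are illustrative:

    def build_reversed_trie(patterns):
        """Build a trie over the reversed patterns (the pattern tree of the pre-computing phase)."""
        root = {"children": {}, "word": None}
        for p in patterns:
            node = root
            for ch in reversed(p):
                node = node["children"].setdefault(ch, {"children": {}, "word": None})
            node["word"] = p          # a complete pattern ends at this node
        return root

    trie = build_reversed_trie(["cacbaa", "acb", "aba", "acbab", "ccbab"])
    print(sorted(trie["children"].keys()))   # last characters of the patterns: ['a', 'b']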
The Commentz-Walter algorithm scans backwards through an input string, checking for a mismatch. If and when the algorithm does find a mismatch, it will already know some of the characters that are matches, and it then uses this information as an index. Using the index, the algorithm checks the pre-computed table to find a distance that it must shift; after this, the algorithm once more begins another matching attempt.
Comparing the Aho-Corasick algorithm to the Commentz-Walter algorithm in terms of time complexity, Aho-Corasick is considered linear, O(m+n+k), where k is the number of matches, while Commentz-Walter may be considered quadratic, O(mn). The reason for this lies in the fact that Commentz-Walter was developed by adding the shifts of the Boyer–Moore string-search algorithm to the Aho-Corasick, thus moving its complexity from linear to quadratic.
According to a study done in "The Journal of National Science Foundation of Sri Lanka 46", Commentz-Walter seems to be generally faster than the Aho–Corasick string matching algorithm. This advantage, according to the journal, appears only when using long patterns. However, the journal does state that there is no critical analysis of this statement and that there is a lack of general agreement on the performance of the algorithm.[5]
As seen in a visualization of the algorithm’s running time done in a study by “The International Journal of Advanced Computer Science and Information Technology” the performance of the algorithm increased linearly as the shortest pattern within the pattern set increased.[4]
In the original Commentz-Walter paper, an alternative algorithm was also created. This algorithm, known as B1, operates similarly to the main Commentz-Walter algorithm with the only difference being in the way the pattern tree is used during the scanning phase.
The paper also claims this algorithm performs better at the cost of increasing the running time and space of both the preprocessing phase and search phase. This algorithm has not been formally tested in other studies however, so its actual performance is unknown.[1]
|
https://en.wikipedia.org/wiki/Commentz-Walter_algorithm
|
Inmathematics, adynamical systemis a system in which afunctiondescribes thetimedependence of apointin anambient space, such as in aparametric curve. Examples include themathematical modelsthat describe the swinging of a clockpendulum,the flow of water in a pipe, therandom motion of particles in the air, andthe number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such asordinary differential equationsandergodic theoryby allowing different choices of the space and how time is measured.[citation needed]Time can be measured by integers, byrealorcomplex numbersor can be a more general algebraic object, losing the memory of its physical origin, and the space may be amanifoldor simply aset, without the need of asmoothspace-time structure defined on it.
At any given time, a dynamical system has astaterepresenting a point in an appropriatestate space. This state is often given by atupleofreal numbersor by avectorin a geometrical manifold. Theevolution ruleof the dynamical system is a function that describes what future states follow from the current state. Often the function isdeterministic, that is, for a given time interval only one future state follows from the current state.[1][2]However, some systems arestochastic, in that random events also affect the evolution of the state variables.
The study of dynamical systems is the focus ofdynamical systems theory, which has applications to a wide variety of fields such as mathematics, physics,[3][4]biology,[5]chemistry,engineering,[6]economics,[7]history, andmedicine. Dynamical systems are a fundamental part ofchaos theory,logistic mapdynamics,bifurcation theory, theself-assemblyandself-organizationprocesses, and theedge of chaosconcept.
The concept of a dynamical system has its origins inNewtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either adifferential equation,difference equationor othertime scale.) To determine the state for all future times requires iterating the relation many times—each advancing time a small step. The iteration procedure is referred to assolving the systemorintegrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as atrajectoryororbit.
Before the advent ofcomputers, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system.
For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because:
Many people regard the French mathematician Henri Poincaré as the founder of dynamical systems.[8] Poincaré published two now-classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic behavior, and so on). These papers included the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state.
Aleksandr Lyapunovdeveloped many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamical system.
In 1913,George David Birkhoffproved Poincaré's "Last Geometric Theorem", a special case of thethree-body problem, a result that made him world-famous. In 1927, he published hisDynamical Systems. Birkhoff's most durable result has been his 1931 discovery of what is now called theergodic theorem. Combining insights fromphysicson theergodic hypothesiswithmeasure theory, this theorem solved, at least in principle, a fundamental problem ofstatistical mechanics. The ergodic theorem has also had repercussions for dynamics.
Stephen Smalemade significant advances as well. His first contribution was theSmale horseshoethat jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others.
Oleksandr Mykolaiovych SharkovskydevelopedSharkovsky's theoremon the periods ofdiscrete dynamical systemsin 1964. One of the implications of the theorem is that if a discrete dynamical system on thereal linehas aperiodic pointof period 3, then it must have periodic points of every other period.
In the late 20th century the dynamical system perspective to partial differential equations started gaining popularity. Palestinian mechanical engineerAli H. Nayfehappliednonlinear dynamicsinmechanicalandengineeringsystems.[9]His pioneering work in applied nonlinear dynamics has been influential in the construction and maintenance ofmachinesandstructuresthat are common in daily life, such asships,cranes,bridges,buildings,skyscrapers,jet engines,rocket engines,aircraftandspacecraft.[10]
In the most general sense,[11][12]adynamical systemis atuple(T,X, Φ) whereTis amonoid, written additively,Xis a non-emptysetand Φ is afunction
with
and for anyxinX:
for t1, t2 + t1 ∈ I(x) and t2 ∈ I(Φ(t1, x)), where we have defined the set I(x) := {t ∈ T : (t, x) ∈ U} for any x in X.
In particular, in the case that U = T × X, we have for every x in X that I(x) = T, and thus Φ defines a monoid action of T on X.
The function Φ(t,x) is called theevolution functionof the dynamical system: it associates to every pointxin the setXa unique image, depending on the variablet, called theevolution parameter.Xis calledphase spaceorstate space, while the variablexrepresents aninitial stateof the system.
We often write
if we take one of the variables as constant. The function
is called theflowthroughxand itsgraphis called thetrajectorythroughx. The set
is called theorbitthroughx.
The orbit throughxis theimageof the flow throughx.
A subsetSof the state spaceXis called Φ-invariantif for allxinSand alltinT
Thus, in particular, ifSis Φ-invariant,I(x)=T{\displaystyle I(x)=T}for allxinS. That is, the flow throughxmust be defined for all time for every element ofS.
More commonly there are two classes of definitions for a dynamical system: one is motivated byordinary differential equationsand is geometrical in flavor; and the other is motivated byergodic theoryand ismeasure theoreticalin flavor.
In the geometrical definition, a dynamical system is the tuple ⟨T, M, f⟩. T is the domain for time; there are many choices, usually the reals or the integers, possibly restricted to be non-negative. M is a manifold, i.e. locally a Banach space or Euclidean space, or in the discrete case a graph. f is an evolution rule t → f^t (with t ∈ T) such that f^t is a diffeomorphism of the manifold to itself. So f is a "smooth" mapping of the time domain T into the space of diffeomorphisms of the manifold to itself. In other terms, f(t) is a diffeomorphism for every time t in the domain T.
Areal dynamical system,real-time dynamical system,continuous timedynamical system, orflowis a tuple (T,M, Φ) withTanopen intervalin thereal numbersR,Mamanifoldlocallydiffeomorphicto aBanach space, and Φ acontinuous function. If Φ iscontinuously differentiablewe say the system is adifferentiable dynamical system. If the manifoldMis locally diffeomorphic toRn, the dynamical system isfinite-dimensional; if not, the dynamical system isinfinite-dimensional. This does not assume asymplectic structure. WhenTis taken to be the reals, the dynamical system is calledglobalor aflow; and ifTis restricted to the non-negative reals, then the dynamical system is asemi-flow.
Adiscrete dynamical system,discrete-timedynamical systemis a tuple (T,M, Φ), whereMis amanifoldlocally diffeomorphic to aBanach space, and Φ is a function. WhenTis taken to be the integers, it is acascadeor amap. IfTis restricted to the non-negative integers we call the system asemi-cascade.[13]
Acellular automatonis a tuple (T,M, Φ), withTalatticesuch as theintegersor a higher-dimensionalinteger grid,Mis a set of functions from an integer lattice (again, with one or more dimensions) to a finite set, and Φ a (locally defined) evolution function. As suchcellular automataare dynamical systems. The lattice inMrepresents the "space" lattice, while the one inTrepresents the "time" lattice.
Dynamical systems are usually defined over a single independent variable, thought of as time. A more general class of systems are defined over multiple independent variables and are therefore calledmultidimensional systems. Such systems are useful for modeling, for example,image processing.
Given a global dynamical system (R,X, Φ) on alocally compactandHausdorfftopological spaceX, it is often useful to study the continuous extension Φ* of Φ to theone-point compactificationX*ofX. Although we lose the differential structure of the original system we can now use compactness arguments to analyze the new system (R,X*, Φ*).
In compact dynamical systems thelimit setof any orbit isnon-empty,compactandsimply connected.
A dynamical system may be defined formally as a measure-preserving transformation of ameasure space, the triplet (T, (X, Σ,μ), Φ). Here,Tis a monoid (usually the non-negative integers),Xis aset, and (X, Σ,μ) is aprobability space, meaning that Σ is asigma-algebraonXand μ is a finitemeasureon (X, Σ). A map Φ:X→Xis said to beΣ-measurableif and only if, for every σ in Σ, one hasΦ−1σ∈Σ{\displaystyle \Phi ^{-1}\sigma \in \Sigma }. A map Φ is said topreserve the measureif and only if, for everyσin Σ, one hasμ(Φ−1σ)=μ(σ){\displaystyle \mu (\Phi ^{-1}\sigma )=\mu (\sigma )}. Combining the above, a map Φ is said to be ameasure-preserving transformation ofX, if it is a map fromXto itself, it is Σ-measurable, and is measure-preserving. The triplet (T, (X, Σ,μ), Φ), for such a Φ, is then defined to be adynamical system.
The map Φ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems theiteratesΦn=Φ∘Φ∘⋯∘Φ{\displaystyle \Phi ^{n}=\Phi \circ \Phi \circ \dots \circ \Phi }for every integernare studied. For continuous dynamical systems, the map Φ is understood to be a finite time evolution map and the construction is more complicated.
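As a small illustration of such iterates for a discrete-time system, one can compute Φ^n by repeatedly applying the map; the logistic map mentioned earlier serves as a standard example (parameter values chosen only for illustration):

    def iterate(phi, x, n):
        """Compute the n-th iterate phi^n applied to x."""
        for _ in range(n):
            x = phi(x)
        return x

    r = 3.2
    logistic = lambda x: r * x * (1 - x)      # the logistic map on [0, 1]
    print(iterate(logistic, 0.4, 100))        # settles near a period-2 orbit for r = 3.2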
The measure theoretical definition assumes the existence of a measure-preserving transformation. Many different invariant measures can be associated to any one evolution rule. If the dynamical system is given by a system of differential equations the appropriate measure must be determined. This makes it difficult to develop ergodic theory starting from differential equations, so it becomes convenient to have a dynamical systems-motivated definition within ergodic theory that side-steps the choice of measure and assumes the choice has been made. A simple construction (sometimes called theKrylov–Bogolyubov theorem) shows that for a large class of systems it is always possible to construct a measure so as to make the evolution rule of the dynamical system a measure-preserving transformation. In the construction a given measure of the state space is summed for all future points of a trajectory, assuring the invariance.
Some systems have a natural measure, such as theLiouville measureinHamiltonian systems, chosen over other invariant measures, such as the measures supported on periodic orbits of the Hamiltonian system. For chaoticdissipative systemsthe choice of invariant measure is technically more challenging. The measure needs to be supported on theattractor, but attractors have zeroLebesgue measureand the invariant measures must be singular with respect to the Lebesgue measure. A small region of phase space shrinks under time evolution.
For hyperbolic dynamical systems, theSinai–Ruelle–Bowen measuresappear to be the natural choice. They are constructed on the geometrical structure ofstable and unstable manifoldsof the dynamical system; they behave physically under small perturbations; and they explain many of the observed statistics of hyperbolic systems.
The concept of evolution in time is central to the theory of dynamical systems as seen in the previous sections: the basic reason for this fact is that the starting motivation of the theory was the study of the time behavior of classical mechanical systems. But a system of ordinary differential equations must be solved before it becomes a dynamical system. For example, consider an initial value problem such as the following:
where
There is no need for higher order derivatives in the equation, nor for the parametertinv(t,x), because these can be eliminated by considering systems of higher dimensions.
Depending on the properties of this vector field, the mechanical system is called
The solution can be found using standard ODE techniques and is denoted as the evolution function already introduced above
The dynamical system is then (T,M, Φ).
Some formal manipulation of the system ofdifferential equationsshown above gives a more general form of equations a dynamical system must satisfy
whereG:(T×M)M→C{\displaystyle {\mathfrak {G}}:{{(T\times M)}^{M}}\to \mathbf {C} }is afunctionalfrom the set of evolution functions to the field of the complex numbers.
This equation is useful when modeling mechanical systems with complicated constraints.
Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locallyBanach spaces—in which case the differential equations arepartial differential equations.
Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is theN-dimensional Euclidean space, so any point in phase space can be represented by a vector withNnumbers. The analysis of linear systems is possible because they satisfy asuperposition principle: ifu(t) andw(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so willu(t) +w(t).
For aflow, the vector field v(x) is anaffinefunction of the position in the phase space, that is,
withAa matrix,ba vector of numbers andxthe position vector. The solution to this system can be found by using the superposition principle (linearity).
The case b ≠ 0 with A = 0 is just a straight line in the direction of b: x(t) = x0 + bt.
Whenbis zero andA≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, ifx0= 0, then the orbit remains there.
For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x0, x(t) = exp(At) x0.
Whenb= 0, theeigenvaluesofAdetermine the structure of the phase space. From the eigenvalues and theeigenvectorsofAit is possible to determine if an initial point will converge or diverge to the equilibrium point at the origin.
The distance between two different initial conditions in the caseA≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions forchaotic behavior.
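A short hedged Python sketch (the matrix and initial point are arbitrary illustrations) of solving ẋ = Ax with b = 0 via the matrix exponential and reading off convergence from the eigenvalues:

import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, -2.0],
              [ 2.0, -1.0]])        # eigenvalues -1 +/- 2i: trajectories spiral into the origin
x0 = np.array([1.0, 0.5])

def flow(t):
    # x(t) = exp(A t) x0 solves dx/dt = A x with x(0) = x0.
    return expm(A * t) @ x0

for t in (0.0, 0.5, 1.0, 2.0):
    print(t, flow(t))

print(np.linalg.eigvals(A))         # negative real parts => convergence to the equilibrium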
A discrete-time, affine dynamical system has the form of a matrix difference equation: x_{n+1} = A x_n + b,
with A a matrix and b a vector. As in the continuous case, the change of coordinates x → x + (1 − A)^{-1} b removes the term b from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are of the linear system A^n x0.
The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map.
As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space. For example, if u1 is an eigenvector of A, with a real eigenvalue smaller than one, then the straight line given by the points along α u1, with α ∈ R, is an invariant curve of the map. Points on this straight line run into the fixed point.
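A minimal hedged sketch (the matrix and vector below are arbitrary illustrations) of iterating an affine map x_{n+1} = A x_n + b: when every eigenvalue of A lies inside the unit circle, the orbit converges to the fixed point (I − A)^{-1} b:

import numpy as np

A = np.array([[0.5, 0.1],
              [0.0, 0.8]])          # eigenvalues 0.5 and 0.8, both inside the unit circle
b = np.array([1.0, -0.5])

x_star = np.linalg.solve(np.eye(2) - A, b)   # fixed point: (I - A) x* = b

x = np.array([10.0, 10.0])
for _ in range(100):
    x = A @ x + b                    # one step of the affine map
print(x)
print(x_star)                        # the orbit has converged to the fixed point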
There are also manyother discrete dynamical systems.
The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): asingular pointof the vector field (a point wherev(x) = 0) will remain a singular point under smooth transformations; aperiodic orbitis a loop in phase space and smooth deformations of the phase space cannot alter it being a loop. It is in the neighborhood of singular points and periodic orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible.
A flow in most small patches of the phase space can be made very simple. Ifyis a point where the vector fieldv(y) ≠ 0, then there is a change of coordinates for a region aroundywhere the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem.
Therectification theoremsays that away fromsingular pointsthe dynamics of a point in a small patch is a straight line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase spaceMthe dynamical system isintegrable. In most cases the patch cannot be extended to the entire phase space. There may be singular points in the vector field (wherev(x) = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the next time the orbit loops around phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches.
In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit to the analysis of a map. Pick a pointx0in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular tov(x0). These points are aPoincaré sectionS(γ,x0), of the orbit. The flow now defines a map, thePoincaré mapF:S→S, for points starting inSand returning toS. Not all these points will take the same amount of time to come back, but the times will be close to the time it takesx0.
The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré mapF. By a translation, the point can be assumed to be atx= 0. The Taylor series of the map isF(x) =J·x+ O(x2), so a change of coordinateshcan only be expected to simplifyFto its linear part
This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. If λ1, ..., λν are the eigenvalues of J, they will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the form λi − Σ (multiples of other eigenvalues) occur in the denominator of the terms for the function h, the non-resonant condition is also known as the small divisor problem.
The results on the existence of a solution to the conjugation equation depend on the eigenvalues ofJand the degree of smoothness required fromh. AsJdoes not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues ofJare not in the unit circle, the dynamics near the fixed pointx0ofFis calledhyperbolicand when the eigenvalues are on the unit circle and complex, the dynamics is calledelliptic.
In the hyperbolic case, theHartman–Grobman theoremgives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear mapJ·x. The hyperbolic case is alsostructurally stable. Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will reflect in small changes in the position of the eigenvalues ofJin the complex plane, implying that the map is still hyperbolic.
TheKolmogorov–Arnold–Moser (KAM)theorem gives the behavior near an elliptic point.
When the evolution map Φt(or thevector fieldit is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter. Small changes may produce no qualitative changes in thephase spaceuntil a special valueμ0is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation.
Bifurcation theory considers a structure in phase space (typically afixed point, a periodic orbit, or an invarianttorus) and studies its behavior as a function of the parameterμ. At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems.
The bifurcations of a hyperbolic fixed pointx0of a system familyFμcan be characterized by theeigenvaluesof the first derivative of the systemDFμ(x0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues ofDFμon the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article onBifurcation theory.
Some bifurcations can lead to very complicated structures in phase space. For example, theRuelle–Takens scenariodescribes how a periodic orbit bifurcates into a torus and the torus into astrange attractor. In another example,Feigenbaum period-doublingdescribes how a stable periodic orbit goes through a series ofperiod-doubling bifurcations.
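As a hedged numerical illustration of the period-doubling cascade (using the logistic map; the parameter values, transient length, and rounding tolerance are arbitrary), one can count the number of distinct points on the attractor as the parameter is increased:

import numpy as np

def attractor_size(r, n_transient=10_000, n_keep=64):
    # Discard a long transient so the orbit settles onto the attractor,
    # then count the distinct points visited (rounded to merge numerical noise).
    x = 0.5
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    points = set()
    for _ in range(n_keep):
        x = r * x * (1.0 - x)
        points.add(round(x, 5))
    return len(points)

for r in (2.8, 3.2, 3.5, 3.55):      # expected attractor sizes: 1, 2, 4, 8
    print(r, attractor_size(r))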
In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). The flow takes points of a subsetAinto the points Φt(A) and invariance of the phase space means that
In theHamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow. The volume is said to be computed by theLiouville measure.
In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume of the energy shell, computed using the Liouville measure, is preserved under evolution.
For systems where the volume is preserved by the flow, Poincaré discovered therecurrence theorem: Assume the phase space has a finite Liouville volume and letFbe a phase space volume-preserving map andAa subset of the phase space. Then almost every point ofAreturns toAinfinitely often. The Poincaré recurrence theorem was used byZermeloto object toBoltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms.
One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called theergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a regionAis vol(A)/vol(Ω).
The ergodic hypothesis turned out not to be the essential property needed for the development ofstatistical mechanicsand a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems.Koopmanapproached the study of ergodic systems by the use offunctional analysis. An observableais a function that to each point of the phase space associates a number (say instantaneous pressure, or average height). The value of an observable can be computed at another time by using the evolution function φt. This introduces an operatorUt, thetransfer operator,
By studying the spectral properties of the linear operatorUit becomes possible to classify the ergodic properties of Φt. In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φtgets mapped into an infinite-dimensional linear problem involvingU.
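A very small hedged Python sketch of the composition rule behind the Koopman (transfer) operator in the discrete-time case, (U a)(x) = a(Φ(x)); the map and the observable here are arbitrary illustrations:

import numpy as np

def phi(x):
    # The (nonlinear) dynamics acting on the state x.
    return (2.0 * x) % 1.0

def koopman(a):
    # The operator U acts linearly on observables: (U a)(x) = a(phi(x)).
    return lambda x: a(phi(x))

a = lambda x: np.sin(2.0 * np.pi * x)   # an observable on the state space
Ua = koopman(a)

x = 0.3
print(Ua(x), a(phi(x)))                 # identical by construction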
The Liouville measure restricted to the energy surface Ω is the basis for the averages computed inequilibrium statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with theBoltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems.SRB measuresreplace the Boltzmann factor and they are defined on attractors of chaotic systems.
Simple nonlinear dynamical systems, includingpiecewise linearsystems, can exhibit strongly unpredictable behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This unpredictable behavior has been calledchaos.Hyperbolic systemsare precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems thetangent spacesperpendicular to an orbit can be decomposed into a combination of two parts: one with the points that converge towards the orbit (thestable manifold) and another of the points that diverge from the orbit (theunstable manifold).
This branch ofmathematicsdeals with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to asteady statein the long term, and if so, what are the possibleattractors?" or "Does the long-term behavior of the system depend on its initial condition?"
The chaotic behavior of complex systems is not the issue.Meteorologyhas been known for years to involve complex—even chaotic—behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. ThePomeau–Manneville scenarioof thelogistic mapand theFermi–Pasta–Ulam–Tsingou problemarose with just second-degree polynomials; thehorseshoe mapis piecewise linear.
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration,[14] meaning here that in these solutions the system will reach the value zero at some time, called an ending time, and then stay there forever after. This can occur only when system trajectories are not uniquely determined forwards and backwards in time by the dynamics, so solutions of finite duration imply a form of "backwards-in-time unpredictability" closely related to the forwards-in-time unpredictability of chaos. This behavior cannot happen for Lipschitz continuous differential equations, by the Picard–Lindelöf theorem. These solutions are non-Lipschitz functions at their ending times and cannot be analytic functions on the whole real line.
As an example, the equation:
admits the finite-duration solution:
that is zero fort≥2{\displaystyle t\geq 2}and is not Lipschitz continuous at its ending timet=2.{\displaystyle t=2.}
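A hedged numerical sketch of a finite-duration solution; the specific equation referred to above is not reproduced here, and the ODE below is a standard example of the phenomenon (assumed for illustration), chosen because its solution is likewise zero for t ≥ 2:

import numpy as np

# Assumed illustrative example: y'(t) = -sign(y) * sqrt(|y|), y(0) = 1.
# Its closed-form solution is y(t) = (1 - t/2)^2 for t <= 2 and y(t) = 0 for t >= 2,
# i.e. the trajectory reaches zero at the ending time t = 2 and stays there.

def rhs(y):
    return -np.sign(y) * np.sqrt(abs(y))

dt, t, y = 1e-4, 0.0, 1.0
while t < 3.0:                      # crude forward-Euler integration past the ending time
    y = max(y + dt * rhs(y), 0.0)   # clamp tiny negative overshoot from discretization
    t += dt

print(y)                            # essentially zero: the solution has "ended" at t = 2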
|
https://en.wikipedia.org/wiki/Discrete-time_dynamical_system
|
ABrownian bridgeis a continuous-timegaussian processB(t) whoseprobability distributionis theconditional probability distributionof a standardWiener processW(t) (a mathematical model ofBrownian motion) subject to the condition (when standardized) thatW(T) = 0, so that the process is pinned to the same value at botht= 0 andt=T. More precisely:
The expected value of the bridge at anyt{\displaystyle t}in the interval[0,T]{\displaystyle [0,T]}is zero, with variancet(T−t)T{\displaystyle {\frac {t(T-t)}{T}}}, implying that the most uncertainty is in the middle of the bridge, with zero uncertainty at the nodes. ThecovarianceofB(s) andB(t) ismin(s,t)−stT{\displaystyle \min(s,t)-{\frac {s\,t}{T}}}, ors(T−t)T{\displaystyle {\frac {s(T-t)}{T}}}ifs<t{\displaystyle s<t}.
The increments in a Brownian bridge are not independent.
IfW(t){\textstyle W(t)}is a standard Wiener process (i.e., fort≥0{\textstyle t\geq 0},W(t){\textstyle W(t)}isnormally distributedwith expected value0{\textstyle 0}and variancet{\textstyle t}, and theincrements are stationary and independent), then
B(t) := W(t) − (t/T) W(T) is a Brownian bridge for t ∈ [0, T]. It is independent of W(T).[1]
Conversely, ifB(t){\textstyle B(t)}is a Brownian bridge fort∈[0,1]{\textstyle t\in [0,1]}andZ{\textstyle Z}is a standardnormalrandom variable independent ofB{\textstyle B}, then the process
is a Wiener process fort∈[0,1]{\textstyle t\in [0,1]}. More generally, a Wiener processW(t){\textstyle W(t)}fort∈[0,T]{\textstyle t\in [0,T]}can be decomposed into
Another representation of the Brownian bridge based on the Brownian motion is, fort∈[0,T]{\textstyle t\in [0,T]}
Conversely, fort∈[0,∞]{\textstyle t\in [0,\infty ]}
The Brownian bridge may also be represented as a Fourier series with stochastic coefficients, as
whereZ1,Z2,…{\displaystyle Z_{1},Z_{2},\ldots }areindependent identically distributedstandard normal random variables (see theKarhunen–Loève theorem).
A Brownian bridge is the result ofDonsker's theoremin the area ofempirical processes. It is also used in theKolmogorov–Smirnov testin the area ofstatistical inference.
LetK=supt∈[0,1]|B(t)|{\displaystyle K=\sup _{t\in [0,1]}|B(t)|}, for a Brownian bridge withT=1{\displaystyle T=1}; then thecumulative distribution functionofK{\textstyle K}is given by[2]Pr(K≤x)=1−2∑k=1∞(−1)k−1e−2k2x2=2πx∑k=1∞e−(2k−1)2π2/(8x2).{\displaystyle \operatorname {Pr} (K\leq x)=1-2\sum _{k=1}^{\infty }(-1)^{k-1}e^{-2k^{2}x^{2}}={\frac {\sqrt {2\pi }}{x}}\sum _{k=1}^{\infty }e^{-(2k-1)^{2}\pi ^{2}/(8x^{2})}.}
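A hedged Monte Carlo sketch (grid size and sample counts are arbitrary) that builds bridges on a grid as B(t) = W(t) − t W(1) and compares the empirical distribution of K = sup|B(t)| with the series above:

import numpy as np

rng = np.random.default_rng(1)

def brownian_bridge(n_steps):
    # Discretized Wiener path on [0, 1], pinned at both ends: B(t) = W(t) - t W(1).
    dt = 1.0 / n_steps
    w = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))))
    t = np.linspace(0.0, 1.0, n_steps + 1)
    return w - t * w[-1]

def kolmogorov_cdf(x, terms=100):
    # Partial sum of the series Pr(K <= x) = 1 - 2 * sum (-1)^{k-1} exp(-2 k^2 x^2).
    k = np.arange(1, terms + 1)
    return 1.0 - 2.0 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * k**2 * x**2))

sups = np.array([np.abs(brownian_bridge(1_000)).max() for _ in range(2_000)])
for x in (0.5, 1.0, 1.5):
    print(x, (sups <= x).mean(), kolmogorov_cdf(x))   # empirical vs. series CDF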
The Brownian bridge can be "split" by finding the last zeroτ−{\displaystyle \tau _{-}}before the midpoint, and the first zeroτ+{\displaystyle \tau _{+}}after, forming a (scaled) bridge over[0,τ−]{\displaystyle [0,\tau _{-}]}, anexcursionover[τ−,τ+]{\displaystyle [\tau _{-},\tau _{+}]}, and another bridge over[τ+,1]{\displaystyle [\tau _{+},1]}. The joint pdf ofτ−,τ+{\displaystyle \tau _{-},\tau _{+}}is given by
which can be conditionally sampled as
whereU1,U2{\displaystyle U_{1},U_{2}}are uniformly distributed random variables over (0,1).
A standard Wiener process satisfiesW(0) = 0 and is therefore "tied down" to the origin, but other points are not restricted. In a Brownian bridge process on the other hand, not only isB(0) = 0 but we also require thatB(T) = 0, that is the process is "tied down" att=Tas well. Just as a literal bridge is supported by pylons at both ends, a Brownian Bridge is required to satisfy conditions at both ends of the interval [0,T]. (In a slight generalization, one sometimes requiresB(t1) =aandB(t2) =bwheret1,t2,aandbare known constants.)
Suppose we have generated a number of pointsW(0),W(1),W(2),W(3), etc. of a Wiener process path by computer simulation. It is now desired to fill in additional points in the interval [0,T], that is to interpolate between the already generated points. The solution is to use a collection ofTBrownian bridges, the first of which is required to go through the valuesW(0) andW(1), the second throughW(1) andW(2) and so on until theTth goes throughW(T-1) andW(T).
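A minimal hedged sketch of this interpolation (step counts and the random seed are illustrative): each unit interval between already simulated values is refined by adding an independent standard Brownian bridge to the straight line joining the two endpoints:

import numpy as np

rng = np.random.default_rng(2)

T = 5
w_coarse = np.concatenate(([0.0], np.cumsum(rng.normal(size=T))))   # W(0), W(1), ..., W(T)

def refine(w0, w1, n_sub):
    # Conditional path between two known values one time unit apart:
    # straight line from w0 to w1 plus a standard Brownian bridge pinned to zero at both ends.
    dt = 1.0 / n_sub
    w = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_sub))))
    t = np.linspace(0.0, 1.0, n_sub + 1)
    bridge = w - t * w[-1]
    return w0 + t * (w1 - w0) + bridge

segments = [refine(w_coarse[k], w_coarse[k + 1], 10)[:-1] for k in range(T)]
fine_path = np.concatenate(segments + [w_coarse[-1:]])
print(w_coarse)
print(fine_path[::10])              # the refined path passes through the original points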
For the general case when W(t1) = a and W(t2) = b, the distribution of B at time t ∈ (t1, t2) is normal, with mean a + ((t − t1)/(t2 − t1))(b − a)
and variance ((t2 − t)(t − t1))/(t2 − t1),
and the covariance between B(s) and B(t), with s < t, is ((t2 − t)(s − t1))/(t2 − t1).
|
https://en.wikipedia.org/wiki/Brownian_bridge
|
Instatisticsand inprobability theory,distance correlationordistance covarianceis a measure ofdependencebetween two pairedrandom vectorsof arbitrary, not necessarily equal,dimension. The population distance correlation coefficient is zero if and only if the random vectors areindependent. Thus, distance correlation measures both linear and nonlinear association between two random variables or random vectors. This is in contrast toPearson's correlation, which can only detect linear association between tworandom variables.
Distance correlation can be used to perform astatistical testof dependence with apermutation test. One first computes the distance correlation (involving the re-centering of Euclidean distance matrices) between two random vectors, and then compares this value to the distance correlations of many shuffles of the data.
The classical measure of dependence, thePearson correlation coefficient,[1]is mainly sensitive to a linear relationship between two variables. Distance correlation was introduced in 2005 byGábor J. Székelyin several lectures to address this deficiency of Pearson'scorrelation, namely that it can easily be zero for dependent variables. Correlation = 0 (uncorrelatedness) does not imply independence while distance correlation = 0 does imply independence. The first results on distance correlation were published in 2007 and 2009.[2][3]It was proved that distance covariance is the same as the Brownian covariance.[3]These measures are examples ofenergy distances.
The distance correlation is derived from a number of other quantities that are used in its specification, specifically:distance variance,distance standard deviation, anddistance covariance. These quantities take the same roles as the ordinarymomentswith corresponding names in the specification of thePearson product-moment correlation coefficient.
Let us start with the definition of thesample distance covariance. Let (Xk,Yk),k= 1, 2, ...,nbe astatistical samplefrom a pair of real valued or vector valued random variables (X,Y). First, compute thenbyndistance matrices(aj,k) and (bj,k) containing all pairwisedistances
where ||⋅ ||denotesEuclidean norm. Then take all doubly centered distances
wherea¯j⋅{\displaystyle \textstyle {\overline {a}}_{j\cdot }}is thej-th row mean,a¯⋅k{\displaystyle \textstyle {\overline {a}}_{\cdot k}}is thek-th column mean, anda¯⋅⋅{\displaystyle \textstyle {\overline {a}}_{\cdot \cdot }}is thegrand meanof the distance matrix of theXsample. The notation is similar for thebvalues. (In the matrices of centered distances (Aj,k) and (Bj,k) all rows and all columns sum to zero.) The squaredsample distance covariance(a scalar) is simply the arithmetic average of the productsAj,kBj,k:
The statistic T_n = n dCov_n^2(X, Y) determines a consistent multivariate test of independence of random vectors in arbitrary dimensions. For an implementation see the dcov.test function in the energy package for R.[4]
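A hedged Python sketch of the double-centering recipe above (independent of the R energy package mentioned in the article; sample sizes are illustrative). It computes the biased sample distance correlation and shows that it picks up a purely nonlinear dependence:

import numpy as np

def pairwise_dist(z):
    # n x n matrix of Euclidean distances between the rows (samples) of z.
    z = np.asarray(z, dtype=float).reshape(len(z), -1)
    return np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)

def double_center(d):
    # A_{jk} = a_{jk} - row mean - column mean + grand mean
    return d - d.mean(axis=0, keepdims=True) - d.mean(axis=1, keepdims=True) + d.mean()

def distance_correlation(x, y):
    A = double_center(pairwise_dist(x))
    B = double_center(pairwise_dist(y))
    dcov2 = (A * B).mean()                          # squared sample distance covariance
    dvar2_x, dvar2_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar2_x * dvar2_y))

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print(distance_correlation(x, x ** 2))                 # nonlinear dependence: clearly positive
print(distance_correlation(x, rng.normal(size=500)))   # independent: close to zero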
The population value ofdistance covariancecan be defined along the same lines. LetXbe a random variable that takes values in ap-dimensional Euclidean space with probability distributionμand letYbe a random variable that takes values in aq-dimensional Euclidean space with probability distributionν, and suppose thatXandYhave finite expectations. Write
Finally, define the population value of squared distance covariance ofXandYas
One can show that this is equivalent to the following definition:
whereEdenotes expected value, and(X,Y),{\displaystyle \textstyle (X,Y),}(X′,Y′),{\displaystyle \textstyle (X',Y'),}and(X″,Y″){\displaystyle \textstyle (X'',Y'')}are independent and identically distributed. The primed random variables(X′,Y′){\displaystyle \textstyle (X',Y')}and(X″,Y″){\displaystyle \textstyle (X'',Y'')}denote
independent and identically distributed (iid) copies of the variablesX{\displaystyle X}andY{\displaystyle Y}and are similarly iid.[5]Distance covariance can be expressed in terms of the classical Pearson'scovariance,cov, as follows:
This identity shows that the distance covariance is not the same as the covariance of distances,cov(‖X−X'‖, ‖Y−Y'‖). This can be zero even ifXandYare not independent.
Alternatively, the distance covariance can be defined as the weightedL2normof the distance between the jointcharacteristic functionof the random variables and the product of their marginal characteristic functions:[6]
whereφX,Y(s,t){\displaystyle \varphi _{X,Y}(s,t)},φX(s){\displaystyle \varphi _{X}(s)}, andφY(t){\displaystyle \varphi _{Y}(t)}are thecharacteristic functionsof(X,Y),X, andY, respectively,p,qdenote the Euclidean dimension ofXandY, and thus ofsandt, andcp,cqare constants. The weight function(cpcq|s|p1+p|t|q1+q)−1{\displaystyle ({c_{p}c_{q}}{|s|_{p}^{1+p}|t|_{q}^{1+q}})^{-1}}is chosen to produce a scale equivariant and rotationinvariant measurethat doesn't go to zero for dependent variables.[6][7]One interpretation of the characteristic function definition is that the variableseisXandeitYare cyclic representations ofXandYwith different periods given bysandt, and the expressionϕX,Y(s,t) −ϕX(s)ϕY(t)in the numerator of the characteristic function definition of distance covariance is simply the classical covariance ofeisXandeitY. The characteristic function definition clearly shows that
dCov2(X,Y) = 0 if and only ifXandYare independent.
Thedistance varianceis a special case of distance covariance when the two variables are identical. The population value of distance variance is the square root of
whereX{\displaystyle X},X′{\displaystyle X'}, andX″{\displaystyle X''}areindependent and identically distributed random variables,E{\displaystyle \operatorname {E} }denotes theexpected value, andf2(⋅)=(f(⋅))2{\displaystyle f^{2}(\cdot )=(f(\cdot ))^{2}}for functionf(⋅){\displaystyle f(\cdot )}, e.g.,E2[⋅]=(E[⋅])2{\displaystyle \operatorname {E} ^{2}[\cdot ]=(\operatorname {E} [\cdot ])^{2}}.
Thesample distance varianceis the square root of
which is a relative ofCorrado Gini'smean differenceintroduced in 1912 (but Gini did not work with centered distances).[8]
Thedistance standard deviationis the square root of thedistance variance.
Thedistance correlation[2][3]of two random variables is obtained by dividing theirdistance covarianceby the product of theirdistance standard deviations. The distance correlation is the square root of
and thesample distance correlationis defined by substituting the sample distance covariance and distance variances for the population coefficients above.
For easy computation of sample distance correlation see thedcorfunction in theenergypackage forR.[4]
This last property is the most important effect of working with centered distances.
The statisticdCovn2(X,Y){\displaystyle \operatorname {dCov} _{n}^{2}(X,Y)}is a biased estimator ofdCov2(X,Y){\displaystyle \operatorname {dCov} ^{2}(X,Y)}. Under independence of X and Y[9]
Anunbiased estimatorofdCov2(X,Y){\displaystyle \operatorname {dCov} ^{2}(X,Y)}is given by Székely and Rizzo.[10]
Equality holds in (iv) if and only if one of the random variablesXorYis a constant.
Distance covariance can be generalized to include powers of Euclidean distance. Define
Then for every0<α<2{\displaystyle 0<\alpha <2},X{\displaystyle X}andY{\displaystyle Y}are independent if and only ifdCov2(X,Y;α)=0{\displaystyle \operatorname {dCov} ^{2}(X,Y;\alpha )=0}. It is important to note that this characterization does not hold for exponentα=2{\displaystyle \alpha =2}; in this case for bivariate(X,Y){\displaystyle (X,Y)},dCor(X,Y;α=2){\displaystyle \operatorname {dCor} (X,Y;\alpha =2)}is a deterministic function of the Pearson correlation.[2]Ifak,ℓ{\displaystyle a_{k,\ell }}andbk,ℓ{\displaystyle b_{k,\ell }}areα{\displaystyle \alpha }powers of the corresponding distances,0<α≤2{\displaystyle 0<\alpha \leq 2}, thenα{\displaystyle \alpha }sample distance covariance can be defined as the nonnegative number for which
One can extenddCov{\displaystyle \operatorname {dCov} }tometric-space-valuedrandom variablesX{\displaystyle X}andY{\displaystyle Y}: IfX{\displaystyle X}has lawμ{\displaystyle \mu }in a metric space with metricd{\displaystyle d}, then defineaμ(x):=E[d(X,x)]{\displaystyle a_{\mu }(x):=\operatorname {E} [d(X,x)]},D(μ):=E[aμ(X)]{\displaystyle D(\mu ):=\operatorname {E} [a_{\mu }(X)]}, and (providedaμ{\displaystyle a_{\mu }}is finite, i.e.,X{\displaystyle X}has finite first moment),dμ(x,x′):=d(x,x′)−aμ(x)−aμ(x′)+D(μ){\displaystyle d_{\mu }(x,x'):=d(x,x')-a_{\mu }(x)-a_{\mu }(x')+D(\mu )}. Then ifY{\displaystyle Y}has lawν{\displaystyle \nu }(in a possibly different metric space with finite first moment), define
This is non-negative for all suchX,Y{\displaystyle X,Y}iff both metric spaces have negative type.[11]Here, a metric space(M,d){\displaystyle (M,d)}has negative type if(M,d1/2){\displaystyle (M,d^{1/2})}isisometricto a subset of aHilbert space.[12]If both metric spaces have strong negative type, thendCov2(X,Y)=0{\displaystyle \operatorname {dCov} ^{2}(X,Y)=0}iffX,Y{\displaystyle X,Y}are independent.[11]
The originaldistance covariancehas been defined as the square root ofdCov2(X,Y){\displaystyle \operatorname {dCov} ^{2}(X,Y)}, rather than the squared coefficient itself.dCov(X,Y){\displaystyle \operatorname {dCov} (X,Y)}has the property that it is theenergy distancebetween the joint distribution ofX,Y{\displaystyle \operatorname {X} ,Y}and the product of its marginals. Under this definition, however, the distance variance, rather than the distance standard deviation, is measured in the same units as theX{\displaystyle \operatorname {X} }distances.
Alternately, one could definedistance covarianceto be the square of the energy distance:dCov2(X,Y).{\displaystyle \operatorname {dCov} ^{2}(X,Y).}In this case, the distance standard deviation ofX{\displaystyle X}is measured in the same units asX{\displaystyle X}distance, and there exists an unbiased estimator for the population distance covariance.[10]
Under these alternate definitions, the distance correlation is also defined as the squaredCor2(X,Y){\displaystyle \operatorname {dCor} ^{2}(X,Y)}, rather than the square root.
Brownian covariance is motivated by generalization of the notion of covariance to stochastic processes. The square of the covariance of random variables X and Y can be written in the following form:
where E denotes theexpected valueand the prime denotes independent and identically distributed copies. We need the following generalization of this formula. If U(s), V(t) are arbitrary random processes defined for all real s and t then define the U-centered version of X by
whenever the subtracted conditional expected value exists and denote by YVthe V-centered version of Y.[3][13][14]The (U,V) covariance of (X,Y) is defined as the nonnegative number whose square is
whenever the right-hand side is nonnegative and finite. The most important example is when U and V are two-sided independentBrownian motions/Wiener processeswith expectation zero and covariance|s| + |t| − |s−t| = 2 min(s,t)(for nonnegative s, t only). (This is twice the covariance of the standard Wiener process; here the factor 2 simplifies the computations.) In this case the (U,V) covariance is calledBrownian covarianceand is denoted by
There is a surprising coincidence: The Brownian covariance is the same as the distance covariance:
and thusBrownian correlationis the same as distance correlation.
On the other hand, if we replace the Brownian motion with the deterministic identity functionidthen Covid(X,Y) is simply the absolute value of the classical Pearsoncovariance,
Other correlational metrics, including kernel-based correlational metrics (such as the Hilbert-Schmidt Independence Criterion or HSIC) can also detect linear and nonlinear interactions. Both distance correlation and kernel-based metrics can be used in methods such ascanonical correlation analysisandindependent component analysisto yield strongerstatistical power.
|
https://en.wikipedia.org/wiki/Brownian_covariance
|
Inphysics,Brownian dynamicsis a mathematical approach for describing thedynamicsof molecular systems in thediffusive regime. It is a simplified version ofLangevin dynamicsand corresponds to the limit where no average acceleration takes place. This approximation is also known asoverdampedLangevin dynamics or as Langevin dynamics withoutinertia.
In Brownian dynamics, the following equation of motion is used to describe the dynamics of astochastic systemwith coordinatesX=X(t){\displaystyle X=X(t)}:[1][2][3]
where:
InLangevin dynamics, the equation of motion using the same notation as above is as follows:[1][2][3]MX¨=−∇U(X)−ζX˙+2ζkBTR(t){\displaystyle M{\ddot {X}}=-\nabla U(X)-\zeta {\dot {X}}+{\sqrt {2\zeta k_{\text{B}}T}}R(t)}where:
The above equation may be rewritten as {\displaystyle \underbrace {M{\ddot {X}}} _{\text{inertial force}}+\underbrace {\nabla U(X)} _{\text{potential force}}+\underbrace {\zeta {\dot {X}}} _{\text{viscous force}}-\underbrace {{\sqrt {2\zeta k_{\text{B}}T}}R(t)} _{\text{random force}}=0.} In Brownian dynamics, the inertial force term M Ẍ(t) is so much smaller than the other three that it is considered negligible. In this case, the equation is approximately[1] ζ dX/dt = −∇U(X) + √(2 ζ k_B T) R(t).
For spherical particles of radius r in the limit of low Reynolds number, we can use the Stokes–Einstein relation. In this case, D = k_B T/ζ, and the equation reads: dX/dt = −(D/(k_B T)) ∇U(X) + √(2D) R(t).
When the magnitude of the friction tensor ζ increases, the damping effect of the viscous force becomes dominant relative to the inertial force, and the system transitions from the inertial to the diffusive (Brownian) regime. For this reason, Brownian dynamics is also known as overdamped Langevin dynamics or Langevin dynamics without inertia.
In 1978, Ermak and McCammon suggested an algorithm for efficiently computing Brownian dynamics with hydrodynamic interactions.[2]Hydrodynamic interactions occur when the particles interact indirectly by generating and reacting to local velocities in the solvent. For a system ofN{\displaystyle N}three-dimensional particle diffusing subject to a force vector F(X), the derived Brownian dynamics scheme becomes:[1]
whereDij{\displaystyle D_{ij}}is a diffusion matrix specifying hydrodynamic interactions, Oseen tensor[4]for example, in non-diagonal entries interacting between the target particlei{\displaystyle i}and the surrounding particlej{\displaystyle j},F{\displaystyle F}is the force exerted on the particlej{\displaystyle j}, andR(t){\displaystyle R(t)}is a Gaussian noise vector with zero mean and a standard deviation of2DΔt{\displaystyle {\sqrt {2D\Delta t}}}in each vector entry. The subscriptsi{\displaystyle i}andj{\displaystyle j}indicate the ID of the particles andN{\displaystyle N}refers to the total number of particles. This equation works for the dilute system where the near-field effect is ignored.
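A hedged, simplified Python sketch of Brownian dynamics time stepping: hydrodynamic interactions are ignored (the diffusion matrix is taken as a scalar D rather than the full D_ij of the Ermak–McCammon scheme), the potential is harmonic, and all parameter values are illustrative:

import numpy as np

rng = np.random.default_rng(0)

kBT, zeta, k = 1.0, 1.0, 2.0          # thermal energy, friction, spring constant (illustrative)
D = kBT / zeta                        # Stokes-Einstein-type relation D = k_B T / zeta
dt, n_steps, n_particles = 1e-3, 20_000, 100

x = np.zeros((n_particles, 3))        # independent particles in U(x) = 0.5 * k * |x|^2
for _ in range(n_steps):
    force = -k * x
    # Overdamped (Euler-Maruyama) update: drift D F / (k_B T) plus Gaussian noise sqrt(2 D dt).
    x += (D / kBT) * force * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=x.shape)

print(x.var(axis=0).mean())           # should approach the equilibrium variance k_B T / k = 0.5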
|
https://en.wikipedia.org/wiki/Brownian_dynamics
|
Brownian motorsarenanoscaleormolecular machinesthat usechemical reactionsto generate directed motion in space.[1]The theory behind Brownian motors relies on the phenomenon ofBrownian motion, random motion ofparticlessuspended in afluid(aliquidor agas) resulting from theircollisionwith the fast-movingmoleculesin the fluid.[2]
On thenanoscale(1–100 nm),viscositydominatesinertia, and the extremely high degree ofthermal noisein the environment makes conventional directed motion all but impossible, because theforcesimpelling these motors in the desired direction are minuscule when compared to therandom forcesexerted by the environment. Brownian motors operate specifically to utilise this high level ofrandom noiseto achieve directed motion, and as such are only viable on thenanoscale.[3]
The concept of Brownian motors is a recent one, having only been coined in 1995 byPeter Hänggi, but the existence of such motors in nature may have existed for a very long time and help to explain crucialcellular processesthat require movement at thenanoscale, such asprotein synthesisandmuscular contraction. If this is the case, Brownian motors may have implications for the foundations oflifeitself.[3]
In more recent times, humans have attempted to apply this knowledge of natural Brownian motors to solve human problems. The applications of Brownian motors are most obvious innanoroboticsdue to its inherent reliance on directed motion.[4][5]
Let the place of the solitaires
Be a place of perpetual undulation.
Whether it be in mid-sea
On the dark, green water-wheel,
Or on the beaches,
There must be no cessation
Of motion, or of the noise of motion,
The renewal of noise
And manifold continuation;
And, most, of the motion of thought
And its restless iteration,
In the place of the solitaires,
Which is to be a place of perpetual undulation.
The term "Brownian motor" was originally invented by Swiss theoretical physicistPeter Hänggiin 1995.[3]The Brownian motor, like the phenomenon of Brownian motion that underpinned its underlying theory, was also named after 19th century Scottish botanistRobert Brown, who, while looking through amicroscopeatpollenof theplantClarkia pulchellaimmersed in water, famously described the random motion of pollen particles in water in 1827. In 1905, almost eighty years later,theoretical physicistAlbert Einsteinpublisheda paperwhere he modeled the motion of the pollen as being moved by individualwater molecules,[6]and this was verified experimentally byJean Perrinin 1908, who was awarded theNobel Prize in Physicsin 1926 "for his work on the discontinuous structure of matter".[7]These developments helped to create the fundamentals of the present theories of thenanoscaleworld.
Nanosciencehas traditionally long remained at the intersection of the physical sciences ofphysicsandchemistry, but more recent developments in research increasingly position it beyond the scope of either of these two traditional fields.[8]
In 2002, a seminal paper on Brownian motors, "Brownian motors" by Dean Astumian and Peter Hänggi, was published in the American Institute of Physics magazine Physics Today. There, they proposed the then-novel concept of Brownian motors and posited that "thermal motion combined with input energy gives rise to a channeling of chance that can be used to exercise control over microscopic systems". Astumian and Hänggi provide in their paper a copy of Wallace Stevens' 1919 poem "The Place of the Solitaires" to elegantly illustrate, from an abstract perspective, the ceaseless nature of noise.
Inspired by the fascinating mechanism by which proteins move in the face of thermal noise, many physicists are working to understand molecular motors at a mesoscopic scale. An important insight from this work is that, in some cases, thermal noise can assist directed motion by providing a mechanism for overcoming energy barriers. In those cases, one speaks of "Brownian motors." In this article, we focus on several examples that bring out some prominent underlying physical concepts that have emerged. But first we note that poets, too, have been fascinated by noise; see box 1.
...
In the microscopic world, "There must be no cessation / Of motion, or of the noise of motion" (box 1). Rather than fighting it, Brownian motors take advantage of the ceaseless noise to move particles efficiently and reliably.
A year after the Astumian-Hänggi paper,David Leigh'sorganic chemistry group reported the first artificial molecular Brownian motors.[9]In 2007 the same team reported aMaxwell's demon-inspired molecular information ratchet.[10]
Another important demonstration ofnanoengineeringandnanotechnologywas the building of a practical artificial Brownian motor byIBMin 2018.[11]Specifically, an energy landscape was created by accurately shaping ananofluidicslit, and alternate potentials and an oscillating electric field were then used to "rock"nanoparticlesto produce directed motion. The experiment successfully made thenanoparticlesmove along a track in the shape of the outline of the IBM logo and serves as an important milestone in the practical use of Brownian motors and other elements at thenanoscale.
Additionally, various institutions around the world, such as theUniversity of SydneyNano Institute, headquartered at theSydney Nanoscience Hub(SNH), and theSwiss Nanoscience Institute(SNI) at theUniversity of Basel, are examples of the research activity emerging in the field of nanoscience. Brownian motors remain a central concept in both the understanding of naturalmolecular motorsand the construction of usefulnanoscale machinesthat involve directed motion.[4][5]
Nanoscience research within the Swiss Nanoscience Institute (SNI) is focused on areas of potential benefit to the life sciences, sustainability, and information and communications technologies. The aim is to explore phenomena at a nanoscale and to identify and apply new pioneering principles. This involves researchers immersing themselves in the world of individual atoms and molecules. At this level, the classical disciplines of physics, biology and chemistry merge into one. Interdisciplinary collaboration between different branches of science and institutions is thus a key element of the SNI's day-to-day work.
The thermal noise on the nanoscale is so great that moving in a particular direction is as difficult as "walking in a hurricane" or "swimming in molasses".[8] The theoretical operation of the Brownian motor can be explained by ratchet theory, wherein strong random thermal fluctuations are allowed to move the particle in the desired direction, while energy is expended to counteract forces that would produce motion in the opposite direction. This motion can be both linear and rotational. In the biological motors found in nature, the required energy is supplied as chemical energy sourced from the molecule adenosine triphosphate (ATP).
The Brownian ratchet is an apparent perpetual motion machine that appears to violate the second law of thermodynamics, but it was later debunked upon more detailed analysis by Richard Feynman and other physicists. The difference between real Brownian motors and fictional Brownian ratchets is that only in Brownian motors is there an input of energy to provide the necessary force to hold the motor in place and counteract the thermal noise that tries to move the motor in the opposite direction.[12]
Because Brownian motors rely on the random nature ofthermal noiseto achieve directed motion, they arestochasticin nature, in that they can be analysedstatisticallybut not predicted precisely.[13]
Inbiology, much of what we understand to beprotein-basedmolecular motorsmay also in fact be Brownian motors. These molecular motors facilitate criticalcellular processesinliving organismsand, indeed, are fundamental tolifeitself.
Researchers have made significant advances in terms of examining theseorganic processesto gain insight into their inner workings. For example, molecular Brownian motors in the form of several different types ofproteinexist within humans. Two commonbiomolecularBrownian motors areATP synthase, a rotary motor, andmyosin II, a linear motor.[13]The motor proteinATP synthaseproduces rotationaltorquethat facilitates the synthesis ofATPfromAdenosine diphosphate(ADP) and inorganicphosphate(Pi) through the following overall reaction:
ADP + Pi+ 3H+out⇌ ATP + H2O + 3H+in
In contrast, thetorqueproduced bymyosin IIis linear and is a basis for the process ofmuscle contraction.[13]Similarmotor proteinsincludekinesinanddynein, which all convert chemical energy into mechanical work by thehydrolysisofATP. Manymotor proteinswithinhuman cellsact as Brownian motors by producing directed motion on thenanoscale, and some common proteins of this type are illustrated by the followingcomputer-generated images.
The relevance of Brownian motors to the requirement of directed motion innanoroboticshas become increasingly apparent to researchers from both academia and industry.[4][5]
Artificial replications of Brownian motors are informed by, and differ from, nature; one specific type is the photomotor, wherein the motor switches states due to pulses of light and generates directed motion. These photomotors, in contrast to their natural counterparts, are inorganic and possess greater efficiency and average velocity, and are thus better suited to human use than existing alternatives such as organic protein motors.[14]
Currently, one of the six current "Grand Challenges" of theUniversity of SydneyNano Institute is to develop nanorobotics forhealth, a key aspect of which is a "nanoscalepartsfoundry" that can producenanoscaleBrownian motors for "active transportaround the body". The institute predicts that among the implications of this research is a "paradigm shift" inhealthcare"away from the "break-fix" model to a focus onpreventionand early intervention," such as in the case withheart disease:[15]
The molecular-level changes in early heart disease occur on the nanoscale. To detect these changes, we are building nanoscale robots, smaller than cells, that will navigate the body. This will enable us to see inside even the narrowest blood vessels, to detect the fatty deposits (atherosclerotic plaque) that signal the start of arterial blockage and allow treatment before the disease progresses.
...
The impact of this project will be extensive. It will improve health outcomes for all Australians with heart disease and reduce healthcare costs. It has potential to benefit other health challenges, including cancer, dementia and other neurodegenerative diseases. It will provide a world-class collaborative environment to train the next generation of Australian researchers, driving innovation and development of new industries and jobs in Australia.
Professor Paul Bannon, an adultcardiothoracic surgeonof international standing and leadingmedical researcher,[16][17]summarises the benefits of nanorobotics in health.[15]
If I could miniaturise myself inside the body... I could detect early, treatable damage in your coronary arteries when you are 25 years old and thus avoid your premature death.
|
https://en.wikipedia.org/wiki/Brownian_motor
|
In science, Brownian noise, also known as Brown noise or red noise, is the type of signal noise produced by Brownian motion, hence its alternative name of random walk noise. The term "Brown noise" does not come from the color, but is named after Robert Brown, who documented the erratic motion of multiple types of inanimate particles in water. The term "red noise" comes from the "white noise"/"white light" analogy; red noise is strong in longer wavelengths, similar to the red end of the visible spectrum.
The graphic representation of the sound signal mimics a Brownian pattern. Itsspectral densityis inversely proportional tof2, meaning it has higher intensity at lower frequencies, even more so thanpink noise. It decreases in intensity by 6dBperoctave(20 dB perdecade) and, when heard, has a "damped" or "soft" quality compared towhiteandpinknoise. The sound is a low roar resembling awaterfallor heavyrainfall. See alsoviolet noise, which is a 6 dBincreaseper octave.
Strictly, Brownian motion has a Gaussian probability distribution, but "red noise" could apply to any signal with the 1/f2frequency spectrum.
A Brownian motion, also known as aWiener process, is obtained as the integral of awhite noisesignal:W(t)=∫0tdWdτ(τ)dτ{\displaystyle W(t)=\int _{0}^{t}{\frac {dW}{d\tau }}(\tau )d\tau }meaning that Brownian motion is the integral of the white noiset↦dW(t){\displaystyle t\mapsto dW(t)}, whosepower spectral densityis flat:[1]S0=|F[t↦dWdt(t)](ω)|2=const.{\displaystyle S_{0}=\left|{\mathcal {F}}\left[t\mapsto {\frac {dW}{dt}}(t)\right](\omega )\right|^{2}={\text{const}}.}
Note that hereF{\displaystyle {\mathcal {F}}}denotes theFourier transform, andS0{\displaystyle S_{0}}is a constant. An important property of this transform is that the derivative of any distribution transforms as[2]F[t↦dWdt(t)](ω)=iωF[t↦W(t)](ω),{\displaystyle {\mathcal {F}}\left[t\mapsto {\frac {dW}{dt}}(t)\right](\omega )=i\omega {\mathcal {F}}[t\mapsto W(t)](\omega ),}from which we can conclude that the power spectrum of Brownian noise isS(ω)=|F[t↦W(t)](ω)|2=S0ω2.{\displaystyle S(\omega )={\big |}{\mathcal {F}}[t\mapsto W(t)](\omega ){\big |}^{2}={\frac {S_{0}}{\omega ^{2}}}.}
An individual Brownian motion trajectory presents a spectrumS(ω)=S0/ω2{\displaystyle S(\omega )=S_{0}/\omega ^{2}}, where the amplitudeS0{\displaystyle S_{0}}is a random variable, even in the limit of an infinitely long trajectory.[3]
Brown noise can be produced byintegratingwhite noise.[4][5]That is, whereas (digital) white noise can be produced by randomly choosing eachsampleindependently, Brown noise can be produced by adding a random offset to each sample to obtain the next one. As Brownian noise contains infinite spectral power at low frequencies, the signal tends to drift away infinitely from the origin. Aleaky integratormight be used in audio or electromagnetic applications to ensure the signal does not “wander off”, that is, exceed the limits of the system'sdynamic range. This turns the Brownian noise intoOrnstein–Uhlenbecknoise, which has a flat spectrum at lower frequencies, and only becomes “red” above the chosen cutoff frequency.
Brownian noise can also be computer-generated by first generating a white noise signal, Fourier-transforming it, then dividing the amplitudes of the different frequency components by the frequency (in one dimension), or by the frequency squared (in two dimensions) etc.[6]Matlab programs are available to generate Brownian and other power-law coloured noise in one[7]or any number[8]of dimensions.
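A hedged Python sketch of both recipes described above (signal length and seeds are arbitrary): integrating white noise in the time domain, and shaping white noise in the frequency domain by dividing each component by its frequency:

import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 16

# Method 1: Brown noise as the running integral (cumulative sum) of white noise.
brown_time = np.cumsum(rng.normal(size=n))

# Method 2: Fourier-transform white noise and divide each component by its frequency
# (one-dimensional case); the zero-frequency bin is set to zero to avoid division by zero.
spectrum = np.fft.rfft(rng.normal(size=n))
freqs = np.fft.rfftfreq(n)
spectrum[1:] /= freqs[1:]
spectrum[0] = 0.0
brown_freq = np.fft.irfft(spectrum, n)

# Rough check of the 1/f^2 spectrum: a tenfold increase in frequency should drop the
# periodogram by roughly a factor of 100, up to statistical scatter.
psd_t = np.abs(np.fft.rfft(brown_time)) ** 2
psd_f = np.abs(np.fft.rfft(brown_freq)) ** 2
print(psd_t[8:16].mean() / psd_t[80:160].mean())
print(psd_f[8:16].mean() / psd_f[80:160].mean())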
Evidence of Brownian noise, or more accurately, of Brownian processes has been found in different fields including chemistry,[9]electromagnetism,[10]fluid-dynamics,[11]economics,[12]and human neuromotor control.[13]
In human neuromotor control, Brownian processes were recognized as a biomarker of human natural drift in both postural tasks—such as quietly standing or holding an object in your hand—as well as dynamic tasks. The work by Tessari et al. highlighted how these Brownian processes in humans might provide the first behavioral support to the neuroscientific hypothesis that humans encode motion in terms of descending neural velocity commands.[13]
|
https://en.wikipedia.org/wiki/Brownian_noise
|
In thephilosophy of thermal and statistical physics, theBrownian ratchetorFeynman–Smoluchowski ratchetis an apparentperpetual motionmachine of thesecond kind(converting thermal energy into mechanical work), first analysed in 1912 as athought experimentby Polish physicistMarian Smoluchowski.[1]It was popularised by AmericanNobel laureatephysicistRichard Feynmanin aphysicslecture at theCalifornia Institute of Technologyon May 11, 1962, during hisMessenger LecturesseriesThe Character of Physical LawinCornell Universityin 1964 and in his textThe Feynman Lectures on Physics[2]as an illustration of the laws ofthermodynamics. The simple machine, consisting of a tinypaddle wheeland aratchet, appears to be an example of aMaxwell's demon, able to extract mechanical work fromrandom fluctuations(heat) in a system atthermal equilibrium, in violation of thesecond law of thermodynamics. Detailed analysis by Feynman and others showed why it cannot actually do this.
The device consists of a gear known as aratchetthat rotates freely in one direction but is prevented from rotating in the opposite direction by apawl. The ratchet is connected by an axle to apaddle wheelthat is immersed in afluidofmoleculesattemperatureT1{\displaystyle T_{1}}. The molecules constitute aheat bathin that they undergo randomBrownian motionwith a meankinetic energythat is determined by thetemperature. The device is imagined as being small enough that the impulse from a single molecular collision can turn the paddle. Although such collisions would tend to turn the rod in either direction with equal probability, the pawl allows the ratchet to rotate in one direction only. The net effect of many such random collisions would seem to be that the ratchet rotates continuously in that direction. The ratchet's motion then can be used to doworkon other systems, for example lifting a weight (m) against gravity. The energy necessary to do this work apparently would come from the heat bath, without any heat gradient (i.e. the motion leeches energy from the temperature of the air). Were such a machine to work successfully, its operation would violate thesecond law of thermodynamics, one form of which states: "It is impossible for any device that operates on a cycle to receive heat from a single reservoir and produce a net amount of work."
Although at first sight the Brownian ratchet seems to extract useful work from Brownian motion, Feynman demonstrated that if the entire device is at the same temperature, the ratchet will not rotate continuously in one direction but will move randomly back and forth, and therefore will not produce any useful work. The reason is that since the pawl is at the same temperature as the paddle, it will also undergo Brownian motion, "bouncing" up and down. It therefore will intermittently fail by allowing a ratchet tooth to slip backward under the pawl while it is up. Another issue is that when the pawl rests on the sloping face of the tooth, the spring which returns the pawl exerts a sideways force on the tooth which tends to rotate the ratchet in a backwards direction. Feynman demonstrated that if the temperature T2 of the ratchet and pawl is the same as the temperature T1 of the paddle, then the failure rate must equal the rate at which the ratchet ratchets forward, so that no net motion results over long enough periods or in an ensemble-averaged sense.[2] A simple but rigorous proof that no net motion occurs, no matter what shape the teeth are, was given by Magnasco.[3]
If, on the other hand,T2{\displaystyle T_{2}}is less thanT1{\displaystyle T_{1}}, the ratchet will indeed move forward, and produce useful work. In this case, though, the energy is extracted from the temperature gradient between the two thermal reservoirs, and somewaste heatis exhausted into the lower temperature reservoir by the pawl. In other words, the device functions as a miniatureheat engine, in compliance with the second law of thermodynamics. Conversely, ifT2{\displaystyle T_{2}}is greater thanT1{\displaystyle T_{1}}, the device will rotate in the opposite direction.
The Feynman ratchet model led to the similar concept ofBrownian motors,nanomachineswhich can extract useful work not from thermal noise but fromchemical potentialsand other microscopicnonequilibriumsources, in compliance with the laws of thermodynamics.[3][4]Diodesare an electrical analog of the ratchet and pawl, and for the same reason cannot produce useful work by rectifyingJohnson noisein a circuit at uniform temperature.
Millonas[5]as well as Mahato[6]extended the same notion to correlation ratchets driven by mean-zero (unbiased) nonequilibrium noise with a nonvanishing correlation function of odd order greater than one.
Theratchet and pawlwas first discussed as a Second Law-violating device byGabriel Lippmannin 1900.[7]In 1912, Polish physicistMarian Smoluchowski[1]gave the first correct qualitative explanation of why the device fails; thermal motion of the pawl allows the ratchet's teeth to slip backwards. Feynman did the first quantitative analysis of the device in 1962 using theMaxwell–Boltzmann distribution, showing that if the temperature of the paddle T1 was greater than the temperature of the ratchet T2, it would function as aheat engine, but if T1 = T2 there would be no net motion of the paddle. In 1996,Juan Parrondoand Pep Español used a variation of the above device in which no ratchet is present, only two paddles, to show that the axle connecting the paddles and ratchet conducts heat between reservoirs; they argued that although Feynman's conclusion was correct, his analysis was flawed because of his erroneous use of thequasistaticapproximation, resulting in incorrect equations for efficiency.[8]Magnasco and Stolovitzky (1998) extended this analysis to consider the full ratchet device, and showed that the efficiency of the device is far smaller than theCarnot efficiencyclaimed by Feynman.[9]A paper in 2000 byDerek Abbott,Bruce R. Davisand Juan Parrondo reanalyzed the problem and extended it to the case of multiple ratchets, showing a link withParrondo's paradox.[10]
Léon Brillouinin 1950 discussed an electrical circuit analogue that uses arectifier(such as a diode) instead of a ratchet.[11]The idea was the diode would rectify theJohnson noisethermal current fluctuations produced by theresistor, generating adirect currentwhich could be used to perform work. In the detailed analysis it was shown that the thermal fluctuations within the diode generate anelectromotive forcethat cancels the voltage from rectified current fluctuations. Therefore, just as with the ratchet, the circuit will produce no useful energy if all the components are at thermal equilibrium (at the same temperature); a DC current will be produced only when the diode is at a lower temperature than the resistor.[12]
Researchers from theUniversity of Twente, theUniversity of Patrasin Greece, and the Foundation for Fundamental Research on Matter have constructed a Feynman–Smoluchowski engine which, when not in thermal equilibrium, converts pseudo-Brownian motionintoworkby means of agranular gas,[13]which is a conglomeration of solid particles vibrated with such vigour that the system assumes a gas-like state. The constructed engine consisted of four vanes which were allowed to rotate freely in a vibrofluidized granular gas.[14]Because the ratchet's gear and pawl mechanism, as described above, permitted the axle to rotate only in one direction, random collisions with the moving beads caused the vane to rotate. This seems to contradict Feynman's hypothesis. However, this system is not in perfect thermal equilibrium: energy is constantly being supplied to maintain the fluid motion of the beads. Vigorous vibrations on top of a shaking device mimic the nature of a molecular gas. Unlike anideal gas, though, in which tiny particles move constantly, stopping the shaking would simply cause the beads to drop. In the experiment, this necessary out-of-equilibrium environment was thus maintained. Work was not immediately being done, though; the ratchet effect only commenced beyond a critical shaking strength. For very strong shaking, the vanes of the paddle wheel interacted with the gas, forming a convection roll, sustaining their rotation.[14]
|
https://en.wikipedia.org/wiki/Brownian_ratchet
|
ABrownian surfaceis afractal surfacegenerated via afractalelevationfunction.[1][2][3]
The Brownian surface is named afterBrownian motion.
For instance, in the three-dimensional case, where two variablesXandYare given as coordinates, the elevation function between any two points (x1,y1) and (x2,y2) can be set to have a mean orexpected valuethat increases with thevector distancebetween (x1,y1) and (x2,y2).[1]There are, however, many ways of defining the elevation function. For instance, thefractional Brownian motionvariable may be used, or various rotation functions may be used to achieve more natural-looking surfaces.[2]
Efficient generation of fractional Brownian surfaces poses significant challenges.[4]Since the Brownian surface represents aGaussian processwith a nonstationary covariance function,
one can use theCholesky decompositionmethod. A more efficient method is Stein's method,[5]which generates an auxiliary stationary Gaussian process using thecirculant embeddingapproach and then adjusts this auxiliary process to obtain the desired nonstationary Gaussian process. The figure below shows three typical realizations of fractional Brownian surfaces for different values of the roughness orHurst parameter. The Hurst parameter is always between zero and one, with values closer to one corresponding to smoother surfaces. These surfaces were generated using aMatlab implementationof Stein's method.
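As a concrete illustration of the Cholesky route mentioned above, here is a minimal Python sketch (not from the source) that samples a fractional Brownian surface on a small grid. It assumes the isotropic fractional Brownian field covariance Cov(X(s),X(t)) = ½(|s|^2H + |t|^2H − |s−t|^2H); the grid size, Hurst value and diagonal jitter are illustrative choices, and the (faster) circulant-embedding method of Stein used for the cited figures is not reproduced here.

import numpy as np

def fractional_brownian_surface(n=32, H=0.8, seed=0):
    """Sample a fractional Brownian surface on an n x n grid over [0, 1]^2
    by Cholesky factorization of its covariance matrix (slow but simple)."""
    rng = np.random.default_rng(seed)
    xs = np.linspace(0.0, 1.0, n)
    pts = np.array([(x, y) for x in xs for y in xs])          # all grid points
    r = np.linalg.norm(pts, axis=1)                           # distances |s| to the origin
    diff = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    cov = 0.5 * (r[:, None] ** (2 * H) + r[None, :] ** (2 * H) - diff ** (2 * H))
    # A small diagonal jitter keeps the matrix numerically positive definite
    # (the field is pinned to zero at the origin, so cov has a zero eigenvalue there).
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(n * n))
    z = L @ rng.standard_normal(n * n)                        # correlated Gaussian sample
    return z.reshape(n, n)

surface = fractional_brownian_surface()
print(surface.shape, round(float(surface.std()), 3))

Larger Hurst values produce visibly smoother surfaces, in line with the description above; the cubic cost of the Cholesky factorization is what motivates the circulant-embedding approach for fine grids.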
|
https://en.wikipedia.org/wiki/Brownian_surface
|
Inprobability theory, theBrownian tree, orAldous tree, orContinuum Random Tree (CRT)[1]is a randomreal treethat can be defined from aBrownian excursion. The Brownian tree was defined and studied byDavid Aldousin three articles published in 1991 and 1993. This tree has since then been generalized.
This random tree has several equivalent definitions and constructions:[2]using sub-trees generated by finitely many leaves, using a Brownian excursion, splitting a half-line at the points of a Poisson process (the stick-breaking construction), or as a limit of Galton-Watson trees.
Intuitively, the Brownian tree is a binary tree whose nodes (or branching points) aredensein the tree, which is to say that for any two distinct points of the tree, there will always exist a node between them. It is afractalobject which can be approximated with computers[3]or by physical processes withdendritic structures.
The following definitions are different characterisations of a Brownian tree; they are taken from Aldous's three articles.[4][5][6]The notions ofleaf, node, branch, rootare the intuitive notions for a tree (for details, seereal trees).
This definition gives the finite-dimensional laws of the subtrees generated by finitely many leaves.
Let us consider the space of all binary trees withk{\displaystyle k}leaves numbered from1{\displaystyle 1}tok{\displaystyle k}. These trees have2k−1{\displaystyle 2k-1}edges with lengths(ℓ1,…,ℓ2k−1)∈R+2k−1{\displaystyle (\ell _{1},\dots ,\ell _{2k-1})\in \mathbb {R} _{+}^{2k-1}}. A tree is then defined by its shapeτ{\displaystyle \tau }(which is to say the order of the nodes) and the edge lengths. We define aprobability lawP{\displaystyle \mathbb {P} }of a random variable(T,(Li)1≤i≤2k−1){\displaystyle (T,(L_{i})_{1\leq i\leq 2k-1})}on this space by:
wheres=∑ℓi{\displaystyle \textstyle s=\sum \ell _{i}}.
In other words,P{\displaystyle \mathbb {P} }depends not on the shape of the tree but rather on the total sum of all the edge lengths.
Definition—LetX{\displaystyle X}be a random metric space with the tree property, meaning there exists a unique path between two points ofX{\displaystyle X}. EquipX{\displaystyle X}with a probability measureμ{\displaystyle \mu }. Suppose the sub-tree ofX{\displaystyle X}generated byk{\displaystyle k}points, chosen randomly underμ{\displaystyle \mu }, has lawP{\displaystyle \mathbb {P} }. ThenX{\displaystyle X}is called aBrownian tree.
In other words, the Brownian tree is defined from the laws of all the finite sub-trees one can generate from it.
The Brownian tree is areal treedefined from aBrownian excursion(see characterisation 4 inReal tree).
Lete=(e(x),0≤x≤1){\displaystyle e=(e(x),0\leq x\leq 1)}be a Brownian excursion. Define apseudometricd{\displaystyle d}on[0,1]{\displaystyle [0,1]}by d(x,y) = e(x) + e(y) − 2 min{ e(z) : min(x,y) ≤ z ≤ max(x,y) } for all x, y in [0,1].
We then define anequivalence relation, noted∼e{\displaystyle \sim _{e}}on[0,1]{\displaystyle [0,1]}which relates all pointsx,y{\displaystyle x,y}such thatd(x,y)=0{\displaystyle d(x,y)=0}.
d{\displaystyle d}is then a distance on thequotient space[0,1]/∼e{\displaystyle [0,1]\,/\!\sim _{e}}.
Definition—The random metric space([0,1]/∼e,d){\displaystyle {\big (}[0,1]\,/\!\sim _{e},\,d{\big )}}is called aBrownian tree.
It is customary to consider the excursione/2{\displaystyle e/2}rather thane{\displaystyle e}.
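The excursion-based definition lends itself to a quick numerical illustration. The Python sketch below (not from the source) approximates a Brownian excursion by the Vervaat transform of a discretized Brownian bridge and evaluates the pseudometric defined above on grid points; the resolution, seed and example indices are arbitrary.

import numpy as np

def brownian_excursion(n=10_000, seed=0):
    """Approximate a standard Brownian excursion on [0, 1] via the Vervaat
    transform: cyclically shift a Brownian bridge around its minimum and
    subtract the minimum, giving a non-negative path tied down at 0 and 1."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n
    w = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))))
    t = np.linspace(0.0, 1.0, n + 1)
    bridge = w - t * w[-1]                      # Brownian bridge on [0, 1]
    k = int(np.argmin(bridge))
    exc = np.concatenate((bridge[k:], bridge[1:k + 1])) - bridge[k]
    return exc

def tree_distance(exc, i, j):
    """Pseudometric d(x, y) = e(x) + e(y) - 2 * min over [x, y] of e, on grid indices."""
    i, j = sorted((i, j))
    return exc[i] + exc[j] - 2.0 * exc[i:j + 1].min()

e = brownian_excursion()
print(round(float(tree_distance(e, 2500, 7500)), 4))

Identifying grid points at pseudodistance (approximately) zero then gives a finite approximation of the quotient metric space described in the definition.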
This is also called the stick-breaking construction.
Consider a non-homogeneousPoisson point processNwith intensityr(t)=t{\displaystyle r(t)=t}. In other words, for anyt>0{\displaystyle t>0},Nt{\displaystyle N_{t}}is aPoisson variablewith parametert2/2{\displaystyle t^{2}/2}. LetC1,C2,…{\displaystyle C_{1},C_{2},\ldots }be the points ofN{\displaystyle N}. Then the lengths of the intervals[Ci,Ci+1]{\displaystyle [C_{i},C_{i+1}]}areexponential variableswith decreasing means. We then make the following construction:
Definition—Theclosure⋃k≥1Tk¯{\displaystyle {\overline {\bigcup _{k\geq 1}T_{k}}}}, equipped with the distance previously built, is called aBrownian tree.
This algorithm may be used to simulate Brownian trees numerically, as in the sketch below.
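The following Python sketch (not from the source) implements one version of this recipe. It assumes Aldous's line-breaking construction: the cut points of the intensity-r(t) = t Poisson process are Ck = √(2Γk), with Γk a sum of k standard exponentials, and each new segment [Ck, Ck+1] is glued to a uniformly chosen point of the tree built so far. The data structure (parent index, attachment offset, length) and the number of segments are illustrative choices.

import numpy as np

def line_breaking_tree(n_segments=500, seed=0):
    """Build a finite approximation of the Brownian tree by line-breaking.

    Each segment is stored as (parent_index, attach_offset, length), where
    attach_offset is the position along the parent at which the new branch
    is glued; the root segment has parent_index None.
    """
    rng = np.random.default_rng(seed)
    # Points of the Poisson process with intensity r(t) = t: C_k = sqrt(2 * Gamma_k).
    cuts = np.sqrt(2.0 * np.cumsum(rng.exponential(size=n_segments + 1)))
    segments = [(None, 0.0, float(cuts[0]))]       # root branch [0, C_1]
    total_length = float(cuts[0])
    for k in range(n_segments):
        length = float(cuts[k + 1] - cuts[k])
        attach = rng.uniform(0.0, total_length)    # uniform point of the current tree
        acc, parent, offset = 0.0, len(segments) - 1, 0.0
        for idx, (_, _, seg_len) in enumerate(segments):
            if attach <= acc + seg_len:
                parent, offset = idx, attach - acc
                break
            acc += seg_len
        segments.append((parent, offset, length))
        total_length += length
    return segments

tree = line_breaking_tree()
print(len(tree), "segments, total length", round(sum(s[2] for s in tree), 2))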
Consider aGalton-Watson treewhose reproduction law has finite non-zero variance, conditioned to haven{\displaystyle n}nodes. Let1nGn{\displaystyle {\tfrac {1}{\sqrt {n}}}G_{n}}be this tree, with the edge lengths divided byn{\displaystyle {\sqrt {n}}}. In other words, each edge has length1n{\displaystyle {\tfrac {1}{\sqrt {n}}}}. The construction can be formalized by considering the Galton-Watson tree as a metric space or by using renormalizedcontour processes.
Definition and Theorem—1nGn{\displaystyle {\frac {1}{\sqrt {n}}}G_{n}}converges in distribution to a random real tree, which we call aBrownian tree.
Here, the limit used is theconvergence in distributionofstochastic processesin theSkorokhod space(if we consider the contour processes) or the convergence in distribution defined from theHausdorff distance(if we consider the metric spaces).
|
https://en.wikipedia.org/wiki/Brownian_tree
|
Inprobability theory, theBrownian webis an uncountable collection of one-dimensional coalescingBrownian motions, starting from every point in space and time. It arises as the diffusive space-time scaling limit of a collection of coalescingrandom walks, with one walk starting from each point of the integer lattice Z at each time.
What is now known as the Brownian web was first conceived byArratiain his Ph.D. thesis[1]and a subsequent incomplete and unpublished manuscript.[2]Arratia studied thevoter model, aninteracting particle systemthat models the evolution of a population's political opinions. The individuals of the population are represented by the vertices of a graph, and each individual carries one of two possible opinions, represented as either 0 or 1. Independently at rate 1, each individual changes its opinion to that of a randomly chosen neighbor. The voter model is known to be dual to coalescingrandom walks(i.e., the random walks move independently when they are apart, and move as a single walk once they meet) in the sense that: each individual's opinion at any time can be traced backwards in time to an ancestor at time 0, and the joint genealogies of the opinions of different individuals at different times are a collection of coalescing random walks evolving backwards in time. In spatial dimension 1, coalescingrandom walksstarting from a finite number of space-time points converge to a finite number of coalescingBrownian motions, if space-time is rescaled diffusively (i.e., each space-time point (x,t) gets mapped to (εx, ε²t), with ε↓0). This is a consequence ofDonsker's invariance principle. The less obvious question is:
What is the diffusive scaling limit of the joint collection of one-dimensional coalescing random walks starting fromeverypoint in space-time?
Arratia set out to construct this limit, which is what we now call the Brownian web. Formally speaking, it is a collection of one-dimensional coalescing Brownian motions starting from every space-time point inR2{\displaystyle \mathbb {R} ^{2}}. The fact that the Brownian web consists of anuncountablenumber of Brownian motions is what makes the construction highly non-trivial. Arratia gave a construction but was unable to prove convergence of coalescing random walks to a limiting object and characterize such a limiting object.
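To make the pre-limit object concrete, here is a toy Python sketch (not from the source) of coalescing random walks started from every site of a finite window. Lazy ±1/0 steps are used so that walkers on sites of different parity can actually meet; the window size, step count and step distribution are arbitrary illustrative choices.

import numpy as np

def coalescing_walks(width=200, steps=2000, seed=0):
    """Toy discrete analogue of the Brownian web on a finite window.

    One lazy random walk starts from every site of {0, ..., width-1} at time 0.
    At each step, all walkers currently sharing a site receive the same
    increment, so walkers coalesce permanently when they first meet.
    """
    rng = np.random.default_rng(seed)
    pos = np.arange(width, dtype=int)              # position of each original walker
    for _ in range(steps):
        sites, inverse = np.unique(pos, return_inverse=True)
        increments = rng.integers(-1, 2, size=len(sites))   # one increment per occupied site
        pos = pos + increments[inverse]
    return pos

pos = coalescing_walks()
print("walkers started:", len(pos), "distinct paths remaining:", len(np.unique(pos)))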
TóthandWernerin their study of thetrue self-repelling motion[3]obtained many detailed properties of this limiting object and its dual but did not prove convergence of coalescing walks to this limiting object or characterize it. The main difficulty in proving convergence stems from the existence of random points from which the limiting object can have multiple paths. Arratia andTóthandWernerwere aware of the existence of such points and they provided different conventions to avoid such multiplicity. Fontes, Isopi,Newmanand Ravishankar[4]introduced a topology for the limiting object so that it is realized as arandom variabletaking values in aPolish space, in this case, the space of compact sets of paths. This choice allows for the limiting object to have multiple paths from a random space time point. The introduction of this topology allowed them to prove the convergence of the coalescing random walks to a unique limiting object and characterize it. They named this limiting object Brownian web.
An extension of the Brownian web, called theBrownian net, has been introduced by Sun and Swart[5]by allowing the coalescing Brownian motions to undergo branching. An alternative construction of the Brownian net was given by Newman, Ravishankar and Schertzer.[6]
For a recent survey, see Schertzer, Sun and Swart.[7]
|
https://en.wikipedia.org/wiki/Brownian_web
|
Rotational Brownian motionis the random change in the orientation of apolar moleculedue to collisions with other molecules. It is an important element of theories ofdielectricmaterials.
Thepolarizationof a dielectric material is a competition betweentorquesdue to the imposedelectric field, which tend to align the molecules, and collisions, which tend to destroy the alignment. The theory of rotational Brownian motion allows one to calculate the net result of these two competing effects, and to predict how thepermittivityof a dielectric material depends on the strength and frequency of the imposed electric field.
Rotational Brownian motion was first discussed byPeter Debye,[1]who appliedAlbert Einstein's theory of translationalBrownian motionto the rotation of molecules having permanentelectric dipoles. Debye ignored inertial effects and assumed that the molecules were spherical, with an intrinsic, fixeddipole moment. He derived expressions for thedielectric relaxation timeand for thepermittivity. These formulae have been successfully applied to many materials. However, Debye's expression for the permittivity predicts that the absorption tends toward a constant value when the frequency of the applied electric field becomes very large—the "Debye plateau". This is not observed; instead, the absorption tends toward a maximum and then declines with increasing frequency.
The breakdown in Debye's theory in these regimes can be corrected by including inertial effects; allowing the molecules to be non-spherical; including dipole-dipole interactions between molecules; etc. These are computationally very difficult problems and rotational Brownian motion is a topic of much current research interest.
|
https://en.wikipedia.org/wiki/Rotational_Brownian_motion
|
Thediffusion equationis aparabolic partial differential equation. In physics, it describes the macroscopic behavior of many micro-particles inBrownian motion, resulting from the random movements and collisions of the particles (seeFick's laws of diffusion). In mathematics, it is related toMarkov processes, such asrandom walks, and applied in many other fields, such asmaterials science,information theory, andbiophysics. The diffusion equation is a special case of theconvection–diffusion equationwhen bulk velocity is zero. It is equivalent to theheat equationunder some circumstances.
The equation is usually written as:∂ϕ(r,t)∂t=∇⋅[D(ϕ,r)∇ϕ(r,t)],{\displaystyle {\frac {\partial \phi (\mathbf {r} ,t)}{\partial t}}=\nabla \cdot {\big [}D(\phi ,\mathbf {r} )\ \nabla \phi (\mathbf {r} ,t){\big ]},}whereϕ(r,t)is thedensityof the diffusing material at locationrand timetandD(ϕ,r)is the collectivediffusion coefficientfor densityϕat locationr; and∇represents the vectordifferential operatordel. If the diffusion coefficient depends on the density then the equation is nonlinear, otherwise it is linear.
The equation above applies when the diffusion coefficient isisotropic; in the case of anisotropic diffusion,Dis a symmetricpositive definite matrix, and the equation is written (for three dimensional diffusion) as:∂ϕ(r,t)∂t=∑i=13∑j=13∂∂xi[Dij(ϕ,r)∂ϕ(r,t)∂xj]{\displaystyle {\frac {\partial \phi (\mathbf {r} ,t)}{\partial t}}=\sum _{i=1}^{3}\sum _{j=1}^{3}{\frac {\partial }{\partial x_{i}}}\left[D_{ij}(\phi ,\mathbf {r} ){\frac {\partial \phi (\mathbf {r} ,t)}{\partial x_{j}}}\right]}The diffusion equation has numerous analytic solutions.[1]
IfDis constant, then the equation reduces to the followinglinear differential equation: ∂ϕ(r,t)/∂t = D ∇²ϕ(r,t),
which is identical to theheat equation.
Theparticle diffusion equationwas originally derived byAdolf Fickin 1855.[2]
The diffusion equation can be trivially derived from thecontinuity equation, which states that a change in density in any part of the system is due to inflow and outflow of material into and out of that part of the system. Effectively, no material is created or destroyed:∂ϕ∂t+∇⋅j=0,{\displaystyle {\frac {\partial \phi }{\partial t}}+\nabla \cdot \mathbf {j} =0,}wherejis the flux of the diffusing material. The diffusion equation can be obtained easily from this when combined with the phenomenologicalFick's first law, which states that the flux of the diffusing material in any part of the system is proportional to the local density gradient:j=−D(ϕ,r)∇ϕ(r,t).{\displaystyle \mathbf {j} =-D(\phi ,\mathbf {r} )\,\nabla \phi (\mathbf {r} ,t).}
If drift must be taken into account, theFokker–Planck equationprovides an appropriate generalization.
The diffusion equation is continuous in both space and time. One may discretize space, time, or both; each case arises in applications. Discretizing time alone just corresponds to taking time slices of the continuous system, and no new phenomena arise.
In discretizing space alone, theGreen's functionbecomes thediscrete Gaussian kernel, rather than the continuousGaussian kernel. In discretizing both time and space, one obtains therandom walk.
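As a small illustration of discretizing both space and time, here is a Python sketch (not from the source) of the simplest explicit finite-difference update for the one-dimensional constant-D equation; the grid spacing, time step and zero-Dirichlet boundaries are illustrative choices.

import numpy as np

def diffuse_1d(phi, D=1.0, dx=1.0, dt=0.2, steps=200):
    """Explicit (FTCS) update phi_i += (D*dt/dx**2)*(phi_{i+1} - 2*phi_i + phi_{i-1}).
    The scheme is stable only if D*dt/dx**2 <= 1/2."""
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable for D*dt/dx**2 > 1/2"
    phi = phi.astype(float).copy()
    for _ in range(steps):
        phi[1:-1] += r * (phi[2:] - 2.0 * phi[1:-1] + phi[:-2])   # interior points
        # the two endpoints are held fixed (zero Dirichlet boundary)
    return phi

phi0 = np.zeros(201)
phi0[100] = 1.0                    # a point mass spreads into a Gaussian-like profile
print(round(float(diffuse_1d(phi0).sum()), 6))   # mass is conserved away from the boundary

The stability bound D·dt/dx² ≤ 1/2 is exactly the regime in which this update can be read as a (lazy) random-walk transition, matching the remark above that discretizing both space and time yields the random walk.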
Theproduct ruleis used to rewrite the anisotropic tensor diffusion equation, in standard discretization schemes, because direct discretization of the diffusion equation with only first order spatial central differences leads to checkerboard artifacts. The rewritten diffusion equation used in image filtering:∂ϕ(r,t)∂t=∇⋅[D(ϕ,r)]∇ϕ(r,t)+tr[D(ϕ,r)(∇∇Tϕ(r,t))]{\displaystyle {\frac {\partial \phi (\mathbf {r} ,t)}{\partial t}}=\nabla \cdot \left[D(\phi ,\mathbf {r} )\right]\nabla \phi (\mathbf {r} ,t)+{\rm {tr}}{\Big [}D(\phi ,\mathbf {r} ){\big (}\nabla \nabla ^{\text{T}}\phi (\mathbf {r} ,t){\big )}{\Big ]}}where "tr" denotes thetraceof the 2nd ranktensor, and superscript "T" denotestranspose, in which in image filteringD(ϕ,r) are symmetric matrices constructed from theeigenvectorsof the imagestructure tensors. The spatial derivatives can then be approximated by two first order and a second order centralfinite differences. The resulting diffusion algorithm can be written as an imageconvolutionwith a varying kernel (stencil) of size 3 × 3 in 2D and 3 × 3 × 3 in 3D.
|
https://en.wikipedia.org/wiki/Diffusion_equation
|
Ageometric Brownian motion (GBM)(also known asexponential Brownian motion) is a continuous-timestochastic processin which thelogarithmof the randomly varying quantity follows aBrownian motion(also called aWiener process) withdrift.[1]It is an important example of stochastic processes satisfying astochastic differential equation(SDE); in particular, it is used inmathematical financeto model stock prices in theBlack–Scholes model.
A stochastic processStis said to follow a GBM if it satisfies the followingstochastic differential equation(SDE): dSt = μ St dt + σ St dWt,
whereWt{\displaystyle W_{t}}is aWiener process or Brownian motion, andμ{\displaystyle \mu }('the percentage drift') andσ{\displaystyle \sigma }('the percentage volatility') are constants.
The former parameter is used to model deterministic trends, while the latter parameter models unpredictable events occurring during the motion.
For an arbitrary initial valueS0the above SDE has the analytic solution (underItô's interpretation): St = S0 exp((μ − σ²/2)t + σWt).
The derivation requires the use ofItô calculus. ApplyingItô's formulaleads to
wheredStdSt{\displaystyle dS_{t}\,dS_{t}}is thequadratic variationof the SDE.
Whendt→0{\displaystyle dt\to 0},dt{\displaystyle dt}converges to 0 faster thandWt{\displaystyle dW_{t}},
sincedWt2=O(dt){\displaystyle dW_{t}^{2}=O(dt)}. So the above infinitesimal can be simplified by
Plugging the value ofdSt{\displaystyle dS_{t}}in the above equation and simplifying we obtain
Taking the exponential and multiplying both sides byS0{\displaystyle S_{0}}gives the solution claimed above.
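The explicit solution makes simulation straightforward. The following Python sketch (not from the source) draws GBM paths by exponentiating Gaussian log-increments and checks the terminal values against E[log ST] = log S0 + (μ − σ²/2)T (derived below) and against the standard lognormal mean S0·exp(μT); all parameter values are illustrative.

import numpy as np

def gbm_paths(s0=100.0, mu=0.05, sigma=0.2, T=1.0, n_steps=252, n_paths=10_000, seed=0):
    """Simulate GBM using the exact solution S_t = S_0 * exp((mu - sigma**2/2) t + sigma W_t)
    applied over each time increment, so there is no discretization error."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.cumsum(log_increments, axis=1))

paths = gbm_paths()
print(round(float(paths[:, -1].mean()), 2),
      "vs S0*exp(mu*T) =", round(100.0 * float(np.exp(0.05)), 2))
print(round(float(np.log(paths[:, -1]).mean()), 4),
      "vs log(S0) + (mu - sigma**2/2)*T =", round(float(np.log(100.0)) + 0.05 - 0.5 * 0.2**2, 4))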
The process forXt=lnStS0{\displaystyle X_{t}=\ln {\frac {S_{t}}{S_{0}}}}, satisfying the SDE
or more generally the process solving the SDE
wherem{\displaystyle m}andv>0{\displaystyle v>0}are real constants and for an initial conditionX0{\displaystyle X_{0}}, is called an Arithmetic Brownian Motion (ABM). This was the model postulated byLouis Bachelierin 1900 for stock prices, in the first published attempt to model Brownian motion, known today asBachelier model. As was shown above, the ABM SDE can be obtained through the logarithm of a GBM via Itô's formula. Similarly, a GBM can be obtained by exponentiation of an ABM through Itô's formula.
The above solutionSt{\displaystyle S_{t}}(for any value of t) is alog-normally distributedrandom variablewithexpected valueandvariancegiven by[2]
They can be derived using the fact thatZt=exp(σWt−12σ2t){\displaystyle Z_{t}=\exp \left(\sigma W_{t}-{\frac {1}{2}}\sigma ^{2}t\right)}is amartingale, and that
Theprobability density functionofSt{\displaystyle S_{t}}is:
To derive the probability density function for GBM, we must use theFokker-Planck equationto evaluate the time evolution of the PDF:
whereδ(S){\displaystyle \delta (S)}is theDirac delta function. To simplify the computation, we may introduce a logarithmic transformx=log(S/S0){\displaystyle x=\log(S/S_{0})}, leading to the form of GBM:
Then the equivalent Fokker-Planck equation for the evolution of the PDF becomes:
DefineV=μ−σ2/2{\displaystyle V=\mu -\sigma ^{2}/2}andD=σ2/2{\displaystyle D=\sigma ^{2}/2}. By introducing the new variablesξ=x−Vt{\displaystyle \xi =x-Vt}andτ=Dt{\displaystyle \tau =Dt}, the derivatives in the Fokker-Planck equation may be transformed as:
Leading to the new form of the Fokker-Planck equation:
However, this is the canonical form of theheat equation, which has the solution given by theheat kernel:
Plugging in the original variables leads to the PDF for GBM:
When deriving further properties of GBM, use can be made of the SDE of which GBM is the solution, or the explicit solution given above can be used. For example, consider the stochastic process log(St). This is an interesting process, because in the Black–Scholes model it is related to thelog returnof the stock price. UsingItô's lemmawithf(S) = log(S) gives
It follows thatElog(St)=log(S0)+(μ−σ2/2)t{\displaystyle \operatorname {E} \log(S_{t})=\log(S_{0})+(\mu -\sigma ^{2}/2)t}.
This result can also be derived by applying the logarithm to the explicit solution of GBM:
Taking the expectation yields the same result as above:Elog(St)=log(S0)+(μ−σ2/2)t{\displaystyle \operatorname {E} \log(S_{t})=\log(S_{0})+(\mu -\sigma ^{2}/2)t}.
GBM can be extended to the case where there are multiple correlated price paths.[3]
Each price path follows the underlying process
where the Wiener processes are correlated such thatE(dWtidWtj)=ρi,jdt{\displaystyle \operatorname {E} (dW_{t}^{i}\,dW_{t}^{j})=\rho _{i,j}\,dt}whereρi,i=1{\displaystyle \rho _{i,i}=1}.
For the multivariate case, this implies that
A multivariate formulation that maintains the driving Brownian motionsWti{\displaystyle W_{t}^{i}}independent is
where the correlation betweenSti{\displaystyle S_{t}^{i}}andStj{\displaystyle S_{t}^{j}}is now expressed through theσi,j=ρi,jσiσj{\displaystyle \sigma _{i,j}=\rho _{i,j}\,\sigma _{i}\,\sigma _{j}}terms.
Geometric Brownian motion is used to model stock prices in the Black–Scholes model and is the most widely used model of stock price behavior.[4]
Some of the arguments for using GBM to model stock prices are:
However, GBM is not a completely realistic model; in particular, it falls short of reality in the following points:
Apart from modeling stock prices, Geometric Brownian motion has also found applications in the monitoring of trading strategies.[5]
In an attempt to make GBM more realistic as a model for stock prices, also in relation to thevolatility smileproblem, one can drop the assumption that the volatility (σ{\displaystyle \sigma }) is constant. If we assume that the volatility is adeterministicfunction of the stock price and time, this is called alocal volatilitymodel. A straightforward extension of the Black Scholes GBM is a local volatility SDE whose distribution is a mixture of distributions of GBM, the lognormal mixture dynamics, resulting in a convex combination of Black Scholes prices for options.[3][6][7][8]If instead we assume that the volatility has a randomness of its own—often described by a different equation driven by a different Brownian Motion—the model is called astochastic volatilitymodel, see for example theHeston model.[9]
|
https://en.wikipedia.org/wiki/Geometric_Brownian_motion
|
Inmathematics– specifically, instochastic analysis– anItô diffusionis a solution to a specific type ofstochastic differential equation. That equation is similar to theLangevin equationused inphysicsto describe theBrownian motionof a particle subjected to a potential in aviscousfluid. Itô diffusions are named after theJapanesemathematicianKiyosi Itô.
A (time-homogeneous)Itô diffusioninn-dimensionalEuclidean spaceRn{\displaystyle {\boldsymbol {\textbf {R}}}^{n}}is aprocessX: [0, +∞) × Ω →Rndefined on aprobability space(Ω, Σ,P) and satisfying a stochastic differential equation of the form
whereBis anm-dimensionalBrownian motionandb:Rn→Rnand σ :Rn→Rn×msatisfy the usualLipschitz continuitycondition
for some constantCand allx,y∈Rn; this condition ensures the existence of a uniquestrong solutionXto the stochastic differential equation given above. Thevector fieldbis known as thedriftcoefficientofX; thematrix fieldσ is known as thediffusion coefficientofX. It is important to note thatband σ do not depend upon time; if they were to depend upon time,Xwould be referred to only as anItô process, not a diffusion. Itô diffusions have a number of nice properties, discussed in the sections below, which include sample and Feller continuity, the Markov and strong Markov properties, and the existence of an infinitesimal generator.
In particular, an Itô diffusion is a continuous, strongly Markovian process such that the domain of its characteristic operator includes alltwice-continuously differentiablefunctions, so it is adiffusionin the sense defined by Dynkin (1965).
An Itô diffusionXis asample continuous process, i.e., foralmost allrealisationsBt(ω) of the noise,Xt(ω) is acontinuous functionof the time parameter,t. More accurately, there is a "continuous version" ofX, a continuous processYso that
This follows from the standard existence and uniqueness theory for strong solutions of stochastic differential equations.
In addition to being (sample) continuous, an Itô diffusionXsatisfies the stronger requirement to be aFeller-continuous process.
For a pointx∈Rn, letPxdenote the law ofXgiven initial datumX0=x, and letExdenoteexpectationwith respect toPx.
Letf:Rn→Rbe aBorel-measurable functionthat isbounded belowand define, for fixedt≥ 0,u:Rn→Rby u(x) = Ex[f(Xt)].
The behaviour of the functionuabove when the timetis varied is addressed by the Kolmogorov backward equation, the Fokker–Planck equation, etc. (See below.)
An Itô diffusionXhas the important property of beingMarkovian: the future behaviour ofX, given what has happened up to some timet, is the same as if the process had been started at the positionXtat time 0. The precise mathematical formulation of this statement requires some additional notation:
Let Σ∗denote thenaturalfiltrationof (Ω, Σ) generated by the Brownian motionB: fort≥ 0,
It is easy to show thatXisadaptedto Σ∗(i.e. eachXtis Σt-measurable), so the natural filtrationF∗=F∗Xof (Ω, Σ) generated byXhasFt⊆ Σtfor eacht≥ 0.
Letf:Rn→Rbe a bounded, Borel-measurable function. Then, for alltandh≥ 0, theconditional expectationconditioned on theσ-algebraΣtand the expectation of the process "restarted" fromXtsatisfy theMarkov property:
In fact,Xis also a Markov process with respect to the filtrationF∗, as the following shows:
The strong Markov property is a generalization of the Markov property above in whichtis replaced by a suitable random time τ : Ω → [0, +∞] known as astopping time. So, for example, rather than "restarting" the processXat timet= 1, one could "restart" wheneverXfirst reaches some specified pointpofRn.
As before, letf:Rn→Rbe a bounded, Borel-measurable function. Let τ be a stopping time with respect to the filtration Σ∗with τ < +∞almost surely. Then, for allh≥ 0,
Associated to each Itô diffusion, there is a second-orderpartial differential operatorknown as thegeneratorof the diffusion. The generator is very useful in many applications and encodes a great deal of information about the processX. Formally, theinfinitesimal generatorof an Itô diffusionXis the operatorA, which is defined to act on suitable functionsf:Rn→Rby
The set of all functionsffor which this limit exists at a pointxis denotedDA(x), whileDAdenotes the set of allffor which the limit exists for allx∈Rn. One can show that anycompactly-supportedC2(twice differentiable with continuous second derivative) functionflies inDAand that
or, in terms of thegradientandscalarandFrobeniusinner products,
The generatorAfor standardn-dimensional Brownian motionB, which satisfies the stochastic differential equation dXt= dBt, is given by
i.e.,A= Δ/2, where Δ denotes theLaplace operator.
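For reference, the usual limit definition of the generator and its coordinate expression, written in terms of the drift b and diffusion coefficient σ defined earlier, are:

\[
  A f(x) = \lim_{t \downarrow 0} \frac{\mathbf{E}^{x}\!\left[f(X_{t})\right] - f(x)}{t},
  \qquad
  A f(x) = \sum_{i=1}^{n} b_{i}(x)\,\frac{\partial f}{\partial x_{i}}(x)
  + \frac{1}{2} \sum_{i,j=1}^{n} \bigl(\sigma(x)\sigma(x)^{\mathsf{T}}\bigr)_{ij}\,
  \frac{\partial^{2} f}{\partial x_{i}\,\partial x_{j}}(x),
\]

which, for b = 0 and σ equal to the identity, reduces to the Laplacian over two, as stated above for Brownian motion.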
The generator is used in the formulation of Kolmogorov's backward equation. Intuitively, this equation tells us how the expected value of any suitably smooth statistic ofXevolves in time: it must solve a certainpartial differential equationin which timetand the initial positionxare the independent variables. More precisely, iff∈C2(Rn;R) has compact support andu: [0, +∞) ×Rn→Ris defined by
thenu(t,x) is differentiable with respect tot,u(t, ·) ∈DAfor allt, andusatisfies the followingpartial differential equation, known asKolmogorov's backward equation:
The Fokker–Planck equation (also known asKolmogorov's forward equation) is in some sense the "adjoint" to the backward equation, and tells us how theprobability density functionsofXtevolve with timet. Let ρ(t, ·) be the density ofXtwith respect toLebesgue measureonRn, i.e., for any Borel-measurable setS⊆Rn,
LetA∗denote theHermitian adjointofA(with respect to theL2inner product). Then, given that the initial positionX0has a prescribed density ρ0, ρ(t,x) is differentiable with respect tot, ρ(t, ·) ∈DA*for allt, and ρ satisfies the following partial differential equation, known as theFokker–Planck equation:
The Feynman–Kac formula is a useful generalization of Kolmogorov's backward equation. Again,fis inC2(Rn;R) and has compact support, andq:Rn→Ris taken to be acontinuous functionthat is bounded below. Define a functionv: [0, +∞) ×Rn→Rby
TheFeynman–Kac formulastates thatvsatisfies the partial differential equation
Moreover, ifw: [0, +∞) ×Rn→RisC1in time,C2in space, bounded onK×Rnfor all compactK, and satisfies the above partial differential equation, thenwmust bevas defined above.
Kolmogorov's backward equation is the special case of the Feynman–Kac formula in whichq(x) = 0 for allx∈Rn.
Thecharacteristic operatorof an Itô diffusionXis a partial differential operator closely related to the generator, but somewhat more general. It is more suited to certain problems, for example in the solution of theDirichlet problem.
Thecharacteristic operatorA{\displaystyle {\mathcal {A}}}of an Itô diffusionXis defined by
where the setsUform a sequence ofopen setsUkthat decrease to the pointxin the sense that
and
is the first exit time fromUforX.DA{\displaystyle D_{\mathcal {A}}}denotes the set of allffor which this limit exists for allx∈Rnand all sequences {Uk}. IfEx[τU] = +∞ for all open setsUcontainingx, define
The characteristic operator and infinitesimal generator are very closely related, and even agree for a large class of functions. One can show that
and that
In particular, the generator and characteristic operator agree for allC2functionsf, in which case
Above, the generator (and hence characteristic operator) of Brownian motion onRnwas calculated to be1/2Δ, where Δ denotes the Laplace operator. The characteristic operator is useful in defining Brownian motion on anm-dimensionalRiemannian manifold(M,g): aBrownian motion onMis defined to be a diffusion onMwhose characteristic operatorA{\displaystyle {\mathcal {A}}}in local coordinatesxi, 1 ≤i≤m, is given by1/2ΔLB, where ΔLBis theLaplace-Beltrami operatorgiven in local coordinates by
where [gij] = [gij]−1in the sense ofthe inverse of a square matrix.
In general, the generatorAof an Itô diffusionXis not abounded operator. However, if a positive multiple of the identity operatorIis subtracted fromAthen the resulting operator is invertible. The inverse of this operator can be expressed in terms ofXitself using theresolventoperator.
For α > 0, theresolvent operatorRα, acting on bounded, continuous functionsg:Rn→R, is defined by
It can be shown, using the Feller continuity of the diffusionX, thatRαgis itself a bounded, continuous function. Also,Rαand αI−Aare mutually inverse operators:
Sometimes it is necessary to find aninvariant measurefor an Itô diffusionX, i.e. a measure onRnthat does not change under the "flow" ofX: i.e., ifX0is distributed according to such an invariant measure μ∞, thenXtis also distributed according to μ∞for anyt≥ 0. The Fokker–Planck equation offers a way to find such a measure, at least if it has a probability density function ρ∞: ifX0is indeed distributed according to an invariant measure μ∞with density ρ∞, then the density ρ(t, ·) ofXtdoes not change witht, so ρ(t, ·) = ρ∞, and so ρ∞must solve the (time-independent) partial differential equation
This illustrates one of the connections between stochastic analysis and the study of partial differential equations. Conversely, a given second-order linear partial differential equation of the form Λf= 0 may be hard to solve directly, but if Λ =A∗for some Itô diffusionX, and an invariant measure forXis easy to compute, then that measure's density provides a solution to the partial differential equation.
An invariant measure is comparatively easy to compute when the processXis a stochastic gradient flow of the form dXt = −∇Ψ(Xt) dt + √(2/β) dBt,
where β > 0 plays the role of aninverse temperatureand Ψ :Rn→Ris a scalar potential satisfying suitable smoothness and growth conditions. In this case, the Fokker–Planck equation has a unique stationary solution ρ∞(i.e.Xhas a unique invariant measure μ∞with density ρ∞) and it is given by theGibbs distribution:
where thepartition functionZis given by
Moreover, the density ρ∞satisfies avariational principle: it minimizes over all probability densities ρ onRnthefree energyfunctionalFgiven by
where
plays the role of an energy functional, and
is the negative of the Gibbs-Boltzmann entropy functional. Even when the potential Ψ is not well-behaved enough for the partition functionZand the Gibbs measure μ∞to be defined, the free energyF[ρ(t, ·)] still makes sense for each timet≥ 0, provided that the initial condition hasF[ρ(0, ·)] < +∞. The free energy functionalFis, in fact, aLyapunov functionfor the Fokker–Planck equation:F[ρ(t, ·)] must decrease astincreases. Thus,Fis anH-functionfor theX-dynamics.
Consider theOrnstein-Uhlenbeck processXonRnsatisfying the stochastic differential equation dXt = −κ(Xt − m) dt + √(2/β) dBt,
wherem∈Rnand β, κ > 0 are given constants. In this case, the potential Ψ is given by Ψ(x) = κ|x − m|²/2,
and so the invariant measure forXis aGaussian measurewith density ρ∞given by ρ∞(x) ∝ exp(−βκ|x − m|²/2).
Heuristically, for larget,Xtis approximatelynormally distributedwith meanmand variance (βκ)−1. The expression for the variance may be interpreted as follows: large values of κ mean that the potential well Ψ has "very steep sides", soXtis unlikely to move far from the minimum of Ψ atm; similarly, large values of β mean that the system is quite "cold" with little noise, so, again,Xtis unlikely to move far away fromm.
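A quick numerical check of this heuristic is easy with an Euler–Maruyama discretization. The one-dimensional Python sketch below (not from the source) integrates dXt = −κ(Xt − m) dt + √(2/β) dBt and compares the long-run sample mean and variance with m and (βκ)⁻¹; the step size, run length and parameter values are illustrative.

import numpy as np

def ou_sample(m=1.0, kappa=2.0, beta=4.0, dt=1e-3, steps=200_000, seed=0):
    """Euler-Maruyama simulation of dX = -kappa*(X - m) dt + sqrt(2/beta) dB,
    keeping only late-time samples to estimate the invariant Gaussian law."""
    rng = np.random.default_rng(seed)
    x = 0.0
    samples = []
    noise = np.sqrt(2.0 * dt / beta)
    for step in range(steps):
        x += -kappa * (x - m) * dt + noise * rng.standard_normal()
        if step > steps // 2:                 # discard the transient
            samples.append(x)
    samples = np.asarray(samples)
    return float(samples.mean()), float(samples.var())

mean, var = ou_sample()
print(round(mean, 3), round(var, 4), "targets:", 1.0, 1.0 / (4.0 * 2.0))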
In general, an Itô diffusionXis not amartingale. However, for anyf∈C2(Rn;R) with compact support, the processM: [0, +∞) × Ω →Rdefined by
whereAis the generator ofX, is a martingale with respect to the natural filtrationF∗of (Ω, Σ) byX. The proof is quite simple: it follows from the usual expression of the action of the generator on smooth enough functionsfandItô's lemma(the stochasticchain rule) that
Since Itô integrals are martingales with respect to the natural filtration Σ∗of (Ω, Σ) byB, fort>s,
Hence, as required,
sinceMsisFs-measurable.
Dynkin's formula, named afterEugene Dynkin, gives theexpected valueof any suitably smooth statistic of an Itô diffusionX(with generatorA) at a stopping time. Precisely, if τ is a stopping time withEx[τ] < +∞, andf:Rn→RisC2with compact support, then
Dynkin's formula can be used to calculate many useful statistics of stopping times. For example, canonical Brownian motion on the real line starting at 0 exits theinterval(−R, +R) at a random time τRwith expected value E0[τR] = R².
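The value E0[τR] = R² is easy to confirm numerically. The Python sketch below (not from the source) estimates the mean exit time of a discretized Brownian path from (−R, R); the time step and path count are illustrative, and the estimate carries a small discretization bias.

import numpy as np

def mean_exit_time(R=1.0, dt=1e-3, n_paths=2000, seed=0):
    """Monte Carlo estimate of E[tau_R] for standard Brownian motion started
    at 0 and stopped on leaving (-R, R); Dynkin's formula gives R**2."""
    rng = np.random.default_rng(seed)
    step = np.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        x, t = 0.0, 0.0
        while abs(x) < R:
            x += step * rng.standard_normal()
            t += dt
        total += t
    return total / n_paths

print(round(mean_exit_time(), 3), "vs R**2 =", 1.0)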
Dynkin's formula provides information about the behaviour ofXat a fairly general stopping time. For more information on the distribution ofXat ahitting time, one can study theharmonic measureof the process.
In many situations, it is sufficient to know when an Itô diffusionXwill first leave ameasurable setH⊆Rn. That is, one wishes to study thefirst exit time
Sometimes, however, one also wishes to know the distribution of the points at whichXexits the set. For example, canonical Brownian motionBon the real line starting at 0 exits theinterval(−1, 1) at −1 with probability1/2and at 1 with probability1/2, soBτ(−1, 1)isuniformly distributedon the set {−1, 1}.
In general, ifGiscompactly embeddedwithinRn, then theharmonic measure(orhitting distribution) ofXon theboundary∂GofGis the measure μGxdefined by
forx∈GandF⊆ ∂G.
Returning to the earlier example of Brownian motion, one can show that ifBis a Brownian motion inRnstarting atx∈RnandD⊂Rnis anopen ballcentred onx, then the harmonic measure ofBon ∂Disinvariantunder allrotationsofDaboutxand coincides with the normalizedsurface measureon ∂D.
The harmonic measure satisfies an interestingmean value property: iff:Rn→Ris any bounded, Borel-measurable function and φ is given by
then, for all Borel setsG⊂⊂Hand allx∈G,
The mean value property is very useful in thesolution of partial differential equations using stochastic processes.
LetAbe a partial differential operator on a domainD⊆Rnand letXbe an Itô diffusion withAas its generator. Intuitively, the Green measure of a Borel setHis the expected length of time thatXstays inHbefore it leaves the domainD. That is, theGreen measureofXwith respect toDatx, denotedG(x, ·), is defined for Borel setsH⊆Rnby
or for bounded, continuous functionsf:D→Rby
The name "Green measure" comes from the fact that ifXis Brownian motion, then
whereG(x,y) isGreen's functionfor the operator1/2Δ on the domainD.
Suppose thatEx[τD] < +∞ for allx∈D. Then theGreen formulaholds for allf∈C2(Rn;R) with compact support:
In particular, if the support offiscompactly embeddedinD,
|
https://en.wikipedia.org/wiki/It%C3%B4_diffusion
|
In physics, aLangevin equation(named afterPaul Langevin) is astochastic differential equationdescribing how a system evolves when subjected to a combination of deterministic and fluctuating ("random") forces. The dependent variables in a Langevin equation typically are collective (macroscopic) variables changing only slowly in comparison to the other (microscopic) variables of the system. The fast (microscopic) variables are responsible for the stochastic nature of the Langevin equation. One application is toBrownian motion, which models the fluctuating motion of a small particle in a fluid.
The original Langevin equation[1][2]describesBrownian motion, the apparently random movement of a particle in a fluid due to collisions with the molecules of the fluid,mdvdt=−λv+η(t).{\displaystyle m{\frac {\mathrm {d} \mathbf {v} }{\mathrm {d} t}}=-\lambda \mathbf {v} +{\boldsymbol {\eta }}\left(t\right).}
Here,v{\displaystyle \mathbf {v} }is the velocity of the particle,λ{\displaystyle \lambda }is its damping coefficient, andm{\displaystyle m}is its mass. The force acting on the particle is written as a sum of a viscous force proportional to the particle's velocity (Stokes' law), and anoise termη(t){\displaystyle {\boldsymbol {\eta }}\left(t\right)}representing the effect of the collisions with the molecules of the fluid. The forceη(t){\displaystyle {\boldsymbol {\eta }}\left(t\right)}has aGaussian probability distributionwith correlation function⟨ηi(t)ηj(t′)⟩=2λkBTδi,jδ(t−t′),{\displaystyle \left\langle \eta _{i}\left(t\right)\eta _{j}\left(t'\right)\right\rangle =2\lambda k_{\text{B}}T\delta _{i,j}\delta \left(t-t'\right),}wherekB{\displaystyle k_{\text{B}}}is theBoltzmann constant,T{\displaystyle T}is the temperature andηi(t){\displaystyle \eta _{i}\left(t\right)}is the i-th component of the vectorη(t){\displaystyle {\boldsymbol {\eta }}\left(t\right)}. Theδ{\displaystyle \delta }-functionform of the time correlation means that the force at a timet{\displaystyle t}is uncorrelated with the force at any other time. This is an approximation: the actual random force has a nonzero correlation time corresponding to the collision time of the molecules. However, the Langevin equation is used to describe the motion of a "macroscopic" particle at a much longer time scale, and in this limit theδ{\displaystyle \delta }-correlation and the Langevin equation becomes virtually exact.
Another common feature of the Langevin equation is the occurrence of the damping coefficientλ{\displaystyle \lambda }in the correlation function of the random force, which in an equilibrium system is an expression of theEinstein relation.
A strictlyδ{\displaystyle \delta }-correlated fluctuating forceη(t){\displaystyle {\boldsymbol {\eta }}\left(t\right)}is not a function in the usual mathematical sense and even the derivativedv/dt{\displaystyle \mathrm {d} \mathbf {v} /\mathrm {d} t}is not defined in this limit. This problem disappears when the Langevin equation is written in integral formmv=∫t(−λv+η(t))dt.{\displaystyle m\mathbf {v} =\int ^{t}\left(-\lambda \mathbf {v} +{\boldsymbol {\eta }}\left(t\right)\right)\mathrm {d} t.}
Therefore, the differential form is only an abbreviation for its time integral. The general mathematical term for equations of this type is "stochastic differential equation".[3]
Another mathematical ambiguity occurs for Langevin equations with multiplicative noise, which refers to noise terms that are multiplied by a non-constant function of the dependent variables, e.g.,|v(t)|η(t){\displaystyle \left|{\boldsymbol {v}}(t)\right|{\boldsymbol {\eta }}(t)}. If a multiplicative noise is intrinsic to the system, its definition is ambiguous, as it is equally valid to interpret it according to Stratonovich- or Ito- scheme (seeItô calculus). Nevertheless, physical observables are independent of the interpretation, provided the latter is applied consistently when manipulating the equation. This is necessary because the symbolic rules of calculus differ depending on the interpretation scheme. If the noise is external to the system, the appropriate interpretation is the Stratonovich one.[4][5]
There is a formal derivation of a generic Langevin equation from classical mechanics.[6][7]This generic equation plays a central role in the theory ofcritical dynamics,[8]and other areas of nonequilibrium statistical mechanics. The equation for Brownian motion above is a special case.
An essential step in the derivation is the division of the degrees of freedom into the categoriesslowandfast. For example, local thermodynamic equilibrium in a liquid is reached within a few collision times, but it takes much longer for densities of conserved quantities like mass and energy to relax to equilibrium. Thus, densities of conserved quantities, and in particular their long wavelength components, are slow variable candidates. This division can be expressed formally with theZwanzig projection operator.[9]Nevertheless, the derivation is not completely rigorous from a mathematical physics perspective because it relies on assumptions that lack rigorous proof, and instead are justified only as plausible approximations of physical systems.
LetA={Ai}{\displaystyle A=\{A_{i}\}}denote the slow variables. The generic Langevin equation then readsdAidt=kBT∑j[Ai,Aj]dHdAj−∑jλi,j(A)dHdAj+∑jdλi,j(A)dAj+ηi(t).{\displaystyle {\frac {\mathrm {d} A_{i}}{\mathrm {d} t}}=k_{\text{B}}T\sum \limits _{j}{\left[{A_{i},A_{j}}\right]{\frac {{\mathrm {d} }{\mathcal {H}}}{\mathrm {d} A_{j}}}}-\sum \limits _{j}{\lambda _{i,j}\left(A\right){\frac {\mathrm {d} {\mathcal {H}}}{\mathrm {d} A_{j}}}+}\sum \limits _{j}{\frac {\mathrm {d} {\lambda _{i,j}\left(A\right)}}{\mathrm {d} A_{j}}}+\eta _{i}\left(t\right).}
The fluctuating forceηi(t){\displaystyle \eta _{i}\left(t\right)}obeys aGaussian probability distributionwith correlation function⟨ηi(t)ηj(t′)⟩=2λi,j(A)δ(t−t′).{\displaystyle \left\langle {\eta _{i}\left(t\right)\eta _{j}\left(t'\right)}\right\rangle =2\lambda _{i,j}\left(A\right)\delta \left(t-t'\right).}
This implies theOnsager reciprocity relationλi,j=λj,i{\displaystyle \lambda _{i,j}=\lambda _{j,i}}for the damping coefficientsλ{\displaystyle \lambda }. The dependencedλi,j/dAj{\displaystyle \mathrm {d} \lambda _{i,j}/\mathrm {d} A_{j}}ofλ{\displaystyle \lambda }onA{\displaystyle A}is negligible in most cases. The symbolH=−ln(p0){\displaystyle {\mathcal {H}}=-\ln \left(p_{0}\right)}denotes theHamiltonianof the system, wherep0(A){\displaystyle p_{0}\left(A\right)}is the equilibrium probability distribution of the variablesA{\displaystyle A}. Finally,[Ai,Aj]{\displaystyle [A_{i},A_{j}]}is the projection of thePoisson bracketof the slow variablesAi{\displaystyle A_{i}}andAj{\displaystyle A_{j}}onto the space of slow variables.
In the Brownian motion case one would haveH=p2/(2mkBT){\displaystyle {\mathcal {H}}=\mathbf {p} ^{2}/\left(2mk_{\text{B}}T\right)},A={p}{\displaystyle A=\{\mathbf {p} \}}orA={x,p}{\displaystyle A=\{\mathbf {x} ,\mathbf {p} \}}and[xi,pj]=δi,j{\displaystyle [x_{i},p_{j}]=\delta _{i,j}}. The equation of motiondx/dt=p/m{\displaystyle \mathrm {d} \mathbf {x} /\mathrm {d} t=\mathbf {p} /m}forx{\displaystyle \mathbf {x} }is exact: there is no fluctuating forceηx{\displaystyle \eta _{x}}and no damping coefficientλx,p{\displaystyle \lambda _{x,p}}.
There is a close analogy between the paradigmatic Brownian particle discussed above andJohnson noise, the electric voltage generated by thermal fluctuations in a resistor.[10]The diagram at the right shows an electric circuit consisting of aresistanceRand acapacitanceC. The slow variable is the voltageUbetween the ends of the resistor. The Hamiltonian readsH=E/kBT=CU2/(2kBT){\displaystyle {\mathcal {H}}=E/k_{\text{B}}T=CU^{2}/(2k_{\text{B}}T)}, and the Langevin equation becomesdUdt=−URC+η(t),⟨η(t)η(t′)⟩=2kBTRC2δ(t−t′).{\displaystyle {\frac {\mathrm {d} U}{\mathrm {d} t}}=-{\frac {U}{RC}}+\eta \left(t\right),\;\;\left\langle \eta \left(t\right)\eta \left(t'\right)\right\rangle ={\frac {2k_{\text{B}}T}{RC^{2}}}\delta \left(t-t'\right).}
This equation may be used to determine the correlation function⟨U(t)U(t′)⟩=kBTCexp(−|t−t′|RC)≈2RkBTδ(t−t′),{\displaystyle \left\langle U\left(t\right)U\left(t'\right)\right\rangle ={\frac {k_{\text{B}}T}{C}}\exp \left(-{\frac {\left|t-t'\right|}{RC}}\right)\approx 2Rk_{\text{B}}T\delta \left(t-t'\right),}which becomes white noise (Johnson noise) when the capacitanceCbecomes negligibly small.
The dynamics of theorder parameterφ{\displaystyle \varphi }of a second order phase transition slows down near thecritical pointand can be described with a Langevin equation.[8]The simplest case is theuniversality class"model A" with a non-conserved scalar order parameter, realized for instance in axial ferromagnets,∂∂tφ(x,t)=−λδHδφ+η(x,t),H=∫ddx[12r0φ2+uφ4+12(∇φ)2],⟨η(x,t)η(x′,t′)⟩=2λδ(x−x′)δ(t−t′).{\displaystyle {\begin{aligned}{\frac {\partial }{\partial t}}\varphi {\left(\mathbf {x} ,t\right)}&=-\lambda {\frac {\delta {\mathcal {H}}}{\delta \varphi }}+\eta {\left(\mathbf {x} ,t\right)},\\[2ex]{\mathcal {H}}&=\int d^{d}x\left[{\frac {1}{2}}r_{0}\varphi ^{2}+u\varphi ^{4}+{\frac {1}{2}}\left(\nabla \varphi \right)^{2}\right],\\[2ex]\left\langle \eta {\left(\mathbf {x} ,t\right)}\,\eta {\left(\mathbf {x} ',t'\right)}\right\rangle &=2\lambda \,\delta {\left(\mathbf {x} -\mathbf {x} '\right)}\;\delta {\left(t-t'\right)}.\end{aligned}}}Other universality classes (the nomenclature is "model A",..., "model J") contain a diffusing order parameter, order parameters with several components, other critical variables and/or contributions from Poisson brackets.[8]
mdvdt=−λv+η(t)−kx{\displaystyle m{\frac {dv}{dt}}=-\lambda v+\eta (t)-kx}
A particle in a fluid is described by a Langevin equation with a potential energy function, a damping force, and thermal fluctuations given by thefluctuation dissipation theorem. If the potential is quadratic then the constant energy curves are ellipses, as shown in the figure. If there is dissipation but no thermal noise, a particle continually loses energy to the environment, and its time-dependent phase portrait (velocity vs position) corresponds to an inward spiral toward 0 velocity. By contrast, thermal fluctuations continually add energy to the particle and prevent it from reaching exactly 0 velocity. Rather, the initial ensemble of stochastic oscillators approaches a steady state in which the velocity and position are distributed according to theMaxwell–Boltzmann distribution. In the plot below (figure 2), the long-time velocity distribution (blue) and position distribution (orange) in a harmonic potential (U=12kx2{\textstyle U={\frac {1}{2}}kx^{2}}) are plotted with the Boltzmann probabilities for velocity (green) and position (red). In particular, the late-time behavior depicts thermal equilibrium.
Consider a free particle of massm{\displaystyle m}with equation of motion described bymdvdt=−vμ+η(t),{\displaystyle m{\frac {d\mathbf {v} }{dt}}=-{\frac {\mathbf {v} }{\mu }}+{\boldsymbol {\eta }}(t),}wherev=dr/dt{\displaystyle \mathbf {v} =d\mathbf {r} /dt}is the particle velocity,μ{\displaystyle \mu }is the particle mobility, andη(t)=ma(t){\displaystyle {\boldsymbol {\eta }}(t)=m\mathbf {a} (t)}is a rapidly fluctuating force whose time-average vanishes over a characteristic timescaletc{\displaystyle t_{c}}of particle collisions, i.e.η(t)¯=0{\displaystyle {\overline {{\boldsymbol {\eta }}(t)}}=0}. The general solution to the equation of motion isv(t)=v(0)e−t/τ+∫0ta(t′)e−(t−t′)/τdt′,{\displaystyle \mathbf {v} (t)=\mathbf {v} (0)e^{-t/\tau }+\int _{0}^{t}\mathbf {a} (t')e^{-(t-t')/\tau }dt',}whereτ=mμ{\displaystyle \tau =m\mu }is the correlation time of the noise term. It can also be shown that theautocorrelation functionof the particle velocityv{\displaystyle \mathbf {v} }is given by[11]Rvv(t1,t2)≡⟨v(t1)⋅v(t2)⟩=v2(0)e−(t1+t2)/τ+∫0t1∫0t2Raa(t1′,t2′)e−(t1+t2−t1′−t2′)/τdt1′dt2′≃v2(0)e−|t2−t1|/τ+[3kBTm−v2(0)][e−|t2−t1|/τ−e−(t1+t2)/τ],{\displaystyle {\begin{aligned}R_{vv}(t_{1},t_{2})&\equiv \langle \mathbf {v} (t_{1})\cdot \mathbf {v} (t_{2})\rangle \\&=v^{2}(0)e^{-(t_{1}+t_{2})/\tau }+\int _{0}^{t_{1}}\int _{0}^{t_{2}}R_{aa}(t_{1}',t_{2}')e^{-(t_{1}+t_{2}-t_{1}'-t_{2}')/\tau }dt_{1}'dt_{2}'\\&\simeq v^{2}(0)e^{-|t_{2}-t_{1}|/\tau }+\left[{\frac {3k_{\text{B}}T}{m}}-v^{2}(0)\right]{\Big [}e^{-|t_{2}-t_{1}|/\tau }-e^{-(t_{1}+t_{2})/\tau }{\Big ]},\end{aligned}}}where we have used the property that the variablesa(t1′){\displaystyle \mathbf {a} (t_{1}')}anda(t2′){\displaystyle \mathbf {a} (t_{2}')}become uncorrelated for time separationst2′−t1′≫tc{\displaystyle t_{2}'-t_{1}'\gg t_{c}}. Besides, the value oflimt→∞⟨v2(t)⟩=limt→∞Rvv(t,t){\textstyle \lim _{t\to \infty }\langle v^{2}(t)\rangle =\lim _{t\to \infty }R_{vv}(t,t)}is set to be equal to3kBT/m{\displaystyle 3k_{\text{B}}T/m}such that it obeys theequipartition theorem. If the system is initially at thermal equilibrium already withv2(0)=3kBT/m{\displaystyle v^{2}(0)=3k_{\text{B}}T/m}, then⟨v2(t)⟩=3kBT/m{\displaystyle \langle v^{2}(t)\rangle =3k_{\text{B}}T/m}for allt{\displaystyle t}, meaning that the system remains at equilibrium at all times.
The velocityv(t){\displaystyle \mathbf {v} (t)}of the Brownian particle can be integrated to yield its trajectoryr(t){\displaystyle \mathbf {r} (t)}. If it is initially located at the origin with probability 1, then the result isr(t)=v(0)τ(1−e−t/τ)+τ∫0ta(t′)[1−e−(t−t′)/τ]dt′.{\displaystyle \mathbf {r} (t)=\mathbf {v} (0)\tau \left(1-e^{-t/\tau }\right)+\tau \int _{0}^{t}\mathbf {a} (t')\left[1-e^{-(t-t')/\tau }\right]dt'.}
Hence, the average displacement⟨r(t)⟩=v(0)τ(1−e−t/τ){\textstyle \langle \mathbf {r} (t)\rangle =\mathbf {v} (0)\tau \left(1-e^{-t/\tau }\right)}asymptotes tov(0)τ{\displaystyle \mathbf {v} (0)\tau }as the system relaxes. Themean squared displacementcan be determined similarly:⟨r2(t)⟩=v2(0)τ2(1−e−t/τ)2−3kBTmτ2(1−e−t/τ)(3−e−t/τ)+6kBTmτt.{\displaystyle \langle r^{2}(t)\rangle =v^{2}(0)\tau ^{2}\left(1-e^{-t/\tau }\right)^{2}-{\frac {3k_{\text{B}}T}{m}}\tau ^{2}\left(1-e^{-t/\tau }\right)\left(3-e^{-t/\tau }\right)+{\frac {6k_{\text{B}}T}{m}}\tau t.}
This expression implies that⟨r2(t≪τ)⟩≃v2(0)t2{\displaystyle \langle r^{2}(t\ll \tau )\rangle \simeq v^{2}(0)t^{2}}, indicating that the motion of Brownian particles at timescales much shorter than the relaxation timeτ{\displaystyle \tau }of the system is (approximately)time-reversalinvariant. On the other hand,⟨r2(t≫τ)⟩≃6kBTτt/m=6μkBTt=6Dt{\displaystyle \langle r^{2}(t\gg \tau )\rangle \simeq 6k_{\text{B}}T\tau t/m=6\mu k_{\text{B}}Tt=6Dt}, which indicates anirreversible,dissipative process.
If the external potential is conservative and the noise term derives from a reservoir in thermal equilibrium, then the long-time solution to the Langevin equation must reduce to theBoltzmann distribution, which is the probability distribution function for particles in thermal equilibrium. In the special case ofoverdampeddynamics, the inertia of the particle is negligible in comparison to the damping force, and the trajectoryx(t){\displaystyle x(t)}is described by the overdamped Langevin equationλdxdt=−∂V(x)∂x+η(t)≡−∂V(x)∂x+2λkBTdBtdt,{\displaystyle \lambda {\frac {dx}{dt}}=-{\frac {\partial V(x)}{\partial x}}+\eta (t)\equiv -{\frac {\partial V(x)}{\partial x}}+{\sqrt {2\lambda k_{\text{B}}T}}{\frac {dB_{t}}{dt}},}whereλ{\displaystyle \lambda }is the damping constant. The termη(t){\displaystyle \eta (t)}is white noise, characterized by⟨η(t)η(t′)⟩=2kBTλδ(t−t′){\displaystyle \left\langle \eta (t)\eta (t')\right\rangle =2k_{\text{B}}T\lambda \delta (t-t')}(formally, theWiener process). One way to solve this equation is to introduce a test functionf{\displaystyle f}and calculate its average. The average off(x(t)){\displaystyle f(x(t))}should be time-independent for finitex(t){\displaystyle x(t)}, leading toddt⟨f(x(t))⟩=0,{\displaystyle {\frac {d}{dt}}\left\langle f(x(t))\right\rangle =0,}
Itô's lemma for theItô drift-diffusion processdXt=μtdt+σtdBt{\displaystyle dX_{t}=\mu _{t}\,dt+\sigma _{t}\,dB_{t}}says that the differential of a twice-differentiable functionf(t,x)is given bydf=(∂f∂t+μt∂f∂x+σt22∂2f∂x2)dt+σt∂f∂xdBt.{\displaystyle df=\left({\frac {\partial f}{\partial t}}+\mu _{t}{\frac {\partial f}{\partial x}}+{\frac {\sigma _{t}^{2}}{2}}{\frac {\partial ^{2}f}{\partial x^{2}}}\right)dt+\sigma _{t}{\frac {\partial f}{\partial x}}\,dB_{t}.}
Applying this to the calculation of⟨f(x(t))⟩{\displaystyle \langle f(x(t))\rangle }gives⟨−f′(x)∂V∂x+kBTf″(x)⟩=0.{\displaystyle \left\langle -f'(x){\frac {\partial V}{\partial x}}+k_{\text{B}}Tf''(x)\right\rangle =0.}
This average can be written using the probability density function $p(x)$:
$$\begin{aligned}
&\int\left(-f'(x)\frac{\partial V}{\partial x}p(x) + k_\text{B}T f''(x)p(x)\right)dx \\
={}&\int\left(-f'(x)\frac{\partial V}{\partial x}p(x) - k_\text{B}T f'(x)p'(x)\right)dx \\
={}&\;0,
\end{aligned}$$
where the second term was integrated by parts (hence the negative sign). Since this is true for arbitrary functions $f$, it follows that
$$\frac{\partial V}{\partial x}p(x) + k_\text{B}T\,p'(x) = 0,$$
thus recovering the Boltzmann distribution
$$p(x)\propto\exp\left(-\frac{V(x)}{k_\text{B}T}\right).$$
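As a quick numerical illustration of this result, the sketch below integrates the overdamped Langevin equation for a harmonic potential $V(x)=\tfrac12 kx^2$ with illustrative unit parameters (assumptions, not values from the text) and checks that the long-time variance of $x$ matches the Boltzmann value $k_\text{B}T/k$.

```python
import numpy as np

# Minimal sketch: Euler-Maruyama integration of the overdamped Langevin equation
#   lambda dx/dt = -V'(x) + sqrt(2 lambda kB T) dB/dt
# for the harmonic potential V(x) = 0.5*k*x**2, checking that the stationary
# distribution has the Boltzmann variance kB T / k.  Unit parameters are assumptions.
rng = np.random.default_rng(1)
lam, kT, k = 1.0, 1.0, 2.0
dt, n_steps, n_walkers = 2e-3, 10_000, 2_000

x = np.zeros(n_walkers)
noise_amp = np.sqrt(2.0 * kT / lam * dt)
for _ in range(n_steps):
    x += -(k * x / lam) * dt + noise_amp * rng.normal(size=n_walkers)

print("sample variance:", round(x.var(), 3), " Boltzmann prediction kB T / k:", kT / k)
```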
In some situations, one is primarily interested in the noise-averaged behavior of the Langevin equation, as opposed to the solution for particular realizations of the noise. This section describes techniques for obtaining this averaged behavior that are distinct from—but also equivalent to—the stochastic calculus inherent in the Langevin equation.
A Fokker–Planck equation is a deterministic equation for the time-dependent probability density $P(A,t)$ of stochastic variables $A$. The Fokker–Planck equation corresponding to the generic Langevin equation described in this article is the following:[12]
$$\frac{\partial P(A,t)}{\partial t} = \sum_{i,j}\frac{\partial}{\partial A_i}\left(-k_\text{B}T\left[A_i,A_j\right]\frac{\partial\mathcal{H}}{\partial A_j} + \lambda_{i,j}\frac{\partial\mathcal{H}}{\partial A_j} + \lambda_{i,j}\frac{\partial}{\partial A_j}\right)P(A,t).$$
The equilibrium distribution $P(A) = p_0(A) = \text{const}\times\exp(-\mathcal{H})$ is a stationary solution.
The Fokker–Planck equation for an underdamped Brownian particle is called the Klein–Kramers equation.[13][14] If the Langevin equations are written as
$$\begin{aligned}
\dot{\mathbf{r}} &= \frac{\mathbf{p}}{m}\\
\dot{\mathbf{p}} &= -\xi\,\mathbf{p} - \nabla V(\mathbf{r}) + \sqrt{2m\xi k_\text{B}T}\,\boldsymbol{\eta}(t),\qquad \langle\boldsymbol{\eta}^\mathrm{T}(t)\,\boldsymbol{\eta}(t')\rangle = \mathbf{I}\,\delta(t-t'),
\end{aligned}$$
where $\mathbf{p}$ is the momentum, then the corresponding Fokker–Planck equation is
$$\frac{\partial f}{\partial t} + \frac{1}{m}\mathbf{p}\cdot\nabla_\mathbf{r} f = \xi\,\nabla_\mathbf{p}\cdot\left(\mathbf{p}\,f\right) + \nabla_\mathbf{p}\cdot\left(\nabla V(\mathbf{r})\,f\right) + m\xi k_\text{B}T\,\nabla_\mathbf{p}^2 f.$$
Here $\nabla_\mathbf{r}$ and $\nabla_\mathbf{p}$ are the gradient operators with respect to $\mathbf{r}$ and $\mathbf{p}$, and $\nabla_\mathbf{p}^2$ is the Laplacian with respect to $\mathbf{p}$.
In $d$-dimensional free space, corresponding to $V(\mathbf{r}) = \text{constant}$ on $\mathbb{R}^d$, this equation can be solved using Fourier transforms. If the particle is initialized at $t=0$ with position $\mathbf{r}'$ and momentum $\mathbf{p}'$, corresponding to the initial condition $f(\mathbf{r},\mathbf{p},0) = \delta(\mathbf{r}-\mathbf{r}')\,\delta(\mathbf{p}-\mathbf{p}')$, then the solution is[14][15]
$$\begin{aligned}
f(\mathbf{r},\mathbf{p},t) ={}&\frac{1}{\left(2\pi\sigma_X\sigma_P\sqrt{1-\beta^2}\right)^d}\times\\
&\quad\exp\left[-\frac{1}{2(1-\beta^2)}\left(\frac{|\mathbf{r}-\boldsymbol{\mu}_X|^2}{\sigma_X^2} + \frac{|\mathbf{p}-\boldsymbol{\mu}_P|^2}{\sigma_P^2} - \frac{2\beta\,(\mathbf{r}-\boldsymbol{\mu}_X)\cdot(\mathbf{p}-\boldsymbol{\mu}_P)}{\sigma_X\sigma_P}\right)\right],
\end{aligned}$$
where
$$\begin{aligned}
&\sigma_X^2 = \frac{k_\text{B}T}{m\xi^2}\left[1 + 2\xi t - \left(2 - e^{-\xi t}\right)^2\right];\qquad \sigma_P^2 = mk_\text{B}T\left(1 - e^{-2\xi t}\right)\\
&\beta = \frac{k_\text{B}T}{\xi\sigma_X\sigma_P}\left(1 - e^{-\xi t}\right)^2\\
&\boldsymbol{\mu}_X = \mathbf{r}' + (m\xi)^{-1}\left(1 - e^{-\xi t}\right)\mathbf{p}';\qquad \boldsymbol{\mu}_P = \mathbf{p}'\,e^{-\xi t}.
\end{aligned}$$
In three spatial dimensions, the mean squared displacement is
$$\langle\mathbf{r}(t)^2\rangle = \int f(\mathbf{r},\mathbf{p},t)\,\mathbf{r}^2\,d\mathbf{r}\,d\mathbf{p} = \boldsymbol{\mu}_X^2 + 3\sigma_X^2.$$
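Since $\boldsymbol{\mu}_X$ and $\sigma_X^2$ are given in closed form, one can verify directly that $\boldsymbol{\mu}_X^2 + 3\sigma_X^2$ reproduces the mean squared displacement obtained earlier from the Langevin equation once $\xi = 1/\tau$ and $\mathbf{p}' = m\mathbf{v}(0)$. The sketch below performs this check numerically; the parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: check that the mean squared displacement mu_X^2 + 3*sigma_X^2
# from the free-space Klein-Kramers solution reproduces the earlier Langevin result
#   <r^2> = v0^2 tau^2 (1-e)^2 - (3 kB T/m) tau^2 (1-e)(3-e) + (6 kB T/m) tau t,
# with e = exp(-t/tau), once xi = 1/tau and p' = m v(0).  Parameter values are
# illustrative assumptions; the particle starts at the origin (r' = 0).
m, kT, tau = 1.0, 1.0, 0.7
xi = 1.0 / tau
v0 = np.array([0.3, -0.2, 0.5])
p0 = m * v0

t = np.linspace(0.01, 20.0, 500)
e = np.exp(-t / tau)

msd_langevin = (v0 @ v0) * tau**2 * (1 - e)**2 \
    - 3.0 * kT / m * tau**2 * (1 - e) * (3 - e) + 6.0 * kT / m * tau * t

mu_X2 = (p0 @ p0) * ((1 - e) / (m * xi))**2
sigma_X2 = kT / (m * xi**2) * (1 + 2 * xi * t - (2 - e)**2)
msd_kramers = mu_X2 + 3.0 * sigma_X2

print("max |difference| =", np.max(np.abs(msd_langevin - msd_kramers)))  # machine precision
```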
A path integral equivalent to a Langevin equation can be obtained from the corresponding Fokker–Planck equation or by transforming the Gaussian probability distribution $P^{(\eta)}(\eta)\,\mathrm{d}\eta$ of the fluctuating force $\eta$ to a probability distribution of the slow variables, schematically $P(A)\,\mathrm{d}A = P^{(\eta)}(\eta(A))\det(\mathrm{d}\eta/\mathrm{d}A)\,\mathrm{d}A$.
The functional determinant and associated mathematical subtleties drop out if the Langevin equation is discretized in the natural (causal) way, where $A(t+\Delta t) - A(t)$ depends on $A(t)$ but not on $A(t+\Delta t)$. It turns out to be convenient to introduce auxiliary response variables $\tilde{A}$. The path integral equivalent to the generic Langevin equation then reads[16]
$$\int P(A,\tilde{A})\,\mathrm{d}A\,\mathrm{d}\tilde{A} = N\int\exp\left(L(A,\tilde{A})\right)\mathrm{d}A\,\mathrm{d}\tilde{A},$$
where $N$ is a normalization factor and
$$L(A,\tilde{A}) = \int\sum_{i,j}\left\{\tilde{A}_i\lambda_{i,j}\tilde{A}_j - \tilde{A}_i\left\{\delta_{i,j}\frac{\mathrm{d}A_j}{\mathrm{d}t} - k_\text{B}T\left[A_i,A_j\right]\frac{\mathrm{d}\mathcal{H}}{\mathrm{d}A_j} + \lambda_{i,j}\frac{\mathrm{d}\mathcal{H}}{\mathrm{d}A_j} - \frac{\mathrm{d}\lambda_{i,j}}{\mathrm{d}A_j}\right\}\right\}\mathrm{d}t.$$
The path integral formulation allows for the use of tools from quantum field theory, such as perturbation and renormalization group methods. This formulation is typically referred to as either the Martin–Siggia–Rose formalism[17] or the Janssen–De Dominicis[16][18] formalism after its developers. The mathematical formalism for this representation can be developed on abstract Wiener space.
|
https://en.wikipedia.org/wiki/Langevin_equation
|
In probability theory, the arcsine laws are a collection of results for one-dimensional random walks and Brownian motion (the Wiener process). The best known of these is attributed to Paul Lévy (1939).
All three laws relate path properties of the Wiener process to the arcsine distribution. A random variable $X$ on $[0,1]$ is arcsine-distributed if
$$\Pr(X \le x) = \frac{2}{\pi}\arcsin\left(\sqrt{x}\right), \qquad 0\le x\le 1.$$
Throughout we suppose that $(W_t)_{0\le t\le 1}\in\mathbb{R}$ is the one-dimensional Wiener process on $[0,1]$. Scale invariance ensures that the results can be generalised to Wiener processes run for $t\in[0,\infty)$.
The first arcsine law states that the proportion of time that the one-dimensional Wiener process is positive follows an arcsine distribution. Let
$$T_+ = \mathrm{Leb}\{t\in[0,1] : W_t > 0\}$$
be the measure of the set of times in $[0,1]$ at which the Wiener process is positive. Then $T_+$ is arcsine distributed.
The second arcsine law describes the distribution of the last time the Wiener process changes sign. Let
$$L = \sup\{t\in[0,1] : W_t = 0\}$$
be the time of the last zero. Then $L$ is arcsine distributed.
The third arcsine law states that the time at which a Wiener process achieves its maximum is arcsine distributed.
The statement of the law relies on the fact that the Wiener process has an almost surely unique maximum,[1] and so we can define the random variable $M$ as the time at which the maximum is achieved, i.e. the unique $M$ such that
$$W_M = \sup_{0\le t\le 1} W_t.$$
Then $M$ is arcsine distributed.
Defining the running maximum process $M_t$ of the Wiener process,
$$M_t = \sup_{0\le s\le t} W_s,$$
then the process $X_t = M_t - W_t$ has the same law as a reflected Wiener process $|B_t|$ (where $B_t$ is a Wiener process independent of $W_t$).[1]
Since the zeros of $B$ and $|B|$ coincide, the last zero of $X$ has the same distribution as $L$, the last zero of the Wiener process. The last zero of $X$ occurs exactly when $W$ achieves its maximum.[1] It follows that the second and third laws are equivalent.
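All three laws are easy to check by simulation. The sketch below generates discretized Wiener paths on $[0,1]$ (grid size and sample count are arbitrary choices) and compares the empirical distributions of $T_+$, $L$ and $M$ with the arcsine CDF $\tfrac{2}{\pi}\arcsin\sqrt{x}$.

```python
import numpy as np

# Minimal sketch: Monte Carlo check of the three arcsine laws on discretized
# Wiener paths on [0, 1].  Grid size and sample count are arbitrary choices.
rng = np.random.default_rng(2)
n_paths, n_steps = 10_000, 1_000
dt = 1.0 / n_steps
t = np.arange(1, n_steps + 1) * dt

W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

T_plus = (W > 0).mean(axis=1)                     # time spent positive (first law)
sign_change = np.diff(np.sign(W), axis=1) != 0    # sign changes between grid points
L = np.array([t[1:][row][-1] if row.any() else 0.0 for row in sign_change])
M = t[np.argmax(W, axis=1)]                       # time of the maximum (third law)


def arcsine_cdf(x):
    return (2.0 / np.pi) * np.arcsin(np.sqrt(x))


for name, sample in (("T+", T_plus), ("L ", L), ("M ", M)):
    est = [np.mean(sample <= x) for x in (0.1, 0.5, 0.9)]
    exact = [arcsine_cdf(x) for x in (0.1, 0.5, 0.9)]
    print(name, "empirical:", np.round(est, 3), " arcsine:", np.round(exact, 3))
```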
|
https://en.wikipedia.org/wiki/L%C3%A9vy_arcsine_law
|
In the mathematical theory of stochastic processes, local time is a stochastic process associated with semimartingale processes such as Brownian motion, that characterizes the amount of time a particle has spent at a given level. Local time appears in various stochastic integration formulas, such as Tanaka's formula, if the integrand is not sufficiently smooth. It is also studied in statistical mechanics in the context of random fields.
For a continuous real-valued semimartingale $(B_s)_{s\ge 0}$, the local time of $B$ at the point $x$ is the stochastic process which is informally defined by
$$L^x(t) = \int_0^t \delta(x - B_s)\,d[B]_s,$$
where $\delta$ is the Dirac delta function and $[B]$ is the quadratic variation. It is a notion invented by Paul Lévy. The basic idea is that $L^x(t)$ is an (appropriately rescaled and time-parametrized) measure of how much time $B_s$ has spent at $x$ up to time $t$. More rigorously, it may be written as the almost sure limit
$$L^x(t) = \lim_{\varepsilon\downarrow 0}\frac{1}{2\varepsilon}\int_0^t \mathbf{1}\{|B_s - x| < \varepsilon\}\,d[B]_s,$$
which may be shown to always exist. Note that in the special case of Brownian motion (or more generally a real-valued diffusion of the form $dB = b(t,B)\,dt + dW$ where $W$ is a Brownian motion), the term $d[B]_s$ simply reduces to $ds$, which explains why it is called the local time of $B$ at $x$. For a discrete state-space process $(X_s)_{s\ge 0}$, the local time can be expressed more simply as[1]
$$L^x(t) = \int_0^t \mathbf{1}\{X_s = x\}\,ds.$$
Tanaka's formula also provides a definition of local time for an arbitrary continuous semimartingale $(X_s)_{s\ge 0}$ on $\mathbb{R}$:[2]
$$L^x(t) = |X_t - x| - |X_0 - x| - \int_0^t \operatorname{sgn}(X_s - x)\,dX_s,\qquad t\ge 0.$$
A more general form was proven independently by Meyer[3] and Wang;[4] the formula extends Itô's lemma for twice-differentiable functions to a more general class of functions. If $F:\mathbb{R}\rightarrow\mathbb{R}$ is absolutely continuous with derivative $F'$, which is of bounded variation, then
$$F(X_t) = F(X_0) + \int_0^t F'_-(X_s)\,dX_s + \frac{1}{2}\int_{-\infty}^{\infty} L^x(t)\,dF'(x),$$
where $F'_-$ is the left derivative.
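The two characterizations of local time above can be compared numerically. The sketch below simulates a discretized Brownian path and estimates $L^0(t)$ both from the occupation-time limit and from Tanaka's formula; the step size and window width are arbitrary choices, and the two estimates agree only up to discretization error.

```python
import numpy as np

# Minimal sketch: estimate the local time L^0(t) of a discretized Brownian path
# in two ways and compare:
#   (1) occupation-time estimate  (1 / (2 eps)) * time spent in (-eps, eps),
#   (2) Tanaka's formula          |B_t| - |B_0| - sum sgn(B_s) dB_s.
# Step size and window width are arbitrary choices; agreement is approximate.
rng = np.random.default_rng(3)
n_steps, dt, eps = 1_000_000, 1e-5, 0.02
T = n_steps * dt

dB = rng.normal(0.0, np.sqrt(dt), size=n_steps)
B = np.concatenate(([0.0], np.cumsum(dB)))

occupation = (np.abs(B[:-1]) < eps).sum() * dt / (2.0 * eps)
tanaka = np.abs(B[-1]) - np.abs(B[0]) - np.sum(np.sign(B[:-1]) * dB)

print(f"occupation-time estimate of L^0({T:.0f}):", round(occupation, 3))
print(f"Tanaka-formula estimate  of L^0({T:.0f}):", round(tanaka, 3))
```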
If $X$ is a Brownian motion, then for any $\alpha\in(0,1/2)$ the field of local times $L = (L^x(t))_{x\in\mathbb{R},\,t\ge 0}$ has a modification which is a.s. Hölder continuous in $x$ with exponent $\alpha$, uniformly for bounded $x$ and $t$.[5] In general, $L$ has a modification that is a.s. continuous in $t$ and càdlàg in $x$.
Tanaka's formula provides the explicit Doob–Meyer decomposition for the one-dimensional reflecting Brownian motion $(|B_s|)_{s\ge 0}$.
The field of local times $L_t = (L_t^x)_{x\in E}$ associated to a stochastic process on a space $E$ is a well-studied topic in the area of random fields. Ray–Knight type theorems relate the field $L_t$ to an associated Gaussian process.
In general, Ray–Knight type theorems of the first kind consider the field $L_t$ at a hitting time of the underlying process, whilst theorems of the second kind are in terms of a stopping time at which the field of local times first exceeds a given value.
Let $(B_t)_{t\ge 0}$ be a one-dimensional Brownian motion started from $B_0 = a > 0$, and let $(W_t)_{t\ge 0}$ be a standard two-dimensional Brownian motion started from $W_0 = 0\in\mathbb{R}^2$. Define the stopping time at which $B$ first hits the origin, $T = \inf\{t\ge 0 : B_t = 0\}$. Ray[6] and Knight[7] (independently) showed that
$$\left(L_T^{a-x}\right)_{0\le x\le a} \;\stackrel{d}{=}\; \left(|W_x|^2\right)_{0\le x\le a}, \qquad (1)$$
where $(L_t)_{t\ge 0}$ is the field of local times of $(B_t)_{t\ge 0}$, and equality is in distribution on $C[0,a]$. The process $|W_x|^2$ is known as the squared Bessel process.
Let $(B_t)_{t\ge 0}$ be a standard one-dimensional Brownian motion started from $B_0 = 0\in\mathbb{R}$, and let $(L_t)_{t\ge 0}$ be the associated field of local times. Let $T_a$ be the first time at which the local time at zero exceeds $a>0$:
$$T_a = \inf\{t\ge 0 : L_t^0 > a\}.$$
Let $(W_t)_{t\ge 0}$ be an independent one-dimensional Brownian motion started from $W_0 = 0$; then[8]
$$\left(L_{T_a}^x + W_x^2\right)_{x\ge 0} \;\stackrel{d}{=}\; \left(\left(W_x + \sqrt{a}\right)^2\right)_{x\ge 0}. \qquad (2)$$
Equivalently, the process $(L_{T_a}^x)_{x\ge 0}$ (which is a process in the spatial variable $x$) is equal in distribution to the square of a 0-dimensional Bessel process started at $a$, and as such is Markovian.
Results of Ray–Knight type for more general stochastic processes have been intensively studied, and analogue statements of both (1) and (2) are known for strongly symmetric Markov processes.
|
https://en.wikipedia.org/wiki/Local_time_(mathematics)
|
The many-body problem is a general name for a vast category of physical problems pertaining to the properties of microscopic systems made of many interacting particles.
Microscopic here implies that quantum mechanics has to be used to provide an accurate description of the system. Many can be anywhere from three to infinity (in the case of a practically infinite, homogeneous or periodic system, such as a crystal), although three- and four-body systems can be treated by specific means (respectively the Faddeev and Faddeev–Yakubovsky equations) and are thus sometimes separately classified as few-body systems.
In general terms, while the underlying physical laws that govern the motion of each individual particle may (or may not) be simple, the study of the collection of particles can be extremely complex. In such a quantum system, the repeated interactions between particles create quantum correlations, or entanglement. As a consequence, the wave function of the system is a complicated object holding a large amount of information, which usually makes exact or analytical calculations impractical or even impossible.
This becomes especially clear by a comparison to classical mechanics. Imagine a single particle that can be described with $k$ numbers (take for example a free particle described by its position and velocity vector, resulting in $k=6$). In classical mechanics, $n$ such particles can simply be described by $k\cdot n$ numbers. The dimension of the classical many-body system scales linearly with the number of particles $n$.
In quantum mechanics, however, the many-body system is in general in a superposition of combinations of single-particle states; all the $k^n$ different combinations have to be accounted for. The dimension of the quantum many-body system therefore scales exponentially with $n$, much faster than in classical mechanics.
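A back-of-the-envelope comparison makes the difference concrete. The snippet below (with purely illustrative values) contrasts the $k\cdot n$ numbers of a classical description with the $k^n$ amplitudes of a generic quantum state and the memory they would require.

```python
# Illustrative comparison of classical vs. quantum state-space size for n particles
# with k single-particle states each (values chosen for illustration only).
k = 2                                    # e.g. a spin-1/2 degree of freedom
for n in (10, 20, 30, 40):
    classical = k * n                    # numbers needed for a classical description
    quantum = k ** n                     # complex amplitudes in the many-body state
    memory_gb = quantum * 16 / 1e9       # complex128: 16 bytes per amplitude
    print(f"n = {n:2d}: classical {classical:3d} numbers, "
          f"quantum {quantum:.2e} amplitudes (~{memory_gb:,.2f} GB)")
```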
Because the required numerical expense grows so quickly, simulating the dynamics of more than three quantum-mechanical particles is already infeasible for many physical systems.[1] Thus, many-body theoretical physics most often relies on a set of approximations specific to the problem at hand, and ranks among the most computationally intensive fields of science.
In many cases, emergent phenomena may arise which bear little resemblance to the underlying elementary laws.
Many-body problems play a central role in condensed matter physics.
|
https://en.wikipedia.org/wiki/Many-body_problem
|
The Marangoni effect (also called the Gibbs–Marangoni effect) is the mass transfer along an interface between two phases due to a gradient of the surface tension. In the case of temperature dependence, this phenomenon may be called thermo-capillary convection[1] or Bénard–Marangoni convection.[2]
This phenomenon was first identified in the so-called "tears of wine" by physicist James Thomson (Lord Kelvin's brother) in 1855.[3] The general effect is named after Italian physicist Carlo Marangoni, who studied it for his doctoral dissertation at the University of Pavia and published his results in 1865.[4] A complete theoretical treatment of the subject was given by J. Willard Gibbs in his work On the Equilibrium of Heterogeneous Substances (1875–1878).[5]
Since a liquid with a high surface tension pulls more strongly on the surrounding liquid than one with a low surface tension, the presence of a gradient in surface tension will naturally cause the liquid to flow away from regions of low surface tension. The surface tension gradient can be caused by a concentration gradient or by a temperature gradient (surface tension is a function of temperature).
In simple cases, the speed of the flow is $u\approx\Delta\gamma/\mu$, where $\Delta\gamma$ is the difference in surface tension and $\mu$ is the viscosity of the liquid. Water at room temperature has a surface tension of around 0.07 N/m and a viscosity of approximately $10^{-3}$ Pa·s. So even variations of a few percent in the surface tension of water can generate Marangoni flows of almost 1 m/s. Thus Marangoni flows are common and easily observed.
For the case of a small drop of surfactant dropped onto the surface of water, Roché and coworkers[6] performed quantitative experiments and developed a simple model that was in approximate agreement with the experiments. This described the expansion in the radius $r$ of a patch of the surface covered in surfactant, due to an outward Marangoni flow at a speed $u$. They found that the surfactant-covered patch of the water surface expands at a speed of approximately
$$u \approx \left[\frac{(\gamma_\text{w}-\gamma_\text{s})^2}{\mu\,\rho\,r}\right]^{1/3}$$
for $\gamma_\text{w}$ the surface tension of water, $\gamma_\text{s}$ the (lower) surface tension of the surfactant-covered water surface, $\mu$ the viscosity of water, and $\rho$ the mass density of water. For $(\gamma_\text{w}-\gamma_\text{s})\approx 10^{-2}$ N/m, i.e., a reduction of order tens of percent in the surface tension of water, and since for water $\mu\rho\sim 1\ \mathrm{N^2\,m^{-6}\,s^3}$, we obtain $u\approx 10^{-2}\,r^{-1/3}$, with $u$ in m/s and $r$ in m. This gives speeds that decrease as the surfactant-covered region grows, but are of order of cm/s to mm/s.
The equation is obtained by making a couple of simple approximations. The first is to equate the stress at the surface due to the concentration gradient of surfactant (which drives the Marangoni flow) with the viscous stresses (which oppose flow). The Marangoni stress is $\sim(\partial\gamma/\partial r)$, i.e., the gradient in the surface tension due to the gradient in the surfactant concentration (from high in the centre of the expanding patch to zero far from the patch). The viscous shear stress is simply the viscosity times the gradient in the shear velocity, $\sim\mu(u/l)$, for $l$ the depth into the water of the flow due to the spreading patch. Roché and coworkers[6] assume that the momentum (which is directed radially) diffuses down into the liquid during spreading, so that when the patch has reached a radius $r$, $l\sim(\nu r/u)^{1/2}$, for $\nu=\mu/\rho$ the kinematic viscosity, which is the diffusion constant for momentum in a fluid. Equating the two stresses,
$$\frac{\gamma_\text{w}-\gamma_\text{s}}{r} \sim \frac{\mu u}{(\nu r/u)^{1/2}} = \frac{\mu\,u^{3/2}}{(\nu r)^{1/2}},$$
where we approximated the gradient $(\partial\gamma/\partial r)\approx(\gamma_\text{w}-\gamma_\text{s})/r$. Taking the 2/3 power of both sides gives the expression above.
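For concreteness, the sketch below evaluates this order-of-magnitude estimate, $u\sim\left[(\gamma_\text{w}-\gamma_\text{s})^2/(\mu\rho r)\right]^{1/3}$, for assumed water-like parameter values (illustrative numbers, not data from Roché et al.); prefactors of order one are neglected.

```python
# Minimal sketch: evaluate the spreading-speed scaling
#   u ~ [ (gamma_w - gamma_s)**2 / (mu * rho * r) ]**(1/3)
# for assumed water-like parameters.  This is an order-of-magnitude estimate;
# O(1) prefactors are neglected, and the numbers are not data from Roché et al.
gamma_w = 0.072     # surface tension of clean water, N/m
gamma_s = 0.062     # surfactant-covered surface, N/m (difference ~ 1e-2 N/m)
mu = 1.0e-3         # dynamic viscosity of water, Pa*s
rho = 1.0e3         # density of water, kg/m^3

for r in (1e-3, 1e-2, 1e-1):    # patch radius in metres
    u = ((gamma_w - gamma_s) ** 2 / (mu * rho * r)) ** (1.0 / 3.0)
    print(f"r = {100 * r:5.1f} cm  ->  u ~ {100 * u:4.1f} cm/s")
```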
The Marangoni number, a dimensionless value, can be used to characterize the relative effects of surface tension and viscous forces.
As an example, wine may exhibit a visible effect called "tears of wine". The effect is a consequence of the fact that alcohol has a lower surface tension and higher volatility than water. The water/alcohol solution rises up the surface of the glass, lowering the surface energy of the glass. Alcohol evaporates from the film, leaving behind liquid with a higher surface tension (more water, less alcohol). This region with a lower concentration of alcohol (greater surface tension) pulls on the surrounding fluid more strongly than the regions with a higher alcohol concentration (lower in the glass). The result is that the liquid is pulled up until its own weight exceeds the force of the effect, and the liquid drips back down the vessel's walls. This can also be easily demonstrated by spreading a thin film of water on a smooth surface and then allowing a drop of alcohol to fall on the center of the film. The liquid will rush out of the region where the drop of alcohol fell.
Under Earth conditions, the effect of gravity causing natural convection in a system with a temperature gradient along a fluid/fluid interface is usually much stronger than the Marangoni effect. Many experiments (ESA MASER 1–3) have been conducted under microgravity conditions aboard sounding rockets to observe the Marangoni effect without the influence of gravity. Research on heat pipes performed on the International Space Station revealed that whilst heat pipes exposed to a temperature gradient on Earth cause the inner fluid to evaporate at one end and migrate along the pipe, thus drying the hot end, in space (where the effects of gravity can be ignored) the opposite happens and the hot end of the pipe is flooded with liquid.[7] This is due to the Marangoni effect, together with capillary action. The fluid is drawn to the hot end of the tube by capillary action, but the bulk of the liquid still ends up as a droplet a short distance away from the hottest part of the tube, which is explained by Marangoni flow. The temperature gradients in the axial and radial directions make the fluid flow away from the hot end and the walls of the tube, towards the center axis. The liquid forms a droplet with a small contact area with the tube walls, with a thin film of liquid circulating between the cooler droplet and the liquid at the hot end.
The influence of the Marangoni effect on heat transfer in the presence of gas bubbles on the heating surface (e.g., in subcooled nucleate boiling) was long ignored, but it is currently a topic of ongoing research interest because of its potential fundamental importance to the understanding of heat transfer in boiling.[8]
A familiar example is in soap films: the Marangoni effect stabilizes soap films. Another instance of the Marangoni effect appears in the behavior of convection cells, the so-called Bénard cells.
One important application of the Marangoni effect is its use for drying silicon wafers after a wet processing step during the manufacture of integrated circuits. Liquid spots left on the wafer surface can cause oxidation that damages components on the wafer. To avoid spotting, an alcohol vapor (IPA) or other organic compound in gas, vapor, or aerosol form is blown through a nozzle over the wet wafer surface (or at the meniscus formed between the cleaning liquid and wafer as the wafer is lifted from an immersion bath), and the subsequent Marangoni effect causes a surface-tension gradient in the liquid, allowing gravity to more easily pull the liquid completely off the wafer surface, effectively leaving a dry wafer surface.
A similar phenomenon has been creatively utilized to self-assemble nanoparticles into ordered arrays[9] and to grow ordered nanotubes.[10] Alcohol containing nanoparticles is spread on the substrate, followed by blowing humid air over the substrate. The alcohol evaporates under the flow. Simultaneously, water condenses and forms microdroplets on the substrate. Meanwhile, the nanoparticles in the alcohol are transferred into the microdroplets and finally form numerous coffee rings on the substrate after drying.
Another application is the manipulation of particles,[11] taking advantage of the relevance of surface-tension effects at small scales. A controlled thermo-capillary convection is created by locally heating the air–water interface using an infrared laser. Then, this flow is used to control floating objects in both position and orientation and can prompt the self-assembly of floating objects, profiting from the Cheerios effect.
The Marangoni effect is also important to the fields of welding, crystal growth and electron-beam melting of metals.[1]
|
https://en.wikipedia.org/wiki/Marangoni_effect
|